Creating and Editing your own queries
A Model Query defines what aiExpert should check in your model and which best-practice rules it should apply. By editing Model Queries you can customize the feedback that aiExpert provides, focusing it on the aspects of modelling that matter most to you and your team.
Built-in Model Queries
We have created some example Model Queries, which are saved on eaTeamWorks and downloaded when you open the aiExpert settings. You can take copies of these examples and save them into your repository.
What you can edit
- Name and notes – give each query a clear name and notes so users can see what it does.
- Language – choose which natural language the AI model should use when giving feedback; the default is English. aiExpert will automatically add an extra rule to your LLM query asking for feedback in your chosen language.
  Note: we have not been able to test this feature in detail, because whilst we can evaluate feedback on BPMN models in English, we don't have the language skills to do so in other languages. So we welcome your feedback in this area.
- Data Collector – choose how much model detail is sent to the AI model:
  - Names only – fastest; sends just the names of elements, so you can only get feedback on your choice of names.
  - Names + connectors – sends the names of all elements and how those elements are connected together, including parent/child relationships.
  - Names + connectors + attributes + tagged values – sends everything to the AI model, so it will provide the richest feedback, but will take a little longer to process.
- Target LLM – select which Large Language Model to use (e.g. OpenAI or Gemini).
- System instructions – advanced users may override the built-in instructions that control how aiExpert talks to the Large Language Model (LLM – currently either ChatGPT or Gemini). Only change this if you know what you are doing!
- Rules – the most important part of aiExpert. The rules tell the LLM, in normal language, how it should think about your EA information. See below for some ideas on how to edit query rules.
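As a rough illustration of the three Data Collector levels (this is a sketch of the idea only, not aiExpert's actual payload format, which is internal to the tool), each level can be pictured as a progressively richer view of the same element:

```python
# Illustrative sketch only: the field names below are assumptions,
# not aiExpert's real wire format.

element = {
    "name": "Approve Invoice",                  # always sent
    "connectors": [                             # added by 'Names + connectors'
        {"type": "SequenceFlow", "target": "Pay Invoice"},
        {"type": "ParentChild", "parent": "Invoicing Process"},
    ],
    "attributes": ["dueDate"],                  # added by the richest level
    "tagged_values": {"owner": "Finance"},      # added by the richest level
}

def collect(element: dict, level: int) -> dict:
    """Return the subset of element detail sent at a given collector level
    (1 = names only, 2 = + connectors, 3 = + attributes and tagged values)."""
    payload = {"name": element["name"]}
    if level >= 2:
        payload["connectors"] = element["connectors"]
    if level >= 3:
        payload["attributes"] = element["attributes"]
        payload["tagged_values"] = element["tagged_values"]
    return payload
```

A 'Names only' run would send just `{"name": "Approve Invoice"}`; the richest level sends everything, which is why it gives the fullest feedback but takes longer to process.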
How to edit a Model Query
- From the top EA 'Specialize' menu, choose Model Expert / aiExpert settings. (Alternatively, you could edit the underlying EA elements which contain all the data about Model Queries: these are Requirement elements with a stereotype of 'aiExpert Model Query', and the interesting data is saved as tagged values.)
- When you first use aiExpert, there may be no Model Queries saved in your repository. Just select one of the built-in Model Queries from the list at the top, and choose 'save copy'. You can then edit this copy. Built-in queries are shown ending in '*'.
- If you already have some saved Model Queries, right-click on a query in the EA Project Browser and choose Specialize / Model Expert / Edit query.
- You may need to tell Model Expert where to save these new Model Query elements. This must be a package in your repository to which you have write access.
- Run aiExpert again to see the updated feedback.
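The 'save copy, then edit' workflow above can be sketched as follows. This is an illustration only: the field names are assumptions based on the settings described in this article, not aiExpert's actual tagged-value names.

```python
from dataclasses import dataclass, field, replace

# Illustrative only: these fields mirror the editable settings described
# above, not aiExpert's internal storage.
@dataclass
class ModelQuery:
    name: str
    notes: str = ""
    language: str = "English"
    data_collector: str = "Names only"
    target_llm: str = "OpenAI"
    rules: list[str] = field(default_factory=list)
    builtin: bool = False          # built-in queries are shown ending in '*'

def save_copy(builtin_query: ModelQuery, new_name: str) -> ModelQuery:
    """Copy a built-in query into an editable one, as 'save copy' does.
    The rules list is copied so edits don't touch the built-in original."""
    return replace(builtin_query, name=new_name, builtin=False,
                   rules=list(builtin_query.rules))

bpmn = ModelQuery(name="BPMN basics *", builtin=True,
                  rules=["Every process must have a start event."])
mine = save_copy(bpmn, "Our BPMN checks")
mine.rules.append("Gateway names should be phrased as questions.")
```

The built-in query is left untouched; only your copy picks up the extra rule.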
Adding or changing rules
The main reason to edit a Model Query is to change the Rules part of the query (changing the 'instructions' part is not recommended). You might do this because you have found other publicly available standards (ones the LLM has already ingested) that you want the LLM to apply, or because you want to modify some of the advice.
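For example (these are illustrative rules we have made up for this article, not ones shipped with aiExpert), a Rules section for BPMN models might read:

```text
- Every BPMN process should have exactly one start event.
- Task names should be verb-noun phrases, e.g. 'Approve Invoice'.
- Flag any gateway whose outgoing flows have no condition labels.
- Apply the naming conventions from the BPMN 2.0 specification.
```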
Tips
- Keep your additional rules clear and simple.
- Pick the Data Collector that balances speed and detail for your needs.
- Share queries across your team to promote consistent modelling practice.
Benefits
- Tailor aiExpert to check exactly what matters to you.
- Balance speed vs. richness of results with different Data Collectors.
- Flexibility to choose which LLM to use.
- Clear naming and notes help teams reuse queries consistently.