aiExpert Settings
aiExpert settings are in two parts:
1 – the Model Queries which you use to talk to the LLM
2 – other aiExpert settings, such as which LLM you want to use.
Model Queries in this model
This is a list of the Model Query elements in your current repository. These are just regular EA elements which store all the key information aiExpert needs to talk to an LLM.
For more details on Model Queries, see Creating and Editing your own queries
aiExpert Settings
Settings Package
An important setting is where the aiExpert settings will be saved. This should be an EA Package to which you, and your users, have write access. This package will contain both the aiExpert settings for this repository and all your Model Queries.
Models and API Keys
aiExpert talks to either the OpenAI ‘GPT’ or Google Gemini models. (In the current version, we have only tested with OpenAI ‘GPT’ models.)
To do this, you need an account with the LLM provider you choose to use. We currently use GPT-5 for our testing, as we have a paid-for account on that platform.
As a user of aiExpert, you get 50 free queries to our paid-for account. After that, you need to get your own paid-for account. All calls via an API to the OpenAI GPT models are paid for.
So you can choose to:
- use the demo API key, which will submit queries via the eaTeamWorks account, and works for 50 queries
- use your own OpenAI (or, one day, Gemini) account
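To illustrate what a query to the LLM looks like under the hood, here is a minimal sketch of a call to OpenAI's chat-completions endpoint using only the Python standard library. The endpoint URL is OpenAI's real one, but the function names, the system prompt, and the request shape shown here are our own illustration; aiExpert's actual request format may differ.

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_query(model_text: str, question: str, model: str = "gpt-5-mini") -> dict:
    # Package an extract of your EA model plus the user's question
    # as a chat-completions request body.
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are an expert reviewer of UML/BPMN models."},
            {"role": "user",
             "content": f"{question}\n\nModel data:\n{model_text}"},
        ],
    }

def send_query(api_key: str, body: dict) -> dict:
    # The API key is sent as a Bearer token on every request --
    # this is why it must be kept secure.
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The key point is simply that every call carries the API key, which is what makes the demo-key / own-key choice above matter.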
Using your own account
See the article Getting Started for more details on how to get an OpenAI account.
Once you have done that, you will receive an ‘API key’, which aiExpert sends with every query to prove that it is allowed to use the account.
This API key will be shared by all the users of this repository who want to use aiExpert, but it must also be kept secure: anyone who has it can submit queries.
So, aiExpert saves this API key in its settings (as tagged values in your Settings package), but encrypted. It uses lightweight encryption (only users who have access to your EA model can see the data), and aiExpert uses a simple passphrase (password) to encrypt the API key. You can send this passphrase to the modelers you want to use aiExpert; they will need to enter it the first time they use aiExpert.
We think this is a reasonable balance between making aiExpert easy to use and keeping the API key secure.
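aiExpert's actual encryption scheme is not documented here, so as a conceptual sketch only, the following shows the idea of lightweight passphrase-based encryption using just the Python standard library (a PBKDF2-derived keystream XORed with the key; the function names and the salt are invented for this example):

```python
import base64
import hashlib

def _keystream(passphrase: str, length: int) -> bytes:
    # Derive a repeatable keystream from the passphrase.
    # The salt here is a fixed demo value, not aiExpert's.
    return hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode(), b"demo-salt", 100_000, dklen=length
    )

def encrypt_api_key(api_key: str, passphrase: str) -> str:
    # XOR the key bytes with the keystream, then base64-encode
    # so the result can be stored as a tagged value.
    data = api_key.encode()
    ks = _keystream(passphrase, len(data))
    return base64.b64encode(bytes(a ^ b for a, b in zip(data, ks))).decode()

def decrypt_api_key(token: str, passphrase: str) -> str:
    # XOR is symmetric, so the same keystream decrypts.
    data = base64.b64decode(token)
    ks = _keystream(passphrase, len(data))
    return bytes(a ^ b for a, b in zip(data, ks)).decode()
```

Anyone with the passphrase can recover the key, which is exactly the trade-off described above: convenient sharing within a modeling team, with just enough protection that the key is not stored in the clear.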
What will it cost?
As we said above, your first 50 queries are free – paid for by eaTeamWorks.
After that, once you have an OpenAI account, making requests via aiExpert is currently quite cheap: we have spent only USD 11 since we started creating aiExpert. We think this is because the amount of data we send to the LLM is quite small (by LLM standards), and it’s all text, which is by far the cheapest kind of input.
As of November 2025, the pricing for GPT-5 Mini, which is our default LLM, is USD 0.25 per 1M input tokens and USD 2.00 per 1M output tokens. As a guide, a request to the LLM for one of our test BPMN diagrams (which had 15 elements) uses about 300 tokens, and the response (for a deliberately bad example with lots of errors) was about 15,000 tokens. So you can see that each call is very cheap, and loading your account with USD 20 will probably last you a long time.
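Using the prices above, the cost of a single query is simple arithmetic (the helper function here is ours, invented just for this worked example):

```python
def query_cost_usd(input_tokens: int, output_tokens: int,
                   in_rate: float = 0.25, out_rate: float = 2.00) -> float:
    # Rates are USD per 1M tokens (GPT-5 Mini, November 2025).
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# The example above: ~300 input tokens, ~15,000 output tokens.
cost = query_cost_usd(300, 15_000)  # ~USD 0.03 per query
```

At roughly three cents for even a deliberately error-heavy response, USD 20 of credit covers several hundred queries of that size.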