GPT Model Configuration
To use the GPT model, you need to obtain an OpenAI API key first.
[API Key] can be created at: https://platform.openai.com/settings/profile?tab=api-keys
a. In the interface below, click [Create new secret key] to create a new key.
b. After the key is created, copy and save the key.
Fill in the UOS AI configuration:
a. Open the UOS AI settings page and click [Add] to open the model parameter configuration page;
b. LLM: In [Model Type], select [GPT 3.5] or [GPT 4.0];
c. Account: Name your model; the name is mainly used to distinguish different models and will be displayed in the model selection box on the main interface of UOS AI.
d. API Key: Enter the OpenAI key saved above.
Customized Model Configuration
2.1 Pre-conditions
Model Interface Requirement: UOS AI's custom-model interface follows OpenAI's /v1/chat/completions specification, so only model services that provide an OpenAI-compatible interface can be added to UOS AI.
Application version requirement: UOS AI 1.3.0 or above.
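To illustrate the interface requirement above, here is a minimal sketch of the kind of OpenAI-style request such a service must accept; the base URL and model name are placeholders, and the field names follow the OpenAI chat-completions format:

```python
import json

# Base address as it would be entered in UOS AI (placeholder);
# UOS AI appends /chat/completions to it when sending requests.
base_url = "https://example.com/v1"
endpoint = base_url + "/chat/completions"

# Minimal OpenAI-style request body: a model name plus a message list.
payload = {
    "model": "your-model-name",  # placeholder
    "messages": [{"role": "user", "content": "Hello"}],
}

print(endpoint)
print(json.dumps(payload))
```

Any service that answers this request shape at that path can be added as a custom model.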
2.2 Online Model Access
We will introduce how to add an online model, using the Moonshot API as an example.
Check the Moonshot API documentation:
a. According to the documentation, the request address is: https://api.moonshot.cn/v1/chat/completions
b. Model name: moonshot-v1-8k
Create an API key:
a. Log in to the Moonshot console and open “API Key Management”: https://platform.moonshot.cn/console/api-keys
b. Click the “Create” button on the right to generate an API key, and copy the generated key from this interface.
Add the model in UOS AI.
Enter the UOS AI settings interface to add a model. Switch to “Custom” in the model option of the model-adding interface, then fill in the following information:
a. API Key: Paste the key you copied in the previous step into the API Key field.
b. Account Name: Give your model a name; it is mainly used to distinguish different models and will be shown in the model selection box on the main interface of UOS AI.
c. Model name: Fill in the model name declared in the Moonshot API documentation in step 1: moonshot-v1-8k
d. Request address: UOS AI automatically appends /chat/completions to the request address, so the /chat/completions suffix must be removed from Moonshot's address.
e. The actual value to fill in is: https://api.moonshot.cn/v1
f. Click Confirm to complete the verification; you can then use the model in the dialog window.
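The address rule in step d amounts to a simple suffix removal, sketched here with the Moonshot values from this section:

```python
# Full endpoint as documented by Moonshot.
documented = "https://api.moonshot.cn/v1/chat/completions"

# UOS AI appends /chat/completions itself, so the value for the
# "Request address" field is the documented address without that suffix.
suffix = "/chat/completions"
request_address = documented.removesuffix(suffix)

print(request_address)  # https://api.moonshot.cn/v1
```

The same trimming applies to any other OpenAI-compatible vendor address.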
Other online models compatible with the OpenAI API can be added in the same way. Below are the API documents of some large-model vendors:
a. ZhiPu: https://open.bigmodel.cn/dev/api#glm-4
b. Baichuan: https://platform.baichuan-ai.com/docs/api
c. Tongyi Qianwen: https://help.aliyun.com/zh/dashscope/developer-reference/compatibility-of-openai-with-dashscope
d. LingYiWanWu: https://platform.lingyiwanwu.com/docs#api-%E5%B9%B3%E5%8F%B0
e. Deepseek: https://platform.deepseek.com/api-docs/zh-cn/
2.3 Local Model Access
2.3.1 Install Ollama
Before running a local model, you need to install Ollama, a local deployment tool for open-source large language models, through which you can easily deploy open-source large models locally.
Follow the tutorial below to install the Ollama program on your system.
a. Execute the following command to install Ollama directly.
b. For compiling from source or installing via Docker, please refer to the instructions on GitHub.
Bash
curl -fsSL https://ollama.com/install.sh | sh
Note the service address 127.0.0.1:11434 for later use.
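If you want to confirm the service is up before moving on, a quick connectivity check can be used (a sketch; the host and port are the Ollama defaults noted above):

```python
import socket

def ollama_listening(host: str = "127.0.0.1", port: int = 11434) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

print("Ollama service reachable:", ollama_listening())
```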
2.3.2 Run the model
After installing Ollama, run the model in the terminal, e.g. Qwen2's 7B model.
Bash
ollama run qwen2:7b
The model will be downloaded automatically the first time you run it; depending on network conditions, this may take a while.
The Ollama model repository is available at https://ollama.com/library; you can choose a model according to your needs.
2.3.3 UOS AI Configuration
After Ollama starts the model, you can add the model in UOS AI.
Enter the model-adding interface of UOS AI and select Custom as the model type.
Account name: Name your model; the name is mainly used to distinguish different models and will be displayed in the model selection box on the main interface of UOS AI.
API Key: Ollama does not enable authentication, so you can fill in any value here.
Model name: Fill in the name of the model that Ollama is running; for qwen2:7b from the previous section, fill in qwen2:7b.
Model request address:
a. Ollama's default service address is 127.0.0.1:11434, and its OpenAI-compatible interface is http://127.0.0.1:11434/v1/chat/completions
b. Therefore, in UOS AI, you only need to fill in http://127.0.0.1:11434/v1.
After finishing adding, you can talk with local models in UOS AI.
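The configuration above corresponds to an OpenAI-style request like the following (a sketch; the address and model name come from this section, and the request is only constructed here, not sent):

```python
import json
import urllib.request

base_url = "http://127.0.0.1:11434/v1"  # request address filled into UOS AI
model = "qwen2:7b"                       # model that Ollama is running
api_key = "ollama"                       # arbitrary; Ollama ignores authentication

# UOS AI appends /chat/completions to the base address.
req = urllib.request.Request(
    base_url + "/chat/completions",
    data=json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    },
)
print(req.full_url)  # http://127.0.0.1:11434/v1/chat/completions

# Sending it requires a running Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```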