[Share Experiences] UOS AI Model Configuration Guide
Yoloy_Lan
Moderator
2024-07-24 16:25
Author

GPT Model Configuration

a. In the following interface, click [Create new secret key] to start creating a new key.

b. After the key is created, copy and save the key.


  • Fill in the UOS AI configuration:

a. Open the UOS AI settings page and click [Add] to open the model parameter configuration page;

b. LLM: in [Model Type], select [GPT 3.5] or [GPT 4.0];

c. Account: name your model; the name is mainly used to distinguish different models and will be displayed in the model selection box in the main interface of the UOS AI application.

d. API Key: the OpenAI key you saved above.

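Before pasting the key into UOS AI, a quick local sanity check can catch copy-and-paste mistakes. A minimal sketch, assuming the key is held in a shell variable (the value below is a made-up placeholder, not a real key):

```shell
# OpenAI secret keys begin with the "sk-" prefix; anything else usually
# means the wrong string was copied. The value here is a fake placeholder.
key="sk-example1234567890"

case "$key" in
  sk-*) echo "key format looks plausible" ;;
  *)    echo "unexpected key format: check what you copied" ;;
esac
```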

Customized Model Configuration

2.1 Pre-conditions

  1. Model Interface Requirement: The interface specification of UOS AI's custom model is OpenAI's /v1/chat/completions, so only model services that provide OpenAI-compatible interfaces can be added to UOS AI for use.
  2. Application version requirement: UOS AI 1.3.0 or above.
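To judge whether a service is OpenAI-compatible, it helps to know the request shape the /v1/chat/completions interface expects. A sketch of a minimal request body (the model name is an illustrative placeholder; each service documents its own model names):

```shell
# Minimal JSON body for an OpenAI-style /v1/chat/completions request.
# Any service that accepts this shape can, in principle, be added to UOS AI.
payload=$(cat <<'EOF'
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "user", "content": "Hello"}
  ]
}
EOF
)
echo "$payload"
```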

2.2 Online Model Access

This section introduces how to add online models, using the Moonshot API as an example.

  • 2.2.1 Moonshot
  1. Get the API details: open the Moonshot API documentation: https://platform.moonshot.cn/docs/api/chat#api-%E8%AF%B4%E6%98%8E


a. According to the document above, the request address is: https://api.moonshot.cn/v1/chat/completions

b. Model name: moonshot-v1-8k

  2. Get the API Key

a. Log in to the Moonshot console and open "API Key Management": https://platform.moonshot.cn/console/api-keys

b. Click the "Create" button on the right side to generate an API key, and copy the generated key from this interface.

  3. Add the model in UOS AI: enter the UOS AI settings interface, switch the model option to "Custom" on the model-adding page, and fill in the following information:

a. API Key: paste the key you copied in the previous step.

b. Account name: give your model a name; it is mainly used to distinguish different models and will be shown in the model selection box in the main interface of the UOS AI application.

c. Model name: fill in the model name declared by the Moonshot API in step 1: moonshot-v1-8k

d. Request address: UOS AI automatically appends /chat/completions to the request address, so you must remove /chat/completions from Moonshot's address.

e. The actual value to fill in is: https://api.moonshot.cn/v1

f. Click Confirm to complete the verification, and then you can use the model in the dialog window.
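The address handling in steps d and e can be sketched as a one-line transformation, assuming POSIX shell parameter expansion:

```shell
# Moonshot's documented chat endpoint, from step 1:
full_endpoint="https://api.moonshot.cn/v1/chat/completions"

# UOS AI appends /chat/completions itself, so strip that fixed suffix
# to get the value for the "Request address" field.
base_url="${full_endpoint%/chat/completions}"
echo "$base_url"    # https://api.moonshot.cn/v1
```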


  4. Other online models compatible with the OpenAI API can be added using the method above. The following are the API documents of some large-model vendors.

a. ZhiPu: https://open.bigmodel.cn/dev/api#glm-4

b. Baichuan: https://platform.baichuan-ai.com/docs/api

c. Tongyi Qianwen: https://help.aliyun.com/zh/dashscope/developer-reference/compatibility-of-openai-with-dashscope

d. LingYiWanWu: https://platform.lingyiwanwu.com/docs#api-%E5%B9%B3%E5%8F%B0

e. DeepSeek: https://platform.deepseek.com/api-docs/zh-cn/
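The same suffix rule applies to any of these vendors: take the chat endpoint from the vendor's documentation and drop the trailing /chat/completions before entering it in UOS AI. A small sketch (the second endpoint is illustrative; always confirm the exact address in each vendor's docs):

```shell
# Derive the UOS AI "Request address" from a vendor's documented chat endpoint.
for ep in \
  "https://api.moonshot.cn/v1/chat/completions" \
  "https://api.deepseek.com/chat/completions"
do
  base="${ep%/chat/completions}"
  echo "$base"
done
```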

  • 2.3 Local Model Access
  • 2.3.1 Install Ollama
  1. Before running a local model, you need to install Ollama, a local deployment tool for open-source large language models, through which you can easily deploy open-source large models locally.
  2. Ollama repository address: https://github.com/ollama/ollama
  3. Follow the tutorial to install the ollama program on your system.

a. Execute the following command to install ollama directly.

b. For compiling from source or installing via Docker, please refer to the instructions on GitHub.

Bash
curl -fsSL https://ollama.com/install.sh | sh
  4. Note the service address, 127.0.0.1:11434, for later use.


  • 2.3.2 Run the model
  1. After installing Ollama, run the model in the terminal, e.g. Qwen2's 7B model.
Bash
ollama run qwen2:7b
  2. The model is downloaded automatically the first time you run it; how long you wait depends on your network conditions.


  3. The Ollama model repository can be found at https://ollama.com/library; choose a model according to your needs.


  • 2.3.3 UOS AI Configuration
  1. After Ollama starts the model, you can add the model in UOS AI.
  2. Enter the model-adding interface of UOS AI and select Custom as the model type.
  3. Account name: name your model; the name is mainly used to distinguish different models and will be displayed in the model selection box in the main interface of the UOS AI application.
  4. API Key: Ollama does not enable authentication, so you can fill in any value here.
  5. Model name: fill in the name of the model Ollama is running; for qwen2:7b from the previous section, fill in qwen2:7b here.
  6. Model request address:

a. Ollama's default service address is 127.0.0.1:11434, and its OpenAI-compatible endpoint is http://127.0.0.1:11434/v1/chat/completions

b. Therefore, in UOS AI, you only need to fill in http://127.0.0.1:11434/v1.
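Putting the Ollama values together, the address UOS AI ends up calling can be sketched as:

```shell
# Ollama's default service address, noted after installation.
ollama_host="http://127.0.0.1:11434"

# Its OpenAI-compatible routes live under /v1; this is the value for
# UOS AI's request address field.
uos_ai_address="$ollama_host/v1"

# UOS AI appends /chat/completions itself, yielding the full endpoint.
full_endpoint="$uos_ai_address/chat/completions"

echo "$uos_ai_address"   # http://127.0.0.1:11434/v1
echo "$full_endpoint"    # http://127.0.0.1:11434/v1/chat/completions
```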


  7. After finishing adding, you can talk with local models in UOS AI.
All Replies
myehtrip
deepin
2024-08-01 02:01
#1
It has been deleted!
bootslearning
deepin
2024-08-01 02:13
#2
It has been deleted!
Owensuwu
deepin
2024-08-05 13:36
#3

thanks for this, i was needing this tutorial

