
Releases: ystemsrx/code-atlas

Code Atlas 2.0.0

11 Jun 19:49

Optimized the build process, tool-calling logic, and cross-platform support.


Code Atlas 1.2.0

18 Jan 14:21
30a7276

Add support for Ollama.

Please make sure the Ollama server is running (`ollama serve`), then edit `config.json` to connect to Ollama through its API interface.


An example of `config.json`:

{
    "system": {
        "prompt": "**Identity Setup**:  \n- You are **Open Interpreter**, operating on the user's Windows computer.\n\n**Execution Capability**:  \n- Complete tasks using **Batch scripts** or **Python code**.\n\n**Operation Process**:  \n1. **Receive Request**: The user submits an operation request.\n2. **Develop Plan**: Plan the steps and required resources.\n3. **Choose Language**: Select Batch or Python.\n4. **Generate and Output Code**: Provide executable code to the user, which will be directly executed on the user's computer automatically.\n5. **Receive Execution Results**: Obtain the results of the executed code sent by the system.\n6. **Ensure Single Execution**: Accurately discern execution results to prevent repeated executions of the same code.\n\n**Code Requirements**:  \n- **No User Interaction**: No user input required, and don't ask the user to copy the code to a file to run.\n- **Path Handling**: Use the current directory by default, ensure paths are valid and secure.\n- **Execution Result Handling**: Obtain, parse, and succinctly feedback the results.\n\n**Multi-step Tasks**:  \n- Execute complete code snippets step-by-step, maintaining solution consistency. For the same problem, only one solution can be used.\n\n**Security and Efficiency**:  \n- Code is safe and harmless, follows best programming practices, ensuring efficiency and maintainability.\n- Must avoid hallucinations."
    },

    "model": {
        "name": "qwen2",
        "parameters": {
            "top_p": 0.95,
            "top_k": 40,
            "max_length": 4096,
            "temperature": 0.2,
            "context_window": 16384
        }
    },

    "api": {
        "enabled": true,
        "base_url": "http://localhost:11434/api/chat",
        "key": ""
    }
}
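For reference, with `"enabled": true` the program talks to the configured `base_url`. A chat request in Ollama's `/api/chat` format carries a body like the sketch below; the exact fields Code Atlas sends are an assumption based on Ollama's documented API, and the user message is a placeholder:

{
    "model": "qwen2",
    "messages": [
        { "role": "system", "content": "<the system prompt from config.json>" },
        { "role": "user", "content": "List the files in the current directory." }
    ],
    "stream": false
}

With `"stream": false`, Ollama returns a single JSON object whose `message.content` holds the model's reply.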

CodeAtlas v1.1.0 - Lightweight C++ Interpreter Powered by `llama.cpp`

12 Dec 13:35
6c3e428

Release Note

This is the main program executable. Use it together with either a local gguf model file or an API Base URL and API Key, plus the runtime files compiled from [llama.cpp](https://github.com/ggerganov/llama.cpp). By default the executable loads a model file named `model.gguf`, so rename your model file accordingly. Write your API details, gguf filename, and other settings into `config.json`.

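For reference, a minimal `config.json` sketch for a local gguf setup is shown below. The exact keys used by v1.1.0 are not stated in this note; the names here mirror the 1.2.0 example above and should be treated as assumptions:

{
    "model": {
        "name": "model.gguf"
    },
    "api": {
        "enabled": false,
        "base_url": "",
        "key": ""
    }
}

If you connect through an API instead of a local gguf file, fill in the `api` fields and set `"enabled": true`.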

Initial Release: CodeAtlas v1.0.0 - Lightweight C++ Interpreter Powered by `llama.cpp`

25 Nov 05:17
0f5ba0b

Release Note

This is the executable file for the main program. To use it, you need to pair it with your gguf model file and the necessary files obtained from llama.cpp. The executable is configured to load a model file named model.gguf by default, so make sure to rename your model file accordingly.
