We introduce MindLink, a new family of large language models developed by Kunlun Inc. Built on Mistral, this experimental model incorporates our latest advances in post-training techniques. The model shows promising results across several common benchmarks and may be useful for various AI applications. We welcome feedback as we continue to refine our approach.
🚀 What's Next:
- Enhanced Models: More powerful versions are in development and launching soon.
- Technical Documentation: Detailed architecture specifications, training methodologies, and benchmark results will be available soon.
Stay tuned as we advance the frontier of language AI.
This experimental model can be accessed via our API for those interested in exploration and testing. To obtain your API key, simply drop us an email at mindlink@kunlun-inc.com mentioning your affiliation and intended use case.
Once approved, you will receive an API Key, which enables access to our hosted inference service.
Our Chat API follows OpenAI's Chat Completions format: include your API Key in standard HTTP POST requests, as in the examples below.
```bash
curl -X POST https://api.mindlink.kunlun-inc.com/v1/chat/completions \
  -H "Authorization: Bearer <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MindLink_Beta",
    "messages": [
      {"role": "user", "content": "What is the capital of China?"}
    ],
    "temperature": 0.7,
    "max_tokens": 128,
    "stream": false
  }'
```
```python
import requests

API_KEY = "your_api_key_here"
API_URL = "https://api.mindlink.kunlun-inc.com/v1/chat/completions"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# OpenAI-style chat completion request
payload = {
    "model": "MindLink_Beta",
    "messages": [
        {"role": "user", "content": "What is the capital of China?"}
    ],
    "temperature": 0.7,
    "max_tokens": 128,
    "stream": False,
}

response = requests.post(API_URL, headers=headers, json=payload)
if response.status_code == 200:
    reply = response.json()
    print("MindLink Response:")
    print(reply["choices"][0]["message"]["content"])
else:
    print(f"Error {response.status_code}: {response.text}")
```
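Since the endpoint follows the OpenAI request format, the official `openai` Python client is also expected to work by pointing `base_url` at the MindLink endpoint. The following is a minimal sketch under that compatibility assumption, not an officially documented usage:

```python
# Minimal sketch using the openai client against the MindLink endpoint.
# Assumes the OpenAI-compatible behaviour described above; not an official example.
from openai import OpenAI

client = OpenAI(
    api_key="your_api_key_here",
    base_url="https://api.mindlink.kunlun-inc.com/v1",
)

completion = client.chat.completions.create(
    model="MindLink_Beta",
    messages=[{"role": "user", "content": "What is the capital of China?"}],
    temperature=0.7,
    max_tokens=128,
)
print(completion.choices[0].message.content)
```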
- Endpoint: `https://api.mindlink.kunlun-inc.com/v1/chat/completions`
- Authentication: pass your API key in the `Authorization: Bearer <api_key>` header
- Request Format: compatible with OpenAI's Chat Completions API
- Supported Fields: `model`, `messages`, `temperature`, `top_p`, `max_tokens`, `stream`, `stop`, etc. (streaming is sketched below)
- Model Identifier: use `"MindLink_Beta"` as the model name
The experimental MindLink has been evaluated on a range of benchmarks. A more powerful model is currently in training, and updated scores will be published soon.
MindLink: License and Usage Information
This experimental MindLink is released under the Mistral Community License Agreement, with copyright held by Mistral AI.
This experimental release of MindLink is specifically intended for research and testing purposes.
For a comprehensive understanding of how to use this software responsibly, please refer to our Responsible Use Guidelines.
Please Note: While the MIT License is mentioned elsewhere, for this experimental MindLink release, the Mistral Community License Agreement is the governing license. We encourage you to review the full terms of this license for detailed information regarding your rights and obligations.