
Commit 5b8957c

Merge branch 'main' into qm-memory-improvements
2 parents: 133da19 + ce8ccf8

3 files changed: +108 -4 lines changed

pages/ai-ecosystem/integrations.mdx

Lines changed: 97 additions & 3 deletions
@@ -1,16 +1,102 @@
 import {CommunityLinks} from '/components/social-card/CommunityLinks'
 import { Cards } from 'nextra/components'
 import { Callout } from 'nextra/components'
-
+import GitHub from '/components/icons/GitHub'
 
 # Integrations
 
 Memgraph offers several integrations with popular AI frameworks to help you
-customize and build your own GenAI application from scratch. Below are some of
-the libraries integrated with Memgraph.
+customize and build your own GenAI application from scratch. Here is a list of
+Memgraph's officially supported integrations:
+- [Model Context Protocol](#model-context-protocol-mcp)
+- [LlamaIndex](#llamaindex)
+- [LangChain](#langchain)
+
+## Model Context Protocol (MCP)
+
+<Cards>
+  <Cards.Card
+    icon={<GitHub />}
+    title="Memgraph MCP Server"
+    href="https://github.com/memgraph/mcp-memgraph"
+  />
+</Cards>
+
+Memgraph offers the [Memgraph MCP
+Server](https://github.com/memgraph/mcp-memgraph) - a lightweight server
+implementation of the Model Context Protocol (MCP) designed to connect Memgraph
+with LLMs.
+
+![mcp-server](/pages/ai-ecosystem/integrations/mcp-server.png)
+
+<h3 className="custom-header">Quick start</h3>
+
+<h4 className="custom-header">1. Run Memgraph MCP Server</h4>
+
+1. Install [`uv`](https://docs.astral.sh/uv/getting-started/installation/) and create a virtual environment with `uv venv`. Activate it with `.venv\Scripts\activate` on Windows or `source .venv/bin/activate` on MacOS/Linux.
+2. Install dependencies: `uv add "mcp[cli]" httpx`
+3. Run the Memgraph MCP server: `uv run server.py` (a rough sketch of such a server follows below).
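For orientation, here is a hypothetical, minimal sketch of what a Memgraph-backed MCP server can look like. It assumes the `mcp` Python SDK's `FastMCP` API and the `neo4j` driver (Memgraph speaks the Bolt protocol); the tool name `run_query` is illustrative, and the actual `server.py` in the `mcp-memgraph` repository is the reference implementation.

```python
# Hypothetical minimal MCP server for Memgraph; the real server.py in
# memgraph/mcp-memgraph is the reference implementation.
from mcp.server.fastmcp import FastMCP
from neo4j import GraphDatabase

mcp = FastMCP("mcp-memgraph")
# Memgraph is Bolt-compatible, so the neo4j driver can connect to it.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))

@mcp.tool()
def run_query(query: str) -> list[dict]:
    """Run a Cypher query against Memgraph and return rows as dicts."""
    with driver.session() as session:
        return [record.data() for record in session.run(query)]

if __name__ == "__main__":
    # stdio transport is what Claude for Desktop expects from local servers.
    mcp.run(transport="stdio")
```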
+
+
+<h4 className="custom-header">2. Run MCP Client</h4>
+1. Install [Claude for Desktop](https://claude.ai/download).
+2. Add the Memgraph server to the Claude config:
+
+**MacOS/Linux**
+```bash
+code ~/Library/Application\ Support/Claude/claude_desktop_config.json
+```
+
+**Windows**
+
+```powershell
+code $env:AppData\Claude\claude_desktop_config.json
+```
+
+Example config:
+```json
+{
+  "mcpServers": {
+    "mcp-memgraph": {
+      "command": "/Users/katelatte/.local/bin/uv",
+      "args": [
+        "--directory",
+        "/Users/katelatte/projects/mcp-memgraph",
+        "run",
+        "server.py"
+      ]
+    }
+  }
+}
+```
+<Callout type="info">
+You may need to put the full path to the `uv` executable in the `command` field. You can get it by running `which uv` on MacOS/Linux or `where uv` on Windows. Make sure you pass the absolute path to your server.
+</Callout>
+
+<h4 className="custom-header">3. Chat with the database</h4>
+1. Run Memgraph MAGE:
+```bash
+docker run -p 7687:7687 memgraph/memgraph-mage --schema-info-enabled=True
+```
+
+The `--schema-info-enabled` configuration setting is set to `True` to allow the LLM to run the `SHOW SCHEMA INFO` query; a quick Python check is sketched below.
+2. Open Claude Desktop and see the Memgraph tools and resources listed. Try it out! (You can load dummy data from [Memgraph Lab](https://memgraph.com/docs/data-visualization) Datasets.)
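Before wiring up Claude, you can verify that the flag took effect with a minimal sketch using the `neo4j` Python driver; empty credentials assume a default local instance with no auth configured.

```python
from neo4j import GraphDatabase

# Assumes the Docker container above is running and exposes Bolt on 7687.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))
with driver.session() as session:
    result = session.run("SHOW SCHEMA INFO")
    for record in result:
        # JSON-like description of node labels, edge types, and properties.
        print(record.data())
driver.close()
```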
+
+
+<h3 className="custom-header">Resources</h3>
+- [Memgraph MCP Server Quick start](https://www.youtube.com/watch?v=0Tjw5QWj_qY): A video showcasing how to run the Memgraph MCP Server.
+- [Introducing the Memgraph MCP Server](https://memgraph.com/blog/introducing-memgraph-mcp-server): A blog post on how to run the Memgraph MCP Server and what the future plans are.
 
 ## LlamaIndex
 
+<Cards>
+  <Cards.Card
+    icon={<GitHub />}
+    title="LlamaIndex integration"
+    href="https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/graph_stores/llama-index-graph-stores-memgraph"
+  />
+</Cards>
+
 LlamaIndex is a simple, flexible data framework for connecting custom data
 sources to large language models. Currently, [Memgraph's
 integration](https://docs.llamaindex.ai/en/stable/api_reference/storage/graph_stores/memgraph/)
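As a hedged illustration of this integration: a minimal sketch assuming the `llama-index-graph-stores-memgraph` package, a local Memgraph instance with empty credentials, and an LLM/embedding backend configured (LlamaIndex defaults to OpenAI, so `OPENAI_API_KEY` would need to be set).

```python
from llama_index.core import Document, PropertyGraphIndex
from llama_index.graph_stores.memgraph import MemgraphPropertyGraphStore

# Default local Memgraph; credentials are empty unless auth is configured.
graph_store = MemgraphPropertyGraphStore(
    username="",
    password="",
    url="bolt://localhost:7687",
)

# Extract a tiny knowledge graph from one document and store it in Memgraph.
index = PropertyGraphIndex.from_documents(
    [Document(text="Ada Lovelace collaborated with Charles Babbage.")],
    property_graph_store=graph_store,
)

# Query the stored graph through the index.
response = index.as_query_engine().query("Who did Ada Lovelace work with?")
print(response)
```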
@@ -159,6 +245,14 @@ work together in real-world applications with these interactive demos:
 
 ## LangChain
 
+<Cards>
+  <Cards.Card
+    icon={<GitHub />}
+    title="LangChain integration"
+    href="https://github.com/memgraph/langchain-memgraph"
+  />
+</Cards>
+
 [LangChain](https://www.langchain.com/) is a framework for developing applications powered by large language
 models (LLMs).
 
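A minimal sketch of the pattern, using the `MemgraphGraph` wrapper from `langchain_community` (the dedicated `langchain-memgraph` package linked above exposes an equivalent graph class); a default local Memgraph with empty credentials is assumed.

```python
from langchain_community.graphs import MemgraphGraph

graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="")

# The text schema an LLM chain is prompted with when generating Cypher.
print(graph.get_schema)

# Run arbitrary Cypher directly through the wrapper.
print(graph.query("RETURN 'connected' AS status"))
```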
pages/clustering/high-availability.mdx

Lines changed: 11 additions & 1 deletion
@@ -40,6 +40,7 @@ specifying only `--management-port` flag. This port is used for RPC network comm
 instances. When started, the data instance is MAIN by default. The coordinator will ensure that no data inconsistency can happen during and after the instance's
 restart. Once all instances are started, the user can start adding data instances to the cluster.
 
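For illustration, a hedged sketch of that registration step driven from Python over Bolt. The coordinator port, instance name, and data-instance ports are illustrative placeholders, not values from this page; consult the registration queries documented for high availability for the exact `REGISTER INSTANCE` options.

```python
from neo4j import GraphDatabase

# Illustrative addresses and ports; adjust to your own cluster layout.
coordinator = GraphDatabase.driver("bolt://localhost:7690", auth=("", ""))
with coordinator.session() as session:
    # Register a data instance with the coordinator, then promote it to MAIN.
    session.run(
        'REGISTER INSTANCE instance_1 WITH CONFIG '
        '{"bolt_server": "localhost:7687", '
        '"management_server": "localhost:13011", '
        '"replication_server": "localhost:10001"};'
    )
    session.run("SET INSTANCE instance_1 TO MAIN;")
coordinator.close()
```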
+
 <Callout type="info">
 
 The Raft consensus algorithm ensures that all nodes in a distributed system
@@ -51,6 +52,15 @@ since Raft, as a consensus algorithm, works by forming a majority in the decisio
 
 </Callout>
 
+<Callout type="info">
+
+When deploying coordinators to servers, you can use instances of almost any size. Instances of 4GiB or 8GiB will suffice, since the coordinators'
+job mainly involves network communication and storing Raft metadata. Coordinators and data instances can be deployed on the same servers (pairwise),
+but from the availability perspective it is better to separate them physically.
+
+</Callout>
+
+
 
 ## Bolt+routing
 
@@ -874,4 +884,4 @@ that and automatically promote the first alive REPLICA to become the new MAIN. T
 
 </Steps>
 
-<CommunityLinks/>
+<CommunityLinks/>