import {CommunityLinks} from '/components/social-card/CommunityLinks'
import { Cards } from 'nextra/components'
import { Callout } from 'nextra/components'
import GitHub from '/components/icons/GitHub'

# Integrations

Memgraph offers several integrations with popular AI frameworks to help you
customize and build your own GenAI application from scratch. Here is a list of
Memgraph's officially supported integrations:
- [Model Context Protocol](#model-context-protocol-mcp)
- [LlamaIndex](#llamaindex)
- [LangChain](#langchain)

## Model Context Protocol (MCP)

<Cards>
  <Cards.Card
    icon={<GitHub />}
    title="Memgraph MCP Server"
    href="https://github.com/memgraph/mcp-memgraph"
  />
</Cards>

Memgraph offers the [Memgraph MCP
Server](https://github.com/memgraph/mcp-memgraph), a lightweight server
implementation of the Model Context Protocol (MCP) designed to connect Memgraph
with LLMs.

<h3 className="custom-header">Quick start</h3>

<h4 className="custom-header">1. Run Memgraph MCP Server</h4>

1. Install [`uv`](https://docs.astral.sh/uv/getting-started/installation/) and create a `venv` with `uv venv`. Activate the virtual environment with `source .venv/bin/activate` on macOS/Linux or `.venv\Scripts\activate` on Windows.
2. Install dependencies: `uv add "mcp[cli]" httpx`
3. Run the Memgraph MCP server: `uv run server.py` (see the combined sketch below).
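Put together, the quick start looks roughly like this on macOS/Linux. This is a sketch that assumes you have cloned the [mcp-memgraph](https://github.com/memgraph/mcp-memgraph) repository and are running the commands from its root, where `server.py` lives:

```bash
# Sketch: run the Memgraph MCP Server from a clone of mcp-memgraph.
# Assumes uv is already installed and you are in the repository root.
uv venv                     # create a virtual environment in .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
uv add "mcp[cli]" httpx     # install the MCP CLI and HTTP dependencies
uv run server.py            # start the MCP server
```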

<h4 className="custom-header">2. Run MCP Client</h4>
1. Install [Claude for Desktop](https://claude.ai/download).
2. Add the Memgraph server to the Claude config:

**macOS/Linux**
```bash
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```

**Windows**

```powershell
code $env:AppData\Claude\claude_desktop_config.json
```

Example config:
```json
{
  "mcpServers": {
    "mcp-memgraph": {
      "command": "/Users/katelatte/.local/bin/uv",
      "args": [
        "--directory",
        "/Users/katelatte/projects/mcp-memgraph",
        "run",
        "server.py"
      ]
    }
  }
}
```
<Callout type="info">
You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on macOS/Linux or `where uv` on Windows. Make sure you pass in the absolute path to your server.
</Callout>
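For example, to find the absolute path to `uv` (the printed path is illustrative; yours will differ):

```bash
which uv       # macOS/Linux, prints e.g. /Users/you/.local/bin/uv
where.exe uv   # Windows; plain `where` is shadowed by a PowerShell alias
```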

<h4 className="custom-header">3. Chat with the database</h4>
1. Run Memgraph MAGE:
   ```bash
   docker run -p 7687:7687 memgraph/memgraph-mage --schema-info-enabled=True
   ```

   The `--schema-info-enabled` configuration setting is set to `True` to allow the LLM to run the `SHOW SCHEMA INFO` query (a quick verification sketch follows below).
2. Open Claude Desktop and see the Memgraph tools and resources listed. Try it out! (You can load dummy data from [Memgraph Lab](https://memgraph.com/docs/data-visualization) Datasets.)
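To confirm the flag took effect before wiring up Claude, you can open a Cypher shell inside the container and run the query yourself. `mgconsole` ships with the Memgraph image; the filter below is just one way to find the container ID:

```bash
# Find the running Memgraph container, then open a Cypher shell in it.
docker ps --filter ancestor=memgraph/memgraph-mage
docker exec -it <container-id> mgconsole
# Inside mgconsole, SHOW SCHEMA INFO; should now return the schema
# as JSON instead of an error.
```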

<h3 className="custom-header">Resources</h3>
- [Memgraph MCP Server Quick start](https://www.youtube.com/watch?v=0Tjw5QWj_qY): A video showcasing how to run the Memgraph MCP Server.
- [Introducing the Memgraph MCP Server](https://memgraph.com/blog/introducing-memgraph-mcp-server): A blog post on how to run the Memgraph MCP Server and what the future plans are.

## LlamaIndex

<Cards>
  <Cards.Card
    icon={<GitHub />}
    title="LlamaIndex integration"
    href="https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/graph_stores/llama-index-graph-stores-memgraph"
  />
</Cards>
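If you want to follow along locally, the integration ships as a standalone package. The PyPI name below is inferred from the repository path above, so treat it as an assumption:

```bash
pip install llama-index-graph-stores-memgraph
```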

LlamaIndex is a simple, flexible data framework for connecting custom data
sources to large language models. Currently, [Memgraph's
integration](https://docs.llamaindex.ai/en/stable/api_reference/storage/graph_stores/memgraph/)

[...]

## LangChain

<Cards>
  <Cards.Card
    icon={<GitHub />}
    title="LangChain integration"
    href="https://github.com/memgraph/langchain-memgraph"
  />
</Cards>

[LangChain](https://www.langchain.com/) is a framework for developing applications powered by large language
models (LLMs).
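To try the integration, install the package; the PyPI name below mirrors the repository name, so treat it as an assumption if the repository has moved:

```bash
pip install langchain-memgraph
```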