
Commit 108175e

Merge pull request #12 from nedhmn/feature/docs-getting-started
Feature/docs getting started
2 parents f5e182e + e7c8998 commit 108175e

File tree

14 files changed

+333
-17
lines changed


README.md

Lines changed: 65 additions & 7 deletions
@@ -3,22 +3,80 @@
 [![Documentation](https://img.shields.io/badge/Documentation-Link-blue)](https://nedhmn.github.io/ygo-ruling-ai-chatbot/)
 [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](./LICENSE)

-A monorepo for a Yu-Gi-Oh! ruling AI chatbot focused on Goat Format, built with Turborepo. It leverages AI RAG with OpenAI embeddings and Pinecone for specialized context.
+A monorepo for a Yu-Gi-Oh! ruling AI chatbot focused on [Goat Format](https://www.goatformat.com/whatisgoat.html), built with Turborepo. It leverages retrieval-augmented generation (RAG) with OpenAI embeddings and Pinecone for specialized context.

 <div align="center" style="margin-bottom: 20px">
   <img src="./assets/preview.gif" alt="ygo-ruling-ai-chatbot-preview">
 </div>

-## ✨ Features
+## Key Features

-- **AI RAG**: Uses Retrieval Augmented Generation with OpenAI embeddings and Pinecone for accurate ruling context.
-- **Goat Format Specialized**: Trained and focused on Yu-Gi-Oh! Goat Format rulings.
-- **Data Seeding**: Includes a seeder package to populate the vector database.
-- **Monorepo Structure**: Managed efficiently with Turborepo.
+- **Accurate Ruling Responses:** Provides precise answers for Yu-Gi-Oh! Goat Format rulings using a RAG approach.
+- **AI-Powered Chatbot:** Combines AI reasoning with relevant retrieved context for comprehensive answers.
+- **Dockerized Environment:** Ensures an easy, consistent setup.
+- **Turborepo Monorepo:** Efficiently manages project packages and applications.
+
+## 🛠️ Technologies Used
+
+- **Next.js:** React framework for the chatbot.
+- **Vercel AI SDK:** Integrates with AI models.
+- **OpenAI:** Used for embeddings and AI models.
+- **Pinecone:** Vector database for ruling embeddings.
+- **Tailwind CSS:** Utility-first CSS.
+- **shadcn/ui:** Reusable UI components.
+- **Turborepo:** Monorepo management.
+- **ESLint & Prettier:** Code linting and formatting.
+- **Cheerio:** Server-side HTML parsing with a jQuery-like API, used for web scraping.
+- **Nextra:** Documentation framework.
+- **Docker/docker-compose:** Containerization.
+- **TypeScript:** For type safety.

 ## 🚀 Getting Started

-Coming Soon...
+This guide will help you get the Yu-Gi-Oh! Ruling AI Chatbot up and running.
+
+### Prerequisites
+
+You will need the following installed:
+
+- **Docker** and **Docker Compose**
+
+Refer to the **[Documentation](https://nedhmn.github.io/ygo-ruling-ai-chatbot/getting-started/installation)** for detailed installation instructions if needed.
+
+### Clone the Repository
+
+```bash
+git clone https://github.com/nedhmn/ygo-ruling-ai-chatbot.git
+cd ygo-ruling-ai-chatbot
+```
+
+### Configure Your Environment
+
+You need to configure environment variables for the `seeder` and `web` services. Example files are provided to help you.
+
+1. **Create `./packages/seeder/.env.local`:** Navigate to the `./packages/seeder` directory, copy the contents of `.env.example` into a new file named `.env.local`, and fill in the required environment variables for the seeder service.
+2. **Create `./apps/web/.env.local`:** Navigate to the `./apps/web` directory, copy the contents of `.env.example` into a new file named `.env.local`, and fill in the required environment variables for the web service.
+
+Refer to the **[Documentation](https://nedhmn.github.io/ygo-ruling-ai-chatbot/getting-started/configuration)** for a full list and description of all configuration options.
+
+### Run the Application
+
+From the project root, run the following command to build and start the Docker containers:
+
+```bash
+docker compose up --build
+```
+
+This will start the seeder service (which populates the vector database) and then the web application.
+
+> [!NOTE]
+>
+> The `--build` flag is important the first time you run this command, or after making changes to the Dockerfiles.
+
+Once the `web` service is running, the application should be accessible in your web browser at `http://localhost:3000`.

 ## 📄 License

apps/docs/app/[[...mdxPath]]/page.tsx

Lines changed: 5 additions & 1 deletion
@@ -1,5 +1,6 @@
 import { generateStaticParamsFor, importPage } from "nextra/pages";
 import { useMDXComponents as getMDXComponents } from "../../mdx-components";
+import { METADATA } from "@/lib/constants";

 export const generateStaticParams = generateStaticParamsFor("mdxPath");

@@ -13,7 +14,10 @@ type Props = {
 export async function generateMetadata(props: Props) {
   const params = await props.params;
   const { metadata } = await importPage(params.mdxPath);
-  return metadata;
+  return {
+    ...metadata,
+    title: `${metadata.title} | ${METADATA.title}`,
+  };
 }

 const Wrapper = getMDXComponents().wrapper;

apps/docs/content/_meta.js

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+export default {
+  index: {
+    title: "Introduction",
+    theme: {
+      breadcrumb: false,
+    },
+  },
+  "getting-started": "Getting Started",
+  guides: "Guides",
+};
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+export default {
+  installation: "Installation",
+  configuration: "Configuration",
+  "running-the-app": "Running the App",
+};
Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
+---
+title: Configuration
+---
+
+# Configuration
+
+Configure environment variables for the application and seeder. Create two `.env.local` files: `apps/web/.env.local` and `packages/seeder/.env.local`.
+
+---
+
+## Environment Variables
+
+These variables are required in both `.env.local` files:
+
+- **`OPENAI_API_KEY`**: Your OpenAI API key. **Required.**
+- **`PINECONE_API_KEY`**: Your Pinecone API key. **Required.**
+- **`PINECONE_INDEX_NAME`**: Your Pinecone index name. Default: `ygo-ruling-ai-chatbot`. **Must match the index created in Pinecone.**
+- **`PINECONE_INDEX_DIMENSION`**: Your Pinecone index dimension. Default: `1536`. Must match the embedding dimension.
+- **`PINECONE_INDEX_METRIC`**: Pinecone index similarity metric. Default: `cosine`.
+- **`PINECONE_INDEX_CLOUD`**: Pinecone index cloud provider. Default: `aws`.
+- **`PINECONE_INDEX_REGION`**: Pinecone index region. Default: `us-east-1`.
+- **`OPENAI_EMBEDDING_MODEL`**: OpenAI embedding model. Default: `text-embedding-3-small`.
+- **`OPENAI_EMBEDDING_DIMENSIONS`**: Embedding dimension. Default: `1536`. Should match the Pinecone index dimension.
+
+The following variable is **only** required in `apps/web/.env.local`:
+
+- **`OPENAI_MODEL`**: OpenAI model for chatbot responses. Default: `gpt-4.1-nano`.
+
+## `apps/web/.env.local`
+
+Create `apps/web/.env.local` and include all the variables listed above.
+
+```bash filename="apps/web/.env.local" copy
+OPENAI_API_KEY=
+OPENAI_MODEL=gpt-4.1-nano
+OPENAI_EMBEDDING_MODEL=text-embedding-3-small
+OPENAI_EMBEDDING_DIMENSIONS=1536
+PINECONE_API_KEY=
+PINECONE_INDEX_NAME=ygo-ruling-ai-chatbot
+PINECONE_INDEX_DIMENSION=1536
+PINECONE_INDEX_METRIC=cosine
+PINECONE_INDEX_CLOUD=aws
+PINECONE_INDEX_REGION=us-east-1
+```
+
+## `packages/seeder/.env.local`
+
+Create `packages/seeder/.env.local` and include all variables listed above, **except** `OPENAI_MODEL`.
+
+```bash filename="packages/seeder/.env.local" copy
+OPENAI_API_KEY=
+OPENAI_EMBEDDING_MODEL=text-embedding-3-small
+OPENAI_EMBEDDING_DIMENSIONS=1536
+PINECONE_API_KEY=
+PINECONE_INDEX_NAME=ygo-ruling-ai-chatbot
+PINECONE_INDEX_DIMENSION=1536
+PINECONE_INDEX_METRIC=cosine
+PINECONE_INDEX_CLOUD=aws
+PINECONE_INDEX_REGION=us-east-1
+```
+
+> [!IMPORTANT]
+>
+> The values for shared variables must match in both `.env.local` files.
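
The dimension-matching constraint above can be expressed as a small startup check. This is a hypothetical helper sketch, not code from the repository — the function name `checkSharedEnv` and its return shape are assumptions:

```typescript
// Hypothetical helper (not part of this repo): verifies that the shared
// variables are present and that the embedding and index dimensions agree.
type Env = Record<string, string | undefined>;

function checkSharedEnv(env: Env): string[] {
  const required = [
    "OPENAI_API_KEY",
    "PINECONE_API_KEY",
    "PINECONE_INDEX_NAME",
    "PINECONE_INDEX_DIMENSION",
    "OPENAI_EMBEDDING_MODEL",
    "OPENAI_EMBEDDING_DIMENSIONS",
  ];
  const problems: string[] = [];
  for (const key of required) {
    if (!env[key]) problems.push(`${key} is missing`);
  }
  // The Pinecone index dimension must match the OpenAI embedding dimension.
  if (
    env.PINECONE_INDEX_DIMENSION &&
    env.OPENAI_EMBEDDING_DIMENSIONS &&
    env.PINECONE_INDEX_DIMENSION !== env.OPENAI_EMBEDDING_DIMENSIONS
  ) {
    problems.push(
      "PINECONE_INDEX_DIMENSION must equal OPENAI_EMBEDDING_DIMENSIONS",
    );
  }
  return problems;
}
```

Calling something like `checkSharedEnv(process.env)` in each service before startup would surface a missing key or a dimension mismatch early, instead of as a confusing Pinecone error later.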
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+---
+title: Installation
+---
+
+# Installation
+
+This guide walks you through the steps needed to set up `ygo-ruling-ai-chatbot` on your local machine.
+
+---
+
+## Prerequisites
+
+- **OpenAI Account:** You'll need an API key for generating embeddings and using AI models. Ensure your account has sufficient credits (at least a dollar is enough to run the app). Sign up or log in at [https://openai.com/](https://openai.com/).
+- **Pinecone Account:** You'll need access to a Pinecone vector database instance. The free tier is sufficient for getting started. Sign up or log in at [https://www.pinecone.io/](https://www.pinecone.io/).
+- **Git:** For cloning the repository ([installation guide](https://git-scm.com/downloads)).
+- **Docker:** You will need either [Docker Desktop](https://www.docker.com/products/docker-desktop/) (for Windows or macOS) or [Docker Engine](https://docs.docker.com/engine/install/) (for Linux).
+- **Docker Compose:** This is typically installed alongside Docker Desktop, or can be installed separately for Docker Engine ([installation guide](https://docs.docker.com/compose/install/)).
+
+## Cloning the Repository
+
+Once you have the prerequisites installed, you can clone the `ygo-ruling-ai-chatbot` repository from GitHub.
+
+1. Open your terminal or command prompt.
+2. Navigate to the directory where you want to store the project files.
+3. Run the following command:
+
+   ```bash
+   git clone https://github.com/nedhmn/ygo-ruling-ai-chatbot.git
+   ```
+
+4. Navigate into the cloned directory:
+
+   ```bash
+   cd ygo-ruling-ai-chatbot
+   ```
+
+You have now installed the prerequisites and obtained the project files. The next step is to configure the project with your OpenAI and Pinecone details, along with other settings.
Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
+---
+title: Running the App
+---
+
+# Running the App
+
+This document explains how to run the application using Docker Compose.
+
+---
+
+## Running the Application
+
+1. Open your terminal in the project's root directory.
+2. Run this command:
+
+   ```bash copy
+   docker compose up --build
+   ```
+
+> [!NOTE]
+>
+> The `--build` flag is important the first time you run this, or after changing the Docker setup.
+
+## What Happens
+
+When you run the command:
+
+1. The data-seeding service (`seeder`) runs first and populates the vector database.
+2. Then the main app (`web`) starts.
+3. The app will be ready at `http://localhost:3000`.
+
+You'll see messages in your terminal as the services start up.
+
+> [!NOTE]
+>
+> The `seeder` will only seed the database once.
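One plausible shape for that run-once behavior is a guard that checks the vector count before upserting. This is a sketch under assumptions, not the repository's actual seeder code — the `VectorIndex` interface here merely mimics a Pinecone index (stats plus an upsert call):

```typescript
// Sketch of a run-once seeding guard (assumed behavior, not the repo's code).
// Returns true if this call performed the seeding, false if it was skipped.
type VectorIndex = {
  describeStats: () => Promise<{ totalRecordCount: number }>;
  upsert: (records: { id: string; values: number[] }[]) => Promise<void>;
};

async function seedOnce(
  index: VectorIndex,
  records: { id: string; values: number[] }[],
): Promise<boolean> {
  // Skip seeding if the index already contains vectors.
  const stats = await index.describeStats();
  if (stats.totalRecordCount > 0) return false;
  await index.upsert(records);
  return true;
}
```

With a guard like this, restarting `docker compose up` does not re-embed or re-upsert the rulings on every run.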
+## Success!
+
+Congratulations! If everything went well, the application should now be running and accessible in your web browser.
+
+Here's a quick look at what the running application should look like:
+
+![YGO Ruling Chatbot preview](/introduction-preview.gif)
+
+## Stopping the Application
+
+To stop the application, press `Ctrl + C` in the terminal.
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+---
+title: Answering Ruling Questions
+---
+
+# Answering Ruling Questions
+
+This guide explains how to use the application to ask questions about Goat Format rulings.
+
+---
+
+## What is Goat Format?
+
+This chatbot is specifically trained on [Goat Format](https://www.goatformat.com/whatisgoat.html) rulings.
+
+Goat Format is based on the summer 2005 Yu-Gi-Oh! era. The widely accepted card pool includes cards up to `The Lost Millennium (TLM)`, excluding `Exarion Universe` and `Cybernetic Revolution (CRV)`.
+
+## Asking a Question
+
+Once you have the application running, you'll see cards on the homepage. If you don't have an ongoing conversation, you can click one of these cards to start asking a Goat Format question.
+
+Here's an example of clicking a card to initiate a question:
+
+![YGO Ruling Chatbot preview](/introduction-preview.gif)
+
+After clicking, you can type your question about a Goat Format ruling into the chat interface.
+
+> [!NOTE]
+>
+> For better and more detailed responses, you can set the `OPENAI_MODEL` environment variable in `apps/web/.env.local` to a stronger model.
+
+> [!IMPORTANT]
+>
+> As of `v1.0.0`, the chatbot is primarily trained on [individual card rulings](https://www.goatformat.com/indivrulings.html). We plan to expand its knowledge base in future updates.
+
+## Getting Answers
+
+The chatbot will process your question against its ruling data and provide an answer.

apps/docs/content/index.mdx

Lines changed: 41 additions & 2 deletions
@@ -2,6 +2,45 @@
 title: Introduction
 ---

-## Welcome
+# Introduction

-Welcome to Yu-Gi-Oh! Ruling AI Chatbot!
+Welcome to the documentation for the Yu-Gi-Oh! Ruling AI Chatbot project. This document provides an overview, setup instructions, and usage guides.
+
+---
+
+![YGO Ruling Chatbot preview](/introduction-preview.gif)
+
+## Key Features
+
+- **Accurate Ruling Responses:** Provides precise answers for Yu-Gi-Oh! Goat Format rulings using a RAG approach.
+- **AI-Powered Chatbot:** Combines AI reasoning with relevant retrieved context for comprehensive answers.
+- **Dockerized Environment:** Ensures an easy, consistent setup.
+- **Turborepo Monorepo:** Efficiently manages project packages and applications.
+
+## Technologies Used
+
+- **Next.js:** React framework for the chatbot.
+- **Vercel AI SDK:** Integrates with AI models.
+- **OpenAI:** Used for embeddings and AI models.
+- **Pinecone:** Vector database for ruling embeddings.
+- **Tailwind CSS:** Utility-first CSS.
+- **shadcn/ui:** Reusable UI components.
+- **Turborepo:** Monorepo management.
+- **ESLint & Prettier:** Code linting and formatting.
+- **Cheerio:** Server-side HTML parsing with a jQuery-like API, used for web scraping.
+- **Nextra:** Documentation framework.
+- **Docker/docker-compose:** Containerization.
+- **TypeScript:** For type safety.
+
+## Retrieval-Augmented Generation (RAG)
+
+This project uses RAG to produce accurate Yu-Gi-Oh! ruling answers. Instead of relying only on the AI's general knowledge, RAG retrieves relevant ruling information from Pinecone's vector database.
+
+How it works:
+
+1. **Question Embedding:** The user's question is converted to an embedding using OpenAI.
+2. **Retrieval:** The embedding is used to search Pinecone for similar ruling embeddings.
+3. **Augmentation:** The retrieved ruling text provides context to the AI model.
+4. **Generation:** The AI uses the retrieved context and its own reasoning to generate an accurate answer.
+
+For more information, [here's a fantastic blog post](https://www.pinecone.io/learn/retrieval-augmented-generation/) about RAG and why it's used.
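
The four steps can be sketched as a small orchestration function. This is an illustrative outline, not the app's actual code: the `embed`, `search`, and `generate` parameters stand in for the calls to OpenAI embeddings, a Pinecone index query, and the chat model.

```typescript
// Illustrative RAG outline (not the app's actual code). The dependencies are
// injected so the flow of the four steps is visible without any API calls.
type Deps = {
  embed: (text: string) => Promise<number[]>; // step 1: OpenAI embedding
  search: (vector: number[], topK: number) => Promise<string[]>; // step 2: Pinecone query
  generate: (prompt: string) => Promise<string>; // step 4: chat model
};

async function answerRulingQuestion(
  question: string,
  deps: Deps,
): Promise<string> {
  // 1. Question Embedding
  const vector = await deps.embed(question);
  // 2. Retrieval: find the most similar ruling texts
  const rulings = await deps.search(vector, 5);
  // 3. Augmentation: prepend the retrieved rulings to the prompt as context
  const prompt = [
    "Answer the Goat Format ruling question using the rulings below.",
    ...rulings.map((r, i) => `Ruling ${i + 1}: ${r}`),
    `Question: ${question}`,
  ].join("\n");
  // 4. Generation
  return deps.generate(prompt);
}
```

In the real app these dependencies would be OpenAI's embeddings endpoint, a query against the Pinecone index, and text generation via the Vercel AI SDK.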