
Conversation

@Anush008 Anush008 (Contributor) commented Aug 27, 2025

Description

This PR adds support for using Qdrant as the vector database with AIOS.
Users can configure the vector DB of their choice; it defaults to Chroma as before.
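
For illustration, a minimal sketch of what selecting the backend could look like. The key names below are hypothetical; the actual AIOS configuration interface may differ.

# Hypothetical configuration sketch; the real AIOS keys may differ.
vector_db_config = {
    "provider": "qdrant",   # or "chroma", the default
    "host": "localhost",
    "port": 6333,
}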

I've also applied some unrelated lint fixes to pass the pre-commit checks.

You can run the integration tests with

docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant

python -m unittest tests.modules.test_qdrant_integration
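
As a quick sanity check that the container is reachable before running the suite (this assumes the qdrant-client package is installed):

from qdrant_client import QdrantClient

# Lists the collections on the local server; raises if Qdrant is unreachable.
print(QdrantClient(host="localhost", port=6333).get_collections())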

@Anush008 Anush008 marked this pull request as draft August 27, 2025 10:49
@Anush008 Anush008 marked this pull request as ready for review August 28, 2025 09:18
@BRama10 BRama10 self-requested a review September 5, 2025 03:56
@BRama10 BRama10 previously approved these changes Sep 12, 2025
@Anush008 Anush008 (Contributor Author) commented

Hey @BRama10
Just bumping this PR. Please take a look when possible.

@evison evison (Collaborator) commented Sep 17, 2025

Can you update the test workflows to add relevant tests for the new functionality you created, and make sure the newly added functions pass all tests? Thanks.

@Anush008 Anush008 (Contributor Author) commented

Hey @evison. Done.

try:
    client = QdrantClient(host="localhost", port=6333)
    client.get_collections()
except Exception as e:
    raise unittest.SkipTest(f"Qdrant is not available: {e}")
@Anush008 Anush008 (Contributor Author) commented Sep 18, 2025

The tests are skipped if Qdrant is not running.

docker run -p 6333:6333 qdrant/qdrant

You can view the data at http://localhost:6333/dashboard
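
For reference, a minimal sketch of how such an availability check can be wired into a unittest test class; the class name here is illustrative rather than the exact code in this PR:

import unittest

from qdrant_client import QdrantClient


class QdrantIntegrationTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Skip the whole class when the local Qdrant server is not reachable.
        try:
            cls.client = QdrantClient(host="localhost", port=6333)
            cls.client.get_collections()
        except Exception as e:
            raise unittest.SkipTest(f"Qdrant is not available: {e}")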

@Anush008 Anush008 (Contributor Author) commented

Hey @evison. Just bumping this PR.
Please take a look when possible.

@evison evison (Collaborator) commented Sep 22, 2025

The code is almost ready to merge; please fix the following errors in the Ollama test. Once these are fixed, the PR will be ready to merge.
For details of the test results please refer to: https://github.com/agiresearch/AIOS/actions/runs/17926625181/job/50974434351?pr=506

Thread 3: Completed in 0.20s with response: {"response_class": "llm", "response_message": null, "tool_calls": null, "finished": true, "error": "Connection Error with model 'qwen3:4b'. Could not reach the LLM service.", "status_code": 503}

Thread 1: Completed in 0.20s with response: {"response_class": "llm", "response_message": null, "tool_calls": null, "finished": true, "error": "Connection Error with model 'qwen3:4b'. Could not reach the LLM service.", "status_code": 503}

======================================================================
FAIL: test_ollama_different_models (tests.modules.llm.ollama.test_concurrent.TestConcurrentOllamaQueries)
Case 2: Queries specify different Ollama models (mixing qwen3:1.7b and qwen3:4b).
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/AIOS/AIOS/tests/modules/llm/ollama/test_concurrent.py", line 126, in test_ollama_different_models
    self._verify_response(result, "Different Ollama Models", i)
  File "/home/runner/work/AIOS/AIOS/tests/modules/llm/ollama/test_concurrent.py", line 68, in _verify_response
    self.assertEqual(status, 200, f"Request {index} ({test_name}) should succeed, but failed with status {status}")
AssertionError: 503 != 200 : Request 1 (Different Ollama Models) should succeed, but failed with status 503
======================================================================
FAIL: test_ollama_some_specified_different_models (tests.modules.llm.ollama.test_concurrent.TestConcurrentOllamaQueries)
Case 4: Some queries specify different Ollama models, others don't specify any model.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/AIOS/AIOS/tests/modules/llm/ollama/test_concurrent.py", line 193, in test_ollama_some_specified_different_models
    self._verify_response(result, "Some Specified Different Models", i)
  File "/home/runner/work/AIOS/AIOS/tests/modules/llm/ollama/test_concurrent.py", line 68, in _verify_response
    self.assertEqual(status, 200, f"Request {index} ({test_name}) should succeed, but failed with status {status}")
AssertionError: 503 != 200 : Request 2 (Some Specified Different Models) should succeed, but failed with status 503
----------------------------------------------------------------------

@Anush008 Anush008 (Contributor Author) commented

Hello @evison. Those seem unrelated to this PR.

You can see the same error logs in #507 too.

@aiosfoundation aiosfoundation merged commit ea12926 into agiresearch:main Sep 23, 2025
2 checks passed
