diff --git a/examples/gemini/python/docs-agent/README.md b/examples/gemini/python/docs-agent/README.md index 26021768b..57a7f5653 100644 --- a/examples/gemini/python/docs-agent/README.md +++ b/examples/gemini/python/docs-agent/README.md @@ -7,6 +7,36 @@ Docs Agent provides a set of easy-to-use self-service tools designed to give you and your team access to Google's [Gemini API][genai-doc-site] for learning, experimentation, and project deployment. +## Docs Agent MCP integration [NEW] + +With the latest MCP (Model Context Protocol) integration, you can set up and launch +an MCP server and enable the Docs Agent CLI (`agent tools`) to use this MCP server. + +The following example shows Docs Agent interacting with a +[`git` MCP server][git-mcp-server] on the host machine: + +``` +$ agent tools Show me the latest commit in the Docs Agent project. + +Using tools: ['git'] + +Commit: 082949927e88df429c76e6dbf0a9e216c88fa5b0 +Author: Bob Alice +Date: Tue May 13 11:22:19 2025 -0700 +Message: Update BeautifulSoup findAll to find_all. +``` + +To enable an MCP server, update the `config.yaml` file in your Docs Agent project, +for example: + +``` +mcp_servers: + - server_type: "stdio" + command: "uv" + name: "git" + args: ["--directory","/usr/local/home/user01/mcp_servers/servers/src/git", "run", "mcp-server-git"] +``` + ## Docs Agent web app Docs Agent uses a technique known as **Retrieval Augmented Generation (RAG)**, which @@ -64,8 +94,6 @@ The list below summarizes the tasks and features supported by Docs Agent: chunks that are most relevant to user questions. - **Add context to a user question**: Add chunks returned from a semantic search as [context][prompt-structure] to a prompt. -- **Fact-check responses**: This [experimental feature][fact-check-section] composes - a follow-up prompt and asks the language model to “fact-check” its own previous response.
- **Generate related questions**: In addition to answering a question, Docs Agent can [suggest related questions][related-questions-section] based on the context of the question. @@ -113,6 +141,13 @@ The list below summarizes the tasks and features supported by Docs Agent: You can use this feature for creating tasks as well. For example, see the [DescribeImages][describe-images] task. +- **Interact with LLM using external tools**: The `agent tools` command allows + you to interact with the Gemini model using configured external tools + (through MCP - Model Context Protocol). This enables the agent to perform + actions by leveraging specialized tools. (See + [Docs Agent CLI reference][cli-reference] and + [Docs Agent concepts][docs-agent-concepts]). + For more information on Docs Agent's architecture and features, see the [Docs Agent concepts][docs-agent-concepts] page. @@ -244,7 +279,19 @@ Clone the Docs Agent project and install dependencies: poetry install ``` -4. Enter the `poetry` shell environment: +4. Set up the Poetry environment: + + ``` + poetry env activate + ``` + +5. Install the `shell` plugin: + + ``` + poetry self add poetry-plugin-shell + ``` + +6. Enter the `poetry` shell environment: ``` poetry shell @@ -253,7 +300,7 @@ Clone the Docs Agent project and install dependencies: **Important**: From this point, all `agent` command lines below need to run in this `poetry shell` environment. -5. (**Optional**) To enable autocomplete commands and flags related to +7. (**Optional**) To enable autocomplete commands and flags related to Docs Agent in your shell environment, run the following command: ``` @@ -450,7 +497,6 @@ Meggin Kearney (`@Meggin`), and Kyo Lee (`@kyolee415`). 
[set-up-docs-agent]: #set-up-docs-agent [preprocess-dir]: ./docs_agent/preprocess/ [populate-vector-database]: ./docs_agent/preprocess/populate_vector_database.py -[fact-check-section]: ./docs/concepts.md#using-a-language-model-to-fact_check-its-own-response [related-questions-section]: ./docs/concepts.md#using-a-language-model-to-suggest-related-questions [submit-a-rewrite]: ./docs/concepts.md#enabling-users-to-submit-a-rewrite-of-a-generated-response [like-generated-responses]: ./docs/concepts.md#enabling-users-to-like-generated-responses @@ -479,3 +525,4 @@ Meggin Kearney (`@Meggin`), and Kyo Lee (`@kyolee415`). [tasks-dir]: tasks/ [describe-images]: tasks/describe-images-for-alt-text-task.yaml [create-a-new-task]: docs/create-a-new-task.md +[git-mcp-server]: https://github.com/modelcontextprotocol/servers/tree/main/src/git diff --git a/examples/gemini/python/docs-agent/apps_script/drive_to_markdown.gs b/examples/gemini/python/docs-agent/apps_script/drive_to_markdown.gs index bf4b7f86f..5bfca82e3 100644 --- a/examples/gemini/python/docs-agent/apps_script/drive_to_markdown.gs +++ b/examples/gemini/python/docs-agent/apps_script/drive_to_markdown.gs @@ -75,7 +75,7 @@ function convertDriveFolder(folderName, outputFolderName="", indexFile="") { while (myfiles.hasNext()) { var myfile = myfiles.next(); var ftype = myfile.getMimeType(); - // If this is a shorcut, retrieve the target file + // If this is a shortcut, retrieve the target file if (ftype == "application/vnd.google-apps.shortcut") { var fid = myfile.getTargetId(); var myfile = DriveApp.getFileById(fid); @@ -105,7 +105,7 @@ function convertDriveFolder(folderName, outputFolderName="", indexFile="") { var furl = myfile.getUrl(); var fcreate = myfile.getDateCreated(); - //Function returns an array, assign each array value to seperate variables + //Function returns an array, assign each array value to separate variables var backup_results = returnBackupHash(sheet, "Backup", fid, start_data_row, 1, 9, 3); if 
(backup_results != undefined && backup_results[0] != "no_results") { var backup_fid = backup_results[0]; @@ -229,7 +229,7 @@ function convertDriveFolder(folderName, outputFolderName="", indexFile="") { status, ]; sheet.appendRow(metadata); - // Return final row to inserRichText into correct rows + // Return final row to insertRichText into correct rows row_number = sheet.getLastRow(); insertRichText(sheet, original_chip, "C", row_number); insertRichText(sheet, md_chip, "E", row_number); diff --git a/examples/gemini/python/docs-agent/apps_script/gmail_to_markdown.gs b/examples/gemini/python/docs-agent/apps_script/gmail_to_markdown.gs index 3263ef100..d8f4b6351 100644 --- a/examples/gemini/python/docs-agent/apps_script/gmail_to_markdown.gs +++ b/examples/gemini/python/docs-agent/apps_script/gmail_to_markdown.gs @@ -66,7 +66,7 @@ function exportEmailsToMarkdown(search, folderName) { let md5_hash = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5,hash_content, Utilities.Charset.US_ASCII); let hash_str = byteToStr(md5_hash); - //Function returns an array, assign each array value to seperate variables. For emails, only need to retrieve + //Function returns an array, assign each array value to separate variables. 
For emails, only need to retrieve // backup markdown ids var backup_results = returnBackupHash(sheet, "Backup", hash_str, start_data_row, 7, 4, 5); if (backup_results != undefined && backup_results[0] != "no_results") { @@ -134,4 +134,4 @@ function exportEmailsToMarkdown(search, folderName) { Logger.log("There is a total of " + unchangedEmails + " unchanged emails."); Logger.log("Grand total of " + emailTotal + " emails."); } -} \ No newline at end of file +} diff --git a/examples/gemini/python/docs-agent/apps_script/main.gs b/examples/gemini/python/docs-agent/apps_script/main.gs index 2fe88de33..bc8f962e7 100644 --- a/examples/gemini/python/docs-agent/apps_script/main.gs +++ b/examples/gemini/python/docs-agent/apps_script/main.gs @@ -24,4 +24,4 @@ var folderInput = "input-folder" function main() { convertDriveFolderToMDForDocsAgent(folderInput); exportEmailsToMarkdown(SEARCH_QUERY, folderOutput); -} \ No newline at end of file +} diff --git a/examples/gemini/python/docs-agent/config.yaml b/examples/gemini/python/docs-agent/config.yaml index d104ab5f4..440d1612b 100644 --- a/examples/gemini/python/docs-agent/config.yaml +++ b/examples/gemini/python/docs-agent/config.yaml @@ -17,8 +17,8 @@ configs: - product_name: "Fuchsia" models: - - language_model: "models/gemini-1.5-flash-latest" - embedding_model: "models/embedding-001" + - language_model: "gemini-2.0-flash" + embedding_model: "text-embedding-004" api_endpoint: "generativelanguage.googleapis.com" embedding_api_call_limit: 1400 embedding_api_call_period: 60 @@ -41,10 +41,16 @@ configs: Read the context below first and answer the user's question at the end. In your answer, provide a summary in three or five sentences. (BUT DO NOT USE ANY INFORMATION YOU KNOW ABOUT THE WORLD.)" - fact_check_question: "Can you compare the text below to the information - provided in this prompt above and write a short message that warns the readers - about which part of the text they should consider fact-checking? 
(Please keep - your response concise, focus on only one important item, but DO NOT USE BOLD - TEXT IN YOUR RESPONSE.)" model_error_message: "Gemini is not able to answer this question at the moment. Rephrase the question and try asking again." + # mcp_servers: + # - name: "git" + # server_type: "stdio" + # command: "uv" + # args: ["--directory","/usr/local/home/mcp_servers/servers/src/git", "run", "mcp-server-git"] + # - name: "puppeteer" + # server_type: "stdio" + # command: "npx" + # args: ["-y", "@modelcontextprotocol/server-puppeteer"] + # env: + # PUPPETEER_LAUNCH_OPTIONS: '{ "headless": true, "args": [] }' diff --git a/examples/gemini/python/docs-agent/docs/cli-reference.md b/examples/gemini/python/docs-agent/docs/cli-reference.md index 7088cdeec..dad5f30aa 100644 --- a/examples/gemini/python/docs-agent/docs/cli-reference.md +++ b/examples/gemini/python/docs-agent/docs/cli-reference.md @@ -176,12 +176,18 @@ agent helpme <REQUEST> --file <PATH_TO_FILE> ``` Replace `REQUEST` with a prompt and `PATH_TO_FILE` with a file's -absolure or relative path, for example: +absolute or relative path, for example: ```sh agent helpme write comments for this C++ file? --file ../my-project/test.cc ``` +You can also provide multiple files for the same request, for example: + +```sh +agent helpme summarize the content of this file? --file ../my-project/example_01.md --file ../my-project/example_02.md --file ~/my-new-project/example.md +``` + ### Ask for advice using RAG The command below uses a local or online vector database (specified in @@ -258,6 +264,32 @@ For example: agent helpme write a concept doc covering all features in this project?
--allfiles ~/my-project --new ``` +### Ask the model to read a list of file names from an input file + +Similar to the `--perfile` flag, the command below reads the input +file that contains a list of filenames and applies the request to +each file in the list: + +```sh +agent helpme <REQUEST> --list_file <PATH_TO_FILE> +``` + +For example: + +```sh +agent helpme write an alt text string for this image? --list_file ./mylist.txt +``` + +where the `mylist.txt` file contains a list of file names in plain text +as shown below: + +```none +$ cat mylist.txt +docs/images/apps-script-screenshot-01.png +docs/images/docs-agent-ui-screenshot-01.png +docs/images/docs-agent-embeddings-01.png +``` + ### Ask the model to print the output in JSON The command below prints the output from the model in JSON format: @@ -379,6 +411,18 @@ The command below deletes an online corpus: agent delete-corpus --name corpora/example01 ``` +### Interact with the model using external tools + +The command below sends your prompt to the Gemini model and allows the model to +use configured external tools (through MCP servers defined in `config.yaml`) to +fulfill the request. + +Note: You can use the `-v` flag to enable verbose mode and see the tool execution. + +```sh +agent tools <PROMPT> +``` + [config-yaml]: ../config.yaml diff --git a/examples/gemini/python/docs-agent/docs/concepts.md b/examples/gemini/python/docs-agent/docs/concepts.md index c8cfb53ac..4530a11d3 100644 --- a/examples/gemini/python/docs-agent/docs/concepts.md +++ b/examples/gemini/python/docs-agent/docs/concepts.md @@ -97,10 +97,6 @@ The following list summarizes the tasks and features of the Docs Agent chat app: most relevant content given user questions. - **Add context to a user question**: Add a list of text chunks returned from a semantic search as context in a prompt. -- **(Experimental) “Fact-check” responses**: This experimental feature composes - a follow-up prompt and asks the language model to “fact-check” its own previous response.
- (See the [Using a language model to fact-check its own response][fact-check-section] - section.) - **Generate related questions**: In addition to displaying a response to the user question, the web UI displays 5 questions generated by the language model based on the context of the user question. (See the @@ -148,29 +144,14 @@ The following events take place in the Docs Agent chat app: 8. The language model generates a response and the Docs Agent server renders it on the chat UI. -Additional events for [“fact-checking” a generated response][fact-check-section]: - -9. The Docs Agent server prepares another prompt that compares the generated response - (in step 8) to the context (in step 6) and asks the language model to look for - a discrepancy in the response. -10. The language model generates a response that points out one major discrepancy - (if it exists) between its previous response and the context. -11. The Docs Agent server renders this response on the chat UI as a call-out note. -12. The Docs Agent server passes this second response to the vector database to - perform semantic search. -13. The vector database returns a list of relevant content (that is closely related - to the second response). -14. The Docs Agent server renders the top URL of this list on the chat UI and - suggests that the user checks out this URL for fact-checking. - Additional events for [suggesting 5 questions related to the user question][related-questions-section]: -15. The Docs Agent server prepares another prompt that asks the language model to +9. The Docs Agent server prepares another prompt that asks the language model to generate 5 questions based on the context (in step 6). -16. The language model generates a response that contains a list of questions related +10. The language model generates a response that contains a list of questions related to the context. -17. The Docs Agent server renders the questions on the chat UI. +11. 
The Docs Agent server renders the questions on the chat UI. ## Supplementary features @@ -182,71 +163,6 @@ enhancing the usability of the Q&A experience powered by generative AI. **Figure 6**. A screenshot of the Docs Agent chat UI showing the sections generated by three distinct prompts. -### Using a language model to fact-check its own response - -In addition to using the prompt structure above (shown in Figure 3), we‘re currently -experimenting with the following prompt setup for “fact-checking” responses generated -by the language model: - -- Condition: - - ``` - You are a helpful chatbot answering questions from users. Read the following context - first and answer the question at the end: - ``` - -- Context: - - ``` - - ``` - -- Additional condition (for fact-checking): - - ``` - Can you compare the text below to the information provided in this prompt above - and write a short message that warns the readers about which part of the text they - should consider fact-checking? (Please keep your response concise and focus on only - one important item.)" - ``` - -- Previously generated response - - ``` - Text: - ``` - -This "fact-checking" prompt returns a response similar to the following example: - -``` -The text states that Flutter chose to use Dart because it is a fast, productive, object-oriented -language that is well-suited for building user interfaces. However, the context provided in the -prompt states that Flutter chose Dart because it is a fast, productive language that is well-suited -for Flutter's problem domain: creating visual user experiences. Therefore, readers should consider -fact-checking the claim that Dart is well-suited for building user interfaces. 
-``` - -After the second response, notice that the Docs Agent chat UI also suggests a URL to visit for -fact-checking (see Figure 6), which looks similar to the following example: - -``` -To verify this information, please check out: - -https://docs.flutter.dev/resources/faq -``` - -To identify this URL, the Docs Agent server takes the second response (which is the paragraph that -begins with “The text states that ...” in the example above) and uses it to query the vector -database. Once the vector database returns a list of the most relevant content to this response, -the UI only displays the top URL to the user. - -Keep in mind that this "fact-checking" prompt setup is currently considered **experimental** -because we‘ve seen cases where a language model would end up adding incorrect information into its -second response as well. However, we saw that adding this second response (which brings attention -to the language model’s possible hallucinations) seems to improve the usability of the system since it -serves as a reminder to the users that the language model‘s response is far from being perfect, which -helps encourage the users to take more steps to validate generated responses for themselves. - ### Using a language model to suggest related questions The project‘s latest web UI includes the “Related questions” section, which displays five @@ -351,13 +267,38 @@ _Semantic Retriever Quickstart_ page. Cloud project from your host machine. For detailed instructions, see the [Authentication with OAuth quickstart][oauth-quickstart] page. +### Using Tools (MCP Integration) + +Docs Agent integrates with external tools using the Model Context Protocol +(MCP). This enables more complex and interactive workflows where the language +model can delegate specific tasks to specialized tools. + +1. **Configuration**: You define available MCP tool servers in your + [`config.yaml`][config-yaml] file under the `mcp_servers` key.
Each entry + specifies how Docs Agent connects to a tool server (currently `stdio` or + `sse`). +2. **Tool Discovery**: When you run the `agent tools` command, Docs Agent + connects to the configured MCP servers and discovers the available tools and + their functions (including required parameters). +3. **Function Calling**: This list of tools is provided to the Gemini model. + When you provide a prompt (e.g., "Summarize recent changes in `main.py`"), + the model can decide if executing one of the available tools would help + fulfill the prompt. If so, it issues a *function call*. +4. **Execution**: Docs Agent intercepts this function call, executes the + corresponding tool through the appropriate MCP server, and captures the result. +5. **Response Generation**: The tool's result is sent back to the Gemini model + as context. The model then uses this result to generate the final response to + your original prompt. + +This mechanism allows Docs Agent to ground its responses in real-time information or actions +performed by external tools running as MCP servers.
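The numbered flow above can be sketched as a minimal control loop. The sketch below is purely illustrative: `run_agent_tools`, the stubbed `model` callable, and the `servers` mapping are hypothetical names, not Docs Agent's actual API (the real implementation lives in Docs Agent's `ToolManager` and the Gemini function-calling interface):

```python
# Hypothetical sketch of the "agent tools" loop (steps 1-5 above).
# All names are illustrative; not Docs Agent's actual implementation.

def run_agent_tools(prompt, servers, model):
    # Step 2: discover which tools each configured MCP server exposes.
    tools = {name: sorted(fns) for name, fns in servers.items()}
    # Step 3: the model sees the tool list and may issue a function call.
    reply = model(prompt, tools)
    while "function_call" in reply:
        call = reply["function_call"]
        # Step 4: execute the call through the matching MCP server.
        result = servers[call["server"]][call["name"]](**call.get("args", {}))
        # Step 5: send the tool result back so the model can answer.
        reply = model(prompt, tools, tool_result=result)
    return reply["text"]
```

The loop repeats until the model stops requesting tools and returns plain text, which is why a single prompt can trigger several tool executions.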
+ +[set-up-docs-agent]: ../README.md#set-up-docs-agent [files-to-plain-text]: ../docs_agent/preprocess/files_to_plain_text.py [populate-vector-database]: ../docs_agent/preprocess/populate_vector_database.py [context-source-01]: http://eventhorizontelescope.org -[fact-check-section]: #using-a-language-model-to-fact_check-its-own-response [related-questions-section]: #using-a-language-model-to-suggest-related-questions [submit-a-rewrite]: #enabling-users-to-submit-a-rewrite-of-a-generated-response [like-generated-responses]: #enabling-users-to-like-generated-responses diff --git a/examples/gemini/python/docs-agent/docs/config-reference.md b/examples/gemini/python/docs-agent/docs/config-reference.md index 2cdc30629..5e9f32493 100644 --- a/examples/gemini/python/docs-agent/docs/config-reference.md +++ b/examples/gemini/python/docs-agent/docs/config-reference.md @@ -147,6 +147,31 @@ for example: secondary_corpus_name: "corpora/my-example-corpus" ``` +### mcp_servers + +This field defines a list of Model Context Protocol (MCP) servers that Docs +Agent can use. These servers expose external tools that the language model can +use when responding to prompts through the `agent tools` command. +Each entry in the list defines a connection to one MCP server. + +```yaml +mcp_servers: + - name: "my_custom_tool" # A unique identifier for this tool server + server_type: "stdio" # Currently 'stdio' or 'sse' + command: ["npx"] # This can be npx to directly run an MCP server with npx + args: ["", "
"] # Command to use the server for stdio + env: + PUPPETEER_LAUNCH_OPTIONS: '{ "headless": true, "args": [] }' # Example: environment variables passed to the server + # url: "http://localhost:8080/mcp" # URL for the server (for sse) + - name: "another_tool" + server_type: "sse" + url: "http://localhost:8080/mcp" + - name: "git" + server_type: "stdio" + command: "uv" + args: ["--directory","~/mcp_servers/servers/src/git", "run", "mcp-server-git"] # Requires a local checkout of the git MCP server +``` [config-yaml]: ../config.yaml diff --git a/examples/gemini/python/docs-agent/docs/create-a-new-task.md b/examples/gemini/python/docs-agent/docs/create-a-new-task.md index 48a8f5be0..0267b641d 100644 --- a/examples/gemini/python/docs-agent/docs/create-a-new-task.md +++ b/examples/gemini/python/docs-agent/docs/create-a-new-task.md @@ -121,6 +121,37 @@ A step that runs a POSIX command: **Important**: To run a POSIX command, the `function` field must be set to `posix`. +### A script step + +A step that runs a custom script: + +``` + steps: + - prompt: "extract_image_files.py" + function: "script" +``` + +**Important**: To run a custom script, the script must be stored in +the [`scripts`][scripts-dir] directory of the Docs Agent setup. + +You can provide a `script` step with a custom input string as +arguments to the script using the `script_input` field, for example: + +``` + steps: + - prompt: "extract_image_files.py" + function: "script" + flags: + script_input: "<INPUT>" + default_input: "./README.md" +``` + +This step runs the following command line: + +```sh +$ python3 scripts/extract_image_files.py <INPUT> +``` + ### A step that reads a file The `file` flag reads the specified file and adds its content @@ -148,6 +179,18 @@ A step that runs the `helpme` command with the `file` flag and accepts custom in When this step is run, the `<INPUT>` string will be replaced with the value provided in the `--custom_input` field at runtime.
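For reference, a script used by the `script` step shown earlier can be an ordinary Python program that receives its input as a command-line argument. Only the filename `extract_image_files.py` comes from the example above; the body below is a hypothetical sketch that prints the Markdown image paths found in the input file, one per line:

```python
# Hypothetical sketch of a custom script for a "script" step.
# It prints the Markdown image paths found in the input file, one per line.
import re
import sys

def extract_image_files(markdown_text):
    # Match the Markdown image syntax: ![alt text](path)
    return re.findall(r"!\[[^\]]*\]\(([^)\s]+)\)", markdown_text)

if __name__ == "__main__" and len(sys.argv) > 1:
    # The step passes script_input (or default_input) as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        for image_path in extract_image_files(f.read()):
            print(image_path)
```

Printing one file name per line makes the script's output directly usable as input for a later step.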
+You can also provide multiple files using a list as shown below: + +``` + steps: + - prompt: "Provide a concise, descriptive alt text for this PNG image." + flags: + file: + - "docs/images/apps-script-screenshot-01.png" + - "docs/images/docs-agent-ui-screenshot-01.png" + - "docs/images/docs-agent-embeddings-01.png" +``` + ### A step that reads all files in a directory The `allfiles` flag reads all the files in the specified directory @@ -206,6 +249,29 @@ and accepts custom input: When this step is run, the `` string will be replaced with the value provided in the `--custom_input` field at runtime. +### A step that reads a list of file names from an input file + +Similar to the `perfile` flag, the `list_file` flag reads an input +file that contains a list of filenames and applies the prompt to +each file in the list: + +``` + steps: + - prompt: "Write an alt text string for this image." + flags: + list_file: "out/mylist.txt" +``` + +where the `out/mylist.txt` file contains a list of file names in +plain text as shown below: + +```none +$ cat out/mylist.txt +docs/images/apps-script-screenshot-01.png +docs/images/docs-agent-ui-screenshot-01.png +docs/images/docs-agent-embeddings-01.png +``` + ### A step with the name field A step that runs the `helpme` command and the `name` field @@ -246,3 +312,4 @@ Using the `tellme` command requires **a vector database setup**. [model-code]: https://ai.google.dev/gemini-api/docs/models/gemini [tasks-dir]: ../tasks +[scripts-dir]: ../scripts diff --git a/examples/gemini/python/docs-agent/docs/whats-new.md b/examples/gemini/python/docs-agent/docs/whats-new.md index 7794324ac..ab4231b67 100644 --- a/examples/gemini/python/docs-agent/docs/whats-new.md +++ b/examples/gemini/python/docs-agent/docs/whats-new.md @@ -1,5 +1,16 @@ # What's new in Docs Agent + +## April 2025 + +* **Milestone: Introduced Tool Usage through MCP** +* Added the new `agent tools` CLI command, enabling interaction with the model + using external tools. 
+* Leverages the Model Context Protocol (MCP) for tool discovery and execution. + Define MCP servers in `config.yaml`. +* This allows the agent to use configured tools. +* Manages tools through the `ToolManager` and `MCPService`. + ## April 2024 * **Focus: Feature enhancements and usability improvements** diff --git a/examples/gemini/python/docs-agent/docs_agent/agents/docs_agent.py b/examples/gemini/python/docs-agent/docs_agent/agents/docs_agent.py index cc9fe2fef..e838edb12 100644 --- a/examples/gemini/python/docs-agent/docs_agent/agents/docs_agent.py +++ b/examples/gemini/python/docs-agent/docs_agent/agents/docs_agent.py @@ -17,25 +17,26 @@ """Docs Agent""" import typing -import os, pathlib - +from typing import List, Optional from absl import logging import google.api_core -import google.ai.generativelanguage as glm -from chromadb.utils import embedding_functions - -from docs_agent.storage.chroma import ChromaEnhanced - -from docs_agent.models.google_genai import Gemini from docs_agent.utilities.config import ProductConfig, Models from docs_agent.preprocess.splitters import markdown_splitter -from docs_agent.preprocess.splitters.markdown_splitter import Section as Section -from docs_agent.postprocess.docs_retriever import SectionDistance as SectionDistance from docs_agent.postprocess.docs_retriever import ( SectionProbability as SectionProbability, + query_vector_store_to_build, ) +from docs_agent.models.base import GenerativeLanguageModel +from docs_agent.models.llm import GenerativeLanguageModelFactory +from docs_agent.storage.rag import RAGFactory, return_collection_name +from docs_agent.storage.base import RAG + +from docs_agent.models.base import AQAModel +from docs_agent.models.aqa import AQAModelFactory + +from docs_agent.models.tools.tool_manager import ToolManager class DocsAgent: @@ -50,103 +51,126 @@ def __init__( ): # Models settings self.config = config - self.language_model = str(self.config.models.language_model) - self.embedding_model = 
str(self.config.models.embedding_model) + self.language_model_name = str(self.config.models.language_model) + self.embedding_model_name = str(self.config.models.embedding_model) self.api_endpoint = str(self.config.models.api_endpoint) - - # Initialize the default Gemini model. - if self.language_model.startswith("models/gemini"): - self.gemini = Gemini( - models_config=config.models, conditions=config.conditions + self.tool_manager: Optional[ToolManager] = None + if self.config.mcp_servers: + try: + self.tool_manager = ToolManager(config=self.config) + if self.tool_manager.tool_services: + logging.info( + f"ToolManager initialized successfully with {len(self.tool_manager.tool_services)} tool service instance(s)." + ) + else: + logging.warning( + "ToolManager initialized, but failed to set up any tool services." + ) + except Exception as e: + logging.error(f"Failed to instantiate ToolManager: {e}") + self.tool_manager = None + else: + logging.info("ToolManager not initialized: No MCP servers configured.") + self.language_model: GenerativeLanguageModel = ( + GenerativeLanguageModelFactory.create_model( + self.language_model_name, + models_config=config.models, + conditions=config.conditions, ) - self.context_model = self.language_model - + ) + self.context_model = self.language_model_name + if self.tool_manager and not hasattr( + self.language_model, "generate_content_async" + ): + logging.error( + f"Configured language model {self.language_model_name} does not support async generation required for tools. Disabling ToolManager." 
+ ) + self.tool_manager = None # Use the new chroma db for all queries # Should make a function for this or clean this behavior if init_chroma: - for item in self.config.db_configs: - if "chroma" in item.db_type: - self.vector_db_dir = item.vector_db_dir - self.collection_name = item.collection_name - self.chroma = ChromaEnhanced(self.vector_db_dir) - logging.info( - "Using the local vector database created at %s", self.vector_db_dir - ) - self.collection = self.chroma.get_collection( - self.collection_name, - embedding_model=self.embedding_model, - embedding_function=embedding_function_gemini_retrieval( - self.config.models.api_key, self.embedding_model - ), - ) + self.rag: RAG = RAGFactory.create_rag(product_config=self.config) + collection_name = return_collection_name(product_config=self.config) + logging.info(f"Getting Chroma collection: {collection_name}") + try: + # Store the collection object directly on the instance + self.collection = self.rag.get_collection(collection_name) + logging.info(f"Successfully retrieved collection '{collection_name}'.") + except Exception as e: + logging.error( + f"Failed to get Chroma collection '{collection_name}': {e}" + ) + raise + else: + self.rag = None + self.collection = None # AQA model settings + self.aqa_model = None if init_semantic: - # Except in "full" and "pro" modes, the semantic retriever option requires - # the AQA model. If not, exit the program. + # Except in "full" and "pro" modes, the semantic retriever option + # requires the AQA model. If not, exit the program. if ( - self.config.app_mode != "full" - and self.config.app_mode != "widget-pro" + self.config.app_mode not in ("full", "widget-pro") and self.config.db_type == "google_semantic_retriever" + and self.language_model_name != "aqa" ): - if self.language_model != "models/aqa": - logging.error( - "The db_type `google_semnatic_retriever` option" - + " requires the AQA model (`models/aqa`)." 
- ) - exit(1) + logging.error( + "The db_type `google_semantic_retriever` option" + " requires the AQA model (`aqa`)." + ) + exit(1) # If the AQA model is selected or the web app is on "full" and "pro" modes. - if ( - self.language_model == "models/aqa" - or self.config.app_mode == "full" - or self.config.app_mode == "widget-pro" + if self.language_model_name == "aqa" or self.config.app_mode in ( + "full", + "widget-pro", ): - # AQA model setup - self.generative_service_client = glm.GenerativeServiceClient() - self.retriever_service_client = glm.RetrieverServiceClient() - self.permission_service_client = glm.PermissionServiceClient() - # Start a Gemini model for other tasks - self.context_model = "models/gemini-pro" + self.aqa_model: AQAModel = AQAModelFactory.create_model() + self.context_model = "gemini-pro" gemini_model_config = Models( language_model=self.context_model, - embedding_model=self.embedding_model, + embedding_model=self.embedding_model_name, api_endpoint=self.api_endpoint, ) - self.gemini = Gemini( - models_config=gemini_model_config, conditions=config.conditions + self.language_model = GenerativeLanguageModelFactory.create_model( + self.context_model, + models_config=gemini_model_config, + conditions=config.conditions, ) # If semantic retriever is selected as the main database. if self.config.db_type == "google_semantic_retriever": for item in self.config.db_configs: if "google_semantic_retriever" in item.db_type: self.corpus_name = item.corpus_name - if item.corpus_display: - self.corpus_display = item.corpus_display - else: - self.corpus_display = ( - self.config.product_name + " documentation" - ) + self.corpus_display = item.corpus_display or ( + self.config.product_name + " documentation" + ) self.aqa_response_buffer = "" - - # Always initialize the Gemini 1.0 pro model for other tasks. + else: + self.aqa_model = None + # Always initialize the Gemini 1.5 pro model for other tasks. 
gemini_pro_model_config = Models( - language_model="models/gemini-pro", - embedding_model=self.embedding_model, + language_model="gemini-1.5-pro", + embedding_model=self.embedding_model_name, api_endpoint=self.api_endpoint, ) - self.gemini_pro = Gemini( - models_config=gemini_pro_model_config, conditions=config.conditions + self.gemini_pro = GenerativeLanguageModelFactory.create_model( + "gemini-1.5-pro", + models_config=gemini_pro_model_config, + conditions=config.conditions, ) - if self.config.app_mode == "full" or self.config.app_mode == "widget-pro": + if self.config.app_mode in ("full", "widget-pro"): # Initialize the Gemini 1.5 model for generating main responses. gemini_15_model_config = Models( - language_model=self.language_model, - embedding_model=self.embedding_model, + language_model=self.language_model_name, + embedding_model=self.embedding_model_name, api_endpoint=self.api_endpoint, ) - self.gemini_15 = Gemini( - models_config=gemini_15_model_config, conditions=config.conditions + self.gemini_15 = GenerativeLanguageModelFactory.create_model( + self.language_model_name, + models_config=gemini_15_model_config, + conditions=config.conditions, ) else: self.gemini_15 = self.gemini_pro @@ -158,7 +182,7 @@ def ask_content_model_with_context(self, context, question): if self.config.log_level == "VERBOSE": self.print_the_prompt(new_prompt) try: - response = self.gemini.generate_content(new_prompt) + response = self.language_model.generate_content(new_prompt) except google.api_core.exceptions.InvalidArgument: return self.config.conditions.model_error_message # for chunk in response: @@ -174,143 +198,139 @@ def ask_aqa_model_using_local_vector_store( results_num: int = 5, answer_style: str = "VERBOSE", ): - user_query_content = glm.Content(parts=[glm.Part(text=question)]) + """ + Use this method for talking to Gemini's AQA model using inline passages. + + Args: + question (str): The user's question. 
+ results_num (int, optional): The number of results to retrieve from the vector store. Defaults to 5. + answer_style (str, optional): The style of the answer. Can be "VERBOSE", "ABSTRACTIVE", or "EXTRACTIVE". Defaults to "VERBOSE". + + Returns: + tuple: A tuple containing the answer text and a list of SectionProbability objects. + Returns a model error message and an empty list if the model fails. + """ verbose_prompt = "Question: " + question + "\n" - # Retrieves from chroma, using up to 30k tokens - max gemini model tokens - chroma_search_result, final_context = self.query_vector_store_to_build( + # Retrieves from chroma, using up to 30k tokens + if not self.rag: + logging.error("Chroma collection not initialized.") + return "Chroma collection not initialized.", [] + if not self.aqa_model: + logging.error( + "AQA model is not initialized. Cannot generate answer using local vector store." + ) + return ( + "AQA model is not initialized. Cannot generate answer using local vector store.", + [], + ) + chroma_search_result, final_context = self.rag.query_vector_store_to_build( question=question, token_limit=30000, results_num=results_num, max_sources=results_num, ) - # Create the grounding inline passages - grounding_passages = glm.GroundingPassages() - i = 0 - aqa_search_result = [] + + # Create list of grounding passages texts + grounding_passages_texts = [] for item in chroma_search_result: returned_context = item.section.content - new_passage = glm.Content(parts=[glm.Part(text=returned_context)]) - index_id = str("{:03d}".format(i + 1)) - i += 1 - grounding_passages.passages.append( - glm.GroundingPassage(content=new_passage, id=index_id) - ) - verbose_prompt += "\nID: " + index_id + "\n" + returned_context + "\n" - req = glm.GenerateAnswerRequest( - model="models/aqa", - contents=[user_query_content], - inline_passages=grounding_passages, - answer_style=answer_style, + grounding_passages_texts.append(returned_context) + verbose_prompt += "\nID: \n" + 
returned_context + "\n" + + answer_text, aqa_search_result_initial = self.aqa_model.generate_answer( + question, grounding_passages_texts, answer_style ) - aqa_response = self.generative_service_client.generate_answer(req) - self.aqa_response_buffer = aqa_response - for item in chroma_search_result: - # Builds an object with sections + probability - aqa_search_result.append( - SectionProbability( - section=item.section, - probability=aqa_response.answerable_probability, - ) - ) - if self.config.log_level == "VERBOSE": - self.print_the_prompt(verbose_prompt) - elif self.config.log_level == "DEBUG": - self.print_the_prompt(verbose_prompt) - print(aqa_response) - try: - return aqa_response.answer.content.parts[0].text, aqa_search_result - except: - self.aqa_response_buffer = "" - return self.config.conditions.model_error_message, aqa_search_result - # Get the save response of the AQA model - def get_saved_aqa_response_json(self): - return self.aqa_response_buffer + # Map the AQA results back to SectionProbability objects. 
+ aqa_search_result = [] + if answer_text: + for raw_result in aqa_search_result_initial: + for item in chroma_search_result: + if raw_result["text"] == item.section.content: + aqa_search_result.append( + SectionProbability( + section=item.section, + probability=raw_result["probability"], + ) + ) - # Retrieve the metadata dictionary from an AQA response grounding attribution entry - def get_aqa_response_metadata(self, aqa_response_item): - try: - chunk_resource_name = ( - aqa_response_item.source_id.semantic_retriever_chunk.chunk - ) - get_chunk_response = self.retriever_service_client.get_chunk( - name=chunk_resource_name - ) - metadata = get_chunk_response.custom_metadata - final_metadata = {} - for m in metadata: - if m.string_value: - value = m.string_value - elif m.numeric_value: - value = m.numeric_value - else: - value = "" - final_metadata[m.key] = value - except: - final_metadata = {} - return final_metadata + if self.config.log_level in ("VERBOSE", "DEBUG"): + self.print_the_prompt(verbose_prompt) + if self.config.log_level == "DEBUG": + print(self.aqa_model.get_saved_aqa_response_json()) + + if not answer_text: + return self.config.conditions.model_error_message, [] + return answer_text, aqa_search_result # Use this method for talking to Gemini's AQA model using a corpus # Answer style can be "VERBOSE" or ABSTRACTIVE, EXTRACTIVE def ask_aqa_model_using_corpora( self, question, corpus_name: str = "None", answer_style: str = "VERBOSE" ): - search_result = [] + """ + Use this method for talking to Gemini's AQA model using a corpus. + + Args: + question (str): The user's question. + corpus_name (str, optional): The name of the corpus to use. Defaults to "None". + answer_style (str, optional): The style of the answer. Can be "VERBOSE", "ABSTRACTIVE", or "EXTRACTIVE". Defaults to "VERBOSE". + + Returns: + tuple: A tuple containing the answer text and a list of SectionProbability objects. + Returns a model error message and an empty list if the model fails. 
+ """ + if not self.aqa_model: + logging.error( + "AQA model is not initialized. Cannot generate answer using corpora." + ) + return ( + "AQA model is not initialized. Cannot generate answer using corpora.", + [], + ) if corpus_name == "None": corpus_name = self.corpus_name - # Prepare parameters for the AQA model - user_question_content = glm.Content( - parts=[glm.Part(text=question)], role="user" - ) - # Settings to retrieve grounding content from semantic retriever - retriever_config = glm.SemanticRetrieverConfig( - source=corpus_name, query=user_question_content - ) - - # Ask the AQA model. - req = glm.GenerateAnswerRequest( - model="models/aqa", - contents=[user_question_content], - semantic_retriever=retriever_config, - answer_style=answer_style, + ( + answer_text, + aqa_search_result_raw, + ) = self.aqa_model.generate_answer_with_corpora( + question, corpus_name, answer_style ) - try: - aqa_response = self.generative_service_client.generate_answer(req) - self.aqa_response_buffer = aqa_response - except: - self.aqa_response_buffer = "" - return self.config.conditions.model_error_message, search_result - + search_result = [] if self.config.log_level == "VERBOSE": verbose_prompt = "[question]\n" + question + "\n" verbose_prompt += ( "\n[answerable_probability]\n" - + str(aqa_response.answerable_probability) + + str( + self.aqa_model.get_saved_aqa_response_json().answerable_probability + ) + "\n" ) - for attribution in aqa_response.answer.grounding_attributions: + for ( + attribution + ) in ( + self.aqa_model.get_saved_aqa_response_json().answer.grounding_attributions + ): verbose_prompt += "\n[grounding_attributions]\n" + str( attribution.content.parts[0].text ) self.print_the_prompt(verbose_prompt) elif self.config.log_level == "DEBUG": - print(aqa_response) - try: - for item in aqa_response.answer.grounding_attributions: - metadata = self.get_aqa_response_metadata(item) - for part in item.content.parts: - metadata["content"] = part.text - section = 
markdown_splitter.DictionarytoSection(metadata) - search_result.append( - SectionProbability( - section=section, probability=aqa_response.answerable_probability - ) + print(self.aqa_model.get_saved_aqa_response_json()) + + if not answer_text: + return self.config.conditions.model_error_message, [] + + # Convert raw results to SectionProbability objects + for raw_result in aqa_search_result_raw: + section = markdown_splitter.DictionarytoSection(raw_result["metadata"]) + search_result.append( + SectionProbability( + section=section, probability=raw_result["probability"] ) - # Return the aqa_response object but also the actual text response - return aqa_response.answer.content.parts[0].text, search_result - except: - return self.config.conditions.model_error_message, search_result + ) + return answer_text, search_result def ask_aqa_model(self, question): response = "" @@ -320,31 +340,17 @@ def ask_aqa_model(self, question): response = self.ask_aqa_model_using_local_vector_store(question) return response - # Retrieve and return chunks that are most relevant to the input question. - def retrieve_chunks_from_corpus(self, question, corpus_name: str = "None"): - if corpus_name == "None": - corpus_name = self.corpus_name - user_query = question - results_count = 5 - # Quick fix: This was needed to allow the method to be called - # even when the model is not set to `models/aqa`. 
- retriever_service_client = glm.RetrieverServiceClient() - # Make the request - request = glm.QueryCorpusRequest( - name=corpus_name, query=user_query, results_count=results_count - ) - query_corpus_response = retriever_service_client.query_corpus(request) - return query_corpus_response - - # Use this method for asking a Gemini content model for fact-checking - def ask_content_model_to_fact_check(self, context, prev_response): - question = self.config.conditions.fact_check_question + "\n\nText: " - question += prev_response - return self.ask_content_model_with_context(context, question) - # Query the local Chroma vector database using the user question def query_vector_store(self, question, num_returns: int = 5): - return self.collection.query(question, num_returns) + if not self.rag and not self.collection: + logging.error("Chroma collection not initialized.") + return None + if not hasattr(self.collection, "query"): + raise AttributeError( + "Passed collection object does not have a 'query' method." 
+ ) + else: + return self.collection.query(question, num_returns) # Add specific instruction as a prefix to the context def add_instruction_to_context(self, context): @@ -352,37 +358,6 @@ def add_instruction_to_context(self, context): new_context += self.config.conditions.condition_text + "\n\n" + context return new_context - # Add custom instruction as a prefix to the context - def add_custom_instruction_to_context(self, condition, context): - new_context = "" - new_context += condition + "\n\n" + context - return new_context - - # Return true if the aqa model used in this Docs Agent setup - def check_if_aqa_is_used(self): - if ( - self.config.models.language_model == "models/aqa" - or self.config.app_mode == "full" - or self.config.app_mode == "widget-pro" - ): - return True - else: - return False - - # Return the chroma collection name - def return_chroma_collection(self): - try: - return self.collection_name - except: - return None - - # Return the vector db name - def return_vector_db_dir(self): - try: - return self.vector_db_dir - except: - return None - # Print the prompt on the terminal for debugging def print_the_prompt(self, prompt): print("#########################################") @@ -394,123 +369,8 @@ def print_the_prompt(self, prompt): print("#########################################") print("\n") - # Query the local Chroma vector database. 
Starts with the number of results - # from results - # Results_num is the initial result set based on distance to the question - # Max_sources is the number of those results_num to use to build a final - # context page - def query_vector_store_to_build( - self, - question: str, - token_limit: float = 30000, - results_num: int = 10, - max_sources: int = 4, - ): - # Looks for contexts related to a question that is limited to an int - # Returns a list - contexts_query = self.collection.query(question, results_num) - # This returns a list of results - build_context = contexts_query.returnDBObjList() - # Use the token limit and distances to assign a token limit for each - # page. For time being split evenly into top max_sources - token_limit_temp = token_limit / max_sources - token_limit_per_source = [] - i = 0 - for i in range(max_sources): - token_limit_per_source.append(token_limit_temp) - same_document = "" - same_metadata = "" - # Each item is a chunk result along with all of it's metadata - # We can use metadata to identify if one of these chunks comes from the - # same page, potentially indicating a better match, so more token allocation - # You can see these objects contents with .content, .document, .distance, .metadata - plain_content = "" - search_result = [] - same_pages = [] - # For each result make a SectionDistance object that includes the - # Section along with it's distance from the question - for item in build_context: - # Check if this page was previously added as a source, to avoid - # duplicate count. 
These signals should be used to give a page higher token limits - # Make a page based on the section_id (this is where the search - # found a match) - section = SectionDistance( - section=Section( - id=item.metadata.get("section_id", None), - name_id=item.metadata.get("name_id", None), - page_title=item.metadata.get("page_title", None), - section_title=item.metadata.get("section_title", None), - level=item.metadata.get("level", None), - previous_id=item.metadata.get("previous_id", None), - parent_tree=item.metadata.get("parent_tree", None), - token_count=item.metadata.get("token_estimate", None), - content=item.document, - md_hash=item.metadata.get("md_hash", None), - url=item.metadata.get("url", None), - origin_uuid=item.metadata.get("origin_uuid", None), - ), - distance=item.distance, - ) - search_result.append(section) - # From this you can run queries to find all chunks from the same page - # since they all share the same origin_uuid which is a hash of the - # original source file name - # Limits the number of results to go through - final_page_content = [] - final_page_token = [] - plain_token = 0 - sources = [] - final_pages = [] - # Quick fix: Ensure max_sources is not larger than the array size of search_result. 
- this_range = len(search_result) - if this_range > max_sources: - this_range = max_sources - for i in range(this_range): - # The current section that is being built - # eval turns str representation of array into an array - curr_section_id = search_result[i].section.name_id - curr_parent_tree = eval(search_result[i].section.parent_tree) - # Assigned token limit for this position in the list - page_token_limit = token_limit_per_source[i] - # Returns a FullPage which is just a list of Section - same_page = self.collection.getPageOriginUUIDList( - origin_uuid=search_result[i].section.origin_uuid - ) - same_pages.append(same_page) - # Use all sections in experimental, only self when "normal" - if self.config.docs_agent_config == "experimental": - test_page = same_page.buildSections( - section_id=search_result[i].section.id, - selfSection=True, - children=True, - parent=True, - siblings=True, - token_limit=token_limit_per_source[i], - ) - else: - test_page = same_page.buildSections( - section_id=search_result[i].section.id, - selfSection=True, - children=False, - parent=False, - siblings=False, - token_limit=token_limit_per_source[i], - ) - final_pages.append(test_page) - # Each item here is a FullPage corresponding to the source - final_context = "" - for item in final_pages: - for source in item.section_list: - final_context += source.content + "\n\n" - final_context = final_context.strip() - # Result contains the search result of Section of the initial hits - # final_pages could be returned to get the full Section for displaying - # context with metadata - return search_result, final_context - # Use this method for talking to a Gemini content model # Optionally provide a prompt, if not use the one from config.yaml - # If prompt is "fact_checker" it will use the fact_check_question from # config.yaml for the prompt def ask_content_model_with_context_prompt( self, @@ -521,8 +381,6 @@ def ask_content_model_with_context_prompt( ): if prompt == None: prompt = 
self.config.conditions.condition_text - elif prompt == "fact_checker": - prompt = self.config.conditions.fact_check_question new_prompt = f"{prompt}\n\nContext:\n{context}\nQuestion:\n{question}" # Print the prompt for debugging if the log level is VERBOSE. if self.config.log_level == "VERBOSE": @@ -538,7 +396,7 @@ def ask_content_model_with_context_prompt( contents=new_prompt, log_level=self.config.log_level ) else: - response = self.gemini.generate_content( + response = self.language_model.generate_content( contents=new_prompt, log_level=self.config.log_level ) except Exception as e: @@ -547,103 +405,35 @@ def ask_content_model_with_context_prompt( return self.config.conditions.model_error_message, new_prompt return response, new_prompt - # Use this method for talking to a Gemini content model - # Provide a prompt, followed by the content of the file - # This isn't in use yet, but can be used to give an LLM a full or partial file - def ask_content_model_to_use_file(self, prompt: str, file: str): - new_prompt = prompt + file - # Print the prompt for debugging if the log level is VERBOSE. - if self.config.log_level == "VERBOSE": - self.print_the_prompt(new_prompt) - try: - response = self.gemini.generate_content(contents=new_prompt) - except google.api_core.exceptions.InvalidArgument: - return self.config.conditions.model_error_message - return response - - # Use this method for asking a Gemini content model for fact-checking. 
- # This uses ask_content_model_with_context_prompt w - def ask_content_model_to_fact_check_prompt(self, context: str, prev_response: str): - question = self.config.conditions.fact_check_question + "\n\nText: " - question += prev_response - return self.ask_content_model_with_context_prompt( - context=context, question=question, prompt="" - ) - - # Generate an embedding given text input - def generate_embedding(self, text, task_type: str = "SEMANTIC_SIMILARITY"): - return self.gemini.embed(text, task_type)[0] - - # Generate a response to an image - def ask_model_about_image(self, prompt: str, image): - if not prompt: - prompt = f"Describe this image:" - if self.context_model.startswith("models/gemini-1.5"): - try: - # Adding prompt in the beginning allows long contextual - # information to be added. - response = self.gemini.generate_content([prompt, image]) - except google.api_core.exceptions.InvalidArgument: - return self.config.conditions.model_error_message - else: - logging.error(f"The {self.context_model} can't read an image.") - response = None - exit(1) - return response - - # Generate a response to audio - def ask_model_about_audio(self, prompt: str, audio): - if not prompt: - prompt = f"Describe this audio clip:" - audio_size = os.path.getsize(audio) - # Limit is 20MB - if audio_size > 20000000: - logging.error(f"The audio clip {audio} is too large: {audio_size} bytes.") - exit(1) - # Get the mime type of the audio file and trim the . from the extension. 
- mime_type = "audio/" + pathlib.Path(audio).suffix[:1] - audio_clip = { - "mime_type": mime_type, - "data": pathlib.Path(audio).read_bytes() - } - if self.context_model.startswith("models/gemini-1.5"): - try: - response = self.gemini.generate_content([prompt, audio_clip]) - except google.api_core.exceptions.InvalidArgument: - return self.config.conditions.model_error_message - else: - logging.error(f"The {self.context_model} can't read an audio clip.") - exit(1) - return response - - # Generate a response to video - def ask_model_about_video(self, prompt: str, video): - if not prompt: - prompt = f"Describe this video clip:" - video_size = os.path.getsize(video) - # Limit is 2GB - if video_size > 2147483648: - logging.error(f"The video clip {video} is too large: {video_size} bytes.") - exit(1) - request_options = { - "timeout": 600 - } - mime_type = "video/" + pathlib.Path(video).suffix[:1] - video_clip_uploaded =self.gemini.upload_file(video) - video_clip = self.gemini.get_file(video_clip_uploaded) - if self.context_model.startswith("models/gemini-1.5"): - try: - response = self.gemini.generate_content([prompt, video_clip], - request_options=request_options) - except google.api_core.exceptions.InvalidArgument: - return self.config.conditions.model_error_message + async def process_prompt_with_tools( + self, + prompt: str, + verbose: bool = False, + ): + """ + Processes a prompt using tools. Returns an error if tools aren't + configured. + + Args: + prompt (str): The user's prompt. + verbose (bool): Whether to enable verbose logging. + + Returns: + str: The generated response or an error message. 
+        """
+        if self.tool_manager:
+            if hasattr(self.language_model, "generate_content_async"):
+                logging.info("Processing prompt with tools using ToolManager...")
+                return await self.tool_manager.process_prompt_with_tools(
+                    prompt=prompt,
+                    language_model=self.language_model,
+                    verbose=verbose,
+                )
+            else:
+                error_msg = f"Error: ToolManager is configured, but the language model '{self.language_model_name}' does not support asynchronous content generation."
+                logging.error(error_msg)
+                return error_msg
         else:
-            logging.error(f"The {self.context_model} can't see video clips.")
-            exit(1)
-        return response
-
-# Function to give an embedding function for gemini using an API key
-def embedding_function_gemini_retrieval(api_key, embedding_model: str):
-    return embedding_functions.GoogleGenerativeAiEmbeddingFunction(
-        api_key=api_key, model_name=embedding_model, task_type="RETRIEVAL_QUERY"
-    )
diff --git a/examples/gemini/python/docs-agent/docs_agent/benchmarks/run_benchmark_tests.py b/examples/gemini/python/docs-agent/docs_agent/benchmarks/run_benchmark_tests.py
index fe15f9714..7baf5657c 100644
--- a/examples/gemini/python/docs-agent/docs_agent/benchmarks/run_benchmark_tests.py
+++ b/examples/gemini/python/docs-agent/docs_agent/benchmarks/run_benchmark_tests.py
@@ -26,10 +26,8 @@
 from rich.markdown import Markdown
 from rich.panel import Panel
 
-from docs_agent.storage.chroma import Format
 from docs_agent.agents.docs_agent import DocsAgent
 from docs_agent.utilities import config
-from docs_agent.utilities.config import ProductConfig
 
 
 # A function that asks the questin to the AI model using the RAG technique.
@@ -37,7 +35,7 @@ def ask_model(question: str, docs_agent: DocsAgent): results_num = 5 if "gemini" in docs_agent.config.models.language_model: # print("Asking a Gemini model") - (search_result, final_context) = docs_agent.query_vector_store_to_build( + (search_result, final_context) = docs_agent.rag.query_vector_store_to_build( question=question, token_limit=30000, results_num=results_num, @@ -147,7 +145,7 @@ def run_benchmarks(): vprint("################") vprint("Input text:") vprint(target_answer) - embedding_01 = docs_agent.generate_embedding(target_answer) + embedding_01 = docs_agent.language_model.embed(content=target_answer, task_type="SEMANTIC_SIMILARITY")[0] vprint("") vprint("Embedding:") vprint(str(embedding_01)) @@ -165,7 +163,7 @@ def run_benchmarks(): vprint("################") vprint("Input text:") vprint(response) - embedding_02 = docs_agent.generate_embedding(response) + embedding_02 = docs_agent.language_model.embed(content=response, task_type="SEMANTIC_SIMILARITY")[0] vprint("") vprint("Embedding:") vprint(str(embedding_02)) diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/README.md b/examples/gemini/python/docs-agent/docs_agent/interfaces/README.md index 7b5b8e3f7..015ffd571 100644 --- a/examples/gemini/python/docs-agent/docs_agent/interfaces/README.md +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/README.md @@ -93,6 +93,18 @@ from your `$HOME` directory. poetry install ``` +4. Set up the Poetry environment: + + ``` + poetry env activate + ``` + +5. Install the `shell` plugin: + + ``` + poetry self add poetry-plugin-shell + ``` + ## 4. Try the Docs Agent CLI 1. Enter the `poetry shell` environment: @@ -138,7 +150,9 @@ For more details on these commands, see the [Interacting with language models][cli-reference-helpme] section in the CLI reference page. -## Appendices +For creating a new task, see [Create a new Docs Agent task][create-a-new-task]. 
+ +## Appendix ### Authorize credentials for Docs Agent @@ -372,3 +386,4 @@ To set up this `helpme` command in your terminal, do the following: [genai-doc-site]: https://ai.google.dev/docs/gemini_api_overview [cli-reference-helpme]: ../../docs/cli-reference.md#interacting-with-language-models [docs-agent-tasks]: ../../tasks +[create-a-new-task]: ../../docs/create-a-new-task.md diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/chatbot/chatui.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/chatbot/chatui.py index 73e7378f0..95f790eaf 100644 --- a/examples/gemini/python/docs-agent/docs_agent/interfaces/chatbot/chatui.py +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/chatbot/chatui.py @@ -35,10 +35,6 @@ md_to_html, ) from docs_agent.utilities import config -from docs_agent.preprocess.splitters import markdown_splitter -from docs_agent.postprocess.docs_retriever import SectionProbability - -from docs_agent.storage.chroma import Format from docs_agent.agents.docs_agent import DocsAgent from docs_agent.memory.logging import ( @@ -334,14 +330,14 @@ def ask_model(question, agent, template: str = "chatui/index.html"): # Extract context from this AQA model's response. final_context = extract_context_from_search_result(search_result) # Save this AQA model's response. - aqa_response_json = docs_agent.get_saved_aqa_response_json() + aqa_response_json = docs_agent.aqa_model.get_saved_aqa_response_json() # Convert this AQA model's response to HTML for better rendering. if aqa_response_json: aqa_response_in_html = json.dumps( type(aqa_response_json).to_dict(aqa_response_json), indent=2 ) else: - # For the `gemini-*` model, alway use the Chroma database. + # For the `gemini-*` model, always use the Chroma database. 
if docs_agent.config.docs_agent_config == "experimental": results_num = 10 new_question_count = 5 @@ -356,14 +352,24 @@ def ask_model(question, agent, template: str = "chatui/index.html"): # Issue if max_sources > results_num, so leave the same for now else: this_token_limit = 30000 - if docs_agent.config.models.language_model.startswith("models/gemini-1.5"): + if docs_agent.config.models.language_model.startswith("gemini-1.5"): this_token_limit = 50000 - search_result, final_context = docs_agent.query_vector_store_to_build( - question=question, - token_limit=this_token_limit, - results_num=results_num, - max_sources=results_num, - ) + if not docs_agent.rag: + logging.error("No initialized Chroma collection.") + search_result = [] + final_context = "Error: Could not retrieve context." + else: + try: + search_result, final_context = docs_agent.rag.query_vector_store_to_build( + question=question, + token_limit=this_token_limit, + results_num=results_num, + max_sources=results_num, + ) + except Exception as e: + logging.error(f"Error retrieving content from Chroma: {e}") + search_result = [] + final_context = "Error: Could not retrieve context." 
try: response, full_prompt = docs_agent.ask_content_model_with_context_prompt( context=final_context, question=question @@ -374,8 +380,8 @@ def ask_model(question, agent, template: str = "chatui/index.html"): ### Check the AQA model's answerable_probability field probability = "None" - if docs_agent.check_if_aqa_is_used(): - aqa_response = docs_agent.get_saved_aqa_response_json() + if docs_agent.aqa_model: + aqa_response = docs_agent.aqa_model.get_saved_aqa_response_json() try: probability = aqa_response.answerable_probability except: @@ -430,7 +436,7 @@ def ask_model(question, agent, template: str = "chatui/index.html"): context=final_context, question=new_question, prompt=new_condition, - model="gemini-pro", + model="gemini-1.5", ) # Clean up the response to a proper html list related_questions = parse_related_questions_response_to_html_list( @@ -540,7 +546,11 @@ def ask_model_with_sources(question, agent): docs_agent = agent full_prompt = "" search_result, context = docs_agent.query_vector_store_to_build( - question=question, token_limit=30000, results_num=10, max_sources=10 + collection=docs_agent.collection, + docs_agent_config=docs_agent.config.docs_agent_config, + question=question, token_limit=30000, + results_num=10, + max_sources=10 ) context_with_instruction = docs_agent.add_instruction_to_context(context) if "gemini" in docs_agent.get_language_model_name(): diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli.py index b6ff095e2..97de6d07e 100644 --- a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli.py +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli.py @@ -24,11 +24,12 @@ from docs_agent.interfaces.cli.cli_helpme import cli_helpme from docs_agent.interfaces.cli.cli_tellme import cli_tellme from docs_agent.interfaces.cli.cli_posix import cli_posix +from docs_agent.interfaces.cli.cli_tools import cli_tools from 
docs_agent.interfaces.cli.cli_show_session import cli_show_session cli = click.CommandCollection( - sources=[cli_common, cli_admin, cli_runtask, cli_helpme, cli_tellme, cli_posix, cli_show_session], + sources=[cli_common, cli_admin, cli_runtask, cli_helpme, cli_tellme, cli_posix, cli_show_session, cli_tools], help="With Docs Agent, you can populate vector databases, manage online corpora, and interact with Google's Gemini models.", ) diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_admin.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_admin.py index 73560d07b..d347443d7 100644 --- a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_admin.py +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_admin.py @@ -17,24 +17,16 @@ """Docs Agent CLI client""" import click -import sys import typing -from docs_agent.utilities import config -from docs_agent.utilities.config import ReadDbConfigs from docs_agent.utilities.config import return_config_and_product -from docs_agent.utilities.helpers import ( - parallel_backup_dir, - return_pure_dir, - end_path_backslash, - start_path_no_backslash, - resolve_path, -) +from docs_agent.utilities.helpers import resolve_path from docs_agent.preprocess import files_to_plain_text as chunker from docs_agent.preprocess import populate_vector_database as populate_script from docs_agent.benchmarks import run_benchmark_tests as benchmarks from docs_agent.interfaces import chatbot as chatbot_flask from docs_agent.storage.google_semantic_retriever import SemanticRetriever -from docs_agent.storage.chroma import ChromaEnhanced +from docs_agent.storage.rag import RAGFactory +from docs_agent.storage.base import RAG from docs_agent.memory.logging import write_logs_to_csv_file from docs_agent.interfaces.cli.cli_common import common_options from docs_agent.interfaces.cli.cli_common import show_config @@ -338,7 +330,7 @@ def cleanup_dev( print(f"Corpus name: {db.corpus_name}") 
corpus_name = db.corpus_name if chroma_dir != "": - command = "rm -fr " + chroma_dir + command = "rm -fr " + resolve_path(chroma_dir) if click.confirm( f"\nDeleting the Chroma database {chroma_dir} ({command}).\nDo you want to continue?", abort=True, @@ -370,29 +362,14 @@ def backup_chroma( loaded_config, product_config = return_config_and_product( config_file=config_file, product=product ) - if input_chroma == None: - # Get first product + try: input_product = product_config.return_first() - if input_product.db_type == "chroma": - input_chroma = ReadDbConfigs(input_product.db_configs).return_chroma_db() - else: - click.echo( - f"Your product {input_product.product_name} is not configured for chroma." - ) - sys.exit(0) - if output_dir == None: - output_dir = parallel_backup_dir(input_chroma) - else: - pure_path = return_pure_dir(input_chroma) - output_dir = end_path_backslash(start_path_no_backslash(output_dir)) + pure_path - # Initialize chroma and then use backup function - chroma_db = ChromaEnhanced(chroma_dir=input_chroma) - final_output_dir = chroma_db.backup_chroma( - chroma_dir=input_chroma, output_dir=output_dir - ) - if final_output_dir: + if input_chroma == None: + input_chroma = input_product.db_configs[0].vector_db_dir + chroma_db: RAG = RAGFactory.create_rag(product_config=input_product) + final_output_dir = chroma_db.backup(output_dir=output_dir) click.echo(f"Successfully backed up {input_chroma} to {final_output_dir}.") - else: + except: click.echo(f"Can't backup chroma database specified: {input_chroma}") diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_helpme.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_helpme.py index ea602fb8e..2e3839340 100644 --- a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_helpme.py +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_helpme.py @@ -18,10 +18,12 @@ import click import typing -from docs_agent.utilities import config from 
docs_agent.utilities.config import ConfigFile -from docs_agent.utilities.config import return_config_and_product, get_project_path -from docs_agent.utilities.helpers import resolve_path +from docs_agent.utilities.config import return_config_and_product +from docs_agent.utilities.helpers import create_output_directory +from docs_agent.utilities.helpers import identify_file_type +from docs_agent.utilities.helpers import open_file +from docs_agent.utilities.helpers import resolve_and_ensure_path from docs_agent.interfaces import run_console as console from docs_agent.interfaces.cli.cli_common import common_options @@ -30,13 +32,11 @@ import string import re import time -import subprocess from pathlib import Path from rich.console import Console from rich.markdown import Markdown from rich.panel import Panel -from rich.text import Text from rich.style import Style @@ -64,6 +64,7 @@ def cli_helpme(ctx, config_file, product): @click.option( "--file", type=click.Path(), + multiple=True, help="Specify a file to be included as context.", ) @click.option( @@ -76,10 +77,20 @@ def cli_helpme(ctx, config_file, product): type=click.Path(), help="Specify a path where all files in the directory are used as context.", ) +@click.option( + "--list_file", + type=click.Path(), + help="Specify a path to a file that contains a list of input files.", +) @click.option( "--file_ext", help="Works with --perfile and --dir. Specify the file type to be selected. 
The default is set to use all files.", ) +@click.option( + "--repeat_until", + is_flag=True, + help="Repeat this step until conditions are met", +) @click.option( "--yaml", is_flag=True, @@ -151,7 +162,9 @@ def helpme( file: typing.Optional[str] = None, perfile: typing.Optional[str] = None, allfiles: typing.Optional[str] = None, + list_file: typing.Optional[str] = None, file_ext: typing.Optional[str] = None, + repeat_until: bool = False, yaml: bool = False, out: typing.Optional[str] = None, rag: bool = False, @@ -186,17 +199,28 @@ def helpme( # Get the language model. this_model = product_config.products[0].models.language_model + # Remove the "models/" prefix if it exists. models/ prefix is legacy + if this_model.startswith("models/"): + this_model = this_model.removeprefix("models/") # This feature is only available to the Gemini Pro models (not AQA). - if not this_model.startswith("models/gemini"): + if not this_model.startswith("gemini"): click.echo(f"File mode is not supported with this model: {this_model}") exit(1) - # This feature is only available to the Gemini 1.5 models. - if not this_model.startswith("models/gemini-1.5") and response_type != "text": - click.echo(f"x.enum and json only work on gemini-1.5 models. You are using: {this_model}") + # This feature is only available to the Gemini 1.5 or 2.0 models. + if ( + not (this_model.startswith("gemini-1.5") or this_model.startswith("gemini-2.0")) + and response_type != "text" + ): + click.echo( + f"x.enum and json only work on gemini-1.5 or gemini-2.0 models. " + + f"You are using: {this_model}" + ) exit(1) if response_type == "x.enum" and response_schema is None: - click.echo(f"You must specify a response_schema when using text/x.enum. Optional for json.") - exit(1) + click.echo( + f"You must specify a response_schema when using text/x.enum. Optional for json." + ) + exit(1) if response_type: product_config.products[0].models.response_type = response_type # Get the question string. 
@@ -210,40 +234,27 @@ def helpme( # help format the output of the final file. question_out_wrapper = ( f"The answer that you provide to the question below will be saved to a file " - + f"named {out}. Your response must only include what will go in this file. " - + f"Do your best to ensure that you provide a response that matches the format " - + f"of this file extension. For example .md indicates a Markdown file, .py " - + f"a Python file, and so on. Markdown files must always be in valid Markdown " - + f"format and begin with the # header (for instance, # ).\n\n" + + f"named {out}. Ensure that your response matches the format of this file " + + f"extension. For example, .md indicates a Markdown file, .py a Python file, " + + f"and so on. Markdown files must always be in valid Markdown format " + + f"and begin with the # header (for instance, # ).\n\n" ) - # Set output path to the agent_out directory. - if out is not None and out != "" and out != "None": - if out.startswith("~/"): - out = os.path.expanduser(out) - if out.startswith("/"): - base_out = os.path.dirname(out) - out = Path(out).name - # This includes paths like out.startswith("~") - else: - base_out = os.path.join(get_project_path(), "agent_out") - # Creates the output directory if it can't write, it will try home directory. 
-        try:
-            os.makedirs(base_out, exist_ok=True)
-        except:
-            base_out = os.path.expanduser("~/docs_agent")
-            base_out = os.path.join(base_out, "agent_out")
-            try:
-                os.makedirs(base_out, exist_ok=True)
-            except:
-                base_out = os.path.join("/tmp/docs_agent", "agent_out")
-                try:
-                    os.makedirs(base_out, exist_ok=True)
-                except:
-                    print(f"Failed to create the output directory: {base_out}")
-                    exit(1)
-        if base_out.endswith("/"):
-            base_out = base_out[:-1]
-        out = base_out + "/" + out
+    output_file_path = None
+    original_question = question
+    if out:
+        output_file_path = create_output_directory(out)
+        if not output_file_path:
+            click.echo("Error: Could not determine or create a valid output directory.")
+            exit(1)
+        out_filename_display = Path(output_file_path).name
+        question_out_wrapper = (
+            f"The answer that you provide to the question below will be saved to a file "
+            + f"named {out_filename_display}. Ensure that your response matches the format of this file "
+            + f"extension. For example, .md indicates a Markdown file, .py a Python file, "
+            + f"and so on. Markdown files must always be in valid Markdown format "
+            + f"and begin with the # header (for instance, # ).\n\n"
+        )
+        question = question_out_wrapper + question
     # Print the prompt for testing.
     if check:
@@ -260,7 +271,9 @@ def helpme(
        helpme_mode = "PER_FILE"
    elif allfiles and allfiles != "None":
        helpme_mode = "ALL_FILES"
-    elif file and file != "None":
+    elif list_file and list_file != "None":
+        helpme_mode = "LIST_FILE"
+    elif file and file != "None" and file != [None]:
        helpme_mode = "SINGLE_FILE"
    elif cont:
        helpme_mode = "PREVIOUS_EXCHANGES"
@@ -271,8 +284,6 @@ def helpme(
    # Select the mode.
    if helpme_mode == "PREVIOUS_EXCHANGES":
-        if out is not None and out != "" and out != "None":
-            question = question_out_wrapper + question
        # Continue mode, which uses the previous exchanges as the main context.
this_output = console.ask_model_with_file( question.strip(), @@ -299,12 +310,12 @@ def helpme( # Save this exchange in a YAML file. if yaml is True: - output_filename = "./responses.yaml" + yaml_file_path = create_output_directory("responses.yaml") # Prepare output to be saved in the YAML file. yaml_buffer = ( f" - question: {question}\n" + f" response: {this_output}\n" ) - with open(output_filename, "w", encoding="utf-8") as yaml_file: + with open(yaml_file_path, "w", encoding="utf-8") as yaml_file: yaml_file.write("logs:\n") yaml_file.write(yaml_buffer) yaml_file.close() @@ -312,121 +323,84 @@ def helpme( # Save the response to the `out` file. if out is not None and out != "" and out != "None": try: - with open(out, "w", encoding="utf-8") as out_file: + output_file_path = create_output_directory(out) + with open(output_file_path, "w", encoding="utf-8") as out_file: out_file.write(f"{this_output}\n") out_file.close() except: print(f"Failed to write the output to file: {out}") - elif helpme_mode == "SINGLE_FILE": - if out is not None and out != "" and out != "None": - question = question_out_wrapper + question - # Single file mode. - if file.startswith("~/"): - file = os.path.expanduser(file) - this_file = os.path.realpath(os.path.join(os.getcwd(), file)) - this_output = "" - - # if the `--cont` flag is set, include the previous exchanges as additional context. - context_file = None - if cont: - context_file = history_file - - this_output = console.ask_model_with_file( - question.strip(), - product_config, - file=this_file, - context_file=context_file, - rag=rag, - return_output=True, - ) - - # Render the response. - if use_panel: - ai_console.print("[Response]", style=console_style) - ai_console.print(Panel(Markdown(this_output, code_theme="manni"))) - else: + # Check for conditions if repeat_until is True. 
+ if repeat_until: print() - print(f"{this_output}") + print("The repeat_until flag is set.") print() + # print(f"{this_output}") + # print() + lines = this_output.splitlines() + # print(lines) + is_acceptable = False + is_path_found = False + yaml_lines = "" + for this_line in lines: + if this_line.startswith("- path:"): + print(this_line) + yaml_lines += this_line + "\n" + is_path_found = True + elif is_path_found and this_line.startswith(" response:"): + print(this_line) + yaml_lines += this_line + "\n" + is_acceptable = True + print() + if is_acceptable is True: + print("This yaml format is acceptable.") + try: + yaml_out_filename = "./agent_out/task_output.yaml" + with open(yaml_out_filename, "w", encoding="utf-8") as yaml_file: + yaml_file.write(f"{yaml_lines}") + yaml_file.close() + except: + print(f"Failed to write the output to file: {yaml_out_filename}") + else: + print("This yaml format is not acceptable!") + print() + return is_acceptable - # Read the file content to be included in the history file. - file_content = "" - if ( - this_file.endswith(".png") - or this_file.endswith(".jpg") - or this_file.endswith(".gif") - ): - file_content = "This is an image file.\n" - elif ( - this_file.endswith(".mp3") - or this_file.endswith(".wav") - or this_file.endswith(".ogg") - or this_file.endswith(".flac") - or this_file.endswith(".aac") - or this_file.endswith(".aiff") - or this_file.endswith(".mp4") - or this_file.endswith(".mov") - or this_file.endswith(".avi") - or this_file.endswith(".x-flv") - or this_file.endswith(".mpg") - or this_file.endswith(".webm") - or this_file.endswith(".wmv") - or this_file.endswith(".3gpp") - ): - file_content = "This is an audio file.\n" - else: - try: - with open(this_file, "r", encoding="utf-8") as target_file: - file_content = target_file.read() - target_file.close() - except: - print(f"[Error] This file cannot be opened: {this_file}\n") - exit(1) - - # If the `--new` flag is set, overwrite the history file. 
- write_mode = "a" - if new: - write_mode = "w" - # Record this exchange in the history file. - with open(history_file, write_mode, encoding="utf-8") as out_file: - out_file.write(f"QUESTION: {question}\n\n") - out_file.write(f"FILE NAME: {file}\n") - out_file.write(f"FILE CONTENT:\n\n{file_content}\n") - out_file.write(f"RESPONSE:\n\n{this_output}\n\n") - out_file.close() - - # Save this exchange in a YAML file. - if yaml is True: - output_filename = "./responses.yaml" - # Prepare output to be saved in the YAML file. - yaml_buffer = ( - f" - question: {question}\n" - + f" response: {this_output}\n" - + f" file: {this_file}\n" + elif helpme_mode == "SINGLE_FILE": + # Files mode, which makes the request to each file in the array. + list_of_files = file + input_file_count = 0 + is_multi = False + for this_file in list_of_files: + if len(list_of_files) > 1: + if use_panel is True and input_file_count > 0: + print() + print(f"Input file: {this_file}") + if use_panel is True: + print() + if input_file_count > 0: + is_multi = True + helpme_single_file_mode( + console, + question, + product_config, + history_file, + rag, + ai_console, + console_style, + use_panel, + new, + cont, + yaml, + out, + this_file, + is_multi, ) - with open(output_filename, "w", encoding="utf-8") as yaml_file: - yaml_file.write("logs:\n") - yaml_file.write(yaml_buffer) - yaml_file.close() - - # Save the response to the `out` file. - if out is not None and out != "" and out != "None": - try: - with open(out, "w", encoding="utf-8") as out_file: - out_file.write(f"{this_output}\n") - out_file.close() - except: - print(f"Failed to write the output to file: {out}") + input_file_count += 1 elif helpme_mode == "PER_FILE": # Per file mode, which makes the request to each file in the path. 
- if perfile.startswith("~/"): - perfile = os.path.expanduser(perfile) - this_path = os.path.realpath(resolve_path(perfile)) - if not os.path.exists(this_path): - print(f"[Error] Cannot access the input path: {this_path}") - exit(1) + this_path = resolve_and_ensure_path(perfile, check_exists=True) # Set the `file_type` variable for display only. file_type = "." + str(file_ext) if file_ext is None or file_ext == "": @@ -453,7 +427,7 @@ def helpme( out_buffer = "" out_buffer_2 = "" yaml_buffer = "" - for root, dirs, files in os.walk(resolve_path(perfile)): + for root, dirs, files in os.walk(resolve_and_ensure_path(perfile)): for file in files: file_path = os.path.realpath(os.path.join(root, file)) if file_ext == None: @@ -587,8 +561,8 @@ def helpme( out_file.close() if yaml is True: - output_filename = "./responses.yaml" - with open(output_filename, "w", encoding="utf-8") as yaml_file: + yaml_file_path = create_output_directory("responses.yaml") + with open(yaml_file_path, "w", encoding="utf-8") as yaml_file: yaml_file.write("logs:\n") yaml_file.write(yaml_buffer) yaml_file.close() @@ -596,29 +570,27 @@ def helpme( # Save the responses to the `out` file. if out is not None and out != "" and out != "None": try: - with open(out, "w", encoding="utf-8") as out_file: + output_file_path = create_output_directory(out) + with open(output_file_path, "w", encoding="utf-8") as out_file: out_file.write(f"{out_buffer_2}") out_file.close() except: print(f"Failed to write the output to file: {out}") elif helpme_mode == "ALL_FILES": - if allfiles.startswith("~/"): - allfiles = os.path.expanduser(allfiles) # All files mode, which makes all files in the path to be included as context. - this_path = os.path.realpath(resolve_path(allfiles)) - if not os.path.exists(this_path): - print(f"[Error] Cannot access the input path: {this_path}") - exit(1) + this_path = resolve_and_ensure_path(allfiles, check_exists=True) # Set the `file_type` variable for display only. file_type = "." 
+ str(file_ext) if file_ext is None or file_ext == "": file_type = "All types" # Ask the user to confirm. + # Use original_question instead of question because question might be + # modified by the question_out_wrapper. confirm_string = ( f"Adding all files found in the path below to context:\n" - + f"Question: {question}\nPath: {this_path}\nFile type: {file_type}\n" + + f"Question: {original_question}\nPath: {this_path}\nFile type: {file_type}\n" ) if force or click.confirm( f"{confirm_string}" + f"Do you want to continue?", @@ -629,7 +601,7 @@ def helpme( else: print() context_buffer = "" - for root, dirs, files in os.walk(resolve_path(allfiles)): + for root, dirs, files in os.walk(resolve_and_ensure_path(allfiles)): for file in files: file_path = os.path.realpath(os.path.join(root, file)) file_content = "" @@ -710,26 +682,70 @@ def helpme( # Save this exchange in a YAML file. if yaml is True: - output_filename = "./responses.yaml" + yaml_file_path = create_output_directory("responses.yaml") + # output_filename = output_file_path + "/responses.yaml" # Prepare output to be saved in the YAML file. yaml_buffer = ( f" - question: {question}\n" + f" response: {this_output}\n" + f" path: {this_path}\n" ) - with open(output_filename, "w", encoding="utf-8") as yaml_file: + with open(yaml_file_path, "w", encoding="utf-8") as yaml_file: yaml_file.write("logs:\n") yaml_file.write(yaml_buffer) yaml_file.close() # Save the response to the `out` file. - if out is not None and out != "" and out != "None": + if output_file_path: try: - with open(out, "w", encoding="utf-8") as out_file: + with open(output_file_path, "w", encoding="utf-8") as out_file: out_file.write(f"{this_output}\n") out_file.close() except: - print(f"Failed to write the output to file: {out}") + print(f"Failed to write the output to file: {output_file_path}") + + elif helpme_mode == "LIST_FILE": + # List file mode, which reads a text file that contains a list of input files. 
+ this_list_file = resolve_and_ensure_path(list_file, check_exists=True) + print(f"Input list file: {this_list_file}") + print() + list_of_files = [] + try: + with open(this_list_file, "r", encoding="utf-8") as file: + for line in file.readlines(): + # print(line.strip()) + list_of_files.append(line.strip()) + except: + print(f"[Error] Cannot access the input list file: {this_list_file}") + exit(1) + input_file_count = 0 + is_multi = False + for this_file in list_of_files: + if len(list_of_files) > 1: + if use_panel is True and input_file_count > 0: + print() + print(f"Input file: {this_file}") + if use_panel is True: + print() + if input_file_count > 0: + is_multi = True + helpme_single_file_mode( + console, + question, + product_config, + history_file, + rag, + ai_console, + console_style, + use_panel, + new, + cont, + yaml, + out, + this_file, + is_multi, + ) + input_file_count += 1 elif helpme_mode == "TERMINAL_OUTPUT": # Terminal output mode, which reads the terminal output as context. @@ -739,7 +755,7 @@ def helpme( # Set the maximum number of lines to read from the terminal. lines_limit = -150 # For the new 1.5 pro model, increase the limit to 5000 lines. - if this_model.startswith("models/gemini-1.5"): + if this_model.startswith("gemini-1.5") or this_model.startswith("gemini-2.0"): lines_limit = -5000 try: with open(file_path, "r", encoding="utf-8") as file: @@ -784,30 +800,133 @@ def helpme( # Save this exchange in a YAML file. if yaml is True: - output_filename = "./responses.yaml" + yaml_file_path = create_output_directory("responses.yaml") # Prepare output to be saved in the YAML file. yaml_buffer = ( f" - question: {question}\n" + f" response: {this_output}\n" ) - with open(output_filename, "w", encoding="utf-8") as yaml_file: + with open(yaml_file_path, "w", encoding="utf-8") as yaml_file: yaml_file.write("logs:\n") yaml_file.write(yaml_buffer) yaml_file.close() # Save the response to the `out` file. 
- if out is not None and out != "" and out != "None": + if output_file_path: try: - with open(out, "w", encoding="utf-8") as out_file: + with open(output_file_path, "w", encoding="utf-8") as out_file: out_file.write(f"{this_output}\n") out_file.close() except: - print(f"Failed to write the output to file: {out}") + print(f"Failed to write the output to file: {output_file_path}") # When the --sleep flag is provided, sleep for the specified duration. if sleep > 0: time.sleep(int(sleep)) +def helpme_single_file_mode( + console, + question, + product_config, + history_file, + rag, + ai_console, + console_style, + use_panel, + new, + cont, + yaml, + out, + file, + is_multi, +): + this_file = resolve_and_ensure_path(file) + this_output = "" + + # if the `--cont` flag is set, include the previous exchanges as additional context. + context_file = None + if cont: + context_file = history_file + + this_output = console.ask_model_with_file( + question.strip(), + product_config, + file=this_file, + context_file=context_file, + rag=rag, + return_output=True, + ) + + # Render the response. + if use_panel: + ai_console.print("[Response]", style=console_style) + ai_console.print(Panel(Markdown(this_output, code_theme="manni"))) + else: + print() + print(f"{this_output}") + print() + + # Read the file content to be included in the history file. + file_content = "" + file_type = identify_file_type(this_file) + if file_type == "image": + file_content = "This is an image file.\n" + elif file_type == "audio": + file_content = "This is an audio file.\n" + elif file_type == "video": + file_content = "This is a video file.\n" + else: + file_content = open_file(this_file) + # If the `--new` flag is set, overwrite the history file. + write_mode = "a" + if new: + write_mode = "w" + # However, if there are multiple files as input, do not overwrite the history file. + if is_multi is True: + write_mode = "a" + # Record this exchange in the history file. 
+ with open(history_file, write_mode, encoding="utf-8") as out_file: + out_file.write(f"QUESTION: {question}\n\n") + out_file.write(f"FILE NAME: {file}\n") + out_file.write(f"FILE CONTENT:\n\n{file_content}\n") + out_file.write(f"RESPONSE:\n\n{this_output}\n\n") + out_file.close() + + # Save this exchange in a YAML file. + if yaml is True: + yaml_file_path = create_output_directory("responses.yaml") + # Prepare output to be saved in the YAML file. + yaml_buffer = ( + f" - question: {question}\n" + + f" response: {this_output}\n" + + f" file: {this_file}\n\n" + ) + yaml_write_mode = "w" + if is_multi is True: + yaml_write_mode = "a" + with open(yaml_file_path, yaml_write_mode, encoding="utf-8") as yaml_file: + yaml_file.write("logs:\n") + yaml_file.write(yaml_buffer) + yaml_file.close() + + # Save the response to the `out` file. + if out: + output_file_path = create_output_directory(out) + if not output_file_path: + click.echo("Error: Could not determine or create a valid output directory.") + exit(1) + try: + out_write_mode = "w" + if is_multi is True: + out_write_mode = "a" + with open(output_file_path, out_write_mode, encoding="utf-8") as out_file: + out_file.write(f"Input file: {this_file}\n\n") + out_file.write(f"{this_output}\n\n") + out_file.close() + except: + print(f"Failed to write the output to file: {output_file_path}") + + cli = click.CommandCollection( sources=[cli_helpme], help="With Docs Agent, you can interact with Google's Gemini models.", diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_runtask.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_runtask.py index 2eed0d2b6..6a5249498 100644 --- a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_runtask.py +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_runtask.py @@ -29,6 +29,7 @@ from docs_agent.interfaces.cli.cli_helpme import helpme from docs_agent.interfaces.cli.cli_tellme import tellme from 
docs_agent.interfaces.cli.cli_posix import posix
+from docs_agent.interfaces.cli.cli_script import script
 import os
 import re
 import time
@@ -364,7 +365,10 @@ def runtask(
            top_level_model = model
        else:
            top_level_model = curr_task.model
-        if not top_level_model.startswith("models/gemini"):
+        # Remove the legacy "models/" prefix if it exists.
+        if top_level_model.startswith("models/"):
+            top_level_model = top_level_model.removeprefix("models/")
+        if not top_level_model.startswith("gemini"):
            click.echo(
                f"runtask mode is not supported with this model: {top_level_model} for {curr_task.name}"
            )
@@ -420,8 +424,15 @@ def runtask(
                        this_step_buffer += f"\nperfile: {step.flags.perfile}"
                    if step.flags.allfiles is not None and step.flags.allfiles != "":
                        this_step_buffer += f"\nallfiles: {step.flags.allfiles}"
+                    if step.flags.list_file is not None and step.flags.list_file != "":
+                        this_step_buffer += f"\nlist_file: {step.flags.list_file}"
                    if step.flags.file_ext is not None and step.flags.file_ext != "":
                        this_step_buffer += f"\nfile_ext: {step.flags.file_ext}"
+                    if (
+                        step.flags.script_input is not None
+                        and step.flags.script_input != ""
+                    ):
+                        this_step_buffer += f"\nscript_input: {step.flags.script_input}"
                    if (
                        step.flags.default_input is not None
                        and step.flags.default_input != ""
@@ -480,8 +491,17 @@ def runtask(
                        this_step_buffer += f"    perfile: {step.flags.perfile}\n"
                    if step.flags.allfiles is not None and step.flags.allfiles != "":
                        this_step_buffer += f"    allfiles: {step.flags.allfiles}\n"
+                    if step.flags.list_file is not None and step.flags.list_file != "":
+                        this_step_buffer += (
+                            f"    list_file: {step.flags.list_file}\n"
+                        )
                    if step.flags.file_ext is not None and step.flags.file_ext != "":
                        this_step_buffer += f"    file_ext: {step.flags.file_ext}\n"
+                    if (
+                        step.flags.script_input is not None
+                        and step.flags.script_input != ""
+                    ):
+                        this_step_buffer += f"    script_input: {step.flags.script_input}\n"
                    if (
                        step.flags.default_input is not None
                        and
step.flags.default_input != "" @@ -570,12 +590,15 @@ def runtask( this_file = None this_perfile = None this_allfiles = None + this_list_file = None this_file_ext = None + this_repeat_until = None this_out = None this_yaml = None this_rag = None this_terminal = None this_default_input = None + this_script_input = None if hasattr(task, "flags"): if hasattr(task.flags, "file"): this_file = task.flags.file @@ -583,8 +606,12 @@ def runtask( this_perfile = task.flags.perfile if hasattr(task.flags, "allfiles"): this_allfiles = task.flags.allfiles + if hasattr(task.flags, "list_file"): + this_list_file = task.flags.list_file if hasattr(task.flags, "file_ext"): this_file_ext = task.flags.file_ext + if hasattr(task.flags, "repeat_until"): + this_repeat_until = task.flags.repeat_until if hasattr(task.flags, "out"): this_out = task.flags.out if hasattr(task.flags, "yaml"): @@ -595,6 +622,8 @@ def runtask( this_terminal = task.flags.terminal if hasattr(task.flags, "default_input"): this_default_input = task.flags.default_input + if hasattr(task.flags, "script_input"): + this_script_input = task.flags.script_input # Set the out filename to the default name. if this_out is None or this_out == "": @@ -614,27 +643,37 @@ def runtask( if custom_input is not None: # First try to replace them with the custom input value provided by # the --custom_input flag at runtime - if this_file == "": - this_file = custom_input + if this_file == [""]: + this_file = [custom_input] if this_perfile == "": this_perfile = custom_input if this_allfiles == "": this_allfiles = custom_input + if this_list_file == "": + this_list_file = custom_input + if this_script_input == "": + this_script_input = custom_input elif this_default_input is not None: # If no custom_input value is provided at runtime, # try to replace them with the default input value provided in the task file. 
- if this_file == "": - this_file = this_default_input + if this_file == [""]: + this_file = [this_default_input] if this_perfile == "": this_perfile = this_default_input if this_allfiles == "": this_allfiles = this_default_input + if this_list_file == "": + this_list_file = this_default_input + if this_script_input == "": + this_script_input = this_default_input else: # Error and exit if there is still in any fields. if ( - this_file == "" + this_file == [""] or this_perfile == "" or this_allfiles == "" + or this_list_file == "" + or this_script_input == "" ): print() print( @@ -683,14 +722,16 @@ def runtask( + task.prompt ) overwrite_words = overwrite_words.split() - ctx.invoke( + success = ctx.invoke( helpme, words=overwrite_words, force=True, file=this_file, perfile=this_perfile, allfiles=this_allfiles, + list_file=this_list_file, file_ext=this_file_ext, + repeat_until=this_repeat_until, out=this_out, yaml=this_yaml, rag=this_rag, @@ -700,6 +741,34 @@ def runtask( terminal=this_terminal, model=this_model, ) + if this_repeat_until: + print("Successful?") + print(success) + repeat_count = 0 + while success is not True and repeat_count < 3: + success = ctx.invoke( + helpme, + words=overwrite_words, + force=True, + file=this_file, + perfile=this_perfile, + allfiles=this_allfiles, + list_file=this_list_file, + file_ext=this_file_ext, + repeat_until=this_repeat_until, + out=this_out, + yaml=this_yaml, + rag=this_rag, + new=bool(is_new), + cont=bool(is_cont), + panel=bool(use_panel), + terminal=this_terminal, + model=this_model, + ) + print("Successful?") + print(success) + repeat_count += 1 + elif task.function == "tellme": # tellme Task # Note: Usually don't want to overwrite model from curr_task.model in @@ -767,6 +836,41 @@ def runtask( new=bool(is_new), cont=bool(is_cont), ) + elif task.function == "script": + # Render this step information. + if use_panel: + print() + ai_console.print( + Panel( + f"Script (script): {task.prompt}", + title=f"Step {this_step}. 
{task.name}", + title_align="left", + padding=(1, 2), + ), + style=console_style, + ) + print() + else: + print() + print(f"===================") + print(f"Running a script: {task.name}") + print(f"Script: {task.prompt}") + print(f"===================") + print() + # Append the custom input as arguments to the script. + if this_script_input is not None: + overwrite_words = ( + str(task.prompt) + " " + str(this_script_input) + ) + else: + overwrite_words = task.prompt + overwrite_words = overwrite_words.split() + ctx.invoke( + script, + words=overwrite_words, + new=bool(is_new), + cont=bool(is_cont), + ) else: logging.error("Unsupported task function: %s", task.function) exit(1) diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_script.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_script.py new file mode 100644 index 000000000..7f678de7f --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_script.py @@ -0,0 +1,170 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +"""Docs Agent CLI client""" + +import click +import typing +from pathlib import Path +import os +import subprocess +import time + +from docs_agent.interfaces.cli.cli_common import common_options +from docs_agent.interfaces.cli.cli_common import show_config +from docs_agent.utilities import helpers # Import the helpers module + + +@click.group(invoke_without_command=True) +@common_options +@click.pass_context +def cli_script(ctx, config_file, product): + """With Docs Agent, you can interact with Google's Gemini + models and manage online corpora on Google Cloud.""" + ctx.ensure_object(dict) + # Print config.yaml if agent is run without a command. + if ctx.invoked_subcommand is None: + click.echo("Docs Agent configuration:\n") + show_config() + + +@cli_script.command(name="script") +@click.argument("words", nargs=-1) +@click.option( + "--new", + is_flag=True, + help="Start a new session.", +) +@click.option( + "--cont", + is_flag=True, + help="Use the previous responses in the session as context.", +) +@click.option( + "--sleep", + type=int, + default=0, + help="Sleep for a specified duration (in seconds) after completing the command.", + hidden=True, +) +@common_options +def script( + words, + config_file: typing.Optional[str] = None, + new: bool = False, + cont: bool = False, + sleep: int = 0, + product: list[str] = [""], +): + """Run a script from the project root and add its output into context.""" + # Set the filename for recording exchanges with the Gemini models. + history_file = "/tmp/docs_agent_responses" + + # Get the project root path using the helper function + try: + project_path = helpers.get_project_path() + except FileNotFoundError as e: + click.echo(f"Error: Could not find project root. 
{e}", err=True) + return + + # Extract the script name and its arguments + if not words: + click.echo("Error: No script name provided.", err=True) + return + + script_name = words[0] + + # Get the Current Working Directory where the command was invoked + # This is used especially for things like custom_input + original_cwd = Path.cwd() + + script_arguments_for_subprocess = [] + for arg in words[1:]: + # Expand ~ if present + if arg.startswith("~/"): + expanded_arg = os.path.expanduser(arg) + script_arguments_for_subprocess.append(expanded_arg) + else: + # Resolve the argument path relative to the original working directory + absolute_arg_path = original_cwd.resolve() / arg + # Passes absolute path to the subprocess + script_arguments_for_subprocess.append(str(absolute_arg_path.resolve())) + + # The script path itself should be relative to the project root + # This defines scripts/ + script_path_relative_to_project = Path("scripts") / script_name + + command = ["python3", str(script_path_relative_to_project)] + script_arguments_for_subprocess + + print("Running script from project root:") + print(f" Command: {' '.join(command)}\n") + + try: + # Execute the script with current working directory set to the project root + process = subprocess.run( + command, + capture_output=True, + text=True, + check=True, + cwd=project_path + ) + this_output = process.stdout + if process.stderr: + click.echo("Script produced warnings/errors on stderr:\n", err=True) + click.echo(process.stderr, err=True) + + # Catch all errors and print them to the click console + except FileNotFoundError: + click.echo( + f"Error: 'python3' command not found or script '{script_path_relative_to_project}' not found within '{project_path}'.", + err=True, + ) + return + except subprocess.CalledProcessError as e: + click.echo(f"Error: Script execution failed with exit code {e.returncode}", err=True) + click.echo(f"Stderr:\n{e.stderr}", err=True) + click.echo(f"Stdout:\n{e.stdout}", err=True) + return + 
except Exception as e:
+        click.echo(f"An unexpected error occurred: {e}", err=True)
+        return
+
+    write_mode = None
+    if new:
+        write_mode = "w"
+    elif cont:
+        write_mode = "a"
+    if write_mode is not None:
+        try:
+            with open(history_file, write_mode, encoding="utf-8") as out_file:
+                # Record the script command (name and arguments) with its output.
+                out_file.write(f"SCRIPT (run in {project_path}): {' '.join(command)}\n\n")
+                out_file.write(f"RESPONSE:\n\n{this_output}\n\n")
+        except IOError as e:
+            click.echo(f"Error writing to history file '{history_file}': {e}", err=True)
+
+    if sleep > 0:
+        time.sleep(sleep)
+
+
+cli = click.CommandCollection(
+    sources=[cli_script],
+    help="With Docs Agent, you can interact with Google's Gemini models.",
+)
+
+
+if __name__ == "__main__":
+    cli()
\ No newline at end of file
diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_tools.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_tools.py
new file mode 100644
index 000000000..d750a6a52
--- /dev/null
+++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/cli/cli_tools.py
@@ -0,0 +1,325 @@
+#
+# Copyright 2023 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# +import asyncio +import click +import logging +import traceback +import os + +from docs_agent.agents.docs_agent import DocsAgent +from docs_agent.utilities.config import return_config_and_product + +# Define history file path +history_file = "/tmp/docs_agent_responses" + +# --- Structured History Constants --- +# This header will be at the top of the history file and part of the LLM prompt with history. +HISTORY_FILE_AND_PROMPT_HEADER = ( + "## Conversation History (for context) ##\n" + "The following is a log of previous questions and responses.\n" + "Use this information as context to understand the current request.\n" + "------------------------------------------------------------\n" +) + +# This footer will be at the bottom of the history file and part of the LLM prompt with history. +HISTORY_FILE_AND_PROMPT_FOOTER = ( + "------------------------------------------------------------\n" + "## End of Conversation History ##\n" +) + +# This prefix is added *only* to the LLM prompt (not saved in the file) +# when history is present, before the new user question. +LLM_PROMPT_NEW_REQUEST_PREFIX = ( + "\n## New User Request ##\n" +) +# --- End Structured History Constants --- + +async def run_agent_processing( + prompt: str, + agent: DocsAgent, + verbose: bool = False, +): + """ + Main execution logic for the Agent using tools loop. + + Args: + prompt (str): The prompt to send to the agent including context. + agent (DocsAgent): The initialized DocsAgent instance. + verbose (bool): Enable verbose logging. + + Returns: + str: The final output text from the agent or an error message. 
+    """
+    final_result_text = "[MCP Service Initialization Failed]"
+    try:
+        logging.info("Starting agent processing loop...")
+        final_result_text = await agent.process_prompt_with_tools(
+            prompt=prompt, verbose=verbose
+        )
+    except ConnectionRefusedError as e:
+        logging.error("MCP Connection Error: Could not connect to the server.")
+        logging.error(f"Details: {e}")
+        final_result_text = "[Connect ERR: Server refused connection]"
+    except FileNotFoundError as e:
+        logging.error("MCP Stdio Error: Command or script not found.")
+        logging.error(f"Details: {e}")
+        final_result_text = "[Stdio ERR: Command not found]"
+    except Exception as e:
+        logging.critical(
+            f"Unexpected runtime error during MCP session: {type(e).__name__}"
+        )
+        logging.critical(f"Details: {e}")
+        if verbose:
+            traceback.print_exc()
+        final_result_text = f"[Runtime ERR: {type(e).__name__}: {e}]"
+
+    return final_result_text
+
+
+# Define the main click group
+@click.group(invoke_without_command=True)
+@click.pass_context
def cli_tools(ctx):
+    """Docs Agent commands that use external tools (MCP)."""
+    if ctx.invoked_subcommand is None:
+        click.echo(ctx.get_help())
+
+
+@cli_tools.command(name="tools")
+@click.argument("words", nargs=-1)
+@click.option(
+    "--verbose",
+    "-v",
+    is_flag=True,
+    help="Enable verbose output, including full tool details and tracebacks.",
+)
+@click.option(
+    "--new",
+    is_flag=True,
+    help="Start a new session.",
+)
+@click.option(
+    "--cont",
+    is_flag=True,
+    help="Use the previous responses in the session as context.",
+)
+@click.pass_context
+def run_agent_command(ctx, words: tuple, verbose: bool, new: bool, cont: bool):
+    """Runs the Docs Agent with the given prompt, using the configured external tools.
+
+    \b
+    Args:
+        words: The initial prompt to send to the agent.
+        verbose: Enable verbose logging.
+        new: Start a new session.
+        cont: Continue the existing session.
+ """ + if verbose: + logging.getLogger().setLevel(logging.INFO) + logging.info("Verbose mode enabled.") + else: + logging.getLogger().setLevel(logging.WARNING) + initial_prompt_str = " ".join(words) + if not initial_prompt_str: + click.echo("Error: Prompt cannot be empty.", err=True) + ctx.exit(1) + # Log raw prompt before context handling + logging.info(f"Starting Docs Agent with Tools. Raw Prompt: {initial_prompt_str}") + + # --- History / Context Handling --- + llm_history_context_block = "" # HEADER + Parsed QAs + FOOTER for LLM + parsed_qa_content_from_history_file = "" # Just the Q/A part from a structured file + legacy_unstructured_content_to_migrate = "" # Full content of an old-format file + + if cont: + if new: + click.echo( + "Warning: Both --new and --cont flags specified. --new takes precedence, history will be cleared for this session's start.", + err=True, + ) + else: + try: + if os.path.exists(history_file): + with open(history_file, "r", encoding="utf-8") as f: + full_file_content = f.read() + + if not full_file_content.strip(): # File is empty or whitespace + logging.info(f"History file {history_file} is empty. 
No context to load.") + else: + header_start_idx = full_file_content.find(HISTORY_FILE_AND_PROMPT_HEADER) + footer_start_idx = -1 + if header_start_idx != -1: + footer_start_idx = full_file_content.find( + HISTORY_FILE_AND_PROMPT_FOOTER, + header_start_idx + len(HISTORY_FILE_AND_PROMPT_HEADER) + ) + + if header_start_idx != -1 and footer_start_idx != -1: + # Successfully found structured history + qa_content_start_offset = header_start_idx + len(HISTORY_FILE_AND_PROMPT_HEADER) + parsed_qa_content_from_history_file = full_file_content[qa_content_start_offset:footer_start_idx] + + llm_history_context_block = ( + HISTORY_FILE_AND_PROMPT_HEADER + + parsed_qa_content_from_history_file + + HISTORY_FILE_AND_PROMPT_FOOTER + ) + logging.info( + f"Successfully parsed {len(parsed_qa_content_from_history_file)} chars of Q/A content from structured history: {history_file}" + ) + else: + # File exists but is not in the new structured format + click.echo( + f"Warning: History file {history_file} is not in the expected structured format. " + "No prior context will be used for this query. The file will be converted to the new format upon saving.", + err=True, + ) + logging.warning( + f"History file {history_file} found but not in structured format. Storing its content for migration on save. Header found: {header_start_idx!=-1}, Footer found: {footer_start_idx!=-1} (after header)." 
+ ) + legacy_unstructured_content_to_migrate = full_file_content + except IOError as e: + click.echo(f"Warning: Could not read history file {history_file}: {e}", err=True) + + # Construct the prompt for the agent + if llm_history_context_block: # If --cont successfully loaded and parsed structured history + prompt_with_context = ( + llm_history_context_block + + LLM_PROMPT_NEW_REQUEST_PREFIX + + initial_prompt_str + ) + logging.info("Using structured history as context for the prompt.") + else: + prompt_with_context = initial_prompt_str # No history or failed to parse structured + + logging.info(f"Final prompt for agent (with context if any):\n{prompt_with_context}") + # --- End History / Context Handling --- + + logging.info("Starting Docs Agent with Tools...") + # Log the original prompt, not the one with context for clarity here + logging.info(f"Original Prompt: {initial_prompt_str}") + + # Define the async part to be run + async def _main(): + # Load config and initialize Agent + _loaded_config, product_config = return_config_and_product() + if not product_config or not product_config.products: + logging.critical("Failed to load product configuration.") + click.echo( + "\n[Config Load ERR: No products found]", err=True + ) + ctx.exit(1) + + try: + # Uses products[0] for now + agent = DocsAgent( + config=product_config.products[0], + init_chroma=False, + init_semantic=False, + ) + if agent.tool_manager: + loaded_tool_names = [ + service.name + for service in agent.tool_manager.tool_services + if hasattr(service, "name") + ] + click.echo(f"\nUsing tools: {loaded_tool_names}\n") + except Exception as e: + logging.critical(f"Failed to initialize DocsAgent: {e}") + if verbose: + traceback.print_exc() + click.echo(f"\n[Agent Init ERR: {e}]", err=True) + ctx.exit(1) + final_output = "[Initialization Error]" + try: + # Call the async processing function + final_output = await run_agent_processing( + prompt=prompt_with_context, + agent=agent, + verbose=verbose, + ) + 
except Exception as e: + # Catch errors during the agent processing run + logging.critical( + f"Unexpected Error in agent processing: {type(e).__name__}: {e}" + ) + final_output = f"[Processing ERR: {type(e).__name__}]" + if verbose: + traceback.print_exc() + click.echo(f"Error during processing: {final_output}", err=True) + + return final_output + + # Run the async main function using asyncio.run() + final_output_result = "[Async Execution Error]" + try: + final_output_result = asyncio.run(_main()) + except Exception as e: + logging.critical(f"Error running asyncio main: {e}") + final_output_result = f"[Asyncio Run ERR: {e}]" + if verbose: + traceback.print_exc() + click.echo(f"Error during async execution: {final_output_result}", err=True) + ctx.exit(1) + + # --- Write History --- + if not (final_output_result.startswith("[") and final_output_result.endswith("]")): + new_qa_interaction_entry = f"QUESTION:\n{initial_prompt_str}\n\nRESPONSE:\n{final_output_result}\n\n" + + current_qa_block_for_saving = "" + if new: + current_qa_block_for_saving = new_qa_interaction_entry + logging.info(f"Starting new history Q/A block (due to --new flag).") + else: + if parsed_qa_content_from_history_file: # Successfully read structured history + current_qa_block_for_saving = parsed_qa_content_from_history_file + new_qa_interaction_entry + logging.info(f"Appending new interaction to existing structured Q/A block.") + elif legacy_unstructured_content_to_migrate: # Migrating old format + # Ensure there's a newline before appending the new Q/A if legacy content doesn't end with one + separator = "\n" if legacy_unstructured_content_to_migrate.strip() and not legacy_unstructured_content_to_migrate.endswith("\n") else "" + current_qa_block_for_saving = legacy_unstructured_content_to_migrate.strip() + separator + "\n" + new_qa_interaction_entry + logging.info(f"Converting legacy history content and appending new interaction for saving.") + else: # No prior history or file was empty + 
current_qa_block_for_saving = new_qa_interaction_entry + logging.info(f"Starting new Q/A block (no prior history or history file was empty).") + + history_content_to_write = ( + HISTORY_FILE_AND_PROMPT_HEADER + + current_qa_block_for_saving + + HISTORY_FILE_AND_PROMPT_FOOTER + ) + + try: + with open(history_file, "w", encoding="utf-8") as f: # Always "w" to write full structured content + f.write(history_content_to_write) + logging.info(f"Successfully wrote/updated history file in structured format: {history_file}") + except IOError as e: + click.echo(f"Warning: Could not write to history file {history_file}: {e}", err=True) + else: + logging.info(f"Skipping history write due to agent output indicating an error: {final_output_result}") + # --- End Write History --- + + # Print the final result + click.echo(f"\n{final_output_result}") + + +cli = click.CommandCollection( + sources=[cli_tools], + help="Docs Agent LLM interactions using tools.", +) + +if __name__ == "__main__": + cli() diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/hello_world.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/hello_world.py index d53cfd739..385f9b1dc 100644 --- a/examples/gemini/python/docs-agent/docs_agent/interfaces/hello_world.py +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/hello_world.py @@ -58,7 +58,7 @@ def main(): print(response_gemini) # Pass the context and question to the `aqa` model - if docs_agent.check_if_aqa_is_used(): + if docs_agent.aqa_model: response_aqa = docs_agent.ask_aqa_model(question) print("\n[AQA answer]:") print(response_aqa) diff --git a/examples/gemini/python/docs-agent/docs_agent/interfaces/run_console.py b/examples/gemini/python/docs-agent/docs_agent/interfaces/run_console.py index 80cc790a7..2f33fbb4a 100644 --- a/examples/gemini/python/docs-agent/docs_agent/interfaces/run_console.py +++ b/examples/gemini/python/docs-agent/docs_agent/interfaces/run_console.py @@ -23,10 +23,11 @@ from rich.panel import 
Panel from rich.text import Text from rich.progress import Progress -from PIL import Image from docs_agent.agents.docs_agent import DocsAgent from docs_agent.utilities.config import ConfigFile +from docs_agent.storage.rag import return_collection_name +from docs_agent.utilities.helpers import identify_file_type, open_file, open_image # This function is used by the `helpme` command to ask the Gemini Pro model @@ -77,7 +78,7 @@ def ask_model(question: str, product_configs: ConfigFile, return_output: bool = docs_agent = DocsAgent(config=product) progress.update( task_docs_agent, - description=f"[turquoise4 bold]Asking Gemini (model: {product.models.language_model}, source: {docs_agent.return_chroma_collection()}) ", + description=f"[turquoise4 bold]Asking Gemini (model: {product.models.language_model}, source: {return_collection_name(product_config=product)}) ", total=None, ) if docs_agent.config.docs_agent_config == "experimental": @@ -86,27 +87,34 @@ def ask_model(question: str, product_configs: ConfigFile, return_output: bool = else: results_num = 5 new_question_count = 5 - # Issue if max_sources > results_num, so leave the same for now - search_result, final_context = docs_agent.query_vector_store_to_build( - question=question, - token_limit=30000, - results_num=results_num, - max_sources=results_num, - ) - ( - response, - full_prompt, - ) = docs_agent.ask_content_model_with_context_prompt( - context=final_context, question=question - ) - if len(search_result) >= 1: - if search_result[0].section.url == "": - link = str(search_result[0].section) - else: - link = search_result[0].section.url - search_results.append(search_result) - responses.append(response) - links.append(link) + if not docs_agent.rag: + logging.error("No initialized Chroma collection.") + else: + try: + search_result, final_context = docs_agent.rag.query_vector_store_to_build( + question=question, + token_limit=30000, + results_num=results_num, + max_sources=results_num, + ) + except Exception as 
e: + logging.error(f"Error retrieving content from Chroma: {e}") + search_result = [] + final_context = "Error: Could not retrieve context." + ( + response, + full_prompt, + ) = docs_agent.ask_content_model_with_context_prompt( + context=final_context, question=question + ) + if len(search_result) >= 1: + if search_result[0].section.url == "": + link = str(search_result[0].section) + else: + link = search_result[0].section.url + search_results.append(search_result) + responses.append(response) + links.append(link) elif "aqa" in product.models.language_model: if product.db_type == "google_semantic_retriever": docs_agent = DocsAgent(config=product, init_chroma=False) @@ -135,7 +143,7 @@ def ask_model(question: str, product_configs: ConfigFile, return_output: bool = docs_agent = DocsAgent(config=product, init_chroma=True) progress.update( task_docs_agent, - description=f"[turquoise4 bold]Asking Gemini (model: {product.models.language_model}, source: {docs_agent.return_chroma_collection()}) ", + description=f"[turquoise4 bold]Asking Gemini (model: {product.models.language_model}, source: {return_collection_name(product_config=product)}) ", total=None, ) ( @@ -275,104 +283,25 @@ def ask_model_with_file( full_prompt = "" final_context = "" response = "" - - # Set the file extension. 
- file_ext = None - is_image = False - is_audio = False - is_video = False - loaded_image = None - if file != None: - if file.endswith(".png"): - file_ext = "png" - is_image = True - elif file.endswith(".jpg"): - file_ext = "jpg" - is_image = True - elif file.endswith(".gif"): - file_ext = "gif" - is_image = True - elif file.endswith(".wav"): - file_ext = "wav" - is_audio = True - elif file.endswith(".mp3"): - file_ext = "wav" - is_audio = True - elif file.endswith(".flac"): - file_ext = "flac" - is_audio = True - elif file.endswith(".aiff"): - file_ext = "aiff" - is_audio = True - elif file.endswith(".aac"): - file_ext = "aac" - is_audio = True - elif file.endswith(".ogg"): - file_ext = "aac" - is_audio = True - elif file.endswith(".mp4"): - file_ext = "mp4" - is_video = True - elif file.endswith(".mp4"): - file_ext = "mp4" - is_video = True - elif file.endswith(".mov"): - file_ext = "mov" - is_video = True - elif file.endswith(".avi"): - file_ext = "avi" - is_video = True - elif file.endswith(".x-flv"): - file_ext = "x-flv" - is_video = True - elif file.endswith(".mpg"): - file_ext = "mpg" - is_video = True - elif file.endswith(".webm"): - file_ext = "webm" - is_video = True - elif file.endswith(".wmv"): - file_ext = "wmv" - is_video = True - elif file.endswith(".3gpp"): - file_ext = "3gpp" - is_video = True - + # Identify the file type. + if file: + file_type = identify_file_type(file) + else: + file_type = None # Get the content of the target file. 
file_content = "" - if file != None and not is_image and not is_audio and not is_video: - try: - with open(file, "r", encoding="utf-8") as auto: - content = auto.read() - auto.close() - file_content = f"\nTHE CONTENT BELOW IS FROM THE FILE {file}:\n\n" + content - except: - print(f"Cannot open the file {file}") - exit(1) - elif is_image: - try: - with open(file, "rb") as image: - loaded_image = Image.open(image) - loaded_image.load() - except: - print(f"Cannot open the image {file}") - exit(1) + if file_type == "text": + content = open_file(file) + file_content = f"\nTHE CONTENT BELOW IS FROM THE FILE {file}:\n\n" + content # Get the content of the context file. - context_file_content = "" - if context_file != None: - try: - with open(context_file, "r", encoding="utf-8") as auto: - content = auto.read() - auto.close() - context_file_content = ( - f"\nTHE CONTENT BELOW IS FROM THE PREVIOUS EXCHANGES WITH GEMINI:\n\n" - + content - ) - file_content = context_file_content + "\n\n" + file_content - except: - print(f"Cannot open the context file {file}") - exit(1) + if context_file: + content = open_file(context_file) + context_file_content = ( + f"\nTHE CONTENT BELOW IS FROM THE PREVIOUS EXCHANGES WITH GEMINI:\n\n" + + content + ) + file_content = context_file_content + "\n\n" + file_content # Use the first product by default. product = product_configs.products[0] @@ -390,7 +319,7 @@ def ask_model_with_file( config=product, init_chroma=True, init_semantic=False ) # Get the Chroma collection name. - collection = docs_agent.return_chroma_collection() + collection = return_collection_name(product_config=product) # Set the progress bar. 
label = f"[turquoise4 bold]Asking Gemini (model: {language_model}, source: {collection}) " progress.update( @@ -400,7 +329,7 @@ def ask_model_with_file( ( search_result, returned_context, - ) = docs_agent.query_vector_store_to_build( + ) = docs_agent.rag.query_vector_store_to_build( question=question, token_limit=500000, results_num=5, @@ -436,8 +365,8 @@ def ask_model_with_file( task_docs_agent, description=label, total=None, refresh=True ) # Retrieve context from the online corpus. - context_chunks = docs_agent.retrieve_chunks_from_corpus( - question, corpus_name=str(corpus_name) + context_chunks = docs_agent.aqa_model.retrieve_chunks_from_corpus( + question=question, corpus_name=str(corpus_name) ) context_from_corpus = "" chunk_count = 0 @@ -466,21 +395,11 @@ def ask_model_with_file( total=None, ) final_context = file_content - if is_image: + if file_type != "text" and file_type: this_prompt = final_context + "\nQUESTION (REQUEST): " + question - response = docs_agent.ask_model_about_image( - prompt=this_prompt, image=loaded_image - ) - elif is_audio: - this_prompt = final_context + "\nQUESTION (REQUEST): " + question - response = docs_agent.ask_model_about_audio( - prompt=this_prompt, audio=file - ) - elif is_video: - this_prompt = final_context + "\nQUESTION (REQUEST): " + question - response = docs_agent.ask_model_about_video( - prompt=this_prompt, video=file - ) + response = docs_agent.language_model.ask_about_file( + prompt=this_prompt, file_path=file + ) else: # Ask Gemini with the question and final context. ( diff --git a/examples/gemini/python/docs-agent/docs_agent/models/aqa.py b/examples/gemini/python/docs-agent/docs_agent/models/aqa.py new file mode 100644 index 000000000..e87b7aef9 --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/models/aqa.py @@ -0,0 +1,27 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from docs_agent.models.aqa_models import AQA +from docs_agent.models.base import AQAModel + + +class AQAModelFactory: + """Factory for creating AQA model instances.""" + + @staticmethod + def create_model() -> AQAModel: + """Creates and returns an AQA model instance.""" + return AQA() diff --git a/examples/gemini/python/docs-agent/docs_agent/models/aqa_models.py b/examples/gemini/python/docs-agent/docs_agent/models/aqa_models.py new file mode 100644 index 000000000..372b2a0db --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/models/aqa_models.py @@ -0,0 +1,218 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import typing +from absl import logging +from docs_agent.models.base import AQAModel +import google.ai.generativelanguage as glm + + +class AQA(AQAModel): + """ + An implementation of AQAModel using Google's Generative AI API. 
+ """ + + def __init__(self): + self.generative_service_client = glm.GenerativeServiceClient() + self.retriever_service_client = glm.RetrieverServiceClient() + self.permission_service_client = glm.PermissionServiceClient() + self.aqa_response_buffer: typing.Any = None + + def generate_answer( + self, + question: str, + grounding_passages_texts: typing.List[str], + answer_style: str, + ) -> typing.Tuple[str, typing.List[typing.Dict[str, typing.Any]]]: + """ + Generates an answer to a question using the provided grounding passages. + + Args: + question (str): The question to answer. + grounding_passages_texts (typing.List[str]): A list of texts to use as grounding passages. + answer_style (str): The style of the answer (e.g., "ABSTRACTIVE", "EXTRACTIVE"). + + Returns: + typing.Tuple[str, typing.List[typing.Dict[str, typing.Any]]]: A tuple containing the answer and a list of citations. + """ + user_query_content = glm.Content(parts=[glm.Part(text=question)]) + + grounding_passages = glm.GroundingPassages() + for i, passage_text in enumerate(grounding_passages_texts): + new_passage = glm.Content(parts=[glm.Part(text=passage_text)]) + index_id = str("{:03d}".format(i + 1)) + grounding_passages.passages.append( + glm.GroundingPassage(content=new_passage, id=index_id) + ) + + req = glm.GenerateAnswerRequest( + model="models/aqa", + contents=[user_query_content], + inline_passages=grounding_passages, + answer_style=answer_style, + ) + + try: + aqa_response = self.generative_service_client.generate_answer(req) + self.aqa_response_buffer = aqa_response + + # Create the structured result + result_list: typing.List[typing.Dict[str, typing.Any]] = [] + try: + answer_text = aqa_response.answer.content.parts[0].text + except (AttributeError, IndexError): + answer_text = "" + + if answer_text: + for i in range(len(grounding_passages_texts)): + result_list.append( + { + "text": grounding_passages_texts[i], + "probability": aqa_response.answerable_probability, + "metadata": {}, + } 
+ ) + return answer_text, result_list + + except Exception as e: + logging.error(f"Error generating answer: {e}") + self.aqa_response_buffer = None + return "", [] + + def generate_answer_with_corpora( + self, question: str, corpus_name: str, answer_style: str + ) -> typing.Tuple[str, typing.List[typing.Dict[str, typing.Any]]]: + """ + Generates an answer to a question using the provided corpus. + + Args: + question (str): The question to answer. + corpus_name (str): The name of the corpus to use. + answer_style (str): The style of the answer (e.g., "ABSTRACTIVE", "EXTRACTIVE"). + + Returns: + typing.Tuple[str, typing.List[typing.Dict[str, typing.Any]]]: A tuple containing the answer and a list of citations. + """ + + user_question_content = glm.Content( + parts=[glm.Part(text=question)], role="user" + ) + retriever_config = glm.SemanticRetrieverConfig( + source=corpus_name, query=user_question_content + ) + req = glm.GenerateAnswerRequest( + model="models/aqa", + contents=[user_question_content], + semantic_retriever=retriever_config, + answer_style=answer_style, + ) + + try: + aqa_response = self.generative_service_client.generate_answer(req) + self.aqa_response_buffer = aqa_response + + result_list: typing.List[typing.Dict[str, typing.Any]] = [] + try: + answer_text = aqa_response.answer.content.parts[0].text + except (AttributeError, IndexError): + answer_text = "" + + if answer_text: + for item in aqa_response.answer.grounding_attributions: + metadata = self._get_aqa_response_metadata(item) + for part in item.content.parts: + metadata["content"] = part.text + result_list.append( + { + "metadata": metadata, + "probability": aqa_response.answerable_probability, + } + ) + + return answer_text, result_list + + except Exception as e: + logging.error(f"Error in generate_answer_with_corpora: {e}") + self.aqa_response_buffer = None + return "", [] + + def get_saved_aqa_response_json(self) -> typing.Any: + """ + Returns the raw AQA response from the last call to 
generate_answer or generate_answer_with_corpora. + + Returns: + typing.Any: The raw AQA response, or None if no response has been saved. + """ + return self.aqa_response_buffer + + def query_corpus(self, user_query: str, corpus_name: str, results_count: int) -> typing.Any: + """ + Queries a corpus for relevant information. + + Args: + user_query (str): The user's query. + corpus_name (str): The name of the corpus to query. + results_count (int): The number of results to return. + + Returns: + typing.Any: The response from the query. + """ + request = glm.QueryCorpusRequest( + name=corpus_name, query=user_query, results_count=results_count + ) + return self.retriever_service_client.query_corpus(request) + + def _get_aqa_response_metadata( + self, aqa_response_item: typing.Any + ) -> typing.Dict[str, typing.Any]: + """ + Retrieves metadata from an AQA response item. + + Args: + aqa_response_item (typing.Any): An item from the AQA response. + + Returns: + typing.Dict[str, typing.Any]: A dictionary containing the metadata. 
+ """ + try: + chunk_resource_name = ( + aqa_response_item.source_id.semantic_retriever_chunk.chunk + ) + get_chunk_response = self.retriever_service_client.get_chunk( + name=chunk_resource_name + ) + metadata = get_chunk_response.custom_metadata + final_metadata = {} + for m in metadata: + if m.string_value: + value = m.string_value + elif m.numeric_value: + value = m.numeric_value + else: + value = "" + final_metadata[m.key] = value + return final_metadata + except Exception: + return {} + + # Retrieve and return chunks that are most relevant to the input question + def retrieve_chunks_from_corpus(self, question: str, corpus_name: str): + results_count = 5 + return self.query_corpus( + corpus_name=corpus_name, + user_query=question, + results_count=results_count + ) diff --git a/examples/gemini/python/docs-agent/docs_agent/models/base.py b/examples/gemini/python/docs-agent/docs_agent/models/base.py new file mode 100644 index 000000000..b93debfc9 --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/models/base.py @@ -0,0 +1,125 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import abc +import typing + + +class GenerativeLanguageModel(abc.ABC): + """Abstract base class for generative language models.""" + + @abc.abstractmethod + def generate_content(self, contents, request_options=None, log_level="NORMAL"): + """Generates content.""" + pass + + @abc.abstractmethod + async def generate_content_async( + self, + contents: typing.List[typing.Any], + tools: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None, + ) -> typing.Any: + pass + + @abc.abstractmethod + def ask_content_model_with_context_prompt( + self, + context: str, + question: str, + prompt: typing.Optional[str] = None, + log_level: typing.Optional[str] = "NORMAL", + ): + pass + + @abc.abstractmethod + def embed(self, content, task_type="RETRIEVAL_QUERY", title=None): + """Embeds content.""" + pass + + @abc.abstractmethod + def ask_about_file(self, prompt: str, file_path: str): + """ + Use this method for asking a model about a file. + + Args: + prompt (str): The prompt to use for the model. + file_path (str): The path to the file. + + Returns: + str: The response from the model, or raises an exception. + """ + pass + + +class AQAModel(abc.ABC): + """Abstract base class for AQA models.""" + + @abc.abstractmethod + def generate_answer( + self, question: str, grounding_passages: typing.List[str], answer_style: str + ) -> typing.Tuple[str, typing.List[typing.Dict[str, typing.Any]]]: + """Generates an answer given a question and grounding passages. + + Args: + question: The user's question. + grounding_passages: A list of strings, each representing a passage. + answer_style: The desired answer style (e.g., "VERBOSE"). + + Returns: + A tuple containing: + - The answer text (string). + - A list of dictionaries, where each dictionary represents a relevant + section and contains its metadata and a probability score. 
+ """ + pass + + @abc.abstractmethod + def generate_answer_with_corpora( + self, question: str, corpus_name: str, answer_style: str + ) -> typing.Tuple[str, typing.List[typing.Dict[str, typing.Any]]]: + """Generates an answer given a question using a specified corpus. + + Args: + question: The user's question. + corpus_name: The name of the corpus to use. + answer_style: The desired answer style. + + Returns: + A tuple containing: + - The answer text (string) + - A list of dictionaries, where each dictionary contains section data and probability. + """ + + @abc.abstractmethod + def get_saved_aqa_response_json(self) -> typing.Any: + """Retrieves and returns any buffered AQA response.""" + pass + + @abc.abstractmethod + def query_corpus( + self, + user_query: str, + corpus_name: str, + results_count: int) -> typing.Any: + """Queries a corpus and returns relevant results.""" + pass + + @abc.abstractmethod + def retrieve_chunks_from_corpus( + self, question: str, corpus_name: str + ) -> typing.Any: + """Retrieves chunks from a corpus.""" + pass diff --git a/examples/gemini/python/docs-agent/docs_agent/models/google_genai.py b/examples/gemini/python/docs-agent/docs_agent/models/google_genai.py index 757f29475..0f7693a9a 100644 --- a/examples/gemini/python/docs-agent/docs_agent/models/google_genai.py +++ b/examples/gemini/python/docs-agent/docs_agent/models/google_genai.py @@ -15,26 +15,37 @@ # """Rate limited Gemini wrapper""" - import typing -from typing import List +from typing import Any, Dict, List, cast import time +import os +import mimetypes +from PIL import Image +from io import BytesIO + +from absl import logging + +from google import genai +from google.genai import types -import google.generativeai -from google.generativeai.types import GenerationConfig from ratelimit import limits from ratelimit import sleep_and_retry from docs_agent.utilities.config import Models from docs_agent.utilities.config import Conditions +from docs_agent.utilities.helpers import 
open_image
+
+from docs_agent.models.base import GenerativeLanguageModel


 class Error(Exception):
     """Base error class for Gemini."""

+    pass
+

 class GoogleNoAPIKeyError(Error, RuntimeError):
-    """Raised if no API key is provided nor found in environment variable."""
+    """Raised if no API key is provided."""

     def __init__(self) -> None:
         super().__init__(
@@ -44,7 +55,7 @@ def __init__(self) -> None:


 class GoogleUnsupportedModelError(Error, RuntimeError):
-    """Raised if a specified model is not supported by the endpoint."""
+    """Raised if a specified model is not supported."""

     def __init__(self, model, api_endpoint) -> None:
         super().__init__(
@@ -53,37 +64,28 @@ def __init__(self, model, api_endpoint) -> None:
         )


-# Create a class for the response schema
-# class DocType(enum.Enum):
-#     CONCEPT = "concept"
-#     CODELAB = "codelab"
-#     REFERENCE = "reference"
-#     OTHER = "other"
-#     GUIDE = "guide"
-
-
-class Gemini:
-    """Rate limited Gemini wrapper.
-
-    This class exposes Gemini's chat, text, and embedding API, but with a rate
-    limit. Besides the rate limit, the `chat` and `generate_text` method has the
-    same name and behavior as `google.generativeai.chat` and
-    `google.generativeai.generate_text`, respectively. The `embed` method is
-    different from `google.generativeai.generate_embeddings` since `embed`
-    returns List[float] while `google.generativeai.generate_embeddings` returns a
-    dict. And that's why it has a different name.
+class Gemini(GenerativeLanguageModel):
+    """
+    A wrapper for the Google Gemini model.
     """

-    minute = 60  # seconds in a minute
-    max_embed_per_minute = 1400
+    minute = 60
+    # The embedding models text-embedding-004 and embedding-001 allow up to
+    # 1400 calls per minute; stay well below that to avoid hitting the limit.
+    max_embed_per_minute = 130
     max_text_per_minute = 30
-    # MAX_MESSAGE_PER_MINUTE = 30

     def __init__(
         self,
         models_config: Models,
         conditions: typing.Optional[Conditions] = None,
     ) -> None:
+        """Initializes the Gemini model.
+ + Args: + models_config: The configuration for the models. + conditions: The conditions for the model. + """ if conditions is None: self.model_error_message = "Gemini model failed to generate" self.prompt_condition = "" @@ -98,31 +100,23 @@ def __init__( self.embedding_api_call_period = models_config.embedding_api_call_period self.response_type = models_config.response_type self.response_schema = models_config.response_schema - # Sets the response type to full mime type - if self.response_type: - match self.response_type: - case "x.enum": - self.response_type = "text/x.enum" - case "json": - self.response_type = "application/json" - case _: - self.response_type = "text/plain" - self.generation_config = GenerationConfig( - response_mime_type=self.response_type, + self.safety_settings = [ + types.SafetySetting( + category=types.HarmCategory.HARM_CATEGORY_HATE_SPEECH, + threshold=types.HarmBlockThreshold.BLOCK_NONE, + ) + ] + self.config = types.GenerateContentConfig(safety_settings=self.safety_settings) + # Configure the model for image generation + if self.language_model.startswith("gemini-2.0-flash-exp-image-generation"): + self.config = types.GenerateContentConfig( + response_modalities=["Text", "Image"], + safety_settings=self.safety_settings, ) # Configure the model - google.generativeai.configure( - api_key=self.api_key, client_options={"api_endpoint": self.api_endpoint} - ) - # Check whether the specified models are supported - # supported_models = set( - # model.name for model in google.generativeai.list_models() - # ) - # for model in (models_config.language_model, models_config.embedding_model): - # if model not in supported_models: - # raise GoogleUnsupportedModelError(model, self.api_endpoint) - - # TODO: bring in limit values from config files + self.client = genai.Client(api_key=self.api_key) + logging.info(f"Created Gemini client for model: {self.language_model}") + @sleep_and_retry @limits(calls=max_embed_per_minute, period=minute) def embed( @@ 
-132,59 +126,221 @@ def embed( title: typing.Optional[str] = None, ) -> List[float]: if ( - self.embed_model == "models/embedding-001" - or self.embed_model == "models/text-embedding-004" + self.embed_model == "embedding-001" + or self.embed_model == "text-embedding-004" + or self.embed_model == "gemini-embedding-exp-03-07" ): return [ - google.generativeai.embed_content( + self.client.models.embed_content( model=self.embed_model, - content=content, - task_type=task_type, - title=title, - )["embedding"] + contents=content, + config=types.EmbedContentConfig(task_type=task_type, title=title), + ) + .embeddings[0] + .values ] else: raise GoogleUnsupportedModelError(self.embed_model, self.api_endpoint) - # TODO: bring in limit values from config files @sleep_and_retry @limits(calls=max_text_per_minute, period=minute) def generate_content( - self, contents, request_options=None, log_level: typing.Optional[str] = "NORMAL" + self, + contents, + log_level: typing.Optional[str] = "NORMAL", + image_output_path: typing.Optional[str] = "image.png", ): + """ + Generates content using the Gemini model. + + Args: + contents: The content to generate from. + log_level: The level of logging. + image_output_path: The path to save the generated image. + + Returns: + The generated content or an error message. 
+ """ if self.language_model is None: raise GoogleUnsupportedModelError(self.language_model, self.api_endpoint) - model = google.generativeai.GenerativeModel(model_name=self.language_model) try: - if request_options is None: - response = model.generate_content( - contents, generation_config=self.generation_config - ) - else: - response = model.generate_content( - contents, - request_options=request_options, - generation_config=self.generation_config, - ) - except google.api_core.exceptions.InvalidArgument: + response = self.client.models.generate_content( + model=self.language_model, + contents=contents, + config=self.config, + ) + except: return self.model_error_message if log_level == "VERBOSE" or log_level == "DEBUG": print("[Response JSON]") print(response) print() - for chunk in response: - if not hasattr(chunk, "candidates"): - return self.model_error_message - if len(chunk.candidates) == 0: - return self.model_error_message - if not hasattr(chunk.candidates[0], "content"): - return self.model_error_message - if str(chunk.candidates[0].content) == "": - return self.model_error_message - return response.text + try: + for part in response.candidates[0].content.parts: + # If the response contains an image, save it to the specified path. + if part.inline_data is not None: + image = Image.open(BytesIO((part.inline_data.data))) + image.save(image_output_path) + # Return a message indicating that the image was generated from + # the prompt. + return f"Image generated from your prompt." + if part.text is not None: + return part.text + except: + return self.model_error_message + + async def generate_content_async( + self, + contents: typing.List[typing.Dict[str, typing.Any]], + tools: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None, + ) -> typing.Dict[str, typing.Any]: + """ + Generates content asynchronously using the Gemini model. + + Args: + contents: The conversation history as a list of dictionaries. 
+ Expected format: [{"role": str, "parts": [Dict]}] + tools: The tools as a list of dictionaries (FunctionDeclaration format). + + Returns: + A dictionary representing the model's response, including potential + errors or blocking information. + Format: {"role": "model", "parts": [...], "error": Optional[str], "blocked": Optional[bool], ...} + """ + if self.language_model is None: + return {"error": f"Unsupported model: {self.language_model}", "role": "model", "parts": []} + logging.info(f"Gemini: Generating content asynchronously for model: {self.language_model}") + gemini_contents = [] + try: + for item in contents: + if isinstance(item, dict) and "role" in item and "parts" in item: + # Convert parts based on role + converted_parts = [] + for part_dict in item["parts"]: + if "text" in part_dict: + converted_parts.append(types.Part(text=part_dict["text"])) + elif "function_call" in part_dict: + fc_dict = part_dict["function_call"] + converted_parts.append(types.Part(function_call=types.FunctionCall(**fc_dict))) + elif "function_response" in part_dict: + fr_dict = part_dict["function_response"] + converted_parts.append(types.Part(function_response=types.FunctionResponse(**fr_dict))) + gemini_contents.append(types.Content(role=item["role"], parts=converted_parts)) + else: + logging.warning(f"Skipping invalid content item during conversion: {item}") + except Exception as e: + logging.error(f"Error converting generic contents to Gemini format: {e}") + return {"error": f"Content conversion failed: {e}", "role": "model", "parts": []} + + gemini_tools = None + if tools: + try: + declarations = [] + for tool_dict in tools: + if "name" in tool_dict and "description" in tool_dict and "parameters" in tool_dict: + params = tool_dict["parameters"] + if not isinstance(params, dict): + logging.warning(f"Tool '{tool_dict['name']}' has non-dict parameters: {type(params)}. 
Attempting to use anyway.") + declarations.append(types.FunctionDeclaration(**tool_dict)) + else: + logging.warning(f"Skipping invalid tool dict during conversion: {tool_dict}") + if declarations: + gemini_tools = [types.Tool(function_declarations=declarations)] + logging.info(f"Converted {len(declarations)} generic tools to Gemini format.") + else: + logging.warning("No valid generic tools found to convert for Gemini.") + except Exception as e: + logging.error(f"Error converting generic tools to Gemini format: {e}") + return {"error": f"Tool conversion failed: {e}", "role": "model", "parts": []} + + # --- Prepare API Call --- + model_config = {} + if self.safety_settings: + model_config["safety_settings"] = self.safety_settings + + if gemini_tools: + model_config["tools"] = gemini_tools + logging.info("Added converted tools to Gemini API call config.") + + # --- Call API and Process Response --- + try: + response = await self.client.aio.models.generate_content( + model=self.language_model, + contents=gemini_contents, + config=model_config, + ) + + # --- Convert google.genai response to generic dictionary --- + response_dict = {"role": "model", "parts": []} + finish_reason = None + block_reason = None + safety_ratings = [] + + # Check for blocking via prompt_feedback first + if hasattr(response, "prompt_feedback") and response.prompt_feedback: + block_reason = getattr(response.prompt_feedback, "block_reason", None) + if block_reason: + response_dict["blocked"] = True + response_dict["block_reason"] = block_reason.name # Or str(block_reason) + response_dict["error"] = f"Prompt blocked due to {block_reason.name}" + logging.error(f"Prompt blocked by API. 
Reason: {block_reason.name}") + return response_dict + + if hasattr(response, "candidates") and response.candidates: + candidate = response.candidates[0] + finish_reason = getattr(candidate, "finish_reason", None) + safety_ratings = getattr(candidate, "safety_ratings", []) + + # Check for blocking via finish_reason or safety_ratings + if finish_reason == types.FinishReason.SAFETY: + response_dict["blocked"] = True + response_dict["block_reason"] = finish_reason.name + response_dict["error"] = f"Response blocked due to {finish_reason.name}" + logging.error(f"Response blocked by safety settings. Finish Reason: {finish_reason.name}, Ratings: {safety_ratings}") + return response_dict + + parts_list: List[Dict[str, Any]] = response_dict["parts"] + if hasattr(candidate, "content") and candidate.content and hasattr(candidate.content, "parts"): + for part in candidate.content.parts: + part_dict = {} + if hasattr(part, "text") and part.text: + part_dict["text"] = part.text + if hasattr(part, "function_call") and part.function_call: + # Convert FunctionCall object to dict + fc = part.function_call + part_dict["function_call"] = {"name": fc.name, "args": dict(fc.args)} + + if part_dict: + # Append to the extracted list variable + parts_list.append(part_dict) + + # Add finish reason if needed for downstream logic + if finish_reason: + response_dict["finish_reason"] = finish_reason.name + + return response_dict + + except Exception as e: + logging.error(f"Gemini: Async generate_content call failed: {type(e).__name__}: {e}") + # Return error as dict + return {"error": f"API call failed: {type(e).__name__}: {e}", "role": "model", "parts": []} + + def upload_file(self, file): + print(f"Uploading file...") + uploaded_file = self.client.files.upload(file=file) + print(f"Completed upload: {uploaded_file.uri}") + return uploaded_file + + def get_file(self, file): + while file.state.name == "PROCESSING": + time.sleep(10) + file = self.client.files.get(name=file.name) + + if 
file.state.name == "FAILED":
+            print(f"Failed to get file: {file.name}")
+            raise ValueError(file.state.name)
+        return file

-    # Use this method for talking to a Gemini content model
-    # Optionally provide a prompt, if not use the one from config.yaml
     def ask_content_model_with_context_prompt(
         self,
         context: str,
@@ -194,15 +350,10 @@ def ask_content_model_with_context_prompt(
     ):
         if prompt == None:
             prompt = self.prompt_condition
-        # elif prompt == "fact_checker":
-        #     prompt = self.fact_check_question
         new_prompt = f"{prompt}\n\nQuestion: {question}\n\nContext:\n{context}"
-        # Print the prompt for debugging if the log level is VERBOSE.
-        # if LOG_LEVEL == "VERBOSE":
-        #     self.print_the_prompt(new_prompt)
         try:
             response = self.generate_content(new_prompt)
-        except google.api_core.exceptions.InvalidArgument:
+        except Exception:
             return self.model_error_message
         if log_level == "VERBOSE" or log_level == "DEBUG":
             print("[Response JSON]")
@@ -219,22 +370,104 @@ def ask_content_model_with_context_prompt(
             return self.model_error_message
         return response.text, new_prompt

-    # Use this method for uploading a file to File API such as Video
-    # Returns the name of the uploaded file
-    def upload_file(self, file):
-        print(f"Uploading file...")
-        uploaded_file = google.generativeai.upload_file(path=file)
-        print(f"Completed upload: {uploaded_file.uri}")
-        return uploaded_file
+    def ask_about_file(self, prompt: str, file_path: str):
+        """
+        Use this method for asking the Gemini model about a file.

-    # Use this method for retrieving a file from the File API such as Video
-    # Returns the file object
-    def get_file(self, file):
-        while file.state.name == "PROCESSING":
-            time.sleep(10)
-            file = google.generativeai.get_file(file.name)

+        Args:
+            prompt (str): The prompt to use for the model.
+            file_path (str): The path to the file.
-        if file.state.name == "FAILED":
-            print(f"Failed to get file: {file.name}")
-            raise ValueError(file.state.name)
-        return file
+        Returns:
+            str: The response from the model, or raises an exception.
+        """
+        file_size = os.path.getsize(file_path)
+        # Unused value is the encoding
+        mime_type, _ = mimetypes.guess_type(file_path)
+
+        if not mime_type:
+            logging.error(f"Could not determine MIME type for {file_path}")
+            raise ValueError(f"Could not determine MIME type for {file_path}")
+
+        if mime_type.startswith("image/"):
+            if not prompt:
+                prompt = "Describe this image:"
+            # 7MB limit
+            max_size = 7 * 1024 * 1024
+            if file_size > max_size:
+                logging.error(
+                    f"Image file {file_path} exceeds size limit ({file_size} > {max_size} bytes)."
+                )
+                raise ValueError(f"Image file {file_path} exceeds size limit.")
+            try:
+                file_content = open_image(file_path)

+            except Exception as e:
+                logging.exception(f"Error reading image file {file_path}: {e}")
+                raise
+
+        elif mime_type.startswith("audio/"):
+            if not prompt:
+                prompt = "Describe this audio clip:"
+            # 20MB limit
+            max_size = 20 * 1024 * 1024
+            if file_size > max_size:
+                logging.error(
+                    f"Audio file {file_path} exceeds size limit ({file_size} > {max_size} bytes)."
+                )
+                raise ValueError(f"Audio file {file_path} exceeds size limit.")
+            try:
+                audio_clip_uploaded = self.upload_file(file_path)
+                file_content = self.get_file(audio_clip_uploaded)
+            except Exception as e:
+                logging.exception(f"Error reading audio file {file_path}: {e}")
+                raise
+
+        elif mime_type.startswith("video/"):
+            if not prompt:
+                prompt = "Describe this video clip:"
+            # 2GB limit
+            max_size = 2 * 1024 * 1024 * 1024
+            if file_size > max_size:
+                logging.error(
+                    f"Video file {file_path} exceeds size limit ({file_size} > {max_size} bytes)."
+                )
+                raise ValueError(f"Video file {file_path} exceeds size limit.")
+            # Upload video and get file object
+            try:
+                video_clip_uploaded = self.upload_file(file_path)
+                file_content = self.get_file(video_clip_uploaded)

+            except Exception as e:
+                logging.exception(
+                    f"Error uploading or processing video file {file_path}: {e}"
+                )
+                raise
+        else:
+            logging.error(f"Unsupported file type: {mime_type}")
+            raise ValueError(f"Unsupported file type: {mime_type}")
+
+        # Gemini multimodal models
+        # TODO: Remove the need for the "models/" prefix; scripts still pass it.
+        gemini_multimodal_models = [
+            "models/gemini-1.5",
+            "models/gemini-2.0",
+            "models/gemini-2.5",
+            "gemini-1.5",
+            "gemini-2.0",
+            "gemini-2.5",
+        ]
+        if any(
+            self.language_model.startswith(model) for model in gemini_multimodal_models
+        ):
+            try:
+                response = self.generate_content([prompt, file_content])
+                return response
+            except Exception:
+                logging.exception("Error generating content")
+                raise
+        else:
+            logging.error(
+                f"The {self.language_model} doesn't support image, audio, or video processing."
+            )
+            raise ValueError(
+                f"The {self.language_model} doesn't support image, audio, or video processing."
+            )
diff --git a/examples/gemini/python/docs-agent/docs_agent/models/llm.py b/examples/gemini/python/docs-agent/docs_agent/models/llm.py
new file mode 100644
index 000000000..391a1fc6a
--- /dev/null
+++ b/examples/gemini/python/docs-agent/docs_agent/models/llm.py
@@ -0,0 +1,50 @@
+#
+# Copyright 2023 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import typing
+
+from docs_agent.models.base import GenerativeLanguageModel
+from docs_agent.models.google_genai import Gemini
+from docs_agent.utilities.config import Models
+
+
+class GenerativeLanguageModelFactory:
+    """Factory class for creating generative language models."""
+
+    @staticmethod
+    def create_model(
+        model_type: str,
+        models_config: Models,
+        conditions: typing.Optional[typing.Any] = None,
+    ) -> GenerativeLanguageModel:
+        """Creates a generative language model."""
+        # Remove the legacy "models/" prefix if it is present.
+        if model_type.startswith("models/"):
+            model_type = model_type.removeprefix("models/")
+        if model_type.startswith("gemini"):
+            return Gemini(models_config=models_config, conditions=conditions)
+        # Embedding model names (for example, text-embedding-004) do not
+        # start with "gemini", so handle them separately.
+        if model_type.startswith(("text-embedding", "embedding", "gemini-embedding")):
+            return Gemini(models_config=models_config, conditions=conditions)
+        elif model_type == "aqa":
+            gemini_config = Models(
+                language_model="gemini-2.0-flash",
+                embedding_model=models_config.embedding_model,
+                api_endpoint=models_config.api_endpoint,
+            )
+            return Gemini(models_config=gemini_config, conditions=conditions)
+        else:
+            raise ValueError(f"Unsupported model type: {model_type}")
diff --git a/examples/gemini/python/docs-agent/docs_agent/models/tools/__init__.py b/examples/gemini/python/docs-agent/docs_agent/models/tools/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/examples/gemini/python/docs-agent/docs_agent/models/tools/base.py b/examples/gemini/python/docs-agent/docs_agent/models/tools/base.py
new file mode 100644
index 000000000..84d21ecc6
--- /dev/null
+++ b/examples/gemini/python/docs-agent/docs_agent/models/tools/base.py
@@ -0,0 +1,46 @@
+#
+# Copyright 2023 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import abc +from typing import List, Dict, Any + +class Tools(abc.ABC): + """ + Abstract base class for tools. + """ + + @abc.abstractmethod + async def list_tools(self) -> List[Any]: + """ + Lists the tools available in the session. + + Returns: + List[Any]: A list of tool objects, or an empty list if no tools + are found or an error occurs. + """ + pass + + @abc.abstractmethod + async def execute_tool(self, func_call: Any) -> Dict[str, Any]: + """ + Executes a tool call. + + Args: + func_call (Any): The function call object (e.g., from Gemini). + + Returns: + Dict[str, Any]: A dictionary containing the tool's result or an error. + """ + pass diff --git a/examples/gemini/python/docs-agent/docs_agent/models/tools/mcp_client.py b/examples/gemini/python/docs-agent/docs_agent/models/tools/mcp_client.py new file mode 100644 index 000000000..8a4a52f05 --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/models/tools/mcp_client.py @@ -0,0 +1,272 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +import json +import logging +from typing import Any, Dict, List, Optional + +import mcp +from docs_agent.utilities.config import MCPServerConfig +from mcp.client import stdio +from mcp.client import sse + +from docs_agent.models.tools.base import Tools + +class MCPService(Tools): + """ + A service class that interacts with the MCP (Model Context Protocol) client. + + This class provides methods to list available tools and execute tool calls + within an active MCP session. + """ + + def __init__(self, config: MCPServerConfig, verbose: bool = False): + self.config = config + self.name = config.name + self.verbose = verbose + self._client_context = None + self.session: Optional[mcp.ClientSession] = None + self._read = None + self._write = None + self._stdio_params: Optional[mcp.StdioServerParameters] = None + + # Prepare StdioServerParameters + if self.config.server_type == "stdio": + self._stdio_params = mcp.StdioServerParameters( + command=self.config.command, + args=self.config.args, + env= self.config.env + ) + logging.info( + f"MCPService configured for STDIO connection: {self._stdio_params.command} {' '.join(self._stdio_params.args)}" + f"{f' with env {self.config.env}' if self.config.env else ''}" + ) + elif self.config.server_type == "sse": + logging.info(f"MCPService configured for SSE connection: {self.config.url}") + else: + raise ValueError( + f"Unsupported MCP server_type: {self.config.server_type}. Must be 'stdio' or 'sse'." + ) + + async def __aenter__(self): + """ + Connects to the MCP server and initializes the client session. + + Returns: + MCPService: The initialized MCPService instance. + + Raises: + ValueError: If stdio_params are missing for stdio server type. + ImportError: If required libraries for SSE client are missing. 
+ """ + server_type_upper = self.config.server_type.upper() + logging.info(f"Attempting to connect through {server_type_upper}...") + if self.config.server_type == "stdio": + if not self._stdio_params: + raise RuntimeError( + "Internal Error: Stdio parameters not initialized for stdio connection." + ) + self._client_context = stdio.stdio_client(self._stdio_params) + elif self.config.server_type == "sse": + self._client_context = sse.sse_client(self.config.url) + else: + raise RuntimeError( + f"Internal Error: Unexpected server type {self.config.server_type}" + ) + + try: + # Enter the client context (stdio.stdio_client or sse.sse_client) + self._read, self._write = await self._client_context.__aenter__() + logging.info(f"MCP {server_type_upper} client connected successfully.") + + # Create and initialize the mcp.ClientSession + self.session = mcp.ClientSession(self._read, self._write) + await self.session.__aenter__() + await self.session.initialize() + logging.info("MCP Session initialized.") + except Exception as e: + logging.error(f"Failed to establish MCP connection or session: {e}") + # Clean up context if connection failed partially + if ( + self._client_context and self.session is None + ): # Connection started but session failed + try: + await self._client_context.__aexit__(type(e), e, e.__traceback__) + except Exception as cleanup_e: + logging.error( + f"Error during cleanup after connection failure: {cleanup_e}" + ) + self.session = None + self._client_context = None + self._read = None + self._write = None + raise # Re-raise the original exception + + # Return the MCPService instance + return self + + async def __aexit__(self, exc_type, exc_val, exc_tb): + """ + Closes the MCP session and the client connection. + + Args: + exc_type: The type of the exception that occurred, or None. + exc_val: The value of the exception that occurred, or None. + exc_tb: The traceback of the exception that occurred, or None. 
+ """ + logging.info("Closing MCP Session and Connection...") + session_closed = False + if self.session: + try: + await self.session.__aexit__(exc_type, exc_val, exc_tb) + session_closed = True + logging.info("MCP Session closed.") + except Exception as e: + logging.error(f"Error closing MCP session: {e}") + else: + logging.warning("No active MCP session to close.") + + if self._client_context: + try: + server_type_upper = self.config.server_type.upper() + await self._client_context.__aexit__(exc_type, exc_val, exc_tb) + logging.info(f"MCP {server_type_upper} client connection closed.") + except Exception as e: + logging.error(f"Error closing MCP client context: {e}") + else: + logging.warning("No active MCP client context to close.") + + # Reset state + self.session = None + self._client_context = None + self._read = None + self._write = None + + async def list_tools(self) -> List[Any]: + """ + Lists the available tools in the MCP session. + + Returns: + List[Any]: A list of tool objects from the MCP session. + + Raises: + RuntimeError: If the MCP session is not active. + """ + if not self.session: + raise RuntimeError( + "MCP session not active. 
Use 'async with MCPService(...):'" + ) + logging.info("Listing tools from MCP session via MCPService...") + mcp_tools = [] + try: + mcp_tools_response = await self.session.list_tools() + mcp_tools = mcp_tools_response.tools + tool_count = len(mcp_tools) if hasattr(mcp_tools, "__len__") else "unknown" + logging.info(f"Found {tool_count} tools.") + if self.verbose and mcp_tools: + print("\n--- MCP Tool Details ---") + i = 0 + for tool in mcp_tools: + i += 1 + print(f"Tool {i}/{tool_count}:") + tool_name = getattr(tool, "name", "N/A") + tool_desc = getattr(tool, "description", "N/A") + tool_schema = getattr(tool, "inputSchema", {}) + print(f" Name: {tool_name}") + print(f" Description: {tool_desc}") + if tool_schema: + try: + print( + f" Input Schema (Original):\n{json.dumps(dict(tool_schema), indent=4)}" + ) + except Exception as schema_e: + print( + f" Input Schema: (Error print: {schema_e}) Raw: {tool_schema}" + ) + else: + print(" Input Schema: {}") + print("-" * 20) + print("--- End MCP Tool Details ---\n") + + except Exception as e: + logging.error(f"Error listing tools via MCPService: {e}") + mcp_tools = [] + + return mcp_tools + + async def execute_tool(self, func_call: Dict[str, Any]) -> Dict[str, Any]: + """ + Executes a tool call in the MCP session. + + Args: + func_call (Dict[str, Any]): The function call dictionary from the model. + + Returns: + Dict[str, Any]: A dictionary containing the tool's result or an error. 
+ """ + if not self.session: + raise RuntimeError("MCP session not active.'") + if not isinstance(func_call, dict) or not func_call.get("name"): + logging.warning(f"Skipping invalid/empty function call dictionary: {func_call}") + return {"error": "Invalid function call dictionary received."} + + tool_name = func_call["name"] + args = func_call.get("args", {}) + + logging.info( + f'Calling MCP tool via MCPService: "{tool_name}" with args: {args}' + ) + tool_response_content: Dict[str, Any] + + try: + tool_result = await self.session.call_tool(tool_name, args) + logging.info(f'MCP tool "{tool_name}" executed via MCPService.') + is_error = getattr(tool_result, "isError", False) + content_parts = getattr(tool_result, "content", []) + result_text = None + if content_parts and hasattr(content_parts[0], "text"): + result_text = content_parts[0].text + + if is_error: + error_msg = result_text if result_text else "Unknown tool error" + logging.error(f'Tool "{tool_name}" returned an error: {error_msg}') + tool_response_content = {"error": error_msg} + elif result_text is not None: + log_msg = ( + f'Tool "{tool_name}" result: {result_text[:100]}' + f"{'...' if len(result_text) > 100 else ''}" + ) + if self.verbose and len(result_text) > 100: + log_msg += f"\nFull result:\n{result_text}" + logging.info(log_msg) + tool_response_content = {"result": result_text} + else: + logging.warning( + f'Tool "{tool_name}" succeeded but returned no standard text content. Raw result: {tool_result}' + ) + tool_response_content = {"result": ""} + + except Exception as e: + logging.critical( + f'!! 
Exception during MCP tool execution "{tool_name}" via MCPService: {type(e).__name__}: {e}' + ) + if self.verbose: + import traceback + traceback.print_exc() + tool_response_content = { + "error": f"MCP Execution Failed: {type(e).__name__}: {e}" + } + + return tool_response_content diff --git a/examples/gemini/python/docs-agent/docs_agent/models/tools/tool_manager.py b/examples/gemini/python/docs-agent/docs_agent/models/tools/tool_manager.py new file mode 100644 index 000000000..aa50d203a --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/models/tools/tool_manager.py @@ -0,0 +1,744 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import typing +from typing import List, Dict, Any, Optional +import json +import contextlib +from absl import logging + +from docs_agent.utilities.config import ProductConfig +from docs_agent.models.tools.base import Tools +from docs_agent.models.tools.tools import ToolsFactory +from docs_agent.models.base import GenerativeLanguageModel + + +class ToolManager: + """ + Manages and orchestrates interactions with various tool services. + + This class handles the initialization, connection, and execution of tools + from different tool services (e.g., MCP). It formats tool information for + use with GenerativeLanguageModels and manages the multi-turn interaction + loop, including tool execution and response processing. 
+ """ + def __init__(self, config: ProductConfig): + self.config = config + self.tool_services: List[Tools] = [] + if self.config.mcp_servers: + try: + self.tool_services = ToolsFactory.create_tool_service( + mcp_servers=self.config.mcp_servers, + tool_service_type="mcp", + ) + logging.info( + f"ToolManager initialized with {len(self.tool_services)} tool service instance(s)." + ) + except Exception as e: + logging.error(f"ToolManager failed to initialize tool services: {e}") + self.tool_services = [] + else: + logging.info( + "ToolManager initialized without any tool services configured." + ) + + def clean_openapi_schema( + self, schema_data, keys_to_remove={"title", "default", "additionalProperties", "$schema"} + ): + """ + Recursively cleans an OpenAPI schema by removing specified keys and + handling `anyOf` compositions. This is necessary because MCP returns + schemas that are not always valid with Gemini models. + + Args: + schema_data (dict or list): The OpenAPI schema data to clean. + keys_to_remove (set): A set of keys to remove from the schema. + + Returns: + dict or list: The cleaned schema data. 
+ """ + if isinstance(schema_data, dict): + if "anyOf" in schema_data: + any_of_list = schema_data.get("anyOf", []) + chosen_schema = None + for sub_schema in any_of_list: + if ( + isinstance(sub_schema, dict) + and sub_schema.get("type") != "null" + ): + chosen_schema = sub_schema + break + if chosen_schema is None and any_of_list: + chosen_schema = any_of_list[0] + if not isinstance(chosen_schema, dict): + logging.warning( + f"anyOf contained non-dict element, cannot determine type for: {schema_data}" + ) + chosen_schema = None + if chosen_schema: + new_schema = chosen_schema.copy() + for k, v in schema_data.items(): + if k not in ["anyOf", *keys_to_remove]: + new_schema[k] = v + return self.clean_openapi_schema(new_schema, keys_to_remove) + else: + logging.warning( + f'Could not extract valid type from "anyOf", creating minimal schema for: {schema_data}' + ) + minimal_schema = { + k: v + for k, v in schema_data.items() + if k not in ["anyOf", *keys_to_remove] + } + if "type" not in minimal_schema: + minimal_schema["type"] = "string" + logging.warning( + f'--> Defaulting to type: "string" for problematic anyOf schema.' 
+ ) + return self.clean_openapi_schema(minimal_schema, keys_to_remove) + + cleaned_dict = {} + for k, v in schema_data.items(): + if k not in keys_to_remove: + cleaned_dict[k] = self.clean_openapi_schema(v, keys_to_remove) + + if "properties" in cleaned_dict and "type" not in cleaned_dict: + cleaned_dict["type"] = "object" + if ( + cleaned_dict + and "type" not in cleaned_dict + and "properties" not in cleaned_dict + and "items" not in cleaned_dict + ): + potential_type_indicators = {"format", "enum"} + if any(key in cleaned_dict for key in potential_type_indicators): + logging.warning( + f'Cleaned schema seems to be missing "type", defaulting to "string": {cleaned_dict}' + ) + cleaned_dict["type"] = "string" + return cleaned_dict + elif isinstance(schema_data, list): + return [ + self.clean_openapi_schema(item, keys_to_remove) for item in schema_data + ] + else: + return schema_data + + def format_tools_for_model( + self, raw_tools: List[Any], verbose: bool = False + ) -> List[Dict[str, Any]]: + """ + Formats a list of raw tool objects (e.g., from MCP) into a generic list + of dictionaries suitable for the GenerativeLanguageModel interface. + + Args: + raw_tools (List[Any]): A list of raw tool objects from a tool service. + verbose (bool): Enable verbose logging during formatting. + + Returns: + List[Dict[str, Any]]: A list of dictionaries, each representing a + tool's function declaration. 
+ """ + generic_tools: List[Dict[str, Any]] = [] + if not raw_tools: + logging.info("No raw tools provided for formatting.") + return generic_tools + + logging.info("Attempting to format raw tools into generic model format...") + skipped_count = 0 + for tool in raw_tools: + tool_name = getattr(tool, "name", None) + tool_desc = getattr(tool, "description", None) + # Check for essential attributes before proceeding + if not (tool_name and tool_desc and hasattr(tool, "inputSchema")): + logging.warning( + f"Skipping tool \"{tool_name or 'Unnamed'}\" due to missing required attributes (name, description, or inputSchema)." + ) + skipped_count += 1 + continue + + original_schema = getattr(tool, "inputSchema", {}) + cleaned_schema = {} + try: + # Ensure we have a dict to clean + schema_dict_to_clean = ( + dict(original_schema) + if not isinstance(original_schema, dict) + else original_schema + ) + cleaned_schema = self.clean_openapi_schema(schema_dict_to_clean) + if not isinstance(cleaned_schema, dict): + logging.warning( + f'Schema cleaning for tool "{tool_name}" did not result in a dictionary. Attempting to use original schema as dict.' + ) + # Fallback attempt + cleaned_schema = ( + dict(original_schema) + if not isinstance(original_schema, dict) + else original_schema + ) + if not isinstance(cleaned_schema, dict): + raise TypeError( + "Cleaned schema and original schema could not be represented as a dictionary." + ) + + except Exception as clean_e: + logging.warning( + f'Could not clean schema for tool "{tool_name}": {clean_e}. Trying original schema as dict.' + ) + try: + # Ensure the fallback is also a dict + cleaned_schema = ( + dict(original_schema) + if not isinstance(original_schema, dict) + else original_schema + ) + if not isinstance(cleaned_schema, dict): + raise TypeError( + "Original schema could not be represented as a dictionary." 
+ ) + except Exception as fallback_e: + logging.error( + f'Could not use original or cleaned schema for "{tool_name}": {fallback_e}. Skipping tool.' + ) + skipped_count += 1 + continue + generic_tool_dict = { + "name": tool_name, + "description": tool_desc, + "parameters": cleaned_schema, + } + generic_tools.append(generic_tool_dict) + + if verbose: + try: + logging.info( + f" Formatted tool '{tool_name}': description='{tool_desc[:50]}...', schema={json.dumps(cleaned_schema, indent=2)}" + ) + except Exception as log_e: + logging.info( + f" Formatted tool '{tool_name}' (logging schema failed: {log_e})" + ) + + valid_tool_count = len(generic_tools) + logging.info( + f"Formatted {valid_tool_count} tools into generic structure (skipped {skipped_count})." + ) + + return generic_tools + + async def _execute_tool_calls( + self, function_calls: List[Any], tool_to_service_map: typing.Dict[str, Tools] + ) -> List[Dict[str, Any]]: + """ + Executes a list of function calls (tool calls) using the appropriate tool services. + + Args: + function_calls (List[Any]): A list of function call objects. + tool_to_service_map (typing.Dict[str, Tools]): A dictionary mapping tool names to their + corresponding service instances. + + Returns: + List[Dict[str, Any]]: A list of function response parts (as dicts). + """ + function_response_parts_list: List[Dict[str, Any]] = [] + logging.info(f"Executing {len(function_calls)} tool call(s).") + + for func_call in function_calls: + tool_name = func_call.get("name", "unknown_tool") + # Find the correct service instance from the map + target_service = tool_to_service_map.get(tool_name) + + if not target_service: + logging.error( + f"Could not find active tool service for tool '{tool_name}'. Skipping call." + ) + # Optionally return an error part for the model + error_response = { + "error": f"Tool '{tool_name}' not found in active sessions." 
+ } + function_response_parts_list.append( + { + "function_response": { + "name": tool_name, + "response": error_response, + } + } + ) + continue + + # Call execute_tool on the found Tools instance + tool_response_content = await target_service.execute_tool(func_call) + function_response_parts_list.append( + { + "function_response": { + "name": tool_name, + "response": tool_response_content, + } + } + ) + return function_response_parts_list + + def _extract_final_text_from_history(self, contents: List[Dict[str, Any]]) -> str: + """ + Extracts the final text response from the conversation history (list of dicts). + + Args: + contents (List[Dict[str, Any]]): A list of conversation content dictionaries. + + Returns: + str: The final text response, or an error message if not found. + """ + final_text = "" + try: + last_model_response_content = None + # Find the last "model" content object in the history + for item in reversed(contents): + if isinstance(item, dict) and item.get("role") == "model": + last_model_response_content = item + break + + if last_model_response_content is not None: + parts = last_model_response_content.get("parts", []) + if parts: + for part in parts: + # Check if part is a dict and has text, but not function_call + if isinstance(part, dict): + text = part.get("text") + is_function_call = "function_call" in part + if text and not is_function_call: + final_text += text + if not final_text: + logging.info( + "Last model response found but contained no text parts." 
+ ) + else: + logging.info("Last model response found in history had no parts.") + else: + logging.info('No "model" role content found in the final history.') + + except Exception as e: + logging.error( + f"Error extracting final text from history: {type(e).__name__}: {e}" + ) + return "[ERR Extracting text from history]" + return final_text + + def _extract_final_response_text( + self, contents: List[Dict[str, Any]], last_response: Optional[Dict[str, Any]] + ) -> str: + """ + Extracts the final text response from the conversation history or the last API response. + + Args: + contents (List[Dict[str, Any]]): A list of conversation content dictionaries. + last_response (Optional[Dict[str, Any]]): The last API response object (as dict) + + Returns: + str: The final text response, or an error message if not found. + """ + # 1. Try extracting from history first + final_text = self._extract_final_text_from_history(contents) + + # 2. Fallback using the very last API response if history extraction failed/empty + if not final_text or final_text.startswith("[ERR"): + logging.info( + "No text found in last model history item, trying last API response." + ) + fallback_text = "" + try: + # Check the dictionary structure of last_response + if last_response and isinstance(last_response, dict): + # Attempt to find text parts in the last response dict + response_parts = last_response.get("parts", []) + if isinstance(response_parts, list): + for part in response_parts: + if isinstance(part, dict): + text = part.get("text") + is_function_call = "function_call" in part + if text and not is_function_call: + fallback_text += text + logging.info( + "(Fallback text extracted from last API response parts)" + ) + break + except Exception as e: + logging.error(f"Error during fallback text extraction: {e}") + + if fallback_text: + final_text = fallback_text + elif not final_text or final_text.startswith("[ERR"): + final_text = "[Agent loop finished. 
No final text content found.]" + logging.warning("No final text found in history or last response.") + + return final_text + + async def _setup_tool_services( + self, stack: contextlib.AsyncExitStack + ) -> tuple[List[Tools], Dict[str, Tools], List[Any]]: + """ + Sets up connections to all configured tool services, retrieves their tools, + and creates a mapping for tool name to service instance. + + Args: + stack (contextlib.AsyncExitStack): An async exit stack for managing + the tool service connections. + + Returns: + tuple[List[Tools], Dict[str, Tools], List[Any]]: A tuple containing: + - A list of active tool service instances. + - A dictionary mapping tool names to their corresponding service instances. + - A list of all raw tools retrieved from all services. + """ + active_services: List[Tools] = [] + tool_to_service_map: Dict[str, Tools] = {} + all_raw_tools: List[Any] = [] + + logging.info( + f"Attempting to connect to {len(self.tool_services)} tool service(s)..." + ) + # Enter context for each service + for service in self.tool_services: + try: + active_service_context = await stack.enter_async_context(service) + active_services.append(active_service_context) + service_config_repr = getattr( + service, "config", f"Instance of {type(service).__name__}" + ) + logging.info( + f"Successfully connected to tool service: {service_config_repr}" + ) + + mcp_tools = await active_service_context.list_tools() + all_raw_tools.extend(mcp_tools) + + for tool in mcp_tools: + tool_name = getattr(tool, "name", None) + if tool_name: + if tool_name in tool_to_service_map: + logging.warning( + f"Duplicate tool name '{tool_name}' found across services. Using the one from {service_config_repr}." + ) + tool_to_service_map[tool_name] = active_service_context + else: + logging.warning( + f"Found a tool without a name from service {service_config_repr}. Skipping." 
+ ) + + except Exception as e: + service_config_repr = getattr( + service, "config", f"Instance of {type(service).__name__}" + ) + logging.error( + f"Failed to connect or list tools for service {service_config_repr}: {e}" + ) + + return active_services, tool_to_service_map, all_raw_tools + + async def _run_tool_interaction_loop( + self, + language_model: GenerativeLanguageModel, + initial_contents: List[Dict[str, Any]], + formatted_tools: List[Dict[str, Any]], + tool_to_service_map: Dict[str, Tools], + verbose: bool = False, + max_tool_turns: int = 5, + ) -> tuple[List[Dict[str, Any]], Optional[Dict[str, Any]]]: + contents = list(initial_contents) + last_response: Optional[Dict[str, Any]] = None + + logging.info("\n--- Turn 0: Initial Request ---") + logging.info( + f"Sending prompt {'with aggregated tools' if formatted_tools else '(no tools found/formatted)'} to model {language_model}..." + ) + try: + # Expecting generate_content_async to return a dictionary now + response_dict = await language_model.generate_content_async( + contents=contents[:1], + tools=formatted_tools, + ) + last_response = response_dict + except Exception as e: + logging.error(f"Initial LLM interaction failed: {type(e).__name__}: {e}") + # Attempt to extract error from response if it's a dict + if isinstance(last_response, dict) and last_response.get("error"): + raise RuntimeError(f"ERR: {last_response.get('error')}") from e + raise + + # --- Process Initial Response --- + if not isinstance(last_response, dict): + logging.error(f"Invalid model response type received: {type(last_response)}. 
Expected dict.") + raise RuntimeError("ERR: Invalid response format from model.") + logging.info(f"Received initial response dict: {json.dumps(last_response, indent=2)}") + # Check for errors or blocking indicated in the response dict (implementation specific) + if last_response.get("error"): + logging.error(f"Model response indicates error: {last_response['error']}") + raise RuntimeError(f"ERR: {last_response['error']}") + if last_response.get("blocked"): + reason = last_response.get("block_reason", "Unknown") + logging.error(f"Model response blocked. Reason: {reason}") + raise RuntimeError(f"ERR: Response blocked ({reason})") + + initial_model_content_dict = last_response + current_function_calls = [] + try: + # Add the model's response dictionary to the history + contents.append(initial_model_content_dict) + + response_parts = initial_model_content_dict.get("parts", []) + if isinstance(response_parts, list): + current_function_calls = [ + part["function_call"] + for part in response_parts + if isinstance(part, dict) and "function_call" in part + ] + logging.info(f"Extracted initial function calls: {current_function_calls}") + has_text = any( + isinstance(part, dict) and "text" in part and part["text"] + for part in response_parts + ) + response_summary = ( + "[Function Call]" + if current_function_calls + else "[Text]" + if has_text + else "[Empty Parts]" + ) + logging.info(f"Model initial response: {response_summary}") + if verbose: + logging.info( + f"Model initial response (dict): {json.dumps(initial_model_content_dict, indent=2)}" + ) + else: + logging.warning("Initial model response content dict has no 'parts' list.") + # Ensure history has a model entry even if parts are missing/invalid + if contents[-1] != initial_model_content_dict: + contents.append({"role": "model", "parts": []}) + + except (KeyError, TypeError) as e: + logging.error(f"Error parsing initial response dictionary: {e}") + if verbose and last_response: + try: + logging.info(f"Response 
structure issue. Raw response dict: {json.dumps(last_response, indent=2)}") + except Exception: pass + raise RuntimeError("ERR: Parse initial response dict.") from e + + # --- Tool Calling Loop --- + turn_count = 0 + while current_function_calls and turn_count < max_tool_turns: + turn_count += 1 + logging.info(f"\n--- Turn {turn_count}: Tool Execution ---") + + function_response_parts_list = await self._execute_tool_calls( + current_function_calls, tool_to_service_map + ) + + if function_response_parts_list: + contents.append( + {"role": "function", "parts": function_response_parts_list} + ) + logging.info( + f"Added {len(function_response_parts_list)} tool response(s) to history." + ) + else: + logging.warning("No tool calls successfully processed in this turn.") + break + + logging.info("Requesting next step from Model...") + try: + # Expecting a dictionary response again + response_dict = await language_model.generate_content_async( + contents=contents, # Send full history (list of dicts) + tools=formatted_tools, + ) + last_response = response_dict + logging.info(f"Received subsequent response dict: {json.dumps(last_response, indent=2)}") + except Exception as e: + logging.error( + f"Subsequent model API call failed: {type(e).__name__}: {e}" + ) + # Attempt to extract error from response if it's a dict + if isinstance(last_response, dict) and last_response.get("error"): + raise RuntimeError(f"ERR: {last_response.get('error')}") from e + raise RuntimeError(f"ERR: Subsequent API call: {e}") from e + + # --- Process Subsequent Response (as dict) --- + if not isinstance(last_response, dict): + logging.error(f"Invalid model response type after tool use: {type(last_response)}. 
Expected dict.") + raise RuntimeError("ERR: Invalid response format from model after tool use.") + + # Check for errors or blocking + if last_response.get("error"): + logging.error(f"Model response indicates error: {last_response['error']}") + raise RuntimeError(f"ERR: {last_response['error']}") + if last_response.get("blocked"): + reason = last_response.get("block_reason", "Unknown") + logging.error(f"Model response blocked after tool use. Reason: {reason}") + break + + model_content_dict = last_response + current_function_calls = [] + + try: + # Add the model's response dictionary to the history + contents.append(model_content_dict) + + response_parts = model_content_dict.get("parts", []) + if isinstance(response_parts, list): + current_function_calls = [ + part["function_call"] + for part in response_parts + if isinstance(part, dict) and "function_call" in part + ] + logging.info(f"Extracted subsequent function calls: {current_function_calls}") + has_text = any( + isinstance(part, dict) and "text" in part and part["text"] + for part in response_parts + ) + response_summary = ( + "[Function Call]" + if current_function_calls + else "[Text]" + if has_text + else "[Empty Parts]" + ) + logging.info(f"Model response: {response_summary}") + if verbose: + logging.info(f"Model response (dict): {json.dumps(model_content_dict, indent=2)}") + + # If the response is just text or empty, the loop will naturally end + if not current_function_calls: + logging.info("Model response contains text or is empty, ending tool loop.") + break + + else: + logging.warning("Model response dict has no 'parts' list after tool use.") + # Ensure history has a model entry + if contents[-1] != model_content_dict: + contents.append({"role": "model", "parts": []}) + break + + except (KeyError, TypeError) as e: + logging.error(f"Error parsing subsequent response dictionary: {e}") + if verbose and last_response: + try: + logging.info(f"Response structure issue. 
Raw response dict: {json.dumps(last_response, indent=2)}") + except Exception: pass + break + + # --- End Loop --- + if turn_count >= max_tool_turns and current_function_calls: + logging.warning(f"Max tool turns ({max_tool_turns}) reached.") + elif not current_function_calls: + logging.info("Model finished generating or no further tool calls needed.") + + return contents, last_response + + async def process_prompt_with_tools( + self, + prompt: str, + language_model: GenerativeLanguageModel, + verbose: bool = False, + ): + """ + Processes a user prompt using available tools and a language model. + + This method orchestrates the interaction between the language model and + tool services, managing the multi-turn conversation loop and tool + execution. + + Args: + prompt (str): The user's input prompt. + language_model (GenerativeLanguageModel): The language model instance + to use. + verbose (bool): Enable verbose logging. + + Returns: + str: The final text response from the model, or an error message. + """ + if not self.tool_services: + logging.warning( + "ToolManager.process_prompt_with_tools called, but no tool services are initialized." 
+ ) + # Fallback to simple generation if desired, or raise error + try: + response = await language_model.generate_content_async( + contents=[{"role": "user", "parts": [{"text": prompt}]}], + tools=None, + ) + # Extract text from the expected dictionary response + final_text = "[ERR: Failed to get text from fallback]" + if isinstance(response, dict): + parts = response.get("parts", []) + if isinstance(parts, list): + for part in parts: + if isinstance(part, dict) and "text" in part: + final_text = part["text"] + break + return final_text + except Exception as e: + logging.error(f"LLM interaction failed (no tools): {type(e).__name__}: {e}") + return f"Error: Failed to generate content: {e}" + + final_text: Optional[str] = None + contents: List[Dict[str, Any]] = [{"role": "user", "parts": [{"text": prompt}]}] + last_response: Optional[Dict[str, Any]] = None + + async with contextlib.AsyncExitStack() as stack: + try: + ( + active_services, + tool_to_service_map, + all_raw_tools, + ) = await self._setup_tool_services(stack) + + if not active_services: + logging.error("Failed to establish connection with any tool service.") + # Set error text directly if connection fails + final_text = "[ERR: Failed to connect to any tool service]" + else: + # Proceed only if services are active + formatted_tools = self.format_tools_for_model(all_raw_tools, verbose) + + logging.info("--- Starting Loop (within ToolManager) ---") + # Run the loop + contents, last_response = await self._run_tool_interaction_loop( + language_model=language_model, + initial_contents=contents, + formatted_tools=formatted_tools, + tool_to_service_map=tool_to_service_map, + verbose=verbose, + ) + logging.info("\n--- Finished loop (within ToolManager) ---") + + except Exception as e: + logging.error( + f"Error during tool processing in ToolManager: {type(e).__name__}: {e}" + ) + # Set final_text only if an exception occurs + final_text = f"[ERR: {type(e).__name__} - {e}]" + if isinstance(last_response, dict) and 
last_response.get("error"): + final_text = f"[ERR: {last_response.get('error')}]" + + # Extract final text only if no error was explicitly set during try/except + if final_text is None: + if last_response is not None or contents: + # Call extraction method if loop completed successfully + final_text = self._extract_final_response_text(contents, last_response) + else: + final_text = "[ERR: No response or history available after loop]" + + # Ensure we always return a string + return final_text if final_text is not None else "[ERR: Unknown processing state]" diff --git a/examples/gemini/python/docs-agent/docs_agent/models/tools/tools.py b/examples/gemini/python/docs-agent/docs_agent/models/tools/tools.py new file mode 100644 index 000000000..6a5f9ad77 --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/models/tools/tools.py @@ -0,0 +1,76 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from typing import List +from absl import logging +from docs_agent.models.tools.base import Tools +from docs_agent.models.tools.mcp_client import MCPService +from docs_agent.utilities.config import MCPServerConfig + + +class ToolsFactory: + """ + A factory class for creating tool service instances based on the specified type. 
+ """ + + @staticmethod + def create_tool_service( + mcp_servers: list[MCPServerConfig], + tool_service_type: str, + ) -> List[Tools]: + """ + Creates tool service instances based on the specified type and configurations. + + Args: + mcp_servers: A list of MCPServerConfig objects. + tool_service_type: The type of tool service to create ('mcp' is the only supported type). + + Returns: + A list of Tools instances. + + Raises: + ValueError: If an unsupported tool_service_type is provided. + """ + tool_services: List[Tools] = [] + + if not mcp_servers: + logging.info( + "No MCP servers defined in the configuration. No tool services created." + ) + return tool_services + + if tool_service_type.lower() != "mcp": + raise ValueError( + f"Unsupported tool_service_type: '{tool_service_type}'. Only 'mcp' is currently supported." + ) + + logging.info( + f"Creating MCPService instances for {len(mcp_servers)} configured server(s)..." + ) + for mcp_server_config in mcp_servers: + try: + # Create an instance for each server config + service_instance = MCPService(config=mcp_server_config) + tool_services.append(service_instance) + logging.info( + f" Successfully created MCPService instance for server: {mcp_server_config}" + ) + except Exception as e: + logging.error( + f"Failed to instantiate MCPService for config {mcp_server_config}: {e}" + ) + + logging.info(f"Created {len(tool_services)} MCPService instance(s).") + return tool_services diff --git a/examples/gemini/python/docs-agent/docs_agent/postprocess/docs_retriever.py b/examples/gemini/python/docs-agent/docs_agent/postprocess/docs_retriever.py index 885436f77..d71e8374e 100644 --- a/examples/gemini/python/docs-agent/docs_agent/postprocess/docs_retriever.py +++ b/examples/gemini/python/docs-agent/docs_agent/postprocess/docs_retriever.py @@ -16,6 +16,7 @@ # from markdown import markdown # from bs4 import BeautifulSoup # import re, os +import typing from docs_agent.models import tokenCount from 
docs_agent.preprocess.splitters.markdown_splitter import Section as Section @@ -166,7 +167,6 @@ def returnParentSection(self, section_id, token_limit: float = float("inf")): # If Section doesn't match, just return a FullPage with a blank list if not match: print(f"Could not find a section with the provided ID {section_id}") - # return FullPage(section_list=updated_list) return None # Start token count at 0 curr_token = 0 @@ -239,4 +239,129 @@ def buildSections( if item is not None: section_token_count += item.token_count final_page = FullPage(final_sections).sortSections(reverse=reverse) - return final_page \ No newline at end of file + return final_page + + +def query_vector_store_to_build( + collection: typing.Any, # TODO Update to use a RAG object + docs_agent_config: str, + question: str, + token_limit: float = 200000, + results_num: int = 10, + max_sources: int = 4, +) -> tuple[list[SectionDistance], str]: + """ + Queries the vector database collection and builds a context string. + + Args: + collection: The vector store collection object (e.g., Chroma collection). + docs_agent_config: The configuration string ('experimental' or other). + question (str): The user's question. + token_limit (float, optional): The total token limit for the context. Defaults to 200000. + results_num (int, optional): The initial number of results to retrieve from the vector store. Defaults to 10. + max_sources (int, optional): The maximum number of sources to use to build the context. Defaults to 4. + + Returns: + tuple: A tuple containing a list of SectionDistance objects and the final context string. 
+ """ + if not hasattr(collection, 'query'): + raise AttributeError("Passed collection object does not have a 'query' method.") + contexts_query = collection.query(question, results_num) + + if not hasattr(contexts_query, 'returnDBObjList'): + raise AttributeError("Result of collection.query does not have a 'returnDBObjList' method.") + + build_context = contexts_query.returnDBObjList() + + if max_sources <= 0: + token_limit_per_source = [] + else: + token_limit_temp = token_limit / max_sources + token_limit_per_source = [token_limit_temp] * max_sources + + search_result = [] + same_pages = [] + for item in build_context: + if not hasattr(item, 'metadata') or not hasattr(item, 'document') or not hasattr(item, 'distance'): + print(f"Warning: Skipping item in query_vector_store_to_build due to missing attributes: {item}") + continue + if not isinstance(item.metadata, dict): + print(f"Warning: Skipping item due to unexpected metadata type: {type(item.metadata)}") + continue + + section = SectionDistance( + section=Section( + id=item.metadata.get("section_id", None), + name_id=item.metadata.get("name_id", None), + page_title=item.metadata.get("page_title", None), + section_title=item.metadata.get("section_title", None), + level=item.metadata.get("level", None), + previous_id=item.metadata.get("previous_id", None), + parent_tree=item.metadata.get("parent_tree", None), + token_count=item.metadata.get("token_estimate", None), + content=item.document, + md_hash=item.metadata.get("md_hash", None), + url=item.metadata.get("url", None), + origin_uuid=item.metadata.get("origin_uuid", None), + ), + distance=item.distance, + ) + search_result.append(section) + + final_pages = [] + this_range = min(len(search_result), max_sources) + + for i in range(this_range): + if not (search_result[i] and hasattr(search_result[i], 'section') and search_result[i].section): + print(f"Warning: Skipping index {i} in build loop due to invalid search result item.") + continue + + current_section = 
search_result[i].section + + page_token_limit = token_limit_per_source[i] if i < len(token_limit_per_source) else 0 + + if not hasattr(collection, 'getPageOriginUUIDList'): + raise AttributeError("Passed collection object does not have a 'getPageOriginUUIDList' method.") + + try: + same_page = collection.getPageOriginUUIDList( + origin_uuid=current_section.origin_uuid + ) + if not hasattr(same_page, 'buildSections'): + raise AttributeError("Object returned by getPageOriginUUIDList does not have a 'buildSections' method.") + + except Exception as e: + print(f"Error processing item {i} with origin_uuid {current_section.origin_uuid}: {e}") + continue + + if docs_agent_config == "experimental": + test_page = same_page.buildSections( + section_id=current_section.id, + selfSection=True, + children=True, + parent=True, + siblings=True, + token_limit=page_token_limit, + reverse=False + ) + else: + test_page = same_page.buildSections( + section_id=current_section.id, + selfSection=True, + children=False, + parent=False, + siblings=False, + token_limit=page_token_limit, + reverse=False + ) + final_pages.append(test_page) + + final_context = "" + for item in final_pages: + if hasattr(item, 'section_list') and item.section_list: + for source in item.section_list: + if hasattr(source, 'content') and source.content: + final_context += source.content + "\n\n" + final_context = final_context.strip() + + return search_result, final_context diff --git a/examples/gemini/python/docs-agent/docs_agent/preprocess/extract_image_path.py b/examples/gemini/python/docs-agent/docs_agent/preprocess/extract_image_path.py new file mode 100644 index 000000000..2d813c1e8 --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/preprocess/extract_image_path.py @@ -0,0 +1,169 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import os + +from absl import logging +from bs4 import BeautifulSoup as bs4 +from docs_agent.utilities.helpers import open_file +from docs_agent.utilities.helpers import trim_path_to_subdir +import markdown +from markdown.extensions import Extension +from markdown.treeprocessors import Treeprocessor + + +class ImgExtractor(Treeprocessor): + """ + This class is a Markdown treeprocessor that extracts all images from a + Markdown document and appends them to the markdown.images list. + """ + + def run(self, doc): + """Find all images and append to markdown.images.""" + self.md.images = [] + self.md.alt_texts = [] + self.md.image_titles = [] + for image in doc.findall(".//img"): + self.md.images.append(image.get("src")) + self.md.alt_texts.append(image.get("alt")) + if image.get("title") is not None: + self.md.image_titles.append(image.get("title")) + else: + self.md.image_titles.append("") + + +class ImgExtExtension(Extension): + """ + This class is a Markdown extension that registers the ImgExtractor + treeprocessor. 
+    """
+
+    def extendMarkdown(self, md):
+        """Register the ImgExtractor treeprocessor with the Markdown instance."""
+        img_ext = ImgExtractor(md)
+        md.treeprocessors.register(img_ext, "img_ext", 15)
+
+
+def extract_image_path_from_markdown(markdown_text: str) -> list[str]:
+    """Extracts all image paths from a markdown text."""
+    md = markdown.Markdown(extensions=[ImgExtExtension()])
+    md.convert(markdown_text)
+    return md.images
+
+
+def extract_image_alt_text_from_markdown(markdown_text: str) -> list[str]:
+    """Extracts all image alt texts from a markdown text."""
+    md = markdown.Markdown(extensions=[ImgExtExtension()])
+    md.convert(markdown_text)
+    return md.alt_texts
+
+
+def extract_image_title_from_markdown(markdown_text: str) -> list[str]:
+    """Extracts all image titles from a markdown text."""
+    md = markdown.Markdown(extensions=[ImgExtExtension()])
+    md.convert(markdown_text)
+    return md.image_titles
+
+
+def extract_image_path_from_html(html_text: str) -> list[str]:
+    """Extracts all image paths from an HTML page."""
+    soup = bs4(html_text, "html.parser")
+    images = []
+    for img in soup.find_all("img"):
+        images.append(img["src"])
+    return images
+
+
+def extract_image_alt_text_from_html(html_text: str) -> list[str]:
+    """Extracts all image alt texts from an HTML page."""
+    soup = bs4(html_text, "html.parser")
+    alt_text = []
+    for img in soup.find_all("img"):
+        # Use .get() since the alt attribute is optional in HTML.
+        alt_text.append(img.get("alt", ""))
+    return alt_text
+
+
+def extract_image_title_from_html(html_text: str) -> list[str]:
+    """Extracts all image titles from an HTML page."""
+    soup = bs4(html_text, "html.parser")
+    title = []
+    for img in soup.find_all("img"):
+        # Use .get() since the title attribute is optional in HTML.
+        title.append(img.get("title", ""))
+    return title
+
+
+def parse_md_html_files_for_images(input_file: str) -> dict[str, dict[str, list[str]]]:
+    """
+    Parses a file (markdown or html) to extract image paths.
+
+    Args:
+        input_file: The path to the input file.
+
+    Returns:
+        A dictionary with an "images" key containing the image paths,
+        full image paths, alt texts, and image titles.
+ """ + image_titles = [] + alt_texts = [] + image_paths = [] + if input_file.endswith(".md"): + file_content = open_file(input_file) + image_paths = extract_image_path_from_markdown(file_content) + alt_texts = extract_image_alt_text_from_markdown(file_content) + image_titles = extract_image_title_from_markdown(file_content) + elif input_file.endswith(".html") or input_file.endswith(".htm"): + file_content = open_file(input_file) + image_paths = extract_image_path_from_html(file_content) + alt_texts = extract_image_alt_text_from_html(file_content) + image_titles = extract_image_title_from_html(file_content) + else: + # This can get noisy so better to log as info. + logging.info( + "Skipping this file since it is not a markdown or html file: " + input_file + ) + image_def = {} + full_image_paths = [] + for image_path in image_paths: + dir_path = os.path.dirname(input_file) + if image_path.startswith("http://") or image_path.startswith("https://"): + logging.warning( + f"Skipping this image path since it is a URL: {image_path}\n" + ) + if image_path.startswith("./"): + image_path = image_path.removeprefix("./") + image_path = os.path.join(dir_path, image_path) + full_image_paths.append(image_path) + elif image_path[0].isalpha(): + image_path = os.path.join(dir_path, image_path) + full_image_paths.append(image_path) + elif image_path.startswith("/") and "/devsite/" in input_file: + # If the document is part of devsite, the path needs to be trimmed to the + # subdirectory (returns devsite tenant path) and then joined with the + # image path + devsite_path = trim_path_to_subdir(input_file, "en/") + image_path = image_path.removeprefix("/") + image_path = os.path.join(devsite_path, image_path) + full_image_paths.append(image_path) + else: + logging.error( + f"Skipping this image path because it cannot be parsed: {image_path}\n" + ) + image_def["full_image_paths"] = full_image_paths + image_def["image_paths"] = image_paths + image_def["alt_texts"] = alt_texts + 
image_def["image_titles"] = image_titles + image_obj = {"images": image_def} + return image_obj diff --git a/examples/gemini/python/docs-agent/docs_agent/preprocess/populate_vector_database.py b/examples/gemini/python/docs-agent/docs_agent/preprocess/populate_vector_database.py index 65a80fd97..34ba8ab66 100644 --- a/examples/gemini/python/docs-agent/docs_agent/preprocess/populate_vector_database.py +++ b/examples/gemini/python/docs-agent/docs_agent/preprocess/populate_vector_database.py @@ -22,12 +22,10 @@ import sys from absl import logging -import chromadb -from chromadb.utils import embedding_functions + import flatdict import tqdm -from docs_agent.models.google_genai import Gemini from docs_agent.preprocess.splitters import markdown_splitter from docs_agent.storage.google_semantic_retriever import SemanticRetriever from docs_agent.utilities import config @@ -35,6 +33,7 @@ from docs_agent.utilities.config import ProductConfig from docs_agent.utilities.helpers import end_path_backslash from docs_agent.utilities.helpers import resolve_path +from docs_agent.storage.chroma import ChromaEnhanced class chromaAddSection: @@ -73,10 +72,7 @@ def init_progress_bars(file_count): unchanged_file = tqdm.tqdm( position=2, desc="Total unchanged files 0", bar_format="{desc}" ) - update_file = tqdm.tqdm( - position=3, desc="Total updated files 0", bar_format="{desc}" - ) - return main, new_file, unchanged_file, update_file + return main, new_file, unchanged_file # Open a file and return its content. @@ -88,19 +84,6 @@ def get_file_content(full_path): auto.close() return content_file - -# Initialize Gemini objects for generating embeddings. 
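The new `extract_image_path.py` module above collects `src`, `alt`, and `title` attributes from every `<img>` tag, defaulting missing titles to an empty string. A minimal stdlib-only sketch of the same extraction idea, using `html.parser` in place of the module's BeautifulSoup and python-markdown dependencies (the class and variable names here are illustrative, not part of the PR):

```python
from html.parser import HTMLParser


class ImgAttributeExtractor(HTMLParser):
    """Collects src, alt, and title attributes from every <img> tag."""

    def __init__(self):
        super().__init__()
        self.image_paths = []
        self.alt_texts = []
        self.image_titles = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attr_map = dict(attrs)
        self.image_paths.append(attr_map.get("src", ""))
        self.alt_texts.append(attr_map.get("alt", ""))
        # Mirror the markdown extractor: missing titles become "".
        self.image_titles.append(attr_map.get("title", ""))


html_text = '<p><img src="images/a.png" alt="Diagram A"><img src="./b.png" title="B"></p>'
parser = ImgAttributeExtractor()
parser.feed(html_text)
print(parser.image_paths)   # ['images/a.png', './b.png']
print(parser.alt_texts)     # ['Diagram A', '']
print(parser.image_titles)  # ['', 'B']
```

Keeping the three attribute lists index-aligned, as the PR does, lets later code zip a path with its alt text and title without extra lookups.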
-def init_gemini_model(product_config: ProductConfig): - gemini_new = Gemini(models_config=product_config.models) - # Use a chromadb function to initialize db - embedding_function_gemini = embedding_functions.GoogleGenerativeAiEmbeddingFunction( - api_key=product_config.models.api_key, - model_name=product_config.models.embedding_model, - task_type="RETRIEVAL_DOCUMENT", - ) - return gemini_new, embedding_function_gemini - - # Upload a text chunk to an online stroage using the Semantic Retrieval API. def upload_an_entry_to_a_corpus( semantic, corpus_name, document_name_in_corpus, this_item, is_this_first_chunk @@ -313,198 +296,239 @@ def populateToDbFromProduct(product_config: ProductConfig): Args: product_config: A ProductConfig object containing configuration details. """ - # Initialize Gemini objects. - (gemini_new, embedding_function_gemini) = init_gemini_model(product_config) - - # Initialize the Chroma database. - for item in product_config.db_configs: - if "chroma" in item.db_type: - logging.info("Initializing Chroma for a local storage.") - chroma_client = chromadb.PersistentClient( - path=resolve_path(item.vector_db_dir) - ) - collection = chroma_client.get_or_create_collection( - name=item.collection_name, - embedding_function=embedding_function_gemini, - ) - if ( - hasattr(product_config, "enable_delete_chunks") - and product_config.enable_delete_chunks == "True" - ): - # Delete entries in the database if we cannot find matches - # in the current dataset. - delete_unmatched_entries_in_chroma( - product_config, chroma_client, collection - ) - - # Initialzie the Semantic Retreival API. 
+ logging.info("Starting populateToDbFromProduct") + # Initialize variables + chroma_collection = None + semantic = None corpus_name = "" - if product_config.db_type == "google_semantic_retriever": - logging.info("Initializing the Semantic Retrieval API for an online storage.") - semantic = SemanticRetriever() - for item in product_config.db_configs: - if "google_semantic_retriever" in item.db_type: - corpus_name = item.corpus_name - if semantic.does_this_corpus_exist(corpus_name) == False: - # Create a new corpus. - semantic.create_a_new_corpus(item.corpus_display, corpus_name) - elif ( + + # Initialize Chroma database and collection + for db_conf in product_config.db_configs: + if "chroma" in db_conf.db_type: + logging.info("Initializing Chroma.") + try: + chroma = ChromaEnhanced( + chroma_dir=resolve_path(db_conf.vector_db_dir), + models_config=product_config.models, + ) + logging.info(f"Attempting to get or create collection '{db_conf.collection_name}'") + chroma_collection = chroma.client.get_or_create_collection( + name=db_conf.collection_name, + embedding_function=chroma.embedding_function_instance, + ) + logging.info(f"Successfully got or created collection '{db_conf.collection_name}'") + # Delete unmatched entries in Chroma + if ( hasattr(product_config, "enable_delete_chunks") and product_config.enable_delete_chunks == "True" ): - # Delete chunks in the corpus if we cannot find matches in the current dataset. - delete_unmatched_entries_in_online_corpus( - product_config, semantic, corpus_name + delete_unmatched_entries_in_chroma( + product_config, chroma.client, chroma_collection ) + break + except Exception as e: + logging.error(f"Failed to initialize Chroma DB or collection '{db_conf.collection_name}': {e}", exc_info=True) + return - # Initialize progress bar objects. 
- file_count = get_file_count_in_a_dir(product_config.output_path) - ( - progress_bar, - progress_new_file, - progress_unchanged_file, - progress_update_file, - ) = init_progress_bars(file_count) - - # Get the preprocess information from the `file_index.json` file. - (index, full_index_path) = load_index(input_path=product_config.output_path) + # Initialize Semantic Retrieval API (if enabled) + if ("google_semantic_retriever" in product_config.db_type): + logging.info("Initializing the Semantic Retrieval API for an online storage.") + semantic = SemanticRetriever() + for db_conf in product_config.db_configs: + if "google_semantic_retriever" in db_conf.db_type: + corpus_name = db_conf.corpus_name + try: + if not semantic.does_this_corpus_exist(corpus_name): + semantic.create_a_new_corpus(db_conf.corpus_display, corpus_name) + elif ( + hasattr(product_config, "enable_delete_chunks") + and product_config.enable_delete_chunks == "True" + ): + delete_unmatched_entries_in_online_corpus( + product_config, semantic, corpus_name + ) + break + except Exception as e: + logging.error(f"Failed to initialize Semantic Retriever Corpus '{corpus_name}': {e}", exc_info=True) + semantic = None + break + + # Check for Chroma collection initialization + if chroma_collection is None and any("chroma" in db_conf.db_type for db_conf in product_config.db_configs): + logging.error("Chroma collection could not be initialized. Aborting population.") + return + + # Load the index file + logging.info(f"Loading file index... 
{product_config.output_path}") + index, full_index_path = load_index(input_path=product_config.output_path) + logging.info("File index loaded.") + # Resolve the output path + resolved_walk_path = resolve_path(product_config.output_path) + logging.info(f"Starting file processing in directory: {resolved_walk_path}") + if not os.path.isdir(resolved_walk_path): + logging.error(f"Target directory does not exist or is not a directory: {resolved_walk_path}") + return + + # Get the file count in the directory + file_count = get_file_count_in_a_dir(resolved_walk_path) + logging.info(f"Using os.walk file count ({file_count}) for main progress bar.") + # Initialize progress bars + progress_bar, progress_new_file, progress_unchanged_file = init_progress_bars(file_count) + + # Counters + total_files_processed = 0 + new_count = 0 + unchanged_count = 0 + skipped_invalid_uuid_count = 0 + skipped_other_count = 0 - # Local variables track the resource names of documents for the Semantic Retrieval API. + # Semantic Retriever state document_name_in_corpus = "" dict_document_names_in_corpus = {} - # Local variables for counting files. - total_files = 0 - updated_count = 0 - new_count = 0 - unchanged_count = 0 - - # Loop through each `path` in the `config.yaml` file. - for root, dirs, files in os.walk(product_config.output_path): - # Convert `output_path` to be a fully resolved path. - fully_resolved_output = end_path_backslash( - resolve_path(product_config.output_path) - ) - # Loop through all files found in the `output_path` directory. + # Loop through the files in the directory + for root, dirs, files in os.walk(resolved_walk_path): for file in files: - # Displays status bar, sleep helps to stick the progress + # Update main progress bar based on os.walk count total progress_bar.update(1) progress_bar.set_description_str(f"Processing file {file}", refresh=True) - # Get the full path for the file. 
- full_file_name = resolve_path(os.path.join(root, "")) + file - # Process only files with `.md` extension. + full_file_name = os.path.join(root, file) + # Skip the index file itself + if full_file_name == full_index_path: + continue if file.endswith(".md"): - # Open the file and get the content. - content_file = get_file_content(os.path.join(root, file)) - # Get a Section object from the file index object. - chroma_add_item = findFileinDict( - input_file_name=full_file_name, - index_object=index, - content_file=content_file, - ) - # Quick fix: If the filename ends with `_##.md`, extract the file prefix - # Then check if this prefix exists in a local dict, which tracks document - # resource names for the Semantic Retrieval API call. - file_page_prefix = "" - is_this_first_chunk = False - match_file_page = re.search(r"(.*)_\d+\.md$", full_file_name) - if match_file_page: - file_page_prefix = match_file_page.group(1) - if file_page_prefix in dict_document_names_in_corpus: - # If the prefix exists in the dict, retrieve the document resource name. - document_name_in_corpus = dict_document_names_in_corpus.get( - file_page_prefix - ) - else: - # if not, set the flag to indicate that a new `document` needs - # to be created. - is_this_first_chunk = True - document_name_in_corpus = "" - else: - # If the file is not in a group, treat it as its own document. - file_page_prefix = full_file_name - # Skip if the file size is larger than 10000 bytes (API limit) - if ( - chroma_add_item.section.content != "" - and len(chroma_add_item.section.content) < 10000 - and chroma_add_item.section.md_hash != "" - and chroma_add_item.section.uuid != "" - ): - # Compare the text chunk entries in the local Chroma database - # to check if the hash value has changed. 
- id_to_not_change = collection.get( - include=["metadatas"], - ids=chroma_add_item.section.uuid, - where={"md_hash": {"$eq": chroma_add_item.section.md_hash}}, - )["ids"] - if id_to_not_change != []: - # This text chunk is unchanged. Skip this text chunk. + try: + content_file = get_file_content(full_file_name) + chroma_add_item = findFileinDict( + input_file_name=full_file_name, + index_object=index, + content_file=content_file, + ) + # Check for invalid content + if not chroma_add_item.section.content: + logging.warning(f"Skipping {file}: Content is empty.") + skipped_other_count += 1 + continue + if len(chroma_add_item.section.content) >= 10000: + logging.warning(f"Skipping {file}: Content too large ({len(chroma_add_item.section.content)} bytes).") + skipped_other_count += 1 + continue + if not chroma_add_item.section.md_hash: + logging.warning(f"Skipping {file}: Missing md_hash in index data.") + skipped_other_count += 1 + continue + + # Check for invalid UUID + uuid_value = chroma_add_item.section.uuid + if not isinstance(uuid_value, str) or not uuid_value: + logging.error(f"File {file}: Invalid UUID detected ({repr(uuid_value)}). 
Skipping operation for this file.") + skipped_invalid_uuid_count += 1 + continue + id_to_not_change = [] + # Check for Chroma collection existence + if chroma_collection: + try: + ids_to_check = [uuid_value] + get_result = chroma_collection.get( + ids=ids_to_check, + where={"md_hash": {"$eq": chroma_add_item.section.md_hash}}, + include=[], + ) + id_to_not_change = get_result["ids"] + except Exception as e: + if "does not exist" in str(e).lower(): + id_to_not_change = [] + else: + logging.warning(f"File {file}: Error in Chroma to get for ID {uuid_value}: {e}") + id_to_not_change = [] + + # Check for existing entry with same hash + if id_to_not_change: + # Exists with same hash -> Unchanged qty_change = len(id_to_not_change) - progress_unchanged_file.update(qty_change) unchanged_count += qty_change - progress_unchanged_file.set_description_str( - f"Total unchanged file {unchanged_count}", - refresh=True, - ) - else: - # Process this text chunk and store it into the databases. - # Generate an embedding - this_embedding = gemini_new.embed( - content=chroma_add_item.section.content, - task_type="RETRIEVAL_DOCUMENT", - title=chroma_add_item.doc_title, - )[0] - # Store this text chunk entry in Chroma. - collection.add( - documents=[chroma_add_item.section.content], - embeddings=[this_embedding], - metadatas=[chroma_add_item.metadata], - ids=[chroma_add_item.section.uuid], - ) - # Update the progress bar. - new_count += 1 - progress_new_file.update(1) - progress_new_file.set_description_str( - f"Total new files {new_count}", refresh=True - ) - # Add this text chunk to the online storage. 
- if product_config.db_type == "google_semantic_retriever": - document_name = upload_an_entry_to_a_corpus( - semantic, - corpus_name, - document_name_in_corpus, - chroma_add_item, - is_this_first_chunk, - ) - # Store the document resource name - dict_document_names_in_corpus[ - file_page_prefix - ] = document_name - total_files += 1 - else: - if chroma_add_item.section.content == "": - logging.error(f"Skipped {file} because the file is empty.") + total_files_processed += qty_change + progress_unchanged_file.update(qty_change) + progress_unchanged_file.set_description_str(f"Total unchanged files {unchanged_count}", refresh=True) else: - logging.error( - f"Skipped {file} because the file is is too large {str(len(chroma_add_item.section.content))}" - ) - # Skips logging a warning if the file being walked is the index file - elif full_file_name == full_index_path: - next + # New or updated entry + if chroma_collection: + try: + doc_list = [chroma_add_item.section.content] + meta_list = [chroma_add_item.metadata] + id_list = [uuid_value] + # Upsert the entry + chroma_collection.upsert( + documents=doc_list, + metadatas=meta_list, + ids=id_list, + ) + new_count += 1 + total_files_processed += 1 + progress_new_file.update(1) + progress_new_file.set_description_str(f"Total new/updated files {new_count}", refresh=True) + + except Exception as e: + # Keep this error log + logging.error(f"Error during collection.upsert for ID {uuid_value}: {e}", exc_info=True) + skipped_other_count += 1 + else: + logging.warning(f"File {file}: Skipping add/upsert because Chroma collection is not available.") + skipped_other_count += 1 + + # Check for Semantic Retriever initialization + if semantic and corpus_name: + file_page_prefix = full_file_name + is_this_first_chunk = True + match_file_page = re.search(r"(.*)_\d+\.md$", full_file_name) + if match_file_page: + file_page_prefix = match_file_page.group(1) + if file_page_prefix in dict_document_names_in_corpus: + document_name_in_corpus = 
dict_document_names_in_corpus[file_page_prefix] + is_this_first_chunk = False + else: + document_name_in_corpus = "" + + try: + document_name = upload_an_entry_to_a_corpus( + semantic, + corpus_name, + document_name_in_corpus, + chroma_add_item, + is_this_first_chunk, + ) + dict_document_names_in_corpus[file_page_prefix] = document_name + except Exception as e: + logging.error(f"Failed to upload chunk for {file} to Semantic Retriever: {e}", exc_info=True) + + except Exception as e: + # Keep this error log for file-level processing errors + logging.error(f"Error processing file {full_file_name}: {e}", exc_info=True) + skipped_other_count += 1 else: - # Logs missing extensions from input directory that may be - # processed - file_name, extension = os.path.splitext(file) - logging.warning( - f"Skipped {file} because there is no configured parser for extension {extension}" - ) - - progress_bar.set_description_str( - f"Finished processing text chunk files (and file_index.json).", refresh=True - ) - progress_unchanged_file.set_description_str( - f"Total number of entries: {total_files}", refresh=True - ) + # Skip non-markdown files + pass + + # Close all progress bars + progress_bar.set_description_str(f"Finished processing.", refresh=True) + progress_bar.close() + progress_new_file.close() + progress_unchanged_file.close() + + # Simplified final summary print + print(f"\nProcessing Summary:") + print(f" Total files found for database: {total_files_processed}") + print(f" New or updated files: {new_count}") + print(f" Unchanged files: {unchanged_count}") + # Optionally report skipped counts if they are non-zero, otherwise omit for cleaner output + if skipped_invalid_uuid_count > 0: + print(f" Skipped (Invalid UUID): {skipped_invalid_uuid_count}") + if skipped_other_count > 0: + print(f" Skipped (Other reasons): {skipped_other_count}") + + print(f"\nFinished processing and generating embeddings for all files.") + if any("chroma" in db_conf.db_type for db_conf in 
product_config.db_configs): + print("Finalized generation of embeddings for all files in Chroma DB.") def findFileinDict(input_file_name: str, index_object, content_file): diff --git a/examples/gemini/python/docs-agent/docs_agent/preprocess/splitters/markdown_splitter.py b/examples/gemini/python/docs-agent/docs_agent/preprocess/splitters/markdown_splitter.py index 8b7cd0eef..a23ccc075 100644 --- a/examples/gemini/python/docs-agent/docs_agent/preprocess/splitters/markdown_splitter.py +++ b/examples/gemini/python/docs-agent/docs_agent/preprocess/splitters/markdown_splitter.py @@ -220,7 +220,7 @@ def markdown_to_text(markdown_string): html = markdown.markdown(markdown_string) # Extract text soup = bs4.BeautifulSoup(html, "html.parser") - text = "".join(soup.findAll(string=True)) + text = "".join(soup.find_all(string=True)) # Remove [][] in Markdown text = re.sub(r"\[(.*?)\]\[(.*?)\]", "\\1", text) # Remove {: } in Markdown diff --git a/examples/gemini/python/docs-agent/docs_agent/storage/base.py b/examples/gemini/python/docs-agent/docs_agent/storage/base.py new file mode 100644 index 000000000..c3e55d12e --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/storage/base.py @@ -0,0 +1,75 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+
+import abc
+import typing
+
+
+class RAG(abc.ABC):
+    """Abstract base class for Retrieval-Augmented Generation."""
+
+    @abc.abstractmethod
+    def query_vector_store_to_build(
+        self,
+        question: str,
+        token_limit: float = 200000,
+        results_num: int = 10,
+        max_sources: int = 4,
+        collection_name: typing.Optional[str] = None,
+        docs_agent_config: typing.Optional[str] = "normal",
+    ) -> tuple[list[typing.Any], str]:
+        """
+        Queries the vector store and builds context. Must be implemented
+        by subclasses.
+
+        Args:
+            question (str): The user's question.
+            token_limit (float, optional): The total token limit for the context.
+            results_num (int, optional): The initial number of results to retrieve.
+            max_sources (int, optional): The maximum number of sources to use.
+            collection_name (str, optional): The name of the collection to query.
+            docs_agent_config (str, optional): The configuration string
+                ("experimental" or "normal").
+
+        Returns:
+            tuple: A tuple containing a list of SectionDistance-like objects
+                and the final context string.
+        """
+        pass
+
+    @abc.abstractmethod
+    def get_collection(self, name, embedding_function=None, embedding_model=None):
+        """
+        Gets the collection from the vector store. Must be implemented
+        by subclasses.
+        """
+        pass
+
+    @abc.abstractmethod
+    def backup(self):
+        """
+        Backs up the vector store.
+        """
+        pass
+
+    @abc.abstractmethod
+    def embedding_function(
+        self,
+        api_key,
+        embedding_model,
+        task_type: str = "RETRIEVAL_QUERY"):
+        """
+        Gets the embedding function.
+ """ + pass \ No newline at end of file diff --git a/examples/gemini/python/docs-agent/docs_agent/storage/chroma.py b/examples/gemini/python/docs-agent/docs_agent/storage/chroma.py index c17b73600..ced7a56a2 100644 --- a/examples/gemini/python/docs-agent/docs_agent/storage/chroma.py +++ b/examples/gemini/python/docs-agent/docs_agent/storage/chroma.py @@ -17,20 +17,31 @@ """Chroma wrapper""" from enum import auto, Enum -import os import string import shutil import typing from absl import logging import chromadb -from chromadb.utils import embedding_functions -from chromadb.api.models import Collection +from chromadb import Documents, EmbeddingFunction, Embeddings +from chromadb.api.types import Images from chromadb.api.types import QueryResult +from docs_agent.storage.base import RAG +from docs_agent.models.llm import GenerativeLanguageModelFactory +from docs_agent.utilities.config import Models, ProductConfig, DbConfig +from docs_agent.utilities.helpers import resolve_path from docs_agent.preprocess.splitters.markdown_splitter import Section as Section from docs_agent.postprocess.docs_retriever import FullPage as FullPage -from docs_agent.utilities.helpers import resolve_path, parallel_backup_dir +from docs_agent.postprocess.docs_retriever import ( + query_vector_store_to_build as retriever_query_vector_store_to_build, + SectionDistance, +) +from docs_agent.utilities import helpers + + +# Embeddable types for Chroma - from chroma docs +Embeddable = typing.Union[Documents, Images] class Error(Exception): @@ -289,99 +300,256 @@ def format(self, format_type: SectionDB, ref_index: typing.Optional[int] = None) return result -class ChromaEnhanced: +class GeminiEmbeddingFunction(EmbeddingFunction): + """Embedding function wrapper for Gemini models""" + + def __init__(self, models_config: Models, task_type: str = "RETRIEVAL_DOCUMENT"): + self.models_config = models_config + self.task_type = task_type + # Create the embedding model instance + self.model = 
GenerativeLanguageModelFactory.create_model( + model_type=self.models_config.embedding_model, + models_config=self.models_config, + ) + + def __call__(self, input: Embeddable) -> Embeddings: + # Handles list of strings + if isinstance(input, list) and all(isinstance(i, str) for i in input): + embeddings_list = self.model.embed(content=input, task_type=self.task_type) + # Commented out for now. Can use images here. + # elif isinstance(input, list) and all(isinstance(i, np.ndarray) for i in input): + # embeddings_list = model.embed_images(images=input, task_type=self.task_type) # Example + else: + logging.error( + f"Unsupported input type for embedding function: {type(input)}" + ) + # In case there is a single string, which is the most common case + if isinstance(input, str): + embeddings_list = self.model.embed( + content=[input], task_type=self.task_type + ) + else: + # Update this if images get enabled + raise TypeError("Input must be Documents (List[str])") + + return typing.cast(Embeddings, embeddings_list) + + +class ChromaEnhanced(RAG): """Chroma wrapper""" - def __init__(self, chroma_dir) -> None: + def __init__(self, chroma_dir: str, models_config: Models) -> None: self.client = chromadb.PersistentClient(path=chroma_dir) + self.models_config = models_config + self.chroma_dir = chroma_dir + self._collection_name: typing.Optional[str] = None + # Start the embedding function + self.embedding_function_instance = GeminiEmbeddingFunction( + models_config=self.models_config, task_type="RETRIEVAL_DOCUMENT" + ) + logging.info(f"ChromaEnhanced instance initialized for path: {chroma_dir}") + + @staticmethod + def from_product_config(product_config: ProductConfig) -> "ChromaEnhanced": + """Creates a ChromaEnhanced instance from a ProductConfig.""" + chroma_db_conf: DbConfig | None = None + for db_conf in product_config.db_configs: + if db_conf.db_type == "chroma": + chroma_db_conf = db_conf + break + if not chroma_db_conf: + logging.error("Chroma configuration not found 
in product config.")
+            raise ValueError("Chroma configuration not found in product config.")
+
+        if not chroma_db_conf.vector_db_dir:
+            logging.error("Chroma vector_db_dir is missing in the configuration.")
+            raise ValueError("Chroma vector_db_dir is missing in the configuration.")
+
+        logging.info(
+            f"[ChromaEnhanced] Relative chroma path from configuration: '{chroma_db_conf.vector_db_dir}'"
+        )
+        try:
+            resolved_chroma_dir = resolve_path(chroma_db_conf.vector_db_dir)
+            logging.info(
+                f"[ChromaEnhanced] Resolved absolute chroma path: '{resolved_chroma_dir}'"
+            )
+        except Exception as e:
+            logging.error(f"[ChromaEnhanced] Error resolving chroma path: {e}")
+            raise
+
+        # Create the ChromaEnhanced instance
+        try:
+            chroma_instance = ChromaEnhanced(
+                chroma_dir=resolved_chroma_dir, models_config=product_config.models
+            )
+            logging.info(
+                f"ChromaEnhanced successfully created for path: {resolved_chroma_dir}"
+            )
+            return chroma_instance
+        except Exception as e:
+            logging.error(f"Error creating ChromaEnhanced instance: {e}")
+            raise
+
+    # Returns the instance of the embedding function
+    def embedding_function(self, *args, **kwargs) -> GeminiEmbeddingFunction:
+        """Returns the embedding function instance configured for the Chroma wrapper."""
+        return self.embedding_function_instance
 
     def list_collections(self):
         return self.client.list_collections()
 
-    # Returns output_dir if backup was successful, None if it failed
-    # Output dir can only be a child to chroma_dir
-    def backup_chroma(self, chroma_dir: str, output_dir: typing.Optional[str] = None):
+    def backup(self, output_dir: typing.Optional[str] = None):
+        """Backs up the chroma database to the specified output directory.
+
+        Args:
+            output_dir (str, optional): The directory to back up to. If None, a
+                backup directory will be created parallel to self.chroma_dir.
+
+        Returns:
+            str: The path to the backup directory, or None if the backup failed.
+ """ + if output_dir == None: + try: + output_dir = helpers.parallel_backup_dir(self.chroma_dir) + except: + logging.exception( + "Failed to create backup directory for: %s", self.chroma_dir + ) + return None + else: + try: + pure_path = helpers.return_pure_dir(self.chroma_dir) + output_dir = ( + helpers.end_path_backslash( + helpers.start_path_no_backslash(output_dir) + ) + + pure_path + ) + except: + logging.exception( + "Failed to resolve output directory for: %s", output_dir + ) + return None try: - chroma_dir = resolve_path(chroma_dir) if output_dir == None: - output_dir = parallel_backup_dir(chroma_dir) - shutil.copytree(chroma_dir, output_dir, dirs_exist_ok=True) - logging.info(f"Backed up from: {chroma_dir} to {output_dir}") + output_dir = helpers.parallel_backup_dir(self.chroma_dir) + shutil.copytree(self.chroma_dir, output_dir, dirs_exist_ok=True) + logging.info("Backed up from: %s to %s", self.chroma_dir, output_dir) return output_dir except: + logging.exception( + "Failed to backup from: %s to %s", self.chroma_dir, output_dir + ) return None # def getSameOriginUUID(self): # return self.client.get() - def get_collection(self, name, embedding_function=None, embedding_model=None): - if embedding_function is not None: - return ChromaCollectionEnhanced( - self.client.get_collection( - name=name, embedding_function=embedding_function - ), - embedding_function, - ) - # Read embedding meta information from the collection - collection = self.client.get_collection(name=name) - if embedding_model is None and collection.metadata: - embedding_model = collection.metadata.get("embedding_model", None) - if embedding_model is None: - # If embedding_model is not found in the metadata, - # use `models/embedding-001` by default. - logging.info( - "Embedding model is not specified in the metadata of " - "the collection %s. 
Using the default embedding model: models/embedding-001", - name, - ) - embedding_model = "models/embedding-001" - if embedding_model == "local/all-mpnet-base-v2": - base_dir = os.path.dirname(os.path.abspath(__file__)) - local_model_dir = os.path.join(base_dir, "models/all-mpnet-base-v2") - embedding_function = ( - embedding_functions.SentenceTransformerEmbeddingFunction( - model_name=local_model_dir - ) - ) - else: - raise ChromaEmbeddingModelNotSupportedError( - f"Embedding model {embedding_model} specified by collection {name} " - "is not supported." + def get_collection(self, name, embedding_function=None): + # Can override the embedding function + ef_to_use = ( + embedding_function + if embedding_function + else self.embedding_function_instance + ) + try: + collection = self.client.get_collection(name=name) + if self._collection_name is None: + self._collection_name = name + except Exception as e: + logging.error(f"Failed to get collection '{name}': {e}") + raise + return ChromaCollectionEnhanced(collection, ef_to_use) + + def query_vector_store_to_build( + self, + question: str, + token_limit: float = 200000, + results_num: int = 10, + max_sources: int = 4, + collection_name: typing.Optional[str] = None, + docs_agent_config: typing.Optional[str] = "normal", + ) -> tuple[list[SectionDistance], str]: + """ + Queries the vector database collection and builds a context string. + Calls the retriever function. + + Args: + question (str): The user's question. + token_limit (float, optional): The total token limit for the context. Defaults to 200000. + results_num (int, optional): The initial number of results to retrieve. Defaults to 10. + max_sources (int, optional): The maximum number of sources to use. Defaults to 4. + collection_name (str, optional): The name of the collection to query. + If None, uses the collection name stored + during the first get_collection call. + docs_agent_config (str, optional): The docs agent configuration string. 
"experimental" or "normal". Defaults to "normal". + + Returns: + tuple: A tuple containing a list of SectionDistance objects and the final context string. + """ + target_collection_name = collection_name or self._collection_name + if not target_collection_name: + logging.error("Collection name not provided and not previously set.") + raise ValueError( + "Must provide collection_name or call get_collection first." ) - return ChromaCollectionEnhanced( - self.client.get_collection( - name=name, embedding_function=embedding_function - ), - embedding_function, + target_docs_agent_config = docs_agent_config + try: + collection_obj = self.get_collection(name=target_collection_name) + except Exception as e: + logging.error(f"Failed to get collection '{target_collection_name}': {e}") + raise + + # Call the function from docs_retriever + return retriever_query_vector_store_to_build( + collection=collection_obj, + docs_agent_config=target_docs_agent_config, + question=question, + token_limit=token_limit, + results_num=results_num, + max_sources=max_sources, ) class ChromaCollectionEnhanced: """Chroma collection wrapper""" - def __init__(self, collection, embedding_function) -> None: + def __init__(self, collection, embedding_function_instance) -> None: self.collection = collection - self.embedding_function = embedding_function - - def query(self, text: str, top_k: int = 1): - dict = {} - # dict.update({"token_estimate": {"$gt": 100}}) - return ChromaQueryResultEnhanced( - self.collection.query(query_texts=[text], n_results=top_k, where=dict) - ) - - # same_page = self.collection.get(include=["documents","metadatas"], - # where={"origin_uuid": {"$eq": origin_uuid[i]}},) + # Store the embedding function instance + self.embedding_function = embedding_function_instance + # Retrieve the models config from the embedding function instance + if hasattr(embedding_function_instance, "models_config"): + self._models_config = embedding_function_instance.models_config + else: + 
self._models_config = None + logging.warning( + "ChromaCollectionEnhanced could not access models_config from the embedding function." + ) - # # Return all entries that match an origin_uuid - # def getPageOriginUUID(self, origin_uuid): - # return ChromaDBGet( - # self.collection.get( - # include=["metadatas", "documents"], - # where={"origin_uuid": {"$eq": origin_uuid}}, - # ) - # ) + def query(self, text: str, top_k: int = 1, where: dict = None): + """Queries the ChromaDB collection using appropriate query embeddings.""" + if self._models_config: + query_ef = GeminiEmbeddingFunction( + models_config=self._models_config, task_type="RETRIEVAL_QUERY" + ) + # Query the collection using the query embeddings + query_embeddings = query_ef([text]) + query_args = {"query_embeddings": query_embeddings, "n_results": top_k} + if where is not None: + query_args["where"] = where + result = self.collection.query(**query_args) + else: + logging.warning( + "Cannot create query-specific embedding function. Falling back to using collection's default embedding function for query." + ) + query_args = {"query_texts": [text], "n_results": top_k} + if where is not None: + query_args["where"] = where + result = self.collection.query(**query_args) + return ChromaQueryResultEnhanced(result) # Return a FullPage (list of Section) that match an origin_uuid def getPageOriginUUIDList(self, origin_uuid): @@ -429,7 +597,9 @@ def __init__(self, result: QueryResult) -> None: self.result = result def __len__(self): - return len(self.result["documents"][0]) + if self.result["documents"] and self.result["documents"][0]: + return len(self.result["documents"][0]) + return 0 # Get without considering distance def clean_get(self): @@ -477,10 +647,16 @@ def returnSectionObj(self, format_type: SectionDB, distance_threshold=float("inf # limit specific in query. You can then access from each list item with # .document, .id, etc... 
def returnDBObjList(self, distance_threshold=float("inf")): - results = self.fetch(distance_threshold=distance_threshold) contents = [] - for item in results: - contents.append(item) + # Check if results exist before iterating + if self.result and self.result.get("documents") and self.result["documents"][0]: + results = self.fetch(distance_threshold=distance_threshold) + for item in results: + contents.append(item) + else: + logging.warning( + "No documents found in Chroma query result for returnDBObjList." + ) return contents # This function returns a list of ChromaSectionDBItem that match results up to @@ -495,10 +671,20 @@ def returnDBObjListGet(self): return results def fetch_nearest(self): - return ChromaSectionDBItem(self.result, 0) + # Add check for empty results + if len(self) > 0: + return ChromaSectionDBItem(self.result, 0) + else: + logging.warning( + "Attempted to fetch nearest from empty Chroma query result." + ) + return None # Or raise an error def fetch_nearest_formatted(self, format_type: SectionDB): - return self.fetch_nearest().format(format_type) + nearest = self.fetch_nearest() + if nearest: + return nearest.format(format_type) + return "" # Return empty string if no nearest item # def return_response(self): # return ChromaSectionDBItem.returnSection(self) diff --git a/examples/gemini/python/docs-agent/docs_agent/storage/rag.py b/examples/gemini/python/docs-agent/docs_agent/storage/rag.py new file mode 100644 index 000000000..b0326e553 --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/storage/rag.py @@ -0,0 +1,48 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from absl import logging +from docs_agent.storage.base import RAG +from docs_agent.utilities.config import ProductConfig +from docs_agent.storage.chroma import ChromaEnhanced + + +class RAGFactory: + @staticmethod + def create_rag(product_config: ProductConfig) -> RAG: + # Find the Chroma DB configuration in the product config. + has_chroma = any(db.db_type == "chroma" for db in product_config.db_configs) + + if has_chroma: + logging.info("[RAGFactory] Chroma DB configuration found. Creating ChromaEnhanced instance.") + try: + # Create the ChromaEnhanced instance from the product config + return ChromaEnhanced.from_product_config(product_config) + except Exception as e: + logging.error(f"[RAGFactory] Failed to create Chroma RAG instance: {e}") + raise + else: + # Handle the case where no supported DB config is found + logging.error("[RAGFactory] No supported RAG database configuration found in product configuration.") + raise ValueError("No supported RAG database configuration found.") + + +def return_collection_name(product_config: ProductConfig) -> str: + collection_name = "" + for item in product_config.db_configs: + if "chroma" in item.db_type: + collection_name = item.collection_name + return collection_name diff --git a/examples/gemini/python/docs-agent/docs_agent/tests/test_vector_database.py b/examples/gemini/python/docs-agent/docs_agent/tests/test_vector_database.py deleted file mode 100644 index 5968f8879..000000000 --- a/examples/gemini/python/docs-agent/docs_agent/tests/test_vector_database.py +++ /dev/null @@ -1,184 +0,0 @@ -# -# 
Copyright 2023 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -"""Test the vector database""" - -import os -import sys -import google.generativeai as palm -import chromadb -from chromadb.config import Settings -from chromadb.utils import embedding_functions -from chromadb.api.types import Document, Embedding, Documents, Embeddings -from rich.console import Console -from rich.markdown import Markdown -from rich.panel import Panel - -# from rich import print -from ratelimit import limits, sleep_and_retry -from docs_agent.utilities import read_config -from docs_agent.storage.chroma import Chroma, ChromaEnhanced - - -def main(): - BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - - # Set the directory path to locate the Chroma vector database - LOCAL_VECTOR_DB_DIR = os.path.join(BASE_DIR, "vector_stores/chroma") - COLLECTION_NAME = "docs_collection" - EMBEDDING_MODEL = None - - IS_CONFIG_FILE = True - if IS_CONFIG_FILE: - config_values = read_config.ReadConfig() - LOCAL_VECTOR_DB_DIR = config_values.returnConfigValue("vector_db_dir") - COLLECTION_NAME = config_values.returnConfigValue("collection_name") - EMBEDDING_MODEL = config_values.returnConfigValue("embedding_model") - # Set this in config.yaml as "experimental" to test new features - DOCS_AGENT_CONFIG = config_values.returnConfigValue("docs_agent_config") - - # Set a test question - QUESTION = "What are some differences between apples and oranges?" 
- NUM_RETURNS = 5 - - # Set up the PaLM API key from the environment - API_KEY = os.getenv("PALM_API_KEY") - if API_KEY is None: - sys.exit("Please set the environment variable PALM_API_KEY to be your API key.") - - # Select your PaLM API endpoint - PALM_API_ENDPOINT = "generativelanguage.googleapis.com" - palm.configure(api_key=API_KEY, client_options={"api_endpoint": PALM_API_ENDPOINT}) - - # Set up the path to the local LLM - # This value is used only when `EMBEDDINGS_TYPE` is set to `LOCAL` - LOCAL_LLM = os.path.join(BASE_DIR, "models/all-mpnet-base-v2") - - # Use the PaLM API for generating embeddings by default - EMBEDDINGS_TYPE = "PALM" - - # PaLM API call limit to 300 per minute - API_CALLS = 280 - API_CALL_PERIOD = 60 - - # Create embed function for PaLM - # API call limit to 5 qps - @sleep_and_retry - @limits(calls=API_CALLS, period=API_CALL_PERIOD) - def embed_palm_api_call(text: Document) -> Embedding: - if PALM_EMBEDDING_MODEL == "models/embedding-001": - # Use the `embed_content()` method if it's the new Gemini embedding model. 
- return palm.embed_content(model=PALM_EMBEDDING_MODEL, content=text)[ - "embedding" - ] - else: - return palm.generate_embeddings(model=PALM_EMBEDDING_MODEL, text=text)[ - "embedding" - ] - - def embed_palm(texts: Documents) -> Embeddings: - # Embed the documents using any supported method - return [embed_palm_api_call(text) for text in texts] - - # Initialize Rich console - ai_console = Console(width=160) - ai_console.rule("Fold") - - if DOCS_AGENT_CONFIG == "experimental": - chroma_client = ChromaEnhanced(LOCAL_VECTOR_DB_DIR) - else: - chroma_client = Chroma(LOCAL_VECTOR_DB_DIR) - - if EMBEDDINGS_TYPE == "PALM": - if EMBEDDING_MODEL is None: - PALM_EMBEDDING_MODEL = "models/embedding-gecko-001" - else: - PALM_EMBEDDING_MODEL = EMBEDDING_MODEL - emb_fn = embed_palm - elif EMBEDDINGS_TYPE == "LOCAL": - emb_fn = embedding_functions.SentenceTransformerEmbeddingFunction( - model_name=LOCAL_LLM - ) - else: - emb_fn = embedding_functions.SentenceTransformerEmbeddingFunction( - model_name=LOCAL_LLM - ) - - if DOCS_AGENT_CONFIG == "experimental": - embedding_function = embedding_functions.GoogleGenerativeAiEmbeddingFunction( - api_key=API_KEY, model_name=EMBEDDING_MODEL, task_type="RETRIEVAL_QUERY" - ) - else: - embedding_function = emb_fn - - collection = chroma_client.get_collection( - name=COLLECTION_NAME, embedding_function=embedding_function - ) - - if DOCS_AGENT_CONFIG == "experimental": - results = collection.query(QUESTION, NUM_RETURNS) - else: - results = collection.query(QUESTION, NUM_RETURNS) - # results = collection.query(text=[QUESTION], top_k=NUM_RETURNS) - - print("") - ai_console.print(Panel.fit(Markdown("Question: " + QUESTION))) - print("Results:") - if DOCS_AGENT_CONFIG == "experimental": - objlist = results.returnDBObjList() - for obj in objlist: - ai_console.print( - Panel.fit( - Markdown( - "Page title: " - + obj.metadata["page_title"] - + " ==== Section title: " - + obj.metadata["section_title"] - + "\n\nContent:\n" - + obj.document - ) - ) - ) - 
ai_console.print( - Panel.fit( - Markdown( - "Distance: " - + str(obj.distance) - + "\n\nURL:\n" - + obj.metadata["url"] - ) - ) - ) - else: - print(results) - i = 0 - for document in results["documents"]: - for content in document: - print("Content " + str(i) + ": ") - ai_console.print(Panel.fit(Markdown(content))) - source = results["metadatas"][0][i] - this_id = results["ids"][0][i] - distance = results["distances"][0][i] - print(" source: " + source["source"]) - print(" URL: " + source["url"]) - print(" ID: " + this_id) - print(" Distance: " + str(distance)) - print("") - i += 1 - - -if __name__ == "__main__": - main() diff --git a/examples/gemini/python/docs-agent/docs_agent/tests/utilities/__init__.py b/examples/gemini/python/docs-agent/docs_agent/tests/utilities/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/examples/gemini/python/docs-agent/docs_agent/tests/utilities/test_helpers.py b/examples/gemini/python/docs-agent/docs_agent/tests/utilities/test_helpers.py new file mode 100644 index 000000000..1d5efad72 --- /dev/null +++ b/examples/gemini/python/docs-agent/docs_agent/tests/utilities/test_helpers.py @@ -0,0 +1,503 @@ +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+import unittest
+import os
+from pathlib import Path
+from unittest.mock import patch, MagicMock
+import bs4
+import urllib.parse
+from flask import Flask, url_for
+from docs_agent.utilities import helpers
+
+
+class TestHelpers(unittest.TestCase):
+    def setUp(self):
+        self.app = Flask(__name__)
+        self.app.config["TESTING"] = True
+        self.app.config["SERVER_NAME"] = "testserver"
+
+        # This is where we define the route so the url_for function will work
+        @self.app.route('/question', methods=['GET'])
+        def question():
+            return "test question"
+
+        self.app.add_url_rule('/chatui/question', endpoint='chatui.question')
+
+        self.app_context = self.app.app_context()
+        self.app_context.push()
+
+    def tearDown(self):
+        self.app_context.pop()
+
+    def test_expand_path_with_tilde(self):
+        """Tests that a path starting with '~/' is expanded correctly."""
+        # Patch controls the output of os.path.expanduser
+        expected_home_dir = "/usr/local/home/testuser"
+        with patch("os.path.expanduser", return_value=os.path.join(expected_home_dir, "Documents")):
+            input_path = "~/Documents"
+            expected_path = os.path.join(expected_home_dir, "Documents")
+            self.assertEqual(helpers.expand_user_path(input_path), expected_path)
+
+    def test_expand_path_without_tilde(self):
+        """Tests that a path not starting with '~/' is returned unchanged."""
+        input_path = "/absolute/path/to/file"
+        self.assertEqual(helpers.expand_user_path(input_path), input_path)
+
+    def test_expand_path_with_tilde_but_not_at_start(self):
+        """Tests that a path containing '~' but not at the start is returned unchanged."""
+        input_path = "/some/path/~/something"
+        self.assertEqual(helpers.expand_user_path(input_path), input_path)
+
+    def test_expand_path_with_empty_string_input(self):
+        """Tests that an empty string is returned unchanged."""
+        input_path = ""
+        self.assertEqual(helpers.expand_user_path(input_path), input_path)
+
+    def test_expand_path_none_input(self):
+        """Tests that None input returns None."""
+        input_path = None
+
self.assertIsNone(helpers.expand_user_path(input_path)) + + def test_expand_path_just_tilde_slash(self): + """Tests that just '~/' expands to the home directory.""" + expected_home_dir = "/usr/local/home/testuser" + with patch("os.path.expanduser", return_value=expected_home_dir): + input_path = "~/" + self.assertEqual(helpers.expand_user_path(input_path), expected_home_dir) + + def test_get_project_path(self): + # Assumes this test file is in the project root's tests directory + project_path = helpers.get_project_path() + # Verify that the path is a directory + self.assertTrue(os.path.isdir(project_path)) + # Verify that the project path is within one subdirectory above this test file + self.assertEqual(Path(__file__).parent.parent.parent.parent, project_path) + + def test_resolve_path_absolute(self): + # Test an absolute path + abs_path = "/absolute/path" + self.assertEqual(helpers.resolve_path(abs_path), abs_path) + + def test_resolve_path_relative(self): + # Test a relative path + rel_path = "relative/path" + expected_path = os.path.join(helpers.get_project_path(), rel_path) + self.assertEqual(helpers.resolve_path(rel_path), expected_path) + + def test_resolve_path_with_base_dir(self): + # Test a relative path with a specified base directory + base_dir = Path("/base") + rel_path = "sub/path" + expected_path = os.path.join(base_dir, rel_path) + self.assertEqual(helpers.resolve_path(rel_path, base_dir), expected_path) + + def test_end_path_backslash(self): + # Test adding a backslash to a path + self.assertEqual(helpers.end_path_backslash("path"), "path/") + self.assertEqual(helpers.end_path_backslash("path/"), "path/") + + def test_start_path_no_backslash(self): + # Test removing a leading backslash + self.assertEqual(helpers.start_path_no_backslash("/path"), "path") + self.assertEqual(helpers.start_path_no_backslash("path"), "path") + + def test_parallel_backup_dir(self): + # Test creating a parallel backup directory + test_path = "/path/to/file.txt" + 
backup_dir = helpers.parallel_backup_dir(test_path, "backup") + expected_path = "/path/to/backup/file.txt" + self.assertEqual(backup_dir, expected_path) + + def test_parallel_backup_dir_relative(self): + # Test creating a parallel backup directory with a relative path + test_path = "path/to/file.txt" + backup_dir = helpers.parallel_backup_dir(test_path, "backup") + expected_path = os.path.join(helpers.get_project_path(), 'path', 'to', 'backup', 'file.txt') + self.assertEqual(backup_dir, expected_path) + + def test_parallel_backup_dir_custom_backup_name(self): + # Test creating a parallel backup directory with a custom backup name + test_path = "/path/to/file.txt" + backup_dir = helpers.parallel_backup_dir(test_path, "custom") + expected_path = "/path/to/custom/file.txt" + self.assertEqual(backup_dir, expected_path) + + def test_return_pure_dir(self): + # Test returning the parent directory name + self.assertEqual(helpers.return_pure_dir("/path/to/file.txt"), "file.txt") + self.assertEqual(helpers.return_pure_dir("path/to/file.txt"), "file.txt") + + def test_add_scheme_url(self): + # Test adding a URL scheme + self.assertEqual(helpers.add_scheme_url("google.com"), "https://google.com") + self.assertEqual(helpers.add_scheme_url("http://google.com"), "http://google.com") + self.assertEqual(helpers.add_scheme_url("https://google.com"), "https://google.com") + self.assertEqual(helpers.add_scheme_url("google.com", "http"), "http://google.com") + + def test_parse_related_questions_response_to_html_list(self): + # Test parsing related questions response and converting to HTML + response = """ +
    +
+        <ul>
+        <li><p>This is a
+        paragraph.</p></li>
+        <li><p><code>This is a code example.</code></p></li>
+        <li>This is a simple item.</li>
+        </ul>
+ """ + parsed_html = helpers.parse_related_questions_response_to_html_list(response) + # Verify there are no p or code tags + self.assertIsNone(parsed_html.find("p")) + self.assertIsNone(parsed_html.find("code")) + + # Test there are now tags instead of raw string + a_tags = parsed_html.find_all("a") + self.assertEqual(len(a_tags), 3) + + self.assertEqual( + a_tags[0].get("href"), + url_for("chatui.question", ask=urllib.parse.quote_plus("This is a paragraph."), _external=True), + ) + self.assertEqual( + a_tags[1].get("href"), + url_for("chatui.question", ask=urllib.parse.quote_plus("This is a code example."), _external=True), + ) + self.assertEqual( + a_tags[2].get("href"), + url_for("chatui.question", ask=urllib.parse.quote_plus("This is a simple item."), _external=True), + ) + + self.assertEqual(a_tags[0].string, "This is a paragraph.") + self.assertEqual(a_tags[1].string, "This is a code example.") + self.assertEqual(a_tags[2].string, "This is a simple item.") + + def test_build_list_html_links_no_content(self): + # Test building a list of HTML links + urls = ["https://example.com/section1", "https://example.com/section2"] + section_titles = ["Section 1", "Section 2"] + page_titles = ["Page 1", "Page 2"] + distances = [0.1, 0.2] + + html_list = helpers.build_list_html_links( + urls, section_titles, page_titles, distances + ) + self.assertIn("
<li>", html_list)
+        self.assertIn("Section 1", html_list)
+        self.assertIn("Section 2", html_list)
+        self.assertIn("Page 1", html_list)
+        self.assertIn("Page 2", html_list)
+        self.assertIn("Distance: 0.1", html_list)
+        self.assertIn("Distance: 0.2", html_list)
+
+    def test_build_list_html_links_with_content(self):
+        urls = ["https://example.com/section1", "https://example.com/section2"]
+        section_titles = ["Section 1", "Section 2"]
+        page_titles = ["Page 1", "Page 2"]
+        distances = [0.1, 0.2]
+        section_content = ["Content 1", "Content 2"]
+
+        html_list = helpers.build_list_html_links(
+            urls, section_titles, page_titles, distances, section_content=section_content
+        )
+
+        self.assertIn("
<li>", html_list)
+        self.assertIn("Section 1", html_list)
+        self.assertIn("Section 2", html_list)
+        self.assertIn("Page 1", html_list)
+        self.assertIn("Page 2", html_list)
+        self.assertIn("Content 1", html_list)
+        self.assertIn("Content 2", html_list)
+        self.assertIn("Distance: 0.1", html_list)
+        self.assertIn("Distance: 0.2", html_list)
+
+    def test_build_list_html_links_max_count(self):
+        urls = ["https://example.com/section1", "https://example.com/section2", "https://example.com/section3"]
+        section_titles = ["Section 1", "Section 2", "Section 3"]
+        page_titles = ["Page 1", "Page 2", "Page 3"]
+        distances = [0.1, 0.2, 0.3]
+
+        html_list = helpers.build_list_html_links(
+            urls, section_titles, page_titles, distances, max_count=2
+        )
+        self.assertIn("
<li>", html_list)
+        self.assertIn("Section 1", html_list)
+        self.assertIn("Section 2", html_list)
+        self.assertNotIn("Section 3", html_list)
+        self.assertIn("Page 1", html_list)
+        self.assertIn("Page 2", html_list)
+        self.assertNotIn("Page 3", html_list)
+        self.assertIn("Distance: 0.1", html_list)
+        self.assertIn("Distance: 0.2", html_list)
+        self.assertNotIn("Distance: 0.3", html_list)
+
+    def test_named_link_html(self):
+        # Test building an HTML link
+        html_link = helpers.named_link_html("google.com", "Google")
+        self.assertEqual(html_link, 'Google')
+
+        # Test with a class attribute. Sort the attributes for comparison.
+        html_link = helpers.named_link_html("google.com", "Google", class_="test")
+        self.assertEqual(self._sort_html_attributes(html_link), self._sort_html_attributes('Google'))
+
+        # Test with no label
+        html_link = helpers.named_link_html("google.com")
+        self.assertEqual(html_link, '')
+
+        # Test with other kwargs
+        html_link = helpers.named_link_html("google.com", "Google", id="test-id", style="color: blue;")
+        self.assertEqual(self._sort_html_attributes(html_link), self._sort_html_attributes('Google'))
+
+        # Test with http:// URL
+        html_link = helpers.named_link_html("http://google.com", "Google")
+        self.assertEqual(html_link, 'Google')
+
+        # Test with quotes in attribute values (the KEY TEST CASE)
+        html_link = helpers.named_link_html("example.com", "Example", title='This is a "test" with quotes.')
+        self.assertEqual(html_link, 'Example')
+
+        # Test another edge case with special characters
+        html_link = helpers.named_link_html("example.com", "Example & More", title="Special & stuff.")
+        self.assertEqual(html_link, 'Example & More')
+
+    def test_named_link_html_no_label(self):
+        # Test building an HTML link with no label
+        html_link = helpers.named_link_html("google.com")
+        soup1 = bs4.BeautifulSoup(html_link, features="html.parser")
+        soup2 = bs4.BeautifulSoup('', features="html.parser")
+        self.assertEqual(soup1.prettify(), soup2.prettify())
+
+
def test_named_link_md(self): + # Test building a Markdown link + md_link = helpers.named_link_md("google.com", "Google") + self.assertEqual(md_link, "[Google](https://google.com)") + + def test_trim_section_for_page_link(self): + # Test trimming the section from a URL + self.assertEqual( + helpers.trim_section_for_page_link("https://example.com/page#section"), + "https://example.com/page", + ) + self.assertEqual( + helpers.trim_section_for_page_link("https://example.com/page"), + "https://example.com/page", + ) + + def test_md_to_html(self): + # Test converting markdown to html + md = "# Header\n\nThis is a paragraph." + html = helpers.md_to_html(md) + self.assertTrue("

<h1>Header</h1>" in html)
+        self.assertTrue("<p>This is a paragraph.</p>

    " in html) + + def _sort_html_attributes(self, html_string): + """Helper function to sort HTML attributes within a tag.""" + soup = bs4.BeautifulSoup(html_string, 'html.parser') + tag = soup.find('a') # Find the tag + if tag: + attrs = dict(sorted(tag.attrs.items())) # Sort attributes + tag.attrs = attrs # Replace with sorted attributes + return str(soup) + + +# Helper function to create a dummy file for testing existence checks +def create_test_file(filepath: Path): + filepath.parent.mkdir(parents=True, exist_ok=True) + filepath.touch() + + +# Helper function to remove a dummy file +def remove_test_file(filepath: Path): + if filepath.exists(): + filepath.unlink() + # Clean up parent directory + try: + filepath.parent.rmdir() + except OSError: + pass + + +class TestResolveAndEnsurePath(unittest.TestCase): + """Tests for the resolve_and_ensure_path function.""" + def setUp(self): + """Setup for test cases.""" + # Create a temporary directory for testing + self.test_dir = Path("./temp_test_dir_resolve_ensure") + self.test_dir.mkdir(exist_ok=True) + self.existing_file = self.test_dir / "existing_file.txt" + create_test_file(self.existing_file) + self.existing_abs_path = str(self.existing_file.resolve()) + self.non_existing_file = self.test_dir / "non_existing_file.txt" + self.non_existing_abs_path = str(self.non_existing_file.resolve()) + + # Mock get_project_path to return our test directory + self.project_path_patcher = patch("docs_agent.utilities.helpers.get_project_path", return_value=self.test_dir.resolve()) + self.mock_get_project_path = self.project_path_patcher.start() + + # Mock logging + self.log_patcher = patch("docs_agent.utilities.helpers.logging") + self.mock_logging = self.log_patcher.start() + + # Mock Path + self.path_patcher = patch("docs_agent.utilities.helpers.Path") + self.mock_path_class = self.path_patcher.start() + + def tearDown(self): + """Teardown for test cases.""" + # Clean up temporary files and directory + 
remove_test_file(self.existing_file) + if self.test_dir.exists(): + for item in self.test_dir.iterdir(): + if item.is_file(): + item.unlink() + self.test_dir.rmdir() + self.project_path_patcher.stop() + self.log_patcher.stop() + self.path_patcher.stop() + + def test_resolve_and_ensure_path_none_input(self): + """Test resolve_and_ensure_path with None input.""" + self.assertIsNone(helpers.resolve_and_ensure_path(None)) + self.mock_logging.error.assert_not_called() + + def test_resolve_and_ensure_path_empty_string_input(self): + """Test resolve_and_ensure_path with an empty string input.""" + self.assertIsNone(helpers.resolve_and_ensure_path("")) + self.mock_logging.error.assert_not_called() + + @patch("docs_agent.utilities.helpers.expand_user_path") + @patch("docs_agent.utilities.helpers.resolve_path") + def test_resolve_and_ensure_path_existing_file_check_exists_true(self, mock_resolve_path, mock_expand_user): + """Test with an existing file and check_exists=True.""" + input_path = "some/path/existing_file.txt" + mock_expand_user.return_value = input_path + mock_resolve_path.return_value = self.existing_abs_path + + mock_path_instance = MagicMock() + mock_path_instance.exists.return_value = True + mock_path_instance.__str__.return_value = self.existing_abs_path + self.mock_path_class.return_value = mock_path_instance + + result = helpers.resolve_and_ensure_path(input_path, check_exists=True) + + self.assertEqual(result, self.existing_abs_path) + mock_expand_user.assert_called_once_with(input_path) + mock_resolve_path.assert_called_once_with(input_path) + self.mock_path_class.assert_called_once_with(self.existing_abs_path) + mock_path_instance.exists.assert_called_once() + self.mock_logging.error.assert_not_called() + + @patch("docs_agent.utilities.helpers.expand_user_path") + @patch("docs_agent.utilities.helpers.resolve_path") + def test_resolve_and_ensure_path_non_existing_file_check_exists_true(self, mock_resolve_path, mock_expand_user): + """Test with a 
non-existing file and check_exists=True.""" + input_path = "some/path/non_existing_file.txt" + mock_expand_user.return_value = input_path + mock_resolve_path.return_value = self.non_existing_abs_path + + mock_path_instance = MagicMock() + mock_path_instance.exists.return_value = False + mock_path_instance.__str__.return_value = self.non_existing_abs_path + self.mock_path_class.return_value = mock_path_instance + + result = helpers.resolve_and_ensure_path(input_path, check_exists=True) + + self.assertIsNone(result) + mock_expand_user.assert_called_once_with(input_path) + mock_resolve_path.assert_called_once_with(input_path) + self.mock_path_class.assert_called_once_with(self.non_existing_abs_path) + mock_path_instance.exists.assert_called_once() + self.mock_logging.error.assert_called_once_with( + f"[Error] Cannot access the input path: {self.non_existing_abs_path}" + ) + + @patch("docs_agent.utilities.helpers.expand_user_path") + @patch("docs_agent.utilities.helpers.resolve_path") + def test_resolve_and_ensure_path_non_existing_file_check_exists_false(self, mock_resolve_path, mock_expand_user): + """Test with a non-existing file and check_exists=False.""" + input_path = "some/path/non_existing_file.txt" + mock_expand_user.return_value = input_path + mock_resolve_path.return_value = self.non_existing_abs_path + + mock_path_instance = MagicMock() + mock_path_instance.__str__.return_value = self.non_existing_abs_path + self.mock_path_class.return_value = mock_path_instance + + result = helpers.resolve_and_ensure_path(input_path, check_exists=False) + + self.assertEqual(result, self.non_existing_abs_path) + mock_expand_user.assert_called_once_with(input_path) + mock_resolve_path.assert_called_once_with(input_path) + self.mock_path_class.assert_called_once_with(self.non_existing_abs_path) + mock_path_instance.exists.assert_not_called() + self.mock_logging.error.assert_not_called() + + @patch("docs_agent.utilities.helpers.expand_user_path", return_value="expanded/path") 
+ @patch("docs_agent.utilities.helpers.resolve_path", side_effect=FileNotFoundError("Mock file not found")) + def test_resolve_and_ensure_path_resolve_path_file_not_found(self, mock_resolve_path, mock_expand_user): + """Test when resolve_path raises FileNotFoundError.""" + input_path = "some/invalid/path" + result = helpers.resolve_and_ensure_path(input_path) + + self.assertIsNone(result) + mock_expand_user.assert_called_once_with(input_path) + mock_resolve_path.assert_called_once_with("expanded/path") + self.mock_path_class.assert_not_called() + self.mock_logging.error.assert_called_once_with( + "[Error] Failed to resolve path: Mock file not found" + ) + + @patch("docs_agent.utilities.helpers.expand_user_path", return_value="expanded/path") + @patch("docs_agent.utilities.helpers.resolve_path", side_effect=PermissionError("Mock permission error")) + def test_resolve_and_ensure_path_resolve_path_generic_exception(self, mock_resolve_path, mock_expand_user): + """Test when resolve_path raises a generic Exception.""" + input_path = "some/problematic/path" + result = helpers.resolve_and_ensure_path(input_path) + + self.assertIsNone(result) + mock_expand_user.assert_called_once_with(input_path) + mock_resolve_path.assert_called_once_with("expanded/path") + self.mock_path_class.assert_not_called() + # Use single quotes around input_path to match the actual log message + self.mock_logging.error.assert_called_once_with( + f"[Error] An unexpected error occurred resolving path '{input_path}': Mock permission error" + ) + + @patch("docs_agent.utilities.helpers.expand_user_path") + @patch("docs_agent.utilities.helpers.resolve_path") + def test_resolve_and_ensure_path_path_exists_exception(self, mock_resolve_path, mock_expand_user): + """Test when Path(...).exists() raises an exception.""" + input_path = "some/weird/path" + resolved_path_str = "/resolved/weird/path" + mock_expand_user.return_value = input_path + mock_resolve_path.return_value = resolved_path_str + + 
mock_path_instance = MagicMock() + mock_path_instance.exists.side_effect = OSError("Mock OS error checking existence") + mock_path_instance.__str__.return_value = resolved_path_str + self.mock_path_class.return_value = mock_path_instance + + result = helpers.resolve_and_ensure_path(input_path, check_exists=True) + + self.assertIsNone(result) + mock_expand_user.assert_called_once_with(input_path) + mock_resolve_path.assert_called_once_with(input_path) + self.mock_path_class.assert_called_once_with(resolved_path_str) + mock_path_instance.exists.assert_called_once() + # Use single quotes around input_path to match the actual log message + self.mock_logging.error.assert_called_once_with( + f"[Error] An unexpected error occurred resolving path '{input_path}': Mock OS error checking existence" + ) + +if __name__ == "__main__": + unittest.main() \ No newline at end of file diff --git a/examples/gemini/python/docs-agent/docs_agent/utilities/config.py b/examples/gemini/python/docs-agent/docs_agent/utilities/config.py index 904164912..2c8836acf 100644 --- a/examples/gemini/python/docs-agent/docs_agent/utilities/config.py +++ b/examples/gemini/python/docs-agent/docs_agent/utilities/config.py @@ -20,6 +20,7 @@ import sys import yaml import typing +from pathlib import Path from absl import logging from docs_agent.utilities.helpers import get_project_path @@ -166,6 +167,83 @@ def __str__(self): return self.input_list +class MCPServerConfig: + def __init__( + self, + server_type: str, + name: typing.Optional[str] = None, + # Stdio specific + command: typing.Optional[str] = None, + args: typing.Optional[typing.List[str]] = None, + env: typing.Optional[typing.Dict[str, str]] = None, + # SSE specific + url: typing.Optional[str] = None, + ): + self.server_type = server_type.lower() + self.name = name + self.command = command + self.args = args or [] + self.env = env or {} + self.url = url + + # Server validation + if self.server_type not in ["stdio", "sse"]: + raise 
ValueError(f"Unsupported MCP server_type: {server_type}. Must be 'stdio' or 'sse'.") + if self.server_type == "stdio" and not self.command: + raise ValueError("MCP server_type 'stdio' requires a 'command'.") + if self.server_type == "sse" and not self.url: + raise ValueError("MCP server_type 'sse' requires a 'url'.") + if self.server_type == "stdio" and self.env is not None and not isinstance(self.env, dict): + raise ValueError("MCP server_type 'stdio' requires 'env' to be a dictionary if provided.") + + def __str__(self): + details = [f"Type: {self.server_type}"] + if self.name: + details.append(f"Name: {self.name}") + if self.server_type == "stdio": + details.append(f"Command: {self.command}") + if self.args: + details.append(f"Args: {' '.join(self.args)}") + if self.env: + env_str = ", ".join(f"{k}={v}" for k, v in self.env.items()) + details.append(f"Env: [{env_str}]") + elif self.server_type == "sse": + details.append(f"URL: {self.url}") + return ", ".join(details) + + +class ReadMCPServerConfigs: + def __init__(self, input_list: list[dict]): + self.input_list = input_list + + def returnMCPServerConfigs(self) -> list[MCPServerConfig]: + configs = [] + if not isinstance(self.input_list, list): + logging.error("Expected a list of MCP server configs.") + return [] + + for item in self.input_list: + if not isinstance(item, dict): + logging.warning(f"Skipping item in MCP server config list: {item}") + continue + try: + config_item = MCPServerConfig( + server_type=item["server_type"], + name=item.get("name"), + command=item.get("command"), + args=item.get("args"), + env=item.get("env"), + url=item.get("url"), + ) + configs.append(config_item) + except KeyError as error: + logging.error(f"MCP server config item is missing required key {error}: {item}") + continue + except ValueError as error: + logging.error(f"Invalid MCP server config item: {error}. 
Config: {item}") + continue + return configs + class Models: def __init__( self, @@ -264,25 +342,13 @@ class Conditions: def __init__( self, condition_text: str, - fact_check_question: typing.Optional[str] = None, model_error_message: typing.Optional[str] = None, ): - default_fact_check_question = ( - "Can you compare the text below to the information provided in this" - " prompt above and write a short message that warns the readers" - " about which part of the text they should consider" - " fact-checking? (Please keep your response concise, focus on only" - " one important item, but DO NOT USE BOLD TEXT IN YOUR RESPONSE.)" - ) default_model_error_message = ( "Gemini is not able to answer this question at the moment." " Rephrase the question and try asking again." ) self.condition_text = condition_text - if fact_check_question is None: - self.fact_check_question = default_fact_check_question - else: - self.fact_check_question = fact_check_question if model_error_message is None: self.model_error_message = default_model_error_message else: @@ -292,8 +358,6 @@ def __str__(self): help_str = "" if self.condition_text is not None and self.condition_text != "": help_str += f"Condition text: {self.condition_text}\n" - if self.fact_check_question is not None and self.fact_check_question != "": - help_str += f"Fact check question: {self.fact_check_question}\n" if self.model_error_message is not None and self.model_error_message != "": help_str += f"Model error message: {self.model_error_message}\n" return help_str @@ -316,7 +380,6 @@ def returnConditions(self) -> Conditions: # Using .get let's you specify optional keys condition_item = Conditions( condition_text=item["condition_text"], - fact_check_question=item.get("fact_check_question", None), model_error_message=item.get("model_error_message", None), ) conditions.append(condition_item) @@ -351,6 +414,7 @@ def __init__( enable_delete_chunks: str = "False", secondary_db_type: typing.Optional[str] = None, 
secondary_corpus_name: typing.Optional[str] = None, + mcp_servers: typing.Optional[list[MCPServerConfig]] = None, ): self.product_name = product_name self.docs_agent_config = docs_agent_config @@ -371,6 +435,7 @@ def __init__( self.enable_delete_chunks = enable_delete_chunks self.secondary_db_type = secondary_db_type self.secondary_corpus_name = secondary_corpus_name + self.mcp_servers = mcp_servers def __str__(self): # Extracts the list of Inputs @@ -383,6 +448,11 @@ def __str__(self): for item in self.db_configs: dbconfigs.append(str(item)) db_config_str = "\n".join(dbconfigs) + mcp_servers = [] + if self.mcp_servers: + for item in self.mcp_servers: + mcp_servers.append(str(item)) + mcp_server_str = "\n".join(mcp_servers) help_str = "" if self.product_name is not None and self.product_name != "": help_str += f"Product: {self.product_name}\n" @@ -422,6 +491,8 @@ def __str__(self): help_str += f"\nInputs:\n{input_str}\n" if self.conditions is not None and self.conditions != "": help_str += f"Conditions:\n{self.conditions}\n" + if mcp_server_str != "": + help_str += f"\nMCP Servers:\n{mcp_server_str}\n" return help_str @@ -450,23 +521,68 @@ def return_first(self): # returnProducts() with an optional product flag will return # all product configurations or the specified one class ReadConfig: - # Tries to ingest the configuration file and validate its keys - # Defaults to the config.yaml file in the source of the project - def __init__( - self, yaml_path: str = os.path.join(get_project_path(), "config.yaml") - ): - self.yaml_path = yaml_path + """ + Reads a configuration file to import configuration settings. + + Attributes: + yaml_path (str): The path to the YAML configuration file. + config_values (dict): The dictionary containing the configuration values. + """ + def __init__(self, yaml_path_input: str | None = None): + """ + Initializes ReadConfig. + + Args: + yaml_path_input: Optional path to the config file. Can be absolute or + relative.
If relative, it's resolved from the project root. + If None, defaults to 'config.yaml' in the project root. + """ + # Default config file name + config_filename = "config.yaml" + calculated_yaml_path: Path + try: - with open(yaml_path, "r", encoding="utf-8") as inp_yaml: + # Find the project root first based on the config_filename + project_root = get_project_path(marker=config_filename) + if yaml_path_input is None: + # Default: Use config.yaml from the project root + calculated_yaml_path = project_root / config_filename + logging.info(f"No config path provided, using default: {calculated_yaml_path}") + else: + # If path is provided + explicit_path = Path(yaml_path_input) + if explicit_path.is_absolute(): + # Use the provided absolute path + calculated_yaml_path = explicit_path.resolve() + logging.info(f"Using specified absolute config path: {calculated_yaml_path}") + else: + # If relative, resolve from the project root + calculated_yaml_path = (project_root / explicit_path).resolve() + logging.info(f"Resolving specified relative config path '{yaml_path_input}' " + f"relative to project root '{project_root}': {calculated_yaml_path}") + # Store the calculated path + self.yaml_path = str(calculated_yaml_path) + # Load the configuration from the absolute path + if not calculated_yaml_path.is_file(): + raise FileNotFoundError(f"Configuration file not found at the calculated path: {self.yaml_path}") + + with open(calculated_yaml_path, "r", encoding="utf-8") as inp_yaml: self.config_values = yaml.safe_load(inp_yaml) - # self.yaml_path = yaml_path - except FileNotFoundError: - logging.error(f"The config file {self.yaml_path} does not exist.") - # Exits the scripts if there is no valid config file - return sys.exit(1) + logging.info(f"Successfully loaded config file: {self.yaml_path}") + + except FileNotFoundError as e: + logging.error(e) + sys.exit(1) + except yaml.YAMLError as e: + logging.error(f"Error parsing YAML file {getattr(self, 'yaml_path', yaml_path_input)}: 
{e}") + sys.exit(1) + except Exception as e: + logging.error(f"An unexpected error occurred during configuration loading: {e}", exc_info=True) + sys.exit(1) def __str__(self): - return self.yaml_path + # Returns the absolute path to the config file or provides an error message + return getattr(self, "yaml_path", "Config path not determined") def returnProducts(self, product: typing.Optional[str] = None) -> ConfigFile: products = [] @@ -548,6 +664,7 @@ def returnProducts(self, product: typing.Optional[str] = None) -> ConfigFile: enable_delete_chunks=enable_delete_chunks, secondary_db_type=secondary_db_type, secondary_corpus_name=secondary_corpus_name, + mcp_servers=item.get("mcp_servers", None), ) # This is done for keys with children # Inputs @@ -567,6 +684,15 @@ def returnProducts(self, product: typing.Optional[str] = None) -> ConfigFile: input_list=item["db_configs"] ).returnDbConfigs() product_config.db_configs = new_db_configs + # MCP Servers + mcp_servers_raw = item.get("mcp_servers") + mcp_server_configs = None + if mcp_servers_raw is not None: + if isinstance(mcp_servers_raw, list): + mcp_server_configs = ReadMCPServerConfigs( + input_list=mcp_servers_raw + ).returnMCPServerConfigs() + product_config.mcp_servers = mcp_server_configs # Append products.append(product_config) except KeyError as error: @@ -608,7 +734,7 @@ def return_config_and_product( if config_file is None: loaded_config = ReadConfig() else: - loaded_config = ReadConfig(yaml_path=config_file) + loaded_config = ReadConfig(yaml_path_input=config_file) final_products = [] if product == () or product == [""]: product_config = loaded_config.returnProducts() diff --git a/examples/gemini/python/docs-agent/docs_agent/utilities/helpers.py b/examples/gemini/python/docs-agent/docs_agent/utilities/helpers.py index 2e51d777f..4bbb08d3d 100644 --- a/examples/gemini/python/docs-agent/docs_agent/utilities/helpers.py +++ b/examples/gemini/python/docs-agent/docs_agent/utilities/helpers.py @@ -16,45 +16,244 @@
"""General utility functions""" -import urllib, os +import os +import urllib + +from absl import logging from flask import url_for import bs4 +import html import typing from pathlib import Path, PurePath import markdown +import yaml +from PIL import Image + + +def expand_user_path(path_str: typing.Optional[str]) -> typing.Optional[str]: + """ + Expands a path that starts with '~' to the user's home directory. + + Args: + path_str: The path string to expand. + + Returns: + The expanded path string, or the original path string if it doesn't + start with '~'. + """ + if path_str and path_str.startswith("~/"): + return os.path.expanduser(path_str) + return path_str + + +def resolve_and_ensure_path( + path_str: typing.Optional[str], check_exists: bool = True +) -> typing.Optional[str]: + """ + Resolves a path (handling '~') and optionally checks if it exists. + + Args: + path_str: The path string to resolve. + check_exists: If True, checks if the resolved path exists and logs an error if not. + + Returns: + The resolved absolute path as a string, or None if input is None or check fails.
+ """ + if not path_str: + return None + expanded_path = expand_user_path(path_str) + try: + resolved = resolve_path(expanded_path) + resolved_path = Path(resolved) -# This retrieves the project root, regardless of module path -def get_project_path() -> Path: - return Path(__file__).parent.parent.parent + if check_exists and not resolved_path.exists(): + logging.error(f"[Error] Cannot access the input path: {resolved_path}") + return None + return str(resolved_path) + except FileNotFoundError as e: + logging.error(f"[Error] Failed to resolve path: {e}") + return None + except Exception as e: + logging.error( + f"[Error] An unexpected error occurred resolving path '{path_str}': {e}" + ) + return None + + +def create_output_directory(output_path_str: str) -> typing.Optional[str]: + """ + Determines the output directory path, creates it if necessary, + and returns the full output file path. + Args: + output_path_str: The desired output file path (can be relative, absolute, or start with ~). -# Function to resolve path. If no base_dir is specified, use the project root -def resolve_path(rel_or_abs_path: str, base_dir: Path = get_project_path()): - path = rel_or_abs_path.strip() - if path.startswith("/"): - return path + Returns: + The full absolute path to the output file, or None if directory creation fails. + """ + if not output_path_str or output_path_str.lower() == "none": + return None + + output_path_str = expand_user_path(output_path_str) + output_path_obj = Path(output_path_str) + + if output_path_obj.is_absolute(): + base_out = output_path_obj.parent + out_filename = output_path_obj.name else: - return os.path.join(base_dir, path) + # Default to project's agent_out directory + try: + base_out = Path(get_project_path()) / "agent_out" + except FileNotFoundError: + logging.warning( + "Project root directory not found, using current directory for agent_out." 
+ ) + base_out = Path.cwd() / "agent_out" + out_filename = output_path_obj.name + + # Try directories in the following order: + # 1. The base output directory determined above + # 2. ~/docs_agent/agent_out in the user's home directory + # 3. /tmp/docs_agent/agent_out as a last resort + potential_dirs = [ + base_out, + Path(os.path.expanduser("~/docs_agent/agent_out")), + Path("/tmp/docs_agent/agent_out"), + ] + + created_dir = None + for potential_dir in potential_dirs: + try: + potential_dir.mkdir(parents=True, exist_ok=True) + # Check write permissions + test_file = potential_dir / ".writable_test" + try: + test_file.touch() + test_file.unlink() + created_dir = potential_dir + break + except OSError: + logging.warning( + f"Cannot write to directory: {potential_dir}. Trying next fallback." + ) + continue + except OSError as e: + logging.warning( + f"Failed to create or access directory {potential_dir}: {e}. Trying next fallback." + ) + + if not created_dir: + logging.error("Failed to create any suitable output directory.") + return None + + full_output_path = created_dir / out_filename + return str(full_output_path) + + +def get_project_path(marker: str = "config.yaml") -> Path: + """ + Finds the project root directory by searching upwards for a specified marker file. + + Args: + marker: The name of the file to search for (default is "config.yaml"). + + Returns: + The path to the project root directory. + + Raises: + FileNotFoundError: If the marker file is not found. + """ + start_dir = None + try: + # Start search from the directory containing this helpers.py file + start_dir = Path(__file__).resolve().parent + except NameError: + logging.warning( + "'__file__' not defined. Using current working directory as start path. This might be unreliable." + ) + start_dir = Path.cwd() + except Exception as e: + logging.warning( + f"Error determining start directory using '__file__': {e}. Falling back to CWD."
+ ) + start_dir = Path.cwd() + + current_dir: Path = start_dir + while True: + # Checks if the current directory contains the marker file + # If so, return the current directory as the project root + # If not, try the parent directory + if (current_dir / marker).exists(): + # Found the marker, so this is the project root. + return current_dir + + parent_dir = current_dir.parent + if parent_dir == current_dir: + # Reached the filesystem root + raise FileNotFoundError( + f"Could not find project marker '{marker}' from {start_dir}. " + f"Make sure that '{marker}' exists at the project root directory." + ) + current_dir = parent_dir + + +def resolve_path(rel_or_abs_path: str, base_dir: Path = get_project_path()) -> str: + """ + Resolves a relative or absolute path to a canonical absolute path. + + Args: + rel_or_abs_path: The path to resolve (can be relative or absolute). + base_dir: The base directory to use for relative paths (defaults to the project root). + + Returns: + The absolute path as a string. + """ + path_str = rel_or_abs_path.strip() + path_obj = Path(path_str) + + # If the path is absolute, return it as is. + if path_obj.is_absolute(): + return str(path_obj.resolve()) + else: + # Resolves the path against base_dir to make it absolute. + resolved = (base_dir / path_obj).resolve() + return str(resolved) -# Function to add / to a path. def end_path_backslash(input_path: str): + """ + Adds a trailing slash ('/') to a path if it doesn't already have one. + + Args: + input_path: The path to add the slash to. + + Returns: + The path with a trailing slash. + """ if not input_path.endswith("/"): input_path = input_path + "/" return input_path -# Function to remove / from a path to combine with url def start_path_no_backslash(input_path: str): + """ + Removes a leading slash ('/') from a path if it has one. + + Args: + input_path: The path to remove the slash from. + + Returns: + The path without a leading slash.
+ """ if input_path.startswith("/"): # Drop first character input_path = input_path[1:] return input_path -# Function to create a path to a copy directory in the parent directory. -# Backup dir is relevant to the input path root def parallel_backup_dir(rel_or_abs_path: str, backup_dir_name: str = "backup"): path = Path(resolve_path(rel_or_abs_path)) pure_path = PurePath(resolve_path(rel_or_abs_path)) @@ -66,59 +265,67 @@ def parallel_backup_dir(rel_or_abs_path: str, backup_dir_name: str = "backup"): return backup_dir -# Function to return the parent directory -def return_pure_dir(rel_or_abs_path: str): +def return_pure_dir(rel_or_abs_path: str) -> str: + """ + Returns the parent directory of a given path. + + Args: + rel_or_abs_path: The path to get the parent directory of. + + Returns: + The parent directory as a string. + """ pure_path = PurePath(resolve_path(rel_or_abs_path)) return str(pure_path.name) -# This function adds a scheme URL -def add_scheme_url(url: str, scheme: str = "https"): +def add_scheme_url(url: str, scheme: str = "https") -> str: + """ + Adds a scheme (e.g., "https://") to a URL if it doesn't already have one. + + Args: + url: The URL to add the scheme to. + scheme: The scheme to add (default is "https"). + + Returns: + The URL with the scheme added, or the original URL if it already has a scheme. + """ return url if "://" in url else f"{scheme}://{url}" -# Parse a response containing a list of related questions from the language model -# and convert it into an HTML-based list. def parse_related_questions_response_to_html_list(response): + """ + Parses a related questions response and converts it to an HTML list. + + Args: + response: The response containing related questions (HTML). + + Returns: + A BeautifulSoup object representing the HTML list. + """ soup = bs4.BeautifulSoup(response, "html.parser") for item in soup.find_all("li"): if item.find("code"): # If there are tags, strip the tags. 
text = item.text - link = soup.new_tag( - "a", - href=url_for("chatui.question", ask=urllib.parse.quote_plus(text)), - ) - link.string = text - item.string = "" - item.code = "" - item.append(link) elif item.find("p"): # If there are <p> tags, strip the tags. - text = item.find("p").text - link = soup.new_tag( - "a", - href=url_for("chatui.question", ask=urllib.parse.quote_plus(text)), - ) - link.string = text - item.string = "" - item.append(link) + text = item.text # Corrected: Get the full text of the <li> elif item.string is not None: - link = soup.new_tag( - "a", - href=url_for( - "chatui.question", ask=urllib.parse.quote_plus(item.string) - ), - ) - link.string = item.string - item.string = "" - item.append(link) + text = item.string + else: + continue # Skip if no text content + + link = soup.new_tag( + "a", + href=url_for("chatui.question", ask=urllib.parse.quote_plus(text)), + ) + link.string = text + item.clear() # Remove all existing children of the <li> + item.append(link) return soup -# Allows us to build a list of html, for example for fact checker and limit to -# a max_count. Optional section_content to display content chunks along with URLs -# This is not in use, but a good example to better manipulate data def build_list_html_links( urls: list, section_titles: list, @@ -127,6 +334,20 @@ def build_list_html_links( page_titles: list, distances: list, section_content: typing.Optional[list] = None, max_count: typing.Optional[int] = None, ): + """ + Builds an HTML list of links from given URLs, titles, and distances. + + Args: + urls: A list of URLs. + section_titles: A list of section titles corresponding to the URLs. + page_titles: A list of page titles corresponding to the URLs. + distances: A list of distances corresponding to the URLs. + section_content: Optional list of section content corresponding to the URLs. + max_count: Optional maximum number of links to include in the list. + + Returns: + An HTML string representing the list of links. + """ if max_count == None: max_count = len(urls) md_list = "" @@ -153,31 +374,186 @@ def build_list_html_links( # These functions are made to be used in a Jinja template when rendering a page -# Build an html URL link def named_link_html(url: str, label: str = "", **kwargs): - soup = bs4.BeautifulSoup("") + """Builds an HTML URL link with optional attributes. + + Args: + url: The URL for the link. + label: The text label for the link. + **kwargs: Additional HTML attributes (e.g., class, id, title). + + Returns: + A string containing the HTML link. + """ + soup = bs4.BeautifulSoup("", "html.parser") final_url = add_scheme_url(url) - attrs = dict(href=f"{final_url}", target=f"_blank", **kwargs) + attrs = {"href": final_url, "target": "_blank"} + for k, v in kwargs.items(): + # Remove trailing underscore from attribute names + key = k.rstrip("_") + attrs[key] = v tag = soup.new_tag(name="a", attrs=attrs) - # leading and trailing blank space doesn't get removed?
- tag.string = label.strip() - return tag.prettify() + tag.append(label) + + attr_string = " ".join(f'{k}="{html.escape(str(v))}"' for k, v in attrs.items()) + return f"<a {attr_string}>{label}</a>" # Directly use label def named_link_md(url: str, label: str = ""): + """Builds a Markdown URL link. + + Args: + url: The URL for the link. + label: The text label for the link. + + Returns: + A string containing the Markdown link. + """ final_url = add_scheme_url(url) link = f"[{label}]({final_url})" return link -# Create a top level link for a page def trim_section_for_page_link(url: str): + """ + Trims a URL to remove the section part, keeping only the page URL. + + Args: + url: The URL to trim. + + Returns: + The page URL without the section part. + """ anchor_marker_url = "#" page_url = url.split(anchor_marker_url, 1)[0] return page_url -# Function to convert md to html for flask template def md_to_html(md: str): + """ + Converts a Markdown string to HTML. + + Args: + md: The Markdown string to convert. + + Returns: + The HTML representation of the Markdown string. + """ html = markdown.markdown(md) return html + + +def open_file(file_path) -> str: + """ + Opens a text file and returns its content. + + Args: + file_path: The path to the file. + + Returns: + The content of the file as a string, or an empty string if the file + cannot be opened. + """ + file_content = "" + file_type = identify_file_type(file_path) + if file_type == "text": + try: + with open(file_path, "r", encoding="utf-8") as auto: + file_content = auto.read() + except Exception: + logging.error(f"Cannot open the text {file_path}\n") + return file_content + + +def open_image(file_path) -> typing.Optional[Image.Image]: + """ + Opens an image file and returns its content.
+ + Args: + file_path: The path to the image file. + + Returns: + The loaded image, or None if the file cannot be opened. + """ + loaded_image = None + file_type = identify_file_type(file_path) + if file_type == "image": + try: + with open(file_path, "rb") as image: + loaded_image = Image.open(image) + loaded_image.load() + except Exception: + logging.error(f"Cannot open the image {file_path}\n") + return loaded_image + + +def save_file(output_path, content): + """ + Saves content to a file. + + Args: + output_path: The path to the output file. + content: The content to be written to the file. + """ + if output_path.endswith(".yaml"): + try: + with open(output_path, "w", encoding="utf-8") as auto: + auto.write(yaml.dump(content)) + except Exception: + logging.error(f"Cannot save the file to: {output_path}\n") + else: + try: + with open(output_path, "w", encoding="utf-8") as auto: + auto.write(content) + except Exception: + logging.error(f"Cannot save the file to: {output_path}\n") + + +def trim_path_to_subdir(full_path, subdir): + """Trims a full path up to a given subdirectory. + + Args: + full_path: The full path to trim. + subdir: The subdirectory to trim to (e.g., '/en/'). + + Returns: + The trimmed path, or the original path if the subdirectory is not found. + """ + + try: + index = full_path.index(subdir) + return full_path[: index + len(subdir)] + except ValueError: + return full_path + + +def identify_file_type(file_path: str) -> str: + """ + Identifies the type of a file based on its extension. + + Args: + file_path: The path to the file. + + Returns: + The file type (e.g., "text", "image", "audio", "video").
+ """ + file_type = "text" + file_path = Path(file_path) + file_ext = file_path.suffix + image_extensions = [".png", ".jpeg", ".jpg", ".gif"] + audio_extensions = [".wav", ".mp3", ".flac", ".aiff", ".aac", ".ogg"] + video_extensions = [ + ".mp4", + ".mov", + ".avi", + ".flv", + ".mpg", + ".webm", + ".wmv", + ".3gpp", + ] + + if file_ext in image_extensions: + file_type = "image" + elif file_ext in audio_extensions: + file_type = "audio" + elif file_ext in video_extensions: + file_type = "video" + return file_type diff --git a/examples/gemini/python/docs-agent/docs_agent/utilities/tasks.py b/examples/gemini/python/docs-agent/docs_agent/utilities/tasks.py index 95b4e2391..beb826e02 100644 --- a/examples/gemini/python/docs-agent/docs_agent/utilities/tasks.py +++ b/examples/gemini/python/docs-agent/docs_agent/utilities/tasks.py @@ -29,10 +29,12 @@ class Flags: def __init__( self, model: typing.Optional[str] = None, - file: typing.Optional[str] = None, + file: typing.Optional[list[str]] = None, perfile: typing.Optional[str] = None, allfiles: typing.Optional[str] = None, + list_file: typing.Optional[str] = None, file_ext: typing.Optional[str] = None, + repeat_until: typing.Optional[bool] = False, rag: typing.Optional[bool] = False, yaml: typing.Optional[str] = None, out: typing.Optional[str] = None, @@ -40,13 +42,16 @@ def __init__( cont: typing.Optional[str] = None, terminal: typing.Optional[str] = None, default_input: typing.Optional[str] = None, + script_input: typing.Optional[str] = None, response_type: typing.Optional[str] = None, ): self.model = model self.file = file self.perfile = perfile self.allfiles = allfiles + self.list_file = list_file self.file_ext = file_ext + self.repeat_until = repeat_until self.rag = rag self.yaml = yaml self.out = out @@ -54,6 +59,7 @@ def __init__( self.cont = cont self.terminal = terminal self.default_input = default_input + self.script_input = script_input self.response_type = response_type def __str__(self): @@ -66,10 +72,16 @@
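The `file` flag above changes from a single string to `list[str]`, so `dictionaryToFlags` in this file has to accept either a scalar or a YAML sequence. That normalization can be sketched as follows (a hypothetical `normalize_file_flag` helper, shown only for illustration, not part of the diff):

```python
import typing


def normalize_file_flag(value: typing.Any) -> list[str]:
    """Coerces a task-flag value (scalar, list, or tuple) into a list of strings."""
    if value is None:
        # Missing flag: no files to process.
        return []
    if isinstance(value, (list, tuple)):
        # A YAML sequence: stringify each entry.
        return [str(item) for item in value]
    # A single scalar path: wrap it in a one-element list.
    return [str(value)]
```

Keeping the flag a list everywhere means downstream code can iterate unconditionally, whether the task YAML specified one file or many.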
def __str__(self): help_str += f"Per file: {self.perfile}\n" if self.allfiles is not None and self.allfiles != "": help_str += f"All files: {self.allfiles}\n" + if self.list_file is not None and self.list_file != "": + help_str += f"List file: {self.list_file}\n" if self.default_input is not None and self.default_input != "": help_str += f"Default input: {self.default_input}\n" + if self.script_input is not None and self.script_input != "": + help_str += f"Script input: {self.script_input}\n" if self.file_ext is not None and self.file_ext != "": help_str += f"File ext: {self.file_ext}\n" + if self.repeat_until is not None and self.repeat_until != False: + help_str += f"Repeat until: {str(self.repeat_until)}\n" if self.rag is not None and self.rag != False: help_str += f"RAG: {str(self.rag)}\n" if self.yaml is not None and self.yaml != "": @@ -93,7 +105,12 @@ def dictionaryToFlags(flags: dict) -> Flags: else: model = "" if "file" in flags: - file = str(flags["file"]) + file = [] + if isinstance(flags["file"], (list, tuple)): + for item in flags["file"]: + file.append(str(item)) + else: + file.append(str(flags["file"])) else: - file = "" + file = [] if "perfile" in flags: @@ -104,10 +121,18 @@ def dictionaryToFlags(flags: dict) -> Flags: allfiles = str(flags["allfiles"]) else: allfiles = "" + if "list_file" in flags: + list_file = str(flags["list_file"]) + else: + list_file = "" if "file_ext" in flags: file_ext = str(flags["file_ext"]) else: file_ext = "" + if "repeat_until" in flags: + repeat_until = bool(flags["repeat_until"]) + else: + repeat_until = False if "rag" in flags: rag = bool(flags["rag"]) else: @@ -136,6 +161,10 @@ def dictionaryToFlags(flags: dict) -> Flags: default_input = str(flags["default_input"]) else: default_input = "" + if "script_input" in flags: + script_input = str(flags["script_input"]) + else: + script_input = "" if "response_type" in flags: response_type = str(flags["response_type"]) else: @@ -145,7 +174,9 @@ def dictionaryToFlags(flags: dict) ->
Flags: file=file, perfile=perfile, allfiles=allfiles, + list_file=list_file, file_ext=file_ext, + repeat_until=repeat_until, rag=rag, yaml=yaml, out=out, @@ -153,6 +184,7 @@ def dictionaryToFlags(flags: dict) -> Flags: cont=cont, terminal=terminal, default_input=default_input, + script_input=script_input, response_type=response_type, ) return flags diff --git a/examples/gemini/python/docs-agent/poetry.lock b/examples/gemini/python/docs-agent/poetry.lock index 70e64a621..30b7b9c80 100644 --- a/examples/gemini/python/docs-agent/poetry.lock +++ b/examples/gemini/python/docs-agent/poetry.lock @@ -1,4 +1,4 @@ -# This file is automatically @generated by Poetry 1.6.1 and should not be changed by hand. +# This file is automatically @generated by Poetry 1.8.2 and should not be changed by hand. [[package]] name = "absl-py" @@ -24,42 +24,41 @@ files = [ [[package]] name = "anyio" -version = "4.8.0" +version = "4.9.0" description = "High level compatibility layer for multiple asynchronous event loop implementations" optional = false python-versions = ">=3.9" files = [ - {file = "anyio-4.8.0-py3-none-any.whl", hash = "sha256:b5011f270ab5eb0abf13385f851315585cc37ef330dd88e27ec3d34d651fd47a"}, - {file = "anyio-4.8.0.tar.gz", hash = "sha256:1d9fe889df5212298c0c0723fa20479d1b94883a2df44bd3897aa91083316f7a"}, + {file = "anyio-4.9.0-py3-none-any.whl", hash = "sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c"}, + {file = "anyio-4.9.0.tar.gz", hash = "sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028"}, ] [package.dependencies] -exceptiongroup = {version = ">=1.0.2", markers = "python_version < \"3.11\""} idna = ">=2.8" sniffio = ">=1.1" typing_extensions = {version = ">=4.5", markers = "python_version < \"3.13\""} [package.extras] -doc = ["Sphinx (>=7.4,<8.0)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx_rtd_theme"] -test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil 
(>=5.9)", "pytest (>=7.0)", "trustme", "truststore (>=0.9.1)", "uvloop (>=0.21)"] +doc = ["Sphinx (>=8.2,<9.0)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx_rtd_theme"] +test = ["anyio[trio]", "blockbuster (>=1.5.23)", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "trustme", "truststore (>=0.9.1)", "uvloop (>=0.21)"] trio = ["trio (>=0.26.1)"] [[package]] name = "array-record" -version = "0.6.0" +version = "0.7.1" description = "A file format that achieves a new frontier of IO efficiency" optional = false -python-versions = ">=3.9" +python-versions = ">=3.10" files = [ - {file = "array_record-0.6.0-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:c51b53b90c7d4035ae94e8b265196925e6c5f5673aa35e04874aecca78656de3"}, - {file = "array_record-0.6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5338900974e2f10b3021b874a4f226783ffdbb0be76c931363a557336d33e478"}, - {file = "array_record-0.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b28be32f7c81db3ec17d343899a6b5b8ae19f6d6e650448b8044de65774fa3e5"}, - {file = "array_record-0.6.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:1ea2596fb8bf19eade5e8c2d0dce9c4dc6a9d14222551863d32238f7e5754afe"}, - {file = "array_record-0.6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:035575c271461f26a0684db5e3b65a487233d0921880933f680e7aeb86130a39"}, - {file = "array_record-0.6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c85df128819191a4f85937ab390f59f181ab7b6183626e5d0f5ecab47ecb022"}, - {file = "array_record-0.6.0-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:af81f6ae5404a42962b96f4efacd9a9b098cb2eeddae068cde9be0b8bfbfc457"}, - {file = "array_record-0.6.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:370cf9bdcdaab7537e897aae017ea607f75ac33378991d2fbb1e52b1fedb2bcf"}, - 
{file = "array_record-0.6.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c418b2b83410c630e6662d4ce0156e4e5120ee27ea9ed7672dd87c9cda39a060"}, + {file = "array_record-0.7.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5026fe8f3e2ef30b1f78c0f6d16de2dc121e0f403abf0457e1fdc5d608b74651"}, + {file = "array_record-0.7.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:505abda27b17278604ceccb8a35262370b779dadae86beb711cb47ddd2974fd3"}, + {file = "array_record-0.7.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2436a3d62272d4143b7c78e609b19090da86e6b211bb04f010d6da9ccf5af218"}, + {file = "array_record-0.7.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5b6a12f10384306b925b695c853862b817f74fd4570d4fa7a089e1878cf2f4f8"}, + {file = "array_record-0.7.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:91f742f1cd8ae6f42bafe8887b724f0d6884eaac6ea332fb337d453bc27fb9ed"}, + {file = "array_record-0.7.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:415ca730af6dd019c4b6075f6c9e53dececdd368b45c3dac894491c4ab6b23e3"}, + {file = "array_record-0.7.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8a5ffc164f217256ac00b1da13e6cd26278ed054fe138a761296ef7db4262d98"}, + {file = "array_record-0.7.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a73e54d16f22144caff105618f70af96abea3a12ef35ece5067fecbe0f16a9bf"}, + {file = "array_record-0.7.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c492469ba90b436fd3cd5613ac1b50d47aa4cb5a1d1df6e17bf83e3d2498b757"}, ] [package.dependencies] @@ -80,9 +79,6 @@ files = [ {file = "asgiref-3.8.1.tar.gz", hash = "sha256:c343bd80a0bec947a9860adb4c432ffa7db769836c64238fc34bdc3fec84d590"}, ] -[package.dependencies] -typing-extensions = {version = ">=4", markers = "python_version < \"3.11\""} - [package.extras] tests = ["mypy (>=0.800)", "pytest", 
"pytest-asyncio"] @@ -99,11 +95,7 @@ files = [ [package.dependencies] lazy-object-proxy = ">=1.4.0" -typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.11\""} -wrapt = [ - {version = ">=1.11,<2", markers = "python_version < \"3.11\""}, - {version = ">=1.14,<2", markers = "python_version >= \"3.11\""}, -] +wrapt = {version = ">=1.14,<2", markers = "python_version >= \"3.11\""} [[package]] name = "asttokens" @@ -137,20 +129,20 @@ wheel = ">=0.23.0,<1.0" [[package]] name = "attrs" -version = "25.1.0" +version = "25.3.0" description = "Classes Without Boilerplate" optional = false python-versions = ">=3.8" files = [ - {file = "attrs-25.1.0-py3-none-any.whl", hash = "sha256:c75a69e28a550a7e93789579c22aa26b0f5b83b75dc4e08fe092980051e1090a"}, - {file = "attrs-25.1.0.tar.gz", hash = "sha256:1c97078a80c814273a76b2a298a932eb681c87415c11dee0a6921de7f1b02c3e"}, + {file = "attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3"}, + {file = "attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b"}, ] [package.extras] benchmark = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins", "pytest-xdist[psutil]"] cov = ["cloudpickle", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] dev = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pre-commit-uv", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] -docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier (<24.7)"] +docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier"] tests = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] tests-mypy 
= ["mypy (>=1.11.1)", "pytest-mypy-plugins"] @@ -167,36 +159,62 @@ files = [ [[package]] name = "bcrypt" -version = "4.2.1" +version = "4.3.0" description = "Modern password hashing for your software and your servers" optional = false -python-versions = ">=3.7" +python-versions = ">=3.8" files = [ - {file = "bcrypt-4.2.1-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:1340411a0894b7d3ef562fb233e4b6ed58add185228650942bdc885362f32c17"}, - {file = "bcrypt-4.2.1-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1ee315739bc8387aa36ff127afc99120ee452924e0df517a8f3e4c0187a0f5f"}, - {file = "bcrypt-4.2.1-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8dbd0747208912b1e4ce730c6725cb56c07ac734b3629b60d4398f082ea718ad"}, - {file = "bcrypt-4.2.1-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:aaa2e285be097050dba798d537b6efd9b698aa88eef52ec98d23dcd6d7cf6fea"}, - {file = "bcrypt-4.2.1-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:76d3e352b32f4eeb34703370e370997065d28a561e4a18afe4fef07249cb4396"}, - {file = "bcrypt-4.2.1-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:b7703ede632dc945ed1172d6f24e9f30f27b1b1a067f32f68bf169c5f08d0425"}, - {file = "bcrypt-4.2.1-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:89df2aea2c43be1e1fa066df5f86c8ce822ab70a30e4c210968669565c0f4685"}, - {file = "bcrypt-4.2.1-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:04e56e3fe8308a88b77e0afd20bec516f74aecf391cdd6e374f15cbed32783d6"}, - {file = "bcrypt-4.2.1-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:cfdf3d7530c790432046c40cda41dfee8c83e29482e6a604f8930b9930e94139"}, - {file = "bcrypt-4.2.1-cp37-abi3-win32.whl", hash = "sha256:adadd36274510a01f33e6dc08f5824b97c9580583bd4487c564fc4617b328005"}, - {file = "bcrypt-4.2.1-cp37-abi3-win_amd64.whl", hash = "sha256:8c458cd103e6c5d1d85cf600e546a639f234964d0228909d8f8dbeebff82d526"}, - {file = "bcrypt-4.2.1-cp39-abi3-macosx_10_12_universal2.whl", hash = 
"sha256:8ad2f4528cbf0febe80e5a3a57d7a74e6635e41af1ea5675282a33d769fba413"}, - {file = "bcrypt-4.2.1-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:909faa1027900f2252a9ca5dfebd25fc0ef1417943824783d1c8418dd7d6df4a"}, - {file = "bcrypt-4.2.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cde78d385d5e93ece5479a0a87f73cd6fa26b171c786a884f955e165032b262c"}, - {file = "bcrypt-4.2.1-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:533e7f3bcf2f07caee7ad98124fab7499cb3333ba2274f7a36cf1daee7409d99"}, - {file = "bcrypt-4.2.1-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:687cf30e6681eeda39548a93ce9bfbb300e48b4d445a43db4298d2474d2a1e54"}, - {file = "bcrypt-4.2.1-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:041fa0155c9004eb98a232d54da05c0b41d4b8e66b6fc3cb71b4b3f6144ba837"}, - {file = "bcrypt-4.2.1-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f85b1ffa09240c89aa2e1ae9f3b1c687104f7b2b9d2098da4e923f1b7082d331"}, - {file = "bcrypt-4.2.1-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:c6f5fa3775966cca251848d4d5393ab016b3afed251163c1436fefdec3b02c84"}, - {file = "bcrypt-4.2.1-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:807261df60a8b1ccd13e6599c779014a362ae4e795f5c59747f60208daddd96d"}, - {file = "bcrypt-4.2.1-cp39-abi3-win32.whl", hash = "sha256:b588af02b89d9fad33e5f98f7838bf590d6d692df7153647724a7f20c186f6bf"}, - {file = "bcrypt-4.2.1-cp39-abi3-win_amd64.whl", hash = "sha256:e84e0e6f8e40a242b11bce56c313edc2be121cec3e0ec2d76fce01f6af33c07c"}, - {file = "bcrypt-4.2.1-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:76132c176a6d9953cdc83c296aeaed65e1a708485fd55abf163e0d9f8f16ce0e"}, - {file = "bcrypt-4.2.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e158009a54c4c8bc91d5e0da80920d048f918c61a581f0a63e4e93bb556d362f"}, - {file = "bcrypt-4.2.1.tar.gz", hash = "sha256:6765386e3ab87f569b276988742039baab087b2cdb01e809d74e74503c2faafe"}, + {file = 
"bcrypt-4.3.0-cp313-cp313t-macosx_10_12_universal2.whl", hash = "sha256:f01e060f14b6b57bbb72fc5b4a83ac21c443c9a2ee708e04a10e9192f90a6281"}, + {file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5eeac541cefd0bb887a371ef73c62c3cd78535e4887b310626036a7c0a817bb"}, + {file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59e1aa0e2cd871b08ca146ed08445038f42ff75968c7ae50d2fdd7860ade2180"}, + {file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:0042b2e342e9ae3d2ed22727c1262f76cc4f345683b5c1715f0250cf4277294f"}, + {file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74a8d21a09f5e025a9a23e7c0fd2c7fe8e7503e4d356c0a2c1486ba010619f09"}, + {file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:0142b2cb84a009f8452c8c5a33ace5e3dfec4159e7735f5afe9a4d50a8ea722d"}, + {file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_34_aarch64.whl", hash = "sha256:12fa6ce40cde3f0b899729dbd7d5e8811cb892d31b6f7d0334a1f37748b789fd"}, + {file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_34_x86_64.whl", hash = "sha256:5bd3cca1f2aa5dbcf39e2aa13dd094ea181f48959e1071265de49cc2b82525af"}, + {file = "bcrypt-4.3.0-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:335a420cfd63fc5bc27308e929bee231c15c85cc4c496610ffb17923abf7f231"}, + {file = "bcrypt-4.3.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:0e30e5e67aed0187a1764911af023043b4542e70a7461ad20e837e94d23e1d6c"}, + {file = "bcrypt-4.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:3b8d62290ebefd49ee0b3ce7500f5dbdcf13b81402c05f6dafab9a1e1b27212f"}, + {file = "bcrypt-4.3.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:2ef6630e0ec01376f59a006dc72918b1bf436c3b571b80fa1968d775fa02fe7d"}, + {file = "bcrypt-4.3.0-cp313-cp313t-win32.whl", hash = "sha256:7a4be4cbf241afee43f1c3969b9103a41b40bcb3a3f467ab19f891d9bc4642e4"}, + {file = 
"bcrypt-4.3.0-cp313-cp313t-win_amd64.whl", hash = "sha256:5c1949bf259a388863ced887c7861da1df681cb2388645766c89fdfd9004c669"}, + {file = "bcrypt-4.3.0-cp38-abi3-macosx_10_12_universal2.whl", hash = "sha256:f81b0ed2639568bf14749112298f9e4e2b28853dab50a8b357e31798686a036d"}, + {file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:864f8f19adbe13b7de11ba15d85d4a428c7e2f344bac110f667676a0ff84924b"}, + {file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e36506d001e93bffe59754397572f21bb5dc7c83f54454c990c74a468cd589e"}, + {file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:842d08d75d9fe9fb94b18b071090220697f9f184d4547179b60734846461ed59"}, + {file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7c03296b85cb87db865d91da79bf63d5609284fc0cab9472fdd8367bbd830753"}, + {file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:62f26585e8b219cdc909b6a0069efc5e4267e25d4a3770a364ac58024f62a761"}, + {file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:beeefe437218a65322fbd0069eb437e7c98137e08f22c4660ac2dc795c31f8bb"}, + {file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:97eea7408db3a5bcce4a55d13245ab3fa566e23b4c67cd227062bb49e26c585d"}, + {file = "bcrypt-4.3.0-cp38-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:191354ebfe305e84f344c5964c7cd5f924a3bfc5d405c75ad07f232b6dffb49f"}, + {file = "bcrypt-4.3.0-cp38-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:41261d64150858eeb5ff43c753c4b216991e0ae16614a308a15d909503617732"}, + {file = "bcrypt-4.3.0-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:33752b1ba962ee793fa2b6321404bf20011fe45b9afd2a842139de3011898fef"}, + {file = "bcrypt-4.3.0-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:50e6e80a4bfd23a25f5c05b90167c19030cf9f87930f7cb2eacb99f45d1c3304"}, + {file = "bcrypt-4.3.0-cp38-abi3-win32.whl", hash = 
"sha256:67a561c4d9fb9465ec866177e7aebcad08fe23aaf6fbd692a6fab69088abfc51"}, + {file = "bcrypt-4.3.0-cp38-abi3-win_amd64.whl", hash = "sha256:584027857bc2843772114717a7490a37f68da563b3620f78a849bcb54dc11e62"}, + {file = "bcrypt-4.3.0-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:0d3efb1157edebfd9128e4e46e2ac1a64e0c1fe46fb023158a407c7892b0f8c3"}, + {file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:08bacc884fd302b611226c01014eca277d48f0a05187666bca23aac0dad6fe24"}, + {file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6746e6fec103fcd509b96bacdfdaa2fbde9a553245dbada284435173a6f1aef"}, + {file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:afe327968aaf13fc143a56a3360cb27d4ad0345e34da12c7290f1b00b8fe9a8b"}, + {file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:d9af79d322e735b1fc33404b5765108ae0ff232d4b54666d46730f8ac1a43676"}, + {file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:f1e3ffa1365e8702dc48c8b360fef8d7afeca482809c5e45e653af82ccd088c1"}, + {file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3004df1b323d10021fda07a813fd33e0fd57bef0e9a480bb143877f6cba996fe"}, + {file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:531457e5c839d8caea9b589a1bcfe3756b0547d7814e9ce3d437f17da75c32b0"}, + {file = "bcrypt-4.3.0-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:17a854d9a7a476a89dcef6c8bd119ad23e0f82557afbd2c442777a16408e614f"}, + {file = "bcrypt-4.3.0-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:6fb1fd3ab08c0cbc6826a2e0447610c6f09e983a281b919ed721ad32236b8b23"}, + {file = "bcrypt-4.3.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:e965a9c1e9a393b8005031ff52583cedc15b7884fce7deb8b0346388837d6cfe"}, + {file = "bcrypt-4.3.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = 
"sha256:79e70b8342a33b52b55d93b3a59223a844962bef479f6a0ea318ebbcadf71505"}, + {file = "bcrypt-4.3.0-cp39-abi3-win32.whl", hash = "sha256:b4d4e57f0a63fd0b358eb765063ff661328f69a04494427265950c71b992a39a"}, + {file = "bcrypt-4.3.0-cp39-abi3-win_amd64.whl", hash = "sha256:e53e074b120f2877a35cc6c736b8eb161377caae8925c17688bd46ba56daaa5b"}, + {file = "bcrypt-4.3.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c950d682f0952bafcceaf709761da0a32a942272fad381081b51096ffa46cea1"}, + {file = "bcrypt-4.3.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:107d53b5c67e0bbc3f03ebf5b030e0403d24dda980f8e244795335ba7b4a027d"}, + {file = "bcrypt-4.3.0-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:b693dbb82b3c27a1604a3dff5bfc5418a7e6a781bb795288141e5f80cf3a3492"}, + {file = "bcrypt-4.3.0-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:b6354d3760fcd31994a14c89659dee887f1351a06e5dac3c1142307172a79f90"}, + {file = "bcrypt-4.3.0-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a839320bf27d474e52ef8cb16449bb2ce0ba03ca9f44daba6d93fa1d8828e48a"}, + {file = "bcrypt-4.3.0-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:bdc6a24e754a555d7316fa4774e64c6c3997d27ed2d1964d55920c7c227bc4ce"}, + {file = "bcrypt-4.3.0-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:55a935b8e9a1d2def0626c4269db3fcd26728cbff1e84f0341465c31c4ee56d8"}, + {file = "bcrypt-4.3.0-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:57967b7a28d855313a963aaea51bf6df89f833db4320da458e5b3c5ab6d4c938"}, + {file = "bcrypt-4.3.0.tar.gz", hash = "sha256:3a3fd2204178b6d2adcf09cb4f6426ffef54762577a7c9b54c159008cb288c18"}, ] [package.extras] @@ -262,8 +280,6 @@ mypy-extensions = ">=0.4.3" packaging = ">=22.0" pathspec = ">=0.9.0" platformdirs = ">=2" -tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""} -typing-extensions = {version = ">=4.0.1", markers = "python_version < \"3.11\""} [package.extras] colorama = 
["colorama (>=0.4.3)"] @@ -295,10 +311,8 @@ files = [ [package.dependencies] colorama = {version = "*", markers = "os_name == \"nt\""} -importlib-metadata = {version = ">=4.6", markers = "python_full_version < \"3.10.2\""} packaging = ">=19.1" pyproject_hooks = "*" -tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""} [package.extras] docs = ["furo (>=2023.08.17)", "sphinx (>=7.0,<8.0)", "sphinx-argparse-cli (>=1.5)", "sphinx-autodoc-typehints (>=1.10)", "sphinx-issues (>=3.0.0)"] @@ -443,13 +457,13 @@ files = [ [[package]] name = "chex" -version = "0.1.88" +version = "0.1.89" description = "Chex: Testing made fun, in JAX!" optional = false python-versions = ">=3.9" files = [ - {file = "chex-0.1.88-py3-none-any.whl", hash = "sha256:234b61a5baa8132802e4b9c5657167d6c8a911d90a59a0bec47d537567e41b75"}, - {file = "chex-0.1.88.tar.gz", hash = "sha256:565de897b1373232cdfca5e699f50fa49403d2c7d23f6c5a75a97ef713d2fe36"}, + {file = "chex-0.1.89-py3-none-any.whl", hash = "sha256:145241c27d8944adb634fb7d472a460e1c1b643f561507d4031ad5156ef82dfa"}, + {file = "chex-0.1.89.tar.gz", hash = "sha256:78f856e6a0a8459edfcbb402c2c044d2b8102eac4b633838cbdfdcdb09c6c8e0"}, ] [package.dependencies] @@ -463,36 +477,40 @@ typing_extensions = ">=4.2.0" [[package]] name = "chroma-hnswlib" -version = "0.7.3" +version = "0.7.6" description = "Chromas fork of hnswlib" optional = false python-versions = "*" files = [ - {file = "chroma-hnswlib-0.7.3.tar.gz", hash = "sha256:b6137bedde49fffda6af93b0297fe00429fc61e5a072b1ed9377f909ed95a932"}, - {file = "chroma_hnswlib-0.7.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:59d6a7c6f863c67aeb23e79a64001d537060b6995c3eca9a06e349ff7b0998ca"}, - {file = "chroma_hnswlib-0.7.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d71a3f4f232f537b6152947006bd32bc1629a8686df22fd97777b70f416c127a"}, - {file = "chroma_hnswlib-0.7.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:1c92dc1ebe062188e53970ba13f6b07e0ae32e64c9770eb7f7ffa83f149d4210"}, - {file = "chroma_hnswlib-0.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:49da700a6656fed8753f68d44b8cc8ae46efc99fc8a22a6d970dc1697f49b403"}, - {file = "chroma_hnswlib-0.7.3-cp310-cp310-win_amd64.whl", hash = "sha256:108bc4c293d819b56476d8f7865803cb03afd6ca128a2a04d678fffc139af029"}, - {file = "chroma_hnswlib-0.7.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:11e7ca93fb8192214ac2b9c0943641ac0daf8f9d4591bb7b73be808a83835667"}, - {file = "chroma_hnswlib-0.7.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6f552e4d23edc06cdeb553cdc757d2fe190cdeb10d43093d6a3319f8d4bf1c6b"}, - {file = "chroma_hnswlib-0.7.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f96f4d5699e486eb1fb95849fe35ab79ab0901265805be7e60f4eaa83ce263ec"}, - {file = "chroma_hnswlib-0.7.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:368e57fe9ebae05ee5844840fa588028a023d1182b0cfdb1d13f607c9ea05756"}, - {file = "chroma_hnswlib-0.7.3-cp311-cp311-win_amd64.whl", hash = "sha256:b7dca27b8896b494456db0fd705b689ac6b73af78e186eb6a42fea2de4f71c6f"}, - {file = "chroma_hnswlib-0.7.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:70f897dc6218afa1d99f43a9ad5eb82f392df31f57ff514ccf4eeadecd62f544"}, - {file = "chroma_hnswlib-0.7.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5aef10b4952708f5a1381c124a29aead0c356f8d7d6e0b520b778aaa62a356f4"}, - {file = "chroma_hnswlib-0.7.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7ee2d8d1529fca3898d512079144ec3e28a81d9c17e15e0ea4665697a7923253"}, - {file = "chroma_hnswlib-0.7.3-cp37-cp37m-win_amd64.whl", hash = "sha256:a4021a70e898783cd6f26e00008b494c6249a7babe8774e90ce4766dd288c8ba"}, - {file = "chroma_hnswlib-0.7.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a8f61fa1d417fda848e3ba06c07671f14806a2585272b175ba47501b066fe6b1"}, - {file = 
"chroma_hnswlib-0.7.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d7563be58bc98e8f0866907368e22ae218d6060601b79c42f59af4eccbbd2e0a"}, - {file = "chroma_hnswlib-0.7.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:51b8d411486ee70d7b66ec08cc8b9b6620116b650df9c19076d2d8b6ce2ae914"}, - {file = "chroma_hnswlib-0.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d706782b628e4f43f1b8a81e9120ac486837fbd9bcb8ced70fe0d9b95c72d77"}, - {file = "chroma_hnswlib-0.7.3-cp38-cp38-win_amd64.whl", hash = "sha256:54f053dedc0e3ba657f05fec6e73dd541bc5db5b09aa8bc146466ffb734bdc86"}, - {file = "chroma_hnswlib-0.7.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:e607c5a71c610a73167a517062d302c0827ccdd6e259af6e4869a5c1306ffb5d"}, - {file = "chroma_hnswlib-0.7.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c2358a795870156af6761890f9eb5ca8cade57eb10c5f046fe94dae1faa04b9e"}, - {file = "chroma_hnswlib-0.7.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7cea425df2e6b8a5e201fff0d922a1cc1d165b3cfe762b1408075723c8892218"}, - {file = "chroma_hnswlib-0.7.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:454df3dd3e97aa784fba7cf888ad191e0087eef0fd8c70daf28b753b3b591170"}, - {file = "chroma_hnswlib-0.7.3-cp39-cp39-win_amd64.whl", hash = "sha256:df587d15007ca701c6de0ee7d5585dd5e976b7edd2b30ac72bc376b3c3f85882"}, + {file = "chroma_hnswlib-0.7.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f35192fbbeadc8c0633f0a69c3d3e9f1a4eab3a46b65458bbcbcabdd9e895c36"}, + {file = "chroma_hnswlib-0.7.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f007b608c96362b8f0c8b6b2ac94f67f83fcbabd857c378ae82007ec92f4d82"}, + {file = "chroma_hnswlib-0.7.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:456fd88fa0d14e6b385358515aef69fc89b3c2191706fd9aee62087b62aad09c"}, + {file = "chroma_hnswlib-0.7.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash 
= "sha256:5dfaae825499c2beaa3b75a12d7ec713b64226df72a5c4097203e3ed532680da"}, + {file = "chroma_hnswlib-0.7.6-cp310-cp310-win_amd64.whl", hash = "sha256:2487201982241fb1581be26524145092c95902cb09fc2646ccfbc407de3328ec"}, + {file = "chroma_hnswlib-0.7.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:81181d54a2b1e4727369486a631f977ffc53c5533d26e3d366dda243fb0998ca"}, + {file = "chroma_hnswlib-0.7.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4b4ab4e11f1083dd0a11ee4f0e0b183ca9f0f2ed63ededba1935b13ce2b3606f"}, + {file = "chroma_hnswlib-0.7.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:53db45cd9173d95b4b0bdccb4dbff4c54a42b51420599c32267f3abbeb795170"}, + {file = "chroma_hnswlib-0.7.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c093f07a010b499c00a15bc9376036ee4800d335360570b14f7fe92badcdcf9"}, + {file = "chroma_hnswlib-0.7.6-cp311-cp311-win_amd64.whl", hash = "sha256:0540b0ac96e47d0aa39e88ea4714358ae05d64bbe6bf33c52f316c664190a6a3"}, + {file = "chroma_hnswlib-0.7.6-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:e87e9b616c281bfbe748d01705817c71211613c3b063021f7ed5e47173556cb7"}, + {file = "chroma_hnswlib-0.7.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ec5ca25bc7b66d2ecbf14502b5729cde25f70945d22f2aaf523c2d747ea68912"}, + {file = "chroma_hnswlib-0.7.6-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:305ae491de9d5f3c51e8bd52d84fdf2545a4a2bc7af49765cda286b7bb30b1d4"}, + {file = "chroma_hnswlib-0.7.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:822ede968d25a2c88823ca078a58f92c9b5c4142e38c7c8b4c48178894a0a3c5"}, + {file = "chroma_hnswlib-0.7.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2fe6ea949047beed19a94b33f41fe882a691e58b70c55fdaa90274ae78be046f"}, + {file = "chroma_hnswlib-0.7.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:feceff971e2a2728c9ddd862a9dd6eb9f638377ad98438876c9aeac96c9482f5"}, + {file = "chroma_hnswlib-0.7.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb0633b60e00a2b92314d0bf5bbc0da3d3320be72c7e3f4a9b19f4609dc2b2ab"}, + {file = "chroma_hnswlib-0.7.6-cp37-cp37m-win_amd64.whl", hash = "sha256:a566abe32fab42291f766d667bdbfa234a7f457dcbd2ba19948b7a978c8ca624"}, + {file = "chroma_hnswlib-0.7.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6be47853d9a58dedcfa90fc846af202b071f028bbafe1d8711bf64fe5a7f6111"}, + {file = "chroma_hnswlib-0.7.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:3a7af35bdd39a88bffa49f9bb4bf4f9040b684514a024435a1ef5cdff980579d"}, + {file = "chroma_hnswlib-0.7.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a53b1f1551f2b5ad94eb610207bde1bb476245fc5097a2bec2b476c653c58bde"}, + {file = "chroma_hnswlib-0.7.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3085402958dbdc9ff5626ae58d696948e715aef88c86d1e3f9285a88f1afd3bc"}, + {file = "chroma_hnswlib-0.7.6-cp38-cp38-win_amd64.whl", hash = "sha256:77326f658a15adfb806a16543f7db7c45f06fd787d699e643642d6bde8ed49c4"}, + {file = "chroma_hnswlib-0.7.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:93b056ab4e25adab861dfef21e1d2a2756b18be5bc9c292aa252fa12bb44e6ae"}, + {file = "chroma_hnswlib-0.7.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fe91f018b30452c16c811fd6c8ede01f84e5a9f3c23e0758775e57f1c3778871"}, + {file = "chroma_hnswlib-0.7.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e6c0e627476f0f4d9e153420d36042dd9c6c3671cfd1fe511c0253e38c2a1039"}, + {file = "chroma_hnswlib-0.7.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e9796a4536b7de6c6d76a792ba03e08f5aaa53e97e052709568e50b4d20c04f"}, + {file = "chroma_hnswlib-0.7.6-cp39-cp39-win_amd64.whl", hash = "sha256:d30e2db08e7ffdcc415bd072883a322de5995eb6ec28a8f8c054103bbd3ec1e0"}, + {file = 
"chroma_hnswlib-0.7.6.tar.gz", hash = "sha256:4dce282543039681160259d29fcde6151cc9106c6461e0485f57cdccd83059b7"}, ] [package.dependencies] @@ -500,21 +518,22 @@ numpy = "*" [[package]] name = "chromadb" -version = "0.4.24" +version = "0.6.3" description = "Chroma." optional = false -python-versions = ">=3.8" +python-versions = ">=3.9" files = [ - {file = "chromadb-0.4.24-py3-none-any.whl", hash = "sha256:3a08e237a4ad28b5d176685bd22429a03717fe09d35022fb230d516108da01da"}, - {file = "chromadb-0.4.24.tar.gz", hash = "sha256:a5c80b4e4ad9b236ed2d4899a5b9e8002b489293f2881cb2cadab5b199ee1c72"}, + {file = "chromadb-0.6.3-py3-none-any.whl", hash = "sha256:4851258489a3612b558488d98d09ae0fe0a28d5cad6bd1ba64b96fdc419dc0e5"}, + {file = "chromadb-0.6.3.tar.gz", hash = "sha256:c8f34c0b704b9108b04491480a36d42e894a960429f87c6516027b5481d59ed3"}, ] [package.dependencies] bcrypt = ">=4.0.1" build = ">=1.0.3" -chroma-hnswlib = "0.7.3" +chroma-hnswlib = "0.7.6" fastapi = ">=0.95.2" grpcio = ">=1.58.0" +httpx = ">=0.27.0" importlib-resources = "*" kubernetes = ">=28.1.0" mmh3 = ">=4.0.1" @@ -527,16 +546,15 @@ opentelemetry-sdk = ">=1.2.0" orjson = ">=3.9.12" overrides = ">=7.3.1" posthog = ">=2.4.0" -pulsar-client = ">=3.1.0" pydantic = ">=1.9" pypika = ">=0.48.9" PyYAML = ">=6.0.0" -requests = ">=2.28" +rich = ">=10.11.0" tenacity = ">=8.2.3" tokenizers = ">=0.13.2" tqdm = ">=4.65.0" typer = ">=0.9.0" -typing-extensions = ">=4.5.0" +typing_extensions = ">=4.5.0" uvicorn = {version = ">=0.18.3", extras = ["standard"]} [[package]] @@ -609,13 +627,13 @@ cron = ["capturer (>=2.4)"] [[package]] name = "decorator" -version = "5.1.1" +version = "5.2.1" description = "Decorators for Humans" optional = false -python-versions = ">=3.5" +python-versions = ">=3.8" files = [ - {file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"}, - {file = "decorator-5.1.1.tar.gz", hash = 
"sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"}, + {file = "decorator-5.2.1-py3-none-any.whl", hash = "sha256:d316bb415a2d9e2d2b3abcc4084c6502fc09240e292cd76a76afc106a1c8e04a"}, + {file = "decorator-5.2.1.tar.gz", hash = "sha256:65f266143752f734b0a7cc83c46f4618af75b8c5911b00ccb61d0ac9b6da0360"}, ] [[package]] @@ -651,7 +669,6 @@ chardet = ">=4.0.0" click = ">=8.0.0,<9.0.0" colorama = {version = ">=0.4.6", markers = "sys_platform == \"win32\""} pathspec = ">=0.9.0" -tomli = {version = ">=2.0.1,<3.0.0", markers = "python_version < \"3.11\""} [[package]] name = "dill" @@ -668,6 +685,72 @@ files = [ graph = ["objgraph (>=1.7.2)"] profile = ["gprof2dot (>=2022.7.29)"] +[[package]] +name = "distro" +version = "1.9.0" +description = "Distro - an OS platform information API" +optional = false +python-versions = ">=3.6" +files = [ + {file = "distro-1.9.0-py3-none-any.whl", hash = "sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2"}, + {file = "distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed"}, +] + +[[package]] +name = "dm-tree" +version = "0.1.8" +description = "Tree is a library for working with nested data structures." 
+optional = false +python-versions = "*" +files = [ + {file = "dm-tree-0.1.8.tar.gz", hash = "sha256:0fcaabbb14e7980377439e7140bd05552739ca5e515ecb3119f234acee4b9430"}, + {file = "dm_tree-0.1.8-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:35cc164a79336bfcfafb47e5f297898359123bbd3330c1967f0c4994f9cf9f60"}, + {file = "dm_tree-0.1.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39070ba268c0491af9fe7a58644d99e8b4f2cde6e5884ba3380bddc84ed43d5f"}, + {file = "dm_tree-0.1.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2869228d9c619074de501a3c10dc7f07c75422f8fab36ecdcb859b6f1b1ec3ef"}, + {file = "dm_tree-0.1.8-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d20f2faa3672b52e5013f4077117bfb99c4cfc0b445d3bde1584c34032b57436"}, + {file = "dm_tree-0.1.8-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5483dca4d7eb1a0d65fe86d3b6a53ae717face83c1f17e0887b1a4a64ae5c410"}, + {file = "dm_tree-0.1.8-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1d7c26e431fc93cc7e0cba867eb000db6a05f6f2b25af11ac4e9dada88fc5bca"}, + {file = "dm_tree-0.1.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e4d714371bb08839e4e5e29024fc95832d9affe129825ef38836b143028bd144"}, + {file = "dm_tree-0.1.8-cp310-cp310-win_amd64.whl", hash = "sha256:d40fa4106ca6edc66760246a08f500ec0c85ef55c762fb4a363f6ee739ba02ee"}, + {file = "dm_tree-0.1.8-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad16ceba90a56ec47cf45b21856d14962ac314787975ef786efb5e6e9ca75ec7"}, + {file = "dm_tree-0.1.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:803bfc53b4659f447ac694dbd04235f94a73ef7c1fd1e0df7c84ac41e0bc963b"}, + {file = "dm_tree-0.1.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:378cc8ad93c5fe3590f405a309980721f021c790ca1bdf9b15bb1d59daec57f5"}, + {file = "dm_tree-0.1.8-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:1607ce49aa42f010d1e5e616d92ce899d66835d4d8bea49679582435285515de"}, + {file = "dm_tree-0.1.8-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:343a4a4ebaa127451ff971254a4be4084eb4bdc0b2513c32b46f6f728fd03f9e"}, + {file = "dm_tree-0.1.8-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fa42a605d099ee7d41ba2b5fb75e21423951fd26e5d50583a00471238fb3021d"}, + {file = "dm_tree-0.1.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83b7764de0d855338abefc6e3ee9fe40d301668310aa3baea3f778ff051f4393"}, + {file = "dm_tree-0.1.8-cp311-cp311-win_amd64.whl", hash = "sha256:a5d819c38c03f0bb5b3b3703c60e4b170355a0fc6b5819325bf3d4ceb3ae7e80"}, + {file = "dm_tree-0.1.8-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:ea9e59e0451e7d29aece402d9f908f2e2a80922bcde2ebfd5dcb07750fcbfee8"}, + {file = "dm_tree-0.1.8-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:94d3f0826311f45ee19b75f5b48c99466e4218a0489e81c0f0167bda50cacf22"}, + {file = "dm_tree-0.1.8-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:435227cf3c5dc63f4de054cf3d00183790bd9ead4c3623138c74dde7f67f521b"}, + {file = "dm_tree-0.1.8-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:09964470f76a5201aff2e8f9b26842976de7889300676f927930f6285e256760"}, + {file = "dm_tree-0.1.8-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:75c5d528bb992981c20793b6b453e91560784215dffb8a5440ba999753c14ceb"}, + {file = "dm_tree-0.1.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0a94aba18a35457a1b5cd716fd7b46c5dafdc4cf7869b4bae665b91c4682a8e"}, + {file = "dm_tree-0.1.8-cp312-cp312-win_amd64.whl", hash = "sha256:96a548a406a6fb15fe58f6a30a57ff2f2aafbf25f05afab00c8f5e5977b6c715"}, + {file = "dm_tree-0.1.8-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8c60a7eadab64c2278861f56bca320b2720f163dca9d7558103c3b77f2416571"}, + {file = 
"dm_tree-0.1.8-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:af4b3d372f2477dcd89a6e717e4a575ca35ccc20cc4454a8a4b6f8838a00672d"}, + {file = "dm_tree-0.1.8-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:de287fabc464b8734be251e46e06aa9aa1001f34198da2b6ce07bd197172b9cb"}, + {file = "dm_tree-0.1.8-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:054b461f8176f4bce7a21f7b1870f873a1ced3bdbe1282c816c550bb43c71fa6"}, + {file = "dm_tree-0.1.8-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2f7915660f59c09068e428613c480150180df1060561fd0d1470684ae7007bd1"}, + {file = "dm_tree-0.1.8-cp37-cp37m-win_amd64.whl", hash = "sha256:b9f89a454e98806b44fe9d40ec9eee61f848388f7e79ac2371a55679bd5a3ac6"}, + {file = "dm_tree-0.1.8-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:0e9620ccf06393eb6b613b5e366469304622d4ea96ae6540b28a33840e6c89cf"}, + {file = "dm_tree-0.1.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b095ba4f8ca1ba19350fd53cf1f8f3eb0bd406aa28af64a6dfc86707b32a810a"}, + {file = "dm_tree-0.1.8-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b9bd9b9ccb59409d33d51d84b7668010c04c2af7d4a371632874c1ca356cff3d"}, + {file = "dm_tree-0.1.8-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d3172394079a86c3a759179c65f64c48d1a42b89495fcf38976d11cc3bb952c"}, + {file = "dm_tree-0.1.8-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d1612fcaecd79023dbc6a6ae48d51a80beb5c385d6f3f6d71688e57bc8d07de8"}, + {file = "dm_tree-0.1.8-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c5c8c12e3fda754ef6af94161bacdaeda816d941995fac415d6855c6c386af68"}, + {file = "dm_tree-0.1.8-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:694c3654cfd2a81552c08ec66bb5c4a3d48fa292b9a181880fb081c36c5b9134"}, + {file = "dm_tree-0.1.8-cp38-cp38-win_amd64.whl", hash = 
"sha256:bb2d109f42190225112da899b9f3d46d0d5f26aef501c61e43529fe9322530b5"}, + {file = "dm_tree-0.1.8-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:d16e1f2a073604cfcc09f7131ae8d534674f43c3aef4c25742eae295bc60d04f"}, + {file = "dm_tree-0.1.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:250b692fb75f45f02e2f58fbef9ab338904ef334b90557565621fa251df267cf"}, + {file = "dm_tree-0.1.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:81fce77f22a302d7a5968aebdf4efafef4def7ce96528719a354e6990dcd49c7"}, + {file = "dm_tree-0.1.8-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f7ac31b9aecccb2c6e1ab29706f6ded3eba0c2c69c770322c9c685929c3d6afb"}, + {file = "dm_tree-0.1.8-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fe962015b2fe1282892b28ebe962faed53c7f98d942da9a4625cbf27baef913"}, + {file = "dm_tree-0.1.8-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c52cbf4f8b3dbd0beaedf44f69fa85eec5e9dede612e08035e06ada6ec9426"}, + {file = "dm_tree-0.1.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:181c35521d480d0365f39300542cb6cd7fd2b77351bb43d7acfda15aef63b317"}, + {file = "dm_tree-0.1.8-cp39-cp39-win_amd64.whl", hash = "sha256:8ed3564abed97c806db122c2d3e1a2b64c74a63debe9903aad795167cc301368"}, +] + [[package]] name = "dm-tree" version = "0.1.9" @@ -695,11 +778,7 @@ files = [ [package.dependencies] absl-py = ">=0.6.1" attrs = ">=18.2.0" -numpy = [ - {version = ">=1.21.2", markers = "python_version >= \"3.10\" and python_version < \"3.11\""}, - {version = ">=1.26.0", markers = "python_version >= \"3.12\""}, - {version = ">=1.23.3", markers = "python_version >= \"3.11\" and python_version < \"3.12\""}, -] +numpy = {version = ">=1.26.0", markers = "python_version >= \"3.12\" and python_version < \"3.13\""} wrapt = ">=1.11.2" [[package]] @@ -804,22 +883,36 @@ files = [ {file = "editdistance-0.8.1.tar.gz", hash = 
"sha256:d1cdf80a5d5014b0c9126a69a42ce55a457b457f6986ff69ca98e4fe4d2d8fed"}, ] +[[package]] +name = "einops" +version = "0.8.1" +description = "A new flavour of deep learning operations" +optional = false +python-versions = ">=3.8" +files = [ + {file = "einops-0.8.1-py3-none-any.whl", hash = "sha256:919387eb55330f5757c6bea9165c5ff5cfe63a642682ea788a6d472576d81737"}, + {file = "einops-0.8.1.tar.gz", hash = "sha256:de5d960a7a761225532e0f1959e5315ebeafc0cd43394732f103ca44b9837e84"}, +] + [[package]] name = "etils" -version = "1.12.0" +version = "1.12.2" description = "Collection of common python utils" optional = false python-versions = ">=3.10" files = [ - {file = "etils-1.12.0-py3-none-any.whl", hash = "sha256:f80c2ff4289cc504b58b7e7a9f9db8b373a33227e43694a66808bcc81e51ffb8"}, - {file = "etils-1.12.0.tar.gz", hash = "sha256:67aa7d549f9bee7851e07fbf0e099232b7f867c2825f468d7cbe728ab0d01bd8"}, + {file = "etils-1.12.2-py3-none-any.whl", hash = "sha256:4600bec9de6cf5cb043a171e1856e38b5f273719cf3ecef90199f7091a6b3912"}, + {file = "etils-1.12.2.tar.gz", hash = "sha256:c6b9e1f0ce66d1bbf54f99201b08a60ba396d3446d9eb18d4bc39b26a2e1a5ee"}, ] [package.dependencies] +absl-py = {version = "*", optional = true, markers = "extra == \"etqdm\""} +einops = {version = "*", optional = true, markers = "extra == \"enp\""} fsspec = {version = "*", optional = true, markers = "extra == \"epath\""} importlib_resources = {version = "*", optional = true, markers = "extra == \"epath\""} numpy = {version = "*", optional = true, markers = "extra == \"enp\""} -typing_extensions = {version = "*", optional = true, markers = "extra == \"epath\" or extra == \"epy\""} +tqdm = {version = "*", optional = true, markers = "extra == \"etqdm\""} +typing_extensions = {version = "*", optional = true, markers = "extra == \"epy\""} zipp = {version = "*", optional = true, markers = "extra == \"epath\""} [package.extras] @@ -830,7 +923,7 @@ docs = ["etils[all,dev]", "sphinx-apitree[ext]"] eapp = ["absl-py", 
"etils[epy]", "simple_parsing"] ecolab = ["etils[enp]", "etils[epy]", "etils[etree]", "jupyter", "mediapy", "numpy", "packaging", "protobuf"] edc = ["etils[epy]"] -enp = ["etils[epy]", "numpy"] +enp = ["einops", "etils[epy]", "numpy"] epath = ["etils[epy]", "fsspec", "importlib_resources", "typing_extensions", "zipp"] epath-gcs = ["etils[epath]", "gcsfs"] epath-s3 = ["etils[epath]", "s3fs"] @@ -842,20 +935,6 @@ etree-jax = ["etils[etree]", "jax[cpu]"] etree-tf = ["etils[etree]", "tensorflow"] lazy-imports = ["etils[ecolab]"] -[[package]] -name = "exceptiongroup" -version = "1.2.2" -description = "Backport of PEP 654 (exception groups)" -optional = false -python-versions = ">=3.7" -files = [ - {file = "exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b"}, - {file = "exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc"}, -] - -[package.extras] -test = ["pytest (>=6)"] - [[package]] name = "executing" version = "2.2.0" @@ -872,18 +951,18 @@ tests = ["asttokens (>=2.1.0)", "coverage", "coverage-enable-subprocess", "ipyth [[package]] name = "fastapi" -version = "0.115.8" +version = "0.115.12" description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production" optional = false python-versions = ">=3.8" files = [ - {file = "fastapi-0.115.8-py3-none-any.whl", hash = "sha256:753a96dd7e036b34eeef8babdfcfe3f28ff79648f86551eb36bfc1b0bf4a8cbf"}, - {file = "fastapi-0.115.8.tar.gz", hash = "sha256:0ce9111231720190473e222cdf0f07f7206ad7e53ea02beb1d2dc36e2f0741e9"}, + {file = "fastapi-0.115.12-py3-none-any.whl", hash = "sha256:e94613d6c05e27be7ffebdd6ea5f388112e5e430c8f7d6494a9d1d88d43e814d"}, + {file = "fastapi-0.115.12.tar.gz", hash = "sha256:1e2c2a2646905f9e83d32f04a3f86aff4a286669c6c950ca95b5fd68c2602681"}, ] [package.dependencies] pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<2.0.0 || >2.0.0,<2.0.1 || 
>2.0.1,<2.1.0 || >2.1.0,<3.0.0" -starlette = ">=0.40.0,<0.46.0" +starlette = ">=0.40.0,<0.47.0" typing-extensions = ">=4.8.0" [package.extras] @@ -892,13 +971,13 @@ standard = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.5)", "htt [[package]] name = "filelock" -version = "3.17.0" +version = "3.18.0" description = "A platform independent file lock." optional = false python-versions = ">=3.9" files = [ - {file = "filelock-3.17.0-py3-none-any.whl", hash = "sha256:533dc2f7ba78dc2f0f531fc6c4940addf7b70a481e269a5a3b93be94ffbe8338"}, - {file = "filelock-3.17.0.tar.gz", hash = "sha256:ee4e77401ef576ebb38cd7f13b9b28893194acc20a8e68e18730ba9c0e54660e"}, + {file = "filelock-3.18.0-py3-none-any.whl", hash = "sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de"}, + {file = "filelock-3.18.0.tar.gz", hash = "sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2"}, ] [package.extras] @@ -951,22 +1030,19 @@ files = [ [[package]] name = "flax" -version = "0.10.3" +version = "0.10.5" description = "Flax: A neural network library for JAX designed for flexibility" optional = false python-versions = ">=3.10" files = [ - {file = "flax-0.10.3-py3-none-any.whl", hash = "sha256:7158b5dd6a05837e662a1ce1beea7adbad6d3612c0551c986b1c0a56071e3021"}, - {file = "flax-0.10.3.tar.gz", hash = "sha256:29cde8cf05ffbff39b7f7167f0fe9916694cce76ce4c14e8be3549c1fd1b7c81"}, + {file = "flax-0.10.5-py3-none-any.whl", hash = "sha256:0d8a3c06618af92bacfbb6f1e04ec19591b277a9f84f4ef6018049c5473e166e"}, + {file = "flax-0.10.5.tar.gz", hash = "sha256:0cc137d47fd44fe0b3e2d50c410f4c9955feb9cd100a8d409e0de40922ba1d5a"}, ] [package.dependencies] -jax = ">=0.4.27" +jax = ">=0.5.1" msgpack = "*" -numpy = [ - {version = ">=1.26.0", markers = "python_version >= \"3.12\""}, - {version = ">=1.23.2", markers = "python_version >= \"3.11\" and python_version < \"3.12\""}, -] +numpy = {version = ">=1.26.0", markers = "python_version >= \"3.12\""} optax = "*" orbax-checkpoint = 
"*" PyYAML = ">=5.4.1" @@ -983,13 +1059,13 @@ testing = ["cloudpickle (>=3.0.0)", "clu", "clu (<=0.0.9)", "einops", "gymnasium [[package]] name = "fsspec" -version = "2025.2.0" +version = "2025.3.2" description = "File-system specification" optional = false -python-versions = ">=3.8" +python-versions = ">=3.9" files = [ - {file = "fsspec-2025.2.0-py3-none-any.whl", hash = "sha256:9de2ad9ce1f85e1931858535bc882543171d197001a0a5eb2ddc04f1781ab95b"}, - {file = "fsspec-2025.2.0.tar.gz", hash = "sha256:1c24b16eaa0a1798afa0337aa0db9b256718ab2a89c425371f5628d22c3b6afd"}, + {file = "fsspec-2025.3.2-py3-none-any.whl", hash = "sha256:2daf8dc3d1dfa65b6aa37748d112773a7a08416f6c70d96b264c96476ecaf711"}, + {file = "fsspec-2025.3.2.tar.gz", hash = "sha256:e52c77ef398680bbd6a98c0e628fbc469491282981209907bbc8aea76a04fdc6"}, ] [package.extras] @@ -1045,57 +1121,52 @@ files = [ [package.dependencies] google-api-core = {version = ">=1.34.1,<2.0.dev0 || >=2.11.dev0,<3.0.0dev", extras = ["grpc"]} google-auth = ">=2.14.1,<2.24.0 || >2.24.0,<2.25.0 || >2.25.0,<3.0.0dev" -proto-plus = ">=1.22.3,<2.0.0dev" +proto-plus = [ + {version = ">=1.25.0,<2.0.0dev", markers = "python_version >= \"3.13\""}, + {version = ">=1.22.3,<2.0.0dev", markers = "python_version < \"3.13\""}, +] protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0dev" [[package]] name = "google-api-core" -version = "2.24.1" +version = "1.34.1" description = "Google API client core library" optional = false python-versions = ">=3.7" files = [ - {file = "google_api_core-2.24.1-py3-none-any.whl", hash = "sha256:bc78d608f5a5bf853b80bd70a795f703294de656c096c0968320830a4bc280f1"}, - {file = "google_api_core-2.24.1.tar.gz", hash = "sha256:f8b36f5456ab0dd99a1b693a40a31d1e7757beea380ad1b38faaf8941eae9d8a"}, + {file = "google-api-core-1.34.1.tar.gz", hash = "sha256:3399c92887a97d33038baa4bfd3bf07acc05d474b0171f333e1f641c1364e552"}, + {file = 
"google_api_core-1.34.1-py3-none-any.whl", hash = "sha256:52bcc9d9937735f8a3986fa0bbf9135ae9cf5393a722387e5eced520e39c774a"}, ] [package.dependencies] -google-auth = ">=2.14.1,<3.0.dev0" -googleapis-common-protos = ">=1.56.2,<2.0.dev0" -grpcio = [ - {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""}, - {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""}, -] -grpcio-status = [ - {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""}, - {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""}, -] -proto-plus = ">=1.22.3,<2.0.0dev" -protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0" -requests = ">=2.18.0,<3.0.0.dev0" +google-auth = ">=1.25.0,<3.0dev" +googleapis-common-protos = ">=1.56.2,<2.0dev" +grpcio = {version = ">=1.33.2,<2.0dev", optional = true, markers = "extra == \"grpc\""} +grpcio-status = {version = ">=1.33.2,<2.0dev", optional = true, markers = "extra == \"grpc\""} +protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.0.0dev" +requests = ">=2.18.0,<3.0.0dev" [package.extras] -async-rest = ["google-auth[aiohttp] (>=2.35.0,<3.0.dev0)"] -grpc = ["grpcio (>=1.33.2,<2.0dev)", "grpcio (>=1.49.1,<2.0dev)", "grpcio-status (>=1.33.2,<2.0.dev0)", "grpcio-status (>=1.49.1,<2.0.dev0)"] -grpcgcp = ["grpcio-gcp (>=0.2.2,<1.0.dev0)"] -grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0.dev0)"] +grpc = ["grpcio (>=1.33.2,<2.0dev)", "grpcio-status (>=1.33.2,<2.0dev)"] +grpcgcp = ["grpcio-gcp (>=0.2.2,<1.0dev)"] +grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0dev)"] [[package]] name = "google-api-python-client" -version = "2.161.0" +version = "2.166.0" description = "Google API Client Library for Python" optional = false 
python-versions = ">=3.7" files = [ - {file = "google_api_python_client-2.161.0-py2.py3-none-any.whl", hash = "sha256:9476a5a4f200bae368140453df40f9cda36be53fa7d0e9a9aac4cdb859a26448"}, - {file = "google_api_python_client-2.161.0.tar.gz", hash = "sha256:324c0cce73e9ea0a0d2afd5937e01b7c2d6a4d7e2579cdb6c384f9699d6c9f37"}, + {file = "google_api_python_client-2.166.0-py2.py3-none-any.whl", hash = "sha256:dd8cc74d9fc18538ab05cbd2e93cb4f82382f910c5f6945db06c91f1deae6e45"}, + {file = "google_api_python_client-2.166.0.tar.gz", hash = "sha256:b8cf843bd9d736c134aef76cf1dc7a47c9283a2ef24267b97207b9dd43b30ef7"}, ] [package.dependencies] -google-api-core = ">=1.31.5,<2.0.dev0 || >2.3.0,<3.0.0.dev0" -google-auth = ">=1.32.0,<2.24.0 || >2.24.0,<2.25.0 || >2.25.0,<3.0.0.dev0" +google-api-core = ">=1.31.5,<2.0.dev0 || >2.3.0,<3.0.0" +google-auth = ">=1.32.0,<2.24.0 || >2.24.0,<2.25.0 || >2.25.0,<3.0.0" google-auth-httplib2 = ">=0.2.0,<1.0.0" -httplib2 = ">=0.19.0,<1.dev0" +httplib2 = ">=0.19.0,<1.0.0" uritemplate = ">=3.0.1,<5" [[package]] @@ -1137,6 +1208,26 @@ files = [ google-auth = "*" httplib2 = ">=0.19.0" +[[package]] +name = "google-genai" +version = "1.10.0" +description = "GenAI Python SDK" +optional = false +python-versions = ">=3.9" +files = [ + {file = "google_genai-1.10.0-py3-none-any.whl", hash = "sha256:41b105a2fcf8a027fc45cc16694cd559b8cd1272eab7345ad58cfa2c353bf34f"}, + {file = "google_genai-1.10.0.tar.gz", hash = "sha256:f59423e0f155dc66b7792c8a0e6724c75c72dc699d1eb7907d4d0006d4f6186f"}, +] + +[package.dependencies] +anyio = ">=4.8.0,<5.0.0" +google-auth = ">=2.14.1,<3.0.0" +httpx = ">=0.28.1,<1.0.0" +pydantic = ">=2.0.0,<3.0.0" +requests = ">=2.28.1,<3.0.0" +typing-extensions = ">=4.11.0,<5.0.0" +websockets = ">=13.0.0,<15.1.0" + [[package]] name = "google-generativeai" version = "0.8.4" @@ -1177,87 +1268,78 @@ six = "*" [[package]] name = "googleapis-common-protos" -version = "1.68.0" +version = "1.69.2" description = "Common protobufs used in Google APIs" 
optional = false python-versions = ">=3.7" files = [ - {file = "googleapis_common_protos-1.68.0-py2.py3-none-any.whl", hash = "sha256:aaf179b2f81df26dfadac95def3b16a95064c76a5f45f07e4c68a21bb371c4ac"}, - {file = "googleapis_common_protos-1.68.0.tar.gz", hash = "sha256:95d38161f4f9af0d9423eed8fb7b64ffd2568c3464eb542ff02c5bfa1953ab3c"}, + {file = "googleapis_common_protos-1.69.2-py3-none-any.whl", hash = "sha256:0b30452ff9c7a27d80bfc5718954063e8ab53dd3697093d3bc99581f5fd24212"}, + {file = "googleapis_common_protos-1.69.2.tar.gz", hash = "sha256:3e1b904a27a33c821b4b749fd31d334c0c9c30e6113023d495e48979a3dc9c5f"}, ] [package.dependencies] -protobuf = ">=3.20.2,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0" +protobuf = ">=3.20.2,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<7.0.0" [package.extras] -grpc = ["grpcio (>=1.44.0,<2.0.0.dev0)"] +grpc = ["grpcio (>=1.44.0,<2.0.0)"] [[package]] name = "grpcio" -version = "1.70.0" +version = "1.63.0" description = "HTTP/2-based RPC framework" optional = false python-versions = ">=3.8" files = [ - {file = "grpcio-1.70.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:95469d1977429f45fe7df441f586521361e235982a0b39e33841549143ae2851"}, - {file = "grpcio-1.70.0-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:ed9718f17fbdb472e33b869c77a16d0b55e166b100ec57b016dc7de9c8d236bf"}, - {file = "grpcio-1.70.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:374d014f29f9dfdb40510b041792e0e2828a1389281eb590df066e1cc2b404e5"}, - {file = "grpcio-1.70.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f2af68a6f5c8f78d56c145161544ad0febbd7479524a59c16b3e25053f39c87f"}, - {file = "grpcio-1.70.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce7df14b2dcd1102a2ec32f621cc9fab6695effef516efbc6b063ad749867295"}, - {file = "grpcio-1.70.0-cp310-cp310-musllinux_1_1_i686.whl", hash = 
"sha256:c78b339869f4dbf89881e0b6fbf376313e4f845a42840a7bdf42ee6caed4b11f"}, - {file = "grpcio-1.70.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:58ad9ba575b39edef71f4798fdb5c7b6d02ad36d47949cd381d4392a5c9cbcd3"}, - {file = "grpcio-1.70.0-cp310-cp310-win32.whl", hash = "sha256:2b0d02e4b25a5c1f9b6c7745d4fa06efc9fd6a611af0fb38d3ba956786b95199"}, - {file = "grpcio-1.70.0-cp310-cp310-win_amd64.whl", hash = "sha256:0de706c0a5bb9d841e353f6343a9defc9fc35ec61d6eb6111802f3aa9fef29e1"}, - {file = "grpcio-1.70.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:17325b0be0c068f35770f944124e8839ea3185d6d54862800fc28cc2ffad205a"}, - {file = "grpcio-1.70.0-cp311-cp311-macosx_10_14_universal2.whl", hash = "sha256:dbe41ad140df911e796d4463168e33ef80a24f5d21ef4d1e310553fcd2c4a386"}, - {file = "grpcio-1.70.0-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:5ea67c72101d687d44d9c56068328da39c9ccba634cabb336075fae2eab0d04b"}, - {file = "grpcio-1.70.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb5277db254ab7586769e490b7b22f4ddab3876c490da0a1a9d7c695ccf0bf77"}, - {file = "grpcio-1.70.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e7831a0fc1beeeb7759f737f5acd9fdcda520e955049512d68fda03d91186eea"}, - {file = "grpcio-1.70.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:27cc75e22c5dba1fbaf5a66c778e36ca9b8ce850bf58a9db887754593080d839"}, - {file = "grpcio-1.70.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d63764963412e22f0491d0d32833d71087288f4e24cbcddbae82476bfa1d81fd"}, - {file = "grpcio-1.70.0-cp311-cp311-win32.whl", hash = "sha256:bb491125103c800ec209d84c9b51f1c60ea456038e4734688004f377cfacc113"}, - {file = "grpcio-1.70.0-cp311-cp311-win_amd64.whl", hash = "sha256:d24035d49e026353eb042bf7b058fb831db3e06d52bee75c5f2f3ab453e71aca"}, - {file = "grpcio-1.70.0-cp312-cp312-linux_armv7l.whl", hash = "sha256:ef4c14508299b1406c32bdbb9fb7b47612ab979b04cf2b27686ea31882387cff"}, - {file = 
"grpcio-1.70.0-cp312-cp312-macosx_10_14_universal2.whl", hash = "sha256:aa47688a65643afd8b166928a1da6247d3f46a2784d301e48ca1cc394d2ffb40"}, - {file = "grpcio-1.70.0-cp312-cp312-manylinux_2_17_aarch64.whl", hash = "sha256:880bfb43b1bb8905701b926274eafce5c70a105bc6b99e25f62e98ad59cb278e"}, - {file = "grpcio-1.70.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9e654c4b17d07eab259d392e12b149c3a134ec52b11ecdc6a515b39aceeec898"}, - {file = "grpcio-1.70.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2394e3381071045a706ee2eeb6e08962dd87e8999b90ac15c55f56fa5a8c9597"}, - {file = "grpcio-1.70.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:b3c76701428d2df01964bc6479422f20e62fcbc0a37d82ebd58050b86926ef8c"}, - {file = "grpcio-1.70.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:ac073fe1c4cd856ebcf49e9ed6240f4f84d7a4e6ee95baa5d66ea05d3dd0df7f"}, - {file = "grpcio-1.70.0-cp312-cp312-win32.whl", hash = "sha256:cd24d2d9d380fbbee7a5ac86afe9787813f285e684b0271599f95a51bce33528"}, - {file = "grpcio-1.70.0-cp312-cp312-win_amd64.whl", hash = "sha256:0495c86a55a04a874c7627fd33e5beaee771917d92c0e6d9d797628ac40e7655"}, - {file = "grpcio-1.70.0-cp313-cp313-linux_armv7l.whl", hash = "sha256:aa573896aeb7d7ce10b1fa425ba263e8dddd83d71530d1322fd3a16f31257b4a"}, - {file = "grpcio-1.70.0-cp313-cp313-macosx_10_14_universal2.whl", hash = "sha256:d405b005018fd516c9ac529f4b4122342f60ec1cee181788249372524e6db429"}, - {file = "grpcio-1.70.0-cp313-cp313-manylinux_2_17_aarch64.whl", hash = "sha256:f32090238b720eb585248654db8e3afc87b48d26ac423c8dde8334a232ff53c9"}, - {file = "grpcio-1.70.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dfa089a734f24ee5f6880c83d043e4f46bf812fcea5181dcb3a572db1e79e01c"}, - {file = "grpcio-1.70.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f19375f0300b96c0117aca118d400e76fede6db6e91f3c34b7b035822e06c35f"}, - {file = 
"grpcio-1.70.0-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:7c73c42102e4a5ec76608d9b60227d917cea46dff4d11d372f64cbeb56d259d0"}, - {file = "grpcio-1.70.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:0a5c78d5198a1f0aa60006cd6eb1c912b4a1520b6a3968e677dbcba215fabb40"}, - {file = "grpcio-1.70.0-cp313-cp313-win32.whl", hash = "sha256:fe9dbd916df3b60e865258a8c72ac98f3ac9e2a9542dcb72b7a34d236242a5ce"}, - {file = "grpcio-1.70.0-cp313-cp313-win_amd64.whl", hash = "sha256:4119fed8abb7ff6c32e3d2255301e59c316c22d31ab812b3fbcbaf3d0d87cc68"}, - {file = "grpcio-1.70.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:8058667a755f97407fca257c844018b80004ae8035565ebc2812cc550110718d"}, - {file = "grpcio-1.70.0-cp38-cp38-macosx_10_14_universal2.whl", hash = "sha256:879a61bf52ff8ccacbedf534665bb5478ec8e86ad483e76fe4f729aaef867cab"}, - {file = "grpcio-1.70.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:0ba0a173f4feacf90ee618fbc1a27956bfd21260cd31ced9bc707ef551ff7dc7"}, - {file = "grpcio-1.70.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:558c386ecb0148f4f99b1a65160f9d4b790ed3163e8610d11db47838d452512d"}, - {file = "grpcio-1.70.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:412faabcc787bbc826f51be261ae5fa996b21263de5368a55dc2cf824dc5090e"}, - {file = "grpcio-1.70.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:3b0f01f6ed9994d7a0b27eeddea43ceac1b7e6f3f9d86aeec0f0064b8cf50fdb"}, - {file = "grpcio-1.70.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:7385b1cb064734005204bc8994eed7dcb801ed6c2eda283f613ad8c6c75cf873"}, - {file = "grpcio-1.70.0-cp38-cp38-win32.whl", hash = "sha256:07269ff4940f6fb6710951116a04cd70284da86d0a4368fd5a3b552744511f5a"}, - {file = "grpcio-1.70.0-cp38-cp38-win_amd64.whl", hash = "sha256:aba19419aef9b254e15011b230a180e26e0f6864c90406fdbc255f01d83bc83c"}, - {file = "grpcio-1.70.0-cp39-cp39-linux_armv7l.whl", hash = 
"sha256:4f1937f47c77392ccd555728f564a49128b6a197a05a5cd527b796d36f3387d0"}, - {file = "grpcio-1.70.0-cp39-cp39-macosx_10_14_universal2.whl", hash = "sha256:0cd430b9215a15c10b0e7d78f51e8a39d6cf2ea819fd635a7214fae600b1da27"}, - {file = "grpcio-1.70.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:e27585831aa6b57b9250abaf147003e126cd3a6c6ca0c531a01996f31709bed1"}, - {file = "grpcio-1.70.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c1af8e15b0f0fe0eac75195992a63df17579553b0c4af9f8362cc7cc99ccddf4"}, - {file = "grpcio-1.70.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cbce24409beaee911c574a3d75d12ffb8c3e3dd1b813321b1d7a96bbcac46bf4"}, - {file = "grpcio-1.70.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:ff4a8112a79464919bb21c18e956c54add43ec9a4850e3949da54f61c241a4a6"}, - {file = "grpcio-1.70.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5413549fdf0b14046c545e19cfc4eb1e37e9e1ebba0ca390a8d4e9963cab44d2"}, - {file = "grpcio-1.70.0-cp39-cp39-win32.whl", hash = "sha256:b745d2c41b27650095e81dea7091668c040457483c9bdb5d0d9de8f8eb25e59f"}, - {file = "grpcio-1.70.0-cp39-cp39-win_amd64.whl", hash = "sha256:a31d7e3b529c94e930a117b2175b2efd179d96eb3c7a21ccb0289a8ab05b645c"}, - {file = "grpcio-1.70.0.tar.gz", hash = "sha256:8d1584a68d5922330025881e63a6c1b54cc8117291d382e4fa69339b6d914c56"}, + {file = "grpcio-1.63.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:2e93aca840c29d4ab5db93f94ed0a0ca899e241f2e8aec6334ab3575dc46125c"}, + {file = "grpcio-1.63.0-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:91b73d3f1340fefa1e1716c8c1ec9930c676d6b10a3513ab6c26004cb02d8b3f"}, + {file = "grpcio-1.63.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:b3afbd9d6827fa6f475a4f91db55e441113f6d3eb9b7ebb8fb806e5bb6d6bd0d"}, + {file = "grpcio-1.63.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f3f6883ce54a7a5f47db43289a0a4c776487912de1a0e2cc83fdaec9685cc9f"}, + {file = 
"grpcio-1.63.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cf8dae9cc0412cb86c8de5a8f3be395c5119a370f3ce2e69c8b7d46bb9872c8d"}, + {file = "grpcio-1.63.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:08e1559fd3b3b4468486b26b0af64a3904a8dbc78d8d936af9c1cf9636eb3e8b"}, + {file = "grpcio-1.63.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5c039ef01516039fa39da8a8a43a95b64e288f79f42a17e6c2904a02a319b357"}, + {file = "grpcio-1.63.0-cp310-cp310-win32.whl", hash = "sha256:ad2ac8903b2eae071055a927ef74121ed52d69468e91d9bcbd028bd0e554be6d"}, + {file = "grpcio-1.63.0-cp310-cp310-win_amd64.whl", hash = "sha256:b2e44f59316716532a993ca2966636df6fbe7be4ab6f099de6815570ebe4383a"}, + {file = "grpcio-1.63.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:f28f8b2db7b86c77916829d64ab21ff49a9d8289ea1564a2b2a3a8ed9ffcccd3"}, + {file = "grpcio-1.63.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:65bf975639a1f93bee63ca60d2e4951f1b543f498d581869922910a476ead2f5"}, + {file = "grpcio-1.63.0-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:b5194775fec7dc3dbd6a935102bb156cd2c35efe1685b0a46c67b927c74f0cfb"}, + {file = "grpcio-1.63.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e4cbb2100ee46d024c45920d16e888ee5d3cf47c66e316210bc236d5bebc42b3"}, + {file = "grpcio-1.63.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1ff737cf29b5b801619f10e59b581869e32f400159e8b12d7a97e7e3bdeee6a2"}, + {file = "grpcio-1.63.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cd1e68776262dd44dedd7381b1a0ad09d9930ffb405f737d64f505eb7f77d6c7"}, + {file = "grpcio-1.63.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:93f45f27f516548e23e4ec3fbab21b060416007dbe768a111fc4611464cc773f"}, + {file = "grpcio-1.63.0-cp311-cp311-win32.whl", hash = "sha256:878b1d88d0137df60e6b09b74cdb73db123f9579232c8456f53e9abc4f62eb3c"}, + {file = "grpcio-1.63.0-cp311-cp311-win_amd64.whl", hash = 
"sha256:756fed02dacd24e8f488f295a913f250b56b98fb793f41d5b2de6c44fb762434"}, + {file = "grpcio-1.63.0-cp312-cp312-linux_armv7l.whl", hash = "sha256:93a46794cc96c3a674cdfb59ef9ce84d46185fe9421baf2268ccb556f8f81f57"}, + {file = "grpcio-1.63.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:a7b19dfc74d0be7032ca1eda0ed545e582ee46cd65c162f9e9fc6b26ef827dc6"}, + {file = "grpcio-1.63.0-cp312-cp312-manylinux_2_17_aarch64.whl", hash = "sha256:8064d986d3a64ba21e498b9a376cbc5d6ab2e8ab0e288d39f266f0fca169b90d"}, + {file = "grpcio-1.63.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:219bb1848cd2c90348c79ed0a6b0ea51866bc7e72fa6e205e459fedab5770172"}, + {file = "grpcio-1.63.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2d60cd1d58817bc5985fae6168d8b5655c4981d448d0f5b6194bbcc038090d2"}, + {file = "grpcio-1.63.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:9e350cb096e5c67832e9b6e018cf8a0d2a53b2a958f6251615173165269a91b0"}, + {file = "grpcio-1.63.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:56cdf96ff82e3cc90dbe8bac260352993f23e8e256e063c327b6cf9c88daf7a9"}, + {file = "grpcio-1.63.0-cp312-cp312-win32.whl", hash = "sha256:3a6d1f9ea965e750db7b4ee6f9fdef5fdf135abe8a249e75d84b0a3e0c668a1b"}, + {file = "grpcio-1.63.0-cp312-cp312-win_amd64.whl", hash = "sha256:d2497769895bb03efe3187fb1888fc20e98a5f18b3d14b606167dacda5789434"}, + {file = "grpcio-1.63.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:fdf348ae69c6ff484402cfdb14e18c1b0054ac2420079d575c53a60b9b2853ae"}, + {file = "grpcio-1.63.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a3abfe0b0f6798dedd2e9e92e881d9acd0fdb62ae27dcbbfa7654a57e24060c0"}, + {file = "grpcio-1.63.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:6ef0ad92873672a2a3767cb827b64741c363ebaa27e7f21659e4e31f4d750280"}, + {file = "grpcio-1.63.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:b416252ac5588d9dfb8a30a191451adbf534e9ce5f56bb02cd193f12d8845b7f"}, + {file = "grpcio-1.63.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e3b77eaefc74d7eb861d3ffbdf91b50a1bb1639514ebe764c47773b833fa2d91"}, + {file = "grpcio-1.63.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b005292369d9c1f80bf70c1db1c17c6c342da7576f1c689e8eee4fb0c256af85"}, + {file = "grpcio-1.63.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:cdcda1156dcc41e042d1e899ba1f5c2e9f3cd7625b3d6ebfa619806a4c1aadda"}, + {file = "grpcio-1.63.0-cp38-cp38-win32.whl", hash = "sha256:01799e8649f9e94ba7db1aeb3452188048b0019dc37696b0f5ce212c87c560c3"}, + {file = "grpcio-1.63.0-cp38-cp38-win_amd64.whl", hash = "sha256:6a1a3642d76f887aa4009d92f71eb37809abceb3b7b5a1eec9c554a246f20e3a"}, + {file = "grpcio-1.63.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:75f701ff645858a2b16bc8c9fc68af215a8bb2d5a9b647448129de6e85d52bce"}, + {file = "grpcio-1.63.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:cacdef0348a08e475a721967f48206a2254a1b26ee7637638d9e081761a5ba86"}, + {file = "grpcio-1.63.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0697563d1d84d6985e40ec5ec596ff41b52abb3fd91ec240e8cb44a63b895094"}, + {file = "grpcio-1.63.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6426e1fb92d006e47476d42b8f240c1d916a6d4423c5258ccc5b105e43438f61"}, + {file = "grpcio-1.63.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e48cee31bc5f5a31fb2f3b573764bd563aaa5472342860edcc7039525b53e46a"}, + {file = "grpcio-1.63.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:50344663068041b34a992c19c600236e7abb42d6ec32567916b87b4c8b8833b3"}, + {file = "grpcio-1.63.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:259e11932230d70ef24a21b9fb5bb947eb4703f57865a404054400ee92f42f5d"}, + {file = "grpcio-1.63.0-cp39-cp39-win32.whl", hash = "sha256:a44624aad77bf8ca198c55af811fd28f2b3eaf0a50ec5b57b06c034416ef2d0a"}, + {file = 
"grpcio-1.63.0-cp39-cp39-win_amd64.whl", hash = "sha256:166e5c460e5d7d4656ff9e63b13e1f6029b122104c1633d5f37eaea348d7356d"}, + {file = "grpcio-1.63.0.tar.gz", hash = "sha256:f3023e14805c61bc439fb40ca545ac3d5740ce66120a678a3c6c2c55b70343d1"}, ] [package.extras] -protobuf = ["grpcio-tools (>=1.70.0)"] +protobuf = ["grpcio-tools (>=1.63.0)"] [[package]] name = "grpcio-status" @@ -1275,22 +1357,6 @@ googleapis-common-protos = ">=1.5.5" grpcio = ">=1.48.2" protobuf = ">=3.12.0" -[[package]] -name = "grpcio-status" -version = "1.70.0" -description = "Status proto mapping for gRPC" -optional = false -python-versions = ">=3.8" -files = [ - {file = "grpcio_status-1.70.0-py3-none-any.whl", hash = "sha256:fc5a2ae2b9b1c1969cc49f3262676e6854aa2398ec69cb5bd6c47cd501904a85"}, - {file = "grpcio_status-1.70.0.tar.gz", hash = "sha256:0e7b42816512433b18b9d764285ff029bde059e9d41f8fe10a60631bd8348101"}, -] - -[package.dependencies] -googleapis-common-protos = ">=1.5.5" -grpcio = ">=1.70.0" -protobuf = ">=5.26.1,<6.0dev" - [[package]] name = "h11" version = "0.14.0" @@ -1340,6 +1406,27 @@ files = [ [package.dependencies] numpy = ">=1.19.3" +[[package]] +name = "httpcore" +version = "1.0.7" +description = "A minimal low-level HTTP client." +optional = false +python-versions = ">=3.8" +files = [ + {file = "httpcore-1.0.7-py3-none-any.whl", hash = "sha256:a3fff8f43dc260d5bd363d9f9cf1830fa3a458b332856f34282de498ed420edd"}, + {file = "httpcore-1.0.7.tar.gz", hash = "sha256:8551cb62a169ec7162ac7be8d4817d561f60e08eaa485234898414bb5a8a0b4c"}, +] + +[package.dependencies] +certifi = "*" +h11 = ">=0.13,<0.15" + +[package.extras] +asyncio = ["anyio (>=4.0,<5.0)"] +http2 = ["h2 (>=3,<5)"] +socks = ["socksio (==1.*)"] +trio = ["trio (>=0.22.0,<1.0)"] + [[package]] name = "httplib2" version = "0.22.0" @@ -1409,15 +1496,50 @@ files = [ [package.extras] test = ["Cython (>=0.29.24)"] +[[package]] +name = "httpx" +version = "0.28.1" +description = "The next generation HTTP client." 
+optional = false +python-versions = ">=3.8" +files = [ + {file = "httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad"}, + {file = "httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc"}, +] + +[package.dependencies] +anyio = "*" +certifi = "*" +httpcore = "==1.*" +idna = "*" + +[package.extras] +brotli = ["brotli", "brotlicffi"] +cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"] +http2 = ["h2 (>=3,<5)"] +socks = ["socksio (==1.*)"] +zstd = ["zstandard (>=0.18.0)"] + +[[package]] +name = "httpx-sse" +version = "0.4.0" +description = "Consume Server-Sent Event (SSE) messages with HTTPX." +optional = false +python-versions = ">=3.8" +files = [ + {file = "httpx-sse-0.4.0.tar.gz", hash = "sha256:1e81a3a3070ce322add1d3529ed42eb5f70817f45ed6ec915ab753f961139721"}, + {file = "httpx_sse-0.4.0-py3-none-any.whl", hash = "sha256:f329af6eae57eaa2bdfd962b42524764af68075ea87370a2de920af5341e318f"}, +] + [[package]] name = "huggingface-hub" -version = "0.29.1" +version = "0.30.2" description = "Client library to download and publish models, datasets and other repos on the huggingface.co hub" optional = false python-versions = ">=3.8.0" files = [ - {file = "huggingface_hub-0.29.1-py3-none-any.whl", hash = "sha256:352f69caf16566c7b6de84b54a822f6238e17ddd8ae3da4f8f2272aea5b198d5"}, - {file = "huggingface_hub-0.29.1.tar.gz", hash = "sha256:9524eae42077b8ff4fc459ceb7a514eca1c1232b775276b009709fe2a084f250"}, + {file = "huggingface_hub-0.30.2-py3-none-any.whl", hash = "sha256:68ff05969927058cfa41df4f2155d4bb48f5f54f719dd0390103eefa9b191e28"}, + {file = "huggingface_hub-0.30.2.tar.gz", hash = "sha256:9a7897c5b6fd9dad3168a794a8998d6378210f5b9688d0dfc180b1a228dc2466"}, ] [package.dependencies] @@ -1435,6 +1557,7 @@ cli = ["InquirerPy (==0.3.4)"] dev = ["InquirerPy (==0.3.4)", "Jinja2", "Pillow", "aiohttp", "fastapi", "gradio (>=4.0.0)", "jedi", "libcst (==1.4.0)", 
"mypy (==1.5.1)", "numpy", "pytest (>=8.1.1,<8.2.2)", "pytest-asyncio", "pytest-cov", "pytest-env", "pytest-mock", "pytest-rerunfailures", "pytest-vcr", "pytest-xdist", "ruff (>=0.9.0)", "soundfile", "types-PyYAML", "types-requests", "types-simplejson", "types-toml", "types-tqdm", "types-urllib3", "typing-extensions (>=4.8.0)", "urllib3 (<2.0)"] fastai = ["fastai (>=2.4)", "fastcore (>=1.3.27)", "toml"] hf-transfer = ["hf-transfer (>=0.1.4)"] +hf-xet = ["hf-xet (>=0.1.4)"] inference = ["aiohttp"] quality = ["libcst (==1.4.0)", "mypy (==1.5.1)", "ruff (>=0.9.0)"] tensorflow = ["graphviz", "pydot", "tensorflow"] @@ -1459,13 +1582,13 @@ pyreadline3 = {version = "*", markers = "sys_platform == \"win32\" and python_ve [[package]] name = "humanize" -version = "4.12.1" +version = "4.12.2" description = "Python humanize utilities" optional = false python-versions = ">=3.9" files = [ - {file = "humanize-4.12.1-py3-none-any.whl", hash = "sha256:86014ca5c52675dffa1d404491952f1f5bf03b07c175a51891a343daebf01fea"}, - {file = "humanize-4.12.1.tar.gz", hash = "sha256:1338ba97415c96556758a6e2f65977ed406dddf4620d4c6db9bbdfd07f0f1232"}, + {file = "humanize-4.12.2-py3-none-any.whl", hash = "sha256:e4e44dced598b7e03487f3b1c6fd5b1146c30ea55a110e71d5d4bca3e094259e"}, + {file = "humanize-4.12.2.tar.gz", hash = "sha256:ce0715740e9caacc982bb89098182cf8ded3552693a433311c6a4ce6f4e12a2c"}, ] [package.extras] @@ -1498,13 +1621,13 @@ files = [ [[package]] name = "importlib-metadata" -version = "8.5.0" +version = "8.6.1" description = "Read metadata from Python packages" optional = false -python-versions = ">=3.8" +python-versions = ">=3.9" files = [ - {file = "importlib_metadata-8.5.0-py3-none-any.whl", hash = "sha256:45e54197d28b7a7f1559e60b95e7c567032b602131fbd588f1497f47880aa68b"}, - {file = "importlib_metadata-8.5.0.tar.gz", hash = "sha256:71522656f0abace1d072b9e5481a48f07c138e00f079c38c8f883823f9c26bd7"}, + {file = "importlib_metadata-8.6.1-py3-none-any.whl", hash = 
"sha256:02a89390c1e15fdfdc0d7c6b25cb3e62650d0494005c97d6f148bf5b9787525e"}, + {file = "importlib_metadata-8.6.1.tar.gz", hash = "sha256:310b41d755445d74569f993ccfc22838295d9fe005425094fad953d7f15c8580"}, ] [package.dependencies] @@ -1516,7 +1639,7 @@ cover = ["pytest-cov"] doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] enabler = ["pytest-enabler (>=2.2)"] perf = ["ipython"] -test = ["flufl.flake8", "importlib-resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-perf (>=0.9.2)"] +test = ["flufl.flake8", "importlib_resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-perf (>=0.9.2)"] type = ["pytest-mypy"] [[package]] @@ -1538,21 +1661,31 @@ enabler = ["pytest-enabler (>=2.2)"] test = ["jaraco.test (>=5.4)", "pytest (>=6,!=8.1.*)", "zipp (>=3.17)"] type = ["pytest-mypy"] +[[package]] +name = "iniconfig" +version = "2.1.0" +description = "brain-dead simple config-ini parsing" +optional = false +python-versions = ">=3.8" +files = [ + {file = "iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"}, + {file = "iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7"}, +] + [[package]] name = "ipython" -version = "8.32.0" +version = "8.35.0" description = "IPython: Productive Interactive Computing" optional = false python-versions = ">=3.10" files = [ - {file = "ipython-8.32.0-py3-none-any.whl", hash = "sha256:cae85b0c61eff1fc48b0a8002de5958b6528fa9c8defb1894da63f42613708aa"}, - {file = "ipython-8.32.0.tar.gz", hash = "sha256:be2c91895b0b9ea7ba49d33b23e2040c352b33eb6a519cca7ce6e0c743444251"}, + {file = "ipython-8.35.0-py3-none-any.whl", hash = "sha256:e6b7470468ba6f1f0a7b116bb688a3ece2f13e2f94138e508201fad677a788ba"}, + {file = "ipython-8.35.0.tar.gz", hash = 
"sha256:d200b7d93c3f5883fc36ab9ce28a18249c7706e51347681f80a0aef9895f2520"}, ] [package.dependencies] colorama = {version = "*", markers = "sys_platform == \"win32\""} decorator = "*" -exceptiongroup = {version = "*", markers = "python_version < \"3.11\""} jedi = ">=0.16" matplotlib-inline = "*" pexpect = {version = ">4.3", markers = "sys_platform != \"win32\" and sys_platform != \"emscripten\""} @@ -1560,7 +1693,6 @@ prompt_toolkit = ">=3.0.41,<3.1.0" pygments = ">=2.4.0" stack_data = "*" traitlets = ">=5.13.0" -typing_extensions = {version = ">=4.6", markers = "python_version < \"3.12\""} [package.extras] all = ["ipython[black,doc,kernel,matplotlib,nbconvert,nbformat,notebook,parallel,qtconsole]", "ipython[test,test-extra]"] @@ -1574,7 +1706,7 @@ notebook = ["ipywidgets", "notebook"] parallel = ["ipyparallel"] qtconsole = ["qtconsole"] test = ["packaging", "pickleshare", "pytest", "pytest-asyncio (<0.22)", "testpath"] -test-extra = ["curio", "ipython[test]", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.23)", "pandas", "trio"] +test-extra = ["curio", "ipython[test]", "jupyter_ai", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.23)", "pandas", "trio"] [[package]] name = "isort" @@ -1603,63 +1735,61 @@ files = [ [[package]] name = "jax" -version = "0.5.0" +version = "0.5.3" description = "Differentiate, compile, and transform Numpy code." 
optional = false python-versions = ">=3.10" files = [ - {file = "jax-0.5.0-py3-none-any.whl", hash = "sha256:b3907aa87ae2c340b39cdbf80c07a74550369cafcaf7398fb60ba58d167345ab"}, - {file = "jax-0.5.0.tar.gz", hash = "sha256:49df70bf293a345a7fb519f71193506d37a024c4f850b358042eb32d502c81c8"}, + {file = "jax-0.5.3-py3-none-any.whl", hash = "sha256:1483dc237b4f47e41755d69429e8c3c138736716147cd43bb2b99b259d4e3c41"}, + {file = "jax-0.5.3.tar.gz", hash = "sha256:f17fcb0fd61dc289394af6ce4de2dada2312f2689bb0d73642c6f026a95fbb2c"}, ] [package.dependencies] -jaxlib = "0.5.0" +jaxlib = "0.5.3" ml_dtypes = ">=0.4.0" -numpy = [ - {version = ">=1.25", markers = "python_version < \"3.12\""}, - {version = ">=1.26.0", markers = "python_version >= \"3.12\""}, -] +numpy = {version = ">=1.26.0", markers = "python_version >= \"3.12\""} opt_einsum = "*" scipy = ">=1.11.1" [package.extras] -ci = ["jaxlib (==0.4.38)"] -cuda = ["jax-cuda12-plugin[with-cuda] (==0.5.0)", "jaxlib (==0.5.0)"] -cuda12 = ["jax-cuda12-plugin[with-cuda] (==0.5.0)", "jaxlib (==0.5.0)"] -cuda12-local = ["jax-cuda12-plugin (==0.5.0)", "jaxlib (==0.5.0)"] -cuda12-pip = ["jax-cuda12-plugin[with-cuda] (==0.5.0)", "jaxlib (==0.5.0)"] +ci = ["jaxlib (==0.5.1)"] +cuda = ["jax-cuda12-plugin[with-cuda] (==0.5.3)", "jaxlib (==0.5.3)"] +cuda12 = ["jax-cuda12-plugin[with-cuda] (==0.5.3)", "jaxlib (==0.5.3)"] +cuda12-local = ["jax-cuda12-plugin (==0.5.3)", "jaxlib (==0.5.3)"] +cuda12-pip = ["jax-cuda12-plugin[with-cuda] (==0.5.3)", "jaxlib (==0.5.3)"] k8s = ["kubernetes"] -minimum-jaxlib = ["jaxlib (==0.5.0)"] -rocm = ["jax-rocm60-plugin (==0.5.0)", "jaxlib (==0.5.0)"] -tpu = ["jaxlib (==0.5.0)", "libtpu (==0.0.8)", "libtpu-nightly (==0.1.dev20241010+nightly.cleanup)", "requests"] +minimum-jaxlib = ["jaxlib (==0.5.3)"] +rocm = ["jax-rocm60-plugin (==0.5.3)", "jaxlib (==0.5.3)"] +tpu = ["jaxlib (==0.5.3)", "libtpu (==0.0.11.*)", "requests"] [[package]] name = "jaxlib" -version = "0.5.0" +version = "0.5.3" description = "XLA library 
for JAX" optional = false python-versions = ">=3.10" files = [ - {file = "jaxlib-0.5.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1b8a6c4345f137f387650de2dbc488c20251b7412b55dd648e1a4f13bcf507fb"}, - {file = "jaxlib-0.5.0-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:5b2efe3dfebf18a84c451d3803ac884ee242021c1113b279c13f4bbc378c3dc0"}, - {file = "jaxlib-0.5.0-cp310-cp310-manylinux2014_x86_64.whl", hash = "sha256:74440b632107336400d4f97a16481d767f13ea914c53ba14e544c6fda54819b3"}, - {file = "jaxlib-0.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:53478a28eee6c2ef01759b05a9491702daef9268c3ed013d6f8e2e5f5cae0887"}, - {file = "jaxlib-0.5.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6cd762ed1623132499fa701c4203446102e0a9c82ca23194b87288f746d12a29"}, - {file = "jaxlib-0.5.0-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:63088dbfaa85bb56cd521a925a3472fd7328b18ec93c2d8ffa85af331095c995"}, - {file = "jaxlib-0.5.0-cp311-cp311-manylinux2014_x86_64.whl", hash = "sha256:09113ef1582ba34d7cbc440fedb318f4855b59b776711a8aba2473c9727d3025"}, - {file = "jaxlib-0.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:78289fc3ddc1e4e9510de2536a6375df9fe1c50de0ac60826c286b7a5c5090fe"}, - {file = "jaxlib-0.5.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:73e335715760c56e635109d61426435a5d7f46f3363a115daea09427d5cd0efd"}, - {file = "jaxlib-0.5.0-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:4b4b01afb0ddec96c08356bff2bb685ddbe97fdffe4ed6e2d834b30aba972f22"}, - {file = "jaxlib-0.5.0-cp312-cp312-manylinux2014_x86_64.whl", hash = "sha256:f980c733e98c998a8da87c9a8cc61b6726d0be667a58bd664c1d717b4b4eae75"}, - {file = "jaxlib-0.5.0-cp312-cp312-win_amd64.whl", hash = "sha256:5baedbeeb60fa493c7528783254f04c6e986a2826266b198ed37e9336af2ef8c"}, - {file = "jaxlib-0.5.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:ed18ea7161d03aa8fd4d1b55494882f21420efdfea68e5f298c4aebcf2ac3f34"}, - {file = "jaxlib-0.5.0-cp313-cp313-manylinux2014_aarch64.whl", hash = 
"sha256:7d9b17a7ea19355d45ecdb2ff0db5d707a86f0c5a862d94b89b4568d6c45311a"}, - {file = "jaxlib-0.5.0-cp313-cp313-manylinux2014_x86_64.whl", hash = "sha256:11eef01d37c0f1c5306265b76f207f1002d13480ded2e31fd63ec76912c93ca2"}, - {file = "jaxlib-0.5.0-cp313-cp313-win_amd64.whl", hash = "sha256:61b4d26cd6a0c49ba0b1e4340c7d29198913ee2dc70b65ee90752717d22305bb"}, + {file = "jaxlib-0.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:48ff5c89fb8a0fe04d475e9ddc074b4879a91d7ab68a51cec5cd1e87f81e6c47"}, + {file = "jaxlib-0.5.3-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:972400db4af6e85270d81db5e6e620d31395f0472e510c50dfcd4cb3f72b7220"}, + {file = "jaxlib-0.5.3-cp310-cp310-manylinux2014_x86_64.whl", hash = "sha256:52be6c9775aff738a61170d8c047505c75bb799a45518e66a7a0908127b11785"}, + {file = "jaxlib-0.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:b41a6fcaeb374fabc4ee7e74cfed60843bdab607cd54f60a68b7f7655cde2b66"}, + {file = "jaxlib-0.5.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b62bd8b29e5a4f9bfaa57c8daf6e04820b2c994f448f3dec602d64255545e9f2"}, + {file = "jaxlib-0.5.3-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:a4666f81d72c060ed3e581ded116a9caa9b0a70a148a54cb12a1d3afca3624b5"}, + {file = "jaxlib-0.5.3-cp311-cp311-manylinux2014_x86_64.whl", hash = "sha256:29e1530fc81833216f1e28b578d0c59697654f72ee31c7a44ed7753baf5ac466"}, + {file = "jaxlib-0.5.3-cp311-cp311-win_amd64.whl", hash = "sha256:8eb54e38d789557579f900ea3d70f104a440f8555a9681ed45f4a122dcbfd92e"}, + {file = "jaxlib-0.5.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d394dbde4a1c6bd67501cfb29d3819a10b900cb534cc0fc603319f7092f24cfa"}, + {file = "jaxlib-0.5.3-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:bddf6360377aa1c792e47fd87f307c342e331e5ff3582f940b1bca00f6b4bc73"}, + {file = "jaxlib-0.5.3-cp312-cp312-manylinux2014_x86_64.whl", hash = "sha256:5a5e88ab1cd6fdf78d69abe3544e8f09cce200dd339bb85fbe3c2ea67f2a5e68"}, + {file = "jaxlib-0.5.3-cp312-cp312-win_amd64.whl", hash = 
"sha256:520665929649f29f7d948d4070dbaf3e032a4c1f7c11f2863eac73320fcee784"}, + {file = "jaxlib-0.5.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:31321c25282a06a6dfc940507bc14d0a0ac838d8ced6c07aa00a7fae34ce7b3f"}, + {file = "jaxlib-0.5.3-cp313-cp313-manylinux2014_aarch64.whl", hash = "sha256:e904b92dedfbc7e545725a8d7676987030ae9c069001d94701bc109c6dab4100"}, + {file = "jaxlib-0.5.3-cp313-cp313-manylinux2014_x86_64.whl", hash = "sha256:bb7593cb7fffcb13963f22fa5229ed960b8fb4ae5ec3b0820048cbd67f1e8e31"}, + {file = "jaxlib-0.5.3-cp313-cp313-win_amd64.whl", hash = "sha256:8019f73a10b1290f988dd3768c684f3a8a147239091c3b790ce7e47e3bbc00bd"}, + {file = "jaxlib-0.5.3-cp313-cp313t-manylinux2014_x86_64.whl", hash = "sha256:4c9a9d4cda091a3ef068ace8379fff9e98eea2fc51dbdd7c3386144a1bdf715d"}, ] [package.dependencies] -ml-dtypes = ">=0.2.0" +ml_dtypes = ">=0.2.0" numpy = ">=1.25" scipy = ">=1.11.1" @@ -1684,13 +1814,13 @@ testing = ["Django", "attrs", "colorama", "docopt", "pytest (<9.0.0)"] [[package]] name = "jinja2" -version = "3.1.5" +version = "3.1.6" description = "A very fast and expressive template engine." 
optional = false python-versions = ">=3.7" files = [ - {file = "jinja2-3.1.5-py3-none-any.whl", hash = "sha256:aba0f4dc9ed8013c424088f68a5c226f7d6097ed89b246d7749c2ec4175c6adb"}, - {file = "jinja2-3.1.5.tar.gz", hash = "sha256:8fefff8dc3034e27bb80d67c671eb8a9bc424c0ef4c0826edbff304cceff43bb"}, + {file = "jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67"}, + {file = "jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d"}, ] [package.dependencies] @@ -1701,13 +1831,13 @@ i18n = ["Babel (>=2.7)"] [[package]] name = "keras" -version = "3.8.0" +version = "3.9.2" description = "Multi-backend Keras" optional = false python-versions = ">=3.9" files = [ - {file = "keras-3.8.0-py3-none-any.whl", hash = "sha256:b65d125976b0f8bf8ad1e93311a98e7dfb334ff6023627a59a52b35499165ec3"}, - {file = "keras-3.8.0.tar.gz", hash = "sha256:6289006e6f6cb2b68a563b58cf8ae5a45569449c5a791df6b2f54c1877f3f344"}, + {file = "keras-3.9.2-py3-none-any.whl", hash = "sha256:404427856c2dc30e38c9fa6fa6a13ffb1844a8c35af312ca32a8e7dea9840f1e"}, + {file = "keras-3.9.2.tar.gz", hash = "sha256:322aab6418ee3de1e2bd0871b60a07f0e444e744a7e8cba79af8b42408879ecf"}, ] [package.dependencies] @@ -1945,6 +2075,32 @@ files = [ {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"}, ] +[[package]] +name = "mcp" +version = "1.6.0" +description = "Model Context Protocol SDK" +optional = false +python-versions = ">=3.10" +files = [ + {file = "mcp-1.6.0-py3-none-any.whl", hash = "sha256:7bd24c6ea042dbec44c754f100984d186620d8b841ec30f1b19eda9b93a634d0"}, + {file = "mcp-1.6.0.tar.gz", hash = "sha256:d9324876de2c5637369f43161cd71eebfd803df5a95e46225cab8d280e366723"}, +] + +[package.dependencies] +anyio = ">=4.5" +httpx = ">=0.27" +httpx-sse = ">=0.4" +pydantic = ">=2.7.2,<3.0.0" +pydantic-settings = ">=2.5.2" +sse-starlette = ">=1.6.1" +starlette = ">=0.27" 
+uvicorn = ">=0.23.1" + +[package.extras] +cli = ["python-dotenv (>=1.0.0)", "typer (>=0.12.4)"] +rich = ["rich (>=13.9.4)"] +ws = ["websockets (>=15.0.1)"] + [[package]] name = "mdurl" version = "0.1.2" @@ -2002,12 +2158,47 @@ files = [ ] [package.dependencies] -numpy = [ - {version = ">=1.21.2", markers = "python_version >= \"3.10\" and python_version < \"3.11\""}, - {version = ">=1.26.0", markers = "python_version >= \"3.12\""}, - {version = ">=1.23.3", markers = "python_version >= \"3.11\" and python_version < \"3.12\""}, +numpy = {version = ">=1.26.0", markers = "python_version >= \"3.12\""} + +[package.extras] +dev = ["absl-py", "pyink", "pylint (>=2.6.0)", "pytest", "pytest-xdist"] + +[[package]] +name = "ml-dtypes" +version = "0.5.1" +description = "" +optional = false +python-versions = ">=3.9" +files = [ + {file = "ml_dtypes-0.5.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:bd73f51957949069573ff783563486339a9285d72e2f36c18e0c1aa9ca7eb190"}, + {file = "ml_dtypes-0.5.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:810512e2eccdfc3b41eefa3a27402371a3411453a1efc7e9c000318196140fed"}, + {file = "ml_dtypes-0.5.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:141b2ea2f20bb10802ddca55d91fe21231ef49715cfc971998e8f2a9838f3dbe"}, + {file = "ml_dtypes-0.5.1-cp310-cp310-win_amd64.whl", hash = "sha256:26ebcc69d7b779c8f129393e99732961b5cc33fcff84090451f448c89b0e01b4"}, + {file = "ml_dtypes-0.5.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:023ce2f502efd4d6c1e0472cc58ce3640d051d40e71e27386bed33901e201327"}, + {file = "ml_dtypes-0.5.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7000b6e4d8ef07542c05044ec5d8bbae1df083b3f56822c3da63993a113e716f"}, + {file = "ml_dtypes-0.5.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c09526488c3a9e8b7a23a388d4974b670a9a3dd40c5c8a61db5593ce9b725bab"}, + {file = 
"ml_dtypes-0.5.1-cp311-cp311-win_amd64.whl", hash = "sha256:15ad0f3b0323ce96c24637a88a6f44f6713c64032f27277b069f285c3cf66478"}, + {file = "ml_dtypes-0.5.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:6f462f5eca22fb66d7ff9c4744a3db4463af06c49816c4b6ac89b16bfcdc592e"}, + {file = "ml_dtypes-0.5.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f76232163b5b9c34291b54621ee60417601e2e4802a188a0ea7157cd9b323f4"}, + {file = "ml_dtypes-0.5.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ad4953c5eb9c25a56d11a913c2011d7e580a435ef5145f804d98efa14477d390"}, + {file = "ml_dtypes-0.5.1-cp312-cp312-win_amd64.whl", hash = "sha256:9626d0bca1fb387d5791ca36bacbba298c5ef554747b7ebeafefb4564fc83566"}, + {file = "ml_dtypes-0.5.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:12651420130ee7cc13059fc56dac6ad300c3af3848b802d475148c9defd27c23"}, + {file = "ml_dtypes-0.5.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c9945669d3dadf8acb40ec2e57d38c985d8c285ea73af57fc5b09872c516106d"}, + {file = "ml_dtypes-0.5.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf9975bda82a99dc935f2ae4c83846d86df8fd6ba179614acac8e686910851da"}, + {file = "ml_dtypes-0.5.1-cp313-cp313-win_amd64.whl", hash = "sha256:fd918d4e6a4e0c110e2e05be7a7814d10dc1b95872accbf6512b80a109b71ae1"}, + {file = "ml_dtypes-0.5.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:05f23447a1c20ddf4dc7c2c661aa9ed93fcb2658f1017c204d1e758714dc28a8"}, + {file = "ml_dtypes-0.5.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1b7fbe5571fdf28fd3aaab3ef4aafc847de9ebf263be959958c1ca58ec8eadf5"}, + {file = "ml_dtypes-0.5.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d13755f8e8445b3870114e5b6240facaa7cb0c3361e54beba3e07fa912a6e12b"}, + {file = "ml_dtypes-0.5.1-cp39-cp39-macosx_10_9_universal2.whl", hash = 
"sha256:b8a9d46b4df5ae2135a8e8e72b465448ebbc1559997f4f9304a9ecc3413efb5b"}, + {file = "ml_dtypes-0.5.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afb2009ac98da274e893e03162f6269398b2b00d947e7057ee2469a921d58135"}, + {file = "ml_dtypes-0.5.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aefedc579ece2f8fb38f876aa7698204ee4c372d0e54f1c1ffa8ca580b54cc60"}, + {file = "ml_dtypes-0.5.1-cp39-cp39-win_amd64.whl", hash = "sha256:8f2c028954f16ede77902b223a8da2d9cbb3892375b85809a5c3cfb1587960c4"}, + {file = "ml_dtypes-0.5.1.tar.gz", hash = "sha256:ac5b58559bb84a95848ed6984eb8013249f90b6bab62aa5acbad876e256002c9"}, ] +[package.dependencies] +numpy = {version = ">=1.26.0", markers = "python_version >= \"3.12\" and python_version < \"3.13\""} + [package.extras] dev = ["absl-py", "pyink", "pylint (>=2.6.0)", "pytest", "pytest-xdist"] @@ -2306,32 +2497,29 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"] [[package]] name = "onnxruntime" -version = "1.20.1" +version = "1.21.0" description = "ONNX Runtime is a runtime accelerator for Machine Learning models" optional = false -python-versions = "*" +python-versions = ">=3.10" files = [ - {file = "onnxruntime-1.20.1-cp310-cp310-macosx_13_0_universal2.whl", hash = "sha256:e50ba5ff7fed4f7d9253a6baf801ca2883cc08491f9d32d78a80da57256a5439"}, - {file = "onnxruntime-1.20.1-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7b2908b50101a19e99c4d4e97ebb9905561daf61829403061c1adc1b588bc0de"}, - {file = "onnxruntime-1.20.1-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d82daaec24045a2e87598b8ac2b417b1cce623244e80e663882e9fe1aae86410"}, - {file = "onnxruntime-1.20.1-cp310-cp310-win32.whl", hash = "sha256:4c4b251a725a3b8cf2aab284f7d940c26094ecd9d442f07dd81ab5470e99b83f"}, - {file = "onnxruntime-1.20.1-cp310-cp310-win_amd64.whl", hash = "sha256:d3b616bb53a77a9463707bb313637223380fc327f5064c9a782e8ec69c22e6a2"}, - 
{file = "onnxruntime-1.20.1-cp311-cp311-macosx_13_0_universal2.whl", hash = "sha256:06bfbf02ca9ab5f28946e0f912a562a5f005301d0c419283dc57b3ed7969bb7b"}, - {file = "onnxruntime-1.20.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f6243e34d74423bdd1edf0ae9596dd61023b260f546ee17d701723915f06a9f7"}, - {file = "onnxruntime-1.20.1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5eec64c0269dcdb8d9a9a53dc4d64f87b9e0c19801d9321246a53b7eb5a7d1bc"}, - {file = "onnxruntime-1.20.1-cp311-cp311-win32.whl", hash = "sha256:a19bc6e8c70e2485a1725b3d517a2319603acc14c1f1a017dda0afe6d4665b41"}, - {file = "onnxruntime-1.20.1-cp311-cp311-win_amd64.whl", hash = "sha256:8508887eb1c5f9537a4071768723ec7c30c28eb2518a00d0adcd32c89dea3221"}, - {file = "onnxruntime-1.20.1-cp312-cp312-macosx_13_0_universal2.whl", hash = "sha256:22b0655e2bf4f2161d52706e31f517a0e54939dc393e92577df51808a7edc8c9"}, - {file = "onnxruntime-1.20.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f1f56e898815963d6dc4ee1c35fc6c36506466eff6d16f3cb9848cea4e8c8172"}, - {file = "onnxruntime-1.20.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bb71a814f66517a65628c9e4a2bb530a6edd2cd5d87ffa0af0f6f773a027d99e"}, - {file = "onnxruntime-1.20.1-cp312-cp312-win32.whl", hash = "sha256:bd386cc9ee5f686ee8a75ba74037750aca55183085bf1941da8efcfe12d5b120"}, - {file = "onnxruntime-1.20.1-cp312-cp312-win_amd64.whl", hash = "sha256:19c2d843eb074f385e8bbb753a40df780511061a63f9def1b216bf53860223fb"}, - {file = "onnxruntime-1.20.1-cp313-cp313-macosx_13_0_universal2.whl", hash = "sha256:cc01437a32d0042b606f462245c8bbae269e5442797f6213e36ce61d5abdd8cc"}, - {file = "onnxruntime-1.20.1-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:fb44b08e017a648924dbe91b82d89b0c105b1adcfe31e90d1dc06b8677ad37be"}, - {file = "onnxruntime-1.20.1-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = 
"sha256:bda6aebdf7917c1d811f21d41633df00c58aff2bef2f598f69289c1f1dabc4b3"}, - {file = "onnxruntime-1.20.1-cp313-cp313-win_amd64.whl", hash = "sha256:d30367df7e70f1d9fc5a6a68106f5961686d39b54d3221f760085524e8d38e16"}, - {file = "onnxruntime-1.20.1-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c9158465745423b2b5d97ed25aa7740c7d38d2993ee2e5c3bfacb0c4145c49d8"}, - {file = "onnxruntime-1.20.1-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0df6f2df83d61f46e842dbcde610ede27218947c33e994545a22333491e72a3b"}, + {file = "onnxruntime-1.21.0-cp310-cp310-macosx_13_0_universal2.whl", hash = "sha256:95513c9302bc8dd013d84148dcf3168e782a80cdbf1654eddc948a23147ccd3d"}, + {file = "onnxruntime-1.21.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:635d4ab13ae0f150dd4c6ff8206fd58f1c6600636ecc796f6f0c42e4c918585b"}, + {file = "onnxruntime-1.21.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7d06bfa0dd5512bd164f25a2bf594b2e7c9eabda6fc064b684924f3e81bdab1b"}, + {file = "onnxruntime-1.21.0-cp310-cp310-win_amd64.whl", hash = "sha256:b0fc22d219791e0284ee1d9c26724b8ee3fbdea28128ef25d9507ad3b9621f23"}, + {file = "onnxruntime-1.21.0-cp311-cp311-macosx_13_0_universal2.whl", hash = "sha256:8e16f8a79df03919810852fb46ffcc916dc87a9e9c6540a58f20c914c575678c"}, + {file = "onnxruntime-1.21.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7f9156cf6f8ee133d07a751e6518cf6f84ed37fbf8243156bd4a2c4ee6e073c8"}, + {file = "onnxruntime-1.21.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8a5d09815a9e209fa0cb20c2985b34ab4daeba7aea94d0f96b8751eb10403201"}, + {file = "onnxruntime-1.21.0-cp311-cp311-win_amd64.whl", hash = "sha256:1d970dff1e2fa4d9c53f2787b3b7d0005596866e6a31997b41169017d1362dd0"}, + {file = "onnxruntime-1.21.0-cp312-cp312-macosx_13_0_universal2.whl", hash = 
"sha256:893d67c68ca9e7a58202fa8d96061ed86a5815b0925b5a97aef27b8ba246a20b"}, + {file = "onnxruntime-1.21.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:37b7445c920a96271a8dfa16855e258dc5599235b41c7bbde0d262d55bcc105f"}, + {file = "onnxruntime-1.21.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9a04aafb802c1e5573ba4552f8babcb5021b041eb4cfa802c9b7644ca3510eca"}, + {file = "onnxruntime-1.21.0-cp312-cp312-win_amd64.whl", hash = "sha256:7f801318476cd7003d636a5b392f7a37c08b6c8d2f829773f3c3887029e03f32"}, + {file = "onnxruntime-1.21.0-cp313-cp313-macosx_13_0_universal2.whl", hash = "sha256:85718cbde1c2912d3a03e3b3dc181b1480258a229c32378408cace7c450f7f23"}, + {file = "onnxruntime-1.21.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:94dff3a61538f3b7b0ea9a06bc99e1410e90509c76e3a746f039e417802a12ae"}, + {file = "onnxruntime-1.21.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c1e704b0eda5f2bbbe84182437315eaec89a450b08854b5a7762c85d04a28a0a"}, + {file = "onnxruntime-1.21.0-cp313-cp313-win_amd64.whl", hash = "sha256:19b630c6a8956ef97fb7c94948b17691167aa1aaf07b5f214fa66c3e4136c108"}, + {file = "onnxruntime-1.21.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3995c4a2d81719623c58697b9510f8de9fa42a1da6b4474052797b0d712324fe"}, + {file = "onnxruntime-1.21.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:36b18b8f39c0f84e783902112a0dd3c102466897f96d73bb83f6a6bff283a423"}, ] [package.dependencies] @@ -2344,32 +2532,18 @@ sympy = "*" [[package]] name = "opentelemetry-api" -version = "1.30.0" +version = "1.31.1" description = "OpenTelemetry Python API" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_api-1.30.0-py3-none-any.whl", hash = "sha256:d5f5284890d73fdf47f843dda3210edf37a38d66f44f2b5aedc1e89ed455dc09"}, - {file = "opentelemetry_api-1.30.0.tar.gz", hash = 
"sha256:375893400c1435bf623f7dfb3bcd44825fe6b56c34d0667c542ea8257b1a1240"}, + {file = "opentelemetry_api-1.31.1-py3-none-any.whl", hash = "sha256:1511a3f470c9c8a32eeea68d4ea37835880c0eed09dd1a0187acc8b1301da0a1"}, + {file = "opentelemetry_api-1.31.1.tar.gz", hash = "sha256:137ad4b64215f02b3000a0292e077641c8611aab636414632a9b9068593b7e91"}, ] [package.dependencies] deprecated = ">=1.2.6" -importlib-metadata = ">=6.0,<=8.5.0" - -[[package]] -name = "opentelemetry-exporter-otlp-proto-common" -version = "1.30.0" -description = "OpenTelemetry Protobuf encoding" -optional = false -python-versions = ">=3.8" -files = [ - {file = "opentelemetry_exporter_otlp_proto_common-1.30.0-py3-none-any.whl", hash = "sha256:5468007c81aa9c44dc961ab2cf368a29d3475977df83b4e30aeed42aa7bc3b38"}, - {file = "opentelemetry_exporter_otlp_proto_common-1.30.0.tar.gz", hash = "sha256:ddbfbf797e518411857d0ca062c957080279320d6235a279f7b64ced73c13897"}, -] - -[package.dependencies] -opentelemetry-proto = "1.30.0" +importlib-metadata = ">=6.0,<8.7.0" [[package]] name = "opentelemetry-exporter-otlp-proto-grpc" @@ -2393,81 +2567,61 @@ opentelemetry-sdk = ">=1.12,<2.0" [package.extras] test = ["pytest-grpc"] -[[package]] -name = "opentelemetry-exporter-otlp-proto-grpc" -version = "1.30.0" -description = "OpenTelemetry Collector Protobuf over gRPC Exporter" -optional = false -python-versions = ">=3.8" -files = [ - {file = "opentelemetry_exporter_otlp_proto_grpc-1.30.0-py3-none-any.whl", hash = "sha256:2906bcae3d80acc54fd1ffcb9e44d324e8631058b502ebe4643ca71d1ff30830"}, - {file = "opentelemetry_exporter_otlp_proto_grpc-1.30.0.tar.gz", hash = "sha256:d0f10f0b9b9a383b7d04a144d01cb280e70362cccc613987e234183fd1f01177"}, -] - -[package.dependencies] -deprecated = ">=1.2.6" -googleapis-common-protos = ">=1.52,<2.0" -grpcio = ">=1.63.2,<2.0.0" -opentelemetry-api = ">=1.15,<2.0" -opentelemetry-exporter-otlp-proto-common = "1.30.0" -opentelemetry-proto = "1.30.0" -opentelemetry-sdk = ">=1.30.0,<1.31.0" - [[package]] 
name = "opentelemetry-instrumentation" -version = "0.51b0" +version = "0.52b1" description = "Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_instrumentation-0.51b0-py3-none-any.whl", hash = "sha256:c6de8bd26b75ec8b0e54dff59e198946e29de6a10ec65488c357d4b34aa5bdcf"}, - {file = "opentelemetry_instrumentation-0.51b0.tar.gz", hash = "sha256:4ca266875e02f3988536982467f7ef8c32a38b8895490ddce9ad9604649424fa"}, + {file = "opentelemetry_instrumentation-0.52b1-py3-none-any.whl", hash = "sha256:8c0059c4379d77bbd8015c8d8476020efe873c123047ec069bb335e4b8717477"}, + {file = "opentelemetry_instrumentation-0.52b1.tar.gz", hash = "sha256:739f3bfadbbeec04dd59297479e15660a53df93c131d907bb61052e3d3c1406f"}, ] [package.dependencies] opentelemetry-api = ">=1.4,<2.0" -opentelemetry-semantic-conventions = "0.51b0" +opentelemetry-semantic-conventions = "0.52b1" packaging = ">=18.0" wrapt = ">=1.0.0,<2.0.0" [[package]] name = "opentelemetry-instrumentation-asgi" -version = "0.51b0" +version = "0.52b1" description = "ASGI instrumentation for OpenTelemetry" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_instrumentation_asgi-0.51b0-py3-none-any.whl", hash = "sha256:e8072993db47303b633c6ec1bc74726ba4d32bd0c46c28dfadf99f79521a324c"}, - {file = "opentelemetry_instrumentation_asgi-0.51b0.tar.gz", hash = "sha256:b3fe97c00f0bfa934371a69674981d76591c68d937b6422a5716ca21081b4148"}, + {file = "opentelemetry_instrumentation_asgi-0.52b1-py3-none-any.whl", hash = "sha256:f7179f477ed665ba21871972f979f21e8534edb971232e11920c8a22f4759236"}, + {file = "opentelemetry_instrumentation_asgi-0.52b1.tar.gz", hash = "sha256:a6dbce9cb5b2c2f45ce4817ad21f44c67fd328358ad3ab911eb46f0be67f82ec"}, ] [package.dependencies] asgiref = ">=3.0,<4.0" opentelemetry-api = ">=1.12,<2.0" -opentelemetry-instrumentation = "0.51b0" -opentelemetry-semantic-conventions = "0.51b0" -opentelemetry-util-http = 
"0.51b0" +opentelemetry-instrumentation = "0.52b1" +opentelemetry-semantic-conventions = "0.52b1" +opentelemetry-util-http = "0.52b1" [package.extras] instruments = ["asgiref (>=3.0,<4.0)"] [[package]] name = "opentelemetry-instrumentation-fastapi" -version = "0.51b0" +version = "0.52b1" description = "OpenTelemetry FastAPI Instrumentation" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_instrumentation_fastapi-0.51b0-py3-none-any.whl", hash = "sha256:10513bbc11a1188adb9c1d2c520695f7a8f2b5f4de14e8162098035901cd6493"}, - {file = "opentelemetry_instrumentation_fastapi-0.51b0.tar.gz", hash = "sha256:1624e70f2f4d12ceb792d8a0c331244cd6723190ccee01336273b4559bc13abc"}, + {file = "opentelemetry_instrumentation_fastapi-0.52b1-py3-none-any.whl", hash = "sha256:73c8804f053c5eb2fd2c948218bff9561f1ef65e89db326a6ab0b5bf829969f4"}, + {file = "opentelemetry_instrumentation_fastapi-0.52b1.tar.gz", hash = "sha256:d26ab15dc49e041301d5c2571605b8f5c3a6ee4a85b60940338f56c120221e98"}, ] [package.dependencies] opentelemetry-api = ">=1.12,<2.0" -opentelemetry-instrumentation = "0.51b0" -opentelemetry-instrumentation-asgi = "0.51b0" -opentelemetry-semantic-conventions = "0.51b0" -opentelemetry-util-http = "0.51b0" +opentelemetry-instrumentation = "0.52b1" +opentelemetry-instrumentation-asgi = "0.52b1" +opentelemetry-semantic-conventions = "0.52b1" +opentelemetry-util-http = "0.52b1" [package.extras] instruments = ["fastapi (>=0.58,<1.0)"] @@ -2486,60 +2640,46 @@ files = [ [package.dependencies] protobuf = ">=3.19,<5.0" -[[package]] -name = "opentelemetry-proto" -version = "1.30.0" -description = "OpenTelemetry Python Proto" -optional = false -python-versions = ">=3.8" -files = [ - {file = "opentelemetry_proto-1.30.0-py3-none-any.whl", hash = "sha256:c6290958ff3ddacc826ca5abbeb377a31c2334387352a259ba0df37c243adc11"}, - {file = "opentelemetry_proto-1.30.0.tar.gz", hash = "sha256:afe5c9c15e8b68d7c469596e5b32e8fc085eb9febdd6fb4e20924a93a0389179"}, -] - 
-[package.dependencies] -protobuf = ">=5.0,<6.0" - [[package]] name = "opentelemetry-sdk" -version = "1.30.0" +version = "1.31.1" description = "OpenTelemetry Python SDK" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_sdk-1.30.0-py3-none-any.whl", hash = "sha256:14fe7afc090caad881addb6926cec967129bd9260c4d33ae6a217359f6b61091"}, - {file = "opentelemetry_sdk-1.30.0.tar.gz", hash = "sha256:c9287a9e4a7614b9946e933a67168450b9ab35f08797eb9bc77d998fa480fa18"}, + {file = "opentelemetry_sdk-1.31.1-py3-none-any.whl", hash = "sha256:882d021321f223e37afaca7b4e06c1d8bbc013f9e17ff48a7aa017460a8e7dae"}, + {file = "opentelemetry_sdk-1.31.1.tar.gz", hash = "sha256:c95f61e74b60769f8ff01ec6ffd3d29684743404603df34b20aa16a49dc8d903"}, ] [package.dependencies] -opentelemetry-api = "1.30.0" -opentelemetry-semantic-conventions = "0.51b0" +opentelemetry-api = "1.31.1" +opentelemetry-semantic-conventions = "0.52b1" typing-extensions = ">=3.7.4" [[package]] name = "opentelemetry-semantic-conventions" -version = "0.51b0" +version = "0.52b1" description = "OpenTelemetry Semantic Conventions" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_semantic_conventions-0.51b0-py3-none-any.whl", hash = "sha256:fdc777359418e8d06c86012c3dc92c88a6453ba662e941593adb062e48c2eeae"}, - {file = "opentelemetry_semantic_conventions-0.51b0.tar.gz", hash = "sha256:3fabf47f35d1fd9aebcdca7e6802d86bd5ebc3bc3408b7e3248dde6e87a18c47"}, + {file = "opentelemetry_semantic_conventions-0.52b1-py3-none-any.whl", hash = "sha256:72b42db327e29ca8bb1b91e8082514ddf3bbf33f32ec088feb09526ade4bc77e"}, + {file = "opentelemetry_semantic_conventions-0.52b1.tar.gz", hash = "sha256:7b3d226ecf7523c27499758a58b542b48a0ac8d12be03c0488ff8ec60c5bae5d"}, ] [package.dependencies] deprecated = ">=1.2.6" -opentelemetry-api = "1.30.0" +opentelemetry-api = "1.31.1" [[package]] name = "opentelemetry-util-http" -version = "0.51b0" +version = "0.52b1" description = "Web util for 
OpenTelemetry" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_util_http-0.51b0-py3-none-any.whl", hash = "sha256:0561d7a6e9c422b9ef9ae6e77eafcfcd32a2ab689f5e801475cbb67f189efa20"}, - {file = "opentelemetry_util_http-0.51b0.tar.gz", hash = "sha256:05edd19ca1cc3be3968b1e502fd94816901a365adbeaab6b6ddb974384d3a0b9"}, + {file = "opentelemetry_util_http-0.52b1-py3-none-any.whl", hash = "sha256:6a6ab6bfa23fef96f4995233e874f67602adf9d224895981b4ab9d4dde23de78"}, + {file = "opentelemetry_util_http-0.52b1.tar.gz", hash = "sha256:c03c8c23f1b75fadf548faece7ead3aecd50761c5593a2b2831b48730eee5b31"}, ] [[package]] @@ -2580,91 +2720,93 @@ test = ["dm-tree (>=0.1.7)", "flax (>=0.5.3)", "scikit-learn", "scipy (>=1.7.1)" [[package]] name = "optree" -version = "0.14.0" +version = "0.15.0" description = "Optimized PyTree Utilities." optional = false python-versions = ">=3.8" files = [ - {file = "optree-0.14.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d83eca94393fd4a3dbcd5c64ed90e45606c96d28041653fce1318ed19dbfb93c"}, - {file = "optree-0.14.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b89e755790644d92c9780f10eb77ee2aca0e2a28d11abacd9fc08be9b10b4b1a"}, - {file = "optree-0.14.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aeac4d1a936d71367afb382c0019f699f402f1354f54f350311e5d5ec31a4b23"}, - {file = "optree-0.14.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1ce82e985fee053455290c68ebedc86a0b1adc204fef26c16f136ccc523b4bef"}, - {file = "optree-0.14.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac060f9716e52bb79d26cb26b13eaf4d14bfd1357ba95d0804d7479f957b4b65"}, - {file = "optree-0.14.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2ae71f7b4dbf914064ef824623230677f6a5dfe312f67e2bef47d3a7f864564c"}, - {file = "optree-0.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:875da3a78d9adf3d8175716c72693aad8719bd3a1f72d9dfe47ced98ce9449c2"}, - {file = "optree-0.14.0-cp310-cp310-win32.whl", hash = "sha256:762dbe52a79538bc25eb93586ce7449b77a65c136a410fe1101c96dfed73f889"}, - {file = "optree-0.14.0-cp310-cp310-win_amd64.whl", hash = "sha256:3e62e8c2987376340337a1ad6767dd54f3c4be4cb26523598af53c6500fecff0"}, - {file = "optree-0.14.0-cp310-cp310-win_arm64.whl", hash = "sha256:21d5d41e3ffae3cf27f89370fab4eb2bef65dafbc8cb0924db30f3f486684507"}, - {file = "optree-0.14.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0adb1ad31a55ae4e32595dc94cac3b06b53f6a7b1710acec9b56f5ccfc82c873"}, - {file = "optree-0.14.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f74dd8365ea32573a2f334717dd784349aafb00bb5e01a3536da951a4db31cd4"}, - {file = "optree-0.14.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83209a27df29e297398a1fc0b8c2412946aac5bd1372cdb9c952bcc4b4fe0ed6"}, - {file = "optree-0.14.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9d35bc23e478234181dde92e082ae6c8403e2aa9499a8a2e307fb962e4a407a4"}, - {file = "optree-0.14.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:333951d76c9cb10fc3e435f105af6cca72463fb1f2c9ba018d04763f4eb52baf"}, - {file = "optree-0.14.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ccef727fff1731f72a078cfbdef3eb6f972dd1bbeea049b32fb2ef7cd88e3e0a"}, - {file = "optree-0.14.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6ef0a191e3696cad377faa191390328bb83e5cac01a68a8be793e222c59f327d"}, - {file = "optree-0.14.0-cp311-cp311-win32.whl", hash = "sha256:c30ea1dfff229183941c97159a58216ea354b97d181e6cd02b1e9faf5023af4f"}, - {file = "optree-0.14.0-cp311-cp311-win_amd64.whl", hash = "sha256:68bdf5cc6cf87983462720095bf0982920065bddec24831c90be4e424071dfe8"}, - {file = "optree-0.14.0-cp311-cp311-win_arm64.whl", hash = 
"sha256:fd53ad33bf2c677da5c177a577b2c74dd1374e9c69ee45a804302b38be24a88a"}, - {file = "optree-0.14.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:14da8391e74e315ec7e19e7da6a4ed88f4ff928ca1be59e13d4572b60e3f95bf"}, - {file = "optree-0.14.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ebe98ca371b98881c7568a8ea88fb0446d92687485da0ef71fa5e45902c03b7b"}, - {file = "optree-0.14.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dfff8174eaae1c11bd52a30a78a739ad7e75fae6cceaaf3f63e2c8c9dd40dd70"}, - {file = "optree-0.14.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc8c1689faa73f5a2f3f38476ae5620b6bda6d06a4b04d1882b8faf1ee0d94f1"}, - {file = "optree-0.14.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c2d6d3fba532ab9f55be9efde7b5f0b22efed198e640199fdbe7da61c9412dff"}, - {file = "optree-0.14.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:74444c294795895456e376d31840197f7cf91381d73cd3ebcaa0e30818aad12e"}, - {file = "optree-0.14.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b63187a249cd3a4d0d1221e1d2f82175c4a147e7374230a121c44df5364da9f"}, - {file = "optree-0.14.0-cp312-cp312-win32.whl", hash = "sha256:c153bb5b5d2286109d1d8bee704b59f9303aed9c92822075e7002ea5362fa534"}, - {file = "optree-0.14.0-cp312-cp312-win_amd64.whl", hash = "sha256:c79cad5da479ee6931f2c96cacccf588ff75029072661021963117df895305d9"}, - {file = "optree-0.14.0-cp312-cp312-win_arm64.whl", hash = "sha256:c844427e28cc661782fdfba6a2a13d89acabc3b183f49f5e366f8b4fab9616f4"}, - {file = "optree-0.14.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a6ee278342971b784d13fb04bb7429d03a16098a43d278c69dcfa41f7bae8d84"}, - {file = "optree-0.14.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a975c1539da8213a211e405cc85aae756a3621e40bacd4d98bec69d354c7cc91"}, - {file = "optree-0.14.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", 
hash = "sha256:0bac8873fa99f8d4e58548e04b66c310ad65ed966238a00c7eaf61378da6d017"}, - {file = "optree-0.14.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:949ac03a3df191a9182e9edfdef3987403894a55733c42177a2c666a321330a7"}, - {file = "optree-0.14.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1c7f49a4936d20ebd1a66366a8f6ba0c49c50d409352b05e155b674bb6648209"}, - {file = "optree-0.14.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7bea222b49d486338741a1a45b19861ac6588367916bbc671bb51ba337e5551f"}, - {file = "optree-0.14.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:220e987ed6d92ac5be51d8cdba21d99229cfec00f5a4d2ca3846c208a69709ac"}, - {file = "optree-0.14.0-cp313-cp313-win32.whl", hash = "sha256:4fee67b46a341c7e397b87b8507ea0f41415ce9953549967df89a174110f2f16"}, - {file = "optree-0.14.0-cp313-cp313-win_amd64.whl", hash = "sha256:c4f241e30060bf1fe0f904c1ac28ec11008c055373f3b5b5a86e1d40d2f164ad"}, - {file = "optree-0.14.0-cp313-cp313-win_arm64.whl", hash = "sha256:6e0e12696df16f3205a5a5cf4a1bb5ad2c81d53e2f2bec25982a713421476f62"}, - {file = "optree-0.14.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:17ce5ed199cda125d79fb779efc16aad86e6e1f392b430e83797f23149b4554c"}, - {file = "optree-0.14.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:79d3d414b021e2fd21243de8cb93ee47d4dc0b5e66871a0b33e1f32244823267"}, - {file = "optree-0.14.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9d0f576c01b6ecf669d6fbc1db9dd43f380dc604fec76475886fe71604bd21a7"}, - {file = "optree-0.14.0-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3edaeb9a362146fded1a71846ae821cece9c5b2d1f02437cebb8c9bd9654c6a"}, - {file = "optree-0.14.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:50a4d441e113bb034f1356089f9fbf0c7989f20e0a4b71ecc566046894b36ef2"}, - {file = 
"optree-0.14.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:60bde1756d90910f32f33f66d7416e42dd74d10545c9961b17ab7bb064a644bb"}, - {file = "optree-0.14.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:176c9e2908133957513b20370be93d72f8f2e4b3acbc94a1b8186cc715f05403"}, - {file = "optree-0.14.0-cp313-cp313t-win32.whl", hash = "sha256:9171d842057e05c6e98caf7f8d3b5b79d80ac2bea649a3cde1cc9f4c6cdd0e3b"}, - {file = "optree-0.14.0-cp313-cp313t-win_amd64.whl", hash = "sha256:321c5648578cebe435bf13b8c096ad8e8e43ba69ec80195fd5a3368bdafff616"}, - {file = "optree-0.14.0-cp313-cp313t-win_arm64.whl", hash = "sha256:10826cdf0a2d48244f9f8b63e04b934274012aadcf0898fd87180b6839070f0c"}, - {file = "optree-0.14.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:db73d8750deb66cd6402fee86c1b3a2df32a0bca1049448829eaa1023408f282"}, - {file = "optree-0.14.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:614c97c6e42a7e9a7765c051cff0ad3f482750205f2b6a113eecb5c381da38d5"}, - {file = "optree-0.14.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3127e77bd5eabd28bd3388db3291f1ea15eaeedd86bb4e71770f8aba4bb68acb"}, - {file = "optree-0.14.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:faab435742987c8ea244e81b7526234c6f86cfc8fec5ec11d48184348e92aada"}, - {file = "optree-0.14.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4eee7d0248129465d1ad1c391ab38fe76f5af789571551823f131c81a008ceb1"}, - {file = "optree-0.14.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4c0c65c764cda12841759a03ff86dec79404f96b2750f90859b042d60e9a2d82"}, - {file = "optree-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:53f14de1c07d64e381acdb29254dbdd86bba84138e7c789a6d2be026d03a36a9"}, - {file = "optree-0.14.0-cp38-cp38-win32.whl", hash = "sha256:202e97dab0b7eae95738d8775cba4417a26e8539568f5b7e0a50e500263a3703"}, - {file = 
"optree-0.14.0-cp38-cp38-win_amd64.whl", hash = "sha256:9e1dfb12bcdf2d759602b7ad1bc6228ec5a19451c3504a80bd5445b9c8e53bab"}, - {file = "optree-0.14.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:80a70cc5f944d2db3eae1a225b41a935d957c928d324f7677f8387e4ab3e8626"}, - {file = "optree-0.14.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8b1ca7d17007b46223c5f3c02ffa9effc812adff5bc30f561dbfe88f241a16ba"}, - {file = "optree-0.14.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c3a7704f7f3cd45caa684e0b762bac29207435ea811ca3da7b2d93cc2fa54310"}, - {file = "optree-0.14.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6e0fd04f11bbb9862bedee4f4e7b3b1ed7476c34a3e7bf25a2169d43a1b23e90"}, - {file = "optree-0.14.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:27b66f1d542cf4cc9867268485cad3c719bee3e80731a3dc45649c9c57c66f25"}, - {file = "optree-0.14.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d47cf9c991505aae3e93879404bf9bb47efaeb2c84951610d9b63453b8edfadb"}, - {file = "optree-0.14.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0a08dcc8b5a7529ebef64533cba13444de46ba9e923a9c54a9c1dcceb4de2f55"}, - {file = "optree-0.14.0-cp39-cp39-win32.whl", hash = "sha256:e3aa3421fc50619cf15caaa457952c06b532a192df02d9e94a8a6aabe5acbebf"}, - {file = "optree-0.14.0-cp39-cp39-win_amd64.whl", hash = "sha256:b1f03ed925afee44fea9e26bf99a297111f313d88cfb69142463a3cb359f7953"}, - {file = "optree-0.14.0-cp39-cp39-win_arm64.whl", hash = "sha256:81122a324237fccb4f8abe5dca1b00be12cf4c0a53d3a4872cfc1f060c713854"}, - {file = "optree-0.14.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:a4934f4da6f79314760e9559f8c8484e00aa99ea79f8d3326f66cf8e11db71b0"}, - {file = "optree-0.14.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78d33c499c102e2aba05abf99876025ba7f1d5ca98f2e3c75d5cddc9dc42cfa5"}, - {file = 
"optree-0.14.0-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b3eea1ab8fb32cf5745eead68671100db8547e6d22e8b5c3780376369560659c"}, - {file = "optree-0.14.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3fe8f48cb16454e3b9c44f081b940062180e0d6c10fda0a098ed7855be8d0a9"}, - {file = "optree-0.14.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:3e53c3aa6303efb9a64ccef160ec6638bb4a97b41b77c3871a1204397e27a98a"}, - {file = "optree-0.14.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:ede3b9ccf4cfd5e1ec12db79b93bf45e14e5c1596b339761d3296ce85739ef7a"}, - {file = "optree-0.14.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68803a66b836f595c291347a2bff237852ca80fcfbb2606fee88d046764240de"}, - {file = "optree-0.14.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aec7dfa57fc9a42e18a2e23bc8c011dbacdf16d8da0a62cc3b4b5ef0fba13d05"}, - {file = "optree-0.14.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f505038e5be2a84155e642c396811bbf1e88a4c6aea6a8766b2c57b562bc65de"}, - {file = "optree-0.14.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:9527a9b3a2f4f73334e9fdbebaec1d7001f717a0c2d195e8419cc5d0ba3183b6"}, - {file = "optree-0.14.0.tar.gz", hash = "sha256:d2b4b8784f5c7651a899997c9d6d4cd814c4222cd450c76d1fa386b8f5728d61"}, + {file = "optree-0.15.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:6e73e390520a545ebcaa0b77fd77943a85d1952df658268129e6c523d4d38972"}, + {file = "optree-0.15.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c45593a818c67b72fd0beaeaa6410fa3c5debd39af500127fa367f8ee1f4bd8e"}, + {file = "optree-0.15.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e4e440de109529ce919d0a0a4fa234d3b949da6f99630c9406c9f21160800831"}, + {file = "optree-0.15.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:7614ad2f7bde7b905c897011be573d89a9cb5cf851784ee8efb0020d8e067b27"}, + {file = "optree-0.15.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:655ab99f9f9570fbb124f81fdf7e480250b59b1f1d9bd07df04c8751eecc1450"}, + {file = "optree-0.15.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e63b965b62f461513983095750fd1331cad5674153bf3811bd7e2748044df4cd"}, + {file = "optree-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14e515b011d965bd3f7aeb021bb523265cb49fde47be0033ba5601e386fff90a"}, + {file = "optree-0.15.0-cp310-cp310-win32.whl", hash = "sha256:27031f507828c18606047e695129e9ec9678cd4321f57856da59c7fcc8f8666c"}, + {file = "optree-0.15.0-cp310-cp310-win_amd64.whl", hash = "sha256:f0392bebcd24fc70ca9a397c1eb2373727fa775e1007f27f3983c50f16a98e45"}, + {file = "optree-0.15.0-cp310-cp310-win_arm64.whl", hash = "sha256:c3122f73eca03e38712ceee16a6acf75d5244ba8b8d1adf5cd6613d1a60a6c26"}, + {file = "optree-0.15.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:c15d98e6f587badb9df67d67fa914fcfa0b63db2db270951915c563816c29f3d"}, + {file = "optree-0.15.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f8d58949ef132beb3a025ace512a71a0fcf92e0e5ef350f289f33a782ae6cb85"}, + {file = "optree-0.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f71d4759de0c4abc132dab69d1aa6ea4561ba748efabeee7b25db57c08652b79"}, + {file = "optree-0.15.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ba65d4c48d76bd5caac7f0b1b8db55223c1c3707d26f6d1d2ff18baf6f81850"}, + {file = "optree-0.15.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aad3878acdb082701e5f77a153cd86af8819659bfa7e27debd0dc1a52f16c365"}, + {file = "optree-0.15.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6676b8c3f4cd4c8d8d052b66767a9e4cf852627bf256da6e49d2c38a95f07712"}, + {file = 
"optree-0.15.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a1f185b0d21bc4dda1f4fd03f5ba9e2bc9d28ca14bce3ce3d36b5817140a345e"}, + {file = "optree-0.15.0-cp311-cp311-win32.whl", hash = "sha256:927b579a76c13b9328580c09dd4a9947646531f0a371a170a785002c50dedb94"}, + {file = "optree-0.15.0-cp311-cp311-win_amd64.whl", hash = "sha256:d6525d6a550a1030957e5205e57a415d608a9f7561154e0fb29670e967424578"}, + {file = "optree-0.15.0-cp311-cp311-win_arm64.whl", hash = "sha256:081e8bed7583b625819659d68288365bd4348b3c4281935a6ecfa53c93619b13"}, + {file = "optree-0.15.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ba2eee9de9d57e145b4c1a71749f7f8b8fe1c645abbb306d4a26cfa45a9cdbb5"}, + {file = "optree-0.15.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4aad5023686cd7caad68d70ad3706b82cfe9ae8ff9a13c08c1edef2a9b4c9d72"}, + {file = "optree-0.15.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9810e84466025da55ce19ac6b2b79a5cb2c0c1349d318a17504f6e44528221f8"}, + {file = "optree-0.15.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:20b07d8a097b810d68b0ee35f287c1f0b7c9844133ada613a92cc10bade9cdbe"}, + {file = "optree-0.15.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0304ec416258edebe2cd2a1ef71770e43405d5e7366ecbc134c520b4ab44d155"}, + {file = "optree-0.15.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:759a72e6dcca3e7239d202a253e1e8e44e8df5033a5e178df585778ac85ddd13"}, + {file = "optree-0.15.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:01a0dc75c594c884d0ca502b8d169cec538e19a70883d2e5f5b9b08fce740958"}, + {file = "optree-0.15.0-cp312-cp312-win32.whl", hash = "sha256:7e10e5c2a8110f5f4fbc999ff8580d1db3a915f851f63f602fff3bbd250ffa20"}, + {file = "optree-0.15.0-cp312-cp312-win_amd64.whl", hash = "sha256:def5b08f219c31edd029b47624e689ffa07747b0694222156f28a28d341d29ac"}, + {file = 
"optree-0.15.0-cp312-cp312-win_arm64.whl", hash = "sha256:8ec6d3040b1cbfe3f0bc045a3302ee9f9e329c2cd96e928360d22e1cfd9d973a"}, + {file = "optree-0.15.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:4ab606720ae319cb43da47c71d7d5fa7cfbb6a02e6da4857331e6f93800c970e"}, + {file = "optree-0.15.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:9cfc5771115f85b0bfa8f72cce1599186fd6a0ea71c8154d8b2751d9170be428"}, + {file = "optree-0.15.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f958a20a311854aaab8bdd0f124aab5b9848f07976b54da3e95526a491aa860"}, + {file = "optree-0.15.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:47ce7e9d81eaed5a05004df1fa279d2608e063dd5eb236e9c95803b4fa0a286c"}, + {file = "optree-0.15.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c6d6ab3717d48e0e747d9e348e23be1fa0f8a812f73632face6303c438d259ba"}, + {file = "optree-0.15.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9c7d101a15be39a9c7c4afae9f0bb85f682eb7d719117e2f9e5fb39c9f6f2c92"}, + {file = "optree-0.15.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aae337ab30b45a096eb5b4ffc3ad8909731617543a7eb288e0b297b9d10a241f"}, + {file = "optree-0.15.0-cp313-cp313-win32.whl", hash = "sha256:eb9c51d728485f5908111191b5403a3f9bc310d121a981f29fad45750b9ff89c"}, + {file = "optree-0.15.0-cp313-cp313-win_amd64.whl", hash = "sha256:7f00e6f011f021ae470efe070ec4d2339fb1a8cd0dcdd16fe3dab782a47aba45"}, + {file = "optree-0.15.0-cp313-cp313-win_arm64.whl", hash = "sha256:17990fbc7f4c461de7ae546fc5661f6a248c3dcee966c89c2e2e5ad7f6228bae"}, + {file = "optree-0.15.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b31c88af70e3f5c14ff2aacd38c4076e6cde98f75169fe0bb59543f01bfb9719"}, + {file = "optree-0.15.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:bc440f81f738d9c822030c3b4f53b6dec9ceb52410f02fd06b9338dc25a8447f"}, + {file = 
"optree-0.15.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76ffc2dd8c754e95495163dde55b38dc37e6712b6a3bc7f2190b0547a2c403bb"}, + {file = "optree-0.15.0-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9fa9fb0197cd7b5f2b1fa7e05d30946b3b79bcfc3608fe54dbfc67969895cac9"}, + {file = "optree-0.15.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6828639b01ba1177c04875dd9529d938d7b28122c97e7ae14ec41c68ec22826c"}, + {file = "optree-0.15.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:93c74eed0f52818c30212dba4867f5672e498567bad49dcdffbe8db6703a0d65"}, + {file = "optree-0.15.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:12188f6832c29dac37385a2f42fce961e303349909cff6d40e21cb27a8d09023"}, + {file = "optree-0.15.0-cp313-cp313t-win32.whl", hash = "sha256:d7b8ce7d13580985922dcfbda515da3f004cd7cb1b03320b96ea32d8cfd76392"}, + {file = "optree-0.15.0-cp313-cp313t-win_amd64.whl", hash = "sha256:daccdb583abaab14346f0af316ee570152a5c058e7b9fb09d8f8171fe751f2b3"}, + {file = "optree-0.15.0-cp313-cp313t-win_arm64.whl", hash = "sha256:e0162a36a6cedb0829efe980d0b370d4e5970fdb28a6609daa2c906d547add5f"}, + {file = "optree-0.15.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:1a99941604a5a958b4e1cbd0caa8b2339aa716babde0189a92843b39d2a77e48"}, + {file = "optree-0.15.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8678ac0cdf752d6194f75637f3cd19af9071bc00967b05f00aff48727d373aab"}, + {file = "optree-0.15.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce2a8d57b8fe0179f967494a7d19cff18f8cc0f2e9aff0ed2cb5e5605475a19a"}, + {file = "optree-0.15.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d1feca7404e69a0860940c9cf6a4e3af23457613c4c2338991dc9355dfbbc1ab"}, + {file = "optree-0.15.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:d13c5d7d9af345bc96f441bfb313e4286f4495a20d29ad6499a8923c581c593e"}, + {file = "optree-0.15.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5614aebb65a18db496bbdf8b6ce4873779be5352cc91c7e2372984eaf1d4cce4"}, + {file = "optree-0.15.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:def382dbd35ab715008c8604d64c67baf0d97a5f7389a56b5148bbfc9bb006a7"}, + {file = "optree-0.15.0-cp38-cp38-win32.whl", hash = "sha256:e7e0fb32ea05beec7d46a79e4c03701f060a2cfbd5ffa89abaf7b7d17e2d28aa"}, + {file = "optree-0.15.0-cp38-cp38-win_amd64.whl", hash = "sha256:ae66e98634f0c843c5c6b4f27508200971c1a66b726db29c30aba368cf23de5f"}, + {file = "optree-0.15.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:0f9ea3208a14d1677c8966ea1eabe5b8f148424a8c3214ed4d4769beecd48a8a"}, + {file = "optree-0.15.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ebd608b02cb207e4851983b78f57e800c542758f131abe3b23cd4a5f0153676c"}, + {file = "optree-0.15.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:09d11111194a6211e9d806828d29d932ad5f998ea156c76ad0e4d5da39654541"}, + {file = "optree-0.15.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:44cb5d1e5317dbb3044ad4b76af2d4f5e51de73d6ff6e858077d8af00756fe16"}, + {file = "optree-0.15.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1493f3e97f921b8742368406d3de390f051a7c405959e2088d72b4a4ff3f5394"}, + {file = "optree-0.15.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fef87f006da3c4dfc914f6c0f863c7f4712e958f56c991c320b06026e9ccfd87"}, + {file = "optree-0.15.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ad409276099b89fb5077b0b9311c9e8500086888eba9c77546353c18d520bfe5"}, + {file = "optree-0.15.0-cp39-cp39-win32.whl", hash = "sha256:a6103a3d33cc300ea567f373680e29a29ae854e8775bf87231aae12664b4732e"}, + {file = "optree-0.15.0-cp39-cp39-win_amd64.whl", hash = 
"sha256:a68a813a2141493566178ae87e1906856f1549e2c3e439ff76801f8fb05bd3a7"}, + {file = "optree-0.15.0-cp39-cp39-win_arm64.whl", hash = "sha256:59d8d252cb83465ecac2f7ff457489606a56316fe8b8f7635770ee8e27b6a3b8"}, + {file = "optree-0.15.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:b30673fe30d4d77eef18534420491c27837f0b55dfe18107cfd9eca39a62de3b"}, + {file = "optree-0.15.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d0f378d08b8a09f7e495c49cd94141c1acebc2aa7d567d7dd2cb44a707f29268"}, + {file = "optree-0.15.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90dae741d683cbc47cba16a1b4af3c0d5d8c1042efb7c4aec7664a4f3f07eca2"}, + {file = "optree-0.15.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cf790dd21dcaa0857888c03233276f5513821abfe605964e825837a30a24f0d7"}, + {file = "optree-0.15.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:21afadec56475f2a13670b8ecf7b767af4feb3ba5bd3a246cbbd8c1822e2a664"}, + {file = "optree-0.15.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a39bccc63223e040f36eb8b413fa1f94a190289eb82e7b384ed32d95d1ffd67"}, + {file = "optree-0.15.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06aed485ab9c94f5b45a18f956bcb89bf6bad29632421da69da268cb49adb37b"}, + {file = "optree-0.15.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:07e9d75867ca39cce98375249b83a2033b0313cbfa32cbd06f93f7bc15104afc"}, + {file = "optree-0.15.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:3d237605b277d5600748c8a6f83f65e00c294b000ac8772f473fa41eb587ca15"}, + {file = "optree-0.15.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c82f0e88f43b5ec57b8e225175003dc6624dfa400fb56c18c0e4b4667bef805"}, + {file = "optree-0.15.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:2245f9a9fd5c7f042f07a476695fd4f6074f85036b5ff3d004f4da121220bf56"}, + {file = "optree-0.15.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:29e1fa90153908d968a2fcebf62bbbc0b434b5a75463a202c33ba3e13dc170ea"}, + {file = "optree-0.15.0.tar.gz", hash = "sha256:d00a45e3b192093ef2cd32bf0d541ecbfc93c1bd73a5f3fe36293499f28a50cf"}, ] [package.dependencies] @@ -2674,20 +2816,20 @@ typing-extensions = ">=4.5.0" benchmark = ["dm-tree (>=0.1,<0.2.0a0)", "jax[cpu] (>=0.4.6,<0.5.0a0)", "pandas", "tabulate", "termcolor", "torch (>=2.0,<2.6.0a0)", "torchvision"] docs = ["docutils", "jax[cpu]", "numpy", "sphinx", "sphinx-autoapi", "sphinx-autobuild", "sphinx-autodoc-typehints", "sphinx-copybutton", "sphinx-rtd-theme", "sphinxcontrib-bibtex", "torch"] jax = ["jax"] -lint = ["black", "cpplint", "doc8", "flake8", "flake8-bugbear", "flake8-comprehensions", "flake8-docstrings", "flake8-pyi", "flake8-simplify", "mypy", "pre-commit", "pydocstyle", "pyenchant", "pylint[spelling]", "ruff", "xdoctest"] +lint = ["cpplint", "doc8", "mypy", "pre-commit", "pyenchant", "pylint[spelling]", "ruff", "xdoctest"] numpy = ["numpy"] -test = ["pytest", "pytest-cov", "pytest-xdist"] +test = ["covdefaults", "pytest", "pytest-cov", "pytest-xdist"] torch = ["torch"] [[package]] name = "orbax-checkpoint" -version = "0.11.6" +version = "0.11.12" description = "Orbax Checkpoint" optional = false python-versions = ">=3.10" files = [ - {file = "orbax_checkpoint-0.11.6-py3-none-any.whl", hash = "sha256:fb208012e5d3601ee37b1100fe4331f9982b814df89f572749be9094fa499e1f"}, - {file = "orbax_checkpoint-0.11.6.tar.gz", hash = "sha256:e16a8bbabe7bc0c94f611d115b2b7790183e6847152804a261048160b81b9628"}, + {file = "orbax_checkpoint-0.11.12-py3-none-any.whl", hash = "sha256:2880c3b2805a0f709265cdac0ba16dfad5a19a1c5849e56d1af274fd0080d93f"}, + {file = "orbax_checkpoint-0.11.12.tar.gz", hash = "sha256:f7c500ff0dc9ad5b2104ec0fbd2eeeae61830b00e0fc7e15d8e9eaaa095e8a8a"}, ] [package.dependencies] @@ -2701,98 +2843,88 @@ 
numpy = "*" protobuf = "*" pyyaml = "*" simplejson = ">=3.16.0" -tensorstore = ">=0.1.68" +tensorstore = ">=0.1.71" typing_extensions = "*" [package.extras] -testing = ["chex", "flax", "google-cloud-logging", "mock", "pytest", "pytest-xdist"] +docs = ["flax", "google-cloud-logging"] +testing = ["aiofiles", "chex", "flax", "google-cloud-logging", "mock", "pytest", "pytest-xdist"] [[package]] name = "orjson" -version = "3.10.15" +version = "3.10.16" description = "Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy" optional = false -python-versions = ">=3.8" +python-versions = ">=3.9" files = [ - {file = "orjson-3.10.15-cp310-cp310-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:552c883d03ad185f720d0c09583ebde257e41b9521b74ff40e08b7dec4559c04"}, - {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:616e3e8d438d02e4854f70bfdc03a6bcdb697358dbaa6bcd19cbe24d24ece1f8"}, - {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7c2c79fa308e6edb0ffab0a31fd75a7841bf2a79a20ef08a3c6e3b26814c8ca8"}, - {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cb85490aa6bf98abd20607ab5c8324c0acb48d6da7863a51be48505646c814"}, - {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:763dadac05e4e9d2bc14938a45a2d0560549561287d41c465d3c58aec818b164"}, - {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a330b9b4734f09a623f74a7490db713695e13b67c959713b78369f26b3dee6bf"}, - {file = "orjson-3.10.15-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a61a4622b7ff861f019974f73d8165be1bd9a0855e1cad18ee167acacabeb061"}, - {file = "orjson-3.10.15-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:acd271247691574416b3228db667b84775c497b245fa275c6ab90dc1ffbbd2b3"}, - {file = 
"orjson-3.10.15-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:e4759b109c37f635aa5c5cc93a1b26927bfde24b254bcc0e1149a9fada253d2d"}, - {file = "orjson-3.10.15-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:9e992fd5cfb8b9f00bfad2fd7a05a4299db2bbe92e6440d9dd2fab27655b3182"}, - {file = "orjson-3.10.15-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:f95fb363d79366af56c3f26b71df40b9a583b07bbaaf5b317407c4d58497852e"}, - {file = "orjson-3.10.15-cp310-cp310-win32.whl", hash = "sha256:f9875f5fea7492da8ec2444839dcc439b0ef298978f311103d0b7dfd775898ab"}, - {file = "orjson-3.10.15-cp310-cp310-win_amd64.whl", hash = "sha256:17085a6aa91e1cd70ca8533989a18b5433e15d29c574582f76f821737c8d5806"}, - {file = "orjson-3.10.15-cp311-cp311-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:c4cc83960ab79a4031f3119cc4b1a1c627a3dc09df125b27c4201dff2af7eaa6"}, - {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ddbeef2481d895ab8be5185f2432c334d6dec1f5d1933a9c83014d188e102cef"}, - {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9e590a0477b23ecd5b0ac865b1b907b01b3c5535f5e8a8f6ab0e503efb896334"}, - {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a6be38bd103d2fd9bdfa31c2720b23b5d47c6796bcb1d1b598e3924441b4298d"}, - {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ff4f6edb1578960ed628a3b998fa54d78d9bb3e2eb2cfc5c2a09732431c678d0"}, - {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b0482b21d0462eddd67e7fce10b89e0b6ac56570424662b685a0d6fccf581e13"}, - {file = "orjson-3.10.15-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bb5cc3527036ae3d98b65e37b7986a918955f85332c1ee07f9d3f82f3a6899b5"}, - {file = "orjson-3.10.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = 
"sha256:d569c1c462912acdd119ccbf719cf7102ea2c67dd03b99edcb1a3048651ac96b"}, - {file = "orjson-3.10.15-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:1e6d33efab6b71d67f22bf2962895d3dc6f82a6273a965fab762e64fa90dc399"}, - {file = "orjson-3.10.15-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:c33be3795e299f565681d69852ac8c1bc5c84863c0b0030b2b3468843be90388"}, - {file = "orjson-3.10.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:eea80037b9fae5339b214f59308ef0589fc06dc870578b7cce6d71eb2096764c"}, - {file = "orjson-3.10.15-cp311-cp311-win32.whl", hash = "sha256:d5ac11b659fd798228a7adba3e37c010e0152b78b1982897020a8e019a94882e"}, - {file = "orjson-3.10.15-cp311-cp311-win_amd64.whl", hash = "sha256:cf45e0214c593660339ef63e875f32ddd5aa3b4adc15e662cdb80dc49e194f8e"}, - {file = "orjson-3.10.15-cp312-cp312-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:9d11c0714fc85bfcf36ada1179400862da3288fc785c30e8297844c867d7505a"}, - {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dba5a1e85d554e3897fa9fe6fbcff2ed32d55008973ec9a2b992bd9a65d2352d"}, - {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7723ad949a0ea502df656948ddd8b392780a5beaa4c3b5f97e525191b102fff0"}, - {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6fd9bc64421e9fe9bd88039e7ce8e58d4fead67ca88e3a4014b143cec7684fd4"}, - {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dadba0e7b6594216c214ef7894c4bd5f08d7c0135f4dd0145600be4fbcc16767"}, - {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b48f59114fe318f33bbaee8ebeda696d8ccc94c9e90bc27dbe72153094e26f41"}, - {file = "orjson-3.10.15-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:035fb83585e0f15e076759b6fedaf0abb460d1765b6a36f48018a52858443514"}, - {file 
= "orjson-3.10.15-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d13b7fe322d75bf84464b075eafd8e7dd9eae05649aa2a5354cfa32f43c59f17"}, - {file = "orjson-3.10.15-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:7066b74f9f259849629e0d04db6609db4cf5b973248f455ba5d3bd58a4daaa5b"}, - {file = "orjson-3.10.15-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:88dc3f65a026bd3175eb157fea994fca6ac7c4c8579fc5a86fc2114ad05705b7"}, - {file = "orjson-3.10.15-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b342567e5465bd99faa559507fe45e33fc76b9fb868a63f1642c6bc0735ad02a"}, - {file = "orjson-3.10.15-cp312-cp312-win32.whl", hash = "sha256:0a4f27ea5617828e6b58922fdbec67b0aa4bb844e2d363b9244c47fa2180e665"}, - {file = "orjson-3.10.15-cp312-cp312-win_amd64.whl", hash = "sha256:ef5b87e7aa9545ddadd2309efe6824bd3dd64ac101c15dae0f2f597911d46eaa"}, - {file = "orjson-3.10.15-cp313-cp313-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:bae0e6ec2b7ba6895198cd981b7cca95d1487d0147c8ed751e5632ad16f031a6"}, - {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f93ce145b2db1252dd86af37d4165b6faa83072b46e3995ecc95d4b2301b725a"}, - {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7c203f6f969210128af3acae0ef9ea6aab9782939f45f6fe02d05958fe761ef9"}, - {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8918719572d662e18b8af66aef699d8c21072e54b6c82a3f8f6404c1f5ccd5e0"}, - {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f71eae9651465dff70aa80db92586ad5b92df46a9373ee55252109bb6b703307"}, - {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e117eb299a35f2634e25ed120c37c641398826c2f5a3d3cc39f5993b96171b9e"}, - {file = "orjson-3.10.15-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = 
"sha256:13242f12d295e83c2955756a574ddd6741c81e5b99f2bef8ed8d53e47a01e4b7"}, - {file = "orjson-3.10.15-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:7946922ada8f3e0b7b958cc3eb22cfcf6c0df83d1fe5521b4a100103e3fa84c8"}, - {file = "orjson-3.10.15-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:b7155eb1623347f0f22c38c9abdd738b287e39b9982e1da227503387b81b34ca"}, - {file = "orjson-3.10.15-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:208beedfa807c922da4e81061dafa9c8489c6328934ca2a562efa707e049e561"}, - {file = "orjson-3.10.15-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eca81f83b1b8c07449e1d6ff7074e82e3fd6777e588f1a6632127f286a968825"}, - {file = "orjson-3.10.15-cp313-cp313-win32.whl", hash = "sha256:c03cd6eea1bd3b949d0d007c8d57049aa2b39bd49f58b4b2af571a5d3833d890"}, - {file = "orjson-3.10.15-cp313-cp313-win_amd64.whl", hash = "sha256:fd56a26a04f6ba5fb2045b0acc487a63162a958ed837648c5781e1fe3316cfbf"}, - {file = "orjson-3.10.15-cp38-cp38-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:5e8afd6200e12771467a1a44e5ad780614b86abb4b11862ec54861a82d677746"}, - {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da9a18c500f19273e9e104cca8c1f0b40a6470bcccfc33afcc088045d0bf5ea6"}, - {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bb00b7bfbdf5d34a13180e4805d76b4567025da19a197645ca746fc2fb536586"}, - {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:33aedc3d903378e257047fee506f11e0833146ca3e57a1a1fb0ddb789876c1e1"}, - {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dd0099ae6aed5eb1fc84c9eb72b95505a3df4267e6962eb93cdd5af03be71c98"}, - {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7c864a80a2d467d7786274fce0e4f93ef2a7ca4ff31f7fc5634225aaa4e9e98c"}, - {file = 
"orjson-3.10.15-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c25774c9e88a3e0013d7d1a6c8056926b607a61edd423b50eb5c88fd7f2823ae"}, - {file = "orjson-3.10.15-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:e78c211d0074e783d824ce7bb85bf459f93a233eb67a5b5003498232ddfb0e8a"}, - {file = "orjson-3.10.15-cp38-cp38-musllinux_1_2_armv7l.whl", hash = "sha256:43e17289ffdbbac8f39243916c893d2ae41a2ea1a9cbb060a56a4d75286351ae"}, - {file = "orjson-3.10.15-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:781d54657063f361e89714293c095f506c533582ee40a426cb6489c48a637b81"}, - {file = "orjson-3.10.15-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:6875210307d36c94873f553786a808af2788e362bd0cf4c8e66d976791e7b528"}, - {file = "orjson-3.10.15-cp38-cp38-win32.whl", hash = "sha256:305b38b2b8f8083cc3d618927d7f424349afce5975b316d33075ef0f73576b60"}, - {file = "orjson-3.10.15-cp38-cp38-win_amd64.whl", hash = "sha256:5dd9ef1639878cc3efffed349543cbf9372bdbd79f478615a1c633fe4e4180d1"}, - {file = "orjson-3.10.15-cp39-cp39-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:ffe19f3e8d68111e8644d4f4e267a069ca427926855582ff01fc012496d19969"}, - {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d433bf32a363823863a96561a555227c18a522a8217a6f9400f00ddc70139ae2"}, - {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:da03392674f59a95d03fa5fb9fe3a160b0511ad84b7a3914699ea5a1b3a38da2"}, - {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3a63bb41559b05360ded9132032239e47983a39b151af1201f07ec9370715c82"}, - {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3766ac4702f8f795ff3fa067968e806b4344af257011858cc3d6d8721588b53f"}, - {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:7a1c73dcc8fadbd7c55802d9aa093b36878d34a3b3222c41052ce6b0fc65f8e8"}, - {file = "orjson-3.10.15-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b299383825eafe642cbab34be762ccff9fd3408d72726a6b2a4506d410a71ab3"}, - {file = "orjson-3.10.15-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:abc7abecdbf67a173ef1316036ebbf54ce400ef2300b4e26a7b843bd446c2480"}, - {file = "orjson-3.10.15-cp39-cp39-musllinux_1_2_armv7l.whl", hash = "sha256:3614ea508d522a621384c1d6639016a5a2e4f027f3e4a1c93a51867615d28829"}, - {file = "orjson-3.10.15-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:295c70f9dc154307777ba30fe29ff15c1bcc9dfc5c48632f37d20a607e9ba85a"}, - {file = "orjson-3.10.15-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:63309e3ff924c62404923c80b9e2048c1f74ba4b615e7584584389ada50ed428"}, - {file = "orjson-3.10.15-cp39-cp39-win32.whl", hash = "sha256:a2f708c62d026fb5340788ba94a55c23df4e1869fec74be455e0b2f5363b8507"}, - {file = "orjson-3.10.15-cp39-cp39-win_amd64.whl", hash = "sha256:efcf6c735c3d22ef60c4aa27a5238f1a477df85e9b15f2142f9d669beb2d13fd"}, - {file = "orjson-3.10.15.tar.gz", hash = "sha256:05ca7fe452a2e9d8d9d706a2984c95b9c2ebc5db417ce0b7a49b91d50642a23e"}, + {file = "orjson-3.10.16-cp310-cp310-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:4cb473b8e79154fa778fb56d2d73763d977be3dcc140587e07dbc545bbfc38f8"}, + {file = "orjson-3.10.16-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:622a8e85eeec1948690409a19ca1c7d9fd8ff116f4861d261e6ae2094fe59a00"}, + {file = "orjson-3.10.16-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c682d852d0ce77613993dc967e90e151899fe2d8e71c20e9be164080f468e370"}, + {file = "orjson-3.10.16-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8c520ae736acd2e32df193bcff73491e64c936f3e44a2916b548da048a48b46b"}, + {file = "orjson-3.10.16-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:134f87c76bfae00f2094d85cfab261b289b76d78c6da8a7a3b3c09d362fd1e06"}, + {file = "orjson-3.10.16-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b59afde79563e2cf37cfe62ee3b71c063fd5546c8e662d7fcfc2a3d5031a5c4c"}, + {file = "orjson-3.10.16-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:113602f8241daaff05d6fad25bd481d54c42d8d72ef4c831bb3ab682a54d9e15"}, + {file = "orjson-3.10.16-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:4fc0077d101f8fab4031e6554fc17b4c2ad8fdbc56ee64a727f3c95b379e31da"}, + {file = "orjson-3.10.16-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:9c6bf6ff180cd69e93f3f50380224218cfab79953a868ea3908430bcfaf9cb5e"}, + {file = "orjson-3.10.16-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:5673eadfa952f95a7cd76418ff189df11b0a9c34b1995dff43a6fdbce5d63bf4"}, + {file = "orjson-3.10.16-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:5fe638a423d852b0ae1e1a79895851696cb0d9fa0946fdbfd5da5072d9bb9551"}, + {file = "orjson-3.10.16-cp310-cp310-win32.whl", hash = "sha256:33af58f479b3c6435ab8f8b57999874b4b40c804c7a36b5cc6b54d8f28e1d3dd"}, + {file = "orjson-3.10.16-cp310-cp310-win_amd64.whl", hash = "sha256:0338356b3f56d71293c583350af26f053017071836b07e064e92819ecf1aa055"}, + {file = "orjson-3.10.16-cp311-cp311-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:44fcbe1a1884f8bc9e2e863168b0f84230c3d634afe41c678637d2728ea8e739"}, + {file = "orjson-3.10.16-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78177bf0a9d0192e0b34c3d78bcff7fe21d1b5d84aeb5ebdfe0dbe637b885225"}, + {file = "orjson-3.10.16-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:12824073a010a754bb27330cad21d6e9b98374f497f391b8707752b96f72e741"}, + {file = "orjson-3.10.16-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ddd41007e56284e9867864aa2f29f3136bb1dd19a49ca43c0b4eda22a579cf53"}, + {file = 
"orjson-3.10.16-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0877c4d35de639645de83666458ca1f12560d9fa7aa9b25d8bb8f52f61627d14"}, + {file = "orjson-3.10.16-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9a09a539e9cc3beead3e7107093b4ac176d015bec64f811afb5965fce077a03c"}, + {file = "orjson-3.10.16-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31b98bc9b40610fec971d9a4d67bb2ed02eec0a8ae35f8ccd2086320c28526ca"}, + {file = "orjson-3.10.16-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0ce243f5a8739f3a18830bc62dc2e05b69a7545bafd3e3249f86668b2bcd8e50"}, + {file = "orjson-3.10.16-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:64792c0025bae049b3074c6abe0cf06f23c8e9f5a445f4bab31dc5ca23dbf9e1"}, + {file = "orjson-3.10.16-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:ea53f7e68eec718b8e17e942f7ca56c6bd43562eb19db3f22d90d75e13f0431d"}, + {file = "orjson-3.10.16-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a741ba1a9488c92227711bde8c8c2b63d7d3816883268c808fbeada00400c164"}, + {file = "orjson-3.10.16-cp311-cp311-win32.whl", hash = "sha256:c7ed2c61bb8226384c3fdf1fb01c51b47b03e3f4536c985078cccc2fd19f1619"}, + {file = "orjson-3.10.16-cp311-cp311-win_amd64.whl", hash = "sha256:cd67d8b3e0e56222a2e7b7f7da9031e30ecd1fe251c023340b9f12caca85ab60"}, + {file = "orjson-3.10.16-cp312-cp312-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:6d3444abbfa71ba21bb042caa4b062535b122248259fdb9deea567969140abca"}, + {file = "orjson-3.10.16-cp312-cp312-macosx_15_0_arm64.whl", hash = "sha256:30245c08d818fdcaa48b7d5b81499b8cae09acabb216fe61ca619876b128e184"}, + {file = "orjson-3.10.16-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0ba1d0baa71bf7579a4ccdcf503e6f3098ef9542106a0eca82395898c8a500a"}, + {file = "orjson-3.10.16-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = 
"sha256:eb0beefa5ef3af8845f3a69ff2a4aa62529b5acec1cfe5f8a6b4141033fd46ef"}, + {file = "orjson-3.10.16-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6daa0e1c9bf2e030e93c98394de94506f2a4d12e1e9dadd7c53d5e44d0f9628e"}, + {file = "orjson-3.10.16-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9da9019afb21e02410ef600e56666652b73eb3e4d213a0ec919ff391a7dd52aa"}, + {file = "orjson-3.10.16-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:daeb3a1ee17b69981d3aae30c3b4e786b0f8c9e6c71f2b48f1aef934f63f38f4"}, + {file = "orjson-3.10.16-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80fed80eaf0e20a31942ae5d0728849862446512769692474be5e6b73123a23b"}, + {file = "orjson-3.10.16-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:73390ed838f03764540a7bdc4071fe0123914c2cc02fb6abf35182d5fd1b7a42"}, + {file = "orjson-3.10.16-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:a22bba012a0c94ec02a7768953020ab0d3e2b884760f859176343a36c01adf87"}, + {file = "orjson-3.10.16-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5385bbfdbc90ff5b2635b7e6bebf259652db00a92b5e3c45b616df75b9058e88"}, + {file = "orjson-3.10.16-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:02c6279016346e774dd92625d46c6c40db687b8a0d685aadb91e26e46cc33e1e"}, + {file = "orjson-3.10.16-cp312-cp312-win32.whl", hash = "sha256:7ca55097a11426db80f79378e873a8c51f4dde9ffc22de44850f9696b7eb0e8c"}, + {file = "orjson-3.10.16-cp312-cp312-win_amd64.whl", hash = "sha256:86d127efdd3f9bf5f04809b70faca1e6836556ea3cc46e662b44dab3fe71f3d6"}, + {file = "orjson-3.10.16-cp313-cp313-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:148a97f7de811ba14bc6dbc4a433e0341ffd2cc285065199fb5f6a98013744bd"}, + {file = "orjson-3.10.16-cp313-cp313-macosx_15_0_arm64.whl", hash = "sha256:1d960c1bf0e734ea36d0adc880076de3846aaec45ffad29b78c7f1b7962516b8"}, + {file = 
"orjson-3.10.16-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a318cd184d1269f68634464b12871386808dc8b7c27de8565234d25975a7a137"}, + {file = "orjson-3.10.16-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:df23f8df3ef9223d1d6748bea63fca55aae7da30a875700809c500a05975522b"}, + {file = "orjson-3.10.16-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b94dda8dd6d1378f1037d7f3f6b21db769ef911c4567cbaa962bb6dc5021cf90"}, + {file = "orjson-3.10.16-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f12970a26666a8775346003fd94347d03ccb98ab8aa063036818381acf5f523e"}, + {file = "orjson-3.10.16-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:15a1431a245d856bd56e4d29ea0023eb4d2c8f71efe914beb3dee8ab3f0cd7fb"}, + {file = "orjson-3.10.16-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c83655cfc247f399a222567d146524674a7b217af7ef8289c0ff53cfe8db09f0"}, + {file = "orjson-3.10.16-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:fa59ae64cb6ddde8f09bdbf7baf933c4cd05734ad84dcf4e43b887eb24e37652"}, + {file = "orjson-3.10.16-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:ca5426e5aacc2e9507d341bc169d8af9c3cbe88f4cd4c1cf2f87e8564730eb56"}, + {file = "orjson-3.10.16-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:6fd5da4edf98a400946cd3a195680de56f1e7575109b9acb9493331047157430"}, + {file = "orjson-3.10.16-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:980ecc7a53e567169282a5e0ff078393bac78320d44238da4e246d71a4e0e8f5"}, + {file = "orjson-3.10.16-cp313-cp313-win32.whl", hash = "sha256:28f79944dd006ac540a6465ebd5f8f45dfdf0948ff998eac7a908275b4c1add6"}, + {file = "orjson-3.10.16-cp313-cp313-win_amd64.whl", hash = "sha256:fe0a145e96d51971407cb8ba947e63ead2aa915db59d6631a355f5f2150b56b7"}, + {file = "orjson-3.10.16-cp39-cp39-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = 
"sha256:c35b5c1fb5a5d6d2fea825dec5d3d16bea3c06ac744708a8e1ff41d4ba10cdf1"}, + {file = "orjson-3.10.16-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c9aac7ecc86218b4b3048c768f227a9452287001d7548500150bb75ee21bf55d"}, + {file = "orjson-3.10.16-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6e19f5102fff36f923b6dfdb3236ec710b649da975ed57c29833cb910c5a73ab"}, + {file = "orjson-3.10.16-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:17210490408eb62755a334a6f20ed17c39f27b4f45d89a38cd144cd458eba80b"}, + {file = "orjson-3.10.16-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fbbe04451db85916e52a9f720bd89bf41f803cf63b038595674691680cbebd1b"}, + {file = "orjson-3.10.16-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6a966eba501a3a1f309f5a6af32ed9eb8f316fa19d9947bac3e6350dc63a6f0a"}, + {file = "orjson-3.10.16-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:01e0d22f06c81e6c435723343e1eefc710e0510a35d897856766d475f2a15687"}, + {file = "orjson-3.10.16-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:7c1e602d028ee285dbd300fb9820b342b937df64d5a3336e1618b354e95a2569"}, + {file = "orjson-3.10.16-cp39-cp39-musllinux_1_2_armv7l.whl", hash = "sha256:d230e5020666a6725629df81e210dc11c3eae7d52fe909a7157b3875238484f3"}, + {file = "orjson-3.10.16-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:0f8baac07d4555f57d44746a7d80fbe6b2c4fe2ed68136b4abb51cfec512a5e9"}, + {file = "orjson-3.10.16-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:524e48420b90fc66953e91b660b3d05faaf921277d6707e328fde1c218b31250"}, + {file = "orjson-3.10.16-cp39-cp39-win32.whl", hash = "sha256:a9f614e31423d7292dbca966a53b2d775c64528c7d91424ab2747d8ab8ce5c72"}, + {file = "orjson-3.10.16-cp39-cp39-win_amd64.whl", hash = "sha256:c338dc2296d1ed0d5c5c27dfb22d00b330555cb706c2e0be1e1c3940a0895905"}, + {file = "orjson-3.10.16.tar.gz", hash = 
"sha256:d2aaa5c495e11d17b9b93205f5fa196737ee3202f000aaebf028dc9a73750f10"}, ] [[package]] @@ -2947,33 +3079,49 @@ xmp = ["defusedxml"] [[package]] name = "platformdirs" -version = "4.3.6" +version = "4.3.7" description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`." optional = false +python-versions = ">=3.9" +files = [ + {file = "platformdirs-4.3.7-py3-none-any.whl", hash = "sha256:a03875334331946f13c549dbd8f4bac7a13a50a895a0eb1e8c6a8ace80d40a94"}, + {file = "platformdirs-4.3.7.tar.gz", hash = "sha256:eb437d586b6a0986388f0d6f74aa0cde27b48d0e3d66843640bfb6bdcdb6e351"}, +] + +[package.extras] +docs = ["furo (>=2024.8.6)", "proselint (>=0.14)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"] +test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=8.3.4)", "pytest-cov (>=6)", "pytest-mock (>=3.14)"] +type = ["mypy (>=1.14.1)"] + +[[package]] +name = "pluggy" +version = "1.5.0" +description = "plugin and hook calling mechanisms for python" +optional = false python-versions = ">=3.8" files = [ - {file = "platformdirs-4.3.6-py3-none-any.whl", hash = "sha256:73e575e1408ab8103900836b97580d5307456908a03e92031bab39e4554cc3fb"}, - {file = "platformdirs-4.3.6.tar.gz", hash = "sha256:357fb2acbc885b0419afd3ce3ed34564c13c9b95c89360cd9563f73aa5e2b907"}, + {file = "pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669"}, + {file = "pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1"}, ] [package.extras] -docs = ["furo (>=2024.8.6)", "proselint (>=0.14)", "sphinx (>=8.0.2)", "sphinx-autodoc-typehints (>=2.4)"] -test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=8.3.2)", "pytest-cov (>=5)", "pytest-mock (>=3.14)"] -type = ["mypy (>=1.11.2)"] +dev = ["pre-commit", "tox"] +testing = ["pytest", "pytest-benchmark"] [[package]] name = "posthog" -version = "3.14.2" +version = "3.23.0" description = 
"Integrate PostHog into any python application." optional = false python-versions = "*" files = [ - {file = "posthog-3.14.2-py2.py3-none-any.whl", hash = "sha256:f50d41dfe116ace4971b304518de57e0de34a936cdfdff84efed0dd993dfbcda"}, - {file = "posthog-3.14.2.tar.gz", hash = "sha256:b9794aa5b316767cc7f8685292f8ff3e0df8b01fcaf2905afe2efa9696cb5c77"}, + {file = "posthog-3.23.0-py2.py3-none-any.whl", hash = "sha256:2b07d06670170ac2e21465dffa8d356722834cc877ab34e583da6e525c1037df"}, + {file = "posthog-3.23.0.tar.gz", hash = "sha256:1ac0305ab6c54a80c4a82c137231f17616bef007bbf474d1a529cda032d808eb"}, ] [package.dependencies] backoff = ">=1.10.0" +distro = ">=1.5.0" monotonic = ">=1.5" python-dateutil = ">2.1" requests = ">=2.7,<3.0" @@ -2983,7 +3131,7 @@ six = ">=1.5" dev = ["black", "django-stubs", "flake8", "flake8-print", "isort", "lxml", "mypy", "mypy-baseline", "pre-commit", "pydantic", "types-mock", "types-python-dateutil", "types-requests", "types-setuptools", "types-six"] langchain = ["langchain (>=0.2.0)"] sentry = ["django", "sentry-sdk"] -test = ["anthropic", "coverage", "django", "flake8", "freezegun (==1.5.1)", "langchain-anthropic (>=0.2.0)", "langchain-community (>=0.2.0)", "langchain-openai (>=0.2.0)", "langgraph", "mock (>=2.0.0)", "openai", "pydantic", "pylint", "pytest", "pytest-asyncio", "pytest-timeout"] +test = ["anthropic", "coverage", "django", "flake8", "freezegun (==1.5.1)", "langchain-anthropic (>=0.2.0)", "langchain-community (>=0.2.0)", "langchain-openai (>=0.2.0)", "langgraph", "mock (>=2.0.0)", "openai", "parameterized (>=0.8.1)", "pydantic", "pylint", "pytest", "pytest-asyncio", "pytest-timeout"] [[package]] name = "promise" @@ -3017,17 +3165,17 @@ wcwidth = "*" [[package]] name = "proto-plus" -version = "1.26.0" +version = "1.26.1" description = "Beautiful, Pythonic protocol buffers" optional = false python-versions = ">=3.7" files = [ - {file = "proto_plus-1.26.0-py3-none-any.whl", hash = 
"sha256:bf2dfaa3da281fc3187d12d224c707cb57214fb2c22ba854eb0c105a3fb2d4d7"}, - {file = "proto_plus-1.26.0.tar.gz", hash = "sha256:6e93d5f5ca267b54300880fff156b6a3386b3fa3f43b1da62e680fc0c586ef22"}, + {file = "proto_plus-1.26.1-py3-none-any.whl", hash = "sha256:13285478c2dcf2abb829db158e1047e2f1e8d63a077d94263c2b88b043c75a66"}, + {file = "proto_plus-1.26.1.tar.gz", hash = "sha256:21a515a4c4c0088a773899e23c7bbade3d18f9c66c73edd4c7ee3816bc96a012"}, ] [package.dependencies] -protobuf = ">=3.19.0,<6.0.0dev" +protobuf = ">=3.19.0,<7.0.0" [package.extras] testing = ["google-api-core (>=1.31.5)"] @@ -3063,26 +3211,6 @@ files = [ {file = "protobuf-3.20.3.tar.gz", hash = "sha256:2e3427429c9cffebf259491be0af70189607f365c2f41c7c3764af6f337105f2"}, ] -[[package]] -name = "protobuf" -version = "5.29.3" -description = "" -optional = false -python-versions = ">=3.8" -files = [ - {file = "protobuf-5.29.3-cp310-abi3-win32.whl", hash = "sha256:3ea51771449e1035f26069c4c7fd51fba990d07bc55ba80701c78f886bf9c888"}, - {file = "protobuf-5.29.3-cp310-abi3-win_amd64.whl", hash = "sha256:a4fa6f80816a9a0678429e84973f2f98cbc218cca434abe8db2ad0bffc98503a"}, - {file = "protobuf-5.29.3-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:a8434404bbf139aa9e1300dbf989667a83d42ddda9153d8ab76e0d5dcaca484e"}, - {file = "protobuf-5.29.3-cp38-abi3-manylinux2014_aarch64.whl", hash = "sha256:daaf63f70f25e8689c072cfad4334ca0ac1d1e05a92fc15c54eb9cf23c3efd84"}, - {file = "protobuf-5.29.3-cp38-abi3-manylinux2014_x86_64.whl", hash = "sha256:c027e08a08be10b67c06bf2370b99c811c466398c357e615ca88c91c07f0910f"}, - {file = "protobuf-5.29.3-cp38-cp38-win32.whl", hash = "sha256:84a57163a0ccef3f96e4b6a20516cedcf5bb3a95a657131c5c3ac62200d23252"}, - {file = "protobuf-5.29.3-cp38-cp38-win_amd64.whl", hash = "sha256:b89c115d877892a512f79a8114564fb435943b59067615894c3b13cd3e1fa107"}, - {file = "protobuf-5.29.3-cp39-cp39-win32.whl", hash = "sha256:0eb32bfa5219fc8d4111803e9a690658aa2e6366384fd0851064b963b6d1f2a7"}, - {file = 
"protobuf-5.29.3-cp39-cp39-win_amd64.whl", hash = "sha256:6ce8cc3389a20693bfde6c6562e03474c40851b44975c9b2bf6df7d8c4f864da"}, - {file = "protobuf-5.29.3-py3-none-any.whl", hash = "sha256:0a18ed4a24198528f2333802eb075e59dea9d679ab7a6c5efb017a59004d849f"}, - {file = "protobuf-5.29.3.tar.gz", hash = "sha256:5da0f41edaf117bde316404bad1a486cb4ededf8e4a54891296f648e8e076620"}, -] - [[package]] name = "psutil" version = "7.0.0" @@ -3119,50 +3247,50 @@ files = [ [[package]] name = "pulsar-client" -version = "3.6.0" +version = "3.5.0" description = "Apache Pulsar Python client library" optional = false python-versions = "*" files = [ - {file = "pulsar_client-3.6.0-cp310-cp310-macosx_13_0_universal2.whl", hash = "sha256:f478b5bb3880bb1eade7775f61db7c7d1f539f11044fc79332e4ff5881120b98"}, - {file = "pulsar_client-3.6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:46bbbb5dba3cce3e1bc276b0a1dd5ed3a902814715020e3e19d2d0268eee9769"}, - {file = "pulsar_client-3.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac358797517dde3152b76be036f275f079d51d88c864a636492124d8f93128cb"}, - {file = "pulsar_client-3.6.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:06dfbc5675c736fa85c1a922c62f39408d239f4b9ed04dada37f417c953c0b0b"}, - {file = "pulsar_client-3.6.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d5736d33d2f476a6fe0c57800fd7eaea8aa86d7b2ba7836ab79fcc84e01e1575"}, - {file = "pulsar_client-3.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:18d48ddfe0d0718da05a9afae15a6b42feede9af9259c02f07e98341af3d649b"}, - {file = "pulsar_client-3.6.0-cp311-cp311-macosx_13_0_universal2.whl", hash = "sha256:bbf0cf2c826e83338691fc871e5d4e864fa78790b1aa8a12df4549dd421d4169"}, - {file = "pulsar_client-3.6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bfdb700f5d355e1cb22c6060e870937d7f00eb20b28a6b685cc70cbeabb0f3a0"}, - {file = 
"pulsar_client-3.6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:109c1662784026ccc22937c15a9c6f4f22630af1bee92b7c22d718516079ffde"}, - {file = "pulsar_client-3.6.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b6ef10cc84db3f1813ac1074e6097818ec721d2a1047eaf0b99bf0a17171d5e4"}, - {file = "pulsar_client-3.6.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:155baa18d9934fa0c3390e4185399d018c29ff101888a8fd04e2c5cefab31c26"}, - {file = "pulsar_client-3.6.0-cp311-cp311-win_amd64.whl", hash = "sha256:3ff243e60b2e6556979039281bfb25d5de63ab2edb0a2dbb4b6ad06ae87f6d87"}, - {file = "pulsar_client-3.6.0-cp312-cp312-macosx_13_0_universal2.whl", hash = "sha256:e5903102b708c228c5b52f0b7b56b473a3cc00a42ec1d657352d36cea70892cb"}, - {file = "pulsar_client-3.6.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:192d7b1ec1824a2784fabc19fceb917d6ccdb4b1daf7d2bab4fcae783529c2ac"}, - {file = "pulsar_client-3.6.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c67d95ca20e78d42229bdb5c01b18d6b3f0626773d93e15d03b7cef832496838"}, - {file = "pulsar_client-3.6.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:8c9c1c2093e713469e42e561af6970c59e7ebe638ada823813ca5b3d89325002"}, - {file = "pulsar_client-3.6.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:2982b6a90a63988dea3c83c9e6ce105f15ce2d79caf49c87787dd66ca3fbc9f0"}, - {file = "pulsar_client-3.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:2a3b1d5198791fb284e729489c4f0598d0f475f439d64fa4527bbdf232ce0ae5"}, - {file = "pulsar_client-3.6.0-cp313-cp313-macosx_13_0_universal2.whl", hash = "sha256:9f20da120a93f5e8b3c75ca95ab09cff29573d6256ebaa57df7240687f6b4b91"}, - {file = "pulsar_client-3.6.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83c2c2eb1b29bde71e3d103625cd99eac42bebcdd57ca4d014a4a4265ed0a4f0"}, - {file = "pulsar_client-3.6.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash 
= "sha256:c477a72e9461397345a607e72828ef9f16eb6a75fbbbe6569a2d1a27ff0f0c11"}, - {file = "pulsar_client-3.6.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3dece3e07dbdf726747fb15afe253bcca2d793d498dcc7ac5cdb8477a3d50f43"}, - {file = "pulsar_client-3.6.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:5392e9ad5a1fd99f60dcf8908dd885f43d7ca32cb207a73e4af8e57f97e72a5f"}, - {file = "pulsar_client-3.6.0-cp313-cp313-win_amd64.whl", hash = "sha256:6807d54ee7a2e6dd7c3b6916ab3da4dc185dd3868de7e59e501a8ca69d3ab43e"}, - {file = "pulsar_client-3.6.0-cp39-cp39-macosx_13_0_universal2.whl", hash = "sha256:cf9be416e5fec3e3eebe64cdd07540a03ff977e69da70344164c59468ccf59b4"}, - {file = "pulsar_client-3.6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:de2dbbccfdb96f3849a5c67d90cd40aa8b633ee083db3359ef5f39040ee60022"}, - {file = "pulsar_client-3.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9454a114b4821a5efec2c7187919b36cb3242c7218cc80bac1a84dbbd9dfc14c"}, - {file = "pulsar_client-3.6.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:5e38a4d7848181c818861e46dac54f53f8dbebb5cec90766913dd16813f72d7a"}, - {file = "pulsar_client-3.6.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:43dcd775dfd5da87ab112e2f03268de7651f2a6644eb2f5b09394cbe22b95d85"}, - {file = "pulsar_client-3.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:0de9048189d10ab3843963a287ac2ee47c74dc672ae7b425a0138bca2f63bb5c"}, + {file = "pulsar_client-3.5.0-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:c18552edb2f785de85280fe624bc507467152bff810fc81d7660fa2dfa861f38"}, + {file = "pulsar_client-3.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18d438e456c146f01be41ef146f649dedc8f7bc714d9eaef94cff2e34099812b"}, + {file = "pulsar_client-3.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18a26a0719841103c7a89eb1492c4a8fedf89adaa386375baecbb4fa2707e88f"}, + {file = 
"pulsar_client-3.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ab0e1605dc5f44a126163fd06cd0a768494ad05123f6e0de89a2c71d6e2d2319"}, + {file = "pulsar_client-3.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cdef720891b97656fdce3bf5913ea7729b2156b84ba64314f432c1e72c6117fa"}, + {file = "pulsar_client-3.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:a42544e38773191fe550644a90e8050579476bb2dcf17ac69a4aed62a6cb70e7"}, + {file = "pulsar_client-3.5.0-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:fd94432ea5d398ea78f8f2e09a217ec5058d26330c137a22690478c031e116da"}, + {file = "pulsar_client-3.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d6252ae462e07ece4071213fdd9c76eab82ca522a749f2dc678037d4cbacd40b"}, + {file = "pulsar_client-3.5.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03b4d440b2d74323784328b082872ee2f206c440b5d224d7941eb3c083ec06c6"}, + {file = "pulsar_client-3.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:f60af840b8d64a2fac5a0c1ce6ae0ddffec5f42267c6ded2c5e74bad8345f2a1"}, + {file = "pulsar_client-3.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2277a447c3b7f6571cb1eb9fc5c25da3fdd43d0b2fb91cf52054adfadc7d6842"}, + {file = "pulsar_client-3.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:f20f3e9dd50db2a37059abccad42078b7a4754b8bc1d3ae6502e71c1ad2209f0"}, + {file = "pulsar_client-3.5.0-cp312-cp312-macosx_10_15_universal2.whl", hash = "sha256:d61f663d85308e12f44033ba95af88730f581a7e8da44f7a5c080a3aaea4878d"}, + {file = "pulsar_client-3.5.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a1ba0be25b6f747bcb28102b7d906ec1de48dc9f1a2d9eacdcc6f44ab2c9e17"}, + {file = "pulsar_client-3.5.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a181e3e60ac39df72ccb3c415d7aeac61ad0286497a6e02739a560d5af28393a"}, + {file = "pulsar_client-3.5.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = 
"sha256:3c72895ff7f51347e4f78b0375b2213fa70dd4790bbb78177b4002846f1fd290"}, + {file = "pulsar_client-3.5.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:547dba1b185a17eba915e51d0a3aca27c80747b6187e5cd7a71a3ca33921decc"}, + {file = "pulsar_client-3.5.0-cp312-cp312-win_amd64.whl", hash = "sha256:443b786eed96bc86d2297a6a42e79f39d1abf217ec603e0bd303f3488c0234af"}, + {file = "pulsar_client-3.5.0-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:15b58f5d759dd6166db8a2d90ed05a38063b05cda76c36d190d86ef5c9249397"}, + {file = "pulsar_client-3.5.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:af34bfe813dddf772a8a298117fa0a036ee963595d8bc8f00d969a0329ae6ed9"}, + {file = "pulsar_client-3.5.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:27a0fec1dd74e1367d3742ce16679c1807994df60f5e666f440cf39323938fad"}, + {file = "pulsar_client-3.5.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:dbcd26ef9c03f96fb9cd91baec3bbd3c4b997834eb3556670d31f41cc25b5f64"}, + {file = "pulsar_client-3.5.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:afea1d0b6e793fd56e56463145751ff3aa79fdcd5b26e90d0da802a1bbabe07e"}, + {file = "pulsar_client-3.5.0-cp38-cp38-win_amd64.whl", hash = "sha256:da1ab2fb1bef64b966e9403a0a186ebc90368d99e054ce2cae5b1128478f4ef4"}, + {file = "pulsar_client-3.5.0-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:9ad5dcc0eb8d2a7c0fb8e1fa146a0c6d4bdaf934f1169080b2c64b2f0573e086"}, + {file = "pulsar_client-3.5.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e5870c6805b1a57962ed908d1173e97e13470415998393925c86a43694420389"}, + {file = "pulsar_client-3.5.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29cb5fedb969895b78301dc00a979133e69940812b8332e4de948bb0ad3db7cb"}, + {file = "pulsar_client-3.5.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:e53c74bfa59b20c66adea95023169060f5048dd8d843e6ef9cd3b8ee2d23e93b"}, + {file = 
"pulsar_client-3.5.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:99dbadb13967f1add57010971ed36b5a77d24afcdaea01960d0e55e56cf4ba6f"}, + {file = "pulsar_client-3.5.0-cp39-cp39-win_amd64.whl", hash = "sha256:058887661d438796f42307dcc8054c84dea88a37683dae36498b95d7e1c39b37"}, ] [package.dependencies] certifi = "*" [package.extras] -all = ["apache-bookkeeper-client (>=4.16.1)", "fastavro (>=1.9.2)", "grpcio (>=1.59.3)", "prometheus-client", "protobuf (>=3.6.1,<=3.20.3)", "ratelimit"] +all = ["apache-bookkeeper-client (>=4.16.1)", "fastavro (>=1.9.2)", "grpcio (>=1.60.0)", "prometheus-client", "protobuf (>=3.6.1,<=3.20.3)", "ratelimit"] avro = ["fastavro (>=1.9.2)"] -functions = ["apache-bookkeeper-client (>=4.16.1)", "grpcio (>=1.59.3)", "prometheus-client", "protobuf (>=3.6.1,<=3.20.3)", "ratelimit"] +functions = ["apache-bookkeeper-client (>=4.16.1)", "grpcio (>=1.60.0)", "prometheus-client", "protobuf (>=3.6.1,<=3.20.3)", "ratelimit"] [[package]] name = "pure-eval" @@ -3245,33 +3373,34 @@ files = [ [[package]] name = "pyasn1-modules" -version = "0.4.1" +version = "0.4.2" description = "A collection of ASN.1-based protocols modules" optional = false python-versions = ">=3.8" files = [ - {file = "pyasn1_modules-0.4.1-py3-none-any.whl", hash = "sha256:49bfa96b45a292b711e986f222502c1c9a5e1f4e568fc30e2574a6c7d07838fd"}, - {file = "pyasn1_modules-0.4.1.tar.gz", hash = "sha256:c28e2dbf9c06ad61c71a075c7e0f9fd0f1b0bb2d2ad4377f240d33ac2ab60a7c"}, + {file = "pyasn1_modules-0.4.2-py3-none-any.whl", hash = "sha256:29253a9207ce32b64c3ac6600edc75368f98473906e8fd1043bd6b5b1de2c14a"}, + {file = "pyasn1_modules-0.4.2.tar.gz", hash = "sha256:677091de870a80aae844b1ca6134f54652fa2c8c5a52aa396440ac3106e941e6"}, ] [package.dependencies] -pyasn1 = ">=0.4.6,<0.7.0" +pyasn1 = ">=0.6.1,<0.7.0" [[package]] name = "pydantic" -version = "2.10.6" +version = "2.11.3" description = "Data validation using Python type hints" optional = false -python-versions = ">=3.8" +python-versions = 
">=3.9" files = [ - {file = "pydantic-2.10.6-py3-none-any.whl", hash = "sha256:427d664bf0b8a2b34ff5dd0f5a18df00591adcee7198fbd71981054cef37b584"}, - {file = "pydantic-2.10.6.tar.gz", hash = "sha256:ca5daa827cce33de7a42be142548b0096bf05a7e7b365aebfa5f8eeec7128236"}, + {file = "pydantic-2.11.3-py3-none-any.whl", hash = "sha256:a082753436a07f9ba1289c6ffa01cd93db3548776088aa917cc43b63f68fa60f"}, + {file = "pydantic-2.11.3.tar.gz", hash = "sha256:7471657138c16adad9322fe3070c0116dd6c3ad8d649300e3cbdfe91f4db4ec3"}, ] [package.dependencies] annotated-types = ">=0.6.0" -pydantic-core = "2.27.2" +pydantic-core = "2.33.1" typing-extensions = ">=4.12.2" +typing-inspection = ">=0.4.0" [package.extras] email = ["email-validator (>=2.0.0)"] @@ -3279,116 +3408,135 @@ timezone = ["tzdata"] [[package]] name = "pydantic-core" -version = "2.27.2" +version = "2.33.1" description = "Core functionality for Pydantic validation and serialization" optional = false -python-versions = ">=3.8" +python-versions = ">=3.9" files = [ - {file = "pydantic_core-2.27.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2d367ca20b2f14095a8f4fa1210f5a7b78b8a20009ecced6b12818f455b1e9fa"}, - {file = "pydantic_core-2.27.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:491a2b73db93fab69731eaee494f320faa4e093dbed776be1a829c2eb222c34c"}, - {file = "pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7969e133a6f183be60e9f6f56bfae753585680f3b7307a8e555a948d443cc05a"}, - {file = "pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3de9961f2a346257caf0aa508a4da705467f53778e9ef6fe744c038119737ef5"}, - {file = "pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e2bb4d3e5873c37bb3dd58714d4cd0b0e6238cebc4177ac8fe878f8b3aa8e74c"}, - {file = "pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = 
"sha256:280d219beebb0752699480fe8f1dc61ab6615c2046d76b7ab7ee38858de0a4e7"}, - {file = "pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47956ae78b6422cbd46f772f1746799cbb862de838fd8d1fbd34a82e05b0983a"}, - {file = "pydantic_core-2.27.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:14d4a5c49d2f009d62a2a7140d3064f686d17a5d1a268bc641954ba181880236"}, - {file = "pydantic_core-2.27.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:337b443af21d488716f8d0b6164de833e788aa6bd7e3a39c005febc1284f4962"}, - {file = "pydantic_core-2.27.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:03d0f86ea3184a12f41a2d23f7ccb79cdb5a18e06993f8a45baa8dfec746f0e9"}, - {file = "pydantic_core-2.27.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7041c36f5680c6e0f08d922aed302e98b3745d97fe1589db0a3eebf6624523af"}, - {file = "pydantic_core-2.27.2-cp310-cp310-win32.whl", hash = "sha256:50a68f3e3819077be2c98110c1f9dcb3817e93f267ba80a2c05bb4f8799e2ff4"}, - {file = "pydantic_core-2.27.2-cp310-cp310-win_amd64.whl", hash = "sha256:e0fd26b16394ead34a424eecf8a31a1f5137094cabe84a1bcb10fa6ba39d3d31"}, - {file = "pydantic_core-2.27.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:8e10c99ef58cfdf2a66fc15d66b16c4a04f62bca39db589ae8cba08bc55331bc"}, - {file = "pydantic_core-2.27.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:26f32e0adf166a84d0cb63be85c562ca8a6fa8de28e5f0d92250c6b7e9e2aff7"}, - {file = "pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8c19d1ea0673cd13cc2f872f6c9ab42acc4e4f492a7ca9d3795ce2b112dd7e15"}, - {file = "pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5e68c4446fe0810e959cdff46ab0a41ce2f2c86d227d96dc3847af0ba7def306"}, - {file = "pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:d9640b0059ff4f14d1f37321b94061c6db164fbe49b334b31643e0528d100d99"}, - {file = "pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:40d02e7d45c9f8af700f3452f329ead92da4c5f4317ca9b896de7ce7199ea459"}, - {file = "pydantic_core-2.27.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c1fd185014191700554795c99b347d64f2bb637966c4cfc16998a0ca700d048"}, - {file = "pydantic_core-2.27.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d81d2068e1c1228a565af076598f9e7451712700b673de8f502f0334f281387d"}, - {file = "pydantic_core-2.27.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1a4207639fb02ec2dbb76227d7c751a20b1a6b4bc52850568e52260cae64ca3b"}, - {file = "pydantic_core-2.27.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:3de3ce3c9ddc8bbd88f6e0e304dea0e66d843ec9de1b0042b0911c1663ffd474"}, - {file = "pydantic_core-2.27.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:30c5f68ded0c36466acede341551106821043e9afaad516adfb6e8fa80a4e6a6"}, - {file = "pydantic_core-2.27.2-cp311-cp311-win32.whl", hash = "sha256:c70c26d2c99f78b125a3459f8afe1aed4d9687c24fd677c6a4436bc042e50d6c"}, - {file = "pydantic_core-2.27.2-cp311-cp311-win_amd64.whl", hash = "sha256:08e125dbdc505fa69ca7d9c499639ab6407cfa909214d500897d02afb816e7cc"}, - {file = "pydantic_core-2.27.2-cp311-cp311-win_arm64.whl", hash = "sha256:26f0d68d4b235a2bae0c3fc585c585b4ecc51382db0e3ba402a22cbc440915e4"}, - {file = "pydantic_core-2.27.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:9e0c8cfefa0ef83b4da9588448b6d8d2a2bf1a53c3f1ae5fca39eb3061e2f0b0"}, - {file = "pydantic_core-2.27.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:83097677b8e3bd7eaa6775720ec8e0405f1575015a463285a92bfdfe254529ef"}, - {file = "pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:172fce187655fece0c90d90a678424b013f8fbb0ca8b036ac266749c09438cb7"}, - {file = 
"pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:519f29f5213271eeeeb3093f662ba2fd512b91c5f188f3bb7b27bc5973816934"}, - {file = "pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:05e3a55d124407fffba0dd6b0c0cd056d10e983ceb4e5dbd10dda135c31071d6"}, - {file = "pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9c3ed807c7b91de05e63930188f19e921d1fe90de6b4f5cd43ee7fcc3525cb8c"}, - {file = "pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6fb4aadc0b9a0c063206846d603b92030eb6f03069151a625667f982887153e2"}, - {file = "pydantic_core-2.27.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:28ccb213807e037460326424ceb8b5245acb88f32f3d2777427476e1b32c48c4"}, - {file = "pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:de3cd1899e2c279b140adde9357c4495ed9d47131b4a4eaff9052f23398076b3"}, - {file = "pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:220f892729375e2d736b97d0e51466252ad84c51857d4d15f5e9692f9ef12be4"}, - {file = "pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:a0fcd29cd6b4e74fe8ddd2c90330fd8edf2e30cb52acda47f06dd615ae72da57"}, - {file = "pydantic_core-2.27.2-cp312-cp312-win32.whl", hash = "sha256:1e2cb691ed9834cd6a8be61228471d0a503731abfb42f82458ff27be7b2186fc"}, - {file = "pydantic_core-2.27.2-cp312-cp312-win_amd64.whl", hash = "sha256:cc3f1a99a4f4f9dd1de4fe0312c114e740b5ddead65bb4102884b384c15d8bc9"}, - {file = "pydantic_core-2.27.2-cp312-cp312-win_arm64.whl", hash = "sha256:3911ac9284cd8a1792d3cb26a2da18f3ca26c6908cc434a18f730dc0db7bfa3b"}, - {file = "pydantic_core-2.27.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:7d14bd329640e63852364c306f4d23eb744e0f8193148d4044dd3dacdaacbd8b"}, - {file = "pydantic_core-2.27.2-cp313-cp313-macosx_11_0_arm64.whl", hash = 
"sha256:82f91663004eb8ed30ff478d77c4d1179b3563df6cdb15c0817cd1cdaf34d154"}, - {file = "pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:71b24c7d61131bb83df10cc7e687433609963a944ccf45190cfc21e0887b08c9"}, - {file = "pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fa8e459d4954f608fa26116118bb67f56b93b209c39b008277ace29937453dc9"}, - {file = "pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ce8918cbebc8da707ba805b7fd0b382816858728ae7fe19a942080c24e5b7cd1"}, - {file = "pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:eda3f5c2a021bbc5d976107bb302e0131351c2ba54343f8a496dc8783d3d3a6a"}, - {file = "pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd8086fa684c4775c27f03f062cbb9eaa6e17f064307e86b21b9e0abc9c0f02e"}, - {file = "pydantic_core-2.27.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8d9b3388db186ba0c099a6d20f0604a44eabdeef1777ddd94786cdae158729e4"}, - {file = "pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7a66efda2387de898c8f38c0cf7f14fca0b51a8ef0b24bfea5849f1b3c95af27"}, - {file = "pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:18a101c168e4e092ab40dbc2503bdc0f62010e95d292b27827871dc85450d7ee"}, - {file = "pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ba5dd002f88b78a4215ed2f8ddbdf85e8513382820ba15ad5ad8955ce0ca19a1"}, - {file = "pydantic_core-2.27.2-cp313-cp313-win32.whl", hash = "sha256:1ebaf1d0481914d004a573394f4be3a7616334be70261007e47c2a6fe7e50130"}, - {file = "pydantic_core-2.27.2-cp313-cp313-win_amd64.whl", hash = "sha256:953101387ecf2f5652883208769a79e48db18c6df442568a0b5ccd8c2723abee"}, - {file = "pydantic_core-2.27.2-cp313-cp313-win_arm64.whl", hash = 
"sha256:ac4dbfd1691affb8f48c2c13241a2e3b60ff23247cbcf981759c768b6633cf8b"}, - {file = "pydantic_core-2.27.2-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:d3e8d504bdd3f10835468f29008d72fc8359d95c9c415ce6e767203db6127506"}, - {file = "pydantic_core-2.27.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:521eb9b7f036c9b6187f0b47318ab0d7ca14bd87f776240b90b21c1f4f149320"}, - {file = "pydantic_core-2.27.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85210c4d99a0114f5a9481b44560d7d1e35e32cc5634c656bc48e590b669b145"}, - {file = "pydantic_core-2.27.2-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d716e2e30c6f140d7560ef1538953a5cd1a87264c737643d481f2779fc247fe1"}, - {file = "pydantic_core-2.27.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f66d89ba397d92f840f8654756196d93804278457b5fbede59598a1f9f90b228"}, - {file = "pydantic_core-2.27.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:669e193c1c576a58f132e3158f9dfa9662969edb1a250c54d8fa52590045f046"}, - {file = "pydantic_core-2.27.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdbe7629b996647b99c01b37f11170a57ae675375b14b8c13b8518b8320ced5"}, - {file = "pydantic_core-2.27.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d262606bf386a5ba0b0af3b97f37c83d7011439e3dc1a9298f21efb292e42f1a"}, - {file = "pydantic_core-2.27.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:cabb9bcb7e0d97f74df8646f34fc76fbf793b7f6dc2438517d7a9e50eee4f14d"}, - {file = "pydantic_core-2.27.2-cp38-cp38-musllinux_1_1_armv7l.whl", hash = "sha256:d2d63f1215638d28221f664596b1ccb3944f6e25dd18cd3b86b0a4c408d5ebb9"}, - {file = "pydantic_core-2.27.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:bca101c00bff0adb45a833f8451b9105d9df18accb8743b08107d7ada14bd7da"}, - {file = "pydantic_core-2.27.2-cp38-cp38-win32.whl", hash = 
"sha256:f6f8e111843bbb0dee4cb6594cdc73e79b3329b526037ec242a3e49012495b3b"}, - {file = "pydantic_core-2.27.2-cp38-cp38-win_amd64.whl", hash = "sha256:fd1aea04935a508f62e0d0ef1f5ae968774a32afc306fb8545e06f5ff5cdf3ad"}, - {file = "pydantic_core-2.27.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:c10eb4f1659290b523af58fa7cffb452a61ad6ae5613404519aee4bfbf1df993"}, - {file = "pydantic_core-2.27.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef592d4bad47296fb11f96cd7dc898b92e795032b4894dfb4076cfccd43a9308"}, - {file = "pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c61709a844acc6bf0b7dce7daae75195a10aac96a596ea1b776996414791ede4"}, - {file = "pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:42c5f762659e47fdb7b16956c71598292f60a03aa92f8b6351504359dbdba6cf"}, - {file = "pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4c9775e339e42e79ec99c441d9730fccf07414af63eac2f0e48e08fd38a64d76"}, - {file = "pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:57762139821c31847cfb2df63c12f725788bd9f04bc2fb392790959b8f70f118"}, - {file = "pydantic_core-2.27.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0d1e85068e818c73e048fe28cfc769040bb1f475524f4745a5dc621f75ac7630"}, - {file = "pydantic_core-2.27.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:097830ed52fd9e427942ff3b9bc17fab52913b2f50f2880dc4a5611446606a54"}, - {file = "pydantic_core-2.27.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:044a50963a614ecfae59bb1eaf7ea7efc4bc62f49ed594e18fa1e5d953c40e9f"}, - {file = "pydantic_core-2.27.2-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:4e0b4220ba5b40d727c7f879eac379b822eee5d8fff418e9d3381ee45b3b0362"}, - {file = "pydantic_core-2.27.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = 
"sha256:5e4f4bb20d75e9325cc9696c6802657b58bc1dbbe3022f32cc2b2b632c3fbb96"}, - {file = "pydantic_core-2.27.2-cp39-cp39-win32.whl", hash = "sha256:cca63613e90d001b9f2f9a9ceb276c308bfa2a43fafb75c8031c4f66039e8c6e"}, - {file = "pydantic_core-2.27.2-cp39-cp39-win_amd64.whl", hash = "sha256:77d1bca19b0f7021b3a982e6f903dcd5b2b06076def36a652e3907f596e29f67"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:2bf14caea37e91198329b828eae1618c068dfb8ef17bb33287a7ad4b61ac314e"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:b0cb791f5b45307caae8810c2023a184c74605ec3bcbb67d13846c28ff731ff8"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:688d3fd9fcb71f41c4c015c023d12a79d1c4c0732ec9eb35d96e3388a120dcf3"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d591580c34f4d731592f0e9fe40f9cc1b430d297eecc70b962e93c5c668f15f"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:82f986faf4e644ffc189a7f1aafc86e46ef70372bb153e7001e8afccc6e54133"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:bec317a27290e2537f922639cafd54990551725fc844249e64c523301d0822fc"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:0296abcb83a797db256b773f45773da397da75a08f5fcaef41f2044adec05f50"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:0d75070718e369e452075a6017fbf187f788e17ed67a3abd47fa934d001863d9"}, - {file = "pydantic_core-2.27.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:7e17b560be3c98a8e3aa66ce828bdebb9e9ac6ad5466fba92eb74c4c95cb1151"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = 
"sha256:c33939a82924da9ed65dab5a65d427205a73181d8098e79b6b426bdf8ad4e656"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:00bad2484fa6bda1e216e7345a798bd37c68fb2d97558edd584942aa41b7d278"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c817e2b40aba42bac6f457498dacabc568c3b7a986fc9ba7c8d9d260b71485fb"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:251136cdad0cb722e93732cb45ca5299fb56e1344a833640bf93b2803f8d1bfd"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d2088237af596f0a524d3afc39ab3b036e8adb054ee57cbb1dcf8e09da5b29cc"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:d4041c0b966a84b4ae7a09832eb691a35aec90910cd2dbe7a208de59be77965b"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:8083d4e875ebe0b864ffef72a4304827015cff328a1be6e22cc850753bfb122b"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f141ee28a0ad2123b6611b6ceff018039df17f32ada8b534e6aa039545a3efb2"}, - {file = "pydantic_core-2.27.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:7d0c8399fcc1848491f00e0314bd59fb34a9c008761bcb422a057670c3f65e35"}, - {file = "pydantic_core-2.27.2.tar.gz", hash = "sha256:eb026e5a4c1fee05726072337ff51d1efb6f59090b7da90d30ea58625b1ffb39"}, + {file = "pydantic_core-2.33.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3077cfdb6125cc8dab61b155fdd714663e401f0e6883f9632118ec12cf42df26"}, + {file = "pydantic_core-2.33.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8ffab8b2908d152e74862d276cf5017c81a2f3719f14e8e3e8d6b83fda863927"}, + {file = "pydantic_core-2.33.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:5183e4f6a2d468787243ebcd70cf4098c247e60d73fb7d68d5bc1e1beaa0c4db"}, + {file = "pydantic_core-2.33.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:398a38d323f37714023be1e0285765f0a27243a8b1506b7b7de87b647b517e48"}, + {file = "pydantic_core-2.33.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:87d3776f0001b43acebfa86f8c64019c043b55cc5a6a2e313d728b5c95b46969"}, + {file = "pydantic_core-2.33.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c566dd9c5f63d22226409553531f89de0cac55397f2ab8d97d6f06cfce6d947e"}, + {file = "pydantic_core-2.33.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0d5f3acc81452c56895e90643a625302bd6be351e7010664151cc55b7b97f89"}, + {file = "pydantic_core-2.33.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d3a07fadec2a13274a8d861d3d37c61e97a816beae717efccaa4b36dfcaadcde"}, + {file = "pydantic_core-2.33.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:f99aeda58dce827f76963ee87a0ebe75e648c72ff9ba1174a253f6744f518f65"}, + {file = "pydantic_core-2.33.1-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:902dbc832141aa0ec374f4310f1e4e7febeebc3256f00dc359a9ac3f264a45dc"}, + {file = "pydantic_core-2.33.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fe44d56aa0b00d66640aa84a3cbe80b7a3ccdc6f0b1ca71090696a6d4777c091"}, + {file = "pydantic_core-2.33.1-cp310-cp310-win32.whl", hash = "sha256:ed3eb16d51257c763539bde21e011092f127a2202692afaeaccb50db55a31383"}, + {file = "pydantic_core-2.33.1-cp310-cp310-win_amd64.whl", hash = "sha256:694ad99a7f6718c1a498dc170ca430687a39894a60327f548e02a9c7ee4b6504"}, + {file = "pydantic_core-2.33.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:6e966fc3caaf9f1d96b349b0341c70c8d6573bf1bac7261f7b0ba88f96c56c24"}, + {file = "pydantic_core-2.33.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:bfd0adeee563d59c598ceabddf2c92eec77abcb3f4a391b19aa7366170bd9e30"}, + 
{file = "pydantic_core-2.33.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:91815221101ad3c6b507804178a7bb5cb7b2ead9ecd600041669c8d805ebd595"}, + {file = "pydantic_core-2.33.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9fea9c1869bb4742d174a57b4700c6dadea951df8b06de40c2fedb4f02931c2e"}, + {file = "pydantic_core-2.33.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d20eb4861329bb2484c021b9d9a977566ab16d84000a57e28061151c62b349a"}, + {file = "pydantic_core-2.33.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb935c5591573ae3201640579f30128ccc10739b45663f93c06796854405505"}, + {file = "pydantic_core-2.33.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c964fd24e6166420d18fb53996d8c9fd6eac9bf5ae3ec3d03015be4414ce497f"}, + {file = "pydantic_core-2.33.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:681d65e9011f7392db5aa002b7423cc442d6a673c635668c227c6c8d0e5a4f77"}, + {file = "pydantic_core-2.33.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e100c52f7355a48413e2999bfb4e139d2977a904495441b374f3d4fb4a170961"}, + {file = "pydantic_core-2.33.1-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:048831bd363490be79acdd3232f74a0e9951b11b2b4cc058aeb72b22fdc3abe1"}, + {file = "pydantic_core-2.33.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:bdc84017d28459c00db6f918a7272a5190bec3090058334e43a76afb279eac7c"}, + {file = "pydantic_core-2.33.1-cp311-cp311-win32.whl", hash = "sha256:32cd11c5914d1179df70406427097c7dcde19fddf1418c787540f4b730289896"}, + {file = "pydantic_core-2.33.1-cp311-cp311-win_amd64.whl", hash = "sha256:2ea62419ba8c397e7da28a9170a16219d310d2cf4970dbc65c32faf20d828c83"}, + {file = "pydantic_core-2.33.1-cp311-cp311-win_arm64.whl", hash = "sha256:fc903512177361e868bc1f5b80ac8c8a6e05fcdd574a5fb5ffeac5a9982b9e89"}, + {file = 
"pydantic_core-2.33.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:1293d7febb995e9d3ec3ea09caf1a26214eec45b0f29f6074abb004723fc1de8"}, + {file = "pydantic_core-2.33.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:99b56acd433386c8f20be5c4000786d1e7ca0523c8eefc995d14d79c7a081498"}, + {file = "pydantic_core-2.33.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35a5ec3fa8c2fe6c53e1b2ccc2454398f95d5393ab398478f53e1afbbeb4d939"}, + {file = "pydantic_core-2.33.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b172f7b9d2f3abc0efd12e3386f7e48b576ef309544ac3a63e5e9cdd2e24585d"}, + {file = "pydantic_core-2.33.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9097b9f17f91eea659b9ec58148c0747ec354a42f7389b9d50701610d86f812e"}, + {file = "pydantic_core-2.33.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cc77ec5b7e2118b152b0d886c7514a4653bcb58c6b1d760134a9fab915f777b3"}, + {file = "pydantic_core-2.33.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d5e3d15245b08fa4a84cefc6c9222e6f37c98111c8679fbd94aa145f9a0ae23d"}, + {file = "pydantic_core-2.33.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ef99779001d7ac2e2461d8ab55d3373fe7315caefdbecd8ced75304ae5a6fc6b"}, + {file = "pydantic_core-2.33.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:fc6bf8869e193855e8d91d91f6bf59699a5cdfaa47a404e278e776dd7f168b39"}, + {file = "pydantic_core-2.33.1-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:b1caa0bc2741b043db7823843e1bde8aaa58a55a58fda06083b0569f8b45693a"}, + {file = "pydantic_core-2.33.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:ec259f62538e8bf364903a7d0d0239447059f9434b284f5536e8402b7dd198db"}, + {file = "pydantic_core-2.33.1-cp312-cp312-win32.whl", hash = "sha256:e14f369c98a7c15772b9da98987f58e2b509a93235582838bd0d1d8c08b68fda"}, + {file = 
"pydantic_core-2.33.1-cp312-cp312-win_amd64.whl", hash = "sha256:1c607801d85e2e123357b3893f82c97a42856192997b95b4d8325deb1cd0c5f4"}, + {file = "pydantic_core-2.33.1-cp312-cp312-win_arm64.whl", hash = "sha256:8d13f0276806ee722e70a1c93da19748594f19ac4299c7e41237fc791d1861ea"}, + {file = "pydantic_core-2.33.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:70af6a21237b53d1fe7b9325b20e65cbf2f0a848cf77bed492b029139701e66a"}, + {file = "pydantic_core-2.33.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:282b3fe1bbbe5ae35224a0dbd05aed9ccabccd241e8e6b60370484234b456266"}, + {file = "pydantic_core-2.33.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4b315e596282bbb5822d0c7ee9d255595bd7506d1cb20c2911a4da0b970187d3"}, + {file = "pydantic_core-2.33.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1dfae24cf9921875ca0ca6a8ecb4bb2f13c855794ed0d468d6abbec6e6dcd44a"}, + {file = "pydantic_core-2.33.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6dd8ecfde08d8bfadaea669e83c63939af76f4cf5538a72597016edfa3fad516"}, + {file = "pydantic_core-2.33.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2f593494876eae852dc98c43c6f260f45abdbfeec9e4324e31a481d948214764"}, + {file = "pydantic_core-2.33.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:948b73114f47fd7016088e5186d13faf5e1b2fe83f5e320e371f035557fd264d"}, + {file = "pydantic_core-2.33.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e11f3864eb516af21b01e25fac915a82e9ddad3bb0fb9e95a246067398b435a4"}, + {file = "pydantic_core-2.33.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:549150be302428b56fdad0c23c2741dcdb5572413776826c965619a25d9c6bde"}, + {file = "pydantic_core-2.33.1-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:495bc156026efafd9ef2d82372bd38afce78ddd82bf28ef5276c469e57c0c83e"}, + {file = 
"pydantic_core-2.33.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ec79de2a8680b1a67a07490bddf9636d5c2fab609ba8c57597e855fa5fa4dacd"}, + {file = "pydantic_core-2.33.1-cp313-cp313-win32.whl", hash = "sha256:ee12a7be1742f81b8a65b36c6921022301d466b82d80315d215c4c691724986f"}, + {file = "pydantic_core-2.33.1-cp313-cp313-win_amd64.whl", hash = "sha256:ede9b407e39949d2afc46385ce6bd6e11588660c26f80576c11c958e6647bc40"}, + {file = "pydantic_core-2.33.1-cp313-cp313-win_arm64.whl", hash = "sha256:aa687a23d4b7871a00e03ca96a09cad0f28f443690d300500603bd0adba4b523"}, + {file = "pydantic_core-2.33.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:401d7b76e1000d0dd5538e6381d28febdcacb097c8d340dde7d7fc6e13e9f95d"}, + {file = "pydantic_core-2.33.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7aeb055a42d734c0255c9e489ac67e75397d59c6fbe60d155851e9782f276a9c"}, + {file = "pydantic_core-2.33.1-cp313-cp313t-win_amd64.whl", hash = "sha256:338ea9b73e6e109f15ab439e62cb3b78aa752c7fd9536794112e14bee02c8d18"}, + {file = "pydantic_core-2.33.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:5ab77f45d33d264de66e1884fca158bc920cb5e27fd0764a72f72f5756ae8bdb"}, + {file = "pydantic_core-2.33.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e7aaba1b4b03aaea7bb59e1b5856d734be011d3e6d98f5bcaa98cb30f375f2ad"}, + {file = "pydantic_core-2.33.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7fb66263e9ba8fea2aa85e1e5578980d127fb37d7f2e292773e7bc3a38fb0c7b"}, + {file = "pydantic_core-2.33.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3f2648b9262607a7fb41d782cc263b48032ff7a03a835581abbf7a3bec62bcf5"}, + {file = "pydantic_core-2.33.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:723c5630c4259400818b4ad096735a829074601805d07f8cafc366d95786d331"}, + {file = "pydantic_core-2.33.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = 
"sha256:d100e3ae783d2167782391e0c1c7a20a31f55f8015f3293647544df3f9c67824"}, + {file = "pydantic_core-2.33.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:177d50460bc976a0369920b6c744d927b0ecb8606fb56858ff542560251b19e5"}, + {file = "pydantic_core-2.33.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a3edde68d1a1f9af1273b2fe798997b33f90308fb6d44d8550c89fc6a3647cf6"}, + {file = "pydantic_core-2.33.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a62c3c3ef6a7e2c45f7853b10b5bc4ddefd6ee3cd31024754a1a5842da7d598d"}, + {file = "pydantic_core-2.33.1-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:c91dbb0ab683fa0cd64a6e81907c8ff41d6497c346890e26b23de7ee55353f96"}, + {file = "pydantic_core-2.33.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9f466e8bf0a62dc43e068c12166281c2eca72121dd2adc1040f3aa1e21ef8599"}, + {file = "pydantic_core-2.33.1-cp39-cp39-win32.whl", hash = "sha256:ab0277cedb698749caada82e5d099dc9fed3f906a30d4c382d1a21725777a1e5"}, + {file = "pydantic_core-2.33.1-cp39-cp39-win_amd64.whl", hash = "sha256:5773da0ee2d17136b1f1c6fbde543398d452a6ad2a7b54ea1033e2daa739b8d2"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c834f54f8f4640fd7e4b193f80eb25a0602bba9e19b3cd2fc7ffe8199f5ae02"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:049e0de24cf23766f12cc5cc71d8abc07d4a9deb9061b334b62093dedc7cb068"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a28239037b3d6f16916a4c831a5a0eadf856bdd6d2e92c10a0da3a59eadcf3e"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d3da303ab5f378a268fa7d45f37d7d85c3ec19769f28d2cc0c61826a8de21fe"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = 
"sha256:25626fb37b3c543818c14821afe0fd3830bc327a43953bc88db924b68c5723f1"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:3ab2d36e20fbfcce8f02d73c33a8a7362980cff717926bbae030b93ae46b56c7"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:2f9284e11c751b003fd4215ad92d325d92c9cb19ee6729ebd87e3250072cdcde"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:048c01eee07d37cbd066fc512b9d8b5ea88ceeb4e629ab94b3e56965ad655add"}, + {file = "pydantic_core-2.33.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:5ccd429694cf26af7997595d627dd2637e7932214486f55b8a357edaac9dae8c"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3a371dc00282c4b84246509a5ddc808e61b9864aa1eae9ecc92bb1268b82db4a"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:f59295ecc75a1788af8ba92f2e8c6eeaa5a94c22fc4d151e8d9638814f85c8fc"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:08530b8ac922003033f399128505f513e30ca770527cc8bbacf75a84fcc2c74b"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bae370459da6a5466978c0eacf90690cb57ec9d533f8e63e564ef3822bfa04fe"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e3de2777e3b9f4d603112f78006f4ae0acb936e95f06da6cb1a45fbad6bdb4b5"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:3a64e81e8cba118e108d7126362ea30e021291b7805d47e4896e52c791be2761"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:52928d8c1b6bda03cc6d811e8923dffc87a2d3c8b3bfd2ce16471c7147a24850"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = 
"sha256:1b30d92c9412beb5ac6b10a3eb7ef92ccb14e3f2a8d7732e2d739f58b3aa7544"}, + {file = "pydantic_core-2.33.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:f995719707e0e29f0f41a8aa3bcea6e761a36c9136104d3189eafb83f5cec5e5"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:7edbc454a29fc6aeae1e1eecba4f07b63b8d76e76a748532233c4c167b4cb9ea"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:ad05b683963f69a1d5d2c2bdab1274a31221ca737dbbceaa32bcb67359453cdd"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:df6a94bf9452c6da9b5d76ed229a5683d0306ccb91cca8e1eea883189780d568"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7965c13b3967909a09ecc91f21d09cfc4576bf78140b988904e94f130f188396"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3f1fdb790440a34f6ecf7679e1863b825cb5ffde858a9197f851168ed08371e5"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:5277aec8d879f8d05168fdd17ae811dd313b8ff894aeeaf7cd34ad28b4d77e33"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:8ab581d3530611897d863d1a649fb0644b860286b4718db919bfd51ece41f10b"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:0483847fa9ad5e3412265c1bd72aad35235512d9ce9d27d81a56d935ef489672"}, + {file = "pydantic_core-2.33.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:de9e06abe3cc5ec6a2d5f75bc99b0bdca4f5c719a5b34026f8c57efbdecd2ee3"}, + {file = "pydantic_core-2.33.1.tar.gz", hash = "sha256:bcc9c6fdb0ced789245b02b7d6603e17d1563064ddcfc36f046b61c0c05dd9df"}, ] [package.dependencies] typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0" +[[package]] +name = "pydantic-settings" +version = "2.8.1" +description = "Settings management 
using Pydantic" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pydantic_settings-2.8.1-py3-none-any.whl", hash = "sha256:81942d5ac3d905f7f3ee1a70df5dfb62d5569c12f51a5a647defc1c3d9ee2e9c"}, + {file = "pydantic_settings-2.8.1.tar.gz", hash = "sha256:d5c663dfbe9db9d5e1c646b2e161da12f0d734d422ee56f567d0ea2cee4e8585"}, +] + +[package.dependencies] +pydantic = ">=2.7.0" +python-dotenv = ">=0.21.0" + +[package.extras] +azure-key-vault = ["azure-identity (>=1.16.0)", "azure-keyvault-secrets (>=4.8.0)"] +toml = ["tomli (>=2.0.1)"] +yaml = ["pyyaml (>=6.0.1)"] + [[package]] name = "pyglove" version = "0.4.4" @@ -3431,14 +3579,10 @@ files = [ [package.dependencies] astroid = ">=2.15.8,<=2.17.0-dev0" colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""} -dill = [ - {version = ">=0.2", markers = "python_version < \"3.11\""}, - {version = ">=0.3.6", markers = "python_version >= \"3.11\""}, -] +dill = {version = ">=0.3.6", markers = "python_version >= \"3.11\""} isort = ">=4.2.5,<6" mccabe = ">=0.6,<0.8" platformdirs = ">=2.2.0" -tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""} tomlkit = ">=0.10.1" [package.extras] @@ -3447,13 +3591,13 @@ testutils = ["gitpython (>3)"] [[package]] name = "pyparsing" -version = "3.2.1" +version = "3.2.3" description = "pyparsing module - Classes and methods to define and execute parsing grammars" optional = false python-versions = ">=3.9" files = [ - {file = "pyparsing-3.2.1-py3-none-any.whl", hash = "sha256:506ff4f4386c4cec0590ec19e6302d3aedb992fdc02c761e90416f158dacf8e1"}, - {file = "pyparsing-3.2.1.tar.gz", hash = "sha256:61980854fd66de3a90028d679a954d5f2623e83144b5afe5ee86f43d762e5f0a"}, + {file = "pyparsing-3.2.3-py3-none-any.whl", hash = "sha256:a749938e02d6fd0b59b356ca504a24982314bb090c383e3cf201c95ef7e2bfcf"}, + {file = "pyparsing-3.2.3.tar.gz", hash = "sha256:b9c13f1ab8b3b542f72e28f634bad4de758ab3ce4546e4301970ad6fa77c38be"}, ] [package.extras] @@ -3494,6 +3638,26 @@ files = [ 
[package.extras] dev = ["build", "flake8", "mypy", "pytest", "twine"] +[[package]] +name = "pytest" +version = "8.3.5" +description = "pytest: simple powerful testing with Python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820"}, + {file = "pytest-8.3.5.tar.gz", hash = "sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845"}, +] + +[package.dependencies] +colorama = {version = "*", markers = "sys_platform == \"win32\""} +iniconfig = "*" +packaging = "*" +pluggy = ">=1.5,<2" + +[package.extras] +dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"] + [[package]] name = "python-dateutil" version = "2.9.0.post0" @@ -3510,13 +3674,13 @@ six = ">=1.5" [[package]] name = "python-dotenv" -version = "1.0.1" +version = "1.1.0" description = "Read key-value pairs from a .env file and set them as environment variables" optional = false -python-versions = ">=3.8" +python-versions = ">=3.9" files = [ - {file = "python-dotenv-1.0.1.tar.gz", hash = "sha256:e324ee90a023d808f1959c46bcbc04446a10ced277783dc6ee09987c37ec10ca"}, - {file = "python_dotenv-1.0.1-py3-none-any.whl", hash = "sha256:f7b63ef50f1b690dddf550d03497b66d609393b40b564ed0d674909a68ebf16a"}, + {file = "python_dotenv-1.1.0-py3-none-any.whl", hash = "sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d"}, + {file = "python_dotenv-1.1.0.tar.gz", hash = "sha256:41f90bc6f5f177fb41f53e87666db362025010eb28f60a01c9143bfa33a2b2d5"}, ] [package.extras] @@ -3542,13 +3706,13 @@ test = ["mypy", "pyaml", "pytest", "toml", "types-PyYAML", "types-toml"] [[package]] name = "pytz" -version = "2025.1" +version = "2025.2" description = "World timezone definitions, modern and historical" optional = false python-versions = "*" files = [ - {file = "pytz-2025.1-py2.py3-none-any.whl", hash = 
"sha256:89dd22dca55b46eac6eda23b2d72721bf1bdfef212645d81513ef5d03038de57"}, - {file = "pytz-2025.1.tar.gz", hash = "sha256:c2db42be2a2518b28e65f9207c4d05e6ff547d1efa4086469ef855e4ab70178e"}, + {file = "pytz-2025.2-py2.py3-none-any.whl", hash = "sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00"}, + {file = "pytz-2025.2.tar.gz", hash = "sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3"}, ] [[package]] @@ -3676,7 +3840,6 @@ files = [ [package.dependencies] markdown-it-py = ">=2.2.0" pygments = ">=2.13.0,<3.0.0" -typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.11\""} [package.extras] jupyter = ["ipywidgets (>=7.5.1,<9)"] @@ -3851,18 +4014,18 @@ test = ["pytest"] [[package]] name = "setuptools" -version = "75.8.0" +version = "78.1.0" description = "Easily download, build, install, upgrade, and uninstall Python packages" optional = false python-versions = ">=3.9" files = [ - {file = "setuptools-75.8.0-py3-none-any.whl", hash = "sha256:e3982f444617239225d675215d51f6ba05f845d4eec313da4418fdbb56fb27e3"}, - {file = "setuptools-75.8.0.tar.gz", hash = "sha256:c5afc8f407c626b8313a86e10311dd3f661c6cd9c09d4bf8c15c0e11f9f2b0e6"}, + {file = "setuptools-78.1.0-py3-none-any.whl", hash = "sha256:3e386e96793c8702ae83d17b853fb93d3e09ef82ec62722e61da5cd22376dcd8"}, + {file = "setuptools-78.1.0.tar.gz", hash = "sha256:18fd474d4a82a5f83dac888df697af65afa82dec7323d09c3e37d1f14288da54"}, ] [package.extras] check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1)", "ruff (>=0.8.0)"] -core = ["importlib_metadata (>=6)", "jaraco.collections", "jaraco.functools (>=4)", "jaraco.text (>=3.7)", "more_itertools", "more_itertools (>=8.8)", "packaging", "packaging (>=24.2)", "platformdirs (>=4.2.2)", "tomli (>=2.0.1)", "wheel (>=0.43.0)"] +core = ["importlib_metadata (>=6)", "jaraco.functools (>=4)", "jaraco.text (>=3.7)", "more_itertools", "more_itertools (>=8.8)", "packaging (>=24.2)", "platformdirs (>=4.2.2)", "tomli 
(>=2.0.1)", "wheel (>=0.43.0)"] cover = ["pytest-cov"] doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "pyproject-hooks (!=1.1)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier", "towncrier (<24.7)"] enabler = ["pytest-enabler (>=2.2)"] @@ -4051,6 +4214,25 @@ files = [ {file = "soupsieve-2.6.tar.gz", hash = "sha256:e2e68417777af359ec65daac1057404a3c8a5455bb8abc36f1a9866ab1a51abb"}, ] +[[package]] +name = "sse-starlette" +version = "2.2.1" +description = "SSE plugin for Starlette" +optional = false +python-versions = ">=3.9" +files = [ + {file = "sse_starlette-2.2.1-py3-none-any.whl", hash = "sha256:6410a3d3ba0c89e7675d4c273a301d64649c03a5ef1ca101f10b47f895fd0e99"}, + {file = "sse_starlette-2.2.1.tar.gz", hash = "sha256:54470d5f19274aeed6b2d473430b08b4b379ea851d953b11d7f1c4a2c118b419"}, +] + +[package.dependencies] +anyio = ">=4.7.0" +starlette = ">=0.41.3" + +[package.extras] +examples = ["fastapi"] +uvicorn = ["uvicorn (>=0.34.0)"] + [[package]] name = "stack-data" version = "0.6.3" @@ -4072,13 +4254,13 @@ tests = ["cython", "littleutils", "pygments", "pytest", "typeguard"] [[package]] name = "starlette" -version = "0.45.3" +version = "0.46.1" description = "The little ASGI library that shines." 
optional = false python-versions = ">=3.9" files = [ - {file = "starlette-0.45.3-py3-none-any.whl", hash = "sha256:dfb6d332576f136ec740296c7e8bb8c8a7125044e7c6da30744718880cdd059d"}, - {file = "starlette-0.45.3.tar.gz", hash = "sha256:2cbcba2a75806f8a41c722141486f37c28e30a0921c5f6fe4346cb0dcee1302f"}, + {file = "starlette-0.46.1-py3-none-any.whl", hash = "sha256:77c74ed9d2720138b25875133f3a2dae6d854af2ec37dceb56aef370c1d8a227"}, + {file = "starlette-0.46.1.tar.gz", hash = "sha256:3c88d58ee4bd1bb807c0d1acb381838afc7752f9ddaec81bbe4383611d833230"}, ] [package.dependencies] @@ -4106,13 +4288,13 @@ dev = ["hypothesis (>=6.70.0)", "pytest (>=7.1.0)"] [[package]] name = "tenacity" -version = "9.0.0" +version = "9.1.2" description = "Retry code until it succeeds" optional = false -python-versions = ">=3.8" +python-versions = ">=3.9" files = [ - {file = "tenacity-9.0.0-py3-none-any.whl", hash = "sha256:93de0c98785b27fcf659856aa9f54bfbd399e29969b0621bc7f762bd441b4539"}, - {file = "tenacity-9.0.0.tar.gz", hash = "sha256:807f37ca97d62aa361264d497b0e31e92b8027044942bfa756160d908320d73b"}, + {file = "tenacity-9.1.2-py3-none-any.whl", hash = "sha256:f77bf36710d8b73a50b2dd155c97b870017ad21afe6ab300326b0371b3b05138"}, + {file = "tenacity-9.1.2.tar.gz", hash = "sha256:1169d376c297e7de388d18b4481760d478b0e99a777cad3a9c86e556f4b697cb"}, ] [package.extras] @@ -4141,6 +4323,28 @@ six = ">1.9" tensorboard-data-server = ">=0.7.0,<0.8.0" werkzeug = ">=1.0.1" +[[package]] +name = "tensorboard" +version = "2.19.0" +description = "TensorBoard lets you watch Tensors Flow" +optional = false +python-versions = ">=3.9" +files = [ + {file = "tensorboard-2.19.0-py3-none-any.whl", hash = "sha256:5e71b98663a641a7ce8a6e70b0be8e1a4c0c45d48760b076383ac4755c35b9a0"}, +] + +[package.dependencies] +absl-py = ">=0.4" +grpcio = ">=1.48.2" +markdown = ">=2.6.8" +numpy = ">=1.12.0" +packaging = "*" +protobuf = ">=3.19.6,<4.24.0 || >4.24.0" +setuptools = ">=41.0.0" +six = ">1.9" +tensorboard-data-server = 
">=0.7.0,<0.8.0" +werkzeug = ">=1.0.1" + [[package]] name = "tensorboard-data-server" version = "0.7.2" @@ -4155,27 +4359,27 @@ files = [ [[package]] name = "tensorflow" -version = "2.18.0" +version = "2.18.1" description = "TensorFlow is an open source machine learning framework for everyone." optional = false python-versions = ">=3.9" files = [ - {file = "tensorflow-2.18.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8da90a9388a1f6dd00d626590d2b5810faffbb3e7367f9783d80efff882340ee"}, - {file = "tensorflow-2.18.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:589342fb9bdcab2e9af0f946da4ca97757677e297d934fcdc087e87db99d6353"}, - {file = "tensorflow-2.18.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1eb77fae50d699442726d1b23c7512c97cd688cc7d857b028683d4535bbf3709"}, - {file = "tensorflow-2.18.0-cp310-cp310-win_amd64.whl", hash = "sha256:46f5a8b4e6273f488dc069fc3ac2211b23acd3d0437d919349c787fa341baa8a"}, - {file = "tensorflow-2.18.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:453cb60638a02fd26316fb36c8cbcf1569d33671f17c658ca0cf2b4626f851e7"}, - {file = "tensorflow-2.18.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85f1e7369af6d329b117b52e86093cd1e0458dd5404bf5b665853f873dd00b48"}, - {file = "tensorflow-2.18.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b8dd70fa3600bfce66ab529eebb804e1f9d7c863d2f71bc8fe9fc7a1ec3976"}, - {file = "tensorflow-2.18.0-cp311-cp311-win_amd64.whl", hash = "sha256:6e8b0f499ef0b7652480a58e358a73844932047f21c42c56f7f3bdcaf0803edc"}, - {file = "tensorflow-2.18.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:ec4133a215c59314e929e7cbe914579d3afbc7874d9fa924873ee633fe4f71d0"}, - {file = "tensorflow-2.18.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4822904b3559d8a9c25f0fe5fef191cfc1352ceca42ca64f2a7bc7ae0ff4a1f5"}, - {file = 
"tensorflow-2.18.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfdd65ea7e064064283dd78d529dd621257ee617218f63681935fd15817c6286"}, - {file = "tensorflow-2.18.0-cp312-cp312-win_amd64.whl", hash = "sha256:a701c2d3dca5f2efcab315b2c217f140ebd3da80410744e87d77016b3aaf53cb"}, - {file = "tensorflow-2.18.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:336cace378c129c20fee6292f6a541165073d153a9a4c9cf4f14478a81895776"}, - {file = "tensorflow-2.18.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bcfd32134de8f95515b2d0ced89cdae15484b787d3a21893e9291def06c10c4e"}, - {file = "tensorflow-2.18.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ada1f7290c75b34748ee7378c1b77927e4044c94b8dc72dc75e7667c4fdaeb94"}, - {file = "tensorflow-2.18.0-cp39-cp39-win_amd64.whl", hash = "sha256:f8c946df1cb384504578fac1c199a95322373b8e04abd88aa8ae01301df469ea"}, + {file = "tensorflow-2.18.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8baba2b0f9f286f8115a0005d17c020d2febf95e434302eaf758f2020c1c4de5"}, + {file = "tensorflow-2.18.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2dd7284768f5a6b10e41a700e8141de70756dc62ed5d0b93360d131ccc0a6ba8"}, + {file = "tensorflow-2.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f929842999d60e7da67743ae5204b477259f3b771c02e5e437d232267e49f18"}, + {file = "tensorflow-2.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:db1d186c17b6a7c51813e275d0a83e964669822372aa01d074cf64b853ee76ac"}, + {file = "tensorflow-2.18.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:661029cd769b311db910b79a3a6ef50a5a61ecc947172228c777a49989722508"}, + {file = "tensorflow-2.18.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8a6485edd2148f70d011dbd1d8dc2c775e91774a5a159466e83d0d1f21580944"}, + {file = "tensorflow-2.18.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:c9f87e5d2a680a4595f5dc30daf6bbaec9d4129b46d7ef1b2af63c46ac7d2828"}, + {file = "tensorflow-2.18.1-cp311-cp311-win_amd64.whl", hash = "sha256:99223d0dde08aec4ceebb3bf0f80da7802e18462dab0d5048225925c064d2af7"}, + {file = "tensorflow-2.18.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:98afa9c7f21481cdc6ccd09507a7878d533150fbb001840cc145e2132eb40942"}, + {file = "tensorflow-2.18.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1ba52b9c06ab8102b31e50acfaf56899b923171e603c8942f2bfeb181d6bb59e"}, + {file = "tensorflow-2.18.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:442d2a774811789a8ad948e7286cb950fe3d87d3754e8cc6449d53b03dbfdaa6"}, + {file = "tensorflow-2.18.1-cp312-cp312-win_amd64.whl", hash = "sha256:210baf6d421f3e044b6e09efd04494a33b75334922fe6cf11970e2885172620a"}, + {file = "tensorflow-2.18.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:e0ffa318b969779baad01a11e7799dda9677ee33ccbbcdbf7b735c27f53d2a9b"}, + {file = "tensorflow-2.18.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c0cd29c323908ed35ce72fbcce66f2ef7c8657f9c5024860ffd7ea64cf5d35d"}, + {file = "tensorflow-2.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4ecbb9b3cd3f223ff6861faa1a4c2719c138d870dba90545826685b1c5ba5901"}, + {file = "tensorflow-2.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:0f84a4c87a30cfb279c30b0077541cb5aaac7506d32adde585adb185277e49d2"}, ] [package.dependencies] @@ -4188,7 +4392,7 @@ grpcio = ">=1.24.3,<2.0" h5py = ">=3.11.0" keras = ">=3.5.0" libclang = ">=13.0.0" -ml-dtypes = ">=0.4.0,<0.5.0" +ml-dtypes = ">=0.4.0,<1.0.0" numpy = ">=1.26.0,<2.1.0" opt-einsum = ">=2.3.2" packaging = "*" @@ -4197,7 +4401,6 @@ requests = ">=2.21.0,<3" setuptools = "*" six = ">=1.12.0" tensorboard = ">=2.18,<2.19" -tensorflow-io-gcs-filesystem = {version = ">=0.23.1", markers = "python_version < \"3.12\""} termcolor = ">=1.1.0" typing-extensions = ">=3.6.6" wrapt = 
">=1.11.0" @@ -4206,54 +4409,70 @@ wrapt = ">=1.11.0" and-cuda = ["nvidia-cublas-cu12 (==12.5.3.2)", "nvidia-cuda-cupti-cu12 (==12.5.82)", "nvidia-cuda-nvcc-cu12 (==12.5.82)", "nvidia-cuda-nvrtc-cu12 (==12.5.82)", "nvidia-cuda-runtime-cu12 (==12.5.82)", "nvidia-cudnn-cu12 (==9.3.0.75)", "nvidia-cufft-cu12 (==11.2.3.61)", "nvidia-curand-cu12 (==10.3.6.82)", "nvidia-cusolver-cu12 (==11.6.3.83)", "nvidia-cusparse-cu12 (==12.5.1.3)", "nvidia-nccl-cu12 (==2.21.5)", "nvidia-nvjitlink-cu12 (==12.5.82)"] [[package]] -name = "tensorflow-io-gcs-filesystem" -version = "0.37.1" -description = "TensorFlow IO" +name = "tensorflow" +version = "2.19.0" +description = "TensorFlow is an open source machine learning framework for everyone." optional = false -python-versions = "<3.13,>=3.7" +python-versions = ">=3.9" files = [ - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:249c12b830165841411ba71e08215d0e94277a49c551e6dd5d72aab54fe5491b"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:257aab23470a0796978efc9c2bcf8b0bc80f22e6298612a4c0a50d3f4e88060c"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8febbfcc67c61e542a5ac1a98c7c20a91a5e1afc2e14b1ef0cb7c28bc3b6aa70"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9679b36e3a80921876f31685ab6f7270f3411a4cc51bc2847e80d0e4b5291e27"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:32c50ab4e29a23c1f91cd0f9ab8c381a0ab10f45ef5c5252e94965916041737c"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:b02f9c5f94fd62773954a04f69b68c4d576d076fd0db4ca25d5479f0fbfcdbad"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:6e1f2796b57e799a8ca1b75bf47c2aaa437c968408cc1a402a9862929e104cda"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ee7c8ee5fe2fd8cb6392669ef16e71841133041fee8a330eff519ad9b36e4556"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:ffebb6666a7bfc28005f4fbbb111a455b5e7d6cd3b12752b7050863ecb27d5cc"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:fe8dcc6d222258a080ac3dfcaaaa347325ce36a7a046277f6b3e19abc1efb3c5"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fbb33f1745f218464a59cecd9a18e32ca927b0f4d77abd8f8671b645cc1a182f"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:286389a203a5aee1a4fa2e53718c661091aa5fea797ff4fa6715ab8436b02e6c"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:ee5da49019670ed364f3e5fb86b46420841a6c3cb52a300553c63841671b3e6d"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:8943036bbf84e7a2be3705cb56f9c9df7c48c9e614bb941f0936c58e3ca89d6f"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:426de1173cb81fbd62becec2012fc00322a295326d90eb6c737fab636f182aed"}, - {file = "tensorflow_io_gcs_filesystem-0.37.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0df00891669390078a003cedbdd3b8e645c718b111917535fa1d7725e95cdb95"}, + {file = "tensorflow-2.19.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:c95604f25c3032e9591c7e01e457fdd442dde48e9cc1ce951078973ab1b4ca34"}, + {file = "tensorflow-2.19.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:2b39293cae3aeee534dc4746dc6097b48c281e5e8b9a423efbd14d4495968e5c"}, + {file = "tensorflow-2.19.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83e2d6c748105488205d30e43093f28fc90e8da0176db9ddee12e2784cf435e8"}, + {file = "tensorflow-2.19.0-cp310-cp310-win_amd64.whl", hash = "sha256:d3f47452246bd08902f0c865d3839fa715f1738d801d256934b943aa21c5a1d2"}, + {file = "tensorflow-2.19.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:68d462278ad88c193c16d7b905864ff0117d61dc20deded9264d1999d513c115"}, + {file = "tensorflow-2.19.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c92d3ff958ac0ee0eb343f10d4055b3a2815635cb3ee0836f9b1d735c76ee098"}, + {file = "tensorflow-2.19.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:390747786ac979809fa1cfcf6916220ef0bfed6b9e1b8c643b6b09184a868fe4"}, + {file = "tensorflow-2.19.0-cp311-cp311-win_amd64.whl", hash = "sha256:ade03804d81e696f8b9045bbe2dd5d0146e36c63d85bf2eae8225ffa74a03713"}, + {file = "tensorflow-2.19.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:821916beebd541c95b451dd911af442e11a7cb3aabde9084cab2be5c4d8b2bae"}, + {file = "tensorflow-2.19.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:10f4bfbd33ee23408b98c67e63654f4697845f005555dcc6b790ecfaeabd1308"}, + {file = "tensorflow-2.19.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e28b26594cd793e7f52471b8f2d98aafc6d232868a366462d238f7967935a6f6"}, + {file = "tensorflow-2.19.0-cp312-cp312-win_amd64.whl", hash = "sha256:5eae58946f5a22f4d5656a95e54c5d7aae5a5483c388922a207667d8858c37b9"}, + {file = "tensorflow-2.19.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:ad15dbf488e287127a18e2274c64a201ea50ee32444a84657ead72d10438cb09"}, + {file = "tensorflow-2.19.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cb87fb2052b819adffb749b7e9426bd109c8cf98751e684de73567424ab2a88"}, + {file = 
"tensorflow-2.19.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:849f72820e2bb1bfd4f6446d09db4635896f2ceaa35212a98a1238c9439f6f93"}, + {file = "tensorflow-2.19.0-cp39-cp39-win_amd64.whl", hash = "sha256:88c594d98bbe6d81d069f418ae823b03f7273c8b612d7073a09373483f212d9a"}, ] +[package.dependencies] +absl-py = ">=1.0.0" +astunparse = ">=1.6.0" +flatbuffers = ">=24.3.25" +gast = ">=0.2.1,<0.5.0 || >0.5.0,<0.5.1 || >0.5.1,<0.5.2 || >0.5.2" +google-pasta = ">=0.1.1" +grpcio = ">=1.24.3,<2.0" +h5py = ">=3.11.0" +keras = ">=3.5.0" +libclang = ">=13.0.0" +ml-dtypes = ">=0.5.1,<1.0.0" +numpy = ">=1.26.0,<2.2.0" +opt-einsum = ">=2.3.2" +packaging = "*" +protobuf = ">=3.20.3,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0dev" +requests = ">=2.21.0,<3" +setuptools = "*" +six = ">=1.12.0" +tensorboard = ">=2.19.0,<2.20.0" +termcolor = ">=1.1.0" +typing-extensions = ">=3.6.6" +wrapt = ">=1.11.0" + [package.extras] -tensorflow = ["tensorflow (>=2.16.0,<2.17.0)"] -tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.16.0,<2.17.0)"] -tensorflow-cpu = ["tensorflow-cpu (>=2.16.0,<2.17.0)"] -tensorflow-gpu = ["tensorflow-gpu (>=2.16.0,<2.17.0)"] -tensorflow-rocm = ["tensorflow-rocm (>=2.16.0,<2.17.0)"] +and-cuda = ["nvidia-cublas-cu12 (==12.5.3.2)", "nvidia-cuda-cupti-cu12 (==12.5.82)", "nvidia-cuda-nvcc-cu12 (==12.5.82)", "nvidia-cuda-nvrtc-cu12 (==12.5.82)", "nvidia-cuda-runtime-cu12 (==12.5.82)", "nvidia-cudnn-cu12 (==9.3.0.75)", "nvidia-cufft-cu12 (==11.2.3.61)", "nvidia-curand-cu12 (==10.3.6.82)", "nvidia-cusolver-cu12 (==11.6.3.83)", "nvidia-cusparse-cu12 (==12.5.1.3)", "nvidia-nccl-cu12 (==2.23.4)", "nvidia-nvjitlink-cu12 (==12.5.82)"] [[package]] name = "tensorflow-metadata" -version = "1.16.1" +version = "1.14.0" description = "Library and standards for schema and statistics." 
optional = false -python-versions = "<4,>=3.9" +python-versions = ">=3.8,<4" files = [ - {file = "tensorflow_metadata-1.16.1-py3-none-any.whl", hash = "sha256:2ce72ea31d78a00c0c74c6d465482335aa5cb2a3b2a104dedba0b258bc7bb18a"}, + {file = "tensorflow_metadata-1.14.0-py3-none-any.whl", hash = "sha256:5ff79bf96f98c800fc08270b852663afe7e74d7e1f92b50ba1487bfc63894cdb"}, ] [package.dependencies] -absl-py = ">=0.9,<3.0.0" -googleapis-common-protos = {version = ">=1.56.4,<2", markers = "python_version >= \"3.11\""} -protobuf = [ - {version = ">=3.20.3,<4.21", markers = "python_version < \"3.11\""}, - {version = ">=4.25.2,<6.0.0dev", markers = "python_version >= \"3.11\""}, -] +absl-py = ">=0.9,<2.0.0" +googleapis-common-protos = ">=1.52.0,<2" +protobuf = ">=3.20.3,<4.21" [[package]] name = "tensorflow-text" @@ -4283,34 +4502,62 @@ tensorflow = ">=2.18.0,<2.19" tensorflow-cpu = ["tensorflow-cpu (>=2.18.0,<2.19)"] tests = ["absl-py", "pytest", "tensorflow-datasets (>=3.2.0)"] +[[package]] +name = "tensorflow-text" +version = "2.19.0" +description = "TF.Text is a TensorFlow library of text related ops, modules, and subgraphs." 
+optional = false +python-versions = "*" +files = [ + {file = "tensorflow_text-2.19.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5839e214c0a24d9f85e02611ad083c9e5bd3943aedbf2b3efe12a86059e45f07"}, + {file = "tensorflow_text-2.19.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01119fbf7836a50bf7beb118d4fba2211a5b4a1001c7118db2c0294b3f7772ad"}, + {file = "tensorflow_text-2.19.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b9ae0cf9c6b35fd846eea13c5ad3f002660bbeccdb9806ba52a141860d84fd6"}, + {file = "tensorflow_text-2.19.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:640f50e8e49dfb6438c90c2ae79b816ee4a14dee195d6e87958716a19ad3ccfc"}, + {file = "tensorflow_text-2.19.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf3582d6de40f4198f7b59b872692f159e61c0173bfbceccc691fdf8a476b781"}, + {file = "tensorflow_text-2.19.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23468daf9ccc8220317b0457e9a72e199a78894fda269dec68e5188e153887e6"}, + {file = "tensorflow_text-2.19.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4e73841fbc06061b0d6f0ef6367e94ec171e4eff2e45938e1d411a27021e0e31"}, + {file = "tensorflow_text-2.19.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a77839d411aae75eacd60fa9d85453d1a2bcada79fc391d10a6d6aacc82b6088"}, + {file = "tensorflow_text-2.19.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc159c551243cf7926be66c68dd0d480d2f7e0d82b4c7e494cb2bfa20d7617ca"}, + {file = "tensorflow_text-2.19.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:739d03528d33c16f5fbffefc22847173243ae41f81a3443a27de8c6125a253ac"}, + {file = "tensorflow_text-2.19.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:29ec88631a21230fdda707391272160a016397483b2323d82d15abbae864e723"}, + {file = 
"tensorflow_text-2.19.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6245f390185cfbc944f7b0859ed8f298faac16c1b12899d65cd374a636ee6163"}, +] + +[package.dependencies] +tensorflow = ">=2.19.0,<2.20" + +[package.extras] +tensorflow-cpu = ["tensorflow-cpu (>=2.19.0,<2.20)"] +tests = ["absl-py", "pytest", "tensorflow-datasets (>=3.2.0)"] + [[package]] name = "tensorstore" -version = "0.1.72" +version = "0.1.73" description = "Read and write large, multi-dimensional arrays" optional = false python-versions = ">=3.10" files = [ - {file = "tensorstore-0.1.72-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:a41b4fe0603943d23472619a8ada70b8d2c9458747fad88b0ce7b29f1ccf4e74"}, - {file = "tensorstore-0.1.72-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:170172b698fefb4b5507c6cb339ca0b75d56d12ba6a43d9569c61800c1eeb121"}, - {file = "tensorstore-0.1.72-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b71134b85f540e17a1ae65da1fb906781b7470ef0ed71d98d29459325897f574"}, - {file = "tensorstore-0.1.72-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:08c5318535aac5e20e247c6e9b43f5887b2293f548de7279650bc73804ccf3ed"}, - {file = "tensorstore-0.1.72-cp310-cp310-win_amd64.whl", hash = "sha256:9113d3fcf78c1366688aa90ee7efdc86b57962ea72276944cc57e916a6180749"}, - {file = "tensorstore-0.1.72-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:599cc7b26b0c96373e89ff5bcf9b76e832802169229680bef985b10011f9bae7"}, - {file = "tensorstore-0.1.72-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a7e7b02da26ca5c95b3c613efd0fe10c082dfa4dc3e9818fefc69e30fe70ea1e"}, - {file = "tensorstore-0.1.72-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a6825cdb6751663ca0bd9abd528ea354ad2199f549bf1f36feac79a6c06efe2"}, - {file = "tensorstore-0.1.72-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ed916b9aeca242a3f367679f65ba376149251ebb28b873becd76c73b688399b6"}, - 
{file = "tensorstore-0.1.72-cp311-cp311-win_amd64.whl", hash = "sha256:5d410c879dc4b34036ec38e20ff05c7e3b0ad5d1eb595412b27a9dbb5e435035"}, - {file = "tensorstore-0.1.72-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:92fac5e2cbc90e5ca8fc72c5bf112816d981e266a3cf9fb1681ba8b3f59537ef"}, - {file = "tensorstore-0.1.72-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:7c9413f8318a4fa259ec5325f569c0759bccee936df44bd2f7bb35c8afdcdfc8"}, - {file = "tensorstore-0.1.72-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c0f722218f494b1631dbec451b9863f579054e27da2f39aab418db4493694abe"}, - {file = "tensorstore-0.1.72-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d5dced3f367308e9fa8e7b72e9e57a4c491fa47c066e035ac33421e2b2408e3f"}, - {file = "tensorstore-0.1.72-cp312-cp312-win_amd64.whl", hash = "sha256:721d599db0113d75ab6ba1365989bbaf2ab752d7a6268f975c8bfd3a8eb6084b"}, - {file = "tensorstore-0.1.72-cp313-cp313-macosx_10_14_x86_64.whl", hash = "sha256:9c3a36f681ffcc104ba931d471447e8901e64e8cc6913b61792870ff59529961"}, - {file = "tensorstore-0.1.72-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0cd951e593a17babbbde1410cfadb4a04e1cddfa5ace0de5ccb41029223f96b9"}, - {file = "tensorstore-0.1.72-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2fdfa0118be0721c110bcbe7e464758f78d3e14ee8c30a911eb8f4465e6c2e81"}, - {file = "tensorstore-0.1.72-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ed6fe937b0433b573c3d6805d0759d33ccc24aa2aba720e4b8ba689c2f9775f"}, - {file = "tensorstore-0.1.72-cp313-cp313-win_amd64.whl", hash = "sha256:66c0658689243af0825fff222fb56fdf05a8553bcb3b471dbf18830161302986"}, - {file = "tensorstore-0.1.72.tar.gz", hash = "sha256:763d7f6898711783f199c8226a9c0b259546f5c6d9b4dc0ad3c9e39627060022"}, + {file = "tensorstore-0.1.73-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:03cec5141a27d2e65e4ff604641cfb1f7989d66c361534392e810b80cbda617d"}, + 
{file = "tensorstore-0.1.73-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7b4e08bfa61880863bedb90499a23c63d9493cf9310207c230086b0a3700c75d"}, + {file = "tensorstore-0.1.73-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87fb7879af73a5b7ded9c9de3e2014baf6468d9d7c47edfc19490907b346e0a6"}, + {file = "tensorstore-0.1.73-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05f7fdcb063f08f40f74c49f92c0f0136c5b715d49e111950bf025b12a72a907"}, + {file = "tensorstore-0.1.73-cp310-cp310-win_amd64.whl", hash = "sha256:7a812e8297a4ed70109057628b767c1a12b535f2db657635f0ed1517b23b990b"}, + {file = "tensorstore-0.1.73-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:e99ae99ac48f41c4e36b1e3717c6dbdab96dd27fc91618dd01afb9ad848a9293"}, + {file = "tensorstore-0.1.73-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:dd7fa6d7e9579a1a75e6185d7df10e28fcc7db2e14190ed60261a71b9c09e1df"}, + {file = "tensorstore-0.1.73-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4433dcfcb943e100b90b0fc8e0b1d174e8c2c1cedb1fcc86e6d20b6a2e961831"}, + {file = "tensorstore-0.1.73-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0eb83a2526e211a721842c3e98293e4bc9e1fdb9dac37ecf37d6ccbde84b8ee3"}, + {file = "tensorstore-0.1.73-cp311-cp311-win_amd64.whl", hash = "sha256:a11d2e496d7442c68b35cd222a8c8df3fdee9e30fb2984c91546d81faff8bf61"}, + {file = "tensorstore-0.1.73-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:0429bf781ce3ed45be761b46f4bc5979412dadf063f509cb7e9581981a1e097b"}, + {file = "tensorstore-0.1.73-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:440569458b91974e0ffa210654a01f2721758476c48240f7c925fc0d107056be"}, + {file = "tensorstore-0.1.73-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:192feb8a8fd0f37fa298588d037d4889d2f9d07b18b3295488f05ee268f57b70"}, + {file = "tensorstore-0.1.73-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", 
hash = "sha256:44d70dd0c000db8c0d2386e788c5e91d3b37ebee8f629f3848d7a012c85d1e11"}, + {file = "tensorstore-0.1.73-cp312-cp312-win_amd64.whl", hash = "sha256:be3f5ef6f359486ee52785e8a302819152e51286c50181c6c35f316b7568ce60"}, + {file = "tensorstore-0.1.73-cp313-cp313-macosx_10_14_x86_64.whl", hash = "sha256:70d57b63706de4a3a9c1c217b338658fa160b2d41f5b399e6926f9eaf29b2a4d"}, + {file = "tensorstore-0.1.73-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:5fc9feab09de9e99c381145adeef5ff9e01f898e509b851ff2edd940c8b2384a"}, + {file = "tensorstore-0.1.73-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83c6ca5cb39ffeeb4a562942e3b9e2f32b026f362b2b7266c44201bd7c3116a5"}, + {file = "tensorstore-0.1.73-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:421a3f87864a0a8837b4f9f0c8ee86079b46b112de902496d3b90c72f51d02ea"}, + {file = "tensorstore-0.1.73-cp313-cp313-win_amd64.whl", hash = "sha256:2aed43498b00d37df583da9e06328751cfe695bb166043aa9ef7183174cf7e29"}, + {file = "tensorstore-0.1.73.tar.gz", hash = "sha256:f24b325385fd30be612ab8494a29d3bfef37b9444357912ba184f30f325f093b"}, ] [package.dependencies] @@ -4319,13 +4566,13 @@ numpy = ">=1.22.0" [[package]] name = "termcolor" -version = "2.5.0" +version = "3.0.1" description = "ANSI color formatting for output in terminal" optional = false python-versions = ">=3.9" files = [ - {file = "termcolor-2.5.0-py3-none-any.whl", hash = "sha256:37b17b5fc1e604945c2642c872a3764b5d547a48009871aea3edd3afa180afb8"}, - {file = "termcolor-2.5.0.tar.gz", hash = "sha256:998d8d27da6d48442e8e1f016119076b690d962507531df4890fcd2db2ef8a6f"}, + {file = "termcolor-3.0.1-py3-none-any.whl", hash = "sha256:da1ed4ec8a5dc5b2e17476d859febdb3cccb612be1c36e64511a6f2485c10c69"}, + {file = "termcolor-3.0.1.tar.gz", hash = "sha256:a6abd5c6e1284cea2934443ba806e70e5ec8fd2449021be55c280f8a3731b611"}, ] [package.extras] @@ -4333,23 +4580,20 @@ tests = ["pytest", "pytest-cov"] [[package]] name = "tfds-nightly" 
-version = "4.9.7.dev202502210044" +version = "4.9.8.dev202504100044" description = "tensorflow/datasets is a library of datasets ready to use with TensorFlow." optional = false python-versions = ">=3.10" files = [ - {file = "tfds_nightly-4.9.7.dev202502210044-py3-none-any.whl", hash = "sha256:65f08dbbbb80e9a6576d937b774f93e4da3f3e7f2b8758d97690bf973c92b4f0"}, - {file = "tfds_nightly-4.9.7.dev202502210044.tar.gz", hash = "sha256:58572fe30f65042c5b71d386a248e915111d2b0e77309dddc6b9cc08ef32d3a7"}, + {file = "tfds_nightly-4.9.8.dev202504100044-py3-none-any.whl", hash = "sha256:942721ed5fb7c2fd759338d581ce4a118c67923b8a6332b2d21e5f21885da32f"}, + {file = "tfds_nightly-4.9.8.dev202504100044.tar.gz", hash = "sha256:b9ba8f36c4973ce04207c5c6b1100d3de51974ad82f7bcf6edb6f8a3ff52d46e"}, ] [package.dependencies] absl-py = "*" array_record = {version = ">=0.5.0", markers = "platform_system == \"Linux\""} dm-tree = "*" -etils = [ - {version = ">=1.6.0", extras = ["edc", "enp", "epath", "epy", "etree"], markers = "python_version < \"3.11\""}, - {version = ">=1.9.1", extras = ["edc", "enp", "epath", "epy", "etree"], markers = "python_version >= \"3.11\""}, -] +etils = {version = ">=1.9.1", extras = ["edc", "enp", "epath", "epy", "etree"], markers = "python_version >= \"3.11\""} immutabledict = "*" numpy = "*" promise = "*" @@ -4378,7 +4622,7 @@ duke-ultrasound = ["scipy"] eurosat = ["imagecodecs", "scikit-image", "tifffile"] groove = ["pretty_midi", "pydub"] gtzan = ["pydub"] -huggingface = ["Pillow", "Pillow", "bs4", "conllu", "datasets", "dill", "envlogger", "envlogger", "gcld3", "gcsfs", "h5py", "imagecodecs", "jax[cpu] (==0.4.28)", "jupyter", "langdetect", "lxml", "matplotlib", "mlcroissant (>=1.0.9)", "mwparserfromhell", "mwxml", "networkx", "nltk (==3.8.1)", "opencv-python", "pandas", "pandas", "pandas", "pandas", "pandas", "pandas", "pretty_midi", "pycocotools", "pydub", "pydub", "pydub", "pydub", "pydub", "pytest", "pytest-shard", "pytest-xdist", "pyyaml", "scikit-image", 
"scikit-image", "scipy", "scipy", "scipy", "scipy", "scipy", "tensorflow-io[tensorflow]", "tifffile", "tldextract", "zarr (<3.0.0)"] +huggingface = ["Pillow", "Pillow", "bs4", "conllu", "datasets", "dill", "envlogger", "envlogger", "gcld3", "gcsfs", "h5py", "imagecodecs", "jax[cpu] (==0.4.28)", "jupyter", "langdetect", "lxml", "matplotlib", "mlcroissant (>=1.0.9)", "mwparserfromhell", "mwxml", "networkx", "nltk (==3.8.1)", "opencv-python", "pandas", "pandas", "pandas", "pandas", "pandas", "pandas", "pretty_midi", "pycocotools", "pydub", "pydub", "pydub", "pydub", "pydub", "pydub", "pytest", "pytest-shard", "pytest-xdist", "pyyaml", "scikit-image", "scikit-image", "scipy", "scipy", "scipy", "scipy", "scipy", "tensorflow-io[tensorflow]", "tifffile", "tldextract", "zarr (<3.0.0)"] imagenet2012-corrupted = ["opencv-python", "scikit-image", "scipy"] librispeech = ["pydub"] locomotion = ["envlogger"] @@ -4391,10 +4635,11 @@ qm9 = ["pandas"] robonet = ["h5py"] robosuite-panda-pick-place-can = ["envlogger"] smartwatch-gestures = ["pandas"] +speech-commands = ["pydub"] svhn = ["scipy"] tensorflow = ["tensorflow (>=2.1)"] tensorflow-data-validation = ["tensorflow-data-validation"] -tests-all = ["Pillow", "Pillow", "apache-beam", "apache-beam", "apache-beam", "apache-beam", "apache-beam", "apache-beam", "bs4", "conllu", "dill", "envlogger", "envlogger", "gcld3", "gcsfs", "h5py", "imagecodecs", "jax[cpu] (==0.4.28)", "jupyter", "langdetect", "lxml", "matplotlib", "mlcroissant (>=1.0.9)", "mwparserfromhell", "mwxml", "networkx", "nltk (==3.8.1)", "opencv-python", "pandas", "pandas", "pandas", "pandas", "pandas", "pandas", "pretty_midi", "pycocotools", "pydub", "pydub", "pydub", "pydub", "pydub", "pytest", "pytest-shard", "pytest-xdist", "pyyaml", "scikit-image", "scikit-image", "scipy", "scipy", "scipy", "scipy", "scipy", "tensorflow-io[tensorflow]", "tifffile", "tldextract", "zarr (<3.0.0)"] +tests-all = ["Pillow", "Pillow", "apache-beam", "apache-beam", "apache-beam", 
"apache-beam", "apache-beam", "apache-beam", "bs4", "conllu", "dill", "envlogger", "envlogger", "gcld3", "gcsfs", "h5py", "imagecodecs", "jax[cpu] (==0.4.28)", "jupyter", "langdetect", "lxml", "matplotlib", "mlcroissant (>=1.0.9)", "mwparserfromhell", "mwxml", "networkx", "nltk (==3.8.1)", "opencv-python", "pandas", "pandas", "pandas", "pandas", "pandas", "pandas", "pretty_midi", "pycocotools", "pydub", "pydub", "pydub", "pydub", "pydub", "pydub", "pytest", "pytest-shard", "pytest-xdist", "pyyaml", "scikit-image", "scikit-image", "scipy", "scipy", "scipy", "scipy", "scipy", "tensorflow-io[tensorflow]", "tifffile", "tldextract", "zarr (<3.0.0)"] tf-nightly = ["tf-nightly"] the300w-lp = ["scipy"] wake-vision = ["pandas"] @@ -4406,26 +4651,26 @@ youtube-vis = ["pycocotools"] [[package]] name = "tokenizers" -version = "0.21.0" +version = "0.21.1" description = "" optional = false -python-versions = ">=3.7" +python-versions = ">=3.9" files = [ - {file = "tokenizers-0.21.0-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:3c4c93eae637e7d2aaae3d376f06085164e1660f89304c0ab2b1d08a406636b2"}, - {file = "tokenizers-0.21.0-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:f53ea537c925422a2e0e92a24cce96f6bc5046bbef24a1652a5edc8ba975f62e"}, - {file = "tokenizers-0.21.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6b177fb54c4702ef611de0c069d9169f0004233890e0c4c5bd5508ae05abf193"}, - {file = "tokenizers-0.21.0-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6b43779a269f4629bebb114e19c3fca0223296ae9fea8bb9a7a6c6fb0657ff8e"}, - {file = "tokenizers-0.21.0-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9aeb255802be90acfd363626753fda0064a8df06031012fe7d52fd9a905eb00e"}, - {file = "tokenizers-0.21.0-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d8b09dbeb7a8d73ee204a70f94fc06ea0f17dcf0844f16102b9f414f0b7463ba"}, - {file = 
"tokenizers-0.21.0-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:400832c0904f77ce87c40f1a8a27493071282f785724ae62144324f171377273"}, - {file = "tokenizers-0.21.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e84ca973b3a96894d1707e189c14a774b701596d579ffc7e69debfc036a61a04"}, - {file = "tokenizers-0.21.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:eb7202d231b273c34ec67767378cd04c767e967fda12d4a9e36208a34e2f137e"}, - {file = "tokenizers-0.21.0-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:089d56db6782a73a27fd8abf3ba21779f5b85d4a9f35e3b493c7bbcbbf0d539b"}, - {file = "tokenizers-0.21.0-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:c87ca3dc48b9b1222d984b6b7490355a6fdb411a2d810f6f05977258400ddb74"}, - {file = "tokenizers-0.21.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:4145505a973116f91bc3ac45988a92e618a6f83eb458f49ea0790df94ee243ff"}, - {file = "tokenizers-0.21.0-cp39-abi3-win32.whl", hash = "sha256:eb1702c2f27d25d9dd5b389cc1f2f51813e99f8ca30d9e25348db6585a97e24a"}, - {file = "tokenizers-0.21.0-cp39-abi3-win_amd64.whl", hash = "sha256:87841da5a25a3a5f70c102de371db120f41873b854ba65e52bccd57df5a3780c"}, - {file = "tokenizers-0.21.0.tar.gz", hash = "sha256:ee0894bf311b75b0c03079f33859ae4b2334d675d4e93f5a4132e1eae2834fe4"}, + {file = "tokenizers-0.21.1-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:e78e413e9e668ad790a29456e677d9d3aa50a9ad311a40905d6861ba7692cf41"}, + {file = "tokenizers-0.21.1-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:cd51cd0a91ecc801633829fcd1fda9cf8682ed3477c6243b9a095539de4aecf3"}, + {file = "tokenizers-0.21.1-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28da6b72d4fb14ee200a1bd386ff74ade8992d7f725f2bde2c495a9a98cf4d9f"}, + {file = "tokenizers-0.21.1-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:34d8cfde551c9916cb92014e040806122295a6800914bab5865deb85623931cf"}, + {file = 
"tokenizers-0.21.1-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aaa852d23e125b73d283c98f007e06d4595732104b65402f46e8ef24b588d9f8"}, + {file = "tokenizers-0.21.1-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a21a15d5c8e603331b8a59548bbe113564136dc0f5ad8306dd5033459a226da0"}, + {file = "tokenizers-0.21.1-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2fdbd4c067c60a0ac7eca14b6bd18a5bebace54eb757c706b47ea93204f7a37c"}, + {file = "tokenizers-0.21.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2dd9a0061e403546f7377df940e866c3e678d7d4e9643d0461ea442b4f89e61a"}, + {file = "tokenizers-0.21.1-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:db9484aeb2e200c43b915a1a0150ea885e35f357a5a8fabf7373af333dcc8dbf"}, + {file = "tokenizers-0.21.1-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:ed248ab5279e601a30a4d67bdb897ecbe955a50f1e7bb62bd99f07dd11c2f5b6"}, + {file = "tokenizers-0.21.1-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:9ac78b12e541d4ce67b4dfd970e44c060a2147b9b2a21f509566d556a509c67d"}, + {file = "tokenizers-0.21.1-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:e5a69c1a4496b81a5ee5d2c1f3f7fbdf95e90a0196101b0ee89ed9956b8a168f"}, + {file = "tokenizers-0.21.1-cp39-abi3-win32.whl", hash = "sha256:1039a3a5734944e09de1d48761ade94e00d0fa760c0e0551151d4dd851ba63e3"}, + {file = "tokenizers-0.21.1-cp39-abi3-win_amd64.whl", hash = "sha256:0f0dcbcc9f6e13e675a66d7a5f2f225a736745ce484c1a4e07476a89ccdad382"}, + {file = "tokenizers-0.21.1.tar.gz", hash = "sha256:a1bb04dc5b448985f86ecd4b05407f5a8d97cb2c0532199b2a302a604a0165ab"}, ] [package.dependencies] @@ -4447,47 +4692,6 @@ files = [ {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, ] -[[package]] -name = "tomli" -version = "2.2.1" -description = "A lil' TOML parser" -optional = false -python-versions = ">=3.8" -files = [ - {file = 
"tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"}, - {file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"}, - {file = "tomli-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ece47d672db52ac607a3d9599a9d48dcb2f2f735c6c2d1f34130085bb12b112a"}, - {file = "tomli-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6972ca9c9cc9f0acaa56a8ca1ff51e7af152a9f87fb64623e31d5c83700080ee"}, - {file = "tomli-2.2.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c954d2250168d28797dd4e3ac5cf812a406cd5a92674ee4c8f123c889786aa8e"}, - {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8dd28b3e155b80f4d54beb40a441d366adcfe740969820caf156c019fb5c7ec4"}, - {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:e59e304978767a54663af13c07b3d1af22ddee3bb2fb0618ca1593e4f593a106"}, - {file = "tomli-2.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:33580bccab0338d00994d7f16f4c4ec25b776af3ffaac1ed74e0b3fc95e885a8"}, - {file = "tomli-2.2.1-cp311-cp311-win32.whl", hash = "sha256:465af0e0875402f1d226519c9904f37254b3045fc5084697cefb9bdde1ff99ff"}, - {file = "tomli-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:2d0f2fdd22b02c6d81637a3c95f8cd77f995846af7414c5c4b8d0545afa1bc4b"}, - {file = "tomli-2.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4a8f6e44de52d5e6c657c9fe83b562f5f4256d8ebbfe4ff922c495620a7f6cea"}, - {file = "tomli-2.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d57ca8095a641b8237d5b079147646153d22552f1c637fd3ba7f4b0b29167a8"}, - {file = "tomli-2.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e340144ad7ae1533cb897d406382b4b6fede8890a03738ff1683af800d54192"}, - {file = 
"tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db2b95f9de79181805df90bedc5a5ab4c165e6ec3fe99f970d0e302f384ad222"}, - {file = "tomli-2.2.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40741994320b232529c802f8bc86da4e1aa9f413db394617b9a256ae0f9a7f77"}, - {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:400e720fe168c0f8521520190686ef8ef033fb19fc493da09779e592861b78c6"}, - {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:02abe224de6ae62c19f090f68da4e27b10af2b93213d36cf44e6e1c5abd19fdd"}, - {file = "tomli-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b82ebccc8c8a36f2094e969560a1b836758481f3dc360ce9a3277c65f374285e"}, - {file = "tomli-2.2.1-cp312-cp312-win32.whl", hash = "sha256:889f80ef92701b9dbb224e49ec87c645ce5df3fa2cc548664eb8a25e03127a98"}, - {file = "tomli-2.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:7fc04e92e1d624a4a63c76474610238576942d6b8950a2d7f908a340494e67e4"}, - {file = "tomli-2.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f4039b9cbc3048b2416cc57ab3bda989a6fcf9b36cf8937f01a6e731b64f80d7"}, - {file = "tomli-2.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:286f0ca2ffeeb5b9bd4fcc8d6c330534323ec51b2f52da063b11c502da16f30c"}, - {file = "tomli-2.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a92ef1a44547e894e2a17d24e7557a5e85a9e1d0048b0b5e7541f76c5032cb13"}, - {file = "tomli-2.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9316dc65bed1684c9a98ee68759ceaed29d229e985297003e494aa825ebb0281"}, - {file = "tomli-2.2.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e85e99945e688e32d5a35c1ff38ed0b3f41f43fad8df0bdf79f72b2ba7bc5272"}, - {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = 
"sha256:ac065718db92ca818f8d6141b5f66369833d4a80a9d74435a268c52bdfa73140"}, - {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:d920f33822747519673ee656a4b6ac33e382eca9d331c87770faa3eef562aeb2"}, - {file = "tomli-2.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a198f10c4d1b1375d7687bc25294306e551bf1abfa4eace6650070a5c1ae2744"}, - {file = "tomli-2.2.1-cp313-cp313-win32.whl", hash = "sha256:d3f5614314d758649ab2ab3a62d4f2004c825922f9e370b29416484086b264ec"}, - {file = "tomli-2.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:a38aa0308e754b0e3c67e344754dff64999ff9b513e691d0e786265c93583c69"}, - {file = "tomli-2.2.1-py3-none-any.whl", hash = "sha256:cb55c73c5f4408779d0cf3eef9f762b9c9f147a77de7b258bef0a5628adc85cc"}, - {file = "tomli-2.2.1.tar.gz", hash = "sha256:cd45e1dc79c835ce60f7404ec8119f2eb06d38b1deba146f07ced3bbc44505ff"}, -] - [[package]] name = "tomlkit" version = "0.13.2" @@ -4568,13 +4772,13 @@ test = ["absl-py (>=1.4.0)", "jax (>=0.4.23)", "omegaconf (>=2.0.0)", "pydantic [[package]] name = "typer" -version = "0.15.1" +version = "0.15.2" description = "Typer, build great CLIs. Easy to code. Based on Python type hints." 
optional = false python-versions = ">=3.7" files = [ - {file = "typer-0.15.1-py3-none-any.whl", hash = "sha256:7994fb7b8155b64d3402518560648446072864beefd44aa2dc36972a5972e847"}, - {file = "typer-0.15.1.tar.gz", hash = "sha256:a0588c0a7fa68a1978a069818657778f86abe6ff5ea6abf472f940a08bfe4f0a"}, + {file = "typer-0.15.2-py3-none-any.whl", hash = "sha256:46a499c6107d645a9c13f7ee46c5d5096cae6f5fc57dd11eccbbb9ae3e44ddfc"}, + {file = "typer-0.15.2.tar.gz", hash = "sha256:ab2fab47533a813c49fe1f16b1a370fd5819099c00b119e0633df65f22144ba5"}, ] [package.dependencies] @@ -4585,15 +4789,29 @@ typing-extensions = ">=3.7.4.3" [[package]] name = "typing-extensions" -version = "4.12.2" +version = "4.13.1" description = "Backported and Experimental Type Hints for Python 3.8+" optional = false python-versions = ">=3.8" files = [ - {file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"}, - {file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"}, + {file = "typing_extensions-4.13.1-py3-none-any.whl", hash = "sha256:4b6cf02909eb5495cfbc3f6e8fd49217e6cc7944e145cdda8caa3734777f9e69"}, + {file = "typing_extensions-4.13.1.tar.gz", hash = "sha256:98795af00fb9640edec5b8e31fc647597b4691f099ad75f469a2616be1a76dff"}, ] +[[package]] +name = "typing-inspection" +version = "0.4.0" +description = "Runtime typing introspection tools" +optional = false +python-versions = ">=3.9" +files = [ + {file = "typing_inspection-0.4.0-py3-none-any.whl", hash = "sha256:50e72559fcd2a6367a19f7a7e610e6afcb9fac940c650290eed893d61386832f"}, + {file = "typing_inspection-0.4.0.tar.gz", hash = "sha256:9765c87de36671694a67904bf2c96e395be9c6439bb6c87b5142569dcdd65122"}, +] + +[package.dependencies] +typing-extensions = ">=4.12.0" + [[package]] name = "uritemplate" version = "4.1.1" @@ -4650,7 +4868,6 @@ h11 = ">=0.8" httptools = {version = ">=0.6.3", optional = true, markers = 
"extra == \"standard\""} python-dotenv = {version = ">=0.13", optional = true, markers = "extra == \"standard\""} pyyaml = {version = ">=5.1", optional = true, markers = "extra == \"standard\""} -typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""} uvloop = {version = ">=0.14.0,<0.15.0 || >0.15.0,<0.15.1 || >0.15.1", optional = true, markers = "(sys_platform != \"win32\" and sys_platform != \"cygwin\") and platform_python_implementation != \"PyPy\" and extra == \"standard\""} watchfiles = {version = ">=0.13", optional = true, markers = "extra == \"standard\""} websockets = {version = ">=10.4", optional = true, markers = "extra == \"standard\""} @@ -4711,82 +4928,82 @@ test = ["aiohttp (>=3.10.5)", "flake8 (>=5.0,<6.0)", "mypy (>=0.800)", "psutil", [[package]] name = "watchfiles" -version = "1.0.4" +version = "1.0.5" description = "Simple, modern and high performance file watching and code reload in python." optional = false python-versions = ">=3.9" files = [ - {file = "watchfiles-1.0.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:ba5bb3073d9db37c64520681dd2650f8bd40902d991e7b4cfaeece3e32561d08"}, - {file = "watchfiles-1.0.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9f25d0ba0fe2b6d2c921cf587b2bf4c451860086534f40c384329fb96e2044d1"}, - {file = "watchfiles-1.0.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:47eb32ef8c729dbc4f4273baece89398a4d4b5d21a1493efea77a17059f4df8a"}, - {file = "watchfiles-1.0.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:076f293100db3b0b634514aa0d294b941daa85fc777f9c698adb1009e5aca0b1"}, - {file = "watchfiles-1.0.4-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1eacd91daeb5158c598fe22d7ce66d60878b6294a86477a4715154990394c9b3"}, - {file = "watchfiles-1.0.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:13c2ce7b72026cfbca120d652f02c7750f33b4c9395d79c9790b27f014c8a5a2"}, - {file = 
"watchfiles-1.0.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:90192cdc15ab7254caa7765a98132a5a41471cf739513cc9bcf7d2ffcc0ec7b2"}, - {file = "watchfiles-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:278aaa395f405972e9f523bd786ed59dfb61e4b827856be46a42130605fd0899"}, - {file = "watchfiles-1.0.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:a462490e75e466edbb9fc4cd679b62187153b3ba804868452ef0577ec958f5ff"}, - {file = "watchfiles-1.0.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8d0d0630930f5cd5af929040e0778cf676a46775753e442a3f60511f2409f48f"}, - {file = "watchfiles-1.0.4-cp310-cp310-win32.whl", hash = "sha256:cc27a65069bcabac4552f34fd2dce923ce3fcde0721a16e4fb1b466d63ec831f"}, - {file = "watchfiles-1.0.4-cp310-cp310-win_amd64.whl", hash = "sha256:8b1f135238e75d075359cf506b27bf3f4ca12029c47d3e769d8593a2024ce161"}, - {file = "watchfiles-1.0.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:2a9f93f8439639dc244c4d2902abe35b0279102bca7bbcf119af964f51d53c19"}, - {file = "watchfiles-1.0.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9eea33ad8c418847dd296e61eb683cae1c63329b6d854aefcd412e12d94ee235"}, - {file = "watchfiles-1.0.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:31f1a379c9dcbb3f09cf6be1b7e83b67c0e9faabed0471556d9438a4a4e14202"}, - {file = "watchfiles-1.0.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ab594e75644421ae0a2484554832ca5895f8cab5ab62de30a1a57db460ce06c6"}, - {file = "watchfiles-1.0.4-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fc2eb5d14a8e0d5df7b36288979176fbb39672d45184fc4b1c004d7c3ce29317"}, - {file = "watchfiles-1.0.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3f68d8e9d5a321163ddacebe97091000955a1b74cd43724e346056030b0bacee"}, - {file = "watchfiles-1.0.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = 
"sha256:f9ce064e81fe79faa925ff03b9f4c1a98b0bbb4a1b8c1b015afa93030cb21a49"}, - {file = "watchfiles-1.0.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b77d5622ac5cc91d21ae9c2b284b5d5c51085a0bdb7b518dba263d0af006132c"}, - {file = "watchfiles-1.0.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1941b4e39de9b38b868a69b911df5e89dc43767feeda667b40ae032522b9b5f1"}, - {file = "watchfiles-1.0.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:4f8c4998506241dedf59613082d1c18b836e26ef2a4caecad0ec41e2a15e4226"}, - {file = "watchfiles-1.0.4-cp311-cp311-win32.whl", hash = "sha256:4ebbeca9360c830766b9f0df3640b791be569d988f4be6c06d6fae41f187f105"}, - {file = "watchfiles-1.0.4-cp311-cp311-win_amd64.whl", hash = "sha256:05d341c71f3d7098920f8551d4df47f7b57ac5b8dad56558064c3431bdfc0b74"}, - {file = "watchfiles-1.0.4-cp311-cp311-win_arm64.whl", hash = "sha256:32b026a6ab64245b584acf4931fe21842374da82372d5c039cba6bf99ef722f3"}, - {file = "watchfiles-1.0.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:229e6ec880eca20e0ba2f7e2249c85bae1999d330161f45c78d160832e026ee2"}, - {file = "watchfiles-1.0.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5717021b199e8353782dce03bd8a8f64438832b84e2885c4a645f9723bf656d9"}, - {file = "watchfiles-1.0.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0799ae68dfa95136dde7c472525700bd48777875a4abb2ee454e3ab18e9fc712"}, - {file = "watchfiles-1.0.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:43b168bba889886b62edb0397cab5b6490ffb656ee2fcb22dec8bfeb371a9e12"}, - {file = "watchfiles-1.0.4-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fb2c46e275fbb9f0c92e7654b231543c7bbfa1df07cdc4b99fa73bedfde5c844"}, - {file = "watchfiles-1.0.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:857f5fc3aa027ff5e57047da93f96e908a35fe602d24f5e5d8ce64bf1f2fc733"}, - {file = 
"watchfiles-1.0.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:55ccfd27c497b228581e2838d4386301227fc0cb47f5a12923ec2fe4f97b95af"}, - {file = "watchfiles-1.0.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c11ea22304d17d4385067588123658e9f23159225a27b983f343fcffc3e796a"}, - {file = "watchfiles-1.0.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:74cb3ca19a740be4caa18f238298b9d472c850f7b2ed89f396c00a4c97e2d9ff"}, - {file = "watchfiles-1.0.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:c7cce76c138a91e720d1df54014a047e680b652336e1b73b8e3ff3158e05061e"}, - {file = "watchfiles-1.0.4-cp312-cp312-win32.whl", hash = "sha256:b045c800d55bc7e2cadd47f45a97c7b29f70f08a7c2fa13241905010a5493f94"}, - {file = "watchfiles-1.0.4-cp312-cp312-win_amd64.whl", hash = "sha256:c2acfa49dd0ad0bf2a9c0bb9a985af02e89345a7189be1efc6baa085e0f72d7c"}, - {file = "watchfiles-1.0.4-cp312-cp312-win_arm64.whl", hash = "sha256:22bb55a7c9e564e763ea06c7acea24fc5d2ee5dfc5dafc5cfbedfe58505e9f90"}, - {file = "watchfiles-1.0.4-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:8012bd820c380c3d3db8435e8cf7592260257b378b649154a7948a663b5f84e9"}, - {file = "watchfiles-1.0.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:aa216f87594f951c17511efe5912808dfcc4befa464ab17c98d387830ce07b60"}, - {file = "watchfiles-1.0.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:62c9953cf85529c05b24705639ffa390f78c26449e15ec34d5339e8108c7c407"}, - {file = "watchfiles-1.0.4-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7cf684aa9bba4cd95ecb62c822a56de54e3ae0598c1a7f2065d51e24637a3c5d"}, - {file = "watchfiles-1.0.4-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f44a39aee3cbb9b825285ff979ab887a25c5d336e5ec3574f1506a4671556a8d"}, - {file = "watchfiles-1.0.4-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:a38320582736922be8c865d46520c043bff350956dfc9fbaee3b2df4e1740a4b"}, - {file = "watchfiles-1.0.4-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:39f4914548b818540ef21fd22447a63e7be6e24b43a70f7642d21f1e73371590"}, - {file = "watchfiles-1.0.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f12969a3765909cf5dc1e50b2436eb2c0e676a3c75773ab8cc3aa6175c16e902"}, - {file = "watchfiles-1.0.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:0986902677a1a5e6212d0c49b319aad9cc48da4bd967f86a11bde96ad9676ca1"}, - {file = "watchfiles-1.0.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:308ac265c56f936636e3b0e3f59e059a40003c655228c131e1ad439957592303"}, - {file = "watchfiles-1.0.4-cp313-cp313-win32.whl", hash = "sha256:aee397456a29b492c20fda2d8961e1ffb266223625346ace14e4b6d861ba9c80"}, - {file = "watchfiles-1.0.4-cp313-cp313-win_amd64.whl", hash = "sha256:d6097538b0ae5c1b88c3b55afa245a66793a8fec7ada6755322e465fb1a0e8cc"}, - {file = "watchfiles-1.0.4-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:d3452c1ec703aa1c61e15dfe9d482543e4145e7c45a6b8566978fbb044265a21"}, - {file = "watchfiles-1.0.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7b75fee5a16826cf5c46fe1c63116e4a156924d668c38b013e6276f2582230f0"}, - {file = "watchfiles-1.0.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e997802d78cdb02623b5941830ab06f8860038faf344f0d288d325cc9c5d2ff"}, - {file = "watchfiles-1.0.4-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e0611d244ce94d83f5b9aff441ad196c6e21b55f77f3c47608dcf651efe54c4a"}, - {file = "watchfiles-1.0.4-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9745a4210b59e218ce64c91deb599ae8775c8a9da4e95fb2ee6fe745fc87d01a"}, - {file = "watchfiles-1.0.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4810ea2ae622add560f4aa50c92fef975e475f7ac4900ce5ff5547b2434642d8"}, - {file = 
"watchfiles-1.0.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:740d103cd01458f22462dedeb5a3382b7f2c57d07ff033fbc9465919e5e1d0f3"}, - {file = "watchfiles-1.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdbd912a61543a36aef85e34f212e5d2486e7c53ebfdb70d1e0b060cc50dd0bf"}, - {file = "watchfiles-1.0.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0bc80d91ddaf95f70258cf78c471246846c1986bcc5fd33ccc4a1a67fcb40f9a"}, - {file = "watchfiles-1.0.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ab0311bb2ffcd9f74b6c9de2dda1612c13c84b996d032cd74799adb656af4e8b"}, - {file = "watchfiles-1.0.4-cp39-cp39-win32.whl", hash = "sha256:02a526ee5b5a09e8168314c905fc545c9bc46509896ed282aeb5a8ba9bd6ca27"}, - {file = "watchfiles-1.0.4-cp39-cp39-win_amd64.whl", hash = "sha256:a5ae5706058b27c74bac987d615105da17724172d5aaacc6c362a40599b6de43"}, - {file = "watchfiles-1.0.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:cdcc92daeae268de1acf5b7befcd6cfffd9a047098199056c72e4623f531de18"}, - {file = "watchfiles-1.0.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d8d3d9203705b5797f0af7e7e5baa17c8588030aaadb7f6a86107b7247303817"}, - {file = "watchfiles-1.0.4-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bdef5a1be32d0b07dcea3318a0be95d42c98ece24177820226b56276e06b63b0"}, - {file = "watchfiles-1.0.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:342622287b5604ddf0ed2d085f3a589099c9ae8b7331df3ae9845571586c4f3d"}, - {file = "watchfiles-1.0.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:9fe37a2de80aa785d340f2980276b17ef697ab8db6019b07ee4fd28a8359d2f3"}, - {file = "watchfiles-1.0.4-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:9d1ef56b56ed7e8f312c934436dea93bfa3e7368adfcf3df4c0da6d4de959a1e"}, - {file = "watchfiles-1.0.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:95b42cac65beae3a362629950c444077d1b44f1790ea2772beaea95451c086bb"}, - {file = "watchfiles-1.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e0227b8ed9074c6172cf55d85b5670199c99ab11fd27d2c473aa30aec67ee42"}, - {file = "watchfiles-1.0.4.tar.gz", hash = "sha256:6ba473efd11062d73e4f00c2b730255f9c1bdd73cd5f9fe5b5da8dbd4a717205"}, + {file = "watchfiles-1.0.5-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:5c40fe7dd9e5f81e0847b1ea64e1f5dd79dd61afbedb57759df06767ac719b40"}, + {file = "watchfiles-1.0.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8c0db396e6003d99bb2d7232c957b5f0b5634bbd1b24e381a5afcc880f7373fb"}, + {file = "watchfiles-1.0.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b551d4fb482fc57d852b4541f911ba28957d051c8776e79c3b4a51eb5e2a1b11"}, + {file = "watchfiles-1.0.5-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:830aa432ba5c491d52a15b51526c29e4a4b92bf4f92253787f9726fe01519487"}, + {file = "watchfiles-1.0.5-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a16512051a822a416b0d477d5f8c0e67b67c1a20d9acecb0aafa3aa4d6e7d256"}, + {file = "watchfiles-1.0.5-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bfe0cbc787770e52a96c6fda6726ace75be7f840cb327e1b08d7d54eadc3bc85"}, + {file = "watchfiles-1.0.5-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d363152c5e16b29d66cbde8fa614f9e313e6f94a8204eaab268db52231fe5358"}, + {file = "watchfiles-1.0.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7ee32c9a9bee4d0b7bd7cbeb53cb185cf0b622ac761efaa2eba84006c3b3a614"}, + {file = "watchfiles-1.0.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:29c7fd632ccaf5517c16a5188e36f6612d6472ccf55382db6c7fe3fcccb7f59f"}, + {file = "watchfiles-1.0.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8e637810586e6fe380c8bc1b3910accd7f1d3a9a7262c8a78d4c8fb3ba6a2b3d"}, 
+ {file = "watchfiles-1.0.5-cp310-cp310-win32.whl", hash = "sha256:cd47d063fbeabd4c6cae1d4bcaa38f0902f8dc5ed168072874ea11d0c7afc1ff"}, + {file = "watchfiles-1.0.5-cp310-cp310-win_amd64.whl", hash = "sha256:86c0df05b47a79d80351cd179893f2f9c1b1cae49d96e8b3290c7f4bd0ca0a92"}, + {file = "watchfiles-1.0.5-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:237f9be419e977a0f8f6b2e7b0475ababe78ff1ab06822df95d914a945eac827"}, + {file = "watchfiles-1.0.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e0da39ff917af8b27a4bdc5a97ac577552a38aac0d260a859c1517ea3dc1a7c4"}, + {file = "watchfiles-1.0.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cfcb3952350e95603f232a7a15f6c5f86c5375e46f0bd4ae70d43e3e063c13d"}, + {file = "watchfiles-1.0.5-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:68b2dddba7a4e6151384e252a5632efcaa9bc5d1c4b567f3cb621306b2ca9f63"}, + {file = "watchfiles-1.0.5-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:95cf944fcfc394c5f9de794ce581914900f82ff1f855326f25ebcf24d5397418"}, + {file = "watchfiles-1.0.5-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ecf6cd9f83d7c023b1aba15d13f705ca7b7d38675c121f3cc4a6e25bd0857ee9"}, + {file = "watchfiles-1.0.5-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:852de68acd6212cd6d33edf21e6f9e56e5d98c6add46f48244bd479d97c967c6"}, + {file = "watchfiles-1.0.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d5730f3aa35e646103b53389d5bc77edfbf578ab6dab2e005142b5b80a35ef25"}, + {file = "watchfiles-1.0.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:18b3bd29954bc4abeeb4e9d9cf0b30227f0f206c86657674f544cb032296acd5"}, + {file = "watchfiles-1.0.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:ba5552a1b07c8edbf197055bc9d518b8f0d98a1c6a73a293bc0726dce068ed01"}, + {file = "watchfiles-1.0.5-cp311-cp311-win32.whl", hash = 
"sha256:2f1fefb2e90e89959447bc0420fddd1e76f625784340d64a2f7d5983ef9ad246"}, + {file = "watchfiles-1.0.5-cp311-cp311-win_amd64.whl", hash = "sha256:b6e76ceb1dd18c8e29c73f47d41866972e891fc4cc7ba014f487def72c1cf096"}, + {file = "watchfiles-1.0.5-cp311-cp311-win_arm64.whl", hash = "sha256:266710eb6fddc1f5e51843c70e3bebfb0f5e77cf4f27129278c70554104d19ed"}, + {file = "watchfiles-1.0.5-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:b5eb568c2aa6018e26da9e6c86f3ec3fd958cee7f0311b35c2630fa4217d17f2"}, + {file = "watchfiles-1.0.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0a04059f4923ce4e856b4b4e5e783a70f49d9663d22a4c3b3298165996d1377f"}, + {file = "watchfiles-1.0.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3e380c89983ce6e6fe2dd1e1921b9952fb4e6da882931abd1824c092ed495dec"}, + {file = "watchfiles-1.0.5-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fe43139b2c0fdc4a14d4f8d5b5d967f7a2777fd3d38ecf5b1ec669b0d7e43c21"}, + {file = "watchfiles-1.0.5-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ee0822ce1b8a14fe5a066f93edd20aada932acfe348bede8aa2149f1a4489512"}, + {file = "watchfiles-1.0.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a0dbcb1c2d8f2ab6e0a81c6699b236932bd264d4cef1ac475858d16c403de74d"}, + {file = "watchfiles-1.0.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a2014a2b18ad3ca53b1f6c23f8cd94a18ce930c1837bd891262c182640eb40a6"}, + {file = "watchfiles-1.0.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10f6ae86d5cb647bf58f9f655fcf577f713915a5d69057a0371bc257e2553234"}, + {file = "watchfiles-1.0.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:1a7bac2bde1d661fb31f4d4e8e539e178774b76db3c2c17c4bb3e960a5de07a2"}, + {file = "watchfiles-1.0.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ab626da2fc1ac277bbf752446470b367f84b50295264d2d313e28dc4405d663"}, + {file = 
"watchfiles-1.0.5-cp312-cp312-win32.whl", hash = "sha256:9f4571a783914feda92018ef3901dab8caf5b029325b5fe4558c074582815249"}, + {file = "watchfiles-1.0.5-cp312-cp312-win_amd64.whl", hash = "sha256:360a398c3a19672cf93527f7e8d8b60d8275119c5d900f2e184d32483117a705"}, + {file = "watchfiles-1.0.5-cp312-cp312-win_arm64.whl", hash = "sha256:1a2902ede862969077b97523987c38db28abbe09fb19866e711485d9fbf0d417"}, + {file = "watchfiles-1.0.5-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:0b289572c33a0deae62daa57e44a25b99b783e5f7aed81b314232b3d3c81a11d"}, + {file = "watchfiles-1.0.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a056c2f692d65bf1e99c41045e3bdcaea3cb9e6b5a53dcaf60a5f3bd95fc9763"}, + {file = "watchfiles-1.0.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9dca99744991fc9850d18015c4f0438865414e50069670f5f7eee08340d8b40"}, + {file = "watchfiles-1.0.5-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:894342d61d355446d02cd3988a7326af344143eb33a2fd5d38482a92072d9563"}, + {file = "watchfiles-1.0.5-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ab44e1580924d1ffd7b3938e02716d5ad190441965138b4aa1d1f31ea0877f04"}, + {file = "watchfiles-1.0.5-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d6f9367b132078b2ceb8d066ff6c93a970a18c3029cea37bfd7b2d3dd2e5db8f"}, + {file = "watchfiles-1.0.5-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f2e55a9b162e06e3f862fb61e399fe9f05d908d019d87bf5b496a04ef18a970a"}, + {file = "watchfiles-1.0.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0125f91f70e0732a9f8ee01e49515c35d38ba48db507a50c5bdcad9503af5827"}, + {file = "watchfiles-1.0.5-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:13bb21f8ba3248386337c9fa51c528868e6c34a707f729ab041c846d52a0c69a"}, + {file = "watchfiles-1.0.5-cp313-cp313-musllinux_1_1_x86_64.whl", hash = 
"sha256:839ebd0df4a18c5b3c1b890145b5a3f5f64063c2a0d02b13c76d78fe5de34936"}, + {file = "watchfiles-1.0.5-cp313-cp313-win32.whl", hash = "sha256:4a8ec1e4e16e2d5bafc9ba82f7aaecfeec990ca7cd27e84fb6f191804ed2fcfc"}, + {file = "watchfiles-1.0.5-cp313-cp313-win_amd64.whl", hash = "sha256:f436601594f15bf406518af922a89dcaab416568edb6f65c4e5bbbad1ea45c11"}, + {file = "watchfiles-1.0.5-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:2cfb371be97d4db374cba381b9f911dd35bb5f4c58faa7b8b7106c8853e5d225"}, + {file = "watchfiles-1.0.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a3904d88955fda461ea2531fcf6ef73584ca921415d5cfa44457a225f4a42bc1"}, + {file = "watchfiles-1.0.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2b7a21715fb12274a71d335cff6c71fe7f676b293d322722fe708a9ec81d91f5"}, + {file = "watchfiles-1.0.5-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:dfd6ae1c385ab481766b3c61c44aca2b3cd775f6f7c0fa93d979ddec853d29d5"}, + {file = "watchfiles-1.0.5-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b659576b950865fdad31fa491d31d37cf78b27113a7671d39f919828587b429b"}, + {file = "watchfiles-1.0.5-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1909e0a9cd95251b15bff4261de5dd7550885bd172e3536824bf1cf6b121e200"}, + {file = "watchfiles-1.0.5-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:832ccc221927c860e7286c55c9b6ebcc0265d5e072f49c7f6456c7798d2b39aa"}, + {file = "watchfiles-1.0.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85fbb6102b3296926d0c62cfc9347f6237fb9400aecd0ba6bbda94cae15f2b3b"}, + {file = "watchfiles-1.0.5-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:15ac96dd567ad6c71c71f7b2c658cb22b7734901546cd50a475128ab557593ca"}, + {file = "watchfiles-1.0.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4b6227351e11c57ae997d222e13f5b6f1f0700d84b8c52304e8675d33a808382"}, + {file = 
"watchfiles-1.0.5-cp39-cp39-win32.whl", hash = "sha256:974866e0db748ebf1eccab17862bc0f0303807ed9cda465d1324625b81293a18"}, + {file = "watchfiles-1.0.5-cp39-cp39-win_amd64.whl", hash = "sha256:9848b21ae152fe79c10dd0197304ada8f7b586d3ebc3f27f43c506e5a52a863c"}, + {file = "watchfiles-1.0.5-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:f59b870db1f1ae5a9ac28245707d955c8721dd6565e7f411024fa374b5362d1d"}, + {file = "watchfiles-1.0.5-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:9475b0093767e1475095f2aeb1d219fb9664081d403d1dff81342df8cd707034"}, + {file = "watchfiles-1.0.5-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fc533aa50664ebd6c628b2f30591956519462f5d27f951ed03d6c82b2dfd9965"}, + {file = "watchfiles-1.0.5-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fed1cd825158dcaae36acce7b2db33dcbfd12b30c34317a88b8ed80f0541cc57"}, + {file = "watchfiles-1.0.5-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:554389562c29c2c182e3908b149095051f81d28c2fec79ad6c8997d7d63e0009"}, + {file = "watchfiles-1.0.5-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:a74add8d7727e6404d5dc4dcd7fac65d4d82f95928bbee0cf5414c900e86773e"}, + {file = "watchfiles-1.0.5-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb1489f25b051a89fae574505cc26360c8e95e227a9500182a7fe0afcc500ce0"}, + {file = "watchfiles-1.0.5-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0901429650652d3f0da90bad42bdafc1f9143ff3605633c455c999a2d786cac"}, + {file = "watchfiles-1.0.5.tar.gz", hash = "sha256:b7529b5dcc114679d43827d8c35a07c493ad6f083633d573d81c660abc5979e9"}, ] [package.dependencies] @@ -4821,80 +5038,80 @@ test = ["websockets"] [[package]] name = "websockets" -version = "15.0" +version = "15.0.1" description = "An implementation of the WebSocket Protocol (RFC 6455 & 7692)" optional = false python-versions = ">=3.9" files = [ - {file = 
"websockets-15.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:5e6ee18a53dd5743e6155b8ff7e8e477c25b29b440f87f65be8165275c87fef0"}, - {file = "websockets-15.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ee06405ea2e67366a661ed313e14cf2a86e84142a3462852eb96348f7219cee3"}, - {file = "websockets-15.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8711682a629bbcaf492f5e0af72d378e976ea1d127a2d47584fa1c2c080b436b"}, - {file = "websockets-15.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94c4a9b01eede952442c088d415861b0cf2053cbd696b863f6d5022d4e4e2453"}, - {file = "websockets-15.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:45535fead66e873f411c1d3cf0d3e175e66f4dd83c4f59d707d5b3e4c56541c4"}, - {file = "websockets-15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e389efe46ccb25a1f93d08c7a74e8123a2517f7b7458f043bd7529d1a63ffeb"}, - {file = "websockets-15.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:67a04754d121ea5ca39ddedc3f77071651fb5b0bc6b973c71c515415b44ed9c5"}, - {file = "websockets-15.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:bd66b4865c8b853b8cca7379afb692fc7f52cf898786537dfb5e5e2d64f0a47f"}, - {file = "websockets-15.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:a4cc73a6ae0a6751b76e69cece9d0311f054da9b22df6a12f2c53111735657c8"}, - {file = "websockets-15.0-cp310-cp310-win32.whl", hash = "sha256:89da58e4005e153b03fe8b8794330e3f6a9774ee9e1c3bd5bc52eb098c3b0c4f"}, - {file = "websockets-15.0-cp310-cp310-win_amd64.whl", hash = "sha256:4ff380aabd7a74a42a760ee76c68826a8f417ceb6ea415bd574a035a111fd133"}, - {file = "websockets-15.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:dd24c4d256558429aeeb8d6c24ebad4e982ac52c50bc3670ae8646c181263965"}, - {file = "websockets-15.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = 
"sha256:f83eca8cbfd168e424dfa3b3b5c955d6c281e8fc09feb9d870886ff8d03683c7"}, - {file = "websockets-15.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4095a1f2093002c2208becf6f9a178b336b7572512ee0a1179731acb7788e8ad"}, - {file = "websockets-15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb915101dfbf318486364ce85662bb7b020840f68138014972c08331458d41f3"}, - {file = "websockets-15.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:45d464622314973d78f364689d5dbb9144e559f93dca11b11af3f2480b5034e1"}, - {file = "websockets-15.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ace960769d60037ca9625b4c578a6f28a14301bd2a1ff13bb00e824ac9f73e55"}, - {file = "websockets-15.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c7cd4b1015d2f60dfe539ee6c95bc968d5d5fad92ab01bb5501a77393da4f596"}, - {file = "websockets-15.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:4f7290295794b5dec470867c7baa4a14182b9732603fd0caf2a5bf1dc3ccabf3"}, - {file = "websockets-15.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:3abd670ca7ce230d5a624fd3d55e055215d8d9b723adee0a348352f5d8d12ff4"}, - {file = "websockets-15.0-cp311-cp311-win32.whl", hash = "sha256:110a847085246ab8d4d119632145224d6b49e406c64f1bbeed45c6f05097b680"}, - {file = "websockets-15.0-cp311-cp311-win_amd64.whl", hash = "sha256:8d7bbbe2cd6ed80aceef2a14e9f1c1b61683194c216472ed5ff33b700e784e37"}, - {file = "websockets-15.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:cccc18077acd34c8072578394ec79563664b1c205f7a86a62e94fafc7b59001f"}, - {file = "websockets-15.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d4c22992e24f12de340ca5f824121a5b3e1a37ad4360b4e1aaf15e9d1c42582d"}, - {file = "websockets-15.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1206432cc6c644f6fc03374b264c5ff805d980311563202ed7fef91a38906276"}, - {file = 
"websockets-15.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d3cc75ef3e17490042c47e0523aee1bcc4eacd2482796107fd59dd1100a44bc"}, - {file = "websockets-15.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b89504227a5311610e4be16071465885a0a3d6b0e82e305ef46d9b064ce5fb72"}, - {file = "websockets-15.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56e3efe356416bc67a8e093607315951d76910f03d2b3ad49c4ade9207bf710d"}, - {file = "websockets-15.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0f2205cdb444a42a7919690238fb5979a05439b9dbb73dd47c863d39640d85ab"}, - {file = "websockets-15.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:aea01f40995fa0945c020228ab919b8dfc93fc8a9f2d3d705ab5b793f32d9e99"}, - {file = "websockets-15.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a9f8e33747b1332db11cf7fcf4a9512bef9748cb5eb4d3f7fbc8c30d75dc6ffc"}, - {file = "websockets-15.0-cp312-cp312-win32.whl", hash = "sha256:32e02a2d83f4954aa8c17e03fe8ec6962432c39aca4be7e8ee346b05a3476904"}, - {file = "websockets-15.0-cp312-cp312-win_amd64.whl", hash = "sha256:ffc02b159b65c05f2ed9ec176b715b66918a674bd4daed48a9a7a590dd4be1aa"}, - {file = "websockets-15.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:d2244d8ab24374bed366f9ff206e2619345f9cd7fe79aad5225f53faac28b6b1"}, - {file = "websockets-15.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:3a302241fbe825a3e4fe07666a2ab513edfdc6d43ce24b79691b45115273b5e7"}, - {file = "websockets-15.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:10552fed076757a70ba2c18edcbc601c7637b30cdfe8c24b65171e824c7d6081"}, - {file = "websockets-15.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c53f97032b87a406044a1c33d1e9290cc38b117a8062e8a8b285175d7e2f99c9"}, - {file = 
"websockets-15.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1caf951110ca757b8ad9c4974f5cac7b8413004d2f29707e4d03a65d54cedf2b"}, - {file = "websockets-15.0-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8bf1ab71f9f23b0a1d52ec1682a3907e0c208c12fef9c3e99d2b80166b17905f"}, - {file = "websockets-15.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:bfcd3acc1a81f106abac6afd42327d2cf1e77ec905ae11dc1d9142a006a496b6"}, - {file = "websockets-15.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:c8c5c8e1bac05ef3c23722e591ef4f688f528235e2480f157a9cfe0a19081375"}, - {file = "websockets-15.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:86bfb52a9cfbcc09aba2b71388b0a20ea5c52b6517c0b2e316222435a8cdab72"}, - {file = "websockets-15.0-cp313-cp313-win32.whl", hash = "sha256:26ba70fed190708551c19a360f9d7eca8e8c0f615d19a574292b7229e0ae324c"}, - {file = "websockets-15.0-cp313-cp313-win_amd64.whl", hash = "sha256:ae721bcc8e69846af00b7a77a220614d9b2ec57d25017a6bbde3a99473e41ce8"}, - {file = "websockets-15.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c348abc5924caa02a62896300e32ea80a81521f91d6db2e853e6b1994017c9f6"}, - {file = "websockets-15.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5294fcb410ed0a45d5d1cdedc4e51a60aab5b2b3193999028ea94afc2f554b05"}, - {file = "websockets-15.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c24ba103ecf45861e2e1f933d40b2d93f5d52d8228870c3e7bf1299cd1cb8ff1"}, - {file = "websockets-15.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cc8821a03bcfb36e4e4705316f6b66af28450357af8a575dc8f4b09bf02a3dee"}, - {file = "websockets-15.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ffc5ae23ada6515f31604f700009e2df90b091b67d463a8401c1d8a37f76c1d7"}, - {file = 
"websockets-15.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7ac67b542505186b3bbdaffbc303292e1ee9c8729e5d5df243c1f20f4bb9057e"}, - {file = "websockets-15.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:c86dc2068f1c5ca2065aca34f257bbf4f78caf566eb230f692ad347da191f0a1"}, - {file = "websockets-15.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:30cff3ef329682b6182c01c568f551481774c476722020b8f7d0daacbed07a17"}, - {file = "websockets-15.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:98dcf978d4c6048965d1762abd534c9d53bae981a035bfe486690ba11f49bbbb"}, - {file = "websockets-15.0-cp39-cp39-win32.whl", hash = "sha256:37d66646f929ae7c22c79bc73ec4074d6db45e6384500ee3e0d476daf55482a9"}, - {file = "websockets-15.0-cp39-cp39-win_amd64.whl", hash = "sha256:24d5333a9b2343330f0f4eb88546e2c32a7f5c280f8dd7d3cc079beb0901781b"}, - {file = "websockets-15.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:b499caef4bca9cbd0bd23cd3386f5113ee7378094a3cb613a2fa543260fe9506"}, - {file = "websockets-15.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:17f2854c6bd9ee008c4b270f7010fe2da6c16eac5724a175e75010aacd905b31"}, - {file = "websockets-15.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:89f72524033abbfde880ad338fd3c2c16e31ae232323ebdfbc745cbb1b3dcc03"}, - {file = "websockets-15.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1657a9eecb29d7838e3b415458cc494e6d1b194f7ac73a34aa55c6fb6c72d1f3"}, - {file = "websockets-15.0-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e413352a921f5ad5d66f9e2869b977e88d5103fc528b6deb8423028a2befd842"}, - {file = "websockets-15.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:8561c48b0090993e3b2a54db480cab1d23eb2c5735067213bb90f402806339f5"}, - {file = 
"websockets-15.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:190bc6ef8690cd88232a038d1b15714c258f79653abad62f7048249b09438af3"}, - {file = "websockets-15.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:327adab7671f3726b0ba69be9e865bba23b37a605b585e65895c428f6e47e766"}, - {file = "websockets-15.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2bd8ef197c87afe0a9009f7a28b5dc613bfc585d329f80b7af404e766aa9e8c7"}, - {file = "websockets-15.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:789c43bf4a10cd067c24c321238e800b8b2716c863ddb2294d2fed886fa5a689"}, - {file = "websockets-15.0-pp39-pypy39_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7394c0b7d460569c9285fa089a429f58465db930012566c03046f9e3ab0ed181"}, - {file = "websockets-15.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2ea4f210422b912ebe58ef0ad33088bc8e5c5ff9655a8822500690abc3b1232d"}, - {file = "websockets-15.0-py3-none-any.whl", hash = "sha256:51ffd53c53c4442415b613497a34ba0aa7b99ac07f1e4a62db5dcd640ae6c3c3"}, - {file = "websockets-15.0.tar.gz", hash = "sha256:ca36151289a15b39d8d683fd8b7abbe26fc50be311066c5f8dcf3cb8cee107ab"}, + {file = "websockets-15.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d63efaa0cd96cf0c5fe4d581521d9fa87744540d4bc999ae6e08595a1014b45b"}, + {file = "websockets-15.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ac60e3b188ec7574cb761b08d50fcedf9d77f1530352db4eef1707fe9dee7205"}, + {file = "websockets-15.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5756779642579d902eed757b21b0164cd6fe338506a8083eb58af5c372e39d9a"}, + {file = "websockets-15.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0fdfe3e2a29e4db3659dbd5bbf04560cea53dd9610273917799f1cde46aa725e"}, + {file = 
"websockets-15.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4c2529b320eb9e35af0fa3016c187dffb84a3ecc572bcee7c3ce302bfeba52bf"}, + {file = "websockets-15.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac1e5c9054fe23226fb11e05a6e630837f074174c4c2f0fe442996112a6de4fb"}, + {file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:5df592cd503496351d6dc14f7cdad49f268d8e618f80dce0cd5a36b93c3fc08d"}, + {file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:0a34631031a8f05657e8e90903e656959234f3a04552259458aac0b0f9ae6fd9"}, + {file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:3d00075aa65772e7ce9e990cab3ff1de702aa09be3940d1dc88d5abf1ab8a09c"}, + {file = "websockets-15.0.1-cp310-cp310-win32.whl", hash = "sha256:1234d4ef35db82f5446dca8e35a7da7964d02c127b095e172e54397fb6a6c256"}, + {file = "websockets-15.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:39c1fec2c11dc8d89bba6b2bf1556af381611a173ac2b511cf7231622058af41"}, + {file = "websockets-15.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:823c248b690b2fd9303ba00c4f66cd5e2d8c3ba4aa968b2779be9532a4dad431"}, + {file = "websockets-15.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678999709e68425ae2593acf2e3ebcbcf2e69885a5ee78f9eb80e6e371f1bf57"}, + {file = "websockets-15.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d50fd1ee42388dcfb2b3676132c78116490976f1300da28eb629272d5d93e905"}, + {file = "websockets-15.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d99e5546bf73dbad5bf3547174cd6cb8ba7273062a23808ffea025ecb1cf8562"}, + {file = "websockets-15.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66dd88c918e3287efc22409d426c8f729688d89a0c587c88971a0faa2c2f3792"}, + {file = 
"websockets-15.0.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8dd8327c795b3e3f219760fa603dcae1dcc148172290a8ab15158cf85a953413"}, + {file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8fdc51055e6ff4adeb88d58a11042ec9a5eae317a0a53d12c062c8a8865909e8"}, + {file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:693f0192126df6c2327cce3baa7c06f2a117575e32ab2308f7f8216c29d9e2e3"}, + {file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:54479983bd5fb469c38f2f5c7e3a24f9a4e70594cd68cd1fa6b9340dadaff7cf"}, + {file = "websockets-15.0.1-cp311-cp311-win32.whl", hash = "sha256:16b6c1b3e57799b9d38427dda63edcbe4926352c47cf88588c0be4ace18dac85"}, + {file = "websockets-15.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:27ccee0071a0e75d22cb35849b1db43f2ecd3e161041ac1ee9d2352ddf72f065"}, + {file = "websockets-15.0.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:3e90baa811a5d73f3ca0bcbf32064d663ed81318ab225ee4f427ad4e26e5aff3"}, + {file = "websockets-15.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:592f1a9fe869c778694f0aa806ba0374e97648ab57936f092fd9d87f8bc03665"}, + {file = "websockets-15.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0701bc3cfcb9164d04a14b149fd74be7347a530ad3bbf15ab2c678a2cd3dd9a2"}, + {file = "websockets-15.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8b56bdcdb4505c8078cb6c7157d9811a85790f2f2b3632c7d1462ab5783d215"}, + {file = "websockets-15.0.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0af68c55afbd5f07986df82831c7bff04846928ea8d1fd7f30052638788bc9b5"}, + {file = "websockets-15.0.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64dee438fed052b52e4f98f76c5790513235efaa1ef7f3f2192c392cd7c91b65"}, + {file = 
"websockets-15.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d5f6b181bb38171a8ad1d6aa58a67a6aa9d4b38d0f8c5f496b9e42561dfc62fe"}, + {file = "websockets-15.0.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5d54b09eba2bada6011aea5375542a157637b91029687eb4fdb2dab11059c1b4"}, + {file = "websockets-15.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:3be571a8b5afed347da347bfcf27ba12b069d9d7f42cb8c7028b5e98bbb12597"}, + {file = "websockets-15.0.1-cp312-cp312-win32.whl", hash = "sha256:c338ffa0520bdb12fbc527265235639fb76e7bc7faafbb93f6ba80d9c06578a9"}, + {file = "websockets-15.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:fcd5cf9e305d7b8338754470cf69cf81f420459dbae8a3b40cee57417f4614a7"}, + {file = "websockets-15.0.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ee443ef070bb3b6ed74514f5efaa37a252af57c90eb33b956d35c8e9c10a1931"}, + {file = "websockets-15.0.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5a939de6b7b4e18ca683218320fc67ea886038265fd1ed30173f5ce3f8e85675"}, + {file = "websockets-15.0.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:746ee8dba912cd6fc889a8147168991d50ed70447bf18bcda7039f7d2e3d9151"}, + {file = "websockets-15.0.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:595b6c3969023ecf9041b2936ac3827e4623bfa3ccf007575f04c5a6aa318c22"}, + {file = "websockets-15.0.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c714d2fc58b5ca3e285461a4cc0c9a66bd0e24c5da9911e30158286c9b5be7f"}, + {file = "websockets-15.0.1-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f3c1e2ab208db911594ae5b4f79addeb3501604a165019dd221c0bdcabe4db8"}, + {file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:229cf1d3ca6c1804400b0a9790dc66528e08a6a1feec0d5040e8b9eb14422375"}, + {file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_i686.whl", hash = 
"sha256:756c56e867a90fb00177d530dca4b097dd753cde348448a1012ed6c5131f8b7d"}, + {file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:558d023b3df0bffe50a04e710bc87742de35060580a293c2a984299ed83bc4e4"}, + {file = "websockets-15.0.1-cp313-cp313-win32.whl", hash = "sha256:ba9e56e8ceeeedb2e080147ba85ffcd5cd0711b89576b83784d8605a7df455fa"}, + {file = "websockets-15.0.1-cp313-cp313-win_amd64.whl", hash = "sha256:e09473f095a819042ecb2ab9465aee615bd9c2028e4ef7d933600a8401c79561"}, + {file = "websockets-15.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:5f4c04ead5aed67c8a1a20491d54cdfba5884507a48dd798ecaf13c74c4489f5"}, + {file = "websockets-15.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:abdc0c6c8c648b4805c5eacd131910d2a7f6455dfd3becab248ef108e89ab16a"}, + {file = "websockets-15.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a625e06551975f4b7ea7102bc43895b90742746797e2e14b70ed61c43a90f09b"}, + {file = "websockets-15.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d591f8de75824cbb7acad4e05d2d710484f15f29d4a915092675ad3456f11770"}, + {file = "websockets-15.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:47819cea040f31d670cc8d324bb6435c6f133b8c7a19ec3d61634e62f8d8f9eb"}, + {file = "websockets-15.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac017dd64572e5c3bd01939121e4d16cf30e5d7e110a119399cf3133b63ad054"}, + {file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:4a9fac8e469d04ce6c25bb2610dc535235bd4aa14996b4e6dbebf5e007eba5ee"}, + {file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:363c6f671b761efcb30608d24925a382497c12c506b51661883c3e22337265ed"}, + {file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:2034693ad3097d5355bfdacfffcbd3ef5694f9718ab7f29c29689a9eae841880"}, + {file = 
"websockets-15.0.1-cp39-cp39-win32.whl", hash = "sha256:3b1ac0d3e594bf121308112697cf4b32be538fb1444468fb0a6ae4feebc83411"}, + {file = "websockets-15.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:b7643a03db5c95c799b89b31c036d5f27eeb4d259c798e878d6937d71832b1e4"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0c9e74d766f2818bb95f84c25be4dea09841ac0f734d1966f415e4edfc4ef1c3"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1009ee0c7739c08a0cd59de430d6de452a55e42d6b522de7aa15e6f67db0b8e1"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76d1f20b1c7a2fa82367e04982e708723ba0e7b8d43aa643d3dcd404d74f1475"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f29d80eb9a9263b8d109135351caf568cc3f80b9928bccde535c235de55c22d9"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b359ed09954d7c18bbc1680f380c7301f92c60bf924171629c5db97febb12f04"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cad21560da69f4ce7658ca2cb83138fb4cf695a2ba3e475e0559e05991aa8122"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7f493881579c90fc262d9cdbaa05a6b54b3811c2f300766748db79f098db9940"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:47b099e1f4fbc95b701b6e85768e1fcdaf1630f3cbe4765fa216596f12310e2e"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67f2b6de947f8c757db2db9c71527933ad0019737ec374a8a6be9a956786aaf9"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:d08eb4c2b7d6c41da6ca0600c077e93f5adcfd979cd777d747e9ee624556da4b"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b826973a4a2ae47ba357e4e82fa44a463b8f168e1ca775ac64521442b19e87f"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:21c1fa28a6a7e3cbdc171c694398b6df4744613ce9b36b1a498e816787e28123"}, + {file = "websockets-15.0.1-py3-none-any.whl", hash = "sha256:f7a866fbc1e97b5c617ee4116daaa09b722101d4a3c170c787450ba409f9736f"}, + {file = "websockets-15.0.1.tar.gz", hash = "sha256:82544de02076bafba038ce055ee6412d68da13ab47f0c60cab827346de828dee"}, ] [[package]] @@ -5037,5 +5254,5 @@ type = ["pytest-mypy"] [metadata] lock-version = "2.0" -python-versions = ">=3.10,<=3.12.8" -content-hash = "c4d8cefacb3d477cf4ae5387d7a4970ddbed842e8969cc58add9204549fc0910" +python-versions = "^3.12" +content-hash = "ff34c2275ee15b46c553f276a15c2d8dbd2c8aba9bb893cfd8d8d056f23af983" diff --git a/examples/gemini/python/docs-agent/pyproject.toml b/examples/gemini/python/docs-agent/pyproject.toml index b4389765a..b417bc30d 100644 --- a/examples/gemini/python/docs-agent/pyproject.toml +++ b/examples/gemini/python/docs-agent/pyproject.toml @@ -1,31 +1,38 @@ [tool.poetry] name = "docs-agent" -version = "0.4.1" +version = "0.4.2" description = "" authors = ["Docs Agent contributors"] readme = "README.md" packages = [{include = "docs_agent"}] [tool.poetry.dependencies] -python = ">=3.10,<=3.12.8" +python = "^3.12" +protobuf = "^3.10.0" rich = "^13.3.5" Markdown = "^3.4.3" beautifulsoup4 = "^4.12.2" -protobuf = ">=3.20" ratelimit = "^2.2.1" absl-py = "^1.4.0" python-frontmatter = "^1.0.0" flatdict = "^4.0.1" -google-generativeai = "^0.8.3" +grpcio = "==1.63.0" uuid = "^1.30" pytz = ">=2020.1" -chromadb = "^0.4.22" +chromadb = "^0.6.3" click = "^8.1.7" pyyaml = "^6.0.1" numpy = "^1.26.4" tqdm = "^4.66.2" flask = "^2.3.2" pillow = "^11.0.0" +# Temporarily pin 
pulsar-client to 3.5.0 to avoid missing 3.6.0 dependencies. +pulsar-client = "3.5.0" +pytest = "^8.3.4" +google-genai = "^1.10.0" +google-generativeai = "^0.8.4" +setuptools = "^78.1.0" +mcp = "^1.6.0" [tool.poetry.group.dev.dependencies] ipython = "^8.13.2" @@ -49,6 +56,5 @@ requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" - [tool.poetry.scripts] agent = "docs_agent.interfaces.cli.cli:cli" diff --git a/examples/gemini/python/docs-agent/scripts/autocomplete.sh b/examples/gemini/python/docs-agent/scripts/autocomplete.sh index 4447ea475..7bb8a197e 100644 --- a/examples/gemini/python/docs-agent/scripts/autocomplete.sh +++ b/examples/gemini/python/docs-agent/scripts/autocomplete.sh @@ -66,11 +66,17 @@ _script() ExamineChangesBetweenTwoCommits ReviseContentStructure IndexPageGenerator - DraftFuchsiaReleaseNotes DraftPSA PreparePodcasteFromDir DescribeImages DescribeImagesWithoutMarkdown + DescribeImagesFromDoc + gemini-2.5-flash-preview-04-17 + gemini-2.5-pro-preview-05-06 + gemini-2.0-flash + gemini-1.5-flash + gemini-1.5-pro + models/gemini-2.0-flash models/gemini-1.5-flash models/gemini-1.5-flash-latest models/gemini-1.5-flash-001 diff --git a/examples/gemini/python/docs-agent/scripts/create_file_dictionary.py b/examples/gemini/python/docs-agent/scripts/create_file_dictionary.py new file mode 100644 index 000000000..1476d7b6c --- /dev/null +++ b/examples/gemini/python/docs-agent/scripts/create_file_dictionary.py @@ -0,0 +1,112 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +""" +This script extracts image paths and alt text from markdown, html, or a directory of files. + +Usage: + python create_file_dictionary.py + +Example: + python create_file_dictionary.py my_document.md + python create_file_dictionary.py my_document.html + python create_file_dictionary.py my_documents_folder +""" + +import os +import sys + +from docs_agent.preprocess.extract_image_path import parse_md_html_files_for_images +from docs_agent.utilities.helpers import resolve_path +from docs_agent.utilities.helpers import save_file +from docs_agent.utilities.helpers import create_output_directory + + +def main(input_path: str = sys.argv[1]): + """Main function to extract image paths and alt text, and update markdown files. + + Args: + input_path: The path to the input file or directory. + """ + # Create a file containing image paths and current alt text for input md files + # extract_image_files(input_path) + file_dictionary = walk_directory(input_path) + create_output_directory("agent_out") + print(f"Saving file dictionary to: agent_out/file_alt_text.yaml") + save_file(output_path="agent_out/file_alt_text.yaml", content=file_dictionary) + # Create a file containing image paths to be given to Docs Agent task + save_image_paths(file_dictionary) + + +def walk_directory(input_path: str) -> dict: + """Walks through the input path (file or directory) and generates a dictionary + containing image paths and alt text for each markdown or html file. + + Args: + input_path: The path to the input file or directory. + + Returns: + A dictionary containing the files list. 
+ """ + if input_path.startswith("~/"): + input_path = os.path.expanduser(input_path) + input_path = os.path.realpath(os.path.join(os.getcwd(), input_path)) + files_list = [] + if os.path.isdir(input_path): + for root, _, files in os.walk(resolve_path(input_path)): + for file in files: + file_path = os.path.realpath(os.path.join(root, file)) + file_data = generate_dictionary_md_file(file_path) + # Prevents empty dictionaries from being added + if file_data and "files" in file_data: + files_list.append(file_data["files"]) + else: + file_data = generate_dictionary_md_file(input_path) + if file_data and "files" in file_data: + files_list.append(file_data["files"]) + + # Return a dictionary containing the files list + return {"files": files_list} + + +def generate_dictionary_md_file(input_file: str) -> dict: + """Generates a dictionary containing alt text for each image in the input file. + + Args: + input_file: The path to the input file. + + Returns: + A dictionary containing the alt text for each image in the input file. 
+ """ + md_obj = {} + if input_file.endswith((".md", ".html")): + image_obj = parse_md_html_files_for_images(input_file) + md_obj["files"] = { + "path": resolve_path(input_file), + "images": image_obj["images"], + } + return md_obj + + +def save_image_paths(input_dictionary: dict) -> None: + """Returns the image paths from the input dictionary.""" + image_paths = [] + for file_data in input_dictionary["files"]: + image_paths.extend(file_data["images"]["full_image_paths"]) + create_output_directory("agent_out") + save_file(output_path="agent_out/image_paths.txt", content="\n".join(image_paths)) + + +main() diff --git a/examples/gemini/python/docs-agent/scripts/extract_image_files.py b/examples/gemini/python/docs-agent/scripts/extract_image_files.py new file mode 100644 index 000000000..9a8fe50ef --- /dev/null +++ b/examples/gemini/python/docs-agent/scripts/extract_image_files.py @@ -0,0 +1,146 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +""" +This script extracts image paths from markdown, html, or directory of files. 
+ +Usage: + python extract_image_files.py + +Example: + python extract_image_files.py my_document.md + python extract_image_files.py my_document.html + python extract_image_files.py my_documents_folder +""" +import os +import sys +from absl import logging + +from docs_agent.preprocess.extract_image_path import extract_image_path_from_html +from docs_agent.preprocess.extract_image_path import extract_image_path_from_markdown +from docs_agent.utilities.helpers import resolve_path + + +def main(input: str = sys.argv[1]): + """ + Extracts image paths from markdown, html, or directory of files. + + Args: + input: The path to the input file. + """ + dir_name = "agent_out" + if input.startswith("~/"): + input = os.path.expanduser(input) + input = os.path.realpath(os.path.join(os.getcwd(), input)) + content = "" + if os.path.isdir(input): + for root, _, files in os.walk(resolve_path(input)): + for file in files: + file_path = os.path.realpath(os.path.join(root, file)) + content += parse_files(file_path) + else: + content += parse_files(input) + if not os.path.exists(dir_name): + os.makedirs(dir_name) + save_file(dir_name + "/image_paths.txt", content) + + +def parse_files(input: str) -> str: + """ + Parses the input file and extracts image paths. + + Args: + input: The path to the input file. + + Returns: + A string containing the image paths each on a new line. + """ + if input.endswith(".md"): + file_content = open_file(input) + image_paths = extract_image_path_from_markdown(file_content) + elif input.endswith(".html") or input.endswith(".htm"): + file_content = open_file(input) + image_paths = extract_image_path_from_html(file_content) + else: + image_paths = [] + # This can get noisy so better to log as info. 
+        logging.info("Skipping this file since it is not a markdown or html file: " + input)
+    content = ""
+    for image_path in image_paths:
+        dir_path = os.path.dirname(input)
+        if (image_path.startswith("http://") or image_path.startswith("https://")):
+            logging.warning(f"Skipping this image path since it is a URL: {image_path}\n")
+            continue
+        if image_path.startswith("./"):
+            image_path = image_path.removeprefix("./")
+            image_path = os.path.join(dir_path, image_path)
+            content += image_path + "\n"
+        elif image_path[0].isalpha():
+            image_path = os.path.join(dir_path, image_path)
+            content += image_path + "\n"
+        elif image_path.startswith("/") and "/devsite/" in input:
+            # If the document is part of devsite, the path needs to be trimmed to the
+            # subdirectory (returns devsite tenant path) and then joined with the
+            # image path
+            devsite_path = trim_path_to_subdir(input, "en/")
+            image_path = image_path.removeprefix("/")
+            image_path = os.path.join(devsite_path, image_path)
+            content += image_path + "\n"
+        else:
+            logging.error(f"Skipping this image path because it cannot be parsed: {image_path}\n")
+    return content
+
+
+def open_file(file_path):
+    file_content = ""
+    try:
+        with open(file_path, "r", encoding="utf-8") as auto:
+            file_content = auto.read()
+    except Exception:
+        logging.error(
+            f"Skipping this file because it cannot be opened: {file_path}\n"
+        )
+    return file_content
+
+
+def save_file(output_path, content):
+    try:
+        with open(output_path, "w", encoding="utf-8") as auto:
+            auto.write(content)
+    except Exception:
+        logging.error(
+            f"Cannot save the file to: {output_path}\n"
+        )
+
+
+def trim_path_to_subdir(full_path, subdir):
+    """Trims a full path up to a given subdirectory.
+
+    Args:
+        full_path: The full path to trim.
+        subdir: The subdirectory to trim to (e.g., '/en/').
+
+    Returns:
+        The trimmed path, or the original path if the subdirectory is not found.
+    """
+    try:
+        index = full_path.index(subdir)
+        return full_path[: index + len(subdir)]
+    except ValueError:
+        return full_path
+
+main()
\ No newline at end of file
diff --git a/examples/gemini/python/docs-agent/scripts/extract_replace_image_alt_text.py b/examples/gemini/python/docs-agent/scripts/extract_replace_image_alt_text.py
new file mode 100644
index 000000000..4112d646d
--- /dev/null
+++ b/examples/gemini/python/docs-agent/scripts/extract_replace_image_alt_text.py
@@ -0,0 +1,340 @@
+#
+# Copyright 2023 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+"""
+This script extracts image paths and alt text from markdown, html, or a
+directory of files, generates new alt text, and updates the files in place.
+
+Usage:
+    python extract_replace_image_alt_text.py
+
+Example:
+    python extract_replace_image_alt_text.py my_document.md
+    python extract_replace_image_alt_text.py my_document.html
+    python extract_replace_image_alt_text.py my_documents_folder
+"""
+
+import os
+import sys
+import re
+
+from absl import logging
+from docs_agent.interfaces import run_console as console
+from docs_agent.preprocess.extract_image_path import extract_image_path_from_html
+from docs_agent.preprocess.extract_image_path import extract_image_path_from_markdown
+from docs_agent.utilities.config import return_config_and_product
+from docs_agent.utilities.helpers import resolve_path
+import yaml
+
+
+def main(input_path: str = sys.argv[1]):
+    """
+    Main function to extract image paths and alt text, and update markdown files.
+ + Args: + input_path: The path to the input file or directory. + """ + # Create a file containing image paths and alt text + create_image_paths_file(input_path, replace_alt_text=True) + # Update the markdown files in place with the new image paths and alt text + update_markdown_files(yaml_file_path="agent_out/file_alt_text.yaml") + + +def create_image_paths_file(input_path: str, replace_alt_text: bool = False)-> None: + """ + Creates a file containing image paths and alt text. + + Args: + input_path: The path to the input file. + replace_alt_text: Whether to replace the alt text. + """ + dir_name = "agent_out" + if input_path.startswith("~/"): + input_path = os.path.expanduser(input_path) + input_path = os.path.realpath(os.path.join(os.getcwd(), input_path)) + paths_plain_text = "" + file_alt_text = {} + if os.path.isdir(input_path): + for root, _, files in os.walk(resolve_path(input_path)): + for file in files: + file_path = os.path.realpath(os.path.join(root, file)) + image_obj = parse_files(file_path) + for path in image_obj["full_image_paths"]: + paths_plain_text += path + "\n" + if replace_alt_text: + file_alt_text.update(generate_alt_text_dictionary(file_path, replace_alt_text=replace_alt_text)) + else: + image_obj = parse_files(os.path.realpath(input_path)) + for path in image_obj["full_image_paths"]: + paths_plain_text += path + "\n" + if replace_alt_text: + file_alt_text = generate_alt_text_dictionary(input_path, replace_alt_text=replace_alt_text) + if not os.path.exists(dir_name): + os.makedirs(dir_name) + if replace_alt_text: + save_file(dir_name + "/file_alt_text.yaml", yaml.dump(file_alt_text)) + save_file(dir_name + "/image_paths.txt", paths_plain_text) + + +def generate_alt_text_dictionary(input_file: str, replace_alt_text: bool = False)-> dict: + """ + Generates a dictionary containing alt text for each image in the input file. + + Args: + input_file: The path to the input file. + replace_alt_text: Whether to replace the alt text. 
+
+    Returns:
+        A dictionary containing the alt text for each image in the input file.
+    """
+    prompt = """When you generate a response for alt text, your suggestion should
+    not start with Picture of, Image of, or Screenshot of. Your new alt text
+    suggestion must be fewer than 125 characters. Do not exceed 125 characters.
+    Provide the option that is most suitable for alt text. Output only the alt
+    text suggestion. Do not include any explanations or commentary. Do not include
+    end punctuation. Using the above information as context, provide concise,
+    descriptive alt text for this image that captures its essence and is suitable
+    for users with visual impairments. Use any existing alt text found in the
+    information above for context."""
+    paths_plain_text = ""
+    summary = ""
+    file_alt_text = {}
+    loaded_config, product_config = return_config_and_product(
+        config_file="config.yaml", product=[""]
+    )
+    if input_file.endswith(".md") or input_file.endswith(".html"):
+        # file_content = open_file(input_file)
+        image_obj = parse_files(input_file)
+        if image_obj["full_image_paths"]:
+            print(f"Generating summary for: {input_file}")
+            summary = console.ask_model_with_file(product_configs=product_config,
+                                                  question="Summarize this file.",
+                                                  file=input_file,
+                                                  return_output=True)
+        else:
+            print(f"No images found for: {input_file}")
+            return file_alt_text
+        alt_texts = []
+        for path in image_obj["full_image_paths"]:
+            paths_plain_text += path + "\n"
+            if replace_alt_text:
+                print(f"Generating alt text for: {path}")
+                alt_text = console.ask_model_with_file(product_configs=product_config,
+                                                       question=summary + "\n" + prompt,
+                                                       file=path,
+                                                       return_output=True
+                                                       )
+                if alt_text is None:
+                    alt_texts.append("")
+                else:
+                    alt_texts.append(alt_text.strip())
+        file_alt_text[input_file] = {"page_summary": summary.strip(),
+                                     "image_paths": image_obj["image_paths"],
+                                     "full_image_paths": image_obj["full_image_paths"],
+                                     "alt_texts": alt_texts}
+    return file_alt_text
+
+
+def parse_files(input_file: str) -> dict[str, list[str]]:
+    """
+    Parses a file (markdown or html) to extract image paths.
+
+    Args:
+        input_file: The path to the input file.
+
+    Returns:
+        A dictionary containing the image paths and full image paths.
+    """
+    if input_file.endswith(".md"):
+        file_content = open_file(input_file)
+        image_paths = extract_image_path_from_markdown(file_content)
+    elif input_file.endswith(".html") or input_file.endswith(".htm"):
+        file_content = open_file(input_file)
+        image_paths = extract_image_path_from_html(file_content)
+    else:
+        image_paths = []
+        # This can get noisy so better to log as info.
+        logging.info("Skipping this file since it is not a markdown or html file: " + input_file)
+    image_obj = {}
+    full_image_paths = []
+    for image_path in image_paths:
+        dir_path = os.path.dirname(input_file)
+        if (image_path.startswith("http://") or image_path.startswith("https://")):
+            logging.warning(f"Skipping this image path since it is a URL: {image_path}\n")
+            continue
+        if image_path.startswith("./"):
+            image_path = image_path.removeprefix("./")
+            image_path = os.path.join(dir_path, image_path)
+            full_image_paths.append(image_path)
+        elif image_path[0].isalpha():
+            image_path = os.path.join(dir_path, image_path)
+            full_image_paths.append(image_path)
+        elif image_path.startswith("/") and "/devsite/" in input_file:
+            # If the document is part of devsite, the path needs to be trimmed to the
+            # subdirectory (returns devsite tenant path) and then joined with the
+            # image path
+            devsite_path = trim_path_to_subdir(input_file, "en/")
+            image_path = image_path.removeprefix("/")
+            image_path = os.path.join(devsite_path, image_path)
+            full_image_paths.append(image_path)
+        else:
+            logging.error(f"Skipping this image path because it cannot be parsed: {image_path}\n")
+    image_obj["full_image_paths"] = full_image_paths
+    image_obj["image_paths"] = image_paths
+    return image_obj
+
+
+def open_file(file_path):
+    """
+    Opens a file and returns its content.
+
+    Args:
+        file_path: The path to the file.
+
+    Returns:
+        The content of the file as a string, or an empty string if the file
+        cannot be opened.
+    """
+    file_content = ""
+    try:
+        with open(file_path, "r", encoding="utf-8") as auto:
+            file_content = auto.read()
+    except Exception:
+        logging.error(
+            f"Skipping this file because it cannot be opened: {file_path}\n"
+        )
+    return file_content
+
+
+def save_file(output_path, content):
+    """
+    Saves content to a file.
+
+    Args:
+        output_path: The path to the output file.
+        content: The content to be written to the file.
+    """
+    try:
+        with open(output_path, "w", encoding="utf-8") as auto:
+            auto.write(content)
+    except Exception:
+        logging.error(
+            f"Cannot save the file to: {output_path}\n"
+        )
+
+
+def process_markdown_with_yaml(yaml_file_path: str) -> dict[str, str]:
+    """
+    Reads a YAML file, processes the referenced Markdown files (replacing
+    image paths and adding alt text), and updates the Markdown files
+    in place.
+
+    Args:
+        yaml_file_path: Path to the YAML file.
+
+    Returns:
+        A dictionary containing the modified markdown content.
+ """ + try: + with open(yaml_file_path, "r", encoding="utf-8") as yaml_file: + yaml_data = yaml.safe_load(yaml_file) + except (FileNotFoundError, yaml.YAMLError) as e: + print(f"Error reading or parsing YAML file: {e}") + return {} + + modified_markdown_files = {} + + for markdown_file_path, markdown_data in yaml_data.items(): + try: + with open(markdown_file_path, "r", encoding="utf-8") as md_file: + markdown_content = md_file.read() + except FileNotFoundError as e: + print(f"Error reading Markdown file: {markdown_file_path} - {e}") + # Store empty string for failed files + modified_markdown_files[markdown_file_path] = "" + continue # Skip to the next Markdown file + # Extract relevant data from YAML, with checks for existence + if not all(key in markdown_data for key in ["image_paths", "full_image_paths", "alt_texts"]): + print(f"YAML data for {markdown_file_path} is missing required fields.") + modified_markdown_files[markdown_file_path] = "" + continue + + image_paths = markdown_data["image_paths"] + full_image_paths = markdown_data["full_image_paths"] + alt_texts = markdown_data["alt_texts"] + + if len(image_paths) != len(full_image_paths) or len(image_paths) != len(alt_texts): + print(f"Inconsistent image data lengths for {markdown_file_path}.") + modified_markdown_files[markdown_file_path] = "" + continue + + # Create a mapping from short image path to full image path and alt text + image_map = {} + for i in range(len(image_paths)): + image_map[image_paths[i]] = (full_image_paths[i], alt_texts[i]) + + # Function to replace image paths and add alt text + def replace_image(match): + image_path = match.group(1) + if image_path in image_map: + full_path, alt_text = image_map[image_path] + return f"![{alt_text}]({image_path})" + else: + print(f"Warning: No full image path found for: {image_path} in {markdown_file_path}") + return match.group(0) # Return the original Markdown + # Regex to find Markdown image syntax + markdown_content = 
re.sub(r"!\[.*?\]\((.*?)\)", replace_image, markdown_content) + modified_markdown_files[markdown_file_path] = markdown_content + + return modified_markdown_files + + +def update_markdown_files(yaml_file_path: str) -> None: + """ + Updates markdown files with the new image paths and alt text from the YAML file. + + Args: + yaml_file_path: Path to the YAML file containing image data. + """ + modified_markdown = process_markdown_with_yaml(yaml_file_path) + + for file_path, new_content in modified_markdown.items(): + if new_content != "": # Only update if processing was successful + try: + with open(file_path, "w", encoding="utf-8") as f: + f.write(new_content) + print(f"Successfully updated: {file_path}") + except Exception as e: + print(f"Error writing to {file_path}: {e}") + + +def trim_path_to_subdir(full_path, subdir): + """Trims a full path up to a given subdirectory. + + Args: + full_path: The full path to trim. + subdir: The subdirectory to trim to (e.g., '/en/'). + + Returns: + The trimmed path, or the original path if the subdirectory is not found. + """ + + try: + index = full_path.index(subdir) + return full_path[: index + len(subdir)] + except ValueError: + return full_path + +main() \ No newline at end of file diff --git a/examples/gemini/python/docs-agent/scripts/update_files_from_yaml.py b/examples/gemini/python/docs-agent/scripts/update_files_from_yaml.py new file mode 100644 index 000000000..186b07cb3 --- /dev/null +++ b/examples/gemini/python/docs-agent/scripts/update_files_from_yaml.py @@ -0,0 +1,189 @@ +# +# Copyright 2023 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+"""
+This script updates Markdown files with image paths and alt text from a YAML file.
+
+Usage:
+  python update_files_from_yaml.py <yaml_file_path>
+
+Example:
+  python update_files_from_yaml.py agent_out/llm_file_alt_text.yaml
+"""
+
+import re
+import sys
+
+from docs_agent.utilities.helpers import save_file
+import yaml
+
+
+def main(input_path: str = None):
+    """Main function to update Markdown files.
+
+    Args:
+        input_path: The path to the input YAML file.
+    """
+    if input_path is None:
+        # Read the path here rather than binding sys.argv[1] as a default
+        # argument value, which would raise IndexError at import time.
+        if len(sys.argv) < 2:
+            print("Usage: python update_files_from_yaml.py <yaml_file_path>")
+            return
+        input_path = sys.argv[1]
+    # Update the Markdown files in place with the new image paths and alt text
+    update_markdown_files(yaml_file_path=input_path)
+
+
+def process_markdown_with_yaml(yaml_file_path: str) -> dict[str, str]:
+    """
+    Reads a YAML file and processes the referenced Markdown files, replacing
+    image paths and updating alt text with LLM-generated alt text.
+
+    Args:
+        yaml_file_path: Path to the YAML file.
+
+    Returns:
+        A dictionary mapping each Markdown file path to its modified content.
+    """
+    # Read the raw file first: the LLM response may not be a valid YAML file
+    # on its own.
+    try:
+        with open(yaml_file_path, "r", encoding="utf-8") as file:
+            file_content = file.read()
+    except FileNotFoundError as e:
+        print(f"Error: YAML file not found: {yaml_file_path} - {e}")
+        return {}
+
+    # Extract YAML content using regex.
+    match = re.search(r"```yaml\n(.*?)\n```", file_content, re.DOTALL | re.IGNORECASE)
+    if not match:
+        print("Error: No YAML content found within ```yaml ... ``` tags.")
+        return {}
+
+    yaml_content = match.group(1)
+    try:
+        yaml_data = yaml.safe_load(yaml_content)  # Parse the extracted YAML.
+    except yaml.YAMLError as e:
+        print(f"Error parsing YAML content: {e}")
+        return {}
+
+    modified_markdown_files = {}
+
+    # Iterate through the list of files in the YAML
+    if "files" not in yaml_data:
+        print("Error: YAML file does not contain a 'files' list.")
+        return {}
+
+    # Process each Markdown file listed in 'files'
+    for file_data in yaml_data["files"]:
+        markdown_file_path = file_data["path"]
+        try:
+            with open(markdown_file_path, "r", encoding="utf-8") as md_file:
+                markdown_content = md_file.read()
+        except FileNotFoundError as e:
+            print(f"Error reading Markdown file: {markdown_file_path} - {e}")
+            modified_markdown_files[markdown_file_path] = ""
+            continue
+
+        image_data = file_data.get("images")
+        # If no image data is found, skip this file.
+        if not image_data or not image_data.get("full_image_paths"):
+            continue
+
+        image_paths = image_data.get("image_paths", [])
+        full_image_paths = image_data.get("full_image_paths", [])
+        alt_texts = image_data.get("alt_texts", [])  # alt_texts, not llm_alt_texts
+        image_titles = image_data.get("image_titles", [])
+        llm_alt_texts = image_data.get("llm_alt_texts", [])
+
+        # If llm_alt_texts are present, use those; otherwise, fall back to alt_texts,
+        # or an empty string if neither exists.
+        final_alt_texts = []
+        for i in range(max(len(image_paths), len(llm_alt_texts), len(alt_texts))):
+            if i < len(llm_alt_texts):
+                final_alt_texts.append(llm_alt_texts[i])
+            elif i < len(alt_texts):
+                final_alt_texts.append(alt_texts[i])
+            else:
+                final_alt_texts.append("")
+
+        # Ensure image_titles has the same length as other lists
+        final_image_titles = []
+        for i in range(len(image_paths)):
+            if i < len(image_titles):
+                final_image_titles.append(image_titles[i])
+            else:
+                final_image_titles.append("")  # Pad with empty strings
+
+        if not (
+            len(image_paths)
+            == len(full_image_paths)
+            == len(final_alt_texts)
+            == len(final_image_titles)
+        ):
+            print(f"Inconsistent image data lengths for {markdown_file_path}.")
+            modified_markdown_files[markdown_file_path] = ""
+            continue
+
+        # Build a dictionary mapping image paths to image data
+        # (full image path, alt text, image title)
+        image_map = {}
+        for i in range(len(image_paths)):
+            image_map[image_paths[i]] = (
+                full_image_paths[i],
+                final_alt_texts[i],
+                final_image_titles[i],
+            )
+
+        def replace_image(match):
+            image_path = match.group(2).strip()
+            if image_path in image_map:
+                _, alt_text, image_title = image_map[image_path]
+                # Build the Markdown image tag, handling titles
+                if image_title:
+                    return f'![{alt_text}]({image_path} "{image_title}")'
+                else:
+                    return f"![{alt_text}]({image_path})"
+            else:
+                print(
+                    f"Warning: No matching image path found for: {image_path} in {markdown_file_path}"
+                )
+                return match.group(0)
+
+        # Regex handles Markdown images with optional existing titles
+        markdown_content = re.sub(
+            r'!\[(.*?)\]\((.*?)(?:\s+"(.*?)")?\s*\)', replace_image, markdown_content
+        )
+        modified_markdown_files[markdown_file_path] = markdown_content
+
+    return modified_markdown_files
+
+
+def update_markdown_files(yaml_file_path: str) -> None:
+    """
+    Updates the markdown files with the new image paths and alt text from the
+    YAML file.
+
+    Args:
+        yaml_file_path: Path to the YAML file containing the image data.
+ """ + modified_markdown = process_markdown_with_yaml(yaml_file_path) + save_file(output_path="agent_out/md_output.yaml", content=modified_markdown) + + for file_path, new_content in modified_markdown.items(): + if new_content != "": # Only update if processing was successful + try: + with open(file_path, "w", encoding="utf-8") as f: + f.write(new_content) + print(f"Successfully updated: {file_path}") + except Exception as e: + print(f"Error writing to {file_path}: {e}") + + +main() diff --git a/examples/gemini/python/docs-agent/tasks/describe-images-alt-text-no-markdown.yaml b/examples/gemini/python/docs-agent/tasks/describe-images-alt-text-no-markdown.yaml index 8bda27d4a..166f2d1e5 100644 --- a/examples/gemini/python/docs-agent/tasks/describe-images-alt-text-no-markdown.yaml +++ b/examples/gemini/python/docs-agent/tasks/describe-images-alt-text-no-markdown.yaml @@ -1,20 +1,28 @@ tasks: - name: "DescribeImagesWithoutMarkdown" - model: "models/gemini-1.5-flash" - description: "An agent that describes each image in the input directory to help write alt text." - preamble: "When describing an image, keep it short and concise, and focus on the essence of the image." + model: "models/gemini-2.0-flash" + description: > + An agent that describes each image in the input directory to help write alt text. + preamble: > + When describing an image, keep it short and concise, and focus on the essence of the image. steps: - - prompt: "Provide a concise, descriptive alt text for this JPEG image that captures its essence and is suitable for users with visual impairments." + - prompt: > + Provide a concise, descriptive alt text for this JPEG image that captures its essence + and is suitable for users with visual impairments. flags: perfile: "" default_input: "./docs" file_ext: "jpg" - - prompt: "Provide a concise, descriptive alt text for this PNG image that captures its essence and is suitable for users with visual impairments." 
+      - prompt: >
+          Provide a concise, descriptive alt text for this PNG image that captures its essence
+          and is suitable for users with visual impairments.
         flags:
           perfile: ""
           default_input: "./docs"
           file_ext: "png"
-      - prompt: "Provide a concise, descriptive alt text for this GIF image that captures its essence and is suitable for users with visual impairments."
+      - prompt: >
+          Provide a concise, descriptive alt text for this GIF image that captures its essence
+          and is suitable for users with visual impairments.
         flags:
           perfile: ""
           default_input: "./docs"
diff --git a/examples/gemini/python/docs-agent/tasks/describe-images-and-replace.yaml b/examples/gemini/python/docs-agent/tasks/describe-images-and-replace.yaml
new file mode 100644
index 000000000..6613af17b
--- /dev/null
+++ b/examples/gemini/python/docs-agent/tasks/describe-images-and-replace.yaml
@@ -0,0 +1,57 @@
+tasks:
+  - name: "DescribeImagesAndReplace"
+    model: "models/gemini-2.0-flash-thinking-exp"
+    description: >
+      An agent that extracts all image file names in an input doc and
+      generates alt text for the images.
+    preamble: >
+      When you generate a response for alt text, your suggestion should not
+      start with Picture of, Image of, or Screenshot of. Your new alt text
+      suggestion must be fewer than 125 characters. Do not exceed 125
+      characters. Provide the option that is most suitable for alt text.
+      Output only the alt text suggestion. Do not include any explanations
+      or commentary. Do not include end punctuation.
+    steps:
+      - prompt: "create_file_dictionary.py"
+        function: "script"
+        description: >
+          This script extracts all image files found in the input file and
+          stores the list of image file names in the
+          agent_out/files_alt_text.yaml file.
+        flags:
+          script_input: ""
+          default_input: "./README.md"
+      - prompt: >
+          Provide a brief description of each Markdown file in this
+          directory, emphasizing the key content and purpose of each file.
+        flags:
+          allfiles: ""
+          default_input: "./docs"
+          file_ext: "md"
+      - prompt: >
+          Using the above information as context, provide concise,
+          descriptive alt text for this image that captures its essence and
+          is suitable for users with visual impairments. Use any existing
+          alt text found in the information above for context.
+        flags:
+          list_file: "agent_out/image_paths.txt"
+          model: "models/gemini-2.0-flash"
+      - prompt: >
+          Update the provided YAML file and map each description as a key
+          that is parallel to the relevant image path; the key should be
+          "llm_alt_texts" for every description. Make sure that the values
+          are wrapped in double quotes in case the strings contain special
+          characters. Do not wrap your response with triple backticks yaml or
+          add any additional text around your response. Just return a valid
+          YAML file.
+        flags:
+          file: "agent_out/file_alt_text.yaml"
+          out: "llm_file_alt_text.yaml"
+      - prompt: "update_files_from_yaml.py"
+        function: "script"
+        description: >
+          This script updates all files with new alt text from the
+          llm_file_alt_text.yaml file.
+        flags:
+          script_input: "agent_out/llm_file_alt_text.yaml"
+          default_input: "./README.md"
\ No newline at end of file
diff --git a/examples/gemini/python/docs-agent/tasks/describe-images-for-alt-text-task.yaml b/examples/gemini/python/docs-agent/tasks/describe-images-for-alt-text-task.yaml
index b93385d5c..e1e055af1 100644
--- a/examples/gemini/python/docs-agent/tasks/describe-images-for-alt-text-task.yaml
+++ b/examples/gemini/python/docs-agent/tasks/describe-images-for-alt-text-task.yaml
@@ -1,27 +1,36 @@
 tasks:
   - name: "DescribeImages"
-    model: "models/gemini-1.5-flash"
-    description: "An agent that describes each image in the input directory to help write alt text."
-    preamble: "When describing an image, limit the description to 80 or less characters."
+    model: "models/gemini-2.0-flash"
+    description: >
+      An agent that describes each image in the input directory to help write alt text.
+    preamble: >
+      When describing an image, limit the description to 80 characters or fewer.
     steps:
-      - prompt: "Provide a brief description of each Markdown file in this directory, emphasizing the key content and purpose of each file."
+      - prompt: >
+          Provide a brief description of each Markdown file in this directory, emphasizing
+          the key content and purpose of each file.
         flags:
           allfiles: ""
           default_input: "./docs"
           file_ext: "md"
-      - prompt: "Using the Markdown descriptions as context, provide a concise, descriptive alt text for this JPEG image that captures its essence and is suitable for users with visual impairments."
+      - prompt: >
+          Using the Markdown descriptions as context, provide a concise, descriptive alt text for
+          this JPEG image that captures its essence and is suitable for users with visual impairments.
         flags:
          perfile: ""
          default_input: "./docs"
          file_ext: "jpg"
-      - prompt: "Using the Markdown descriptions as context, provide a concise, descriptive alt text for this PNG image that captures its essence and is suitable for users with visual impairments."
+      - prompt: >
+          Using the Markdown descriptions as context, provide a concise, descriptive alt text for
+          this PNG image that captures its essence and is suitable for users with visual impairments.
         flags:
          perfile: ""
          default_input: "./docs"
          file_ext: "png"
-      - prompt: "Using the Markdown descriptions as context, provide a concise, descriptive alt text for this GIF image that captures its essence and is suitable for users with visual impairments."
+      - prompt: >
+          Using the Markdown descriptions as context, provide a concise, descriptive alt text for
+          this GIF image that captures its essence and is suitable for users with visual impairments.
flags: perfile: "" default_input: "./docs" file_ext: "gif" - diff --git a/examples/gemini/python/docs-agent/tasks/describe-images-from-doc-replace.yaml b/examples/gemini/python/docs-agent/tasks/describe-images-from-doc-replace.yaml new file mode 100644 index 000000000..ff9e24462 --- /dev/null +++ b/examples/gemini/python/docs-agent/tasks/describe-images-from-doc-replace.yaml @@ -0,0 +1,20 @@ +tasks: + - name: "DescribeImagesFromDocReplace" + model: "models/gemini-2.0-flash" + description: > + An agent that extracts all image files names in an input doc and generates alt text for the images. + preamble: > + When you generate a response for alt text, your suggestion should not start with Picture of, Image + of, or Screenshot of. Your new alt text suggestion must be fewer than 125 characters. Do not exceed + 125 characters. Provide the option that is most suitable for alt text. Output only the alt text + suggestion. Do not include any explanations or commentary. Do not include end punctuation. + steps: + - prompt: "extract_replace_image_alt_text.py" + function: "script" + description: > + This script extract all image files found in the input file and store the list of image file + names in the agent_out/image_paths.txt file. + flags: + script_input: "" + default_input: "./README.md" + diff --git a/examples/gemini/python/docs-agent/tasks/describe-images-from-doc.yaml b/examples/gemini/python/docs-agent/tasks/describe-images-from-doc.yaml new file mode 100644 index 000000000..927e8f7c0 --- /dev/null +++ b/examples/gemini/python/docs-agent/tasks/describe-images-from-doc.yaml @@ -0,0 +1,37 @@ +tasks: + - name: "DescribeImagesFromDoc" + model: "models/gemini-2.0-flash" + description: > + An agent that extracts all image files names in an input doc and generates alt text for the images. + preamble: > + When you generate a response for alt text, your suggestion should not start with Picture of, + Image of, or Screenshot of. 
Your new alt text suggestion must be fewer than 125 characters. + Do not exceed 125 characters. Provide the option that is most suitable for alt text. Output + only the alt text suggestion. Do not include any explanations or commentary. Do not include + end punctuation. Do not include colon(:). + steps: + - prompt: > + Provide an overview of this file (under 100 words). + flags: + file: "" + default_input: "./README.md" + - prompt: "extract_image_files.py" + function: "script" + description: > + This script extract all image files found in the input file and store the list of image + file names in the agent_out/image_paths.txt file. + flags: + script_input: "" + default_input: "./README.md" + - prompt: > + Using the above information as context, provide concise, descriptive alt text for this image + that captures its essence and is suitable for users with visual impairments. Use any existing + alt text found in the information above for context. + flags: + list_file: "agent_out/image_paths.txt" + - prompt: > + Provide a YAML file that maps each alt text to the image path, where each entry has the - path: + field and the response: field (in next line). Return only the content of the + YAML file without any additional explanation in your response. + flags: + repeat_until: True diff --git a/examples/gemini/python/docs-agent/tasks/extract-workflows-task.yaml b/examples/gemini/python/docs-agent/tasks/extract-workflows-task.yaml index bb22fcb0c..778c13f7b 100644 --- a/examples/gemini/python/docs-agent/tasks/extract-workflows-task.yaml +++ b/examples/gemini/python/docs-agent/tasks/extract-workflows-task.yaml @@ -1,12 +1,22 @@ tasks: - name: "ExtractWorkflows" - model: "models/gemini-1.5-flash-latest" - description: "An agent that extracts workflows from a source doc." + model: "models/gemini-2.0-flash" + description: > + An agent that extracts workflows from a source doc. steps: - - prompt: "Summarize the contents of this document in a concise and informative manner. 
Focus on the key procedures, steps, or workflows described." + - prompt: > + Summarize the contents of this document in a concise and informative manner. + Focus on the key procedures, steps, or workflows described. flags: file: "" default_input: "./README.md" - - prompt: "Identify and list all key workflows described in the document. Provide a brief description for each workflow, highlighting its purpose and key steps." - - prompt: "Identify all command lines used in the workflows described in the document. Focus on command lines that are essential for executing the workflow steps." - - prompt: "For each identified command line, provide a detailed description of its function and purpose. Include specific examples of its usage, showcasing how it is integrated within the workflows." + - prompt: > + Identify and list all key workflows described in the document. Provide a brief + description for each workflow, highlighting its purpose and key steps. + - prompt: > + Identify all command lines used in the workflows described in the document. + Focus on command lines that are essential for executing the workflow steps. + - prompt: > + For each identified command line, provide a detailed description of its function + and purpose. Include specific examples of its usage, showcasing how it is + integrated within the workflows. diff --git a/examples/gemini/python/docs-agent/tasks/help-polish-prompts-task.yaml b/examples/gemini/python/docs-agent/tasks/help-polish-prompts-task.yaml index a1283abed..e2327baa5 100644 --- a/examples/gemini/python/docs-agent/tasks/help-polish-prompts-task.yaml +++ b/examples/gemini/python/docs-agent/tasks/help-polish-prompts-task.yaml @@ -1,13 +1,19 @@ tasks: - name: "HelpPolishPrompts" model: "models/gemini-1.5-flash-latest" - description: "An agent that helps polish prompts in a task file." - preamble: "When writing a prompt, always be direct and concise." + description: > + An agent that helps polish prompts in a task file. 
+    preamble: >
+      When writing a prompt, always be direct and concise.
     steps:
-      - prompt: "Revise the prompts in this task YAML file to improve their effectiveness in generating responses."
+      - prompt: >
+          Revise the prompts in this task YAML file to improve their effectiveness
+          in generating responses.
         flags:
           file: ""
           default_input: "./tasks/index-page-generator-task.yaml"
           file_ext: "yaml"
-      - prompt: "Replace the original prompts in the task YAML file with the revised prompts."
-      - prompt: "Suggest additional prompts relevant to the main task in this task YAML file."
+      - prompt: >
+          Replace the original prompts in the task YAML file with the revised prompts.
+      - prompt: >
+          Suggest additional prompts relevant to the main task in this task YAML file.
diff --git a/examples/gemini/python/docs-agent/tasks/index-page-generator-task.yaml b/examples/gemini/python/docs-agent/tasks/index-page-generator-task.yaml
index 747ddc8a1..26a159095 100644
--- a/examples/gemini/python/docs-agent/tasks/index-page-generator-task.yaml
+++ b/examples/gemini/python/docs-agent/tasks/index-page-generator-task.yaml
@@ -1,15 +1,28 @@
 tasks:
   - name: "IndexPageGenerator"
-    model: "models/gemini-1.5-flash"
-    description: "An agent that generates a draft of an index page from source docs."
-    preamble: "When generating a Markdown file, limit the number of characters per line to 80 or less, and always include a brief description to each page in the list. Never include a file's full path, but use a relative path. Refer to pages using page titles, not file names (however, use the paths and filenames when generating reference links)."
+    model: "models/gemini-2.0-flash"
+    description: >
+      An agent that generates a draft of an index page from source docs.
+    preamble: >
+      When generating a Markdown file, limit the number of characters per line to 80 or fewer,
+      and always include a brief description of each page in the list. Never include a file's
+      full path, but use a relative path. Refer to pages using page titles, not file names
+      (however, use the paths and filenames when generating reference links).
     steps:
-      - prompt: "Provide a brief description of each file in this directory."
+      - prompt: >
+          Provide a brief description of each file in this directory.
         flags:
           allfiles: ""
           default_input: "./docs_agent"
           file_ext: "md"
-      - prompt: "Based on the file descriptions provided, generate a draft of an index page that organizes the pages by related topics. The index page should be designed for new developers who need to quickly understand the structure and contents of the documentation."
-      - prompt: "Please update the introduction paragraph of the index page draft to provide a more helpful and descriptive overview of the documentation. Maintain the remaining content of the draft."
-      - prompt: "Identify key words and phrases that are relevant to the documentation files. Generate a list of Markdown reference-style links that map these keywords to the corresponding files. Include the list of links at the bottom of the index page."
-
+      - prompt: >
+          Based on the file descriptions provided, generate a draft of an index page that
+          organizes the pages by related topics. The index page should be designed for new
+          developers who need to quickly understand the structure and contents of the documentation.
+      - prompt: >
+          Update the introduction paragraph of the index page draft to provide a more helpful and
+          descriptive overview of the documentation. Maintain the remaining content of the draft.
+      - prompt: >
+          Identify key words and phrases that are relevant to the documentation files. Generate
+          a list of Markdown reference-style links that map these keywords to the corresponding
+          files. Include the list of links at the bottom of the index page.
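The core of this patch is the title-aware image regex in `update_files_from_yaml.py`, which can be exercised standalone before running the full task. A minimal sketch follows; the `image_map` entries and the sample Markdown are hypothetical, and only the regex and replacement logic mirror the script:

```python
import re

# Hypothetical mapping from image path to (full path, alt text, title),
# mirroring the image_map built per file in update_files_from_yaml.py.
image_map = {
    "images/setup.png": ("/docs/images/setup.png", "Docs Agent setup flow", ""),
    "images/chat.png": ("/docs/images/chat.png", "Chat UI", "Docs Agent web app"),
}


def replace_image(match):
    image_path = match.group(2).strip()
    if image_path not in image_map:
        return match.group(0)  # Leave unknown images untouched.
    _, alt_text, title = image_map[image_path]
    # Emit a title only when one is mapped, as in the script.
    if title:
        return f'![{alt_text}]({image_path} "{title}")'
    return f"![{alt_text}]({image_path})"


# The same pattern the script uses: alt text, path, and an optional
# quoted title after the path.
pattern = r'!\[(.*?)\]\((.*?)(?:\s+"(.*?)")?\s*\)'

markdown = '![old alt](images/setup.png) and ![x](images/chat.png "old title")'
print(re.sub(pattern, replace_image, markdown))
```

Because unmatched paths fall through to `match.group(0)`, running this over a larger docs tree leaves any image that is missing from the map untouched, which is the same fallback behavior the script relies on.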