diff --git a/sdk/ai/azure-ai-agents/README.md b/sdk/ai/azure-ai-agents/README.md deleted file mode 100644 index a71db514276b..000000000000 --- a/sdk/ai/azure-ai-agents/README.md +++ /dev/null @@ -1,1284 +0,0 @@ - -# Azure AI Agents client library for Python - -Use the AI Agents client library to: - -* **Develop Agents using the Azure AI Agents Service**, leveraging an extensive ecosystem of models, tools, and capabilities from OpenAI, Microsoft, and other LLM providers. The Azure AI Agents Service enables the building of Agents for a wide range of generative AI use cases. -* **Note:** While this package can be used independently, we recommend using the Azure AI Projects client library (azure-ai-projects) for an enhanced experience. -The Projects library provides simplified access to advanced functionality, such as creating and managing agents, enumerating AI models, working with datasets and -managing search indexes, evaluating generative AI performance, and enabling OpenTelemetry tracing. - -[Product documentation](https://aka.ms/azsdk/azure-ai-agents/product-doc) -| [Samples][samples] -| [API reference documentation](https://aka.ms/azsdk/azure-ai-agents/python/reference) -| [Package (PyPI)](https://aka.ms/azsdk/azure-ai-agents/python/package) -| [SDK source code](https://aka.ms/azsdk/azure-ai-agents/python/code) -| [AI Starter Template](https://aka.ms/azsdk/azure-ai-agents/python/ai-starter-template) - -## Reporting issues - -To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-agents" in the title or content. - -## Table of contents - -- [Getting started](#getting-started) - - [Prerequisite](#prerequisite) - - [Install the package](#install-the-package) -- [Key concepts](#key-concepts) - - [Create and authenticate the client](#create-and-authenticate-the-client) -- [Examples](#examples) - - [Create an Agent](#create-agent) with: - - [File Search](#create-agent-with-file-search) - - [Enterprise File Search](#create-agent-with-enterprise-file-search) - - [Code interpreter](#create-agent-with-code-interpreter) - - [Bing grounding](#create-agent-with-bing-grounding) - - [Azure AI Search](#create-agent-with-azure-ai-search) - - [Function call](#create-agent-with-function-call) - - [Azure Function Call](#create-agent-with-azure-function-call) - - [OpenAPI](#create-agent-with-openapi) - - [Fabric data](#create-an-agent-with-fabric) - - [Create thread](#create-thread) with - - [Tool resource](#create-thread-with-tool-resource) - - [Create message](#create-message) with: - - [File search attachment](#create-message-with-file-search-attachment) - - [Code interpreter attachment](#create-message-with-code-interpreter-attachment) - - [Create Message with Image Inputs](#create-message-with-image-inputs) - - [Execute Run, Run_and_Process, or Stream](#execute-run-run_and_process-or-stream) - - [Retrieve message](#retrieve-message) - - [Retrieve file](#retrieve-file) - - [Tear down by deleting resource](#teardown) - - [Tracing](#tracing) - - [Installation](#installation) - - [How to enable tracing](#how-to-enable-tracing) - - [How to trace your own functions](#how-to-trace-your-own-functions) -- [Troubleshooting](#troubleshooting) - - [Logging](#logging) - - [Reporting issues](#reporting-issues) -- [Next steps](#next-steps) -- [Contributing](#contributing) - -## Getting started - -### Prerequisite - -- Python 3.9 or later. 
-- An [Azure subscription][azure_sub].
-- A [project in Azure AI Foundry](https://learn.microsoft.com/azure/ai-studio/how-to/create-projects).
-- The project endpoint string. It can be found on your Azure AI Foundry project overview page, under "Project details". Below we assume the environment variable `PROJECT_ENDPOINT` is defined to hold this value.
-- Microsoft Entra ID is needed to authenticate the client. Your application needs an object that implements the [TokenCredential](https://learn.microsoft.com/python/api/azure-core/azure.core.credentials.tokencredential) interface. Code samples here use [DefaultAzureCredential](https://learn.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential). To get that working, you will need:
-  * An appropriate role assignment. See [Role-based access control in Azure AI Foundry portal](https://learn.microsoft.com/azure/ai-foundry/concepts/rbac-ai-foundry). Roles can be assigned via the "Access Control (IAM)" tab of your Azure AI Project resource in the Azure portal.
-  * The [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli) installed.
-  * To be logged in to your Azure account by running `az login`.
-  * Note that if you have multiple Azure subscriptions, the subscription that contains your Azure AI Project resource must be your default subscription. Run `az account list --output table` to list all your subscriptions and see which one is the default. Run `az account set --subscription "Your Subscription ID or Name"` to change your default subscription.
-
-### Install the package
-
-```bash
-pip install azure-ai-agents
-```
-
-## Key concepts
-
-### Create and authenticate the client
-
-To construct a synchronous client:
-
-```python
-import os
-from azure.ai.agents import AgentsClient
-from azure.identity import DefaultAzureCredential
-
-agents_client = AgentsClient(
-    endpoint=os.environ["PROJECT_ENDPOINT"],
-    credential=DefaultAzureCredential(),
-)
-```
-
-To construct an asynchronous client, install the additional package [aiohttp](https://pypi.org/project/aiohttp/):
-
-```bash
-pip install aiohttp
-```
-
-and update the code above to import `asyncio`, import `AgentsClient` from the `azure.ai.agents.aio` namespace, and import `DefaultAzureCredential` from the `azure.identity.aio` namespace:
-
-```python
-import os
-import asyncio
-from azure.ai.agents.aio import AgentsClient
-from azure.identity.aio import DefaultAzureCredential
-
-agents_client = AgentsClient(
-    endpoint=os.environ["PROJECT_ENDPOINT"],
-    credential=DefaultAzureCredential(),
-)
-```
-
-## Examples
-
-### Create Agent
-
-Before creating an Agent, you need to set up Azure resources to deploy your model. The [Create a New Agent Quickstart](https://learn.microsoft.com/azure/ai-services/agents/quickstart?pivots=programming-language-python-azure) details selecting a model and deploying your Agent setup.
-
-Here is an example of how to create an Agent:
-
-```python
-agent = agents_client.create_agent(
-    model=os.environ["MODEL_DEPLOYMENT_NAME"],
-    name="my-agent",
-    instructions="You are a helpful agent",
-)
-```
-
-To allow Agents to access your resources or custom functions, you need tools. You can pass tools to `create_agent` via either the `toolset` parameter or a combination of `tools` and `tool_resources`.
-
-Here is an example of `toolset`:
-
-```python
-functions = FunctionTool(user_functions)
-code_interpreter = CodeInterpreterTool()
-
-toolset = ToolSet()
-toolset.add(functions)
-toolset.add(code_interpreter)
-
-# Enable automatic execution of tool calls
-agents_client.enable_auto_function_calls(toolset)
-
-agent = agents_client.create_agent(
-    model=os.environ["MODEL_DEPLOYMENT_NAME"],
-    name="my-agent",
-    instructions="You are a helpful agent",
-    toolset=toolset,
-)
-```
-
-Also note that if you use the asynchronous client, you use `AsyncToolSet` instead. Additional information about `AsyncFunctionTool` is discussed in later sections.
-
-Here is an example using `tools` and `tool_resources`:
-
-```python
-file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id])
-
-# Note that both the FileSearchTool definitions (tools) and resources (tool_resources) must be provided,
-# or the agent will be unable to search the file
-agent = agents_client.create_agent(
-    model=os.environ["MODEL_DEPLOYMENT_NAME"],
-    name="my-agent",
-    instructions="You are a helpful agent",
-    tools=file_search_tool.definitions,
-    tool_resources=file_search_tool.resources,
-)
-```
-
-In the following sections, we show sample code using either a `toolset` or a combination of `tools` and `tool_resources`.
-
-### Create Agent with File Search
-
-To perform a file search with an Agent, we first need to upload a file, create a vector store, and associate the file with the vector store. Here is an example:
-
-```python
-file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS)
-print(f"Uploaded file, file ID: {file.id}")
-
-vector_store = agents_client.vector_stores.create_and_poll(file_ids=[file.id], name="my_vectorstore")
-print(f"Created vector store, vector store ID: {vector_store.id}")
-
-# Create file search tool with resources followed by creating agent
-file_search = FileSearchTool(vector_store_ids=[vector_store.id])
-
-agent = agents_client.create_agent(
-    model=os.environ["MODEL_DEPLOYMENT_NAME"],
-    name="my-agent",
-    instructions="Hello, you are a helpful agent that can search information from uploaded files",
-    tools=file_search.definitions,
-    tool_resources=file_search.resources,
-)
-```
-
-### Create Agent with Enterprise File Search
-
-We can upload a file to Azure as shown in the example above, or use an existing Azure Blob Storage resource. The code below demonstrates how this can be achieved. First we upload the file to Azure and create a `VectorStoreDataSource`, which is then used to create a vector store. That vector store is then passed to the `FileSearchTool` constructor.
-
-```python
-# We will upload the local file to Azure and use it for vector store creation.
-asset_uri = os.environ["AZURE_BLOB_URI"] - -# Create a vector store with no file and wait for it to be processed -ds = VectorStoreDataSource(asset_identifier=asset_uri, asset_type=VectorStoreDataSourceAssetType.URI_ASSET) -vector_store = agents_client.vector_stores.create_and_poll(data_sources=[ds], name="sample_vector_store") -print(f"Created vector store, vector store ID: {vector_store.id}") - -# Create a file search tool -file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) - -# Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file -agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=file_search_tool.definitions, - tool_resources=file_search_tool.resources, -) -``` - - - -We also can attach files to the existing vector store. In the code snippet below, we first create an empty vector store and add file to it. - - - -```python -# Create a vector store with no file and wait for it to be processed -vector_store = agents_client.vector_stores.create_and_poll(data_sources=[], name="sample_vector_store") -print(f"Created vector store, vector store ID: {vector_store.id}") - -ds = VectorStoreDataSource(asset_identifier=asset_uri, asset_type=VectorStoreDataSourceAssetType.URI_ASSET) -# Add the file to the vector store or you can supply data sources in the vector store creation -vector_store_file_batch = agents_client.vector_store_file_batches.create_and_poll( - vector_store_id=vector_store.id, data_sources=[ds] -) -print(f"Created vector store file batch, vector store file batch ID: {vector_store_file_batch.id}") - -# Create a file search tool -file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) -``` - - - -### Create Agent with Code Interpreter - -Here is an example to upload a file and use it for code interpreter by an Agent: - - - -```python -file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) -print(f"Uploaded file, file ID: {file.id}") - -code_interpreter = CodeInterpreterTool(file_ids=[file.id]) - -# Create agent with code interpreter tool and tools_resources -agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=code_interpreter.definitions, - tool_resources=code_interpreter.resources, -) -``` - - - -### Create Agent with Bing Grounding - -To enable your Agent to perform search through Bing search API, you use `BingGroundingTool` along with a connection. - -Here is an example: - - - -```python -conn_id = os.environ["AZURE_BING_CONNECTION_ID"] - -# Initialize agent bing tool and add the connection id -bing = BingGroundingTool(connection_id=conn_id) - -# Create agent with the bing tool and process agent run -with agents_client: - agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are a helpful agent", - tools=bing.definitions, - ) -``` - - - -### Create Agent with Azure AI Search - -Azure AI Search is an enterprise search system for high-performance applications. It integrates with Azure OpenAI Service and Azure Machine Learning, offering advanced search technologies like vector search and full-text search. Ideal for knowledge base insights, information discovery, and automation. Creating an Agent with Azure AI Search requires an existing Azure AI Search Index. 
For more information and setup guides, see the [Azure AI Search Tool Guide](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/azure-ai-search?tabs=azurecli%2Cpython&pivots=overview-azure-ai-search).
-
-Here is an example of how to integrate Azure AI Search:
-
-```python
-conn_id = os.environ["AI_AZURE_AI_CONNECTION_ID"]
-
-print(conn_id)
-
-# Initialize agent AI search tool and add the search index connection id
-ai_search = AzureAISearchTool(
-    index_connection_id=conn_id, index_name="sample_index", query_type=AzureAISearchQueryType.SIMPLE, top_k=3, filter=""
-)
-
-# Create agent with AI search tool and process agent run
-with agents_client:
-    agent = agents_client.create_agent(
-        model=os.environ["MODEL_DEPLOYMENT_NAME"],
-        name="my-agent",
-        instructions="You are a helpful agent",
-        tools=ai_search.definitions,
-        tool_resources=ai_search.resources,
-    )
-```
-
-If the agent finds relevant information in the index, the reference
-and annotation are provided in the message response. In the example below, we replace
-the reference placeholder with the actual reference and URL. Please note that to
-get sensible results, the index needs to have "embedding", "token", "category" and "title" fields.
-
-```python
-# Fetch and log all messages
-messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING)
-for message in messages:
-    if message.role == MessageRole.AGENT and message.url_citation_annotations:
-        placeholder_annotations = {
-            annotation.text: f" [see {annotation.url_citation.title}] ({annotation.url_citation.url})"
-            for annotation in message.url_citation_annotations
-        }
-        for message_text in message.text_messages:
-            message_str = message_text.text.value
-            for k, v in placeholder_annotations.items():
-                message_str = message_str.replace(k, v)
-            print(f"{message.role}: {message_str}")
-    else:
-        for message_text in message.text_messages:
-            print(f"{message.role}: {message_text.text.value}")
-```
-
-### Create Agent with Function Call
-
-You can enhance your Agents by defining callback functions as function tools. These can be provided to `create_agent` via either the `toolset` parameter or the combination of `tools` and `tool_resources`. The key distinction is that a `toolset`, together with `enable_auto_function_calls`, lets the SDK execute your functions automatically, whereas with `tools` and `tool_resources` your code is responsible for executing the tool calls.
-
-For more details about the requirements and specification of functions, refer to [Function Tool Specifications](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/FunctionTool.md).
-
-Here is an example using [user functions](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/utils/user_functions.py) in a `toolset`:
-
-```python
-functions = FunctionTool(user_functions)
-toolset = ToolSet()
-toolset.add(functions)
-agents_client.enable_auto_function_calls(toolset)
-
-agent = agents_client.create_agent(
-    model=os.environ["MODEL_DEPLOYMENT_NAME"],
-    name="my-agent",
-    instructions="You are a helpful agent",
-    toolset=toolset,
-)
-```
-
-For asynchronous functions, you must import `AgentsClient` from `azure.ai.agents.aio` and use `AsyncFunctionTool`.
Here is an example using [asynchronous user functions](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_functions_async.py): - -```python -from azure.ai.agents.aio import AgentsClient -``` - - - -```python -functions = AsyncFunctionTool(user_async_functions) - -toolset = AsyncToolSet() -toolset.add(functions) -agents_client.enable_auto_function_calls(toolset) - -agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are a helpful agent", - toolset=toolset, -) -``` - - - -Notice that if `enable_auto_function_calls` is called, the SDK will invoke the functions automatically during `create_and_process` or streaming. If you prefer to execute them manually, refer to [`sample_agents_stream_eventhandler_with_functions.py`](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_functions.py) or -[`sample_agents_functions.py`](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_functions.py) - -### Create Agent With Azure Function Call - -The AI agent leverages Azure Functions triggered asynchronously via Azure Storage Queues. To enable the agent to perform Azure Function calls, you must set up the corresponding `AzureFunctionTool`, specifying input and output queues as well as parameter definitions. - -Example Python snippet illustrating how you create an agent utilizing the Azure Function Tool: - -```python -azure_function_tool = AzureFunctionTool( - name="foo", - description="Get answers from the foo bot.", - parameters={ - "type": "object", - "properties": { - "query": {"type": "string", "description": "The question to ask."}, - "outputqueueuri": {"type": "string", "description": "The full output queue uri."}, - }, - }, - input_queue=AzureFunctionStorageQueue( - queue_name="azure-function-foo-input", - storage_service_endpoint=storage_service_endpoint, - ), - output_queue=AzureFunctionStorageQueue( - queue_name="azure-function-tool-output", - storage_service_endpoint=storage_service_endpoint, - ), -) - -agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="azure-function-agent-foo", - instructions=f"You are a helpful support agent. Use the provided function any time the prompt contains the string 'What would foo say?'. When you invoke the function, ALWAYS specify the output queue uri parameter as '{storage_service_endpoint}/azure-function-tool-output'. Always responds with \"Foo says\" and then the response from the tool.", - tools=azure_function_tool.definitions, -) -print(f"Created agent, agent ID: {agent.id}") -``` - ---- - -**Limitations** - -Currently, the Azure Function integration for the AI Agent has the following limitations: - -- Supported trigger for Azure Function is currently limited to **Queue triggers** only. - HTTP or other trigger types and streaming responses are not supported at this time. - ---- - -**Create and Deploy Azure Function** - -Before you can use the agent with AzureFunctionTool, you need to create and deploy Azure Function. 
-
-Below is an example Python Azure Function responding to queue-triggered messages and placing responses on the output queue:
-
-```python
-import azure.functions as func
-import logging
-import json
-
-app = func.FunctionApp()
-
-
-@app.function_name(name="Foo")
-@app.queue_trigger(
-    arg_name="arguments",
-    queue_name="azure-function-foo-input",
-    connection="AzureWebJobsStorage")
-@app.queue_output(
-    arg_name="outputQueue",
-    queue_name="azure-function-tool-output",
-    connection="AzureWebJobsStorage")
-def foo(arguments: func.QueueMessage, outputQueue: func.Out[str]) -> None:
-    """
-    The function answering the question from the input queue message.
-
-    :param arguments: The queue message, containing the JSON-serialized request.
-    :param outputQueue: The output queue to write messages to.
-    """
-
-    parsed_args = json.loads(arguments.get_body().decode('utf-8'))
-    try:
-        response = {
-            "Value": "Bar",
-            "CorrelationId": parsed_args['CorrelationId']
-        }
-        outputQueue.set(json.dumps(response))
-        logging.info(f'The function returns the following message: {json.dumps(response)}')
-    except Exception as e:
-        logging.error(f"Error processing message: {e}")
-        raise
-```
-
-> **Important:** Both input and output payloads must contain the `CorrelationId`, which must match in request and response.
-
----
-
-**Azure Function Project Creation and Deployment**
-
-To deploy your function to Azure properly, follow Microsoft's official documentation step by step:
-
-[Azure Functions Python Developer Guide](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=windows%2Cbash%2Cazure-cli%2Cbrowser)
-
-**Summary of required steps:**
-
-- Use the Azure CLI or Azure Portal to create an Azure Function App.
-- Create input and output queues in Azure Storage.
-- Deploy your Function code.
-
----
-
-**Verification and Testing of the Azure Function**
-
-To verify that your Azure Function deployment works correctly:
-
-1. Manually place a message of the following form into the input queue (`azure-function-foo-input`):
-
-```json
-{
-  "CorrelationId": "42"
-}
-```
-
-2. Check the output queue (`azure-function-tool-output`) and validate the structured message response:
-
-```json
-{
-  "Value": "Bar",
-  "CorrelationId": "42"
-}
-```
-
----
-
-**Required Role Assignments (IAM Configuration)**
-
-Ensure your Azure AI Project identity has the following storage account permissions:
-- `Storage Account Contributor`
-- `Storage Blob Data Contributor`
-- `Storage File Data Privileged Contributor`
-- `Storage Queue Data Contributor`
-- `Storage Table Data Contributor`
-
----
-
-**Additional Important Configuration Notes**
-
-- The Azure Function configured above uses the `AzureWebJobsStorage` connection string for queue connectivity. You may alternatively use managed identity-based connections as described in the official Azure Functions Managed Identity documentation.
-- The input and output storage queues you specify should already exist in the storage account before the Function is deployed or invoked; create them manually via the Azure portal or CLI.
-- When using Azure storage account connection strings, make sure the account has storage account key access enabled (`Storage Account > Settings > Configuration`).
-
----
-
-With the above steps complete, your Azure Function integration with your AI Agent is ready for use.
-
-
-### Create Agent With Logic Apps
-
-Logic Apps allow HTTP requests to trigger actions. For more information, refer to the guide [Logic App Workflows for Function Calling](https://learn.microsoft.com/azure/ai-services/openai/how-to/assistants-logic-apps).
- -Your Logic App must be in the same resource group as your Azure AI Project, shown in the Azure Portal. Agents SDK accesses Logic Apps through Workflow URLs, which are fetched and called as requests in functions. - -Below is an example of how to create an Azure Logic App utility tool and register a function with it. - - - -```python - -# Create the agents client -agents_client = AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=DefaultAzureCredential(), -) - -# Extract subscription and resource group from the project scope -subscription_id = os.environ["SUBSCRIPTION_ID"] -resource_group = os.environ["resource_group_name"] - -# Logic App details -logic_app_name = "" -trigger_name = "" - -# Create and initialize AzureLogicAppTool utility -logic_app_tool = AzureLogicAppTool(subscription_id, resource_group) -logic_app_tool.register_logic_app(logic_app_name, trigger_name) -print(f"Registered logic app '{logic_app_name}' with trigger '{trigger_name}'.") - -# Create the specialized "send_email_via_logic_app" function for your agent tools -send_email_func = create_send_email_function(logic_app_tool, logic_app_name) - -# Prepare the function tools for the agent -functions_to_use: Set = { - fetch_current_datetime, - send_email_func, # This references the AzureLogicAppTool instance via closure -} -``` - - - -After this the functions can be incorporated normally into code using `FunctionTool`. - - -### Create Agent With OpenAPI - -OpenAPI specifications describe REST operations against a specific endpoint. Agents SDK can read an OpenAPI spec, create a function from it, and call that function against the REST endpoint without additional client-side execution. - -Here is an example creating an OpenAPI tool (using anonymous authentication): - - - -```python - -with open(weather_asset_file_path, "r") as f: - openapi_weather = jsonref.loads(f.read()) - -with open(countries_asset_file_path, "r") as f: - openapi_countries = jsonref.loads(f.read()) - -# Create Auth object for the OpenApiTool (note that connection or managed identity auth setup requires additional setup in Azure) -auth = OpenApiAnonymousAuthDetails() - -# Initialize agent OpenApi tool using the read in OpenAPI spec -openapi_tool = OpenApiTool( - name="get_weather", spec=openapi_weather, description="Retrieve weather information for a location", auth=auth -) -openapi_tool.add_definition( - name="get_countries", spec=openapi_countries, description="Retrieve a list of countries", auth=auth -) - -# Create agent with OpenApi tool and process agent run -with agents_client: - agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are a helpful agent", - tools=openapi_tool.definitions, - ) -``` - - - - -### Create an Agent with Fabric - -To enable your Agent to answer queries using Fabric data, use `FabricTool` along with a connection to the Fabric resource. - -Here is an example: - - - -```python -conn_id = os.environ["FABRIC_CONNECTION_ID"] - -print(conn_id) - -# Initialize an Agent Fabric tool and add the connection id -fabric = FabricTool(connection_id=conn_id) - -# Create an Agent with the Fabric tool and process an Agent run -with agents_client: - agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are a helpful agent", - tools=fabric.definitions, - ) -``` - - - - -### Create Thread - -For each session or conversation, a thread is required. 
Here is an example: - - - -```python -thread = agents_client.threads.create() -``` - - - -### Create Thread with Tool Resource - -In some scenarios, you might need to assign specific resources to individual threads. To achieve this, you provide the `tool_resources` argument to `create_thread`. In the following example, you create a vector store and upload a file, enable an Agent for file search using the `tools` argument, and then associate the file with the thread using the `tool_resources` argument. - - - -```python -file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) -print(f"Uploaded file, file ID: {file.id}") - -vector_store = agents_client.vector_stores.create_and_poll(file_ids=[file.id], name="my_vectorstore") -print(f"Created vector store, vector store ID: {vector_store.id}") - -# Create file search tool with resources followed by creating agent -file_search = FileSearchTool(vector_store_ids=[vector_store.id]) - -agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="Hello, you are helpful agent and can search information from uploaded files", - tools=file_search.definitions, -) - -print(f"Created agent, ID: {agent.id}") - -# Create thread with file resources. -# If the agent has multiple threads, only this thread can search this file. -thread = agents_client.threads.create(tool_resources=file_search.resources) -``` - - - -#### List Threads - -To list all threads attached to a given agent, use the list_threads API: - -```python -threads = agents_client.threads.list() -``` - -### Create Message - -To create a message for agent to process, you pass `user` as `role` and a question as `content`: - - - -```python -message = agents_client.messages.create(thread_id=thread.id, role="user", content="Hello, tell me a joke") -``` - - - -### Create Message with File Search Attachment - -To attach a file to a message for content searching, you use `MessageAttachment` and `FileSearchTool`: - - - -```python -attachment = MessageAttachment(file_id=file.id, tools=FileSearchTool().definitions) -message = agents_client.messages.create( - thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?", attachments=[attachment] -) -``` - - - -### Create Message with Code Interpreter Attachment - -To attach a file to a message for data analysis, use `MessageAttachment` and `CodeInterpreterTool` classes. You must pass `CodeInterpreterTool` as `tools` or `toolset` in `create_agent` call or the file attachment cannot be opened for code interpreter. 
- -Here is an example to pass `CodeInterpreterTool` as tool: - - - -```python -# Notice that CodeInterpreter must be enabled in the agent creation, -# otherwise the agent will not be able to see the file attachment for code interpretation -agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=CodeInterpreterTool().definitions, -) -print(f"Created agent, agent ID: {agent.id}") - -thread = agents_client.threads.create() -print(f"Created thread, thread ID: {thread.id}") - -# Create an attachment -attachment = MessageAttachment(file_id=file.id, tools=CodeInterpreterTool().definitions) - -# Create a message -message = agents_client.messages.create( - thread_id=thread.id, - role="user", - content="Could you please create bar chart in TRANSPORTATION sector for the operating profit from the uploaded csv file and provide file to me?", - attachments=[attachment], -) -``` - - - -Azure blob storage can be used as a message attachment. In this case, use `VectorStoreDataSource` as a data source: - - - -```python -# We will upload the local file to Azure and will use it for vector store creation. -asset_uri = os.environ["AZURE_BLOB_URI"] -ds = VectorStoreDataSource(asset_identifier=asset_uri, asset_type=VectorStoreDataSourceAssetType.URI_ASSET) - -# Create a message with the attachment -attachment = MessageAttachment(data_source=ds, tools=code_interpreter.definitions) -message = agents_client.messages.create( - thread_id=thread.id, role="user", content="What does the attachment say?", attachments=[attachment] -) -``` - - - -### Create Message with Image Inputs - -You can send messages to Azure agents with image inputs in following ways: - -- **Using an image stored as a uploaded file** -- **Using a public image accessible via URL** -- **Using a base64 encoded image string** - -The following examples demonstrate each method: - -#### Create message using uploaded image file - -```python -# Upload the local image file -image_file = agents_client.files.upload_and_poll(file_path="image_file.png", purpose="assistants") - -# Construct content using uploaded image -file_param = MessageImageFileParam(file_id=image_file.id, detail="high") -content_blocks = [ - MessageInputTextBlock(text="Hello, what is in the image?"), - MessageInputImageFileBlock(image_file=file_param), -] - -# Create the message -message = agents_client.messages.create( - thread_id=thread.id, - role="user", - content=content_blocks -) -``` - -#### Create message with an image URL input - -```python -# Specify the public image URL -image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" - -# Create content directly referencing image URL -url_param = MessageImageUrlParam(url=image_url, detail="high") -content_blocks = [ - MessageInputTextBlock(text="Hello, what is in the image?"), - MessageInputImageUrlBlock(image_url=url_param), -] - -# Create the message -message = agents_client.messages.create( - thread_id=thread.id, - role="user", - content=content_blocks -) -``` - -#### Create message with base64-encoded image input - -```python -import base64 - -def image_file_to_base64(path: str) -> str: - with open(path, "rb") as f: - return base64.b64encode(f.read()).decode("utf-8") - -# Convert your image file to base64 format -image_base64 = image_file_to_base64("image_file.png") - -# Prepare the data URL -img_data_url = 
f"data:image/png;base64,{image_base64}" - -# Use base64 encoded string as image URL parameter -url_param = MessageImageUrlParam(url=img_data_url, detail="high") -content_blocks = [ - MessageInputTextBlock(text="Hello, what is in the image?"), - MessageInputImageUrlBlock(image_url=url_param), -] - -# Create the message -message = agents_client.messages.create( - thread_id=thread.id, - role="user", - content=content_blocks -) -``` - -### Execute Run, Run_and_Process, or Stream - -To process your message, you can use `runs.create`, `runs.create_and_process`, or `runs.stream`. - -`create_run` requests the Agent to process the message without polling for the result. If you are using `function tools` regardless as `toolset` or not, your code is responsible for polling for the result and acknowledging the status of `Run`. When the status is `requires_action`, your code is responsible for calling the function tools. For a code sample, visit [`sample_agents_functions.py`](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_functions.py). - -Here is an example of `runs.create` and poll until the run is completed: - - - -```python -run = agents_client.runs.create(thread_id=thread.id, agent_id=agent.id) - -# Poll the run as long as run status is queued or in progress -while run.status in ["queued", "in_progress", "requires_action"]: - # Wait for a second - time.sleep(1) - run = agents_client.runs.get(thread_id=thread.id, run_id=run.id) -``` - - - -To have the SDK poll on your behalf and call `function tools`, use the `create_and_process` method. Note that `function tools` will only be invoked if they are provided as `toolset` during the `create_agent` call. - -Here is an example: - - - -```python -run = agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) -``` - - - -With streaming, polling need not be considered. If `function tools` are provided as `toolset` during the `create_agent` call, they will be invoked by the SDK. - -Here is an example of streaming: - - - -```python -with agents_client.runs.stream(thread_id=thread.id, agent_id=agent.id) as stream: - - for event_type, event_data, _ in stream: - - if isinstance(event_data, MessageDeltaChunk): - print(f"Text delta received: {event_data.text}") - - elif isinstance(event_data, ThreadMessage): - print(f"ThreadMessage created. ID: {event_data.id}, Status: {event_data.status}") - - elif isinstance(event_data, ThreadRun): - print(f"ThreadRun status: {event_data.status}") - - elif isinstance(event_data, RunStep): - print(f"RunStep type: {event_data.type}, Status: {event_data.status}") - - elif event_type == AgentStreamEvent.ERROR: - print(f"An error occurred. Data: {event_data}") - - elif event_type == AgentStreamEvent.DONE: - print("Stream completed.") - break - - else: - print(f"Unhandled Event Type: {event_type}, Data: {event_data}") -``` - - - -In the code above, because an `event_handler` object is not passed to the `stream` function, the SDK will instantiate `AgentEventHandler` or `AsyncAgentEventHandler` as the default event handler and produce an iterable object with `event_type` and `event_data`. `AgentEventHandler` and `AsyncAgentEventHandler` are overridable. Here is an example: - - - -```python -# With AgentEventHandler[str], the return type for each event functions is optional string. 
-class MyEventHandler(AgentEventHandler[str]): - - def on_message_delta(self, delta: "MessageDeltaChunk") -> Optional[str]: - return f"Text delta received: {delta.text}" - - def on_thread_message(self, message: "ThreadMessage") -> Optional[str]: - return f"ThreadMessage created. ID: {message.id}, Status: {message.status}" - - def on_thread_run(self, run: "ThreadRun") -> Optional[str]: - return f"ThreadRun status: {run.status}" - - def on_run_step(self, step: "RunStep") -> Optional[str]: - return f"RunStep type: {step.type}, Status: {step.status}" - - def on_error(self, data: str) -> Optional[str]: - return f"An error occurred. Data: {data}" - - def on_done(self) -> Optional[str]: - return "Stream completed." - - def on_unhandled_event(self, event_type: str, event_data: Any) -> Optional[str]: - return f"Unhandled Event Type: {event_type}, Data: {event_data}" -``` - - - - - - -```python -with agents_client.runs.stream(thread_id=thread.id, agent_id=agent.id, event_handler=MyEventHandler()) as stream: - for event_type, event_data, func_return in stream: - print(f"Received data.") - print(f"Streaming receive Event Type: {event_type}") - print(f"Event Data: {str(event_data)[:100]}...") - print(f"Event Function return: {func_return}\n") -``` - - - -As you can see, this SDK parses the events and produces various event types similar to OpenAI agents. In your use case, you might not be interested in handling all these types and may decide to parse the events on your own. To achieve this, please refer to [override base event handler](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_with_base_override_eventhandler.py). - -``` -Note: Multiple streaming processes may be chained behind the scenes. - -When the SDK receives a `ThreadRun` event with the status `requires_action`, the next event will be `Done`, followed by termination. The SDK will submit the tool calls using the same event handler. The event handler will then chain the main stream with the tool stream. - -Consequently, when you iterate over the streaming using a for loop similar to the example above, the for loop will receive events from the main stream followed by events from the tool stream. -``` - - -### Retrieve Message - -To retrieve messages from agents, use the following example: - - - -```python -messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) -for msg in messages: - if msg.text_messages: - last_text = msg.text_messages[-1] - print(f"{msg.role}: {last_text.text.value}") -``` - - - -In addition, `messages` and `messages.data[]` offer helper properties such as `text_messages`, `image_contents`, `file_citation_annotations`, and `file_path_annotations` to quickly retrieve content from one message or all messages. - -### Retrieve File - -Files uploaded by Agents cannot be retrieved back. If your use case need to access the file content uploaded by the Agents, you are advised to keep an additional copy accessible by your application. However, files generated by Agents are retrievable by `save_file` or `get_file_content`. 
- -Here is an example retrieving file ids from messages and save to the local drive: - - - -```python -messages = agents_client.messages.list(thread_id=thread.id) -print(f"Messages: {messages}") - -for msg in messages: - # Save every image file in the message - for img in msg.image_contents: - file_id = img.image_file.file_id - file_name = f"{file_id}_image_file.png" - agents_client.files.save(file_id=file_id, file_name=file_name) - print(f"Saved image file to: {Path.cwd() / file_name}") - - # Print details of every file-path annotation - for ann in msg.file_path_annotations: - print("File Paths:") - print(f" Type: {ann.type}") - print(f" Text: {ann.text}") - print(f" File ID: {ann.file_path.file_id}") - print(f" Start Index: {ann.start_index}") - print(f" End Index: {ann.end_index}") -``` - - - -Here is an example to use `get_file_content`: - -```python -from pathlib import Path - -async def save_file_content(client, file_id: str, file_name: str, target_dir: Optional[Union[str, Path]] = None): - # Determine the target directory - path = Path(target_dir).expanduser().resolve() if target_dir else Path.cwd() - path.mkdir(parents=True, exist_ok=True) - - # Retrieve the file content - file_content_stream = await client.files.get_content(file_id) - if not file_content_stream: - raise RuntimeError(f"No content retrievable for file ID '{file_id}'.") - - # Collect all chunks asynchronously - chunks = [] - async for chunk in file_content_stream: - if isinstance(chunk, (bytes, bytearray)): - chunks.append(chunk) - else: - raise TypeError(f"Expected bytes or bytearray, got {type(chunk).__name__}") - - target_file_path = path / file_name - - # Write the collected content to the file synchronously - with open(target_file_path, "wb") as file: - for chunk in chunks: - file.write(chunk) -``` - -### Teardown - -To remove resources after completing tasks, use the following functions: - - - -```python -# Delete the file when done -agents_client.vector_stores.delete(vector_store.id) -print("Deleted vector store") - -agents_client.files.delete(file_id=file.id) -print("Deleted file") - -# Delete the agent when done -agents_client.delete_agent(agent.id) -print("Deleted agent") -``` - - - -## Tracing - -You can add an Application Insights Azure resource to your Azure AI Foundry project. See the Tracing tab in your AI Foundry project. If one was enabled, you can get the Application Insights connection string, configure your Agents, and observe the full execution path through Azure Monitor. Typically, you might want to start tracing before you create an Agent. - -### Installation - -Make sure to install OpenTelemetry and the Azure SDK tracing plugin via - -```bash -pip install opentelemetry -pip install azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry -``` - -You will also need an exporter to send telemetry to your observability backend. You can print traces to the console or use a local viewer such as [Aspire Dashboard](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash). 
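-
-For a quick local check before wiring up a full backend, you can also print spans straight to the console using the OpenTelemetry SDK's console exporter. This is a minimal sketch, assuming `opentelemetry-sdk` is installed (as in the command above); the span name used here is illustrative only.
-
-```python
-from opentelemetry import trace
-from opentelemetry.sdk.trace import TracerProvider
-from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter
-
-# Send every finished span to stdout so traces can be inspected locally.
-provider = TracerProvider()
-provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
-trace.set_tracer_provider(provider)
-
-tracer = trace.get_tracer(__name__)
-with tracer.start_as_current_span("agents-sample"):
-    # Calls made to the agents client inside this span are traced to the console.
-    pass
-```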
-To connect to Aspire Dashboard or another OpenTelemetry-compatible backend, install the OTLP exporter:
-
-```bash
-pip install opentelemetry-exporter-otlp
-```
-
-### How to enable tracing
-
-Here is a code sample that shows how to enable Azure Monitor tracing:
-
-```python
-from opentelemetry import trace
-from azure.monitor.opentelemetry import configure_azure_monitor
-
-# Enable Azure Monitor tracing
-application_insights_connection_string = os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
-configure_azure_monitor(connection_string=application_insights_connection_string)
-
-scenario = os.path.basename(__file__)
-tracer = trace.get_tracer(__name__)
-
-with tracer.start_as_current_span(scenario):
-    with agents_client:
-        pass  # agent operations (create_agent, runs, etc.) go here
-```
-
-In addition, you might find it helpful to see the tracing logs in the console. You can achieve this with the following code:
-
-```python
-import sys
-
-from azure.ai.agents.telemetry import enable_telemetry
-
-enable_telemetry(destination=sys.stdout)
-```
-
-### How to trace your own functions
-
-The decorator `trace_function` is provided for tracing your own function calls using OpenTelemetry. By default the function name is used as the name for the span. Alternatively, you can provide the name for the span as a parameter to the decorator.
-
-This decorator handles various data types for function parameters and return values, and records them as attributes in the trace span. The supported data types include:
-* Basic data types: str, int, float, bool
-* Collections: list, dict, tuple, set
-    * Special handling for collections:
-      - If a collection (list, dict, tuple, set) contains nested collections, the entire collection is converted to a string before being recorded as an attribute.
-      - Sets and dictionaries are always converted to strings to ensure compatibility with span attributes.
-
-Object types are omitted, and the corresponding parameter is not traced.
-
-The parameters are recorded in `code.function.parameter.` attributes, and the return value is recorded in the `code.function.return.value` attribute.
-
-## Troubleshooting
-
-### Logging
-
-The client uses the standard [Python logging library](https://docs.python.org/3/library/logging.html). The SDK logs HTTP request and response details, which may be useful in troubleshooting. To log to stdout, add the following:
-
-```python
-import sys
-import logging
-
-# Acquire the logger for this client library. Use 'azure' to affect both
-# 'azure.core' and 'azure.ai.agents' libraries.
-logger = logging.getLogger("azure")
-
-# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
-logger.setLevel(logging.DEBUG)
-
-# Direct logging output to stdout:
-handler = logging.StreamHandler(stream=sys.stdout)
-# Or direct logging output to a file:
-# handler = logging.FileHandler(filename="sample.log")
-logger.addHandler(handler)
-
-# Optional: change the default logging format. Here we add a timestamp.
-# formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s")
-# handler.setFormatter(formatter)
-```
-
-By default logs redact the values of URL query strings, the values of some HTTP request and response headers (including `Authorization` which holds the key or token), and the request and response payloads.
To create logs without redaction, add `logging_enable = True` to the client constructor: - -```python -agents_client = AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=DefaultAzureCredential(), - logging_enable = True -) -``` - -Note that the log level must be set to `logging.DEBUG` (see above code). Logs will be redacted with any other log level. - -Be sure to protect non redacted logs to avoid compromising security. - -For more information, see [Configure logging in the Azure libraries for Python](https://aka.ms/azsdk/python/logging) - -### Reporting issues - -To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-agents" in the title or content. - - -## Next steps - -Have a look at the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples) folder, containing fully runnable Python code for synchronous and asynchronous clients. - -Explore the [AI Starter Template](https://aka.ms/azsdk/azure-ai-agents/python/ai-starter-template). This template creates an Azure AI Foundry hub, project and connected resources including Azure OpenAI Service, AI Search and more. It also deploys a simple chat application to Azure Container Apps. - -## Contributing - -This project welcomes contributions and suggestions. Most contributions require -you to agree to a Contributor License Agreement (CLA) declaring that you have -the right to, and actually do, grant us the rights to use your contribution. -For details, visit https://cla.microsoft.com. - -When you submit a pull request, a CLA-bot will automatically determine whether -you need to provide a CLA and decorate the PR appropriately (e.g., label, -comment). Simply follow the instructions provided by the bot. You will only -need to do this once across all repos using our CLA. - -This project has adopted the -[Microsoft Open Source Code of Conduct][code_of_conduct]. For more information, -see the Code of Conduct FAQ or contact opencode@microsoft.com with any -additional questions or comments. 
- - -[samples]: https://aka.ms/azsdk/azure-ai-projects/python/samples/ -[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/ -[entra_id]: https://learn.microsoft.com/azure/ai-services/authentication?tabs=powershell#authenticate-with-microsoft-entra-id -[azure_identity_credentials]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#credentials -[azure_identity_pip]: https://pypi.org/project/azure-identity/ -[default_azure_credential]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#defaultazurecredential -[pip]: https://pypi.org/project/pip/ -[azure_sub]: https://azure.microsoft.com/free/ -[evaluators]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk -[azure_ai_evaluation]: https://learn.microsoft.com/python/api/overview/azure/ai-evaluation-readme -[evaluator_library]: https://learn.microsoft.com/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library \ No newline at end of file diff --git a/sdk/ai/azure-ai-agents/README.md b/sdk/ai/azure-ai-agents/README.md new file mode 120000 index 000000000000..6687ec1c88d9 --- /dev/null +++ b/sdk/ai/azure-ai-agents/README.md @@ -0,0 +1 @@ +../azure-ai-projects/README_AGENTS.md \ No newline at end of file diff --git a/sdk/ai/azure-ai-agents/samples b/sdk/ai/azure-ai-agents/samples new file mode 120000 index 000000000000..8e198d66109e --- /dev/null +++ b/sdk/ai/azure-ai-agents/samples @@ -0,0 +1 @@ +../azure-ai-projects/samples/agents \ No newline at end of file diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_azure_functions_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_azure_functions_async.py deleted file mode 100644 index 797be02b32cf..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_azure_functions_async.py +++ /dev/null @@ -1,108 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ -import asyncio - -""" -DESCRIPTION: - This sample demonstrates how to use azure function agent operations from - the Azure Agents service using a asynchronous client. - -USAGE: - python sample_agents_azure_functions_async.py - - Before running the sample: - - pip install azure-ai-projects azure-identity - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. - 3) STORAGE_SERVICE_ENDPONT - the storage service queue endpoint, triggering Azure function. 
- - Please see Getting Started with Azure Functions page for more information on Azure Functions: - https://learn.microsoft.com/azure/azure-functions/functions-get-started -""" - -import os -from azure.ai.agents.aio import AgentsClient -from azure.identity.aio import DefaultAzureCredential -from azure.ai.agents.models import ( - AzureFunctionStorageQueue, - AzureFunctionTool, - MessageRole, -) - - -async def main(): - - async with DefaultAzureCredential( - exclude_managed_identity_credential=True, exclude_environment_credential=True - ) as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - - storage_service_endpoint = os.environ["STORAGE_SERVICE_ENDPONT"] - azure_function_tool = AzureFunctionTool( - name="foo", - description="Get answers from the foo bot.", - parameters={ - "type": "object", - "properties": { - "query": {"type": "string", "description": "The question to ask."}, - "outputqueueuri": {"type": "string", "description": "The full output queue uri."}, - }, - }, - input_queue=AzureFunctionStorageQueue( - queue_name="azure-function-foo-input", - storage_service_endpoint=storage_service_endpoint, - ), - output_queue=AzureFunctionStorageQueue( - queue_name="azure-function-tool-output", - storage_service_endpoint=storage_service_endpoint, - ), - ) - - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="azure-function-agent-foo", - instructions=f"You are a helpful support agent. Use the provided function any time the prompt contains the string 'What would foo say?'. When you invoke the function, ALWAYS specify the output queue uri parameter as '{storage_service_endpoint}/azure-function-tool-output'. Always responds with \"Foo says\" and then the response from the tool.", - tools=azure_function_tool.definitions, - ) - print(f"Created agent, agent ID: {agent.id}") - - # Create a thread - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - # Create a message - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content="What is the most prevalent element in the universe? What would foo say?", - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - if run.status == "failed": - print(f"Run failed: {run.last_error}") - - # Get the last message from the sender - last_msg = await agents_client.messages.get_last_message_text_by_role( - thread_id=thread.id, role=MessageRole.AGENT - ) - if last_msg: - print(f"Last Message: {last_msg.text.value}") - - # Delete the agent once done - await agents_client.delete_agent(agent.id) - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_async.py deleted file mode 100644 index 04ff45f31ec5..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_async.py +++ /dev/null @@ -1,82 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use basic agent operations from - the Azure Agents service using a asynchronous client. 
- -USAGE: - python sample_agents_basics_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. -""" -import asyncio -import time - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import MessageTextContent, ListSortOrder -from azure.identity.aio import DefaultAzureCredential - -import os - - -async def main() -> None: - - async with DefaultAzureCredential() as creds: - agents_client = AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) - - async with agents_client: - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="Hello, tell me a joke" - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id) - - # Poll the run as long as run status is queued or in progress - while run.status in ["queued", "in_progress", "requires_action"]: - # Wait for a second - time.sleep(1) - run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id) - print(f"Run status: {run.status}") - - if run.status == "failed": - print(f"Run error: {run.last_error}") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list( - thread_id=thread.id, - order=ListSortOrder.ASCENDING, - ) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_create_thread_and_process_run_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_create_thread_and_process_run_async.py deleted file mode 100644 index 8512d64525aa..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_create_thread_and_process_run_async.py +++ /dev/null @@ -1,79 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ -""" -DESCRIPTION: - Asynchronous variant of sample_agents_basics_thread_and_process_run.py. - This sample demonstrates how to use the new convenience method - `create_thread_and_process_run` in the Azure AI Agents service. - This single call will create a thread, start a run, poll to - completion (including any tool calls), and return the final result. - -USAGE: - python sample_agents_basics_thread_and_process_run_async.py - - Before running: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. 
- 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. -""" -import asyncio -import os - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import ( - AgentThreadCreationOptions, - ThreadMessageOptions, - MessageTextContent, - ListSortOrder, -) -from azure.identity.aio import DefaultAzureCredential - - -async def main() -> None: - async with DefaultAzureCredential() as credential: - agents_client = AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=credential, - ) - - async with agents_client: - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="sample-agent", - instructions="You are a helpful assistant that tells jokes.", - ) - print(f"Created agent, agent ID: {agent.id}") - - run = await agents_client.create_thread_and_process_run( - agent_id=agent.id, - thread=AgentThreadCreationOptions( - messages=[ThreadMessageOptions(role="user", content="Hi! Tell me your favorite programming joke.")] - ), - ) - - if run.status == "failed": - print(f"Run error: {run.last_error}") - - # List all messages in the thread, in ascending order of creation - messages = agents_client.messages.list( - thread_id=run.thread_id, - order=ListSortOrder.ASCENDING, - ) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - await agents_client.delete_agent(agent.id) - print(f"Deleted agent {agent.id!r}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_create_thread_and_run_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_create_thread_and_run_async.py deleted file mode 100644 index aa7f203b9c25..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_basics_create_thread_and_run_async.py +++ /dev/null @@ -1,88 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ -""" -DESCRIPTION: - Asynchronous variant of sample_agents_basics_thread_and_run.py. - It creates an agent, starts a new thread, and immediately runs it - using the async Azure AI Agents client. - -USAGE: - python sample_agents_basics_thread_and_run_async.py - - Before running: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. 
-""" -import asyncio -import os - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import ( - AgentThreadCreationOptions, - ThreadMessageOptions, - MessageTextContent, - ListSortOrder, -) -from azure.identity.aio import DefaultAzureCredential - - -async def main() -> None: - async with DefaultAzureCredential() as credential: - agents_client = AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=credential, - ) - - async with agents_client: - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="sample-agent", - instructions="You are a helpful assistant that tells jokes.", - ) - print(f"Created agent, agent ID: {agent.id}") - - # Prepare the initial user message - initial_message = ThreadMessageOptions( - role="user", - content="Hello! Can you tell me a joke?", - ) - - # Create a new thread and immediately start a run on it - run = await agents_client.create_thread_and_run( - agent_id=agent.id, - thread=AgentThreadCreationOptions(messages=[initial_message]), - ) - - # Poll the run as long as run status is queued or in progress - while run.status in {"queued", "in_progress", "requires_action"}: - await asyncio.sleep(1) - run = await agents_client.runs.get(thread_id=run.thread_id, run_id=run.id) - print(f"Run status: {run.status}") - - if run.status == "failed": - print(f"Run error: {run.last_error}") - - # List all messages in the thread, in ascending order of creation - messages = agents_client.messages.list( - thread_id=run.thread_id, - order=ListSortOrder.ASCENDING, - ) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - await agents_client.delete_agent(agent.id) - print(f"Deleted agent {agent.id!r}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_async.py deleted file mode 100644 index 7d01c36f73b7..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_async.py +++ /dev/null @@ -1,106 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use code interpreter tool with agent from - the Azure Agents service using a asynchronous client. - -USAGE: - python sample_agents_code_interpreter_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. 
-""" -import asyncio - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import CodeInterpreterTool, FilePurpose, ListSortOrder, MessageRole -from azure.identity.aio import DefaultAzureCredential -from pathlib import Path - -import os - -asset_file_path = os.path.abspath( - os.path.join(os.path.dirname(__file__), "../assets/synthetic_500_quarterly_results.csv") -) - - -async def main() -> None: - - async with DefaultAzureCredential() as creds: - - async with AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=creds) as agents_client: - # Upload a file and wait for it to be processed - file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) - print(f"Uploaded file, file ID: {file.id}") - - code_interpreter = CodeInterpreterTool(file_ids=[file.id]) - - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=code_interpreter.definitions, - tool_resources=code_interpreter.resources, - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content="Could you please create bar chart in TRANSPORTATION sector for the operating profit from the uploaded csv file and provide file to me?", - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Run finished with status: {run.status}") - - if run.status == "failed": - # Check if you got "Rate limit is exceeded.", then you want to get more quota - print(f"Run failed: {run.last_error}") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - - last_msg = await agents_client.messages.get_last_message_text_by_role( - thread_id=thread.id, role=MessageRole.AGENT - ) - if last_msg: - print(f"Last Message: {last_msg.text.value}") - - async for msg in messages: - # Save every image file in the message - for img in msg.image_contents: - file_id = img.image_file.file_id - file_name = f"{file_id}_image_file.png" - await agents_client.files.save(file_id=file_id, file_name=file_name) - print(f"Saved image file to: {Path.cwd() / file_name}") - - # Print details of every file-path annotation - for ann in msg.file_path_annotations: - print("File Paths:") - print(f" Type: {ann.type}") - print(f" Text: {ann.text}") - print(f" File ID: {ann.file_path.file_id}") - print(f" Start Index: {ann.start_index}") - print(f" End Index: {ann.end_index}") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_attachment_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_attachment_async.py deleted file mode 100644 index 2474a7242e81..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_attachment_async.py +++ /dev/null @@ -1,89 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. 
-# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use agent operations with code interpreter from - the Azure Agents service using a synchronous client. - -USAGE: - python sample_agents_code_interpreter_attachment_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. -""" -import asyncio -import os -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import ( - CodeInterpreterTool, - FilePurpose, - MessageAttachment, - ListSortOrder, - MessageTextContent, -) -from azure.identity.aio import DefaultAzureCredential - -asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) - - -async def main(): - async with DefaultAzureCredential() as creds: - async with AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=creds) as agents_client: - # Upload a file and wait for it to be processed - file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) - print(f"Uploaded file, file ID: {file.id}") - - code_interpreter = CodeInterpreterTool() - - # Notice that CodeInterpreter must be enabled in the agent creation, otherwise the agent will not be able to see the file attachment - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=code_interpreter.definitions, - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - # Create a message with the attachment - attachment = MessageAttachment(file_id=file.id, tools=code_interpreter.definitions) - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="What does the attachment say?", attachments=[attachment] - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Run finished with status: {run.status}") - - if run.status == "failed": - # Check if you got "Rate limit is exceeded.", then you want to get more quota - print(f"Run failed: {run.last_error}") - - await agents_client.files.delete(file.id) - print("Deleted file") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_functions_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_functions_async.py deleted file mode 100644 index 678e1b717fd7..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_functions_async.py +++ /dev/null @@ -1,120 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. 
-# ------------------------------------ - -""" -FILE: sample_agents_functions_async.py - -DESCRIPTION: - This sample demonstrates how to use agent operations with custom functions from - the Azure Agents service using a asynchronous client. - -USAGE: - python sample_agents_functions_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. -""" -import asyncio -import time -import os -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import ( - AsyncFunctionTool, - RequiredFunctionToolCall, - SubmitToolOutputsAction, - ToolOutput, - ListSortOrder, - MessageTextContent, -) -from azure.identity.aio import DefaultAzureCredential -from utils.user_async_functions import user_async_functions - - -async def main() -> None: - async with DefaultAzureCredential() as creds: - async with AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=creds) as agents_client: - # Initialize agent functions - functions = AsyncFunctionTool(functions=user_async_functions) - - # Create agent - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=functions.definitions, - ) - print(f"Created agent, agent ID: {agent.id}") - - # Create thread for communication - thread = await agents_client.threads.create() - print(f"Created thread, ID: {thread.id}") - - # Create and send message - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="Hello, what's the time?" 
-            )
-            print(f"Created message, ID: {message.id}")
-
-            # Create and run agent task
-            run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id)
-            print(f"Created run, ID: {run.id}")
-
-            # Polling loop for run status
-            while run.status in ["queued", "in_progress", "requires_action"]:
-                # Use a non-blocking sleep so the event loop is not stalled between polls
-                await asyncio.sleep(4)
-                run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id)
-
-                if run.status == "requires_action" and isinstance(run.required_action, SubmitToolOutputsAction):
-                    tool_calls = run.required_action.submit_tool_outputs.tool_calls
-                    if not tool_calls:
-                        print("No tool calls provided - cancelling run")
-                        await agents_client.runs.cancel(thread_id=thread.id, run_id=run.id)
-                        break
-
-                    tool_outputs = []
-                    for tool_call in tool_calls:
-                        if isinstance(tool_call, RequiredFunctionToolCall):
-                            try:
-                                output = await functions.execute(tool_call)
-                                tool_outputs.append(
-                                    ToolOutput(
-                                        tool_call_id=tool_call.id,
-                                        output=output,
-                                    )
-                                )
-                            except Exception as e:
-                                print(f"Error executing tool_call {tool_call.id}: {e}")
-
-                    print(f"Tool outputs: {tool_outputs}")
-                    if tool_outputs:
-                        await agents_client.runs.submit_tool_outputs(
-                            thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
-                        )
-
-                print(f"Current run status: {run.status}")
-
-            print(f"Run completed with status: {run.status}")
-
-            # Delete the agent when done
-            await agents_client.delete_agent(agent.id)
-            print("Deleted agent")
-
-            # Fetch and log messages
-            messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING)
-            async for msg in messages:
-                last_part = msg.content[-1]
-                if isinstance(last_part, MessageTextContent):
-                    print(f"{msg.role}: {last_part.text.value}")
-
-
-if __name__ == "__main__":
-    asyncio.run(main())
diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_base64_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_base64_async.py
deleted file mode 100644
index 6c2aa34302ec..000000000000
--- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_base64_async.py
+++ /dev/null
@@ -1,113 +0,0 @@
-# ------------------------------------
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-# ------------------------------------
-
-"""
-DESCRIPTION:
-    This sample demonstrates how to use basic agent operations with image file input for
-    the Azure Agents service using an asynchronous client.
-
-USAGE:
-    python sample_agents_image_input_base64_async.py
-
-    Before running the sample:
-
-    pip install azure-ai-agents azure-identity aiohttp
-
-    Set these environment variables with your own values:
-    1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview
-       page of your Azure AI Foundry portal.
-    2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in
-       the "Models + endpoints" tab in your Azure AI Foundry project.
-"""
-import asyncio
-import os, time, base64
-from typing import List
-from azure.ai.agents.aio import AgentsClient
-from azure.identity.aio import DefaultAzureCredential
-from azure.ai.agents.models import (
-    ListSortOrder,
-    MessageTextContent,
-    MessageInputContentBlock,
-    MessageImageUrlParam,
-    MessageInputTextBlock,
-    MessageInputImageUrlBlock,
-)
-
-asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/image_file.png"))
-
-
-def image_to_base64(image_path: str) -> str:
-    """
-    Convert an image file to a Base64-encoded string.
-
-    :param image_path: The path to the image file (e.g.
'image_file.png') - :return: A Base64-encoded string representing the image. - :raises FileNotFoundError: If the provided file path does not exist. - :raises OSError: If there's an error reading the file. - """ - if not os.path.isfile(image_path): - raise FileNotFoundError(f"File not found at: {image_path}") - - try: - with open(image_path, "rb") as image_file: - file_data = image_file.read() - return base64.b64encode(file_data).decode("utf-8") - except Exception as exc: - raise OSError(f"Error reading file '{image_path}'") from exc - - -async def main(): - async with DefaultAzureCredential() as creds: - async with AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=creds) as agents_client: - - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - input_message = "Hello, what is in the image ?" - image_base64 = image_to_base64(asset_file_path) - img_url = f"data:image/png;base64,{image_base64}" - url_param = MessageImageUrlParam(url=img_url, detail="high") - content_blocks: List[MessageInputContentBlock] = [ - MessageInputTextBlock(text=input_message), - MessageInputImageUrlBlock(image_url=url_param), - ] - message = await agents_client.messages.create(thread_id=thread.id, role="user", content=content_blocks) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id) - - # Poll the run as long as run status is queued or in progress - while run.status in ["queued", "in_progress", "requires_action"]: - # Wait for a second - time.sleep(1) - run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id) - print(f"Run status: {run.status}") - - if run.status == "failed": - print(f"Run failed: {run.last_error}") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list( - thread_id=thread.id, - order=ListSortOrder.ASCENDING, - ) - - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_file_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_file_async.py deleted file mode 100644 index 081a0ae3f111..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_file_async.py +++ /dev/null @@ -1,97 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use basic agent operations using image file input for the - the Azure Agents service using a synchronous client. - -USAGE: - python sample_agents_image_input_file.py - - Before running the sample: - - pip install azure-ai-projects azure-identity - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. 
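-
-    Note: the sample uploads ../assets/image_file.png with `FilePurpose.AGENTS` and
-    then references the returned file ID in a `MessageImageFileParam` content block
-    on the user message.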
-""" -import asyncio -import os, time -from typing import List -from azure.ai.agents.aio import AgentsClient -from azure.identity.aio import DefaultAzureCredential -from azure.ai.agents.models import ( - ListSortOrder, - MessageTextContent, - MessageInputContentBlock, - MessageImageFileParam, - MessageInputTextBlock, - MessageInputImageFileBlock, - FilePurpose, -) - -asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/image_file.png")) - - -async def main(): - async with DefaultAzureCredential() as creds: - async with AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=creds) as agents_client: - - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - image_file = await agents_client.files.upload_and_poll( - file_path=asset_file_path, purpose=FilePurpose.AGENTS - ) - print(f"Uploaded file, file ID: {image_file.id}") - - input_message = "Hello, what is in the image ?" - file_param = MessageImageFileParam(file_id=image_file.id, detail="high") - content_blocks: List[MessageInputContentBlock] = [ - MessageInputTextBlock(text=input_message), - MessageInputImageFileBlock(image_file=file_param), - ] - message = await agents_client.messages.create(thread_id=thread.id, role="user", content=content_blocks) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id) - - # Poll the run as long as run status is queued or in progress - while run.status in ["queued", "in_progress", "requires_action"]: - # Wait for a second - time.sleep(1) - run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id) - print(f"Run status: {run.status}") - - if run.status == "failed": - print(f"Run failed: {run.last_error}") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list( - thread_id=thread.id, - order=ListSortOrder.ASCENDING, - ) - - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_url_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_url_async.py deleted file mode 100644 index 7efed0084ade..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_image_input_url_async.py +++ /dev/null @@ -1,91 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use basic agent operations using image url input for the - the Azure Agents service using a synchronous client. - -USAGE: - python sample_agents_image_input_url.py - - Before running the sample: - - pip install azure-ai-projects azure-identity - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. 
- 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. -""" -import asyncio -import os, time -from typing import List -from azure.ai.agents.aio import AgentsClient -from azure.identity.aio import DefaultAzureCredential -from azure.ai.agents.models import ( - ListSortOrder, - MessageTextContent, - MessageInputContentBlock, - MessageImageUrlParam, - MessageInputTextBlock, - MessageInputImageUrlBlock, -) - - -async def main(): - async with DefaultAzureCredential() as creds: - async with AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=creds) as agents_client: - - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" - input_message = "Hello, what is in the image ?" - url_param = MessageImageUrlParam(url=image_url, detail="high") - content_blocks: List[MessageInputContentBlock] = [ - MessageInputTextBlock(text=input_message), - MessageInputImageUrlBlock(image_url=url_param), - ] - message = await agents_client.messages.create(thread_id=thread.id, role="user", content=content_blocks) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id) - - # Poll the run as long as run status is queued or in progress - while run.status in ["queued", "in_progress", "requires_action"]: - # Wait for a second - time.sleep(1) - run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id) - print(f"Run status: {run.status}") - - if run.status == "failed": - print(f"Run failed: {run.last_error}") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list( - thread_id=thread.id, - order=ListSortOrder.ASCENDING, - ) - - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_json_schema_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_json_schema_async.py deleted file mode 100644 index 81ecc816d5e9..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_json_schema_async.py +++ /dev/null @@ -1,106 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use agents with JSON schema output format. - -USAGE: - python sample_agents_json_schema_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity pydantic - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in - the "Models + endpoints" tab in your Azure AI Foundry project. 
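-
-    With the JSON schema response format used below, each agent reply is expected
-    to be a single JSON object that validates against the `Planet` model, for example:
-
-        {"planet": "Mars", "mass": 6.4171e23}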
-""" - -import asyncio -import os - -from enum import Enum -from pydantic import BaseModel, TypeAdapter -from azure.ai.agents.aio import AgentsClient -from azure.identity.aio import DefaultAzureCredential -from azure.ai.agents.models import ( - ListSortOrder, - MessageTextContent, - MessageRole, - ResponseFormatJsonSchema, - ResponseFormatJsonSchemaType, - RunStatus, -) - - -# Create the pydantic model to represent the planet names and there masses. -class Planets(str, Enum): - Earth = "Earth" - Mars = "Mars" - Jupyter = "Jupyter" - - -class Planet(BaseModel): - planet: Planets - mass: float - - -async def main(): - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="Extract the information about planets.", - response_format=ResponseFormatJsonSchemaType( - json_schema=ResponseFormatJsonSchema( - name="planet_mass", - description="Extract planet mass.", - schema=Planet.model_json_schema(), - ) - ), - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content=("The mass of the Mars is 6.4171E23 kg; the mass of the Earth is 5.972168E24 kg;"), - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - - if run.status != RunStatus.COMPLETED: - print(f"The run did not succeed: {run.status=}.") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list( - thread_id=thread.id, - order=ListSortOrder.ASCENDING, - ) - - async for msg in messages: - if msg.role == MessageRole.AGENT: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - planet = TypeAdapter(Planet).validate_json(last_part.text.value) - print(f"The mass of {planet.planet} is {planet.mass} kg.") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_run_with_toolset_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_run_with_toolset_async.py deleted file mode 100644 index 4985e790668a..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_run_with_toolset_async.py +++ /dev/null @@ -1,88 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use agent operations with toolset from - the Azure Agents service using a synchronous client. - -USAGE: - python sample_agents_run_with_toolset_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. 
-""" - -import os, asyncio -from azure.ai.agents.aio import AgentsClient -from azure.identity.aio import DefaultAzureCredential -from azure.ai.agents.models import AsyncFunctionTool, AsyncToolSet, ListSortOrder, MessageTextContent -from utils.user_async_functions import user_async_functions - - -async def main() -> None: - - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - - # Initialize agent toolset with user functions and code interpreter - # [START create_agent_with_async_function_tool] - functions = AsyncFunctionTool(user_async_functions) - - toolset = AsyncToolSet() - toolset.add(functions) - agents_client.enable_auto_function_calls(toolset) - - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are a helpful agent", - toolset=toolset, - ) - # [END create_agent_with_async_function_tool] - print(f"Created agent, ID: {agent.id}") - - # Create thread for communication - thread = await agents_client.threads.create() - print(f"Created thread, ID: {thread.id}") - - # Create message to thread - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content="Hello, send an email with the datetime and weather information in New York?", - ) - print(f"Created message, ID: {message.id}") - - # Create and process agent run in thread with tools - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Run finished with status: {run.status}") - - if run.status == "failed": - print(f"Run failed: {run.last_error}") - - # Delete the agent when done - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - # Fetch and log all messages - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_iteration_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_iteration_async.py deleted file mode 100644 index f3ce35dd591d..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_iteration_async.py +++ /dev/null @@ -1,94 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use agent operations with interation in streaming from - the Azure Agents service using a asynchronous client. - -USAGE: - python sample_agents_stream_iteration_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. 
-""" -import asyncio - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import AgentStreamEvent -from azure.ai.agents.models import ( - MessageDeltaChunk, - RunStep, - ThreadMessage, - ThreadRun, - ListSortOrder, - MessageTextContent, -) -from azure.identity.aio import DefaultAzureCredential - -import os - - -async def main() -> None: - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="Hello, tell me a joke" - ) - print(f"Created message, message ID {message.id}") - - async with await agents_client.runs.stream(thread_id=thread.id, agent_id=agent.id) as stream: - async for event_type, event_data, _ in stream: - - if isinstance(event_data, MessageDeltaChunk): - print(f"Text delta received: {event_data.text}") - - elif isinstance(event_data, ThreadMessage): - print(f"ThreadMessage created. ID: {event_data.id}, Status: {event_data.status}") - - elif isinstance(event_data, ThreadRun): - print(f"ThreadRun status: {event_data.status}") - elif isinstance(event_data, RunStep): - print(f"RunStep type: {event_data.type}, Status: {event_data.status}") - - elif event_type == AgentStreamEvent.ERROR: - print(f"An error occurred. Data: {event_data}") - - elif event_type == AgentStreamEvent.DONE: - print("Stream completed.") - break - - else: - print(f"Unhandled Event Type: {event_type}, Data: {event_data}") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_batch_enterprise_file_search_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_batch_enterprise_file_search_async.py deleted file mode 100644 index a6d8a4b7e444..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_batch_enterprise_file_search_async.py +++ /dev/null @@ -1,122 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ -""" -DESCRIPTION: - This sample demonstrates how to use agent operations to add files to an existing vector store and perform search from - the Azure Agents service using a synchronous client. - -USAGE: - python sample_agents_vector_store_batch_enterprise_file_search_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity azure-ai-ml aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. 
-""" -import asyncio -import os - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import ( - FileSearchTool, - ListSortOrder, - MessageTextContent, - VectorStoreDataSource, - VectorStoreDataSourceAssetType, -) -from azure.identity.aio import DefaultAzureCredential - - -async def main(): - - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - # We will upload the local file to Azure and will use it for vector store creation. - asset_uri = os.environ["AZURE_BLOB_URI"] - ds = VectorStoreDataSource( - asset_identifier=asset_uri, - asset_type=VectorStoreDataSourceAssetType.URI_ASSET, - ) - vector_store = await agents_client.vector_stores.create_and_poll(file_ids=[], name="sample_vector_store") - print(f"Created vector store, vector store ID: {vector_store.id}") - - # Add the file to the vector store or you can supply file ids in the vector store creation - vector_store_file_batch = await agents_client.vector_store_file_batches.create_and_poll( - vector_store_id=vector_store.id, data_sources=[ds] - ) - print(f"Created vector store file batch, vector store file batch ID: {vector_store_file_batch.id}") - - # Create a file search tool - file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) - - # Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=file_search_tool.definitions, - tool_resources=file_search_tool.resources, - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content="What feature does Smart Eyewear offer?", - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Created run, run ID: {run.id}") - - file_search_tool.remove_vector_store(vector_store.id) - print(f"Removed vector store from file search, vector store ID: {vector_store.id}") - - await agents_client.update_agent( - agent_id=agent.id, - tools=file_search_tool.definitions, - tool_resources=file_search_tool.resources, - ) - print(f"Updated agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content="What feature does Smart Eyewear offer?", - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Created run, run ID: {run.id}") - - await agents_client.vector_stores.delete(vector_store.id) - print("Deleted vector store") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_batch_file_search_async.py 
b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_batch_file_search_async.py deleted file mode 100644 index 753d25ca56c9..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_batch_file_search_async.py +++ /dev/null @@ -1,114 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use agent operations to add files to an existing vector store and perform search from - the Azure Agents service using a asynchronous client. - -USAGE: - python sample_agents_vector_store_batch_file_search_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. -""" - -import asyncio -import os -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import FileSearchTool, FilePurpose, ListSortOrder, MessageTextContent -from azure.identity.aio import DefaultAzureCredential - -asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) - - -async def main() -> None: - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - # Upload a file and wait for it to be processed - file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) - print(f"Uploaded file, file ID: {file.id}") - - # Create a vector store with no file and wait for it to be processed - vector_store = await agents_client.vector_stores.create_and_poll(file_ids=[], name="sample_vector_store") - print(f"Created vector store, vector store ID: {vector_store.id}") - - # Add the file to the vector store or you can supply file ids in the vector store creation - vector_store_file_batch = await agents_client.vector_store_file_batches.create_and_poll( - vector_store_id=vector_store.id, file_ids=[file.id] - ) - print(f"Created vector store file batch, vector store file batch ID: {vector_store_file_batch.id}") - - # Create a file search tool - file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) - - # Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=file_search_tool.definitions, - tool_resources=file_search_tool.resources, - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?" 
- ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Created run, run ID: {run.id}") - - file_search_tool.remove_vector_store(vector_store.id) - print(f"Removed vector store from file search, vector store ID: {vector_store.id}") - - await agents_client.update_agent( - agent_id=agent.id, tools=file_search_tool.definitions, tool_resources=file_search_tool.resources - ) - print(f"Updated agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?" - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Created run, run ID: {run.id}") - - await agents_client.files.delete(file.id) - print("Deleted file") - - await agents_client.vector_stores.delete(vector_store.id) - print("Deleted vector store") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_enterprise_file_search_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_enterprise_file_search_async.py deleted file mode 100644 index 54c595c6d5e6..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_enterprise_file_search_async.py +++ /dev/null @@ -1,88 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ -""" -DESCRIPTION: - This sample demonstrates how to add files to agent during the vector store creation. - -USAGE: - python sample_agents_vector_store_enterprise_file_search_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity azure-ai-ml aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. -""" -import asyncio -import os - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import ( - FileSearchTool, - VectorStoreDataSource, - VectorStoreDataSourceAssetType, - ListSortOrder, - MessageTextContent, -) -from azure.identity.aio import DefaultAzureCredential - - -async def main(): - async with DefaultAzureCredential() as credential: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=credential, - ) as agents_client: - # We will upload the local file to Azure and will use it for vector store creation. 
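-            # The asset referenced by the AZURE_BLOB_URI environment variable is registered
-            # with the vector store as a URI_ASSET data source.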
- asset_uri = os.environ["AZURE_BLOB_URI"] - ds = VectorStoreDataSource(asset_identifier=asset_uri, asset_type=VectorStoreDataSourceAssetType.URI_ASSET) - vector_store = await agents_client.vector_stores.create_and_poll( - data_sources=[ds], name="sample_vector_store" - ) - print(f"Created vector store, vector store ID: {vector_store.id}") - - # Create a file search tool - file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) - - # Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=file_search_tool.definitions, - tool_resources=file_search_tool.resources, - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?" - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Created run, run ID: {run.id}") - - await agents_client.vector_stores.delete(vector_store.id) - print("Deleted vector store") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_file_search_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_file_search_async.py deleted file mode 100644 index ab2efdebbd03..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_vector_store_file_search_async.py +++ /dev/null @@ -1,86 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ -""" -DESCRIPTION: - This sample demonstrates how to add files to agent during the vector store creation. - -USAGE: - python sample_agents_vector_store_file_search_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. 
-""" -import asyncio -import os - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import FileSearchTool, FilePurpose, MessageTextContent, ListSortOrder -from azure.identity.aio import DefaultAzureCredential - -asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) - - -async def main(): - async with DefaultAzureCredential() as credential: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=credential, - ) as agents_client: - # Upload a file and wait for it to be processed - file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) - print(f"Uploaded file, file ID: {file.id}") - - # Create a vector store with no file and wait for it to be processed - vector_store = await agents_client.vector_stores.create_and_poll( - file_ids=[file.id], name="sample_vector_store" - ) - print(f"Created vector store, vector store ID: {vector_store.id}") - - # Create a file search tool - file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) - - # Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - tools=file_search_tool.definitions, - tool_resources=file_search_tool.resources, - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?" - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Created run, run ID: {run.id}") - - await agents_client.vector_stores.delete(vector_store.id) - print("Deleted vector store") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_with_file_search_attachment_async.py b/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_with_file_search_attachment_async.py deleted file mode 100644 index ad3dcb664504..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_with_file_search_attachment_async.py +++ /dev/null @@ -1,88 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use agent operations to create messages with file search attachments from - the Azure Agents service using a asynchronous client. - -USAGE: - python sample_agents_with_file_search_attachment_async.py - - Before running the sample: - - pip install azure-ai-agents azure-identity aiohttp - - Set these environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. 
- 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. -""" -import asyncio - -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import FilePurpose -from azure.ai.agents.models import FileSearchTool, MessageAttachment, ListSortOrder, MessageTextContent -from azure.identity.aio import DefaultAzureCredential - -import os - -asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) - - -async def main() -> None: - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - # Upload a file and wait for it to be processed - file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) - - # Create agent - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - # Create a message with the file search attachment - # Notice that vector store is created temporarily when using attachments with a default expiration policy of seven days. - attachment = MessageAttachment(file_id=file.id, tools=FileSearchTool().definitions) - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content="What feature does Smart Eyewear offer?", - attachments=[attachment], - ) - print(f"Created message, message ID: {message.id}") - - run = await agents_client.runs.create_and_process( - thread_id=thread.id, agent_id=agent.id, polling_interval=4 - ) - print(f"Created run, run ID: {run.id}") - - print(f"Run completed with status: {run.status}") - - await agents_client.files.delete(file.id) - print("Deleted file") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_agent_team_custom_team_leader.py b/sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_agent_team_custom_team_leader.py deleted file mode 100644 index 1a8c9f33a6bd..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_agent_team_custom_team_leader.py +++ /dev/null @@ -1,117 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to multiple agents using AgentTeam with traces. - -USAGE: - python sample_agents_agent_team_custom_team_leader.py - - Before running the sample: - - pip install azure-ai-agents azure-identity - - Set these environment variables with your own values: - PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - MODEL_DEPLOYMENT_NAME - the name of the model deployment to use. 
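-
-    This sample also depends on the helper modules `utils/agent_team.py` and
-    `utils/agent_trace_configurator.py` located next to this sample.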
-""" - -import os -from typing import Optional, Set -from azure.ai.agents import AgentsClient -from azure.identity import DefaultAzureCredential -from utils.agent_team import AgentTeam, AgentTask -from utils.agent_trace_configurator import AgentTraceConfigurator -from azure.ai.agents.models import FunctionTool, ToolSet - -agents_client = AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=DefaultAzureCredential(), -) - -model_deployment_name = os.getenv("MODEL_DEPLOYMENT_NAME") - - -def create_task(team_name: str, recipient: str, request: str, requestor: str) -> str: - """ - Requests another agent in the team to complete a task. - - :param team_name (str): The name of the team. - :param recipient (str): The name of the agent that is being requested to complete the task. - :param request (str): A description of the to complete. This can also be a question. - :param requestor (str): The name of the agent who is requesting the task. - :return: True if the task was successfully received, False otherwise. - :rtype: str - """ - task = AgentTask(recipient=recipient, task_description=request, requestor=requestor) - team: Optional[AgentTeam] = None - try: - team = AgentTeam.get_team(team_name) - except: - pass - if team is not None: - team.add_task(task) - return "True" - return "False" - - -# Any additional functions that might be used by the agents: -agent_team_default_functions: Set = { - create_task, -} - -default_function_tool = FunctionTool(functions=agent_team_default_functions) - -agents_client.enable_auto_function_calls({create_task}) - -if model_deployment_name is not None: - AgentTraceConfigurator(agents_client=agents_client).setup_tracing() - with agents_client: - agent_team = AgentTeam("test_team", agents_client=agents_client) - toolset = ToolSet() - toolset.add(default_function_tool) - agent_team.set_team_leader( - model=model_deployment_name, - name="TeamLeader", - instructions="You are an agent named 'TeamLeader'. You are a leader of a team of agents. The name of your team is 'test_team'." - "You are an agent that is responsible for receiving requests from user and utilizing a team of agents to complete the task. " - "When you are passed a request, the only thing you will do is evaluate which team member should do which task next to complete the request. " - "You will use the provided create_task function to create a task for the agent that is best suited for handling the task next. " - "You will respond with the description of who you assigned the task and why. When you think that the original user request is " - "processed completely utilizing all the talent available in the team, you do not have to create anymore tasks. " - "Using the skills of all the team members when applicable is highly valued. " - "Do not create parallel tasks. " - "Here are the other agents in your team: " - "- Coder: You are software engineer who writes great code. Your name is Coder. " - "- Reviewer: You are software engineer who reviews code. Your name is Reviewer.", - toolset=toolset, - ) - agent_team.add_agent( - model=model_deployment_name, - name="Coder", - instructions="You are software engineer who writes great code. Your name is Coder.", - ) - agent_team.add_agent( - model=model_deployment_name, - name="Reviewer", - instructions="You are software engineer who reviews code. 
Your name is Reviewer.", - ) - agent_team.assemble_team() - - print("A team of agents specialized in software engineering is available for requests.") - while True: - user_input = input("Input (type 'quit' or 'exit' to exit): ") - if user_input.lower() == "quit": - break - elif user_input.lower() == "exit": - break - agent_team.process_request(request=user_input) - - agent_team.dismantle_team() -else: - print("Error: Please define the environment variable MODEL_DEPLOYMENT_NAME.") diff --git a/sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_multi_agent_team.py b/sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_multi_agent_team.py deleted file mode 100644 index ba3fff30140f..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_multi_agent_team.py +++ /dev/null @@ -1,133 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use an AgentTeam to execute a multi-step - user request with automatic function calling and trace collection. - - The team consists of - • one leader agent - created automatically from the configuration in - `utils/agent_team_config.yaml` - • three worker agents - `TimeWeatherAgent`, `SendEmailAgent`, and - `TemperatureAgent`, each defined in the code below with its own tools - - IMPORTANT - leader-agent model configuration - `utils/agent_team_config.yaml` contains the key TEAM_LEADER_MODEL. - Its value must be the name of a **deployed** model in your Azure AI - project (e.g. "gpt-4o-mini"). - If this deployment does not exist, AgentTeam cannot instantiate the - leader agent and the sample will fail. - -USAGE: - python sample_agents_multi_agent_team.py - - Before running the sample: - - 1. pip install azure-ai-agents azure-identity - 2. Ensure `utils/agent_team_config.yaml` is present and TEAM_LEADER_MODEL points - to a valid model deployment. - 3. Set these environment variables with your own values: - PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - MODEL_DEPLOYMENT_NAME - The model deployment name used for the worker agents. 
-""" - -import os -from typing import Set - -from utils.user_functions_with_traces import ( - fetch_current_datetime, - fetch_weather, - send_email_using_recipient_name, - convert_temperature, -) - -from azure.ai.agents import AgentsClient -from azure.ai.agents.models import ToolSet, FunctionTool -from azure.identity import DefaultAzureCredential -from utils.agent_team import AgentTeam, _create_task -from utils.agent_trace_configurator import AgentTraceConfigurator - -agents_client = AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=DefaultAzureCredential(), -) - -user_function_set_1: Set = {fetch_current_datetime, fetch_weather} - -user_function_set_2: Set = {send_email_using_recipient_name} - -user_function_set_3: Set = {convert_temperature} - -agents_client.enable_auto_function_calls( - { - _create_task, - fetch_current_datetime, - fetch_weather, - send_email_using_recipient_name, - convert_temperature, - } -) - -model_deployment_name = os.getenv("MODEL_DEPLOYMENT_NAME") - -if model_deployment_name is not None: - AgentTraceConfigurator(agents_client=agents_client).setup_tracing() - with agents_client: - - functions = FunctionTool(functions=user_function_set_1) - toolset1 = ToolSet() - toolset1.add(functions) - - agent_team = AgentTeam("test_team", agents_client=agents_client) - - agent_team.add_agent( - model=model_deployment_name, - name="TimeWeatherAgent", - instructions="You are a specialized agent for time and weather queries.", - toolset=toolset1, - can_delegate=True, - ) - - functions = FunctionTool(functions=user_function_set_2) - toolset2 = ToolSet() - toolset2.add(functions) - - agent_team.add_agent( - model=model_deployment_name, - name="SendEmailAgent", - instructions="You are a specialized agent for sending emails.", - toolset=toolset2, - can_delegate=False, - ) - - functions = FunctionTool(functions=user_function_set_3) - toolset3 = ToolSet() - toolset3.add(functions) - - agent_team.add_agent( - model=model_deployment_name, - name="TemperatureAgent", - instructions="You are a specialized agent for temperature conversion.", - toolset=toolset3, - can_delegate=False, - ) - - agent_team.assemble_team() - - user_request = ( - "Hello, Please provide me current time in '%Y-%m-%d %H:%M:%S' format, and the weather in New York. " - "Finally, convert the Celsius to Fahrenheit and send an email to Example Recipient with summary of results." - ) - - # Once process_request is called, the TeamLeader will coordinate. - # The loop in process_request will pick up tasks from the queue, assign them, and so on. - agent_team.process_request(request=user_request) - - agent_team.dismantle_team() -else: - print("Error: Please define the environment variable MODEL_DEPLOYMENT_NAME.") diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_async_with_console_tracing.py b/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_async_with_console_tracing.py deleted file mode 100644 index 37c2cc42b83a..000000000000 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_async_with_console_tracing.py +++ /dev/null @@ -1,93 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - This sample demonstrates how to use basic agent operations from - the Azure Agents service using a asynchronous client with tracing to console. 
- -USAGE: - python sample_agents_basics_async_with_console_tracing.py - - Before running the sample: - - pip install azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry aiohttp - - If you want to export telemetry to OTLP endpoint (such as Aspire dashboard - https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) - install: - - pip install opentelemetry-exporter-otlp-proto-grpc - - Set these environment variables with your own values: - * PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview - page of your Azure AI Foundry portal. - * AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED - Optional. Set to `true` to trace the content of chat - messages, which may contain personal data. False by default. -""" -import asyncio -import time -import sys -from azure.core.settings import settings - -settings.tracing_implementation = "opentelemetry" -from opentelemetry import trace -from opentelemetry.sdk.trace import TracerProvider -from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter -from azure.ai.agents.aio import AgentsClient -from azure.ai.agents.models import ListSortOrder, MessageTextContent -from azure.identity.aio import DefaultAzureCredential -from opentelemetry import trace -import os -from azure.ai.agents.telemetry import AIAgentsInstrumentor - -# Setup tracing to console -# Requires opentelemetry-sdk -span_exporter = ConsoleSpanExporter() -tracer_provider = TracerProvider() -tracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter)) -trace.set_tracer_provider(tracer_provider) -tracer = trace.get_tracer(__name__) - -AIAgentsInstrumentor().instrument() - -scenario = os.path.basename(__file__) -tracer = trace.get_tracer(__name__) - - -@tracer.start_as_current_span(__file__) -async def main() -> None: - - async with DefaultAzureCredential() as creds: - async with AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=creds) as agent_client: - - agent = await agent_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agent_client.threads.create() - print(f"Created thread, thread ID: {thread.id}") - - message = await agent_client.messages.create( - thread_id=thread.id, role="user", content="Hello, tell me a joke" - ) - print(f"Created message, message ID: {message.id}") - - run = await agent_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) - print(f"Run completed with status: {run.status}") - - await agent_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agent_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/CHANGELOG.md b/sdk/ai/azure-ai-projects/CHANGELOG.md index ab792e3be23e..77ff503c976e 100644 --- a/sdk/ai/azure-ai-projects/CHANGELOG.md +++ b/sdk/ai/azure-ai-projects/CHANGELOG.md @@ -1,5 +1,17 @@ # Release History +## 1.0.0b12 (Unreleased) + +### Features added + +### Breaking changes + +### Bugs Fixed + +* Fix for enable_telemetry to correctly instrument azure-ai-agents + +### Sample updates + ## 1.0.0b11 (2025-05-15) There have been significant updates with the release of version 1.0.0b11, including breaking 
changes. diff --git a/sdk/ai/azure-ai-projects/README.md b/sdk/ai/azure-ai-projects/README.md index 1dda36fb6505..70c379b63ffa 100644 --- a/sdk/ai/azure-ai-projects/README.md +++ b/sdk/ai/azure-ai-projects/README.md @@ -1,520 +1 @@ -# Azure AI Projects client library for Python - -The AI Projects client library (in preview) is part of the Azure AI Foundry SDK, and provides easy access to -resources in your Azure AI Foundry Project. Use it to: - -* **Create and run Agents** using the `.agents` property on the client. -* **Get an AzureOpenAI client** using the `.inference.get_azure_openai_client` method. -* **Enumerate AI Models** deployed to your Foundry Project using the `.deployments` operations. -* **Enumerate connected Azure resources** in your Foundry project using the `.connections` operations. -* **Upload documents and create Datasets** to reference them using the `.datasets` operations. -* **Create and enumerate Search Indexes** using the `.indexes` operations. -* **Get an Azure AI Inference client** for chat completions, text or image embeddings using the `.inference` operations. -* **Read a Prompty file or string** and render messages for inference clients, using the `PromptTemplate` class. -* **Run Evaluations** to assess the performance of generative AI applications, using the `evaluations` operations. -* **Enable OpenTelemetry tracing** using the `enable_telemetry` function. - -> **Note:** There have been significant updates with the release of version 1.0.0b11, including breaking changes. -please see new code snippets below and the samples folder. Agents are now implemented in a separate package `azure-ai-agents` -which will get installed automatically when you install `azure-ai-projects`. You can continue using ".agents" -operations on the `AIProjectsClient` to create, run and delete agents, as before. -See [full set of Agents samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples) -in their new location. Also see the [change log for the 1.0.0b11 release](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/CHANGELOG.md). - -[Product documentation](https://aka.ms/azsdk/azure-ai-projects/product-doc) -| [Samples][samples] -| [API reference documentation](https://aka.ms/azsdk/azure-ai-projects/python/reference) -| [Package (PyPI)](https://aka.ms/azsdk/azure-ai-projects/python/package) -| [SDK source code](https://aka.ms/azsdk/azure-ai-projects/python/code) - -## Reporting issues - -To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-projects" in the title or content. - -## Getting started - -### Prerequisite - -- Python 3.9 or later. -- An [Azure subscription][azure_sub]. -- A [project in Azure AI Foundry](https://learn.microsoft.com/azure/ai-studio/how-to/create-projects). -- The project endpoint URL of the form `https://.services.ai.azure.com/api/projects/`. It can be found in your Azure AI Foundry Project overview page. Below we will assume the environment variable `PROJECT_ENDPOINT` was defined to hold this value. -- An Entra ID token for authentication. Your application needs an object that implements the [TokenCredential](https://learn.microsoft.com/python/api/azure-core/azure.core.credentials.tokencredential) interface. Code samples here use [DefaultAzureCredential](https://learn.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential). 
To get that working, you will need:
- * An appropriate role assignment. See [Role-based access control in Azure AI Foundry portal](https://learn.microsoft.com/azure/ai-foundry/concepts/rbac-ai-foundry). Role assignment can be done via the "Access Control (IAM)" tab of your Azure AI Project resource in the Azure portal.
- * [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli) installed.
- * You are logged into your Azure account by running `az login`.
-
-### Install the package
-
-```bash
-pip install azure-ai-projects
-```
-
-## Key concepts
-
-### Create and authenticate the client with Entra ID
-
-Entra ID is currently the only authentication method supported by the client.
-
-To construct a synchronous client:
-
-```python
-import os
-from azure.ai.projects import AIProjectClient
-from azure.identity import DefaultAzureCredential
-
-project_client = AIProjectClient(
-    credential=DefaultAzureCredential(),
-    endpoint=os.environ["PROJECT_ENDPOINT"],
-)
-```
-
-To construct an asynchronous client, install the additional package [aiohttp](https://pypi.org/project/aiohttp/):
-
-```bash
-pip install aiohttp
-```
-
-and update the code above to import `asyncio`, and import `AIProjectClient` from the `azure.ai.projects.aio` namespace:
-
-```python
-import os
-import asyncio
-from azure.ai.projects.aio import AIProjectClient
-from azure.identity.aio import DefaultAzureCredential
-
-project_client = AIProjectClient(
-    credential=DefaultAzureCredential(),
-    endpoint=os.environ["PROJECT_ENDPOINT"],
-)
-```
-
-**Note:** Support for project connection string and hub-based projects has been discontinued. We recommend creating a new Azure AI Foundry resource that uses a project endpoint. If this is not possible, please pin the version of `azure-ai-projects` to `1.0.0b10` or earlier.
-
-## Examples
-
-### Performing Agent operations
-
-The `.agents` property on the `AIProjectsClient` gives you access to an authenticated `AgentsClient` from the `azure-ai-agents` package. Below we show how to create an Agent and delete it. To see what you can do with the `agent` you created, see the [many samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples) associated with the `azure-ai-agents` package.
-
-The code below assumes `model_deployment_name` (a string) is defined. It's the deployment name of an AI model in your Foundry Project, as shown in the "Models + endpoints" tab, under the "Name" column.
-
-
-
-```python
-agent = project_client.agents.create_agent(
-    model=model_deployment_name,
-    name="my-agent",
-    instructions="You are a helpful agent",
-)
-print(f"Created agent, agent ID: {agent.id}")
-
-# Do something with your Agent!
-# See samples here https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples
-
-project_client.agents.delete_agent(agent.id)
-print("Deleted agent")
-```
-
-
-
-### Get an authenticated AzureOpenAI client
-
-Your Azure AI Foundry project may have one or more OpenAI models deployed that support chat completions. Use the code below to get an authenticated [AzureOpenAI](https://github.com/openai/openai-python?tab=readme-ov-file#microsoft-azure-openai) client from the [openai](https://pypi.org/project/openai/) package, and execute a chat completions call.
-
-The code below assumes `model_deployment_name` (a string) is defined. It's the deployment name of an AI model in your Foundry Project, or a connected Azure OpenAI resource.
As shown in the "Models + endpoints" tab, under the "Name" column. - -Update the `api_version` value with one found in the "Data plane - inference" row [in this table](https://learn.microsoft.com/azure/ai-services/openai/reference#api-specs). - - - -```python -print( - "Get an authenticated Azure OpenAI client for the parent AI Services resource, and perform a chat completion operation:" -) -with project_client.inference.get_azure_openai_client(api_version="2024-10-21") as client: - - response = client.chat.completions.create( - model=model_deployment_name, - messages=[ - { - "role": "user", - "content": "How many feet are in a mile?", - }, - ], - ) - - print(response.choices[0].message.content) - -print( - "Get an authenticated Azure OpenAI client for a connected Azure OpenAI service, and perform a chat completion operation:" -) -with project_client.inference.get_azure_openai_client( - api_version="2024-10-21", connection_name=connection_name -) as client: - - response = client.chat.completions.create( - model=model_deployment_name, - messages=[ - { - "role": "user", - "content": "How many feet are in a mile?", - }, - ], - ) - - print(response.choices[0].message.content) -``` - - - -See the "inference" folder in the [package samples][samples] for additional samples. - -### Get an authenticated ChatCompletionsClient - -Your Azure AI Foundry project may have one or more AI models deployed that support chat completions. These could be OpenAI models, Microsoft models, or models from other providers. Use the code below to get an authenticated [ChatCompletionsClient](https://learn.microsoft.com/python/api/azure-ai-inference/azure.ai.inference.chatcompletionsclient) from the [azure-ai-inference](https://pypi.org/project/azure-ai-inference/) package, and execute a chat completions call. - -First, install the package: - -```bash -pip install azure-ai-inference -``` - -Then run the code below. Here we assume `model_deployment_name` (a string) is defined. It's the deployment name of an AI model in your Foundry Project, as shown in the "Models + endpoints" tab, under the "Name" column. - - - -```python -with project_client.inference.get_chat_completions_client() as client: - - response = client.complete( - model=model_deployment_name, messages=[UserMessage(content="How many feet are in a mile?")] - ) - - print(response.choices[0].message.content) -``` - - - -See the "inference" folder in the [package samples][samples] for additional samples, including getting an authenticated [EmbeddingsClient](https://learn.microsoft.com/python/api/azure-ai-inference/azure.ai.inference.embeddingsclient) and [ImageEmbeddingsClient](https://learn.microsoft.com/python/api/azure-ai-inference/azure.ai.inference.imageembeddingsclient). - -### Deployments operations - -The code below shows some Deployments operations, which allow you to enumerate the AI models deployed to your AI Foundry Projects. These models can be seen in the "Models + endpoints" tab in your AI Foundry Project. Full samples can be found under the "deployment" folder in the [package samples][samples]. 
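-
-The snippet that follows (from the package samples) assumes `model_publisher`, `model_name` and `model_deployment_name` are already defined as strings. As a minimal sketch, you could read them from environment variables before running it; the variable names below are only illustrative and are not required by the SDK:
-
-```python
-import os
-
-# Illustrative configuration; use whatever mechanism suits your project.
-model_publisher = os.environ["MODEL_PUBLISHER"]              # publisher to filter deployments by
-model_name = os.environ["MODEL_NAME"]                        # base model name to filter deployments by
-model_deployment_name = os.environ["MODEL_DEPLOYMENT_NAME"]  # as shown in the "Models + endpoints" tab
-```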
- - - -```python -print("List all deployments:") -for deployment in project_client.deployments.list(): - print(deployment) - -print(f"List all deployments by the model publisher `{model_publisher}`:") -for deployment in project_client.deployments.list(model_publisher=model_publisher): - print(deployment) - -print(f"List all deployments of model `{model_name}`:") -for deployment in project_client.deployments.list(model_name=model_name): - print(deployment) - -print(f"Get a single deployment named `{model_deployment_name}`:") -deployment = project_client.deployments.get(model_deployment_name) -print(deployment) -``` - - - -### Connections operations - -The code below shows some Connection operations, which allow you to enumerate the Azure Resources connected to your AI Foundry Projects. These connections can be seen in the "Management Center", in the "Connected resources" tab in your AI Foundry Project. Full samples can be found under the "connections" folder in the [package samples][samples]. - - - -```python -print("List all connections:") -for connection in project_client.connections.list(): - print(connection) - -print("List all connections of a particular type:") -for connection in project_client.connections.list( - connection_type=ConnectionType.AZURE_OPEN_AI, -): - print(connection) - -print("Get the default connection of a particular type, without its credentials:") -connection = project_client.connections.get_default(connection_type=ConnectionType.AZURE_OPEN_AI) -print(connection) - -print("Get the default connection of a particular type, with its credentials:") -connection = project_client.connections.get_default( - connection_type=ConnectionType.AZURE_OPEN_AI, include_credentials=True -) -print(connection) - -print(f"Get the connection named `{connection_name}`, without its credentials:") -connection = project_client.connections.get(connection_name) -print(connection) - -print(f"Get the connection named `{connection_name}`, with its credentials:") -connection = project_client.connections.get(connection_name, include_credentials=True) -print(connection) -``` - - - -### Dataset operations - -The code below shows some Dataset operations. Full samples can be found under the "datasets" -folder in the [package samples][samples]. - - - -```python -print( - f"Upload a single file and create a new Dataset `{dataset_name}`, version `{dataset_version_1}`, to reference the file." -) -dataset: DatasetVersion = project_client.datasets.upload_file( - name=dataset_name, - version=dataset_version_1, - file_path=data_file, - connection_name=connection_name, -) -print(dataset) - -print( - f"Upload files in a folder (including sub-folders) and create a new version `{dataset_version_2}` in the same Dataset, to reference the files." 
-) -dataset = project_client.datasets.upload_folder( - name=dataset_name, - version=dataset_version_2, - folder=data_folder, - connection_name=connection_name, - file_pattern=re.compile(r"\.(txt|csv|md)$", re.IGNORECASE), -) -print(dataset) - -print(f"Get an existing Dataset version `{dataset_version_1}`:") -dataset = project_client.datasets.get(name=dataset_name, version=dataset_version_1) -print(dataset) - -print(f"Get credentials of an existing Dataset version `{dataset_version_1}`:") -asset_credential = project_client.datasets.get_credentials(name=dataset_name, version=dataset_version_1) -print(asset_credential) - -print("List latest versions of all Datasets:") -for dataset in project_client.datasets.list(): - print(dataset) - -print(f"Listing all versions of the Dataset named `{dataset_name}`:") -for dataset in project_client.datasets.list_versions(name=dataset_name): - print(dataset) - -print("Delete all Dataset versions created above:") -project_client.datasets.delete(name=dataset_name, version=dataset_version_1) -project_client.datasets.delete(name=dataset_name, version=dataset_version_2) -``` - - - -### Indexes operations - -The code below shows some Indexes operations. Full samples can be found under the "indexes" -folder in the [package samples][samples]. - - - -```python -print( - f"Create Index `{index_name}` with version `{index_version}`, referencing an existing AI Search resource:" -) -index = project_client.indexes.create_or_update( - name=index_name, - version=index_version, - body=AzureAISearchIndex(connection_name=ai_search_connection_name, index_name=ai_search_index_name), -) -print(index) - -print(f"Get Index `{index_name}` version `{index_version}`:") -index = project_client.indexes.get(name=index_name, version=index_version) -print(index) - -print("List latest versions of all Indexes:") -for index in project_client.indexes.list(): - print(index) - -print(f"Listing all versions of the Index named `{index_name}`:") -for index in project_client.indexes.list_versions(name=index_name): - print(index) - -print(f"Delete Index `{index_name}` version `{index_version}`:") -project_client.indexes.delete(name=index_name, version=index_version) -``` - - - -### Evaluation - -Evaluation in Azure AI Project client library provides quantitive, AI-assisted quality and safety metrics to asses performance and Evaluate LLM Models, GenAI Application and Agents. Metrics are defined as evaluators. Built-in or custom evaluators can provide comprehensive evaluation insights. - -The code below shows some evaluation operations. 
Full list of sample can be found under "evaluation" folder in the [package samples][samples] - - - -```python -print("Upload a single file and create a new Dataset to reference the file.") -dataset: DatasetVersion = project_client.datasets.upload_file( - name=dataset_name, - version=dataset_version, - file_path=data_file, -) -print(dataset) - -print("Create an evaluation") -evaluation: Evaluation = Evaluation( - display_name="Sample Evaluation Test", - description="Sample evaluation for testing", - # Sample Dataset Id : azureai://accounts//projects//data//versions/ - data=InputDataset(id=dataset.id if dataset.id else ""), - evaluators={ - "relevance": EvaluatorConfiguration( - id=EvaluatorIds.RELEVANCE.value, - init_params={ - "deployment_name": model_deployment_name, - }, - data_mapping={ - "query": "${data.query}", - "response": "${data.response}", - }, - ), - "violence": EvaluatorConfiguration( - id=EvaluatorIds.VIOLENCE.value, - init_params={ - "azure_ai_project": endpoint, - }, - ), - "bleu_score": EvaluatorConfiguration( - id=EvaluatorIds.BLEU_SCORE.value, - ), - }, -) - -evaluation_response: Evaluation = project_client.evaluations.create( - evaluation, - headers={ - "model-endpoint": model_endpoint, - "api-key": model_api_key, - }, -) -print(evaluation_response) - -print("Get evaluation") -get_evaluation_response: Evaluation = project_client.evaluations.get(evaluation_response.name) - -print(get_evaluation_response) - -print("List evaluations") -for evaluation in project_client.evaluations.list(): - print(evaluation) -``` - - - -## Troubleshooting - -### Exceptions - -Client methods that make service calls raise an [HttpResponseError](https://learn.microsoft.com/python/api/azure-core/azure.core.exceptions.httpresponseerror) exception for a non-success HTTP status code response from the service. The exception's `status_code` will hold the HTTP response status code (with `reason` showing the friendly name). The exception's `error.message` contains a detailed message that may be helpful in diagnosing the issue: - -```python -from azure.core.exceptions import HttpResponseError - -... - -try: - result = project_client.connections.list() -except HttpResponseError as e: - print(f"Status code: {e.status_code} ({e.reason})") - print(e.message) -``` - -For example, when you provide wrong credentials: - -```text -Status code: 401 (Unauthorized) -Operation returned an invalid status 'Unauthorized' -``` - -### Logging - -The client uses the standard [Python logging library](https://docs.python.org/3/library/logging.html). The SDK logs HTTP request and response details, which may be useful in troubleshooting. To log to stdout, add the following at the top of your Python script: - -```python -import sys -import logging - -# Acquire the logger for this client library. Use 'azure' to affect both -# 'azure.core` and `azure.ai.inference' libraries. -logger = logging.getLogger("azure") - -# Set the desired logging level. logging.INFO or logging.DEBUG are good options. -logger.setLevel(logging.DEBUG) - -# Direct logging output to stdout: -handler = logging.StreamHandler(stream=sys.stdout) -# Or direct logging output to a file: -# handler = logging.FileHandler(filename="sample.log") -logger.addHandler(handler) - -# Optional: change the default logging format. Here we add a timestamp. 
-#formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s") -#handler.setFormatter(formatter) -``` - -By default logs redact the values of URL query strings, the values of some HTTP request and response headers (including `Authorization` which holds the key or token), and the request and response payloads. To create logs without redaction, add `logging_enable=True` to the client constructor: - -```python -project_client = AIProjectClient( - credential=DefaultAzureCredential(), - endpoint=os.environ["PROJECT_ENDPOINT"], - logging_enable = True -) -``` - -Note that the log level must be set to `logging.DEBUG` (see above code). Logs will be redacted with any other log level. - -Be sure to protect non redacted logs to avoid compromising security. - -For more information, see [Configure logging in the Azure libraries for Python](https://aka.ms/azsdk/python/logging) - -### Reporting issues - -To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-projects" in the title or content. - -## Next steps - -Have a look at the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-projects/samples) folder, containing fully runnable Python code for synchronous and asynchronous clients. - -## Contributing - -This project welcomes contributions and suggestions. Most contributions require -you to agree to a Contributor License Agreement (CLA) declaring that you have -the right to, and actually do, grant us the rights to use your contribution. -For details, visit https://cla.microsoft.com. - -When you submit a pull request, a CLA-bot will automatically determine whether -you need to provide a CLA and decorate the PR appropriately (e.g., label, -comment). Simply follow the instructions provided by the bot. You will only -need to do this once across all repos using our CLA. - -This project has adopted the -[Microsoft Open Source Code of Conduct][code_of_conduct]. For more information, -see the Code of Conduct FAQ or contact opencode@microsoft.com with any -additional questions or comments. - - -[samples]: https://aka.ms/azsdk/azure-ai-projects/python/samples/ -[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/ -[azure_sub]: https://azure.microsoft.com/free/ -[evaluators]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk -[azure_ai_evaluation]: https://learn.microsoft.com/python/api/overview/azure/ai-evaluation-readme -[evaluator_library]: https://learn.microsoft.com/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library \ No newline at end of file +Hello world \ No newline at end of file diff --git a/sdk/ai/azure-ai-projects/README_AGENTS.md b/sdk/ai/azure-ai-projects/README_AGENTS.md new file mode 100644 index 000000000000..a71db514276b --- /dev/null +++ b/sdk/ai/azure-ai-projects/README_AGENTS.md @@ -0,0 +1,1284 @@ + +# Azure AI Agents client library for Python + +Use the AI Agents client library to: + +* **Develop Agents using the Azure AI Agents Service**, leveraging an extensive ecosystem of models, tools, and capabilities from OpenAI, Microsoft, and other LLM providers. The Azure AI Agents Service enables the building of Agents for a wide range of generative AI use cases. 
+* **Note:** While this package can be used independently, we recommend using the Azure AI Projects client library (azure-ai-projects) for an enhanced experience. +The Projects library provides simplified access to advanced functionality, such as creating and managing agents, enumerating AI models, working with datasets and +managing search indexes, evaluating generative AI performance, and enabling OpenTelemetry tracing. + +[Product documentation](https://aka.ms/azsdk/azure-ai-agents/product-doc) +| [Samples][samples] +| [API reference documentation](https://aka.ms/azsdk/azure-ai-agents/python/reference) +| [Package (PyPI)](https://aka.ms/azsdk/azure-ai-agents/python/package) +| [SDK source code](https://aka.ms/azsdk/azure-ai-agents/python/code) +| [AI Starter Template](https://aka.ms/azsdk/azure-ai-agents/python/ai-starter-template) + +## Reporting issues + +To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-agents" in the title or content. + +## Table of contents + +- [Getting started](#getting-started) + - [Prerequisite](#prerequisite) + - [Install the package](#install-the-package) +- [Key concepts](#key-concepts) + - [Create and authenticate the client](#create-and-authenticate-the-client) +- [Examples](#examples) + - [Create an Agent](#create-agent) with: + - [File Search](#create-agent-with-file-search) + - [Enterprise File Search](#create-agent-with-enterprise-file-search) + - [Code interpreter](#create-agent-with-code-interpreter) + - [Bing grounding](#create-agent-with-bing-grounding) + - [Azure AI Search](#create-agent-with-azure-ai-search) + - [Function call](#create-agent-with-function-call) + - [Azure Function Call](#create-agent-with-azure-function-call) + - [OpenAPI](#create-agent-with-openapi) + - [Fabric data](#create-an-agent-with-fabric) + - [Create thread](#create-thread) with + - [Tool resource](#create-thread-with-tool-resource) + - [Create message](#create-message) with: + - [File search attachment](#create-message-with-file-search-attachment) + - [Code interpreter attachment](#create-message-with-code-interpreter-attachment) + - [Create Message with Image Inputs](#create-message-with-image-inputs) + - [Execute Run, Run_and_Process, or Stream](#execute-run-run_and_process-or-stream) + - [Retrieve message](#retrieve-message) + - [Retrieve file](#retrieve-file) + - [Tear down by deleting resource](#teardown) + - [Tracing](#tracing) + - [Installation](#installation) + - [How to enable tracing](#how-to-enable-tracing) + - [How to trace your own functions](#how-to-trace-your-own-functions) +- [Troubleshooting](#troubleshooting) + - [Logging](#logging) + - [Reporting issues](#reporting-issues) +- [Next steps](#next-steps) +- [Contributing](#contributing) + +## Getting started + +### Prerequisite + +- Python 3.9 or later. +- An [Azure subscription][azure_sub]. +- A [project in Azure AI Foundry](https://learn.microsoft.com/azure/ai-studio/how-to/create-projects). +- The project endpoint string. It can be found in your Azure AI Foundry project overview page, under "Project details". Below we will assume the environment variable `PROJECT_ENDPOINT_STRING` was defined to hold this value. +- Entra ID is needed to authenticate the client. Your application needs an object that implements the [TokenCredential](https://learn.microsoft.com/python/api/azure-core/azure.core.credentials.tokencredential) interface. 
Code samples here use [DefaultAzureCredential](https://learn.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential). To get that working, you will need: + * An appropriate role assignment. see [Role-based access control in Azure AI Foundry portal](https://learn.microsoft.com/azure/ai-foundry/concepts/rbac-ai-foundry). Role assigned can be done via the "Access Control (IAM)" tab of your Azure AI Project resource in the Azure portal. + * [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli) installed. + * You are logged into your Azure account by running `az login`. + * Note that if you have multiple Azure subscriptions, the subscription that contains your Azure AI Project resource must be your default subscription. Run `az account list --output table` to list all your subscription and see which one is the default. Run `az account set --subscription "Your Subscription ID or Name"` to change your default subscription. + +### Install the package + +```bash +pip install azure-ai-agents +``` + +## Key concepts + +### Create and authenticate the client + +To construct a synchronous client: + +```python +import os +from azure.ai.agents import AgentsClient +from azure.identity import DefaultAzureCredential + +agents_client = AgentsClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) +``` + +To construct an asynchronous client, Install the additional package [aiohttp](https://pypi.org/project/aiohttp/): + +```bash +pip install aiohttp +``` + +and update the code above to import `asyncio`, and import `AgentsClient` from the `azure.ai.agents.aio` namespace: + +```python +import os +import asyncio +from azure.ai.agents.aio import AgentsClient +from azure.core.credentials import AzureKeyCredential + +agent_client = AgentsClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) +``` + +## Examples + +### Create Agent + +Before creating an Agent, you need to set up Azure resources to deploy your model. [Create a New Agent Quickstart](https://learn.microsoft.com/azure/ai-services/agents/quickstart?pivots=programming-language-python-azure) details selecting and deploying your Agent Setup. + +Here is an example of how to create an Agent: + + +```python + + agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + ) +``` + + + +To allow Agents to access your resources or custom functions, you need tools. You can pass tools to `create_agent` by either `toolset` or combination of `tools` and `tool_resources`. + +Here is an example of `toolset`: + + +```python +functions = FunctionTool(user_functions) +code_interpreter = CodeInterpreterTool() + +toolset = ToolSet() +toolset.add(functions) +toolset.add(code_interpreter) + +# To enable tool calls executed automatically +agents_client.enable_auto_function_calls(toolset) + +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + toolset=toolset, +) +``` + + + +Also notices that if you use asynchronous client, you use `AsyncToolSet` instead. Additional information related to `AsyncFunctionTool` be discussed in the later sections. 
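+
+For illustration, here is a minimal asynchronous sketch of the same toolset pattern (not one of the shipped samples): `fetch_time` stands in for your own async callables, and the full asynchronous function-calling example appears in [Create Agent with Function Call](#create-agent-with-function-call) below:
+
+```python
+import asyncio
+import os
+from datetime import datetime, timezone
+
+from azure.ai.agents.aio import AgentsClient
+from azure.ai.agents.models import AsyncFunctionTool, AsyncToolSet
+from azure.identity.aio import DefaultAzureCredential
+
+
+async def fetch_time() -> str:
+    """Example user function; replace with your own async callables."""
+    return datetime.now(timezone.utc).isoformat()
+
+
+async def main() -> None:
+    async with DefaultAzureCredential() as credential:
+        async with AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=credential) as agents_client:
+            toolset = AsyncToolSet()
+            toolset.add(AsyncFunctionTool({fetch_time}))
+
+            # To enable tool calls executed automatically
+            agents_client.enable_auto_function_calls(toolset)
+
+            agent = await agents_client.create_agent(
+                model=os.environ["MODEL_DEPLOYMENT_NAME"],
+                name="my-agent",
+                instructions="You are a helpful agent",
+                toolset=toolset,
+            )
+            print(f"Created agent, agent ID: {agent.id}")
+
+
+asyncio.run(main())
+```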
+ +Here is an example to use `tools` and `tool_resources`: + + +```python +file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) + +# Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=file_search_tool.definitions, + tool_resources=file_search_tool.resources, +) +``` + + + +In the following sections, we show you sample code in either `toolset` or combination of `tools` and `tool_resources`. + +### Create Agent with File Search + +To perform file search by an Agent, we first need to upload a file, create a vector store, and associate the file to the vector store. Here is an example: + + + +```python +file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) +print(f"Uploaded file, file ID: {file.id}") + +vector_store = agents_client.vector_stores.create_and_poll(file_ids=[file.id], name="my_vectorstore") +print(f"Created vector store, vector store ID: {vector_store.id}") + +# Create file search tool with resources followed by creating agent +file_search = FileSearchTool(vector_store_ids=[vector_store.id]) + +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="Hello, you are helpful agent and can search information from uploaded files", + tools=file_search.definitions, + tool_resources=file_search.resources, +) +``` + + + +### Create Agent with Enterprise File Search + +We can upload file to Azure as it is shown in the example, or use the existing Azure blob storage. In the code below we demonstrate how this can be achieved. First we upload file to azure and create `VectorStoreDataSource`, which then is used to create vector store. This vector store is then given to the `FileSearchTool` constructor. + + + +```python +# We will upload the local file to Azure and will use it for vector store creation. +asset_uri = os.environ["AZURE_BLOB_URI"] + +# Create a vector store with no file and wait for it to be processed +ds = VectorStoreDataSource(asset_identifier=asset_uri, asset_type=VectorStoreDataSourceAssetType.URI_ASSET) +vector_store = agents_client.vector_stores.create_and_poll(data_sources=[ds], name="sample_vector_store") +print(f"Created vector store, vector store ID: {vector_store.id}") + +# Create a file search tool +file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) + +# Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=file_search_tool.definitions, + tool_resources=file_search_tool.resources, +) +``` + + + +We also can attach files to the existing vector store. In the code snippet below, we first create an empty vector store and add file to it. 
+ + + +```python +# Create a vector store with no file and wait for it to be processed +vector_store = agents_client.vector_stores.create_and_poll(data_sources=[], name="sample_vector_store") +print(f"Created vector store, vector store ID: {vector_store.id}") + +ds = VectorStoreDataSource(asset_identifier=asset_uri, asset_type=VectorStoreDataSourceAssetType.URI_ASSET) +# Add the file to the vector store or you can supply data sources in the vector store creation +vector_store_file_batch = agents_client.vector_store_file_batches.create_and_poll( + vector_store_id=vector_store.id, data_sources=[ds] +) +print(f"Created vector store file batch, vector store file batch ID: {vector_store_file_batch.id}") + +# Create a file search tool +file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) +``` + + + +### Create Agent with Code Interpreter + +Here is an example to upload a file and use it for code interpreter by an Agent: + + + +```python +file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) +print(f"Uploaded file, file ID: {file.id}") + +code_interpreter = CodeInterpreterTool(file_ids=[file.id]) + +# Create agent with code interpreter tool and tools_resources +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=code_interpreter.definitions, + tool_resources=code_interpreter.resources, +) +``` + + + +### Create Agent with Bing Grounding + +To enable your Agent to perform search through Bing search API, you use `BingGroundingTool` along with a connection. + +Here is an example: + + + +```python +conn_id = os.environ["AZURE_BING_CONNECTION_ID"] + +# Initialize agent bing tool and add the connection id +bing = BingGroundingTool(connection_id=conn_id) + +# Create agent with the bing tool and process agent run +with agents_client: + agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + tools=bing.definitions, + ) +``` + + + +### Create Agent with Azure AI Search + +Azure AI Search is an enterprise search system for high-performance applications. It integrates with Azure OpenAI Service and Azure Machine Learning, offering advanced search technologies like vector search and full-text search. Ideal for knowledge base insights, information discovery, and automation. Creating an Agent with Azure AI Search requires an existing Azure AI Search Index. For more information and setup guides, see [Azure AI Search Tool Guide](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/azure-ai-search?tabs=azurecli%2Cpython&pivots=overview-azure-ai-search). + +Here is an example to integrate Azure AI Search: + + + +```python +conn_id = os.environ["AI_AZURE_AI_CONNECTION_ID"] + +print(conn_id) + +# Initialize agent AI search tool and add the search index connection id +ai_search = AzureAISearchTool( + index_connection_id=conn_id, index_name="sample_index", query_type=AzureAISearchQueryType.SIMPLE, top_k=3, filter="" +) + +# Create agent with AI search tool and process agent run +with agents_client: + agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + tools=ai_search.definitions, + tool_resources=ai_search.resources, + ) +``` + + + +If the agent has found the relevant information in the index, the reference +and annotation will be provided in the message response. 
In the example above, we replace +the reference placeholder by the actual reference and url. Please note, that to +get sensible result, the index needs to have "embedding", "token", "category" and "title" fields. + + + +```python +# Fetch and log all messages +messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) +for message in messages: + if message.role == MessageRole.AGENT and message.url_citation_annotations: + placeholder_annotations = { + annotation.text: f" [see {annotation.url_citation.title}] ({annotation.url_citation.url})" + for annotation in message.url_citation_annotations + } + for message_text in message.text_messages: + message_str = message_text.text.value + for k, v in placeholder_annotations.items(): + message_str = message_str.replace(k, v) + print(f"{message.role}: {message_str}") + else: + for message_text in message.text_messages: + print(f"{message.role}: {message_text.text.value}") +``` + + + +### Create Agent with Function Call + +You can enhance your Agents by defining callback functions as function tools. These can be provided to `create_agent` via either the `toolset` parameter or the combination of `tools` and `tool_resources`. Here are the distinctions: + +For more details about requirements and specification of functions, refer to [Function Tool Specifications](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/FunctionTool.md) + +Here is an example to use [user functions](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/utils/user_functions.py) in `toolset`: + + +```python +functions = FunctionTool(user_functions) +toolset = ToolSet() +toolset.add(functions) +agents_client.enable_auto_function_calls(toolset) + +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + toolset=toolset, +) +``` + + + +For asynchronous functions, you must import `AgentsClient` from `azure.ai.agents.aio` and use `AsyncFunctionTool`. Here is an example using [asynchronous user functions](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_functions_async.py): + +```python +from azure.ai.agents.aio import AgentsClient +``` + + + +```python +functions = AsyncFunctionTool(user_async_functions) + +toolset = AsyncToolSet() +toolset.add(functions) +agents_client.enable_auto_function_calls(toolset) + +agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + toolset=toolset, +) +``` + + + +Notice that if `enable_auto_function_calls` is called, the SDK will invoke the functions automatically during `create_and_process` or streaming. If you prefer to execute them manually, refer to [`sample_agents_stream_eventhandler_with_functions.py`](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_functions.py) or +[`sample_agents_functions.py`](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_functions.py) + +### Create Agent With Azure Function Call + +The AI agent leverages Azure Functions triggered asynchronously via Azure Storage Queues. 
To enable the agent to perform Azure Function calls, you must set up the corresponding `AzureFunctionTool`, specifying input and output queues as well as parameter definitions. + +Example Python snippet illustrating how you create an agent utilizing the Azure Function Tool: + +```python +azure_function_tool = AzureFunctionTool( + name="foo", + description="Get answers from the foo bot.", + parameters={ + "type": "object", + "properties": { + "query": {"type": "string", "description": "The question to ask."}, + "outputqueueuri": {"type": "string", "description": "The full output queue uri."}, + }, + }, + input_queue=AzureFunctionStorageQueue( + queue_name="azure-function-foo-input", + storage_service_endpoint=storage_service_endpoint, + ), + output_queue=AzureFunctionStorageQueue( + queue_name="azure-function-tool-output", + storage_service_endpoint=storage_service_endpoint, + ), +) + +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="azure-function-agent-foo", + instructions=f"You are a helpful support agent. Use the provided function any time the prompt contains the string 'What would foo say?'. When you invoke the function, ALWAYS specify the output queue uri parameter as '{storage_service_endpoint}/azure-function-tool-output'. Always responds with \"Foo says\" and then the response from the tool.", + tools=azure_function_tool.definitions, +) +print(f"Created agent, agent ID: {agent.id}") +``` + +--- + +**Limitations** + +Currently, the Azure Function integration for the AI Agent has the following limitations: + +- Supported trigger for Azure Function is currently limited to **Queue triggers** only. + HTTP or other trigger types and streaming responses are not supported at this time. + +--- + +**Create and Deploy Azure Function** + +Before you can use the agent with AzureFunctionTool, you need to create and deploy Azure Function. + +Below is an example Python Azure Function responding to queue-triggered messages and placing responses on the output queue: + +```python +import azure.functions as func +import logging +import json + +app = func.FunctionApp() + + +@app.function_name(name="Foo") +@app.queue_trigger( + arg_name="arguments", + queue_name="azure-function-foo-input", + connection="AzureWebJobsStorage") +@app.queue_output( + arg_name="outputQueue", + queue_name="azure-function-tool-output", + connection="AzureWebJobsStorage") +def foo(arguments: func.QueueMessage, outputQueue: func.Out[str]) -> None: + """ + The function, answering question. + + :param arguments: The arguments, containing json serialized request. + :param outputQueue: The output queue to write messages to. + """ + + parsed_args = json.loads(arguments.get_body().decode('utf-8')) + try: + response = { + "Value": "Bar", + "CorrelationId": parsed_args['CorrelationId'] + } + outputQueue.set(json.dumps(response)) + logging.info(f'The function returns the following message: {json.dumps(response)}') + except Exception as e: + logging.error(f"Error processing message: {e}") + raise +``` + +> **Important:** Both input and output payloads must contain the `CorrelationId`, which must match in request and response. 
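+
+As a local sanity check (not part of the sample), you can exercise these queues yourself with the [azure-storage-queue](https://pypi.org/project/azure-storage-queue/) package. The sketch below assumes the queue names from the example above and a storage connection string held in a `STORAGE_CONNECTION_STRING` environment variable of your own choosing; adjust the Base64 policies if your Function app uses a different queue message encoding:
+
+```python
+import json
+import os
+import uuid
+
+from azure.storage.queue import QueueClient, TextBase64DecodePolicy, TextBase64EncodePolicy
+
+conn_str = os.environ["STORAGE_CONNECTION_STRING"]  # illustrative variable name
+
+# The queues used by the Azure Function shown above
+input_queue = QueueClient.from_connection_string(
+    conn_str, "azure-function-foo-input", message_encode_policy=TextBase64EncodePolicy()
+)
+output_queue = QueueClient.from_connection_string(
+    conn_str, "azure-function-tool-output", message_decode_policy=TextBase64DecodePolicy()
+)
+
+# Send a request carrying a CorrelationId ...
+correlation_id = str(uuid.uuid4())
+input_queue.send_message(json.dumps({"query": "What would foo say?", "CorrelationId": correlation_id}))
+
+# ... then poll the output queue for the response with the matching CorrelationId
+for message in output_queue.receive_messages():
+    response = json.loads(message.content)
+    if response.get("CorrelationId") == correlation_id:
+        print(f"Foo replied: {response['Value']}")
+        output_queue.delete_message(message)
+        break
+```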
+ +--- + +**Azure Function Project Creation and Deployment** + +To deploy your function to Azure properly, follow Microsoft's official documentation step by step: + +[Azure Functions Python Developer Guide](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=windows%2Cbash%2Cazure-cli%2Cbrowser) + +**Summary of required steps:** + +- Use the Azure CLI or Azure Portal to create an Azure Function App. +- Create input and output queues in Azure Storage. +- Deploy your Function code. + +--- + +**Verification and Testing Azure Function** + +To ensure that your Azure Function deployment functions correctly: + +1. Place the following style message manually into the input queue (`input`): + +{ + "CorrelationId": "42" +} + +Check the output queue (`output`) and validate the structured message response: + +{ + "Value": "Bar", + "CorrelationId": "42" +} + +--- + +**Required Role Assignments (IAM Configuration)** + +Ensure your Azure AI Project identity has the following storage account permissions: +- `Storage Account Contributor` +- `Storage Blob Data Contributor` +- `Storage File Data Privileged Contributor` +- `Storage Queue Data Contributor` +- `Storage Table Data Contributor` + +--- + +**Additional Important Configuration Notes** + +- The Azure Function configured above uses the `AzureWebJobsStorage` connection string for queue connectivity. You may alternatively use managed identity-based connections as described in the official Azure Functions Managed Identity documentation. +- Storage queues you specify (`input` & `output`) should already exist in the storage account before the Function deployment or invocation, created manually via Azure portal or CLI. +- When using Azure storage account connection strings, make sure the account has enabled storage account key access (`Storage Account > Settings > Configuration`). + +--- + +With the above steps complete, your Azure Function integration with your AI Agent is ready for use. + + +### Create Agent With Logic Apps + +Logic Apps allow HTTP requests to trigger actions. For more information, refer to the guide [Logic App Workflows for Function Calling](https://learn.microsoft.com/azure/ai-services/openai/how-to/assistants-logic-apps). + +Your Logic App must be in the same resource group as your Azure AI Project, shown in the Azure Portal. Agents SDK accesses Logic Apps through Workflow URLs, which are fetched and called as requests in functions. + +Below is an example of how to create an Azure Logic App utility tool and register a function with it. 
+ + + +```python + +# Create the agents client +agents_client = AgentsClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) + +# Extract subscription and resource group from the project scope +subscription_id = os.environ["SUBSCRIPTION_ID"] +resource_group = os.environ["resource_group_name"] + +# Logic App details +logic_app_name = "" +trigger_name = "" + +# Create and initialize AzureLogicAppTool utility +logic_app_tool = AzureLogicAppTool(subscription_id, resource_group) +logic_app_tool.register_logic_app(logic_app_name, trigger_name) +print(f"Registered logic app '{logic_app_name}' with trigger '{trigger_name}'.") + +# Create the specialized "send_email_via_logic_app" function for your agent tools +send_email_func = create_send_email_function(logic_app_tool, logic_app_name) + +# Prepare the function tools for the agent +functions_to_use: Set = { + fetch_current_datetime, + send_email_func, # This references the AzureLogicAppTool instance via closure +} +``` + + + +After this the functions can be incorporated normally into code using `FunctionTool`. + + +### Create Agent With OpenAPI + +OpenAPI specifications describe REST operations against a specific endpoint. Agents SDK can read an OpenAPI spec, create a function from it, and call that function against the REST endpoint without additional client-side execution. + +Here is an example creating an OpenAPI tool (using anonymous authentication): + + + +```python + +with open(weather_asset_file_path, "r") as f: + openapi_weather = jsonref.loads(f.read()) + +with open(countries_asset_file_path, "r") as f: + openapi_countries = jsonref.loads(f.read()) + +# Create Auth object for the OpenApiTool (note that connection or managed identity auth setup requires additional setup in Azure) +auth = OpenApiAnonymousAuthDetails() + +# Initialize agent OpenApi tool using the read in OpenAPI spec +openapi_tool = OpenApiTool( + name="get_weather", spec=openapi_weather, description="Retrieve weather information for a location", auth=auth +) +openapi_tool.add_definition( + name="get_countries", spec=openapi_countries, description="Retrieve a list of countries", auth=auth +) + +# Create agent with OpenApi tool and process agent run +with agents_client: + agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + tools=openapi_tool.definitions, + ) +``` + + + + +### Create an Agent with Fabric + +To enable your Agent to answer queries using Fabric data, use `FabricTool` along with a connection to the Fabric resource. + +Here is an example: + + + +```python +conn_id = os.environ["FABRIC_CONNECTION_ID"] + +print(conn_id) + +# Initialize an Agent Fabric tool and add the connection id +fabric = FabricTool(connection_id=conn_id) + +# Create an Agent with the Fabric tool and process an Agent run +with agents_client: + agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + tools=fabric.definitions, + ) +``` + + + + +### Create Thread + +For each session or conversation, a thread is required. Here is an example: + + + +```python +thread = agents_client.threads.create() +``` + + + +### Create Thread with Tool Resource + +In some scenarios, you might need to assign specific resources to individual threads. To achieve this, you provide the `tool_resources` argument to `create_thread`. 
In the following example, you create a vector store and upload a file, enable an Agent for file search using the `tools` argument, and then associate the file with the thread using the `tool_resources` argument. + + + +```python +file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) +print(f"Uploaded file, file ID: {file.id}") + +vector_store = agents_client.vector_stores.create_and_poll(file_ids=[file.id], name="my_vectorstore") +print(f"Created vector store, vector store ID: {vector_store.id}") + +# Create file search tool with resources followed by creating agent +file_search = FileSearchTool(vector_store_ids=[vector_store.id]) + +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="Hello, you are helpful agent and can search information from uploaded files", + tools=file_search.definitions, +) + +print(f"Created agent, ID: {agent.id}") + +# Create thread with file resources. +# If the agent has multiple threads, only this thread can search this file. +thread = agents_client.threads.create(tool_resources=file_search.resources) +``` + + + +#### List Threads + +To list all threads attached to a given agent, use the list_threads API: + +```python +threads = agents_client.threads.list() +``` + +### Create Message + +To create a message for agent to process, you pass `user` as `role` and a question as `content`: + + + +```python +message = agents_client.messages.create(thread_id=thread.id, role="user", content="Hello, tell me a joke") +``` + + + +### Create Message with File Search Attachment + +To attach a file to a message for content searching, you use `MessageAttachment` and `FileSearchTool`: + + + +```python +attachment = MessageAttachment(file_id=file.id, tools=FileSearchTool().definitions) +message = agents_client.messages.create( + thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?", attachments=[attachment] +) +``` + + + +### Create Message with Code Interpreter Attachment + +To attach a file to a message for data analysis, use `MessageAttachment` and `CodeInterpreterTool` classes. You must pass `CodeInterpreterTool` as `tools` or `toolset` in `create_agent` call or the file attachment cannot be opened for code interpreter. + +Here is an example to pass `CodeInterpreterTool` as tool: + + + +```python +# Notice that CodeInterpreter must be enabled in the agent creation, +# otherwise the agent will not be able to see the file attachment for code interpretation +agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=CodeInterpreterTool().definitions, +) +print(f"Created agent, agent ID: {agent.id}") + +thread = agents_client.threads.create() +print(f"Created thread, thread ID: {thread.id}") + +# Create an attachment +attachment = MessageAttachment(file_id=file.id, tools=CodeInterpreterTool().definitions) + +# Create a message +message = agents_client.messages.create( + thread_id=thread.id, + role="user", + content="Could you please create bar chart in TRANSPORTATION sector for the operating profit from the uploaded csv file and provide file to me?", + attachments=[attachment], +) +``` + + + +Azure blob storage can be used as a message attachment. In this case, use `VectorStoreDataSource` as a data source: + + + +```python +# We will upload the local file to Azure and will use it for vector store creation. 
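+# Note: `code_interpreter` below is assumed to be the CodeInterpreterTool instance
+# created in the previous example, e.g.:
+#   code_interpreter = CodeInterpreterTool()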
+asset_uri = os.environ["AZURE_BLOB_URI"] +ds = VectorStoreDataSource(asset_identifier=asset_uri, asset_type=VectorStoreDataSourceAssetType.URI_ASSET) + +# Create a message with the attachment +attachment = MessageAttachment(data_source=ds, tools=code_interpreter.definitions) +message = agents_client.messages.create( + thread_id=thread.id, role="user", content="What does the attachment say?", attachments=[attachment] +) +``` + + + +### Create Message with Image Inputs + +You can send messages to Azure agents with image inputs in following ways: + +- **Using an image stored as a uploaded file** +- **Using a public image accessible via URL** +- **Using a base64 encoded image string** + +The following examples demonstrate each method: + +#### Create message using uploaded image file + +```python +# Upload the local image file +image_file = agents_client.files.upload_and_poll(file_path="image_file.png", purpose="assistants") + +# Construct content using uploaded image +file_param = MessageImageFileParam(file_id=image_file.id, detail="high") +content_blocks = [ + MessageInputTextBlock(text="Hello, what is in the image?"), + MessageInputImageFileBlock(image_file=file_param), +] + +# Create the message +message = agents_client.messages.create( + thread_id=thread.id, + role="user", + content=content_blocks +) +``` + +#### Create message with an image URL input + +```python +# Specify the public image URL +image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" + +# Create content directly referencing image URL +url_param = MessageImageUrlParam(url=image_url, detail="high") +content_blocks = [ + MessageInputTextBlock(text="Hello, what is in the image?"), + MessageInputImageUrlBlock(image_url=url_param), +] + +# Create the message +message = agents_client.messages.create( + thread_id=thread.id, + role="user", + content=content_blocks +) +``` + +#### Create message with base64-encoded image input + +```python +import base64 + +def image_file_to_base64(path: str) -> str: + with open(path, "rb") as f: + return base64.b64encode(f.read()).decode("utf-8") + +# Convert your image file to base64 format +image_base64 = image_file_to_base64("image_file.png") + +# Prepare the data URL +img_data_url = f"data:image/png;base64,{image_base64}" + +# Use base64 encoded string as image URL parameter +url_param = MessageImageUrlParam(url=img_data_url, detail="high") +content_blocks = [ + MessageInputTextBlock(text="Hello, what is in the image?"), + MessageInputImageUrlBlock(image_url=url_param), +] + +# Create the message +message = agents_client.messages.create( + thread_id=thread.id, + role="user", + content=content_blocks +) +``` + +### Execute Run, Run_and_Process, or Stream + +To process your message, you can use `runs.create`, `runs.create_and_process`, or `runs.stream`. + +`create_run` requests the Agent to process the message without polling for the result. If you are using `function tools` regardless as `toolset` or not, your code is responsible for polling for the result and acknowledging the status of `Run`. When the status is `requires_action`, your code is responsible for calling the function tools. For a code sample, visit [`sample_agents_functions.py`](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_functions.py). 
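+
+When the run status becomes `requires_action`, the pending tool calls are exposed on `run.required_action`; inside a polling loop such as the one shown in the next example, you execute them and submit the outputs back to the run. The sketch below is illustrative only and is simplified from the linked sample: the model and method names (`SubmitToolOutputsAction`, `ToolOutput`, `runs.submit_tool_outputs`) and the `user_functions` set are assumptions to verify against `sample_agents_functions.py`.
+
+```python
+from azure.ai.agents.models import FunctionTool, SubmitToolOutputsAction, ToolOutput
+
+# Assumption: `user_functions` is the set of Python callables whose definitions were
+# passed to `create_agent`; FunctionTool can look up and invoke the matching callable.
+functions = FunctionTool(functions=user_functions)
+
+if run.status == "requires_action" and isinstance(run.required_action, SubmitToolOutputsAction):
+    tool_outputs = []
+    for tool_call in run.required_action.submit_tool_outputs.tool_calls:
+        output = functions.execute(tool_call)  # invoke the matching Python function
+        tool_outputs.append(ToolOutput(tool_call_id=tool_call.id, output=output))
+    # Method name assumed; check the linked sample for the exact call.
+    run = agents_client.runs.submit_tool_outputs(thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs)
+```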
+ +Here is an example of `runs.create` and poll until the run is completed: + + + +```python +run = agents_client.runs.create(thread_id=thread.id, agent_id=agent.id) + +# Poll the run as long as run status is queued or in progress +while run.status in ["queued", "in_progress", "requires_action"]: + # Wait for a second + time.sleep(1) + run = agents_client.runs.get(thread_id=thread.id, run_id=run.id) +``` + + + +To have the SDK poll on your behalf and call `function tools`, use the `create_and_process` method. Note that `function tools` will only be invoked if they are provided as `toolset` during the `create_agent` call. + +Here is an example: + + + +```python +run = agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) +``` + + + +With streaming, polling need not be considered. If `function tools` are provided as `toolset` during the `create_agent` call, they will be invoked by the SDK. + +Here is an example of streaming: + + + +```python +with agents_client.runs.stream(thread_id=thread.id, agent_id=agent.id) as stream: + + for event_type, event_data, _ in stream: + + if isinstance(event_data, MessageDeltaChunk): + print(f"Text delta received: {event_data.text}") + + elif isinstance(event_data, ThreadMessage): + print(f"ThreadMessage created. ID: {event_data.id}, Status: {event_data.status}") + + elif isinstance(event_data, ThreadRun): + print(f"ThreadRun status: {event_data.status}") + + elif isinstance(event_data, RunStep): + print(f"RunStep type: {event_data.type}, Status: {event_data.status}") + + elif event_type == AgentStreamEvent.ERROR: + print(f"An error occurred. Data: {event_data}") + + elif event_type == AgentStreamEvent.DONE: + print("Stream completed.") + break + + else: + print(f"Unhandled Event Type: {event_type}, Data: {event_data}") +``` + + + +In the code above, because an `event_handler` object is not passed to the `stream` function, the SDK will instantiate `AgentEventHandler` or `AsyncAgentEventHandler` as the default event handler and produce an iterable object with `event_type` and `event_data`. `AgentEventHandler` and `AsyncAgentEventHandler` are overridable. Here is an example: + + + +```python +# With AgentEventHandler[str], the return type for each event functions is optional string. +class MyEventHandler(AgentEventHandler[str]): + + def on_message_delta(self, delta: "MessageDeltaChunk") -> Optional[str]: + return f"Text delta received: {delta.text}" + + def on_thread_message(self, message: "ThreadMessage") -> Optional[str]: + return f"ThreadMessage created. ID: {message.id}, Status: {message.status}" + + def on_thread_run(self, run: "ThreadRun") -> Optional[str]: + return f"ThreadRun status: {run.status}" + + def on_run_step(self, step: "RunStep") -> Optional[str]: + return f"RunStep type: {step.type}, Status: {step.status}" + + def on_error(self, data: str) -> Optional[str]: + return f"An error occurred. Data: {data}" + + def on_done(self) -> Optional[str]: + return "Stream completed." 
+
+    def on_unhandled_event(self, event_type: str, event_data: Any) -> Optional[str]:
+        return f"Unhandled Event Type: {event_type}, Data: {event_data}"
+```
+
+
+
+```python
+with agents_client.runs.stream(thread_id=thread.id, agent_id=agent.id, event_handler=MyEventHandler()) as stream:
+    for event_type, event_data, func_return in stream:
+        print(f"Received data.")
+        print(f"Streaming receive Event Type: {event_type}")
+        print(f"Event Data: {str(event_data)[:100]}...")
+        print(f"Event Function return: {func_return}\n")
+```
+
+
+
+As you can see, this SDK parses the events and produces various event types similar to OpenAI agents. In your use case, you might not be interested in handling all these types and may decide to parse the events on your own. To achieve this, please refer to [override base event handler](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_with_base_override_eventhandler.py).
+
+```
+Note: Multiple streaming processes may be chained behind the scenes.
+
+When the SDK receives a `ThreadRun` event with the status `requires_action`, the next event will be `Done`, followed by termination. The SDK will submit the tool calls using the same event handler. The event handler will then chain the main stream with the tool stream.
+
+Consequently, when you iterate over the streaming using a for loop similar to the example above, the for loop will receive events from the main stream followed by events from the tool stream.
+```
+
+
+### Retrieve Message
+
+To retrieve messages from agents, use the following example:
+
+
+
+```python
+messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING)
+for msg in messages:
+    if msg.text_messages:
+        last_text = msg.text_messages[-1]
+        print(f"{msg.role}: {last_text.text.value}")
+```
+
+
+
+In addition, `messages` and `messages.data[]` offer helper properties such as `text_messages`, `image_contents`, `file_citation_annotations`, and `file_path_annotations` to quickly retrieve content from one message or all messages.
+
+### Retrieve File
+
+Files uploaded by Agents cannot be retrieved afterwards. If your use case needs to access the file content uploaded by the Agents, you are advised to keep an additional copy accessible to your application. However, files generated by Agents can be retrieved via `save_file` or `get_file_content`.
+
+Here is an example of retrieving file IDs from messages and saving the files to the local drive:
+
+
+
+```python
+messages = agents_client.messages.list(thread_id=thread.id)
+print(f"Messages: {messages}")
+
+for msg in messages:
+    # Save every image file in the message
+    for img in msg.image_contents:
+        file_id = img.image_file.file_id
+        file_name = f"{file_id}_image_file.png"
+        agents_client.files.save(file_id=file_id, file_name=file_name)
+        print(f"Saved image file to: {Path.cwd() / file_name}")
+
+    # Print details of every file-path annotation
+    for ann in msg.file_path_annotations:
+        print("File Paths:")
+        print(f"  Type: {ann.type}")
+        print(f"  Text: {ann.text}")
+        print(f"  File ID: {ann.file_path.file_id}")
+        print(f"  Start Index: {ann.start_index}")
+        print(f"  End Index: {ann.end_index}")
+```
+
+
+
+Here is an example of using `get_file_content`:
+
+```python
+from pathlib import Path
+from typing import Optional, Union
+
+async def save_file_content(client, file_id: str, file_name: str, target_dir: Optional[Union[str, Path]] = None):
+    # Determine the target directory
+    path = Path(target_dir).expanduser().resolve() if target_dir else Path.cwd()
+    path.mkdir(parents=True, exist_ok=True)
+
+    # Retrieve the file content
+    file_content_stream = await client.files.get_content(file_id)
+    if not file_content_stream:
+        raise RuntimeError(f"No content retrievable for file ID '{file_id}'.")
+
+    # Collect all chunks asynchronously
+    chunks = []
+    async for chunk in file_content_stream:
+        if isinstance(chunk, (bytes, bytearray)):
+            chunks.append(chunk)
+        else:
+            raise TypeError(f"Expected bytes or bytearray, got {type(chunk).__name__}")
+
+    target_file_path = path / file_name
+
+    # Write the collected content to the file synchronously
+    with open(target_file_path, "wb") as file:
+        for chunk in chunks:
+            file.write(chunk)
+```
+
+### Teardown
+
+To remove resources after completing tasks, use the following functions:
+
+
+
+```python
+# Delete the vector store when done
+agents_client.vector_stores.delete(vector_store.id)
+print("Deleted vector store")
+
+# Delete the file when done
+agents_client.files.delete(file_id=file.id)
+print("Deleted file")
+
+# Delete the agent when done
+agents_client.delete_agent(agent.id)
+print("Deleted agent")
+```
+
+
+
+## Tracing
+
+You can add an Application Insights Azure resource to your Azure AI Foundry project; see the Tracing tab in your AI Foundry project. If Application Insights is enabled, you can get its connection string, configure your Agents, and observe the full execution path through Azure Monitor. Typically, you might want to start tracing before you create an Agent.
+
+### Installation
+
+Make sure to install OpenTelemetry and the Azure SDK tracing plugin via
+
+```bash
+pip install azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry
+```
+
+You will also need an exporter to send telemetry to your observability backend. You can print traces to the console or use a local viewer such as the [Aspire Dashboard](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash).
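+
+For example, to print spans to the console you can configure the standard OpenTelemetry console exporter directly (a minimal sketch using only the `opentelemetry-sdk` package; the `enable_telemetry` helper shown later provides a similar shortcut):
+
+```python
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
+
+# Print every finished span to stdout; intended for local debugging only.
+provider = TracerProvider()
+provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
+trace.set_tracer_provider(provider)
+```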
+
+To connect to the Aspire Dashboard or another OpenTelemetry compatible backend, install the OTLP exporter:
+
+```bash
+pip install opentelemetry-exporter-otlp
+```
+
+### How to enable tracing
+
+Here is a code sample that shows how to enable Azure Monitor tracing:
+
+
+
+```python
+from opentelemetry import trace
+from azure.monitor.opentelemetry import configure_azure_monitor
+
+# Enable Azure Monitor tracing
+application_insights_connection_string = os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
+configure_azure_monitor(connection_string=application_insights_connection_string)
+
+scenario = os.path.basename(__file__)
+tracer = trace.get_tracer(__name__)
+
+with tracer.start_as_current_span(scenario):
+    with agents_client:
+```
+
+
+
+In addition, you might find it helpful to see the tracing logs in the console. You can achieve this with the following code:
+
+```python
+from azure.ai.agents.telemetry import enable_telemetry
+
+enable_telemetry(destination=sys.stdout)
+```
+
+### How to trace your own functions
+
+The decorator `trace_function` is provided for tracing your own function calls using OpenTelemetry. By default, the function name is used as the name for the span. Alternatively, you can provide the name for the span as a parameter to the decorator.
+
+This decorator handles various data types for function parameters and return values, and records them as attributes in the trace span. The supported data types include:
+* Basic data types: str, int, float, bool
+* Collections: list, dict, tuple, set
+  * Special handling for collections:
+    - If a collection (list, dict, tuple, set) contains nested collections, the entire collection is converted to a string before being recorded as an attribute.
+    - Sets and dictionaries are always converted to strings to ensure compatibility with span attributes.
+
+Object types are omitted, and the corresponding parameter is not traced.
+
+The parameters are recorded in attributes `code.function.parameter.` and the return value is recorded in the attribute `code.function.return.value`.
+
+## Troubleshooting
+
+### Logging
+
+The client uses the standard [Python logging library](https://docs.python.org/3/library/logging.html). The SDK logs HTTP request and response details, which may be useful in troubleshooting. To log to stdout, add the following:
+
+```python
+import sys
+import logging
+
+# Acquire the logger for this client library. Use 'azure' to affect both
+# the 'azure.core' and 'azure.ai.agents' libraries.
+logger = logging.getLogger("azure")
+
+# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
+logger.setLevel(logging.DEBUG)
+
+# Direct logging output to stdout:
+handler = logging.StreamHandler(stream=sys.stdout)
+# Or direct logging output to a file:
+# handler = logging.FileHandler(filename="sample.log")
+logger.addHandler(handler)
+
+# Optional: change the default logging format. Here we add a timestamp.
+# formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s")
+# handler.setFormatter(formatter)
+```
+
+By default, logs redact the values of URL query strings, the values of some HTTP request and response headers (including `Authorization`, which holds the key or token), and the request and response payloads. To create logs without redaction, set `logging_enable=True` in the client constructor:
+
+```python
+agents_client = AgentsClient(
+    endpoint=os.environ["PROJECT_ENDPOINT"],
+    credential=DefaultAzureCredential(),
+    logging_enable=True,
+)
+```
+
+Note that the log level must be set to `logging.DEBUG` (see the code above); logs will be redacted at any other log level.
+
+Be sure to protect non-redacted logs to avoid compromising security.
+
+For more information, see [Configure logging in the Azure libraries for Python](https://aka.ms/azsdk/python/logging).
+
+### Reporting issues
+
+To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-agents" in the title or content.
+
+
+## Next steps
+
+Have a look at the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples) folder, containing fully runnable Python code for synchronous and asynchronous clients.
+
+Explore the [AI Starter Template](https://aka.ms/azsdk/azure-ai-agents/python/ai-starter-template). This template creates an Azure AI Foundry hub, project, and connected resources, including Azure OpenAI Service, AI Search, and more. It also deploys a simple chat application to Azure Container Apps.
+
+## Contributing
+
+This project welcomes contributions and suggestions. Most contributions require
+you to agree to a Contributor License Agreement (CLA) declaring that you have
+the right to, and actually do, grant us the rights to use your contribution.
+For details, visit https://cla.microsoft.com.
+
+When you submit a pull request, a CLA-bot will automatically determine whether
+you need to provide a CLA and decorate the PR appropriately (e.g., label,
+comment). Simply follow the instructions provided by the bot. You will only
+need to do this once across all repos using our CLA.
+
+This project has adopted the
+[Microsoft Open Source Code of Conduct][code_of_conduct]. For more information,
+see the Code of Conduct FAQ or contact opencode@microsoft.com with any
+additional questions or comments.
+ + +[samples]: https://aka.ms/azsdk/azure-ai-projects/python/samples/ +[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/ +[entra_id]: https://learn.microsoft.com/azure/ai-services/authentication?tabs=powershell#authenticate-with-microsoft-entra-id +[azure_identity_credentials]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#credentials +[azure_identity_pip]: https://pypi.org/project/azure-identity/ +[default_azure_credential]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#defaultazurecredential +[pip]: https://pypi.org/project/pip/ +[azure_sub]: https://azure.microsoft.com/free/ +[evaluators]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk +[azure_ai_evaluation]: https://learn.microsoft.com/python/api/overview/azure/ai-evaluation-readme +[evaluator_library]: https://learn.microsoft.com/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library \ No newline at end of file diff --git a/sdk/ai/azure-ai-projects/apiview-properties.json b/sdk/ai/azure-ai-projects/apiview-properties.json index 29de53e42619..bc33249851dd 100644 --- a/sdk/ai/azure-ai-projects/apiview-properties.json +++ b/sdk/ai/azure-ai-projects/apiview-properties.json @@ -55,30 +55,30 @@ "azure.ai.projects.aio.operations.EvaluationsOperations.create": "Azure.AI.Projects.Evaluations.create", "azure.ai.projects.operations.EvaluationsOperations.create_agent_evaluation": "Azure.AI.Projects.Evaluations.createAgentEvaluation", "azure.ai.projects.aio.operations.EvaluationsOperations.create_agent_evaluation": "Azure.AI.Projects.Evaluations.createAgentEvaluation", - "azure.ai.projects.operations.DatasetsOperations.list_versions": "Azure.AI.Projects.ServicePatterns.Datasets.listVersions", - "azure.ai.projects.aio.operations.DatasetsOperations.list_versions": "Azure.AI.Projects.ServicePatterns.Datasets.listVersions", - "azure.ai.projects.operations.DatasetsOperations.list": "Azure.AI.Projects.ServicePatterns.Datasets.listLatest", - "azure.ai.projects.aio.operations.DatasetsOperations.list": "Azure.AI.Projects.ServicePatterns.Datasets.listLatest", - "azure.ai.projects.operations.DatasetsOperations.get": "Azure.AI.Projects.ServicePatterns.Datasets.getVersion", - "azure.ai.projects.aio.operations.DatasetsOperations.get": "Azure.AI.Projects.ServicePatterns.Datasets.getVersion", - "azure.ai.projects.operations.DatasetsOperations.delete": "Azure.AI.Projects.ServicePatterns.Datasets.deleteVersion", - "azure.ai.projects.aio.operations.DatasetsOperations.delete": "Azure.AI.Projects.ServicePatterns.Datasets.deleteVersion", - "azure.ai.projects.operations.DatasetsOperations.create_or_update": "Azure.AI.Projects.ServicePatterns.Datasets.createOrUpdateVersion", - "azure.ai.projects.aio.operations.DatasetsOperations.create_or_update": "Azure.AI.Projects.ServicePatterns.Datasets.createOrUpdateVersion", + "azure.ai.projects.operations.DatasetsOperations.list_versions": "Azure.AI.Projects.Datasets.listVersions", + "azure.ai.projects.aio.operations.DatasetsOperations.list_versions": "Azure.AI.Projects.Datasets.listVersions", + "azure.ai.projects.operations.DatasetsOperations.list": "Azure.AI.Projects.Datasets.listLatest", + "azure.ai.projects.aio.operations.DatasetsOperations.list": "Azure.AI.Projects.Datasets.listLatest", + "azure.ai.projects.operations.DatasetsOperations.get": "Azure.AI.Projects.Datasets.getVersion", + "azure.ai.projects.aio.operations.DatasetsOperations.get": 
"Azure.AI.Projects.Datasets.getVersion", + "azure.ai.projects.operations.DatasetsOperations.delete": "Azure.AI.Projects.Datasets.deleteVersion", + "azure.ai.projects.aio.operations.DatasetsOperations.delete": "Azure.AI.Projects.Datasets.deleteVersion", + "azure.ai.projects.operations.DatasetsOperations.create_or_update": "Azure.AI.Projects.Datasets.createOrUpdateVersion", + "azure.ai.projects.aio.operations.DatasetsOperations.create_or_update": "Azure.AI.Projects.Datasets.createOrUpdateVersion", "azure.ai.projects.operations.DatasetsOperations.pending_upload": "Azure.AI.Projects.Datasets.startPendingUploadVersion", "azure.ai.projects.aio.operations.DatasetsOperations.pending_upload": "Azure.AI.Projects.Datasets.startPendingUploadVersion", "azure.ai.projects.operations.DatasetsOperations.get_credentials": "Azure.AI.Projects.Datasets.getCredentials", "azure.ai.projects.aio.operations.DatasetsOperations.get_credentials": "Azure.AI.Projects.Datasets.getCredentials", - "azure.ai.projects.operations.IndexesOperations.list_versions": "Azure.AI.Projects.ServicePatterns.Indexes.listVersions", - "azure.ai.projects.aio.operations.IndexesOperations.list_versions": "Azure.AI.Projects.ServicePatterns.Indexes.listVersions", - "azure.ai.projects.operations.IndexesOperations.list": "Azure.AI.Projects.ServicePatterns.Indexes.listLatest", - "azure.ai.projects.aio.operations.IndexesOperations.list": "Azure.AI.Projects.ServicePatterns.Indexes.listLatest", - "azure.ai.projects.operations.IndexesOperations.get": "Azure.AI.Projects.ServicePatterns.Indexes.getVersion", - "azure.ai.projects.aio.operations.IndexesOperations.get": "Azure.AI.Projects.ServicePatterns.Indexes.getVersion", - "azure.ai.projects.operations.IndexesOperations.delete": "Azure.AI.Projects.ServicePatterns.Indexes.deleteVersion", - "azure.ai.projects.aio.operations.IndexesOperations.delete": "Azure.AI.Projects.ServicePatterns.Indexes.deleteVersion", - "azure.ai.projects.operations.IndexesOperations.create_or_update": "Azure.AI.Projects.ServicePatterns.Indexes.createOrUpdateVersion", - "azure.ai.projects.aio.operations.IndexesOperations.create_or_update": "Azure.AI.Projects.ServicePatterns.Indexes.createOrUpdateVersion", + "azure.ai.projects.operations.IndexesOperations.list_versions": "Azure.AI.Projects.Indexes.listVersions", + "azure.ai.projects.aio.operations.IndexesOperations.list_versions": "Azure.AI.Projects.Indexes.listVersions", + "azure.ai.projects.operations.IndexesOperations.list": "Azure.AI.Projects.Indexes.listLatest", + "azure.ai.projects.aio.operations.IndexesOperations.list": "Azure.AI.Projects.Indexes.listLatest", + "azure.ai.projects.operations.IndexesOperations.get": "Azure.AI.Projects.Indexes.getVersion", + "azure.ai.projects.aio.operations.IndexesOperations.get": "Azure.AI.Projects.Indexes.getVersion", + "azure.ai.projects.operations.IndexesOperations.delete": "Azure.AI.Projects.Indexes.deleteVersion", + "azure.ai.projects.aio.operations.IndexesOperations.delete": "Azure.AI.Projects.Indexes.deleteVersion", + "azure.ai.projects.operations.IndexesOperations.create_or_update": "Azure.AI.Projects.Indexes.createOrUpdateVersion", + "azure.ai.projects.aio.operations.IndexesOperations.create_or_update": "Azure.AI.Projects.Indexes.createOrUpdateVersion", "azure.ai.projects.operations.DeploymentsOperations.get": "Azure.AI.Projects.Deployments.get", "azure.ai.projects.aio.operations.DeploymentsOperations.get": "Azure.AI.Projects.Deployments.get", "azure.ai.projects.operations.DeploymentsOperations.list": 
"Azure.AI.Projects.Deployments.list", diff --git a/sdk/ai/azure-ai-projects/assets.json b/sdk/ai/azure-ai-projects/assets.json new file mode 100644 index 000000000000..752d2238c55f --- /dev/null +++ b/sdk/ai/azure-ai-projects/assets.json @@ -0,0 +1,6 @@ +{ + "AssetsRepo": "Azure/azure-sdk-assets", + "AssetsRepoPrefixPath": "python", + "TagPrefix": "python/ai/azure-ai-projects", + "Tag": "python/ai/azure-ai-projects_25a915bc4c" +} diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/_client.py b/sdk/ai/azure-ai-projects/azure/ai/projects/_client.py index 217cb41b875a..4f134a04a6b9 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/_client.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/_client.py @@ -29,7 +29,7 @@ from azure.core.credentials import TokenCredential -class AIProjectClient: # pylint: disable=too-many-instance-attributes +class AIProjectClient: """AIProjectClient. :ivar connections: ConnectionsOperations operations diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/_patch_telemetry.py b/sdk/ai/azure-ai-projects/azure/ai/projects/_patch_telemetry.py index 4d67af1a22f3..0f48f15959a7 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/_patch_telemetry.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/_patch_telemetry.py @@ -72,7 +72,7 @@ def enable_telemetry( ) try: - from azure.ai.agents.tracing import AIAgentsInstrumentor # pylint: disable=import-error,no-name-in-module + from azure.ai.agents.telemetry import AIAgentsInstrumentor # pylint: disable=import-error,no-name-in-module agents_instrumentor = AIAgentsInstrumentor() if not agents_instrumentor.is_instrumented(): diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/_version.py b/sdk/ai/azure-ai-projects/azure/ai/projects/_version.py index 319889e447e0..46f199f51a87 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/_version.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/_version.py @@ -6,4 +6,4 @@ # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- -VERSION = "1.0.0b11" +VERSION = "1.0.0b12" diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/_client.py b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/_client.py index 7fc978f8c178..52a42cebdeab 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/_client.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/_client.py @@ -29,7 +29,7 @@ from azure.core.credentials_async import AsyncTokenCredential -class AIProjectClient: # pylint: disable=too-many-instance-attributes +class AIProjectClient: """AIProjectClient. 
:ivar connections: ConnectionsOperations operations diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_operations.py b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_operations.py index 45f5d3d15a03..3fc29230c783 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_operations.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_operations.py @@ -1022,7 +1022,7 @@ async def delete(self, name: str, version: str, **kwargs: Any) -> None: response = pipeline_response.http_response - if response.status_code not in [204]: + if response.status_code not in [204, 200]: map_error(status_code=response.status_code, response=response, error_map=error_map) raise HttpResponseError(response=response) diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_inference_async.py b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_inference_async.py index 22f3301844e6..974ec5855ca6 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_inference_async.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_inference_async.py @@ -9,7 +9,6 @@ """ import logging from typing import Optional, TYPE_CHECKING, Any -from urllib.parse import urlparse from azure.core.tracing.decorator_async import distributed_trace_async from azure.core.tracing.decorator import distributed_trace @@ -18,6 +17,8 @@ EntraIDCredentials, ) from ...models._enums import ConnectionType +from ...operations._patch_inference import _get_aoai_inference_url +from ...operations._patch_inference import _get_inference_url if TYPE_CHECKING: # pylint: disable=unused-import,ungrouped-imports @@ -40,48 +41,6 @@ class InferenceOperations: def __init__(self, outer_instance: "azure.ai.projects.aio.AIProjectClient") -> None: # type: ignore[name-defined] self._outer_instance = outer_instance - # TODO: Use a common method for both the sync and async operations - @classmethod - def _get_inference_url(cls, input_url: str) -> str: - """ - Converts an input URL in the format: - https:/// - to: - https:///models - - :param input_url: The input endpoint URL used to construct AIProjectClient. - :type input_url: str - - :return: The endpoint URL required to construct inference clients from the azure-ai-inference package. - :rtype: str - """ - parsed = urlparse(input_url) - if parsed.scheme != "https" or not parsed.netloc: - raise ValueError("Invalid endpoint URL format. Must be an https URL with a host.") - new_url = f"https://{parsed.netloc}/models" - return new_url - - # TODO: Use a common method for both the sync and async operations - @classmethod - def _get_aoai_inference_url(cls, input_url: str) -> str: - """ - Converts an input URL in the format: - https:/// - to: - https:// - - :param input_url: The input endpoint URL used to construct AIProjectClient. - :type input_url: str - - :return: The endpoint URL required to construct an AzureOpenAI client from the `openai` package. - :rtype: str - """ - parsed = urlparse(input_url) - if parsed.scheme != "https" or not parsed.netloc: - raise ValueError("Invalid endpoint URL format. 
Must be an https URL with a host.") - new_url = f"https://{parsed.netloc}" - return new_url - @distributed_trace def get_chat_completions_client(self, **kwargs: Any) -> "ChatCompletionsClient": # type: ignore[name-defined] """Get an authenticated asynchronous ChatCompletionsClient (from the package azure-ai-inference) to use with @@ -107,7 +66,7 @@ def get_chat_completions_client(self, **kwargs: Any) -> "ChatCompletionsClient": "Azure AI Inference SDK is not installed. Please install it using 'pip install azure-ai-inference'" ) from e - endpoint = self._get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access + endpoint = _get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access client = ChatCompletionsClient( endpoint=endpoint, @@ -146,7 +105,7 @@ def get_embeddings_client(self, **kwargs: Any) -> "EmbeddingsClient": # type: i "Azure AI Inference SDK is not installed. Please install it using 'pip install azure-ai-inference'" ) from e - endpoint = self._get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access + endpoint = _get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access client = EmbeddingsClient( endpoint=endpoint, @@ -185,7 +144,7 @@ def get_image_embeddings_client(self, **kwargs: Any) -> "ImageEmbeddingsClient": "Azure AI Inference SDK is not installed. Please install it using 'pip install azure-ai-inference'" ) from e - endpoint = self._get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access + endpoint = _get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access client = ImageEmbeddingsClient( endpoint=endpoint, @@ -300,7 +259,7 @@ async def get_azure_openai_client( "azure.identity package not installed. Please install it using 'pip install azure.identity'" ) from e - azure_endpoint = self._get_aoai_inference_url( + azure_endpoint = _get_aoai_inference_url( self._outer_instance._config.endpoint # pylint: disable=protected-access ) diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_operations.py b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_operations.py index de607445c337..74a910c924e4 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_operations.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_operations.py @@ -633,7 +633,7 @@ class ConnectionsOperations: :attr:`connections` attribute. """ - def __init__(self, *args, **kwargs): + def __init__(self, *args, **kwargs) -> None: input_args = list(args) self._client: PipelineClient = input_args.pop(0) if input_args else kwargs.pop("client") self._config: AIProjectClientConfiguration = input_args.pop(0) if input_args else kwargs.pop("config") @@ -878,7 +878,7 @@ class EvaluationsOperations: :attr:`evaluations` attribute. """ - def __init__(self, *args, **kwargs): + def __init__(self, *args, **kwargs) -> None: input_args = list(args) self._client: PipelineClient = input_args.pop(0) if input_args else kwargs.pop("client") self._config: AIProjectClientConfiguration = input_args.pop(0) if input_args else kwargs.pop("config") @@ -1295,7 +1295,7 @@ class DatasetsOperations: :attr:`datasets` attribute. 
""" - def __init__(self, *args, **kwargs): + def __init__(self, *args, **kwargs) -> None: input_args = list(args) self._client: PipelineClient = input_args.pop(0) if input_args else kwargs.pop("client") self._config: AIProjectClientConfiguration = input_args.pop(0) if input_args else kwargs.pop("config") @@ -1576,7 +1576,7 @@ def delete(self, name: str, version: str, **kwargs: Any) -> None: # pylint: dis response = pipeline_response.http_response - if response.status_code not in [204]: + if response.status_code not in [204, 200]: map_error(status_code=response.status_code, response=response, error_map=error_map) raise HttpResponseError(response=response) @@ -1951,7 +1951,7 @@ class IndexesOperations: :attr:`indexes` attribute. """ - def __init__(self, *args, **kwargs): + def __init__(self, *args, **kwargs) -> None: input_args = list(args) self._client: PipelineClient = input_args.pop(0) if input_args else kwargs.pop("client") self._config: AIProjectClientConfiguration = input_args.pop(0) if input_args else kwargs.pop("config") @@ -2400,7 +2400,7 @@ class DeploymentsOperations: :attr:`deployments` attribute. """ - def __init__(self, *args, **kwargs): + def __init__(self, *args, **kwargs) -> None: input_args = list(args) self._client: PipelineClient = input_args.pop(0) if input_args else kwargs.pop("client") self._config: AIProjectClientConfiguration = input_args.pop(0) if input_args else kwargs.pop("config") @@ -2583,7 +2583,7 @@ class RedTeamsOperations: :attr:`red_teams` attribute. """ - def __init__(self, *args, **kwargs): + def __init__(self, *args, **kwargs) -> None: input_args = list(args) self._client: PipelineClient = input_args.pop(0) if input_args else kwargs.pop("client") self._config: AIProjectClientConfiguration = input_args.pop(0) if input_args else kwargs.pop("config") diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_inference.py b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_inference.py index 4e5739d7114d..a3f72eaf8f3f 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_inference.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_inference.py @@ -22,6 +22,46 @@ logger = logging.getLogger(__name__) +def _get_inference_url(input_url: str) -> str: + """ + Converts an input URL in the format: + https:/// + to: + https:///models + + :param input_url: The input endpoint URL used to construct AIProjectClient. + :type input_url: str + + :return: The endpoint URL required to construct inference clients from the `azure-ai-inference` package. + :rtype: str + """ + parsed = urlparse(input_url) + if parsed.scheme != "https" or not parsed.netloc: + raise ValueError("Invalid endpoint URL format. Must be an https URL with a host.") + new_url = f"https://{parsed.netloc}/models" + return new_url + + +def _get_aoai_inference_url(input_url: str) -> str: + """ + Converts an input URL in the format: + https:/// + to: + https:// + + :param input_url: The input endpoint URL used to construct AIProjectClient. + :type input_url: str + + :return: The endpoint URL required to construct an AzureOpenAI client from the `openai` package. + :rtype: str + """ + parsed = urlparse(input_url) + if parsed.scheme != "https" or not parsed.netloc: + raise ValueError("Invalid endpoint URL format. Must be an https URL with a host.") + new_url = f"https://{parsed.netloc}" + return new_url + + class InferenceOperations: """ .. 
warning:: @@ -35,46 +75,6 @@ class InferenceOperations: def __init__(self, outer_instance: "azure.ai.projects.AIProjectClient") -> None: # type: ignore[name-defined] self._outer_instance = outer_instance - @classmethod - def _get_inference_url(cls, input_url: str) -> str: - """ - Converts an input URL in the format: - https:/// - to: - https:///models - - :param input_url: The input endpoint URL used to construct AIProjectClient. - :type input_url: str - - :return: The endpoint URL required to construct inference clients from the `azure-ai-inference` package. - :rtype: str - """ - parsed = urlparse(input_url) - if parsed.scheme != "https" or not parsed.netloc: - raise ValueError("Invalid endpoint URL format. Must be an https URL with a host.") - new_url = f"https://{parsed.netloc}/models" - return new_url - - @classmethod - def _get_aoai_inference_url(cls, input_url: str) -> str: - """ - Converts an input URL in the format: - https:/// - to: - https:// - - :param input_url: The input endpoint URL used to construct AIProjectClient. - :type input_url: str - - :return: The endpoint URL required to construct an AzureOpenAI client from the `openai` package. - :rtype: str - """ - parsed = urlparse(input_url) - if parsed.scheme != "https" or not parsed.netloc: - raise ValueError("Invalid endpoint URL format. Must be an https URL with a host.") - new_url = f"https://{parsed.netloc}" - return new_url - @distributed_trace def get_chat_completions_client(self, **kwargs: Any) -> "ChatCompletionsClient": # type: ignore[name-defined] """Get an authenticated ChatCompletionsClient (from the package azure-ai-inference) to use with @@ -100,7 +100,7 @@ def get_chat_completions_client(self, **kwargs: Any) -> "ChatCompletionsClient": "Azure AI Inference SDK is not installed. Please install it using 'pip install azure-ai-inference'" ) from e - endpoint = self._get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access + endpoint = _get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access client = ChatCompletionsClient( endpoint=endpoint, @@ -139,7 +139,7 @@ def get_embeddings_client(self, **kwargs: Any) -> "EmbeddingsClient": # type: i "Azure AI Inference SDK is not installed. Please install it using 'pip install azure-ai-inference'" ) from e - endpoint = self._get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access + endpoint = _get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access client = EmbeddingsClient( endpoint=endpoint, @@ -178,7 +178,7 @@ def get_image_embeddings_client(self, **kwargs: Any) -> "ImageEmbeddingsClient": "Azure AI Inference SDK is not installed. Please install it using 'pip install azure-ai-inference'" ) from e - endpoint = self._get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access + endpoint = _get_inference_url(self._outer_instance._config.endpoint) # pylint: disable=protected-access client = ImageEmbeddingsClient( endpoint=endpoint, @@ -291,7 +291,7 @@ def get_azure_openai_client( "azure.identity package not installed. 
Please install it using 'pip install azure.identity'" ) from e - azure_endpoint = self._get_aoai_inference_url( + azure_endpoint = _get_aoai_inference_url( self._outer_instance._config.endpoint # pylint: disable=protected-access ) diff --git a/sdk/ai/azure-ai-projects/azure_ai_projects_tests.template.env b/sdk/ai/azure-ai-projects/azure_ai_projects_tests.template.env new file mode 100644 index 000000000000..c34a3dda4a7f --- /dev/null +++ b/sdk/ai/azure-ai-projects/azure_ai_projects_tests.template.env @@ -0,0 +1,15 @@ +# +# Environment variables that define secrets required for running tests. +# +# All values should be empty by default in this template. +# +# To run tests locally on your device: +# 1. Rename the file to azure_ai_projects_tests.env +# 2. Fill in the values for the environment variables below (do not commit these changes to the repository!) +# 3. Run the test (`pytest`) +# + +# Project endpoint has the format: +# `https://.services.ai.azure.com/api/projects/` +AZURE_AI_PROJECTS_TESTS_PROJECT_ENDPOINT= + diff --git a/sdk/ai/azure-ai-projects/cspell.json b/sdk/ai/azure-ai-projects/cspell.json index 71bd8a696481..ed393add5d13 100644 --- a/sdk/ai/azure-ai-projects/cspell.json +++ b/sdk/ai/azure-ai-projects/cspell.json @@ -12,6 +12,7 @@ "getconnectionwithcredentials", "quantitive", "balapvbyostoragecanary", + "fspath", ], "ignorePaths": [ ] diff --git a/sdk/ai/azure-ai-projects/generated_tests/conftest.py b/sdk/ai/azure-ai-projects/generated_tests/conftest.py deleted file mode 100644 index dd8e527abab1..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/conftest.py +++ /dev/null @@ -1,35 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
-# -------------------------------------------------------------------------- -import os -import pytest -from dotenv import load_dotenv -from devtools_testutils import ( - test_proxy, - add_general_regex_sanitizer, - add_body_key_sanitizer, - add_header_regex_sanitizer, -) - -load_dotenv() - - -# For security, please avoid record sensitive identity information in recordings -@pytest.fixture(scope="session", autouse=True) -def add_sanitizers(test_proxy): - aiproject_subscription_id = os.environ.get("AIPROJECT_SUBSCRIPTION_ID", "00000000-0000-0000-0000-000000000000") - aiproject_tenant_id = os.environ.get("AIPROJECT_TENANT_ID", "00000000-0000-0000-0000-000000000000") - aiproject_client_id = os.environ.get("AIPROJECT_CLIENT_ID", "00000000-0000-0000-0000-000000000000") - aiproject_client_secret = os.environ.get("AIPROJECT_CLIENT_SECRET", "00000000-0000-0000-0000-000000000000") - add_general_regex_sanitizer(regex=aiproject_subscription_id, value="00000000-0000-0000-0000-000000000000") - add_general_regex_sanitizer(regex=aiproject_tenant_id, value="00000000-0000-0000-0000-000000000000") - add_general_regex_sanitizer(regex=aiproject_client_id, value="00000000-0000-0000-0000-000000000000") - add_general_regex_sanitizer(regex=aiproject_client_secret, value="00000000-0000-0000-0000-000000000000") - - add_header_regex_sanitizer(key="Set-Cookie", value="[set-cookie;]") - add_header_regex_sanitizer(key="Cookie", value="cookie;") - add_body_key_sanitizer(json_path="$..access_token", value="access_token") diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_connections_operations.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_connections_operations.py deleted file mode 100644 index d93e0e240cca..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_connections_operations.py +++ /dev/null @@ -1,22 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils import recorded_by_proxy -from testpreparer import AIProjectClientTestBase, AIProjectPreparer - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectConnectionsOperations(AIProjectClientTestBase): - @AIProjectPreparer() - @recorded_by_proxy - def test_connections_list(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.connections.list() - result = [r for r in response] - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_connections_operations_async.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_connections_operations_async.py deleted file mode 100644 index cc08499be0ee..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_connections_operations_async.py +++ /dev/null @@ -1,23 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. 
See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils.aio import recorded_by_proxy_async -from testpreparer import AIProjectPreparer -from testpreparer_async import AIProjectClientTestBaseAsync - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectConnectionsOperationsAsync(AIProjectClientTestBaseAsync): - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_connections_list(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.connections.list() - result = [r async for r in response] - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_datasets_operations.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_datasets_operations.py deleted file mode 100644 index bdd6a44c053b..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_datasets_operations.py +++ /dev/null @@ -1,105 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils import recorded_by_proxy -from testpreparer import AIProjectClientTestBase, AIProjectPreparer - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectDatasetsOperations(AIProjectClientTestBase): - @AIProjectPreparer() - @recorded_by_proxy - def test_datasets_list_versions(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.datasets.list_versions( - name="str", - ) - result = [r for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_datasets_list(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.datasets.list() - result = [r for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_datasets_get(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.datasets.get( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_datasets_delete(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.datasets.delete( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... 
- - @AIProjectPreparer() - @recorded_by_proxy - def test_datasets_create_or_update(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.datasets.create_or_update( - name="str", - version="str", - body={ - "dataUri": "str", - "name": "str", - "type": "uri_file", - "version": "str", - "connectionName": "str", - "description": "str", - "id": "str", - "isReference": bool, - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_datasets_pending_upload(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.datasets.pending_upload( - name="str", - version="str", - body={"pendingUploadType": "str", "connectionName": "str", "pendingUploadId": "str"}, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_datasets_get_credentials(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.datasets.get_credentials( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_datasets_operations_async.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_datasets_operations_async.py deleted file mode 100644 index 6db1ecba7504..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_datasets_operations_async.py +++ /dev/null @@ -1,106 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils.aio import recorded_by_proxy_async -from testpreparer import AIProjectPreparer -from testpreparer_async import AIProjectClientTestBaseAsync - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectDatasetsOperationsAsync(AIProjectClientTestBaseAsync): - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_datasets_list_versions(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.datasets.list_versions( - name="str", - ) - result = [r async for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_datasets_list(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.datasets.list() - result = [r async for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_datasets_get(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.datasets.get( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... 
- - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_datasets_delete(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.datasets.delete( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_datasets_create_or_update(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.datasets.create_or_update( - name="str", - version="str", - body={ - "dataUri": "str", - "name": "str", - "type": "uri_file", - "version": "str", - "connectionName": "str", - "description": "str", - "id": "str", - "isReference": bool, - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_datasets_pending_upload(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.datasets.pending_upload( - name="str", - version="str", - body={"pendingUploadType": "str", "connectionName": "str", "pendingUploadId": "str"}, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_datasets_get_credentials(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.datasets.get_credentials( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_deployments_operations.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_deployments_operations.py deleted file mode 100644 index b0e1e586d866..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_deployments_operations.py +++ /dev/null @@ -1,33 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils import recorded_by_proxy -from testpreparer import AIProjectClientTestBase, AIProjectPreparer - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectDeploymentsOperations(AIProjectClientTestBase): - @AIProjectPreparer() - @recorded_by_proxy - def test_deployments_get(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.deployments.get( - name="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_deployments_list(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.deployments.list() - result = [r for r in response] - # please add some check logic here by yourself - # ... 
diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_deployments_operations_async.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_deployments_operations_async.py deleted file mode 100644 index 3958d83eab29..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_deployments_operations_async.py +++ /dev/null @@ -1,34 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils.aio import recorded_by_proxy_async -from testpreparer import AIProjectPreparer -from testpreparer_async import AIProjectClientTestBaseAsync - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectDeploymentsOperationsAsync(AIProjectClientTestBaseAsync): - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_deployments_get(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.deployments.get( - name="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_deployments_list(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.deployments.list() - result = [r async for r in response] - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluation_results_operations.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluation_results_operations.py deleted file mode 100644 index b68d4d88d17a..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluation_results_operations.py +++ /dev/null @@ -1,125 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils import recorded_by_proxy -from testpreparer import AIProjectClientTestBase, AIProjectPreparer - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectEvaluationResultsOperations(AIProjectClientTestBase): - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluation_results_list_versions(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.list_versions( - name="str", - ) - result = [r for r in response] - # please add some check logic here by yourself - # ... 
- - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluation_results_list_latest(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.list_latest() - result = [r for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluation_results_get_version(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.get_version( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluation_results_delete_version(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.delete_version( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluation_results_create(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.create( - name="str", - body={ - "name": "str", - "version": "str", - "BlobUri": "str", - "DatasetFamily": "str", - "DatasetName": "str", - "Metrics": {"str": 0.0}, - "ModelAssetId": "str", - "ModelName": "str", - "ModelVersion": "str", - "ResultType": "str", - "description": "str", - "id": "str", - "stage": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluation_results_create_version(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.create_version( - name="str", - version="str", - body={ - "name": "str", - "version": "str", - "BlobUri": "str", - "DatasetFamily": "str", - "DatasetName": "str", - "Metrics": {"str": 0.0}, - "ModelAssetId": "str", - "ModelName": "str", - "ModelVersion": "str", - "ResultType": "str", - "description": "str", - "id": "str", - "stage": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluation_results_start_pending_upload(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.start_pending_upload( - name="str", - version="str", - body={"pendingUploadType": "str", "connectionName": "str", "pendingUploadId": "str"}, - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluation_results_operations_async.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluation_results_operations_async.py deleted file mode 100644 index b90df81464cd..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluation_results_operations_async.py +++ /dev/null @@ -1,126 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
-# -------------------------------------------------------------------------- -import pytest -from devtools_testutils.aio import recorded_by_proxy_async -from testpreparer import AIProjectPreparer -from testpreparer_async import AIProjectClientTestBaseAsync - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectEvaluationResultsOperationsAsync(AIProjectClientTestBaseAsync): - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluation_results_list_versions(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.list_versions( - name="str", - ) - result = [r async for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluation_results_list_latest(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.evaluation_results.list_latest() - result = [r async for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluation_results_get_version(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.evaluation_results.get_version( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluation_results_delete_version(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.evaluation_results.delete_version( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluation_results_create(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.evaluation_results.create( - name="str", - body={ - "name": "str", - "version": "str", - "BlobUri": "str", - "DatasetFamily": "str", - "DatasetName": "str", - "Metrics": {"str": 0.0}, - "ModelAssetId": "str", - "ModelName": "str", - "ModelVersion": "str", - "ResultType": "str", - "description": "str", - "id": "str", - "stage": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluation_results_create_version(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.evaluation_results.create_version( - name="str", - version="str", - body={ - "name": "str", - "version": "str", - "BlobUri": "str", - "DatasetFamily": "str", - "DatasetName": "str", - "Metrics": {"str": 0.0}, - "ModelAssetId": "str", - "ModelName": "str", - "ModelVersion": "str", - "ResultType": "str", - "description": "str", - "id": "str", - "stage": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... 
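The generated evaluation-result tests pass the literal string "str" for every field, which obscures what a real payload looks like. Purely for orientation, a version payload with more plausible values might resemble the sketch below; only the field names (and the metric dictionary's string-to-float shape) come from the generated test, every concrete value is an assumption.

```python
# Illustrative payload for evaluation_results.create_version (async variant).
# Field names mirror the generated test above; all values are made up.
body = {
    "name": "my-eval-result",
    "version": "1",
    "BlobUri": "https://<storage-account>.blob.core.windows.net/<container>/results",
    "ResultType": "str",  # allowed values are not shown in this diff
    "Metrics": {"relevance": 4.2},
    "tags": {"scenario": "rag"},
}
response = await client.evaluation_results.create_version(name="my-eval-result", version="1", body=body)
```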
- - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluation_results_start_pending_upload(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.evaluation_results.start_pending_upload( - name="str", - version="str", - body={"pendingUploadType": "str", "connectionName": "str", "pendingUploadId": "str"}, - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluations_operations.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluations_operations.py deleted file mode 100644 index e07aa0e02b47..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluations_operations.py +++ /dev/null @@ -1,71 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils import recorded_by_proxy -from testpreparer import AIProjectClientTestBase, AIProjectPreparer - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectEvaluationsOperations(AIProjectClientTestBase): - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluations_get(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluations.get( - name="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluations_list(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluations.list() - result = [r for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluations_create(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluations.create( - evaluation={ - "data": "input_data", - "evaluators": {"str": {"id": "str", "dataMapping": {"str": "str"}, "initParams": {"str": {}}}}, - "id": "str", - "description": "str", - "displayName": "str", - "properties": {"str": "str"}, - "status": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_evaluations_create_agent_evaluation(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.evaluations.create_agent_evaluation( - evaluation={ - "appInsightsConnectionString": "str", - "evaluators": {"str": {"id": "str", "dataMapping": {"str": "str"}, "initParams": {"str": {}}}}, - "runId": "str", - "redactionConfiguration": {"redactScoreProperties": bool}, - "samplingConfiguration": {"maxRequestRate": 0.0, "name": "str", "samplingPercent": 0.0}, - "threadId": "str", - }, - ) - - # please add some check logic here by yourself - # ... 
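The `evaluations.create` test above shows the shape of an evaluation request: a data reference plus a dictionary of evaluator configurations, each with an `id`, optional `initParams`, and a `dataMapping`. For readability, a version with less opaque values might look like the following sketch; the evaluator key, init parameter, and mapping expressions are illustrative assumptions, not taken from this diff.

```python
# Illustrative evaluations.create payload; the structure mirrors the generated test,
# but every concrete value below is an assumption.
evaluation = {
    "displayName": "relevance-eval",
    "data": "input_data",  # placeholder for an input-data reference, as in the generated test
    "evaluators": {
        "relevance": {
            "id": "str",  # id of the evaluator to run; the exact id format is not shown here
            "initParams": {"deployment_name": "gpt-4o"},
            "dataMapping": {"query": "${data.query}", "response": "${data.response}"},
        }
    },
}
response = client.evaluations.create(evaluation=evaluation)
```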
diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluations_operations_async.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluations_operations_async.py deleted file mode 100644 index 07f22bd9e58a..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_evaluations_operations_async.py +++ /dev/null @@ -1,72 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils.aio import recorded_by_proxy_async -from testpreparer import AIProjectPreparer -from testpreparer_async import AIProjectClientTestBaseAsync - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectEvaluationsOperationsAsync(AIProjectClientTestBaseAsync): - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluations_get(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.evaluations.get( - name="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluations_list(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.evaluations.list() - result = [r async for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluations_create(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.evaluations.create( - evaluation={ - "data": "input_data", - "evaluators": {"str": {"id": "str", "dataMapping": {"str": "str"}, "initParams": {"str": {}}}}, - "id": "str", - "description": "str", - "displayName": "str", - "properties": {"str": "str"}, - "status": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_evaluations_create_agent_evaluation(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.evaluations.create_agent_evaluation( - evaluation={ - "appInsightsConnectionString": "str", - "evaluators": {"str": {"id": "str", "dataMapping": {"str": "str"}, "initParams": {"str": {}}}}, - "runId": "str", - "redactionConfiguration": {"redactScoreProperties": bool}, - "samplingConfiguration": {"maxRequestRate": 0.0, "name": "str", "samplingPercent": 0.0}, - "threadId": "str", - }, - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_indexes_operations.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_indexes_operations.py deleted file mode 100644 index 82f33d5188bd..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_indexes_operations.py +++ /dev/null @@ -1,87 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. 
-# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils import recorded_by_proxy -from testpreparer import AIProjectClientTestBase, AIProjectPreparer - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectIndexesOperations(AIProjectClientTestBase): - @AIProjectPreparer() - @recorded_by_proxy - def test_indexes_list_versions(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.indexes.list_versions( - name="str", - ) - result = [r for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_indexes_list(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.indexes.list() - result = [r for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_indexes_get(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.indexes.get( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_indexes_delete(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.indexes.delete( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_indexes_create_or_update(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.indexes.create_or_update( - name="str", - version="str", - body={ - "connectionName": "str", - "indexName": "str", - "name": "str", - "type": "AzureSearch", - "version": "str", - "description": "str", - "fieldMapping": { - "contentFields": ["str"], - "filepathField": "str", - "metadataFields": ["str"], - "titleField": "str", - "urlField": "str", - "vectorFields": ["str"], - }, - "id": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_indexes_operations_async.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_indexes_operations_async.py deleted file mode 100644 index 53812b80aa1d..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_indexes_operations_async.py +++ /dev/null @@ -1,88 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
-# -------------------------------------------------------------------------- -import pytest -from devtools_testutils.aio import recorded_by_proxy_async -from testpreparer import AIProjectPreparer -from testpreparer_async import AIProjectClientTestBaseAsync - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectIndexesOperationsAsync(AIProjectClientTestBaseAsync): - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_indexes_list_versions(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.indexes.list_versions( - name="str", - ) - result = [r async for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_indexes_list(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.indexes.list() - result = [r async for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_indexes_get(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.indexes.get( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_indexes_delete(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.indexes.delete( - name="str", - version="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_indexes_create_or_update(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.indexes.create_or_update( - name="str", - version="str", - body={ - "connectionName": "str", - "indexName": "str", - "name": "str", - "type": "AzureSearch", - "version": "str", - "description": "str", - "fieldMapping": { - "contentFields": ["str"], - "filepathField": "str", - "metadataFields": ["str"], - "titleField": "str", - "urlField": "str", - "vectorFields": ["str"], - }, - "id": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_red_teams_operations.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_red_teams_operations.py deleted file mode 100644 index 8cb4893cbb4c..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_red_teams_operations.py +++ /dev/null @@ -1,56 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
-# -------------------------------------------------------------------------- -import pytest -from devtools_testutils import recorded_by_proxy -from testpreparer import AIProjectClientTestBase, AIProjectPreparer - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectRedTeamsOperations(AIProjectClientTestBase): - @AIProjectPreparer() - @recorded_by_proxy - def test_red_teams_get(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.red_teams.get( - name="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_red_teams_list(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.red_teams.list() - result = [r for r in response] - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy - def test_red_teams_create(self, aiproject_endpoint): - client = self.create_client(endpoint=aiproject_endpoint) - response = client.red_teams.create( - red_team={ - "id": "str", - "target": "target_config", - "applicationScenario": "str", - "attackStrategies": ["str"], - "displayName": "str", - "numTurns": 0, - "properties": {"str": "str"}, - "riskCategories": ["str"], - "simulationOnly": bool, - "status": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_red_teams_operations_async.py b/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_red_teams_operations_async.py deleted file mode 100644 index dc93a4d14181..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/test_ai_project_red_teams_operations_async.py +++ /dev/null @@ -1,57 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -import pytest -from devtools_testutils.aio import recorded_by_proxy_async -from testpreparer import AIProjectPreparer -from testpreparer_async import AIProjectClientTestBaseAsync - - -@pytest.mark.skip("you may need to update the auto-generated test case before run it") -class TestAIProjectRedTeamsOperationsAsync(AIProjectClientTestBaseAsync): - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_red_teams_get(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.red_teams.get( - name="str", - ) - - # please add some check logic here by yourself - # ... - - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_red_teams_list(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = client.red_teams.list() - result = [r async for r in response] - # please add some check logic here by yourself - # ... 
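The `red_teams.create` test above lists the fields of a red-team scan request (target, attack strategies, risk categories, turn count, and a simulation-only flag) but fills them with placeholders. A sketch with more readable values follows; the field names come from the generated test, while the display name, tag, and flag values are assumptions and the strategy/category strings would need to be replaced with values the service actually accepts.

```python
# Illustrative red_teams.create payload (async variant); values are for readability only.
red_team = {
    "displayName": "baseline-scan",
    "target": "target_config",    # placeholder for a target configuration, as in the generated test
    "numTurns": 1,
    "attackStrategies": ["str"],  # names of attack strategies; allowed values not shown in this diff
    "riskCategories": ["str"],    # risk categories to probe; allowed values not shown in this diff
    "simulationOnly": True,
    "tags": {"team": "security"},
}
response = await client.red_teams.create(red_team=red_team)
```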
- - @AIProjectPreparer() - @recorded_by_proxy_async - async def test_red_teams_create(self, aiproject_endpoint): - client = self.create_async_client(endpoint=aiproject_endpoint) - response = await client.red_teams.create( - red_team={ - "id": "str", - "target": "target_config", - "applicationScenario": "str", - "attackStrategies": ["str"], - "displayName": "str", - "numTurns": 0, - "properties": {"str": "str"}, - "riskCategories": ["str"], - "simulationOnly": bool, - "status": "str", - "tags": {"str": "str"}, - }, - ) - - # please add some check logic here by yourself - # ... diff --git a/sdk/ai/azure-ai-projects/generated_tests/testpreparer.py b/sdk/ai/azure-ai-projects/generated_tests/testpreparer.py deleted file mode 100644 index 69c9aaa6e8d1..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/testpreparer.py +++ /dev/null @@ -1,26 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -from azure.ai.projects import AIProjectClient -from devtools_testutils import AzureRecordedTestCase, PowerShellPreparer -import functools - - -class AIProjectClientTestBase(AzureRecordedTestCase): - - def create_client(self, endpoint): - credential = self.get_credential(AIProjectClient) - return self.create_client_from_credential( - AIProjectClient, - credential=credential, - endpoint=endpoint, - ) - - -AIProjectPreparer = functools.partial( - PowerShellPreparer, "aiproject", aiproject_endpoint="https://fake_aiproject_endpoint.com" -) diff --git a/sdk/ai/azure-ai-projects/generated_tests/testpreparer_async.py b/sdk/ai/azure-ai-projects/generated_tests/testpreparer_async.py deleted file mode 100644 index 56353f9fdd65..000000000000 --- a/sdk/ai/azure-ai-projects/generated_tests/testpreparer_async.py +++ /dev/null @@ -1,20 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
-# -------------------------------------------------------------------------- -from azure.ai.projects.aio import AIProjectClient -from devtools_testutils import AzureRecordedTestCase - - -class AIProjectClientTestBaseAsync(AzureRecordedTestCase): - - def create_async_client(self, endpoint): - credential = self.get_credential(AIProjectClient, is_async=True) - return self.create_client_from_credential( - AIProjectClient, - credential=credential, - endpoint=endpoint, - ) diff --git a/sdk/ai/azure-ai-agents/samples/__init__.py b/sdk/ai/azure-ai-projects/samples/agents/__init__.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/__init__.py rename to sdk/ai/azure-ai-projects/samples/agents/__init__.py diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/__init__.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/__init__.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_async/__init__.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_async/__init__.py diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_azure_functions_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_azure_functions_async.py new file mode 100644 index 000000000000..5151dc57380f --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_azure_functions_async.py @@ -0,0 +1,108 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +import asyncio + +""" +DESCRIPTION: + This sample demonstrates how to use Azure Functions agent operations from + the Azure Agents service using an asynchronous client. + +USAGE: + python sample_agents_azure_functions_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Azure AI Foundry project. + 3) STORAGE_SERVICE_ENDPOINT - The storage service queue endpoint used to trigger the Azure Function.
+ + Please see Getting Started with Azure Functions page for more information on Azure Functions: + https://learn.microsoft.com/azure/azure-functions/functions-get-started +""" + +import os +from azure.ai.projects.aio import AIProjectClient +from azure.identity.aio import DefaultAzureCredential +from azure.ai.agents.models import ( + AzureFunctionStorageQueue, + AzureFunctionTool, + MessageRole, +) + + +async def main(): + + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + storage_service_endpoint = os.environ["STORAGE_SERVICE_ENDPOINT"] + azure_function_tool = AzureFunctionTool( + name="foo", + description="Get answers from the foo bot.", + parameters={ + "type": "object", + "properties": { + "query": {"type": "string", "description": "The question to ask."}, + "outputqueueuri": {"type": "string", "description": "The full output queue uri."}, + }, + }, + input_queue=AzureFunctionStorageQueue( + queue_name="azure-function-foo-input", + storage_service_endpoint=storage_service_endpoint, + ), + output_queue=AzureFunctionStorageQueue( + queue_name="azure-function-tool-output", + storage_service_endpoint=storage_service_endpoint, + ), + ) + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="azure-function-agent-foo", + instructions=f"You are a helpful support agent. Use the provided function any time the prompt contains the string 'What would foo say?'. When you invoke the function, ALWAYS specify the output queue uri parameter as '{storage_service_endpoint}/azure-function-tool-output'. Always respond with \"Foo says\" and then the response from the tool.", + tools=azure_function_tool.definitions, + ) + print(f"Created agent, agent ID: {agent.id}") + + # Create a thread + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + # Create a message + message = await agents_client.messages.create( + thread_id=thread.id, + role="user", + content="What is the most prevalent element in the universe? What would foo say?", + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + if run.status == "failed": + print(f"Run failed: {run.last_error}") + + # Get the last message from the sender + last_msg = await agents_client.messages.get_last_message_text_by_role( + thread_id=thread.id, role=MessageRole.AGENT + ) + if last_msg: + print(f"Last Message: {last_msg.text.value}") + + # Delete the agent once done + await agents_client.delete_agent(agent.id) + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_async.py new file mode 100644 index 000000000000..0e4227a5f86c --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_async.py @@ -0,0 +1,81 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use basic agent operations from + the Azure Agents service using an asynchronous client.
+ +USAGE: + python sample_agents_basics_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Azure AI Foundry project. +""" +import asyncio +import time + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import MessageTextContent, ListSortOrder +from azure.identity.aio import DefaultAzureCredential + +import os + + +async def main() -> None: + + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create(thread_id=thread.id, role="user", content="Hello, tell me a joke") + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id) + + # Poll the run as long as run status is queued or in progress + while run.status in ["queued", "in_progress", "requires_action"]: + # Wait for a second + time.sleep(1) + run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id) + print(f"Run status: {run.status}") + + if run.status == "failed": + print(f"Run error: {run.last_error}") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list( + thread_id=thread.id, + order=ListSortOrder.ASCENDING, + ) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_create_thread_and_process_run_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_create_thread_and_process_run_async.py new file mode 100644 index 000000000000..21e5c7b2f908 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_create_thread_and_process_run_async.py @@ -0,0 +1,80 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +""" +DESCRIPTION: + Asynchronous variant of sample_agents_basics_thread_and_process_run.py. + This sample demonstrates how to use the new convenience method + `create_thread_and_process_run` in the Azure AI Agents service. + This single call will create a thread, start a run, poll to + completion (including any tool calls), and return the final result. + +USAGE: + python sample_agents_basics_thread_and_process_run_async.py + + Before running: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. 
+ 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Azure AI Foundry project. +""" +import asyncio +import os + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import ( + AgentThreadCreationOptions, + ThreadMessageOptions, + MessageTextContent, + ListSortOrder, +) +from azure.identity.aio import DefaultAzureCredential + + +async def main() -> None: + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="sample-agent", + instructions="You are a helpful assistant that tells jokes.", + ) + print(f"Created agent, agent ID: {agent.id}") + + run = await agents_client.create_thread_and_process_run( + agent_id=agent.id, + thread=AgentThreadCreationOptions( + messages=[ThreadMessageOptions(role="user", content="Hi! Tell me your favorite programming joke.")] + ), + ) + + if run.status == "failed": + print(f"Run error: {run.last_error}") + + # List all messages in the thread, in ascending order of creation + messages = agents_client.messages.list( + thread_id=run.thread_id, + order=ListSortOrder.ASCENDING, + ) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + await agents_client.delete_agent(agent.id) + print(f"Deleted agent {agent.id!r}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_create_thread_and_run_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_create_thread_and_run_async.py new file mode 100644 index 000000000000..52b2341539e0 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_basics_create_thread_and_run_async.py @@ -0,0 +1,89 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +""" +DESCRIPTION: + Asynchronous variant of sample_agents_basics_thread_and_run.py. + It creates an agent, starts a new thread, and immediately runs it + using the async Azure AI Agents client. + +USAGE: + python sample_agents_basics_thread_and_run_async.py + + Before running: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Azure AI Foundry project. 
+""" +import asyncio +import os + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import ( + AgentThreadCreationOptions, + ThreadMessageOptions, + MessageTextContent, + ListSortOrder, +) +from azure.identity.aio import DefaultAzureCredential + + +async def main() -> None: + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="sample-agent", + instructions="You are a helpful assistant that tells jokes.", + ) + print(f"Created agent, agent ID: {agent.id}") + + # Prepare the initial user message + initial_message = ThreadMessageOptions( + role="user", + content="Hello! Can you tell me a joke?", + ) + + # Create a new thread and immediately start a run on it + run = await agents_client.create_thread_and_run( + agent_id=agent.id, + thread=AgentThreadCreationOptions(messages=[initial_message]), + ) + + # Poll the run as long as run status is queued or in progress + while run.status in {"queued", "in_progress", "requires_action"}: + await asyncio.sleep(1) + run = await agents_client.runs.get(thread_id=run.thread_id, run_id=run.id) + print(f"Run status: {run.status}") + + if run.status == "failed": + print(f"Run error: {run.last_error}") + + # List all messages in the thread, in ascending order of creation + messages = agents_client.messages.list( + thread_id=run.thread_id, + order=ListSortOrder.ASCENDING, + ) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + await agents_client.delete_agent(agent.id) + print(f"Deleted agent {agent.id!r}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_async.py new file mode 100644 index 000000000000..56c9e6a47fe6 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_async.py @@ -0,0 +1,111 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use code interpreter tool with agent from + the Azure Agents service using a asynchronous client. + +USAGE: + python sample_agents_code_interpreter_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Azure AI Foundry project. 
+""" +import asyncio + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import CodeInterpreterTool, FilePurpose, ListSortOrder, MessageRole +from azure.identity.aio import DefaultAzureCredential +from pathlib import Path + +import os + +asset_file_path = os.path.abspath( + os.path.join(os.path.dirname(__file__), "../assets/synthetic_500_quarterly_results.csv") +) + + +async def main() -> None: + + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # Upload a file and wait for it to be processed + file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) + print(f"Uploaded file, file ID: {file.id}") + + code_interpreter = CodeInterpreterTool(file_ids=[file.id]) + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=code_interpreter.definitions, + tool_resources=code_interpreter.resources, + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, + role="user", + content="Could you please create bar chart in TRANSPORTATION sector for the operating profit from the uploaded csv file and provide file to me?", + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Run finished with status: {run.status}") + + if run.status == "failed": + # Check if you got "Rate limit is exceeded.", then you want to get more quota + print(f"Run failed: {run.last_error}") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + + last_msg = await agents_client.messages.get_last_message_text_by_role( + thread_id=thread.id, role=MessageRole.AGENT + ) + if last_msg: + print(f"Last Message: {last_msg.text.value}") + + async for msg in messages: + # Save every image file in the message + for img in msg.image_contents: + file_id = img.image_file.file_id + file_name = f"{file_id}_image_file.png" + await agents_client.files.save(file_id=file_id, file_name=file_name) + print(f"Saved image file to: {Path.cwd() / file_name}") + + # Print details of every file-path annotation + for ann in msg.file_path_annotations: + print("File Paths:") + print(f" Type: {ann.type}") + print(f" Text: {ann.text}") + print(f" File ID: {ann.file_path.file_id}") + print(f" Start Index: {ann.start_index}") + print(f" End Index: {ann.end_index}") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_attachment_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_attachment_async.py new file mode 100644 index 000000000000..c6af613314e7 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_attachment_async.py @@ -0,0 +1,95 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. 
+# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use agent operations with code interpreter from + the Azure Agents service using a synchronous client. + +USAGE: + python sample_agents_code_interpreter_attachment_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Azure AI Foundry project. +""" +import asyncio +import os +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import ( + CodeInterpreterTool, + FilePurpose, + MessageAttachment, + ListSortOrder, + MessageTextContent, +) +from azure.identity.aio import DefaultAzureCredential + +asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) + + +async def main(): + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # Upload a file and wait for it to be processed + file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) + print(f"Uploaded file, file ID: {file.id}") + + code_interpreter = CodeInterpreterTool() + + # Notice that CodeInterpreter must be enabled in the agent creation, otherwise the agent will not be able to see the file attachment + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=code_interpreter.definitions, + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + # Create a message with the attachment + attachment = MessageAttachment(file_id=file.id, tools=code_interpreter.definitions) + message = await agents_client.messages.create( + thread_id=thread.id, role="user", content="What does the attachment say?", attachments=[attachment] + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Run finished with status: {run.status}") + + if run.status == "failed": + # Check if you got "Rate limit is exceeded.", then you want to get more quota + print(f"Run failed: {run.last_error}") + + await agents_client.files.delete(file.id) + print("Deleted file") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_attachment_enterprise_search_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_attachment_enterprise_search_async.py similarity index 97% rename from sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_attachment_enterprise_search_async.py rename to 
sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_attachment_enterprise_search_async.py index e24340ba347f..e25772cc8e8f 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_code_interpreter_attachment_enterprise_search_async.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_code_interpreter_attachment_enterprise_search_async.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity aiohttp + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_functions_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_functions_async.py new file mode 100644 index 000000000000..8f39ebd23247 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_functions_async.py @@ -0,0 +1,117 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +FILE: sample_agents_functions_async.py + +DESCRIPTION: + This sample demonstrates how to use agent operations with custom functions from + the Azure Agents service using a asynchronous client. + +USAGE: + python sample_agents_functions_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Azure AI Foundry project. +""" +import asyncio +import time +import os +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import ( + AsyncFunctionTool, + AsyncToolSet, + RequiredFunctionToolCall, + SubmitToolOutputsAction, + ToolOutput, + ListSortOrder, + MessageTextContent, +) +from azure.identity.aio import DefaultAzureCredential +from utils.user_async_functions import user_async_functions + + +async def main() -> None: + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # Initialize agent functions + functions = AsyncFunctionTool(functions=user_async_functions) + toolset = AsyncToolSet() + toolset.add(functions) + + # Create agent + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=functions.definitions, + ) + print(f"Created agent, agent ID: {agent.id}") + + # Create thread for communication + thread = await agents_client.threads.create() + print(f"Created thread, ID: {thread.id}") + + # Create and send message + message = await agents_client.messages.create( + thread_id=thread.id, role="user", content="Hello, what's the time?" 
+        )
+        print(f"Created message, ID: {message.id}")
+
+        # Create and run agent task
+        run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id)
+        print(f"Created run, ID: {run.id}")
+
+        # Polling loop for run status
+        while run.status in ["queued", "in_progress", "requires_action"]:
+            await asyncio.sleep(4)
+            run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id)
+
+            if run.status == "requires_action" and isinstance(run.required_action, SubmitToolOutputsAction):
+                tool_calls = run.required_action.submit_tool_outputs.tool_calls
+                if not tool_calls:
+                    print("No tool calls provided - cancelling run")
+                    await agents_client.runs.cancel(thread_id=thread.id, run_id=run.id)
+                    break
+
+                tool_outputs = await toolset.execute_tool_calls(tool_calls)
+
+                print(f"Tool outputs: {tool_outputs}")
+                if tool_outputs:
+                    await agents_client.runs.submit_tool_outputs(
+                        thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
+                    )
+
+            print(f"Current run status: {run.status}")
+
+        print(f"Run completed with status: {run.status}")
+
+        # Delete the agent when done
+        await agents_client.delete_agent(agent.id)
+        print("Deleted agent")
+
+        # Fetch and log messages
+        messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING)
+        async for msg in messages:
+            last_part = msg.content[-1]
+            if isinstance(last_part, MessageTextContent):
+                print(f"{msg.role}: {last_part.text.value}")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_base64_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_base64_async.py
new file mode 100644
index 000000000000..b19fb72d01e9
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_base64_async.py
@@ -0,0 +1,118 @@
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+
+"""
+DESCRIPTION:
+    This sample demonstrates how to use basic agent operations with image file input for
+    the Azure Agents service using an asynchronous client.
+
+USAGE:
+    python sample_agents_image_input_base64_async.py
+
+    Before running the sample:
+
+    pip install azure-ai-projects azure-ai-agents azure-identity aiohttp
+
+    Set these environment variables with your own values:
+    1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview
+       page of your Azure AI Foundry portal.
+    2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in
+       the "Models + endpoints" tab in your Azure AI Foundry project.
+"""
+import asyncio
+import os, base64
+from typing import List
+from azure.ai.projects.aio import AIProjectClient
+from azure.identity.aio import DefaultAzureCredential
+from azure.ai.agents.models import (
+    ListSortOrder,
+    MessageTextContent,
+    MessageInputContentBlock,
+    MessageImageUrlParam,
+    MessageInputTextBlock,
+    MessageInputImageUrlBlock,
+)
+
+asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/image_file.png"))
+
+
+def image_to_base64(image_path: str) -> str:
+    """
+    Convert an image file to a Base64-encoded string.
+
+    :param image_path: The path to the image file (e.g. 'image_file.png')
+    :return: A Base64-encoded string representing the image.
+    :raises FileNotFoundError: If the provided file path does not exist.
+    :raises OSError: If there's an error reading the file.
+    """
+    if not os.path.isfile(image_path):
+        raise FileNotFoundError(f"File not found at: {image_path}")
+
+    try:
+        with open(image_path, "rb") as image_file:
+            file_data = image_file.read()
+        return base64.b64encode(file_data).decode("utf-8")
+    except Exception as exc:
+        raise OSError(f"Error reading file '{image_path}'") from exc
+
+
+async def main():
+    project_client = AIProjectClient(
+        endpoint=os.environ["PROJECT_ENDPOINT"],
+        credential=DefaultAzureCredential(),
+    )
+
+    async with project_client:
+        agents_client = project_client.agents
+
+        agent = await agents_client.create_agent(
+            model=os.environ["MODEL_DEPLOYMENT_NAME"],
+            name="my-agent",
+            instructions="You are a helpful agent",
+        )
+        print(f"Created agent, agent ID: {agent.id}")
+
+        thread = await agents_client.threads.create()
+        print(f"Created thread, thread ID: {thread.id}")
+
+        input_message = "Hello, what is in the image?"
+        image_base64 = image_to_base64(asset_file_path)
+        img_url = f"data:image/png;base64,{image_base64}"
+        url_param = MessageImageUrlParam(url=img_url, detail="high")
+        content_blocks: List[MessageInputContentBlock] = [
+            MessageInputTextBlock(text=input_message),
+            MessageInputImageUrlBlock(image_url=url_param),
+        ]
+        message = await agents_client.messages.create(thread_id=thread.id, role="user", content=content_blocks)
+        print(f"Created message, message ID: {message.id}")
+
+        run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id)
+
+        # Poll the run as long as run status is queued or in progress
+        while run.status in ["queued", "in_progress", "requires_action"]:
+            # Wait for a second
+            await asyncio.sleep(1)
+            run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id)
+            print(f"Run status: {run.status}")
+
+        if run.status == "failed":
+            print(f"Run failed: {run.last_error}")
+
+        await agents_client.delete_agent(agent.id)
+        print("Deleted agent")
+
+        messages = agents_client.messages.list(
+            thread_id=thread.id,
+            order=ListSortOrder.ASCENDING,
+        )
+
+        async for msg in messages:
+            last_part = msg.content[-1]
+            if isinstance(last_part, MessageTextContent):
+                print(f"{msg.role}: {last_part.text.value}")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_file_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_file_async.py
new file mode 100644
index 000000000000..4eb7335f13de
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_file_async.py
@@ -0,0 +1,100 @@
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+
+"""
+DESCRIPTION:
+    This sample demonstrates how to use basic agent operations with image file input for
+    the Azure Agents service using an asynchronous client.
+
+USAGE:
+    python sample_agents_image_input_file_async.py
+
+    Before running the sample:
+
+    pip install azure-ai-projects azure-ai-agents azure-identity aiohttp
+
+    Set these environment variables with your own values:
+    1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview
+       page of your Azure AI Foundry portal.
+    2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in
+       the "Models + endpoints" tab in your Azure AI Foundry project.
+"""
+import asyncio
+import os
+from typing import List
+from azure.ai.projects.aio import AIProjectClient
+from azure.identity.aio import DefaultAzureCredential
+from azure.ai.agents.models import (
+    ListSortOrder,
+    MessageTextContent,
+    MessageInputContentBlock,
+    MessageImageFileParam,
+    MessageInputTextBlock,
+    MessageInputImageFileBlock,
+    FilePurpose,
+)
+
+asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/image_file.png"))
+
+
+async def main():
+    project_client = AIProjectClient(
+        endpoint=os.environ["PROJECT_ENDPOINT"],
+        credential=DefaultAzureCredential(),
+    )
+
+    async with project_client:
+        agents_client = project_client.agents
+
+        agent = await agents_client.create_agent(
+            model=os.environ["MODEL_DEPLOYMENT_NAME"],
+            name="my-agent",
+            instructions="You are a helpful agent",
+        )
+        print(f"Created agent, agent ID: {agent.id}")
+
+        thread = await agents_client.threads.create()
+        print(f"Created thread, thread ID: {thread.id}")
+
+        image_file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS)
+        print(f"Uploaded file, file ID: {image_file.id}")
+
+        input_message = "Hello, what is in the image?"
+        file_param = MessageImageFileParam(file_id=image_file.id, detail="high")
+        content_blocks: List[MessageInputContentBlock] = [
+            MessageInputTextBlock(text=input_message),
+            MessageInputImageFileBlock(image_file=file_param),
+        ]
+        message = await agents_client.messages.create(thread_id=thread.id, role="user", content=content_blocks)
+        print(f"Created message, message ID: {message.id}")
+
+        run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id)
+
+        # Poll the run as long as run status is queued or in progress
+        while run.status in ["queued", "in_progress", "requires_action"]:
+            # Wait for a second
+            await asyncio.sleep(1)
+            run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id)
+            print(f"Run status: {run.status}")
+
+        if run.status == "failed":
+            print(f"Run failed: {run.last_error}")
+
+        await agents_client.delete_agent(agent.id)
+        print("Deleted agent")
+
+        messages = agents_client.messages.list(
+            thread_id=thread.id,
+            order=ListSortOrder.ASCENDING,
+        )
+
+        async for msg in messages:
+            last_part = msg.content[-1]
+            if isinstance(last_part, MessageTextContent):
+                print(f"{msg.role}: {last_part.text.value}")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_url_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_url_async.py
new file mode 100644
index 000000000000..00995181f1ba
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_image_input_url_async.py
@@ -0,0 +1,96 @@
+# pylint: disable=line-too-long,useless-suppression
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+
+"""
+DESCRIPTION:
+    This sample demonstrates how to use basic agent operations with image URL input for
+    the Azure Agents service using an asynchronous client.
+
+USAGE:
+    python sample_agents_image_input_url_async.py
+
+    Before running the sample:
+
+    pip install azure-ai-projects azure-ai-agents azure-identity aiohttp
+
+    Set these environment variables with your own values:
+    1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview
+       page of your Azure AI Foundry portal.
+    2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in
+       the "Models + endpoints" tab in your Azure AI Foundry project.
+"""
+import asyncio
+import os
+from typing import List
+from azure.ai.projects.aio import AIProjectClient
+from azure.identity.aio import DefaultAzureCredential
+from azure.ai.agents.models import (
+    ListSortOrder,
+    MessageTextContent,
+    MessageInputContentBlock,
+    MessageImageUrlParam,
+    MessageInputTextBlock,
+    MessageInputImageUrlBlock,
+)
+
+
+async def main():
+    project_client = AIProjectClient(
+        endpoint=os.environ["PROJECT_ENDPOINT"],
+        credential=DefaultAzureCredential(),
+    )
+
+    async with project_client:
+        agents_client = project_client.agents
+
+        agent = await agents_client.create_agent(
+            model=os.environ["MODEL_DEPLOYMENT_NAME"],
+            name="my-agent",
+            instructions="You are a helpful agent",
+        )
+        print(f"Created agent, agent ID: {agent.id}")
+
+        thread = await agents_client.threads.create()
+        print(f"Created thread, thread ID: {thread.id}")
+
+        image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
+        input_message = "Hello, what is in the image?"
+        url_param = MessageImageUrlParam(url=image_url, detail="high")
+        content_blocks: List[MessageInputContentBlock] = [
+            MessageInputTextBlock(text=input_message),
+            MessageInputImageUrlBlock(image_url=url_param),
+        ]
+        message = await agents_client.messages.create(thread_id=thread.id, role="user", content=content_blocks)
+        print(f"Created message, message ID: {message.id}")
+
+        run = await agents_client.runs.create(thread_id=thread.id, agent_id=agent.id)
+
+        # Poll the run as long as run status is queued or in progress
+        while run.status in ["queued", "in_progress", "requires_action"]:
+            # Wait for a second
+            await asyncio.sleep(1)
+            run = await agents_client.runs.get(thread_id=thread.id, run_id=run.id)
+            print(f"Run status: {run.status}")
+
+        if run.status == "failed":
+            print(f"Run failed: {run.last_error}")
+
+        await agents_client.delete_agent(agent.id)
+        print("Deleted agent")
+
+        messages = agents_client.messages.list(
+            thread_id=thread.id,
+            order=ListSortOrder.ASCENDING,
+        )
+
+        async for msg in messages:
+            last_part = msg.content[-1]
+            if isinstance(last_part, MessageTextContent):
+                print(f"{msg.role}: {last_part.text.value}")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_json_schema_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_json_schema_async.py
new file mode 100644
index 000000000000..cd5b9a83977f
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_json_schema_async.py
@@ -0,0 +1,108 @@
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+
+"""
+DESCRIPTION:
+    This sample demonstrates how to use agents with JSON schema output format.
+
+USAGE:
+    python sample_agents_json_schema_async.py
+
+    Before running the sample:
+
+    pip install azure-ai-projects azure-ai-agents azure-identity pydantic aiohttp
+
+    Set these environment variables with your own values:
+    1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview
+       page of your Azure AI Foundry portal.
+    2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in
+       the "Models + endpoints" tab in your Azure AI Foundry project.
+"""
+
+import asyncio
+import os
+
+from enum import Enum
+from pydantic import BaseModel, TypeAdapter
+from azure.ai.projects.aio import AIProjectClient
+from azure.identity.aio import DefaultAzureCredential
+from azure.ai.agents.models import (
+    ListSortOrder,
+    MessageTextContent,
+    MessageRole,
+    ResponseFormatJsonSchema,
+    ResponseFormatJsonSchemaType,
+    RunStatus,
+)
+
+
+# Create the pydantic model to represent the planet names and their masses.
+class Planets(str, Enum):
+    Earth = "Earth"
+    Mars = "Mars"
+    Jupyter = "Jupyter"
+
+
+class Planet(BaseModel):
+    planet: Planets
+    mass: float
+
+
+async def main():
+    project_client = AIProjectClient(
+        endpoint=os.environ["PROJECT_ENDPOINT"],
+        credential=DefaultAzureCredential(),
+    )
+
+    async with project_client:
+        agents_client = project_client.agents
+
+        agent = await agents_client.create_agent(
+            model=os.environ["MODEL_DEPLOYMENT_NAME"],
+            name="my-agent",
+            instructions="Extract the information about planets.",
+            response_format=ResponseFormatJsonSchemaType(
+                json_schema=ResponseFormatJsonSchema(
+                    name="planet_mass",
+                    description="Extract planet mass.",
+                    schema=Planet.model_json_schema(),
+                )
+            ),
+        )
+        print(f"Created agent, agent ID: {agent.id}")
+
+        thread = await agents_client.threads.create()
+        print(f"Created thread, thread ID: {thread.id}")
+
+        message = await agents_client.messages.create(
+            thread_id=thread.id,
+            role="user",
+            content=("The mass of Mars is 6.4171E23 kg; the mass of the Earth is 5.972168E24 kg;"),
+        )
+        print(f"Created message, message ID: {message.id}")
+
+        run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
+
+        if run.status != RunStatus.COMPLETED:
+            print(f"The run did not succeed: {run.status=}.")
+
+        await agents_client.delete_agent(agent.id)
+        print("Deleted agent")
+
+        messages = agents_client.messages.list(
+            thread_id=thread.id,
+            order=ListSortOrder.ASCENDING,
+        )
+
+        async for msg in messages:
+            if msg.role == MessageRole.AGENT:
+                last_part = msg.content[-1]
+                if isinstance(last_part, MessageTextContent):
+                    planet = TypeAdapter(Planet).validate_json(last_part.text.value)
+                    print(f"The mass of {planet.planet} is {planet.mass} kg.")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_run_with_toolset_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_run_with_toolset_async.py
new file mode 100644
index 000000000000..c7d9addc9130
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_run_with_toolset_async.py
@@ -0,0 +1,90 @@
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+
+"""
+DESCRIPTION:
+    This sample demonstrates how to use agent operations with a toolset from
+    the Azure Agents service using an asynchronous client.
+
+USAGE:
+    python sample_agents_run_with_toolset_async.py
+
+    Before running the sample:
+
+    pip install azure-ai-projects azure-ai-agents azure-identity aiohttp
+
+    Set these environment variables with your own values:
+    1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview
+       page of your Azure AI Foundry portal.
+    2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model.
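+
+    Note: this sample imports user_async_functions from the utils package that sits alongside these
+    agents_async samples, so keep the utils folder next to this file (it is importable when the sample
+    is run directly from that directory).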
+""" + +import os, asyncio +from azure.ai.projects.aio import AIProjectClient +from azure.identity.aio import DefaultAzureCredential +from azure.ai.agents.models import AsyncFunctionTool, AsyncToolSet, ListSortOrder, MessageTextContent +from utils.user_async_functions import user_async_functions + + +async def main() -> None: + + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # Initialize agent toolset with user functions and code interpreter + # [START create_agent_with_async_function_tool] + functions = AsyncFunctionTool(user_async_functions) + + toolset = AsyncToolSet() + toolset.add(functions) + agents_client.enable_auto_function_calls(toolset) + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + toolset=toolset, + ) + # [END create_agent_with_async_function_tool] + print(f"Created agent, ID: {agent.id}") + + # Create thread for communication + thread = await agents_client.threads.create() + print(f"Created thread, ID: {thread.id}") + + # Create message to thread + message = await agents_client.messages.create( + thread_id=thread.id, + role="user", + content="Hello, send an email with the datetime and weather information in New York?", + ) + print(f"Created message, ID: {message.id}") + + # Create and process agent run in thread with tools + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Run finished with status: {run.status}") + + if run.status == "failed": + print(f"Run failed: {run.last_error}") + + # Delete the agent when done + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + # Fetch and log all messages + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_async.py similarity index 53% rename from sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_async.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_async.py index db5539cb641a..274b460e64c7 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_async.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_async.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity aiohttp + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,7 +23,7 @@ import asyncio from typing import Any, Optional -from azure.ai.agents.aio import AgentsClient +from azure.ai.projects.aio import AIProjectClient from azure.ai.agents.models import ( ListSortOrder, MessageTextContent, @@ -64,41 +64,42 @@ async def on_unhandled_event(self, event_type: str, event_data: Any) -> Optional async def main() -> None: - async with DefaultAzureCredential() as creds: - async with 
AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="Hello, tell me a joke" - ) - print(f"Created message, message ID {message.id}") - - async with await agents_client.runs.stream( - thread_id=thread.id, agent_id=agent.id, event_handler=MyEventHandler() - ) as stream: - async for event_type, event_data, func_return in stream: - print(f"Received data.") - print(f"Streaming receive Event Type: {event_type}") - print(f"Event Data: {str(event_data)[:100]}...") - print(f"Event Function return: {func_return}\n") - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID {thread.id}") + + message = await agents_client.messages.create(thread_id=thread.id, role="user", content="Hello, tell me a joke") + print(f"Created message, message ID {message.id}") + + async with await agents_client.runs.stream( + thread_id=thread.id, agent_id=agent.id, event_handler=MyEventHandler() + ) as stream: + async for event_type, event_data, func_return in stream: + print(f"Received data.") + print(f"Streaming receive Event Type: {event_type}") + print(f"Event Data: {str(event_data)[:100]}...") + print(f"Event Function return: {func_return}\n") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") if __name__ == "__main__": diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_with_functions_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_with_functions_async.py similarity index 53% rename from sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_with_functions_async.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_with_functions_async.py index d81f5ad91543..a74375965f4f 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_with_functions_async.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_with_functions_async.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity aiohttp + pip install azure-ai-projects azure-ai-agents 
azure-identity aiohttp Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -25,10 +25,12 @@ from typing import Any import os +from azure.ai.projects.aio import AIProjectClient from azure.ai.agents.aio import AgentsClient from azure.ai.agents.models import ( AsyncAgentEventHandler, AsyncFunctionTool, + AsyncToolSet, ListSortOrder, MessageTextContent, MessageDeltaChunk, @@ -42,6 +44,11 @@ from azure.identity.aio import DefaultAzureCredential from utils.user_async_functions import user_async_functions +# Initialize function tool with user functions +functions = AsyncFunctionTool(functions=user_async_functions) +toolset = AsyncToolSet() +toolset.add(functions) + class MyEventHandler(AsyncAgentEventHandler[str]): @@ -65,21 +72,8 @@ async def on_thread_run(self, run: "ThreadRun") -> None: if run.status == "requires_action" and isinstance(run.required_action, SubmitToolOutputsAction): tool_calls = run.required_action.submit_tool_outputs.tool_calls - tool_outputs = [] - for tool_call in tool_calls: - if isinstance(tool_call, RequiredFunctionToolCall): - try: - output = await self.functions.execute(tool_call) - tool_outputs.append( - ToolOutput( - tool_call_id=tool_call.id, - output=output, - ) - ) - except Exception as e: - print(f"Error executing tool_call {tool_call.id}: {e}") - - print(f"Tool outputs: {tool_outputs}") + tool_outputs = await toolset.execute_tool_calls(tool_calls) + if tool_outputs: await self.agents_client.runs.submit_tool_outputs_stream( thread_id=run.thread_id, run_id=run.id, tool_outputs=tool_outputs, event_handler=self @@ -99,49 +93,47 @@ async def on_unhandled_event(self, event_type: str, event_data: Any) -> None: async def main() -> None: - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - - # [START create_agent_with_function_tool] - functions = AsyncFunctionTool(functions=user_async_functions) - - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are a helpful agent", - tools=functions.definitions, - ) - # [END create_agent_with_function_tool] - print(f"Created agent, ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content="Hello, send an email with the datetime and weather information in New York? 
Also let me know the details.", - ) - print(f"Created message, message ID {message.id}") - - async with await agents_client.runs.stream( - thread_id=thread.id, - agent_id=agent.id, - event_handler=MyEventHandler(functions, agents_client), - ) as stream: - await stream.until_done() - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + tools=functions.definitions, + ) + print(f"Created agent, ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, + role="user", + content="Hello, send an email with the datetime and weather information in New York? Also let me know the details.", + ) + print(f"Created message, message ID {message.id}") + + async with await agents_client.runs.stream( + thread_id=thread.id, + agent_id=agent.id, + event_handler=MyEventHandler(functions, agents_client), + ) as stream: + await stream.until_done() + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") if __name__ == "__main__": diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_with_toolset_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_with_toolset_async.py similarity index 53% rename from sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_with_toolset_async.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_with_toolset_async.py index 21ad7c3c1b82..016e2f151aea 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_eventhandler_with_toolset_async.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_eventhandler_with_toolset_async.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity aiohttp + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -24,7 +24,7 @@ import asyncio from typing import Any -from azure.ai.agents.aio import AgentsClient +from azure.ai.projects.aio import AIProjectClient from azure.ai.agents.models import MessageDeltaChunk, RunStep, ThreadMessage, ThreadRun from azure.ai.agents.models import ( AsyncAgentEventHandler, @@ -68,49 +68,51 @@ async def on_unhandled_event(self, event_type: str, event_data: Any) -> None: async def main() -> None: - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: 
- - # Initialize toolset with user functions - functions = AsyncFunctionTool(user_async_functions) - toolset = AsyncToolSet() - toolset.add(functions) - - agents_client.enable_auto_function_calls(user_async_functions) - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", - toolset=toolset, - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, - role="user", - content="Hello, send an email with the datetime and weather information in New York? Also let me know the details", - ) - print(f"Created message, message ID {message.id}") - - async with await agents_client.runs.stream( - thread_id=thread.id, agent_id=agent.id, event_handler=MyEventHandler() - ) as stream: - await stream.until_done() - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # Initialize toolset with user functions + functions = AsyncFunctionTool(user_async_functions) + toolset = AsyncToolSet() + toolset.add(functions) + + agents_client.enable_auto_function_calls(user_async_functions) + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + toolset=toolset, + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, + role="user", + content="Hello, send an email with the datetime and weather information in New York? Also let me know the details", + ) + print(f"Created message, message ID {message.id}") + + async with await agents_client.runs.stream( + thread_id=thread.id, agent_id=agent.id, event_handler=MyEventHandler() + ) as stream: + await stream.until_done() + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") if __name__ == "__main__": diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_iteration_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_iteration_async.py new file mode 100644 index 000000000000..b7f9e86bb31b --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_iteration_async.py @@ -0,0 +1,95 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use agent operations with interation in streaming from + the Azure Agents service using a asynchronous client. 
+ +USAGE: + python sample_agents_stream_iteration_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. +""" +import asyncio + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import AgentStreamEvent +from azure.ai.agents.models import ( + MessageDeltaChunk, + RunStep, + ThreadMessage, + ThreadRun, + ListSortOrder, + MessageTextContent, +) +from azure.identity.aio import DefaultAzureCredential + +import os + + +async def main() -> None: + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID {thread.id}") + + message = await agents_client.messages.create(thread_id=thread.id, role="user", content="Hello, tell me a joke") + print(f"Created message, message ID {message.id}") + + async with await agents_client.runs.stream(thread_id=thread.id, agent_id=agent.id) as stream: + async for event_type, event_data, _ in stream: + + if isinstance(event_data, MessageDeltaChunk): + print(f"Text delta received: {event_data.text}") + + elif isinstance(event_data, ThreadMessage): + print(f"ThreadMessage created. ID: {event_data.id}, Status: {event_data.status}") + + elif isinstance(event_data, ThreadRun): + print(f"ThreadRun status: {event_data.status}") + elif isinstance(event_data, RunStep): + print(f"RunStep type: {event_data.type}, Status: {event_data.status}") + + elif event_type == AgentStreamEvent.ERROR: + print(f"An error occurred. 
Data: {event_data}") + + elif event_type == AgentStreamEvent.DONE: + print("Stream completed.") + break + + else: + print(f"Unhandled Event Type: {event_type}, Data: {event_data}") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_with_base_override_eventhandler_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_with_base_override_eventhandler_async.py similarity index 67% rename from sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_with_base_override_eventhandler_async.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_with_base_override_eventhandler_async.py index 751bfbbb5806..aa8d8a4fc929 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_async/sample_agents_stream_with_base_override_eventhandler_async.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_stream_with_base_override_eventhandler_async.py @@ -17,7 +17,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity aiohttp + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -28,7 +28,7 @@ import json from typing import AsyncGenerator, Optional -from azure.ai.agents.aio import AgentsClient +from azure.ai.projects.aio import AIProjectClient from azure.ai.agents.models._models import ( MessageDeltaChunk, MessageDeltaTextContent, @@ -77,38 +77,39 @@ async def get_stream_chunks(self) -> AsyncGenerator[str, None]: async def main() -> None: - async with DefaultAzureCredential() as creds: - async with AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=creds, - ) as agents_client: - agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" - ) - print(f"Created agent, agent ID: {agent.id}") - - thread = await agents_client.threads.create() - print(f"Created thread, thread ID {thread.id}") - - message = await agents_client.messages.create( - thread_id=thread.id, role="user", content="Hello, tell me a joke" - ) - print(f"Created message, message ID {message.id}") - - async with await agents_client.runs.stream( - thread_id=thread.id, agent_id=agent.id, event_handler=MyEventHandler() - ) as stream: - async for chunk in stream.get_stream_chunks(): - print(chunk) - - await agents_client.delete_agent(agent.id) - print("Deleted agent") - - messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) - async for msg in messages: - last_part = msg.content[-1] - if isinstance(last_part, MessageTextContent): - print(f"{msg.role}: {last_part.text.value}") + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = 
await agents_client.threads.create() + print(f"Created thread, thread ID {thread.id}") + + message = await agents_client.messages.create(thread_id=thread.id, role="user", content="Hello, tell me a joke") + print(f"Created message, message ID {message.id}") + + async with await agents_client.runs.stream( + thread_id=thread.id, agent_id=agent.id, event_handler=MyEventHandler() + ) as stream: + async for chunk in stream.get_stream_chunks(): + print(chunk) + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") if __name__ == "__main__": diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_batch_enterprise_file_search_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_batch_enterprise_file_search_async.py new file mode 100644 index 000000000000..e04f9955c449 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_batch_enterprise_file_search_async.py @@ -0,0 +1,125 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +""" +DESCRIPTION: + This sample demonstrates how to use agent operations to add files to an existing vector store and perform search from + the Azure Agents service using a synchronous client. + +USAGE: + python sample_agents_vector_store_batch_enterprise_file_search_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity azure-ai-ml aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. +""" +import asyncio +import os + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import ( + FileSearchTool, + ListSortOrder, + MessageTextContent, + VectorStoreDataSource, + VectorStoreDataSourceAssetType, +) +from azure.identity.aio import DefaultAzureCredential + + +async def main(): + + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # We will upload the local file to Azure and will use it for vector store creation. 
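+ # NOTE: the data source URI below is read from the AZURE_BLOB_URI environment variable, which is assumed + # to reference an asset that already exists in Azure; this sample does not perform the upload itself.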
+ asset_uri = os.environ["AZURE_BLOB_URI"] + ds = VectorStoreDataSource( + asset_identifier=asset_uri, + asset_type=VectorStoreDataSourceAssetType.URI_ASSET, + ) + vector_store = await agents_client.vector_stores.create_and_poll(file_ids=[], name="sample_vector_store") + print(f"Created vector store, vector store ID: {vector_store.id}") + + # Add the file to the vector store or you can supply file ids in the vector store creation + vector_store_file_batch = await agents_client.vector_store_file_batches.create_and_poll( + vector_store_id=vector_store.id, data_sources=[ds] + ) + print(f"Created vector store file batch, vector store file batch ID: {vector_store_file_batch.id}") + + # Create a file search tool + file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) + + # Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=file_search_tool.definitions, + tool_resources=file_search_tool.resources, + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, + role="user", + content="What feature does Smart Eyewear offer?", + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Created run, run ID: {run.id}") + + file_search_tool.remove_vector_store(vector_store.id) + print(f"Removed vector store from file search, vector store ID: {vector_store.id}") + + await agents_client.update_agent( + agent_id=agent.id, + tools=file_search_tool.definitions, + tool_resources=file_search_tool.resources, + ) + print(f"Updated agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, + role="user", + content="What feature does Smart Eyewear offer?", + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Created run, run ID: {run.id}") + + await agents_client.vector_stores.delete(vector_store.id) + print("Deleted vector store") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_batch_file_search_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_batch_file_search_async.py new file mode 100644 index 000000000000..7dada937762b --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_batch_file_search_async.py @@ -0,0 +1,117 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. 
+# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use agent operations to add files to an existing vector store and perform search from + the Azure Agents service using a asynchronous client. + +USAGE: + python sample_agents_vector_store_batch_file_search_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. +""" + +import asyncio +import os +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import FileSearchTool, FilePurpose, ListSortOrder, MessageTextContent +from azure.identity.aio import DefaultAzureCredential + +asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) + + +async def main() -> None: + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # Upload a file and wait for it to be processed + file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) + print(f"Uploaded file, file ID: {file.id}") + + # Create a vector store with no file and wait for it to be processed + vector_store = await agents_client.vector_stores.create_and_poll(file_ids=[], name="sample_vector_store") + print(f"Created vector store, vector store ID: {vector_store.id}") + + # Add the file to the vector store or you can supply file ids in the vector store creation + vector_store_file_batch = await agents_client.vector_store_file_batches.create_and_poll( + vector_store_id=vector_store.id, file_ids=[file.id] + ) + print(f"Created vector store file batch, vector store file batch ID: {vector_store_file_batch.id}") + + # Create a file search tool + file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) + + # Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=file_search_tool.definitions, + tool_resources=file_search_tool.resources, + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?" + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Created run, run ID: {run.id}") + + file_search_tool.remove_vector_store(vector_store.id) + print(f"Removed vector store from file search, vector store ID: {vector_store.id}") + + await agents_client.update_agent( + agent_id=agent.id, tools=file_search_tool.definitions, tool_resources=file_search_tool.resources + ) + print(f"Updated agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?" 
+ ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Created run, run ID: {run.id}") + + await agents_client.files.delete(file.id) + print("Deleted file") + + await agents_client.vector_stores.delete(vector_store.id) + print("Deleted vector store") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_enterprise_file_search_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_enterprise_file_search_async.py new file mode 100644 index 000000000000..bd5dae4f98ed --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_enterprise_file_search_async.py @@ -0,0 +1,89 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +""" +DESCRIPTION: + This sample demonstrates how to add files to agent during the vector store creation. + +USAGE: + python sample_agents_vector_store_enterprise_file_search_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity azure-ai-ml aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. +""" +import asyncio +import os + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import ( + FileSearchTool, + VectorStoreDataSource, + VectorStoreDataSourceAssetType, + ListSortOrder, + MessageTextContent, +) +from azure.identity.aio import DefaultAzureCredential + + +async def main(): + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # We will upload the local file to Azure and will use it for vector store creation. 
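+ # NOTE: set the AZURE_BLOB_URI environment variable to the URI of an existing Azure data asset before + # running this sample; the code below only reads the URI, it does not upload the file.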
+ asset_uri = os.environ["AZURE_BLOB_URI"] + ds = VectorStoreDataSource(asset_identifier=asset_uri, asset_type=VectorStoreDataSourceAssetType.URI_ASSET) + vector_store = await agents_client.vector_stores.create_and_poll(data_sources=[ds], name="sample_vector_store") + print(f"Created vector store, vector store ID: {vector_store.id}") + + # Create a file search tool + file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) + + # Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=file_search_tool.definitions, + tool_resources=file_search_tool.resources, + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?" + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Created run, run ID: {run.id}") + + await agents_client.vector_stores.delete(vector_store.id) + print("Deleted vector store") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_file_search_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_file_search_async.py new file mode 100644 index 000000000000..1851246d3cd0 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_vector_store_file_search_async.py @@ -0,0 +1,87 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +""" +DESCRIPTION: + This sample demonstrates how to add files to agent during the vector store creation. + +USAGE: + python sample_agents_vector_store_file_search_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. 
+""" +import asyncio +import os + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import FileSearchTool, FilePurpose, MessageTextContent, ListSortOrder +from azure.identity.aio import DefaultAzureCredential + +asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) + + +async def main(): + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # Upload a file and wait for it to be processed + file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) + print(f"Uploaded file, file ID: {file.id}") + + # Create a vector store with no file and wait for it to be processed + vector_store = await agents_client.vector_stores.create_and_poll(file_ids=[file.id], name="sample_vector_store") + print(f"Created vector store, vector store ID: {vector_store.id}") + + # Create a file search tool + file_search_tool = FileSearchTool(vector_store_ids=[vector_store.id]) + + # Notices that FileSearchTool as tool and tool_resources must be added or the agent unable to search the file + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + tools=file_search_tool.definitions, + tool_resources=file_search_tool.resources, + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, role="user", content="What feature does Smart Eyewear offer?" + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Created run, run ID: {run.id}") + + await agents_client.vector_stores.delete(vector_store.id) + print("Deleted vector store") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_with_file_search_attachment_async.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_with_file_search_attachment_async.py new file mode 100644 index 000000000000..321c1c7c6cee --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_async/sample_agents_with_file_search_attachment_async.py @@ -0,0 +1,89 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use agent operations to create messages with file search attachments from + the Azure Agents service using a asynchronous client. 
+ +USAGE: + python sample_agents_with_file_search_attachment_async.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity aiohttp + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model. +""" +import asyncio + +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import FilePurpose +from azure.ai.agents.models import FileSearchTool, MessageAttachment, ListSortOrder, MessageTextContent +from azure.identity.aio import DefaultAzureCredential + +import os + +asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) + + +async def main() -> None: + project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), + ) + + async with project_client: + agents_client = project_client.agents + + # Upload a file and wait for it to be processed + file = await agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) + + # Create agent + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + # Create a message with the file search attachment + # Notice that vector store is created temporarily when using attachments with a default expiration policy of seven days. + attachment = MessageAttachment(file_id=file.id, tools=FileSearchTool().definitions) + message = await agents_client.messages.create( + thread_id=thread.id, + role="user", + content="What feature does Smart Eyewear offer?", + attachments=[attachment], + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id, polling_interval=4) + print(f"Created run, run ID: {run.id}") + + print(f"Run completed with status: {run.status}") + + await agents_client.files.delete(file.id) + print("Deleted file") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/utils/__init__.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/utils/__init__.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_async/utils/__init__.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_async/utils/__init__.py diff --git a/sdk/ai/azure-ai-agents/samples/agents_async/utils/user_async_functions.py b/sdk/ai/azure-ai-projects/samples/agents/agents_async/utils/user_async_functions.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_async/utils/user_async_functions.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_async/utils/user_async_functions.py diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_base64.py 
b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_base64.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_base64.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_base64.py index a3d573f7579f..6660d063a35e 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_base64.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_base64.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-projects azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -24,7 +24,7 @@ import os, time, base64 from typing import List -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( MessageTextContent, @@ -57,12 +57,13 @@ def image_to_base64(image_path: str) -> str: raise OSError(f"Error reading file '{image_path}'") from exc -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_file.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_file.py similarity index 93% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_file.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_file.py index f3d4f9e9a13d..26967232c4c3 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_file.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_file.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-projects azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -24,7 +24,7 @@ import os, time from typing import List -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( ListSortOrder, @@ -38,12 +38,13 @@ asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/image_file.png")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_url.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_url.py similarity index 93% rename from
sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_url.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_url.py index 7eb85b05af64..947f6af0dc2f 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_image_input_url.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_image_input_url.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-projects azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -25,7 +25,7 @@ import os, time from typing import List -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( MessageTextContent, @@ -36,12 +36,13 @@ ) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_json_schema.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_json_schema.py similarity index 93% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_json_schema.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_json_schema.py index ed239002b2c4..3e4a2dae3b3f 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_json_schema.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_json_schema.py @@ -12,7 +12,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity pydantic + pip install azure-ai-projects azure-ai-agents azure-identity pydantic Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -25,7 +25,7 @@ from enum import Enum from pydantic import BaseModel, TypeAdapter -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( MessageTextContent, @@ -36,7 +36,7 @@ ) # [START create_agents_client] -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -55,7 +55,8 @@ class Planet(BaseModel): mass: float -with agents_client: +with project_client: + agents_client = project_client.agents agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_batch_enterprise_file_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_batch_enterprise_file_search.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_batch_enterprise_file_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_batch_enterprise_file_search.py index ad11c7553570..48cb10814b33
100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_batch_enterprise_file_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_batch_enterprise_file_search.py @@ -11,7 +11,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity azure-ai-ml + pip install azure-ai-projects azure-ai-agents azure-identity azure-ai-ml Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -21,16 +21,17 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import FileSearchTool, VectorStoreDataSource, VectorStoreDataSourceAssetType, ListSortOrder from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # We will upload the local file to Azure and will use it for vector store creation. asset_uri = os.environ["AZURE_BLOB_URI"] diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_batch_file_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_batch_file_search.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_batch_file_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_batch_file_search.py index 26b4ae31e71b..0c2153fba088 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_batch_file_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_batch_file_search.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,18 +23,19 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import FileSearchTool, FilePurpose, ListSortOrder from azure.identity import DefaultAzureCredential asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # Upload a file and wait for it to be processed file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_file_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_file_search.py similarity index 93% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_file_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_file_search.py index 04c821b1a828..01955f2ae315 100644 --- 
a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_vector_store_file_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_vector_store_file_search.py @@ -11,7 +11,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -21,18 +21,19 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import FileSearchTool, FilePurpose, ListSortOrder from azure.identity import DefaultAzureCredential asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # Upload a file and wait for it to be processed file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_code_interpreter_file_attachment.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_code_interpreter_file_attachment.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_code_interpreter_file_attachment.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_code_interpreter_file_attachment.py index 08087f1918ef..9195a64fb6c2 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_code_interpreter_file_attachment.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_code_interpreter_file_attachment.py @@ -15,7 +15,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -25,7 +25,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import CodeInterpreterTool, MessageAttachment from azure.ai.agents.models import FilePurpose, MessageRole from azure.identity import DefaultAzureCredential @@ -35,12 +35,13 @@ os.path.join(os.path.dirname(__file__), "../assets/synthetic_500_quarterly_results.csv") ) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # Upload a file and wait for it to be processed file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_file_search_attachment.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_file_search_attachment.py similarity index 93% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_file_search_attachment.py rename to 
sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_file_search_attachment.py index febfb33da3a8..9864cad38be8 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_file_search_attachment.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_file_search_attachment.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,18 +23,19 @@ the "Models + endpoints" tab in your Azure AI Foundry project. """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import FilePurpose, FileSearchTool, MessageAttachment, ListSortOrder from azure.identity import DefaultAzureCredential asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # Upload a file and wait for it to be processed file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) diff --git a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_resources_in_thread.py b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_resources_in_thread.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_resources_in_thread.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_resources_in_thread.py index b010ad13540e..9af2da71e5b2 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_files_images_inputs/sample_agents_with_resources_in_thread.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_files_images_inputs/sample_agents_with_resources_in_thread.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,18 +23,19 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import FileSearchTool, FilePurpose, ListSortOrder from azure.identity import DefaultAzureCredential asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # Upload file and create vector store # [START create_agent_and_thread_for_file_search] diff --git a/sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_agent_team.py b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_agent_team.py similarity index 50% rename from sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_agent_team.py rename to 
sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_agent_team.py index 8289b8a90426..7c388403d435 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_multiagent/sample_agents_agent_team.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_agent_team.py @@ -24,7 +24,7 @@ Before running the sample: - 1. pip install azure-ai-agents azure-identity + 1. pip install azure-ai-projects azure-ai-agents azure-identity 2. Ensure `utils/agent_team_config.yaml` is present and TEAM_LEADER_MODEL points to a valid model deployment. 3. Set these environment variables with your own values: @@ -34,45 +34,48 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from utils.agent_team import AgentTeam, _create_task from utils.agent_trace_configurator import AgentTraceConfigurator -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -agents_client.enable_auto_function_calls({_create_task}) +with project_client: + agents_client = project_client.agents -model_deployment_name = os.getenv("MODEL_DEPLOYMENT_NAME") + agents_client.enable_auto_function_calls({_create_task}) -if model_deployment_name is not None: - AgentTraceConfigurator(agents_client=agents_client).setup_tracing() - with agents_client: - agent_team = AgentTeam("test_team", agents_client=agents_client) - agent_team.add_agent( - model=model_deployment_name, - name="Coder", - instructions="You are software engineer who writes great code. Your name is Coder.", - ) - agent_team.add_agent( - model=model_deployment_name, - name="Reviewer", - instructions="You are software engineer who reviews code. Your name is Reviewer.", - ) - agent_team.assemble_team() + model_deployment_name = os.getenv("MODEL_DEPLOYMENT_NAME") - print("A team of agents specialized in software engineering is available for requests.") - while True: - user_input = input("Input (type 'quit' or 'exit' to exit): ") - if user_input.lower() == "quit": - break - elif user_input.lower() == "exit": - break - agent_team.process_request(request=user_input) + if model_deployment_name is not None: + AgentTraceConfigurator(agents_client=agents_client).setup_tracing() + with agents_client: + agent_team = AgentTeam("test_team", agents_client=agents_client) + agent_team.add_agent( + model=model_deployment_name, + name="Coder", + instructions="You are software engineer who writes great code. Your name is Coder.", + ) + agent_team.add_agent( + model=model_deployment_name, + name="Reviewer", + instructions="You are software engineer who reviews code. 
Your name is Reviewer.", + ) + agent_team.assemble_team() - agent_team.dismantle_team() -else: - print("Error: Please define the environment variable MODEL_DEPLOYMENT_NAME.") + print("A team of agents specialized in software engineering is available for requests.") + while True: + user_input = input("Input (type 'quit' or 'exit' to exit): ") + if user_input.lower() == "quit": + break + elif user_input.lower() == "exit": + break + agent_team.process_request(request=user_input) + + agent_team.dismantle_team() + else: + print("Error: Please define the environment variable MODEL_DEPLOYMENT_NAME.") diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_agent_team_custom_team_leader.py b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_agent_team_custom_team_leader.py new file mode 100644 index 000000000000..3a7d89494cfe --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_agent_team_custom_team_leader.py @@ -0,0 +1,120 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use multiple agents with AgentTeam, with traces. + +USAGE: + python sample_agents_agent_team_custom_team_leader.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity + + Set these environment variables with your own values: + PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + MODEL_DEPLOYMENT_NAME - the name of the model deployment to use. +""" + +import os +from typing import Optional, Set +from azure.ai.projects import AIProjectClient +from azure.identity import DefaultAzureCredential +from utils.agent_team import AgentTeam, AgentTask +from utils.agent_trace_configurator import AgentTraceConfigurator +from azure.ai.agents.models import FunctionTool, ToolSet + +project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) + +model_deployment_name = os.getenv("MODEL_DEPLOYMENT_NAME") + + +def create_task(team_name: str, recipient: str, request: str, requestor: str) -> str: + """ + Requests another agent in the team to complete a task. + + :param team_name (str): The name of the team. + :param recipient (str): The name of the agent that is being requested to complete the task. + :param request (str): A description of the task to complete. This can also be a question. + :param requestor (str): The name of the agent who is requesting the task. + :return: True if the task was successfully received, False otherwise.
+ :rtype: str + """ + task = AgentTask(recipient=recipient, task_description=request, requestor=requestor) + team: Optional[AgentTeam] = None + try: + team = AgentTeam.get_team(team_name) + except: + pass + if team is not None: + team.add_task(task) + return "True" + return "False" + + +# Any additional functions that might be used by the agents: +agent_team_default_functions: Set = { + create_task, +} + +default_function_tool = FunctionTool(functions=agent_team_default_functions) + +with project_client: + agents_client = project_client.agents + + agents_client.enable_auto_function_calls({create_task}) + + if model_deployment_name is not None: + AgentTraceConfigurator(agents_client=agents_client).setup_tracing() + with agents_client: + agent_team = AgentTeam("test_team", agents_client=agents_client) + toolset = ToolSet() + toolset.add(default_function_tool) + agent_team.set_team_leader( + model=model_deployment_name, + name="TeamLeader", + instructions="You are an agent named 'TeamLeader'. You are a leader of a team of agents. The name of your team is 'test_team'." + "You are an agent that is responsible for receiving requests from user and utilizing a team of agents to complete the task. " + "When you are passed a request, the only thing you will do is evaluate which team member should do which task next to complete the request. " + "You will use the provided create_task function to create a task for the agent that is best suited for handling the task next. " + "You will respond with the description of who you assigned the task and why. When you think that the original user request is " + "processed completely utilizing all the talent available in the team, you do not have to create anymore tasks. " + "Using the skills of all the team members when applicable is highly valued. " + "Do not create parallel tasks. " + "Here are the other agents in your team: " + "- Coder: You are software engineer who writes great code. Your name is Coder. " + "- Reviewer: You are software engineer who reviews code. Your name is Reviewer.", + toolset=toolset, + ) + agent_team.add_agent( + model=model_deployment_name, + name="Coder", + instructions="You are software engineer who writes great code. Your name is Coder.", + ) + agent_team.add_agent( + model=model_deployment_name, + name="Reviewer", + instructions="You are software engineer who reviews code. Your name is Reviewer.", + ) + agent_team.assemble_team() + + print("A team of agents specialized in software engineering is available for requests.") + while True: + user_input = input("Input (type 'quit' or 'exit' to exit): ") + if user_input.lower() == "quit": + break + elif user_input.lower() == "exit": + break + agent_team.process_request(request=user_input) + + agent_team.dismantle_team() + else: + print("Error: Please define the environment variable MODEL_DEPLOYMENT_NAME.") diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_multi_agent_team.py b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_multi_agent_team.py new file mode 100644 index 000000000000..07aa0f9d81b9 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/sample_agents_multi_agent_team.py @@ -0,0 +1,137 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. 
+# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use an AgentTeam to execute a multi-step + user request with automatic function calling and trace collection. + + The team consists of + • one leader agent - created automatically from the configuration in + `utils/agent_team_config.yaml` + • three worker agents - `TimeWeatherAgent`, `SendEmailAgent`, and + `TemperatureAgent`, each defined in the code below with its own tools + + IMPORTANT - leader-agent model configuration + `utils/agent_team_config.yaml` contains the key TEAM_LEADER_MODEL. + Its value must be the name of a **deployed** model in your Azure AI + project (e.g. "gpt-4o-mini"). + If this deployment does not exist, AgentTeam cannot instantiate the + leader agent and the sample will fail. + +USAGE: + python sample_agents_multi_agent_team.py + + Before running the sample: + + 1. pip install azure-ai-projects azure-ai-agents azure-identity + 2. Ensure `utils/agent_team_config.yaml` is present and TEAM_LEADER_MODEL points + to a valid model deployment. + 3. Set these environment variables with your own values: + PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + MODEL_DEPLOYMENT_NAME - The model deployment name used for the worker agents. +""" + +import os +from typing import Set + +from utils.user_functions_with_traces import ( + fetch_current_datetime, + fetch_weather, + send_email_using_recipient_name, + convert_temperature, +) + +from azure.ai.projects import AIProjectClient +from azure.ai.agents.models import ToolSet, FunctionTool +from azure.identity import DefaultAzureCredential +from utils.agent_team import AgentTeam, _create_task +from utils.agent_trace_configurator import AgentTraceConfigurator + +project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) + +user_function_set_1: Set = {fetch_current_datetime, fetch_weather} + +user_function_set_2: Set = {send_email_using_recipient_name} + +user_function_set_3: Set = {convert_temperature} + + +with project_client: + agents_client = project_client.agents + + agents_client.enable_auto_function_calls( + { + _create_task, + fetch_current_datetime, + fetch_weather, + send_email_using_recipient_name, + convert_temperature, + } + ) + + model_deployment_name = os.getenv("MODEL_DEPLOYMENT_NAME") + + if model_deployment_name is not None: + AgentTraceConfigurator(agents_client=agents_client).setup_tracing() + with agents_client: + + functions = FunctionTool(functions=user_function_set_1) + toolset1 = ToolSet() + toolset1.add(functions) + + agent_team = AgentTeam("test_team", agents_client=agents_client) + + agent_team.add_agent( + model=model_deployment_name, + name="TimeWeatherAgent", + instructions="You are a specialized agent for time and weather queries.", + toolset=toolset1, + can_delegate=True, + ) + + functions = FunctionTool(functions=user_function_set_2) + toolset2 = ToolSet() + toolset2.add(functions) + + agent_team.add_agent( + model=model_deployment_name, + name="SendEmailAgent", + instructions="You are a specialized agent for sending emails.", + toolset=toolset2, + can_delegate=False, + ) + + functions = FunctionTool(functions=user_function_set_3) + toolset3 = ToolSet() + toolset3.add(functions) + + agent_team.add_agent( + model=model_deployment_name, + name="TemperatureAgent", + instructions="You are a specialized agent for temperature conversion.", + toolset=toolset3, + can_delegate=False, + ) + + 
agent_team.assemble_team() + + user_request = ( + "Hello, Please provide me current time in '%Y-%m-%d %H:%M:%S' format, and the weather in New York. " + "Finally, convert the Celsius to Fahrenheit and send an email to Example Recipient with summary of results." + ) + + # Once process_request is called, the TeamLeader will coordinate. + # The loop in process_request will pick up tasks from the queue, assign them, and so on. + agent_team.process_request(request=user_request) + + agent_team.dismantle_team() + else: + print("Error: Please define the environment variable MODEL_DEPLOYMENT_NAME.") diff --git a/sdk/ai/azure-ai-agents/samples/agents_multiagent/utils/agent_team.py b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/utils/agent_team.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_multiagent/utils/agent_team.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/utils/agent_team.py diff --git a/sdk/ai/azure-ai-agents/samples/agents_multiagent/utils/agent_team_config.yaml b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/utils/agent_team_config.yaml similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_multiagent/utils/agent_team_config.yaml rename to sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/utils/agent_team_config.yaml diff --git a/sdk/ai/azure-ai-agents/samples/agents_multiagent/utils/agent_trace_configurator.py b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/utils/agent_trace_configurator.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_multiagent/utils/agent_trace_configurator.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/utils/agent_trace_configurator.py diff --git a/sdk/ai/azure-ai-agents/samples/agents_multiagent/utils/user_functions_with_traces.py b/sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/utils/user_functions_with_traces.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_multiagent/utils/user_functions_with_traces.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_multiagent/utils/user_functions_with_traces.py diff --git a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_bing_grounding.py b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_bing_grounding.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_bing_grounding.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_bing_grounding.py index f63e475d631c..89e130126a0e 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_bing_grounding.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_bing_grounding.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -28,7 +28,7 @@ import os from typing import Any from azure.identity import DefaultAzureCredential -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ( MessageDeltaChunk, RunStep, @@ -79,12 +79,13 @@ def on_unhandled_event(self, event_type: str, event_data: 
Any) -> None: print(f"Unhandled Event Type: {event_type}, Data: {event_data}") -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents bing_connection_id = os.environ["AZURE_BING_CONNECTION_ID"] print(f"Bing Connection ID: {bing_connection_id}") diff --git a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_functions.py b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_functions.py similarity index 96% rename from sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_functions.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_functions.py index 83d56502a119..e16a99df53a7 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_functions.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_functions.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -25,7 +25,7 @@ from typing import Any import os, sys -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ( AgentEventHandler, FunctionTool, @@ -46,7 +46,7 @@ sys.path.insert(0, root_path) from samples.utils.user_functions import user_functions -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -108,7 +108,8 @@ def on_unhandled_event(self, event_type: str, event_data: Any) -> None: print(f"Unhandled Event Type: {event_type}, Data: {event_data}") -with agents_client: +with project_client: + agents_client = project_client.agents # [START create_agent_with_function_tool] functions = FunctionTool(user_functions) diff --git a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_toolset.py b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_toolset.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_toolset.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_toolset.py index c1004de68469..0ac79409b8c4 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_eventhandler_with_toolset.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_eventhandler_with_toolset.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,7 +23,7 @@ the "Models + endpoints" tab in your Azure AI Foundry project. 
""" -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ( MessageDeltaChunk, ListSortOrder, @@ -44,7 +44,7 @@ sys.path.insert(0, root_path) from samples.utils.user_functions import user_functions -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -79,7 +79,9 @@ def on_unhandled_event(self, event_type: str, event_data: Any) -> None: print(f"Unhandled Event Type: {event_type}, Data: {event_data}") -with agents_client: +with project_client: + agents_client = project_client.agents + # [START create_agent_with_function_tool] functions = FunctionTool(user_functions) toolset = ToolSet() diff --git a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_bing_grounding.py b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_bing_grounding.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_bing_grounding.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_bing_grounding.py index cbb500367ad2..32fe5956c33e 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_bing_grounding.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_bing_grounding.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -26,7 +26,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import AgentStreamEvent, RunStepDeltaChunk from azure.ai.agents.models import ( MessageDeltaChunk, @@ -40,12 +40,14 @@ ) from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents + bing_connection_id = os.environ["AZURE_BING_CONNECTION_ID"] bing = BingGroundingTool(connection_id=bing_connection_id) print(f"Bing Connection ID: {bing_connection_id}") diff --git a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_file_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_file_search.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_file_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_file_search.py index dd767c3285e3..dab751fc05df 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_file_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_file_search.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -24,19 +24,20 @@ """ 
import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import AgentStreamEvent, FileSearchTool, RunStepDeltaChunk from azure.ai.agents.models import MessageDeltaChunk, RunStep, ThreadMessage, ThreadRun, FilePurpose, ListSortOrder from azure.identity import DefaultAzureCredential asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # Upload file and create vector store file = agents_client.files.upload_and_poll(file_path=asset_file_path, purpose=FilePurpose.AGENTS) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_functions.py b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_functions.py new file mode 100644 index 000000000000..bc5646270da8 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_functions.py @@ -0,0 +1,148 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use agent operations with iteration and functions from + the Azure Agents service using a synchronous client. + +USAGE: + python sample_agents_stream_iteration_with_functions.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity + + Set these environment variables with your own values: + 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + 2) MODEL_DEPLOYMENT_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Azure AI Foundry project. 
+""" +from typing import Any + +import os, sys +from azure.ai.projects import AIProjectClient +from azure.ai.agents.models import ( + FunctionTool, + ListSortOrder, + MessageDeltaChunk, + RequiredFunctionToolCall, + RunStep, + SubmitToolOutputsAction, + ThreadMessage, + ThreadRun, + ToolOutput, + AgentStreamEvent, + RunStepDeltaChunk, +) +from azure.identity import DefaultAzureCredential + +current_path = os.path.dirname(__file__) +root_path = os.path.abspath(os.path.join(current_path, os.pardir, os.pardir)) +if root_path not in sys.path: + sys.path.insert(0, root_path) +from samples.utils.user_functions import user_functions + +project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) + +with project_client: + agents_client = project_client.agents + + # [START create_agent_with_function_tool] + functions = FunctionTool(user_functions) + + agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are a helpful agent", + tools=functions.definitions, + ) + # [END create_agent_with_function_tool] + print(f"Created agent, ID: {agent.id}") + + thread = agents_client.threads.create() + print(f"Created thread, thread ID {thread.id}") + + message = agents_client.messages.create( + thread_id=thread.id, + role="user", + content="Hello, send an email with the datetime and weather information in New York? Also let me know the details.", + ) + print(f"Created message, message ID {message.id}") + + with agents_client.runs.stream(thread_id=thread.id, agent_id=agent.id) as stream: + + for event_type, event_data, _ in stream: + + if isinstance(event_data, MessageDeltaChunk): + print(f"Text delta received: {event_data.text}") + + elif isinstance(event_data, RunStepDeltaChunk): + print(f"RunStepDeltaChunk received. ID: {event_data.id}.") + + elif isinstance(event_data, ThreadMessage): + print(f"ThreadMessage created. ID: {event_data.id}, Status: {event_data.status}") + + elif isinstance(event_data, ThreadRun): + print(f"ThreadRun status: {event_data.status}") + + if event_data.status == "failed": + print(f"Run failed. Error: {event_data.last_error}") + + if event_data.status == "requires_action" and isinstance( + event_data.required_action, SubmitToolOutputsAction + ): + tool_calls = event_data.required_action.submit_tool_outputs.tool_calls + + tool_outputs = [] + for tool_call in tool_calls: + if isinstance(tool_call, RequiredFunctionToolCall): + try: + output = functions.execute(tool_call) + tool_outputs.append( + ToolOutput( + tool_call_id=tool_call.id, + output=output, + ) + ) + except Exception as e: + print(f"Error executing tool_call {tool_call.id}: {e}") + + print(f"Tool outputs: {tool_outputs}") + if tool_outputs: + # Once we receive 'requires_action' status, the next event will be DONE. + # Here we associate our existing event handler to the next stream. + agents_client.runs.submit_tool_outputs_stream( + thread_id=event_data.thread_id, + run_id=event_data.id, + tool_outputs=tool_outputs, + event_handler=stream, + ) + + elif isinstance(event_data, RunStep): + print(f"RunStep type: {event_data.type}, Status: {event_data.status}") + + elif event_type == AgentStreamEvent.ERROR: + print(f"An error occurred. 
Data: {event_data}") + + elif event_type == AgentStreamEvent.DONE: + print("Stream completed.") + + else: + print(f"Unhandled Event Type: {event_type}, Data: {event_data}") + + agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + for msg in messages: + if msg.text_messages: + last_text = msg.text_messages[-1] + print(f"{msg.role}: {last_text.text.value}") diff --git a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_toolset.py b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_toolset.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_toolset.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_toolset.py index c2f58bb1f005..4562c7bfa381 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_iteration_with_toolset.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_iteration_with_toolset.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,7 +23,7 @@ """ import os, sys -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import AgentStreamEvent, RunStepDeltaChunk from azure.ai.agents.models import ( MessageDeltaChunk, @@ -41,7 +41,7 @@ sys.path.insert(0, root_path) from samples.utils.user_functions import user_functions -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -50,7 +50,9 @@ toolset = ToolSet() toolset.add(functions) -with agents_client: +with project_client: + agents_client = project_client.agents + agents_client.enable_auto_function_calls(toolset) agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], diff --git a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_with_base_override_eventhandler.py b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_with_base_override_eventhandler.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_with_base_override_eventhandler.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_with_base_override_eventhandler.py index bbd0d688b3b9..854a5bdc7221 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_streaming/sample_agents_stream_with_base_override_eventhandler.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_streaming/sample_agents_stream_with_base_override_eventhandler.py @@ -16,7 +16,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -27,7 +27,7 @@ import json from typing import Generator, Optional -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ( MessageDeltaChunk, MessageDeltaTextContent, @@ -39,6 +39,12 @@ import os 
+project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) + + # Our goal is to parse the event data in a string and return the chunk in text for each iteration. # Because we want the iteration to be a string, we define str as the generic type for BaseAsyncAgentEventHandler # and override the _process_event method to return a string. @@ -74,12 +80,9 @@ def get_stream_chunks(self) -> Generator[str, None, None]: yield chunk -agents_client = AgentsClient( - endpoint=os.environ["PROJECT_ENDPOINT"], - credential=DefaultAzureCredential(), -) +with project_client: + agents_client = project_client.agents -with agents_client: agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" ) diff --git a/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_async_with_azure_monitor_tracing.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_async_with_azure_monitor_tracing.py new file mode 100644 index 000000000000..a10148e0a868 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_async_with_azure_monitor_tracing.py @@ -0,0 +1,90 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to use basic agent operations from + the Azure Agents service using an asynchronous client with Azure Monitor tracing. + View the results in the "Tracing" tab in your Azure AI Foundry project page. + +USAGE: + python sample_agents_basics_async_with_azure_monitor_tracing.py + + Before running the sample: + + pip install azure-ai-projects azure-ai-agents azure-identity opentelemetry-sdk azure-monitor-opentelemetry aiohttp + + Set these environment variables with your own values: + * PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Azure AI Foundry portal. + * AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED - Optional. Set to `true` to trace the content of chat + messages, which may contain personal data. False by default. + * APPLICATIONINSIGHTS_CONNECTION_STRING - Set to the connection string of your Application Insights resource. + This is used to send telemetry data to Azure Monitor. You can also get the connection string programmatically + from AIProjectClient using the `telemetry.get_connection_string` method. A code sample showing how to do this + can be found in the `sample_telemetry_async.py` file in the azure-ai-projects telemetry samples.
+""" +import asyncio +import time +from azure.ai.projects.aio import AIProjectClient +from azure.ai.agents.models import ListSortOrder, MessageTextContent +from azure.identity.aio import DefaultAzureCredential +from opentelemetry import trace +import os +from azure.monitor.opentelemetry import configure_azure_monitor + +scenario = os.path.basename(__file__) +tracer = trace.get_tracer(__name__) + + +async def main() -> None: + + async with DefaultAzureCredential() as creds: + async with AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=creds, + ) as project_client: + + async with project_client: + agents_client = project_client.agents + + # Enable Azure Monitor tracing + application_insights_connection_string = os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"] + configure_azure_monitor(connection_string=application_insights_connection_string) + + with tracer.start_as_current_span(scenario): + async with agents_client: + agent = await agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name="my-agent", + instructions="You are helpful agent", + ) + print(f"Created agent, agent ID: {agent.id}") + + thread = await agents_client.threads.create() + print(f"Created thread, thread ID: {thread.id}") + + message = await agents_client.messages.create( + thread_id=thread.id, role="user", content="Hello, tell me a joke" + ) + print(f"Created message, message ID: {message.id}") + + run = await agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) + print(f"Run completed with status: {run.status}") + + await agents_client.delete_agent(agent.id) + print("Deleted agent") + + messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) + async for msg in messages: + last_part = msg.content[-1] + if isinstance(last_part, MessageTextContent): + print(f"{msg.role}: {last_part.text.value}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_async_with_azure_monitor_tracing.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_async_with_console_tracing.py similarity index 58% rename from sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_async_with_azure_monitor_tracing.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_async_with_console_tracing.py index 2f656421be26..df6c85e59be9 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_async_with_azure_monitor_tracing.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_async_with_console_tracing.py @@ -6,59 +6,71 @@ """ DESCRIPTION: This sample demonstrates how to use basic agent operations from - the Azure Agents service using a asynchronous client with Azure Monitor tracing. - View the results in the "Tracing" tab in your Azure AI Foundry project page. + the Azure Agents service using a asynchronous client with tracing to console. 
USAGE: - python sample_agents_basics_async_with_azure_monitor_tracing.py + python sample_agents_basics_async_with_console_tracing.py Before running the sample: - pip install azure-ai-agents azure-identity opentelemetry-sdk azure-monitor-opentelemetry aiohttp + pip install azure-ai-projects azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry aiohttp + + If you want to export telemetry to OTLP endpoint (such as Aspire dashboard + https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) + install: + + pip install azure-ai-projects opentelemetry-exporter-otlp-proto-grpc Set these environment variables with your own values: * PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview page of your Azure AI Foundry portal. * AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED - Optional. Set to `true` to trace the content of chat messages, which may contain personal data. False by default. - * AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED - Optional. Set to `true` to trace the content of chat - messages, which may contain personal data. False by default. - * APPLICATIONINSIGHTS_CONNECTION_STRING - Set to the connection string of your Application Insights resource. - This is used to send telemetry data to Azure Monitor. You can also get the connection string programmatically - from AIProjectClient using the `telemetry.get_connection_string` method. A code sample showing how to do this - can be found in the `sample_telemetry_async.py` file in the azure-ai-projects telemetry samples. """ import asyncio import time -from azure.ai.agents.aio import AgentsClient +import sys +from azure.core.settings import settings + +settings.tracing_implementation = "opentelemetry" +from opentelemetry import trace +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter +from azure.ai.projects.aio import AIProjectClient from azure.ai.agents.models import ListSortOrder, MessageTextContent from azure.identity.aio import DefaultAzureCredential from opentelemetry import trace import os -from azure.monitor.opentelemetry import configure_azure_monitor +from azure.ai.agents.telemetry import AIAgentsInstrumentor + +# Setup tracing to console +# Requires opentelemetry-sdk +span_exporter = ConsoleSpanExporter() +tracer_provider = TracerProvider() +tracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter)) +trace.set_tracer_provider(tracer_provider) +tracer = trace.get_tracer(__name__) + +AIAgentsInstrumentor().instrument() scenario = os.path.basename(__file__) tracer = trace.get_tracer(__name__) +@tracer.start_as_current_span(__file__) async def main() -> None: async with DefaultAzureCredential() as creds: - agents_client = AgentsClient( + async with AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=creds, - ) + ) as project_client: - # Enable Azure Monitor tracing - application_insights_connection_string = os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"] - configure_azure_monitor(connection_string=application_insights_connection_string) + async with project_client: + agents_client = project_client.agents - with tracer.start_as_current_span(scenario): - async with agents_client: agent = await agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name="my-agent", - instructions="You are helpful agent", + model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" ) print(f"Created agent, 
agent ID: {agent.id}") diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_azure_monitor_tracing.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_azure_monitor_tracing.py similarity index 92% rename from sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_azure_monitor_tracing.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_azure_monitor_tracing.py index 0633bdfd9893..6153b3d1902c 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_azure_monitor_tracing.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_azure_monitor_tracing.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity azure-monitor-opentelemetry + pip install azure-ai-projects azure-ai-agents azure-identity azure-monitor-opentelemetry Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -30,11 +30,11 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ListSortOrder from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -51,7 +51,9 @@ tracer = trace.get_tracer(__name__) with tracer.start_as_current_span(scenario): - with agents_client: + with project_client: + agents_client = project_client.agents + # [END enable_tracing] agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_console_tracing.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_console_tracing.py similarity index 87% rename from sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_console_tracing.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_console_tracing.py index 294e4201a14c..0a93e44a2a0a 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_console_tracing.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_console_tracing.py @@ -13,13 +13,13 @@ Before running the sample: - pip install azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry + pip install azure-ai-projects azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry If you want to export telemetry to OTLP endpoint (such as Aspire dashboard https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) install: - pip install opentelemetry-exporter-otlp-proto-grpc + pip install azure-ai-projects opentelemetry-exporter-otlp-proto-grpc Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -34,11 +34,11 @@ from azure.core.settings import settings settings.tracing_implementation = "opentelemetry" -# Install opentelemetry with command "pip install opentelemetry-sdk". +# Install opentelemetry with command "pip install azure-ai-projects opentelemetry-sdk". 
from opentelemetry import trace from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ListSortOrder from azure.identity import DefaultAzureCredential from azure.ai.agents.telemetry import AIAgentsInstrumentor @@ -53,14 +53,16 @@ AIAgentsInstrumentor().instrument() -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) scenario = os.path.basename(__file__) with tracer.start_as_current_span(scenario): - with agents_client: + with project_client: + agents_client = project_client.agents + agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" ) diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_console_tracing_custom_attributes.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_console_tracing_custom_attributes.py similarity index 90% rename from sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_console_tracing_custom_attributes.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_console_tracing_custom_attributes.py index 2484a390677c..1b51abdaf41b 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_console_tracing_custom_attributes.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_basics_with_console_tracing_custom_attributes.py @@ -14,13 +14,13 @@ Before running the sample: - pip install azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry + pip install azure-ai-projects azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry If you want to export telemetry to OTLP endpoint (such as Aspire dashboard https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) install: - pip install opentelemetry-exporter-otlp-proto-grpc + pip install azure-ai-projects opentelemetry-exporter-otlp-proto-grpc Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -36,11 +36,11 @@ from azure.core.settings import settings settings.tracing_implementation = "opentelemetry" -# Install opentelemetry with command "pip install opentelemetry-sdk". +# Install opentelemetry with command "pip install azure-ai-projects opentelemetry-sdk". 
from opentelemetry import trace from opentelemetry.sdk.trace import TracerProvider, SpanProcessor, ReadableSpan, Span from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ListSortOrder from azure.identity import DefaultAzureCredential from azure.ai.agents.telemetry import AIAgentsInstrumentor @@ -75,7 +75,7 @@ def on_end(self, span: ReadableSpan): AIAgentsInstrumentor().instrument() -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -88,7 +88,10 @@ def on_end(self, span: ReadableSpan): tracer = trace.get_tracer(__name__) with tracer.start_as_current_span(scenario): - with agents_client: + + with project_client: + agents_client = project_client.agents + agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are helpful agent" ) diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_stream_eventhandler_with_azure_monitor_tracing.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_stream_eventhandler_with_azure_monitor_tracing.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_stream_eventhandler_with_azure_monitor_tracing.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_stream_eventhandler_with_azure_monitor_tracing.py index d1549dc23851..aeb05da1e7e0 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_stream_eventhandler_with_azure_monitor_tracing.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_stream_eventhandler_with_azure_monitor_tracing.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity opentelemetry-sdk azure-monitor-opentelemetry + pip install azure-ai-projects azure-ai-agents azure-identity opentelemetry-sdk azure-monitor-opentelemetry Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -30,7 +30,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( AgentEventHandler, @@ -44,7 +44,7 @@ from opentelemetry import trace from azure.monitor.opentelemetry import configure_azure_monitor -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -87,7 +87,10 @@ def on_unhandled_event(self, event_type: str, event_data: Any) -> None: tracer = trace.get_tracer(__name__) with tracer.start_as_current_span(scenario): - with agents_client: + + with project_client: + agents_client = project_client.agents + # Create an agent and run stream with event handler agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are a helpful agent" diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_stream_eventhandler_with_console_tracing.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_stream_eventhandler_with_console_tracing.py similarity index 92% rename from sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_stream_eventhandler_with_console_tracing.py rename to 
sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_stream_eventhandler_with_console_tracing.py index 9dcc0b42456b..2f7a660d21f9 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_stream_eventhandler_with_console_tracing.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_stream_eventhandler_with_console_tracing.py @@ -13,13 +13,13 @@ Before running the sample: - pip install azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry + pip install azure-ai-projects azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry If you want to export telemetry to OTLP endpoint (such as Aspire dashboard https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) install: - pip install opentelemetry-exporter-otlp-proto-grpc + pip install azure-ai-projects opentelemetry-exporter-otlp-proto-grpc Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -37,7 +37,7 @@ from opentelemetry import trace from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( AgentEventHandler, @@ -63,7 +63,7 @@ scenario = os.path.basename(__file__) tracer = trace.get_tracer(__name__) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -102,7 +102,10 @@ def on_unhandled_event(self, event_type: str, event_data: Any) -> None: tracer = trace.get_tracer(__name__) with tracer.start_as_current_span(scenario): - with agents_client: + + with project_client: + agents_client = project_client.agents + # Create an agent and run stream with event handler agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are a helpful agent" diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_toolset_with_azure_monitor_tracing.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_toolset_with_azure_monitor_tracing.py similarity index 92% rename from sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_toolset_with_azure_monitor_tracing.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_toolset_with_azure_monitor_tracing.py index ccc8168c622a..be5e5e9ed04f 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_toolset_with_azure_monitor_tracing.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_toolset_with_azure_monitor_tracing.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity opentelemetry-sdk azure-monitor-opentelemetry + pip install azure-ai-projects azure-ai-agents azure-identity opentelemetry-sdk azure-monitor-opentelemetry Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -31,7 +31,7 @@ from typing import Any, Callable, Set import os, time, json -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( FunctionTool, @@ -42,7 +42,7 @@ from 
azure.monitor.opentelemetry import configure_azure_monitor from azure.ai.agents.telemetry import trace_function -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -89,11 +89,15 @@ def fetch_weather(location: str) -> str: toolset = ToolSet() toolset.add(functions) -# To enable tool calls executed automatically -agents_client.enable_auto_function_calls(toolset) with tracer.start_as_current_span(scenario): - with agents_client: + + with project_client: + agents_client = project_client.agents + + # To enable tool calls executed automatically + agents_client.enable_auto_function_calls(toolset) + # Create an agent and run user's request with function calls agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], diff --git a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_toolset_with_console_tracing.py b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_toolset_with_console_tracing.py similarity index 89% rename from sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_toolset_with_console_tracing.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_toolset_with_console_tracing.py index fd7b65e3d990..f6ddd8792ebb 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_toolset_with_console_tracing.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_telemetry/sample_agents_toolset_with_console_tracing.py @@ -13,13 +13,13 @@ Before running the sample: - pip install azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry + pip install azure-ai-projects azure-ai-agents azure-identity opentelemetry-sdk azure-core-tracing-opentelemetry If you want to export telemetry to OTLP endpoint (such as Aspire dashboard https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) install: - pip install opentelemetry-exporter-otlp-proto-grpc + pip install azure-ai-projects opentelemetry-exporter-otlp-proto-grpc Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -35,11 +35,11 @@ from azure.core.settings import settings settings.tracing_implementation = "opentelemetry" -# Install opentelemetry with command "pip install opentelemetry-sdk". +# Install opentelemetry with command "pip install azure-ai-projects opentelemetry-sdk". 
from opentelemetry import trace from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( FunctionTool, @@ -62,7 +62,7 @@ scenario = os.path.basename(__file__) tracer = trace.get_tracer(__name__) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -102,11 +102,14 @@ def fetch_weather(location: str) -> str: toolset = ToolSet() toolset.add(functions) -# To enable tool calls executed automatically -agents_client.enable_auto_function_calls(toolset) with tracer.start_as_current_span(scenario): - with agents_client: + with project_client: + agents_client = project_client.agents + + # To enable tool calls executed automatically + agents_client.enable_auto_function_calls(toolset) + # Create an agent and run user's request with function calls agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/__init__.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/__init__.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_tools/__init__.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/__init__.py diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_azure_ai_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_azure_ai_search.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_azure_ai_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_azure_ai_search.py index ee1a88e8435f..600dfe42243b 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_azure_ai_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_azure_ai_search.py @@ -23,7 +23,7 @@ Before running the sample: - pip install azure-ai-projects azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -35,11 +35,11 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import AzureAISearchQueryType, AzureAISearchTool, ListSortOrder, MessageRole -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -55,7 +55,9 @@ ) # Create agent with AI search tool and process agent run -with agents_client: +with project_client: + agents_client = project_client.agents + agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_azure_functions.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_azure_functions.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_azure_functions.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_azure_functions.py
index 0b30ba59ac26..b7c095946368 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_azure_functions.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_azure_functions.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -27,16 +27,17 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import AzureFunctionStorageQueue, AzureFunctionTool, MessageRole from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # [START create_agent_with_azure_function_tool] storage_service_endpoint = os.environ["STORAGE_SERVICE_ENDPONT"] diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_bing_custom_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_bing_custom_search.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_bing_custom_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_bing_custom_search.py index 7bda57a11237..04756402b136 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_bing_custom_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_bing_custom_search.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set this environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -26,7 +26,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import BingCustomSearchTool @@ -35,7 +35,7 @@ # At the moment, it should be in the format ";;;" # Customer needs to login to Azure subscription via Azure CLI and set the environment variables -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -46,7 +46,9 @@ bing_custom_tool = BingCustomSearchTool(connection_id=conn_id, instance_name="") # Create Agent with the Bing Custom Search tool and process Agent run -with agents_client: +with project_client: + agents_client = project_client.agents + agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_bing_grounding.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_bing_grounding.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_bing_grounding.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_bing_grounding.py index c03d484def7e..f042a17da650 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_bing_grounding.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_bing_grounding.py @@ -14,7 +14,7 @@ Before running the sample: - pip install 
azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -26,12 +26,12 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import MessageRole, BingGroundingTool from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -43,7 +43,8 @@ bing = BingGroundingTool(connection_id=conn_id) # Create agent with the bing tool and process agent run -with agents_client: +with project_client: + agents_client = project_client.agents agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_code_interpreter.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_code_interpreter.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_code_interpreter.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_code_interpreter.py index 16efeae47cc8..56ada75bbdf4 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_code_interpreter.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_code_interpreter.py @@ -14,7 +14,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -24,7 +24,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import CodeInterpreterTool from azure.ai.agents.models import FilePurpose, MessageRole from azure.identity import DefaultAzureCredential @@ -34,12 +34,13 @@ os.path.join(os.path.dirname(__file__), "../assets/synthetic_500_quarterly_results.csv") ) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # Upload a file and wait for it to be processed # [START upload_file_and_create_agent_with_code_interpreter] diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_code_interpreter_attachment_enterprise_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_code_interpreter_attachment_enterprise_search.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_code_interpreter_attachment_enterprise_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_code_interpreter_attachment_enterprise_search.py index db05d687eb9d..25501ca4aab8 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_code_interpreter_attachment_enterprise_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_code_interpreter_attachment_enterprise_search.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure 
AI Project endpoint, as found in the Overview @@ -25,7 +25,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ( CodeInterpreterTool, MessageAttachment, @@ -35,12 +35,13 @@ ) from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents code_interpreter = CodeInterpreterTool() diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_connected_agent.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_connected_agent.py similarity index 74% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_connected_agent.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_connected_agent.py index 1ad5f890f32b..2b8414030745 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_connected_agent.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_connected_agent.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,34 +23,36 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ConnectedAgentTool, MessageRole from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) connected_agent_name = "stock_price_bot" -stock_price_agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name=connected_agent_name, - instructions=( - "Your job is to get the stock price of a company. If asked for the Microsoft stock price, always return $350." - ), -) +with project_client: + agents_client = project_client.agents -# [START create_agent_with_connected_agent_tool] -# Initialize Connected Agent tool with the agent id, name, and description -connected_agent = ConnectedAgentTool( - id=stock_price_agent.id, name=connected_agent_name, description="Gets the stock price of a company" -) + stock_price_agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name=connected_agent_name, + instructions=( + "Your job is to get the stock price of a company. If asked for the Microsoft stock price, always return $350." 
+ ), + ) + + # [START create_agent_with_connected_agent_tool] + # Initialize Connected Agent tool with the agent id, name, and description + connected_agent = ConnectedAgentTool( + id=stock_price_agent.id, name=connected_agent_name, description="Gets the stock price of a company" + ) -# Create agent with the Connected Agent tool and process assistant run -with agents_client: + # Create agent with the Connected Agent tool and process assistant run agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-assistant", diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_enterprise_file_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_enterprise_file_search.py similarity index 93% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_enterprise_file_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_enterprise_file_search.py index 37ed15355117..b183a0431ab2 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_enterprise_file_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_enterprise_file_search.py @@ -12,7 +12,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity azure-ai-ml + pip install azure-ai-projects azure-ai-agents azure-identity azure-ai-ml Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -24,16 +24,17 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import FileSearchTool, ListSortOrder, VectorStoreDataSource, VectorStoreDataSourceAssetType from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # [START upload_file_and_create_agent_with_file_search] # We will upload the local file to Azure and will use it for vector store creation. 
diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_fabric.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_fabric.py similarity index 93% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_fabric.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_fabric.py index e60e64ffe8a3..4e2dcd0ac3b3 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_fabric.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_fabric.py @@ -15,7 +15,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set this environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -27,11 +27,11 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import FabricTool, ListSortOrder -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -45,7 +45,9 @@ fabric = FabricTool(connection_id=conn_id) # Create an Agent with the Fabric tool and process an Agent run -with agents_client: +with project_client: + agents_client = project_client.agents + agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_file_search.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_file_search.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_file_search.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_file_search.py index 47f6f937e54c..00d3b7963d09 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_file_search.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_file_search.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,7 +23,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ( FileSearchTool, FilePurpose, @@ -33,12 +33,13 @@ asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/product_info_1.md")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents # Upload file and create vector store # [START upload_file_create_vector_store_and_agent_with_file_search_tool] diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_functions.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_functions.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_functions.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_functions.py index 2dee8cd21bba..fd712a5f5342 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_functions.py +++ 
b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_functions.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -22,7 +22,7 @@ the "Models + endpoints" tab in your Azure AI Foundry project. """ import os, time, sys -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( FunctionTool, @@ -38,7 +38,7 @@ sys.path.insert(0, root_path) from samples.utils.user_functions import user_functions -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -46,7 +46,9 @@ # Initialize function tool with user functions functions = FunctionTool(functions=user_functions) -with agents_client: +with project_client: + agents_client = project_client.agents + # Create an agent and run user's request with function calls agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_logic_apps.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_logic_apps.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_logic_apps.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_logic_apps.py index 2712ce427b7a..53b7817af440 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_logic_apps.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_logic_apps.py @@ -20,7 +20,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity azure-mgmt-logic + pip install azure-ai-projects azure-ai-agents azure-identity azure-mgmt-logic Set this environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -40,7 +40,7 @@ import sys from typing import Set -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ToolSet, FunctionTool from azure.identity import DefaultAzureCredential @@ -57,7 +57,7 @@ # [START register_logic_app] # Create the agents client -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -85,7 +85,9 @@ } # [END register_logic_app] -with agents_client: +with project_client: + agents_client = project_client.agents + # Create an agent functions = FunctionTool(functions=functions_to_use) toolset = ToolSet() diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_multiple_connected_agents.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_multiple_connected_agents.py similarity index 67% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_multiple_connected_agents.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_multiple_connected_agents.py index ae189ad435fc..95150ffa85eb 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_multiple_connected_agents.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_multiple_connected_agents.py @@ -14,7 +14,7 @@ Before running the sample: - pip install 
azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -24,45 +24,47 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import ConnectedAgentTool, MessageRole from azure.identity import DefaultAzureCredential -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -connected_agent_name = "stock_price_bot" -weather_agent_name = "weather_bot" +with project_client: + agents_client = project_client.agents -stock_price_agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name=connected_agent_name, - instructions=( - "Your job is to get the stock price of a company. If asked for the Microsoft stock price, always return $350." - ), -) + connected_agent_name = "stock_price_bot" + weather_agent_name = "weather_bot" -weather_agent = agents_client.create_agent( - model=os.environ["MODEL_DEPLOYMENT_NAME"], - name=weather_agent_name, - instructions=( - "Your job is to get the weather for a given location. If asked for the weather in Seattle, always return 60 degrees and cloudy." - ), -) + stock_price_agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name=connected_agent_name, + instructions=( + "Your job is to get the stock price of a company. If asked for the Microsoft stock price, always return $350." + ), + ) -# Initialize Connected Agent tools with the agent id, name, and description -connected_agent = ConnectedAgentTool( - id=stock_price_agent.id, name=connected_agent_name, description="Gets the stock price of a company" -) -connected_weather_agent = ConnectedAgentTool( - id=weather_agent.id, name=weather_agent_name, description="Gets the weather for a given location" -) + weather_agent = agents_client.create_agent( + model=os.environ["MODEL_DEPLOYMENT_NAME"], + name=weather_agent_name, + instructions=( + "Your job is to get the weather for a given location. If asked for the weather in Seattle, always return 60 degrees and cloudy." 
+ ), + ) + + # Initialize Connected Agent tools with the agent id, name, and description + connected_agent = ConnectedAgentTool( + id=stock_price_agent.id, name=connected_agent_name, description="Gets the stock price of a company" + ) + connected_weather_agent = ConnectedAgentTool( + id=weather_agent.id, name=weather_agent_name, description="Gets the weather for a given location" + ) -# Create agent with the Connected Agent tool and process assistant run -with agents_client: + # Create agent with the Connected Agent tool and process assistant run agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-assistant", diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_openapi.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_openapi.py similarity index 95% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_openapi.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_openapi.py index fa29a238cc4f..e22b081ccce7 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_openapi.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_openapi.py @@ -15,7 +15,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity jsonref + pip install azure-ai-projects azure-ai-agents azure-identity jsonref Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -26,7 +26,7 @@ import os import jsonref -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import OpenApiTool, OpenApiAnonymousAuthDetails @@ -34,7 +34,7 @@ countries_asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/countries.json")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -58,7 +58,9 @@ ) # Create agent with OpenApi tool and process agent run -with agents_client: +with project_client: + agents_client = project_client.agents + agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_openapi_connection_auth.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_openapi_connection_auth.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_openapi_connection_auth.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_openapi_connection_auth.py index 13e3076f5f17..94602da687b8 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_openapi_connection_auth.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_openapi_connection_auth.py @@ -25,7 +25,7 @@ Save that connection name as the PROJECT_OPENAPI_CONNECTION_NAME environment variable - pip install azure-ai-agents azure-identity jsonref + pip install azure-ai-projects azure-ai-agents azure-identity jsonref Set this environment variables with your own values: PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -36,13 +36,13 @@ import os import jsonref -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import OpenApiTool, 
OpenApiConnectionAuthDetails, OpenApiConnectionSecurityScheme asset_file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../assets/tripadvisor_openapi.json")) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -64,7 +64,9 @@ ) # Create an Agent with OpenApi tool and process Agent run -with agents_client: +with project_client: + agents_client = project_client.agents + agent = agents_client.create_agent( model=model_name, name="my-agent", instructions="You are a helpful agent", tools=openapi.definitions ) diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_run_with_toolset.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_run_with_toolset.py similarity index 93% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_run_with_toolset.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_run_with_toolset.py index 1665feb9a9f8..b6c86a10b28d 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_run_with_toolset.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_run_with_toolset.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,7 +23,7 @@ """ import os, sys -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import FunctionTool, ToolSet, CodeInterpreterTool @@ -33,13 +33,15 @@ sys.path.insert(0, root_path) from samples.utils.user_functions import user_functions -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) # Create agent with toolset and process agent run -with agents_client: +with project_client: + agents_client = project_client.agents + # Initialize agent toolset with user functions and code interpreter # [START create_agent_toolset] functions = FunctionTool(user_functions) diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_sharepoint.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_sharepoint.py similarity index 93% rename from sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_sharepoint.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_sharepoint.py index f8448ea1b18f..5d3246b56cf8 100644 --- a/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_sharepoint.py +++ b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/sample_agents_sharepoint.py @@ -17,7 +17,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set this environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -29,7 +29,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import SharepointTool @@ -38,7 +38,7 @@ # At the moment, it should be in the format ";;;" # Customer needs to login to Azure subscription via Azure CLI and set the environment variables -agents_client = 
AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -49,7 +49,9 @@ sharepoint = SharepointTool(connection_id=conn_id) # Create agent with Sharepoint tool and process agent run -with agents_client: +with project_client: + agents_client = project_client.agents + agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", diff --git a/sdk/ai/azure-ai-agents/samples/agents_tools/utils/user_logic_apps.py b/sdk/ai/azure-ai-projects/samples/agents/agents_tools/utils/user_logic_apps.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/agents_tools/utils/user_logic_apps.py rename to sdk/ai/azure-ai-projects/samples/agents/agents_tools/utils/user_logic_apps.py diff --git a/sdk/ai/azure-ai-agents/samples/assets/countries.json b/sdk/ai/azure-ai-projects/samples/agents/assets/countries.json similarity index 100% rename from sdk/ai/azure-ai-agents/samples/assets/countries.json rename to sdk/ai/azure-ai-projects/samples/agents/assets/countries.json diff --git a/sdk/ai/azure-ai-agents/samples/assets/image_file.png b/sdk/ai/azure-ai-projects/samples/agents/assets/image_file.png similarity index 100% rename from sdk/ai/azure-ai-agents/samples/assets/image_file.png rename to sdk/ai/azure-ai-projects/samples/agents/assets/image_file.png diff --git a/sdk/ai/azure-ai-agents/samples/assets/product_info_1.md b/sdk/ai/azure-ai-projects/samples/agents/assets/product_info_1.md similarity index 100% rename from sdk/ai/azure-ai-agents/samples/assets/product_info_1.md rename to sdk/ai/azure-ai-projects/samples/agents/assets/product_info_1.md diff --git a/sdk/ai/azure-ai-agents/samples/assets/synthetic_500_quarterly_results.csv b/sdk/ai/azure-ai-projects/samples/agents/assets/synthetic_500_quarterly_results.csv similarity index 100% rename from sdk/ai/azure-ai-agents/samples/assets/synthetic_500_quarterly_results.csv rename to sdk/ai/azure-ai-projects/samples/agents/assets/synthetic_500_quarterly_results.csv diff --git a/sdk/ai/azure-ai-agents/samples/assets/tripadvisor_openapi.json b/sdk/ai/azure-ai-projects/samples/agents/assets/tripadvisor_openapi.json similarity index 100% rename from sdk/ai/azure-ai-agents/samples/assets/tripadvisor_openapi.json rename to sdk/ai/azure-ai-projects/samples/agents/assets/tripadvisor_openapi.json diff --git a/sdk/ai/azure-ai-agents/samples/assets/weather_openapi.json b/sdk/ai/azure-ai-projects/samples/agents/assets/weather_openapi.json similarity index 100% rename from sdk/ai/azure-ai-agents/samples/assets/weather_openapi.json rename to sdk/ai/azure-ai-projects/samples/agents/assets/weather_openapi.json diff --git a/sdk/ai/azure-ai-projects/samples/agents/sample_agents.py b/sdk/ai/azure-ai-projects/samples/agents/sample_agents.py deleted file mode 100644 index bd430c2502ca..000000000000 --- a/sdk/ai/azure-ai-projects/samples/agents/sample_agents.py +++ /dev/null @@ -1,51 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - Given an AIProjectClient, this sample demonstrates how to access an authenticated - AgentsClient from the azure-ai-agents, associated with your AI Foundry project. - For more information on the azure-ai-agents see https://pypi.org/project/azure-ai-agents. - Find Agent samples here: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples. 
- -USAGE: - python sample_agents.py - - Before running the sample: - - pip install azure-ai-projects azure-identity - - Set this environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the overview page of your - Azure AI Foundry project. - 2) MODEL_DEPLOYMENT_NAME - The AI model deployment name, to be used by your Agent, as found - in your AI Foundry project. -""" - -import os -from azure.identity import DefaultAzureCredential -from azure.ai.projects import AIProjectClient - -endpoint = os.environ["PROJECT_ENDPOINT"] -model_deployment_name = os.environ["MODEL_DEPLOYMENT_NAME"] - -with DefaultAzureCredential(exclude_interactive_browser_credential=False) as credential: - - with AIProjectClient(endpoint=endpoint, credential=credential) as project_client: - - # [START agents_sample] - agent = project_client.agents.create_agent( - model=model_deployment_name, - name="my-agent", - instructions="You are helpful agent", - ) - print(f"Created agent, agent ID: {agent.id}") - - # Do something with your Agent! - # See samples here https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples - - project_client.agents.delete_agent(agent.id) - print("Deleted agent") - # [END connection_sample] diff --git a/sdk/ai/azure-ai-projects/samples/agents/sample_agents_async.py b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_async.py deleted file mode 100644 index 44aa3eb4535e..000000000000 --- a/sdk/ai/azure-ai-projects/samples/agents/sample_agents_async.py +++ /dev/null @@ -1,56 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - -""" -DESCRIPTION: - Given an asynchronous AIProjectClient, this sample demonstrates how to access an authenticated - asynchronous AgentsClient from the azure-ai-agents, associated with your AI Foundry project. - For more information on the azure-ai-agents see https://pypi.org/project/azure-ai-agents. - Find Agent samples here: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples. - -USAGE: - python sample_agents_async.py - - Before running the sample: - - pip install azure-ai-projects azure-identity aiohttp - - Set this environment variables with your own values: - 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the overview page of your - Azure AI Foundry project. - 2) MODEL_DEPLOYMENT_NAME - The AI model deployment name, to be used by your Agent, as found - in your AI Foundry project. -""" - -import os, asyncio -from azure.identity.aio import DefaultAzureCredential -from azure.ai.projects.aio import AIProjectClient - - -async def main() -> None: - - endpoint = os.environ["PROJECT_ENDPOINT"] - model_deployment_name = os.environ["MODEL_DEPLOYMENT_NAME"] - - async with DefaultAzureCredential() as credential: - - async with AIProjectClient(endpoint=endpoint, credential=credential) as project_client: - - agent = await project_client.agents.create_agent( - model=model_deployment_name, - name="my-agent", - instructions="You are helpful agent", - ) - print(f"Created agent, agent ID: {agent.id}") - - # Do something with your Agent! 
- # See samples here https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples - - await project_client.agents.delete_agent(agent.id) - print("Deleted agent") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/sdk/ai/azure-ai-agents/samples/sample_agents_basics.py b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics.py similarity index 92% rename from sdk/ai/azure-ai-agents/samples/sample_agents_basics.py rename to sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics.py index b6d04469b4c7..f5cdc8e56262 100644 --- a/sdk/ai/azure-ai-agents/samples/sample_agents_basics.py +++ b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,21 +23,19 @@ """ import os, time -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ListSortOrder -# [START create_agents_client] -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -# [END create_agents_client] -with agents_client: +with project_client: + agents_client = project_client.agents # [START create_agent] - agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", diff --git a/sdk/ai/azure-ai-agents/samples/sample_agents_basics_stream_eventhandler.py b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_stream_eventhandler.py similarity index 94% rename from sdk/ai/azure-ai-agents/samples/sample_agents_basics_stream_eventhandler.py rename to sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_stream_eventhandler.py index 94a5a944c27e..3839bf205306 100644 --- a/sdk/ai/azure-ai-agents/samples/sample_agents_basics_stream_eventhandler.py +++ b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_stream_eventhandler.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,7 +23,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( @@ -36,7 +36,7 @@ from typing import Any, Optional -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) @@ -71,7 +71,9 @@ def on_unhandled_event(self, event_type: str, event_data: Any) -> Optional[str]: # [END stream_event_handler] -with agents_client: +with project_client: + agents_client = project_client.agents + # Create an agent and run stream with event handler agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are a helpful agent" diff --git a/sdk/ai/azure-ai-agents/samples/sample_agents_basics_stream_iteration.py b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_stream_iteration.py similarity index 92% rename from 
sdk/ai/azure-ai-agents/samples/sample_agents_basics_stream_iteration.py rename to sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_stream_iteration.py index 94437ee16d8a..8e5bcffb2ccf 100644 --- a/sdk/ai/azure-ai-agents/samples/sample_agents_basics_stream_iteration.py +++ b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_stream_iteration.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -23,7 +23,7 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from azure.ai.agents.models import ( AgentStreamEvent, @@ -33,12 +33,14 @@ RunStep, ) -agents_client = AgentsClient( +project_client = AIProjectClient( endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential(), ) -with agents_client: +with project_client: + agents_client = project_client.agents + # Create an agent and run stream with iteration agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="my-agent", instructions="You are a helpful agent" diff --git a/sdk/ai/azure-ai-agents/samples/sample_agents_basics_thread_and_process_run.py b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_thread_and_process_run.py similarity index 88% rename from sdk/ai/azure-ai-agents/samples/sample_agents_basics_thread_and_process_run.py rename to sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_thread_and_process_run.py index 812070e22f63..3f8ef58de884 100644 --- a/sdk/ai/azure-ai-agents/samples/sample_agents_basics_thread_and_process_run.py +++ b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_thread_and_process_run.py @@ -15,7 +15,7 @@ Before running the sample: - pip install azure-ai-agents azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -25,13 +25,18 @@ """ import os -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import AgentThreadCreationOptions, ThreadMessageOptions, ListSortOrder from azure.identity import DefaultAzureCredential -agents_client = AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential()) +project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) + +with project_client: + agents_client = project_client.agents -with agents_client: agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="process-run-sample-agent", diff --git a/sdk/ai/azure-ai-agents/samples/sample_agents_basics_thread_and_run.py b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_thread_and_run.py similarity index 89% rename from sdk/ai/azure-ai-agents/samples/sample_agents_basics_thread_and_run.py rename to sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_thread_and_run.py index 53aa59c16d9e..90b4141f09ad 100644 --- a/sdk/ai/azure-ai-agents/samples/sample_agents_basics_thread_and_run.py +++ b/sdk/ai/azure-ai-projects/samples/agents/sample_agents_basics_thread_and_run.py @@ -13,7 +13,7 @@ Before running the sample: - pip install azure-ai-agents 
azure-identity + pip install azure-ai-projects azure-ai-agents azure-identity Set these environment variables with your own values: 1) PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview @@ -26,13 +26,18 @@ import os import time -from azure.ai.agents import AgentsClient +from azure.ai.projects import AIProjectClient from azure.ai.agents.models import AgentThreadCreationOptions, ThreadMessageOptions, ListSortOrder from azure.identity import DefaultAzureCredential -agents_client = AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential()) +project_client = AIProjectClient( + endpoint=os.environ["PROJECT_ENDPOINT"], + credential=DefaultAzureCredential(), +) + +with project_client: + agents_client = project_client.agents -with agents_client: agent = agents_client.create_agent( model=os.environ["MODEL_DEPLOYMENT_NAME"], name="sample-agent", diff --git a/sdk/ai/azure-ai-agents/samples/utils/__init__.py b/sdk/ai/azure-ai-projects/samples/agents/utils/__init__.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/utils/__init__.py rename to sdk/ai/azure-ai-projects/samples/agents/utils/__init__.py diff --git a/sdk/ai/azure-ai-agents/samples/utils/user_functions.py b/sdk/ai/azure-ai-projects/samples/agents/utils/user_functions.py similarity index 100% rename from sdk/ai/azure-ai-agents/samples/utils/user_functions.py rename to sdk/ai/azure-ai-projects/samples/agents/utils/user_functions.py diff --git a/sdk/ai/azure-ai-projects/samples/inference/async_samples/sample_chat_completions_with_azure_openai_client_async.py b/sdk/ai/azure-ai-projects/samples/inference/async_samples/sample_chat_completions_with_azure_openai_client_async.py index f04b27b79b94..cc8c953e7dff 100644 --- a/sdk/ai/azure-ai-projects/samples/inference/async_samples/sample_chat_completions_with_azure_openai_client_async.py +++ b/sdk/ai/azure-ai-projects/samples/inference/async_samples/sample_chat_completions_with_azure_openai_client_async.py @@ -1,3 +1,4 @@ +# pylint: disable=line-too-long,useless-suppression # ------------------------------------ # Copyright (c) Microsoft Corporation. # Licensed under the MIT License. diff --git a/sdk/ai/azure-ai-projects/samples/inference/sample_chat_completions_with_azure_openai_client.py b/sdk/ai/azure-ai-projects/samples/inference/sample_chat_completions_with_azure_openai_client.py index 2a43b20b8a32..981f177d7b28 100644 --- a/sdk/ai/azure-ai-projects/samples/inference/sample_chat_completions_with_azure_openai_client.py +++ b/sdk/ai/azure-ai-projects/samples/inference/sample_chat_completions_with_azure_openai_client.py @@ -1,3 +1,4 @@ +# pylint: disable=line-too-long,useless-suppression # ------------------------------------ # Copyright (c) Microsoft Corporation. # Licensed under the MIT License. 
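Taken together, the sample renames above all make the same mechanical change: construct an `AIProjectClient` and reach the Agents service through its `agents` property, instead of building an `AgentsClient` directly. The following is a minimal sketch of that pattern, assembled from the hunks above for readability; the agent name and environment variables are illustrative placeholders rather than part of the diff.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Pattern used across the renamed samples: the AIProjectClient owns the
# connection to the Foundry project, and project_client.agents exposes the
# Agents client the samples previously constructed themselves.
project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

with project_client:
    agents_client = project_client.agents

    agent = agents_client.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],
        name="my-agent",  # placeholder agent name
        instructions="You are helpful agent",
    )
    print(f"Created agent, agent ID: {agent.id}")

    # Clean up the agent when the sample is done with it.
    agents_client.delete_agent(agent.id)
```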
diff --git a/sdk/ai/azure-ai-projects/setup.py b/sdk/ai/azure-ai-projects/setup.py index bbc27d4b682c..3172316ce47d 100644 --- a/sdk/ai/azure-ai-projects/setup.py +++ b/sdk/ai/azure-ai-projects/setup.py @@ -71,7 +71,7 @@ "azure-core>=1.30.0", "typing-extensions>=4.12.2", "azure-storage-blob>=12.15.0", - "azure-ai-agents>=1.0.0b1", + "azure-ai-agents>=1.0.0", ], python_requires=">=3.9", extras_require={ diff --git a/sdk/ai/azure-ai-projects/tests/conftest.py b/sdk/ai/azure-ai-projects/tests/conftest.py new file mode 100644 index 000000000000..737e4f85c400 --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/conftest.py @@ -0,0 +1,99 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +import os +import pytest +from dotenv import load_dotenv, find_dotenv +from devtools_testutils import remove_batch_sanitizers, add_general_regex_sanitizer, add_body_key_sanitizer + +if not load_dotenv(find_dotenv(filename="azure_ai_projects_tests.env"), override=True): + print( + "Failed to apply environment variables for azure-ai-projects tests. This is expected if running in ADO pipeline." + ) + + +def pytest_collection_modifyitems(items): + if os.environ.get("AZURE_TEST_RUN_LIVE") == "true": + return + for item in items: + if "tests\\evaluation" in item.fspath.strpath or "tests/evaluation" in item.fspath.strpath: + item.add_marker( + pytest.mark.skip( + reason="Skip running Evaluations tests in PR pipeline until we can sort out the failures related to AI Foundry project settings" + ) + ) + + +class SanitizedValues: + SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000" + RESOURCE_GROUP_NAME = "sanitized-resource-group-name" + ACCOUNT_NAME = "sanitized-account-name" + PROJECT_NAME = "sanitized-project-name" + COMPONENT_NAME = "sanitized-component-name" + + +@pytest.fixture(scope="session") +def sanitized_values(): + return { + "subscription_id": f"{SanitizedValues.SUBSCRIPTION_ID}", + "resource_group_name": f"{SanitizedValues.RESOURCE_GROUP_NAME}", + "project_name": f"{SanitizedValues.PROJECT_NAME}", + "account_name": f"{SanitizedValues.ACCOUNT_NAME}", + "component_name": f"{SanitizedValues.COMPONENT_NAME}", + } + + +# From: https://github.com/Azure/azure-sdk-for-python/blob/main/doc/dev/tests.md#start-the-test-proxy-server +# autouse=True will trigger this fixture on each pytest run, even if it's not explicitly used by a test method +# test_proxy auto-starts the test proxy +# patch_sleep and patch_async_sleep streamline tests by disabling wait times during LRO polling +@pytest.fixture(scope="session", autouse=True) +def start_proxy(test_proxy): + return + + +@pytest.fixture(scope="session", autouse=True) +def add_sanitizers(test_proxy, sanitized_values): + + def sanitize_url_paths(): + + add_general_regex_sanitizer( + regex=r"/subscriptions/([-\w\._\(\)]+)", + value=sanitized_values["subscription_id"], + group_for_replace="1", + ) + + add_general_regex_sanitizer( + regex=r"/resource[gG]roups/([-\w\._\(\)]+)", + value=sanitized_values["resource_group_name"], + group_for_replace="1", + ) + + add_general_regex_sanitizer( + regex=r"/projects/([-\w\._\(\)]+)", value=sanitized_values["project_name"], group_for_replace="1" + ) + + add_general_regex_sanitizer( + regex=r"/accounts/([-\w\._\(\)]+)", value=sanitized_values["account_name"], group_for_replace="1" + ) + + add_general_regex_sanitizer( + regex=r"/components/([-\w\._\(\)]+)", 
value=sanitized_values["component_name"], group_for_replace="1" + ) + + sanitize_url_paths() + + # Sanitize API key from service response (this includes Application Insights connection string) + add_body_key_sanitizer(json_path="credentials.key", value="Sanitized-api-key") + + # Sanitize SAS URI from Datasets get credential response + add_body_key_sanitizer(json_path="blobReference.credential.sasUri", value="Sanitized-sas-uri") + add_body_key_sanitizer(json_path="blobReferenceForConsumption.credential.sasUri", value="Sanitized-sas-uri") + + # Remove the following sanitizers since certain fields are needed in tests and are non-sensitive: + # - AZSDK3493: $..name + # - AZSDK3430: $..id + remove_batch_sanitizers(["AZSDK3493"]) + remove_batch_sanitizers(["AZSDK3430"]) diff --git a/sdk/ai/azure-ai-projects/tests/connections/test_connections.py b/sdk/ai/azure-ai-projects/tests/connections/test_connections.py deleted file mode 100644 index f1e4612563a8..000000000000 --- a/sdk/ai/azure-ai-projects/tests/connections/test_connections.py +++ /dev/null @@ -1,10 +0,0 @@ -# ------------------------------------ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. -# ------------------------------------ - - -class TestConnections: - - def test_connections_get(self, **kwargs): - pass diff --git a/sdk/ai/azure-ai-projects/tests/samples/test_samples.py b/sdk/ai/azure-ai-projects/tests/samples/test_samples.py index 8b12a8ea9c6f..d1d8b69e6228 100644 --- a/sdk/ai/azure-ai-projects/tests/samples/test_samples.py +++ b/sdk/ai/azure-ai-projects/tests/samples/test_samples.py @@ -11,6 +11,9 @@ class TestSamples: + _samples_folder_path: str + _results: dict[str, tuple[bool, str]] + """ Test class for running all samples in the `/sdk/ai/azure-ai-projects/samples` folder. diff --git a/sdk/ai/azure-ai-projects/tests/test_agents.py b/sdk/ai/azure-ai-projects/tests/test_agents.py new file mode 100644 index 000000000000..8e72c929481d --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_agents.py @@ -0,0 +1,45 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +from azure.ai.projects import AIProjectClient +from test_base import TestBase, servicePreparer +from devtools_testutils import recorded_by_proxy + +# NOTE: This is just a simple test to verify that the agent can be created and deleted using AIProjectClient. 
+# You can find comprehensive Agent functionality tests here: +# https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/tests + + +class TestAgents(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_agents.py::TestAgents::test_agents -s + @servicePreparer() + @recorded_by_proxy + def test_agents(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + model_deployment_name = self.test_agents_params["model_deployment_name"] + agent_name = self.test_agents_params["agent_name"] + + with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=False), + ) as project_client: + + print("[test_agents] Create agent") + agent = project_client.agents.create_agent( + model=model_deployment_name, + name=agent_name, + instructions="You are helpful agent", + ) + assert agent.id + assert agent.model == model_deployment_name + assert agent.name == agent_name + + print("[test_agents] Delete agent") + project_client.agents.delete_agent(agent.id) diff --git a/sdk/ai/azure-ai-projects/tests/test_agents_async.py b/sdk/ai/azure-ai-projects/tests/test_agents_async.py new file mode 100644 index 000000000000..76c437c9cd49 --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_agents_async.py @@ -0,0 +1,45 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +from azure.ai.projects.aio import AIProjectClient +from test_base import TestBase, servicePreparer +from devtools_testutils.aio import recorded_by_proxy_async + +# NOTE: This is just a simple test to verify that the agent can be created and deleted using AIProjectClient. +# You can find comprehensive Agent functionality tests here: +# https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/tests + + +class TestAgentsAsync(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_agents_async.py::TestAgentsAsync::test_agents -s + @servicePreparer() + @recorded_by_proxy_async + async def test_agents(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + model_deployment_name = self.test_agents_params["model_deployment_name"] + agent_name = self.test_agents_params["agent_name"] + + async with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=True), + ) as project_client: + + print("[test_agents_async] Create agent") + agent = await project_client.agents.create_agent( + model=model_deployment_name, + name=agent_name, + instructions="You are helpful agent", + ) + assert agent.id + assert agent.model == model_deployment_name + assert agent.name == agent_name + + print("[test_agents_async] Delete agent") + await project_client.agents.delete_agent(agent.id) diff --git a/sdk/ai/azure-ai-projects/tests/test_base.py b/sdk/ai/azure-ai-projects/tests/test_base.py new file mode 100644 index 000000000000..ccf63ff4e21c --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_base.py @@ -0,0 +1,192 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License.
+# ------------------------------------ +import random +import re +import functools +from typing import Optional +from azure.ai.projects.models import ( + Connection, + ConnectionType, + CredentialType, + ApiKeyCredentials, + Deployment, + DeploymentType, + ModelDeployment, + Index, + IndexType, + AzureAISearchIndex, + DatasetVersion, + DatasetType, + AssetCredentialResponse, +) +from devtools_testutils import AzureRecordedTestCase, EnvironmentVariableLoader, is_live_and_not_recording + + +servicePreparer = functools.partial( + EnvironmentVariableLoader, + "azure_ai_projects_tests", + azure_ai_projects_tests_project_endpoint="https://sanitized.services.ai.azure.com/api/projects/sanitized-project-name", +) + + +class TestBase(AzureRecordedTestCase): + + test_connections_params = { + "connection_name": "connection1", + "connection_type": ConnectionType.AZURE_OPEN_AI, + } + + test_deployments_params = { + "model_publisher": "Cohere", + "model_name": "gpt-4o", + "model_deployment_name": "DeepSeek-V3", + } + + test_agents_params = { + "model_deployment_name": "gpt-4o", + "agent_name": "agent-for-python-projects-sdk-testing", + } + + test_inference_params = { + "connection_name": "connection1", + "model_deployment_name": "gpt-4o", + "aoai_api_version": "2024-10-21", + } + + test_indexes_params = { + "index_name": f"test-index-name", + "index_version": "1", + "ai_search_connection_name": "my-ai-search-connection", + "ai_search_index_name": "my-ai-search-index", + } + + test_datasets_params = { + "dataset_name_1": f"test-dataset-name-{random.randint(0, 99999):05d}", + "dataset_name_2": f"test-dataset-name-{random.randint(0, 99999):05d}", + "dataset_name_3": f"test-dataset-name-{random.randint(0, 99999):05d}", + "dataset_name_4": f"test-dataset-name-{random.randint(0, 99999):05d}", + "dataset_version": 1, + "connection_name": "balapvbyostoragecanary", + } + + # Regular expression describing the pattern of an Application Insights connection string. + REGEX_APPINSIGHTS_CONNECTION_STRING = re.compile( + r"^InstrumentationKey=[0-9a-fA-F-]{36};IngestionEndpoint=https://.+.applicationinsights.azure.com/;LiveEndpoint=https://.+.monitor.azure.com/;ApplicationId=[0-9a-fA-F-]{36}$" + ) + + @staticmethod + def assert_equal_or_not_none(actual, expected=None): + assert actual is not None + if expected is not None: + assert actual == expected + + # Checks that a given dictionary has at least one non-empty (non-whitespace) string key-value pair. 
+ @classmethod + def is_valid_dict(cls, d: dict[str, str]) -> bool: + return bool(d) and all( + isinstance(k, str) and isinstance(v, str) and k.strip() and v.strip() for k, v in d.items() + ) + + @classmethod + def validate_connection( + cls, + connection: Connection, + include_credentials: bool, + *, + expected_connection_type: Optional[ConnectionType] = None, + expected_connection_name: Optional[str] = None, + expected_authentication_type: Optional[CredentialType] = None, + expected_is_default: Optional[bool] = None, + ): + assert connection.id is not None + + TestBase.assert_equal_or_not_none(connection.name, expected_connection_name) + TestBase.assert_equal_or_not_none(connection.type, expected_connection_type) + TestBase.assert_equal_or_not_none(connection.credentials.type, expected_authentication_type) + + if expected_is_default is not None: + assert connection.is_default == expected_is_default + + if include_credentials: + if type(connection.credentials) == ApiKeyCredentials: + assert connection.credentials.type == CredentialType.API_KEY + assert connection.credentials.api_key is not None + + @classmethod + def validate_deployment( + cls, + deployment: Deployment, + *, + expected_model_name: Optional[str] = None, + expected_model_deployment_name: Optional[str] = None, + expected_model_publisher: Optional[str] = None, + ): + assert type(deployment) == ModelDeployment + assert deployment.type == DeploymentType.MODEL_DEPLOYMENT + assert deployment.model_version is not None + # Comment out the below, since I see that `Cohere-embed-v3-english` has an empty capabilities dict. + # assert TestBase.is_valid_dict(deployment.capabilities) + assert bool(deployment.sku) # Check none-empty + + TestBase.assert_equal_or_not_none(deployment.model_name, expected_model_name) + TestBase.assert_equal_or_not_none(deployment.name, expected_model_deployment_name) + TestBase.assert_equal_or_not_none(deployment.model_publisher, expected_model_publisher) + + @classmethod + def validate_index( + cls, + index: Index, + *, + expected_index_type: Optional[IndexType] = None, + expected_index_name: Optional[str] = None, + expected_index_version: Optional[str] = None, + expected_ai_search_connection_name: Optional[str] = None, + expected_ai_search_index_name: Optional[str] = None, + ): + + TestBase.assert_equal_or_not_none(index.name, expected_index_name) + TestBase.assert_equal_or_not_none(index.version, expected_index_version) + + if expected_index_type == IndexType.AZURE_SEARCH: + assert type(index) == AzureAISearchIndex + assert index.type == IndexType.AZURE_SEARCH + TestBase.assert_equal_or_not_none(index.connection_name, expected_ai_search_connection_name) + TestBase.assert_equal_or_not_none(index.index_name, expected_ai_search_index_name) + + @classmethod + def validate_dataset( + cls, + dataset: DatasetVersion, + *, + expected_dataset_type: Optional[DatasetType] = None, + expected_dataset_name: Optional[str] = None, + expected_dataset_version: Optional[str] = None, + expected_connection_name: Optional[str] = None, + ): + assert dataset.data_uri is not None + + if expected_dataset_type: + assert dataset.type == expected_dataset_type + else: + assert dataset.type == DatasetType.URI_FILE or dataset.type == DatasetType.URI_FOLDER + + TestBase.assert_equal_or_not_none(dataset.name, expected_dataset_name) + TestBase.assert_equal_or_not_none(dataset.version, expected_dataset_version) + if expected_connection_name: + assert dataset.connection_name == expected_connection_name + + @classmethod + def 
validate_asset_credential(cls, asset_credential: AssetCredentialResponse): + + assert asset_credential.blob_reference is not None + assert asset_credential.blob_reference.blob_uri + assert asset_credential.blob_reference.storage_account_arm_id + + assert asset_credential.blob_reference.credential is not None + assert ( + asset_credential.blob_reference.credential.type == "SAS" + ) # Why is this not of type CredentialType.SAS as defined for Connections? + assert asset_credential.blob_reference.credential.sas_uri diff --git a/sdk/ai/azure-ai-projects/tests/test_connections.py b/sdk/ai/azure-ai-projects/tests/test_connections.py new file mode 100644 index 000000000000..55db3a70288a --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_connections.py @@ -0,0 +1,64 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +from azure.ai.projects import AIProjectClient +from test_base import TestBase, servicePreparer +from devtools_testutils import recorded_by_proxy + + +class TestConnections(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_connections.py::TestConnections::test_connections -s + @servicePreparer() + @recorded_by_proxy + def test_connections(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + connection_name = self.test_connections_params["connection_name"] + connection_type = self.test_connections_params["connection_type"] + + with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=False), + ) as project_client: + + print("[test_connections] List all connections") + empty = True + for connection in project_client.connections.list(): + empty = False + TestBase.validate_connection(connection, False) + assert not empty + + print("[test_connections] List all connections of a particular type") + empty = True + for connection in project_client.connections.list( + connection_type=connection_type, + ): + empty = False + TestBase.validate_connection(connection, False, expected_connection_type=connection_type) + assert not empty + + print("[test_connections] Get the default connection of a particular type, without its credentials") + connection = project_client.connections.get_default(connection_type=connection_type) + TestBase.validate_connection(connection, False, expected_connection_type=connection_type) + + print("[test_connections] Get the default connection of a particular type, with its credentials") + connection = project_client.connections.get_default( + connection_type=connection_type, include_credentials=True + ) + TestBase.validate_connection( + connection, True, expected_connection_type=connection_type, expected_is_default=True + ) + + print(f"[test_connections] Get the connection named `{connection_name}`, without its credentials") + connection = project_client.connections.get(connection_name) + TestBase.validate_connection(connection, False, expected_connection_name=connection_name) + + print(f"[test_connections] Get the connection named `{connection_name}`, with its credentials") + connection = project_client.connections.get(connection_name, include_credentials=True) + TestBase.validate_connection(connection, True, expected_connection_name=connection_name) diff --git a/sdk/ai/azure-ai-projects/tests/test_connections_async.py 
b/sdk/ai/azure-ai-projects/tests/test_connections_async.py new file mode 100644 index 000000000000..147bad39de9b --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_connections_async.py @@ -0,0 +1,64 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +from azure.ai.projects.aio import AIProjectClient +from test_base import TestBase, servicePreparer +from devtools_testutils.aio import recorded_by_proxy_async + + +class TestConnectionsAsync(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_connections_async.py::TestConnectionsAsync::test_connections_async -s + @servicePreparer() + @recorded_by_proxy_async + async def test_connections_async(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + connection_name = self.test_connections_params["connection_name"] + connection_type = self.test_connections_params["connection_type"] + + async with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=True), + ) as project_client: + + print("[test_connections_async] List all connections") + empty = True + async for connection in project_client.connections.list(): + empty = False + TestBase.validate_connection(connection, False) + assert not empty + + print("[test_connections_async] List all connections of a particular type") + empty = True + async for connection in project_client.connections.list( + connection_type=connection_type, + ): + empty = False + TestBase.validate_connection(connection, False, expected_connection_type=connection_type) + assert not empty + + print("[test_connections_async] Get the default connection of a particular type, without its credentials") + connection = await project_client.connections.get_default(connection_type=connection_type) + TestBase.validate_connection(connection, False, expected_connection_type=connection_type) + + print("[test_connections_async] Get the default connection of a particular type, with its credentials") + connection = await project_client.connections.get_default( + connection_type=connection_type, include_credentials=True + ) + TestBase.validate_connection( + connection, True, expected_connection_type=connection_type, expected_is_default=True + ) + + print(f"[test_connections_async] Get the connection named `{connection_name}`, without its credentials") + connection = await project_client.connections.get(connection_name) + TestBase.validate_connection(connection, False, expected_connection_name=connection_name) + + print(f"[test_connections_async] Get the connection named `{connection_name}`, with its credentials") + connection = await project_client.connections.get(connection_name, include_credentials=True) + TestBase.validate_connection(connection, True, expected_connection_name=connection_name) diff --git a/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_file1.txt b/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_file1.txt new file mode 100644 index 000000000000..e129759a15ff --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_file1.txt @@ -0,0 +1 @@ +This is sample file 1 diff --git a/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_file2.txt b/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_file2.txt new file mode 100644 index 000000000000..3dd74cdfc9eb --- /dev/null +++ 
b/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_file2.txt @@ -0,0 +1 @@ +This is sample file 2 diff --git a/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_subfolder/data_file3.txt b/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_subfolder/data_file3.txt new file mode 100644 index 000000000000..dde35c02f5a4 --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_subfolder/data_file3.txt @@ -0,0 +1 @@ +This is sample file 3 diff --git a/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_subfolder/data_file4.txt b/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_subfolder/data_file4.txt new file mode 100644 index 000000000000..0d17a14a0c1f --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_data/datasets/data_subfolder/data_file4.txt @@ -0,0 +1 @@ +This is sample file 4 diff --git a/sdk/ai/azure-ai-projects/tests/test_datasets.py b/sdk/ai/azure-ai-projects/tests/test_datasets.py new file mode 100644 index 000000000000..ede3731b8a87 --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_datasets.py @@ -0,0 +1,194 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +import os +import re +import pytest +from azure.ai.projects import AIProjectClient +from azure.ai.projects.models import DatasetVersion, DatasetType +from test_base import TestBase, servicePreparer +from devtools_testutils import recorded_by_proxy, is_live_and_not_recording +from azure.core.exceptions import HttpResponseError + + +# Construct the paths to the data folder and data file used in this test +script_dir = os.path.dirname(os.path.abspath(__file__)) +data_folder = os.environ.get("DATA_FOLDER", os.path.join(script_dir, "test_data/datasets")) +data_file1 = os.path.join(data_folder, "data_file1.txt") +data_file2 = os.path.join(data_folder, "data_file2.txt") + + +class TestDatasets(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_datasets.py::TestDatasets::test_datasets_upload_file -s + @servicePreparer() + @pytest.mark.skipif( + not is_live_and_not_recording(), + reason="Skipped because this test involves network calls from another client (azure.storage.blob) that is not recorded.", + ) + @recorded_by_proxy + def test_datasets_upload_file(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + connection_name = self.test_datasets_params["connection_name"] + dataset_name = self.test_datasets_params["dataset_name_1"] + dataset_version = self.test_datasets_params["dataset_version"] + + with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=False), + ) as project_client: + + print( + f"[test_datasets_upload_file] Upload a single file and create a new Dataset `{dataset_name}`, version `{dataset_version}`, to reference the file." 
+ ) + dataset: DatasetVersion = project_client.datasets.upload_file( + name=dataset_name, + version=str(dataset_version), + file_path=data_file1, + connection_name=connection_name, + ) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FILE, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version), + ) + + print(f"[test_datasets_upload_file] Get an existing Dataset version `{dataset_version}`:") + dataset = project_client.datasets.get(name=dataset_name, version=dataset_version) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FILE, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version), + ) + + print( + f"[test_datasets_upload_file] Upload a single file and create a new version in existing Dataset `{dataset_name}`, to reference the file." + ) + dataset: DatasetVersion = project_client.datasets.upload_file( + name=dataset_name, + version=str(dataset_version + 1), + file_path=data_file2, + connection_name=connection_name, + ) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FILE, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version + 1), + ) + + print(f"[test_datasets_upload_file] Get credentials of an existing Dataset version `{dataset_version}`:") + asset_credential = project_client.datasets.get_credentials(name=dataset_name, version=str(dataset_version)) + print(asset_credential) + TestBase.validate_asset_credential(asset_credential) + + """ + print("[test_datasets_upload_file] List latest versions of all Datasets:") + empty = True + for dataset in project_client.datasets.list(): + empty = False + print(dataset) + TestBase.validate_dataset(dataset) + assert not empty + + print(f"[test_datasets_upload_file] Listing all versions of the Dataset named `{dataset_name}`:") + empty = True + for dataset in project_client.datasets.list_versions(name=dataset_name): + empty = False + print(dataset) + TestBase.validate_dataset(dataset, expected_dataset_name=dataset_name) + assert not empty + """ + + print( + f"[test_datasets_upload_file] Delete Dataset `{dataset_name}`, version `{dataset_version}` that was created above." + ) + project_client.datasets.delete(name=dataset_name, version=str(dataset_version)) + project_client.datasets.delete(name=dataset_name, version=str(dataset_version + 1)) + + print( + "[test_datasets_upload_file] Delete the same (now non-existing) Dataset. REST API call should return 204 (No content). This call should NOT throw an exception." + ) + project_client.datasets.delete(name=dataset_name, version=str(dataset_version)) + + print( + f"[test_datasets_upload_file] Try to get a non-existing Dataset `{dataset_name}`, version `{dataset_version}`. This should throw an exception." 
+ ) + try: + exception_thrown = False + dataset = project_client.datasets.get(name=dataset_name, version=str(dataset_version)) + except HttpResponseError as e: + exception_thrown = True + print(f"Expected exception occurred: {e}") + assert "Could not find asset with ID" in e.message + assert exception_thrown + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_datasets.py::TestDatasets::test_datasets_upload_folder -s + @servicePreparer() + @pytest.mark.skipif( + not is_live_and_not_recording(), + reason="Skipped because this test involves network calls from another client (azure.storage.blob) that is not recorded.", + ) + @recorded_by_proxy + def test_datasets_upload_folder(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + connection_name = self.test_datasets_params["connection_name"] + dataset_name = self.test_datasets_params["dataset_name_2"] + dataset_version = self.test_datasets_params["dataset_version"] + + with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=False), + ) as project_client: + + print( + f"[test_datasets_upload_folder] Upload files in a folder (including sub-folders) and create a new version `{dataset_version}` in the same Dataset, to reference the files." + ) + dataset = project_client.datasets.upload_folder( + name=dataset_name, + version=str(dataset_version), + folder=data_folder, + connection_name=connection_name, + file_pattern=re.compile(r"\.(txt|csv|md)$", re.IGNORECASE), + ) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FOLDER, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version), + ) + + print(f"[test_datasets_upload_file] Get an existing Dataset version `{dataset_version}`:") + dataset = project_client.datasets.get(name=dataset_name, version=str(dataset_version)) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FOLDER, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version), + ) + + print(f"[test_datasets_upload_file] Get credentials of an existing Dataset version `{dataset_version}`:") + asset_credential = project_client.datasets.get_credentials(name=dataset_name, version=str(dataset_version)) + print(asset_credential) + TestBase.validate_asset_credential(asset_credential) + + print( + f"[test_datasets_upload_file] Delete Dataset `{dataset_name}`, version `{dataset_version}` that was created above." + ) + project_client.datasets.delete(name=dataset_name, version=str(dataset_version)) diff --git a/sdk/ai/azure-ai-projects/tests/test_datasets_async.py b/sdk/ai/azure-ai-projects/tests/test_datasets_async.py new file mode 100644 index 000000000000..803cdddb3dbc --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_datasets_async.py @@ -0,0 +1,199 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. 
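The synchronous dataset test above exercises the full upload/get/credentials/delete cycle. As a compact reference for the flow being tested, here is a usage sketch; the dataset name, version, connection name, and file path are placeholders, not values from the diff.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import DatasetType
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

with project_client:
    # Upload a single file and register it as version "1" of a named Dataset.
    dataset = project_client.datasets.upload_file(
        name="my-dataset",                         # placeholder Dataset name
        version="1",
        file_path="test_data/datasets/data_file1.txt",
        connection_name="my-storage-connection",   # placeholder connection
    )
    assert dataset.type == DatasetType.URI_FILE

    # Read the Dataset back and fetch the SAS-based credentials for its blob.
    dataset = project_client.datasets.get(name="my-dataset", version="1")
    credentials = project_client.datasets.get_credentials(name="my-dataset", version="1")
    print(credentials.blob_reference.blob_uri)

    # Delete is effectively idempotent: deleting a non-existing version
    # returns 204 (No content) rather than raising, as the test verifies.
    project_client.datasets.delete(name="my-dataset", version="1")
```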
+# ------------------------------------ +import os +import re +import pytest +from azure.ai.projects.aio import AIProjectClient +from azure.ai.projects.models import DatasetVersion, DatasetType +from test_base import TestBase, servicePreparer +from devtools_testutils.aio import recorded_by_proxy_async +from devtools_testutils import is_live_and_not_recording +from azure.core.exceptions import HttpResponseError + + +# Construct the paths to the data folder and data file used in this test +script_dir = os.path.dirname(os.path.abspath(__file__)) +data_folder = os.environ.get("DATA_FOLDER", os.path.join(script_dir, "test_data/datasets")) +data_file1 = os.path.join(data_folder, "data_file1.txt") +data_file2 = os.path.join(data_folder, "data_file2.txt") + + +class TestDatasetsAsync(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_datasets_async.py::TestDatasetsAsync::test_datasets_upload_file_async -s + @servicePreparer() + @pytest.mark.skipif( + not is_live_and_not_recording(), + reason="Skipped because this test involves network calls from another client (azure.storage.blob) that is not recorded.", + ) + @recorded_by_proxy_async + async def test_datasets_upload_file(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + connection_name = self.test_datasets_params["connection_name"] + dataset_name = self.test_datasets_params["dataset_name_3"] + dataset_version = self.test_datasets_params["dataset_version"] + + async with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=True), + ) as project_client: + + print( + f"[test_datasets_upload_file] Upload a single file and create a new Dataset `{dataset_name}`, version `{dataset_version}`, to reference the file." + ) + dataset: DatasetVersion = await project_client.datasets.upload_file( + name=dataset_name, + version=str(dataset_version), + file_path=data_file1, + connection_name=connection_name, + ) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FILE, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version), + ) + + print(f"[test_datasets_upload_file] Get an existing Dataset version `{dataset_version}`:") + dataset = await project_client.datasets.get(name=dataset_name, version=dataset_version) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FILE, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version), + ) + + print( + f"[test_datasets_upload_file] Upload a single file and create a new version in existing Dataset `{dataset_name}`, to reference the file." 
+ ) + dataset: DatasetVersion = await project_client.datasets.upload_file( + name=dataset_name, + version=str(dataset_version + 1), + file_path=data_file2, + connection_name=connection_name, + ) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FILE, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version + 1), + ) + + print(f"[test_datasets_upload_file] Get credentials of an existing Dataset version `{dataset_version}`:") + asset_credential = await project_client.datasets.get_credentials( + name=dataset_name, version=str(dataset_version) + ) + print(asset_credential) + TestBase.validate_asset_credential(asset_credential) + + """ + print("[test_datasets_upload_file] List latest versions of all Datasets:") + empty = True + for dataset in project_client.datasets.list(): + empty = False + print(dataset) + TestBase.validate_dataset(dataset) + assert not empty + + print(f"[test_datasets_upload_file] Listing all versions of the Dataset named `{dataset_name}`:") + empty = True + for dataset in project_client.datasets.list_versions(name=dataset_name): + empty = False + print(dataset) + TestBase.validate_dataset(dataset, expected_dataset_name=dataset_name) + assert not empty + """ + + print( + f"[test_datasets_upload_file] Delete Dataset `{dataset_name}`, version `{dataset_version}` that was created above." + ) + await project_client.datasets.delete(name=dataset_name, version=str(dataset_version)) + await project_client.datasets.delete(name=dataset_name, version=str(dataset_version + 1)) + + print( + "[test_datasets_upload_file] Delete the same (now non-existing) Dataset. REST API call should return 204 (No content). This call should NOT throw an exception." + ) + await project_client.datasets.delete(name=dataset_name, version=str(dataset_version)) + + print( + f"[test_datasets_upload_file] Try to get a non-existing Dataset `{dataset_name}`, version `{dataset_version}`. This should throw an exception." + ) + try: + exception_thrown = False + dataset = await project_client.datasets.get(name=dataset_name, version=str(dataset_version)) + except HttpResponseError as e: + exception_thrown = True + print(f"Expected exception occurred: {e}") + assert "Could not find asset with ID" in e.message + assert exception_thrown + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_datasets_async.py::TestDatasetsAsync::test_datasets_upload_folder_async -s + @servicePreparer() + @pytest.mark.skipif( + not is_live_and_not_recording(), + reason="Skipped because this test involves network calls from another client (azure.storage.blob) that is not recorded.", + ) + @recorded_by_proxy_async + async def test_datasets_upload_folder_async(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + connection_name = self.test_datasets_params["connection_name"] + dataset_name = self.test_datasets_params["dataset_name_4"] + dataset_version = self.test_datasets_params["dataset_version"] + + async with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=True), + ) as project_client: + + print( + f"[test_datasets_upload_folder] Upload files in a folder (including sub-folders) and create a new version `{dataset_version}` in the same Dataset, to reference the files." 
+ ) + dataset = await project_client.datasets.upload_folder( + name=dataset_name, + version=str(dataset_version), + folder=data_folder, + connection_name=connection_name, + file_pattern=re.compile(r"\.(txt|csv|md)$", re.IGNORECASE), + ) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FOLDER, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version), + ) + + print(f"[test_datasets_upload_file] Get an existing Dataset version `{dataset_version}`:") + dataset = await project_client.datasets.get(name=dataset_name, version=str(dataset_version)) + print(dataset) + TestBase.validate_dataset( + dataset, + expected_dataset_type=DatasetType.URI_FOLDER, + expected_dataset_name=dataset_name, + expected_dataset_version=str(dataset_version), + ) + + print(f"[test_datasets_upload_file] Get credentials of an existing Dataset version `{dataset_version}`:") + asset_credential = await project_client.datasets.get_credentials( + name=dataset_name, version=str(dataset_version) + ) + print(asset_credential) + TestBase.validate_asset_credential(asset_credential) + + print( + f"[test_datasets_upload_file] Delete Dataset `{dataset_name}`, version `{dataset_version}` that was created above." + ) + await project_client.datasets.delete(name=dataset_name, version=str(dataset_version)) diff --git a/sdk/ai/azure-ai-projects/tests/test_deployments.py b/sdk/ai/azure-ai-projects/tests/test_deployments.py new file mode 100644 index 000000000000..805f27e76d2b --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_deployments.py @@ -0,0 +1,54 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +from azure.ai.projects import AIProjectClient +from test_base import TestBase, servicePreparer +from devtools_testutils import recorded_by_proxy + + +class TestDeployments(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_deployments.py::TestDeployments::test_deployments -s + @servicePreparer() + @recorded_by_proxy + def test_deployments(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + model_publisher = self.test_deployments_params["model_publisher"] + model_name = self.test_deployments_params["model_name"] + model_deployment_name = self.test_deployments_params["model_deployment_name"] + + with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=False), + ) as project_client: + + print("[test_deployments] List all deployments") + empty = True + for deployment in project_client.deployments.list(): + empty = False + TestBase.validate_deployment(deployment) + assert not empty + + print(f"[test_deployments] List all deployments by the model publisher `{model_publisher}`") + empty = True + for deployment in project_client.deployments.list(model_publisher=model_publisher): + empty = False + TestBase.validate_deployment(deployment, expected_model_publisher=model_publisher) + assert not empty + + print(f"[test_deployments] List all deployments of model `{model_name}`") + empty = True + for deployment in project_client.deployments.list(model_name=model_name): + empty = False + TestBase.validate_deployment(deployment, expected_model_name=model_name) + assert not empty + + print(f"[test_deployments] Get a single deployment named `{model_deployment_name}`") + 
deployment = project_client.deployments.get(model_deployment_name) + TestBase.validate_deployment(deployment, expected_model_deployment_name=model_deployment_name) diff --git a/sdk/ai/azure-ai-projects/tests/test_deployments_async.py b/sdk/ai/azure-ai-projects/tests/test_deployments_async.py new file mode 100644 index 000000000000..493d71935993 --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_deployments_async.py @@ -0,0 +1,54 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +from azure.ai.projects.aio import AIProjectClient +from test_base import TestBase, servicePreparer +from devtools_testutils.aio import recorded_by_proxy_async + + +class TestDeploymentsAsync(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_deployments_async.py::TestDeploymentsAsync::test_deployments_async -s + @servicePreparer() + @recorded_by_proxy_async + async def test_deployments_async(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + model_publisher = self.test_deployments_params["model_publisher"] + model_name = self.test_deployments_params["model_name"] + model_deployment_name = self.test_deployments_params["model_deployment_name"] + + async with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=True), + ) as project_client: + + print("[test_deployments_async] List all deployments") + empty = True + async for deployment in project_client.deployments.list(): + empty = False + TestBase.validate_deployment(deployment) + assert not empty + + print(f"[test_deployments_async] List all deployments by the model publisher `{model_publisher}`") + empty = True + async for deployment in project_client.deployments.list(model_publisher=model_publisher): + empty = False + TestBase.validate_deployment(deployment, expected_model_publisher=model_publisher) + assert not empty + + print(f"[test_deployments_async] List all deployments of model `{model_name}`") + empty = True + async for deployment in project_client.deployments.list(model_name=model_name): + empty = False + TestBase.validate_deployment(deployment, expected_model_name=model_name) + assert not empty + + print(f"[test_deployments_async] Get a single deployment named `{model_deployment_name}`") + deployment = await project_client.deployments.get(model_deployment_name) + TestBase.validate_deployment(deployment, expected_model_deployment_name=model_deployment_name) diff --git a/sdk/ai/azure-ai-projects/tests/test_indexes.py b/sdk/ai/azure-ai-projects/tests/test_indexes.py new file mode 100644 index 000000000000..0bb5f33f0e0b --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_indexes.py @@ -0,0 +1,85 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. 
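Between them, the synchronous and asynchronous deployment tests above cover the whole `deployments` surface used here. A minimal sketch of that surface follows; the publisher, model, and deployment names are the placeholder values from `TestBase.test_deployments_params` and are not guaranteed to exist in a given project.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

with project_client:
    # List every model deployment in the project.
    for deployment in project_client.deployments.list():
        print(deployment.name)

    # The listing can be filtered by model publisher or by model name.
    for deployment in project_client.deployments.list(model_publisher="Cohere"):
        print(deployment.name)
    for deployment in project_client.deployments.list(model_name="gpt-4o"):
        print(deployment.name)

    # A single deployment is fetched by its deployment name.
    deployment = project_client.deployments.get("DeepSeek-V3")
    print(deployment.model_publisher, deployment.model_name, deployment.model_version)
```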
+# ------------------------------------ + +from azure.ai.projects import AIProjectClient +from azure.ai.projects.models import AzureAISearchIndex, IndexType +from test_base import TestBase, servicePreparer +from devtools_testutils import recorded_by_proxy + + +class TestIndexes(TestBase): + + # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder: + # cls & pytest tests\test_indexes.py::TestIndexes::test_indexes -s + @servicePreparer() + @recorded_by_proxy + def test_indexes(self, **kwargs): + + endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint") + print("\n=====> Endpoint:", endpoint) + + index_name = self.test_indexes_params["index_name"] + index_version = self.test_indexes_params["index_version"] + ai_search_connection_name = self.test_indexes_params["ai_search_connection_name"] + ai_search_index_name = self.test_indexes_params["ai_search_index_name"] + + with AIProjectClient( + endpoint=endpoint, + credential=self.get_credential(AIProjectClient, is_async=False), + ) as project_client: + + print( + f"[test_indexes] Create Index `{index_name}` with version `{index_version}`, referencing an existing AI Search resource:" + ) + index = project_client.indexes.create_or_update( + name=index_name, + version=index_version, + body=AzureAISearchIndex(connection_name=ai_search_connection_name, index_name=ai_search_index_name), + ) + print(index) + TestBase.validate_index( + index, + expected_index_type=IndexType.AZURE_SEARCH, + expected_index_name=index_name, + expected_index_version=index_version, + expected_ai_search_connection_name=ai_search_connection_name, + expected_ai_search_index_name=ai_search_index_name, + ) + + print(f"[test_indexes] Get Index `{index_name}` version `{index_version}`:") + index = project_client.indexes.get(name=index_name, version=index_version) + print(index) + TestBase.validate_index( + index, + expected_index_type=IndexType.AZURE_SEARCH, + expected_index_name=index_name, + expected_index_version=index_version, + expected_ai_search_connection_name=ai_search_connection_name, + expected_ai_search_index_name=ai_search_index_name, + ) + + print("[test_indexes] List latest versions of all Indexes:") + empty = True + for index in project_client.indexes.list(): + empty = False + print(index) + TestBase.validate_index(index) + assert not empty + + print(f"[test_indexes] Listing all versions of the Index named `{index_name}`:") + empty = True + for index in project_client.indexes.list_versions(name=index_name): + empty = False + print(index) + TestBase.validate_index(index) + assert not empty + + print(f"[test_indexes] Delete Index `{index_name}` version `{index_version}`.") + project_client.indexes.delete(name=index_name, version=index_version) + + print( + f"[test_indexes] Again delete Index `{index_name}` version `{index_version}`. Since it does not exist, the REST API should return 204 (No content). This call should NOT throw an exception." + ) + project_client.indexes.delete(name=index_name, version=index_version) diff --git a/sdk/ai/azure-ai-projects/tests/test_indexes_async.py b/sdk/ai/azure-ai-projects/tests/test_indexes_async.py new file mode 100644 index 000000000000..9deddc5d49fe --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/test_indexes_async.py @@ -0,0 +1,85 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. 
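The index test above follows a create/get/list/delete lifecycle against an existing Azure AI Search resource. A condensed sketch of the same calls is shown below; the index, connection, and AI Search index names are placeholders that must already exist on the Azure AI Search side.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import AzureAISearchIndex, IndexType
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

with project_client:
    # Register (or update) an Index version that points at an existing
    # Azure AI Search index, referenced through a project connection.
    index = project_client.indexes.create_or_update(
        name="my-index",  # placeholder Index name
        version="1",
        body=AzureAISearchIndex(
            connection_name="my-ai-search-connection",  # placeholder
            index_name="my-ai-search-index",            # placeholder
        ),
    )
    assert index.type == IndexType.AZURE_SEARCH

    # Retrieve a specific version, list latest versions, and list all versions.
    index = project_client.indexes.get(name="my-index", version="1")
    for latest in project_client.indexes.list():
        print(latest.name, latest.version)
    for version in project_client.indexes.list_versions(name="my-index"):
        print(version.version)

    # Delete is a no-op (HTTP 204) if the version does not exist.
    project_client.indexes.delete(name="my-index", version="1")
```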
+# ------------------------------------
+
+from azure.ai.projects.aio import AIProjectClient
+from azure.ai.projects.models import AzureAISearchIndex, IndexType
+from test_base import TestBase, servicePreparer
+from devtools_testutils.aio import recorded_by_proxy_async
+
+
+class TestIndexesAsync(TestBase):
+
+    # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder:
+    # cls & pytest tests\test_indexes_async.py::TestIndexesAsync::test_indexes_async -s
+    @servicePreparer()
+    @recorded_by_proxy_async
+    async def test_indexes_async(self, **kwargs):
+
+        endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint")
+        print("\n=====> Endpoint:", endpoint)
+
+        index_name = self.test_indexes_params["index_name"]
+        index_version = self.test_indexes_params["index_version"]
+        ai_search_connection_name = self.test_indexes_params["ai_search_connection_name"]
+        ai_search_index_name = self.test_indexes_params["ai_search_index_name"]
+
+        async with AIProjectClient(
+            endpoint=endpoint,
+            credential=self.get_credential(AIProjectClient, is_async=True),
+        ) as project_client:
+
+            print(
+                f"[test_indexes_async] Create Index `{index_name}` with version `{index_version}`, referencing an existing AI Search resource:"
+            )
+            index = await project_client.indexes.create_or_update(
+                name=index_name,
+                version=index_version,
+                body=AzureAISearchIndex(connection_name=ai_search_connection_name, index_name=ai_search_index_name),
+            )
+            print(index)
+            TestBase.validate_index(
+                index,
+                expected_index_type=IndexType.AZURE_SEARCH,
+                expected_index_name=index_name,
+                expected_index_version=index_version,
+                expected_ai_search_connection_name=ai_search_connection_name,
+                expected_ai_search_index_name=ai_search_index_name,
+            )
+
+            print(f"[test_indexes_async] Get Index `{index_name}` version `{index_version}`:")
+            index = await project_client.indexes.get(name=index_name, version=index_version)
+            print(index)
+            TestBase.validate_index(
+                index,
+                expected_index_type=IndexType.AZURE_SEARCH,
+                expected_index_name=index_name,
+                expected_index_version=index_version,
+                expected_ai_search_connection_name=ai_search_connection_name,
+                expected_ai_search_index_name=ai_search_index_name,
+            )
+
+            print("[test_indexes_async] List latest versions of all Indexes:")
+            empty = True
+            async for index in project_client.indexes.list():
+                empty = False
+                print(index)
+                TestBase.validate_index(index)
+            assert not empty
+
+            print(f"[test_indexes_async] Listing all versions of the Index named `{index_name}`:")
+            empty = True
+            async for index in project_client.indexes.list_versions(name=index_name):
+                empty = False
+                print(index)
+                TestBase.validate_index(index)
+            assert not empty
+
+            print(f"[test_indexes_async] Delete Index `{index_name}` version `{index_version}`.")
+            await project_client.indexes.delete(name=index_name, version=index_version)
+
+            print(
+                f"[test_indexes_async] Again delete Index `{index_name}` version `{index_version}`. Since it does not exist, the REST API should return 204 (No content). This call should NOT throw an exception."
+            )
+            await project_client.indexes.delete(name=index_name, version=index_version)
diff --git a/sdk/ai/azure-ai-projects/tests/test_inference.py b/sdk/ai/azure-ai-projects/tests/test_inference.py
new file mode 100644
index 000000000000..40f1ec706c69
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/tests/test_inference.py
@@ -0,0 +1,100 @@
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+import pprint
+
+import pytest
+from azure.ai.projects import AIProjectClient
+from test_base import TestBase, servicePreparer
+from devtools_testutils import recorded_by_proxy, is_live_and_not_recording
+
+
+class TestInference(TestBase):
+
+    # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder:
+    # cls & pytest tests\test_inference.py::TestInference::test_inference -s
+    @servicePreparer()
+    @pytest.mark.skipif(
+        condition=(not is_live_and_not_recording()),
+        reason="Skipped because we cannot record chat completions call with AOAI client",
+    )
+    @recorded_by_proxy
+    def test_inference(self, **kwargs):
+
+        endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint")
+        print("\n=====> Endpoint:", endpoint)
+
+        model_deployment_name = self.test_inference_params["model_deployment_name"]
+        api_version = self.test_inference_params["aoai_api_version"]
+
+        with AIProjectClient(
+            endpoint=endpoint,
+            credential=self.get_credential(AIProjectClient, is_async=False),
+        ) as project_client:
+
+            print(
+                "[test_inference] Get an authenticated Azure OpenAI client for the parent AI Services resource, and perform a chat completion operation."
+            )
+            with project_client.inference.get_azure_openai_client(api_version=api_version) as client:
+
+                response = client.chat.completions.create(
+                    model=model_deployment_name,
+                    messages=[
+                        {
+                            "role": "user",
+                            "content": "How many feet are in a mile?",
+                        },
+                    ],
+                )
+
+                print("Raw dump of response object: ")
+                pprint.pprint(response)
+                print("Response message: ", response.choices[0].message.content)
+                contains = ["5280", "5,280"]
+                assert any(item in response.choices[0].message.content for item in contains)
+
+    # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder:
+    # cls & pytest tests\test_inference.py::TestInference::test_inference_on_connection -s
+    @servicePreparer()
+    @pytest.mark.skipif(
+        condition=(not is_live_and_not_recording()),
+        reason="Skipped because we cannot record chat completions call with AOAI client",
+    )
+    @recorded_by_proxy
+    def test_inference_on_connection(self, **kwargs):
+
+        endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint")
+        print("\n=====> Endpoint:", endpoint)
+
+        connection_name = self.test_inference_params["connection_name"]
+        model_deployment_name = self.test_inference_params["model_deployment_name"]
+        api_version = self.test_inference_params["aoai_api_version"]
+
+        with AIProjectClient(
+            endpoint=endpoint,
+            credential=self.get_credential(AIProjectClient, is_async=False),
+        ) as project_client:
+
+            print(
+                "[test_inference_on_connection] Get an authenticated Azure OpenAI client for a connection AOAI service, and perform a chat completion operation."
+            )
+            with project_client.inference.get_azure_openai_client(
+                api_version=api_version, connection_name=connection_name
+            ) as client:
+
+                response = client.chat.completions.create(
+                    model=model_deployment_name,
+                    messages=[
+                        {
+                            "role": "user",
+                            "content": "How many feet are in a mile?",
+                        },
+                    ],
+                )
+
+                print("Raw dump of response object: ")
+                pprint.pprint(response)
+                print("Response message: ", response.choices[0].message.content)
+                contains = ["5280", "5,280"]
+                assert any(item in response.choices[0].message.content for item in contains)
diff --git a/sdk/ai/azure-ai-projects/tests/test_inference_async.py b/sdk/ai/azure-ai-projects/tests/test_inference_async.py
new file mode 100644
index 000000000000..3d5c602d1b3a
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/tests/test_inference_async.py
@@ -0,0 +1,100 @@
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+import pprint
+import pytest
+from azure.ai.projects.aio import AIProjectClient
+from test_base import TestBase, servicePreparer
+from devtools_testutils import is_live_and_not_recording
+from devtools_testutils.aio import recorded_by_proxy_async
+
+
+class TestInferenceAsync(TestBase):
+
+    # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder:
+    # cls & pytest tests\test_inference_async.py::TestInferenceAsync::test_inference_async -s
+    @servicePreparer()
+    @pytest.mark.skipif(
+        not is_live_and_not_recording(),
+        reason="Skipped because we cannot record chat completions call with AOAI client",
+    )
+    @recorded_by_proxy_async
+    async def test_inference_async(self, **kwargs):
+
+        endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint")
+        print("\n=====> Endpoint:", endpoint)
+
+        model_deployment_name = self.test_inference_params["model_deployment_name"]
+        api_version = self.test_inference_params["aoai_api_version"]
+
+        async with AIProjectClient(
+            endpoint=endpoint,
+            credential=self.get_credential(AIProjectClient, is_async=True),
+        ) as project_client:
+
+            print(
+                "[test_inference_async] Get an authenticated Azure OpenAI client for the parent AI Services resource, and perform a chat completion operation."
+            )
+            async with await project_client.inference.get_azure_openai_client(api_version=api_version) as client:
+
+                response = await client.chat.completions.create(
+                    model=model_deployment_name,
+                    messages=[
+                        {
+                            "role": "user",
+                            "content": "How many feet are in a mile?",
+                        },
+                    ],
+                )
+
+                print("Raw dump of response object: ")
+                pprint.pprint(response)
+                print("Response message: ", response.choices[0].message.content)
+                contains = ["5280", "5,280"]
+                assert any(item in response.choices[0].message.content for item in contains)
+
+    # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder:
+    # cls & pytest tests\test_inference_async.py::TestInferenceAsync::test_inference_on_connection_async -s
+    @servicePreparer()
+    @pytest.mark.skipif(
+        not is_live_and_not_recording(),
+        reason="Skipped because we cannot record chat completions call with AOAI client",
+    )
+    @recorded_by_proxy_async
+    async def test_inference_on_connection_async(self, **kwargs):
+
+        endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint")
+        print("\n=====> Endpoint:", endpoint)
+
+        connection_name = self.test_inference_params["connection_name"]
+        model_deployment_name = self.test_inference_params["model_deployment_name"]
+        api_version = self.test_inference_params["aoai_api_version"]
+
+        async with AIProjectClient(
+            endpoint=endpoint,
+            credential=self.get_credential(AIProjectClient, is_async=True),
+        ) as project_client:
+
+            print(
+                "[test_inference_on_connection_async] Get an authenticated Azure OpenAI client for a connection AOAI service, and perform a chat completion operation."
+            )
+            async with await project_client.inference.get_azure_openai_client(
+                api_version=api_version, connection_name=connection_name
+            ) as client:
+
+                response = await client.chat.completions.create(
+                    model=model_deployment_name,
+                    messages=[
+                        {
+                            "role": "user",
+                            "content": "How many feet are in a mile?",
+                        },
+                    ],
+                )
+
+                print("Raw dump of response object: ")
+                pprint.pprint(response)
+                print("Response message: ", response.choices[0].message.content)
+                contains = ["5280", "5,280"]
+                assert any(item in response.choices[0].message.content for item in contains)
diff --git a/sdk/ai/azure-ai-projects/tests/test_telemetry.py b/sdk/ai/azure-ai-projects/tests/test_telemetry.py
new file mode 100644
index 000000000000..5716366fc87e
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/tests/test_telemetry.py
@@ -0,0 +1,35 @@
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+
+from azure.ai.projects import AIProjectClient
+from test_base import TestBase, servicePreparer
+from devtools_testutils import recorded_by_proxy, is_live
+
+
+class TestTelemetry(TestBase):
+
+    # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder:
+    # cls & pytest tests\test_telemetry.py::TestTelemetry::test_telemetry -s
+    @servicePreparer()
+    @recorded_by_proxy
+    def test_telemetry(self, **kwargs):
+
+        endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint")
+        print("\n=====> Endpoint:", endpoint)
+
+        with AIProjectClient(
+            endpoint=endpoint,
+            credential=self.get_credential(AIProjectClient, is_async=False),
+        ) as project_client:
+
+            print("[test_telemetry] Get the Application Insights connection string:")
+            connection_string = project_client.telemetry.get_connection_string()
+            assert connection_string
+            if is_live():
+                assert bool(self.REGEX_APPINSIGHTS_CONNECTION_STRING.match(connection_string))
+            else:
+                assert connection_string == "Sanitized-api-key"
+            assert connection_string == project_client.telemetry.get_connection_string()  # Test cached value
+            print("Application Insights connection string = " + connection_string)
diff --git a/sdk/ai/azure-ai-projects/tests/test_telemetry_async.py b/sdk/ai/azure-ai-projects/tests/test_telemetry_async.py
new file mode 100644
index 000000000000..86a96162b97a
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/tests/test_telemetry_async.py
@@ -0,0 +1,36 @@
+# ------------------------------------
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT License.
+# ------------------------------------
+
+from azure.ai.projects.aio import AIProjectClient
+from test_base import TestBase, servicePreparer
+from devtools_testutils.aio import recorded_by_proxy_async
+from devtools_testutils import is_live
+
+
+class TestTelemetryAsync(TestBase):
+
+    # To run this test, use the following command in the \sdk\ai\azure-ai-projects folder:
+    # cls & pytest tests\test_telemetry_async.py::TestTelemetryAsync::test_telemetry_async -s
+    @servicePreparer()
+    @recorded_by_proxy_async
+    async def test_telemetry_async(self, **kwargs):
+
+        endpoint = kwargs.pop("azure_ai_projects_tests_project_endpoint")
+        print("\n=====> Endpoint:", endpoint)
+
+        async with AIProjectClient(
+            endpoint=endpoint,
+            credential=self.get_credential(AIProjectClient, is_async=True),
+        ) as project_client:
+
+            print("[test_telemetry_async] Get the Application Insights connection string:")
+            connection_string = await project_client.telemetry.get_connection_string()
+            assert connection_string
+            if is_live():
+                assert bool(self.REGEX_APPINSIGHTS_CONNECTION_STRING.match(connection_string))
+            else:
+                assert connection_string == "Sanitized-api-key"
+            assert connection_string == await project_client.telemetry.get_connection_string()  # Test cached value
+            print("Application Insights connection string = " + connection_string)
diff --git a/sdk/ai/azure-ai-projects/tsp-location.yaml b/sdk/ai/azure-ai-projects/tsp-location.yaml
index 22421471ffc1..cf6504417959 100644
--- a/sdk/ai/azure-ai-projects/tsp-location.yaml
+++ b/sdk/ai/azure-ai-projects/tsp-location.yaml
@@ -1,4 +1,4 @@
 directory: specification/ai/Azure.AI.Projects
-commit: 07a63adf249cb199d5abd179448c92cd6e3446c8
+commit: c7f02183c56d9539034c3668a6e6cc8eeade55e9
 repo: Azure/azure-rest-api-specs
 additionalDirectories: