[AgentServer][AF] try mitigate AgentThread with managed service id issue #45174
Conversation
* Tool Client V1 Version
* Langraph integration
* Updates fixing langgraph adapter
* fix build
* fix cspel
* fix cspell
* Add ToolClient integration with Agent Framework and examples for dynamic agent creation
* Made changes to return tools instead of toolclient
* Address comments
---------
Co-authored-by: Lu Sun <[email protected]>
…e-sdk-for-python into lusu/agentserver-1110
* fix streaming issue in af * fix streaming issue in af * update version to 1.0.0b5
…t Handling
* Refactor Azure AI Tool Client Configuration and Enhance OAuth Consent Handling
  - Consolidated the AzureAIToolClientConfiguration class by removing redundant code and improving clarity.
  - Introduced OAuth consent handling in the agent's response methods to manage OAuthConsentRequiredError.
  - Updated the FoundryCBAgent to configure tools endpoint and agent name from environment variables.
  - Enhanced tool client to propagate OAuth consent errors for better handling in the agent.
  - Added methods to generate OAuth request IDs and handle OAuth consent requests in the LangGraph response converter.
  - Updated sample usage to include tool connection ID from environment variables.
  - Incremented version to 1.0.0b5 for the langgraph package.
* Address Pylint and mypy issues
* Updated change logs
* init AGENTS.md
* init AGENTS.md, PLANNING.md and TASK.md
* Task.md template
* Task.md template
* Apply suggestions from code review
  Co-authored-by: Copilot <[email protected]>
* move task.md tempalte instructions to agents.md
---------
Co-authored-by: Copilot <[email protected]>
…ader (#44929) * use git worktree * [agentserver] Attach package metadata to OpenAIResponse.metadata + header * [agentserver] Attach package metadata to OpenAIResponse.metadata + header
…ork (#45004)
* [Hosted Agents] Implement managed checkpoints feature for AgentFramework
* misc: use project_endpoint instead of project_id and foundry_endpoint
* misc: use get_project_endpoint
* misc: refine checkpoint init logic
* misc: Add a mutual-exclusion check
* misc: move the check into workflow_agent_adapter
Some MCP tool servers (e.g. HuggingFace) return tool manifests where certain parameters have an empty string for their JSON Schema 'type' field. This causes a Pydantic ValidationError when the SDK tries to deserialise the ListFoundryConnectedToolsResponse, because SchemaType only accepts the six standard JSON Schema types.

Changes:
- SchemaProperty.type is now Optional[SchemaType] with a model_validator that coerces empty type strings to None.
- FoundryToolClient._create_ai_function skips parameters whose type is None with a warning log, instead of crashing with an AttributeError.
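A minimal sketch of the coercion described above, assuming Pydantic v2; SchemaType and SchemaProperty here are simplified stand-ins, not the SDK's actual model definitions:

```python
# Simplified stand-ins illustrating the "coerce empty type string to None" validator
# pattern described above; the real models live in the agentserver core tools package.
from enum import Enum
from typing import Any, Optional

from pydantic import BaseModel, model_validator


class SchemaType(str, Enum):
    STRING = "string"
    NUMBER = "number"
    INTEGER = "integer"
    BOOLEAN = "boolean"
    ARRAY = "array"
    OBJECT = "object"


class SchemaProperty(BaseModel):
    type: Optional[SchemaType] = None
    description: Optional[str] = None

    @model_validator(mode="before")
    @classmethod
    def _coerce_empty_type(cls, data: Any) -> Any:
        # Some MCP servers emit {"type": ""}; treat that as "type unknown" rather than
        # letting SchemaType validation raise a ValidationError.
        if isinstance(data, dict) and data.get("type") == "":
            data = {**data, "type": None}
        return data


print(SchemaProperty.model_validate({"type": "", "description": "untyped"}).type)  # -> None
```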
…45041) * [Hosted Agents] Implement managed checkpoints feature for Langgraph * misc: preserves all other compile parameters * chore: refactor AgentFramework to remove managed_checkpoints flag * chore: refactor langgraph to remove managed_checkpoints flag
…d langgraph (#45064) * [Hosted Agents] Add foundry checkpoints samples for agentframework and langgraph * docs: Add README files for Foundry checkpoint samples
Co-authored-by: Declan <[email protected]>
Applies the same guard as the agentframework fix (PR #45051) to the LangGraph tool resolver. When an MCP tool manifest contains a property with an empty or unrecognised schema type, the resolver now skips that field with a warning instead of crashing with AttributeError.
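A minimal sketch of that guard, with an illustrative manifest-walking loop; the property shape and names below are assumptions, not the resolver's actual internals:

```python
# Illustrative skip-with-warning guard for tool manifest properties whose JSON Schema
# "type" is empty or missing; names here are hypothetical.
import logging
from typing import Any, Dict

logger = logging.getLogger(__name__)


def collect_usable_parameters(properties: Dict[str, Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
    usable: Dict[str, Dict[str, Any]] = {}
    for name, prop in properties.items():
        if not prop.get("type"):  # empty string or missing type
            logger.warning("Skipping parameter %r: empty or unrecognised schema type", name)
            continue
        usable[name] = prop
    return usable
```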
…rovided (#45103)
* implementing
* implemented conversation item converter
* refined with hitl filter
* update hitl helper to filter out hitl messages
* refined conversation repo and store
* create foundry conversation thread repository by default
* refined repo and store
* refine
* remove unused changes
* remove unused changes
* check conversation id
* update version
* fix minors
* integrate conversation history with langgraph
* use asyncOpenAI client directly
* modify unit test
* save items in core package
* add unittests
* add change log
* minor bug fix
* remove some debug logs
---------
Co-authored-by: Declan <[email protected]>
Pull request overview
This PR updates the AgentServer packages to better handle missing conversation IDs (mitigating managed identity / AgentThread issues), expands support for Foundry tools and managed checkpoints, and refactors LangGraph conversion APIs while adding docs/samples and unit tests.
Changes:
- Make `conversation.id` optional end-to-end (ID generation + persistence paths).
- Add/expand Foundry tools runtime + checkpoint client/repository support and related samples/docs.
- Refactor the LangGraph “state converter” into Response API request/response converters and update streaming event generators accordingly.
Reviewed changes
Copilot reviewed 193 out of 223 changed files in this pull request and generated 14 comments.
| File | Description |
|---|---|
| sdk/agentserver/azure-ai-agentserver-langgraph/samples/human_in_the_loop/.env-template | Adds env template for HITL sample configuration. |
| sdk/agentserver/azure-ai-agentserver-langgraph/samples/custom_state/main.py | Updates sample to new Response API converter pattern for custom state. |
| sdk/agentserver/azure-ai-agentserver-langgraph/samples/custom_state/README.md | Aligns sample documentation with new converter classes and flow. |
| sdk/agentserver/azure-ai-agentserver-langgraph/pyproject.toml | Updates metadata/classifiers/dependencies and build check settings. |
| sdk/agentserver/azure-ai-agentserver-langgraph/doc/azure.ai.agentserver.langgraph.tools.rst | Adds Sphinx doc stub for tools package. |
| sdk/agentserver/azure-ai-agentserver-langgraph/doc/azure.ai.agentserver.langgraph.rst | Adds Sphinx doc stub for langgraph package + toctree. |
| sdk/agentserver/azure-ai-agentserver-langgraph/doc/azure.ai.agentserver.langgraph.models.rst | Adds Sphinx doc stub for models package + submodules. |
| sdk/agentserver/azure-ai-agentserver-langgraph/doc/azure.ai.agentserver.langgraph.models.response_event_generators.rst | Adds Sphinx doc stub for response event generator package. |
| sdk/agentserver/azure-ai-agentserver-langgraph/cspell.json | Extends spelling dictionary for new terms. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/tools/_tool_node.py | Adds tool call wrapper for late-binding Foundry tools. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/tools/_middleware.py | Adds middleware to bind Foundry tools into LangChain/LangGraph tool calls. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/tools/_context.py | Adds tool-resolution context container. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/tools/_builder.py | Adds use_foundry_tools(...) helper API for middleware/model wrapping. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/tools/__init__.py | Exposes Foundry tools helper APIs from the tools package. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_event_generators/response_stream_event_generator.py | Updates streaming generator to use LanggraphRunContext and HITL support. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_event_generators/response_output_text_event_generator.py | Updates generator context type to LanggraphRunContext. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_event_generators/response_output_item_event_generator.py | Adds Interrupt handling + HITL wiring; updates context usage. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_event_generators/response_function_call_argument_event_generator.py | Adds HITL argument extraction for interrupts; updates context usage. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_event_generators/response_event_generator.py | Switches event generator base to LanggraphRunContext. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_event_generators/response_content_part_event_generator.py | Minor formatting adjustment in on_end signature block. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_event_generators/item_resource_helpers.py | Adds helper for interrupt-to-function-call item resources (HITL). |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_api_stream_response_converter.py | Introduces new Response API streaming response converter implementation. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_api_request_converter.py | Adds request converter ABC + item-resource-to-message conversion helper. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_api_converter.py | Introduces Response API converter interface (request + stream/non-stream). |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/langgraph_stream_response_converter.py | Removes old stream converter implementation. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/langgraph_state_converter.py | Removes old state converter abstraction and MessageState converter. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/human_in_the_loop_json_helper.py | Adds JSON HITL helper for LangGraph interrupts ↔ function call items. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/__init__.py | Removes old exports (now replaced by new converter modules). |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/checkpointer/_item_id.py | Adds composite checkpoint item-id encoding/parsing. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/checkpointer/__init__.py | Exposes Foundry checkpoint saver from checkpointer. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/_version.py | Bumps package version to 1.0.0b11. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/_exceptions.py | Adds explicit error for missing conversation id with checkpointer. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/_context.py | Adds LanggraphRunContext for runtime/config resolution. |
| sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/__init__.py | Expands from_langgraph signature; sets current app metadata on import. |
| sdk/agentserver/azure-ai-agentserver-langgraph/README.md | Removes mention of old LanggraphStateConverter usage. |
| sdk/agentserver/azure-ai-agentserver-langgraph/CHANGELOG.md | Adds release history entries up to 1.0.0b11. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/utils/conftest.py | Adds shared tool model builders for core tool tests. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/utils/__init__.py | Initializes utils unit test package. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/runtime/conftest.py | Adds fixtures for runtime tests (mock tool client/user provider). |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/runtime/__init__.py | Initializes runtime unit test package. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/conftest.py | Adds shared fixtures for tools tests (credentials, models, HTTP response mocks). |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/client/test_configuration.py | Adds unit tests for FoundryToolClientConfiguration pipeline policies. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/client/operations/__init__.py | Initializes client operations test package. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/client/__init__.py | Initializes tools client unit test package. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/tools/__init__.py | Initializes tools unit test package. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/server/test_response_metadata.py | Adds tests for response metadata header/attachment behavior. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/server/test_otel_context.py | Adds test for restoring previous OpenTelemetry context. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/server/common/test_foundry_id_generator.py | Adds tests around optional conversation id partition selection. |
| sdk/agentserver/azure-ai-agentserver-core/tests/unit_tests/__init__.py | Initializes core unit test package. |
| sdk/agentserver/azure-ai-agentserver-core/samples/simple_mock_agent/custom_mock_agent_test.py | Updates sample agent instantiation to include project_endpoint. |
| sdk/agentserver/azure-ai-agentserver-core/pyproject.toml | Updates metadata/classifiers/dependency pins and build check settings. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.utils.rst | Adds Sphinx doc stub for core utils. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.tools.utils.rst | Adds Sphinx doc stub for core tools utils. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.tools.runtime.rst | Adds Sphinx doc stub for core tools runtime. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.tools.rst | Adds Sphinx doc stub for core tools + toctree. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.tools.client.rst | Adds Sphinx doc stub for core tools client. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.common.rst | Adds Sphinx doc inclusion for server constants module. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.rst | Expands Sphinx toctree to include application/models/tools/utils. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.models.rst | Adds Sphinx doc stub for core models package. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.models.projects.rst | Adds Sphinx doc stub for projects models. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.models.openai.rst | Adds Sphinx doc stub for openai models. |
| sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.application.rst | Adds Sphinx doc stub for core application module. |
| sdk/agentserver/azure-ai-agentserver-core/cspell.json | Extends spelling dictionary for new terms. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/utils/_credential.py | Adds adapter for sync/async credentials to async token interface. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/utils/__init__.py | Initializes utils package. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/utils/_name_resolver.py | Adds stable tool name resolver for de-duplication. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/utils/__init__.py | Exposes ToolNameResolver. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/runtime/_user.py | Adds user providers and header-based resolution. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/runtime/_starlette.py | Adds Starlette middleware to attach user info per request. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/runtime/_resolver.py | Adds tool invocation resolver abstraction and default implementation. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/runtime/_invoker.py | Adds tool invoker abstraction and default implementation. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/runtime/_facade.py | Adds facade parsing for tool descriptors + connection-id parsing. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/runtime/__init__.py | Initializes tools runtime package. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/client/operations/_base.py | Adds base HTTP operations with error handling and JSON extraction helpers. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/client/_configuration.py | Adds tool client configuration with policies and user-agent. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/client/__init__.py | Initializes tools client package. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/_exceptions.py | Adds typed exceptions for tool resolution/invocation/oauth. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/tools/__init__.py | Exposes core tool client/models/runtime APIs. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/id_generator.py | Adds oauth request id generation. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/foundry_id_generator.py | Makes conversation id optional; adjusts partition logic and id format. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/constants.py | Adds reserved function name constant for HITL. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/agent_run_context.py | Makes conversation id optional; includes tools in deserialization. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/_response_metadata.py | Adds metadata header builder + response/event attachment helpers. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/_context.py | Adds singleton server context for accessing tool runtime. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/_create_response.py | Extends CreateResponse to include tools. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/constants.py | Updates env var constants and adds OTLP protocol/tools endpoints. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/checkpoints/client/operations/_sessions.py | Adds checkpoint session operations (upsert/read/delete). |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/checkpoints/client/operations/__init__.py | Exposes checkpoint operations. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/checkpoints/client/_configuration.py | Adds checkpoint client configuration with policies and auth. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/checkpoints/client/__init__.py | Exposes checkpoint client and models. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/checkpoints/__init__.py | Exposes checkpoint storage module public API. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/application/_options.py | Adds typed options for server/http/tools configuration. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/application/_metadata.py | Adds current-app metadata for headers/user-agent; merge behavior. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/application/_configuration.py | Adds resolved runtime configuration dataclasses. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/application/_builder.py | Adds placeholder AgentServerBuilder. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/application/__init__.py | Exposes application metadata and setters. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_version.py | Bumps package version to 1.0.0b11. |
| sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/__init__.py | Exposes AgentServerContext; keeps logging configuration. |
| sdk/agentserver/azure-ai-agentserver-core/CHANGELOG.md | Adds release history entries up to 1.0.0b11. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/test_human_in_the_loop_helper.py | Adds HITL helper unit tests for message filtering behavior. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/test_from_agent_framework_managed.py | Adds tests ensuring checkpoint repository optionality and passthrough. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/test_foundry_checkpoint_repository.py | Adds tests for Foundry checkpoint repository caching and session creation. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/test_conversation_item_converter.py | Adds tests for conversation item conversion to ChatMessages. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/test_conversation_id_optional.py | Adds tests for missing conversation id handling in memory repos. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/test_agent_framework_input_converter.py | Updates tests for async input conversion API. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/mocks/__init__.py | Adds mocks package exports for tests. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/conftest.py | Moves pytest config into unit_tests and adds import/path workarounds. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/unit_tests/__init__.py | Initializes agentframework unit test package. |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/conftest.py | Removes old conftest location (now under unit_tests). |
| sdk/agentserver/azure-ai-agentserver-agentframework/tests/__init__.py | Removes placeholder tests package marker. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/workflow_with_foundry_checkpoints/main.py | Adds sample for workflow agent using Foundry managed checkpoints. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/workflow_with_foundry_checkpoints/README.md | Documents setup/run/requests for managed checkpoint workflow sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_workflow_agent/workflow_as_agent_reflection_pattern.py | Adds reflection-pattern workflow building blocks for HITL sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_workflow_agent/requirements.txt | Adds dependencies for HITL workflow sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_workflow_agent/main.py | Adds sample implementing HITL workflow agent with checkpointing. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_workflow_agent/.gitignore | Ignores checkpoint directory for sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_workflow_agent/.envtemplate | Adds env template for HITL workflow sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_ai_function/requirements.txt | Adds dependencies for HITL AI function sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_ai_function/main.py | Adds sample showing always-approval AI function with thread persistence. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_ai_function/README.md | Documents setup and HITL feedback loop for AI function sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_ai_function/.gitignore | Ignores local thread storage directory for sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/human_in_the_loop_ai_function/.envtemplate | Adds env template for HITL AI function sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/chat_client_with_foundry_tool/requirements.txt | Adds dependencies for Foundry tools chat-client sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/chat_client_with_foundry_tool/chat_client_with_foundry_tool.py | Adds sample wiring Foundry tools middleware into Agent Framework client. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/chat_client_with_foundry_tool/README.md | Documents Foundry tools middleware sample. |
| sdk/agentserver/azure-ai-agentserver-agentframework/samples/basic_simple/minimal_example.py | Loads env earlier for sample reliability. |
| sdk/agentserver/azure-ai-agentserver-agentframework/pyproject.toml | Updates metadata/classifiers/dependency pins and build check settings. |
| sdk/agentserver/azure-ai-agentserver-agentframework/mypy.ini | Adds mypy config and ignores sample errors. |
| sdk/agentserver/azure-ai-agentserver-agentframework/dev_requirements.txt | Adds pytest-asyncio to dev dependencies. |
| sdk/agentserver/azure-ai-agentserver-agentframework/cspell.json | Extends spelling dictionary for new terms. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/persistence/checkpoint_repository.py | Adds repository abstractions for checkpoint storage backends. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/persistence/_foundry_conversation_thread_repository.py | Adds repository bridging Foundry Conversation to AgentThread persistence. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/persistence/_foundry_checkpoint_repository.py | Adds Foundry-backed checkpoint repository implementation. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/persistence/__init__.py | Exposes persistence APIs for threads/checkpoints. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/models/utils/async_iter.py | Adds async iterable utilities (chunking/peeking). |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/models/utils/__init__.py | Initializes models utils package. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/models/agent_framework_input_converters.py | Makes input conversion async and adds HITL-aware conversion path. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/agent_framework.py | Removes old agent adapter implementation. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/_version.py | Bumps package version to 1.0.0b11. |
| sdk/agentserver/azure-ai-agentserver-agentframework/azure/ai/agentserver/agentframework/_ai_agent_adapter.py | Adds new agent adapter implementation with HITL + OAuth consent handling. |
| sdk/agentserver/azure-ai-agentserver-agentframework/CHANGELOG.md | Adds release history entries up to 1.0.0b11. |
| sdk/agentserver/TASK.md | Adds task tracking file with recent completed work entries. |
| sdk/agentserver/PLANNING.md | Adds repo map and architecture overview documentation. |
| sdk/agentserver/CLAUDE.md | Adds pointer to AGENTS.md via Claude config file. |
Comments suppressed due to low confidence (1)
sdk/agentserver/azure-ai-agentserver-langgraph/azure/ai/agentserver/langgraph/models/response_event_generators/response_function_call_argument_event_generator.py:1
on_end is annotated to return a (bool, List[ResponseStreamEvent]) tuple but returns a list. This mismatch is likely to break the event generator pipeline at runtime. Return a tuple consistent with the base class contract (and ensure callers consistently handle that contract across all generators).
```python
    # ---------------------------------------------------------
        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=self._error_map)
            raise HttpResponseError(response=response)

    def _extract_response_json(self, response: AsyncHttpResponse) -> Any:
        try:
            payload_text = response.text()
            payload_json = json.loads(payload_text) if payload_text else {}
        except AttributeError:
            payload_bytes = response.body()
            payload_json = json.loads(payload_bytes.decode("utf-8")) if payload_bytes else {}
        return payload_json
```
Copilot AI (Feb 12, 2026)
_handle_response_error currently treats any non-200 response as an error, which will incorrectly fail valid responses (e.g., 201/204). Also, _extract_response_json calls AsyncHttpResponse.text() / body() synchronously—these are typically awaitable on Azure Core async transports, so this can return a coroutine and break json.loads. Consider (1) accepting a 2xx range (or an explicit set per operation), (2) using map_error(...) in the standard Azure SDK pattern (raise the mapped exception), and (3) making JSON extraction async and awaiting response content (or using an async json() helper if available).
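A hedged sketch of what those three suggestions could look like together; the class and the accepted-status default below are illustrative, and the async read pattern assumes azure.core.rest.AsyncHttpResponse semantics:

```python
# Sketch only: a base-operations shape that accepts an explicit set of success codes
# and awaits the response body before JSON-decoding it. Not the SDK's implementation.
import json
from typing import Any

from azure.core.exceptions import HttpResponseError, map_error
from azure.core.rest import AsyncHttpResponse


class _BaseOperationsSketch:
    _error_map: dict = {}

    def _handle_response_error(self, response: AsyncHttpResponse, *, expected=(200, 201, 204)) -> None:
        if response.status_code not in expected:
            map_error(status_code=response.status_code, response=response, error_map=self._error_map)
            raise HttpResponseError(response=response)

    async def _extract_response_json(self, response: AsyncHttpResponse) -> Any:
        body = await response.read()  # read() returns the full payload as bytes on async responses
        return json.loads(body) if body else {}
```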
```python
        self, message: Union[AnyMessage, Interrupt], context: LanggraphRunContext, stream_state: StreamEventState
    ) -> tuple[bool, List[project_models.ResponseStreamEvent]]:
        if not self.started:  # should not happen
            return []
```
Copilot AI (Feb 12, 2026)
on_end is annotated to return tuple[bool, List[ResponseStreamEvent]] but returns a bare list when self.started is false. This will likely cause runtime failures wherever the return value is unpacked/consumed. Return a tuple consistent with the contract (e.g., False, []) to keep the stream state machine stable.
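A minimal sketch of the tuple-consistent early return being suggested; the class below is a simplified stand-in for the event generator:

```python
# Stand-in generator showing the early-exit path returning a (finished, events) tuple
# instead of a bare list, per the comment above.
from typing import List, Tuple


class ExampleEventGenerator:
    def __init__(self) -> None:
        self.started = False

    def on_end(self) -> Tuple[bool, List[object]]:
        if not self.started:  # should not happen
            return False, []
        return True, []


print(ExampleEventGenerator().on_end())  # -> (False, [])
```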
```python
    def convert(self, event: Union[AnyMessage, dict, Any, None]):
        try:
            if self.current_generator is None:
                self.current_generator = ResponseStreamEventGenerator(logger, None, hitl_helper=self.hitl_helper)
            if event is None or not hasattr(event, '__getitem__'):
                raise ValueError(f"Event is not indexable: {event}")
            message = event[0]  # expect a tuple
            converted = self.try_process_message(message, self.context)
            return converted
        except Exception as e:
            logger.error(f"Error converting message {event}: {e}")
            raise ValueError(f"Error converting message {event}") from e
```
Copilot AI (Feb 12, 2026)
The current validation (hasattr(event, '__getitem__')) is too broad: a dict is indexable but event[0] will KeyError rather than extracting a (message, ...) tuple element. If this converter expects (message, meta) tuples from LangGraph streams, validate isinstance(event, tuple | list) and length before indexing. Also consider including the original exception message/details in the raised ValueError to aid debugging.
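A minimal sketch of the stricter validation being suggested, assuming LangGraph stream events arrive as (message, metadata) tuples or lists:

```python
# Validate the event container shape before indexing, instead of relying on __getitem__.
from typing import Any


def extract_message(event: Any) -> Any:
    if not isinstance(event, (tuple, list)) or not event:
        raise ValueError(
            f"Expected a non-empty (message, metadata) tuple, got {type(event).__name__}: {event!r}"
        )
    return event[0]
```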
```python
def convert_item_resource_to_message(item: Dict) -> AnyMessage:
    """
    Convert an ItemResource (from AIProjectClient conversation items) to a LangGraph message.

    :param item: The ItemResource dict from AIProjectClient.conversations.items.list().
    :type item: Dict

    :return: The converted LangGraph message.
    :rtype: AnyMessage
    """
```
Copilot AI (Feb 12, 2026)
convert_item_resource_to_message is typed/documented as returning AnyMessage, but it returns None for unsupported item types. This can lead to downstream None values being treated as messages. Consider changing the return type/docstring to Optional[AnyMessage] and ensuring callers explicitly filter out None results.
| logger.warning(f"Unsupported item type '{item_type}' in item resource, skipping") | ||
| return None # type: ignore |
Copilot AI (Feb 12, 2026)
convert_item_resource_to_message is typed/documented as returning AnyMessage, but it returns None for unsupported item types. This can lead to downstream None values being treated as messages. Consider changing the return type/docstring to Optional[AnyMessage] and ensuring callers explicitly filter out None results.
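A minimal sketch of the Optional-return pattern both comments describe, with an illustrative caller that filters out None results; the conversion body is a placeholder:

```python
# Optional return annotation plus explicit None-filtering at the call site.
import logging
from typing import Any, Dict, List, Optional

logger = logging.getLogger(__name__)


def convert_item_resource_to_message(item: Dict) -> Optional[Any]:  # Optional[AnyMessage] in the SDK
    item_type = item.get("type")
    if item_type != "message":  # illustrative supported-type check
        logger.warning("Unsupported item type %r in item resource, skipping", item_type)
        return None
    return item  # placeholder for the real conversion


def convert_items(items: List[Dict]) -> List[Any]:
    converted = (convert_item_resource_to_message(item) for item in items)
    return [message for message in converted if message is not None]
```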
| return f"{_encode(checkpoint_ns)}:{_encode(checkpoint_id)}:{item_type}:{_encode(sub_key)}" | ||
|
|
||
|
|
||
| def parse_item_id(item_id: str) -> ParsedItemId: |
Copilot AI (Feb 12, 2026)
New item-id encoding/parsing logic is introduced without accompanying tests. Add unit tests covering: round-trip make_item_id → parse_item_id (including colon/percent escaping), invalid formats (wrong part count), and invalid item_type values.
```python
    parts = item_id.split(":", 3)
    if len(parts) != 4:
        raise ValueError(f"Invalid item_id format: {item_id}")

    item_type = parts[2]
    if item_type not in ("checkpoint", "writes", "blob"):
        raise ValueError(f"Invalid item_type in item_id: {item_type}")
```
Copilot AI (Feb 12, 2026)
New item-id encoding/parsing logic is introduced without accompanying tests. Add unit tests covering: round-trip make_item_id → parse_item_id (including colon/percent escaping), invalid formats (wrong part count), and invalid item_type values.
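A hedged pytest sketch of the coverage being requested; the import path is taken from the file table above, while make_item_id's parameter order and ParsedItemId's field names are assumptions based on the encoder snippet:

```python
# Sketch of the requested round-trip and invalid-input tests; adjust names to the real API.
import pytest

from azure.ai.agentserver.langgraph.checkpointer._item_id import make_item_id, parse_item_id


def test_item_id_round_trip_with_reserved_characters():
    # Parameter order and parsed field names are assumed from
    # f"{_encode(checkpoint_ns)}:{_encode(checkpoint_id)}:{item_type}:{_encode(sub_key)}".
    item_id = make_item_id("ns:with:colons", "ckpt%25", "writes", "sub:key")
    parsed = parse_item_id(item_id)
    assert parsed.checkpoint_ns == "ns:with:colons"
    assert parsed.checkpoint_id == "ckpt%25"
    assert parsed.item_type == "writes"
    assert parsed.sub_key == "sub:key"


def test_parse_item_id_rejects_wrong_part_count():
    with pytest.raises(ValueError):
        parse_item_id("only:three:parts")


def test_parse_item_id_rejects_unknown_item_type():
    with pytest.raises(ValueError):
        parse_item_id("ns:ckpt:unknown:sub")
```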
```python
class AsyncTokenCredentialAdapter(AsyncTokenCredential):
    """
    AsyncTokenCredential adapter for either:
    - azure.core.credentials.TokenCredential (sync)
    - azure.core.credentials_async.AsyncTokenCredential (async)
    """
```
Copilot AI (Feb 12, 2026)
The async credential adapter is a core interoperability component but currently has no unit tests. Add tests that validate (1) sync TokenCredential.get_token is executed via thread offload, (2) async credentials are awaited directly, and (3) close() behavior works for both sync/async close implementations.
```python
    async def get_token(
        self,
        *scopes: str,
        claims: str | None = None,
        tenant_id: str | None = None,
        enable_cae: bool = False,
        **kwargs: Any,
    ) -> AccessToken:
```
Copilot AI (Feb 12, 2026)
The async credential adapter is a core interoperability component but currently has no unit tests. Add tests that validate (1) sync TokenCredential.get_token is executed via thread offload, (2) async credentials are awaited directly, and (3) close() behavior works for both sync/async close implementations.
```python
            enable_cae=enable_cae,
            **kwargs)

    async def close(self) -> None:
```
Copilot AI (Feb 12, 2026)
The async credential adapter is a core interoperability component but currently has no unit tests. Add tests that validate (1) sync TokenCredential.get_token is executed via thread offload, (2) async credentials are awaited directly, and (3) close() behavior works for both sync/async close implementations.
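A hedged pytest sketch of the adapter tests being requested; the import path comes from the file table above, and the adapter's constructor signature is an assumption:

```python
# Sketch of the requested credential-adapter tests; requires pytest-asyncio (added to
# dev_requirements.txt in this PR). Fake credentials stand in for azure-identity types.
import pytest
from azure.core.credentials import AccessToken

from azure.ai.agentserver.core.utils._credential import AsyncTokenCredentialAdapter


class _SyncCredential:
    def get_token(self, *scopes, **kwargs):
        return AccessToken("sync-token", 1234567890)

    def close(self):
        self.closed = True


class _AsyncCredential:
    closed = False

    async def get_token(self, *scopes, **kwargs):
        return AccessToken("async-token", 1234567890)

    async def close(self):
        self.closed = True


@pytest.mark.asyncio
async def test_sync_credential_get_token_is_adapted():
    adapter = AsyncTokenCredentialAdapter(_SyncCredential())  # constructor shape assumed
    token = await adapter.get_token("https://example.invalid/.default")
    assert token.token == "sync-token"


@pytest.mark.asyncio
async def test_async_credential_is_awaited_and_closed():
    credential = _AsyncCredential()
    adapter = AsyncTokenCredentialAdapter(credential)
    token = await adapter.get_token("https://example.invalid/.default")
    assert token.token == "async-token"
    await adapter.close()
    assert credential.closed
```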
API Change Check: APIView identified API level changes in this PR and created the following API reviews.
Description
Please add an informative description that covers the changes made by the pull request and link all relevant issues.
If an SDK is being regenerated based on a new API spec, a link to the pull request containing these API spec changes should be included above.
All SDK Contribution checklist:
General Guidelines and Best Practices
Testing Guidelines