## Summary

The Python samples for `FoundryAgent` with local function tools are incorrect and produce a `400 invalid_payload` error at runtime. Additionally, the intended workflow for creating a `PromptAgent` with tools and then invoking it via `FoundryAgent` is unclear.
## Environment

- Package: `agent-framework-foundry`
- Python: 3.12
- Azure AI Projects SDK: latest
- Auth: `AzureCliCredential`
## Reproduction

### Step 1 — Agent creation via `AIProjectClient` (works)

```python
import os

from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool, PromptAgentDefinition, Tool
from azure.identity import AzureCliCredential

func_tool = FunctionTool(
    name="get_weather",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city to get weather for.",
            },
        },
        "required": ["location"],
        "additionalProperties": False,
    },
    description="Get the current weather for a location.",
    strict=True,
)

tools: list[Tool] = [func_tool]

project = AIProjectClient(
    endpoint=os.getenv("FOUNDRY_PROJECT_ENDPOINT"),
    credential=AzureCliCredential(),
)

agent = project.agents.create_version(
    agent_name="TheWeatherAgent1",
    definition=PromptAgentDefinition(
        model="gpt-5.4-mini",
        instructions="Expert weather agent. Answer questions accurately and concisely.",
        tools=tools,
    ),
)
```
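For reference, the `parameters` payload above is plain JSON Schema. A stdlib-only sanity check (no Azure SDK involved; the helper below is hypothetical and simply mirrors that schema) shows which argument shapes the `strict=True` definition accepts:

```python
# Hypothetical helper: validates an arguments dict against the same JSON
# Schema used for the get_weather FunctionTool above (stdlib only, no SDK).
SCHEMA = {
    "type": "object",
    "properties": {
        "location": {"type": "string", "description": "The city to get weather for."},
    },
    "required": ["location"],
    "additionalProperties": False,
}


def check_args(args: dict) -> list[str]:
    """Return a list of violations; an empty list means the payload is valid."""
    errors = []
    for key in SCHEMA["required"]:
        if key not in args:
            errors.append(f"missing required property: {key}")
    for key, value in args.items():
        if key not in SCHEMA["properties"]:
            # rejected because additionalProperties is false
            errors.append(f"unexpected property: {key}")
        elif SCHEMA["properties"][key]["type"] == "string" and not isinstance(value, str):
            errors.append(f"property {key} must be a string")
    return errors


print(check_args({"location": "Paris"}))  # []
print(check_args({"city": "Paris"}))      # missing "location", unexpected "city"
```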
### Step 2 — Invoke via `FoundryAgent` using `tools=` (fails)

Following the documented sample:

```python
import asyncio
import os
from typing import Annotated

from agent_framework.foundry import FoundryAgent
from azure.identity import AzureCliCredential
from dotenv import load_dotenv

load_dotenv()


def get_weather(
    location: Annotated[str, "The city to get weather for."],
) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny, 22°C."


async def main() -> None:
    agent = FoundryAgent(
        project_endpoint=os.getenv("FOUNDRY_PROJECT_ENDPOINT"),
        agent_name="TheWeatherAgent1",
        credential=AzureCliCredential(),
        tools=get_weather,  # <-- as per the docs/samples
    )
    result = await agent.run("What's the weather in Paris?")
    print(f"Agent: {result}")


asyncio.run(main())
```
### Step 2a — Using the `tool` decorator from `agent_framework` (not `FunctionTool` from the AI Projects SDK) (fails)

```python
import asyncio
import os
from typing import Annotated

from agent_framework import tool
from agent_framework.foundry import FoundryAgent
from azure.identity import AzureCliCredential
from dotenv import load_dotenv

load_dotenv()


@tool
def get_weather(
    location: Annotated[str, "The city to get weather for."],
) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny, 22°C."


async def main() -> None:
    agent = FoundryAgent(
        project_endpoint=os.getenv("FOUNDRY_PROJECT_ENDPOINT"),
        agent_name="TheWeatherAgent2",
        credential=AzureCliCredential(),
        tools=[get_weather],
    )
    result = await agent.run("What's the weather in Paris?")
    print(f"Agent: {result}")


if __name__ == "__main__":
    asyncio.run(main())
```
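For context on what the decorator presumably derives from the signature, here is a stdlib-only sketch (my assumption of the mapping, not the actual `@tool` internals) of turning the `Annotated` hints into the function-tool JSON Schema that ends up on the agent version:

```python
from typing import Annotated, get_args, get_origin, get_type_hints


def get_weather(
    location: Annotated[str, "The city to get weather for."],
) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny, 22°C."


# Hypothetical sketch of the Annotated -> JSON Schema mapping; the real
# @tool decorator may differ in details.
def build_parameters_schema(fn) -> dict:
    hints = get_type_hints(fn, include_extras=True)
    props: dict = {}
    required: list[str] = []
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    for name, hint in hints.items():
        if name == "return":
            continue
        if get_origin(hint) is Annotated:
            base, *meta = get_args(hint)
            desc = meta[0] if meta and isinstance(meta[0], str) else ""
        else:
            base, desc = hint, ""
        props[name] = {"type": type_map.get(base, "object"), "description": desc}
        required.append(name)  # no default value -> required
    return {
        "type": "object",
        "properties": props,
        "required": required,
        "additionalProperties": False,
    }


print(build_parameters_schema(get_weather))
```

The output matches the `parameters` block shown in the "Agent in Foundry" dump below, so the decorator itself appears to produce the right schema; the failure is in how the invocation payload is assembled.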
## Error

```text
agent_framework.exceptions.ChatClientException:
<class 'agent_framework_foundry._agent._FoundryAgentChatClient'>
service failed to complete the prompt: Error code: 400 - {
  'error': {
    'code': 'invalid_payload',
    'message': 'Not allowed when agent is specified.',
    'param': 'tools',
    'type': 'invalid_request_error',
    'details': [],
    'additionalInfo': {'request_id': '4b1c3bcff918421c62bc17e266ea0a06'}
  }
}
```
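The body above pins the rejection to the `tools` parameter. When triaging, a small plain-Python helper (hypothetical, no SDK required) makes the relevant fields explicit:

```python
# Hypothetical triage helper: pulls the fields that matter out of the
# error body returned by the service (plain Python, no SDK required).
def summarize_error(body: dict) -> str:
    err = body.get("error", {})
    return f"{err.get('code')}: param={err.get('param')!r} -> {err.get('message')}"


body = {
    "error": {
        "code": "invalid_payload",
        "message": "Not allowed when agent is specified.",
        "param": "tools",
        "type": "invalid_request_error",
    }
}
print(summarize_error(body))
# invalid_payload: param='tools' -> Not allowed when agent is specified.
```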
## Agent in Foundry

```yaml
metadata:
  microsoft.voice-live.enabled: "false"
object: agent.version
id: TheWeatherAgent2:1
name: TheWeatherAgent2
version: "1"
description: ""
created_at: 1775651912
definition:
  kind: prompt
  model: gpt-5.4-mini
  instructions: Expert weather agent demo. Answer questions about the weather accurately and concisely.
  tools:
    - type: function
      name: get_weather
      description: Get the current weather for a location.
      parameters:
        type: object
        properties:
          location:
            type: string
            description: The city to get weather for.
        required:
          - location
        additionalProperties: false
      strict: true
status: active
```
## Questions

1. How should we be configuring this? It doesn't feel right that tools can't be defined alongside the agent invocation, but I appreciate that `FoundryAgent` expects a prompt-based agent to already exist in the service. Are there any plans for MAF to handle the creation of prompt-based agents in Foundry directly?
2. How should tool calls be defined and wired up correctly? The samples provided are incorrect and produce a `400` error — what is the intended pattern?
## Expected Behaviour

The documented sample using `tools=get_weather` on `FoundryAgent` should either:

- work without error (the framework strips local tool schemas from the request when using an agent reference), or
- raise a clear, actionable error message explaining that local tool schemas cannot be sent alongside a server-side agent reference.
## Package Versions

agent-framework-foundry 1.0.0

## Python Version

Python 3.12

## Additional Context

No response