Advanced Custom Tools and Semantic Kernel Orchestration
In our previous posts, we built an agent that can "think" (Models) and "remember" (Knowledge). But for an agent to be truly useful in an enterprise, it needs Agency—the ability to execute code, call APIs, and manipulate real-world data.
1. "Action" Ecosystem: Choosing Your Tooling
As an AI Engineer, you must decide how your agent interacts with external systems. In 2026, the AI-102 exam focuses heavily on choosing the most efficient integration pattern:
A. Custom Function Calling (Stateless)
Best for: Performing quick calculations or formatting data within your local Python/C# app.
- Mechanism: You describe the function's schema to the agent. The agent doesn't "run" the code; it returns a JSON object telling your app to run it and send the result back.
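That round trip can be sketched without any SDK. In this illustrative snippet (the function name, schema, and JSON shape are assumptions, not a specific vendor's wire format), the "model output" is a JSON object naming a function; the app dispatches it and sends the result back:

```python
import json

# Hypothetical local function the agent may request (illustrative only)
def convert_currency(amount: float, rate: float) -> float:
    """Multiply an amount by an exchange rate."""
    return round(amount * rate, 2)

# Registry mapping tool names (as described to the model) to callables
TOOLS = {"convert_currency": convert_currency}

def dispatch_tool_call(model_output: str) -> str:
    """Run the function the model asked for and return the result as JSON."""
    call = json.loads(model_output)        # e.g. {"name": ..., "arguments": {...}}
    func = TOOLS[call["name"]]
    result = func(**call["arguments"])
    return json.dumps({"result": result})  # sent back to the model as the tool output

# Simulated model response requesting a tool call
model_output = '{"name": "convert_currency", "arguments": {"amount": 100.0, "rate": 0.92}}'
print(dispatch_tool_call(model_output))    # {"result": 92.0}
```

The key point: the model never executes anything. Your app owns the dispatch loop, which is exactly what frameworks like Semantic Kernel automate for you.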
B. Azure Functions (Stateful & Event-Driven)
Best for: Long-running tasks, database writes, or secure backend operations.
- Mechanism: The agent triggers a serverless endpoint. This is the gold standard for Enterprise Scale because it offloads compute from the AI's reasoning loop.
C. OpenAPI 3.0 Specified Tools
Best for: Connecting to existing REST APIs (e.g., Shopify, Stripe, or a custom CRM).
- Mechanism: You simply upload the Swagger/OpenAPI spec. The agent automatically maps natural language intent to specific API endpoints without you writing a single "glue" function.
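For illustration, here is roughly what a minimal OpenAPI 3.0 operation looks like, expressed as a Python dict (the CRM endpoint and field values are hypothetical). The `summary`, `operationId`, and parameter names are the "documentation" the agent reads to map natural-language intent onto the endpoint:

```python
# A minimal OpenAPI 3.0 spec (as a Python dict) for a hypothetical CRM endpoint.
# An agent given this spec can map "look up order 42" to GET /orders/{orderId}
# purely from the descriptive fields below.
openapi_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Contoso CRM", "version": "1.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "operationId": "getOrder",
                "summary": "Retrieve a single order by its ID.",
                "parameters": [{
                    "name": "orderId",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "integer"},
                }],
                "responses": {"200": {"description": "The order details."}},
            }
        }
    },
}

# These descriptive fields are what the model reads to choose this endpoint
print(openapi_spec["paths"]["/orders/{orderId}"]["get"]["operationId"])  # getOrder
```

Vague summaries and cryptic `operationId` values degrade tool selection the same way a vague function description does, so treat the spec as prompt material, not just plumbing.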
D. Newcomer: Model Context Protocol (MCP)
The MCP tool is a new addition to the Foundry Agent Service. It allows your agent to connect to a standardized "MCP Server" that can host hundreds of tools at once, making it incredibly easy to swap toolsets across different agents.
2. Orchestration with Semantic Kernel (SK)
When your project moves beyond a simple script, you need a framework. Semantic Kernel is Microsoft’s premier SDK for integrating LLMs with conventional programming.
The Agentic Hierarchy in Semantic Kernel
In the SDK, you’ll encounter three primary agent classes. Knowing when to use which is a common AI-102 scenario:
| Agent Class | Best Use Case | Infrastructure |
|---|---|---|
| `ChatCompletionAgent` | Lightweight, stateless bots. | Requires manual history management. |
| `OpenAIAssistantAgent` | Native OpenAI-specific features. | Limited to OpenAI models only. |
| `AzureAIAgent` | The enterprise choice. | Uses Foundry Agent Service; supports Llama, Mistral, and GPT. |
3. Deep Dive: The AzureAIAgent Class
The `AzureAIAgent` is the most powerful tool in your kit because it inherits the benefits of the Foundry Agent Service:
- Automatic Tool Invocation: You don't have to write the loop that checks if the model wants to call a function; the SDK handles the "Reasoning Loop" for you.
- Managed Threads: It uses the `AzureAIAgentThread`, meaning the conversation history is stored securely in Azure (often backed by Cosmos DB in standard setups), not in your app's RAM.
Setup Sequence
To use an AzureAIAgent, follow this logical sequence:
1. Initialize Settings: Create an `AzureAIAgentSettings` object to fetch your project’s connection string.
2. Client Creation: Instantiate the `AgentsClient` (the connection to Azure).
3. Agent Definition: Define the `AzureAIAgent` with its persona and toolset.
4. Supercharging Agents with Plugins
In Semantic Kernel, tools are encapsulated as Plugins. A plugin is more than just a function; it’s a "skill" the agent can master.
The Plugin Pattern
1. Define: Create a class (e.g., `EmailPlugin`).
2. Annotate: Use the `@kernel_function` decorator. Crucial: the description is the documentation the AI reads to decide whether to call the function. If your description is vague, the agent will fail to use the tool.
3. Attach: Add the plugin to the agent's `plugins` collection.
Example Scenario: If you have a function called `get_tasks`, and the user says "What's on my plate today?", the agent uses its reasoning capability to map "on my plate" to the `get_tasks` function automatically.
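A minimal, SDK-free sketch makes that mapping concrete. The decorator below is a stand-in for `@kernel_function`, and a trivial keyword matcher stands in for the LLM's reasoning (a real model reads the descriptions semantically; everything here is illustrative):

```python
def tool(description: str):
    """Stand-in for @kernel_function: attach a description the 'model' can read."""
    def wrapper(func):
        func.description = description
        return func
    return wrapper

class TaskPlugin:
    @tool(description="Returns the user's open tasks for today.")
    def get_tasks(self):
        return ["file expense report", "review PR"]

    @tool(description="Deletes a single task by its name.")
    def delete_task(self, name: str):
        return f"deleted {name}"

def pick_tool(plugin, keyword: str):
    """Trivial stand-in for the LLM: pick the tool whose description mentions the keyword."""
    for attr in vars(type(plugin)).values():
        if callable(attr) and keyword in getattr(attr, "description", ""):
            return attr
    return None

# "What's on my plate today?" -> intent: the user's tasks -> matches get_tasks
chosen = pick_tool(TaskPlugin(), "tasks")
print(chosen.__name__)  # get_tasks
```

Notice that the selection depends entirely on the description text, which is why a vague description breaks tool use even when the code itself is correct.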
5. Security & Governance: The "Foundry Control Plane"
In a production environment (and on the exam), security is paramount.
- Keyless Authentication: We use Microsoft Entra ID (Managed Identities) so our agents can call Azure Functions or SQL Databases without needing to manage connection strings.
- Responsible AI Filters: Every tool call is passed through Azure AI Content Safety to ensure the agent doesn't generate harmful code or access unauthorized data.
Transitioning to Multi-Agent Systems
The real magic happens when you connect multiple AzureAIAgents together using Semantic Kernel. You might have one agent specialized in Knowledge Retrieval (using Azure AI Search) and another specialized in Action (using Custom Plugins).
By mastering the AzureAIAgent class and the Plugin architecture, you are building a system that doesn't just answer questions—it solves problems.
Lab Section: Orchestrating Agents with Semantic Kernel and Custom Plugins
In this lab, you will build an Expense Reporting Agent. This agent doesn't just read data; it uses a custom-coded EmailPlugin to simulate sending an itemized claim to a corporate inbox. This represents a real-world "Action" tool integration.
1. Project Files & Infrastructure
Create a new directory for this lab. You will need three specific files to ensure the Semantic Kernel can communicate with your Azure AI Foundry project.
A. The .env File
Ensure your environment variables match the expected keys for the AzureAIAgentSettings class.
```
# Connection string from your Azure AI Foundry Project
AZURE_AI_AGENT_PROJECT_CONNECTION_STRING="your_connection_string_here"

# The name of your deployed model (e.g., gpt-4o or gpt-4.1)
AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME="gpt-4.1"
```
B. The data.txt File
This is the raw data our agent will process.
```
date,description,amount
07-Mar-2025,taxi,24.00
07-Mar-2025,dinner,65.50
07-Mar-2025,hotel,125.90
```
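It helps to know the expected answer before the agent computes it. This plain-Python sketch (no SDK required) parses the same three rows and totals the amounts, so you can verify the itemized email the agent produces:

```python
import csv
import io

# Same contents as data.txt above
raw = """date,description,amount
07-Mar-2025,taxi,24.00
07-Mar-2025,dinner,65.50
07-Mar-2025,hotel,125.90
"""

rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(float(r["amount"]) for r in rows)
print(f"{len(rows)} items, total {total:.2f}")  # 3 items, total 215.40
```

If the agent's email reports anything other than three line items summing to 215.40, the model misread the data or the plugin description.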
2. The Implementation (semantic-kernel.py)
This script uses asynchronous Python to manage the agent's lifecycle. Note how we define the EmailPlugin with @kernel_function decorators—this is how the agent "understands" what the tool does.
```python
import os
import asyncio
from pathlib import Path
from typing import Annotated

# Add references
from dotenv import load_dotenv
from azure.identity.aio import DefaultAzureCredential
from semantic_kernel.agents import AzureAIAgent, AzureAIAgentSettings, AzureAIAgentThread
from semantic_kernel.functions import kernel_function


# Create a Plugin for the email functionality
class EmailPlugin:
    """A Plugin to simulate email functionality."""

    @kernel_function(description="Sends an email to a specified recipient with a subject and body.")
    def send_email(self,
                   to: Annotated[str, "The recipient's email address"],
                   subject: Annotated[str, "The subject line of the email."],
                   body: Annotated[str, "The full itemized text body of the email."]):
        print("-" * 30)
        print("OUTGOING EMAIL SIMULATION:")
        print(f"To: {to}")
        print(f"Subject: {subject}")
        print(f"Body:\n{body}")
        print("-" * 30)


async def process_expenses_data(prompt, expenses_data):
    # 1. Load configuration and credentials
    load_dotenv()
    ai_agent_settings = AzureAIAgentSettings()

    # 2. Connect to the Azure AI Foundry Project via Agent Service
    async with (
        DefaultAzureCredential(
            exclude_environment_credential=True,
            exclude_managed_identity_credential=True) as creds,
        AzureAIAgent.create_client(credential=creds) as project_client,
    ):
        # 3. Define the Agent's persona and logic
        expenses_agent_def = await project_client.agents.create_agent(
            model=ai_agent_settings.model_deployment_name,
            name="expenses_agent",
            instructions="""You are an AI assistant for expense claim submission.
            Analyze the provided expenses data. When requested, use the send_email
            tool to send an itemized list and the total sum to expenses@contoso.com.
            Confirm to the user once the action is completed."""
        )

        # 4. Wrap the definition into a Semantic Kernel AzureAIAgent
        expenses_agent = AzureAIAgent(
            client=project_client,
            definition=expenses_agent_def,
            plugins=[EmailPlugin()]  # Attach our custom tool here
        )

        # 5. Initialize a managed Thread
        thread: AzureAIAgentThread = AzureAIAgentThread(client=project_client)

        try:
            # Combine the user prompt with the data for context
            prompt_messages = [f"{prompt}: {expenses_data}"]

            # 6. Invoke the agent (handles the reasoning loop automatically)
            response = await expenses_agent.get_response(thread_id=thread.id, messages=prompt_messages)

            # Display the final interaction
            print(f"\n# {response.name} Response:\n{response}")
        except Exception as e:
            print(f"Error during agent execution: {e}")
        finally:
            # Clean up resources in Azure
            if thread:
                await thread.delete()
            await project_client.agents.delete_agent(expenses_agent.id)


async def main():
    os.system('cls' if os.name == 'nt' else 'clear')

    script_dir = Path(__file__).parent
    file_path = script_dir / 'data.txt'
    with file_path.open('r') as file:
        data = file.read() + "\n"

    user_prompt = input(f"Current Expenses:\n{data}\nWhat should I do? (e.g., 'Submit my claim')\n")
    await process_expenses_data(user_prompt, data)


if __name__ == "__main__":
    asyncio.run(main())
```
3. Environment & Execution Commands
Follow these steps exactly to set up your Python environment and run the code.
Step 1: Create and Activate Virtual Environment
```powershell
# Create the environment
python -m venv my-env

# Activate (Windows PowerShell)
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
.\my-env\Scripts\activate
```
Step 2: Install Required Packages
```powershell
# Install the core SDK and the Azure-specific Semantic Kernel extensions
pip install python-dotenv azure-identity semantic-kernel[azure]
```
Step 3: Verify and Run
```powershell
# Run the script
python .\semantic-kernel.py
```
4. Why this matters
This lab covers three advanced "Agentic" patterns that are high-priority for the 2026 AI-102 exam:
- Orchestration Frameworks: You demonstrated how to use Semantic Kernel to wrap an Azure AI Agent.
- Custom Function Calling (Plugins): You created a `kernel_function` with `Annotated` metadata that allows the LLM to understand how to map "itemized expenses" to the `body` parameter.
- Clean Lifecycle Management: You implemented an `async` workflow that handles initialization, execution, and resource cleanup (deleting threads and agents) to manage Azure costs.
