
C# AI Agent - Part 2: Tool Calling and Cloud Models


In Part 1, we built the foundation and wired up a chat endpoint to the local Ollama model. We can already talk with the model, but it doesn’t know anything about our HVAC system yet.

In this part, we will give the agent tools to interact with our system and establish ground rules for it to follow. We will also connect to OpenAI and use a paid model for better results: as the experiments below show, the local model is not quite up to complex reasoning, and I don’t have the hardware resources to run a larger one locally.

[!TIP] You can find the complete source code for this part on the GitHub Branch: csharp-ai-agent-part-2-tool-calling.

Adding Tools to the Agent

To make our agent useful, we need to bridge the gap between natural language and our backend code. First, we create a simple API Client to interact with our backend.

public class ClimateCoreApiClient(HttpClient httpClient)
{
    public async Task<List<DeviceDto>?> GetDevicesAsync() =>
        await httpClient.GetFromJsonAsync<List<DeviceDto>>("api/devices");
    // ... other methods
}
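The article doesn’t show the client registration here, but a minimal sketch might look like the following. The config key mirrors the `ClimateCoreApi__BaseUrl` environment variable set in compose.yaml; the fallback URL is an assumption for local runs.

```csharp
// Program.cs (sketch): register ClimateCoreApiClient as a typed HttpClient.
// The base address comes from configuration (ClimateCoreApi__BaseUrl in
// compose.yaml maps to the "ClimateCoreApi:BaseUrl" key).
builder.Services.AddHttpClient<ClimateCoreApiClient>(client =>
{
    client.BaseAddress = new Uri(
        builder.Configuration["ClimateCoreApi:BaseUrl"]
        ?? "http://localhost:8080/"); // hypothetical local fallback
});
```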

Next, we create a Tool Service. This exposes the specific actions we want the agent to be able to perform. Note the use of [Description] attributes; this is crucial, as the AI reads these descriptions to understand when and how to use the function.

public class ClimateCoreToolService(ClimateCoreApiClient apiClient)
{
    [Description("Gets a list of all available HVAC devices, including their IDs, names, and locations.")]
    public async Task<List<DeviceDto>?> GetDevices() => 
        await apiClient.GetDevicesAsync();

    [Description("Gets the real-time status of a specific device, including current temp and compressor state.")]
    public async Task<DeviceStatusDto?> GetDeviceStatus(int deviceId) => 
        await apiClient.GetDeviceStatusAsync(deviceId);

    [Description("Gets historical telemetry (temperature and activity) for a device over the last X days.")]
    public async Task<List<TelemetryDto>?> GetDeviceTelemetry(int deviceId, int days) => 
        await apiClient.GetDeviceTelemetryAsync(deviceId, days);
// ...
}

Finally, we use reflection to dynamically generate the AITool definitions that the Chat Client can understand:

public static class ToolsExtensions
{
    public static IEnumerable<AITool> GetTools(this IServiceProvider sp)
    {
        var toolService = sp.GetRequiredService<ClimateCoreToolService>();

        var methods = typeof(ClimateCoreToolService)
            .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
            .Where(m => m.GetCustomAttribute<DescriptionAttribute>() != null);

        foreach (var method in methods)
        {
            yield return AIFunctionFactory.Create(
                method, 
                toolService, 
                new AIFunctionFactoryOptions
                {
                    Name = ToSnakeCase(method.Name),
                });
        }
    }

    private static string ToSnakeCase(string text)
    {
        if (string.IsNullOrEmpty(text)) return text;
        return Regex.Replace(text, "([a-z])([A-Z])", "$1_$2").ToLower();
    }
}

With this setup, the agent now has access to all the tools we defined in ClimateCoreToolService with proper descriptions. We can now ask our agent what tools it has access to:

POST http://localhost:8081/chat
Content-Type: application/json

[
  {
    "Role": "user",
    "Contents": [
      {
        "$type": "text",
        "Text": "What tools can you use?"
      }
    ]
  }
]

The agent will respond with a list of tools it can use, along with their descriptions. Try asking for the temperature in a specific room or the status of a device to see how it uses the tools to gather information.

Give the Agent some context

If we don’t provide any guidance, the agent may not behave as we expect. To ensure it follows our desired behavior, we can define a system prompt.

I have added a SystemPrompt.txt which contains some ground rules for the agent to follow:

### ROLE
You are the ClimateCore Facility AI, an intelligent building management assistant. Your job is to monitor and control HVAC devices using the tools provided to you.

### CORE INSTRUCTIONS
1. **ALWAYS Use Tools**: Never guess the state of a device. If a user asks "What is the temp in the Office?", you must call `get_devices` to find the Office ID, then `get_device_status`.

I also inject dynamic information, such as the current date, so the Agent can reason about time-based requests:

var systemPromptPath = Path.Combine(AppContext.BaseDirectory, "SystemPrompt.txt");
var systemPromptText = File.Exists(systemPromptPath) 
    ? await File.ReadAllTextAsync(systemPromptPath) 
    : "You are a helpful AI assistant for managing HVAC devices.";

app.MapPost("/chat", async (
    List<ChatMessage> messages, 
    IChatClient client, 
    ChatOptions chatOptions) =>
{
    var staticPrompt = systemPromptText; 

    var dynamicContext = $"""
                          [CURRENT CONTEXT]
                          Server Time: {DateTime.Now:yyyy-MM-dd HH:mm:ss} (Local)
                          Day of Week: {DateTime.Now.DayOfWeek}
                          Active User: Admin
                          """;

    var systemMessage = new ChatMessage(ChatRole.System, staticPrompt + "\n\n" + dynamicContext);
    
    var fullHistory = new List<ChatMessage> { systemMessage };
    fullHistory.AddRange(messages);

    var response = await client.GetResponseAsync(fullHistory, chatOptions);
    
    return Results.Ok(response.Messages);
});

app.Run();

I inject ChatOptions into the endpoint; these options contain the tools we created earlier. Also, the endpoint prepends the system message to the chat history. Of course, we are not sending anything complex from the client side yet because we will need something more than simple HTTP calls for a full chat interface.
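The article doesn’t show how `ChatOptions` gets into the DI container at this point, but one possible wiring (a sketch, not necessarily the exact code from the repo) builds it once from the reflected tools:

```csharp
// Program.cs (sketch): register the tool service, then build a single
// ChatOptions instance from the tools produced by our GetTools() extension,
// so the /chat endpoint can take ChatOptions as a dependency.
builder.Services.AddSingleton<ClimateCoreToolService>();
builder.Services.AddSingleton(sp => new ChatOptions
{
    Tools = sp.GetTools().ToList()
});
```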

Local is good for privacy, but…

With the current setup, we can interact with our HVAC system using AI, but there are some key considerations. While local models like Llama 3.2 are impressive for their size, they often struggle when the heat is on (pun intended): they hallucinate device IDs instead of looking them up, and they sometimes describe a tool call as text rather than actually executing it.

To address these, we can switch to using OpenAI’s GPT-4o-mini model while still keeping our tools and system prompt intact. This way we get the best of both worlds: powerful AI capabilities from OpenAI and custom tools to interact with our HVAC system.

We will make a simple change to the compose.yaml file to add the OpenAI API key as an environment variable:

# ...
climatecore.agent:
    image: climatecore.agent
    build:
      context: .
      dockerfile: ClimateCore.Agent/Dockerfile
    environment:
      ASPNETCORE_HTTP_PORTS: "8080"
      ClimateCoreApi__BaseUrl: "http://climatecore.api:8080"
#     AI__Provider: "Ollama"
#     Ollama__Endpoint: "http://host.docker.internal:11434"
#     Ollama__ModelId: "llama3.1"
#     Ollama__Endpoint: "http://climatecore.llm:11434"
#     Ollama__ModelId: "llama3.2"
      AI__Provider: "OpenAI"
      OpenAI__ApiKey: "YOUR_OPENAI_API_KEY_HERE"
      OpenAI__ModelId: "gpt-4o-mini"
# ...
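On the C# side, the `AI__Provider` value can drive which `IChatClient` gets registered. The following is a sketch under the assumption that the project uses the Microsoft.Extensions.AI abstractions together with the official OpenAI SDK (`AsIChatClient()` extension) and the Ollama chat client package; the actual repo may wire this differently.

```csharp
// Program.cs (sketch): pick the chat client based on AI__Provider.
// Assumes Microsoft.Extensions.AI, Microsoft.Extensions.AI.OpenAI,
// and an Ollama IChatClient implementation are referenced.
builder.Services.AddSingleton<IChatClient>(sp =>
{
    var config = sp.GetRequiredService<IConfiguration>();

    IChatClient inner = config["AI:Provider"] switch
    {
        "OpenAI" => new OpenAI.Chat.ChatClient(
                config["OpenAI:ModelId"]!, config["OpenAI:ApiKey"]!)
            .AsIChatClient(),
        _ => new OllamaChatClient(
            new Uri(config["Ollama:Endpoint"]!), config["Ollama:ModelId"]!),
    };

    // UseFunctionInvocation() makes the pipeline execute tool calls
    // automatically instead of returning them to the caller.
    return new ChatClientBuilder(inner).UseFunctionInvocation().Build();
});
```

Because the tools and system prompt are attached at the `IChatClient`/`ChatOptions` level, nothing else in the agent needs to change when switching providers.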

The Difference is Night and Day

Once you restart the container with the new brain, the change in behavior is immediate. Here is a comparison of how the two models handle the same complex request:

| Request | Local Llama 3.2 Response | OpenAI GPT-4o-mini Response |
| --- | --- | --- |
| "Set the temperature to 31 degrees Celsius." | "Setting device ID 999 to 31°C." (invented ID, a failure) | "The requested temperature of 31°C is outside the standard range. Please confirm if you would like to proceed with this setting." |
| "Please set the bedroom to maximum 18 degrees. The basement should be lower while the rest of the house can be around 20." | "To set the bedroom to a maximum of 18 degrees, I will call set_setpoint with the Bedroom ID and the new temperature."<br>`{"name": "set_setpoint", "parameters": {"id": 3, "temperature": 18}}`<br>(emitted the call as text instead of executing it) | "The temperature settings have been updated as follows:<br>- Master Bedroom: Set to 18°C<br>- Basement: Set to 16°C<br>- Living Room: Set to 20°C<br>- Kitchen: Set to 20°C<br>- Attic: Set to 20°C<br>If you need any further adjustments, feel free to ask!" |

The OpenAI model demonstrates a much better understanding of context, safety, and tool usage. It questions unsafe requests and correctly chains multiple tool calls to fulfill complex instructions.

Next Steps

With the agent now capable of using tools and reasoning effectively, we have a solid foundation for building more advanced features. We will see where this takes us in the next part of the series!

