
C# AI Agent - Part 1: The Foundation


I’m starting a new series to document my journey building AI agents in C#. The goal isn’t just to make a console app that calls an API, but to build a ‘real-world’ project: an IoT environment that we can eventually control through natural language.

To keep things easy to follow, I’m using a branch-per-part strategy. You can jump into any part of the series without having to set up the previous ones from scratch.

[!TIP] You can find the complete source code for this part on the GitHub Branch: csharp-ai-agent-part-1-foundation.

The High-Level Architecture

To bring this to life, we’ll build a small set of services wired together. While everything can run in Docker, running an LLM inside a container on macOS (or without a dedicated NVIDIA GPU) can be painfully slow due to lack of GPU passthrough.

For the best experience, we will use a hybrid architecture: the LLM runs natively on the host, while the rest of the stack runs in Docker.

I will comment out the Docker Ollama service in the compose file to avoid confusion, but you can uncomment it if you want to try running everything in Docker.

The components are:

- The “Brain”: a local Ollama instance serving the LLM natively on the host
- ClimateCore.Api: an ASP.NET Core API, backed by Postgres, that manages devices and telemetry
- ClimateCore.Agent: the service that talks to the LLM (and, later in the series, to the API)

The “Brain” (Local Ollama)

Before we touch any C# code, let’s get the LLM running natively. This ensures we get fast inference speeds instead of the crawl you might experience inside Docker.

On macOS

We can use Homebrew to install the Ollama app, which gives us a nice menu bar interface and background service management.

brew install --cask ollama-app

Once installed, open your terminal and pull the model we’ll be using. We will use llama3.1, but you can swap it out for any other model Ollama supports.

# This downloads the model and starts an interactive chat session
ollama run llama3.1
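
If you’d rather download the model without opening a chat session, ollama pull does the same download, and ollama list confirms what is available locally:

# Download only, then verify the model shows up
ollama pull llama3.1
ollama list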

Docker Services

Now we spin up the rest of the infrastructure. We need:

- A Postgres database (climatecore.database)
- The ClimateCore API (climatecore.api), which talks to the database
- The ClimateCore Agent (climatecore.agent), which talks to the LLM

We tell the Agent inside Docker to look for the LLM on the host machine using the special DNS name host.docker.internal.

services:
  climatecore.database:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "strongpassword123!"
      POSTGRES_DB: "climatecoredb"
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U postgres -d climatecoredb" ]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - climate_network
  climatecore.api:
    image: climatecore.api
    build:
      context: .
      dockerfile: ClimateCore.Api/Dockerfile
    environment:
      ASPNETCORE_HTTP_PORTS: "8080"
      Database__Host: "climatecore.database"
      Database__Name: "climatecoredb"
      Database__User: "postgres"
      Database__Password: "strongpassword123!"
    ports:
      - "8080:8080"
    depends_on:
      climatecore.database:
        condition: service_healthy
    networks:
      - climate_network
  climatecore.agent:
    image: climatecore.agent
    build:
      context: .
      dockerfile: ClimateCore.Agent/Dockerfile
    environment:
      ASPNETCORE_HTTP_PORTS: "8080"
      ClimateCoreApi__BaseUrl: "http://climatecore.api:8080"
#     LLM__Endpoint: "http://climatecore.llm:11434"
#     LLM__ModelId: "llama3.1"
      LLM__Endpoint: "http://host.docker.internal:11434"
      LLM__ModelId: "llama3.1"
    ports:
      - "8081:8080"
    depends_on:
      climatecore.api:
        condition: service_started
#     climatecore.llm:
#       condition: service_healthy
    networks:
      - climate_network
#  climatecore.llm:
#    image: ollama/ollama:latest
#    volumes:
#      - ollama_storage:/root/.ollama
#    healthcheck:
#      test: ["CMD", "ollama", "list"]
#      interval: 10s
#      retries: 5
#    networks:
#      - climate_network
#  ollama-puller:
#    image: curlimages/curl:latest
#    networks:
#      - climate_network
#    depends_on:
#      climatecore.llm:
#        condition: service_healthy
#    entrypoint: [ "curl", "-f", "http://climatecore.llm:11434/api/pull", "-d", "{\"name\": \"llama3.1\"}" ]

networks:
  climate_network:

#volumes:
#  ollama_storage:

From the repo root:

docker compose up --build

This starts Postgres, builds the API and Agent, and connects them. The Agent will automatically reach out to the local Ollama instance.
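
Before moving on, two quick checks confirm the stack is healthy (the second assumes Ollama is listening on its default port 11434 on the host):

# All three containers should be up
docker compose ps

# The host-side Ollama API should respond with the models you have pulled
curl http://localhost:11434/api/tags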


Try It With HTTP Calls

There is no frontend yet, so use the built-in .http collections.

API (devices + telemetry)

Open ClimateCore.Api/ApiCollection.http and run the device and telemetry requests.

Agent (chat to local LLM)

Open ClimateCore.Agent/ClimateCore.Agent.http and run the chat request.

That single endpoint is the full integration path: API call in, LLM response out. It is intentionally simple so we can build tools and orchestration on top of it without mystery.
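
If you prefer to craft the call by hand, a minimal chat request looks roughly like the sketch below. It assumes the default Microsoft.Extensions.AI JSON shape for ChatMessage (a role plus a list of typed content parts) and uses the Agent’s host port 8081 from the compose file; if the request in the repo’s .http collection differs, prefer that one.

POST http://localhost:8081/chat
Content-Type: application/json

[
  {
    "role": "user",
    "contents": [
      { "$type": "text", "text": "Hello! Are you online?" }
    ]
  }
]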


The Foundation Code

To wrap things up, here is the C# code that powers the Agent endpoint. We use OllamaSharp together with the Microsoft.Extensions.AI abstractions (IChatClient, ChatClientBuilder).
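
If you are building the projects from scratch instead of checking out the branch, the two NuGet packages involved should be these (assuming the current published package names):

dotnet add package OllamaSharp
dotnet add package Microsoft.Extensions.AI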

The Agent Endpoint

Inside ClimateCore.Agent, we use OllamaSharp to talk to our Ollama instance, whether it runs natively on the host or in Docker. In this first part, it acts as a simple pass-through to verify the connection is alive and the model is responding.

LlmServiceExtensions.cs

public static class LlmServiceExtensions
{
    public static IServiceCollection AddClimateControlLlm(this IServiceCollection services, IConfiguration configuration)
    {
        // Read the LLM endpoint and model from configuration, failing fast if either is missing
        var llmSection = configuration.GetRequiredSection("LLM");
        var endpoint = llmSection.GetRequiredValue("Endpoint");
        var modelId = llmSection.GetRequiredValue("ModelId");

        services.AddSingleton<IChatClient>(sp =>
        {
            var loggerFactory = sp.GetRequiredService<ILoggerFactory>();

            // OllamaSharp's OllamaApiClient implements IChatClient, so it plugs straight into the builder
            var ollamaClient = new OllamaApiClient(new Uri(endpoint), modelId);

            // Wrap the raw client with logging and function-invocation middleware (we'll add tools in Part 2)
            return new ChatClientBuilder(ollamaClient)
                .UseLogging(loggerFactory)
                .UseFunctionInvocation(loggerFactory, c =>
                {
                    c.IncludeDetailedErrors = true;
                })
                .Build(sp);
        });

        return services;
    }

    private static string GetRequiredValue(this IConfigurationSection section, string key) =>
        section[key] ?? throw new InvalidOperationException($"Missing required configuration: {section.Path}:{key}");
}
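
When running the Agent outside Docker for local debugging, the same section can come from appsettings instead of environment variables. Here is a minimal sketch, assuming an appsettings.Development.json and the native Ollama setup from earlier; the LLM__Endpoint and LLM__ModelId variables in the compose file map to exactly these keys.

{
  "LLM": {
    "Endpoint": "http://localhost:11434",
    "ModelId": "llama3.1"
  }
}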

Program.cs

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddClimateControlLlm(builder.Configuration);

var app = builder.Build();

app.MapPost("/chat", async (List<ChatMessage> messages, IChatClient client) =>
{
    var response = await client.GetResponseAsync(messages);
    return Results.Ok(response.Messages);
});

app.Run();

Next Steps

Now that the foundation is set, in the next part we will create Tools so the Agent can act on the API, not just talk about it.

