
Building an AI-Powered To‑Do App with Agentic AI (C# & OpenAI)

  • travispettry2
  • Sep 28
  • 11 min read

Developers and tech leads are increasingly looking to integrate AI capabilities into everyday applications. In Part 1 of this series ("Building an AI Agent in C#: Using OpenAI Function Calling"), we covered the basics of OpenAI’s function calling by creating a simple C# AI agent. That agent could call a single function (a tool) to fetch a random token, illustrating how an LLM can use external tools to go beyond static responses. We saw that OpenAI’s function calling feature lets a model produce a JSON function call, which our code executes before feeding the result back so the model can incorporate it into its answer. This approach transforms a basic chatbot into an agentic AI system that can interact with the world (APIs, databases, etc.) rather than just generating text. In fact, OpenAI describes agents as systems that intelligently accomplish tasks – from simple workflows to complex open-ended objectives.


In Part 2, we’ll expand on those foundations by building a practical AI-powered To‑Do app. Our agent will handle multi-step commands: it can create to-do items via natural language prompts and even look up contacts from a (simulated) database to enrich those tasks. This will demonstrate how an AI agent can coordinate multiple tools (functions) in a single conversation – a key aspect of agentic AI. We’ll introduce the idea of the agent having various resources (like a to-do list and a contacts list) it can utilize. (In a future post, we’ll dive deeper into managing such resources via MCP servers, an emerging approach for standardized tool integration.) For now, let’s get hands-on building our C# AI to-do assistant using OpenAI’s API and function calling.


Recap: Function Calling and Agentic Tools


In Part 1, we learned that OpenAI’s function calling allows an LLM to invoke external functions mid-conversation by returning a JSON function call in its response. Our application can detect this, execute the requested function, and then provide the result back to the model, which uses it to form a final answer. This means our AI assistant can fetch real-time data, query databases, call APIs, etc., instead of being limited to its training data. As one developer put it, function calling allows your app to “become something more than a bot that produces unstructured data”, enabling it to generate actionable results that your code can process.


The high-level flow is:

  1. Define the available functions (tools) and include them in the model prompt.

  2. The model decides if and when to call a function based on the user’s request.

  3. If a function call is needed, the model responds with the function name and arguments (instead of a final answer).

  4. The application executes that function and passes the result back into the conversation.

  5. The model then continues, possibly calling more functions or giving the final answer.

This loop can repeat for multi-step tasks. In this way, our AI agent can plan and act through tools to satisfy user requests – a hallmark of agentic AI systems.


What’s new in Part 2? We’ll define two tools for our agent: one to add a new to-do item, and one to find a contact’s information from a database. This will let the AI handle requests like “Remind me to call Alice tomorrow” by first fetching Alice’s contact info, then creating a task with that info included. We’ll see how to orchestrate multiple function calls in sequence. By the end, you’ll have a simple but powerful AI-driven to-do application, and a better sense of how Apps By TAP can build agentic AI solutions that combine LLMs with real-world data and actions.



Setting Up the To‑Do Agent Project


(If you want to follow along with the full code, check out the Agentic-AI-Todos GitHub repo.) 

The project is a C#/.NET console application using the official OpenAI .NET SDK. We assume you have an OpenAI API key and have added the OpenAI NuGet package. The structure is similar to Part 1’s demo, but now with two functions and a slightly more complex conversation flow.

Defining our data: For simplicity, we’ll use small in-memory services to look up contacts and create Todo items. In a real app these would be backed by a database or an external service, but they’re enough to simulate looking up contact info by name. For example:

public class ContactService
{
    public List<Contact> Contacts = new List<Contact>
    {
        new Contact
        {
            Id = Guid.NewGuid().ToString(),
            FirstName = "Peter",
            LastName = "Parker"
        },
        new Contact
        {
            Id = Guid.NewGuid().ToString(),
            FirstName = "Tony",
            LastName = "Stark"
        },
        new Contact
        {
            Id = Guid.NewGuid().ToString(),
            FirstName = "Bruce",
            LastName = "Banner"
        }
    };
}
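
The Contact and Todo models (and the to-do service used later) aren’t shown in the post, so here is a minimal version of each; the exact shapes in the repo may differ, so treat these as assumptions:

public class Contact
{
    public string Id { get; set; } = "";
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
}

public class Todo
{
    public string Title { get; set; } = "";
    public string Content { get; set; } = "";
    public DateTime DueDate { get; set; }
    public string? ContactId { get; set; }   // optional link back to a Contact
}

public class ToDoService
{
    // In-memory stand-in for a real persistence layer.
    public List<Todo> Todos { get; } = new();
    public void Add(Todo todo) => Todos.Add(todo);
}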

We also implement two helper functions that our tools will call: one to find a contact by name, and one to add a todo item:


// Find the best-matching contact for a (possibly partial) name.
// _contactService is an instance of the ContactService shown above.
Contact HandleFindContact(JsonObject callArgs)
{
    string query = callArgs["query"]!.GetValue<string>();

    return _contactService.Contacts
        .OrderByDescending(c => $"{c.FirstName} {c.LastName}".Contains(query, StringComparison.OrdinalIgnoreCase))
        .ThenBy(c => $"{c.FirstName} {c.LastName}")
        .FirstOrDefault(c => $"{c.FirstName} {c.LastName}".Contains(query, StringComparison.OrdinalIgnoreCase));
}

// Create a Todo from the model's arguments and store it via _toDoService.
Todo HandleCreateTodo(JsonObject callArgs)
{
    string title = callArgs["title"]!.GetValue<string>();
    string content = callArgs["content"]?.GetValue<string>() ?? title;
    string dueDate = callArgs["dueDate"]!.GetValue<string>();
    // contactId is optional in the tool schema, so don't assume it's present.
    string? contactId = callArgs["contactId"]?.GetValue<string>();

    var tz = TimeZoneInfo.FindSystemTimeZoneById("America/Kentucky/Louisville");

    // Turn natural-language text like "tomorrow 2pm" into a concrete DateTime.
    var due = ResolveDueDate(dueDate, tz);
    var todo = new Todo
    {
        Title = title,
        Content = content,
        DueDate = due,
        ContactId = contactId
    };
    _toDoService.Add(todo);
    return todo;
}
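
HandleCreateTodo delegates natural-language date parsing to ResolveDueDate, which isn’t shown in the post. A minimal placeholder (an assumption, not the repo’s implementation) that handles a few common phrases and falls back to DateTime.TryParse might look like this:

static DateTime ResolveDueDate(string text, TimeZoneInfo tz)
{
    var nowLocal = TimeZoneInfo.ConvertTime(DateTime.UtcNow, tz);

    // "tomorrow", "tomorrow 2pm" -> next day (default to 9:00 AM for simplicity).
    if (text.Contains("tomorrow", StringComparison.OrdinalIgnoreCase))
        return nowLocal.Date.AddDays(1).AddHours(9);

    // "Friday", "monday 2pm" -> the next occurrence of that weekday.
    foreach (DayOfWeek day in Enum.GetValues<DayOfWeek>())
    {
        if (text.Contains(day.ToString(), StringComparison.OrdinalIgnoreCase))
        {
            int daysAhead = ((int)day - (int)nowLocal.DayOfWeek + 7) % 7;
            if (daysAhead == 0) daysAhead = 7;
            return nowLocal.Date.AddDays(daysAhead).AddHours(9);
        }
    }

    // Anything else: let .NET try to parse it, or default to tomorrow morning.
    return DateTime.TryParse(text, out var parsed) ? parsed : nowLocal.Date.AddDays(1).AddHours(9);
}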

These C# functions do the real work. Now let’s expose them to the AI model via OpenAI’s function calling interface.



Defining Multiple Tools for the AI


Using the OpenAI .NET SDK, we create function tools describing each function. We’ll use ChatTool.CreateFunctionTool to define find_contact and create_todo so the model knows they’re available:

// Tool 1: find_contact
var findContactTool = ChatTool.CreateFunctionTool(
    functionName: "find_contact",
    functionDescription: "Find a contact by name or partial name. Returns best match or null.",
    functionParameters: BinaryData.FromString("""
                        {
                          "type": "object",
                          "properties": {
                            "query": { "type": "string", "description": "Name or partial, e.g. 'steve'" }
                          },
                          "required": ["query"],
                          "additionalProperties": false
                        }
                        """));

// Tool 2: create_todo
var createTodoTool = ChatTool.CreateFunctionTool(
    functionName: "create_todo",
    functionDescription: "Create a TODO with a natural - language due date.Server resolves the date.",

   functionParameters: BinaryData.FromString("""
                        {
                          "type": "object",
                          "properties": {
                            "title": { "type": "string", "description": "Short imperative title, e.g. 'Call Steve'" },
                            "content" : { "type": "string", "description": "full description of the task" },
                            "dueDate": { "type": "string", "description": "Natural text like 'Friday', 'tomorrow 2pm'" },
                            "contactId": { "type": "string", "description": "Optional contact id from find_contact" }
                          },
                          "required": ["title", "dueDateText"],
                          "additionalProperties": false
                        }
                        """));

Let’s break down what we did:


  • functionName – The names find_contact and create_todo will be used by the model when it decides to call these functions.

  • functionDescription – A brief description helps the model understand when a tool is relevant. For example, we tell it that find_contact “finds a contact by name or partial name” – so if the user prompt mentions a person’s name, the model might realize it should use this function.

  • functionParameters – Here we provide a JSON schema for the function’s arguments. find_contact takes a single required string, "query"; create_todo takes a "title" and a natural-language "dueDate" (both required), plus an optional "content" and "contactId". This schema is crucial: it tells the model how to format the function call and what info it needs from the user prompt. In practice, the model will fill in these parameters based on the prompt (it can even infer values). For example, if the user says “Call Alice tomorrow,” the model might call find_contact with {"query": "Alice"} without us having to hard-code that logic.


By defining these tools, we’ve effectively given our AI agent two capabilities or resources it can use: access to a Contacts DB (via find_contact) and a To-Do list service (via create_todo). This is more powerful than a single-tool agent – the AI can now chain them to fulfill complex requests.



Guiding the AI with System Prompts


When using multiple tools, it’s important to guide the AI on how and when to use them. We do this with a system message prompt. In our to-do app, a reasonable instruction could be:


var systemPrompt =
        @"You turn user requests into TODOs using tools.
Rules:
- If a person is mentioned, call find_contact first with the name.
- Then call create_todo with a concise title and a natural-language dueDate extracted from the request. The server resolves the actual date.
- Use contactId from find_contact if a suitable match exists (name similarity).
- Be brief and confirm the created todo with its date in local words (e.g., 'Friday 9:00 AM'). The message must start with 'todo created'.";

We create our initial message list with this system prompt and a user prompt. For example, let's say the user wants to add a task involving a contact:


var messages = new List<ChatMessage> {
    new SystemChatMessage(systemPrompt),
    new UserChatMessage("Remind me to email Alice about the project updates tomorrow.")
};

Here, the system message gives the AI context and rules:


  • It knows it has two tools (find_contact and create_todo) and roughly what they do.

  • It is instructed to use find_contact first if a person’s name is mentioned, then use create_todo to add the task including that info.

  • It should respond to the user with a confirmation (not just raw JSON or anything).


The user’s request is asking to “email Alice about project updates tomorrow.” The AI will parse this and realize:


  1. “Alice” likely refers to a person – and we have a tool to get contact info.

  2. The main goal is to add a reminder (to-do) to email Alice.


Thanks to the system guidance, the model should decide to call find_contact for “Alice” before adding the todo.



Executing the Agent: Handling Function Calls in Code

Now it’s time to send our prompt and tools to the OpenAI model and let it do its magic. We’ll use the chat completion API with our messages and the two tools we defined:


var chat = new ChatClient("gpt-5-nano", apiKey);
var options = new ChatCompletionOptions
{
    Tools = { findContactTool, createTodoTool }
    // no ToolChoice: let the model decide
};
ClientResult<ChatCompletion> resp = await chat.CompleteChatAsync(messages, options);

When this call returns, the model will have processed the user prompt and system instructions. If everything is set up right, the model’s response will not be a final answer yet – instead, it will likely contain a function call. In the OpenAI .NET SDK, we can check the completion’s FinishReason and its ToolCalls collection (e.g. resp.Value.ToolCalls) to see whether the assistant requested a function call. We expect the first tool call to be find_contact (since “Alice” was mentioned).


Let’s illustrate the flow step-by-step:


  1. Model’s First Response – Function Call: The assistant determines it needs Alice’s contact info. Instead of a final answer, it returns a tool call, e.g. find_contact with argument query: "Alice". We retrieve this from the completion’s ToolCalls collection.

  2. Execute find_contact: Our code recognizes the tool name and calls the corresponding C# helper, as sketched below:
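
Here is roughly how that first round can be handled with the official SDK (a sketch: the variables completion, toolCall, and contactResult are our own names, and error handling is omitted):

// requires: using System.Text.Json; using System.Text.Json.Nodes;
ChatCompletion completion = resp.Value;

// FinishReason tells us the model stopped to request tool calls rather than answer.
if (completion.FinishReason != ChatFinishReason.ToolCalls)
    throw new InvalidOperationException("Expected a tool call on the first round.");

ChatToolCall toolCall = completion.ToolCalls[0];   // expected: find_contact

// FunctionArguments holds the JSON the model produced, e.g. {"query":"Alice"}.
var callArgs = JsonNode.Parse(toolCall.FunctionArguments.ToString())!.AsObject();
Contact contactResult = toolCall.FunctionName == "find_contact"
    ? HandleFindContact(callArgs)
    : null;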


Suppose our ContactService contained a contact named Alice. Then contactResult would be her Contact record (id, first name, last name). We then package this result into a ToolChatMessage to send back to the model:


messages.Add(new AssistantChatMessage(completion));
// record the assistant's function call message in history
messages.Add(new ToolChatMessage(toolCall.Id, JsonSerializer.Serialize(contactResult)));

The first line adds the assistant’s tool-call request to the conversation history (so the model “remembers” it called a function). The second line adds the function’s output as a tool message. We tag it with the same toolCall.Id that the model gave, and serialize the result to JSON (the tool message content must be a string). Now the conversation history includes: system prompt, user prompt, assistant function-call message, and tool response.


  3. Model’s Second Response – Another Function Call: With Alice’s contact info now in the conversation, we call the model again (another CompleteChatAsync with the updated message list and the same options).


This time, the assistant sees the contact info and should call create_todo. For example, it might supply arguments like title: "Email Alice about the project updates", dueDate: "tomorrow", and the contactId returned by find_contact. Again, we check the completion’s ToolCalls and find a create_todo call with those arguments.


  4. Execute create_todo: We run HandleCreateTodo, which resolves the due date, stores the new Todo via _toDoService, and returns it. We serialize the resulting Todo and add it as a tool response message, just as before.

  5. Model’s Final Response – User Answer: We call the model one more time with the updated message list (now containing the function call and result for create_todo as well). With the task successfully added, the assistant can now respond to the user. The sketch below shows these last two rounds:
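
A sketch of those last two rounds (again, the variables second, last, and finalAnswer are our own names):

// Second round: with the contact result in the history, the model should call create_todo.
ChatCompletion second = (await chat.CompleteChatAsync(messages, options)).Value;

if (second.FinishReason == ChatFinishReason.ToolCalls)
{
    messages.Add(new AssistantChatMessage(second));

    foreach (ChatToolCall call in second.ToolCalls)
    {
        if (call.FunctionName == "create_todo")
        {
            var args = JsonNode.Parse(call.FunctionArguments.ToString())!.AsObject();
            Todo todo = HandleCreateTodo(args);
            messages.Add(new ToolChatMessage(call.Id, JsonSerializer.Serialize(todo)));
        }
    }
}

// Final round: the todo exists, so the model answers the user in plain language.
ChatCompletion last = (await chat.CompleteChatAsync(messages, options)).Value;
string finalAnswer = last.Content[0].Text;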


We expect finalAnswer to be a friendly confirmation message, since we instructed the AI to respond with a brief confirmation. For example, it might reply:

“todo created: Email Alice about the project updates, due tomorrow.”



To summarize, the AI agent handled a single user request by autonomously performing two tool calls in sequence. Our code orchestrated this by looping through model responses and executing tool calls until the model returned a normal message. OpenAI’s function calling system made it straightforward – the model decided what to call and with what parameters, and our job was just to carry out those calls and return the results.
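
For reference, the same orchestration can be written as one general loop (a sketch under the same assumptions as above, not the repo’s exact code): keep calling the model, answer every tool call, and stop once it returns a normal message.

ChatCompletion reply;
do
{
    reply = (await chat.CompleteChatAsync(messages, options)).Value;

    if (reply.FinishReason == ChatFinishReason.ToolCalls)
    {
        // Record the assistant's tool-call message, then answer each call with a tool message.
        messages.Add(new AssistantChatMessage(reply));

        foreach (ChatToolCall call in reply.ToolCalls)
        {
            var args = JsonNode.Parse(call.FunctionArguments.ToString())!.AsObject();
            string resultJson = call.FunctionName switch
            {
                "find_contact" => JsonSerializer.Serialize(HandleFindContact(args)),
                "create_todo"  => JsonSerializer.Serialize(HandleCreateTodo(args)),
                _              => "{\"error\":\"unknown tool\"}"
            };
            messages.Add(new ToolChatMessage(call.Id, resultJson));
        }
    }
}
while (reply.FinishReason == ChatFinishReason.ToolCalls);

// When the loop exits, the model has produced a normal assistant message for the user.
Console.WriteLine(reply.Content[0].Text);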


Agentic AI with Multiple Resources (and a Peek at MCP)


With our to-do app example, we’ve essentially built a mini agent that can use two resources: a task list and a contacts directory. The large language model is doing the high-level reasoning to decide when to use each resource and how. This showcases the power of agentic AI: the AI isn’t just chatting, it’s taking actions on our behalf (creating todos) and pulling in external knowledge (contact info) to do so.


Imagine extending this idea – an agent could incorporate many tools: calendars, email senders, web search, you name it. In fact, the industry is moving toward standards for plugging in such resources. One notable emerging standard is MCP (Model Context Protocol). MCP provides a unified way for AI agents to discover and use tools/services, whether they’re local or cloud-based. In an MCP-based design, each resource (like our contact DB or todo service) would be an MCP server that the agent can query. The agent (or its platform) doesn’t need custom integration logic for each new tool – it speaks a common protocol to any MCP-compliant resource. This is a big leap forward for agentic AI: instead of custom one-off integrations, agents can perform useful multi-step tasks by connecting to arbitrary services in a standardized way.


In a future post, we’ll explore how to turn resources like our to-do list and contacts DB into MCP servers and use an agent framework to interact with them more dynamically. That will take our agentic to-do app to the next level, allowing easier expansion and even remote tool usage. Stay tuned for that deep dive!


Conclusion & Key Takeaways


In this tutorial, we built a C# AI to-do application that demonstrates agentic AI principles in action. Using OpenAI’s function calling, we enabled our AI agent to handle a user request with multiple steps – looking up contact information and creating a to-do entry. This showcases how LLMs can collaborate with developer-defined tools to produce trustworthy, useful outcomes (not just text). The AI understands the intent (a reminder involving a person), uses the appropriate tools in sequence, and gives the user a final answer that includes real data from our “database.”


Key takeaways:


  • OpenAI function calling lets your AI call into external code/API, which greatly enhances its capabilities for real applications. The agent can fetch data, perform actions, and then continue the conversation with those results.

  • By defining multiple tools, we created an agent that can orchestrate complex tasks. The LLM chose which function to call and when, exhibiting a form of reasoning and planning. This is the essence of agentic AI, where the AI isn’t just answering but acting to accomplish goals.

  • Proper prompting (system messages) is crucial to guide the AI’s tool use. We gave clear instructions on using find_contact then create_todo, which helped the model make the right calls in the right order.

  • We introduced the concept of resources (like the to-do list and contacts). In more advanced systems, these could be modular services. Standards like MCP aim to make integrating such resources easier and more robust – a topic we’ll explore in an upcoming blog. Embracing these patterns can lead to highly scalable agent solutions that are trustworthy and easier to maintain.


By combining LLM intelligence with real-world tools, you can build powerful applications – from personal assistants to enterprise process automation – that respond in natural language and take actions. The to-do app we built is a simple example, but the pattern can be extended to countless use cases (email agents, support chatbots that query databases, scheduling assistants, etc.).


Apps By TAP is excited about the potential of such agentic AI solutions. We’re actively leveraging these techniques (OpenAI function calling, multi-tool agents, MCP, and more) to build smarter apps for our clients. If you’re interested in bringing AI agents into your applications, feel free to reach out or follow our blog for more insights. The future of software will be filled with LLM-powered agents that can plan, reason, and work alongside us – and getting hands-on with examples like this is the first step to that future.


As we wrap Part 2—and tee up Part 3’s deep dive on MCP-backed resources—if you’re ready to turn this pattern into a production agent in your own C#/.NET stack, Apps By TAP can help. We ship agentic AI with real tool use (function calling, multi-tool orchestration), secure Azure/OIDC integration, and the observability you’ll need in production. Tell us what you’re building: appsbytap.com.

 
 