Building an AI Agent in C#: Using OpenAI Function Calling
- travispettry2
- Sep 21

Developers and tech leads are increasingly looking to integrate AI capabilities into their applications. One powerful new feature from OpenAI is function calling, which allows a chat model to invoke external functions (tools) during a conversation. In this post, we'll walk through how to leverage OpenAI's function calling from a C# application – essentially creating a simple AI "agent" that can use external tools to enhance its responses. This is a technical deep dive brought to you by Apps By TAP, and we'll use a step-by-step example (with code) to illustrate the process.
Understanding OpenAI Function Calling (Tools)
OpenAI's function calling feature enables AI models to perform tasks beyond just generating text – they can call predefined functions or APIs to get information or take actions in the middle of a conversation. In essence, the model can output a JSON description of a function call (including the function name and arguments) instead of a direct answer. Your application intercepts this, executes the function, and then feeds the result back to the model so it can produce a final answer.
This capability is extremely powerful for building AI applications. It means your AI assistant can fetch real-time data, perform computations, or integrate with external services as needed, rather than being limited to its trained knowledge. As one developer put it, function calling allows your app to “become something more than a bot that produces unstructured data”, letting the AI generate data that your code can then process or act on. Potential use cases include: calling weather or stock price APIs in responses, querying internal databases to answer user questions, performing actions like sending an email or booking a meeting, and much more.
How does it work? The high-level flow for function calling looks like this:
Define available functions (tools) and send them along with the user's prompt in the API request.
The model decides if any function is needed. If so, it responds with a JSON object specifying which function to call and what arguments to use (this is the "function call").
Your code parses that JSON, executes the function with the given arguments, and captures the result.
You then send the function result back to the model as a new message in the conversation.
The model uses the result to continue the conversation, usually producing a final answer to the user. (If the answer still requires more steps, it might request another function call, and the cycle repeats.)
Under the hood, function calling relies on specially formatted prompts and the model's ability to follow a function specification. It's supported by OpenAI GPT models such as gpt-4 and the June 2023 update of gpt-3.5-turbo (and newer models), which are trained for this feature. We'll use the official OpenAI .NET SDK in our example, which provides convenient classes for working with chat models and tools.
Now, let's dive into building our C# AI agent step by step.
If you’d like to follow along with the complete code, the sample project is available on GitHub.
Step 1: Defining a Tool Function for the AI
First, we need to define the function (or "tool") that our AI agent can use. In our example, we'll create a simple function that returns a random token string (to simulate fetching some external data). We both implement the function in C# and describe it to the AI model so the model knows it exists.
In code, we implement a helper function GetRandomToken() that generates a new GUID string:
string GetRandomToken() => Guid.NewGuid().ToString();

Next, we define a ChatTool for this function using the OpenAI .NET SDK. The ChatTool.CreateFunctionTool method lets us specify the function's name, a description, and a JSON schema for its parameters. In this case, our get_random_token function doesn't require any input parameters, so we provide an empty parameters schema:
// Define the tool (function) that the AI can call
var tool = ChatTool.CreateFunctionTool(
    functionName: "get_random_token",
    functionDescription: "Return a fresh random token string.",
    functionParameters: BinaryData.FromString(@"{
        ""type"": ""object"",
        ""properties"": { },
        ""additionalProperties"": false
    }")
);

Let's break down what's happening here:
Function Name: We name the function "get_random_token" – this is how the model will refer to it when requesting a call.
Description: A brief description tells the model what the function does. This helps the AI decide when the function might be relevant.
Parameters Schema: We provide a JSON schema describing the function's expected arguments. Our function needs no inputs, so the schema is an empty object with no properties. (For a more complex function, you would list each parameter, its type, and description in JSON format. For example, a weather API tool might require a "location" string parameter, etc. The model uses this schema to format its function call and can even infer or hallucinate argument values based on the user's request).
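As a sketch of that richer case, here is what a hypothetical weather tool with a required "location" parameter might look like (the function name and fields below are illustrative, not part of our sample project):

// Hypothetical weather tool: one required "location" string parameter
var weatherTool = ChatTool.CreateFunctionTool(
    functionName: "get_current_weather",
    functionDescription: "Get the current weather for a given city.",
    functionParameters: BinaryData.FromString(@"{
        ""type"": ""object"",
        ""properties"": {
            ""location"": {
                ""type"": ""string"",
                ""description"": ""City and region, e.g. 'Columbus, OH'""
            }
        },
        ""required"": [""location""],
        ""additionalProperties"": false
    }")
);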
By defining the tool in this way, we effectively expose the function to the AI model. When the model sees the user's prompt, it also knows that this get_random_token tool is available to use if needed.
Step 2: Setting Up the Conversation Messages
Next, we build the conversation history: a system message that steers the assistant's behavior, and a user message to kick things off.

// System message to instruct the AI's behavior
var systemPrompt =
    "You are a helpful assistant that must ALWAYS call the tool 'get_random_token' first. " +
    "Never answer directly without calling that tool. " +
    "After receiving the tool result, reply with exactly:\n" +
    "\"RANDOM_TOKEN: <the token string>\"";

var messages = new List<ChatMessage>
{
    new SystemChatMessage(systemPrompt),
    new UserChatMessage("go")
};

Let's unpack the system prompt:
We tell the AI it "must ALWAYS call the tool get_random_token first" and not to answer the user directly without doing so. This effectively forces the AI's reasoning toward using our function.
We then specify exactly how it should format its final answer: RANDOM_TOKEN: <the token string>. This is just for our demonstration – in a real scenario you might not need to be this explicit, but here we want predictable output. The assistant will obey this when formulating the answer after getting the token.
The user message in this case is simply "go". The content of the user prompt doesn't matter much in our trivial example, since our system instructions are forcing a specific behavior. In a real use-case, the user might ask something like "Please generate a secure token for me" and the assistant would then decide to call get_random_token to fulfill that request.
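For instance, a more realistic conversation setup (with a hypothetical prompt and none of our forced formatting) might look like:

// Hypothetical alternative: let the model decide on its own whether the tool is needed
var realisticMessages = new List<ChatMessage>
{
    new SystemChatMessage("You are a helpful assistant."),
    new UserChatMessage("Please generate a secure token for me.")
};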
At this point, our messages list contains two messages: the system role instructions and the user's request. We also have our tool definition ready. Now it's time to send this information to the OpenAI model.
Step 3: Initiating the Model and Forcing a Function Call
Normally, when you call the OpenAI chat completion API with some tools defined, the model will decide on its own whether to call a function based on the user's prompt. However, since we explicitly want to demonstrate the tool usage, we will force the model to call our function on the first reply. The OpenAI .NET SDK allows this by setting a tool choice in the options.
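One housekeeping note: the snippets below assume a ChatClient instance named chat has already been created. A minimal sketch (the model name and API-key source here are illustrative):

// Requires the official OpenAI NuGet package (using OpenAI.Chat;)
ChatClient chat = new ChatClient(
    model: "gpt-4o",                                               // illustrative model choice
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")); // key from an env var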
We create a ChatCompletionOptions object and add our tool to it. We also set ToolChoice to the name of our function, indicating the model must attempt that function call first:
var options = new ChatCompletionOptions
{
    Tools = { tool },
    ToolChoice = ChatToolChoice.CreateFunctionChoice("get_random_token")
};

// Ask the model to complete the chat (this will trigger the function call)
ChatCompletion firstResponse = await chat.CompleteChatAsync(messages, options);

A few notes on this step:
The Tools property of ChatCompletionOptions is where we pass the list of available functions (in our case just one tool). If we had multiple tools, we could add all of them to this list. The model would then choose among them when needed.
ChatToolChoice.CreateFunctionChoice("get_random_token") is used to force the model's action. This is useful for testing or certain flows. In general, if you omit ToolChoice, the model decides autonomously whether to use a function. Here we set it because our system prompt already demands using the tool first, and we want to skip any hesitation. (You could also set ToolChoice to ChatToolChoice.CreateNoneChoice() to prevent any tool use and get a direct answer, but that's not what we want here.)
When we call CompleteChatAsync, the model processes the conversation and the tool list. Thanks to our instructions, it will respond with a function call. Specifically, firstResponse should indicate that the model wants to call get_random_token. In the OpenAI .NET SDK, the ChatCompletion object will contain a list of ToolCalls (function call requests) from the assistant. We expect firstResponse.ToolCalls[0] to be the call to our function, including a unique Id for the call and any arguments (none in this case).
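Our get_random_token tool takes no arguments, but for a tool that does, you would read the arguments payload off the tool call before executing it. A sketch, reusing the hypothetical "location" parameter from the weather example above (requires using System.Text.Json;):

// Hypothetical: reading a "location" argument from the first tool call
using JsonDocument args = JsonDocument.Parse(firstResponse.ToolCalls[0].FunctionArguments);
string location = args.RootElement.GetProperty("location").GetString()!;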
At this stage, the model has essentially said "I need to use get_random_token now" and is pausing for the function result. It's our job to handle that.
Step 4: Executing the Function and Providing the Result
The model's answer so far is incomplete – it's waiting for the function's output. Now our server (or application) needs to actually run the GetRandomToken() function and then supply the result back to the model. This bridges the gap between the AI and our external logic.
From the firstResponse we got above, we can retrieve the tool call information. We know it requested "get_random_token", so we execute that function in our code. Then we package the result into a message that the model will understand as the function's output:
// Record the assistant's tool call in the conversation history
messages.Add(new AssistantChatMessage(firstResponse));

// Execute the requested function (get_random_token) in our code
var toolCall = firstResponse.ToolCalls[0];
string tokenResult = GetRandomToken(); // call our C# function to get a token

// Add the function result as a Tool message, associated with the tool call ID
// (JsonSerializer requires: using System.Text.Json;)
messages.Add(new ToolChatMessage(toolCall.Id, JsonSerializer.Serialize(new { toolResult = tokenResult })));

Let's explain what's happening:
We add the AssistantChatMessage(firstResponse) to the messages history. This represents the assistant's reply that contains the function call request. (Even though this "reply" isn't user-facing text, it's an important part of the conversation state. It ensures the model remembers that it asked to call a function.)
We then execute GetRandomToken() on our side to fulfill the request. This returns, say, a GUID like "e8caff2c-5e4b-4c1f-9bf5-079a237cd5e3".
We create a new ToolChatMessage with the same toolCall.Id provided by the model, and include the tokenResult in JSON format. The JSON serialization here yields something like {"toolResult":"e8caff2c-5e4b-4c1f-9bf5-079a237cd5e3"}. This step is crucial: it packages the function's output in a way the model can use. Essentially, we are saying "here is the result of the function you wanted to call."
By adding the tool result message to our messages list, we maintain a complete conversation history. Now the history has: system prompt, user prompt, assistant's function call request, and the tool's response message.
At this point, we've done the heavy lifting of the "agent" – connecting the AI to an external function and getting a result. The final step is to send this updated conversation back to the model so it can produce the final answer.
Step 5: Getting the Final Answer from the Model
With the function result in the conversation, we call the model again to let it finish its response. We don't need to force any tool usage this time; we just call CompleteChatAsync with the current message history. The model will see that it got a token from the tool, and (thanks to our system prompt instructions) it will now output the answer in the required format:
// Call the model again, now that it has the tool result
ChatCompletion secondResponse = await chat.CompleteChatAsync(messages);

// Extract the assistant's final answer
string finalAnswer = secondResponse.Content[0].Text;

This second response from the model should be the actual answer content for the user. In our example, given the system prompt, we expect finalAnswer to look like:
RANDOM_TOKEN: e8caff2c-5e4b-4c1f-9bf5-079a237cd5e3

(Your GUID string will vary each time, of course.) The assistant has successfully followed the instructions: it called the get_random_token tool and then responded with the token it got.
At this point, our minimal AI agent has completed the task! We took a user request ("go"), forced the AI to use a tool, executed the tool, and the AI returned a result that includes the tool's output. While this example is trivial, it demonstrates the pattern for integrating any function into the AI's reasoning process.
In a real application, you wouldn't usually force the function call blindly; instead, you would let the model decide when to use tools. You might also have multiple tools available. In those cases, you would loop through calls as needed: each time the model returns a function request, execute it, add the result, and call the model again until it returns an answer. The OpenAI .NET SDK's design supports this loop by checking the FinishReason of the response (e.g., FinishReason == ChatFinishReason.ToolCalls means the model wants a function call, whereas Stop means it provided a final answer). Always be sure to handle possible errors or even model "hallucinations" (e.g., the model might ask for a function that doesn't exist or with arguments that don't make sense). Robust agents should validate and guard against that, but those topics are beyond our current walkthrough.
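Here is a rough sketch of such a loop, reusing the types from our example (with ToolChoice left unset so the model decides on its own):

// Loop until the model stops requesting tools and produces a final answer
var loopOptions = new ChatCompletionOptions { Tools = { tool } }; // no forced ToolChoice
ChatCompletion response = await chat.CompleteChatAsync(messages, loopOptions);
while (response.FinishReason == ChatFinishReason.ToolCalls)
{
    // Record the assistant's tool call request in the history
    messages.Add(new AssistantChatMessage(response));
    foreach (ChatToolCall call in response.ToolCalls)
    {
        // Dispatch to the matching local function; guard against unknown names
        string result = call.FunctionName switch
        {
            "get_random_token" => JsonSerializer.Serialize(new { toolResult = GetRandomToken() }),
            _ => JsonSerializer.Serialize(new { error = "unknown tool" })
        };
        messages.Add(new ToolChatMessage(call.Id, result));
    }
    response = await chat.CompleteChatAsync(messages, loopOptions);
}
string finalText = response.Content[0].Text; // FinishReason is now Stop

The unknown-tool branch is one simple way to guard against the hallucinated function names mentioned above.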
Conclusion
We have built a simple AI agent in C# that can call an external function using OpenAI's function calling feature. The key takeaways are:
Function calling allows your AI to use external tools, giving it capabilities far beyond basic Q&A. This bridges AI with real-world data and actions.
Using the official OpenAI .NET SDK, we can define tools with JSON schemas and handle the model's function call and response cycle fairly easily in a few steps.
The pattern involves a back-and-forth with the model: send prompt + tools, get a function call, execute it, send result, get the final answer. This architecture can be extended to many tools and multi-step interactions, enabling complex agents.
While our example returned a random token, you can imagine integrating more useful functions. For instance, you could allow the AI to send an email, query a database for records, or call a web API for up-to-date information – whatever your application needs. By doing so, you're turning a chat bot into an action-taking agent that can both converse and perform tasks.
👉 You can grab the full example project from GitHub and start experimenting today.
For help taking this pattern further, or to integrate AI-powered agents into your software, reach out to us at Apps By TAP. We're actively exploring and implementing these AI-agent patterns in real projects, and our goal is to help organizations build intelligent apps that leverage the latest AI advancements. Visit the Apps By TAP homepage to learn more about our work and how we can bring AI-powered solutions to your applications. Happy coding, and enjoy experimenting with AI function calling in your own C# projects!



