Semantic Kernel Getting Started Guide for Developers

If you're building AI features on Azure, you need orchestration that integrates with your existing infrastructure without forcing you to rewire everything. Semantic Kernel is Microsoft's open-source SDK that lets you build AI agents in C#, Python, or Java—using the same patterns you already know—while connecting to Azure OpenAI Service, Azure AI Search, and your enterprise data sources with built-in security and compliance.

Step 1

Deploy Azure OpenAI Service and get your endpoint

In the Azure Portal, create an Azure OpenAI resource in a supported region (East US, Sweden Central, or check current availability). Once deployed, navigate to Azure OpenAI Studio and deploy a GPT-4o-mini model—this gives you the best balance of speed and cost for development. Copy your endpoint URL and API key from the Keys and Endpoint section. You'll use these in every Semantic Kernel project, and they're what give you enterprise features like data residency, private networking, and compliance that direct OpenAI API calls can't provide.

💡 Tip: Use managed identity instead of API keys in production. Deploy GPT-4o-mini first for development; it costs a small fraction of what GPT-4 does per token and is perfect for testing your orchestration logic.
Step 2

Create a new project and install Semantic Kernel

For C#, run 'dotnet new console -n MyFirstAgent' and 'dotnet add package Microsoft.SemanticKernel'. For Python, create a virtual environment and 'pip install semantic-kernel'. For Java, add the Semantic Kernel Maven dependency to your pom.xml. Semantic Kernel is fully open-source on GitHub (microsoft/semantic-kernel) and actively maintained by Microsoft with frequent releases. The SDK handles much of the complexity of prompt engineering, function calling, and token management while letting you write idiomatic code in your preferred language.

💡 Tip: Pin to a specific Semantic Kernel version in production. The 1.x releases are stable, but new features arrive weekly.
Step 3

Initialize the Kernel and connect to Azure OpenAI

Create a Kernel instance and configure it with your Azure OpenAI endpoint, deployment name, and API key. In C#, use 'var builder = Kernel.CreateBuilder(); builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);' then 'var kernel = builder.Build();'. In Python, it's 'kernel = Kernel()' followed by 'kernel.add_service(AzureChatCompletion(...))'. This single kernel instance manages all your AI services, plugins, memory, and execution context. Unlike raw API calls, the kernel handles retries, streaming responses, and token usage reporting for you; conversation state is something you manage explicitly with ChatHistory (see Step 6).

⚠ Watch out: Never hardcode API keys. Use Azure Key Vault or environment variables, and rotate keys every 90 days minimum.
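The watch-out above can be sketched in plain Python: a small settings loader that reads from environment variables and fails fast when one is missing, so a misconfigured deployment never silently falls back to a hardcoded key. The variable names ('AZURE_OPENAI_ENDPOINT', 'AZURE_OPENAI_DEPLOYMENT', 'AZURE_OPENAI_API_KEY') are illustrative; use whatever names your Key Vault reference or app configuration exposes.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class AzureOpenAISettings:
    endpoint: str
    deployment_name: str
    api_key: str


def load_settings() -> AzureOpenAISettings:
    """Read Azure OpenAI settings from environment variables, failing fast if any are missing."""
    required = ("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_DEPLOYMENT", "AZURE_OPENAI_API_KEY")
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return AzureOpenAISettings(
        endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        deployment_name=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
    )
```

Pass the loaded values into 'AddAzureOpenAIChatCompletion' or 'AzureChatCompletion' instead of literals, and the same code works unchanged when you later swap the API key for a managed identity credential.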
Step 4

Create your first native plugin function

Plugins are how Semantic Kernel lets AI agents call your code. Create a class with a method decorated with [KernelFunction] in C# or @kernel_function in Python. For example, a GetCustomerData function that queries your database, or a SendEmail function that uses your SMTP service. The AI model can automatically invoke these functions when needed—this is what makes agents useful instead of just chatbots. You're not building prompts manually; you're describing what your business logic does, and the LLM decides when to call it based on user intent.

💡 Tip: Start with read-only functions for testing. Add write operations only after you've implemented approval workflows and logging.
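To show the shape of a plugin without pulling in the SDK, here is a minimal sketch. The decorator below is a stand-in that mimics what Semantic Kernel's real '@kernel_function' decorator does (attach description metadata the model reads to decide when to call the function); 'CustomerPlugin' and its in-memory data are hypothetical and read-only, per the tip above.

```python
from typing import Callable


def kernel_function(description: str) -> Callable:
    """Stand-in for Semantic Kernel's @kernel_function decorator: it attaches
    the description metadata the LLM uses to decide when to invoke the function."""
    def wrap(fn: Callable) -> Callable:
        fn.__kernel_description__ = description
        return fn
    return wrap


class CustomerPlugin:
    """A hypothetical read-only plugin backed by an in-memory table;
    in real code this method would query your database."""

    _CUSTOMERS = {"C-1001": {"name": "Contoso Ltd", "tier": "gold"}}

    @kernel_function(description="Look up a customer record by customer ID.")
    def get_customer_data(self, customer_id: str) -> dict:
        # Returning an empty dict for unknown IDs keeps the function safe to call.
        return self._CUSTOMERS.get(customer_id, {})
```

With the real SDK you would import the decorator from 'semantic_kernel.functions' and the class body stays essentially the same.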
Step 5

Add your plugin to the Kernel and enable automatic function calling

Import your plugin class using 'kernel.ImportPluginFromObject(new MyPlugin())' in C# or 'kernel.add_plugin(MyPlugin(), plugin_name="MyPlugin")' in Python. When you invoke the kernel with a prompt, it automatically analyzes which functions are available, determines if any are needed to answer the user's question, calls them with the right parameters, and incorporates the results into its response. This is OpenAI's function calling feature, but Semantic Kernel abstracts all the JSON schema generation and parameter marshaling. You write normal C#, Python, or Java methods, and Semantic Kernel handles the AI integration.

💡 Tip: Use clear function names and XML doc comments in C# or docstrings in Python—the LLM reads these to decide when to invoke your functions.
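Under the hood, each registered function is described to the model as a JSON schema built from its signature and docstring. As a rough, simplified sketch of the kind of schema Semantic Kernel generates for you (the real SDK handles many more types and richer metadata), consider:

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python annotations to JSON Schema type names.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}


def function_to_schema(fn) -> dict:
    """Build an OpenAI-style function-calling schema from a Python signature,
    a simplified version of what Semantic Kernel emits per plugin function."""
    hints = get_type_hints(fn)
    properties = {}
    required = []
    for name, param in inspect.signature(fn).parameters.items():
        if name == "self":
            continue
        properties[name] = {"type": _JSON_TYPES.get(hints.get(name, str), "string")}
        # Parameters without defaults are required.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }
```

This is why the tip above matters: the function name and docstring flow directly into the schema the model reasons over.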
Step 6

Implement conversation memory with ChatHistory

Create a ChatHistory object to maintain context across multiple turns. In C#: 'var history = new ChatHistory();' then add messages with 'history.AddUserMessage(userInput);' and 'history.AddAssistantMessage(response);'. Pass this history to kernel invocations so the agent remembers previous exchanges. For production, persist ChatHistory to Azure Cosmos DB or Azure Table Storage indexed by session ID. This is critical for enterprise scenarios where conversations span hours or days, and users expect the agent to remember their account details, previous requests, and preferences without re-explaining.

⚠ Watch out: Monitor token usage—chat history grows quickly. Implement sliding window or summarization after 10-15 exchanges to avoid hitting context limits.
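A minimal sketch of the sliding-window trimming the watch-out recommends, using plain role/content dicts. In a real app you would trim Semantic Kernel's ChatHistory by token count rather than message count, but the shape of the solution is the same: keep the system prompt pinned and let only the conversation window slide.

```python
from collections import deque


class SlidingChatHistory:
    """Toy chat history with a fixed-size sliding window of exchanges."""

    def __init__(self, max_exchanges: int = 12):
        # One exchange = one user message + one assistant message,
        # so the deque holds at most max_exchanges * 2 messages.
        self._messages = deque(maxlen=max_exchanges * 2)
        self.system_prompt = "You are a helpful assistant."

    def add_user_message(self, text: str) -> None:
        self._messages.append({"role": "user", "content": text})

    def add_assistant_message(self, text: str) -> None:
        self._messages.append({"role": "assistant", "content": text})

    def to_messages(self) -> list:
        # The system prompt is always kept; old exchanges fall off the front.
        return [{"role": "system", "content": self.system_prompt}, *self._messages]
```

Summarization works the same way structurally: instead of dropping the oldest exchanges, you replace them with a single assistant-generated summary message.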
Step 7

Add Azure AI Search for RAG on company data

Install the Azure AI Search connector package and configure it with your search service endpoint and index name. Use 'kernel.ImportPluginFromObject(new AzureAISearchPlugin(...))' to make your indexed documents searchable by the agent. When a user asks about your product docs, support tickets, or internal knowledge base, Semantic Kernel automatically queries Azure AI Search, retrieves relevant chunks using vector similarity, and includes them in the prompt context. This RAG pattern gives you accurate answers grounded in your actual data instead of the LLM's training cutoff, and it works with PDFs, Word docs, and structured data processed by Azure AI Document Intelligence.

💡 Tip: Use hybrid search (vector + keyword) in Azure AI Search; it typically delivers noticeably better retrieval accuracy than either mode alone on technical and domain-specific content.
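Hybrid search works by merging the keyword and vector result lists, and Azure AI Search documents Reciprocal Rank Fusion (RRF) as the scheme it uses to combine them. A minimal sketch of that fusion step, with hypothetical document IDs:

```python
def reciprocal_rank_fusion(rankings: list, k: int = 60) -> list:
    """Fuse multiple ranked result lists (e.g. keyword and vector) with
    Reciprocal Rank Fusion: each document scores sum(1 / (k + rank)) over
    every list it appears in, so documents ranked well by both modes win."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

You never implement this yourself when using the service (setting the query type to hybrid is enough); the sketch just shows why a document that is merely decent in both lists can outrank one that tops a single list.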
Step 8

Implement error handling and retry policies

Wrap kernel invocations in try-catch blocks to handle rate limits (429 errors) and transient failures. Semantic Kernel includes built-in retry logic via HttpClient policies, but you should add application-level handling for quota exceeded, content filtering flags, and timeout scenarios. Use 'builder.Services.AddLogging()' to capture execution traces, token counts, and function call details—this telemetry is essential when debugging why an agent made a particular decision or called the wrong function. In production, route these logs to Application Insights where you can correlate AI behavior with user sessions and business outcomes.

💡 Tip: Set a max token budget per request (e.g., 4000 tokens) to prevent runaway costs from recursive function calls or adversarial inputs.
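A sketch of the application-level retry handling described above, with exponential backoff and jitter. 'RateLimitError' is a stand-in for whichever 429 exception your HTTP client or SDK actually raises; 'invoke' is any zero-argument callable wrapping the kernel invocation.

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for the 429 rate-limit error your client raises."""


def invoke_with_retry(invoke, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry a kernel invocation on rate-limit errors with exponential
    backoff plus jitter; re-raise once the attempt budget is spent."""
    for attempt in range(1, max_attempts + 1):
        try:
            return invoke()
        except RateLimitError:
            if attempt == max_attempts:
                raise
            # Delay doubles each attempt; jitter avoids thundering-herd retries.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Keep this layer thin: it should only cover the transient cases (429s, timeouts), while content-filter flags and quota exhaustion get surfaced to the user or your ops tooling rather than retried.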
Step 9

Test planners for multi-step workflows

For complex tasks requiring multiple function calls in sequence, enable the FunctionCallingStepwisePlanner. This planner uses the LLM to break down a goal like 'find my overdue invoices and email summaries to the account owners' into discrete steps: search invoices, filter by date, retrieve customer emails, compose messages, send via plugin. You provide the goal as a prompt, and the planner generates and executes the plan automatically. This eliminates hundreds of lines of orchestration code you'd otherwise write manually. Test planners in a sandbox first—they can make 5-10 function calls per goal, and you want to validate the logic before pointing them at production systems.

⚠ Watch out: Planners increase latency and token cost. Use them for workflows that genuinely require reasoning across 3+ steps, not simple single-function calls.
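The planner's loop can be sketched in a few lines. This is a toy stepwise executor, not the real FunctionCallingStepwisePlanner: 'choose_step' stands in for the LLM's decision about which tool to call next, and 'tools' maps function names to your plugin methods. The real planner handles the prompting, response parsing, and schema plumbing for you; the point here is the shape of the loop and the hard cap on steps that bounds cost.

```python
def run_stepwise(goal: str, choose_step, tools: dict, max_steps: int = 10):
    """Toy stepwise loop: ask the (stubbed) model which tool to call next,
    execute it, feed the observation back, and stop when the model finishes."""
    observations = []
    for _ in range(max_steps):
        decision = choose_step(goal, observations)  # stub for the LLM's choice
        if decision["action"] == "finish":
            return decision["answer"]
        result = tools[decision["action"]](**decision.get("args", {}))
        observations.append((decision["action"], result))
    raise RuntimeError("step budget exhausted before the goal was met")
```

The 'max_steps' cap is the sandbox guardrail the step text recommends: even a confused model can only spend a bounded number of function calls per goal.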
Step 10

Deploy to Azure with managed identity and private endpoints

Package your Semantic Kernel agent as an Azure Container App, Azure Functions app, or App Service. Configure managed identity so your code authenticates to Azure OpenAI and other Azure services without storing credentials. Enable private endpoints to keep all AI traffic on the Azure backbone instead of the public internet—critical for regulated industries. Set up autoscaling based on request volume; Semantic Kernel apps are stateless and scale horizontally. Use Application Insights for distributed tracing across kernel invocations, plugin calls, and LLM requests. This architecture gives you the same enterprise reliability as your other Azure workloads, with the same deployment pipelines, monitoring, and governance.

💡 Tip: Start with Azure Container Apps for fastest deployment. It includes managed identity, autoscaling, and integrated log streaming without Kubernetes complexity.

Summary

You've just built a production-ready AI agent foundation using Semantic Kernel and Azure OpenAI Service. Your agent can call custom business logic, remember conversation context, search company data with RAG, handle errors gracefully, and deploy to enterprise Azure infrastructure. This architecture scales from prototype to millions of requests while maintaining security, compliance, and integration with your existing tech stack—exactly what you need when AI has to work alongside SAP, Salesforce, and legacy systems.

Next Steps

  1. Take Scott's AI-3016 course to learn advanced Semantic Kernel patterns including multi-agent orchestration, prompt caching, and token optimization
  2. Implement your first production plugin connecting to your CRM, ERP, or ticketing system API
  3. Set up a Prompt Flow pipeline in Azure AI Foundry to evaluate agent responses against your quality benchmarks
  4. Schedule a 30-minute consulting session with Scott to validate your Semantic Kernel architecture and deployment plan before you build

Need Custom AI Solutions for Your Business?

I build AI solutions that work for boring businesses—HVAC, dental, construction, professional services. Custom implementations in 90 days. You own the IP. We handle hosting, monitoring, updates, and 24/7 support.

Book a Free Consultation
Scott Hay
Microsoft Certified Trainer (MCT) & AI Solutions Architect
• Delivers 12 Microsoft Copilot courses (MS-4002 through MS-4023) plus Azure AI and Power BI
• Azure AI Agents, Semantic Kernel, Power BI (PL-300), Power Platform certified
• Former Microsoft and Amazon — 30+ years building production systems
• Builds custom AI solutions for SMBs with 90-day delivery