This post will walk you through integrating DeepSeek R1 with .NET 9 using Semantic Kernel. If you’re ready to explore DeepSeek models locally, this step-by-step guide is a great place to start.

What You Will Learn

  • How to get started with DeepSeek R1
  • How to use Ollama to run local models
  • How to install and run the DeepSeek R1 model
  • How to use Semantic Kernel in C#

1. Prerequisites

  • Visual Studio 2022 (version 17.12 or later) with the .NET 9 SDK installed
  • Ollama (for managing and running local models)
  • DeepSeek R1 1.5b model (deepseek-r1:1.5b)

2. Installing Ollama

Ollama is a tool or platform that enables users to run and interact with large language models (LLMs) locally. It streamlines the setup and deployment of open-source models such as LLaMA, Phi, DeepSeek R1, and others, making it easier to work with LLMs on a local machine.

To install Ollama, visit the official download page at https://ollama.com/download and install it on your machine.
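Once installed, you can confirm that Ollama is available from a terminal; the command below should print the installed version.

ollama --version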

3. Installing DeepSeek R1

DeepSeek R1 is DeepSeek’s first generation of reasoning models, with performance comparable to OpenAI o1. The family includes six dense models distilled from DeepSeek-R1, based on Llama and Qwen.

On the Ollama website, click Models, select deepseek-r1, and choose the 1.5b parameter option.


Open a command prompt and run the command below.

ollama run deepseek-r1:1.5b

It will download the model and start running automatically.
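Once the download completes, Ollama drops you into an interactive chat session where you can type a prompt directly to test the model; at the time of writing, you can type /bye to exit the session.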

Once done, verify that the model is available with the command below.

ollama list

That’s it! We’re ready to integrate DeepSeek locally.

4. Creating a .NET Console Application

  1. Launch Visual Studio and make sure the .NET 9 SDK is installed.
  2. Create a new project: File → New → Project… and pick Console App targeting .NET 9.
  3. Name your project, e.g., DeepSeekDemoApp or any name you prefer.
  4. Check the target framework: right-click your project → Properties and set Target Framework to .NET 9.
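If you prefer the command line, roughly the same project can be created with the .NET CLI (DeepSeekDemoApp is just the example name used above):

dotnet new console -n DeepSeekDemoApp -f net9.0
cd DeepSeekDemoApp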

5. Integrating DeepSeek R1 with Semantic Kernel

Although it’s possible to call DeepSeek directly using HTTP requests to Ollama, leveraging Semantic Kernel provides a more powerful abstraction for prompt engineering, orchestration, and additional capabilities.
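To follow along, add the Semantic Kernel packages to the project. The sketch below assumes the main Microsoft.SemanticKernel package plus the Ollama connector, Microsoft.SemanticKernel.Connectors.Ollama, which is still shipped as a prerelease package at the time of writing:

dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --prerelease

Here is a minimal sketch of wiring the kernel to the local Ollama endpoint and chatting with deepseek-r1:1.5b. The Ollama connector is marked experimental, so the sample suppresses the SKEXP0070 warning; the exact diagnostic ID and method signatures may differ slightly between Semantic Kernel versions.

// Program.cs — minimal chat against the local DeepSeek R1 model via Semantic Kernel.
#pragma warning disable SKEXP0070 // Ollama connector is experimental at the time of writing.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();

// Ollama listens on http://localhost:11434 by default.
builder.AddOllamaChatCompletion(
    modelId: "deepseek-r1:1.5b",
    endpoint: new Uri("http://localhost:11434"));

var kernel = builder.Build();
var chat = kernel.GetRequiredService<IChatCompletionService>();

var history = new ChatHistory();
history.AddUserMessage("Explain, in one paragraph, what Semantic Kernel is.");

// Send the conversation to the local model and print its reply.
var reply = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);

Running the app should print the model’s answer to the console. Note that DeepSeek R1 models typically emit their reasoning inside <think> tags, so you may want to strip that section before displaying the response.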