Run AI Locally in VS Code (FREE Copilot Alternative) | Ollama + DeepSeek Setup

By Kamlesh Bhor · 📅 20 Apr 2026

🔥 Introduction

Artificial Intelligence is transforming how developers write code. Tools like GitHub Copilot and ChatGPT have made coding faster, but they come with cost, internet dependency, and privacy concerns.

👉 What if you could run AI directly on your laptop, inside VS Code, without any API key or subscription?

In this guide, you'll learn how to build a free, offline AI coding assistant using:

  • Ollama (local AI runtime)
  • DeepSeek models (coding + reasoning)
  • VS Code + Continue extension

🚀 What You Will Build

By the end of this tutorial, you will have:

  • ✅ An AI assistant inside VS Code
  • ✅ No API key or internet required
  • ✅ Multiple AI models working together
  • ✅ Full control over your data

🧰 Prerequisites

Before starting, ensure you have:

  • Windows / Mac / Linux system
  • At least 16 GB of RAM (recommended)
  • VS Code installed
  • Basic understanding of development (.NET preferred)

βš™οΈ Step 1: Install Ollama (Local AI Engine)

Ollama allows you to run large language models locally with a simple command.

Installation:

  1. Visit: https://ollama.com
  2. Download and install
  3. Restart your system

Verify Installation:

ollama --version

👉 If a version number is printed, the installation succeeded.


📦 Step 2: Install AI Models

Now install the required models:

ollama pull deepseek-coder:6.7b
ollama pull deepseek-r1:8b
ollama pull llama3:8b


🧠 Model Roles Explained

Model            Purpose
DeepSeek-Coder   Code generation
DeepSeek-R1      Planning & reasoning
Llama 3          Code review & explanation

👉 Using multiple models gives better results than using just one.
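The table above can be captured as a simple task-to-model routing map. This is a sketch (the model tags match those pulled in Step 2; the payload shape targets Ollama's /api/generate endpoint):

```python
# Map each development task to the model suggested in the table above.
MODEL_FOR_TASK = {
    "generate": "deepseek-coder:6.7b",  # code generation
    "plan": "deepseek-r1:8b",           # planning & reasoning
    "review": "llama3:8b",              # code review & explanation
}


def build_request(task: str, prompt: str) -> dict:
    """Build a request payload for Ollama's /api/generate, routed by task."""
    if task not in MODEL_FOR_TASK:
        raise ValueError(f"unknown task: {task!r}")
    return {"model": MODEL_FOR_TASK[task], "prompt": prompt, "stream": False}
```

For example, `build_request("review", "Check this controller")` selects llama3:8b, mirroring the table.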


🧪 Step 3: Test AI Locally

Run:

ollama run deepseek-coder:6.7b

Try a prompt:

Create a .NET Web API controller for product CRUD

👉 You should see complete code generated locally.
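Besides the interactive CLI, Ollama exposes a local REST API. A minimal sketch, assuming the default port 11434, sends the same prompt programmatically:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint


def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generation request as JSON bytes."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def generate(model: str, prompt: str) -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(generate("deepseek-coder:6.7b",
                   "Create a .NET Web API controller for product CRUD"))
```

This is the same API the Continue extension talks to behind the scenes.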


🧩 Step 4: Install VS Code Extension

  1. Open VS Code
  2. Go to Extensions
  3. Install Continue

This extension connects your local AI models to VS Code.


βš™οΈ Step 5: Configure Models in Continue

Open config/settings:

Ctrl + Shift + P → Continue: Open Config

Paste this:

name: Local Config
version: 1.0.0
schema: v1

models:
  - name: DeepSeek Coder
    provider: ollama
    model: deepseek-coder:6.7b

  - name: DeepSeek R1
    provider: ollama
    model: deepseek-r1:8b

  - name: Llama
    provider: ollama
    model: llama3:8b

Save and reload VS Code.

🤖 Step 6: Use AI in VS Code

Open the Continue panel and start using AI.


🧪 Example 1: Generate Code

Create a .NET Web API controller for Product CRUD operations

🧠 Example 2: Improve Architecture

Switch to DeepSeek-R1:

How can I improve scalability and security of this API?

πŸ” Example 3: Code Review

Switch to Llama:

Review this code and suggest improvements

🔥 Multi-Model AI Workflow (Advanced)

Instead of relying on a single model, use a team-based workflow:

  1. R1 → Plan the architecture
  2. Coder → Generate the code
  3. Llama → Review and optimize

👉 This division of labor noticeably improves output quality.
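The three steps above can be sketched as a small pipeline. Here `ask` is a placeholder for however you call a model (the CLI, the REST API, or manually switching models in Continue):

```python
from typing import Callable


def multi_model_pipeline(feature: str, ask: Callable[[str, str], str]) -> dict:
    """Run the plan -> generate -> review workflow across three local models.

    `ask(model, prompt)` stands in for any way of querying a model;
    each stage feeds its output into the next one's prompt.
    """
    plan = ask("deepseek-r1:8b", f"Plan the architecture for: {feature}")
    code = ask("deepseek-coder:6.7b", f"Implement this plan:\n{plan}")
    review = ask("llama3:8b", f"Review and suggest improvements:\n{code}")
    return {"plan": plan, "code": code, "review": review}
```

Chaining the stages this way means the coder model always works from an explicit plan, and the reviewer always sees the generated code.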


⚠️ Limitations of Local AI

While powerful, local AI has some limitations:

  • ❌ No full automation (agent mode is limited)
  • ❌ Slower than cloud models, especially on modest hardware
  • ❌ Smaller context windows than cloud models

👉 Still, it is highly useful for everyday development.


💡 Best Practices

  • Use smaller models (7B–8B) for speed
  • Avoid large unnecessary context
  • Provide clear and structured prompts
  • Use file-specific context instead of full project
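One simple way to keep context small, sketched below, is to cap how much of a file you paste into a prompt. The line limit and the keep-the-end heuristic are illustrative choices, not part of any tool's API:

```python
def trim_context(source: str, max_lines: int = 80) -> str:
    """Keep only the last `max_lines` lines of a file's text.

    Local models respond faster and more accurately with small,
    focused context; the tail of a file is a crude but common
    stand-in for "the code I'm currently working on".
    """
    lines = source.splitlines()
    if len(lines) <= max_lines:
        return source
    return "\n".join(lines[-max_lines:])
```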

🎯 Who Should Use This Setup?

This setup is ideal for:

  • .NET developers
  • Students learning AI
  • Developers who want free tools
  • Privacy-conscious engineers

πŸ† Benefits of Local AI

  • 💸 Zero cost
  • 🔒 Full privacy
  • ⚡ Works offline
  • 🧠 Better understanding of AI workflows

🔎 Common Questions

❓ Can I use AI in VS Code for free?

Yes. Using Ollama and local models, you can run AI without any cost.


❓ Is this better than GitHub Copilot?

It depends:

  • Copilot → faster, cloud-based
  • Local AI → free, private, offline

❓ Do I need internet after setup?

No. Once models are downloaded, everything works offline.


🚀 Future of Local AI

Local AI is evolving rapidly. Likely improvements include:

  • Better agent workflows
  • Faster models
  • Full automation locally

👉 Learning this today gives you a strong advantage.


🏁 Conclusion

You don't need expensive tools to be productive.

With this setup, you can:

  • Run AI locally
  • Write better code
  • Learn faster

👉 And most importantly, stay independent of cloud tools.


Article by Kamlesh Bhor

Feel free to comment below about this article.
