
Setting Up Ollama with Visual Studio Code: A Complete Integration Guide

December 18, 2024
6 min read
By Antonio Ramirez Cobos

Introduction

This guide will walk you through the process of integrating Ollama with Visual Studio Code using the Continue plugin. This setup will enable you to leverage AI capabilities directly within your development environment, enhancing your coding workflow with contextual AI assistance.

Understanding the Tools

Ollama

Ollama is an open-source tool that allows you to run Large Language Models (LLMs) locally on your machine. It provides a simple way to download, run, and manage various AI models like Llama 2, Code Llama, and others. The main advantages include:

  • Local execution for privacy
  • No API costs
  • Offline capability
  • Customizable models

For more details, check out the Ollama documentation.
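To give a sense of how lightweight this is, once Ollama is installed (installation is covered below), running a model locally is a single command. The model name and prompt here are just illustrative:

# Download the model on first use, then answer a one-off prompt
ollama run llama2 "Explain recursion in one sentence"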

Visual Studio Code

VS Code is Microsoft's popular code editor that provides:

  • Rich extension ecosystem
  • Integrated development environment features
  • Cross-platform support
  • Git integration

Learn more about VS Code features in their official documentation.

Continue Plugin

Continue is a VS Code extension that connects your editor with AI models. Key features include:

  • Code completion
  • Natural language to code conversion
  • Code explanation
  • Refactoring suggestions
  • Direct integration with various AI models including Ollama

Visit the Continue documentation for detailed information.

Prerequisites

Before starting, ensure you have:

  • A computer with at least 8GB RAM (16GB recommended)
  • Visual Studio Code installed
  • Administrative privileges for installation
  • Stable internet connection for initial downloads

Installation Steps

Step 1: Install Ollama

For macOS:

The shell install script is Linux-only, so on macOS either download the app from Ollama's official download page or install it via Homebrew:

brew install ollama

For Linux:

curl -fsSL https://ollama.ai/install.sh | sh

For Windows:

  • Download the installer from Ollama's official download page
  • Run the installer and follow the prompts

Check the Ollama installation guide for detailed instructions.
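Whichever platform you are on, you can verify the installation from a terminal. The second check assumes the server is already running, which the desktop app (macOS/Windows) and the Linux service normally handle automatically:

# Confirm the CLI is on your PATH
ollama --version

# Confirm the server is reachable; it should reply "Ollama is running"
curl http://localhost:11434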

Step 2: Install VS Code Continue Extension

1. Open VS Code

2. Press Ctrl+P (Windows/Linux) or Cmd+P (macOS)

3. Type: ext install continue.continue

4. Click Install

Alternatively, install directly from the VS Code Marketplace.
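If you prefer the terminal, the same extension can be installed through VS Code's command-line interface:

code --install-extension continue.continue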

Step 3: Pull a Model with Ollama

Open your terminal and run:

ollama pull codellama

Browse available models in the Ollama model library.
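The codellama download is a few gigabytes, so it may take a while. Once it completes, confirm the model is available and give it a quick smoke test:

# List models available locally
ollama list

# One-off test prompt
ollama run codellama "Write a hello world program in Python"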

Configuration

Configure Continue Plugin

Continue is configured through its own config file rather than the standard VS Code settings. At the time of writing it reads ~/.continue/config.json:

1. Open the Continue sidebar in VS Code

2. Click the gear icon to open config.json (or edit ~/.continue/config.json directly)

3. Add an entry for your local Ollama model to the models list:

{
  "models": [
    {
      "title": "CodeLlama (local)",
      "provider": "ollama",
      "model": "codellama"
    }
  ]
}

For advanced configuration options, visit the Continue configuration guide.
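Continue can also use a local model for inline tab completion. As a sketch, assuming the tabAutocompleteModel field documented for Continue at the time of writing, this goes in the same config.json:

{
  "tabAutocompleteModel": {
    "title": "CodeLlama autocomplete",
    "provider": "ollama",
    "model": "codellama"
  }
}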

Configure Ollama Server Settings

Ollama is configured through environment variables rather than a config file, and its defaults (listening on 127.0.0.1:11434) are exactly what Continue expects, so most setups need no changes here. To override the bind address or port, set OLLAMA_HOST before starting the server:

# Serve on a non-default address/port (the default is 127.0.0.1:11434)
OLLAMA_HOST=127.0.0.1:11434 ollama serve

Reference the Ollama configuration documentation for more options.

Usage Guide

Basic Commands

1. Start the Ollama server (the desktop app on macOS and Windows, and the systemd service on Linux, usually start it automatically):

ollama serve

2. Quick code completion:

  • Place your cursor where you want code
  • Press Ctrl+Shift+P (Windows/Linux) or Cmd+Shift+P (macOS)
  • Type "Continue: Edit Code"
  • Enter your prompt

Visit the Continue quick start guide for more commands. 
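Under the hood, Continue talks to the local Ollama server over its REST API. You can hit the same endpoint yourself to confirm the wiring, independently of the editor:

# Ask the Ollama API directly for a completion
curl http://localhost:11434/api/generate -d '{
  "model": "codellama",
  "prompt": "Write a Python function that reverses a string",
  "stream": false
}'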

Common Use Cases

1. Generate Code:

  • Select a code block or empty space
  • Use the command palette
  • Type your requirement in natural language

2. Code Explanation:

  • Select code you want explained
  • Use command palette
  • Choose "Continue: Explain Code"

3. Refactoring:

  • Select code to refactor
  • Use command palette
  • Choose "Continue: Edit Code"
  • Describe desired changes

Best Practices

1. Model Selection:

  • Use codellama for programming tasks (CodeLlama documentation)
  • Consider llama2 for general text completion
  • Match model size to your hardware capabilities (see the tag examples below)

2. Prompt Engineering:

  • Be specific in your requests
  • Provide context when needed
  • Break down complex tasks

Learn more about effective prompting in the Continue prompting guide.
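On the model-size point above: Ollama publishes most models in several parameter sizes as tags, and smaller tags need noticeably less RAM (tag names vary per model, so check the model library):

# Smaller variant for machines near the 8GB minimum
ollama pull codellama:7b

# Larger variant if you have 16GB or more
ollama pull codellama:13b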

Troubleshooting

Common Issues and Solutions

1. Connection Errors:

# Check if Ollama is running
ps aux | grep ollama

# Restart Ollama
killall ollama
ollama serve
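On Linux, the install script typically registers Ollama as a systemd service, so restarting through systemd is cleaner than killall. A sketch, assuming the default service name:

# Restart the background service and follow its logs
sudo systemctl restart ollama
journalctl -u ollama -f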

2. Performance Issues:

  • Ensure sufficient RAM is available
  • Close resource-intensive applications
  • Consider using a smaller model

3. Plugin Not Responding:

  • Reload VS Code window
  • Check Ollama server status
  • Verify configuration settings

Wrapping It Up

Your AI-powered development setup with Ollama and VS Code is good to go! It provides the flexibility of local AI capabilities, keeps your data private, and saves on API costs. Feel free to explore different models and workflows to see what fits best with your development style.

Don’t forget to check the official documentation and community resources from time to time - they’re great for staying up to date with new features. Happy coding!
