
Ollama

Official website: https://ollama.ai

An open-source project to run, create, and share large language models (LLMs).

Connect Ollama Models

  • Download Ollama from the following link: ollama.ai
  • Install Ollama, then download the codellama model by running the command ollama pull codellama
  • To use mistral or another model, replace codellama with the desired model name. For example: ollama pull mistral
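The steps above can be sketched as a short shell session. The model name here is just an example; any model from the list further down can be substituted:

```shell
# Choose the model to download (swap in mistral, llama3:8b, etc. as needed)
MODEL=codellama

# Download the model weights from the Ollama registry
ollama pull "$MODEL"

# Confirm the model is now available locally
ollama list
```

Running `ollama list` afterwards is a quick way to confirm the pull completed before switching to the model in the extension.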

How to use Ollama

  • In VSCode, select Ollama as the Provider

  • Please be aware that Ollama runs locally on your computer.

  • Paste API Key here, and click on Connect:

Remove Key

There is no need to disconnect; simply switch to a different provider.
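Since Ollama runs locally, you can verify the server is reachable before connecting the extension. This sketch assumes the default address and port (http://localhost:11434), which Ollama uses unless it has been reconfigured:

```shell
# Ollama serves a local HTTP API on port 11434 by default.
# A successful response (JSON listing installed models) means the
# server is up and the extension should be able to connect.
OLLAMA_URL="http://localhost:11434"
curl -s "$OLLAMA_URL/api/tags"
```

If this request fails, start the server with `ollama serve` (or launch the Ollama app) before connecting in VSCode.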

Ollama Models available in Code GPT

  • llama3:70b
  • llama3:8b
  • command-r-plus
  • command-r
  • codegemma
  • gemma:7b
  • gemma:2b
  • dbrx
  • mistral
  • mixtral
  • llama2
  • codellama
  • phi
  • deepseek-coder
  • starcoder2
  • dolphincoder
  • dolphin-mixtral
  • starling-lm
  • llama2-uncensored

API Errors

If you are getting API errors, check the following link: Ollama Documentation
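When debugging API errors, it can help to call the Ollama API directly and inspect the raw response, bypassing the extension. A minimal sketch, assuming the default local address and that the codellama model has already been pulled:

```shell
# Send a one-off, non-streaming generation request to the local Ollama API.
# An error message in the JSON response (e.g. "model not found") usually
# pinpoints the problem more precisely than the extension's error dialog.
curl -s http://localhost:11434/api/generate -d '{
  "model": "codellama",
  "prompt": "Say hello in one word.",
  "stream": false
}'
```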

Ollama Errors

  • If the Ollama model does not respond in the chat, restart Ollama locally by turning it off and on again. This should resolve the issue.

  • If Ollama is running but not responding, manually remove the OLLAMA_HOST environment variable so that it reverts to the default setting.
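The second fix can be sketched as follows on Linux/macOS (the Windows equivalent is removing OLLAMA_HOST from the system environment variables):

```shell
# Remove a stale OLLAMA_HOST override so Ollama falls back to its
# default local address (127.0.0.1:11434).
unset OLLAMA_HOST

# Restart the server so the change takes effect
ollama serve
```

Note that `unset` only affects the current shell; if OLLAMA_HOST is exported from a shell profile (e.g. .bashrc or .zshrc), remove it there as well.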