

Official website

Open-source project to run, create, and share large language models (LLMs).

Connect Ollama Models

  • Download Ollama from the following link:
  • Install Ollama, then pull the codellama model by running the command ollama pull codellama
  • To use mistral or another model, replace codellama with the desired model name. For example: ollama pull mistral
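The steps above can be sketched as a short command-line session (model names other than codellama and mistral are just examples):

```shell
# Download the codellama model (only needed the first time)
ollama pull codellama

# To use a different model, replace the name, e.g.:
ollama pull mistral

# Confirm which models are now available locally
ollama list
```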

How to use Ollama

  • In VSCode, select Ollama as the Provider

- Please note that Ollama runs locally on your computer.
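Because Ollama runs locally, you can verify that the server is up before using it from VSCode. This is a quick check assuming Ollama's default port, 11434:

```shell
# Ollama serves its API on localhost:11434 by default
curl http://localhost:11434
# A running server responds with: Ollama is running
```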

Ollama Models available in Code GPT

  • llama2
  • codellama
  • phi
  • mistral
  • mixtral
  • deepseek-coder
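Any of the models listed above can also be tried directly from the terminal before selecting it in Code GPT. A quick sanity check might look like this (phi is used here purely as an example; any listed model works):

```shell
# Run a one-off prompt against a local model to confirm it responds
ollama run phi "Write a hello world program in Python"
```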

API Errors

If you are getting API errors, check the following link: Ollama Documentation

If the Ollama model does not respond in the chat, try restarting the local Ollama server by stopping it and starting it again. This usually resolves the issue.
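How you restart Ollama depends on how it was installed; the commands below are common options, not an exhaustive list:

```shell
# Linux, when Ollama was installed as a systemd service:
sudo systemctl restart ollama

# macOS, when Ollama was installed via Homebrew:
brew services restart ollama

# Any platform: stop the Ollama process (or quit the app),
# then start the server manually in a terminal:
ollama serve
```

After restarting, the curl check against localhost:11434 should succeed again and the chat should respond.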