Proxy that allows you to use Ollama as a Copilot-style assistant, similar to GitHub Copilot.
Ensure Ollama is installed:

```sh
curl -fsSL https://ollama.com/install.sh | sh
```

Or follow the manual install instructions.
To use the default model expected by ollama-copilot:

```sh
ollama pull codellama:code
```

To use DeepSeek:

```sh
ollama-copilot -provider deepseek -token YOUR_DEEPSEEK_API_KEY -model deepseek-coder
```

To use Mistral:

```sh
ollama-copilot -provider mistral -token YOUR_MISTRAL_API_KEY -model codestral-latest
```

You can use the install.sh script to automate the build, installation, and configuration of ollama-copilot and your editors:

```sh
./install.sh
```

This script will:
- Build the binary and install it to `~/.local/bin/ollama-copilot`.
- Optionally install and start a systemd service (allowing you to configure `num-predict`).
- Automatically detect and offer to configure Neovim, VSCode, and Zed.
Alternatively, install manually with Go:

```sh
go install github.com/bernardo-bruning/ollama-copilot@latest
```

Ensure your `$PATH` includes `$HOME/go/bin` or `$GOPATH/bin`. For example, in `~/.bashrc` or `~/.zshrc`:

```sh
export PATH="$HOME/go/bin:$GOPATH/bin:$PATH"
```

Run the server:

```sh
ollama-copilot
```

Or, if you are hosting Ollama in a container or elsewhere:

```sh
OLLAMA_HOST="http://192.168.133.7:11434" ollama-copilot
```

You can configure the server using command-line flags:
| Flag | Default | Description |
|---|---|---|
| `-port` | `:11437` | HTTP port to listen on |
| `-proxy-port` | `:11438` | HTTP proxy port |
| `-port-ssl` | `:11436` | HTTPS port to listen on |
| `-proxy-port-ssl` | `:11435` | HTTPS proxy port |
| `-cert` | | Certificate file path (`*.crt`) for custom TLS |
| `-key` | | Key file path (`*.key`) for custom TLS |
| `-provider` | `ollama` | Provider to run the LLM |
| `-token` | `TOKEN` | Token to pass to the provider |
| `-model` | `codellama:code` | LLM model to use |
| `-num-predict` | `250` | Number of tokens to predict (25 recommended for copilot use) |
| `-template` | `<PRE> {{.Prefix}} <SUF> {{.Suffix}} <MID>` | Prompt template for fill-in-the-middle |
| `-system` | `You are a helpful...` | System prompt to guide the model |
## Neovim

- Install copilot.vim
- Configure variables:

```vim
let g:copilot_proxy = 'http://localhost:11435'
let g:copilot_proxy_strict_ssl = v:false
```

## VSCode

- Install the Copilot extension
- Sign in or sign up with GitHub
- Open the settings config and insert:

```json
{
  "github.copilot.advanced": {
    "debug.overrideProxyUrl": "http://localhost:11437"
  },
  "http.proxy": "http://localhost:11435",
  "http.proxyStrictSSL": false
}
```

## Zed

- Open settings (`Ctrl+,`)
- Set up edit completion proxying:

```json
{
  "features": {
    "edit_prediction_provider": "copilot"
  },
  "show_completions_on_input": true,
  "edit_predictions": {
    "copilot": {
      "proxy": "http://localhost:11435",
      "proxy_no_verify": true
    }
  }
}
```

## PyCharm

- Open Settings (`Ctrl+Alt+S`)
- Navigate to Appearance & Behavior > System Settings > HTTP Proxy
- Select Manual proxy configuration:
  - HTTP
  - Host name: `localhost`
  - Port number: `11435`
- Navigate to Tools > Server Certificates
- Check "Accept non-trusted certificates automatically"
- Restart PyCharm
## Emacs (experimental)

- Install copilot-emacs
- Configure the proxy:

```elisp
(use-package copilot
  :straight (:host github :repo "copilot-emacs/copilot.el" :files ("*.el")) ;; if you don't use straight.el, install another way
  :ensure t
  ;; :hook (prog-mode . copilot-mode)
  :bind (("C-<tab>" . copilot-accept-completion))
  :config
  (setq copilot-network-proxy
        '(:host "127.0.0.1" :port 11435 :rejectUnauthorized :json-false)))
```

## Roadmap

- Enable completions API usage; fill in the middle.
- Enable flexible model configuration (currently only codellama:code is supported).
- Create self-installing functionality.
- Auto-configure IDEs (Neovim, VSCode, Zed).
- Documentation on how to use.
- Windows setup.
