Learn how to use Ollama in LobeVidol to run large language models locally and experience cutting-edge AI.
Install Ollama Locally

First, download and install Ollama for your platform from the official website (https://ollama.com/download).
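Once the installation finishes, an optional sanity check can confirm that the Ollama CLI and its local service are available. The commands below assume ollama is on your PATH and the service is running on its default port:

# print the installed Ollama version
ollama --version
# list locally available models (requires the Ollama service to be running)
ollama list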
Configure Ollama for Cross-Origin Access

Because Ollama restricts cross-origin requests by default, you need to set the environment variable OLLAMA_ORIGINS so that LobeVidol can reach the local service.

macOS

Use launchctl to set the environment variable, then restart the Ollama application:

launchctl setenv OLLAMA_ORIGINS "*"
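If you want to confirm that the variable was registered (a quick optional check; launchctl only affects apps launched after the change), you can read it back:

# should print "*"
launchctl getenv OLLAMA_ORIGINS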
Windows

On Windows, Ollama inherits your user and system environment variables:

1. Exit Ollama via the taskbar icon.
2. Open the environment variable settings from the Control Panel or the Settings app.
3. Edit or create the OLLAMA_ORIGINS variable for your user account, setting the value to *.
4. Click OK/Apply to save and then restart your system.
5. Run Ollama again.
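As an alternative to the graphical steps above, the same user-level variable can be set from a Command Prompt. This is a sketch assuming a standard Windows setup; quit and restart Ollama afterwards so the new value is picked up:

setx OLLAMA_ORIGINS "*"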
Linux

Install Ollama with the official script:

curl -fsSL https://ollama.com/install.sh | sh

Then set OLLAMA_ORIGINS. If Ollama is running as a systemd service, set the environment variables using systemctl:

1. Edit the systemd service by calling sudo systemctl edit ollama.service.
2. For each environment variable, add an Environment line under the [Service] section:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"

3. Save and exit, then reload systemd and restart Ollama:

sudo systemctl daemon-reload
sudo systemctl restart ollama
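After restarting the service, a quick check can confirm that Ollama is reachable and that cross-origin requests are being accepted. The commands below assume the default port 11434; the exact headers returned may vary between Ollama versions:

# should print "Ollama is running"
curl http://127.0.0.1:11434
# the response headers should include Access-Control-Allow-Origin once OLLAMA_ORIGINS takes effect
curl -s -D - -o /dev/null -H "Origin: http://localhost:3000" http://127.0.0.1:11434/api/tags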
Docker

Pull the official Ollama image:

docker pull ollama/ollama

Set OLLAMA_ORIGINS with the -e flag in the docker run command:

docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
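Note that --gpus=all requires the NVIDIA Container Toolkit; if you are running on CPU only, you can simply omit that flag. To confirm the container started and the variable was applied (using the container name ollama from the command above):

# follow the container logs
docker logs -f ollama
# show the environment variables the container was started with
docker inspect -f '{{.Config.Env}}' ollama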
Interact with Local LLM in LobeVidol

You can now chat with your local model in LobeVidol. Open Settings -> Language Models, where you can configure Ollama's proxy address, model name, and more.
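If LobeVidol cannot reach the model, it can help to test the Ollama API directly before adjusting the settings. The sketch below assumes the default endpoint http://127.0.0.1:11434 and uses llama3 purely as an example model name; substitute whichever model you have pulled:

# download an example model
ollama pull llama3
# send a single, non-streaming chat request to the local API
curl http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{ "role": "user", "content": "Hello" }],
  "stream": false
}'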