
Using Ollama on macOS
Install Ollama Locally
Download Ollama for macOS, then unzip the archive to install it.
Configure Ollama for Cross-Origin Access
By default, Ollama only allows local access at startup, so cross-origin access and port listening require setting the OLLAMA_ORIGINS environment variable. Use launchctl to set the environment variable, and restart the Ollama application after completing the setup.
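For example, a minimal setting that allows requests from any origin (narrow the value if you only want to allow specific origins):
launchctl setenv OLLAMA_ORIGINS "*"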
Interact with Local LLM in LobeVidol
Next, you can start interacting with the local LLM using LobeVidol.
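Optionally, you can first confirm from the terminal that the Ollama service is reachable. This sketch assumes Ollama's default port 11434; the /api/tags endpoint returns the models currently installed locally:
curl http://localhost:11434/api/tags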

Using Ollama on Windows
Install Ollama Locally
Download Ollama for Windows and install it.
Configure Ollama for Cross-Origin Access
By default, Ollama only allows local access at startup, so cross-origin access and port listening require setting the OLLAMA_ORIGINS environment variable.
On Windows, Ollama inherits your user and system environment variables. Create or edit the OLLAMA_ORIGINS environment variable for your user account, setting the value to *. Click OK/Apply to save, then restart your system and run Ollama again.
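If you prefer the command line, a rough equivalent (assuming you want to allow all origins) is to set the variable for your user account with setx and then restart Ollama:
setx OLLAMA_ORIGINS "*"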
Interact with Local LLM in LobeVidol
Next, you can start interacting with the local LLM using LobeVidol.

Using Ollama on Linux
Install Ollama Locally
Install using the following command:
curl -fsSL https://ollama.com/install.sh | sh
Alternatively, you can refer to the Linux Manual Installation Guide.
Configure Ollama for Cross-Origin Access
By default, Ollama only allows local access at startup, so cross-origin access and port listening require setting the OLLAMA_ORIGINS environment variable. If Ollama is running as a systemd service, set the environment variables with systemctl. Edit the systemd service:
sudo systemctl edit ollama.service
For each environment variable, add an Environment line under the [Service] section:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Save and exit, then reload systemd and restart Ollama:
sudo systemctl daemon-reload
sudo systemctl restart ollama
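As a quick sanity check (exact output varies by system), you can confirm that the override was applied and that the service responds on the default port 11434:
systemctl show ollama.service | grep -i environment
curl http://localhost:11434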
Interact with Local LLM in LobeVidol
Next, you can start interacting with the local LLM using LobeVidol.

Deploying Ollama Using Docker
Pull the Ollama Image
If you prefer to use Docker, Ollama also provides an official Docker image, which you can pull using the following command:
docker pull ollama/ollama
Configure Ollama for Cross-Origin Access
By default, Ollama only allows local access at startup, so cross-origin access and port listening require setting the OLLAMA_ORIGINS environment variable. If Ollama is running as a Docker container, you can add the environment variable to the docker run command:
docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
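Note that --gpus=all assumes the NVIDIA Container Toolkit is installed on the host; on a CPU-only machine you can simply omit that flag. Once the container is running, you can pull a model inside it, for example:
docker exec -it ollama ollama pull llama3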
Interact with Local LLM in LobeVidol
Next, you can start interacting with the local LLM using LobeVidol.

Installing Ollama Models
Ollama supports a wide range of models; you can view the list of available models in the Ollama Library and choose the appropriate model based on your needs.
Installing in LobeVidol
In LobeVidol, we have enabled some commonly used large language models by default, such as llama3, Gemma, Mistral, etc. When you select a model for interaction, we will prompt you to download that model.
Pulling Models Locally with Ollama
You can also install models by executing the following command in the terminal, using llama3 as an example:
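ollama pull llama3
After the download completes, you can verify that the model is available locally with ollama list.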
Custom Configuration
You can find Ollama's configuration options in Settings -> Language Models, where you can configure Ollama's proxy, model name, and more.

You can visit Integrating with Ollama to learn how to deploy LobeVidol so that it meets the integration requirements for Ollama.