Ollama is a powerful framework for running large language models (LLMs) locally, supporting various models including Llama 2, Mistral, and more. LobeVidol now integrates with Ollama, which means you can easily use Ollama's language models in LobeVidol to enhance your applications.

This document will guide you on how to use Ollama in LobeVidol:

Using Ollama on macOS

Install Ollama Locally

Download Ollama for macOS and unzip it to install.

Configure Ollama for Cross-Origin Access

By default, Ollama is configured to allow only local access at startup, so cross-origin access and listening on other ports require setting the OLLAMA_ORIGINS environment variable. Use launchctl to set it:

launchctl setenv OLLAMA_ORIGINS "*"

After completing the setup, you need to restart the Ollama application.
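
After restarting, a quick way to confirm that the variable is set and that Ollama is reachable (assuming the default port 11434):

launchctl getenv OLLAMA_ORIGINS   # should print *
curl http://localhost:11434/api/tags   # lists the models installed locally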

Interact with Local LLM in LobeVidol

Next, you can start interacting with the local LLM using LobeVidol.

<img
  src="https://oss.vidol.chat/assets/9cc8ac00cfe1eb7397e7206755c79f0a.webp"
  alt="Interact with llama3 in LobeVidol"
  className="w-full"
/>

Using Ollama on Windows

Install Ollama Locally

Download Ollama for Windows and install it.

Configure Ollama for Cross-Origin Access

By default, Ollama is configured to allow only local access at startup, so cross-origin access and listening on other ports require setting the OLLAMA_ORIGINS environment variable.

On Windows, Ollama inherits your user and system environment variables, so you can set OLLAMA_ORIGINS either through the system settings (steps below) or from a terminal (see the sketch after the list).

  • First, exit the Ollama application by clicking on it in the Windows taskbar.
  • Edit the system environment variables from the Control Panel.
  • Edit or create the environment variable OLLAMA_ORIGINS for your user account, setting the value to *.
  • Click OK/Apply to save and then restart your system.
  • Relaunch Ollama.
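
If you prefer a terminal, the same user-level variable can be set with setx (a sketch; the steps above are the documented path). Quit and relaunch Ollama afterwards so it picks up the change:

setx OLLAMA_ORIGINS "*"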

Interact with Local LLM in LobeVidol

Next, you can start interacting with the local LLM using LobeVidol.

Using Ollama on Linux

Install Ollama Locally

Install using the following command:

curl -fsSL https://ollama.com/install.sh | sh

Alternatively, you can refer to the Linux Manual Installation Guide.
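
Once the script finishes, you can confirm that the CLI is installed and that the background service is running (the install script registers an ollama systemd service on most distributions):

ollama --version
systemctl status ollama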

Configure Ollama for Cross-Origin Access

By default, Ollama is configured to allow only local access at startup, so cross-origin access and listening on other network interfaces require setting the OLLAMA_ORIGINS environment variable (and OLLAMA_HOST, as shown below). If Ollama is running as a systemd service, set the environment variables using systemctl:

  • Edit the systemd service by calling sudo systemctl edit ollama.service:

    sudo systemctl edit ollama.service

  • For each environment variable, add Environment under the [Service] section:

    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
    Environment="OLLAMA_ORIGINS=*"

  • Save and exit.
  • Reload systemd and restart Ollama:

    sudo systemctl daemon-reload
    sudo systemctl restart ollama
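
After restarting, a quick way to confirm that the service picked up the variables and that Ollama is reachable (assuming the default port 11434):

systemctl show ollama --property=Environment
curl http://localhost:11434/api/tags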
    
Interact with Local LLM in LobeVidol

Next, you can start interacting with the local LLM using LobeVidol.

Deploying Ollama Using Docker

Pull the Ollama Image

If you prefer to use Docker, Ollama also provides an official Docker image, which you can pull using the following command:

docker pull ollama/ollama
    
Configure Ollama for Cross-Origin Access

By default, Ollama is configured to allow only local access at startup, so cross-origin access requires setting the OLLAMA_ORIGINS environment variable.

If Ollama is running as a Docker container, you can add the environment variable to the docker run command:

docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
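
The --gpus=all flag assumes the NVIDIA Container Toolkit is installed; if you are running on CPU only, you can simply drop it:

docker run -d -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama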
    
Interact with Local LLM in LobeVidol

Next, you can start interacting with the local LLM using LobeVidol.
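
If you prefer to pull models from the command line instead of through LobeVidol, you can run the ollama CLI inside the container (assuming it is named ollama, as above), for example:

docker exec -it ollama ollama pull llama3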

Installing Ollama Models

Ollama supports various models, and you can view the list of available models in the Ollama Library and choose the appropriate model based on your needs.

Installing in LobeVidol

In LobeVidol, we have enabled some commonly used large language models by default, such as llama3, Gemma, Mistral, etc. When you select a model for interaction, we will prompt you to download that model.

Once the download is complete, you can start the conversation.

Pulling Models Locally with Ollama

Of course, you can also install models by executing the following command in the terminal, using llama3 as an example:

ollama pull llama3
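
After the pull finishes, you can list the installed models and send a quick test request to confirm everything works end to end (assuming the default port 11434):

ollama list
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{ "role": "user", "content": "Hello!" }],
  "stream": false
}'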
    

Custom Configuration

You can find the configuration options for Ollama in Settings -> Language Models, where you can configure Ollama's proxy address, model names, and more.
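
For example, if Ollama runs on another machine or a non-default port, you can start it with a custom OLLAMA_HOST and point LobeVidol at that address; a sketch, using 11435 as a hypothetical port:

OLLAMA_HOST=0.0.0.0:11435 OLLAMA_ORIGINS="*" ollama serve

You would then enter http://<server-address>:11435 as Ollama's proxy address in Settings -> Language Models.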

You can visit Integrating with Ollama to learn how to deploy LobeVidol so that it meets the integration requirements for Ollama.