diff --git a/apps/ollama-cpu/docker-compose.yml b/apps/ollama-cpu/docker-compose.yml
index 798d9e69..8f31e896 100755
--- a/apps/ollama-cpu/docker-compose.yml
+++ b/apps/ollama-cpu/docker-compose.yml
@@ -5,19 +5,17 @@ services:
     image: ollama/ollama:0.1.33
     restart: unless-stopped
     container_name: ollama-cpu
-    environment:
-      - PORT=11436
     ports:
-      - '${APP_PORT}:11436'
+      - '${APP_PORT}:11434'
     networks:
       - tipi_main_network
     volumes:
-      - ${APP_DATA_DIR}/.ollama:/root/.ollama
+      - ${APP_DATA_DIR}/data/.ollama:/root/.ollama
     labels:
       # Main
       traefik.enable: true
       traefik.http.middlewares.ollama-cpu-web-redirect.redirectscheme.scheme: https
-      traefik.http.services.ollama-cpu.loadbalancer.server.port: 11436
+      traefik.http.services.ollama-cpu.loadbalancer.server.port: 11434
       # Web
       traefik.http.routers.ollama-cpu-insecure.rule: Host(`${APP_DOMAIN}`)
       traefik.http.routers.ollama-cpu-insecure.entrypoints: web
diff --git a/apps/ollama-cpu/metadata/description.md b/apps/ollama-cpu/metadata/description.md
index 9f67b76a..500d423b 100755
--- a/apps/ollama-cpu/metadata/description.md
+++ b/apps/ollama-cpu/metadata/description.md
@@ -1,11 +1,9 @@
-# Ollama - CPU
-[Ollama](https://github.com/ollama/ollama) allows you to run open-source large language models, such as Llama 3 & Mistral, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
-
----
-
 ## Usage
+⚠️ This app runs on port **11436**. Take this into account when configuring tools connecting to the app.
+
 ### Use with a frontend
+
 - [LobeChat](https://github.com/lobehub/lobe-chat)
 - [LibreChat](https://github.com/danny-avila/LibreChat)
 - [OpenWebUI](https://github.com/open-webui/open-webui)
 
@@ -14,9 +12,11 @@
 ---
 
 ### Try the REST API
+
 Ollama has a REST API for running and managing models.
 
 **Generate a response**
+
 ```sh
 curl http://localhost:11434/api/generate -d '{
   "model": "llama3",
@@ -25,6 +25,7 @@ curl http://localhost:11434/api/generate -d '{
 ```
 
 **Chat with a model**
+
 ```sh
 curl http://localhost:11434/api/chat -d '{
   "model": "llama3",
@@ -33,16 +34,11 @@ curl http://localhost:11434/api/chat -d '{
   ]
 }'
 ```
----
-
-### Try in terminal
-```sh
-docker exec -it ollama-cpu ollama run llama3 --verbose
-```
 
 ---
 
 ## Model library
+
 Ollama supports a list of models available on [ollama.com/library](https://ollama.com/library 'ollama model library')
 
 Here are some example models that can be downloaded:
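
Reviewer note: a minimal sketch for sanity-checking the remapped port after this change. It assumes the app's `APP_PORT` is set to 11436 on the host (as the description warning implies; the actual value is defined in the app's config, which is not part of this diff) and uses an illustrative prompt.

```sh
# Inside the container Ollama now listens on its default port 11434;
# Docker publishes it on the host at ${APP_PORT} (assumed to be 11436 here).
curl http://localhost:11436/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello"
}'
```

Dropping the `PORT=11436` environment variable and pointing the Traefik service label at 11434 keeps the container on Ollama's default port, so only the host-side mapping differs from a stock Ollama setup.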