chore(ollama-amd): small config adjustments

Nicolas Meienberger 2024-05-11 11:32:02 +02:00
parent 199dacb18e
commit df7fa7fd4b
2 changed files with 15 additions and 18 deletions


@@ -12,7 +12,7 @@ services:
     networks:
       - tipi_main_network
     volumes:
-      - ${APP_DATA_DIR}/.ollama:/root/.ollama
+      - ${APP_DATA_DIR}/data/.ollama:/root/.ollama
     devices:
       # Attach GPU
       - /dev/kfd
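
Note: this hunk moves the Ollama data bind mount from `${APP_DATA_DIR}/.ollama` to `${APP_DATA_DIR}/data/.ollama`, so models pulled before the update would be left behind at the old path. A minimal migration sketch, assuming the app is stopped first and that `APP_DATA_DIR` resolves to this app's data directory (the example path and the need to migrate are assumptions, not part of the commit):

```sh
# Hypothetical one-time migration of existing Ollama data; adjust the
# path to wherever your install keeps this app's data.
APP_DATA_DIR=/path/to/app-data/ollama-amd
mkdir -p "${APP_DATA_DIR}/data"
mv "${APP_DATA_DIR}/.ollama" "${APP_DATA_DIR}/data/.ollama"
```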


@@ -1,11 +1,9 @@
-# Ollama AMD
-[Ollama](https://github.com/ollama/ollama) allows you to run open-source large language models, such as Llama3 and Mistral, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
----
 ## Usage
+⚠️ This app runs on port **11434**. Take this into account when configuring tools connecting to the app.
 ### Use with a frontend
 - [LobeChat](https://github.com/lobehub/lobe-chat)
 - [LibreChat](https://github.com/danny-avila/LibreChat)
 - [OpenWebUI](https://github.com/open-webui/open-webui)
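
The warning added above points clients at port **11434**. As a quick sanity check (not part of the diff; it assumes the container is running and the port is reachable on localhost), Ollama's root endpoint answers with a plain-text liveness message:

```sh
# Expects the literal reply "Ollama is running" when the server is up.
curl http://localhost:11434/
```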
@@ -14,9 +12,11 @@
 ---
 ### Try the REST API
 Ollama has a REST API for running and managing models.
 **Generate a response**
 ```sh
 curl http://localhost:11434/api/generate -d '{
   "model": "llama3",
@@ -25,6 +25,7 @@ curl http://localhost:11434/api/generate -d '{
 ```
 **Chat with a model**
 ```sh
 curl http://localhost:11434/api/chat -d '{
   "model": "llama3",
@@ -33,16 +34,11 @@ curl http://localhost:11434/api/chat -d '{
   ]
 }'
 ```
----
-### Try in terminal
-```sh
-docker exec -it ollama ollama run llama3
-```
 ---
 ## Compatible GPUs
 Ollama supports the following AMD GPUs:
 | Family | Cards and accelerators |
 | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -53,6 +49,7 @@ Ollama supports the following AMD GPUs:
 ---
 ## Model library
 Ollama supports a list of models available on [ollama.com/library](https://ollama.com/library 'ollama model library')
 Here are some example models that can be downloaded:
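
Models from the library can also be fetched over the same REST API; a sketch using the documented pull endpoint (the model name is an illustrative choice):

```sh
# Downloads the model into the server's local store, same effect as
# running `ollama pull llama3` inside the container.
curl http://localhost:11434/api/pull -d '{
  "name": "llama3"
}'
```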