diff --git a/apps/ollama-cpu/metadata/description.md b/apps/ollama-cpu/metadata/description.md
index 41a38af2..9f67b76a 100755
--- a/apps/ollama-cpu/metadata/description.md
+++ b/apps/ollama-cpu/metadata/description.md
@@ -1,5 +1,5 @@
 # Ollama - CPU
-[Ollama](https://github.com/ollama/ollama) allows you to run open-source large language models, such as Llama 3 & , locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
+[Ollama](https://github.com/ollama/ollama) allows you to run open-source large language models, such as Llama 3 & Mistral, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
 
 ---
 
@@ -62,4 +62,4 @@ Here are some example models that can be downloaded:
 | Gemma | 7B | 4.8GB | `ollama run gemma:7b` |
 | Solar | 10.7B | 6.1GB | `ollama run solar` |
 
-> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
\ No newline at end of file
+> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.