From f073e25626426f1a2d5741d5ec3a06c2ebc2aef1 Mon Sep 17 00:00:00 2001
From: nrvo <151435968+nrvo@users.noreply.github.com>
Date: Sat, 4 May 2024 16:38:10 +0200
Subject: [PATCH] Update description.md

---
 apps/ollama-nvidia/metadata/description.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/apps/ollama-nvidia/metadata/description.md b/apps/ollama-nvidia/metadata/description.md
index 369a19cf..bf4f2f27 100755
--- a/apps/ollama-nvidia/metadata/description.md
+++ b/apps/ollama-nvidia/metadata/description.md
@@ -1,5 +1,5 @@
 # Ollama - Nvidia
 
-[Ollama](https://github.com/ollama/ollama) allows you to run open-source large language models, such as Llama 3 & , locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
+[Ollama](https://github.com/ollama/ollama) allows you to run open-source large language models, such as Llama 3 & Mistral, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
 
 ---
 
@@ -105,4 +105,4 @@ Here are some example models that can be downloaded:
 | Gemma | 7B | 4.8GB | `ollama run gemma:7b` |
 | Solar | 10.7B | 6.1GB | `ollama run solar` |
 
-> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
\ No newline at end of file
+> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
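
As context for reviewers (not part of the patch itself), here is a minimal sketch of how the commands from the description's model table are used once Ollama is installed; it assumes the Ollama CLI and service from the linked repository are running, and that the `mistral` tag named in the corrected sentence is available in the Ollama model library.

```sh
# Download the model weights referenced in the updated description
# (assumes the Ollama CLI and background service are installed and running)
ollama pull mistral

# Start an interactive session with the model; per the description's note,
# roughly 8 GB of free RAM is suggested for 7B-class models
ollama run mistral

# The table's other entries work the same way, e.g.:
ollama run gemma:7b
```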