
Env

Install the NVIDIA Container Toolkit so Docker can expose the host GPUs to containers.
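A minimal install sketch, assuming an Ubuntu/Debian host where NVIDIA's apt repository has already been added per the official documentation (other distros differ):

$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo nvidia-ctk runtime configure --runtime=docker    # registers the NVIDIA runtime with Docker
$ sudo systemctl restart docker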

Run

$ docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
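Here --gpus=all passes the GPUs through, -v ollama:/root/.ollama keeps downloaded models in a named volume, and -p 11434:11434 publishes the API port. To check that the container came up, query the published port; a healthy server answers with a short status message:

$ curl http://localhost:11434
Ollama is running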

Execute

$ docker exec -it ollama ollama run llama3.1:8b

Help

>>> /?

Exit

>>> /bye
or
Ctrl + D
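Outside the interactive prompt, the same model can be queried over the HTTP API on port 11434. A rough example (the prompt string is just a placeholder, and the model must already have been pulled by the run step above):

$ curl http://localhost:11434/api/generate -d '{"model": "llama3.1:8b", "prompt": "Why is the sky blue?", "stream": false}'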

llama3.1 tag list

https://ollama.com/library/llama3.1/tags

Tags · llama3.1: Llama 3.1 is a new state-of-the-art model from Meta, available in 8B, 70B and 405B parameter sizes.
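Any tag from that list can be pulled and run the same way; for example, swapping in the 70B variant (note that the larger tags need considerably more GPU memory):

$ docker exec -it ollama ollama pull llama3.1:70b
$ docker exec -it ollama ollama run llama3.1:70b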
