
A quick try at running llama3.1 8B with ollama in a Docker environment

삼성동고양이 2024. 10. 23. 17:47

Env

Install the NVIDIA Container Toolkit so Docker containers can access the GPU.
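
A minimal sketch of the install on an apt-based distro, assuming NVIDIA's apt repository is already configured (see the official NVIDIA Container Toolkit docs for the repository setup), followed by a quick GPU visibility check:

$ sudo apt-get install -y nvidia-container-toolkit
$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo systemctl restart docker
$ docker run --rm --gpus=all ubuntu nvidia-smi   # should print the GPU table from inside a container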

Run

$ docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
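
Here --gpus=all gives the container access to the GPUs, -v ollama:/root/.ollama keeps downloaded models in a named volume so they survive container restarts, and -p 11434:11434 publishes ollama's HTTP API on the host. A quick check that the server is up:

$ curl http://localhost:11434   # should reply that Ollama is running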

Execute

$ docker exec -it ollama ollama run llama3.1:8b
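
The first run pulls the 8B weights (roughly 4.7 GB for the default quantization) and then drops into an interactive prompt. As a sketch of an alternative, the same model can also be queried from the host through ollama's HTTP API:

$ curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'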

Help

>>> /?
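
A few of the REPL commands that /? lists (the exact set can differ between ollama versions):

>>> /show info                       # model details: parameters, quantization, context length
>>> /set parameter temperature 0.7   # adjust sampling for this session
>>> /clear                           # clear the session context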

Exit

>>> /bye
or
press Ctrl + D

llama3.1 tag list

https://ollama.com/library/llama3.1/tags

 

Tags · llama3.1 (ollama.com): Llama 3.1 is a new state-of-the-art model from Meta, available in 8B, 70B, and 405B parameter sizes.

 
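The tag page also lists quantized and instruct variants; any of them can be run the same way by appending the tag (the tag below is only an example, check the page for what is currently published):

$ docker exec -it ollama ollama run llama3.1:8b-instruct-q8_0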
