Running Mistral LLM locally with Ollama's new Python library

parmarjatin4911@gmail.com - Jan 28 - Dev Community

I ran the Mistral LLM locally with Ollama's 🦙 new Python 🐍 library inside a dockerized 🐳 environment with 4 CPUs and 8 GB of RAM allocated. It took 19 seconds to get a response 🚀. The last time I tried to run an LLM locally, it took 10 minutes to get a response 🤯. #llm #mistral #python #ollama
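
For reference, here is a minimal sketch of the kind of call involved, assuming an Ollama server is already running locally on the default port (for example, started inside a Docker container limited to 4 CPUs and 8 GB RAM) and the Mistral model has been pulled with `ollama pull mistral`. The prompt below is just a placeholder:

```python
import time
import ollama  # Ollama's official Python client library

# Assumes an Ollama server is reachable on the default host/port (11434).
start = time.perf_counter()

response = ollama.chat(
    model="mistral",  # model previously pulled via `ollama pull mistral`
    messages=[{"role": "user", "content": "Explain Docker in one sentence."}],
)

elapsed = time.perf_counter() - start
print(response["message"]["content"])
print(f"Response took {elapsed:.1f} s")
```

Timing the call with `time.perf_counter` is just one simple way to reproduce the rough latency comparison mentioned above; actual numbers will depend heavily on the CPU and memory given to the container.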
