Ollama & running Large Language Models locally (workshop 1/3, Appril Festival)
In addition to tinkering in the OBA Makerspace again, you can join a workshop series on running LLMs locally with the Ollama app, which we are organizing to celebrate the [Appril Festival](https://apprilfestival.com/).
Running a Large Language Model locally has many advantages: besides not paying for a pro plan or API costs, you also keep your chat data private. Thanks to recent developments ('quantization') we now have models like [Mixtral 8x7B](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ) that run on your laptop! There are also many tools that help you run, create, and share LLMs locally from the command line, such as the open-source app [Ollama](https://ollama.com/download).
In this series of workshops we want to help you set up Ollama and run your local LLMs. Ollama supports a range of models, such as Mistral, Llama 2 and Phi. Every workshop consists of an introduction followed by challenges at different levels, to help you get started and broaden your knowledge. That way the workshops are interesting for both beginners and intermediate participants. The idea is that participants also help and learn from each other. The evenings run from[masked]h.
**Workshop 1/3 (April 17th): getting started**
* Introduction
* Setting up Ollama
* Selecting models
* Running Ollama with Python or JavaScript (a minimal sketch follows this list)
* Show & Tell what you want to do
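To give a taste of that last point: once Ollama is installed and a model has been pulled (e.g. `ollama pull mistral`), a first chat from Python could look like the minimal sketch below, using the [ollama-python](https://github.com/ollama/ollama-python) library. The model name is just an example; any model you have pulled locally works.

```python
# Minimal sketch: chat with a local model via the ollama-python library.
# Assumes the Ollama app is running and 'mistral' has been pulled.
import ollama

response = ollama.chat(
    model='mistral',  # swap in any model you have pulled locally
    messages=[{'role': 'user', 'content': 'Explain quantization in one sentence.'}],
)
print(response['message']['content'])
```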
*For beginners:*
We assume you know how to work with the command line (terminal) on your laptop. Please install Ollama beforehand. You can experiment locally with models and prompting.
*For intermediate:*
We assume you're familiar with GitHub and have basic knowledge of [Python](https://github.com/ollama/ollama-python) and Jupyter. An example challenge is developing a web interface (also part of the second workshop); a rough sketch follows below.
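To sketch what such a web-interface challenge could look like (this is one possible approach, not workshop material; Flask is just an example framework, installed with `pip install flask ollama`), a tiny server could forward a chat message to a local model and return the reply:

```python
# Minimal sketch of a chat web interface, assuming Ollama is running
# locally and the 'mistral' model has been pulled.
from flask import Flask, request, jsonify
import ollama

app = Flask(__name__)

@app.route('/chat', methods=['POST'])
def chat():
    # Expect JSON like {"message": "Hello"} from the front end.
    user_message = request.json.get('message', '')
    response = ollama.chat(
        model='mistral',
        messages=[{'role': 'user', 'content': user_message}],
    )
    return jsonify({'reply': response['message']['content']})

if __name__ == '__main__':
    app.run(port=5000)
```

You could test it with `curl -X POST localhost:5000/chat -H "Content-Type: application/json" -d '{"message": "Hi"}'`.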
*More advanced* challenges (you should be experienced in Python): develop a personalised assistant, or run Ollama on a Raspberry Pi. You could also use a webcam to take photos and have the LLM describe the images with LLaVA; a sketch follows below.
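A rough sketch of the webcam idea, assuming OpenCV is installed (`pip install opencv-python`) and the `llava` vision model has been pulled in Ollama:

```python
# Sketch: grab one webcam frame and have LLaVA describe it.
import cv2
import ollama

# Capture a single frame from the default webcam (device 0).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError('Could not read from the webcam')

# Save the frame so we can hand the model a file path.
cv2.imwrite('snapshot.jpg', frame)

# Ask the LLaVA vision model to describe the photo.
response = ollama.chat(
    model='llava',
    messages=[{'role': 'user', 'content': 'Describe this photo.',
               'images': ['snapshot.jpg']}],
)
print(response['message']['content'])
```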
**Workshop 2/3 (May 15th): making the most of Ollama on a variety of devices**
Beginners: depending on the knowledge and interests you shared in the first workshop, we'll help you build on them.
* Using and modifying the Python code (provided with the model) to adapt it to your specific use case (see the sketch after this list).
* New users can start with basics.
* Show & tell
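One small example of such an adaptation (a sketch under the same assumptions as before, i.e. Ollama running with `mistral` pulled): adding a system message and sampling options to steer the model toward your use case.

```python
# Sketch: adapt a generic chat call to a specific use case.
import ollama

messages = [
    # The system message fixes the assistant's role and tone.
    {'role': 'system', 'content': 'You are a helpful Makerspace assistant. Answer briefly.'},
    {'role': 'user', 'content': 'How do I pull a new model with Ollama?'},
]
response = ollama.chat(
    model='mistral',
    messages=messages,
    options={'temperature': 0.2},  # lower temperature = more deterministic answers
)
print(response['message']['content'])
```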
**Workshop 3/3 (June 19th): fine-tuning your LLM**
Retrieval-Augmented Generation (RAG): working with a predefined database of questions and answers that the model consults (provided you create such a database beforehand); a sketch follows below. More advanced participants can also try to fine-tune the model locally on their own way of communicating, for example by training it on their emails.
* New users can start with basics.
* Show and tell
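As a rough illustration of the RAG idea (a sketch with toy data; `nomic-embed-text` is one embedding model Ollama can pull, used here as an example): embed your stored questions, retrieve the closest match, and pass it to the model as context.

```python
# Sketch: minimal RAG over a tiny Q&A "database".
import ollama

# Toy data; in the workshop you would prepare your own database beforehand.
qa_pairs = [
    ('Which models does Ollama support?',
     'Ollama supports Mistral, Llama 2, Phi and more.'),
    ('Can I run a model on a laptop?',
     'Yes, quantized models such as Mixtral 8x7B run on a laptop.'),
]

def embed(text):
    # Assumes the embedding model was pulled: `ollama pull nomic-embed-text`.
    return ollama.embeddings(model='nomic-embed-text', prompt=text)['embedding']

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Embed the stored questions once, up front.
index = [(embed(q), q, a) for q, a in qa_pairs]

question = 'Do local models fit on my laptop?'
q_vec = embed(question)

# Retrieval: find the most similar stored pair.
_, best_q, best_a = max(index, key=lambda item: cosine(q_vec, item[0]))

# Augmentation + generation: hand the retrieved pair to the model as context.
prompt = (f'Use this context to answer.\nQ: {best_q}\nA: {best_a}\n\n'
          f'Question: {question}')
response = ollama.chat(model='mistral',
                       messages=[{'role': 'user', 'content': prompt}])
print(response['message']['content'])
```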
Follow us on Twitter: https://twitter.com/sensemakersa
or join us on Slack by sending us your email address.