Here's a guide to integrating the Ollama LLM runtime with Moodle 5.x on a MoodleBox, a neat way to use AI completely offline!
First, log into the MoodleBox via SSH and start a sudo session:
sudo -Es
Step 1 – Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
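Once the install script finishes, it's worth a quick sanity check that the binary is on the PATH and the service is running (on systemd-based systems like the MoodleBox, the install script registers a service named `ollama`):

```shell
# Print the installed Ollama version
ollama --version

# Confirm the ollama service was registered and is active
systemctl status ollama --no-pager
```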
On the MoodleBox, we use the gemma3:1b model (815 MB).
This model is text-only: it doesn't support image input.
Let's test it:
ollama run gemma3:1b
You can now type a prompt of your choice, e.g. "Explain why generative AI is not reliable?" :-)
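Besides the interactive CLI, Ollama exposes a local REST API on port 11434, which is what Moodle will call later. You can test it directly with curl (this uses Ollama's documented `/api/generate` route; `"stream": false` makes it return a single JSON object instead of a token stream):

```shell
# Send a one-shot prompt to the local Ollama API and get one JSON response back
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma3:1b", "prompt": "Explain why generative AI is not reliable.", "stream": false}'
```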
It's also possible to use the gemma3:4b model (3.3 GB).
This model supports image input (vision), but as of Moodle 5.0.1+, that capability can't be used yet.
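If you want to try the larger model, pull it first (it downloads about 3.3 GB, so make sure the MoodleBox storage has room):

```shell
# Download the larger model, then list the models installed locally
ollama pull gemma3:4b
ollama list
```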
Step 2 – Configure MoodleBox
Open http://moodlebox.home/admin/settings.php?section=httpsecurity
- In the "cURL blocked hosts list" field, remove the following lines:
127.0.0.0/8
localhost
- In the "cURL allowed ports list" field, add 11434
- Save changes
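Before pointing Moodle at Ollama, you can confirm from the MoodleBox itself that the endpoint is reachable; Ollama's `/api/tags` route lists the installed models:

```shell
# Should return JSON listing gemma3:1b among the installed models
curl -s http://localhost:11434/api/tags
```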
Open http://moodlebox.home/admin/settings.php?section=aiprovider
Click on "Create a new provider instance"
Choose Ollama API Provider
In "Name for instance", enter a name of your choice, e.g. "MoodleBox AI"
In "API endpoint", type "http://localhost:11434"
Click on "Create instance"
Enable the new instance, then click on Settings
In the settings of each action, change the model to "Custom" and enter "gemma3:1b" in the "Custom model name" field
Click on Save changes
Open http://moodlebox.home/admin/category.php?category=ai to review all AI settings and check that the AI placements are enabled
Step 3 – Use it
- Open any page with a TinyMCE editor field, e.g. your profile at http://moodlebox.home/user/profile.php, and edit it
- Click the icon "AI generate text"
- Type a prompt, e.g. "Write a presentation of a maths teacher in 700 characters".