Image and video analyzer for Home Assistant using multimodal LLMs
🌟 Features · 📖 Resources · ⬇️ Installation · 🪲 How to report Bugs · ☕ Support
LLM Vision is a Home Assistant integration that uses multimodal LLMs to analyze images, videos, live camera feeds, and Frigate events. It can also keep track of analyzed events in a timeline, with an optional Timeline Card for your dashboard.
- Compatible with OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, LocalAI, Ollama, Open WebUI and providers with OpenAI-compatible endpoints.
- Analyzes images, video files, live camera feeds and Frigate events (see the sketch after this list)
- Remembers people, pets and objects
- Maintains a timeline of camera events, so you can display them on your dashboard as well as ask about them later
- Seamlessly updates sensors based on image input
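To give a feel for how this is used in practice, here is a minimal sketch of an automation that analyzes a camera image and sends the response as a notification. The action name `llmvision.image_analyzer`, its parameters, and the entity IDs are illustrative assumptions; check the docs for the exact schema exposed by your installed version.

```yaml
# Minimal sketch, not a drop-in automation: action and parameter names
# below are assumptions — verify them against the LLM Vision docs.
automation:
  - alias: "Describe driveway motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_motion   # hypothetical motion sensor
        to: "on"
    action:
      - service: llmvision.image_analyzer          # assumed action name
        data:
          provider: 01ABCDEF                       # config entry ID of your provider
          image_entity: camera.driveway            # hypothetical camera entity
          message: "Describe what is happening in one short sentence."
          max_tokens: 100
        response_variable: analysis
      - service: notify.mobile_app_phone           # hypothetical notify target
        data:
          message: "{{ analysis.response_text }}"  # response field name is an assumption
```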
See the website for the latest features as well as examples.
With the easy-to-use blueprint, you'll get camera event notifications intelligently summarized by AI. LLM Vision can also store events in a timeline, so you can see what happened on your dashboard.
Learn how to install the blueprint
Check the docs for detailed instructions on how to set up LLM Vision and each of the supported providers, get inspiration from examples or join the discussion on the Home Assistant Community.
For technical questions, see the discussions tab.
Tip
LLM Vision is available in the default HACS repository. You can install it directly through HACS or click the button below to open it there.
- Install LLM Vision from HACS
- Search for LLM Vision in Home Assistant Settings/Devices & services
- Select your provider
- Follow the instructions to add your AI providers.
Continue with setup here: https://llm-vision.gitbook.io/getting-started/setup/providers
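Once a provider is configured, a quick way to verify the setup is to call the analyzer from Developer Tools → Actions in YAML mode. The snippet below is only an illustration; the action name, parameters, and file path are assumptions, so copy the exact schema from the setup docs linked above.

```yaml
# Illustrative test call from Developer Tools → Actions (YAML mode).
# Action and parameter names are assumptions — see the setup docs for the real schema.
action: llmvision.image_analyzer
data:
  provider: 01ABCDEF                # config entry ID of the provider you added
  image_file: /config/www/test.jpg  # hypothetical local image path
  message: "What do you see in this image?"
  max_tokens: 100
```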
Important
Bugs: If you encounter any bugs and have followed the instructions carefully, file a bug report. Please check open issues first and include debug logs in your report. Debugging can be enabled on the integration's settings page.
Feature Requests: If you have an idea for a feature, create a feature request.
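If you prefer enabling debug logging in configuration.yaml rather than the integration's settings page, Home Assistant's standard logger integration can do it. The component path below assumes the integration's domain is `llmvision`; adjust it if your custom component folder is named differently.

```yaml
# Alternative to the UI toggle — assumes the custom component domain is "llmvision".
logger:
  default: warning
  logs:
    custom_components.llmvision: debug
```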
You can support this project by starring this GitHub repository. If you want, you can also buy me a coffee here: