valentinfrlch/ha-llmvision


Image and video analyzer for Home Assistant using multimodal LLMs

🌟 Features · 📖 Resources · ⬇️ Installation · 🪲 How to report Bugs · ☕ Support

Visit Website →



LLM Vision is a Home Assistant integration that uses multimodal LLMs to analyze images, videos, live camera feeds, and Frigate events. It can also keep track of analyzed events in a timeline, with an optional Timeline Card for your dashboard.

Features

  • Compatible with OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, LocalAI, Ollama, Open WebUI, and providers with OpenAI-compatible endpoints
  • Analyzes images, video files, live camera feeds, and Frigate events
  • Remembers people, pets, and objects
  • Maintains a timeline of camera events, so you can display them on your dashboard as well as ask about them later
  • Seamlessly updates sensors based on image input
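Under the hood, the integration exposes its analysis as Home Assistant actions. A minimal sketch of calling the image analyzer manually (the field names and provider value below are illustrative and may differ between versions; check the LLM Vision docs for the exact schema):

```yaml
# Sketch only: field names and values are assumptions based on the
# integration's documented behavior; verify against the LLM Vision docs.
action: llmvision.image_analyzer
data:
  provider: YOUR_PROVIDER_CONFIG_ENTRY  # placeholder
  message: "Describe what is happening in this image."
  image_entity:
    - camera.front_door  # placeholder entity ID
  max_tokens: 100
```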

See the website for the latest features and examples.


Blueprint

With the easy-to-use blueprint, you'll get camera event notifications intelligently summarized by AI. LLM Vision can also store events in a timeline, so you can see what happened on your dashboard.

Learn how to install the blueprint

Resources

Check the docs for detailed instructions on how to set up LLM Vision and each of the supported providers, get inspiration from examples or join the discussion on the Home Assistant Community.


For technical questions, see the Discussions tab.

Installation

Tip

LLM Vision is available in the default HACS repository. You can install it directly through HACS or click the button below to open it there.


  1. Install LLM Vision from HACS.
  2. In Home Assistant, search for LLM Vision under Settings → Devices & services.
  3. Select your provider.
  4. Follow the instructions to add your AI provider.

Continue with setup here: https://llm-vision.gitbook.io/getting-started/setup/providers
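Once a provider is configured, the analyzer action can be wired into your own automations (the blueprint above does this for you). A minimal sketch, assuming placeholder entity IDs and an illustrative response shape; verify field names against the LLM Vision docs:

```yaml
# Sketch only: entity IDs are placeholders, and the action schema and
# response structure are assumptions; see the LLM Vision docs.
automation:
  - alias: "Describe motion at the front door"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door_motion  # placeholder
        to: "on"
    action:
      # Ask the configured LLM to describe the current camera image
      - action: llmvision.image_analyzer
        data:
          message: "Briefly describe what triggered the motion sensor."
          image_entity:
            - camera.front_door  # placeholder
        response_variable: analysis
      # Forward the generated description as a notification
      - action: notify.mobile_app_my_phone  # placeholder notify target
        data:
          message: "{{ analysis.response_text }}"  # response key is an assumption
```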

How to report a bug or request a feature

Important

Bugs: If you encounter a bug and have followed the instructions carefully, file a bug report. Please check open issues first and include debug logs in your report; debugging can be enabled on the integration's settings page.

Feature requests: If you have an idea for a feature, create a feature request.

Support

You can support this project by starring this GitHub repository. If you want, you can also buy me a coffee here: