Congrats to the team at Flower Labs on launching Flower Intelligence today! Flower Intelligence is the best way to build privacy-respecting AI features that run locally on device.
Flower Intelligence: on-device AI privacy and speed with cloud-based AI scale and simplicity.

Launching today as a preview release: Flower Intelligence – the first open-source AI platform that enables on-device AI apps which can also automatically hand off, if needed, to a purpose-built private cloud when the user allows it. Flower Intelligence lets developers build AI experiences with a combination of user privacy, inference speed and model size that was previously impossible to achieve.

Mozilla Thunderbird is an early adopter of Flower Intelligence, using it to launch its upcoming Thunderbird Assist feature. Ryan Sipes, Managing Director for Mozilla Thunderbird, explains why: "Our 20 million users expect data privacy from every feature we build. Flower Intelligence allows us to ship on-device AI that works locally with the most sensitive data."

The world needs Flower Intelligence because AI today largely runs in the cloud. Cloud deployment makes powerful AI models easy to ship, but it rules out the use of private and sensitive data that should never be transmitted to a public cloud, and a cloud-only AI app simply stops working when the network becomes slow or unavailable. On-device AI offers an alternative by running models locally on a device, or even in the browser. But this doesn't work for the largest models, and is only an option for the latest and most expensive phones and laptops; an approach that relies solely on on-device AI leaves many users behind.

Flower Intelligence allows developers to build AI apps in a brand new way. With the open-source platform, the AI model runs locally whenever possible. Speed and privacy are prioritized – and the AI app keeps working even if the network fails.
Furthermore, whenever extra power is needed – for example when the user's device is older, or a larger AI model is required – Flower Confidential Remote Compute can step in as a seamless private extension of the device, without compromising privacy, security or performance. This happens without any additional work by the developer, and only if the user allows it. The Flower Intelligence hybrid approach delivers the best of both worlds: local-first AI that remains powerful, private and compatible with all devices.

The preview of Flower Intelligence offers a variety of models, including those from Meta (Llama), Mistral AI and DeepSeek AI. The initial SDKs cover Swift and TypeScript. We have started with AI inference only, via on-device execution and Flower Confidential Remote Compute, but the future roadmap also includes the ability for AI models to safely incorporate sensitive local data.

For the full picture, join us live for the full launch of Flower Intelligence at the Flower AI Summit 2025 on March 26. Sign-up links for the event are below. Flower Intelligence is available now; further details, and the application for early access to Flower Confidential Remote Compute, are available via links in the comments.
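The local-first handoff described above boils down to a simple decision rule: run on device whenever the hardware can handle the model, and fall back to confidential remote compute only with user consent. The sketch below, in TypeScript (one of the launch SDK languages), illustrates that rule; all names here are hypothetical stand-ins and do not come from the Flower Intelligence API.

```typescript
// Hypothetical sketch of the local-first handoff pattern described above.
// These types and names are illustrative only, not the Flower Intelligence SDK.

type Engine = "on-device" | "confidential-remote";

interface Device {
  supportsModel: boolean; // can this hardware run the requested model locally?
  networkUp: boolean;     // is a network connection currently available?
}

interface Consent {
  allowRemote: boolean;   // has the user opted in to remote handoff?
}

// Decide where inference runs: local whenever possible, so privacy and
// speed come first; otherwise hand off to confidential remote compute,
// but only if the user has allowed it and the network is reachable.
function chooseEngine(device: Device, consent: Consent): Engine | null {
  if (device.supportsModel) {
    return "on-device";
  }
  if (consent.allowRemote && device.networkUp) {
    return "confidential-remote";
  }
  return null; // no eligible engine: the app degrades gracefully
}
```

Note that the developer never branches on this themselves in the platform as described; the point of the sketch is only to make the consent-gated fallback explicit.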