"OpenAI is an Azure and NVIDIA wrapper" "Venture Capital is a wrapper over people who actually have money" "NVIDIA is a TSMC wrapper" "Netflix is a AWS wrapper" Mic drop by Perplexity Co-founder & CEO Aravind Srinivas 🎤 Basically, we're all just wrappers at different levels. Interdependency is the rule, not the exception. We're now seeing startups breaking ARR growth records that could easily be categorized as wrappers on the LLM application layer. It's all about value. Kyle Poyar recently highlighted how Yaakov Carno tried the most viral AI products he could get his hands on. Here are the surprising patterns he found (summarized by Kyle): 1️⃣ Their AI doesn't feel like a black box. Pro-tips from the best: - Show step-by-step visibility into AI processes - Let users ask, “Why did AI do that?” - Use visual explanations to build trust. 2️⃣ Users don’t need better AI—they need better ways to talk to it. Pro-tips from the best: - Offer pre-built prompt templates to guide users. - Provide multiple interaction modes (guided, manual, hybrid). - Let AI suggest better inputs ("enhance prompt") before executing an action. 3️⃣ The AI works with you, not just for you. Pro-tips from the best: - Design AI tools to be interactive, not just output-driven. - Provide different modes for different types of collaboration. - Let users refine and iterate on AI results easily. 4️⃣ Let users see (& edit) the outcome before it's irreversible. Pro-tips from the best: - Allow users to test AI features before full commitment (many let you use it without even creating an account). - Provide preview or undo options before executing AI changes. - Offer exploratory onboarding experiences to build trust. 5️⃣ The AI weaves into your workflow, it doesn't interrupt it. Pro-tips from the best: - Provide simple accept/reject mechanisms for AI suggestions. - Design seamless transitions between AI interactions. - Prioritize the user’s context to avoid workflow disruptions. ➡️ The TL;DR: Having "AI" isn’t the differentiator anymore - great UX is. As noted by e.g. Brian Halligan (co-founder of HubSpot), a lot of folks have been negative on the application layer of AI. We've heard for the past two years how the "LLM wrapper applications" will get disrupted by the LLM's. I think you could argue that back in 2006, when e.g. HubSpot was founded and SaaS was exploding that all SaaS startups were "database wrapper applications." The industry blossomed by adding data, workflow, great UX/UI, integrations and deep domain expertise on top of those databases. I think LLM application layer companies will thrive by adding data, workflow, great UX/UI, integrations and deep domain expertise on top of the LLMs. What do you think? #artificialintelligence #startups
Question: which layer has the greatest chance of inventing AGI? Would you bet your money on chip manufacturer TSMC? Or on a prompt engineering startup? AGI is a research problem and requires a fundamental breakthrough; it is not an engineering problem. You can never produce AGI just by applying an existing AI in a new way. Who does fundamental AI research? First principles.
That’s like saying an airplane is just a wrapper for some aluminium, titanium and oil…
Things I've repeatedly said while building reconfigured:
1. The AI side is trivial compared to the UX side in building our application.
2. This is going to be a many-to-many integration game.
3. The "agent ecosystem" API game will be more dynamic than the SaaS API game.
4. To be able to participate in this ecosystem we need to provide unique value; in our case, we provide data that no other system can provide.
5. In workflow smoothness we're competing against pen and paper. (We're a journaling app for your work thoughts.)
Sure, a piece of paper is just a wrapper around the chocolate bar, but you can't (and for very sound economic reasons shouldn't) double-count the value each layer provides when you're estimating the full value of the Mars bar. I like the pushback around LLM wrappers, but his examples here (Netflix is an AWS wrapper, OpenAI is an Azure wrapper) are simply not thoughtful. Netflix is to AWS what a finely selected chocolate box is to cocoa powder. And OpenAI is to NVIDIA what 1,000 uni students are to the brick and mortar that's housing them.
Some of his takes are funny soundbites (for business/tech nerds anyway), but not every application on a platform is a wrapper. I think the wrapper designation is helpful for thinking about services with limited differentiation or additional value beyond the core technology used (e.g. an LLM), where the wrapper product is likely to be replicated reasonably easily by the core technology provider. Netflix uses AWS, but investing tens of billions per year in content is differentiated from the services of the cloud provider, and it is not the next reasonable step for AWS or Azure or GCP to invest tens of billions of dollars in content. It is very likely that LLM application layer companies can thrive by adding data, workflow, great UX/UI, integrations and deep domain expertise on top of the LLMs, as you write. But also by building brand, data privacy/security/audit support, network effects and more. And as they do all of this, they become less like wrappers and more like normal application companies.
This generation of engineers is being trained primarily to use wrappers and APIs rather than developing a deep understanding of coding from scratch. Such excessive spoon-feeding fosters a lazy workforce, gradually eroding creativity and innovation. As a result, future generations may become less competitive, lacking original ideas and the ability to defend novelty in their work.
Whoever owns the core of the value network has the most control and captures the maximum cumulative profits, and the stronger the network effect, the higher the vendor lock-in. Think Microsoft for PCs with the OS as the core, or Android/iOS with mobile apps. The same applies to AI, so who owns the core matters a great deal. Whether or not to be interdependent is a choice each player has to make based on various internal and external factors.
I partly disagree. Current LLMs make the work rigid, above all because of how the machine learning works. Try to think differently from the last sequence or the steps you used to build a thought, an innovation, or an approach: you'll see the AI imposing errors on you, mixing in approaches you no longer care about, and "hooked" on the last approach you abandoned. Still, working with it is the minimally intelligent thing to do. If it is well "supervised" it will help you a lot, but you will "have to correct it" for a while, and patience is required, because this habit of learning by repetition (it doesn't reason; it just obsessively assembles tokens) can be a torment for someone who does reason, and it can deform the new line of thought you've adopted, even adding "elements you no longer care about" to the new approach you decided to explore. Use AI in a deep and complex way and see whether this doesn't happen; when questioned, it will even "confess" that it gets confused. In other words, there is still a lot of room to evolve.
Good recent Y Combinator interview with Aravind: https://youtu.be/SP7Ua8FKZN4?si=5KFKQmoa0wDknTb0 He is good at pushing buttons and starting discussions.