AdalFlow

Software Development

Mountain View, California 3,194 followers

AdalFlow: The library to build and auto-optimize LLM applications, from chatbots and RAG to agents.

About us

To help developers close the performance gap between a demo and a production-grade LLM application.

Industry
Software Development
Company size
2-10 employees
Headquarters
Mountain View, California
Type
Privately Held
Founded
2024


Updates

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    I feel trapped. I created the AdalFlow library and worked on it full-time for eight months without funding. I made it the best-performing library for auto-prompt optimization and reminded the community how important it is to keep your LLM app model-agnostic. Yet, I wasn’t directly working on my product as a founder. My goal was never to build a managed service on top of AdalFlow. My startup mentor kept asking me, “Why don’t you just build your product?” Open-source work doesn’t always feel like progress: as I improve the source code, there is a ton of work to update the documentation, and the community keeps coming to me with requests for support. It’s like your baby is crying for you, but you have to stay strong and focus on the long-term goal. I have to build my product MVP, get the company into a great financial state, and be able to support AdalFlow better in the future. It is time for the community to step up and help each other out; that should be the meaning of open-source and community. While I’m testing my library in my own production, I am also pushing this community to become more independent of me. If you are passionate about open-source and prompt optimization and believe you can help, don’t hesitate to reach out. AdalFlow is set up to be the PyTorch for LLM apps. Like and repost so that we can find the best future lead for AdalFlow!

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    Claude needs to solve three problems before it can take all IC (individual contributor) jobs in 3 years. 1. Claude as assistant is already quite obvious, but it is only good at front-end tasks and falls short on the back end, more complicated algorithms, and larger-scope tasks. 2. To go from assistant to collaborator: how can they solve the context-window limit so that Claude can develop a deeper understanding of any code base, beyond simple RAG? 3. To reach Claude as pioneer: will LLMs be capable of creating something totally new? That looks like positive hallucination to me. How can we fight bad hallucination on one side while keeping the positive hallucination on the other? What are your thoughts and experience with AI for coding? #artificialintelligence #machinelearning #llms #agent

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    Here are 5 learnings from the first “Hack with Li” event: People were highly engaged in the group co-working session, and the learnings from the discussion are valuable. 1. 80% of developers are building agents. 2. The system design of an agentic system is highly challenging, and it is especially difficult to create a unified design. 3. Once you start building, data, evaluation, and prompt engineering become additional challenges on the way to production. 4. Hallucination detection is a key challenge in making a system highly trustworthy. 5. AdalFlow can speed up the building process after the system design and help create high-quality synthetic training data for model fine-tuning. Nothing can compare with an in-person community. If you are interested in the next session, links are in the comments. #adalflow #artificialintelligence #machinelearning #llms

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    Building LLM app demos that achieve 70% accuracy is easy, but reaching 90-100% is the real challenge—and the churn rate shows the difference. An LLM app is a post-training, pre-production process, where developers often rely on manual prompting to bridge the gap between human language and LLM language to achieve the best performance. Yet, we observe the following: - Many developers build the app, but only around 10-20% have actually evaluated their apps. - More than 90% of developers still rely on manual prompting, including founders of well-funded startups, and very few teams are using auto-optimization so far. In the future, LLM apps that cannot be auto-optimized will be out-competed by those that can. There are currently only two libraries capable of auto-optimizing any LLM workflow. - AdalFlow: Started in July 2024. It gives developers full control over prompting and what to optimize. Though the newest, with LLM-AutoDiff it is currently the most effective in performance. - DSPy: Started in 2022, the best-known one, with 22k GitHub stars. It includes a collection of research from Stanford’s NLP lab, featuring techniques like bootstrapped few-shot demonstrations as well as COPRO and MIPRO. What challenges are you facing in productionization? #adalflow #artificialintelligence #machinelearning #llms
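At its simplest, the auto-optimization the post describes is a search over prompts scored against a labeled dev set. Below is a minimal, hypothetical sketch of that idea; `fake_llm` is a stand-in for a real model call, and real optimizers (AdalFlow's LLM-AutoDiff, DSPy's MIPRO) propose new candidates with an LLM rather than enumerating a fixed list.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a model call: "answers" correctly only when the
    # prompt asks for a one-word label and contains the input text.
    return "positive" if "one word" in prompt and "great" in prompt else "..."

def score(template: str, dev_set: list[tuple[str, str]]) -> float:
    # Fraction of dev examples the prompted model labels correctly.
    hits = sum(fake_llm(template.format(text=t)) == y for t, y in dev_set)
    return hits / len(dev_set)

def optimize(candidates: list[str], dev_set: list[tuple[str, str]]) -> str:
    # Greedy selection: evaluate every candidate prompt, keep the best scorer.
    return max(candidates, key=lambda c: score(c, dev_set))

candidates = [
    "Classify the sentiment: {text}",
    "Classify the sentiment in one word (positive/negative): {text}",
]
dev_set = [("This product is great!", "positive")]
best = optimize(candidates, dev_set)
```

The key property is that prompt quality is measured, not eyeballed—the same evaluation loop that replaces manual prompting also tells you when a new model requires a different prompt.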

  • AdalFlow reposted this

    View profile for Zach Wilson

    Founder of DataExpert.io | ADHD | 1m Followers | Dogs

    The future of data engineering is UNSTRUCTURED! Can you take data from video, audio, and document files and:
    - generate transcripts from the videos and audio files, clean up the transcripts, and extract the entities and facts into knowledge graphs
    - smartly chunk your transcripts into vectors for fast and relevant lookups in vector databases
    - build evaluation criteria with frameworks such as AdalFlow to test that your lookups are not underperforming as you add more data
    - merge structured and unstructured data to build extremely rich contexts for RAG systems
    Data engineering will not be confined to rows and columns for much longer! Get ready to consume and extract everything!
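The chunking step above can be sketched in a few lines. This is a baseline fixed-window chunker with overlap (a hypothetical helper, not from any library the post names); production pipelines usually chunk on sentence or semantic boundaries before embedding.

```python
def chunk_transcript(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    # Split a transcript into overlapping word windows for embedding.
    words = text.split()
    step = size - overlap  # slide the window, keeping `overlap` words of context
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

chunks = chunk_transcript("word " * 100, size=50, overlap=10)
```

The overlap matters: a fact split across a chunk boundary would otherwise never appear whole in any single vector.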

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    Here is my take after using OpenAI Deep Research and various open-source versions: 1. Deep research can't produce high-quality results without domain-expert instructions. There's a noticeable quality gap between simply providing a prompt and explicitly instructing the agent to use credible sources instead of synthesizing each one blindly. 2. Deep research is likely not just a simple tool-calling agent. The tool-call instructions are likely fine-tuned into the model, allowing it to parallelize tasks and execute actions efficiently. Toolformer is a great reference if you want to achieve a similar approach. 3. The open-source versions are not there yet, as they rely solely on external agent steps. However, they are a great way for us to learn about the process. What is your take and experience? 👉 Links to open-source ones in the comments. #adalflow #artificialintelligence #machinelearning #llms

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    There are four ways to build LLM apps:
    1. Use provider APIs' advanced features, such as multi-message handling, JSON format, and tools. (P1)
    2. Avoid advanced features and rely solely on raw prompting and string processing to implement each feature manually. See Jina’s open Deep Search agent as an example: https://lnkd.in/gnHr_x2q
    3. Similar to (2), but use a library to better manage prompting, structured output, and tools, leveraging auto-prompting out of the box with libraries like AdalFlow (P2). DSPy, however, does not provide full control over the prompts.
    4. Similar to (1), but use a third-party library such as LangChain or LlamaIndex; the underlying implementation, however, is somewhat of a black box.
    👉 Which one do you prefer for your LLM app? Drop your number in the comments. ____ I teach my audience how to build LLM apps. Follow for more tips. #artificialintelligence #machinelearning #llms
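Approach (2) above—no provider features, just raw prompting and string processing—can be sketched as follows. `fake_llm` is a stand-in for a real completion call; the point is that the format contract lives entirely in the prompt and the parser, with no JSON mode or tool schema from the provider.

```python
PROMPT = (
    "Answer in exactly this format:\n"
    "ANSWER: <answer>\n"
    "CONFIDENCE: <low|medium|high>\n\n"
    "Question: {question}"
)

def fake_llm(prompt: str) -> str:
    # Stand-in for a raw completion call returning plain text.
    return "ANSWER: Paris\nCONFIDENCE: high"

def parse(raw: str) -> dict:
    # Manual string processing instead of provider-side structured output.
    fields = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return fields

result = parse(fake_llm(PROMPT.format(question="Capital of France?")))
```

The trade-off the post describes is visible here: you get full control and transparency, but every feature (retries on malformed output, schema validation, tools) must be hand-built.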

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    Developers are overusing agents. The answer to any problem for current LLM app engineers is agents plus RAG. Agents are cool, but when you achieve agents only via prompting, they are slow and underperform. For instance, in many papers, on the same dataset such as HotpotQA, a predefined workflow like multi-hop RAG performs much better than ReAct agentic RAG while being much faster; the agent's accuracy can be 20% lower. When you want your chatbot to handle coding, question answering, and other tasks that can be done with internal knowledge, you don't need an agent. You just need structured output, and to program your pipeline to multi-task and make the best judgment in different cases. If you do want to use agents, sequential agents like ReAct are not the answer—they're too slow. You need to go for a parallel agent. (Paper linked in comments.) Devin itself has implemented an agent that can plan multiple steps at once, build a DAG, and handle parallel tasks; I haven't seen any libraries providing this yet. We also have to fine-tune our LLMs to use tools more efficiently without overwhelming the context. ____ Post from the AdalFlow library. Like and repost to help the community build better LLM apps. #llms #artificialintelligence #agents
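The "structured output instead of an agent" pattern above can be sketched as a single classification call that routes each request to a predefined handler, rather than a ReAct loop deciding step by step. `fake_llm_route` is a hypothetical stand-in for an LLM asked to emit its intent decision as JSON.

```python
import json

# Predefined handlers for each task the chatbot supports.
HANDLERS = {
    "code": lambda q: f"[code helper] {q}",
    "qa": lambda q: f"[answer] {q}",
    "chat": lambda q: f"[chat] {q}",
}

def fake_llm_route(question: str) -> str:
    # Stand-in for one LLM call that classifies intent as structured JSON.
    intent = "code" if "function" in question else "qa"
    return json.dumps({"intent": intent})

def handle(question: str) -> str:
    # One classification call, then a direct dispatch—no agent loop.
    intent = json.loads(fake_llm_route(question))["intent"]
    return HANDLERS.get(intent, HANDLERS["chat"])(question)

reply = handle("Write a function to reverse a list")
```

Compared with a sequential agent, this costs exactly one routing call per request and keeps every downstream path a predictable, testable workflow.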

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    Classical ML is not going away. Even LLMs need it. I've seen data privacy researchers use classical models to detect whether LLMs leak personal information in their outputs. Similarly, startups have used classical models to detect hallucinations, finding them more accurate, faster, and cheaper than using LLMs. A possible workflow: build your LLM task pipeline using prompt-optimized functional components with the AdalFlow library, such as a classifier or a router. Then, use this pipeline to auto-label more training data, eventually replacing these components with classical ML models for better speed and cost efficiency. Have you combined any classical models with your LLM applications? #artificialintelligence #adalflow #machinelearning #llms
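The distillation workflow above can be sketched end to end: LLM-labeled examples train a tiny classical model, which then replaces the LLM component at inference time. This is a hypothetical toy—`llm_labeled` stands in for the output of an LLM auto-labeling pipeline, and the "classical model" is a hand-rolled word-count classifier rather than a real sklearn model.

```python
from collections import Counter, defaultdict

# Stand-in for data auto-labeled by an LLM pipeline.
llm_labeled = [
    ("the capital of france is paris", "factual"),
    ("einstein was born in 1979 in rome", "hallucinated"),
    ("water boils at 100 celsius", "factual"),
    ("the moon is made of cheese", "hallucinated"),
]

def train(examples):
    # Per-label word counts: the entire "classical model".
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    # Score each label by overlapping word counts; highest wins.
    def label_score(label):
        return sum(counts[label][w] for w in text.split())
    return max(counts, key=label_score)

model = train(llm_labeled)
```

Once trained, `predict` runs in microseconds with no API cost—the speed/cost win the post describes, at the price of needing enough LLM-labeled data to train on.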

  • AdalFlow reposted this

    View profile for Li Yin

    [Hiring] AdalFlow author | LLM&CV researcher | Founder

    One year ago, I had 9K followers, and today, I’m at 50K. This thriving online community, along with the in-person community I’ve built, is the best birthday gift I could ask for. The biggest challenge in growing a network isn’t just writing content—it’s enduring the pain when the pieces you worked hardest on perform the worst in terms of engagement. What I’ve learned is that effort doesn’t always equal popularity. The GitHub repo I worked hardest on, which took me 10 months to build, has the same number of stars as one I built in just four hours. I’ve learned not just to work hard, but to work smart. On one hand, we need to appeal to our audience, and on the other, we need to stay true to ourselves. It’s a lie to say that I create content only so others can learn from me or feel inspired—it’s more than that. For me, it’s about building a community and creating products that serve that community. I built AdalFlow to eliminate the need for manual prompt engineering for people like me—to provide a better tool for building RAG and agent-based systems in production. AdalFlow is fully open-source. I have no doubt that it’s one of the best options for LLM developers, but I also understand that making it a popular library requires much more—it takes a whole community! Now, I’m working on building a virtual expert-level AI engineer to help teams develop production-grade LLM applications faster and to help developers learn AI more effectively. The best value you can get from me isn’t just from reading my posts—it’s from reaching out and engaging in discussions. I genuinely want to help if I can, and by doing so, you’re also helping me build a better product. To summarize, posting isn’t just about gaining more followers. Posting can be painful. The best way to do it is from the heart—providing value to the audience and receiving value back through their feedback.

