A really practical guide to LLMs is now available! Learn what to do and what to avoid, from prompting and RAG to fine-tuning and evaluation. Definitely worth reading. https://lnkd.in/dCC6uKjs
Antonio Gulli’s Post
More Relevant Posts
-
The Graph RAG approach tackles reasoning and conclusion-drawing issues, as well as the high cost of training or fine-tuning advanced LLMs. We discuss how Graph RAG works and what it's really good at in our latest article! 👇 https://lnkd.in/eQcajvHJ
What is Graph RAG approach?
turingpost.com
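For a concrete picture of the pattern, here is a minimal, hypothetical sketch of the general Graph RAG flow: extract entities from the question, pull a small subgraph of related facts, and ground the LLM's answer in those triples. The toy graph, extract_entities(), and call_llm() are placeholders, not the article's implementation.

```python
# Minimal Graph RAG sketch (illustrative only; not the article's implementation).
# The toy graph, extract_entities(), and call_llm() below are placeholders.
from typing import Dict, List, Set, Tuple

# Toy knowledge graph: subject -> list of (relation, object) triples.
GRAPH: Dict[str, List[Tuple[str, str]]] = {
    "Graph RAG": [("extends", "RAG"), ("retrieves from", "a knowledge graph")],
    "RAG": [("grounds", "LLM answers"), ("reduces", "hallucinations")],
}

def extract_entities(question: str) -> Set[str]:
    """Naive entity linking: keep graph nodes that are mentioned in the question."""
    return {node for node in GRAPH if node.lower() in question.lower()}

def retrieve_subgraph(entities: Set[str], hops: int = 2) -> List[str]:
    """Collect triples reachable within `hops` of the seed entities."""
    frontier, triples = set(entities), []
    for _ in range(hops):
        next_frontier = set()
        for subject in frontier:
            for relation, obj in GRAPH.get(subject, []):
                triples.append(f"({subject}) -[{relation}]-> ({obj})")
                next_frontier.add(obj)
        frontier = next_frontier
    return list(dict.fromkeys(triples))  # de-duplicate while keeping order

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client you actually use."""
    return f"[answer grounded in a prompt of {len(prompt)} characters]"

question = "How does Graph RAG relate to plain RAG?"
facts = "\n".join(retrieve_subgraph(extract_entities(question)))
print(call_llm(f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"))
```

The only point of the sketch is that the retrieval step walks graph structure instead of (or in addition to) ranking flat text chunks.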
-
Very useful paper to read when it comes to putting prompts and questions together for your LLMs: https://lnkd.in/gXrRMduK
2312.16171v1.pdf
arxiv.org
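As a loose illustration of the kind of guidance such principle-based prompting papers catalogue (explicit sections, a stated audience, a few worked examples), here is a small sketch. The section markers and the build_prompt() helper are my own placeholders, not the paper's exact principles.

```python
# Loosely inspired by principle-style prompting guidance; the section names and
# the build_prompt() helper are illustrative placeholders, not the paper's spec.
from typing import List, Tuple

def build_prompt(task: str, audience: str,
                 examples: List[Tuple[str, str]], query: str) -> str:
    example_block = "\n".join(f"Input: {q}\nOutput: {a}" for q, a in examples)
    return (
        "### Instruction ###\n"
        f"{task} Write for {audience}.\n\n"
        "### Examples ###\n"
        f"{example_block}\n\n"
        "### Question ###\n"
        f"{query}"
    )

print(build_prompt(
    task="Explain the term in one sentence.",
    audience="a software engineer new to ML",
    examples=[("overfitting",
               "Overfitting is when a model memorizes training data and fails on new data.")],
    query="retrieval-augmented generation",
))
```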
-
Of late, there has been quite a lot of interest in extending and improving the context handling of LLMs. One good example is the Infini-attention paper from Google in early April. While that work focused more on tweaking the model architecture to improve the context, Microsoft has come up with a data-driven solution to tackle long context. This week, we have explained the idea of the paper in this YouTube video: YT Video: https://lnkd.in/e6MsatA6 Paper title: Make Your LLM Fully Utilize the Context Paper link: https://lnkd.in/e9rmHg6y Blog: https://lnkd.in/eeg8YXzH Hope it's useful. #deeplearning #machinelearning #ai #llms
Make your LLMs fully utilize the context (paper explained)
https://www.youtube.com/
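To make the "data-driven" framing concrete, here is a rough, hypothetical sketch of the general recipe as I read the summary above: synthesize long training contexts in which the information needed to answer a question sits at an arbitrary position, so the model has to use the whole window. The filler corpus and make_qa_pair() are placeholders, not the paper's actual pipeline.

```python
# Rough sketch of a data-driven long-context training recipe (illustrative only).
# FILLER_CORPUS and make_qa_pair() are placeholders, not the paper's pipeline.
import random
from typing import Dict, Tuple

FILLER_CORPUS = [f"Unrelated background passage number {i}." for i in range(1000)]

def make_qa_pair(segment: str) -> Tuple[str, str]:
    """Placeholder: in practice a strong LLM would generate a QA pair from the segment."""
    return ("What does the key segment say?", segment)

def build_long_context_example(key_segment: str, n_fillers: int = 200) -> Dict[str, str]:
    """Bury the key segment at a random position inside many filler passages."""
    fillers = random.sample(FILLER_CORPUS, n_fillers)
    insert_at = random.randrange(len(fillers) + 1)
    passages = fillers[:insert_at] + [key_segment] + fillers[insert_at:]
    question, answer = make_qa_pair(key_segment)
    return {"context": "\n".join(passages), "question": question, "answer": answer}

example = build_long_context_example("The launch code is stored in vault 7.")
print(example["question"], "->", example["answer"])
```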
-
What We Learned from a Year of Building with LLMs (Part I): a great read with practical tips and valuable insights for anyone applying LLMs to real-world use cases. Advice from Eugene Yan, Bryan Bischof, Charles Frye, Hamel H., Jason Liu, and Shreya Shankar.
What We Learned from a Year of Building with LLMs (Part I)
oreilly.com
-
Another great paper on evaluating LLMs.
2404.13940
arxiv.org
-
Who are we talking to with LLMs, exactly? Once you realize it's not a discussion, but a co-completion, the responses make a little more sense. A relevant article from the beginning of the year: https://lnkd.in/drvuv2Uw
Who are we talking to when we talk to these bots?
medium.com
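A tiny sketch of the "co-completion" point, assuming nothing beyond a raw text-completion endpoint (complete() below is a stand-in): the "chat" is just a transcript that the model keeps extending.

```python
# Tiny illustration of the "co-completion" framing: a chat is just a transcript
# that the model keeps extending. complete() is a stand-in for a base LLM call.

def complete(text: str) -> str:
    """Placeholder for a raw completion endpoint."""
    return " Sure, here is one way to think about it..."

transcript = (
    "User: Why do LLM answers sometimes sound so confident?\n"
    "Assistant:"
)
# The model is not "replying" to us; it is predicting the most plausible
# continuation of this transcript, one token at a time.
transcript += complete(transcript)
print(transcript)
```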
-
I’ve reposted the insightful blog post, “Learnings from Building with LLMs,” authored by Eugene Yan, Bryan Bischof, Charles Frye, Hamel Husain, Jason Liu, and Shreya Shankar. The first part of the blog delves into: 🔹 Prompting tips to follow 🔹 Insights on RAG / information retrieval 🔹 Tuning and optimizing LLM workflows 🔹 Evaluation and monitoring Check out the blog here: <https://lnkd.in/gMPNbW5B> Thanks and credit to Sudalai Rajkumar - SRK for sharing this very informative post. #LLMs
What We Learned from a Year of Building with LLMs (Part I)
oreilly.com
-
Very interesting article on long-context LLMs and how they perform with RAG. The question the article poses is: will long-context LLMs negate the need for RAG? Their conclusion is that RAG and long-context LLMs are synergistic. Read the entire article here: https://lnkd.in/ggxsgTDX
Long Context RAG Performance of LLMs
databricks.com
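A minimal sketch of why the two are complementary rather than competing: a longer context window simply raises the budget for how many retrieved chunks a RAG pipeline can pass along. The naive keyword scoring, character budget, and toy corpus below are placeholders, not the benchmark setup from the article.

```python
# Sketch: a longer context window lets a RAG pipeline pass more retrieved chunks.
# Scoring and budgets are placeholders, not the article's benchmark setup.
from typing import List

def retrieve(query: str, corpus: List[str], k: int) -> List[str]:
    """Placeholder ranking: naive keyword-overlap score."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda c: -len(q & set(c.lower().split())))[:k]

def fit_to_budget(chunks: List[str], max_chars: int) -> List[str]:
    """Keep top-ranked chunks until the (crude) context budget is exhausted."""
    kept, used = [], 0
    for chunk in chunks:
        if used + len(chunk) > max_chars:
            break
        kept.append(chunk)
        used += len(chunk)
    return kept

corpus = [f"Doc {i}: notes about retrieval, context windows, and evaluation." for i in range(50)]
query = "How does context window size affect retrieval?"

short_ctx = fit_to_budget(retrieve(query, corpus, k=50), max_chars=2_000)
long_ctx = fit_to_budget(retrieve(query, corpus, k=50), max_chars=100_000)
print(len(short_ctx), "chunks fit a small window;", len(long_ctx), "fit a long one.")
```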
-
Very interesting, at least in two respects. First, humans outperform AI when summarizing sophisticated texts: “Reviewers told ... that AI summaries often missed emphasis, nuance and context; included incorrect information or missed relevant information; and sometimes focused on auxiliary points or introduced irrelevant information.” Second, AI creates additional work for people, which reminds me of the production paradox …
Via Jürgen Geuter... LLMs are great at summarizing long documents, if by "summarizing" you mean making up grammatically correct sentences that probably don't reflect the general content and important details of the original document. Here's a shortcut to using LLMs correctly. For anything you want to do with LLMs, especially if it involves shortcutting human expertise you would otherwise have to pay for: run ten real-world tests, where you have the LLM do the task, you have a competent human do it, and you have a human competent in the task judge the results. After ten real-world tests, you might have enough data to honestly pick a winner. That's what the Australian government did here. Side note... this is how I figured out that something wasn't as it seemed with this technology, and it's how I encourage my clients to discover these truths for themselves. When someone tells you, you have to take their word for it. When you can experience it yourself, it sticks quickly. #WrittenByMe
AI worse than humans in every way at summarising information, government trial finds
https://www.crikey.com.au
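Here is a small sketch of the blind head-to-head protocol the post above recommends: ten real tasks, each done by the LLM and by a competent human, then judged blind by a human competent in the task. The lambda participants are obviously placeholders; in practice the judging step is a person filling in a score sheet, not a function call.

```python
# Sketch of a blind LLM-vs-human trial over ten real tasks (illustrative only).
# The task list, "participants", and judge below are placeholders.
import random

def blind_pairwise_trial(tasks, llm_do, human_do, judge):
    wins = {"llm": 0, "human": 0}
    for task in tasks:
        outputs = [("llm", llm_do(task)), ("human", human_do(task))]
        random.shuffle(outputs)  # hide which output came from which source
        labels = [o[1] for o in outputs]
        picked = judge(task, labels[0], labels[1])  # judge returns 0 or 1
        wins[outputs[picked][0]] += 1
    return wins

tasks = [f"Summarise submission {i}" for i in range(10)]
llm_do = lambda t: f"LLM summary of: {t}"      # placeholder LLM output
human_do = lambda t: f"Human summary of: {t}"  # placeholder human output
judge = lambda task, a, b: random.randrange(2)  # stand-in for a human judgement

print(blind_pairwise_trial(tasks, llm_do, human_do, judge))
```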
-
In this three-part essay published by O'Reilly, Eugene Yan, Bryan Bischof, Hamel H., Jason Liu, Shreya Shankar, and I elaborate our shared perspective on just what it takes to productionize LLMs. Each part covers one of the three key levels of work: tactical, operational, and strategic. The first part, released today, covers tactical considerations. One year into building with LLMs, which prompting tricks have graduated to prompt engineering and which have fallen into the dustbin of prompt hacks? How and when should we compare embedding-based retrieval to lexical search? https://lnkd.in/gq2skAWB
What We Learned from a Year of Building with LLMs (Part I)
oreilly.com
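On the "embedding-based retrieval vs. lexical search" question: one low-ceremony way to compare them is on a small labeled set with the same recall metric for both. Everything below is a placeholder sketch (keyword overlap standing in for BM25, a bag-of-words Counter standing in for a real embedding model), not the essay's methodology.

```python
# Sketch: compare lexical vs "embedding" retrieval on a tiny labeled set.
# embed() is a stand-in for a real embedding model; scoring is deliberately crude.
import math
from collections import Counter
from typing import Callable, List, Tuple

def lexical_score(query: str, doc: str) -> float:
    """Keyword overlap as a crude stand-in for BM25."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return float(sum((q & d).values()))

def embed(text: str) -> Counter:
    """Placeholder 'embedding': a bag-of-words vector. Swap in a real model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall_at_k(score_fn: Callable[[str, str], float],
                eval_set: List[Tuple[str, str]], docs: List[str], k: int = 1) -> float:
    hits = 0
    for query, relevant_doc in eval_set:
        ranked = sorted(docs, key=lambda d: -score_fn(query, d))[:k]
        hits += relevant_doc in ranked
    return hits / len(eval_set)

docs = ["reset your password in settings",
        "invoice and billing questions",
        "api rate limits and quotas"]
eval_set = [("how do I change my password", docs[0]),
            ("why was I charged twice", docs[1])]

print("lexical recall@1:  ", recall_at_k(lexical_score, eval_set, docs))
print("embedding recall@1:", recall_at_k(lambda q, d: cosine(embed(q), embed(d)), eval_set, docs))
```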