Henry Peter’s Post


We are making groundbreaking changes to service creation and delivery using Large Language Models! Innovate in AI-driven eXperience Automation and be part of a pioneering team. #AI #XA #CXA #Innovation #JoinUs

That's a good summary, Manny Bernabe. Reflecting on an insightful article from O'Reilly about LLMs, I was struck by this point: "LLMs are also broadly accessible, allowing everyone, not just ML engineers and scientists, to build intelligence into their products. While the barrier to entry for building AI products has been lowered, creating those effective beyond a demo remains a deceptively difficult endeavor."

This democratization reminds me of the printing press's impact on biblical texts: it expanded access but also introduced new interpretive challenges. As we navigate these advancements, it's crucial to balance accessibility with responsible use.

At Ushur, we've been at the forefront of leveraging LLMs in our backend/knowledge-work automation, as well as carefully introducing them on the conversational side via our chatbots and other modes. More will be coming out on that.

Read more here: O'Reilly Article

#AI #LLM #Technology #Innovation #History #Democratization #ResponsibleAI #Ushur

Manny Bernabe is an Influencer

AI Evangelist, Content Creator and Educator

“What We Learned from a Year of Building with LLMs” - a wonderful article on building LLMs in production. Thank you Eugene Yan, Bryan Bischof, Charles Frye, Hamel H., Jason Liu and Shreya Shankar! My highlights below 👇

----

Part One focuses on tactical insights, including prompting tips, evaluation strategies, Retrieval-Augmented Generation (RAG), and fine-tuning considerations. Parts Two and Three cover operational aspects and strategy. [Links in comments]

----

Prompting:
1 - Start every LLM application with prompting.
2 - Build on various prompting techniques: in-context learning with n-shot prompts, chain of thought (CoT), and relevant resources (RAG, etc.).
3 - For n, aim for 5 examples per prompt; don't hesitate to go up to 12.
4 - Lean towards smaller, focused prompts over large, multi-purpose ones.

----

RAG (Retrieval-Augmented Generation):
1 - Don't rely solely on vector embeddings for search.
2 - Combine keyword search with embeddings for a hybrid approach.
3 - Start with RAG before fine-tuning; it's cost-efficient and effective.
4 - RAG remains relevant even with longer-context models.

----

Fine-Tuning:
1 - Consider fine-tuning only when prompting and RAG are insufficient.
2 - Fine-tuning adds complexity and cost: it requires annotated data, model evaluation, and hosting.

----

Evaluation:
1 - Use LLMs as judges; they show decent correlation with human judges.
2 - Evaluation metrics: Likert scales, binary classifications, pairwise comparisons. Binary classification carries the lowest cognitive load.
3 - Safety and PII defects are managed well, but hallucinations still sit around 5-10% and are tough to get under 2%.

----

Overall, a dense (but readable) article offering valuable tactical insights into LLM development. Highly recommended for a deep dive into current best practices and tips for effective LLM implementation. #genai #enterprise #llms
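The prompting tips above (n-shot examples, chain of thought, small single-purpose prompts) can be sketched in a few lines. This is a minimal illustration, not from the article: the sentiment task, example data, and wording are all hypothetical.

```python
# Sketch: assemble a small, focused n-shot prompt with a chain-of-thought
# instruction. Aim for ~5 in-context examples in practice (up to 12);
# only 2 are shown here to keep the sketch short.
EXAMPLES = [
    {"review": "Arrived broken and support never replied.", "label": "negative"},
    {"review": "Works exactly as described, great value.", "label": "positive"},
]

def build_prompt(review: str) -> str:
    """Build a single-purpose classification prompt."""
    lines = [
        "Classify the sentiment of a product review.",
        "Think step by step, then answer 'positive' or 'negative'.",
        "",
    ]
    for ex in EXAMPLES:  # in-context (n-shot) demonstrations
        lines.append(f"Review: {ex['review']}")
        lines.append(f"Sentiment: {ex['label']}")
        lines.append("")
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")  # model completes from here
    return "\n".join(lines)

print(build_prompt("Battery died after two days."))
```

The same structure scales to any labeling task by swapping the instruction and the example list.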
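The hybrid-retrieval point (keyword search plus embeddings) can be illustrated with a toy scorer. A real system would use BM25 and a learned embedding model; here both signals are deliberately simplified stand-ins, and the documents and weighting are made up for the sketch.

```python
# Toy sketch of hybrid retrieval: blend a keyword-overlap score with a
# vector-similarity score. Stand-ins: Jaccard overlap for BM25, and a
# character-trigram bag for a neural embedding.
import math
from collections import Counter

DOCS = [
    "refund policy for damaged items",
    "how to reset your account password",
    "shipping times for international orders",
]

def keyword_score(query: str, doc: str) -> float:
    """Jaccard word overlap (stand-in for a keyword engine like BM25)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def embed(text: str) -> Counter:
    """Character-trigram 'embedding' (stand-in for a neural encoder)."""
    t = f"  {text.lower()}  "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, alpha: float = 0.5) -> list[str]:
    """Rank documents by a weighted blend of both signals."""
    qv = embed(query)
    scored = [
        (alpha * keyword_score(query, d) + (1 - alpha) * cosine(qv, embed(d)), d)
        for d in DOCS
    ]
    return [d for _, d in sorted(scored, reverse=True)]

print(hybrid_search("reset password")[0])
```

The blend weight `alpha` is something to tune per corpus; the point is simply that neither signal alone is relied on.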
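For the evaluation point, a pairwise comparison asks a judge LLM which of two answers is better rather than scoring each in isolation. A rough sketch of how such a prompt might be built, with the wording and the `call_llm` hook both hypothetical:

```python
# Sketch: pairwise-comparison judging. The judge model sees the question
# and two candidate answers, then picks the better one.
def judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    return (
        "You are grading two answers to the same question.\n"
        f"Question: {question}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Reply with exactly 'A' or 'B' for the better answer."
    )

prompt = judge_prompt(
    "What is RAG?",
    "Retrieval-Augmented Generation grounds LLM answers in retrieved documents.",
    "It is a type of cleaning cloth.",
)
# verdict = call_llm(prompt)  # hypothetical hook: send to any judge model
```

To reduce position bias, the same pair is typically judged twice with A and B swapped.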

Manny Bernabe

8mo

💯 Henry, I'm really pumped about the diverse and powerful LLM use cases at Ushur. There are so many great ones!

