Enterprise RAG and Multi-Agent Applications

4.9 (22) · 7 Weeks · Cohort-based Course

Build and Optimize Production-Grade RAG and LLM Applications: Master Advanced Techniques for Scalable, Secure, and Low-Latency AI Solutions

This course is popular

15 people enrolled last week.

Previously at: Google, Stanford University, UCLA, University of Minnesota

Course overview

Go Beyond Basic Frameworks: Build and Deploy Production-Grade AI Solutions

Welcome to the most technically rigorous and hands-on Large Language Model (LLM) application course available today.


This isn't just another AI course – it's your gateway to mastering the art and science of deploying production-grade LLM solutions that stand out in the real world.


As part of Maven's Top-Rated Content, this course is designed for those who have already mastered the basics of RAG, cosine similarity, vector databases, and LLMs. We'll take you to the next level, focusing on practical aspects of packaging and deploying these models in real-world production environments.


For cohort members joining this intensive learning experience, here's what you get:


- 6 weeks of in-depth content

- Weekly office hours for personalized guidance

- Real-world projects and challenging assignments

- Guest lectures by leading AI professionals

- Continued support post-graduation

- Lifetime access to course materials



What You'll Master: Course Highlights


Agents: Forget CrewAI or AutoGen; build your own agents from scratch. Learn what it takes to make an agent from the ground up, and contribute to the open-source community.
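
To give a flavor of what "from scratch" means here, below is a minimal, hedged sketch of a tool-calling agent loop in plain Python. Everything in it (the `call_llm` placeholder, the toy tool registry) is illustrative, not course code:

```python
# Minimal agent loop: the LLM either picks a tool or returns a final answer.
import json

# Toy tool registry; a real agent would register proper functions with schemas.
TOOLS = {
    "search": lambda query: f"Top result for '{query}' (stubbed)",
    "echo": lambda text: text,
}

def call_llm(messages: list[dict]) -> str:
    """Hypothetical placeholder for any chat-completion client.
    Expected to return JSON: {"tool": ..., "input": ...} or {"answer": ...}."""
    raise NotImplementedError

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = json.loads(call_llm(messages))
        if "answer" in decision:                       # model decided it is done
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["input"])
        messages.append({"role": "assistant", "content": json.dumps(decision)})
        messages.append({"role": "user", "content": f"Tool result: {observation}"})
    return "Stopped after max_steps without a final answer."
```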


Advanced RAG Solutions: Dive into enterprise-level RAG architectures and learn how to build and implement semantic caching from scratch using GCP and Redis.
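
As a rough illustration of the idea (not the course's reference implementation), a semantic cache stores query embeddings next to answers and returns a cached answer when a new query is close enough. The sketch below assumes a local Redis instance and the `sentence-transformers` package; the similarity threshold is illustrative:

```python
# Sketch of a semantic cache: reuse an answer when a new query is semantically close.
import json
import numpy as np
import redis
from sentence_transformers import SentenceTransformer

r = redis.Redis(host="localhost", port=6379)
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def cache_answer(query: str, answer: str) -> None:
    emb = encoder.encode(query).astype(np.float32)
    r.rpush("semantic_cache", json.dumps({"emb": emb.tolist(), "answer": answer}))

def lookup(query: str, threshold: float = 0.9) -> str | None:
    q = encoder.encode(query).astype(np.float32)
    q /= np.linalg.norm(q)
    for raw in r.lrange("semantic_cache", 0, -1):   # linear scan; fine for a demo
        entry = json.loads(raw)
        e = np.asarray(entry["emb"], dtype=np.float32)
        if float(q @ (e / np.linalg.norm(e))) >= threshold:
            return entry["answer"]
    return None   # cache miss: call the LLM, then cache_answer()
```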


LLM Hosting and Deployment: Gain insights into best practices for hosting Large Language Models (LLMs) in diverse production settings, creating inference endpoints, and deploying LLMs on serverless platforms.
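
For orientation, the shape of an inference endpoint is roughly the following: a minimal FastAPI wrapper around a placeholder `generate` function (the model call itself is stubbed; swap in vLLM, TGI, Bedrock, or your own backend):

```python
# Minimal inference endpoint: wrap a model behind an HTTP API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 256

def generate(prompt: str, max_new_tokens: int) -> str:
    """Placeholder for a real model call (vLLM, TGI, Bedrock, etc.)."""
    return f"(echo) {prompt[:50]}"

@app.post("/generate")
def generate_endpoint(req: GenerateRequest) -> dict:
    return {"completion": generate(req.prompt, req.max_new_tokens)}

# Run locally with: uvicorn app:app --port 8000
```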


Continual Pre-Training and Fine-Tuning: Explore advanced techniques for continual pre-training, fine-tuning LLMs, and mitigating catastrophic forgetting. Learn how to build a data pipeline for pre-training, apply causal language modeling, and leverage scaling laws.
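
As a compressed, hedged sketch of the causal-language-modeling setup with the Hugging Face Trainer (model and dataset names are illustrative; real continual pre-training also needs careful data mixing and forgetting mitigations):

```python
# Compressed causal LM fine-tuning sketch with Hugging Face Transformers.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"   # illustrative; swap in the model you actually train
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.filter(lambda ex: ex["text"].strip())   # drop empty rows
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM
)
trainer.train()
```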


Model Merging and Mixture of Experts: Master techniques for merging multiple models to enhance their collective capabilities, including the Mixture of Experts (MoE) approach. Learn to use tools like mergekit for efficient model merging.
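
As a hedged example of what a merge looks like in practice with mergekit (the config keys and CLI usage follow the mergekit README at the time of writing and may differ across versions; the model names are illustrative):

```python
# Sketch: write a mergekit SLERP config and invoke the CLI (assumes `pip install mergekit`).
import pathlib
import subprocess

config = """\
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
slices:
  - sources:
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [0, 32]
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [0, 32]
parameters:
  t: 0.5        # interpolation factor between the two models
dtype: bfloat16
"""
pathlib.Path("merge_config.yml").write_text(config)
subprocess.run(["mergekit-yaml", "merge_config.yml", "./merged-model"], check=True)
```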


Quantization Methods: Discover techniques to reduce model size while maintaining performance, crucial for deployment in resource-constrained environments.
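
One concrete technique from this module, 4-bit NF4 loading with bitsandbytes through Transformers, looks roughly like this (model name illustrative; requires a CUDA GPU and the `bitsandbytes` package):

```python
# Load a causal LM in 4-bit (NF4) to shrink its memory footprint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "mistralai/Mistral-7B-Instruct-v0.2"   # illustrative
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16, store weights in 4-bit
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto")

inputs = tokenizer("Summarize RAG in one sentence:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```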


Inference Speed Optimization: Learn strategies to accelerate inference speeds for real-time language processing, ensuring efficient and responsive AI systems.
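
To make this concrete, one common lever is a serving engine with continuous batching and PagedAttention, such as vLLM; a minimal offline-batching sketch (model name illustrative):

```python
# Batched generation with vLLM's offline engine (continuous batching + PagedAttention).
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")   # illustrative model
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = [
    "Explain semantic caching in one sentence.",
    "What is a Mixture of Experts model?",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```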


Responsible AI Implementation: Explore ethical AI development using guardrail tooling such as NeMo Guardrails (with Colang) and Llama Guard to ensure AI systems align with responsible AI principles.
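
A hedged sketch of what a guardrail can look like with NeMo Guardrails and inline Colang follows. Config fields and Colang syntax depend on your NeMo Guardrails version and LLM provider, so treat this as an outline rather than a drop-in config:

```python
# Sketch: wrap an LLM with NeMo Guardrails using inline YAML + Colang content.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-4o-mini
"""

colang_content = """
define user ask financial advice
  "give me financial advice"

define bot refuse to respond
  "Sorry, I can't help with that request."

define flow refuse financial advice
  user ask financial advice
  bot refuse to respond
"""

config = RailsConfig.from_content(yaml_content=yaml_content, colang_content=colang_content)
rails = LLMRails(config)
print(rails.generate(messages=[{"role": "user", "content": "give me financial advice"}]))
```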


Agentic RAG and Chunking Strategies: Implement advanced semantic chunking techniques and explore AI agent frameworks like AutoGen to enhance the capabilities of RAG systems.
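
For a flavor of semantic chunking, a bare-bones version embeds sentences and starts a new chunk when adjacent-sentence similarity drops below a threshold (assumes `sentence-transformers`; the threshold is illustrative):

```python
# Bare-bones semantic chunking: split where adjacent sentences stop being similar.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_chunks(sentences: list[str], threshold: float = 0.6) -> list[str]:
    embs = encoder.encode(sentences, normalize_embeddings=True)   # unit-norm vectors
    chunks, current = [], [sentences[0]]
    for prev, nxt, sent in zip(embs, embs[1:], sentences[1:]):
        if float(prev @ nxt) < threshold:   # similarity drop -> topic shift
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
    chunks.append(" ".join(current))
    return chunks

print(semantic_chunks([
    "Redis can serve as a vector store.",
    "It supports fast key lookups.",
    "Llamas are domesticated South American camelids.",
]))
```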


DSPy and Knowledge Graphs: Learn to create and utilize knowledge graphs effectively, mastering DSPy as an alternative prompting approach for structured data handling and enhanced AI interaction.
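
A tiny, hedged DSPy sketch shows the declarative style: a signature instead of a hand-written prompt. The `dspy.LM`/`dspy.configure` calls follow recent DSPy releases and the model string is illustrative:

```python
# Declare what you want ("context, question -> answer") and let DSPy manage the prompt.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))   # illustrative model string

qa = dspy.ChainOfThought("context, question -> answer")

result = qa(context="Traversaal.ai builds enterprise LLM products.",
            question="What does Traversaal.ai build?")
print(result.answer)
```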


Throughout the course, we will analyze state-of-the-art AI products, reverse-engineering some through Python. As a bonus, you'll have access to experimental products being developed at Traversaal.ai, my startup, allowing you to stay at the forefront of advancements in the field.


Prerequisites: You should have hands-on experience coding a RAG solution, an understanding of encoders and decoders, and some familiarity with cloud services and APIs.


If you feel the need for a more foundational course, consider checking out my other offering on LLMs: Building LLM Applications (https://maven.com/boring-bot/ml-system-design).


This course is for you if you are a:

1. Machine Learning Engineer exploring different techniques to scale LLM solutions

2. Researcher who would like to delve into various aspects of open-source LLMs

3. Software Engineer looking to learn how to integrate AI into their products

What you’ll get out of this course

Advanced AI Architectures

Understand and implement complex AI architectures, including enterprise-level RAG systems and agentic RAG strategies. You will also dive deep into the Mixture of Experts (MoE) technique and other model merging strategies to enhance the capabilities of your AI systems.

Practical Skills for Deployment

From building semantic caches using GCP and Redis to deploying LLMs on serverless platforms like AWS Bedrock, you'll learn the practical skills to deploy and manage AI applications in real-world scenarios. 

Fine-Tuning Expertise

Acquire advanced techniques for fine-tuning LLMs, enabling you to adapt these models to specific tasks or domains and enhance their performance in targeted applications.

Efficient Inference Processing

Learn strategies for measuring and optimizing inference speeds, ensuring that your language models perform efficiently in real-time scenarios, a crucial skill for deploying responsive and scalable applications.

Knowledge of Responsible AI

Understand the importance of ethical AI development and learn to implement guardrails using tools like NeMo, Colang, and Llama Guard to ensure your AI systems align with responsible AI principles.




This course includes

10 interactive live sessions

Lifetime access to course materials

37 in-depth lessons

Direct access to instructor

5 projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

Week 1

Dec 6—Dec 8

    Dec 7

    Session 1: Enterprise RAG and Multi Agents

    Sat 12/7, 5:00 PM—7:00 PM (UTC)

    Recordings from previous Talks/Sessions

    2 items

    Enterprise RAG Solutions with Semantic Caching

    8 items

Week 2

Dec 9—Dec 15

    Dec 10

    Office Hours

    Tue 12/10, 8:00 PM—8:30 PM (UTC)
    Optional

    Dec 14

    Session 2

    Sat 12/14, 5:00 PM—7:00 PM (UTC)

    Optimizing and Deploying Large Language Models

    9 items

Week 3

Dec 16—Dec 22

    Dec 17

    Office Hours

    Tue 12/17, 10:00 PM—10:30 PM (UTC)
    Optional

    Dec 21

    Session 3

    Sat 12/21, 5:00 PM—7:00 PM (UTC)

    Dec 22

    Session 4

    Sun 12/22, 5:00 PM—7:00 PM (UTC)

    Module: DSPy and Implementing Guardrails for Responsible AI

    8 items

Week 4

Dec 23—Dec 29

    Dec 23

    Office Hours

    Mon 12/23, 6:30 PM—7:00 PM (UTC)
    Optional

    Knowledge Graphs

    9 items

Week 5

Dec 30—Jan 5

    Jan 4

    Session 5

    Sat 1/4, 5:00 PM—7:00 PM (UTC)

    Model Merging and Fine-tuning Video recordings

    2 items

Week 6

Jan 6—Jan 12

    Jan 11

    Session 6

    Sat 1/11, 5:00 PM—7:00 PM (UTC)

Week 7

Jan 13—Jan 18

    Semantic and Agentic RAG

    2 items

    Autogen and Agents

    2 items

Post-course

    Feb 7

    Demo Day

    Fri 2/7, 5:00 PM—7:00 PM (UTC)


Meet your instructor

Hamza Farooq

I am the founder of Traversaal.ai, an LLM-based startup dedicated to creating scalable, customizable, and cost-efficient language model solutions for enterprises.


With over 15 years of experience in machine learning, my journey has spanned three continents and seven countries, covering a diverse range of industries such as tech, telecommunications, finance, and retail.


As a former Senior Research Manager at Google and Walmart Labs, I led data science and machine learning teams focused on optimization, natural language processing, recommender systems, and time series forecasting.

I am also an adjunct professor at Stanford and UCLA, where I bridge the gap between academic theory and real-world AI applications.


Additionally, I frequently speak at conferences and conduct training sessions, sharing insights on large language models, deep learning, and cloud computing.


Join an upcoming cohort

Enterprise RAG and Multi-Agent Applications

Cohort 4

$800

Dates

Dec 7—Jan 18, 2025

Payment Deadline

Dec 6, 2024

Don't miss out! Enrollment closes in 5 days

Get reimbursed

Course schedule

4-6 hours per week

  • Sundays

    9:00 - 11:00am PT

    Virtual Class

  • Weekly projects

    2-3 hours per week

    Work in teams to build solutions; this requires engagement with other team members

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

