Agentic AI
Learn:
● Data Types: int, float, str, bool, list, tuple, dict, set.
● Operators: Arithmetic, comparison, logical, assignment, membership, identity.
● Control Flow: if, elif, else, for, while, break, continue.
● Functions: Definition, arguments, return values, scope, lambda functions, recursion.
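To make these basics concrete, here is a short, self-contained sketch that touches most of the topics above (the values and names are invented purely for illustration):

```python
# A few of the basics in one place: data types, operators, control flow,
# functions, lambdas, and recursion.

def factorial(n: int) -> int:
    """Recursion: n! = n * (n-1)!, with 0! = 1 as the base case."""
    return 1 if n == 0 else n * factorial(n - 1)

numbers = [3, 1, 4, 1, 5]                    # list
squares = {n: n * n for n in set(numbers)}   # set -> dict comprehension

for n, sq in squares.items():                # for loop over a dict
    if sq % 2 == 0:                          # if/else control flow
        print(f"{n}^2 = {sq} (even)")
    else:
        print(f"{n}^2 = {sq} (odd)")

print(sorted(numbers, key=lambda x: -x))     # lambda as a sort key
print(factorial(5))                          # 120
print(3 in numbers, numbers is squares)      # membership and identity operators
```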
Resources:
Courses
● Python for Everybody (University of Michigan on Coursera): Excellent for beginners, well-structured, and taught by a great instructor (Dr. Chuck).
● Introduction to Computer Science and Programming in Python (MIT OpenCourseWare): A more rigorous and theoretical introduction to computer science using Python.
Interactive Platforms:
Books:
Steps:
Weeks 1-2:
● Choose one primary course (e.g., Python for Everybody or MIT 6.0001) and commit to it.
● Don't just watch the videos or read passively. Type out the code yourself, experiment with it, modify it, and make sure you understand what's happening before moving on.
● Complete all exercises, quizzes, and challenges in your chosen course.
● Work through the corresponding chapters in "Automate the Boring Stuff" (or "Python
Crash Course") and do the practice projects.
● Use Codecademy or LearnPython.org for interactive practice.
Weeks 3-4:
● Continue with your chosen course, focusing on functions and data structures.
● Start working on more challenging problems on HackerRank or LeetCode (easy level). Filter for problems related to the topics you've covered.
● Combine different concepts to create small, functional programs.
Resources:
Courses:
Books:
● "Automate the Boring Stuff with Python, 2nd Edition": Chapter 17 has a good overview of
OOP.
● "Python Crash Course, 3rd Edition": Chapters 9 and 10 cover classes in more depth.
● "Fluent Python, 2nd Edition" by Luciano Ramalho: Chapters 12-22 provide an advanced
treatment of OOP (save this for later).
Articles:
Week 5:
● Start with a Course: Begin with the "Object-Oriented Programming in Python" course on
Coursera or a similar course on edX.
● Create Simple Classes:
○ BankAccount
○ Dog
○ Rectangle
● Practice: Implement the classes, experiment with creating objects, and call their methods.
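Here is a minimal sketch of what those practice classes might look like; the method names and behavior are one plausible design, not a required one:

```python
class BankAccount:
    """A simple bank account with deposit and withdrawal."""

    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("Insufficient funds")
        self.balance -= amount


class Dog:
    """A dog that knows its name and can speak."""

    def __init__(self, name):
        self.name = name

    def speak(self):
        return f"{self.name} says woof!"


class Rectangle:
    """A rectangle with a width, a height, and an area."""

    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


# Create objects and call their methods.
account = BankAccount("Alice", 100.0)
account.deposit(50.0)
print(account.balance)         # 150.0
print(Dog("Rex").speak())      # Rex says woof!
print(Rectangle(3, 4).area())  # 12
```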
Week 6:
Learn:
● NumPy: Arrays, array operations, broadcasting, linear algebra, random number generation.
● Pandas: Series, DataFrames, data input/output, data selection and filtering, data
manipulation, data transformation, data aggregation and grouping, time series data.
● Requests: Making HTTP requests, handling responses, passing parameters, authentication.
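As a quick preview of the Requests topics above, here is a minimal sketch of a GET request with query parameters and basic response handling (httpbin.org is just a public echo service used for illustration; any JSON API works):

```python
import requests

# Make an HTTP GET request, passing query parameters.
response = requests.get(
    "https://httpbin.org/get",
    params={"q": "agentic ai", "page": 1},
    timeout=10,
)

# Handle the response: check the status, then parse the JSON body.
response.raise_for_status()     # raises on 4xx/5xx errors
data = response.json()
print(response.status_code)     # 200
print(data["args"])             # {'q': 'agentic ai', 'page': '1'}
```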
NumPy:
Pandas:
Requests:
Steps:
Week 7:
● NumPy: Work through the NumPy tutorials and user guide sections. Practice creating
arrays, performing operations, and understanding broadcasting. Start working on small
linear algebra problems.
● Pandas: Complete the "10 Minutes to pandas" tutorial and the Kaggle Pandas tutorial.
Focus on understanding Series, DataFrames, data import/export, selection, filtering, and
basic manipulation.
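A short sketch tying the NumPy and Pandas practice together; the arrays and the sales table are made up for illustration:

```python
import numpy as np
import pandas as pd

# NumPy: create arrays, perform vectorized operations, use broadcasting.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
row = np.array([10.0, 20.0])
print(a + row)                  # broadcasting: row is added to each row of a
print(a @ a)                    # matrix multiplication (basic linear algebra)
print(np.random.default_rng(0).normal(size=3))  # random number generation

# Pandas: build a DataFrame, then select, filter, and aggregate.
df = pd.DataFrame({
    "city": ["Paris", "Lyon", "Paris", "Nice"],
    "sales": [120, 80, 200, 50],
})
print(df[df["sales"] > 75])                # filtering with a boolean mask
print(df.groupby("city")["sales"].sum())   # aggregation and grouping
```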
Learn:
● Supervised Learning: Regression (Linear, Polynomial), Classification (Logistic Regression,
Decision Trees, Random Forests, SVM, KNN).
● Unsupervised Learning: Clustering (K-Means, Hierarchical), Dimensionality Reduction
(PCA).
● Reinforcement Learning: Basic concepts (agent, environment, state, action, reward, policy),
MDPs, Q-learning, SARSA.
● Evaluation Metrics: Regression (MAE, MSE, RMSE, R-squared), Classification (Accuracy,
Precision, Recall, F1-score, ROC Curve, AUC), General (Confusion Matrix).
● Bias-Variance Tradeoff: Understand underfitting, overfitting, and techniques to find the
right balance.
● Cross-Validation: K-Fold, Stratified K-Fold, Leave-One-Out.
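To ground the evaluation and cross-validation ideas, here is a minimal scikit-learn sketch; the bundled breast-cancer dataset and logistic regression are placeholders for whatever data and model you are working with:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Classification metrics on a held-out test set.
print("accuracy:", accuracy_score(y_test, pred))
print("f1:", f1_score(y_test, pred))
print("confusion matrix:\n", confusion_matrix(y_test, pred))

# Stratified K-Fold cross-validation gives a more stable estimate and
# helps diagnose overfitting (high train score, low CV score).
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
print("5-fold F1 scores:", scores.round(3))
```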
Books:
● "Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow, 3rd Edition" by
Aurélien Géron.
● "The Hundred-Page Machine Learning Book" by Andriy Burkov: A concise overview of
essential concepts.
● "Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman: A more advanced
and theoretical text (for reference later).
Other Resources:
Steps:
Weeks 9-10:
● Focus on the Coursera Machine Learning course (Andrew Ng): Watch the lectures, take
notes, and complete the quizzes.
● Supplement with StatQuest and 3Blue1Brown: Use these for visual explanations of
complex topics.
Become proficient in using Scikit-learn for data preprocessing, model training, evaluation, and
selection.
Learn:
Resources:
Focus on:
Steps:
Weeks 11-12:
● Deep Dive into Scikit-learn Documentation: Spend time reading the relevant sections of
the Scikit-learn User Guide.
● Practice with Projects:
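As one concrete practice project, here is a minimal sketch of the kind of end-to-end workflow the User Guide emphasizes: a Pipeline that chains preprocessing and a model, tuned with cross-validated grid search (the dataset and the hyperparameter grid are placeholders):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Chain preprocessing and the model so scaling is learned only on the
# training folds (this avoids data leakage during cross-validation).
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("svm", SVC()),
])

# Model selection: search over hyperparameters with cross-validation.
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("test score:", grid.score(X_test, y_test))
```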
Learn: Tokenization, text cleaning, stemming, lemmatization, POS tagging, NER, and text representation (BoW, TF-IDF, word embeddings: Word2vec, GloVe, FastText).
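For a first feel of the difference between Bag-of-Words and TF-IDF, here is a minimal scikit-learn sketch (the two toy documents are invented; NLTK and spaCy, covered below, handle tokenization, stemming, and tagging):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
]

# Bag of Words: raw token counts per document.
bow = CountVectorizer()
print(bow.fit_transform(docs).toarray())
print(bow.get_feature_names_out())

# TF-IDF: counts reweighted so terms common to every document
# (like "the") carry less weight than distinctive terms.
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(docs).toarray().round(2))
```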
Resources:
Books:
● "Natural Language Processing with Python" by Steven Bird, Ewan Klein, and Edward
Loper (NLTK Book): https://www.nltk.org/book/
● "Speech and Language Processing" by Daniel Jurafsky and James H. Martin: A more
advanced textbook (for reference).
● "A Visual Guide to Using BERT for the First Time" by Jay Alammar:
http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/2
● "The Illustrated Word2vec" by Jay Alammar:
http://jalammar.github.io/illustrated-word2vec/
● TensorFlow - Word Embeddings:
https://www.tensorflow.org/text/guide/word_embeddings
Steps:
Weeks 13-14:
Resources:
● NLTK Book: https://www.nltk.org/book/ - Work through the examples and exercises in
Chapters 1-8.
● spaCy 101: https://spacy.io/usage/spacy-101
● spaCy API Documentation: https://spacy.io/api
● Advanced NLP with spaCy: https://course.spacy.io/en/
● Real Python - spaCy Tutorial:
https://realpython.com/natural-language-processing-spacy-python/
● NLTK Tutorial for Beginners:
https://www.datacamp.com/tutorial/text-analytics-beginners-nltk
Steps:
Weeks 15-16:
● NLTK:
○ Work through the NLTK Book and actively code along with the examples.
○ Complete the exercises to reinforce your understanding.
● spaCy:
○ Complete the "spaCy 101" tutorial to Get comfortable with the basic spaCy
workflow.
○ Work through the "Advanced NLP with spaCy" course.
○ Read the Real Python spaCy tutorial.
● Comparative Project:
○ Choose a Text Processing Task: Sentiment analysis, text classification, or named
entity recognition.
○ Implement with Both NLTK and spaCy: Build two versions, one using each library.
○ Compare and Contrast: Analyze the differences in code, performance, and ease of
use.
● Example Project Ideas:
○ Sentiment Analysis of Movie Reviews (e.g., IMDB dataset).
○ Topic Modeling of News Articles.
○ Named Entity Recognition for a Specific Domain.
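A minimal starting point for the comparative project: the same sentence processed with both libraries. Note the one-time downloads; the exact NLTK resource names can vary between versions, so treat them as assumptions:

```python
import nltk
import spacy

text = "Apple is looking at buying a U.K. startup for $1 billion."

# NLTK: download the needed data once, then tokenize and POS-tag.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
tokens = nltk.word_tokenize(text)
print(nltk.pos_tag(tokens)[:4])

# spaCy: load a pretrained pipeline (install it first with
# `python -m spacy download en_core_web_sm`), then process the text.
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
print([(tok.text, tok.pos_) for tok in doc[:4]])
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities
```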
Understand RNNs, their limitations, and learn about LSTMs and GRUs.
Learn: Sequential data, RNN architecture, hidden state, weight sharing, Backpropagation Through Time (BPTT), vanishing and exploding gradients, Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRUs), and bidirectional RNNs.
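A minimal Keras sketch of these ideas: an embedding layer feeding a bidirectional LSTM for binary sequence classification (the vocabulary size and layer widths are illustrative, not tuned):

```python
import tensorflow as tf

# A tiny classifier for variable-length, integer-encoded token sequences.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,)),  # variable-length sequences
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),  # bidirectional RNN
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Swapping the LSTM for a GRU is a one-line change:
# tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32))
```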
Resources:
Courses:
Books:
● "Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow": Chapter 16.
● "Deep Learning" by Goodfellow, Bengio, and Courville: Chapters 10 and 12.
Steps:
Weeks 17-18:
Resources:
Courses:
Articles/Blogs:
Weeks 19-20:
● DeepLearning.AI Specialization: If you are taking this specialization, complete Course 5,
Week 4, which covers the Transformer architecture.
● Jay Alammar's Blogs: Read "The Illustrated Transformer" and "The Illustrated BERT"
multiple times. These are crucial for a visual and intuitive understanding.
● Stanford CS224N: Watch Lectures 13 and 14 on Transformers.
● Hugging Face NLP Course: Start working through this course. It will introduce you to the
Hugging Face library and how to use pre-trained Transformer models.
● Original Papers (Optional): If you're feeling ambitious, skim the "Attention Is All You Need"
paper to get a sense of the original source material. Don't worry if you don't understand
everything at this point.
● Implementation (Optional): If you want to try implementing a simplified version of the
Transformer from scratch, there are some good tutorials available online (e.g., "The
Annotated Transformer" from Harvard NLP:
https://nlp.seas.harvard.edu/2018/04/03/attention.html). However, for now, it's more
important to understand the concepts and how to use existing libraries.
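As you skim the paper and the blog posts, the one equation worth internalizing is scaled dot-product attention, where Q, K, and V are the query, key, and value matrices and d_k is the key dimension (the square-root scaling keeps dot products from saturating the softmax):

```latex
\mathrm{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```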
Resources:
● Hugging Face NLP Course: https://huggingface.co/learn/nlp-course/chapter1/1 -
This is your primary resource. Work through the entire course.
● Hugging Face Documentation: https://huggingface.co/docs - Get familiar with the
documentation. You'll be referring to it frequently.
● Hugging Face Model Hub: https://huggingface.co/models - Explore the available
pre-trained models.
● Transformer models: https://huggingface.co/docs/transformers/index
Weeks 21-22:
● Complete the Hugging Face NLP Course: This course will guide you through the process of
using the Hugging Face library for various NLP tasks. You'll learn how to load pre-trained
models, fine-tune them on your own data, and use them for inference.
● Experiment with Different Models: Try out different pre-trained models (BERT, RoBERTa,
GPT-2, etc.) and see how they perform on different tasks.
● Fine-tune a Model: Choose an NLP task (e.g., text classification, question answering) and
fine-tune a pre-trained Transformer model on a relevant dataset.
● Build a Project: Create a small project that uses a Transformer model for a real-world
application. For example:
○ A chatbot that uses a fine-tuned GPT-2 model to generate responses.
○ A sentiment analyzer that uses a fine-tuned BERT model to classify text as positive
or negative.
○ A question-answering system that uses a BERT model to answer questions based on
a given context.
○ A text summarizer using a T5 model.
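To see how little code inference takes, here is a minimal sketch using the Hugging Face pipeline API (the default sentiment checkpoint and t5-small are examples; any suitable model from the Hub works):

```python
from transformers import pipeline

# Inference with a pre-trained model: the pipeline API picks a default
# checkpoint for the task (or pass model="..." to choose one explicitly).
classifier = pipeline("sentiment-analysis")
print(classifier("I loved this movie!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# The same API covers other tasks from the project list above:
summarizer = pipeline("summarization", model="t5-small")
print(summarizer(
    "Transformers are a neural network architecture built on attention. " * 5,
    max_length=30,
    min_length=5,
))
```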
Resources:
Courses:
Articles/Blogs:
Steps:
Weeks 23-24:
● Focus on Concepts:
○ Work through the "Generative AI with Large Language Models" course on Coursera.
○ Read blog posts from OpenAI, DeepMind, and Google AI to get a sense of the
current state of the field.
○ Skim the original GAN and VAE papers to get a basic understanding of their core
ideas.
● Experiment with Pre-trained Models:
○ Use the Hugging Face library to experiment with pre-trained GPT models for text
generation.
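For those GPT experiments, here is a minimal text-generation sketch with a pre-trained GPT-2 checkpoint (the prompt and sampling settings are just a starting point to play with):

```python
from transformers import pipeline

# Text generation with a pre-trained GPT-2 checkpoint.
generator = pipeline("text-generation", model="gpt2")
outputs = generator(
    "An agentic AI system is",
    max_new_tokens=40,        # length of the continuation
    num_return_sequences=2,
    do_sample=True,           # sample instead of greedy decoding
    temperature=0.9,          # higher = more diverse text
)
for out in outputs:
    print(out["generated_text"])
```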
Weeks 25-26:
Courses:
Articles:
Steps:
Weeks 27-29:
Learn: Tool use pattern, Retrieval-Augmented Generation (RAG) pattern, planning pattern, reflection pattern, multi-agent collaboration pattern, and agent-human interaction pattern.
Resources:
Blogs:
Research Papers: Search academic databases (e.g., arXiv, IEEE Xplore, ACM Digital Library) for
papers on "agent design patterns," "agent architectures," and specific patterns like "tool use" or
"multi-agent collaboration."
Conferences:
Weeks 30-32:
● Spend time reading blog posts and research papers on agent design patterns. Focus on
understanding the patterns listed above (tool use, RAG, planning, reflection, multi-agent
collaboration, agent-human interaction).
● LangChain Focus: Pay close attention to how LangChain implements and discusses these
patterns.
● For the agents you designed in the previous step (Conceptual Project in Step 6), consider
how you could incorporate these design patterns to improve their capabilities.
● Look for examples of these patterns in action. Many of the frameworks discussed in the
next step (LangChain, AutoGen, etc.) provide examples of how to implement these patterns.
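To make the tool use pattern concrete without committing to a framework, here is a skeletal sketch of the agent loop; call_llm is a hypothetical stand-in for whatever chat-model API you use, and the two tools are toy examples:

```python
import json

def call_llm(messages):
    """Hypothetical stand-in for a chat-model API call.

    A real implementation would call OpenAI, Anthropic, a local model,
    etc., and return either a final answer or a requested tool call.
    """
    raise NotImplementedError

# The agent's tools: plain functions the model is allowed to invoke.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "lookup": lambda args: {"paris": "France"}.get(args["city"].lower(), "unknown"),
}

def run_agent(question, max_steps=5):
    """Tool use pattern: loop until the model answers or we hit max_steps."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_llm(messages)      # model decides: final answer or tool call
        if reply.get("final_answer"):
            return reply["final_answer"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[reply["tool"]](reply["arguments"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "Gave up after too many steps."
```

The reflection and planning patterns reuse this same loop shape: instead of (or in addition to) tool results, the model's own critique or a step-by-step plan is appended to the message history before the next call.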
Resources:
AutoGen Courses:
Steps:
Weeks 33-34:
Focus on LangChain:
● LangGraph: If you're interested in more complex agent workflows and state management,
work through the LangGraph documentation and examples.
● AutoGen: Work through the AutoGen documentation and examples. Take a course on
"Building Multi-Agent Applications with AutoGen" if available. Build a project that involves
multiple agents collaborating to solve a problem.
● CrewAI: If you're interested in multi-agent systems, explore CrewAI.
Compare and Contrast: As you work with these frameworks, compare their features, strengths,
and weaknesses. Consider which framework is best suited for different types of agentic
applications.
Build a Multi-Agent Project (Optional): If you have time, try to build a project that involves
multiple agents working together. This could be a simulation, a game, or a system that automates a
complex task.
Learn:
Resources:
Books:
Courses:
Conferences & Journals:
● AAMAS, IJCAI, AAAI: Look for papers on multi-agent systems, reinforcement learning, and
agent safety.
● Journal of Artificial Intelligence Research (JAIR): https://www.jair.org/index.php/jair
● Artificial Intelligence (journal):
https://www.sciencedirect.com/journal/artificial-intelligence
● IEEE Transactions on Artificial Intelligence:
https://cis.ieee.org/publications/ieee-transactions-on-artificial-intelligence
Strategies:
This roadmap is a starting point. Adapt it to your own interests and goals. The most important
thing is to keep learning, experimenting, and building! Good luck on your agentic AI journey!