NVIDIA-Certified Associate

Generative AI LLMs

(NCA-GENL)

About This Certification

The NCA Generative AI LLMs certification is an entry-level credential that validates foundational knowledge of developing, integrating, and maintaining AI-driven applications using generative AI and large language models (LLMs) with NVIDIA solutions. The exam is online and proctored remotely, includes 50 questions, and has a 60-minute time limit.

Please carefully review NVIDIA's examination policy before scheduling your exam.

If you have any questions, please contact us here.

Certification Exam Details

Duration: 1 hour

Price: $125 

Certification level: Associate

Subject: Generative AI and large language models

Number of questions: 50-60 multiple-choice

Prerequisites: A basic understanding of generative AI and large language models

Language: English 

Validity: This certification is valid for two years from issuance. Recertification may be achieved by retaking the exam.

Credentials: Upon passing the exam, participants will receive a digital badge and optional certificate indicating the certification level and topic.

Exam Preparation

Topics Covered in the Exam

Topics covered in the exam include:

  • Fundamentals of machine learning and neural networks
  • Prompt engineering
  • Alignment
  • Data analysis and visualization
  • Experimentation
  • Data preprocessing and feature engineering
  • Experiment design
  • Software development
  • Python libraries for LLMs
  • LLM integration and deployment

Candidate Audiences

  • AI DevOps engineers
  • AI strategists
  • Applied data scientists
  • Applied data research engineers
  • Applied deep learning research scientists
  • Cloud solution architects
  • Data scientists
  • Deep learning performance engineers
  • Generative AI specialists
  • LLM specialists and researchers
  • Machine learning engineers
  • Senior researchers
  • Software engineers
  • Solutions architects

Exam Study Guide

Review study guide

Exam Blueprint

Please review the table below. It’s organized by topic and weight to indicate how much of the exam is focused on each subject. Topics are mapped to NVIDIA Training courses and workshops that cover those subjects and that you can use to prepare for the exam.

Content Breakdown

Core Machine Learning and AI Knowledge | 30%
Software Development | 24%
Experimentation | 22%
Data Analysis and Visualization | 14%
Trustworthy AI | 10%

Recommended Training
Type of course | Duration | Cost

You can take one of these courses:
Getting Started With Deep Learning
Self-paced | 8 hours | $90
Fundamentals of Deep Learning
Workshop | 8 hours | $500

You can take one of these courses:
Accelerating End-to-End Data Science Workflows
Self-paced | 6 hours | $90
Fundamentals of Accelerated Data Science
Workshop | 8 hours | $500

You can take one of these courses:
Introduction to Transformer-Based Natural Language Processing
Self-paced | 6 hours | $30
Building Transformer-Based Natural Language Processing Applications
Workshop | 8 hours | $500

You can take one of these courses:
Building LLM Applications with Prompt Engineering
Self-paced | 8 hours | $90
Building LLM Applications with Prompt Engineering
Workshop | 8 hours | $500

You can take one of these courses:
Rapid Application Development With Large Language Models (LLMs)
Self-paced | 8 hours | $90
Rapid Application Development With Large Language Models (LLMs)
Workshop | 8 hours | $500

Review These Additional Materials


You can take one of these courses:

Getting Started With Deep Learning
Fundamentals of Deep Learning

Skills covered in these courses:

Core Machine Learning and AI Knowledge

  • Understand the fundamental techniques and tools required to train a deep learning model.

Software Development

  • Gain experience with common deep learning data types and model architectures. 
  • Leverage transfer learning between models to achieve efficient results with less data and computation. 
  • Take on your own project with a modern deep learning framework.
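For orientation only, here is a minimal, hypothetical transfer-learning sketch in PyTorch/torchvision (not course material); the 5-class head and the random batch are placeholders for a real dataset and training loop.

```python
# Hypothetical transfer-learning sketch (PyTorch/torchvision), not course code.
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and freeze its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a hypothetical 5-class task;
# only this new layer will be trained.
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch; a real project would
# iterate over a DataLoader built from its own dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Training only the new head is what lets transfer learning reach good results with less data and computation.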

Experimentation

  • Build confidence to take on your own project with a modern deep learning framework.
  • Leverage transfer learning between models to achieve efficient results with less data and computation.

Data Analysis and Visualization

  • Enhance datasets through data augmentation to improve model accuracy.
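To make the data-augmentation point concrete, here is a small, hypothetical torchvision transform pipeline (illustrative only; the courses use their own datasets and tooling).

```python
# Hypothetical data-augmentation sketch with torchvision transforms.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),                  # random scale and crop
    transforms.RandomHorizontalFlip(),                  # mirror half the images
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],    # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Attached to a Dataset/DataLoader, this pipeline shows the model a slightly
# different variant of each image every epoch, which typically improves accuracy.
```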

You can take one of these courses:

Accelerating End-to-End Data Science Workflows
Fundamentals of Accelerated Data Science

Skills covered in these courses:

Core Machine Learning and AI Knowledge

  • Utilize a wide variety of machine learning algorithms, including XGBoost, for different data science problems.
  • Learn and apply powerful graph algorithms to analyze complex networks with NetworkX and cuGraph.

Data Analysis and Visualization

Understand GPU-accelerated data manipulation:​

  • Ingest and prepare several datasets (some larger-than-memory) for use in multiple machine learning exercises. 
  • Read data directly to single and multiple GPUs with cuDF and Dask cuDF. 
  • Prepare information for machine learning tasks on the GPU with cuDF. 
  • Apply several essential machine learning techniques to prepare data. 
  • Use supervised and unsupervised GPU-accelerated algorithms with cuML. 
  • Train XGBoost models with Dask on multiple GPUs. 
  • Create and analyze graph data on the GPU with cuGraph. 
  • Use NVIDIA RAPIDS™ to integrate multiple massive datasets and perform analysis. 
  • Implement GPU-accelerated data preparation and feature extraction using cuDF and Apache Arrow data frames. 
  • Apply a broad spectrum of GPU-accelerated machine learning tasks using XGBoost and a variety of cuML algorithms. 
  • Execute GPU-accelerated graph analysis with cuGraph, achieving massive-scale analytics in a fraction of the time required on CPUs.
  • Rapidly achieve massive-scale graph analytics using cuGraph routines.
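As a compact illustration of the cuDF and cuML workflow listed above (illustrative only, with a synthetic dataset standing in for the course data; the multi-GPU Dask and XGBoost exercises are not reproduced here):

```python
# Hypothetical single-GPU RAPIDS sketch with a synthetic dataset.
import numpy as np
import cudf
from cuml.ensemble import RandomForestClassifier
from cuml.model_selection import train_test_split

rng = np.random.default_rng(0)
df = cudf.DataFrame({
    "feature_a": rng.normal(size=10_000),
    "feature_b": rng.normal(size=10_000),
    "label": rng.integers(0, 2, size=10_000),
})

# Prepare features on the GPU with cuDF, then fit a cuML model.
X = df[["feature_a", "feature_b"]].astype("float32")
y = df["label"].astype("int32")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```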

Experimentation

  • Learn and apply powerful graph algorithms to analyze complex networks with NetworkX and cuGraph.
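A minimal sketch of the NetworkX-versus-cuGraph idea, assuming a toy edge list; the src/dst column names are arbitrary placeholders.

```python
# Hypothetical PageRank comparison: NetworkX on the CPU, cuGraph on the GPU.
import networkx as nx
import cudf
import cugraph

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # toy edge list

# CPU baseline.
pr_cpu = nx.pagerank(nx.DiGraph(edges))

# GPU version built from a cuDF edge list.
edge_df = cudf.DataFrame({"src": [s for s, _ in edges],
                          "dst": [d for _, d in edges]})
G = cugraph.Graph(directed=True)
G.from_cudf_edgelist(edge_df, source="src", destination="dst")
pr_gpu = cugraph.pagerank(G)  # cuDF DataFrame of per-vertex scores

print(pr_cpu)
print(pr_gpu.head())
```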

Software Development

  • Deploy machine learning models on a Triton Inference Server to deliver optimal performance.
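For orientation, here is a hypothetical client-side request to a model already loaded on a Triton Inference Server; the URL, the model name my_classifier, and the tensor names input__0/output__0 are placeholders that depend entirely on your model configuration.

```python
# Hypothetical inference request against a model already served by Triton.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("input__0", list(batch.shape), "FP32")]
inputs[0].set_data_from_numpy(batch)
outputs = [httpclient.InferRequestedOutput("output__0")]

response = client.infer(model_name="my_classifier", inputs=inputs, outputs=outputs)
print(response.as_numpy("output__0").shape)
```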

You can take one of these courses:

Introduction to Transformer-Based Natural Language Processing

Building Transformer-Based Natural Language Processing Applications

Skills covered in these courses:

Core Machine Learning and AI Knowledge

  • Describe how transformers are used as the basic building blocks of modern LLMs for natural language processing (NLP) applications.
  • Understand how self-supervision improves upon the transformer architecture in BERT, Megatron, and other LLM variants for superior NLP results.
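As a quick, hypothetical illustration of the self-supervised masked-language-model objective behind BERT (using the Hugging Face transformers library purely as an example):

```python
# Hypothetical illustration of BERT's masked-language-model objective.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was pretrained, self-supervised, to recover masked tokens from context;
# that objective is what makes it a strong base for downstream NLP tasks.
for candidate in fill_mask("GPUs dramatically [MASK] the training of deep neural networks."):
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")
```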

Software Development

  • Implement transformer-based models for different NLP applications.
  • Develop solutions for text classification, NER, author attribution, and question answering using LLMs.
  • Manage inference challenges and deploy refined models for live applications.

Experimentation

  • Experiment with transformer-based models for various NLP tasks.
  • Test and compare model performance on question answering tasks.
  • Leverage pretrained, modern LLMs to solve various NLP tasks such as token classification, text classification, summarization, and question answering.

Data Analysis and Visualization

  • Use transformer-based models for text classification.
  • Apply LLMs for named-entity recognition (NER).
  • Utilize transformer models for author attribution.
  • Leverage pretrained, modern LLMs to solve multiple NLP tasks such as text classification, named-entity recognition (NER), and question answering.
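A hedged sketch of the task types listed above using Hugging Face pipelines; the default checkpoints they download are stand-ins for the fine-tuned models you would build in practice.

```python
# Hypothetical sketch of text classification, NER, and question answering.
from transformers import pipeline

classifier = pipeline("text-classification")
ner = pipeline("token-classification", aggregation_strategy="simple")
qa = pipeline("question-answering")

print(classifier("The new inference stack is impressively fast."))
print(ner("NVIDIA is headquartered in Santa Clara, California."))
print(qa(question="Where is NVIDIA headquartered?",
         context="NVIDIA is headquartered in Santa Clara, California."))
```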

Building LLM Applications with Prompt Engineering

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Understand how to apply iterative prompt engineering best practices to create LLM-based applications for various language-related tasks.

Data Analysis and Visualization

  • Become proficient in using LangChain to organize and compose LLM workflows.

Software Development

  • Write application code to harness LLMs for generative tasks, document analysis, chatbot applications, and more.

Experimentation

  • Become proficient in using LangChain to organize and compose LLM workflows.
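Since several of the bullets above mention LangChain, here is a minimal, hypothetical chain that composes a prompt template, a chat model, and an output parser. ChatNVIDIA (from the langchain-nvidia-ai-endpoints package, which expects an NVIDIA API key in the environment) and the model id are illustrative, and any LangChain-compatible chat model could be substituted.

```python
# Hypothetical LangChain chain: prompt template -> chat model -> string output.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_nvidia_ai_endpoints import ChatNVIDIA

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical assistant. Answer in two sentences."),
    ("human", "Summarize the following release notes:\n\n{notes}"),
])
llm = ChatNVIDIA(model="meta/llama-3.1-8b-instruct")  # placeholder model id

chain = prompt | llm | StrOutputParser()
print(chain.invoke({"notes": "Added FP8 support and fixed a memory leak in the tokenizer."}))
```

Iterative prompt engineering then amounts to editing the messages in the template and re-running the same chain.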

Rapid Application Development With Large Language Models (LLMs)

Skills covered in this course:

Core ML and AI Knowledge

  • Use encoder models for tasks like semantic analysis, embedding, question-answering, and zero-shot classification.
  • Work with conditioned decoder-style models to take in and generate interesting data formats, styles, and modalities.
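To ground the encoder-model bullets, a small hypothetical sketch of zero-shot classification and semantic embeddings; the default pipeline checkpoint and the all-MiniLM-L6-v2 model (via the sentence-transformers package) are arbitrary example choices, not course requirements.

```python
# Hypothetical encoder-model sketch: zero-shot classification and embeddings.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

zero_shot = pipeline("zero-shot-classification")
print(zero_shot("The GPU ran out of memory during training.",
                candidate_labels=["hardware issue", "billing question", "feature request"]))

embedder = SentenceTransformer("all-MiniLM-L6-v2")
emb = embedder.encode(["reduce inference latency", "make the model respond faster"])
print("semantic similarity:", util.cos_sim(emb[0], emb[1]).item())
```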

Data Analysis

  • Explore the use of LangChain and LangGraph for orchestrating data pipelines and environment-enabled agents.

Software Development

  • Kickstart and guide generative AI solutions for safe, effective, and scalable natural data tasks.

Experimentation

  • Find and pull models from the Hugging Face model repository and experiment with the associated transformers API.
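A minimal, hypothetical example of pulling a checkpoint from the Hugging Face Hub and generating text with the transformers API; gpt2 is just a small stand-in model.

```python
# Hypothetical example of downloading a Hub checkpoint and generating text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # stand-in for whichever Hub checkpoint you want to try
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Large language models are", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```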

Trustworthy AI

  • Kickstart and guide generative AI solutions for safe, effective, and scalable natural data tasks.