
2024/11/23 7:20 PM — 3 Methods to Run Llama 3.2 - Analytics Vidhya

3 Ways to Run Llama 3.2 on Your Device


Gourav Lohar
Last Updated : 01 Oct, 2024

Introduction

Meta recently launched Llama 3.2, its latest multimodal model. This version offers improved language understanding, more accurate answers, and higher-quality text generation. It can also analyze and interpret images, making it far more versatile in handling different input types. In this article, we'll dive into Llama 3.2, exploring three ways to run it and the features it brings to the table. From enhancing edge AI and vision tasks to offering lightweight models for on-device use, Llama 3.2 is a powerhouse.


Learning Objectives

- Understand the key advancements and features of Llama 3.2 in the AI landscape.
- Learn how to access and utilize Llama 3.2 through various platforms and methods.
- Explore the technical innovations, including vision models and lightweight deployments for edge devices.
- Gain insights into practical applications of Llama 3.2, including image processing and AI-enhanced communication.
- Discover how Llama Stack simplifies the development of applications using Llama models.

This article was published as a part of the Data Science Blogathon.

Table of contents

1. Introduction
2. What are Llama 3.2 Models?
3. Key Features and Advancements in Llama 3.2
4. In-Depth Technical Exploration
5. Performance Highlights and Benchmarks
6. Accessing and Utilizing Llama 3.2
7. Using Llama 3.2 with Ollama
8. Deploying Llama 3.2 via Groq Cloud
9. Running Llama 3.2 on Google Colab (llama-3.2-90b-text-preview)
10. Running Llama 3.2 on Google Colab (llama-3.2-11b-vision-preview)

What are Llama 3.2 Models?

Llama 3.2 is Meta's latest attempt at pushing the bounds of innovation in the ever-changing landscape of artificial intelligence. It is not an incremental update but a significant leap forward, with groundbreaking capabilities aiming to reshape how we interact with and use AI.

Llama 3.2 isn't about incrementally improving what exists; it expands the frontier of what open-source AI can do. Vision models, edge-computing capabilities, and a strong focus on safety usher Llama 3.2 into a new era of possible AI applications.

Meta AI describes Llama 3.2 as a collection of large language models (LLMs), pretrained and fine-tuned in 1B and 3B sizes for multilingual text, and in 11B and 90B sizes that accept text and image inputs and produce text output.


Also read: Getting Started With Meta Llama 3.2

Key Features and Advancements in Llama 3.2

Llama 3.2 brings a host of groundbreaking updates, transforming the landscape of AI. From powerful vision models to optimized performance on mobile devices, this release pushes the limits of what AI can achieve. Here's a look at the key features and advancements that set this version apart.

- Edge and Mobile Deployment: Llama 3.2 features a range of lightweight models aimed at deployment on edge devices and phones. The 1B and 3B parameter models offer impressive capabilities while staying efficient, letting developers build privacy-preserving, personal applications that run entirely on the client. This could finally put the power of AI directly at users' fingertips.
- Safety and Responsibility: Meta remains steadfast in its commitment to responsible AI development. Llama 3.2 incorporates safety enhancements and provides tools to help developers and researchers mitigate potential risks associated with AI deployment. This focus on safety is crucial as AI becomes increasingly integrated into our daily lives.
- Open-Source Ethos: Llama 3.2's open nature is an integral part of Meta's AI strategy. It enables cooperation, innovation, and democratization in AI, allowing researchers and developers worldwide to build on Llama 3.2 and thereby accelerate the pace of AI advancement.

In-Depth Technical Exploration

Llama 3.2's architecture introduces cutting-edge innovations, including enhanced vision models and optimized performance for edge computing. This section dives into the technical intricacies that make these advancements possible.

- Vision Models: Integrating vision capabilities into Llama 3.2 required a novel model architecture. The team used adapter weights to connect a pre-trained image encoder with the pre-trained language model. This lets the model process both text and image inputs, enabling a deeper understanding of the interplay between language and visual information.
- Llama Stack Distributions: Meta has also introduced Llama Stack distributions, which provide a standardized interface for customizing and deploying Llama models. This simplifies development, enabling developers to build agentic applications and leverage retrieval-augmented generation (RAG).
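As a rough mental model (not Meta's actual implementation), the adapter approach above can be pictured as a learned projection that maps image-encoder embeddings into the language model's embedding space. The function name and toy shapes below are purely illustrative:

```python
# Conceptual sketch of an adapter: a learned linear map that projects an
# image embedding into the LLM's token-embedding space. In the real model,
# adapter weights are trained while the encoder and LLM can stay frozen.
def project_image_embedding(adapter_weights, image_embedding):
    """Matrix-vector product: each output dim is a weighted sum of inputs."""
    return [
        sum(w * x for w, x in zip(row, image_embedding))
        for row in adapter_weights
    ]

# Toy example: a 2x2 adapter mapping a 2-d image embedding.
adapter = [[1.0, 0.0],
           [0.0, 2.0]]
print(project_image_embedding(adapter, [3.0, 4.0]))  # [3.0, 8.0]
```

In the production model this projection is high-dimensional and trained end-to-end on image-text pairs, but the core idea is the same: translate visual features into vectors the language model already understands.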


Performance Highlights and Benchmarks

Llama 3.2 performs well across a wide range of benchmarks, showing its capabilities in many domains. The vision models excel at vision-related tasks such as image understanding and visual reasoning, surpassing closed models such as Claude 3 Haiku on some benchmarks. The lighter models score highly in areas like instruction following, summarization, and tool use.

Let us now look at the benchmarks below:


Accessing and Utilizing Llama 3.2

Discover how to access and deploy Llama 3.2 models through downloads, partner platforms, or direct integration with Meta's AI ecosystem.

- Download: You can download the Llama 3.2 models directly from the official Llama website (llama.com) or from Hugging Face. This lets you experiment with the models on your own hardware and infrastructure.
- Partner Platforms: Meta has collaborated with many partner platforms, including major cloud providers and hardware manufacturers, to make Llama 3.2 readily available for development and deployment. These platforms let you access and use the models on their infrastructure and tooling.
- Meta AI: You can also try the models through Meta's smart assistant, Meta AI. This is a convenient way to experience their capabilities without setting up your own environment.
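For the Hugging Face route, the checkpoints follow a predictable naming scheme. The repo ids composed below match Meta's published checkpoints at the time of writing, but verify them on the Hub before use; the helper and the commented `transformers` usage are a hedged sketch, not part of the original article:

```python
def hf_model_id(size: str, instruct: bool = True) -> str:
    """Compose the Hugging Face repo id for a Llama 3.2 checkpoint.
    1B/3B are text-only; 11B/90B are the vision models."""
    if size not in {"1B", "3B", "11B", "90B"}:
        raise ValueError(f"unknown size: {size}")
    vision = "-Vision" if size in {"11B", "90B"} else ""
    suffix = "-Instruct" if instruct else ""
    return f"meta-llama/Llama-3.2-{size}{vision}{suffix}"

print(hf_model_id("1B"))  # meta-llama/Llama-3.2-1B-Instruct

# Hedged usage sketch (requires `transformers`, accepting Meta's license
# on the Hub, and enough memory for the chosen size):
# from transformers import pipeline
# pipe = pipeline("text-generation", model=hf_model_id("1B"))
# print(pipe("Explain Llama 3.2 in one sentence.")[0]["generated_text"])
```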

Using Llama 3.2 with Ollama

First, install Ollama from the official Ollama website. After installing, run this in your terminal:

```shell
ollama run llama3.2
# or
ollama run llama3.2:1b
```

The first command downloads the 3B model (the default tag); the second downloads the 1B model.
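Besides the interactive CLI, a running Ollama instance also exposes a local REST API (on port 11434 by default). The sketch below builds the request body for the `/api/generate` endpoint; the actual HTTP call is left commented out because it assumes the Ollama server is running locally:

```python
import json

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_generate_payload("llama3.2", "Why is the sky blue?")
print(json.dumps(payload))

# Hedged usage (assumes `ollama serve` is running on the default port):
# import requests
# r = requests.post("http://localhost:11434/api/generate", json=payload)
# print(r.json()["response"])
```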

Code for Ollama

Install these dependencies: `langchain`, `langchain-ollama`, `langchain_experimental`.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama.llms import OllamaLLM

def main():
    print("LLama 3.2 ChatBot")

    template = """Question: {question}

Answer: Let's think step by step."""
    prompt = ChatPromptTemplate.from_template(template)
    model = OllamaLLM(model="llama3.2")
    chain = prompt | model

    while True:
        question = input("Enter your question here (or type 'exit' to quit): ")
        if question.lower() == 'exit':
            break
        print("Thinking...")
        answer = chain.invoke({"question": question})
        print(f"Answer: {answer}")

if __name__ == "__main__":
    main()
```

Deploying Llama 3.2 via Groq Cloud

Learn how to leverage Groq Cloud to deploy Llama 3.2, accessing its powerful
capabilities easily and efficiently.

Visit the Groq Cloud console (console.groq.com) and generate an API key.
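Rather than pasting the key into a notebook cell, it is safer to read it from an environment variable (or from Colab's userdata, as the next section does). A minimal helper sketch, with the variable name `GROQ_API_KEY` chosen to match the code that follows:

```python
import os

def get_groq_key(env_var: str = "GROQ_API_KEY") -> str:
    """Read the Groq API key from the environment, failing loudly if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before creating the Groq client")
    return key

# Usage sketch (assumes the `groq` package is installed and the key is set):
# from groq import Groq
# client = Groq(api_key=get_groq_key())
```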

Running Llama 3.2 on Google Colab (llama-3.2-90b-text-preview)

Explore how to run Llama 3.2 on Google Colab, enabling you to experiment with this advanced model in a convenient cloud-based environment.

```python
!pip install groq

from google.colab import userdata
from groq import Groq

GROQ_API_KEY = userdata.get('GROQ_API_KEY')
client = Groq(api_key=GROQ_API_KEY)

completion = client.chat.completions.create(
    model="llama-3.2-90b-text-preview",
    messages=[
        {
            "role": "user",
            "content": "Why is MLOps required? Explain it to me like a 10-year-old child.",
        }
    ],
    temperature=1,
    max_tokens=1024,
    top_p=1,
    stream=True,
    stop=None,
)

for chunk in completion:
    print(chunk.choices[0].delta.content or "", end="")
```

Running Llama 3.2 on Google Colab (llama-3.2-11b-vision-preview)

```python
import base64

from google.colab import userdata
from groq import Groq

def image_to_base64(image_path):
    """Converts an image file to base64 encoding."""
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

# Ensure you have set the GROQ_API_KEY in your Colab userdata
client = Groq(api_key=userdata.get('GROQ_API_KEY'))

# Specify the path of your local image
image_path = "/content/2.jpg"

# Load and encode your image
image_base64 = image_to_base64(image_path)

# Make the API request
try:
    completion = client.chat.completions.create(
        model="llama-3.2-11b-vision-preview",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "what is this?"},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{image_base64}"
                        },
                    },
                ],
            }
        ],
        temperature=1,
        max_tokens=1024,
        top_p=1,
        stream=True,
        stop=None,
    )

    # Process and print the response
    for chunk in completion:
        if chunk.choices and chunk.choices[0].delta and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
except Exception as e:
    print(f"An error occurred: {e}")
```

Input Image

Output
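The base64 data-URL pattern used in the vision example is worth isolating: any local image can be embedded in a chat message this way. A self-contained helper sketch (the MIME type defaults to JPEG, which is an assumption about your input file):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a data URL suitable for an image_url field."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"

print(to_data_url(b"abc"))  # data:image/jpeg;base64,YWJj
```

Keep the MIME type consistent with the actual file format; pass `mime="image/png"` for PNG inputs.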


Conclusion

Meta's Llama 3.2 shows the potential of open-source collaboration and the relentless pursuit of AI advancement. Meta pushes the limits of language models and helps shape a future where AI is not only more powerful but also more accessible, responsible, and beneficial to all.

If you are looking for a Generative AI course online, explore the GenAI Pinnacle Program.

Key Takeaways

- Vision models in Llama 3.2 add image understanding and reasoning alongside text processing, opening new opportunities such as image captioning, visual question answering, and document understanding with charts or graphs.
- The lightweight models are optimized for edge devices and mobile phones, bringing AI capabilities directly to users while maintaining privacy.
- Llama Stack distributions streamline building and deploying applications with Llama models, making it easier for developers to leverage their capabilities.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.

Frequently Asked Questions

Q1. What are the main differences between Llama 3.2 and previous versions?
A. Llama 3.2 introduces vision models for image understanding, lightweight models for edge devices, and Llama Stack distributions for simplified development.

Q2. How can I access and use Llama 3.2?
A. You can download the models, use them on partner platforms, or try them through Meta AI.

Q3. What are some potential applications of the vision models in Llama 3.2?
A. Image captioning, visual question answering, document understanding with charts and graphs, and more.

Q4. What is Llama Stack, and how does it benefit developers?
A. Llama Stack is a standardized interface that makes it easier to develop and deploy Llama-based applications, particularly agentic apps.

Tags: Blogathon, Generative AI, Large Language Model, Llama 3.2, LLMs, Run Llama 3.2, Ways To Run Llama 3.2

Gourav Lohar

Hi, I'm Gourav, a Data Science enthusiast with a solid foundation in statistical analysis, machine learning, and data visualization. My journey into the world of data began with a curiosity to unravel insights from datasets.

