LLVM
Documentation for the LLVM System at SVN head
• LLVM Design
• LLVM Publications
• LLVM User Guides
• General LLVM Programming Documentation
• LLVM Subsystem Documentation
• LLVM Mailing Lists
Written by The LLVM Team
• The LLVM Getting Started Guide - Discusses how to get up and running quickly with the LLVM
infrastructure, covering everything from unpacking and compiling the distribution to running some of
the tools.
• Getting Started with the LLVM System using Microsoft Visual Studio - An addendum to the main
Getting Started guide for those using Visual Studio on Windows.
• LLVM Tutorial - A tutorial-style walkthrough of using LLVM for a custom language and the facilities
LLVM offers.
• Developer Policy - The LLVM project's policy towards developers and their contributions.
• LLVM Command Guide - A reference manual for the LLVM command line utilities ("man" pages for
LLVM tools).
Current tools: llvm-ar, llvm-as, llvm-dis, llvm-extract, llvm-ld, llvm-link, llvm-nm, llvm-prof,
llvm-ranlib, opt, llc, lli, llvmc, llvm-gcc, llvm-g++, bugpoint, llvm-bcanalyzer.
• LLVM's Analysis and Transform Passes - A list of optimizations and analyses implemented in
LLVM.
• Frequently Asked Questions - A list of common questions and problems and their solutions.
• Release notes for the current release - This describes new features, known bugs, and other limitations.
• How to Submit A Bug Report - Instructions for properly submitting information about any bugs you
run into in the LLVM system.
• LLVM Testing Infrastructure Guide - A reference manual for using the LLVM testing infrastructure.
• How to build the Ada/C/C++/Fortran front-ends - Instructions for building gcc front-ends from
source.
• Packaging guide - Advice on packaging LLVM into a distribution.
• The LLVM Lexicon - Definition of acronyms, terms and concepts used in LLVM.
• You can probably find help on the unofficial LLVM IRC channel. We are often on irc.oftc.net in the
#llvm channel. If you are using the Mozilla browser and have ChatZilla installed, you can join #llvm
on irc.oftc.net directly.
• LLVM Language Reference Manual - Defines the LLVM intermediate representation and the
assembly form of the different nodes.
• The LLVM Programmers Manual - Introduction to the general layout of the LLVM sourcebase,
important classes and APIs, and some tips & tricks.
• LLVM Project Guide - How-to guide and templates for new projects that use the LLVM
infrastructure. The templates (directory organization, Makefiles, and test tree) allow the project code
to be located outside (or inside) the llvm/ tree, while using LLVM header files and libraries.
• LLVM Makefile Guide - Describes how the LLVM makefiles work and how to use them.
• CommandLine library Reference Manual - Provides information on using the command line parsing
library.
• LLVM Coding standards - Details the LLVM coding standards and provides useful information on
writing efficient C++ code.
• Extending LLVM - Look here to see how to add instructions and intrinsics to LLVM.
• Using LLVM Libraries - Look here to understand how to use the libraries produced when LLVM is
compiled.
• How To Release LLVM To The Public - This is a guide to preparing LLVM releases. Most
developers can ignore it.
• Doxygen generated documentation (classes) (tarball)
• ViewVC Repository Browser
• Writing an LLVM Pass - Information on how to write LLVM transformations and analyses.
• Writing an LLVM Backend - Information on how to write LLVM backends for machine targets.
• The LLVM Target-Independent Code Generator - The design and implementation of the LLVM code
generator. Useful if you are working on retargetting LLVM to a new architecture, designing a new
codegen pass, or enhancing existing components.
• TableGen Fundamentals - Describes the TableGen tool, which is used heavily by the LLVM code
generator.
• Alias Analysis in LLVM - Information on how to write a new alias analysis implementation or how to
use existing analyses.
• Accurate Garbage Collection with LLVM - The interfaces source-language compilers should use for
compiling GC'd programs.
• Source Level Debugging with LLVM - This document describes the design and philosophy behind
the LLVM source-level debugger.
• Zero Cost Exception handling in LLVM - This document describes the design and implementation of
exception handling in LLVM.
• Bugpoint - Description and usage information for bugpoint, the automatic bug finder and test-case reducer.
• Compiler Driver (llvmc) Tutorial - This document is a tutorial introduction to the usage and
configuration of the LLVM compiler driver tool, llvmc.
• Compiler Driver (llvmc) Reference - This document describes the design and configuration of llvmc
in more detail.
• LLVM Bitcode File Format - This describes the file format and encoding used for LLVM "bc" files.
• System Library - This document describes the LLVM System Library (lib/System) and how to
keep LLVM source code portable.
• Link Time Optimization - This document describes the interface between the LLVM intermodular
optimizer and the linker, and its design.
• The LLVM gold plugin - How to build your programs with link-time optimization on Linux.
• The GDB JIT interface - How to debug JITed code with GDB.
• The LLVM Announcements List: This is a low volume list that provides important announcements
regarding LLVM. It gets email about once a month.
• The Developer's List: This list is for people who want to be included in technical discussions of
LLVM. People post to this list when they have questions about writing code for or using the LLVM
tools. It is relatively low volume.
• The Bugs & Patches Archive: This list gets emailed every time a bug is opened and closed, and when
people submit patches to be included in LLVM. It is higher volume than the LLVMdev list.
• The Commits Archive: This list contains all commit messages that are made when LLVM developers
commit code changes to the repository. It is useful for those who want to stay on the bleeding edge of
LLVM development. This list is very high volume.
• The Test Results Archive: A message is automatically sent to this list by every active nightly tester
when it completes. As such, this list gets email several times each day, making it a high volume list.
LLVM Language Reference Manual
1. Abstract
2. Introduction
3. Identifiers
4. High Level Structure
1. Module Structure
2. Linkage Types
1. 'private' Linkage
2. 'linker_private' Linkage
3. 'internal' Linkage
4. 'available_externally' Linkage
5. 'linkonce' Linkage
6. 'common' Linkage
7. 'weak' Linkage
8. 'appending' Linkage
9. 'extern_weak' Linkage
10. 'linkonce_odr' Linkage
11. 'weak_odr' Linkage
12. 'externally visible' Linkage
13. 'dllimport' Linkage
14. 'dllexport' Linkage
3. Calling Conventions
4. Named Types
5. Global Variables
6. Functions
7. Aliases
8. Named Metadata
9. Parameter Attributes
10. Function Attributes
11. Garbage Collector Names
12. Module-Level Inline Assembly
13. Data Layout
14. Pointer Aliasing Rules
5. Type System
1. Type Classifications
2. Primitive Types
1. Integer Type
2. Floating Point Types
3. Void Type
4. Label Type
5. Metadata Type
3. Derived Types
1. Aggregate Types
1. Array Type
2. Structure Type
3. Packed Structure Type
4. Union Type
5. Vector Type
2. Function Type
3. Pointer Type
4. Opaque Type
4. Type Up-references
6. Constants
1. Simple Constants
2. Complex Constants
3. Global Variable and Function Addresses
4. Undefined Values
5. Addresses of Basic Blocks
6. Constant Expressions
7. Other Values
1. Inline Assembler Expressions
2. Metadata Nodes and Metadata Strings
8. Intrinsic Global Variables
1. The 'llvm.used' Global Variable
2. The 'llvm.compiler.used' Global Variable
3. The 'llvm.global_ctors' Global Variable
4. The 'llvm.global_dtors' Global Variable
9. Instruction Reference
1. Terminator Instructions
1. 'ret' Instruction
2. 'br' Instruction
3. 'switch' Instruction
4. 'indirectbr' Instruction
5. 'invoke' Instruction
6. 'unwind' Instruction
7. 'unreachable' Instruction
2. Binary Operations
1. 'add' Instruction
2. 'fadd' Instruction
3. 'sub' Instruction
4. 'fsub' Instruction
5. 'mul' Instruction
6. 'fmul' Instruction
7. 'udiv' Instruction
8. 'sdiv' Instruction
9. 'fdiv' Instruction
10. 'urem' Instruction
11. 'srem' Instruction
12. 'frem' Instruction
3. Bitwise Binary Operations
1. 'shl' Instruction
2. 'lshr' Instruction
3. 'ashr' Instruction
4. 'and' Instruction
5. 'or' Instruction
6. 'xor' Instruction
4. Vector Operations
1. 'extractelement' Instruction
2. 'insertelement' Instruction
3. 'shufflevector' Instruction
5. Aggregate Operations
1. 'extractvalue' Instruction
2. 'insertvalue' Instruction
6. Memory Access and Addressing Operations
1. 'alloca' Instruction
2. 'load' Instruction
3. 'store' Instruction
4. 'getelementptr' Instruction
7. Conversion Operations
1. 'trunc .. to' Instruction
2. 'zext .. to' Instruction
3. 'sext .. to' Instruction
4. 'fptrunc .. to' Instruction
5. 'fpext .. to' Instruction
6. 'fptoui .. to' Instruction
7. 'fptosi .. to' Instruction
8. 'uitofp .. to' Instruction
9. 'sitofp .. to' Instruction
10. 'ptrtoint .. to' Instruction
11. 'inttoptr .. to' Instruction
12. 'bitcast .. to' Instruction
8. Other Operations
1. 'icmp' Instruction
2. 'fcmp' Instruction
3. 'phi' Instruction
4. 'select' Instruction
5. 'call' Instruction
6. 'va_arg' Instruction
10. Intrinsic Functions
1. Variable Argument Handling Intrinsics
1. 'llvm.va_start' Intrinsic
2. 'llvm.va_end' Intrinsic
3. 'llvm.va_copy' Intrinsic
2. Accurate Garbage Collection Intrinsics
1. 'llvm.gcroot' Intrinsic
2. 'llvm.gcread' Intrinsic
3. 'llvm.gcwrite' Intrinsic
3. Code Generator Intrinsics
1. 'llvm.returnaddress' Intrinsic
2. 'llvm.frameaddress' Intrinsic
3. 'llvm.stacksave' Intrinsic
4. 'llvm.stackrestore' Intrinsic
5. 'llvm.prefetch' Intrinsic
6. 'llvm.pcmarker' Intrinsic
7. 'llvm.readcyclecounter' Intrinsic
4. Standard C Library Intrinsics
1. 'llvm.memcpy.*' Intrinsic
2. 'llvm.memmove.*' Intrinsic
3. 'llvm.memset.*' Intrinsic
4. 'llvm.sqrt.*' Intrinsic
5. 'llvm.powi.*' Intrinsic
6. 'llvm.sin.*' Intrinsic
7. 'llvm.cos.*' Intrinsic
8. 'llvm.pow.*' Intrinsic
5. Bit Manipulation Intrinsics
1. 'llvm.bswap.*' Intrinsics
2. 'llvm.ctpop.*' Intrinsic
3. 'llvm.ctlz.*' Intrinsic
4. 'llvm.cttz.*' Intrinsic
6. Arithmetic with Overflow Intrinsics
1. 'llvm.sadd.with.overflow.*' Intrinsics
2. 'llvm.uadd.with.overflow.*' Intrinsics
3. 'llvm.ssub.with.overflow.*' Intrinsics
4. 'llvm.usub.with.overflow.*' Intrinsics
5. 'llvm.smul.with.overflow.*' Intrinsics
6. 'llvm.umul.with.overflow.*' Intrinsics
7. Debugger intrinsics
8. Exception Handling intrinsics
9. Trampoline Intrinsic
1. 'llvm.init.trampoline' Intrinsic
10. Atomic intrinsics
1. llvm.memory_barrier
2. llvm.atomic.cmp.swap
3. llvm.atomic.swap
4. llvm.atomic.load.add
5. llvm.atomic.load.sub
6. llvm.atomic.load.and
7. llvm.atomic.load.nand
8. llvm.atomic.load.or
9. llvm.atomic.load.xor
10. llvm.atomic.load.max
11. llvm.atomic.load.min
12. llvm.atomic.load.umax
13. llvm.atomic.load.umin
11. Memory Use Markers
1. llvm.lifetime.start
2. llvm.lifetime.end
3. llvm.invariant.start
4. llvm.invariant.end
12. General intrinsics
1. 'llvm.var.annotation' Intrinsic
2. 'llvm.annotation.*' Intrinsic
3. 'llvm.trap' Intrinsic
4. 'llvm.stackprotector' Intrinsic
5. 'llvm.objectsize' Intrinsic
Abstract
This document is a reference manual for the LLVM assembly language. LLVM is a Static Single Assignment
(SSA) based representation that provides type safety, low-level operations, flexibility, and the capability of
representing 'all' high-level languages cleanly. It is the common code representation used throughout all
phases of the LLVM compilation strategy.
Introduction
The LLVM code representation is designed to be used in three different forms: as an in-memory compiler IR,
as an on-disk bitcode representation (suitable for fast loading by a Just-In-Time compiler), and as a human
readable assembly language representation. This allows LLVM to provide a powerful intermediate
representation for efficient compiler transformations and analysis, while providing a natural means to debug
and visualize the transformations. The three different forms of LLVM are all equivalent. This document
describes the human readable representation and notation.
The LLVM representation aims to be light-weight and low-level while being expressive, typed, and extensible
at the same time. It aims to be a "universal IR" of sorts, by being at a low enough level that high-level ideas
may be cleanly mapped to it (similar to how microprocessors are "universal IR's", allowing many source
languages to be mapped to them). By providing type information, LLVM can be used as the target of
optimizations: for example, through pointer analysis, it can be proven that a C automatic variable is never
accessed outside of the current function, allowing it to be promoted to a simple SSA value instead of a
memory location.
Well-Formedness
It is important to note that this document describes 'well formed' LLVM assembly language. There is a
difference between what the parser accepts and what is considered 'well formed'. For example, the following
instruction is syntactically okay, but not well formed:
%x = add i32 1, %x
because the definition of %x does not dominate all of its uses. The LLVM infrastructure provides a
verification pass that may be used to verify that an LLVM module is well formed. This pass is automatically
run by the parser after parsing input assembly and by the optimizer before it outputs bitcode. The violations
pointed out by the verifier pass indicate bugs in transformation passes or input to the parser.
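For contrast, a well formed counterpart (a minimal sketch, wrapped in a hypothetical function @f) places the definition of %x so that it dominates every use:

```llvm
; well formed: the definition of %x dominates its use
define i32 @f() {
entry:
  %x = add i32 1, 1
  %y = add i32 %x, %x
  ret i32 %y
}
```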
Identifiers
LLVM identifiers come in two basic types: global and local. Global identifiers (functions, global variables)
begin with the '@' character. Local identifiers (register names, types) begin with the '%' character.
Additionally, there are three different formats for identifiers, for different purposes:
1. Named values are represented as a string of characters with their prefix. For example, %foo,
@DivisionByZero, %a.really.long.identifier. The actual regular expression used is
'[%@][a-zA-Z$._][a-zA-Z$._0-9]*'. Identifiers which require other characters in their
names can be surrounded with quotes. Special characters may be escaped using "\xx" where xx is
the ASCII code for the character in hexadecimal. In this way, any character can be used in a name
value, even quotes themselves.
2. Unnamed values are represented as an unsigned numeric value with their prefix. For example, %12,
@2, %44.
3. Constants, which are described in a section about constants, below.
LLVM requires that values start with a prefix for two reasons: compilers don't need to worry about name
clashes with reserved words, and the set of reserved words may be expanded in the future without penalty.
Additionally, unnamed identifiers allow a compiler to quickly come up with a temporary variable without
having to avoid symbol table conflicts.
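As a hypothetical illustration (the names here are invented), the first two identifier forms look like:

```llvm
@"hello world" = global i32 0          ; quoted global name containing a space
define i32 @f(i32 %a.really.long.identifier) {
entry:
  %0 = add i32 %a.really.long.identifier, 1   ; unnamed temporary %0
  ret i32 %0
}
```

The literal 1 in the add above is an example of the third form, a constant.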
Reserved words in LLVM are very similar to reserved words in other languages. There are keywords for
different opcodes ('add', 'bitcast', 'ret', etc...), for primitive type names ('void', 'i32', etc...), and others.
These reserved words cannot conflict with variable names, because none of them start with a prefix character
('%' or '@').
Here is an example of LLVM code to multiply the integer variable '%X' by 8. The easy way:

%result = mul i32 %X, 8

After strength reduction:

%result = shl i32 %X, 3

And the hard way:

%0 = add i32 %X, %X           ; yields {i32}:%0
%1 = add i32 %0, %0           ; yields {i32}:%1
%result = add i32 %1, %1      ; yields {i32}:%result
This last way of multiplying %X by 8 illustrates several important lexical features of LLVM:
1. Comments are delimited with a ';' and go until the end of line.
2. Unnamed temporaries are created when the result of a computation is not assigned to a named value.
3. Unnamed temporaries are numbered sequentially.
It also shows a convention that we follow in this document. When demonstrating instructions, we will follow
an instruction with a comment that defines the type and name of value produced. Comments are shown in
italic text.
Module Structure
LLVM programs are composed of "Module"s, each of which is a translation unit of the input programs. Each
module consists of functions, global variables, and symbol table entries. Modules may be combined together
with the LLVM linker, which merges function (and global variable) definitions, resolves forward declarations,
and merges symbol table entries. Here is an example of the "hello world" module:

; Declare the string constant as a global constant.
@.LC0 = internal constant [13 x i8] c"hello world\0A\00"

; External declaration of the puts function
declare i32 @puts(i8*)

; Definition of main function
define i32 @main() {
entry:
  ; Convert [13 x i8]* to i8*...
  %cast210 = getelementptr [13 x i8]* @.LC0, i64 0, i64 0

  ; Call puts function to write out the string to stdout.
  call i32 @puts(i8* %cast210)
  ret i32 0
}
; Named metadata
!1 = metadata !{i32 41}
!foo = !{!1, null}
This example is made up of a global variable named ".LC0", an external declaration of the "puts" function,
a function definition for "main" and named metadata "foo".
In general, a module is made up of a list of global values, where both functions and global variables are global
values. Global values are represented by a pointer to a memory location (in this case, a pointer to an array of
char, and a pointer to a function), and have one of the following linkage types.
Linkage Types
All Global Variables and Functions have one of the following types of linkage:
private
Global values with private linkage are only directly accessible by objects in the current module. In
particular, linking code into a module with a private global value may cause the private symbol to be
renamed as necessary to avoid collisions. Because the symbol is private to the module, all references
can be updated. The symbol does not show up in any symbol table in the object file.
linker_private
Similar to private, but the symbol is passed through the assembler and removed by the linker after
evaluation. Note that (unlike private symbols) linker_private symbols are subject to coalescing by the
linker: weak symbols get merged and redefinitions are rejected. However, unlike normal strong
symbols, they are removed by the linker from the final linked image (executable or dynamic library).
internal
Similar to private, but the value shows as a local symbol (STB_LOCAL in the case of ELF) in the
object file. This corresponds to the notion of the 'static' keyword in C.
available_externally
Globals with "available_externally" linkage are never emitted into the object file
corresponding to the LLVM module. They exist to allow inlining and other optimizations to take
place given knowledge of the definition of the global, which is known to be somewhere outside the
module. Globals with available_externally linkage are allowed to be discarded at will, and
are otherwise the same as linkonce_odr. This linkage type is only allowed on definitions, not
declarations.
linkonce
Globals with "linkonce" linkage are merged with other globals of the same name when linkage
occurs. This can be used to implement some forms of inline functions, templates, or other code which
must be generated in each translation unit that uses it, but where the body may be overridden with a
more definitive definition later. Unreferenced linkonce globals are allowed to be discarded. Note
that linkonce linkage does not actually allow the optimizer to inline the body of this function into
callers because it doesn't know if this definition of the function is the definitive definition within the
program or whether it will be overridden by a stronger definition. To enable inlining and other
optimizations, use "linkonce_odr" linkage.
weak
"weak" linkage has the same merging semantics as linkonce linkage, except that unreferenced
globals with weak linkage may not be discarded. This is used for globals that are declared "weak" in
C source code.
common
"common" linkage is most similar to "weak" linkage, but they are used for tentative definitions in C,
such as "int X;" at global scope. Symbols with "common" linkage are merged in the same way as
weak symbols, and they may not be deleted if unreferenced. common symbols may not have an
explicit section, must have a zero initializer, and may not be marked 'constant'. Functions and
aliases may not have common linkage.
appending
"appending" linkage may only be applied to global variables of pointer to array type. When two
global variables with appending linkage are linked together, the two global arrays are appended
together. This is the type-safe LLVM equivalent of having the system linker append together
"sections" with identical names when .o files are linked.
extern_weak
The semantics of this linkage follow the ELF object file model: the symbol is weak until linked; if not
linked, the symbol becomes null instead of being an undefined reference.
linkonce_odr
weak_odr
Some languages allow differing globals to be merged, such as two functions with different semantics.
Other languages, such as C++, ensure that only equivalent globals are ever merged (the "one
definition rule" - "ODR"). Such languages can use the linkonce_odr and weak_odr linkage
types to indicate that the global will only be merged with equivalent globals. These linkage types are
otherwise the same as their non-odr versions.
externally visible
If none of the above identifiers are used, the global is externally visible, meaning that it participates in
linkage and can be used to resolve external symbol references.
The next two types of linkage are specific to the Microsoft Windows platform. They are designed to
support importing (exporting) symbols from (to) DLLs (Dynamic Link Libraries).
dllimport
"dllimport" linkage causes the compiler to reference a function or variable via a global pointer to
a pointer that is set up by the DLL exporting the symbol. On Microsoft Windows targets, the pointer
name is formed by combining __imp_ and the function or variable name.
dllexport
"dllexport" linkage causes the compiler to provide a global pointer to a pointer in a DLL, so that
it can be referenced with the dllimport attribute. On Microsoft Windows targets, the pointer name
is formed by combining __imp_ and the function or variable name.
For example, since the ".LC0" variable is defined to be internal, if another module defined a ".LC0" variable
and was linked with this one, one of the two would be renamed, preventing a collision. Since "main" and
"puts" are external (i.e., lacking any linkage declarations), they are accessible outside of the current module.
It is illegal for a function declaration to have any linkage type other than "externally visible", dllimport or
extern_weak.
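To make the common cases concrete, a hypothetical module mixing several linkage types might look like this (all names are invented):

```llvm
@.str = private constant [4 x i8] c"foo\00"    ; renamed if it would collide
@counter = internal global i32 0               ; like 'static' in C
@template_fn = linkonce_odr global i32 42      ; mergeable with equivalent defs
@tentative = common global i32 0               ; C tentative definition
@optional = extern_weak global i32             ; null if never linked
```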
Calling Conventions
LLVM functions, calls and invokes can all have an optional calling convention specified for the call. The
calling convention of any pair of dynamic caller/callee must match, or the behavior of the program is
undefined. The following calling conventions are supported by LLVM, and more may be added in the future:
"fastcc" - The fast calling convention:
This calling convention attempts to make calls as fast as possible (e.g., by passing things in registers).
It allows the target to use whatever tricks it wants to produce fast code for the
target, without having to conform to an externally specified ABI (Application Binary Interface). Tail
calls can only be optimized when this or the GHC convention is used. This calling convention does
not support varargs and requires the prototype of all callees to exactly match the prototype of the
function definition.
"coldcc" - The cold calling convention:
This calling convention attempts to make code in the caller as efficient as possible under the
assumption that the call is not commonly executed. As such, these calls often preserve all registers so
that the call does not break any live ranges in the caller side. This calling convention does not support
varargs and requires the prototype of all callees to exactly match the prototype of the function
definition.
"cc 10" - GHC convention:
This calling convention has been implemented specifically for use by the Glasgow Haskell Compiler
(GHC). It passes everything in registers, going to extremes to achieve this by disabling callee save
registers. This calling convention should not be used lightly, but only for specific situations such as an
alternative to the register pinning performance technique often used when implementing functional
programming languages. At the moment only X86 supports this convention, and it has the following
limitations:
◊ On X86-32 only supports up to 4 bit type parameters. No floating point types are supported.
◊ On X86-64 only supports up to 10 bit type parameters and 6 floating point parameters.
This calling convention supports tail call optimization, but requires both the caller and the callee to use
it.
"cc <n>" - Numbered convention:
Any calling convention may be specified by number, allowing target-specific calling conventions to
be used. Target specific calling conventions start at 64.
More calling conventions can be added/defined on an as-needed basis, to support Pascal conventions or any
other well-known target-independent convention.
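As a sketch (function names invented), a convention appears before the return type both in the definition and at each call site, and the two must match:

```llvm
define fastcc i32 @helper(i32 %x) {   ; callee uses the fast calling convention
entry:
  ret i32 %x
}

define i32 @caller() {
entry:
  %r = call fastcc i32 @helper(i32 7) ; call site convention matches the callee
  ret i32 %r
}
```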
Visibility Styles
All Global Variables and Functions have one of the following visibility styles:
default
Default style: on targets that use the ELF object file format, default visibility means that the
declaration is visible to other modules and, in shared libraries, means that the declared entity may be
overridden.
hidden
Hidden style: two declarations of an object with hidden visibility refer to the same object if they are in
the same shared object. Usually, hidden visibility indicates that the symbol will not be placed into the
dynamic symbol table, so no other module (executable or shared library) can reference it directly.
protected
Protected style: on ELF, protected visibility indicates that the symbol will be placed in the dynamic
symbol table, but that references within the defining module will bind to the local symbol. That is, the
symbol cannot be overridden by another module.
Named Types
LLVM IR allows you to specify name aliases for certain types. This can make it easier to read the IR and
make the IR more condensed (particularly when recursive types are involved). An example of a name
specification is:
%mytype = type { %mytype*, i32 }
You may give a name to any type except "void". Type name aliases may be used anywhere a type is expected
with the syntax "%mytype".
Note that type names are aliases for the structural type that they indicate, and that you can therefore specify
multiple names for the same type. This often leads to confusing behavior when dumping out a .ll file. Since
LLVM IR uses structural typing, the name is not part of the type. When printing out LLVM IR, the printer
will pick one name to render all types of a particular shape. This means that if you have code where two
different source types end up having the same LLVM type, the dumper will sometimes print the "wrong"
or unexpected type. This is an important design point and isn't going to change.
Global Variables
Global variables define regions of memory allocated at compilation time instead of run-time. Global variables
may optionally be initialized, may have an explicit section to be placed in, and may have an optional explicit
alignment specified. A variable may be defined as "thread_local", which means that it will not be shared by
threads (each thread will have a separate copy of the variable). A variable may be defined as a global
"constant," which indicates that the contents of the variable will never be modified (enabling better
optimization, allowing the global data to be placed in the read-only section of an executable, etc). Note that
variables that need runtime initialization cannot be marked "constant" as there is a store to the variable.
LLVM explicitly allows declarations of global variables to be marked constant, even if the final definition of
the global is not. This capability can be used to enable slightly better optimization of the program, but requires
the language definition to guarantee that optimizations based on the 'constantness' are valid for the translation
units that do not include the definition.
As SSA values, global variables define pointer values that are in scope (i.e. they dominate) all basic blocks in
the program. Global variables always define a pointer to their "content" type because they describe a region of
memory, and all memory objects in LLVM are accessed through pointers.
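For example (a minimal sketch, names invented), a global whose content type is i32 is itself a value of type i32*, so its contents must be reached with a load:

```llvm
@g = global i32 7            ; @g is a pointer value of type i32*
define i32 @read_g() {
entry:
  %v = load i32* @g          ; dereference the pointer to get the i32 contents
  ret i32 %v
}
```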
A global variable may be declared to reside in a target-specific numbered address space. For targets that
support them, address spaces may affect how optimizations are performed and/or what target instructions are
used to access the variable. The default address space is zero. The address space qualifier must precede any
other attributes.
LLVM allows an explicit section to be specified for globals. If the target supports it, it will emit globals to the
section specified.
An explicit alignment may be specified for a global. If not present, or if the alignment is set to zero, the
alignment of the global is set by the target to whatever it feels convenient. If an explicit alignment is specified,
the global is forced to have at least that much alignment. All alignments must be a power of 2.
For example, the following defines a global in a numbered address space with an initializer, section, and
alignment:
@G = addrspace(5) constant float 1.0, section "foo", align 4
Functions
LLVM function definitions consist of the "define" keyword, an optional linkage type, an optional visibility
style, an optional calling convention, a return type, an optional parameter attribute for the return type, a
function name, a (possibly empty) argument list (each with optional parameter attributes), optional function
attributes, an optional section, an optional alignment, an optional garbage collector name, an opening curly
brace, a list of basic blocks, and a closing curly brace.
LLVM function declarations consist of the "declare" keyword, an optional linkage type, an optional
visibility style, an optional calling convention, a return type, an optional parameter attribute for the return
type, a function name, a possibly empty list of arguments, an optional alignment, and an optional garbage
collector name.
A function definition contains a list of basic blocks, forming the CFG (Control Flow Graph) for the function.
Each basic block may optionally start with a label (giving the basic block a symbol table entry), contains a list
of instructions, and ends with a terminator instruction (such as a branch or function return).
The first basic block in a function is special in two ways: it is immediately executed on entrance to the
function, and it is not allowed to have predecessor basic blocks (i.e. there can not be any branches to the entry
block of a function). Because the block can have no predecessors, it also cannot have any PHI nodes.
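A small invented example shows an entry block (no predecessors, no phi nodes), a terminator ending each block, and a phi merging control flow:

```llvm
define i32 @abs(i32 %x) {
entry:                                   ; entry block: no predecessors allowed
  %isneg = icmp slt i32 %x, 0
  br i1 %isneg, label %neg, label %done  ; terminator instruction
neg:
  %negx = sub i32 0, %x
  br label %done
done:                                    ; phi selects by predecessor block
  %r = phi i32 [ %negx, %neg ], [ %x, %entry ]
  ret i32 %r
}
```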
LLVM allows an explicit section to be specified for functions. If the target supports it, it will emit functions to
the section specified.
An explicit alignment may be specified for a function. If not present, or if the alignment is set to zero, the
alignment of the function is set by the target to whatever it feels convenient. If an explicit alignment is
specified, the function is forced to have at least that much alignment. All alignments must be a power of 2.
Syntax:
define [linkage] [visibility]
       [cconv] [ret attrs]
       <ResultType> @<FunctionName> ([argument list])
       [fn Attrs] [section "name"] [align N]
       [gc] { ... }
Aliases
Aliases act as a "second name" for the aliasee value (which can be either a function, a global variable, another
alias, or a bitcast of a global value). Aliases may have an optional linkage type and an optional visibility style.
Syntax:
@<Name> = alias [Linkage] [Visibility] <AliaseeTy> @<Aliasee>
Named Metadata
Named metadata is a collection of metadata. Metadata nodes (but not metadata strings) and null are the only
valid operands for a named metadata.
Syntax:
Parameter Attributes
The return type and each parameter of a function type may have a set of parameter attributes associated with
them. Parameter attributes are used to communicate additional information about the result or parameters of a
function. Parameter attributes are considered to be part of the function, not of the function type, so functions
with different parameter attributes can have the same function type.
Parameter attributes are simple keywords that follow the type specified. If multiple parameter attributes are
needed, they are space separated. For example:
declare i32 @printf(i8* noalias nocapture, ...)
declare i32 @atoi(i8 zeroext)
declare signext i8 @returns_signed_char()
Note that any attributes for the function result (nounwind, readonly) come immediately after the
argument list.
zeroext
This indicates to the code generator that the parameter or return value should be zero-extended to a
32-bit value by the caller (for a parameter) or the callee (for a return value).
signext
This indicates to the code generator that the parameter or return value should be sign-extended to a
32-bit value by the caller (for a parameter) or the callee (for a return value).
inreg
This indicates that this parameter or return value should be treated in a special target-dependent
fashion while emitting code for a function call or return (usually, by putting it in a register as
opposed to memory, though some targets use it to distinguish between two different kinds of
registers). Use of this attribute is target-specific.
byval
This indicates that the pointer parameter should really be passed by value to the function. The
attribute implies that a hidden copy of the pointee is made between the caller and the callee, so the
callee is unable to modify the value in the caller. This attribute is only valid on LLVM pointer
arguments. It is generally used to pass structs and arrays by value, but is also valid on pointers to
scalars. The copy is considered to belong to the caller not the callee (for example, readonly
functions should not write to byval parameters). This is not a valid attribute for return values. The
byval attribute also supports specifying an alignment with the align attribute. This has a
target-specific effect on the code generator that usually indicates a desired alignment for the
synthesized stack slot.
sret
This indicates that the pointer parameter specifies the address of a structure that is the return value of
the function in the source program. This pointer must be guaranteed by the caller to be valid: loads
and stores to the structure may be assumed by the callee not to trap. This may only be applied to the
first parameter. This is not a valid attribute for return values.
noalias
This indicates that the pointer does not alias any global or any other parameter. The caller is
responsible for ensuring that this is the case. On a function return value, noalias additionally
indicates that the pointer does not alias any other pointers visible to the caller. For further details,
please see the discussion of the NoAlias response in alias analysis.
nocapture
This indicates that the callee does not make any copies of the pointer that outlive the callee itself. This
is not a valid attribute for return values.
nest
This indicates that the pointer parameter can be excised using the trampoline intrinsics. This is not a
valid attribute for return values.
Garbage Collector Names
Each function may specify a garbage collector name, which is simply a string. The compiler declares the
supported values of name; specifying a collector will cause the compiler to alter its output in order to
support the named garbage collection algorithm.
Function Attributes
Function attributes are set to communicate additional information about a function. Function attributes are
considered to be part of the function, not of the function type, so functions with different function attributes
can have the same function type.
Function attributes are simple keywords that follow the type specified. If multiple attributes are needed, they
are space separated. For example:
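For instance (an illustrative sketch; the function name @f is a placeholder and the bodies are elided):

```llvm
define void @f() noinline { ... }
define void @f() alwaysinline { ... }
define void @f() alwaysinline optsize { ... }
define void @f() optsize { ... }
```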
alignstack(<n>)
This attribute indicates that, when emitting the prologue and epilogue, the backend should forcibly
align the stack pointer. Specify the desired alignment, which must be a power of two, in parentheses.
alwaysinline
This attribute indicates that the inliner should attempt to inline this function into callers whenever
possible, ignoring any active inlining size threshold for this caller.
inlinehint
This attribute indicates that the source code contained a hint that inlining this function is desirable
(such as the "inline" keyword in C/C++). It is just a hint; it imposes no requirements on the inliner.
noinline
This attribute indicates that the inliner should never inline this function in any situation. This attribute
may not be used together with the alwaysinline attribute.
optsize
This attribute suggests that optimization passes and code generator passes make choices that keep the
code size of this function low, and otherwise do optimizations specifically to reduce code size.
noreturn
This function attribute indicates that the function never returns normally. This produces undefined
behavior at runtime if the function ever does dynamically return.
nounwind
This function attribute indicates that the function never returns with an unwind or exceptional control
flow. If the function does unwind, its runtime behavior is undefined.
readnone
This attribute indicates that the function computes its result (or decides to unwind an exception) based
strictly on its arguments, without dereferencing any pointer arguments or otherwise accessing any
mutable state (e.g. memory, control registers, etc) visible to caller functions. It does not write through
any pointer arguments (including byval arguments) and never changes any state visible to callers.
This means that it cannot unwind exceptions by calling the C++ exception throwing methods, but
could use the unwind instruction.
readonly
This attribute indicates that the function does not write through any pointer arguments (including
byval arguments) or otherwise modify any state (e.g. memory, control registers, etc) visible to caller
functions. It may dereference pointer arguments and read state that may be set in the caller. A
readonly function always returns the same value (or unwinds an exception identically) when called
with the same set of arguments and global state. It cannot unwind an exception by calling the C++
exception throwing methods, but may use the unwind instruction.
ssp
This attribute indicates that the function should emit a stack smashing protector. It is in the form of a
"canary", a random value placed on the stack before the local variables that is checked upon return
from the function to see if it has been overwritten. A heuristic is used to determine if a function needs
stack protectors or not.
If a function that has an ssp attribute is inlined into a function that doesn't have an ssp attribute,
then the resulting function will have an ssp attribute.
sspreq
This attribute indicates that the function should always emit a stack smashing protector. This
overrides the ssp function attribute.
If a function that has an sspreq attribute is inlined into a function that doesn't have an sspreq
attribute or which has an ssp attribute, then the resulting function will have an sspreq attribute.
noredzone
This attribute indicates that the code generator should not use a red zone, even if the target-specific
ABI normally permits it.
noimplicitfloat
This attribute disables implicit floating-point instructions.
naked
This attribute disables prologue / epilogue emission for the function. This can have very
system-specific consequences.
Module-Level Inline Assembly
The strings can contain any character by escaping non-printable characters. The escape sequence used is
simply "\xx" where "xx" is the two digit hex code for the number.
The inline asm code is simply printed to the machine code .s file when assembly code is generated.
Data Layout
A module may specify a target specific data layout string that specifies how data is to be laid out in memory.
The syntax for the data layout is simply:
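The syntax line itself is not shown in this copy; it is simply a module-level declaration:

```llvm
target datalayout = "layout specification"
```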
The layout specification consists of a list of specifications separated by the minus sign character ('-'). Each
specification starts with a letter and may include other information after the letter to define some aspect of the
data layout. The specifications accepted are as follows:
E
Specifies that the target lays out data in big-endian form. That is, the bits with the most significance
have the lowest address location.
e
17
Documentation for the LLVM System at SVN head
Specifies that the target lays out data in little-endian form. That is, the bits with the least significance
have the lowest address location.
p:size:abi:pref
This specifies the size of a pointer and its abi and preferred alignments. All sizes are in bits.
Specifying the pref alignment is optional. If omitted, the preceding : should be omitted too.
isize:abi:pref
This specifies the alignment for an integer type of a given bit size. The value of size must be in the
range [1,2^23).
vsize:abi:pref
This specifies the alignment for a vector type of a given bit size.
fsize:abi:pref
This specifies the alignment for a floating point type of a given bit size. The value of size must be
either 32 (float) or 64 (double).
asize:abi:pref
This specifies the alignment for an aggregate type of a given bit size.
ssize:abi:pref
This specifies the alignment for a stack object of a given bit size.
nsize1:size2:size3...
This specifies a set of native integer widths for the target CPU in bits. For example, it might contain
"n32" for 32-bit PowerPC, "n32:64" for PowerPC 64, or "n8:16:32:64" for X86-64. Elements of this
set are considered to support most general arithmetic operations efficiently.
When constructing the data layout for a given target, LLVM starts with a default set of specifications which
are then (possibly) overridden by the specifications in the datalayout keyword. The default specifications
are given in this list:
• E - big endian
• p:64:64:64 - 64-bit pointers with 64-bit alignment
• i1:8:8 - i1 is 8-bit (byte) aligned
• i8:8:8 - i8 is 8-bit (byte) aligned
• i16:16:16 - i16 is 16-bit aligned
• i32:32:32 - i32 is 32-bit aligned
• i64:32:64 - i64 has ABI alignment of 32-bits but preferred alignment of 64-bits
• f32:32:32 - float is 32-bit aligned
• f64:64:64 - double is 64-bit aligned
• v64:64:64 - 64-bit vector is 64-bit aligned
• v128:128:128 - 128-bit vector is 128-bit aligned
• a0:0:1 - aggregates are 8-bit aligned
• s0:64:64 - stack objects are 64-bit aligned
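Putting these specifications together, a little-endian 32-bit target might declare something like the following (an illustrative sketch only, not a definitive description of any real target):

```llvm
target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-n8:16:32"
```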
When LLVM is determining the alignment for a given type, it uses the following rules:
1. If the type sought is an exact match for one of the specifications, that specification is used.
2. If no match is found, and the type sought is an integer type, then the smallest integer type that is larger
than the bitwidth of the sought type is used. If none of the specifications are larger than the bitwidth
then the largest integer type is used. For example, given the default specifications above, the i7
type will use the alignment of i8 (next largest) while both i65 and i256 will use the alignment of i64
(largest specified).
3. If no match is found, and the type sought is a vector type, then the largest vector type that is smaller
than the sought vector type will be used as a fall back. This happens because <128 x double> can be
implemented in terms of 64 <2 x double>, for example.
Pointer Aliasing Rules
Any memory access must be done through a pointer value associated with an address range of the memory
access, otherwise the behavior is undefined. Pointer values are associated with address ranges according to the
following rules:
• A pointer value formed from a getelementptr instruction is associated with the addresses
associated with the first operand of the getelementptr.
• An address of a global variable is associated with the address range of the variable's storage.
• The result value of an allocation instruction is associated with the address range of the allocated
storage.
• A null pointer in the default address-space is associated with no address.
• A pointer value formed by an inttoptr is associated with all address ranges of all pointer values
that contribute (directly or indirectly) to the computation of the pointer's value.
• The result value of a bitcast is associated with all addresses associated with the operand of the
bitcast.
• An integer constant other than zero or a pointer value returned from a function not defined within
LLVM may be associated with address ranges allocated through mechanisms other than those
provided by LLVM. Such ranges shall not overlap with any ranges of addresses allocated by
mechanisms provided by LLVM.
LLVM IR does not associate types with memory. The result type of a load merely indicates the size and
alignment of the memory from which to load, as well as the interpretation of the value. The first operand of a
store similarly only indicates the size and alignment of the store.
Consequently, type-based alias analysis, aka TBAA, aka -fstrict-aliasing, is not applicable to
general unadorned LLVM IR. Metadata may be used to encode additional information which specialized
optimization passes may use to implement type-based alias analysis.
Type System
The LLVM type system is one of the most important features of the intermediate representation. Being typed
enables a number of optimizations to be performed on the intermediate representation directly, without having
to do extra analyses on the side before the transformation. A strong type system makes it easier to read the
generated code and enables novel analyses and transformations that are not feasible to perform on normal
three address code representations.
Type Classifications
The types fall into a few useful classifications:
Classification Types
integer i1, i2, i3, ... i8, ... i16, ... i32, ... i64, ...
floating point float, double, x86_fp80, fp128, ppc_fp128
first class integer, floating point, pointer, vector, structure, union, array, label, metadata.
primitive label, void, floating point, metadata.
derived array, function, pointer, structure, packed structure, union, vector, opaque.
The first class types are perhaps the most important. Values of these types are the only ones which can be
produced by instructions.
Primitive Types
The primitive types are the fundamental building blocks of the LLVM system.
Integer Type
Overview:
The integer type is a very simple type that simply specifies an arbitrary bit width for the integer type desired.
Any bit width from 1 bit to 2^23-1 (about 8 million) can be specified.
Syntax:
iN
The number of bits the integer will occupy is specified by the N value.
Examples:
i1 a single-bit integer.
i32 a 32-bit integer.
i1942652 a really big integer of over 1 million bits.
Floating Point Types
Type Description
float 32-bit floating point value
double 64-bit floating point value
fp128 128-bit floating point value (112-bit mantissa)
x86_fp80 80-bit floating point value (X87)
ppc_fp128 128-bit floating point value (two 64-bits)
Void Type
Overview:
The void type does not represent any value and has no size.
Syntax:
void
Label Type
Overview:
The label type represents code labels.
Syntax:
label
Metadata Type
Overview:
The metadata type represents embedded metadata. No derived types may be created from metadata except for
function arguments.
Syntax:
metadata
Derived Types
The real power in LLVM comes from the derived types in the system. This is what allows a programmer to
represent arrays, functions, pointers, and other useful types. Each of these types contain one or more element
types which may be a primitive type, or another derived type. For example, it is possible to have a two
dimensional array, using an array as the element type of another array.
Aggregate Types
Aggregate Types are a subset of derived types that can contain multiple member types. Arrays, structs, vectors
and unions are aggregate types.
Array Type
Overview:
The array type is a very simple derived type that arranges elements sequentially in memory. The array type
requires a size (number of elements) and an underlying data type.
Syntax:
The number of elements is a constant integer value; elementtype may be any type with a size.
Examples:
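The syntax and example blocks are not reproduced here; the usual forms are (elementtype and the sizes below are placeholders):

```llvm
[<# elements> x <elementtype>]

[40 x i32]        ; Array of 40 32-bit integer values
[4 x i8]          ; Array of 4 8-bit integer values
[3 x [4 x i32]]   ; 3x4 array of 32-bit integer values
```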
Function Type
Overview:
The function type can be thought of as a function signature. It consists of a return type and a list of formal
parameter types. The return type of a function type is a scalar type, a void type, a struct type, or a union type.
If the return type is a struct type then all struct elements must be of first class types, and the struct must have
at least one element.
Syntax:
...where '<parameter list>' is a comma-separated list of type specifiers. Optionally, the parameter list
may include a type ..., which indicates that the function takes a variable number of arguments. Variable
argument functions can access their arguments with the variable argument handling intrinsic functions.
'<returntype>' is any type except label.
Examples:
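Representative examples of function types (an illustrative sketch):

```llvm
i32 (i32)                   ; function taking an i32, returning an i32
float (i16 signext, i32*)*  ; pointer to a function taking an i16 (sign extended) and a pointer to i32, returning float
i32 (i8*, ...)              ; vararg function taking at least an i8* (e.g. printf), returning i32
```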
Structure Type
Overview:
The structure type is used to represent a collection of data members together in memory. The packing of the
field types is defined to match the ABI of the underlying processor. The elements of a structure may be any
type that has a size.
Structures in memory are accessed using 'load' and 'store' by getting a pointer to a field with the
'getelementptr' instruction. Structures in registers are accessed using the 'extractvalue' and
'insertvalue' instructions.
Syntax:
{ <type list> }
Examples:
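Representative examples of structure types (an illustrative sketch):

```llvm
{ i32, i32, i32 }      ; a triple of three i32 values
{ float, i32 (i32)* }  ; a pair of a float and a pointer to a function taking an i32, returning an i32
```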
Packed Structure Type
Overview:
The packed structure type is used to represent a collection of data members together in memory. There is no
padding between fields. Further, the alignment of a packed structure is 1 byte. The elements of a packed
structure may be any type that has a size.
Structures are accessed using 'load' and 'store' by getting a pointer to a field with the 'getelementptr'
instruction.
Syntax:
Examples:
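The syntax and example blocks are not reproduced here; the usual forms are:

```llvm
< { <type list> } >

< { i32, i32, i32 } >      ; a packed triple of three i32 values
< { float, i32 (i32)* } >  ; a packed pair of a float and a function pointer
```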
Union Type
Overview:
A union type describes an object with size and alignment suitable for an object of any one of a given set of
types (also known as an "untagged" union). It is similar in concept and usage to a struct, except that all
members of the union have an offset of zero. The elements of a union may be any type that has a size. Unions
must have at least one member - empty unions are not allowed.
The size of the union as a whole will be the size of its largest member, and the alignment requirements of the
union as a whole will be the largest alignment requirement of any member.
Union members are accessed using 'load' and 'store' by getting a pointer to a field with the
'getelementptr' instruction. Since all members are at offset zero, the getelementptr instruction does not
affect the address, only the type of the resulting pointer.
Syntax:
Examples:
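The syntax and example blocks are not reproduced here; the usual forms are:

```llvm
union { <type list> }

union { i32, i32*, float }  ; a union of three types
```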
Pointer Type
Overview:
The pointer type is used to specify memory locations. Pointers are commonly used to reference objects in
memory.
Pointer types may have an optional address space attribute defining the numbered address space where the
pointed-to object resides. The default address space is number zero. The semantics of non-zero address spaces
are target-specific.
Note that LLVM does not permit pointers to void (void*) nor does it permit pointers to labels (label*).
Use i8* instead.
Syntax:
<type> *
Examples:
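Representative examples of pointer types (an illustrative sketch; address space 5 is just an arbitrary choice):

```llvm
[4 x i32]*          ; pointer to an array of four i32 values
i32 (i32*)*         ; pointer to a function taking an i32*, returning an i32
i32 addrspace(5)*   ; pointer to an i32 in address space 5
```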
Vector Type
Overview:
A vector type is a simple derived type that represents a vector of elements. Vector types are used when
multiple primitive data are operated in parallel using a single instruction (SIMD). A vector type requires a size
(number of elements) and an underlying primitive data type. Vector types are considered first class.
Syntax:
The number of elements is a constant integer value; elementtype may be any integer or floating point type.
Examples:
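The syntax and example blocks are not reproduced here; the usual forms are:

```llvm
< <# elements> x <elementtype> >

<4 x i32>    ; vector of 4 32-bit integer values
<8 x float>  ; vector of 8 32-bit floating-point values
<2 x i64>    ; vector of 2 64-bit integer values
```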
Opaque Type
Overview:
Opaque types are used to represent unknown types in the system. This corresponds (for example) to the C
notion of a forward declared structure type. In LLVM, opaque types can eventually be resolved to any type
(not just a structure type).
Syntax:
opaque
Examples:
Type Up-references
Overview:
An "up reference" allows you to refer to a lexically enclosing type without requiring it to have a name. For
instance, a structure declaration may contain a pointer to any of the types it is lexically a member of. Example
of up references (with their equivalent as named type declarations) include:
{ \2 * } %x = type { %x* }
{ \2 }* %y = type { %y }*
\1* %z = type %z*
An up reference is needed by the asmprinter for printing out cyclic types when there is no declared name for a
type in the cycle. Because the asmprinter does not want to print out an infinite type string, it needs a syntax to
handle recursive types that have no names (all names are optional in llvm IR).
Syntax:
\<level>
The level is the count of the lexical type that is being referred to.
Examples:
Simple Constants
Boolean constants
The two strings 'true' and 'false' are both valid constants of the i1 type.
Integer constants
Standard integers (such as '4') are constants of the integer type. Negative numbers may be used with
integer types.
Floating point constants
Floating point constants use standard decimal notation (e.g. 123.421), exponential notation (e.g.
1.23421e+2), or a more precise hexadecimal notation (see below). The assembler requires the exact
decimal value of a floating-point constant. For example, the assembler accepts 1.25 but rejects 1.3
because 1.3 is a repeating decimal in binary. Floating point constants must have a floating point type.
Null pointer constants
The identifier 'null' is recognized as a null pointer constant and must be of pointer type.
The one non-intuitive notation for constants is the hexadecimal form of floating point constants. For example,
the form 'double 0x432ff973cafa8000' is equivalent to (but harder to read than) 'double
4.5e+15'. The only time hexadecimal floating point constants are required (and the only time that they are
generated by the disassembler) is when a floating point constant must be emitted but it cannot be represented
as a decimal floating point number in a reasonable number of digits. For example, NaN's, infinities, and other
special values are represented in their IEEE hexadecimal format so that assembly and disassembly do not
cause any bits to change in the constants.
When using the hexadecimal form, constants of types float and double are represented using the 16-digit form
shown above (which matches the IEEE 754 representation for double); float values must, however, be exactly
representable as IEEE 754 single precision. Hexadecimal format is always used for long double, and there are
three forms of long double. The 80-bit format used by x86 is represented as 0xK followed by 20 hexadecimal
digits. The 128-bit format used by PowerPC (two adjacent doubles) is represented by 0xM followed by 32
hexadecimal digits. The IEEE 128-bit format is represented by 0xL followed by 32 hexadecimal digits; no
currently supported target uses this format. Long doubles will only work if they match the long double format
on your target. All hexadecimal formats are big-endian (sign bit at the left).
Complex Constants
Complex constants are a (potentially recursive) combination of simple constants and smaller complex
constants.
Structure constants
Structure constants are represented with notation similar to structure type definitions (a comma
separated list of elements, surrounded by braces ({})). For example: "{ i32 4, float 17.0,
i32* @G }", where "@G" is declared as "@G = external global i32". Structure constants
must have structure type, and the number and types of elements must match those specified by the
type.
Union constants
Union constants are represented with notation similar to a structure with a single element - that is, a
single typed element surrounded by braces ({}). For example: "{ i32 4 }". The union type can
be initialized with a single-element struct as long as the type of the struct element matches the type of
one of the union members.
Array constants
Array constants are represented with notation similar to array type definitions (a comma separated list
of elements, surrounded by square brackets ([])). For example: "[ i32 42, i32 11, i32 74
]". Array constants must have array type, and the number and types of elements must match those
specified by the type.
Vector constants
Vector constants are represented with notation similar to vector type definitions (a comma separated
list of elements, surrounded by less-than/greater-than's (<>)). For example: "< i32 42, i32 11,
i32 74, i32 100 >". Vector constants must have vector type, and the number and types of
elements must match those specified by the type.
Zero initialization
The string 'zeroinitializer' can be used to zero-initialize a value of any type, including
scalar and aggregate types. This is often used to avoid having to print large zero initializers (e.g. for
large arrays) and is always exactly equivalent to using explicit zero initializers.
Metadata node
A metadata node is a structure-like constant with metadata type. For example: "metadata !{ i32
0, metadata !"test" }". Unlike other constants that are meant to be interpreted as part of the
instruction stream, metadata is a place to attach additional information such as debug info.
@X = global i32 17
@Y = global i32 42
@Z = global [2 x i32*] [ i32* @X, i32* @Y ]
Undefined Values
The string 'undef' can be used anywhere a constant is expected, and indicates that the user of the value may
receive an unspecified bit-pattern. Undefined values may be of any type (other than label or void) and be used
anywhere a constant is permitted.
Undefined values are useful because they indicate to the compiler that the program is well defined no matter
what value is used. This gives the compiler more freedom to optimize. Here are some examples of (potentially
surprising) transformations that are valid (in pseudo IR):
%A = add %X, undef
%B = sub %X, undef
%C = xor %X, undef
Safe:
%A = undef
%B = undef
%C = undef
This is safe because all of the output bits are affected by the undef bits. Any output bit can have a zero or one
depending on the input bits.
%A = or %X, undef
%B = and %X, undef
Safe:
%A = -1
%B = 0
Unsafe:
%A = undef
%B = undef
These logical operations have bits that are not always affected by the input. For example, if "%X" has a zero
bit, then the output of the 'and' operation will always be a zero, no matter what the corresponding bit from the
undef is. As such, it is unsafe to optimize or assume that the result of the and is undef. However, it is safe to
assume that all bits of the undef could be 0, and optimize the and to 0. Likewise, it is safe to assume that all
the bits of the undef operand to the or could be set, allowing the or to be folded to -1.
This set of examples shows that undefined select (and conditional branch) conditions can go "either way" but
they have to come from one of the two operands. In the %A example, if %X and %Y were both known to
have a clear low bit, then %A would have to have a cleared low bit. However, in the %C example, the
optimizer is allowed to assume that the undef operand could be the same as %Y, allowing the whole select to
be eliminated.
%A = xor undef, undef
%B = undef
%C = xor %B, %B
%D = undef
%E = icmp lt %D, 4
%F = icmp gte %D, 4
Safe:
%A = undef
%B = undef
%C = undef
%D = undef
%E = undef
%F = undef
This example points out that two undef operands are not necessarily the same. This can be surprising to
people who assume (as C semantics would suggest) that "X^X" is always zero, even if X is undef. This
isn't true for a number of reasons, but the short answer is that an undef "variable" can arbitrarily change its
value over its "live range". This is true because the "variable" doesn't actually have a live range. Instead, the
value is logically read from arbitrary registers that happen to be around when needed, so the value is not
necessarily consistent over time. In fact, %A and %C need to have the same semantics or the core LLVM
"replace all uses with" concept would not hold.
%A = fdiv undef, %X
%B = fdiv %X, undef
Safe:
%A = undef
b: unreachable
These examples show the crucial difference between an undefined value and undefined behavior. An
undefined value (like undef) is allowed to have an arbitrary bit-pattern. This means that the %A operation can
be constant folded to undef because the undef could be an SNaN, and fdiv is not (currently) defined on
SNaN's. However, in the second example, we can make a more aggressive assumption: because the undef is
allowed to be an arbitrary value, we are allowed to assume that it could be zero. Since a divide by zero has
undefined behavior, we are allowed to assume that the operation does not execute at all. This allows us to
delete the divide and all code after it: since the undefined operation "can't happen", the optimizer can assume
that it occurs in dead code.
These examples reiterate the fdiv example: a store "of" an undefined value can be assumed to not have any
effect: we can assume that the value is overwritten with bits that happen to match what was already there.
However, a store "to" an undefined location could clobber arbitrary memory, therefore, it has undefined
behavior.
Addresses of Basic Blocks
blockaddress(@function, %block)
The 'blockaddress' constant computes the address of the specified basic block in the specified function,
and always has an i8* type. Taking the address of the entry block is illegal.
This value only has defined behavior when used as an operand to the 'indirectbr' instruction or for
comparisons against null. Pointer equality tests between label addresses are undefined behavior - though,
again, comparison against null is ok, and no label is equal to the null pointer. This may also be passed around
as an opaque pointer sized value as long as the bits are not inspected. This allows ptrtoint and arithmetic
to be performed on these values so long as the original value is reconstituted before the indirectbr.
Finally, some targets may provide defined semantics when using the value as the operand to an inline
assembly, but that is target specific.
Constant Expressions
Constant expressions are used to allow expressions involving other constants to be used as constants. Constant
expressions may be of any first class type and may involve any LLVM operation that does not have side
effects (e.g. load and call are not supported). The following is the syntax for constant expressions:
getelementptr inbounds ( CSTPTR, IDX0, IDX1, ... )
Perform the getelementptr operation on constants. As with the getelementptr instruction, the index list
may have zero or more indexes, which are required to make sense for the type of "CSTPTR".
select ( COND, VAL1, VAL2 )
Perform the select operation on constants.
icmp COND ( VAL1, VAL2 )
Performs the icmp operation on constants.
fcmp COND ( VAL1, VAL2 )
Performs the fcmp operation on constants.
extractelement ( VAL, IDX )
Perform the extractelement operation on constants.
insertelement ( VAL, ELT, IDX )
Perform the insertelement operation on constants.
shufflevector ( VEC1, VEC2, IDXMASK )
Perform the shufflevector operation on constants.
OPCODE ( LHS, RHS )
Perform the specified operation of the LHS and RHS constants. OPCODE may be any of the binary or
bitwise binary operations. The constraints on operands are the same as those for the corresponding
instruction (e.g. no bitwise operations on floating point values are allowed).
Other Values
Inline Assembler Expressions
LLVM supports inline assembler expressions (as opposed to Module-Level Inline Assembly) through the use
of a special value. This value represents the inline assembler as a string (containing the instructions to emit), a
list of operand constraints (stored as a string), a flag that indicates whether or not the inline asm expression
has side effects, and a flag indicating whether the function containing the asm needs to align its stack
conservatively. An example inline assembler expression is:
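A representative form (an illustrative sketch using the x86 bswap instruction):

```llvm
i32 (i32) asm "bswap $0", "=r,r"
```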
Inline assembler expressions may only be used as the callee operand of a call instruction. Thus, typically
we have:
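Continuing the sketch above, a typical use as a call's callee looks like (the %X and %Y names are placeholders):

```llvm
%X = call i32 asm "bswap $0", "=r,r"(i32 %Y)
```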
Inline asms with side effects not visible in the constraint list must be marked as having side effects. This is
done through the use of the 'sideeffect' keyword, like so:
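For instance (an illustrative sketch using the PowerPC eieio instruction):

```llvm
call void asm sideeffect "eieio", ""()
```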
In some cases inline asms will contain code that will not work unless the stack is aligned in some way, such as
calls or SSE instructions on x86, yet will not contain code that does that alignment within the asm. The
compiler should make conservative assumptions about what the asm might contain and should generate its
usual stack alignment code in the prologue if the 'alignstack' keyword is present:
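For instance (an illustrative sketch; the asm string is a placeholder):

```llvm
call void asm alignstack "eieio", ""()
```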
TODO: The format of the asm and constraints string still need to be documented here. Constraints on what
can be done (e.g. duplication, moving, etc need to be documented). This is probably best done by reference to
another document that covers inline asm from a holistic perspective.
Metadata Nodes and Metadata Strings
A metadata string is a string surrounded by double quotes. It can contain any character by escaping
non-printable characters with "\xx" where "xx" is the two digit hex code. For example: "!"test\00"".
Metadata nodes are represented with notation similar to structure constants (a comma separated list of
elements, surrounded by braces and preceded by an exclamation point). For example: "!{ metadata
!"test\00", i32 10}". Metadata nodes can have any values as their operands.
A named metadata is a collection of metadata nodes, which can be looked up in the module symbol table. For
example: "!foo = metadata !{!4, !3}".
Metadata can be used as function arguments. Here the llvm.dbg.value function takes two metadata
arguments.
Metadata can be attached to an instruction. Here metadata !21 is attached to the add instruction using the
!dbg identifier.
@X = global i8 4
@Y = global i32 123
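Given the two globals above, an @llvm.used list naming both can be sketched as follows (the bitcast adapts @Y to the i8* element type):

```llvm
@llvm.used = appending global [2 x i8*] [
    i8* @X,
    i8* bitcast (i32* @Y to i8*)
], section "llvm.metadata"
```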
If a global variable appears in the @llvm.used list, then the compiler, assembler, and linker are required to
treat the symbol as if there is a reference to the global that it cannot see. For example, if a variable has internal
linkage and no references other than that from the @llvm.used list, it cannot be deleted. This is commonly
used to represent references from inline asms and other things the compiler cannot "see", and corresponds to
"__attribute__((used))" in GNU C.
On some targets, the code generator must emit a directive to the assembler or object file to prevent the
assembler and linker from molesting the symbol.
This is a rare construct that should only be used in rare circumstances, and should not be exposed to source
languages.
Instruction Reference
The LLVM instruction set consists of several different classifications of instructions: terminator instructions,
binary instructions, bitwise binary instructions, memory instructions, and other instructions.
Terminator Instructions
As mentioned previously, every basic block in a program ends with a "Terminator" instruction, which
indicates which block should be executed after the current block is finished. These terminator instructions
typically yield a 'void' value: they produce control flow, not values (the one exception being the 'invoke'
instruction).
There are seven different terminator instructions: the 'ret' instruction, the 'br' instruction, the 'switch'
instruction, the 'indirectbr' instruction, the 'invoke' instruction, the 'unwind' instruction, and the
'unreachable' instruction.
'ret' Instruction
Syntax:
Overview:
The 'ret' instruction is used to return control flow (and optionally a value) from a function back to the caller.
There are two forms of the 'ret' instruction: one that returns a value and then causes control flow, and one
that just causes control flow to occur.
Arguments:
The 'ret' instruction optionally accepts a single argument, the return value. The type of the return value must
be a 'first class' type.
A function is not well formed if it has a non-void return type and contains a 'ret' instruction with no return
value or a return value with a type that does not match its type, or if it has a void return type and contains a
'ret' instruction with a return value.
Semantics:
When the 'ret' instruction is executed, control flow returns back to the calling function's context. If the caller
is a "call" instruction, execution continues at the instruction after the call. If the caller was an "invoke"
instruction, execution continues at the beginning of the "normal" destination block. If the instruction returns a
value, that value shall set the call or invoke instruction's return value.
Example:
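Illustrative forms of the two variants described above:

```llvm
ret i32 5                       ; Return an integer value of 5
ret void                        ; Return from a void function
```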
'br' Instruction
Syntax:
Overview:
The 'br' instruction is used to cause control flow to transfer to a different basic block in the current function.
There are two forms of this instruction, corresponding to a conditional branch and an unconditional branch.
Arguments:
The conditional branch form of the 'br' instruction takes a single 'i1' value and two 'label' values. The
unconditional form of the 'br' instruction takes a single 'label' value as a target.
Semantics:
Upon execution of a conditional 'br' instruction, the 'i1' argument is evaluated. If the value is true, control
flows to the 'iftrue' label argument. If the value is false, control flows to the 'iffalse' label
argument.
Example:
Test:
%cond = icmp eq i32 %a, %b
br i1 %cond, label %IfEqual, label %IfUnequal
IfEqual:
ret i32 1
IfUnequal:
ret i32 0
'switch' Instruction
Syntax:
switch <intty> <value>, label <defaultdest> [ <intty> <val>, label <dest> ... ]
Overview:
The 'switch' instruction is used to transfer control flow to one of several different places. It is a
generalization of the 'br' instruction, allowing a branch to occur to one of many possible destinations.
Arguments:
The 'switch' instruction uses three parameters: an integer comparison value 'value', a default 'label'
destination, and an array of pairs of comparison value constants and 'label's. The table is not allowed to
contain duplicate constant entries.
Semantics:
The switch instruction specifies a table of values and destinations. When the 'switch' instruction is
executed, this table is searched for the given value. If the value is found, control flow is transferred to the
corresponding destination; otherwise, control flow is transferred to the default destination.
Implementation:
Depending on properties of the target machine and the particular switch instruction, this instruction may be
code generated in different ways. For example, it could be generated as a series of chained conditional
branches or with a lookup table.
Example:
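Illustrative uses (names and values are hypothetical):

```llvm
; Emulate an unconditional br instruction
switch i32 0, label %dest [ ]

; Implement a jump table:
switch i32 %val, label %otherwise [ i32 0, label %onzero
                                    i32 1, label %onone
                                    i32 2, label %ontwo ]
```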
'indirectbr' Instruction
Syntax:
Overview:
The 'indirectbr' instruction implements an indirect branch to a label within the current function, whose
address is specified by "address". Address must be derived from a blockaddress constant.
Arguments:
The 'address' argument is the address of the label to jump to. The rest of the arguments indicate the full set
of possible destinations that the address may point to. Blocks are allowed to occur multiple times in the
destination list, though this isn't particularly useful.
This destination list is required so that dataflow analysis has an accurate understanding of the CFG.
Semantics:
Control transfers to the block specified in the address argument. All possible destination blocks must be listed
in the label list, otherwise this instruction has undefined behavior. This implies that jumps to labels defined in
other functions have undefined behavior as well.
Implementation:
Example:
'invoke' Instruction
Syntax:
<result> = invoke [cconv] [ret attrs] <ptr to function ty> <function ptr val>(<function args>) [fn attrs]
              to label <normal label> unwind label <exception label>
Overview:
The 'invoke' instruction causes control to transfer to a specified function, with the possibility of control flow
transfer to either the 'normal' label or the 'exception' label. If the callee function returns with the "ret"
instruction, control flow will return to the "normal" label. If the callee (or any indirect callees) returns with the
"unwind" instruction, control is interrupted and continued at the dynamically nearest "exception" label.
Arguments:
1. The optional "cconv" marker indicates which calling convention the call should use. If none is
specified, the call defaults to using C calling conventions.
2. The optional Parameter Attributes list for return values. Only 'zeroext', 'signext', and 'inreg'
attributes are valid here.
3. 'ptr to function ty': shall be the signature of the pointer to function value being invoked. In
most cases, this is a direct function invocation, but indirect invokes are just as possible, branching
off an arbitrary pointer to function value.
4. 'function ptr val': An LLVM value containing a pointer to a function to be invoked.
5. 'function args': argument list whose types match the function signature argument types and
parameter attributes. All arguments must be of first class type. If the function signature indicates the
function accepts a variable number of arguments, the extra arguments can be specified.
6. 'normal label': the label reached when the called function executes a 'ret' instruction.
7. 'exception label': the label reached when a callee returns with the unwind instruction.
8. The optional function attributes list. Only 'noreturn', 'nounwind', 'readonly' and 'readnone'
attributes are valid here.
Semantics:
This instruction is designed to operate as a standard 'call' instruction in most regards. The primary
difference is that it establishes an association with a label, which is used by the runtime library to unwind the
stack.
This instruction is used in languages with destructors to ensure that proper cleanup is performed in the case of
either a longjmp or a thrown exception. Additionally, this is important for implementation of 'catch'
clauses in high-level languages that support them.
For the purposes of the SSA form, the definition of the value returned by the 'invoke' instruction is deemed
to occur on the edge from the current block to the "normal" label. If the callee unwinds then no return value is
available.
Note that the code generator does not yet completely support unwind, and that the invoke/unwind semantics
are likely to change in future versions.
Example:
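A sketch of a direct invoke (illustrative function and labels):

```llvm
%retval = invoke i32 @Test(i32 15) to label %Continue
            unwind label %TestCleanup      ; {i32}:retval set
```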
'unwind' Instruction
Syntax:
unwind
Overview:
The 'unwind' instruction unwinds the stack, continuing control flow at the first callee in the dynamic call
stack which used an invoke instruction to perform the call. This is primarily used to implement exception
handling.
Semantics:
The 'unwind' instruction causes execution of the current function to immediately halt. The dynamic call
stack is then searched for the first invoke instruction on the call stack. Once found, execution continues at
the "exceptional" destination block specified by the invoke instruction. If there is no invoke instruction in
the dynamic call chain, undefined behavior results.
Note that the code generator does not yet completely support unwind, and that the invoke/unwind semantics
are likely to change in future versions.
'unreachable' Instruction
Syntax:
unreachable
Overview:
The 'unreachable' instruction has no defined semantics. This instruction is used to inform the optimizer
that a particular portion of the code is not reachable. This can be used to indicate that the code after a
no-return function cannot be reached, and other facts.
Semantics:
Binary Operations
Binary operators are used to do most of the computation in a program. They require two operands of the same
type, execute an operation on them, and produce a single value. The operands might represent multiple data,
as is the case with the vector data type. The result value has the same type as its operands.
'add' Instruction
Syntax:
Overview:
Arguments:
The two arguments to the 'add' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
If the sum has unsigned overflow, the result returned is the mathematical result modulo 2^n, where n is the bit
width of the result.
Because LLVM integers use a two's complement representation, this instruction is appropriate for both signed
and unsigned integers.
nuw and nsw stand for "No Unsigned Wrap" and "No Signed Wrap", respectively. If the nuw and/or nsw
keywords are present, the result value of the add is undefined if unsigned and/or signed overflow,
respectively, occurs.
Example:
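An illustrative form:

```llvm
<result> = add i32 4, %var          ; yields {i32}:result = 4 + %var
```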
'fadd' Instruction
Syntax:
Overview:
Arguments:
The two arguments to the 'fadd' instruction must be floating point or vector of floating point values. Both
arguments must have identical types.
Semantics:
The value produced is the floating point sum of the two operands.
Example:
'sub' Instruction
Syntax:
Overview:
Note that the 'sub' instruction is used to represent the 'neg' instruction present in most other intermediate
representations.
Arguments:
The two arguments to the 'sub' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
If the difference has unsigned overflow, the result returned is the mathematical result modulo 2^n, where n is
the bit width of the result.
Because LLVM integers use a two's complement representation, this instruction is appropriate for both signed
and unsigned integers.
nuw and nsw stand for "No Unsigned Wrap" and "No Signed Wrap", respectively. If the nuw and/or nsw
keywords are present, the result value of the sub is undefined if unsigned and/or signed overflow,
respectively, occurs.
Example:
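Illustrative forms, including the negation idiom noted above:

```llvm
<result> = sub i32 4, %var          ; yields {i32}:result = 4 - %var
<result> = sub i32 0, %val          ; yields {i32}:result = -%val
```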
'fsub' Instruction
Syntax:
Overview:
Note that the 'fsub' instruction is used to represent the 'fneg' instruction present in most other intermediate
representations.
Arguments:
The two arguments to the 'fsub' instruction must be floating point or vector of floating point values. Both
arguments must have identical types.
Semantics:
The value produced is the floating point difference of the two operands.
Example:
'mul' Instruction
Syntax:
Overview:
Arguments:
The two arguments to the 'mul' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
If the result of the multiplication has unsigned overflow, the result returned is the mathematical result modulo
2^n, where n is the bit width of the result.
Because LLVM integers use a two's complement representation, and the result is the same width as the
operands, this instruction returns the correct result for both signed and unsigned integers. If a full product (e.g.
i32xi32->i64) is needed, the operands should be sign-extended or zero-extended as appropriate to the
width of the full product.
nuw and nsw stand for "No Unsigned Wrap" and "No Signed Wrap", respectively. If the nuw and/or nsw
keywords are present, the result value of the mul is undefined if unsigned and/or signed overflow,
respectively, occurs.
Example:
'fmul' Instruction
Syntax:
Overview:
Arguments:
The two arguments to the 'fmul' instruction must be floating point or vector of floating point values. Both
arguments must have identical types.
Semantics:
The value produced is the floating point product of the two operands.
Example:
'udiv' Instruction
Syntax:
Overview:
Arguments:
The two arguments to the 'udiv' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
The value produced is the unsigned integer quotient of the two operands.
Note that unsigned integer division and signed integer division are distinct operations; for signed integer
division, use 'sdiv'.
Example:
'sdiv' Instruction
Syntax:
Overview:
Arguments:
The two arguments to the 'sdiv' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
The value produced is the signed integer quotient of the two operands rounded towards zero.
Note that signed integer division and unsigned integer division are distinct operations; for unsigned integer
division, use 'udiv'.
Division by zero leads to undefined behavior. Overflow also leads to undefined behavior; this is a rare case,
but can occur, for example, by doing a 32-bit division of -2147483648 by -1.
If the exact keyword is present, the result value of the sdiv is undefined if the result would be rounded or
if overflow would occur.
Example:
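An illustrative form:

```llvm
<result> = sdiv i32 4, %var          ; yields {i32}:result = 4 / %var
```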
'fdiv' Instruction
Syntax:
Overview:
Arguments:
The two arguments to the 'fdiv' instruction must be floating point or vector of floating point values. Both
arguments must have identical types.
Semantics:
The value produced is the floating point quotient of the two operands.
Example:
'urem' Instruction
Syntax:
Overview:
The 'urem' instruction returns the remainder from the unsigned division of its two arguments.
Arguments:
The two arguments to the 'urem' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
This instruction returns the unsigned integer remainder of a division. This instruction always performs an
unsigned division to get the remainder.
Note that unsigned integer remainder and signed integer remainder are distinct operations; for signed integer
remainder, use 'srem'.
Example:
'srem' Instruction
Syntax:
Overview:
The 'srem' instruction returns the remainder from the signed division of its two operands. This instruction can
also take vector versions of the values in which case the elements must be integers.
Arguments:
The two arguments to the 'srem' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
This instruction returns the remainder of a division (where the result has the same sign as the dividend, op1),
not the modulo operator (where the result has the same sign as the divisor, op2) of a value. For more
information about the difference, see The Math Forum. For a table of how this is implemented in various
languages, please see Wikipedia: modulo operation.
Note that signed integer remainder and unsigned integer remainder are distinct operations; for unsigned
integer remainder, use 'urem'.
Taking the remainder of a division by zero leads to undefined behavior. Overflow also leads to undefined
behavior; this is a rare case, but can occur, for example, by taking the remainder of a 32-bit division of
-2147483648 by -1. (The remainder doesn't actually overflow, but this rule lets srem be implemented using
instructions that return both the result of the division and the remainder.)
Example:
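An illustrative form:

```llvm
<result> = srem i32 4, %var          ; yields {i32}:result = 4 % %var
```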
'frem' Instruction
Syntax:
Overview:
The 'frem' instruction returns the remainder from the division of its two operands.
Arguments:
The two arguments to the 'frem' instruction must be floating point or vector of floating point values. Both
arguments must have identical types.
Semantics:
This instruction returns the remainder of a division. The remainder has the same sign as the dividend.
Example:
'shl' Instruction
Syntax:
Overview:
The 'shl' instruction returns the first operand shifted to the left a specified number of bits.
Arguments:
Both arguments to the 'shl' instruction must be the same integer or vector of integer type. 'op2' is treated as
an unsigned value.
Semantics:
The value produced is op1 * 2^op2 mod 2^n, where n is the width of the result. If op2 is (statically or
dynamically) negative or equal to or larger than the number of bits in op1, the result is undefined. If the
arguments are vectors, each vector element of op1 is shifted by the corresponding shift amount in op2.
Example:
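Illustrative forms, including an undefined over-shift:

```llvm
<result> = shl i32 4, %var   ; yields {i32}: 4 << %var
<result> = shl i32 4, 2      ; yields {i32}: 16
<result> = shl i32 1, 32     ; undefined
```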
'lshr' Instruction
Syntax:
Overview:
The 'lshr' instruction (logical shift right) returns the first operand shifted to the right a specified number of
bits with zero fill.
Arguments:
Both arguments to the 'lshr' instruction must be the same integer or vector of integer type. 'op2' is treated as
an unsigned value.
Semantics:
This instruction always performs a logical shift right operation. The most significant bits of the result will be
filled with zero bits after the shift. If op2 is (statically or dynamically) equal to or larger than the number of
bits in op1, the result is undefined. If the arguments are vectors, each vector element of op1 is shifted by the
corresponding shift amount in op2.
Example:
'ashr' Instruction
Syntax:
Overview:
The 'ashr' instruction (arithmetic shift right) returns the first operand shifted to the right a specified number
of bits with sign extension.
Arguments:
Both arguments to the 'ashr' instruction must be the same integer or vector of integer type. 'op2' is treated as
an unsigned value.
Semantics:
This instruction always performs an arithmetic shift right operation. The most significant bits of the result will
be filled with the sign bit of op1. If op2 is (statically or dynamically) equal to or larger than the number of
bits in op1, the result is undefined. If the arguments are vectors, each vector element of op1 is shifted by the
corresponding shift amount in op2.
Example:
'and' Instruction
Syntax:
Overview:
The 'and' instruction returns the bitwise logical and of its two operands.
Arguments:
The two arguments to the 'and' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
'or' Instruction
Syntax:
Overview:
The 'or' instruction returns the bitwise logical inclusive or of its two operands.
Arguments:
The two arguments to the 'or' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
'xor' Instruction
Syntax:
Overview:
The 'xor' instruction returns the bitwise logical exclusive or of its two operands. The xor is used to
implement the "one's complement" operation, which is the "~" operator in C.
Arguments:
The two arguments to the 'xor' instruction must be integer or vector of integer values. Both arguments must
have identical types.
Semantics:
In0  In1  Out
0    0    0
0    1    1
1    0    1
1    1    0
Example:
Vector Operations
LLVM supports several instructions to represent vector operations in a target-independent manner. These
instructions cover the element-access and vector-specific operations needed to process vectors effectively.
While LLVM does directly support these vector operations, many sophisticated algorithms will want to use
target-specific intrinsics to take full advantage of a specific target.
'extractelement' Instruction
Syntax:
Overview:
The 'extractelement' instruction extracts a single scalar element from a vector at a specified index.
Arguments:
The first operand of an 'extractelement' instruction is a value of vector type. The second operand is an
index indicating the position from which to extract the element. The index may be a variable.
Semantics:
The result is a scalar of the same type as the element type of val. Its value is the value at position idx of
val. If idx exceeds the length of val, the results are undefined.
Example:
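An illustrative form:

```llvm
<result> = extractelement <4 x i32> %vec, i32 0    ; yields i32
```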
'insertelement' Instruction
Syntax:
<result> = insertelement <n x <ty>> <val>, <ty> <elt>, i32 <idx> ; yields <n x <ty>>
Overview:
The 'insertelement' instruction inserts a scalar element into a vector at a specified index.
Arguments:
The first operand of an 'insertelement' instruction is a value of vector type. The second operand is a
scalar value whose type must equal the element type of the first operand. The third operand is an index
indicating the position at which to insert the value. The index may be a variable.
Semantics:
The result is a vector of the same type as val. Its element values are those of val except at position idx,
where it gets the value elt. If idx exceeds the length of val, the results are undefined.
Example:
<result> = insertelement <4 x i32> %vec, i32 1, i32 0 ; yields <4 x i32>
'shufflevector' Instruction
Syntax:
<result> = shufflevector <n x <ty>> <v1>, <n x <ty>> <v2>, <m x i32> <mask> ; yields <m x
Overview:
The 'shufflevector' instruction constructs a permutation of elements from two input vectors, returning a
vector with the same element type as the input and length that is the same as the shuffle mask.
Arguments:
The first two operands of a 'shufflevector' instruction are vectors with types that match each other. The
third argument is a shuffle mask whose element type is always 'i32'. The result of the instruction is a vector
whose length is the same as the shuffle mask and whose element type is the same as the element type of the
first two operands.
The shuffle mask operand is required to be a constant vector with either constant integer or undef values.
Semantics:
The elements of the two input vectors are numbered from left to right across both of the vectors. The shuffle
mask operand specifies, for each element of the result vector, which element of the two input vectors the
result element gets. The element selector may be undef (meaning "don't care") and the second operand may be
undef if performing a shuffle from only one vector.
Example:
Aggregate Operations
LLVM supports several instructions for working with aggregate values.
'extractvalue' Instruction
Syntax:
Overview:
The 'extractvalue' instruction extracts the value of a member field from an aggregate value.
Arguments:
The first operand of an 'extractvalue' instruction is a value of struct, union or array type. The operands
are constant indices to specify which value to extract in a similar manner as indices in a 'getelementptr'
instruction.
Semantics:
The result is the value at the position in the aggregate specified by the index operands.
Example:
'insertvalue' Instruction
Syntax:
<result> = insertvalue <aggregate type> <val>, <ty> <elt>, <idx> ; yields <aggregate type>
Overview:
The 'insertvalue' instruction inserts a value into a member field in an aggregate value.
Arguments:
The first operand of an 'insertvalue' instruction is a value of struct, union or array type. The second
operand is a first-class value to insert. The following operands are constant indices indicating the position at
which to insert the value in a similar manner as indices in a 'getelementptr' instruction. The value to
insert must have the same type as the value identified by the indices.
Semantics:
The result is an aggregate of the same type as val. Its value is that of val except that the value at the
position specified by the indices is that of elt.
Example:
%agg1 = insertvalue {i32, float} undef, i32 1, 0 ; yields {i32 1, float undef}
%agg2 = insertvalue {i32, float} %agg1, float %val, 1 ; yields {i32 1, float %val}
'alloca' Instruction
Syntax:
Overview:
The 'alloca' instruction allocates memory on the stack frame of the currently executing function, to be
automatically released when this function returns to its caller. The object is always allocated in the generic
address space (address space zero).
Arguments:
Semantics:
Memory is allocated; a pointer is returned. The operation is undefined if there is insufficient stack space for
the allocation. 'alloca'd memory is automatically released when the function returns. The 'alloca'
instruction is commonly used to represent automatic variables that must have an address available. When the
function returns (either with the ret or unwind instructions), the memory is reclaimed. Allocating zero
bytes is legal, but the result is undefined.
Example:
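Illustrative forms (the element count and alignment operands are optional):

```llvm
%ptr = alloca i32                             ; yields {i32*}:ptr
%ptr = alloca i32, i32 4                      ; yields {i32*}:ptr
%ptr = alloca i32, i32 4, align 1024          ; yields {i32*}:ptr
```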
'load' Instruction
Syntax:
Overview:
Arguments:
The argument to the 'load' instruction specifies the memory address from which to load. The pointer must
point to a first class type. If the load is marked as volatile, then the optimizer is not allowed to modify
the number or order of execution of this load with other volatile load and store instructions.
The optional constant align argument specifies the alignment of the operation (that is, the alignment of the
memory address). A value of 0 or an omitted align argument means that the operation has the preferential
alignment for the target. It is the responsibility of the code emitter to ensure that the alignment information is
correct. Overestimating the alignment results in undefined behavior. Underestimating the alignment may
produce less efficient code. An alignment of 1 is always safe.
The optional !nontemporal metadata must reference a single metadata name <index> corresponding to a
metadata node with one i32 entry of value 1. The existence of the !nontemporal metadata on the
instruction tells the optimizer and code generator that this load is not expected to be reused in the cache. The
code generator may select special instructions to save cache bandwidth, such as the MOVNT instruction on
x86.
Semantics:
The location of memory pointed to is loaded. If the value being loaded is of scalar type then the number of
bytes read does not exceed the minimum number of bytes needed to hold all bits of the type. For example,
loading an i24 reads at most three bytes. When loading a value of a type like i20 with a size that is not an
integral number of bytes, the result is undefined if the value was not originally written using a store of the
same type.
Examples:
'store' Instruction
Syntax:
Overview:
Arguments:
There are two arguments to the 'store' instruction: a value to store and an address at which to store it. The
type of the '<pointer>' operand must be a pointer to the first class type of the '<value>' operand. If the
store is marked as volatile, then the optimizer is not allowed to modify the number or order of
execution of this store with other volatile load and store instructions.
The optional constant "align" argument specifies the alignment of the operation (that is, the alignment of the
memory address). A value of 0 or an omitted "align" argument means that the operation has the preferential
alignment for the target. It is the responsibility of the code emitter to ensure that the alignment information is
correct. Overestimating the alignment results in undefined behavior. Underestimating the alignment may
produce less efficient code. An alignment of 1 is always safe.
The optional !nontemporal metadata must reference a single metadata name corresponding to a metadata
node with one i32 entry of value 1. The existence of the !nontemporal metadata on the instruction tells the
optimizer and code generator that this store is not expected to be reused in the cache. The code generator may
select special instructions to save cache bandwidth, such as the MOVNT instruction on x86.
Semantics:
The contents of memory are updated to contain '<value>' at the location specified by the '<pointer>'
operand. If '<value>' is of scalar type then the number of bytes written does not exceed the minimum
number of bytes needed to hold all bits of the type. For example, storing an i24 writes at most three bytes.
When writing a value of a type like i20 with a size that is not an integral number of bytes, it is unspecified
what happens to the extra bits that do not belong to the type, but they will typically be overwritten.
Example:
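A sketch of a store paired with a load back from the same address:

```llvm
%ptr = alloca i32                       ; yields {i32*}:ptr
store i32 3, i32* %ptr                  ; yields {void}
%val = load i32* %ptr                   ; yields {i32}:val = i32 3
```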
'getelementptr' Instruction
Syntax:
Overview:
The 'getelementptr' instruction is used to get the address of a subelement of an aggregate data structure.
It performs address calculation only and does not access memory.
Arguments:
The first argument is always a pointer, and forms the basis of the calculation. The remaining arguments are
indices that indicate which of the elements of the aggregate object are indexed. The interpretation of each
index is dependent on the type being indexed into. The first index always indexes the pointer value given as
the first argument, the second index indexes a value of the type pointed to (not necessarily the value directly
pointed to, since the first index can be non-zero), etc. The first type indexed into must be a pointer value,
subsequent types can be arrays, vectors, structs and unions. Note that subsequent types being indexed into can
never be pointers, since that would require loading the pointer before continuing calculation.
The type of each index argument depends on the type it is indexing into. When indexing into a (optionally
packed) structure or union, only i32 integer constants are allowed. When indexing into an array, pointer or
vector, integers of any width are allowed, and they are not required to be constant.
For example, let's consider a C code fragment and how it gets compiled to LLVM:
struct RT {
char A;
int B[10][20];
char C;
};
struct ST {
int X;
double Y;
struct RT Z;
};
int *foo(struct ST *s) {
return &s[1].Z.B[5][13];
}
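The compiled LLVM form could look like the following sketch (the function and register names are illustrative):

```llvm
define i32* @foo(%ST* %s) {
entry:
  %reg = getelementptr %ST* %s, i32 1, i32 2, i32 1, i32 5, i32 13
  ret i32* %reg
}
```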
Semantics:
In the example above, the first index is indexing into the '%ST*' type, which is a pointer, yielding a '%ST' = '{
i32, double, %RT }' type, a structure. The second index indexes into the third element of the structure,
yielding a '%RT' = '{ i8 , [10 x [20 x i32]], i8 }' type, another structure. The third index
indexes into the second element of the structure, yielding a '[10 x [20 x i32]]' type, an array. The two
dimensions of the array are subscripted into, yielding an 'i32' type. The 'getelementptr' instruction
returns a pointer to this element, thus computing a value of 'i32*' type.
Note that it is perfectly legal to index partially through a structure, returning a pointer to an inner element.
Because of this, the LLVM code for the given testcase is equivalent to:
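That partial-indexing equivalence can be sketched as a chain of single-step getelementptrs (the names %t1 through %t5 are illustrative):

```llvm
define i32* @foo(%ST* %s) {
  %t1 = getelementptr %ST* %s, i32 1                        ; yields %ST*:%t1
  %t2 = getelementptr %ST* %t1, i32 0, i32 2                ; yields %RT*:%t2
  %t3 = getelementptr %RT* %t2, i32 0, i32 1                ; yields [10 x [20 x i32]]*:%t3
  %t4 = getelementptr [10 x [20 x i32]]* %t3, i32 0, i32 5  ; yields [20 x i32]*:%t4
  %t5 = getelementptr [20 x i32]* %t4, i32 0, i32 13        ; yields i32*:%t5
  ret i32* %t5
}
```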
If the inbounds keyword is present, the result value of the getelementptr is undefined if the base
pointer is not an in bounds address of an allocated object, or if any of the addresses that would be formed by
successive addition of the offsets implied by the indices to the base address with infinitely precise arithmetic
are not an in bounds address of that allocated object. The in bounds addresses for an allocated object are all
the addresses that point into the object, plus the address one byte past the end.
If the inbounds keyword is not present, the offsets are added to the base address with silently-wrapping
two's complement arithmetic, and the result value of the getelementptr may be outside the object pointed
to by the base pointer. The result value may not necessarily be used to access memory though, even if it
happens to point into allocated storage. See the Pointer Aliasing Rules section for more information.
The getelementptr instruction is often confusing. For some more insight into how it works, see the
getelementptr FAQ.
Example:
; yields i8*:eptr
%eptr = getelementptr [12 x i8]* %aptr, i64 0, i32 1
; yields i32*:iptr
%iptr = getelementptr [10 x i32]* @arr, i16 0, i16 0
Conversion Operations
The instructions in this category are the conversion instructions (casting) which all take a single operand and a
type. They perform various bit conversions on the operand.
Overview:
The 'trunc' instruction truncates its operand to the type ty2.
Arguments:
The 'trunc' instruction takes a value to trunc, which must be an integer type, and a type that specifies the
size and type of the result, which must be an integer type. The bit size of value must be larger than the bit
size of ty2. Equal sized types are not allowed.
Semantics:
The 'trunc' instruction truncates the high order bits in value and converts the remaining bits to ty2. Since
the source size must be larger than the destination size, trunc cannot be a no-op cast. It will always truncate
bits.
Example:
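A few illustrative uses (operand values chosen arbitrarily):

```llvm
%X = trunc i32 257 to i8              ; yields i8:1
%Y = trunc i32 123 to i1              ; yields i1:true
%Z = trunc i32 122 to i1              ; yields i1:false
```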
Overview:
The 'zext' instruction zero extends its operand to type ty2.
Arguments:
The 'zext' instruction takes a value to cast, which must be of integer type, and a type to cast it to, which must
also be of integer type. The bit size of the value must be smaller than the bit size of the destination type,
ty2.
Semantics:
The zext fills the high order bits of the value with zero bits until it reaches the size of the destination type,
ty2.
When zero extending from i1, the result will always be either 0 or 1.
Example:
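For instance (values illustrative):

```llvm
%X = zext i32 257 to i64              ; yields i64:257
%Y = zext i1 true to i32              ; yields i32:1
```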
Overview:
The 'sext' sign extends value to the type ty2.
Arguments:
The 'sext' instruction takes a value to cast, which must be of integer type, and a type to cast it to, which must
also be of integer type. The bit size of the value must be smaller than the bit size of the destination type,
ty2.
Semantics:
The 'sext' instruction performs a sign extension by copying the sign bit (highest order bit) of the value
until it reaches the bit size of the type ty2.
Example:
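For instance (values illustrative):

```llvm
%X = sext i8 -1 to i16                ; yields i16:65535
%Y = sext i1 true to i32              ; yields i32:-1
```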
Overview:
The 'fptrunc' instruction truncates value to type ty2.
Arguments:
The 'fptrunc' instruction takes a floating point value to cast and a floating point type to cast it to. The size
of value must be larger than the size of ty2. This implies that fptrunc cannot be used to make a no-op
cast.
Semantics:
The 'fptrunc' instruction truncates a value from a larger floating point type to a smaller floating point
type. If the value cannot fit within the destination type, ty2, then the results are undefined.
Example:
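A minimal sketch (value illustrative):

```llvm
%X = fptrunc double 123.0 to float    ; yields float:123.0
```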
Overview:
The 'fpext' extends a floating point value to a larger floating point value.
Arguments:
The 'fpext' instruction takes a floating point value to cast, and a floating point type to cast it to. The
source type must be smaller than the destination type.
Semantics:
The 'fpext' instruction extends the value from a smaller floating point type to a larger floating point type.
The fpext cannot be used to make a no-op cast because it always changes bits. Use bitcast to make a
no-op cast for a floating point cast.
Example:
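A minimal sketch (value illustrative):

```llvm
%X = fpext float 3.125 to double      ; yields double:3.125
```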
Overview:
The 'fptoui' converts a floating point value to its unsigned integer equivalent of type ty2.
Arguments:
The 'fptoui' instruction takes a value to cast, which must be a scalar or vector floating point value, and a
type to cast it to ty2, which must be an integer type. If ty is a vector floating point type, ty2 must be a
vector integer type with the same number of elements as ty.
Semantics:
The 'fptoui' instruction converts its floating point operand into the nearest (rounding towards zero)
unsigned integer value. If the value cannot fit in ty2, the results are undefined.
Example:
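A minimal sketch (value illustrative):

```llvm
%X = fptoui double 123.0 to i32       ; yields i32:123
```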
Overview:
The 'fptosi' instruction converts floating point value to type ty2.
Arguments:
The 'fptosi' instruction takes a value to cast, which must be a scalar or vector floating point value, and a
type to cast it to ty2, which must be an integer type. If ty is a vector floating point type, ty2 must be a
vector integer type with the same number of elements as ty.
Semantics:
The 'fptosi' instruction converts its floating point operand into the nearest (rounding towards zero) signed
integer value. If the value cannot fit in ty2, the results are undefined.
Example:
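A minimal sketch (value illustrative):

```llvm
%X = fptosi double -123.0 to i32      ; yields i32:-123
```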
Overview:
The 'uitofp' instruction regards value as an unsigned integer and converts that value to the ty2 type.
Arguments:
The 'uitofp' instruction takes a value to cast, which must be a scalar or vector integer value, and a type to
cast it to ty2, which must be a floating point type. If ty is a vector integer type, ty2 must be a vector
floating point type with the same number of elements as ty.
Semantics:
The 'uitofp' instruction interprets its operand as an unsigned integer quantity and converts it to the
corresponding floating point value. If the value cannot fit in the floating point value, the results are undefined.
Example:
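A minimal sketch (value illustrative):

```llvm
%X = uitofp i32 257 to float          ; yields float:257.0
```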
Overview:
The 'sitofp' instruction regards value as a signed integer and converts that value to the ty2 type.
Arguments:
The 'sitofp' instruction takes a value to cast, which must be a scalar or vector integer value, and a type to
cast it to ty2, which must be a floating point type. If ty is a vector integer type, ty2 must be a vector
floating point type with the same number of elements as ty.
Semantics:
The 'sitofp' instruction interprets its operand as a signed integer quantity and converts it to the
corresponding floating point value. If the value cannot fit in the floating point value, the results are undefined.
Example:
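A minimal sketch (value illustrative):

```llvm
%X = sitofp i32 -257 to float         ; yields float:-257.0
```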
Overview:
The 'ptrtoint' instruction converts the pointer value to the integer type ty2.
Arguments:
The 'ptrtoint' instruction takes a value to cast, which must be a pointer value, and a type to cast it to
ty2, which must be an integer type.
Semantics:
The 'ptrtoint' instruction converts value to integer type ty2 by interpreting the pointer value as an
integer and either truncating or zero extending that value to the size of the integer type. If value is smaller
than ty2 then a zero extension is done. If value is larger than ty2 then a truncation is done. If they are the
same size, then nothing is done (no-op cast) other than a type change.
Example:
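A sketch of the three cases on a 32-bit target (the pointer %P is assumed to be defined elsewhere):

```llvm
%X = ptrtoint i32* %P to i8           ; yields a truncation on 32-bit architectures
%Y = ptrtoint i32* %P to i64          ; yields a zero extension on 32-bit architectures
```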
Overview:
The 'inttoptr' instruction converts an integer value to a pointer type, ty2.
Arguments:
The 'inttoptr' instruction takes an integer value to cast, and a type to cast it to, which must be a pointer
type.
Semantics:
The 'inttoptr' instruction converts value to type ty2 by applying either a zero extension or a truncation
depending on the size of the integer value. If value is larger than the size of a pointer then a truncation is
done. If value is smaller than the size of a pointer then a zero extension is done. If they are the same size,
nothing is done (no-op cast).
Example:
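A sketch on a 64-bit target (value illustrative):

```llvm
%X = inttoptr i32 255 to i32*         ; yields a zero extension on 64-bit architectures
%Y = inttoptr i64 0 to i32*           ; yields a no-op on 64-bit architectures
```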
Overview:
The 'bitcast' instruction converts value to type ty2 without changing any bits.
Arguments:
The 'bitcast' instruction takes a value to cast, which must be a non-aggregate first class value, and a type to
cast it to, which must also be a non-aggregate first class type. The bit sizes of value and the destination type,
ty2, must be identical. If the source type is a pointer, the destination type must also be a pointer. This
instruction supports bitwise conversion of vectors to integers and to vectors of other types (as long as they
have the same size).
Semantics:
The 'bitcast' instruction converts value to type ty2. It is always a no-op cast because no bits change
with this conversion. The conversion is done as if the value had been stored to memory and read back as
type ty2. Pointer types may only be converted to other pointer types with this instruction. To convert pointers
to other types, use the inttoptr or ptrtoint instructions first.
Example:
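A couple of illustrative uses (the vector %V is assumed to be defined elsewhere):

```llvm
%X = bitcast i8 255 to i8             ; yields i8:-1
%Z = bitcast <2 x i32> %V to i64      ; yields i64: same bits, reinterpreted
```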
Other Operations
The instructions in this category are the "miscellaneous" instructions, which defy better classification.
'icmp' Instruction
Syntax:
<result> = icmp <cond> <ty> <op1>, <op2> ; yields {i1} or {<N x i1>}:result
Overview:
The 'icmp' instruction returns a boolean value or a vector of boolean values based on comparison of its two
integer, integer vector, or pointer operands.
Arguments:
The 'icmp' instruction takes three operands. The first operand is the condition code indicating the kind of
comparison to perform. It is not a value, just a keyword. The possible condition codes are:
1. eq: equal
2. ne: not equal
3. ugt: unsigned greater than
4. uge: unsigned greater or equal
5. ult: unsigned less than
6. ule: unsigned less or equal
7. sgt: signed greater than
8. sge: signed greater or equal
9. slt: signed less than
10. sle: signed less or equal
The remaining two arguments must be of integer, pointer, or integer vector type, and they must be identical
types.
Semantics:
The 'icmp' compares op1 and op2 according to the condition code given as cond. The comparison
performed always yields either an i1 or vector of i1 result, as follows:
1. eq: yields true if the operands are equal, false otherwise. No sign interpretation is necessary or
performed.
2. ne: yields true if the operands are unequal, false otherwise. No sign interpretation is necessary or
performed.
3. ugt: interprets the operands as unsigned values and yields true if op1 is greater than op2.
4. uge: interprets the operands as unsigned values and yields true if op1 is greater than or equal to
op2.
5. ult: interprets the operands as unsigned values and yields true if op1 is less than op2.
6. ule: interprets the operands as unsigned values and yields true if op1 is less than or equal to op2.
7. sgt: interprets the operands as signed values and yields true if op1 is greater than op2.
8. sge: interprets the operands as signed values and yields true if op1 is greater than or equal to op2.
9. slt: interprets the operands as signed values and yields true if op1 is less than op2.
10. sle: interprets the operands as signed values and yields true if op1 is less than or equal to op2.
If the operands are pointer typed, the pointer values are compared as if they were integers.
If the operands are integer vectors, then they are compared element by element. The result is an i1 vector
with the same number of elements as the values being compared. Otherwise, the result is an i1.
Example:
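A few illustrative comparisons (the pointer %X is assumed to be defined elsewhere):

```llvm
%r1 = icmp eq i32 4, 5                ; yields i1:false
%r2 = icmp ne float* %X, %X           ; yields i1:false
%r3 = icmp ult i16 4, 5               ; yields i1:true
%r4 = icmp sgt i16 4, 5               ; yields i1:false
```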
Note that the code generator does not yet support vector types with the icmp instruction.
'fcmp' Instruction
Syntax:
<result> = fcmp <cond> <ty> <op1>, <op2> ; yields {i1} or {<N x i1>}:result
Overview:
The 'fcmp' instruction returns a boolean value or vector of boolean values based on comparison of its
operands.
If the operands are floating point scalars, then the result type is a boolean (i1).
If the operands are floating point vectors, then the result type is a vector of boolean with the same number of
elements as the operands being compared.
Arguments:
The 'fcmp' instruction takes three operands. The first operand is the condition code indicating the kind of
comparison to perform. It is not a value, just a keyword. The possible condition codes are:
1. false: no comparison, always returns false
2. oeq: ordered and equal
3. ogt: ordered and greater than
4. oge: ordered and greater than or equal
5. olt: ordered and less than
6. ole: ordered and less than or equal
7. one: ordered and not equal
8. ord: ordered (no nans)
9. ueq: unordered or equal
10. ugt: unordered or greater than
11. uge: unordered or greater than or equal
12. ult: unordered or less than
13. ule: unordered or less than or equal
14. une: unordered or not equal
15. uno: unordered (either nans)
16. true: no comparison, always returns true
Ordered means that neither operand is a QNAN while unordered means that either operand may be a QNAN.
The val1 and val2 arguments must each be either a floating point type or a vector of floating point type,
and they must have identical types.
Semantics:
The 'fcmp' instruction compares op1 and op2 according to the condition code given as cond. If the
operands are vectors, then the vectors are compared element by element. Each comparison performed always
yields an i1 result, as follows:
1. false: always yields false, regardless of operands.
2. oeq: yields true if both operands are not a QNAN and op1 is equal to op2.
3. ogt: yields true if both operands are not a QNAN and op1 is greater than op2.
4. oge: yields true if both operands are not a QNAN and op1 is greater than or equal to op2.
5. olt: yields true if both operands are not a QNAN and op1 is less than op2.
6. ole: yields true if both operands are not a QNAN and op1 is less than or equal to op2.
7. one: yields true if both operands are not a QNAN and op1 is not equal to op2.
8. ord: yields true if both operands are not a QNAN.
9. ueq: yields true if either operand is a QNAN or op1 is equal to op2.
10. ugt: yields true if either operand is a QNAN or op1 is greater than op2.
11. uge: yields true if either operand is a QNAN or op1 is greater than or equal to op2.
12. ult: yields true if either operand is a QNAN or op1 is less than op2.
13. ule: yields true if either operand is a QNAN or op1 is less than or equal to op2.
14. une: yields true if either operand is a QNAN or op1 is not equal to op2.
15. uno: yields true if either operand is a QNAN.
16. true: always yields true, regardless of operands.
Example:
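A few illustrative comparisons (operand values chosen arbitrarily):

```llvm
%r1 = fcmp oeq float 4.0, 5.0         ; yields i1:false
%r2 = fcmp one float 4.0, 5.0         ; yields i1:true
%r3 = fcmp olt float 4.0, 5.0         ; yields i1:true
%r4 = fcmp ueq double 1.0, 2.0        ; yields i1:false
```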
Note that the code generator does not yet support vector types with the fcmp instruction.
'phi' Instruction
Syntax:
<result> = phi <ty> [ <val0>, <label0>], ...
Overview:
The 'phi' instruction is used to implement the φ node in the SSA graph representing the function.
Arguments:
The type of the incoming values is specified with the first type field. After this, the 'phi' instruction takes a
list of pairs as arguments, with one pair for each predecessor basic block of the current block. Only values of
first class type may be used as the value arguments to the PHI node. Only labels may be used as the label
arguments.
There must be no non-phi instructions between the start of a basic block and the PHI instructions: i.e. PHI
instructions must be first in a basic block.
For the purposes of the SSA form, the use of each incoming value is deemed to occur on the edge from the
corresponding predecessor block to the current block (but after any definition of an 'invoke' instruction's
return value on the same edge).
Semantics:
At runtime, the 'phi' instruction logically takes on the value specified by the pair corresponding to the
predecessor basic block that executed just prior to the current block.
Example:
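A sketch of a counting loop (labels and names illustrative):

```llvm
Loop:       ; Infinite loop that counts from 0 on up...
  %indvar = phi i32 [ 0, %LoopHeader ], [ %nextindvar, %Loop ]
  %nextindvar = add i32 %indvar, 1
  br label %Loop
```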
'select' Instruction
Syntax:
<result> = select selty <cond>, <ty> <val1>, <ty> <val2>   ; yields ty

selty is either i1 or {<N x i1>}
Overview:
The 'select' instruction is used to choose one value based on a condition, without branching.
Arguments:
The 'select' instruction requires an 'i1' value or a vector of 'i1' values indicating the condition, and two
values of the same first class type. If val1/val2 are vectors and the condition is a scalar, then entire vectors
are selected, not individual elements.
Semantics:
If the condition is an i1 and it evaluates to 1, the instruction returns the first value argument; otherwise, it
returns the second value argument.
If the condition is a vector of i1, then the value arguments must be vectors of the same size, and the selection
is done element by element.
Example:
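A minimal sketch (values illustrative):

```llvm
%X = select i1 true, i8 17, i8 42     ; yields i8:17
```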
Note that the code generator does not yet support conditions with vector type.
'call' Instruction
Syntax:
<result> = [tail] call [cconv] [ret attrs] <ty> [<fnty>*] <fnptrval>(<function args>) [fn attrs]
Overview:
The 'call' instruction represents a simple function call.
Arguments:
1. The optional "tail" marker indicates that the callee function does not access any allocas or varargs in
the caller. Note that calls may be marked "tail" even if they do not occur before a ret instruction. If
the "tail" marker is present, the function call is eligible for tail call optimization, but might not in fact
be optimized into a jump. The code generator may optimize calls marked "tail" with either 1)
automatic sibling call optimization when the caller and callee have matching signatures, or 2) forced
tail call optimization when the following extra requirements are met:
♦ Caller and callee both have the calling convention fastcc.
♦ The call is in tail position (ret immediately follows call and ret uses value of call or is void).
♦ Option -tailcallopt is enabled, or llvm::GuaranteedTailCallOpt is true.
♦ Platform specific constraints are met.
2. The optional "cconv" marker indicates which calling convention the call should use. If none is
specified, the call defaults to using C calling conventions. The calling convention of the call must
match the calling convention of the target function, or else the behavior is undefined.
3. The optional Parameter Attributes list for return values. Only 'zeroext', 'signext', and 'inreg'
attributes are valid here.
4. 'ty': the type of the call instruction itself which is also the type of the return value. Functions that
return no value are marked void.
5. 'fnty': shall be the signature of the pointer to function value being invoked. The argument types
must match the types implied by this signature. This type can be omitted if the function is not varargs
and if the function type does not return a pointer to a function.
6. 'fnptrval': An LLVM value containing a pointer to a function to be invoked. In most cases, this is
a direct function invocation, but indirect calls are just as possible, calling an arbitrary pointer to
function value.
7. 'function args': argument list whose types match the function signature argument types and
parameter attributes. All arguments must be of first class type. If the function signature indicates the
function accepts a variable number of arguments, the extra arguments can be specified.
8. The optional function attributes list. Only 'noreturn', 'nounwind', 'readonly' and 'readnone'
attributes are valid here.
Semantics:
The 'call' instruction is used to cause control flow to transfer to a specified function, with its incoming
arguments bound to the specified values. Upon a 'ret' instruction in the called function, control flow
continues with the instruction after the function call, and the return value of the function is bound to the result
argument.
Example:
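A few illustrative calls (the callees @test, @printf, and @foo are assumed to be declared elsewhere):

```llvm
%retval = call i32 @test(i32 %argc)
call i32 (i8*, ...)* @printf(i8* %msg, i32 12, i8 42)      ; yields i32
%X = tail call i32 @foo()                                  ; yields i32
%Y = tail call fastcc i32 @foo()                           ; yields i32
```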
LLVM treats calls to some functions with names and arguments that match the standard C99 library as being the
C99 library functions, and may perform optimizations or generate code for them under that assumption. This
is something we'd like to change in the future to provide better support for freestanding environments and
non-C-based languages.
'va_arg' Instruction
Syntax:
<resultval> = va_arg <va_list*> <arglist>, <argty>
Overview:
The 'va_arg' instruction is used to access arguments passed through the "variable argument" area of a
function call. It is used to implement the va_arg macro in C.
Arguments:
This instruction takes a va_list* value and the type of the argument. It returns a value of the specified
argument type and increments the va_list to point to the next argument. The actual type of va_list is
target specific.
Semantics:
The 'va_arg' instruction loads an argument of the specified type from the specified va_list and causes the
va_list to point to the next argument. For more information, see the variable argument handling Intrinsic
Functions.
It is legal for this instruction to be called in a function which does not take a variable number of arguments,
for example, the vfprintf function.
va_arg is an LLVM instruction instead of an intrinsic function because it takes a type as an argument.
Example:
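A minimal sketch (the va_list pointer %ap is assumed to have been initialized with llvm.va_start):

```llvm
%tmp = va_arg i8** %ap, i32           ; reads the next i32 out of the va_list %ap
```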
Note that the code generator does not yet fully support va_arg on many targets. Also, it does not currently
support va_arg with aggregate types on any target.
Intrinsic Functions
LLVM supports the notion of an "intrinsic function". These functions have well known names and semantics
and are required to follow certain restrictions. Overall, these intrinsics represent an extension mechanism for
the LLVM language that does not require changing all of the transformations in LLVM when adding to the
language (or the bitcode reader/writer, the parser, etc...).
Intrinsic function names must all start with an "llvm." prefix. This prefix is reserved in LLVM for intrinsic
names; thus, function names may not begin with this prefix. Intrinsic functions must always be external
functions: you cannot define the body of intrinsic functions. Intrinsic functions may only be used in call or
invoke instructions: it is illegal to take the address of an intrinsic function. Additionally, because intrinsic
functions are part of the LLVM language, it is required if any are added that they be documented here.
Some intrinsic functions can be overloaded, i.e., the intrinsic represents a family of functions that perform the
same operation but on different data types. Because LLVM can represent over 8 million different integer
types, overloading is used commonly to allow an intrinsic function to operate on any integer type. One or
more of the argument types or the result type can be overloaded to accept any integer type. Argument types
may also be defined as exactly matching a previous argument's type or the result type. This allows an intrinsic
function which accepts multiple arguments, but needs all of them to be of the same type, to only be
overloaded with respect to a single argument or the result.
Overloaded intrinsics have the names of their overloaded argument types encoded into their function names,
each preceded by a period. Only those types which are overloaded result in a name suffix. Arguments whose
type is matched against another type do not. For example, the llvm.ctpop function can take an integer of
any width and returns an integer of exactly the same integer width. This leads to a family of functions such as
i8 @llvm.ctpop.i8(i8 %val) and i29 @llvm.ctpop.i29(i29 %val). Only one type, the
return type, is overloaded, and only one type suffix is required. Because the argument's type is matched
against the return type, it does not require its own name suffix.
To learn how to add an intrinsic function, please see the Extending LLVM Guide.
All of these functions operate on arguments that use a target-specific value type "va_list". The LLVM
assembly language reference manual does not define what this type is, so all transformations should be
prepared to handle these functions regardless of the type used.
This example shows how the va_arg instruction and the variable argument handling intrinsic functions are
used.
'llvm.va_start' Intrinsic
Syntax:
Overview:
The 'llvm.va_start' intrinsic initializes *<arglist> for subsequent use by va_arg.
Arguments:
Semantics:
The 'llvm.va_start' intrinsic works just like the va_start macro available in C. In a target-dependent
way, it initializes the va_list element to which the argument points, so that the next call to va_arg will
produce the first variable argument passed to the function. Unlike the C va_start macro, this intrinsic does
not need to know the last argument of the function as the compiler can figure that out.
'llvm.va_end' Intrinsic
Syntax:
Overview:
The 'llvm.va_end' intrinsic destroys *<arglist>, which has been initialized previously with
llvm.va_start or llvm.va_copy.
Arguments:
Semantics:
The 'llvm.va_end' intrinsic works just like the va_end macro available in C. In a target-dependent way,
it destroys the va_list element to which the argument points. Calls to llvm.va_start and
llvm.va_copy must be matched exactly with calls to llvm.va_end.
'llvm.va_copy' Intrinsic
Syntax:
Overview:
The 'llvm.va_copy' intrinsic copies the current argument position from the source argument list to the
destination argument list.
Arguments:
The first argument is a pointer to a va_list element to initialize. The second argument is a pointer to a
va_list element to copy from.
Semantics:
The 'llvm.va_copy' intrinsic works just like the va_copy macro available in C. In a target-dependent
way, it copies the source va_list element into the destination va_list element. This intrinsic is
necessary because the llvm.va_start intrinsic may be arbitrarily complex and require, for example,
memory allocation.
The garbage collection intrinsics only operate on objects in the generic address space (address space zero).
'llvm.gcroot' Intrinsic
Syntax:
Overview:
The 'llvm.gcroot' intrinsic declares the existence of a GC root to the code generator, and allows some
metadata to be associated with it.
Arguments:
The first argument specifies the address of a stack object that contains the root pointer. The second pointer
(which must be either a constant or a global value address) contains the meta-data to be associated with the
root.
Semantics:
At runtime, a call to this intrinsic stores a null pointer into the "ptrloc" location. At compile-time, the code
generator generates information to allow the runtime to find the pointer at GC safe points. The
'llvm.gcroot' intrinsic may only be used in a function which specifies a GC algorithm.
'llvm.gcread' Intrinsic
Syntax:
Overview:
The 'llvm.gcread' intrinsic identifies reads of references from heap locations, allowing garbage collector
implementations that require read barriers.
Arguments:
The second argument is the address to read from, which should be an address allocated from the garbage
collector. The first object is a pointer to the start of the referenced object, if needed by the language runtime
(otherwise null).
Semantics:
The 'llvm.gcread' intrinsic has the same semantics as a load instruction, but may be replaced with
substantially more complex code by the garbage collector runtime, as needed. The 'llvm.gcread' intrinsic
may only be used in a function which specifies a GC algorithm.
'llvm.gcwrite' Intrinsic
Syntax:
Overview:
The 'llvm.gcwrite' intrinsic identifies writes of references to heap locations, allowing garbage collector
implementations that require write barriers (such as generational or reference counting collectors).
Arguments:
The first argument is the reference to store, the second is the start of the object to store it to, and the third is
the address of the field of Obj to store to. If the runtime does not require a pointer to the object, Obj may be
null.
Semantics:
The 'llvm.gcwrite' intrinsic has the same semantics as a store instruction, but may be replaced with
substantially more complex code by the garbage collector runtime, as needed. The 'llvm.gcwrite' intrinsic
may only be used in a function which specifies a GC algorithm.
'llvm.returnaddress' Intrinsic
Syntax:
Overview:
The 'llvm.returnaddress' intrinsic attempts to compute a target-specific value indicating the return
address of the current function or one of its callers.
Arguments:
The argument to this intrinsic indicates which function to return the address for. Zero indicates the calling
function, one indicates its caller, etc. The argument is required to be a constant integer value.
Semantics:
The 'llvm.returnaddress' intrinsic either returns a pointer indicating the return address of the specified
call frame, or zero if it cannot be identified. The value returned by this intrinsic is likely to be incorrect or 0
for arguments other than zero, so it should only be used for debugging purposes.
Note that calling this intrinsic does not prevent function inlining or other aggressive transformations, so the
value returned may not be that of the obvious source-language caller.
'llvm.frameaddress' Intrinsic
Syntax:
Overview:
The 'llvm.frameaddress' intrinsic attempts to return the target-specific frame pointer value for the
specified stack frame.
Arguments:
The argument to this intrinsic indicates which function to return the frame pointer for. Zero indicates the
calling function, one indicates its caller, etc. The argument is required to be a constant integer value.
Semantics:
The 'llvm.frameaddress' intrinsic either returns a pointer indicating the frame address of the specified
call frame, or zero if it cannot be identified. The value returned by this intrinsic is likely to be incorrect or 0
for arguments other than zero, so it should only be used for debugging purposes.
Note that calling this intrinsic does not prevent function inlining or other aggressive transformations, so the
value returned may not be that of the obvious source-language caller.
'llvm.stacksave' Intrinsic
Syntax:
declare i8 *@llvm.stacksave()
Overview:
The 'llvm.stacksave' intrinsic is used to remember the current state of the function stack, for use with
llvm.stackrestore. This is useful for implementing language features like scoped automatic variable
sized arrays in C99.
Semantics:
This intrinsic returns an opaque pointer value that can be passed to llvm.stackrestore. When an
llvm.stackrestore intrinsic is executed with a value saved from llvm.stacksave, it effectively
restores the state of the stack to the state it was in when the llvm.stacksave intrinsic executed. In
practice, this pops any alloca blocks from the stack that were allocated after the llvm.stacksave was
executed.
'llvm.stackrestore' Intrinsic
Syntax:
Overview:
The 'llvm.stackrestore' intrinsic is used to restore the state of the function stack to the state it was in
when the corresponding llvm.stacksave intrinsic executed. This is useful for implementing language
features like scoped automatic variable sized arrays in C99.
Semantics:
See the description for llvm.stacksave.
'llvm.prefetch' Intrinsic
Syntax:
Overview:
The 'llvm.prefetch' intrinsic is a hint to the code generator to insert a prefetch instruction if supported;
otherwise, it is a noop. Prefetches have no effect on the behavior of the program but can change its
performance characteristics.
Arguments:
address is the address to be prefetched, rw is the specifier determining if the fetch should be for a read (0)
or write (1), and locality is a temporal locality specifier ranging from (0), no locality, to (3), extremely
local (keep in cache). The rw and locality arguments must be constant integers.
Semantics:
This intrinsic does not modify the behavior of the program. In particular, prefetches cannot trap and do not
produce a value. On targets that support this intrinsic, the prefetch can provide hints to the processor cache for
better performance.
'llvm.pcmarker' Intrinsic
Syntax:
Overview:
The 'llvm.pcmarker' intrinsic is a method to export a Program Counter (PC) in a region of code to
simulators and other tools. The method is target specific, but it is expected that the marker will use exported
symbols to transmit the PC of the marker. The marker makes no guarantees that it will remain with any
specific instruction after optimizations. It is possible that the presence of a marker will inhibit optimizations.
The intended use is to be inserted after optimizations to allow correlations of simulation runs.
Arguments:
Semantics:
This intrinsic does not modify the behavior of the program. Backends that do not support this intrinsic may
ignore it.
'llvm.readcyclecounter' Intrinsic
Syntax:
Overview:
The 'llvm.readcyclecounter' intrinsic provides access to the cycle counter register (or similar low
latency, high accuracy clocks) on those targets that support it. On X86, it should map to RDTSC. On Alpha, it
should map to RPCC. As the backing counters overflow quickly (on the order of 9 seconds on Alpha), this
should only be used for small timings.
Semantics:
When directly supported, reading the cycle counter should not modify any memory. Implementations are allowed to return either an application-specific value or a system-wide value. On backends without support, this is lowered to a constant 0.
Standard C Library Intrinsics
LLVM provides intrinsics for a few important standard C library functions. These intrinsics allow source-language front-ends to pass information about the alignment of the pointer arguments to the code generator, providing opportunity for more efficient code generation.
'llvm.memcpy' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.memcpy on any integer bit width. Not all targets support
all bit widths however.
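The declarations were lost in extraction; for the common overloads they presumably looked like:

```llvm
declare void @llvm.memcpy.i32(i8* <dest>, i8* <src>, i32 <len>, i32 <align>)
declare void @llvm.memcpy.i64(i8* <dest>, i8* <src>, i64 <len>, i32 <align>)
```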
Overview:
The 'llvm.memcpy.*' intrinsics copy a block of memory from the source location to the destination
location.
Note that, unlike the standard libc function, the llvm.memcpy.* intrinsics do not return a value and take an extra alignment argument.
Arguments:
The first argument is a pointer to the destination, the second is a pointer to the source. The third argument is
an integer argument specifying the number of bytes to copy, and the fourth argument is the alignment of the
source and destination locations.
If the call to this intrinsic has an alignment value that is not 0 or 1, then the caller guarantees that both the
source and destination pointers are aligned to that boundary.
Semantics:
The 'llvm.memcpy.*' intrinsics copy a block of memory from the source location to the destination
location, which are not allowed to overlap. It copies "len" bytes of memory over. If the argument is known to
be aligned to some boundary, this can be specified as the fourth argument, otherwise it should be set to 0 or 1.
'llvm.memmove' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.memmove on any integer bit width. Not all targets support
all bit widths however.
declare void @llvm.memmove.i64(i8 * <dest>, i8 * <src>,
i64 <len>, i32 <align>)
Overview:
The 'llvm.memmove.*' intrinsics move a block of memory from the source location to the destination
location. It is similar to the 'llvm.memcpy' intrinsic but allows the two memory locations to overlap.
Note that, unlike the standard libc function, the llvm.memmove.* intrinsics do not return a value and take an extra alignment argument.
Arguments:
The first argument is a pointer to the destination, the second is a pointer to the source. The third argument is
an integer argument specifying the number of bytes to copy, and the fourth argument is the alignment of the
source and destination locations.
If the call to this intrinsic has an alignment value that is not 0 or 1, then the caller guarantees that the source
and destination pointers are aligned to that boundary.
Semantics:
The 'llvm.memmove.*' intrinsics copy a block of memory from the source location to the destination
location, which may overlap. It copies "len" bytes of memory over. If the argument is known to be aligned to
some boundary, this can be specified as the fourth argument, otherwise it should be set to 0 or 1.
'llvm.memset.*' Intrinsics
Syntax:
This is an overloaded intrinsic. You can use llvm.memset on any integer bit width. Not all targets support all
bit widths however.
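The declarations are missing here; reconstructed for the common overloads, they presumably read:

```llvm
declare void @llvm.memset.i32(i8* <dest>, i8 <val>, i32 <len>, i32 <align>)
declare void @llvm.memset.i64(i8* <dest>, i8 <val>, i64 <len>, i32 <align>)
```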
Overview:
The 'llvm.memset.*' intrinsics fill a block of memory with a particular byte value.
Note that, unlike the standard libc function, the llvm.memset intrinsic does not return a value, and takes an
extra alignment argument.
Arguments:
The first argument is a pointer to the destination to fill, the second is the byte value to fill it with, the third argument is an integer argument specifying the number of bytes to fill, and the fourth argument is the known alignment of the destination location.
If the call to this intrinsic has an alignment value that is not 0 or 1, then the caller guarantees that the
destination pointer is aligned to that boundary.
Semantics:
The 'llvm.memset.*' intrinsics fill "len" bytes of memory starting at the destination location. If the
argument is known to be aligned to some boundary, this can be specified as the fourth argument, otherwise it
should be set to 0 or 1.
'llvm.sqrt.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.sqrt on any floating point or vector of floating point
type. Not all targets support all types however.
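The declarations were dropped; representative overloads (the original also listed extended-precision floating point types) presumably read:

```llvm
declare float  @llvm.sqrt.f32(float %Val)
declare double @llvm.sqrt.f64(double %Val)
```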
Overview:
The 'llvm.sqrt' intrinsics return the sqrt of the specified operand, returning the same value as the libm
'sqrt' functions would. Unlike sqrt in libm, however, llvm.sqrt has undefined behavior for negative
numbers other than -0.0 (which allows for better optimization, because there is no need to worry about errno
being set). llvm.sqrt(-0.0) is defined to return -0.0 like IEEE sqrt.
Arguments:
The argument and return value are floating point numbers of the same type.
Semantics:
This function returns the sqrt of the specified operand if it is a nonnegative floating point number.
'llvm.powi.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.powi on any floating point or vector of floating point
type. Not all targets support all types however.
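The missing declarations likely resembled the following (parameter names illustrative):

```llvm
declare float  @llvm.powi.f32(float %Val, i32 %power)
declare double @llvm.powi.f64(double %Val, i32 %power)
```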
Overview:
The 'llvm.powi.*' intrinsics return the first operand raised to the specified (positive or negative) power.
The order of evaluation of multiplications is not defined. When a vector of floating point type is used, the
second argument remains a scalar integer value.
Arguments:
The second argument is an integer power, and the first is a value to raise to that power.
Semantics:
This function returns the first value raised to the second power with an unspecified sequence of rounding
operations.
'llvm.sin.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.sin on any floating point or vector of floating point type.
Not all targets support all types however.
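Representative declarations, reconstructed from the surrounding text:

```llvm
declare float  @llvm.sin.f32(float %Val)
declare double @llvm.sin.f64(double %Val)
```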
Overview:
The 'llvm.sin.*' intrinsics return the sine of the operand.
Arguments:
The argument and return value are floating point numbers of the same type.
Semantics:
This function returns the sine of the specified operand, returning the same values as the libm sin functions
would, and handles error conditions in the same way.
'llvm.cos.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.cos on any floating point or vector of floating point type.
Not all targets support all types however.
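Representative declarations, reconstructed from the surrounding text:

```llvm
declare float  @llvm.cos.f32(float %Val)
declare double @llvm.cos.f64(double %Val)
```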
Overview:
The 'llvm.cos.*' intrinsics return the cosine of the operand.
Arguments:
The argument and return value are floating point numbers of the same type.
Semantics:
This function returns the cosine of the specified operand, returning the same values as the libm cos functions
would, and handles error conditions in the same way.
'llvm.pow.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.pow on any floating point or vector of floating point type.
Not all targets support all types however.
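Representative declarations (parameter names illustrative):

```llvm
declare float  @llvm.pow.f32(float %Val, float %Power)
declare double @llvm.pow.f64(double %Val, double %Power)
```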
Overview:
The 'llvm.pow.*' intrinsics return the first operand raised to the specified (positive or negative) power.
Arguments:
The second argument is a floating point power, and the first is a value to raise to that power.
Semantics:
This function returns the first value raised to the second power, returning the same values as the libm pow
functions would, and handles error conditions in the same way.
'llvm.bswap.*' Intrinsics
Syntax:
This is an overloaded intrinsic function. You can use bswap on any integer type that is an even number of
bytes (i.e. BitWidth % 16 == 0).
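The declarations were dropped in extraction; they presumably read:

```llvm
declare i16 @llvm.bswap.i16(i16 <id>)
declare i32 @llvm.bswap.i32(i32 <id>)
declare i64 @llvm.bswap.i64(i64 <id>)
```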
Overview:
The 'llvm.bswap' family of intrinsics is used to byte swap integer values with an even number of bytes
(positive multiple of 16 bits). These are useful for performing operations on data that is not in the target's
native byte order.
Semantics:
The llvm.bswap.i16 intrinsic returns an i16 value that has the high and low byte of the input i16
swapped. Similarly, the llvm.bswap.i32 intrinsic returns an i32 value that has the four bytes of the input
i32 swapped, so that if the input bytes are numbered 0, 1, 2, 3 then the returned i32 will have its bytes in 3, 2,
1, 0 order. The llvm.bswap.i48, llvm.bswap.i64 and other intrinsics extend this concept to
additional even-byte lengths (6 bytes, 8 bytes and more, respectively).
'llvm.ctpop.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.ctpop on any integer bit width. Not all targets support all bit
widths however.
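The missing declarations for the common overloads presumably read:

```llvm
declare i8  @llvm.ctpop.i8(i8 <src>)
declare i16 @llvm.ctpop.i16(i16 <src>)
declare i32 @llvm.ctpop.i32(i32 <src>)
declare i64 @llvm.ctpop.i64(i64 <src>)
```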
Overview:
The 'llvm.ctpop' family of intrinsics counts the number of bits set in a value.
Arguments:
The only argument is the value to be counted. The argument may be of any integer type. The return type must
match the argument type.
Semantics:
The 'llvm.ctpop' intrinsic counts the 1's in a variable.
'llvm.ctlz.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.ctlz on any integer bit width. Not all targets support all
bit widths however.
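The missing declarations for the common overloads presumably read:

```llvm
declare i8  @llvm.ctlz.i8(i8 <src>)
declare i16 @llvm.ctlz.i16(i16 <src>)
declare i32 @llvm.ctlz.i32(i32 <src>)
declare i64 @llvm.ctlz.i64(i64 <src>)
```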
Overview:
The 'llvm.ctlz' family of intrinsic functions counts the number of leading zeros in a variable.
Arguments:
The only argument is the value to be counted. The argument may be of any integer type. The return type must
match the argument type.
Semantics:
The 'llvm.ctlz' intrinsic counts the leading (most significant) zeros in a variable. If the src == 0 then the
result is the size in bits of the type of src. For example, llvm.ctlz(i32 2) = 30.
'llvm.cttz.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.cttz on any integer bit width. Not all targets support all
bit widths however.
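The missing declarations for the common overloads presumably read:

```llvm
declare i8  @llvm.cttz.i8(i8 <src>)
declare i16 @llvm.cttz.i16(i16 <src>)
declare i32 @llvm.cttz.i32(i32 <src>)
declare i64 @llvm.cttz.i64(i64 <src>)
```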
Overview:
The 'llvm.cttz' family of intrinsic functions counts the number of trailing zeros.
Arguments:
The only argument is the value to be counted. The argument may be of any integer type. The return type must
match the argument type.
Semantics:
The 'llvm.cttz' intrinsic counts the trailing (least significant) zeros in a variable. If the src == 0 then the
result is the size in bits of the type of src. For example, llvm.cttz(2) = 1.
'llvm.sadd.with.overflow.*' Intrinsics
Syntax:
This is an overloaded intrinsic. You can use llvm.sadd.with.overflow on any integer bit width.
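The declarations were lost in extraction; for the common overloads they presumably read:

```llvm
declare {i16, i1} @llvm.sadd.with.overflow.i16(i16 %a, i16 %b)
declare {i32, i1} @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
declare {i64, i1} @llvm.sadd.with.overflow.i64(i64 %a, i64 %b)
```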
Overview:
The 'llvm.sadd.with.overflow' family of intrinsic functions perform a signed addition of the two
arguments, and indicate whether an overflow occurred during the signed summation.
Arguments:
The arguments (%a and %b) and the first element of the result structure may be of integer types of any bit
width, but they must have the same bit width. The second element of the result structure must be of type i1.
%a and %b are the two values that will undergo signed addition.
Semantics:
The 'llvm.sadd.with.overflow' family of intrinsic functions perform a signed addition of the two
variables. They return a structure — the first element of which is the signed summation, and the second
element of which is a bit specifying if the signed summation resulted in an overflow.
Examples:
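The example code was dropped in extraction; a reconstructed sketch of the usual pattern (labels %overflow and %normal are illustrative) is:

```llvm
%res = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
%sum = extractvalue {i32, i1} %res, 0
%obit = extractvalue {i32, i1} %res, 1
br i1 %obit, label %overflow, label %normal
```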
'llvm.uadd.with.overflow.*' Intrinsics
Syntax:
This is an overloaded intrinsic. You can use llvm.uadd.with.overflow on any integer bit width.
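A representative declaration for the missing syntax, by analogy with the signed variant:

```llvm
declare {i32, i1} @llvm.uadd.with.overflow.i32(i32 %a, i32 %b)
```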
Overview:
The 'llvm.uadd.with.overflow' family of intrinsic functions perform an unsigned addition of the two
arguments, and indicate whether a carry occurred during the unsigned summation.
Arguments:
The arguments (%a and %b) and the first element of the result structure may be of integer types of any bit
width, but they must have the same bit width. The second element of the result structure must be of type i1.
%a and %b are the two values that will undergo unsigned addition.
Semantics:
The 'llvm.uadd.with.overflow' family of intrinsic functions perform an unsigned addition of the two
arguments. They return a structure — the first element of which is the sum, and the second element of which
is a bit specifying if the unsigned summation resulted in a carry.
Examples:
'llvm.ssub.with.overflow.*' Intrinsics
Syntax:
This is an overloaded intrinsic. You can use llvm.ssub.with.overflow on any integer bit width.
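A representative declaration for the missing syntax:

```llvm
declare {i32, i1} @llvm.ssub.with.overflow.i32(i32 %a, i32 %b)
```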
Overview:
The 'llvm.ssub.with.overflow' family of intrinsic functions perform a signed subtraction of the two
arguments, and indicate whether an overflow occurred during the signed subtraction.
Arguments:
The arguments (%a and %b) and the first element of the result structure may be of integer types of any bit
width, but they must have the same bit width. The second element of the result structure must be of type i1.
%a and %b are the two values that will undergo signed subtraction.
Semantics:
The 'llvm.ssub.with.overflow' family of intrinsic functions perform a signed subtraction of the two
arguments. They return a structure — the first element of which is the subtraction, and the second element of
which is a bit specifying if the signed subtraction resulted in an overflow.
Examples:
'llvm.usub.with.overflow.*' Intrinsics
Syntax:
This is an overloaded intrinsic. You can use llvm.usub.with.overflow on any integer bit width.
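A representative declaration for the missing syntax:

```llvm
declare {i32, i1} @llvm.usub.with.overflow.i32(i32 %a, i32 %b)
```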
Overview:
The 'llvm.usub.with.overflow' family of intrinsic functions perform an unsigned subtraction of the two arguments, and indicate whether an overflow occurred during the unsigned subtraction.
Arguments:
The arguments (%a and %b) and the first element of the result structure may be of integer types of any bit
width, but they must have the same bit width. The second element of the result structure must be of type i1.
%a and %b are the two values that will undergo unsigned subtraction.
Semantics:
The 'llvm.usub.with.overflow' family of intrinsic functions perform an unsigned subtraction of the two arguments. They return a structure, the first element of which is the subtraction, and the second element of which is a bit specifying if the unsigned subtraction resulted in an overflow.
Examples:
'llvm.smul.with.overflow.*' Intrinsics
Syntax:
This is an overloaded intrinsic. You can use llvm.smul.with.overflow on any integer bit width.
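A representative declaration for the missing syntax:

```llvm
declare {i32, i1} @llvm.smul.with.overflow.i32(i32 %a, i32 %b)
```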
Overview:
The 'llvm.smul.with.overflow' family of intrinsic functions perform a signed multiplication of the two arguments, and indicate whether an overflow occurred during the signed multiplication.
Arguments:
The arguments (%a and %b) and the first element of the result structure may be of integer types of any bit
width, but they must have the same bit width. The second element of the result structure must be of type i1.
%a and %b are the two values that will undergo signed multiplication.
Semantics:
The 'llvm.smul.with.overflow' family of intrinsic functions perform a signed multiplication of the two arguments. They return a structure, the first element of which is the multiplication, and the second element of which is a bit specifying if the signed multiplication resulted in an overflow.
Examples:
'llvm.umul.with.overflow.*' Intrinsics
Syntax:
This is an overloaded intrinsic. You can use llvm.umul.with.overflow on any integer bit width.
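A representative declaration for the missing syntax:

```llvm
declare {i32, i1} @llvm.umul.with.overflow.i32(i32 %a, i32 %b)
```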
Overview:
The 'llvm.umul.with.overflow' family of intrinsic functions perform an unsigned multiplication of the two arguments, and indicate whether an overflow occurred during the unsigned multiplication.
Arguments:
The arguments (%a and %b) and the first element of the result structure may be of integer types of any bit
width, but they must have the same bit width. The second element of the result structure must be of type i1.
%a and %b are the two values that will undergo unsigned multiplication.
Semantics:
The 'llvm.umul.with.overflow' family of intrinsic functions perform an unsigned multiplication of the two arguments. They return a structure, the first element of which is the multiplication, and the second element of which is a bit specifying if the unsigned multiplication resulted in an overflow.
Examples:
Debugger Intrinsics
The LLVM debugger intrinsics (which all start with llvm.dbg. prefix), are described in the LLVM Source
Level Debugging document.
Trampoline Intrinsic
This intrinsic makes it possible to excise one parameter, marked with the nest attribute, from a function. The
result is a callable function pointer lacking the nest parameter - the caller does not need to provide a value for
it. Instead, the value to use is stored in advance in a "trampoline", a block of memory usually allocated on the
stack, which also contains code to splice the nest value into the argument list. This is used to implement the
GCC nested function address extension.
For example, if the function is i32 f(i8* nest %c, i32 %x, i32 %y) then the resulting function
pointer has signature i32 (i32, i32)*. It can be created as follows:
%tramp = alloca [10 x i8], align 4 ; size and alignment only correct for X86
%tramp1 = getelementptr [10 x i8]* %tramp, i32 0, i32 0
%p = call i8* @llvm.init.trampoline( i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval )
%fp = bitcast i8* %p to i32 (i32, i32)*
The call %val = call i32 %fp( i32 %x, i32 %y ) is then equivalent to %val = call i32
%f( i8* %nval, i32 %x, i32 %y ).
'llvm.init.trampoline' Intrinsic
Syntax:
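The declaration is missing from this extraction; it presumably read:

```llvm
declare i8* @llvm.init.trampoline(i8* <tramp>, i8* <func>, i8* <nval>)
```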
Overview:
This fills the memory pointed to by tramp with code and returns a function pointer suitable for executing it.
Arguments:
The llvm.init.trampoline intrinsic takes three arguments, all pointers. The tramp argument must
point to a sufficiently large and sufficiently aligned block of memory; this memory is written to by the
intrinsic. Note that the size and the alignment are target-specific - LLVM currently provides no portable way
of determining them, so a front-end that generates this intrinsic needs to have some target-specific knowledge.
The func argument must hold a function bitcast to an i8*.
Semantics:
The block of memory pointed to by tramp is filled with target dependent code, turning it into a function. A
pointer to this function is returned, but needs to be bitcast to an appropriate function pointer type before being
called. The new function's signature is the same as that of func with any arguments marked with the nest
attribute removed. At most one such nest argument is allowed, and it must be of pointer type. Calling the
new function is equivalent to calling func with the same argument list, but with nval used for the missing
nest argument. If, after calling llvm.init.trampoline, the memory pointed to by tramp is modified,
then the effect of any later call to the returned function pointer is undefined.
Atomic Operations and Synchronization Intrinsics
These intrinsic functions expand the "universal IR" of LLVM to represent hardware constructs for atomic operations and memory synchronization. They do not form an API such as high-level threading libraries, software transaction memory systems, atomic primitives, and intrinsic functions as found in BSD, GNU libc, atomic_ops, APR, and other system and application libraries. The hardware interface provided by LLVM should allow a clean implementation of all of these APIs and parallel programming models. No one model or paradigm should be selected above others unless the hardware itself ubiquitously does so.
'llvm.memory.barrier' Intrinsic
Syntax:
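The declaration is missing here; matching the argument list described below, it presumably read:

```llvm
declare void @llvm.memory.barrier(i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device>)
```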
Overview:
The llvm.memory.barrier intrinsic guarantees ordering between specific pairs of memory access types.
Arguments:
The llvm.memory.barrier intrinsic requires five boolean arguments. The first four arguments enable a specific barrier as listed below. The fifth argument specifies that the barrier applies to io or device or uncached memory.
• ll: load-load barrier
• ls: load-store barrier
• sl: store-load barrier
• ss: store-store barrier
• device: barrier applies to device and uncached memory also.
Semantics:
This intrinsic causes the system to enforce some ordering constraints upon the loads and stores of the
program. This barrier does not indicate when any events will occur, it only enforces an order in which they
occur. For any of the specified pairs of load and store operations (f.ex. load-load, or store-load), all of the first
operations preceding the barrier will complete before any of the second operations succeeding the barrier
begin. Specifically, the semantics for each pairing are as follows:
• ll: All loads before the barrier must complete before any load after the barrier begins.
• ls: All loads before the barrier must complete before any store after the barrier begins.
• ss: All stores before the barrier must complete before any store after the barrier begins.
• sl: All stores before the barrier must complete before any load after the barrier begins.
These semantics are applied with a logical "and" behavior when more than one is enabled in a single memory
barrier intrinsic.
Backends may implement stronger barriers than those requested when they do not support as fine grained a
barrier as requested. Some architectures do not need all types of barriers and on such architectures, these
become noops.
Example:
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, i32* %ptr
'llvm.atomic.cmp.swap.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.atomic.cmp.swap on any integer bit width and for
different address spaces. Not all targets support all bit widths however.
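The declarations were dropped; for the common overloads they presumably read:

```llvm
declare i8  @llvm.atomic.cmp.swap.i8.p0i8(i8* <ptr>, i8 <cmp>, i8 <val>)
declare i16 @llvm.atomic.cmp.swap.i16.p0i16(i16* <ptr>, i16 <cmp>, i16 <val>)
declare i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* <ptr>, i32 <cmp>, i32 <val>)
declare i64 @llvm.atomic.cmp.swap.i64.p0i64(i64* <ptr>, i64 <cmp>, i64 <val>)
```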
Overview:
This loads a value in memory and compares it to a given value. If they are equal, it stores a new value into the
memory.
Arguments:
The llvm.atomic.cmp.swap intrinsic takes three arguments. The result as well as both cmp and val
must be integer values with the same bit width. The ptr argument must be a pointer to a value of this integer
type. While any bit width integer may be used, targets may only lower representations they support in
hardware.
Semantics:
This entire intrinsic must be executed atomically. It first loads the value in memory pointed to by ptr and
compares it with the value cmp. If they are equal, val is stored into the memory. The loaded value is yielded
in all cases. This provides the equivalent of an atomic compare-and-swap operation within the SSA
framework.
Examples:
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, i32* %ptr
'llvm.atomic.swap.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.atomic.swap on any integer bit width. Not all targets
support all bit widths however.
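A representative declaration for the missing syntax (other bit widths are analogous):

```llvm
declare i32 @llvm.atomic.swap.i32.p0i32(i32* <ptr>, i32 <val>)
```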
Overview:
This intrinsic loads the value stored in memory at ptr and yields it. It then stores val into the memory at ptr.
Arguments:
The llvm.atomic.swap intrinsic takes two arguments. Both the val argument and the result must be
integers of the same bit width. The first argument, ptr, must be a pointer to a value of this integer type. The
targets may only lower integer representations they support.
Semantics:
This intrinsic loads the value pointed to by ptr, yields it, and stores val back into ptr atomically. This
provides the equivalent of an atomic swap operation within the SSA framework.
Examples:
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, i32* %ptr
'llvm.atomic.load.add.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.atomic.load.add on any integer bit width. Not all
targets support all bit widths however.
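A representative declaration for the missing syntax (other bit widths are analogous):

```llvm
declare i32 @llvm.atomic.load.add.i32.p0i32(i32* <ptr>, i32 <delta>)
```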
Overview:
This intrinsic adds delta to the value stored in memory at ptr. It yields the original value at ptr.
Arguments:
The intrinsic takes two arguments, the first a pointer to an integer value and the second an integer value. The
result is also an integer value. These integer types can have any bit width, but they must all have the same bit
width. The targets may only lower integer representations they support.
Semantics:
This intrinsic does a series of operations atomically. It first loads the value stored at ptr. It then adds delta,
stores the result to ptr. It yields the original value stored at ptr.
Examples:
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, i32* %ptr
%result1 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 4 )
; yields {i32}:result1 = 4
%result2 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 2 )
; yields {i32}:result2 = 8
%result3 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 5 )
; yields {i32}:result3 = 10
%memval1 = load i32* %ptr ; yields {i32}:memval1 = 15
'llvm.atomic.load.sub.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use llvm.atomic.load.sub on any integer bit width and for
different address spaces. Not all targets support all bit widths however.
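A representative declaration for the missing syntax (other bit widths and address spaces are analogous):

```llvm
declare i32 @llvm.atomic.load.sub.i32.p0i32(i32* <ptr>, i32 <delta>)
```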
Overview:
This intrinsic subtracts delta from the value stored in memory at ptr. It yields the original value at ptr.
Arguments:
The intrinsic takes two arguments, the first a pointer to an integer value and the second an integer value. The
result is also an integer value. These integer types can have any bit width, but they must all have the same bit
width. The targets may only lower integer representations they support.
Semantics:
This intrinsic does a series of operations atomically. It first loads the value stored at ptr. It then subtracts
delta, stores the result to ptr. It yields the original value stored at ptr.
Examples:
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 8, i32* %ptr
%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 4 )
; yields {i32}:result1 = 8
%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 2 )
; yields {i32}:result2 = 4
%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 5 )
; yields {i32}:result3 = 2
%memval1 = load i32* %ptr ; yields {i32}:memval1 = -3
'llvm.atomic.load.and.*' Intrinsic
'llvm.atomic.load.nand.*' Intrinsic
'llvm.atomic.load.or.*' Intrinsic
'llvm.atomic.load.xor.*' Intrinsic
Syntax:
declare i8 @llvm.atomic.load.and.i8.p0i8( i8* <ptr>, i8 <delta> )
declare i16 @llvm.atomic.load.and.i16.p0i16( i16* <ptr>, i16 <delta> )
declare i32 @llvm.atomic.load.and.i32.p0i32( i32* <ptr>, i32 <delta> )
declare i64 @llvm.atomic.load.and.i64.p0i64( i64* <ptr>, i64 <delta> )
Overview:
These intrinsics apply a bitwise operation (and, nand, or, xor) of delta to the value stored in memory at ptr. They yield the original value at ptr.
Arguments:
These intrinsics take two arguments, the first a pointer to an integer value and the second an integer value.
The result is also an integer value. These integer types can have any bit width, but they must all have the same
bit width. The targets may only lower integer representations they support.
Semantics:
These intrinsics perform a series of operations atomically. They first load the value stored at ptr, then apply the bitwise operation with delta and store the result to ptr. They yield the original value stored at ptr.
Examples:
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 0x0F0F, i32* %ptr
%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32( i32* %ptr, i32 0xFF )
; yields {i32}:result0 = 0x0F0F
%result1 = call i32 @llvm.atomic.load.and.i32.p0i32( i32* %ptr, i32 0xFF )
; yields {i32}:result1 = 0xFFFFFFF0
%result2 = call i32 @llvm.atomic.load.or.i32.p0i32( i32* %ptr, i32 0x0F )
; yields {i32}:result2 = 0xF0
%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32( i32* %ptr, i32 0x0F )
; yields {i32}:result3 = 0xFF
%memval1 = load i32* %ptr ; yields {i32}:memval1 = 0xF0
'llvm.atomic.load.max.*' Intrinsic
'llvm.atomic.load.min.*' Intrinsic
'llvm.atomic.load.umax.*' Intrinsic
'llvm.atomic.load.umin.*' Intrinsic
Syntax:
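The declarations are missing here; representative i32 overloads (other widths are analogous) presumably read:

```llvm
declare i32 @llvm.atomic.load.max.i32.p0i32(i32* <ptr>, i32 <delta>)
declare i32 @llvm.atomic.load.min.i32.p0i32(i32* <ptr>, i32 <delta>)
declare i32 @llvm.atomic.load.umax.i32.p0i32(i32* <ptr>, i32 <delta>)
declare i32 @llvm.atomic.load.umin.i32.p0i32(i32* <ptr>, i32 <delta>)
```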
Overview:
These intrinsics take the signed or unsigned minimum or maximum of delta and the value stored in memory at ptr. They yield the original value at ptr.
Arguments:
These intrinsics take two arguments, the first a pointer to an integer value and the second an integer value.
The result is also an integer value. These integer types can have any bit width, but they must all have the same
bit width. The targets may only lower integer representations they support.
Semantics:
These intrinsics perform a series of operations atomically. They first load the value stored at ptr, then take the signed or unsigned min or max of delta and that value and store the result to ptr. They yield the original value stored at ptr.
Examples:
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 7, i32* %ptr
%result0 = call i32 @llvm.atomic.load.min.i32.p0i32( i32* %ptr, i32 -2 )
; yields {i32}:result0 = 7
%result1 = call i32 @llvm.atomic.load.max.i32.p0i32( i32* %ptr, i32 8 )
; yields {i32}:result1 = -2
%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32( i32* %ptr, i32 10 )
; yields {i32}:result2 = 8
%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32( i32* %ptr, i32 30 )
; yields {i32}:result3 = 8
%memval1 = load i32* %ptr ; yields {i32}:memval1 = 30
'llvm.lifetime.start' Intrinsic
Syntax:
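The declaration is missing from this extraction; it presumably read:

```llvm
declare void @llvm.lifetime.start(i64 <size>, i8* nocapture <ptr>)
```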
Overview:
The 'llvm.lifetime.start' intrinsic specifies the start of a memory object's lifetime.
Arguments:
The first argument is a constant integer representing the size of the object, or -1 if it is variable sized. The
second argument is a pointer to the object.
Semantics:
This intrinsic indicates that before this point in the code, the value of the memory pointed to by ptr is dead.
This means that it is known to never be used and has an undefined value. A load from the pointer that
precedes this intrinsic can be replaced with 'undef'.
'llvm.lifetime.end' Intrinsic
Syntax:
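The declaration is missing from this extraction; it presumably read:

```llvm
declare void @llvm.lifetime.end(i64 <size>, i8* nocapture <ptr>)
```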
Overview:
The 'llvm.lifetime.end' intrinsic specifies the end of a memory object's lifetime.
Arguments:
The first argument is a constant integer representing the size of the object, or -1 if it is variable sized. The
second argument is a pointer to the object.
Semantics:
This intrinsic indicates that after this point in the code, the value of the memory pointed to by ptr is dead.
This means that it is known to never be used and has an undefined value. Any stores into the memory object
following this intrinsic may be removed as dead.
'llvm.invariant.start' Intrinsic
Syntax:
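The declaration is missing here; it presumably read:

```llvm
declare {}* @llvm.invariant.start(i64 <size>, i8* nocapture <ptr>)
```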
Overview:
The 'llvm.invariant.start' intrinsic specifies that the contents of a memory object will not change.
Arguments:
The first argument is a constant integer representing the size of the object, or -1 if it is variable sized. The
second argument is a pointer to the object.
Semantics:
This intrinsic indicates that until an llvm.invariant.end that uses the return value, the referenced
memory location is constant and unchanging.
'llvm.invariant.end' Intrinsic
Syntax:
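The declaration is missing here; it presumably read:

```llvm
declare void @llvm.invariant.end({}* <start>, i64 <size>, i8* nocapture <ptr>)
```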
Overview:
The 'llvm.invariant.end' intrinsic specifies that the contents of a memory object are mutable.
Arguments:
The first argument is the matching llvm.invariant.start intrinsic. The second argument is a constant
integer representing the size of the object, or -1 if it is variable sized and the third argument is a pointer to the
object.
Semantics:
This intrinsic indicates that the memory is mutable again.
General Intrinsics
This class of intrinsics is designed to be generic and has no specific purpose.
'llvm.var.annotation' Intrinsic
Syntax:
declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int> )
Overview:
Arguments:
The first argument is a pointer to a value, the second is a pointer to a global string, the third is a pointer to a
global string which is the source file name, and the last argument is the line number.
Semantics:
This intrinsic allows annotation of local variables with arbitrary strings. This can be useful for special purpose
optimizations that want to look for these annotations. These have no other defined use, they are ignored by
code generation and optimization.
'llvm.annotation.*' Intrinsic
Syntax:
This is an overloaded intrinsic. You can use 'llvm.annotation' on any integer bit width.
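Representative declarations for the missing syntax, by analogy with llvm.var.annotation above:

```llvm
declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int>)
declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int>)
```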
Overview:
Arguments:
The first argument is an integer value (result of some expression), the second is a pointer to a global string, the
third is a pointer to a global string which is the source file name, and the last argument is the line number. It
returns the value of the first argument.
Semantics:
This intrinsic allows annotations to be put on arbitrary expressions with arbitrary strings. This can be useful
for special purpose optimizations that want to look for these annotations. These have no other defined use,
they are ignored by code generation and optimization.
'llvm.trap' Intrinsic
Syntax:
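The declaration is missing from this extraction; it presumably read:

```llvm
declare void @llvm.trap()
```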
Overview:
Arguments:
None.
Semantics:
This intrinsic is lowered to the target-dependent trap instruction. If the target does not have a trap instruction, this intrinsic will be lowered to a call of the abort() function.
'llvm.stackprotector' Intrinsic
Syntax:
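The declaration is missing here; matching the two pointer arguments described below, it presumably read:

```llvm
declare void @llvm.stackprotector(i8* <guard>, i8** <slot>)
```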
Overview:
The llvm.stackprotector intrinsic takes the guard and stores it onto the stack at slot. The stack
slot is adjusted to ensure that it is placed on the stack before local variables.
Arguments:
The llvm.stackprotector intrinsic requires two pointer arguments. The first argument is the value loaded from the stack guard @__stack_chk_guard. The second argument is an alloca that has enough space to hold the value of the guard.
Semantics:
This intrinsic causes the prologue/epilogue inserter to force the position of the AllocaInst stack slot to be
before local variables on the stack. This is to ensure that if a local variable on the stack is overwritten, it will
destroy the value of the guard. When the function exits, the guard on the stack is checked against the original
guard. If they're different, then the program aborts by calling the __stack_chk_fail() function.
'llvm.objectsize' Intrinsic
Syntax:
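The declaration listing was dropped in conversion; per the LangRef of this period it reads approximately (operand names illustrative):

```llvm
declare i32 @llvm.objectsize.i32(i8* <object>, i1 <type>)
declare i64 @llvm.objectsize.i64(i8* <object>, i1 <type>)
```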
Overview:
Arguments:
The llvm.objectsize intrinsic takes two arguments. The first argument is a pointer to or into the
object. The second argument is a boolean 0 or 1. This argument determines whether you want the
maximum (0) or minimum (1) bytes remaining. This needs to be a literal 0 or 1, variables are not allowed.
Semantics:
The llvm.objectsize intrinsic is lowered to either a constant representing the size of the object
concerned, or i32/i64 -1 or 0 (depending on the type argument) if the size cannot be determined at
compile time.
Chris Lattner
The LLVM Compiler Infrastructure
Last modified: $Date: 2010-03-11 18:12:20 -0600 (Thu, 11 Mar 2010) $
The Often Misunderstood GEP Instruction
1. Introduction
2. Address Computation
1. Why is the extra 0 index required?
2. What is dereferenced by GEP?
3. Why can you index through the first pointer but not subsequent ones?
4. Why don't GEP x,0,0,1 and GEP x,1 alias?
5. Why do GEP x,1,0,0 and GEP x,1 alias?
6. Can GEP index into vector elements?
7. Can GEP index into unions?
8. What effect do address spaces have on GEPs?
9. How is GEP different from ptrtoint, arithmetic, and inttoptr?
10. I'm writing a backend for a target which needs custom lowering for GEP. How do I do this?
11. How does VLA addressing work with GEPs?
3. Rules
1. What happens if an array index is out of bounds?
2. Can array indices be negative?
3. Can I compare two values computed with GEPs?
4. Can I do GEP with a different pointer type than the type of the underlying object?
5. Can I cast an object's address to integer and add it to null?
6. Can I compute the distance between two objects, and add that value to one address to
compute the other address?
7. Can I do type-based alias analysis on LLVM IR?
8. What happens if a GEP computation overflows?
9. How can I tell if my front-end is following the rules?
4. Rationale
1. Why is GEP designed this way?
2. Why do struct member indices always use i32?
3. What's an uglygep?
5. Summary
Introduction
This document seeks to dispel the mystery and confusion surrounding LLVM's GetElementPtr (GEP)
instruction. Questions about the wily GEP instruction are probably the most frequently occurring questions
once a developer gets down to coding with LLVM. Here we lay out the sources of confusion and show that
the GEP instruction is really quite simple.
Address Computation
When people are first confronted with the GEP instruction, they tend to relate it to known concepts from other
programming paradigms, most notably C array indexing and field selection. GEP closely resembles C array
indexing and field selection, but it is a little different, and this leads to the following questions.
The confusion with the first index usually arises from thinking about the GetElementPtr instruction as if it was
a C index operator. They aren't the same. For example, when we write, in "C":
AType *Foo;
...
X = &Foo->F;
it is natural to think that there is only one index, the selection of the field F. However, in this example, Foo is
a pointer. That pointer must be indexed explicitly in LLVM. C, on the other hand, indexes through it
transparently. To arrive at the same address location as the C code, you would provide the GEP instruction
with two index operands. The first operand indexes through the pointer; the second operand indexes the field
F of the structure, just as if you wrote:
X = &Foo[0].F;
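In LLVM IR (the typed-pointer syntax of this era), and assuming for illustration that F is the first field of %AType, the two-index GEP would read:

```llvm
%X = getelementptr %AType* %Foo, i64 0, i32 0
```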
Why is it okay to index through the first pointer, but subsequent pointers won't be
dereferenced?
The answer is simply because memory does not have to be accessed to perform the computation. The first
operand to the GEP instruction must be a value of a pointer type. The value of the pointer is provided directly
to the GEP instruction as an operand without any need for accessing memory. It must, therefore, be indexed
and requires an index operand. Consider this example:
struct munger_struct {
int f1;
int f2;
};
void munge(struct munger_struct *P) {
P[0].f1 = P[1].f1 + P[2].f2;
}
...
munger_struct Array[3];
...
munge(Array);
In this "C" example, the front end compiler (llvm-gcc) will generate three GEP instructions for the three
indices through "P" in the assignment statement. The function argument P will be the first operand of each of
these GEP instructions. The second operand indexes through that pointer. The third operand will be the field
offset into the struct munger_struct type, for either the f1 or f2 field. So, in LLVM assembly the
munge function looks like:
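The IR listing was lost in conversion; a sketch of what llvm-gcc of this era would emit (typed-pointer syntax; value names are illustrative):

```llvm
%struct.munger_struct = type { i32, i32 }

define void @munge(%struct.munger_struct* %P) {
entry:
  %tmp = getelementptr %struct.munger_struct* %P, i32 1, i32 0   ; &P[1].f1
  %tmp1 = load i32* %tmp
  %tmp2 = getelementptr %struct.munger_struct* %P, i32 2, i32 1  ; &P[2].f2
  %tmp3 = load i32* %tmp2
  %tmp4 = add i32 %tmp3, %tmp1
  %tmp5 = getelementptr %struct.munger_struct* %P, i32 0, i32 0  ; &P[0].f1
  store i32 %tmp4, i32* %tmp5
  ret void
}
```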
In each case the first operand is the pointer through which the GEP instruction starts. The same is true whether
the first operand is an argument, allocated memory, or a global variable.
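The global-variable example discussed next was dropped in conversion; a sketch consistent with the surrounding text (an i32 global indexed three times; names assumed):

```llvm
%MyVar = uninitialized global i32
...
%idx1 = getelementptr i32* %MyVar, i64 0
%idx2 = getelementptr i32* %MyVar, i64 1
%idx3 = getelementptr i32* %MyVar, i64 2
```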
These GEP instructions are simply making address computations from the base address of MyVar. They
compute, as follows (using C syntax):
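The C-syntax listing was lost in conversion; it plausibly read idx1 = (char*)&MyVar + 0, idx2 = (char*)&MyVar + 4, idx3 = (char*)&MyVar + 8. As a runnable sketch of the same arithmetic (the helper name and the 4-byte int assumption are ours, not part of any LLVM API):

```c
#include <stddef.h>

/* Byte offset of "getelementptr i32* %MyVar, i64 n": n array elements,
   each sizeof(int) bytes (4 bytes for i32 on common targets). */
size_t gep_i32_offset(long n) {
    return (size_t)n * sizeof(int);
}
```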
Since the type i32 is known to be four bytes long, the indices 0, 1 and 2 translate into memory offsets of 0, 4,
and 8, respectively. No memory is accessed to make these computations because the address of %MyVar is
passed directly to the GEP instructions.
The obtuse part of this example is in the cases of %idx2 and %idx3. They result in the computation of
addresses that point to memory past the end of the %MyVar global, which is only one i32 long, not three
i32s long. While this is legal in LLVM, it is inadvisable because any load or store with the pointer that
results from these GEP instructions would produce undefined results.
This question arises most often when the GEP instruction is applied to a global variable, which is always of
pointer type. For example, consider this:
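The example did not survive conversion; a sketch matching the facts enumerated below (a global of type { float*, i32 } indexed with i64 0 and i32 1):

```llvm
%MyStruct = uninitialized global { float*, i32 }
...
%idx = getelementptr { float*, i32 }* %MyStruct, i64 0, i32 1
```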
The GEP above yields an i32* by indexing the i32 typed field of the structure %MyStruct. When people
first look at it, they wonder why the i64 0 index is needed. However, a closer inspection of how globals and
GEPs work reveals the need. Becoming aware of the following facts will dispel the confusion:
1. The type of %MyStruct is not { float*, i32 } but rather { float*, i32 }*. That is,
%MyStruct is a pointer to a structure containing a pointer to a float and an i32.
2. Point #1 is evidenced by noticing the type of the first operand of the GEP instruction (%MyStruct)
which is { float*, i32 }*.
3. The first index, i64 0 is required to step over the global variable %MyStruct. Since the first
argument to the GEP instruction must always be a value of pointer type, the first index steps through
that pointer. A value of 0 means 0 elements offset from that pointer.
4. The second index, i32 1 selects the second field of the structure (the i32).
The GetElementPtr instruction dereferences nothing. That is, it doesn't access memory in any way. That's
what the Load and Store instructions are for. GEP is only involved in the computation of addresses. For
example, consider this:
%MyVar = uninitialized global { [40 x i32 ]* }
...
%idx = getelementptr { [40 x i32]* }* %MyVar, i64 0, i32 0, i64 0, i64 17
In this example, we have a global variable, %MyVar that is a pointer to a structure containing a pointer to an
array of 40 ints. The GEP instruction seems to be accessing the 18th integer of the structure's array of ints.
However, this is actually an illegal GEP instruction. It won't compile. The reason is that the pointer in the
structure must be dereferenced in order to index into the array of 40 ints. Since the GEP instruction never
accesses memory, it is illegal.
In order to access the 18th integer in the array, you would need to do the following:
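The listing was dropped in conversion; a sketch consistent with the text (first GEP to the pointer field, a load, then a second GEP into the loaded array; names assumed):

```llvm
%idx = getelementptr { [40 x i32]* }* %MyVar, i64 0, i32 0
%arr = load [40 x i32]** %idx
%idx2 = getelementptr [40 x i32]* %arr, i64 0, i64 17
```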
In this case, we have to load the pointer in the structure with a load instruction before we can index into the
array. If the example was changed to:
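The changed example was dropped in conversion; a sketch consistent with the text (the structure now contains the array directly, so one GEP suffices):

```llvm
%MyVar = uninitialized global { [40 x i32] }
...
%idx = getelementptr { [40 x i32] }* %MyVar, i64 0, i32 0, i64 17
```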
then everything works fine. In this case, the structure does not contain a pointer and the GEP instruction can
index through the global variable, into the first field of the structure and access the 18th i32 in the array
there.
If you look at the first indices in these GEP instructions you find that they are different (0 and 1), therefore the
address computation diverges with that index. Consider this example:
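The example was dropped in conversion; a sketch consistent with the values discussed below (a struct-of-array global, with GEP x,0,0,1 versus GEP x,1):

```llvm
%MyVar = global { [10 x i32] }
%idx1 = getelementptr { [10 x i32] }* %MyVar, i64 0, i32 0, i64 1
%idx2 = getelementptr { [10 x i32] }* %MyVar, i64 1
```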
In this example, idx1 computes the address of the second integer in the array that is in the structure in
%MyVar, that is MyVar+4. The type of idx1 is i32*. However, idx2 computes the address of the next
structure after %MyVar. The type of idx2 is { [10 x i32] }* and its value is equivalent to MyVar +
40 because it indexes past the ten 4-byte integers in MyVar. Obviously, in such a situation, the pointers don't
alias.
These two GEP instructions will compute the same address because indexing through the 0th element does
not change the address. However, it does change the type. Consider this example:
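The example was dropped in conversion; a sketch consistent with the values discussed below (GEP x,1,0,0 versus GEP x,1 on the same struct-of-array global):

```llvm
%MyVar = global { [10 x i32] }
%idx1 = getelementptr { [10 x i32] }* %MyVar, i64 1, i32 0, i64 0
%idx2 = getelementptr { [10 x i32] }* %MyVar, i64 1
```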
In this example, the value of %idx1 is %MyVar+40 and its type is i32*. The value of %idx2 is also
MyVar+40 but its type is { [10 x i32] }*.
With ptrtoint, you have to pick an integer type. One approach is to pick i64; this is safe on everything LLVM
supports (LLVM internally assumes pointers are never wider than 64 bits in many places), and on targets which
don't support 64-bit arithmetic the optimizer will, in most cases, narrow the i64 arithmetic down to the actual
pointer size. However, there are some cases where it doesn't do this. With GEP you can avoid this
problem.
Also, GEP carries additional pointer aliasing rules. It's invalid to take a GEP from one object, address into a
different separately allocated object, and dereference it. IR producers (front-ends) must follow this rule, and
consumers (optimizers, specifically alias analysis) benefit from being able to rely on it. See the Rules section
for more information.
I'm writing a backend for a target which needs custom lowering for GEP. How do I do this?
You don't. The integer computation implied by a GEP is target-independent. Typically what you'll need to do
is make your backend pattern-match expression trees involving ADD, MUL, etc., which are what GEP is
lowered into. This has the advantage of letting your code work correctly in more cases.
GEP does use target-dependent parameters for the size and layout of data types, which targets can customize.
If you require support for addressing units which are not 8 bits, you'll need to fix a lot of code in the backend,
with GEP lowering being only a small piece of the overall picture.
VLA indices can be implemented as linearized indices. For example, an expression like X[a][b][c] must be
effectively lowered into a form like X[a*m+b*n+c], so that it appears to the GEP as a single-dimensional
array reference.
This means if you want to write an analysis which understands array indices and you want to support VLAs,
your code will have to be prepared to reverse-engineer the linearization. One way to solve this problem is to
use the ScalarEvolution library, which always presents VLA and non-VLA indexing in the same manner.
Rules
What happens if an array index is out of bounds?
There are two senses in which an array index can be out of bounds.
First, there's the array type which comes from the (static) type of the first operand to the GEP. Indices greater
than the number of elements in the corresponding static array type are valid. There is no problem with out of
bounds indices in this sense. Indexing into an array only depends on the size of the array element, not the
number of elements.
A common example of how this is used is arrays where the size is not known. It's common to use array types
with zero length to represent these. The fact that the static type says there are zero elements is irrelevant; it's
perfectly valid to compute arbitrary element indices, as the computation only depends on the size of the array
element, not the number of elements. Note that zero-sized arrays are not a special case here.
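For instance (a sketch in the typed-pointer syntax of this era; the names are illustrative):

```llvm
%MyArray = external global [0 x i32]
...
%idx = getelementptr [0 x i32]* %MyArray, i64 0, i64 42   ; valid despite the zero length
```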
This sense is unconnected with the inbounds keyword. The inbounds keyword is designed to describe
low-level pointer arithmetic overflow conditions, rather than high-level array indexing rules.
Analysis passes which wish to understand array indexing should not assume that the static array type bounds
are respected.
The second sense of being out of bounds is computing an address that's beyond the actual underlying allocated
object.
With the inbounds keyword, the result value of the GEP is undefined if the address is outside the actual
underlying allocated object and not the address one-past-the-end.
Without the inbounds keyword, there are no restrictions on computing out-of-bounds addresses. Obviously,
performing a load or a store requires an address of allocated and sufficiently aligned memory. But the GEP
itself is only concerned with computing addresses.
Can I do GEP with a different pointer type than the type of the underlying object?
Yes. There are no restrictions on bitcasting a pointer value to an arbitrary pointer type. The types in a GEP
serve only to define the parameters for the underlying integer computation. They need not correspond with the
actual type of the underlying object.
Furthermore, loads and stores don't have to use the same types as the type of the underlying object. Types in
this context serve only to specify memory size and alignment. Beyond that, they are merely a hint to the
optimizer indicating how the value will likely be used.
The underlying integer computation is sufficiently defined; null has a defined value -- zero -- and you can add
whatever value you want to it.
However, it's invalid to access (load from or store to) an LLVM-aware object with such a pointer. This
includes GlobalVariables, Allocas, and objects pointed to by noalias pointers.
If you really need this functionality, you can do the arithmetic with explicit integer instructions, and use
inttoptr to convert the result to an address. Most of GEP's special aliasing rules do not apply to pointers
computed from ptrtoint, arithmetic, and inttoptr sequences.
Can I compute the distance between two objects, and add that value to one address to compute the
other address?
As with arithmetic on null, you can use GEP to compute an address that way, but you can't use that pointer to
actually access the object if you do, unless the object is managed outside of LLVM.
Also as above, ptrtoint and inttoptr provide an alternative way to do this which does not have this restriction.
It would be possible to add special annotations to the IR, probably using metadata, to describe a different type
system (such as the C type system), and do type-based aliasing on top of that. This is a much bigger
undertaking though.
Otherwise, the result value is the result from evaluating the implied two's complement integer computation.
However, since there's no guarantee of where an object will be allocated in the address space, such values
have limited meaning.
It's not possible to write a checker which could find all rule violations statically. It would be possible to write
a checker which works by instrumenting the code with dynamic checks though. Alternatively, it would be
possible to write a static checker which catches a subset of possible problems. However, no such checker
exists today.
Rationale
Why is GEP designed this way?
The design of GEP has the following goals, in rough unofficial order of priority:
• Support C, C-like languages, and languages which can be conceptually lowered into C (this covers a
lot).
What's an uglygep?
Some LLVM optimizers operate on GEPs by internally lowering them into more primitive integer
expressions, which allows them to be combined with other integer expressions and/or split into multiple
separate integer expressions. If they've made non-trivial changes, translating back into LLVM IR can involve
reverse-engineering the structure of the addressing in order to fit it into the static type of the original first
operand. It isn't always possible to fully reconstruct this structure; sometimes the underlying addressing
doesn't correspond with the static type at all. In such cases the optimizer instead will emit a GEP with the base
pointer casted to a simple address-unit pointer, using the name "uglygep". This isn't pretty, but it's just as
valid, and it's sufficient to preserve the pointer aliasing guarantees that GEP provides.
Summary
In summary, here are some things to always remember about the GetElementPtr instruction:
1. The GEP instruction never accesses memory; it only provides pointer computations.
2. The first operand to the GEP instruction is always a pointer and it must be indexed.
3. There are no superfluous indices for the GEP instruction.
4. Trailing zero indices are superfluous for pointer aliasing, but not for the types of the pointers.
5. Leading zero indices are not superfluous for pointer aliasing nor the types of the pointers.
Getting Started with the LLVM System
• Overview
• Getting Started Quickly (A Summary)
• Requirements
1. Hardware
2. Software
3. Broken versions of GCC and other tools
• Getting Started with LLVM
1. Terminology and Notation
2. Setting Up Your Environment
3. Unpacking the LLVM Archives
4. Checkout LLVM from Subversion
5. Install the GCC Front End
6. Local LLVM Configuration
7. Compiling the LLVM Suite Source Code
8. Cross-Compiling LLVM
9. The Location of LLVM Object Files
10. Optional Configuration Items
• Program layout
1. llvm/examples
2. llvm/include
3. llvm/lib
4. llvm/projects
5. llvm/runtime
6. llvm/test
7. llvm-test
8. llvm/tools
9. llvm/utils
10. llvm/win32
• An Example Using the LLVM Tool Chain
1. Example with llvm-gcc4
• Common Problems
• Links
Written by: John Criswell, Chris Lattner, Misha Brukman, Vikram Adve, and Guochun Shi.
Overview
Welcome to LLVM! In order to get started, you first need to know some basic information.
First, LLVM comes in two pieces. The first piece is the LLVM suite. This contains all of the tools, libraries,
and header files needed to use the low level virtual machine. It contains an assembler, disassembler, bitcode
analyzer and bitcode optimizer. It also contains a test suite that can be used to test the LLVM tools and the
GCC front end.
The second piece is the GCC front end. This component provides a version of GCC that compiles C and C++
code into LLVM bitcode. Currently, the GCC front end uses the GCC parser to convert code to LLVM. Once
compiled into LLVM bitcode, a program can be manipulated with the LLVM tools from the LLVM suite.
There is a third, optional piece called llvm-test. It is a suite of programs with a testing harness that can be used
to further test LLVM's functionality and performance.
Getting Started Quickly (A Summary)
Here's the short story for getting up and running quickly with LLVM:
Specify for directory the full pathname of where you want the LLVM tools and
libraries to be installed (default /usr/local).
◊ --with-llvmgccdir=directory
Optionally, specify for directory the full pathname of the C/C++ front end installation
to use with this LLVM configuration. If not specified, the PATH will be searched.
This is only needed if you want to run the testsuite or do some special kinds of
LLVM builds.
◊ --enable-spec2000=directory
Enable the SPEC2000 benchmarks for testing. The SPEC2000 benchmarks should be
available in directory.
8. Build the LLVM Suite:
1. gmake -k |& tee gnumake.out # this is csh or tcsh syntax
2. If you get an "internal compiler error (ICE)" or test failures, see below.
Consult the Getting Started with LLVM section for detailed information on configuring and compiling
LLVM. See Setting Up Your Environment for tips that simplify working with the GCC front end and LLVM
tools. Go to Program Layout to learn about the layout of the source code tree.
Requirements
Before you begin to use the LLVM system, review the requirements given below. This may save you some
trouble by knowing ahead of time what hardware and software you will need.
Hardware
LLVM is known to work on the following platforms:
OS              Arch               Compilers
AuroraUX        x86 (1)            GCC
Linux           x86 (1)            GCC
Linux           amd64              GCC
Solaris         V9 (Ultrasparc)    GCC
FreeBSD         x86 (1)            GCC
MacOS X (2)     PowerPC            GCC
MacOS X (2,9)   x86                GCC
Cygwin/Win32    x86 (1,8,11)       GCC 3.4.X, binutils 2.20
MinGW/Win32     x86 (1,6,8,10)     GCC 3.4.X, binutils 2.20
LLVM has partial support for the following platforms:
OS              Arch               Compilers
Windows         x86 (1)            Visual Studio 2005 SP1 or higher (4,5)
AIX (3,4)       PowerPC            GCC
Linux (3,5)     PowerPC            GCC
Linux (7)       Alpha              GCC
Linux (7)       Itanium (IA-64)    GCC
HP-UX (7)       Itanium (IA-64)    HP aCC
Parenthesized numbers are footnote markers; see the Notes below.
Notes:
Note that you will need about 1-3 GB of space for a full LLVM build in Debug mode, depending on the
system (it is so large because of all the debugging information and the fact that the libraries are statically
linked into multiple tools). If you do not need many of the tools and you are space-conscious, you can pass
ONLY_TOOLS="tools you need" to make. The Release build requires considerably less space.
The LLVM suite may compile on other platforms, but it is not guaranteed to do so. If compilation is
successful, the LLVM utilities should be able to assemble, disassemble, analyze, and optimize LLVM bitcode.
Code generation should work as well, although the generated native code may not work on your platform.
The GCC front end is not very portable at the moment. If you want to get it to work on another platform, you
can download a copy of the source and try to compile it on your platform.
Software
Compiling LLVM requires that you have several software packages installed. The table below lists those
required packages. The Package column is the usual name for the software package that LLVM depends on.
The Version column provides "known to work" versions of the package. The Notes column describes how
LLVM uses the package and provides other details.
1. Only the C and C++ languages are needed so there's no need to build the other languages for LLVM's
purposes. See below for specific version info.
2. You only need Subversion if you intend to build from the latest LLVM sources. If you're working
from a release distribution, you don't need Subversion.
3. Only needed if you want to run the automated test suite in the llvm/test directory.
4. If you want to make changes to the configure scripts, you will need GNU autoconf (2.59), and
consequently, GNU M4 (version 1.4 or higher). You will also need automake (1.9.2). We only use
aclocal from that package.
Additionally, your compilation host is expected to have the usual plethora of Unix utilities. Specifically:
• cat - output concatenation utility
• cp - copy files
• date - print the current date/time
• echo - print to standard output
• egrep - extended regular expression search utility
• find - find files/dirs in a file system
• grep - regular expression search utility
• gzip* - gzip command for distribution generation
• gunzip* - gunzip command for distribution checking
• install - install directories/files
• mkdir - create a directory
• mv - move (rename) files
• ranlib - symbol table builder for archive libraries
• rm - remove (delete) files and directories
• sed - stream editor for transforming output
• sh - Bourne shell for make build scripts
• tar - tape archive for distribution generation
• test - test things in file system
• unzip* - unzip command for distribution checking
• zip* - zip command for distribution generation
GCC versions prior to 3.0: GCC 2.96.x and before had several problems in the STL that effectively prevented
them from compiling LLVM.
GCC 3.2.2 and 3.2.3: These versions of GCC fail to compile LLVM with a bogus template error. This was
fixed in later GCCs.
GCC 3.3.2: This version of GCC suffers from a serious bug which causes it to crash in the
"convert_from_eh_region_ranges_1" GCC function.
Cygwin GCC 3.3.3: The version of GCC 3.3.3 commonly shipped with Cygwin does not work. Please
upgrade to a newer version if possible.
SuSE GCC 3.3.3: The version of GCC 3.3.3 shipped with SuSE 9.1 (and possibly others) does not compile
LLVM correctly (it appears that exception handling is broken in some cases). Please download the FSF 3.3.3
or upgrade to a newer version of GCC.
GCC 3.4.0 on linux/x86 (32-bit): GCC miscompiles portions of the code generator, causing an infinite loop
in the llvm-gcc build when built with optimizations enabled (i.e. a release build).
GCC 3.4.2 on linux/x86 (32-bit): GCC miscompiles portions of the code generator at -O3, as with 3.4.0.
However, gcc 3.4.2 (unlike 3.4.0) correctly compiles LLVM at -O2. A workaround is to build release LLVM
GCC 3.4.4 (CodeSourcery ARM 2005q3-2): this compiler miscompiles LLVM when building with
optimizations enabled. It appears to work with "make ENABLE_OPTIMIZED=1
OPTIMIZE_OPTION=-O1", or with a debug build.
IA-64 GCC 4.0.0: The IA-64 version of GCC 4.0.0 is known to miscompile LLVM.
Apple Xcode 2.3: GCC crashes when compiling LLVM at -O3 (which is the default with
ENABLE_OPTIMIZED=1). To work around this, build with "ENABLE_OPTIMIZED=1
OPTIMIZE_OPTION=-O2".
GCC 4.1.1: GCC fails to build LLVM with template concept check errors compiling some files. At the time
of this writing, GCC mainline (4.2) did not share the problem.
GCC 4.1.1 on X86-64/amd64: GCC miscompiles portions of LLVM when compiling llvm itself into 64-bit
code. LLVM will appear to mostly work but will be buggy, e.g. failing portions of its testsuite.
GCC 4.1.2 on OpenSUSE: Seg faults during the libstdc++ build; on x86_64 platforms, compiling md5.c
produces a mangled constant.
GCC 4.1.2 (20061115 (prerelease) (Debian 4.1.1-21)) on Debian: Appears to miscompile parts of LLVM
2.4. One symptom is ValueSymbolTable complaining about symbols remaining in the table on destruction.
GCC 4.1.2 20071124 (Red Hat 4.1.2-42): Suffers from the same symptoms as the previous one. It appears to
work with ENABLE_OPTIMIZED=0 (the default).
Cygwin GCC 4.3.2 20080827 (beta) 2: Users reported various problems related with link errors when using
this GCC version.
Debian GCC 4.3.2 on X86: Crashes building some files in LLVM 2.6.
GCC 4.3.3 (Debian 4.3.3-10) on ARM: Miscompiles parts of LLVM 2.6 when optimizations are turned on.
The symptom is an infinite loop in FoldingSetImpl::RemoveNode while running the code generator.
GNU ld 2.16.X: Some 2.16.X versions of the ld linker will produce very long warning messages complaining
that some ".gnu.linkonce.t.*" symbol was defined in a discarded section. You can safely ignore these
messages as they are erroneous and the linkage is correct. These messages disappear using ld 2.17.
GNU binutils 2.17: Binutils 2.17 contains a bug which causes huge link times (minutes instead of seconds)
when building LLVM. We recommend upgrading to a newer version (2.17.50.0.4 or later).
GNU Binutils 2.19.1 Gold: This version of Gold contained a bug which causes intermittent failures when
building LLVM with position independent code. The symptom is an error about cyclic dependencies. We
recommend upgrading to a newer version of Gold.
The later sections of this guide describe the general layout of the LLVM source tree, a simple example
using the LLVM tool chain, and links to find more information about LLVM or to get help via e-mail.
SRC_ROOT
This is the top level directory of the LLVM source tree.
OBJ_ROOT
This is the top level directory of the LLVM object tree (i.e. the tree where object files and compiled
programs will be placed). It can be the same as SRC_ROOT.
LLVMGCCDIR
This is where the LLVM GCC Front End is installed.
LLVM_LIB_SEARCH_PATH=/path/to/your/bitcode/libs
[Optional] This environment variable helps LLVM linking tools find the locations of your bitcode
libraries. It is provided only as a convenience since you can specify the paths using the -L options of
the tools and the C/C++ front-end will automatically use the bitcode files installed in its lib
directory.
The files are as follows, with x.y marking the version number:
llvm-x.y.tar.gz
Source release for the LLVM libraries and tools.
llvm-test-x.y.tar.gz
Source release for the LLVM test suite.
llvm-gcc-4.2-x.y.source.tar.gz
Source release of the llvm-gcc-4.2 front end. See README.LLVM in the root directory for build
instructions.
llvm-gcc-4.2-x.y-platform.tar.gz
Binary release of the llvm-gcc-4.2 front end for a specific platform.
If you have access to our Subversion repository, you can get a fresh copy of the entire source code. All you
need to do is check it out from Subversion as follows:
• cd where-you-want-llvm-to-live
• Read-Only: svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
• Read-Write:svn co https://[email protected]/svn/llvm-project/llvm/trunk
llvm
This will create an 'llvm' directory in the current directory and fully populate it with the LLVM source code,
Makefiles, test directories, and local copies of documentation files.
If you want to get a specific release (as opposed to the most recent revision), you can check it out from the
'tags' directory (instead of 'trunk'). The following releases are located in the following subdirectories of
the 'tags' directory:
If you would like to get the LLVM test suite (a separate package as of 1.4), you get it from the Subversion
repository:
% cd llvm/projects
% svn co http://llvm.org/svn/llvm-project/test-suite/trunk llvm-test
If you place it in llvm/projects, it will be automatically configured by the LLVM configure script as
well as automatically updated when you run svn update.
If you would like to get the GCC front end source code, you can also get it and build it yourself. Please follow
these instructions to successfully get and build the LLVM GCC front-end.
To install the GCC front end, do the following (on Windows, use an archival tool like 7-zip that understands
gzipped tars):
1. cd where-you-want-the-front-end-to-live
2. gunzip --stdout llvm-gcc-4.2-version-platform.tar.gz | tar -xvf -
Once the binary is uncompressed, if you're using a *nix-based system, add a symlink for llvm-gcc and
llvm-g++ to some directory in your path. If you're using a Windows-based system, add the bin
subdirectory of your front end installation directory to your PATH environment variable. For example, if you
uncompressed the binary to c:\llvm-gcc, add c:\llvm-gcc\bin to your PATH.
If you now want to build LLVM from source, when you configure LLVM, it will automatically detect
llvm-gcc's presence (if it is in your path) enabling its use in llvm-test. Note that you can always build or
install llvm-gcc at any point after building the main LLVM repository: just reconfigure llvm and llvm-test
will pick it up.
As a convenience for Windows users, the front end binaries for MinGW/x86 include versions of the required
w32api and mingw-runtime binaries. The last remaining step for Windows users is to simply uncompress the
binary binutils package from MinGW into your front end installation directory. While the front end
installation steps are not quite the same as a typical manual MinGW installation, they should be familiar
enough to anyone who has previously installed MinGW on a Windows system.
The binary versions of the LLVM GCC front end may not suit all of your needs. For example, the binary
distribution may include an old version of a system header file, not "fix" a header file that needs to be fixed
for GCC, or it may be linked with libraries not available on your system. In cases like these, you may want to
try building the GCC front end from source. Thankfully, this is much easier now than it was in the past.
We also do not currently support updating of the GCC front end by manually overlaying newer versions of the
w32api and mingw-runtime binary packages that may become available from MinGW. At this time, it's best
to think of the MinGW LLVM GCC front end binary as a self-contained convenience package that requires
Windows users to simply download and uncompress the GNU Binutils binary package from the MinGW
project.
Regardless of your platform, if you discover that installing the LLVM GCC front end binaries is not as easy
as previously described, or you would like to suggest improvements, please let us know how you would like
to see things improved by dropping us a note on our mailing list.
The following environment variables are used by the configure script to configure the build system:
Variable  Purpose
CC        Tells configure which C compiler to use. By default, configure will look for the first
          GCC C compiler in PATH. Use this variable to override configure's default behavior.
CXX       Tells configure which C++ compiler to use. By default, configure will look for the first
          GCC C++ compiler in PATH. Use this variable to override configure's default behavior.
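For example, to force a particular compiler pair (the compiler names below are illustrative):

% CC=gcc-4.2 CXX=g++-4.2 SRC_ROOT/configure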
The following options can be used to set or enable LLVM specific options:
--with-llvmgccdir
Path to the LLVM C/C++ FrontEnd to be used with this LLVM configuration. The value of this
option should specify the full pathname of the C/C++ Front End to be used. If this option is not
provided, the PATH will be searched for a program named llvm-gcc and the C/C++ FrontEnd install
directory will be inferred from the path found. If the option is not given, and no llvm-gcc can be
found in the path then a warning will be produced by configure indicating this situation. LLVM
may still be built with the tools-only target but attempting to build the runtime libraries will fail
as these libraries require llvm-gcc and llvm-g++. See Install the GCC Front End for details on
installing the C/C++ Front End. See Bootstrapping the LLVM C/C++ Front-End for details on
building the C/C++ Front End.
--with-tclinclude
Path to the tcl include directory under which tclsh can be found. Use this if you have multiple tcl
installations on your machine and you want to use a specific one (8.x) for LLVM. LLVM only uses
tcl for running the dejagnu based test suite in llvm/test. If you don't specify this option, the
LLVM configure script will search for the tcl 8.4 and 8.3 releases.
--enable-optimized
Enables optimized compilation (debugging symbols are removed and GCC optimization flags are
enabled). Note that this is the default setting if you are using the LLVM distribution. The default
behavior of a Subversion checkout is to use an unoptimized build (also known as a debug build).
--enable-debug-runtime
Enables debug symbols in the runtime libraries. The default is to strip debug symbols from the
runtime libraries.
--enable-jit
Compile the Just In Time (JIT) compiler functionality. This is not available on all platforms. The
default is dependent on platform, so it is best to explicitly enable it if you want it.
--enable-targets=target-option
Controls which targets will be built and linked into llc. The default value for target-option is
"all", which builds and links all available targets. The value "host-only" can be specified to build only
a native compiler (no cross-compiler targets available). The "native" target is selected as the target of
the build host. You can also specify a comma separated list of target names that you want available in
llc. The target names use all lower case. The current set of targets is:
alpha, ia64, powerpc, skeleton, sparc, x86.
--enable-doxygen
Look for the doxygen program and enable construction of doxygen based documentation from the
source code. This is disabled by default because generating the documentation can take a long time
and produces hundreds of megabytes of output.
--with-udis86
LLVM can use an external disassembler library for various purposes (currently it is used only for
examining code produced by the JIT). This option enables use of the udis86 x86 (both 32-bit and
64-bit) disassembler library.
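Putting several of these together, a configure invocation might look like the following (all paths and option values are illustrative, not requirements):

% cd OBJ_ROOT
% SRC_ROOT/configure --prefix=/usr/local/llvm --enable-optimized \
    --enable-targets=x86,powerpc --with-llvmgccdir=/opt/llvm-gcc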
1. Change directory into the object root:
% cd OBJ_ROOT
2. Run the configure script located in the LLVM source tree:
% SRC_ROOT/configure
Debug Builds
These builds are the default when one is using a Subversion checkout and types gmake (unless the
--enable-optimized option was used during configuration). The build system will compile the
tools and libraries with debugging information. To get a Debug Build using the LLVM distribution
the --disable-optimized option must be passed to configure.
Profile Builds
These builds are for use with profiling. They compile profiling information into the code for use with
programs like gprof. Profile builds must be started by specifying ENABLE_PROFILING=1 on the
gmake command line.
Once you have LLVM configured, you can build it by entering the OBJ_ROOT directory and issuing the
following command:
% gmake
If the build fails, please check here to see if you are using a version of GCC that is known not to compile
LLVM.
If you have multiple processors in your machine, you may wish to use some of the parallel build options
provided by GNU Make. For example, you could use the command:
% gmake -j2
There are several special targets which are useful when working with the LLVM source code:
gmake clean
Removes all files generated by the build. This includes object files, generated C/C++ files, libraries,
and executables.
gmake dist-clean
Removes everything that gmake clean does, but also removes files generated by configure. It
attempts to return the source tree to the original state in which it was shipped.
gmake install
Installs LLVM header files, libraries, tools, and documentation in a hierarchy under $PREFIX,
specified with ./configure --prefix=[dir], which defaults to /usr/local.
Please see the Makefile Guide for further details on these make targets and descriptions of other targets
available.
It is also possible to override default values from configure by declaring variables on the command line.
The following are some examples:
gmake ENABLE_OPTIMIZED=1
Perform a Release (Optimized) build.
gmake ENABLE_OPTIMIZED=0
Perform a Debug build.
gmake ENABLE_PROFILING=1
Perform a Profiling build.
gmake VERBOSE=1
Print what gmake is doing on standard output.
gmake TOOL_VERBOSE=1
Ask each tool invoked by the makefiles to print out what it is doing on the standard output. This also
implies VERBOSE=1.
Every directory in the LLVM object tree includes a Makefile to build it and any subdirectories that it
contains. Entering any directory inside the LLVM object tree and typing gmake should rebuild anything in or
below that directory that is out of date.
Cross-Compiling LLVM
It is possible to cross-compile LLVM itself. That is, you can create LLVM executables and libraries to be
hosted on a platform different from the platform where they are built (a Canadian Cross build). To configure
a cross-compile, supply the configure script with --build and --host options that are different. The
values of these options must be legal target triples that your GCC compiler supports.
The result of such a build is executables that are not runnable on the build host (the --build option) but
can be executed on the host platform (the --host option).
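For example, to configure on an x86-64 Linux machine for an ARM Linux host (the triples below are illustrative; use triples your GCC toolchain actually supports):

% SRC_ROOT/configure --build=x86_64-unknown-linux-gnu \
    --host=arm-unknown-linux-gnueabi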
% cd OBJ_ROOT
• Run the configure script found in the LLVM source directory:
% SRC_ROOT/configure
The LLVM build will place files underneath OBJ_ROOT in directories named after the build type:
Debug Builds
Tools
OBJ_ROOT/Debug/bin
Libraries
OBJ_ROOT/Debug/lib
Release Builds
Tools
OBJ_ROOT/Release/bin
Libraries
OBJ_ROOT/Release/lib
Profile Builds
Tools
OBJ_ROOT/Profile/bin
Libraries
OBJ_ROOT/Profile/lib
This allows you to execute LLVM bitcode files directly. On Debian, you can also use this command instead of
the 'echo' command above:
$ sudo update-binfmts --install llvm /path/to/lli --magic 'BC'
Program Layout
One useful source of information about the LLVM source base is the LLVM doxygen documentation
available at http://llvm.org/doxygen/. The following is a brief introduction to code layout:
llvm/examples
This directory contains some simple examples of how to use the LLVM IR and JIT.
llvm/include
This directory contains public header files exported from the LLVM library. The three main subdirectories of
this directory are:
llvm/include/llvm
This directory contains all of the LLVM specific header files. This directory also has subdirectories
for different portions of LLVM: Analysis, CodeGen, Target, Transforms, etc...
llvm/include/llvm/Support
This directory contains generic support libraries that are provided with LLVM but not necessarily
specific to LLVM. For example, some C++ STL utilities and a Command Line option processing
library store their header files here.
llvm/include/llvm/Config
This directory contains header files configured by the configure script. They wrap "standard"
UNIX and C header files. Source code can include these header files which automatically take care of
the conditional #includes that the configure script generates.
llvm/lib
This directory contains most of the source files of the LLVM system. In LLVM, almost all code exists in
libraries, making it very easy to share code among the different tools.
llvm/lib/VMCore/
This directory holds the core LLVM source files that implement core classes like Instruction and
BasicBlock.
llvm/lib/AsmParser/
This directory holds the source code for the LLVM assembly language parser library.
llvm/lib/BitCode/
This directory holds code for reading and writing LLVM bitcode.
llvm/lib/Analysis/
This directory contains a variety of different program analyses, such as Dominator Information, Call
Graphs, Induction Variables, Interval Identification, Natural Loop Identification, etc.
llvm/lib/Transforms/
This directory contains the source code for the LLVM to LLVM program transformations, such as
Aggressive Dead Code Elimination, Sparse Conditional Constant Propagation, Inlining, Loop
Invariant Code Motion, Dead Global Elimination, and many others.
llvm/lib/Target/
This directory contains files that describe various target architectures for code generation. For
example, the llvm/lib/Target/X86 directory holds the X86 machine description while
llvm/lib/Target/CBackend implements the LLVM-to-C converter.
llvm/lib/CodeGen/
This directory contains the major parts of the code generator: Instruction Selector, Instruction
Scheduling, and Register Allocation.
llvm/lib/Debugger/
This directory contains the source level debugger library that makes it possible to instrument LLVM
programs so that a debugger could identify source code locations at which the program is executing.
llvm/lib/ExecutionEngine/
This directory contains libraries for executing LLVM bitcode directly at runtime in both interpreted
and JIT compiled fashions.
llvm/lib/Support/
This directory contains the source code that corresponds to the header files located in
llvm/include/llvm/Support/.
llvm/lib/System/
This directory contains the operating system abstraction layer that shields LLVM from
platform-specific coding.
llvm/projects
This directory contains projects that are not strictly part of LLVM but are shipped with LLVM. This is also
the directory where you should create your own LLVM-based projects. See llvm/projects/sample for
an example of how to set up your own project.
llvm/runtime
This directory contains libraries which are compiled into LLVM bitcode and used when linking programs
with the GCC front end. Most of these libraries are skeleton versions of real libraries; for example, libc is a
stripped down version of glibc.
Unlike the rest of the LLVM suite, this directory needs the LLVM GCC front end to compile.
llvm/test
This directory contains feature and regression tests and other basic sanity checks on the LLVM infrastructure.
These are intended to run quickly and cover a lot of territory without being exhaustive.
test-suite
This is not a directory in the normal llvm module; it is a separate Subversion module that must be checked out
(usually to projects/test-suite). This module contains a comprehensive correctness, performance,
and benchmarking test suite for LLVM. It is a separate Subversion module because not every LLVM user is
interested in downloading or building such a comprehensive test suite. For further details on this test suite,
please see the Testing Guide document.
llvm/tools
The tools directory contains the executables built out of the libraries above, which form the main part of the
user interface. You can always get help for a tool by typing tool_name -help. The following is a brief
introduction to the most important tools. More detailed information is in the Command Guide.
bugpoint
bugpoint is used to debug optimization passes or code generation backends by narrowing down the
given test case to the minimum number of passes and/or instructions that still cause a problem,
whether it is a crash or miscompilation. See HowToSubmitABug.html for more information on using
bugpoint.
llvmc
The LLVM Compiler Driver. This program can be configured to utilize both LLVM and non-LLVM
compilation tools to enable pre-processing, translation, optimization, assembly, and linking of
programs all from one command line. llvmc also takes care of processing the dependent libraries
found in bitcode. This reduces the need to get the traditional -l<name> options right on the
command line. Please note that this tool, while functional, is still experimental and not
feature-complete.
llvm-ar
The archiver produces an archive containing the given LLVM bitcode files, optionally with an index
for faster lookup.
llvm-as
The assembler transforms the human readable LLVM assembly to LLVM bitcode.
llvm-dis
The disassembler transforms the LLVM bitcode to human readable LLVM assembly.
llvm-ld
llvm-ld is a general purpose and extensible linker for LLVM. This is the linker invoked by llvmc.
It performs standard link time optimizations and allows optimization modules to be loaded and run
so that language specific optimizations can be applied at link time.
llvm-link
llvm-link, not surprisingly, links multiple LLVM modules into a single program.
lli
lli is the LLVM interpreter, which can directly execute LLVM bitcode (although very slowly...).
For architectures that support it (currently x86, Sparc, and PowerPC), by default, lli will function as
a Just-In-Time compiler (if the functionality was compiled in), and will execute the code much faster
than the interpreter.
llc
llc is the LLVM backend compiler, which translates LLVM bitcode to a native code assembly file
or to C code (with the -march=c option).
llvm-gcc
llvm-gcc is a GCC-based C frontend that has been retargeted to use LLVM as its backend instead
of GCC's RTL backend. It can also emit LLVM bitcode or assembly (with the -emit-llvm option)
instead of the usual machine code output. It works just like any other GCC compiler, accepting the
typical -c, -S, -E, and -o options. Additionally, the source code for
llvm-gcc is available as a separate Subversion module.
opt
opt reads LLVM bitcode, applies a series of LLVM to LLVM transformations (which are specified
on the command line), and then outputs the resultant bitcode. The 'opt -help' command is a good
way to get a list of the program transformations available in LLVM.
opt can also be used to run a specific analysis on an input LLVM bitcode file and print out the
results. It is primarily useful for debugging analyses, or familiarizing yourself with what an analysis
does.
llvm/utils
This directory contains utilities for working with LLVM source code, and some of the utilities are actually
required as part of the build process because they are code generators for parts of the LLVM infrastructure.
codegen-diff
codegen-diff is a script that finds differences between code that LLC generates and code that LLI
generates. This is a useful tool if you are debugging one of them, assuming that the other generates
correct output. For the full user manual, run `perldoc codegen-diff`.
emacs/
The emacs directory contains syntax-highlighting files which will work with Emacs and XEmacs
editors, providing syntax highlighting support for LLVM assembly files and TableGen description
files. For information on how to use the syntax files, consult the README file in that directory.
getsrcs.sh
The getsrcs.sh script finds and outputs all non-generated source files, which is useful if one
wishes to do a lot of development across directories and does not want to individually find each file.
One way to use it is to run, for example: xemacs `utils/getsrcs.sh` from the top of
your LLVM source tree.
llvmgrep
This little tool performs an "egrep -H -n" on each source file in LLVM and passes to it a regular
expression provided on llvmgrep's command line. This is a very efficient way of searching the
source base for a particular regular expression.
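What llvmgrep does can be sketched as follows (the real script's option handling differs; the scratch directory and file below are purely illustrative):

```shell
# Sketch: run "egrep -H -n" with a pattern over each source file,
# printing filename, line number, and matching line.
mkdir -p /tmp/llvmgrep-demo
printf 'int foo() { return 42; }\n' > /tmp/llvmgrep-demo/a.c
find /tmp/llvmgrep-demo -name '*.c' -print0 | xargs -0 egrep -H -n 'foo'
# -> /tmp/llvmgrep-demo/a.c:1:int foo() { return 42; }
```

The advantage over a bare recursive grep is that the file list is restricted to real source files, skipping generated files and build output.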
makellvm
The makellvm script compiles all files in the current directory and then compiles and links the tool
that is the first argument. For example, assuming you are in the directory
llvm/lib/Target/Sparc, if makellvm is in your path, simply running makellvm llc will
make a build of the current directory, switch to directory llvm/tools/llc and build it, causing a
re-linking of LLC.
TableGen/
The TableGen directory contains the tool used to generate register descriptions, instruction set
descriptions, and even assemblers from common TableGen description files.
vim/
The vim directory contains syntax-highlighting files which will work with the VIM editor, providing
syntax highlighting support for LLVM assembly files and TableGen description files. For information
on how to use the syntax files, consult the README file in that directory.
llvm/win32
This directory contains build scripts and project files for use with Visual C++. This allows developers on
Windows to build LLVM without the need for Cygwin. The contents of this directory should be considered
experimental at this time.
Note: The gcc4 frontend's invocation is considerably different from the previous gcc3 frontend. In particular,
the gcc4 frontend does not create bitcode by default: gcc4 produces native code. As the example below
illustrates, the '--emit-llvm' flag is needed to produce LLVM bitcode output. For makefiles and configure
scripts, the CFLAGS variable needs '--emit-llvm' to produce bitcode output.
#include <stdio.h>
int main() {
printf("hello world\n");
return 0;
}
2. Next, compile the C file into a native executable:
Note that llvm-gcc works just like GCC by default. The standard -S and -c arguments work as usual
(producing a native .s or .o file, respectively).
3. Next, compile the C file into a LLVM bitcode file:
The -emit-llvm option can be used with the -S or -c options to emit an LLVM ".ll" or ".bc" file
(respectively) for the code. This allows you to use the standard LLVM tools on the bitcode file.
% ./hello
and
% lli hello.bc
The second example shows how to invoke the LLVM JIT, lli.
5. Use the llvm-dis utility to take a look at the LLVM assembly code:
% ./hello.native
Note that using llvm-gcc to compile directly to native code (i.e. when the -emit-llvm option is not
present) does steps 6/7/8 for you.
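Pieced together, a typical session with the gcc4 front end looks roughly like the following (option spellings are those of contemporary llvm-gcc releases and may vary):

% llvm-gcc hello.c -o hello
% llvm-gcc -O3 -emit-llvm hello.c -c -o hello.bc
% ./hello
% lli hello.bc
% llvm-dis < hello.bc | less
% llc hello.bc -o hello.s
% gcc hello.s -o hello.native
% ./hello.native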
Common Problems
If you are having problems building or using LLVM, or if you have any other general questions about LLVM,
please consult the Frequently Asked Questions page.
Links
This document is just an introduction on how to use LLVM to do some simple things... there are many more
interesting and complicated things that you can do that aren't documented here (but we'll gladly accept a patch
if you want to write something up!). For more information about LLVM, check out:
• LLVM homepage
• LLVM doxygen tree
• Starting a Project that Uses LLVM
Chris Lattner
Reid Spencer
The LLVM Compiler Infrastructure
Last modified: $Date: 2010-04-27 01:53:59 -0500 (Tue, 27 Apr 2010) $
Getting Started with the LLVM System using Microsoft Visual Studio
• Overview
• Getting Started Quickly (A Summary)
• Requirements
1. Hardware
2. Software
• Getting Started with LLVM
1. Terminology and Notation
2. The Location of LLVM Object Files
• An Example Using the LLVM Tool Chain
• Common Problems
• Links
Overview
The Visual Studio port at this time is experimental. It is suitable for use only if you are writing your own
compiler front end or otherwise have a need to dynamically generate machine code. The JIT and interpreter
are functional, but it is currently not possible to generate assembly code which is then assembled into an
executable. You can indirectly create executables by using the C back end.
To emphasize, there is no C/C++ front end currently available. llvm-gcc is based on GCC, which cannot be
bootstrapped using VC++. Eventually there should be an llvm-gcc based on Cygwin or MinGW that is
usable. There is also the option of generating bitcode files on Unix and copying them over to Windows. But
be aware the odds of linking C++ code compiled with llvm-gcc with code compiled with VC++ is
essentially zero.
The LLVM test suite cannot be run on the Visual Studio port at this time.
Most of the tools build and work. bugpoint does build, but does not work. The other tools 'should' work,
but have not been fully tested.
Additional information about the LLVM directory structure and tool chain can be found on the main Getting
Started page.
2. svn co http://llvm.org/svn/llvm-project/llvm-top/trunk
llvm-top
3. make checkout MODULE=llvm
4. cd llvm
5. Use CMake to generate up-to-date project files:
♦ This step is currently optional as LLVM does still come with a normal Visual Studio solution
file, but it is not always kept up-to-date and will soon be deprecated in favor of the
multi-platform generator CMake.
♦ If CMake is installed, the simplest way is to start the CMake GUI, select the
directory to which you extracted LLVM, and accept the defaults, which should all be fine. The
one option you may really want to change, regardless of anything else, is the
CMAKE_INSTALL_PREFIX setting, which selects a directory to INSTALL to once compiling is
complete.
♦ If you use CMake to generate the Visual Studio solution and project files, then the Solution
will have a few extra options compared to the currently included one. The projects may still be
built individually, but to build them all do not just select all of them in batch build (as some
are meant as configuration projects), but rather select and build just the ALL_BUILD project
to build everything, or the INSTALL project, which first builds the ALL_BUILD project,
then installs the LLVM headers, libs, and other useful things to the directory set by the
CMAKE_INSTALL_PREFIX setting when you first configured CMake.
6. Start Visual Studio
♦ If you did not use CMake, then simply double click on the solution file
llvm/win32/llvm.sln.
♦ If you used CMake, then the root of the directory in which you generated the project
files will contain an llvm.sln file; just double-click on that to open Visual Studio.
7. Build the LLVM Suite:
♦ Simply build the solution.
♦ The Fibonacci project is a sample program that uses the JIT. Modify the project's debugging
properties to provide a numeric command line argument. The program will print the
corresponding Fibonacci value.
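The CMake configuration described in step 5 can also be done from a command prompt; a sketch (the generator name corresponds to VS2005, and the install path is illustrative):

% cmake -G "Visual Studio 8 2005" -DCMAKE_INSTALL_PREFIX=C:\llvm-install SRC_ROOT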
It is strongly encouraged that you get the latest version from Subversion as changes are continually making
the VS support better.
Requirements
Before you begin to use the LLVM system, review the requirements given below. Knowing ahead of time
what hardware and software you will need may save you some trouble.
Hardware
Any system that can adequately run Visual Studio .NET 2005 SP1 is fine. The LLVM source tree and object
files, libraries and executables will consume approximately 3GB.
Software
You will need Visual Studio .NET 2005 SP1 or higher. The VS2005 SP1 beta and the normal VS2005 still
have bugs that make them not completely compatible. VS2003 would work except that (at last check) it has
a bug with friend classes that you can work around with some minor code rewriting (and please submit a
patch if you do). Earlier versions of Visual Studio do not support the C++ standard well enough and will
not work.
You will also need the CMake build system since it generates the project files you will use to build with.
Do not install the LLVM directory tree into a path containing spaces (e.g. C:\Documents and Settings\...) as
the configure step will fail.
SRC_ROOT
This is the top level directory of the LLVM source tree.
OBJ_ROOT
This is the top level directory of the LLVM object tree (i.e. the tree where object files and compiled
programs will be placed. It is fixed at SRC_ROOT/win32).
The files that configure would create when building on Unix are created by the Configure project and
placed in OBJ_ROOT/llvm. Your application must have OBJ_ROOT in its include search path just before
SRC_ROOT/include.
#include <stdio.h>
int main() {
printf("hello world\n");
return 0;
}
2. Next, compile the C file into a LLVM bitcode file:
This will create the result file hello.bc, which is the LLVM bitcode that corresponds to the
compiled program and the library facilities that it required. You can execute this file directly using
the lli tool, compile it to native assembly with llc, optimize or analyze it further with the opt
tool, etc.
Note: while you cannot do this step on Windows, you can do it on a Unix system and transfer
hello.bc to Windows. Important: transfer as a binary file!
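On the Unix side, the bitcode file can be produced with something like the following (the exact flag spelling may vary between llvm-gcc releases):

% llvm-gcc -c hello.c -emit-llvm -o hello.bc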
3. Run the program using the just-in-time compiler:
% lli hello.bc
Note: this will only work for trivial C programs. Non-trivial programs (and any C++ program) will
have dependencies on the GCC runtime that won't be satisfied by the Microsoft runtime libraries.
4. Use the llvm-dis utility to take a look at the LLVM assembly code:
% cl hello.cbe.c
Note: this will only work for trivial C programs. Non-trivial programs (and any C++ program) will
have dependencies on the GCC runtime that won't be satisfied by the Microsoft runtime libraries.
7. Execute the native code program:
% hello.cbe.exe
Common Problems
• In Visual C++, if you are linking with the x86 target statically, the linker will remove the x86 target
library from your generated executable or shared library because there are no references to it. You can
force the linker to include these references by using
"/INCLUDE:_X86TargetMachineModule" when linking. In the Visual Studio IDE, this can
be added in Project Properties->Linker->Input->Force Symbol References.
If you are having problems building or using LLVM, or if you have any other general questions about LLVM,
please consult the Frequently Asked Questions page.
Links
This document is just an introduction to how to use LLVM to do some simple things... there are many more
interesting and complicated things that you can do that aren't documented here (but we'll gladly accept a patch
if you want to write something up!). For more information about LLVM, check out:
• LLVM homepage
• LLVM doxygen tree
• Starting a Project that Uses LLVM
Jeff Cohen
The LLVM Compiler Infrastructure
Last modified: $Date: 2009-08-05 10:42:44 -0500 (Wed, 05 Aug 2009) $
LLVM Developer Policy
1. Introduction
2. Developer Policies
1. Stay Informed
2. Making a Patch
3. Code Reviews
4. Code Owners
5. Test Cases
6. Quality
7. Obtaining Commit Access
8. Making a Major Change
9. Incremental Development
10. Attribution of Changes
3. Copyright, License, and Patents
1. Copyright
2. License
3. Patents
4. Developer Agreements
This policy is aimed at frequent contributors to LLVM. People interested in contributing one-off patches can
do so in an informal way by sending them to the llvm-commits mailing list and engaging another developer to
see it through the process.
Developer Policies
This section contains policies that pertain to frequent LLVM developers. We always welcome one-off patches
from people who do not routinely contribute to LLVM, but we expect more from frequent contributors to keep
the system as efficient as possible for everyone. Frequent LLVM contributors are expected to meet the
following requirements in order for LLVM to maintain a high standard of quality.
Stay Informed
Developers should stay informed by reading at least the llvmdev email list. If you are doing anything more
than just casual work on LLVM, it is suggested that you also subscribe to the llvm-commits list and pay
attention to changes being made by others.
We recommend that active developers register an email account with LLVM Bugzilla and preferably
subscribe to the llvm-bugs email list to keep track of bugs and enhancements occurring in LLVM.
Making a Patch
When making a patch for review, the goal is to make it as easy for the reviewer to read it as possible. As such,
we recommend that you:
1. Make your patch against the Subversion trunk, not a branch, and not an old version of LLVM. This
makes it easy to apply the patch. For information on how to check out SVN trunk, please see the
Getting Started Guide.
2. Similarly, patches should be submitted soon after they are generated. Old patches may not apply
correctly if the underlying code changes between the time the patch was created and the time it is
applied.
3. Patches should be made with this command:
svn diff
or with the utility utils/mkpatch, which makes it easy to read the diff.
4. Patches should not include differences in generated code such as the code generated by autoconf or
tblgen. The utils/mkpatch utility takes care of this for you.
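For example, a plain diff taken from the top of your working copy (the patch filename is illustrative):

% svn diff > my-change.patch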
When sending a patch to a mailing list, it is a good idea to send it as an attachment to the message, not
embedded into the text of the message. This ensures that your mailer will not mangle the patch when it sends
it (e.g. by making whitespace changes or by wrapping lines).
For Thunderbird users: Before submitting a patch, please open Preferences → Advanced → General → Config
Editor, find the key mail.content_disposition_type, and set its value to 1. Without this setting,
Thunderbird sends your attachment using Content-Disposition: inline rather than
Content-Disposition: attachment. Apple Mail gamely displays such a file inline, making it
difficult to work with for reviewers using that program.
Code Reviews
LLVM has a code review policy. Code review is one way to increase the quality of software. We generally
follow these policies:
1. All developers are required to have significant changes reviewed before they are committed to the
repository.
2. Code reviews are conducted by email, usually on the llvm-commits list.
3. Code can be reviewed either before it is committed or after. We expect major changes to be reviewed
before being committed, but smaller changes (or changes where the developer owns the component)
can be reviewed after commit.
4. The developer responsible for a code change is also responsible for making all necessary
review-related changes.
5. Code review can be an iterative process, which continues until the patch is ready to be committed.
Developers should participate in code reviews as both reviewers and reviewees. If someone is kind enough to
review your code, you should return the favor for someone else. Note that anyone is welcome to review and
give feedback on a patch, but only people with Subversion write access can approve it.
Code Owners
The LLVM Project relies on two features of its process to maintain rapid development while keeping the
quality of its source base high: code review, and post-commit review for trusted maintainers.
Having both is a great way for the project to take advantage of the fact that most people do the right thing
most of the time, and only commit patches without pre-commit review when they are confident they are right.
The trick to this is that the project has to guarantee that all patches that are committed are reviewed after they
go in: you don't want everyone to assume someone else will review it, allowing the patch to go unreviewed.
To solve this problem, we have a notion of an 'owner' for a piece of the code. The sole responsibility of a code
owner is to ensure that a commit to their area of the code is appropriately reviewed, either by themselves or by
someone else. The current code owners are:
Note that code ownership is completely different than reviewers: anyone can review a piece of code, and we
welcome code review from anyone who is interested. Code owners are the "last line of defense" to guarantee
that all patches that are committed are actually reviewed.
Being a code owner is a somewhat unglamorous position, but it is incredibly important for the ongoing
success of the project. Because people get busy, interests change, and unexpected things happen, code
ownership is purely opt-in, and anyone can choose to resign their "title" at any time. For now, we do not have
an official policy on how one gets elected to be a code owner.
Test Cases
Developers are required to create test cases for any bugs fixed and any new features added. Some tips for
getting your testcase approved:
1. All feature and regression test cases are added to the llvm/test directory. The appropriate
sub-directory should be selected (see the Testing Guide for details).
2. Test cases should be written in LLVM assembly language unless the feature or regression being tested
requires another language (e.g. the bug being fixed or feature being implemented is in the llvm-gcc
C++ front-end, in which case it must be written in C++).
3. Test cases, especially for regressions, should be reduced as much as possible, by bugpoint or
manually. It is unacceptable to place an entire failing program into llvm/test as this creates a
time-to-test burden on all developers. Please keep them short.
Note that llvm/test is designed for regression and small feature tests only. More extensive test cases (e.g.,
entire applications, benchmarks, etc) should be added to the llvm-test test suite. The llvm-test suite is for
coverage (correctness, performance, etc) testing, not feature or regression testing.
Quality
The minimum quality standards that any change must satisfy before being committed to the main
development branch are:
Additionally, the committer is responsible for addressing any problems found in the future that the change is
responsible for.
We prefer for this to be handled before submission but understand that it isn't possible to test all of this for
every submission. Our build bots and nightly testing infrastructure normally finds these problems. A good rule
of thumb is to check the nightly testers for regressions the day after your change. Build bots will directly
email you if a group of commits that included yours caused a failure. You are expected to check the build bot
messages to see if they are your fault and, if so, fix the breakage.
Commits that violate these quality standards (e.g. are very broken) may be reverted. This is necessary when
the change blocks other developers from making progress. The developer is welcome to re-commit the change
after the problem has been fixed.
Once you've been granted commit access, you should be able to check out an LLVM tree with an SVN URL
of "https://[email protected]/..." instead of the normal anonymous URL of "http://llvm.org/...". The first
time you commit you'll have to type in your password. Note that you may get a warning from SVN about an
untrusted key, you can ignore this. To verify that your commit access works, please do a test commit (e.g.
change a comment or add a blank line). Your first commit to a repository may require the autogenerated email
to be approved by a mailing list. This is normal, and will be done when the mailing list owner has time.
If you have recently been granted commit access, these policies apply:
1. You are granted commit-after-approval to all parts of LLVM. To get approval, submit a patch to
llvm-commits. When approved you may commit it yourself.
2. You are allowed to commit patches without approval which you think are obvious. This is clearly a
subjective decision — we simply expect you to use good judgement. Examples include: fixing build
breakage, reverting obviously broken patches, documentation/comment changes, any other minor
changes.
3. You are allowed to commit patches without approval to those portions of LLVM that you have
contributed or maintain (i.e., have been assigned responsibility for), with the proviso that such
commits must not break the build. This is a "trust but verify" policy and commits of this nature are
reviewed after they are committed.
4. Multiple violations of these policies or a single egregious violation may cause commit access to be
revoked.
In any case, your changes are still subject to code review (either before or after they are committed, depending
on the nature of the change). You are encouraged to review other peoples' patches as well, but you aren't
required to.
The design of LLVM is carefully controlled to ensure that all the pieces fit together well and are as consistent
as possible. If you plan to make a major change to the way LLVM works or want to add a major new
extension, it is a good idea to get consensus with the development community before you start working on it.
Once the design of the new feature is finalized, the work itself should be done as a series of incremental
changes, not as a long-term development branch.
Incremental Development
In the LLVM project, we do all significant changes as a series of incremental patches. We have a strong
dislike for huge changes or long-term development branches. Long-term development branches have a
number of drawbacks:
1. Branches must have mainline merged into them periodically. If the branch development and mainline
development occur in the same pieces of code, resolving merge conflicts can take a lot of time.
2. Other people in the community tend to ignore work on branches.
3. Huge changes (produced when a branch is merged back onto mainline) are extremely difficult to code
review.
4. Branches are not routinely tested by our nightly tester infrastructure.
5. Changes developed as monolithic large changes often don't work until the entire set of changes is
done. Breaking it down into a set of smaller changes increases the odds that any of the work will be
committed to the main repository.
To address these problems, LLVM uses an incremental development style and we require contributors to
follow this practice when making a large/invasive change. Some tips:
• Large/invasive changes usually have a number of secondary changes that are required before the big
change can be made (e.g. API cleanup, etc). These sorts of changes can often be done before the
major change is done, independently of that work.
• The remaining inter-related work should be decomposed into unrelated sets of changes if possible.
Once this is done, define the first increment and get consensus on what the end goal of the change is.
• Each change in the set can be stand alone (e.g. to fix a bug), or part of a planned series of changes that
works towards the development goal.
• Each change should be kept as small as possible. This simplifies your work (into a logical
progression), simplifies code review and reduces the chance that you will get negative feedback on
the change. Small increments also facilitate the maintenance of a high quality code base.
• Often, an independent precursor to a big change is to add a new API and slowly migrate clients to use
the new API. Each change to use the new API is often "obvious" and can be committed without
review. Once the new API is in place and used, it is much easier to replace the underlying
implementation of the API. This implementation change is logically separate from the API change.
If you are interested in making a large change, and this scares you, please make sure to first discuss the
change/gather consensus then ask about the best way to go about making the change.
Attribution of Changes
We believe in correct attribution of contributions to their contributors. However, we do not want the source
code to be littered with random attributions like "this code written by J. Random Hacker" (this is noisy and
distracting). In practice, the revision control system keeps a perfect history of who changed what, and the
CREDITS.txt file describes higher-level contributions. If you commit a patch for someone else, please say
"patch contributed by J. Random Hacker!" in the commit message.
NOTE: This section deals with legal matters but does not provide legal advice. We are not lawyers, please
seek legal counsel from an attorney.
Copyright
For consistency and ease of management, the project requires the copyright for all LLVM software to be held
by a single copyright holder: the University of Illinois (UIUC).
Although UIUC may eventually reassign the copyright of the software to another entity (e.g. a dedicated
non-profit "LLVM Organization") the intent for the project is to always have a single entity hold the
copyrights to LLVM at any given time.
We believe that having a single copyright holder is in the best interests of all developers and users as it greatly
reduces the managerial burden for any kind of administrative or technical decisions about LLVM. The goal of
the LLVM project is to always keep the code open and licensed under a very liberal license.
License
We intend to keep LLVM perpetually open source and to use a liberal open source license. The current license
is the University of Illinois/NCSA Open Source License, which boils down to this:
We believe this fosters the widest adoption of LLVM because it allows commercial products to be derived
from LLVM with few restrictions and without a requirement for making any derived works also open source
(i.e. LLVM's license is not a "copyleft" license like the GPL). We suggest that you read the License if further
clarification is needed.
Note that the LLVM Project does distribute llvm-gcc, which is GPL. This means that anything "linked" into
llvm-gcc must itself be compatible with the GPL, and must be releasable under the terms of the GPL. This
implies that any code linked into llvm-gcc and distributed to others may be subject to the viral aspects of
the GPL (for example, a proprietary code generator linked into llvm-gcc must be made available under the
GPL). This is not a problem for code already distributed under a more liberal license (like the UIUC license),
and does not affect code generated by llvm-gcc. It may be a problem if you intend to base commercial
development on llvm-gcc without redistributing your source code.
We have no plans to change the license of LLVM. If you have questions or comments about the license,
please contact the LLVM Oversight Group.
Patents
To the best of our knowledge, LLVM does not infringe on any patents (we have actually removed code from
LLVM in the past that was found to infringe). Having code in LLVM that infringes on patents would violate
an important goal of the project by making it hard or impossible to reuse the code for arbitrary purposes
(including commercial use).
When contributing code, we expect contributors to notify us of any potential for patent-related trouble with
their changes. If you or your employer own the rights to a patent and would like to contribute code to LLVM
that relies on it, we require that the copyright owner sign an agreement that allows any other user of LLVM to
freely use your patent. Please contact the oversight group for more details.
Developer Agreements
With regards to the LLVM copyright and licensing, developers agree to assign their copyrights to UIUC for
any contribution made so that the entire software base can be managed by a single copyright holder. This
implies that any contributions can be licensed under the license that the project uses.
When contributing code, you also affirm that you are legally entitled to grant this copyright, personally or on
behalf of your employer. If the code belongs to some other entity, please raise this issue with the oversight
group before the code is committed.
LLVM's Analysis and Transform Passes
1. Introduction
2. Analysis Passes
3. Transform Passes
4. Utility Passes
Introduction
This document serves as a high level summary of the optimization features that LLVM provides.
Optimizations are implemented as Passes that traverse some portion of a program to either collect information
or transform the program. The table below divides the passes that LLVM provides into three categories.
Analysis passes compute information that other passes can use, or that is useful for debugging or program
visualization. Transform passes can use (or invalidate) the analysis passes; they all mutate the program in
some way. Utility passes provide some utility but don't otherwise fit a category; for example, passes to
extract functions to bitcode or to write a module to bitcode are neither analysis nor transform passes.
The table below provides a quick summary of each pass and links to the more complete pass description later
in the document.
ANALYSIS PASSES
Option Name    Description
-aa-eval Exhaustive Alias Analysis Precision Evaluator
-basicaa Basic Alias Analysis (default AA impl)
-basiccg Basic CallGraph Construction
-codegenprepare Optimize for code generation
-count-aa Count Alias Analysis Query Responses
-debug-aa AA use debugger
-domfrontier Dominance Frontier Construction
-domtree Dominator Tree Construction
-dot-callgraph Print Call Graph to 'dot' file
-dot-cfg Print CFG of function to 'dot' file
-dot-cfg-only Print CFG of function to 'dot' file (with no function bodies)
-globalsmodref-aa Simple mod/ref analysis for globals
-instcount Counts the various types of Instructions
-intervals Interval Partition Construction
-loops Natural Loop Construction
-memdep Memory Dependence Analysis
-no-aa No Alias Analysis (always returns 'may' alias)
-no-profile No Profile Information
-postdomfrontier Post-Dominance Frontier Construction
-postdomtree Post-Dominator Tree Construction
-print-alias-sets Alias Set Printer
-print-callgraph Print a call graph
This is inspired and adapted from code by: Naveen Neelakantam, Francesco Spadini, and Wojciech
Stryjewski.
AA use debugger
This simple pass checks alias analysis users to ensure that if they create a new value, they do not query AA
without informing it of the value. It acts as a shim over any other AA pass you want.
Yes, keeping track of every value in the program is expensive, but this is a debugging pass.
Interval Partition Construction
This analysis calculates and represents the interval partition of a function, or a preexisting interval partition.
In this way, the interval partition may be used to reduce a flow graph down to its degenerate single node
interval partition (unless it is irreducible).
No Profile Information
The default "no profile" implementation of the abstract ProfileInfo interface.
The PrintFunctionPass class is designed to be pipelined with other FunctionPasses, and prints out
the functions of the module as they are processed.
This analysis is primarily useful for induction variable substitution and strength reduction.
Transform Passes
This section describes the LLVM Transform Passes.
This pass also handles aggregate arguments that are passed into a function, scalarizing them if the elements of
the aggregate are only loaded. Note that it refuses to scalarize aggregates which would require passing in more
than three operands to the function, because passing thousands of operands for a large array or structure is
unprofitable!
Note that this transformation could also be done for arguments that are only stored to (returning the value
instead), but does not currently. This case would be best handled when and if LLVM starts supporting
multiple return values from functions.
blocks in depth-first order.
add i32 1, 2
becomes
i32 3
NOTE: this pass has a habit of making definitions be dead. It is a good idea to run a DIE (Dead Instruction
Elimination) pass sometime after running this pass.
This pass is often useful as a cleanup pass to run after aggressive interprocedural passes, which add
possibly-dead arguments.
This pass finds places where the address of a memory function is taken and constructs bounce functions that
call those functions directly.
This transformation makes the following changes to each loop with an identifiable induction variable:
1. All loops are transformed to have a single canonical induction variable which starts at zero and steps
by one.
2. The canonical induction variable is guaranteed to be the first PHI node in the loop header block.
3. Any pointer arithmetic recurrences are raised to use array subscripts.
If the trip count of a loop is computable, this pass also makes the following changes:
1. The exit condition for the loop is canonicalized to compare the induction value against the exit value.
This turns loops like:
into
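The before/after loops themselves are missing from this copy of the document. A C-level sketch of the exit-condition canonicalization (the loop bounds here are illustrative, not from the original) could look like:

```c
#include <assert.h>

/* Hypothetical illustration of -indvars: a loop whose induction variable
 * starts at 7 and steps by 3 is rewritten to use a canonical induction
 * variable that starts at zero and steps by one, with the exit condition
 * comparing against a precomputed trip count. */
int sum_original(void) {
    int sum = 0;
    for (int i = 7; i < 28; i += 3)          /* non-canonical IV */
        sum += i;
    return sum;
}

int sum_canonicalized(void) {
    int sum = 0;
    int trip_count = (28 - 7 + 2) / 3;       /* computable trip count: 7 */
    for (int canoniv = 0; canoniv != trip_count; ++canoniv)
        sum += 7 + canoniv * 3;              /* value derived from canonical IV */
    return sum;
}
```

Both versions compute the same sum; only the shape of the induction variable and the exit test change.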
This transformation should be followed by strength reduction after all of the desired loop transformations
have been performed. Additionally, on targets where it is profitable, the loop could be transformed to count
down to zero (the "do loop" optimization).
Function Integration/Inlining
Bottom-up inlining of functions into callers.
Note that this implementation is very naïve. Control equivalent regions of the CFG should not require
duplicate counters, but it does put duplicate counters in.
Note that this implementation is very naïve. It inserts a counter for every edge in the program, instead of
using control flow information to prune the number of counters inserted.
After this pass, it is highly recommended to run mem2reg and adce. instcombine, load-vn, gdce, and dse are
also good to run afterwards.
into:
%Z = add i32 %X, 2
This pass guarantees that the following canonicalizations are performed on the program:
• If a binary operator has a constant operand, it is moved to the right-hand side.
• Bitwise operators with constant operands are always grouped so that shifts are performed first, then
ors, then ands, then xors.
• Compare instructions are converted from <, >, ≤, or ≥ to = or ≠ if possible.
• All cmp instructions on boolean values are replaced with logical operations.
• add X, X is represented as mul X, 2 ⇒ shl X, 1
• Multiplies with a constant power-of-two argument are transformed into shifts.
• etc.
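For instance, the power-of-two rule replaces a multiply with a cheaper shift; the two forms below are equivalent for unsigned operands (a hand-written sketch, not compiler output):

```c
#include <assert.h>

/* instcombine canonicalization: multiply by a constant power of two
 * becomes a left shift (8 == 1 << 3). */
unsigned times8_mul(unsigned x)   { return x * 8; }
unsigned times8_shift(unsigned x) { return x << 3; }
```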
if () { ...
X = 4;
}
if (X < 3) {
In this case, the unconditional branch at the end of the first if can be revectored to the false side of the second
if.
... = X3 + 4
X4 = phi(X3)
... = X4 + 4
This is still valid LLVM; the extra phi nodes are purely redundant, and will be trivially eliminated by
InstCombine. The major benefit of this transformation is that it makes many other loop optimizations,
such as LoopUnswitching, simpler.
• Moving loop invariant loads and calls out of loops. If we can determine that a load or call inside of a
loop never aliases anything stored to, we can hoist it or sink it like any other instruction.
• Scalar Promotion of Memory - If there is a store instruction inside of the loop, we try to move the
store to happen AFTER the loop instead of inside of the loop. This can only happen if a few
conditions are true:
♦ The pointer stored through is loop invariant.
♦ There are no stores or loads in the loop which may alias the pointer. There are no calls in the
loop which mod/ref the pointer.
If these conditions are true, we can promote the loads and stores in the loop of the pointer to use a
temporary alloca'd variable. We then use the mem2reg functionality to construct the appropriate SSA
form for the variable.
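The scalar-promotion conditions above can be sketched at the C level (illustrative code, not actual LLVM output; function names are invented for the example):

```c
#include <assert.h>

/* A store through a loop-invariant pointer inside the loop ... */
int last_store_in_loop(int n) {
    int cell = -1;
    int *p = &cell;               /* loop-invariant pointer, no aliases */
    for (int i = 0; i < n; ++i)
        *p = i;                   /* store inside the loop */
    return cell;
}

/* ... is promoted to a temporary with a single store after the loop. */
int last_store_promoted(int n) {
    int cell = -1;
    int *p = &cell;
    int tmp = *p;                 /* promoted ("alloca'd") temporary */
    for (int i = 0; i < n; ++i)
        tmp = i;
    *p = tmp;                     /* single store after the loop */
    return cell;
}
```

After promotion, mem2reg can turn the temporary into pure SSA values with no memory traffic at all.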
Rotate Loops
Unroll loops
This pass implements a simple loop unroller. It works best when loops have been canonicalized by the
-indvars pass, allowing it to determine the trip counts of loops easily.
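A hand-written sketch of what 4x unrolling does to a loop with a known trip count (illustrative only; the real pass operates on LLVM IR):

```c
#include <assert.h>

/* Original loop: eight iterations, one element per iteration. */
int sum_rolled(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int s = 0;
    for (int i = 0; i < 8; ++i)
        s += a[i];
    return s;
}

/* Unrolled by a factor of four: two iterations, four elements each,
 * trading code size for fewer branches and more ILP. */
int sum_unrolled(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int s = 0;
    for (int i = 0; i < 8; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    return s;
}
```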
Unswitch loops
This pass transforms loops that contain branches on loop-invariant conditions to have multiple loops. For
example, it turns the left into the right code:
This can increase the size of the code exponentially (doubling it every time a loop is unswitched) so we only
unswitch if the resultant code will be smaller than a threshold.
This pass expects LICM to be run before it to hoist invariant conditions out of the loop, to make the
unswitching opportunity obvious.
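The left/right code referred to above is missing from this copy; a C-level sketch of the transformation (invented names, not compiler output) is:

```c
#include <assert.h>

/* Before: the branch on the loop-invariant 'flag' runs every iteration. */
int unswitch_before(int flag, int n) {
    int acc = 0;
    for (int i = 0; i < n; ++i) {
        if (flag) acc += 2 * i;   /* loop-invariant condition */
        else      acc += i;
    }
    return acc;
}

/* After: the branch is hoisted out, producing two specialized loops. */
int unswitch_after(int flag, int n) {
    int acc = 0;
    if (flag)
        for (int i = 0; i < n; ++i) acc += 2 * i;
    else
        for (int i = 0; i < n; ++i) acc += i;
    return acc;
}
```

Note the code-size doubling per unswitched condition, which is why the pass applies a size threshold.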
Loop pre-header insertion guarantees that there is a single, non-critical entry edge from outside of the loop to
the loop header. This simplifies a number of analyses and transformations, such as LICM.
Loop exit-block insertion guarantees that all exit blocks from the loop (blocks which are outside of the loop
that have predecessors inside of the loop) only have predecessors from inside of the loop (and are thus
dominated by the loop header). This simplifies transformations such as store-sinking that are built into LICM.
This pass also guarantees that loops will have exactly one backedge.
Note that the simplifycfg pass will clean up blocks which are split out but end up being unnecessary, so usage
of this pass should not pessimize generated code.
This pass obviously modifies the CFG, but updates loop information and dominator information.
This is a target-dependent transformation because it depends on the size of data types and alignment
constraints.
'Cheap' exception handling support gives the program the ability to execute any program which does not
"throw an exception", by turning 'invoke' instructions into calls and by turning 'unwind' instructions into calls
to abort(). If the program does dynamically use the unwind instruction, the program will print a message then
abort.
'Expensive' exception handling support gives the full exception handling support to the program at the cost of
making the 'invoke' instruction really expensive. It basically inserts setjmp/longjmp calls to emulate the
exception handling as necessary.
Because the 'expensive' support slows down programs a lot, and EH is only used for a subset of the programs,
it must be specifically enabled by the -enable-correct-eh-support option.
Note that after this pass runs the CFG is not entirely accurate (exceptional control flow edges are not correct
anymore) so only very simple things should be done after the lowerinvoke pass has run (like generation of
native code). This should not be used as a general purpose "my LLVM-to-LLVM pass doesn't support the
invoke instruction yet" lowering pass.
Lowering of longjmp is fairly trivial. We replace the call with a call to the LLVM library function
__llvm_sjljeh_throw_longjmp(). This unwinds the stack for us calling all of the destructors for
objects allocated on the stack.
At a setjmp call, the basic block is split and the setjmp removed. The calls in a function that have a
setjmp are converted to invoke where the except part checks to see if it's a longjmp exception and, if so, if
it's handled in the function. If it is, then it gets the value returned by the longjmp and goes to where the
basic block was split. invoke instructions are handled in a similar fashion with the original except block
being executed if it isn't a longjmp except that is handled by that function.
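The behavior being emulated is that of plain C setjmp/longjmp, which this small self-contained example demonstrates (the `__llvm_sjljeh_*` runtime functions themselves are internal and not shown):

```c
#include <setjmp.h>

static jmp_buf env;

static void thrower(void) {
    longjmp(env, 42);           /* "throws" back to the setjmp point */
}

int catch_value(void) {
    int v = setjmp(env);        /* returns 0 directly, 42 after the longjmp */
    if (v == 0) {
        thrower();
        return -1;              /* never reached */
    }
    return v;
}
```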
Reassociate expressions
This pass reassociates commutative expressions in an order that is designed to promote better constant
propagation, GCSE, LICM, PRE, etc.
In the implementation of this algorithm, constants are assigned rank = 0, function arguments are rank = 1, and
other values are assigned ranks corresponding to the reverse post order traversal of current function (starting
at 2), which effectively gives values in deep loops higher rank than values not in loops.
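The payoff of ranking is that low-rank operands (constants) get grouped together where they can be folded. A C-level sketch of the effect (illustrative, not pass output):

```c
#include <assert.h>

/* Before reassociation: the two constants are separated by a variable,
 * so neither addition can be folded. */
int unassociated(int x) { return (x + 4) + 5; }

/* After reassociation: x + (4 + 5) groups the rank-0 constants, which
 * constant-fold to a single add of 9. */
int reassociated(int x) { return x + 9; }
```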
This combines a simple scalar replacement of aggregates algorithm with the mem2reg algorithm, because the
two often interact, especially for C++ programs. As such, iterating between scalarrepl and mem2reg until
we run out of things to promote works well.
Note that this pass has a habit of making definitions be dead. It is a good idea to run a DCE pass sometime
after running this pass.
Note that this transformation makes code much less readable, so it should only be used in situations where the
strip utility would be used, such as reducing code size or making it harder to reverse engineer code.
• Trivial instructions between the call and return do not prevent the transformation from taking place,
though currently the analysis cannot support moving any really useful instructions (only dead ones).
• This pass transforms functions that are prevented from being tail recursive by an associative
expression to use an accumulator variable, thus compiling the typical naive factorial or fib
implementation into efficient code.
• TRE is performed if the function returns void, if the return returns the result returned by the call, or if
the function returns a run-time constant on all exits from the function. It is possible, though unlikely,
that the return returns something else (like constant 0), and can still be TRE'd. It can be TRE'd if all
other return instructions in the function return the exact same value.
• If it can prove that callees do not access their caller's stack frame, they are marked as eligible for tail
call elimination (by the code generator).
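The accumulator transformation described above turns the typical naive factorial into a true tail call; a hand-written sketch of the before/after shapes:

```c
#include <assert.h>

/* Naive form: the multiply happens AFTER the recursive call returns,
 * so the call is not a tail call. */
long fact_naive(long n) {
    if (n <= 1) return 1;
    return n * fact_naive(n - 1);
}

/* Accumulator form produced by TRE: the pending multiply is carried in
 * an accumulator parameter, leaving nothing to do after the call. */
static long fact_acc(long n, long acc) {
    if (n <= 1) return acc;
    return fact_acc(n - 1, n * acc);  /* true tail call */
}

long fact_tre(long n) { return fact_acc(n, 1); }
```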
Tail Duplication
This pass performs a limited form of tail duplication, intended to simplify CFGs by removing some
unconditional branches. This pass is necessary to straighten out loops created by the C front-end, but also is
capable of making other code nicer. After this pass is run, the CFG simplify pass should be run to clean up the
mess.
Utility Passes
This section describes the LLVM Utility Passes.
Running the verifier runs this pass automatically, so there should be no need to use it directly.
Module Verifier
Verifies LLVM IR code. This is useful to run after an optimization which is undergoing testing. Note that
llvm-as verifies its input before emitting bitcode, and also that malformed bitcode is likely to make LLVM
crash. All language front-ends are therefore encouraged to verify their output before performing optimizing
transformations.
Note that this does not provide full security verification (like Java), but instead just tries to ensure that code is
well-formed.
Displays the control flow graph using the GraphViz tool, but omitting function bodies.
Reid Spencer
LLVM Compiler Infrastructure
Last modified: $Date: 2010-03-01 13:24:17 -0600 (Mon, 01 Mar 2010) $
1. License
1. Why are the LLVM source code and the front-end distributed under different licenses?
2. Does the University of Illinois Open Source License really qualify as an "open source"
license?
3. Can I modify LLVM source code and redistribute the modified source?
4. Can I modify LLVM source code and redistribute binaries or other tools based on it, without
redistributing the source?
2. Source code
1. In what language is LLVM written?
2. How portable is the LLVM source code?
3. Build Problems
1. When I run configure, it finds the wrong C compiler.
2. The configure script finds the right C compiler, but it uses the LLVM linker from a
previous build. What do I do?
3. When creating a dynamic library, I get a strange GLIBC error.
4. I've updated my source tree from Subversion, and now my build is trying to use a
file/directory that doesn't exist.
5. I've modified a Makefile in my source tree, but my build tree keeps using the old version.
What do I do?
6. I've upgraded to a new version of LLVM, and I get strange build errors.
7. I've built LLVM and am testing it, but the tests freeze.
8. Why do test results differ when I perform different types of builds?
9. Compiling LLVM with GCC 3.3.2 fails, what should I do?
10. Compiling LLVM with GCC succeeds, but the resulting tools do not work, what can be
wrong?
11. When I use the test suite, all of the C Backend tests fail. What is wrong?
12. After Subversion update, rebuilding gives the error "No rule to make target".
13. The llvmc program gives me errors/doesn't work.
14. When I compile LLVM-GCC with srcdir == objdir, it fails. Why?
4. Source Languages
1. What source languages are supported?
2. I'd like to write a self-hosting LLVM compiler. How should I interface with the LLVM
middle-end optimizers and back-end code generators?
3. What support is there for higher level source language constructs for building a compiler?
4. I don't understand the GetElementPtr instruction. Help!
5. Using the GCC Front End
1. When I compile software that uses a configure script, the configure script thinks my system
has all of the header files and libraries it is testing for. How do I get configure to work
correctly?
2. When I compile code using the LLVM GCC front end, it complains that it cannot find
libcrtend.a?
3. How can I disable all optimizations when compiling code using the LLVM GCC front end?
4. Can I use LLVM to convert C++ code to C code?
5. Can I compile C or C++ code to platform-independent LLVM bitcode?
6. Questions about code generated by the GCC front-end
1. What is this llvm.global_ctors and _GLOBAL__I__tmp_webcompile... stuff
that happens when I #include <iostream>?
2. Where did all of my code go??
3. What is this "undef" thing that shows up in my code?
4. Why does instcombine + simplifycfg turn a call to a function with a mismatched calling
convention into "unreachable"? Why not make the verifier reject it?
License
Why are the LLVM source code and the front-end distributed under different licenses?
The C/C++ front-ends are based on GCC and must be distributed under the GPL. Our aim is to distribute
LLVM source code under a much less restrictive license, in particular one that does not compel users who
distribute tools based on modifying the source to redistribute the modified source code as well.
Does the University of Illinois Open Source License really qualify as an "open source" license?
Yes, the license is certified by the Open Source Initiative (OSI).
Can I modify LLVM source code and redistribute the modified source?
Yes. The modified source distribution must retain the copyright notice and follow the three bulletted
conditions listed in the LLVM license.
Can I modify LLVM source code and redistribute binaries or other tools based on it, without redistributing the
source?
Yes. This is why we distribute LLVM under a less restrictive license than GPL, as explained in the first
question above.
Source Code
In what language is LLVM written?
All of the LLVM tools and libraries are written in C++ with extensive use of the STL.
The LLVM source code should be portable to most modern UNIX-like operating systems. Most of the code is
written in standard C++ with operating system services abstracted to a support library. The tools required to
build and test LLVM have been ported to a plethora of platforms.
• The GCC front end code is not as portable as the LLVM suite, so it may not compile as well on
unsupported platforms.
• The LLVM build system relies heavily on UNIX shell tools, like the Bourne Shell and sed. Porting to
systems without these tools (MacOS 9, Plan 9) will require more effort.
Build Problems
When I run configure, it finds the wrong C compiler.
The configure script attempts to locate first gcc and then cc, unless it finds compiler paths set in CC and
CXX for the C and C++ compiler, respectively.
If configure finds the wrong compiler, either adjust your PATH environment variable or set CC and CXX
explicitly.
The configure script finds the right C compiler, but it uses the LLVM linker from a previous build. What
do I do?
The configure script uses the PATH to find executables, so if it's grabbing the wrong linker/assembler/etc,
there are two ways to fix it:
1. Adjust your PATH environment variable so that the correct program appears first in the PATH. This
may work, but may not be convenient when you want them first in your path for other work.
2. Run configure with an alternative PATH that is correct. In a Bourne-compatible shell, the syntax
would be:

% PATH=[the path without the bad program] ./configure ...
This is still somewhat inconvenient, but it allows configure to do its work without having to adjust
your PATH permanently.
Under some operating systems (e.g., Linux), libtool does not work correctly if GCC was compiled with the
--disable-shared option. To work around this, install your own version of GCC that has shared libraries
enabled by default.
I've updated my source tree from Subversion, and now my build is trying to use a file/directory that doesn't
exist.
You need to re-run configure in your object directory. When new Makefiles are added to the source tree, they
have to be copied over to the object tree in order to be used by the build.
I've modified a Makefile in my source tree, but my build tree keeps using the old version. What do I do?
If the Makefile already exists in your object tree, you can just run the following command in the top level
directory of your object tree:

% ./config.status <relative path to Makefile>
If the Makefile is new, you will have to modify the configure script to copy it over.
I've upgraded to a new version of LLVM, and I get strange build errors.
Sometimes, changes to the LLVM source code alters how the build system works. Changes in libtool,
autoconf, or header file dependencies are especially prone to this sort of problem.
The best thing to try is to remove the old files and re-build. In most cases, this takes care of the problem. To
do this, just type make clean and then make in the directory that fails to build.
I've built LLVM and am testing it, but the tests freeze.
This is most likely occurring because you built a profile or release (optimized) build of LLVM and have not
specified the same information on the gmake command line. For example, if you built LLVM with the
command:

% gmake ENABLE_PROFILING=1

...then you must run the tests with the following commands:
% cd llvm/test
% gmake ENABLE_PROFILING=1
The LLVM test suite is dependent upon several features of the LLVM tools and libraries.
First, the debugging assertions in code are not enabled in optimized or profiling builds. Hence, tests that used
to fail may pass.
Second, some tests may rely upon debugging options or behavior that is only available in the debug build.
These tests will fail in an optimized or profile build.
This is a bug in GCC, and affects projects other than LLVM. Try upgrading or downgrading your GCC.
Compiling LLVM with GCC succeeds, but the resulting tools do not work. What can be wrong?
Several versions of GCC have shown a weakness in miscompiling the LLVM codebase. Please consult your
compiler version (gcc --version) to find out whether it is broken. If so, your only option is to upgrade
GCC to a known good version.
After Subversion update, rebuilding gives the error "No rule to make target ... Stop."
This may occur anytime files are moved within the Subversion repository or removed entirely. In this case,
the best solution is to erase all .d files, which list dependencies for source files, and rebuild:
% cd $LLVM_OBJ_DIR
% rm -f `find . -name \*\.d`
% gmake
The llvmc program gives me errors/doesn't work.

llvmc is experimental and isn't really supported. We suggest using llvm-gcc instead.

When I compile LLVM-GCC with srcdir == objdir, it fails. Why?

The GNUmakefile in the top-level directory of LLVM-GCC is a special Makefile used by Apple to
invoke the build_gcc script after setting up a special environment. This has the unfortunate side-effect that
trying to build LLVM-GCC with srcdir == objdir in a "non-Apple way" invokes the GNUmakefile instead
of Makefile. Because the environment isn't set up correctly to do this, the build fails.
People not building LLVM-GCC the "Apple way" need to build LLVM-GCC with srcdir != objdir, or simply
remove the GNUmakefile entirely.
Source Languages
What source languages are supported?
LLVM currently has full support for C and C++ source languages. These are available through a special
version of GCC that LLVM calls the C Front End.
There is an incomplete version of a Java front end available in the java module. There is no documentation
on this yet so you'll need to download the code, compile it, and try it.
The PyPy developers are working on integrating LLVM into the PyPy backend so that the PyPy language
can translate to LLVM.
I'd like to write a self-hosting LLVM compiler. How should I interface with the LLVM middle-end optimizers
and back-end code generators?
Your compiler front-end will communicate with LLVM by creating a module in the LLVM intermediate
representation (IR) format. Assuming you want to write your language's compiler in the language itself (rather
than C++), there are 3 major ways to tackle generating LLVM IR from a front-end:
• Call into the LLVM libraries code using your language's FFI (foreign function interface).
♦ for: best tracks changes to the LLVM IR, .ll syntax, and .bc format
♦ for: enables running LLVM optimization passes without an emit/parse overhead
♦ for: adapts well to a JIT context
♦ against: lots of ugly glue code to write
• Emit LLVM assembly from your compiler's native language.
♦ for: very straightforward to get started
♦ against: the .ll parser is slower than the bitcode reader when interfacing to the middle end
♦ against: you'll have to re-engineer the LLVM IR object model and asm writer in your
language
♦ against: it may be harder to track changes to the IR
• Emit LLVM bitcode from your compiler's native language.
♦ for: can use the more-efficient bitcode reader when interfacing to the middle end
♦ against: you'll have to re-engineer the LLVM IR object model and bitcode writer in your
language
♦ against: it may be harder to track changes to the IR
If you go with the first option, the C bindings in include/llvm-c should help a lot, since most languages have
strong support for interfacing with C. The most common hurdle with calling C from managed code is
interfacing with the garbage collector. The C interface was designed to require very little memory
management, and so is straightforward in this regard.
What support is there for higher level source language constructs for building a compiler?
Currently, there isn't much. LLVM supports an intermediate representation which is useful for code
representation but will not support the high level (abstract syntax tree) representation needed by most
compilers. There are no facilities for lexical or semantic analysis. There is, however, a mostly implemented
configuration-driven compiler driver which simplifies the task of running optimizations, linking, and
executable generation.
When I compile software that uses a configure script, the configure script thinks my system has all of the
header files and libraries it is testing for. How do I get configure to work correctly?

The configure script is getting things wrong because the LLVM linker allows symbols to be undefined at link
time (so that they can be resolved during JIT or translation to the C back end). That is why configure thinks
your system "has everything."
1. Make sure the CC and CXX environment variables contains the full path to the LLVM GCC front
end.
2. Make sure that the regular C compiler is first in your PATH.
3. Add the string "-Wl,-native" to your CFLAGS environment variable.
This will allow the llvm-ld linker to create a native code executable instead of a shell script that runs the JIT.
Creating native code requires standard linkage, which in turn will allow the configure script to find out if code
is not linking on your system because the feature isn't available on your system.
When I compile code using the LLVM GCC front end, it complains that it cannot find libcrtend.a.
The only way this can happen is if you haven't installed the runtime library. To correct this, do:
% cd llvm/runtime
% make clean ; make install-bytecode
How can I disable all optimizations when compiling code using the LLVM GCC front end?
Passing "-Wa,-disable-opt -Wl,-disable-opt" will disable *all* cleanup and optimizations done at the llvm
level, leaving you with the truly horrible code that you desire.
Can I use LLVM to convert C++ code to C code?

Yes, you can use LLVM to convert code from any language LLVM supports to C. Note that the generated C
code will be very low level (all loops are lowered to gotos, etc) and not very pretty (comments are stripped,
original source formatting is totally lost, variables are renamed, expressions are regrouped), so this may not be
what you're looking for. Also, there are several limitations noted below.
1. Compile your program as normal with llvm-g++:

% llvm-g++ x.cpp -o program

or:
% llvm-g++ a.cpp -c
% llvm-g++ b.cpp -c
% llvm-g++ a.o b.o -o program
With llvm-gcc3, this will generate program and program.bc. The .bc file is the LLVM version of the
program all linked together.
2. Convert the LLVM code to C code, using the LLC tool with the C backend:

% llc -march=c program.bc -o program.c

3. Finally, compile the C file:

% cc x.c
Using LLVM does not eliminate the need for C++ library support. If you use the llvm-g++ front-end, the
generated code will depend on g++'s C++ support libraries in the same way that code generated from g++
would. If you use another C++ front-end, the generated code will depend on whatever library that front-end
would normally require.
If you are working on a platform that does not provide any C++ libraries, you may be able to manually
compile libstdc++ to LLVM bitcode, statically link it into your program, then use the commands above to
convert the whole result into C code. Alternatively, you might compile the libraries and your application into
two different chunks of C code and link them.
Note that, by default, the C back end does not support exception handling. If you want/need it for a certain
program, you can enable it by passing "-enable-correct-eh-support" to the llc program. The resultant code will
use setjmp/longjmp to implement exception support that is relatively slow, and not C++-ABI-conforming on
most platforms, but otherwise correct.
Also, there are a number of other limitations of the C backend that cause it to produce code that does not fully
conform to the C++ ABI on most platforms. Some of the C++ programs in LLVM's test suite are known to
fail when compiled with the C back end because of ABI incompatibilities with standard C++ libraries.
Can I compile C or C++ code to platform-independent LLVM bitcode?

No. C and C++ are inherently platform-dependent languages. The most obvious example of this is the
preprocessor. A very common way that C code is made portable is by using the preprocessor to include
platform-specific code. In practice, information about other platforms is lost after preprocessing, so the result
is inherently dependent on the platform that the preprocessing was targeting.
Another example is sizeof. It's common for sizeof(long) to vary between platforms. In most C
front-ends, sizeof is expanded to a constant immediately, thus hard-wiring a platform-specific detail.
Also, since many platforms define their ABIs in terms of C, and since LLVM is lower-level than C, front-ends
currently must emit platform-specific IR in order to have the result conform to the platform ABI.
What is this llvm.global_ctors and _GLOBAL__I__tmp_webcompile... stuff that happens when I
#include <iostream>?

If you #include the <iostream> header into a C++ translation unit, the file will probably use the
std::cin/std::cout/... global objects. However, C++ does not guarantee an order of initialization
between static objects in different translation units, so if a static ctor/dtor in your .cpp file used std::cout,
for example, the object would not necessarily be automatically initialized before your use.
To make std::cout and friends work correctly in these scenarios, the STL that we use declares a static
object that gets created in every translation unit that includes <iostream>. This object has a static
constructor and destructor that initializes and destroys the global iostream objects before they could possibly
be used in the file. The code that you see in the .ll file corresponds to the constructor and destructor
registration code.
If you would like to make it easier to understand the LLVM code generated by the compiler in the demo
page, consider using printf() instead of iostreams to print values.
Where did all of my code go??

If you are using the LLVM demo page, you may often wonder what happened to all of the code that you typed
in. Remember that the demo script is running the code through the LLVM optimizers, so if your code doesn't
actually do anything useful, it might all be deleted.
To prevent this, make sure that the code is actually needed. For example, if you are computing some
expression, return the value from the function instead of leaving it in a local variable. If you really want to
constrain the optimizer, you can read from and assign to volatile global variables.
What is this "undef" thing that shows up in my code?

undef is the LLVM way of representing a value that is not defined. You can get these if you do not initialize
a variable before you use it. For example, the C function:

int X() { int i; return i; }

is compiled to "ret i32 undef" because "i" never has a value specified for it.
Why does instcombine + simplifycfg turn a call to a function with a mismatched calling convention into
"unreachable"? Why not make the verifier reject it?
This is a common problem run into by authors of front-ends that are using custom calling conventions: you
need to make sure to set the right calling convention on both the function and on each call to the function. For
example, this code:
define fastcc void @foo() {
        ret void
}
define void @bar() {
call void @foo( )
ret void
}
Is optimized to:

define fastcc void @foo() {
        ret void
}
define void @bar() {
        unreachable
}
... with "opt -instcombine -simplifycfg". This often bites people because "all their code disappears". Setting
the calling convention on the caller and callee is required for indirect calls to work, so people often ask why
not make the verifier reject this sort of thing.
The answer is that this code has undefined behavior, but it is not illegal. If we made it illegal, then every
transformation that could potentially create this would have to ensure that it doesn't, and there is valid code
that can create this sort of construct (in dead code). The sorts of things that can cause this to happen are fairly
contrived, but we still need to accept them. Here's an example:
In this example, "test" always passes @foo/false into bar, which ensures that it is dynamically called with the
right calling conv (thus, the code is perfectly well defined). If you run this through the inliner, you get this
(the explicit "or" is there so that the inliner doesn't dead code eliminate a bunch of stuff):
br label %bar.exit
bar.exit:
ret void
}
Here you can see that the inlining pass made an undefined call to @foo with the wrong calling convention.
We really don't want to make the inliner have to know about this sort of thing, so it needs to be valid code. In
this case, dead code elimination can trivially remove the undefined code. However, if %X was an input
argument to @test, the inliner would produce this:
The interesting thing about this is that %X must be false for the code to be well-defined, but no amount of
dead code elimination will be able to delete the broken call as unreachable. However, since
instcombine/simplifycfg turns the undefined call into unreachable, we end up with a branch on a condition
that goes to unreachable: a branch to unreachable can never happen, so "-inline -instcombine -simplifycfg" is
able to produce:
1. Introduction
2. Sub-project Status Update
3. External Projects Using LLVM 2.7
4. What's New in LLVM 2.7?
5. Installation Instructions
6. Portability and Supported Platforms
7. Known Problems
8. Additional Information
Introduction
This document contains the release notes for the LLVM Compiler Infrastructure, release 2.7. Here we
describe the status of LLVM, including major improvements from the previous release and significant known
problems. All LLVM releases may be downloaded from the LLVM releases web site.
For more information about LLVM, including information about the latest release, please check out the main
LLVM web site. If you have questions or comments, the LLVM Developer's Mailing List is a good place to
send them.
Note that if you are reading this file from a Subversion checkout or the main LLVM web page, this document
applies to the next release, not the current one. To see the release notes for a specific release, please see the
releases page.
In the LLVM 2.7 time-frame, the Clang team has made many improvements:
• C++ Support: Clang is now capable of self-hosting! While still alpha-quality, Clang's C++ support
has matured enough to build LLVM and Clang, and C++ is now enabled by default. See the Clang
C++ compatibility page for common C++ migration issues.
• Objective-C: Clang now includes experimental support for an updated Objective-C ABI on
non-Darwin platforms. This includes support for non-fragile instance variables and accelerated
proxies, as well as greater potential for future optimisations. The new ABI is used when compiling
with the -fobjc-nonfragile-abi and -fgnu-runtime options. Code compiled with these options may be
mixed with code compiled with GCC or clang using the old GNU ABI, but requires the libobjc2
runtime from the GNUstep project.
• New warnings: Clang contains a number of new warnings, including control-flow warnings
(unreachable code, missing return statements in a non-void function, etc.), sign-comparison
warnings, and improved format-string warnings.
• CIndex API and Python bindings: Clang now includes a C API as part of the CIndex library.
Although we may make some changes to the API in the future, it is intended to be stable and has been
designed for use by external projects. See the Clang doxygen CIndex documentation for more details.
The CIndex API also includes a preliminary set of Python bindings.
• ARM Support: Clang now has ABI support for both the Darwin and Linux ARM ABIs. Coupled with
many improvements to the LLVM ARM backend, Clang is now suitable for use as a beta quality
ARM compiler.
In the LLVM 2.7 time-frame, the analyzer core has made several major and minor improvements, including
better support for tracking the fields of structures, initial support (not enabled by default yet) for doing
interprocedural (cross-function) analysis, and new checks have been added.
With the release of LLVM 2.7, VMKit has shifted to a great framework for writing virtual machines. VMKit
now offers precise and efficient garbage collection with multi-threading support, thanks to the MMTk
memory management toolkit, as well as just in time and ahead of time compilation with LLVM. The major
changes in VMKit 0.27 are:
• Garbage collection: VMKit now uses the MMTk toolkit for garbage collectors. The first collector to
be ported is the MarkSweep collector, which is precise, and drastically improves the performance of
VMKit.
• Line number information in the JVM: by using the debug metadata of LLVM, the JVM now supports
precise line number information, useful when printing a stack trace.
• Interface calls in the JVM: we implemented a variant of the Interface Method Table technique for
interface calls in the JVM.
other low-level routines (some are 3x faster than the equivalent libgcc routines).
All of the code in the compiler-rt project is available under the standard LLVM License, a "BSD-style"
license. New in LLVM 2.7: compiler_rt now supports ARM targets.
DragonEgg is still a work in progress. Currently C works very well, while C++, Ada and Fortran work fairly
well. All other languages either don't work at all, or only work poorly. For the moment only the x86-32 and
x86-64 targets are supported, and only on linux and darwin (darwin needs an additional gcc patch).
DragonEgg is a new project which is seeing its first release with llvm-2.7.
2.7 includes major parts of the work required by the new MC Project. A few targets have been refactored to
support it, and work is underway to support a native assembler in LLVM. This work is not complete in LLVM
2.7, but it has made substantially more progress on LLVM mainline.
One minor example of what MC can do is to transcode an AT&T syntax X86 .s file into intel syntax. You can
do this with something like:

% llvm-mc foo.s -output-asm-variant=1 -o foo-intel.s
Pure
Pure is an algebraic/functional programming language based on term rewriting. Programs are collections of
equations which are used to evaluate expressions in a symbolic fashion. Pure offers dynamic typing, eager and
lazy evaluation, lexical closures, a hygienic macro system (also based on term rewriting), built-in list and
matrix support (including list and matrix comprehensions) and an easy-to-use C interface. The interpreter uses
LLVM as a backend to JIT-compile Pure programs to fast native code.
Pure versions 0.43 and later have been tested and are known to work with LLVM 2.7 (and continue to work
with older LLVM releases >= 2.5).
Roadsend PHP
Roadsend PHP (rphp) is an open source implementation of the PHP programming language that uses LLVM
for its optimizer, JIT and static compiler. This is a reimplementation of an earlier project that is now based on
LLVM.
Unladen Swallow
Unladen Swallow is a branch of Python intended to be fully compatible and significantly faster. It uses
LLVM's optimization passes and JIT compiler.
TCE uses llvm-gcc/Clang and LLVM for C/C++ language support, target independent optimizations and also
for parts of code generation. It generates new LLVM-based code generators "on the fly" for the designed TTA
processors and loads them in to the compiler backend as runtime libraries to avoid per-target recompilation of
larger parts of the compiler chain.
SAFECode Compiler
SAFECode is a memory safe C compiler built using LLVM. It takes standard, unannotated C code, analyzes
the code to ensure that memory accesses and array indexing operations are safe, and instruments the code with
run-time checks when safety cannot be proven statically.
Icedtea6 1.8 and later have been tested and are known to work with LLVM 2.7 (and continue to work with
older LLVM releases >= 2.6 as well).
LLVM-Lua
LLVM-Lua uses LLVM to add JIT and static compiling support to the Lua VM. Lua bytecode is analyzed to
remove type checks, then LLVM is used to compile the bytecode down to machine code.
LLVM-Lua 1.2.0 has been tested and is known to work with LLVM 2.7.
MacRuby
MacRuby is an implementation of Ruby based on core Mac OS technologies, sponsored by Apple Inc. It uses
LLVM at runtime for optimization passes, JIT compilation and exception handling. It also allows static
(ahead-of-time) compilation of Ruby code straight to machine code.
In addition to the existing C and native code generators, GHC now supports an LLVM code generator. GHC
supports LLVM 2.7.
• 2.7 includes initial support for the MicroBlaze target. MicroBlaze is a soft processor core designed for
Xilinx FPGAs.
• 2.7 includes a new LLVM IR "extensible metadata" feature. This feature supports many different use
cases, including allowing front-end authors to encode source level information into LLVM IR, which
is consumed by later language-specific passes. This is a great way to do high-level optimizations like
devirtualization, type-based alias analysis, etc. See the Extensible Metadata Blog Post for more
information.
• 2.7 encodes debug information in a completely new way, built on extensible metadata. The new
implementation is much more memory efficient and paves the way for improvements to optimized
code debugging experience.
• 2.7 now directly supports taking the address of a label and doing an indirect branch through a pointer.
This is particularly useful for interpreter loops, and is used to implement the GCC "address of label"
extension. For more information, see the Address of Label and Indirect Branches in LLVM IR Blog
Post.
• 2.7 is the first release to start supporting APIs for assembling and disassembling target machine code.
These APIs are useful for a variety of low level clients, and are surfaced in the new "enhanced
disassembly" API. For more information, see the X86 Disassembler Blog Post.
• 2.7 includes major parts of the work required by the new MC Project, see the MC update above for
more information.
• LLVM IR now supports a 16-bit "half float" data type through two new intrinsics and APFloat
support.
• LLVM IR supports two new function attributes: inlinehint and alignstack(n). The former is a hint to
the optimizer that a function was declared 'inline' and thus the inliner should weight it higher when
considering inlining it. The latter indicates to the code generator that the function diverges from the
platform ABI on stack alignment.
• The new llvm.objectsize intrinsic allows the optimizer to infer the sizes of memory objects in some
cases. This intrinsic is used to implement the GCC __builtin_object_size extension.
• LLVM IR now supports marking load and store instructions with "non-temporal" hints (building on
the new metadata feature). This hint encourages the code generator to generate non-temporal accesses
when possible, which are useful for code that is carefully managing cache behavior. Currently, only
the X86 backend provides target support for this feature.
• LLVM 2.7 has pre-alpha support for unions in LLVM IR. Unfortunately, this support is not really
usable in 2.7, so if you're interested in pushing it forward, please help contribute to LLVM mainline.
Optimizer Improvements
In addition to a large array of minor performance tweaks and bug fixes, this release includes a few major
enhancements and additions to the optimizers:
• The inliner now reuses and merges stack objects from different callees when inlining multiple call
sites into one function. This reduces the stack size of the resultant function.
• The -basicaa alias analysis pass (which is the default) has been improved to be less dependent on
"type safe" pointers. It can now look through bitcasts and other constructs more aggressively,
allowing better load/store optimization.
• The load elimination optimization in the GVN Pass [intro blog post] has been substantially improved
to be more aggressive about partial redundancy elimination and do more aggressive phi translation.
Please see the Advanced Topics in Redundant Load Elimination with a Focus on PHI Translation
Blog Post for more details.
• The module target data string now includes a notion of 'native' integer data types for the target. This
helps mid-level optimizations avoid promoting complex sequences of operations to data types that are
not natively supported (e.g. converting i32 operations to i64 on 32-bit chips).
• The mid-level optimizer is now conservative when operating on a module with no target data.
Previously, it would default to SparcV9 settings, which is not what most people expected.
• Jump threading is now much more aggressive at simplifying correlated conditionals and threading
blocks with otherwise complex logic. It has subsumed the old "Conditional Propagation" pass, and
-condprop has been removed from LLVM 2.7.
• The -instcombine pass has been refactored from being one huge file to being a library of its own.
Internally, it uses a customized IRBuilder to clean it up and simplify it.
• The optimal edge profiling pass is reliable and much more complete than in 2.6. It can be used with
the llvm-prof tool but isn't wired up to the llvm-gcc and clang command line options yet.
• A new experimental alias analysis implementation, -scev-aa, has been added. It uses LLVM's Scalar
Evolution implementation to do symbolic analysis of pointer offset expressions to disambiguate
pointers. It can catch a few cases that basicaa cannot, particularly in complex loop nests.
• The default pass ordering has been tweaked for improved optimization effectiveness.
• The JIT now supports generating debug information and is compatible with the new GDB 7.0 (and
later) interfaces for registering dynamically generated debug info.
• The JIT now defaults to compiling eagerly to avoid a race condition in the lazy JIT. Clients that still
want the lazy JIT can switch it on by calling
ExecutionEngine::DisableLazyCompilation(false).
• It is now possible to create more than one JIT instance in the same process. These JITs can generate
machine code in parallel, although you still have to obey the other threading restrictions.
• The 'llc -asm-verbose' option (which is now the default) has been enhanced to emit many useful
comments to .s files indicating information about spill slots and loop nest structure. This should make
it much easier to read and understand assembly files. This is wired up in llvm-gcc and clang to the
-fverbose-asm option.
• New LSR with "full strength reduction" mode, which can reduce address register pressure in loops
where address generation is important.
• A new codegen level Common Subexpression Elimination pass (MachineCSE) is available and
enabled by default. It catches redundancies exposed by lowering.
• A new pre-register-allocation tail duplication pass is available and enabled by default. It can
substantially improve branch prediction quality in some cases.
• A new sign and zero extension optimization pass (OptimizeExtsPass) is available and enabled by
default. This pass takes advantage of architecture features like the x86-64 implicit zero extension
behavior and sub-registers.
• The code generator now supports a mode where it attempts to preserve the order of instructions in the
input code. This is important for source that is hand scheduled and extremely sensitive to scheduling.
It is compatible with the GCC -fno-schedule-insns option.
• The target-independent code generator now supports generating code with arbitrary numbers of result
values. Returning more values than was previously supported is handled by returning through a
hidden pointer. In 2.7, only the X86 and XCore targets have adopted support for this though.
• The code generator now supports generating code that follows the Glasgow Haskell Compiler Calling
Convention and ABI.
• The "DAG instruction selection" phase of the code generator has been largely rewritten for 2.7.
Previously, tblgen spit out tons of C++ code which was compiled and linked into the target to do the
pattern matching; now it emits a much smaller table which is read by the target-independent code. The
primary advantage of this approach is that the size and compile time of various targets are much
improved. The X86 code generator shrunk by 1.5MB of code, for example.
• Almost the entire code generator has switched to emitting code through the MC interfaces instead of
printing textually to the .s file. This led to a number of cleanups and speedups. In 2.7, debug and
exception handling information does not go through MC yet.
• The X86 backend now optimizes tail calls much more aggressively for functions that use the
standard C calling convention.
• The X86 backend now models scalar SSE registers as subregs of the SSE vector registers, making the
code generator more aggressive in cases where scalars and vector types are mixed.
• llvm-gcc now has complete support for the ARM v7 NEON instruction set. This support differs
slightly from the GCC implementation. Please see the ARM Advanced SIMD (NEON) Intrinsics and
Types in LLVM Blog Post for helpful information if migrating code from GCC to LLVM-GCC.
• The ARM and Thumb code generators now use register scavenging for stack object address
materialization. This allows R3 to be used as a general purpose register in Thumb1 code, where it was
previously reserved for stack address materialization. Additionally, sequential uses of the same value
now re-use the materialized constant.
• The ARM backend now has good support for ARMv4 targets and has been tested on StrongARM
hardware. Previously, LLVM only supported ARMv4T and newer chips.
• Atomic builtins are now supported for ARMv6 and ARMv7 (__sync_synchronize,
__sync_fetch_and_add, etc.).
• The optimizer uses the new CodeMetrics class to measure the size of code. Various passes (like the
inliner, loop unswitcher, etc) all use this to make more accurate estimates of the code size impact of
various optimizations.
• A new llvm/Analysis/InstructionSimplify.h interface is available for doing symbolic simplification of
instructions (e.g. a+0 -> a) without requiring the instruction to exist. This centralizes a lot of ad-hoc
symbolic manipulation code scattered in various passes.
• The optimizer now uses a new SSAUpdater class which efficiently supports unstructured SSA
update operations. This centralizes a bunch of code that was scattered throughout various passes (e.g. jump
threading, LCSSA, loop rotate, etc.) for doing this sort of thing. The code generator has a similar
MachineSSAUpdater class.
• The llvm/Support/Regex.h header exposes a platform-independent regular expression API. Building
on this, the FileCheck utility now supports regular expressions.
• raw_ostream now supports a circular "debug stream" accessed with "dbgs()". By default, this stream
works the same way as "errs()", but if you pass -debug-buffer-size=1000 to opt, the debug
stream is capped to a fixed-size circular buffer and the output is printed at the end of the program's
execution. This is helpful if you have a long-lived compiler process and you're interested in seeing
snapshots in time.
• You can now build LLVM as a big dynamic library (e.g. "libllvm2.7.so"). To get this, configure
LLVM with the --enable-shared option.
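For example, a configure invocation of this general shape (a sketch only; the install prefix is a placeholder):

```shell
# Build LLVM as one big shared library (illustrative; prefix is a placeholder):
./configure --enable-shared --prefix=/usr/local
make
```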
• LLVM command line tools now overwrite their output by default. Previously, they would only do this
with -f. This makes them more convenient to use and behave more like standard Unix tools.
• The opt and llc tools now autodetect whether their input is a .ll or .bc file, and automatically do the
right thing. This means you don't need to explicitly use the llvm-as tool for most things.
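As a sketch of the autodetection (the file names are placeholders):

```shell
# Both tools now accept .ll or .bc input directly (illustrative):
opt -std-compile-opts foo.ll -o foo.opt.bc
llc foo.opt.bc -o foo.s
```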
• The Andersen's alias analysis ("anders-aa") pass, the Predicate Simplifier ("predsimplify") pass, the
LoopVR pass, the GVNPRE pass, and the random sampling profiling ("rsprofiling") passes have all
been removed. They were not being actively maintained and had substantial problems. If you are
interested in these components, you are welcome to resurrect them from SVN, fix the correctness
problems, and resubmit them to mainline.
• LLVM now defaults to building most libraries with RTTI turned off, providing a code size reduction.
Packagers who are interested in building LLVM to support plugins that require RTTI information
should build with "make REQUIRE_RTTI=1" and should read the new Advice on Packaging LLVM
document.
• The LLVM interpreter now defaults to not using libffi even if you have it installed. This makes it
more likely that an LLVM built on one system will work when copied to a similar system. To use
libffi, configure with --enable-libffi.
• Debug information uses a completely different representation. An LLVM 2.6 .bc file should work with
LLVM 2.7, but debug info won't come forward.
• The LLVM 2.6 (and earlier) "malloc" and "free" instructions were removed, along with the
LowerAllocations pass. Now you should just use calls to the malloc and free functions in libc. These
calls are optimized as well as the old instructions were.
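A minimal sketch of the replacement in LLVM assembly (illustrative only; declarations shown for a 32-bit target):

```llvm
; Instead of the removed "malloc"/"free" instructions, emit plain libc calls:
declare i8* @malloc(i32)
declare void @free(i8*)

define void @example() {
  %p = call i8* @malloc(i32 64)
  call void @free(i8* %p)
  ret void
}
```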
In addition, many APIs have changed in this release. Some of the major LLVM API changes are:
• Just about everything has been converted to use raw_ostream instead of std::ostream.
• llvm/ADT/iterator.h has been removed, just use <iterator> instead.
• The Streams.h file and DOUT got removed, use DEBUG(errs() << ...); instead.
• The TargetAsmInfo interface was renamed to MCAsmInfo.
• ModuleProvider has been removed and its methods moved to Module and GlobalValue.
Most clients can remove uses of ExistingModuleProvider, replace
getBitcodeModuleProvider with getLazyBitcodeModule, and pass their Module to
functions that used to accept ModuleProvider. Clients who wrote their own ModuleProviders
will need to derive from GVMaterializer instead and use Module::setMaterializer to
attach it to a Module.
• GhostLinkage has given up the ghost. GlobalValues that have not yet been read from their
backing storage have the same linkage they will have after being read in. Clients must replace calls to
GlobalValue::hasNotBeenReadFromBitcode with
GlobalValue::isMaterializable.
• The isInteger, isIntOrIntVector, isFloatingPoint and isFPOrFPVector
methods have been renamed isIntegerTy, isIntOrIntVectorTy,
isFloatingPointTy and isFPOrFPVectorTy respectively.
• llvm::Instruction::clone() no longer takes an argument.
• raw_fd_ostream's constructor now takes a flag argument, not individual booleans (see
include/llvm/Support/raw_ostream.h for details).
• Some header files have been renamed:
♦ llvm/Support/AIXDataTypesFix.h to llvm/System/AIXDataTypesFix.h
♦ llvm/Support/DataTypes.h to llvm/System/DataTypes.h
♦ llvm/Transforms/Utils/InlineCost.h to llvm/Analysis/InlineCost.h
♦ llvm/Support/Mangler.h to llvm/Target/Mangler.h
♦ llvm/Analysis/Passes.h to llvm/CodeGen/Passes.h
• Intel and AMD machines (IA32, X86-64, AMD64, EMT-64) running Red Hat Linux, Fedora Core,
FreeBSD and AuroraUX (and probably other unix-like systems).
• PowerPC and X86-based Mac OS X systems, running 10.4 and above in 32-bit and 64-bit modes.
• Intel and AMD machines running on Win32 using MinGW libraries (native).
• Intel and AMD machines running on Win32 with the Cygwin libraries (limited support is available
for native builds with Visual C++).
• Sun x86 and AMD64 machines running Solaris 10, OpenSolaris 0906.
• Alpha-based machines running Debian GNU/Linux.
The core LLVM infrastructure uses GNU autoconf to adapt itself to the machine and operating system on
which it is built. However, minor porting may be required to get LLVM to work on new platforms. We
welcome your portability patches and reports of successful builds or error messages.
Known Problems
This section contains significant known problems with the LLVM system, listed by component. If you run
into a problem, please check the LLVM bug database and submit a bug if there isn't already one.
• LLVM will not correctly compile on Solaris and/or OpenSolaris using the stock GCC 3.x.x series 'out
of the box'; see Broken versions of GCC and other tools. However, A Modern GCC Build for
x86/x86-64 has been made available by the third-party AuroraUX Project and has been
meticulously tested for bootstrapping LLVM & Clang.
• The MSIL, Alpha, SPU, MIPS, PIC16, Blackfin, MSP430, SystemZ and MicroBlaze backends are
experimental.
• llc "-filetype=asm" (the default) is the only supported value for this option. The MachO writer
is experimental, and works much better in mainline SVN.
• The X86 backend does not yet support all inline assembly that uses the X86 floating point stack. It
supports the 'f' and 't' constraints, but not 'u'.
• The X86 backend generates inefficient floating point code when configured to generate code for
systems that don't have SSE2.
• Win64 code generation has not been widely tested. Everything should work, but we expect small issues to
arise. Also, llvm-gcc cannot currently build the mingw64 runtime due to lack of support for the 'u'
inline assembly constraint and for X87 floating point inline assembly.
• The X86-64 backend does not yet support the LLVM IR instruction va_arg. Currently, front-ends
support variadic argument constructs on X86-64 by lowering them manually.
• The Linux PPC32/ABI support needs testing for the interpreter and static compilation, and lacks
support for debug information.
• Thumb mode works only on ARMv6 or higher processors. On pre-ARMv6 processors, Thumb
programs can crash or produce wrong results (PR1388).
• Compilation for ARM Linux OABI (old ABI) is supported but not fully tested.
• The SPARC backend only supports the 32-bit SPARC ABI (-m32); it does not support the 64-bit
SPARC ABI (-m64).
• On 21164s, some rare FP arithmetic sequences which may trap do not have the appropriate nops
inserted to ensure restartability.
• The C backend has only basic support for inline assembly code.
• The C backend violates the ABI of common C++ programs, preventing intermixing between C++
compiled by the CBE and C++ code compiled with llc or native compilers.
• The C backend does not support all exception handling constructs.
• The C backend does not support arbitrary precision integers.
• Fortran support generally works, but there are still several unresolved bugs in Bugzilla. Please see the
tools/gfortran component for details.
• The Ada front-end currently only builds on X86-32. This is mainly due to lack of trampoline support
(pointers to nested functions) on other platforms. However, it also fails to build on X86-64 which
does support trampolines.
• The Ada front-end fails to bootstrap. This is due to lack of LLVM support for setjmp/longjmp
style exception handling, which is used internally by the compiler. Workaround: configure with
--disable-bootstrap.
• The c380004, c393010 and cxg2021 ACATS tests fail (c380004 also fails with gcc-4.2 mainline). If
the compiler is built with checks disabled then c393010 causes the compiler to go into an infinite
loop, using up all system memory.
• Some GCC specific Ada tests continue to crash the compiler.
• The -E binder option (exception backtraces) does not work and will result in programs crashing if an
exception is raised. Workaround: do not use -E.
• Only discrete types are allowed to start or finish at a non-byte offset in a record. Workaround: do not
pack records or use representation clauses that result in a field of a non-discrete type starting or
finishing in the middle of a byte.
• The lli interpreter considers 'main' as generated by the Ada binder to be invalid. Workaround: hand
edit the file to use pointers for argv and envp rather than integers.
• The -fstack-check option is ignored.
Additional Information
A wide variety of additional information is available on the LLVM web page, in particular in the
documentation section. The web page also contains versions of the API documentation which are up-to-date
with the Subversion version of the source code. You can access versions of these documents specific to this
release by going into the "llvm/doc/" directory in the LLVM tree.
If you have any questions or comments about LLVM, please feel free to contact us via the mailing lists.
How to submit an LLVM bug report
Basically you have to do two things at a minimum. First, decide whether the bug crashes the compiler (or an
LLVM pass), or if the compiler is miscompiling the program (i.e., the compiler successfully produces an
executable, but it doesn't run right). Based on what type of bug it is, follow the instructions in the linked
section to narrow down the bug so that the person who fixes it will be able to find the problem more easily.
Once you have a reduced test-case, go to the LLVM Bug Tracking System and fill out the form with the
necessary details (note that you don't need to pick a category, just use the "new-bugs" category if you're not
sure). The bug description should contain the following information:
Crashing Bugs
More often than not, bugs in the compiler cause it to crash—often due to an assertion failure of some sort. The
most important piece of the puzzle is to figure out if it is crashing in the GCC front-end or if it is one of the
LLVM libraries (e.g. the optimizer or code generator) that has problems.
To figure out which component is crashing (the front-end, optimizer or code generator), run the llvm-gcc
command line as you were when the crash occurred, but with the following extra command line options:
• -O0 -emit-llvm: If llvm-gcc still crashes when passed these options (which disable the
optimizer and code generator), then the crash is in the front-end. Jump ahead to the section on
front-end bugs.
• -emit-llvm: If llvm-gcc crashes with this option (which disables the code generator), you
found an optimizer bug. Jump ahead to compile-time optimization bugs.
• Otherwise, you have a code generator crash. Jump ahead to code generator bugs.
Front-end bugs
If the problem is in the front-end, you should re-run the same llvm-gcc command that resulted in the crash,
but add the -save-temps option. The compiler will crash again, but it will leave behind a foo.i file
(containing preprocessed C source code) and possibly foo.s for each compiled foo.c file. Send us the
foo.i file, along with the options you passed to llvm-gcc, and a brief description of the error it caused.
The delta tool helps to reduce the preprocessed file down to the smallest amount of code that still replicates
the problem. You're encouraged to use delta to reduce the code to make the developers' lives easier. This
website has instructions on the best way to use delta.
This command should do two things: it should print out a list of passes, and then it should crash in the same
way as llvm-gcc. If it doesn't crash, please follow the instructions for a front-end bug.
If this does crash, then you should be able to debug this with the following bugpoint command:
Please run this, then file a bug with the instructions and reduced .bc files that bugpoint emits. If something
goes wrong with bugpoint, please submit the "foo.bc" file and the list of passes printed by opt.
1. llc foo.bc
2. llc foo.bc -relocation-model=pic
3. llc foo.bc -relocation-model=static
4. llc foo.bc -enable-eh
5. llc foo.bc -relocation-model=pic -enable-eh
6. llc foo.bc -relocation-model=static -enable-eh
If none of these crash, please follow the instructions for a front-end bug. If one of these does crash, you should
be able to reduce the problem with one of the following bugpoint command lines (use the one corresponding to the
command above that failed):
1. bugpoint -run-llc foo.bc
2. bugpoint -run-llc foo.bc --tool-args -relocation-model=pic
3. bugpoint -run-llc foo.bc --tool-args -relocation-model=static
4. bugpoint -run-llc foo.bc --tool-args -enable-eh
5. bugpoint -run-llc foo.bc --tool-args -relocation-model=pic -enable-eh
6. bugpoint -run-llc foo.bc --tool-args -relocation-model=static -enable-eh
Please run this, then file a bug with the instructions and reduced .bc file that bugpoint emits. If something
goes wrong with bugpoint, please submit the "foo.bc" file and the option that llc crashes with.
Miscompilations
If llvm-gcc successfully produces an executable, but that executable doesn't run right, this is either a bug in
the code or a bug in the compiler. The first thing to check is that the program is not relying on undefined behavior
(e.g. reading a variable before it is defined). In particular, check whether the program is clean under Valgrind,
Purify, or some other memory checker tool. Many of the "LLVM bugs" that we have chased down ended up
being bugs in the program being compiled, not LLVM.
Once you determine that the program itself is not buggy, you should choose which code generator you wish to
compile the program with (e.g. C backend, the JIT, or LLC) and optionally a series of LLVM passes to run.
For example:
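The example command was lost in conversion; a hedged reconstruction, assuming the -run-llc code generator choice and placeholder pass and program arguments, looks like:

```shell
% bugpoint -run-llc [... optimizer passes ...] foo.bc --args -- [program arguments]
```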
bugpoint will try to narrow down your list of passes to the one pass that causes an error, and simplify the
bitcode file as much as it can to assist you. It will print a message letting you know how to reproduce the
resulting error.
Special note: if you are debugging MultiSource or SPEC tests that already exist in the llvm/test
hierarchy, there is an easier way to debug the JIT, LLC, and CBE, using the pre-written Makefile targets,
which will pass the program options specified in the Makefiles:
cd llvm/test/../../program
make bugpoint-jit
At the end of a successful bugpoint run, you will be presented with two bitcode files: a safe file which can
be compiled with the C backend and the test file which either LLC or the JIT mis-codegenerates.
Chris Lattner
The LLVM Compiler Infrastructure
Last modified: $Date: 2009-10-12 09:46:08 -0500 (Mon, 12 Oct 2009) $
LLVM Testing Infrastructure Guide
1. Overview
2. Requirements
3. LLVM testing infrastructure organization
♦ DejaGNU tests
♦ Test suite
4. Quick start
♦ DejaGNU tests
♦ Test suite
5. DejaGNU structure
♦ Writing new DejaGNU tests
♦ The FileCheck utility
♦ Variables and substitutions
♦ Other features
6. Test suite structure
7. Running the test suite
♦ Configuring External Tests
♦ Running different tests
♦ Generating test output
♦ Writing custom tests for llvm-test
8. Running the nightly tester
Overview
This document is the reference manual for the LLVM testing infrastructure. It documents the structure of the
LLVM testing infrastructure, the tools needed to use it, and how to add and run tests.
Requirements
In order to use the LLVM testing infrastructure, you will need all of the software required to build LLVM,
plus the following:
DejaGNU
The Feature and Regressions tests are organized and run by DejaGNU.
Expect
Expect is required by DejaGNU.
tcl
Tcl is required by DejaGNU.
DejaGNU tests
Code fragments are small pieces of code that test a specific feature of LLVM or trigger a specific bug in
LLVM. They are usually written in LLVM assembly language, but can be written in other languages if the test
targets a particular language front end (and the appropriate --with-llvmgcc options were used at
configure time of the llvm module). These tests are driven by the DejaGNU testing framework.
These code fragments are not complete programs. The code generated from them is never executed to
determine correct behavior.
Typically when a bug is found in LLVM, a regression test containing just enough code to reproduce the
problem should be written and placed somewhere underneath this directory. In most cases, this will be a small
piece of LLVM assembly language code, often distilled from an actual application or benchmark.
Test suite
The test suite contains whole programs, which are pieces of code which can be compiled and linked into a
stand-alone program that can be executed. These programs are generally written in high level languages such
as C or C++, but sometimes they are written straight in LLVM assembly.
These programs are compiled and then executed using several different methods (native compiler, LLVM C
backend, LLVM JIT, LLVM native code generation, etc). The output of these programs is compared to ensure
that LLVM is compiling the program correctly.
In addition to compiling and executing programs, whole program tests serve as a way of benchmarking
LLVM performance, both in terms of the efficiency of the programs generated as well as the speed with
which LLVM compiles, optimizes, and generates code.
Quick start
The tests are located in two separate Subversion modules. The DejaGNU tests are in the main "llvm" module
under the directory llvm/test (so you get these tests for free with the main llvm tree). The more
comprehensive test suite that includes whole programs in C and C++ is in the test-suite module. This
module should be checked out into the llvm/projects directory under its default name "test-suite"; if you
check it out under another name, the test suite will be run every time you run make in the main llvm directory.
When you configure the llvm module, the test-suite directory will be automatically configured.
Alternatively, you can configure the test-suite module manually.
DejaGNU tests
To run all of the simple tests in LLVM using DejaGNU, use the master Makefile in the llvm/test
directory:
% gmake -C llvm/test
or
% gmake check
To run only a subdirectory of tests in llvm/test using DejaGNU (e.g. Transforms), just set the TESTSUITE
variable to the path of the subdirectory (relative to llvm/test):
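For example (reconstructed; Transforms is the subdirectory named above):

```shell
% gmake -C llvm/test TESTSUITE=Transforms
```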
Note: If you are running the tests with objdir != srcdir, you must have run the complete test suite
before you can specify a subdirectory.
To run only a single test, set TESTONE to its path (relative to llvm/test) and make the check-one
target:
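For example (reconstructed; the test path is illustrative):

```shell
% gmake check-one TESTONE=Feature/basictest.ll
```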
To run the tests with Valgrind (Memcheck by default), just append VG=1 to the commands above, e.g.:
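For instance (illustrative):

```shell
% gmake check VG=1
```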
Test suite
To run the comprehensive test suite (tests that compile and execute whole programs), first checkout and setup
the test-suite module:
% cd llvm/projects
% svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
% cd ..
% ./configure --with-llvmgccdir=$LLVM_GCC_DIR
where $LLVM_GCC_DIR is the directory where you installed llvm-gcc, not its src or obj dir. The
--with-llvmgccdir option assumes that the llvm-gcc-4.2 module was configured with
--program-prefix=llvm-, and therefore that the C and C++ compiler drivers are called llvm-gcc
and llvm-g++ respectively. If this is not the case, use --with-llvmgcc/--with-llvmgxx to specify
each executable's location.
Then, run the entire test suite by running make in the test-suite directory:
% cd projects/test-suite
% gmake
Usually, running the "nightly" set of tests is a good idea, and you can also let it generate a report by running:
% cd projects/test-suite
% gmake TEST=nightly report report.html
Any of the above commands can also be run in a subdirectory of projects/test-suite to run the
specified test only on the programs in that subdirectory.
DejaGNU structure
The LLVM DejaGNU tests are driven by DejaGNU together with GNU Make and are located in the
llvm/test directory.
This directory contains a large array of small tests that exercise various features of LLVM and ensure that
regressions do not occur. The directory is broken into several sub-directories, each focused on a particular
area of LLVM. A few of the important ones are:
In order for DejaGNU to work, each directory of tests must have a dg.exp file. DejaGNU looks for this file
to determine how to run the tests. This file is just a Tcl script and it can do anything you want, but we've
standardized it for the LLVM regression tests. If you're adding a directory of tests, just copy dg.exp from
another directory to get running. The standard dg.exp simply loads a Tcl library (test/lib/llvm.exp)
and calls the llvm_runtests function defined in that library with a list of file names to run. The names are
obtained by using Tcl's glob command. Any directory that contains only directories does not need the
dg.exp file.
The llvm_runtests function looks at each file that is passed to it and gathers together any lines that
match "RUN:". These are the "RUN" lines that specify how the test is to be run. So, each test script must
contain RUN lines if it is to do anything. If there are no RUN lines, the llvm_runtests function will issue
an error and the test will fail.
RUN lines are specified in the comments of the test program using the keyword RUN followed by a colon, and
lastly the command (pipeline) to execute. Together, these lines form the "script" that llvm_runtests
executes to run the test case. The syntax of the RUN lines is similar to a shell's syntax for pipelines including
I/O redirection and variable substitution. However, even though these lines may look like a shell script, they
are not. RUN lines are interpreted directly by the Tcl exec command. They are never executed by a shell.
Consequently the syntax differs from normal shell script syntax in a few ways. You can specify as many RUN
lines as needed.
Each RUN line is executed on its own, distinct from other lines unless its last character is \. This continuation
character causes the RUN line to be concatenated with the next one. In this way you can build up long
pipelines of commands without making huge line lengths. The lines ending in \ are concatenated until a RUN
line that doesn't end in \ is found. This concatenated set of RUN lines then constitutes one execution. Tcl will
substitute variables and arrange for the pipeline to be executed. If any process in the pipeline fails, the entire
line (and test case) fails too.
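A sketch of a continued RUN line (the pipeline shown is illustrative, not taken from a real test):

```llvm
; RUN: llvm-as < %s | opt -instcombine | \
; RUN:   llvm-dis | grep {add i32}
```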
As with a Unix shell, the RUN: lines permit pipelines and I/O redirection to be used. However, the usage is
slightly different than for Bash. To check what's legal, see the documentation for the Tcl exec command and
the tutorial. The major differences are:
• You can't do 2>&1. That will cause Tcl to write to a file named &1. Usually this is done to get stderr
to go through a pipe. You can do that in Tcl with |&, so replace this idiom: ... 2>&1 | grep with
... |& grep
• You can only redirect to a file, not to another descriptor and not from a here document.
• Tcl supports redirecting to open files with the @ syntax, but you shouldn't use that here.
There are some quoting rules that you must pay attention to when writing your RUN lines. In general nothing
needs to be quoted. Tcl won't strip off any ' or " so they will get passed to the invoked program. For example:
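The elided example, reconstructed for illustration (the leading "..." stands for the rest of the pipeline):

```llvm
; RUN: ... | grep 'find this string'
```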
This will fail because the ' characters are passed to grep. This would instruct grep to look for 'find in the
files this and string'. To avoid this, use curly braces to tell Tcl that it should treat everything enclosed as
one value. So our example would become:
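Reconstructed for illustration:

```llvm
; RUN: ... | grep {find this string}
```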
Additionally, the characters [ and ] are treated specially by Tcl. They tell Tcl to interpret the content as a
command to execute. Since these characters are often used in regular expressions this can have disastrous
results and cause the entire test run in a directory to fail. For example, a common idiom is to look for some
basicblock number:
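Reconstructed for illustration (an unescaped character class):

```llvm
; RUN: ... | grep bb[2-8]
```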
This, however, will cause Tcl to fail because it's going to try to execute a program named "2-8". Instead, what
you want is this:
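Reconstructed for illustration (the brackets escaped for Tcl):

```llvm
; RUN: ... | grep {bb\[2-8\]}
```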
Finally, if you need to pass the \ character down to a program, then it must be doubled. This is another Tcl
special character. So, suppose you had:
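Reconstructed for illustration:

```llvm
; RUN: ... | grep 'i32\*'
```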
This will fail to match what you want (a pointer to i32). First, the ' characters do not get stripped off. Second, the \ gets
stripped off by Tcl, so what grep sees is: 'i32*'. That's not likely to match anything. To resolve this you
must use \\ and the {}, like this:
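Reconstructed for illustration:

```llvm
; RUN: ... | grep {i32\\*}
```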
If your system includes GNU grep, make sure that GREP_OPTIONS is not set in your environment.
Otherwise, you may get invalid results (both false positives and false negatives).
FileCheck (whose basic command line arguments are described in the FileCheck man page) is designed to read
a file to check from standard input, and the set of things to verify from a file specified as a command line
argument. A simple example of using FileCheck from a RUN line looks like this:
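The example line itself was lost in conversion; reconstructed from the description that follows:

```llvm
; RUN: llvm-as < %s | llc -march=x86 | FileCheck %s
```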
This syntax says to pipe the current file ("%s") into llvm-as, pipe that into llc, then pipe the output of llc into
FileCheck. This means that FileCheck will be verifying its standard input (the llc output) against the filename
argument specified (the original .ll file specified by "%s"). To see how this works, let's look at the rest of the
.ll file (after the RUN line):
Here you can see some "CHECK:" lines specified in comments. Now you can see how the file is piped into
llvm-as, then llc, and the machine code output is what we are verifying. FileCheck checks the machine code
output to verify that it matches what the "CHECK:" lines specify.
The syntax of the CHECK: lines is very simple: they are fixed strings that must occur in order. FileCheck
defaults to ignoring horizontal whitespace differences (e.g. a space is allowed to match a tab) but otherwise,
the contents of the CHECK: line are required to match something in the test file exactly.
One nice thing about FileCheck (compared to grep) is that it allows merging test cases together into logical
groups. For example, because the test above is checking for the "sub1:" and "inc4:" labels, it will not match
unless there is a "subl" in between those labels. If it existed somewhere else in the file, that would not count:
"grep subl" matches if subl exists anywhere in the file.
; X64: pinsrd_1:
; X64: pinsrd $1, %edi, %xmm0
}
In this case, we're testing that we get the expected code generation with both 32-bit and 64-bit code
generation.
define void @t2(<2 x double>* %r, <2 x double>* %A, double %B) {
%tmp3 = load <2 x double>* %A, align 16
%tmp7 = insertelement <2 x double> undef, double %B, i32 0
%tmp9 = shufflevector <2 x double> %tmp3,
<2 x double> %tmp7,
<2 x i32> < i32 0, i32 2 >
store <2 x double> %tmp9, <2 x double>* %r, align 16
ret void
; CHECK: t2:
; CHECK: movl 8(%esp), %eax
; CHECK-NEXT: movapd (%eax), %xmm0
; CHECK-NEXT: movhpd 12(%esp), %xmm0
; CHECK-NEXT: movl 4(%esp), %eax
; CHECK-NEXT: movapd %xmm0, (%eax)
; CHECK-NEXT: ret
}
CHECK-NEXT: directives reject the input unless there is exactly one newline between it and the previous
directive. A CHECK-NEXT cannot be the first directive in a file.
To support this, FileCheck allows you to specify regular expressions in matching strings, surrounded by double
braces: {{yourregex}}. Because we want to use fixed string matching for a majority of what we do, FileCheck
has been designed to support mixing and matching fixed string matching with regular expressions. This
allows you to write things like this:
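Reconstructed for illustration (a CHECK line mixing fixed strings with {{...}} regexes):

```llvm
; CHECK: movhpd {{[0-9]+}}(%esp), {{%xmm[0-7]}}
```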
In this case, any offset from the ESP register will be allowed, and any xmm register will be allowed.
Because regular expressions are enclosed with double braces, they are visually distinct, and you don't need to
use escape characters within the double braces like you would in C. In the rare case that you want to match
double braces explicitly from the input, you can use something ugly like {{[{][{]}} as your pattern.
FileCheck Variables
It is often useful to match a pattern and then verify that it occurs again later in the file. For codegen tests, this
can be useful to allow any register, but verify that that register is used consistently later. To do this, FileCheck
allows named variables to be defined and substituted into patterns. Here is a simple example:
; CHECK: test5:
; CHECK: notw [[REGISTER:%[a-z]+]]
; CHECK: andw {{.*}} [[REGISTER]]
The first check line matches a regex (%[a-z]+) and captures it into the variable "REGISTER". The second
line verifies that whatever is in REGISTER occurs later in the file after an "andw". FileCheck variable
references are always contained in [[ ]] pairs, and their names can be formed with the regex
"[a-zA-Z][a-zA-Z0-9]*". If a colon follows the name, then it is a definition of the variable; if not, it is
a use.
FileCheck variables can be defined multiple times, and uses always get the latest value. Note that variables are
all read at the start of a "CHECK" line and are all defined at the end. This means that if you have something
like "CHECK: [[XYZ:.*]]x[[XYZ]]", the check line will read the previous value of the XYZ
variable and define a new one after the match is performed. If you need to do something like this you can
probably take advantage of the fact that FileCheck is not actually line-oriented when it matches; this allows
you to define two separate CHECK lines that match on the same line.
Here are the available variable names. The alternate syntax is listed in parentheses.
$test (%s)
The full path to the test case's source. This is suitable for passing on the command line as the input to
an llvm tool.
$srcdir
The source directory from which "make check" was run.
objdir
The object directory that corresponds to the $srcdir.
subdir
A partial path from the test directory that contains the sub-directory that contains the test source
being executed.
srcroot
The root directory of the LLVM src tree.
objroot
The root directory of the LLVM object tree. This could be the same as the srcroot.
path
The path to the directory that contains the test case source. This is for locating any supporting files
that are not generated by the test, but used by the test.
tmp
The path to a temporary file name that could be used for this test case. The file name won't conflict
with other test cases. You can append to it if you need multiple temporaries. This is useful as the
destination of some redirected output.
llvmlibsdir (%llvmlibsdir)
The directory where the LLVM libraries are located.
target_triplet (%target_triplet)
The target triplet that corresponds to the current host machine (the one running the test cases). This
should probably be called "host".
llvmgcc (%llvmgcc)
The full path to the llvm-gcc executable as specified in the configured LLVM environment.
llvmgxx (%llvmgxx)
The full path to the llvm-g++ executable as specified in the configured LLVM environment.
gccpath
The full path to the C compiler used to build LLVM. Note that this might not be gcc.
gxxpath
The full path to the C++ compiler used to build LLVM. Note that this might not be g++.
compile_c (%compile_c)
The full command line used to compile LLVM C source code. This has all the configured -I, -D and
optimization options.
compile_cxx (%compile_cxx)
The full command used to compile LLVM C++ source code. This has all the configured -I, -D and
optimization options.
link (%link)
The full command line used to link LLVM executables. This has all the configured -I, -L and -l
options.
shlibext (%shlibext)
The suffix for the host platform's shared library (dll) files. This includes the period as the first character.
To add more variables, two things need to be changed. First, add a line in the test/Makefile that creates
the site.exp file. This will "set" the variable as a global in the site.exp file. Second, in the
test/lib/llvm.exp file, in the substitute proc, add the variable name to the list of "global" declarations
at the beginning of the proc. That's it; the variable can then be used in test scripts.
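As a rough sketch of the two edits (the variable name mynewvar and the exact surrounding lines are invented for illustration):

```
# In test/Makefile, in the commands that generate site.exp:
	@echo 'set mynewvar "@MYNEWVAR@"' >> site.tmp

# In test/lib/llvm.exp, at the top of the substitute proc:
	global mynewvar
```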
Other Features
To make RUN line writing easier, there are several shell scripts located in the llvm/test/Scripts
directory. This directory is in the PATH when running tests, so you can just call these scripts using their
name. For example:
ignore
This script runs its arguments and then always returns 0. This is useful in cases where the test needs to
cause a tool to generate an error (e.g. to check the error output). However, any program in a pipeline
that returns a non-zero result will cause the test to fail. This script overcomes that issue and nicely
documents that the test case is purposefully ignoring the result code of the tool.
not
This script runs its arguments and then inverts the result code from it. Zero result codes become 1.
Non-zero result codes become 0. This is useful to invert the result of a grep. For example "not grep
X" means succeed only if you don't find X in the input.
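A minimal sketch of such an inversion helper in shell (an illustration of the idea, not the actual script shipped in llvm/test/Scripts):

```shell
# not: run the given command and invert its exit status.
not() {
  if "$@"; then
    return 1      # command succeeded, so "not" fails
  else
    return 0      # command failed, so "not" succeeds
  fi
}

# Succeed only if "X" does not appear in the input.
echo "hello" | not grep -q X && echo "X not found"   # prints: X not found
```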
Sometimes it is necessary to mark a test case as "expected fail" or XFAIL. You can easily mark a test as
XFAIL just by including XFAIL: on a line near the top of the file. This signals that the test case should
succeed if the test fails. Such test cases are counted separately by DejaGnu. To specify an expected fail, use
the XFAIL keyword in the comments of the test program followed by a colon and one or more regular
expressions (separated by a comma). The regular expressions allow you to XFAIL the test conditionally by
host platform. The regular expressions following the : are matched against the target triplet for the host
machine. If there is a match, the test is expected to fail. If not, the test is expected to succeed. To XFAIL
everywhere just specify XFAIL: *. Here is an example of an XFAIL line:
; XFAIL: darwin,sun
To make the output more useful, the llvm_runtest function will scan the lines of the test case for ones that
contain a pattern that matches PR[0-9]+. This is the syntax for specifying a PR (Problem Report) number that
is related to the test case. The number after "PR" specifies the LLVM Bugzilla number. When a PR number is
specified, it will be used in the pass/fail reporting. This is useful to quickly get some context when a test fails.
Finally, any line that contains "END." will cause the special interpretation of lines to terminate. This is
generally done right after the last RUN: line. This has two side effects: (a) it prevents special interpretation of
lines that are part of the test program, not the instructions to the test case, and (b) it speeds things up for really
big test cases by avoiding interpretation of the remainder of the file.
When executing tests, it is usually a good idea to start out with a subset of the available tests or programs.
This makes test run times smaller at first and later on this is useful to investigate individual test failures. To
run some test only on a subset of programs, simply change directory to the programs you want tested and run
gmake there. Alternatively, you can run a different test using the TEST variable to change what tests are run
on the selected programs (see below for more info).
In addition to testing correctness, the llvm-test directory also performs timing tests of various LLVM
optimizations. It also records compilation times for the compilers and the JIT. This information can be used to
compare the effectiveness of LLVM's optimizations and code generation.
llvm-test tests are divided into three types of tests: MultiSource, SingleSource, and External.
• llvm-test/SingleSource
The SingleSource directory contains test programs that are only a single source file in size. These are
usually small benchmark programs or small programs that calculate a particular value. Several such
programs are grouped together in each directory.
• llvm-test/MultiSource
The MultiSource directory contains subdirectories which contain entire programs with multiple
source files. Large benchmarks and whole applications go here.
• llvm-test/External
The External directory contains Makefiles for building code that is external to (i.e., not distributed
with) LLVM. The most prominent members of this directory are the SPEC 95 and SPEC 2000
benchmark suites. The External directory does not contain these actual tests, but only the
Makefiles that know how to properly compile these programs from somewhere else. The presence and
location of these external programs is configured by the llvm-test configure script.
Each tree is then subdivided into several categories, including applications, benchmarks, regression tests, code
that is strange grammatically, etc. These organizations should be relatively self-explanatory.
Some tests are known to fail. Some are bugs that we have not fixed yet; others are features that we haven't
added yet (or may never add). In DejaGNU, the result for such tests will be XFAIL (eXpected FAILure). In
this way, you can tell the difference between an expected and unexpected failure.
The tests in the test suite have no such feature at this time. If the test passes, only warnings and other
miscellaneous output will be generated. If a test fails, a large <program> FAILED message will be displayed.
This will help you separate benign warnings from actual test failures.
To run the test suite, you need to use the following steps:
During the re-configuration, you must either: (1) have the llvm-gcc you just built in your path, or (2)
specify the directory where your just-built llvm-gcc is installed using
--with-llvmgccdir=$LLVM_GCC_DIR.
You must also tell the configure machinery that the test suite is available so it can be configured for
your build tree:
% cd $LLVM_OBJ_ROOT ; $LLVM_SRC_ROOT/configure [--with-llvmgccdir=$LLVM_GCC_DIR]
[Remember that $LLVM_GCC_DIR is the directory where you installed llvm-gcc, not its src or obj
directory.]
7. You can now run the test suite from your build tree as follows:
% cd $LLVM_OBJ_ROOT/projects/test-suite
% make
Note that the second and third steps only need to be done once. After you have the suite checked out and
configured, you don't need to do it again (unless the test code or configure script changes).
--with-externals
--with-externals=<directory>
This tells LLVM where to find any external tests. They are expected to be in specifically named subdirectories
of <directory>. If directory is left unspecified, configure uses the default value
/home/vadve/shared/benchmarks/speccpu2000/benchspec. Subdirectory names known to
LLVM include:
spec95
speccpu2000
speccpu2006
povray31
Others are added from time to time, and can be determined from configure.
Running different tests
In addition to the regular "whole program" tests, the test-suite module also provides a mechanism for
compiling the programs in different ways. If the variable TEST is defined on the gmake command line, the
test system will include a Makefile named TEST.<value of TEST variable>.Makefile. This
Makefile can modify build rules to yield different results.
For example, the LLVM nightly tester uses TEST.nightly.Makefile to create the nightly test reports.
To run the nightly tests, run gmake TEST=nightly.
There are several TEST Makefiles available in the tree. Some of them are designed for internal LLVM
research and will not work outside of the LLVM research group. They may still be valuable, however, as a
guide to writing your own TEST Makefile for any optimization or analysis passes that you develop with
LLVM.
Somewhat better is running gmake TEST=sometest test, which runs the specified test and usually
adds per-program summaries to the output (depending on which sometest you use). For example, the
nightly test explicitly outputs TEST-PASS or TEST-FAIL for every program. Though
these lines are still drowned in the output, it's easy to grep the output logs in the Output directories.
Even better are the report and report.format targets (where format is one of html, csv, text or
graphs). The exact contents of the report depend on which TEST you are running, but the text results
are always shown at the end of the run and the results are always stored in the report.<type>.format
file (when running with TEST=<type>). The report also generates a file called
report.<type>.raw.out containing the output of the entire test run.
Let's say that you have an LLVM optimization pass, and you want to see how many times it triggers. The first
thing you should do is add an LLVM statistic to your pass, which will tally counts of things you care about.
Following this, you can set up a test and a report that collects these and formats them for easy viewing. This
consists of two files: a "test-suite/TEST.XXX.Makefile" fragment (where XXX is the name of
your test) and a "test-suite/TEST.XXX.report" file that indicates how to format the output into a
table. There are many example reports of various levels of sophistication included with the test suite, and the
framework is very general.
If you are interested in testing an optimization pass, check out the "libcalls" test as an example. It can be run
like this:
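A plausible invocation, assuming a configured test-suite tree (the directory shown is illustrative):

```
% cd llvm/projects/test-suite
% gmake TEST=libcalls test
```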
This will do a bunch of stuff, then eventually print a table like this:
This basically greps the -stats output and displays it in a table. You can also use the "TEST=libcalls
report.html" target to get the table in HTML form; similarly for report.csv and report.tex.
The source for this is in test-suite/TEST.libcalls.*. The format is pretty simple: the Makefile indicates how to
run the test (in this case, "opt -simplify-libcalls -stats"), and the report contains one line for
each column of the output. The first value is the header for the column and the second is the regex to grep the
output of the command for. There are lots of example reports that can do fancy stuff.
If you'd like to set up an instance of the nightly tester to run on your machine, take a look at the comments at
the top of the utils/NewNightlyTest.pl file. If you decide to set up a nightly tester please choose a
unique nickname and invoke utils/NewNightlyTest.pl with the "-nickname [yournickname]"
command line option.
You can create a shell script to encapsulate the running of the script. The optimized x86 Linux nightly test is
run from just such a script:
#!/bin/bash
BASE=/proj/work/llvm/nightlytest
export BUILDDIR=$BASE/build
export WEBDIR=$BASE/testresults
export LLVMGCCDIR=/proj/work/llvm/cfrontend/install
export PATH=/proj/install/bin:$LLVMGCCDIR/bin:$PATH
export LD_LIBRARY_PATH=/proj/install/lib
cd $BASE
cp /proj/work/llvm/llvm/utils/NewNightlyTest.pl .
nice ./NewNightlyTest.pl -nice -release -verbose -parallel -enable-linscan \
-nickname NightlyTester -noexternals > output.log 2>&1
It is also possible to specify the location to which your nightly test results are submitted. You can do this by
passing the command line options "-submit-server [server_address]" and "-submit-script [script_on_server]" to
utils/NewNightlyTest.pl. For example, to submit to the llvm.org nightly test results page, you would
invoke the nightly test script with "-submit-server llvm.org -submit-script /nightlytest/NightlyTestAccept.cgi".
If these options are not specified, the nightly test script sends the results to the llvm.org nightly test results
page.
Take a look at the NewNightlyTest.pl file to see what all of the flags and strings do. If you start running
the nightly tests, please let us know. Thanks!
Building the LLVM GCC Front-End
1. Retrieve the appropriate llvm-gcc-4.2-version.source.tar.gz archive from the LLVM web site.
It is also possible to download the sources of the llvm-gcc front end from a read-only mirror using
subversion. To check out the 4.2 code for the first time, use:
After that, the code can be updated in the destination directory using:
svn update
1. The only platform for which the Ada front-end is known to build is 32-bit Intel x86 running Linux. It is
unlikely to build for other systems without some work.
2. The build requires having a compiler that supports Ada, C and C++. The Ada front-end is written in
Ada so an Ada compiler is needed to build it. Compilers known to work with the LLVM 2.5 release
are gcc-4.2 and the 2005, 2006 and 2007 versions of the GNAT GPL Edition. GNAT GPL 2008,
gcc-4.3 and later will not work. The LLVM parts of llvm-gcc are written in C++ so a C++ compiler
is needed to build them. The rest of gcc is written in C. Some linux distributions provide a version of
gcc that supports all three languages (the Ada part often comes as an add-on package to the rest of
gcc). Otherwise it is possible to combine two versions of gcc, one that supports Ada and C (such as
the 2007 GNAT GPL Edition) and another which supports C++, see below.
3. Because the Ada front-end is experimental, it is wise to build the compiler with checking enabled.
This causes it to run much slower, but helps catch mistakes in the compiler (please report any
problems you find).
Supposing appropriate compilers are available, llvm-gcc with Ada support can be built on an x86-32 linux
box using the following recipe:
wget http://llvm.org/releases/2.5/llvm-2.5.tar.gz
tar xzf llvm-2.5.tar.gz
mv llvm-2.5 llvm
wget http://llvm.org/releases/2.5/llvm-gcc-4.2-2.5.source.tar.gz
tar xzf llvm-gcc-4.2-2.5.source.tar.gz
mv llvm-gcc4.2-2.5.source llvm-gcc-4.2
mkdir llvm-objects
cd llvm-objects
4. Configure LLVM (here it is configured to install into /usr/local):
If you have a multi-compiler setup and the C++ compiler is not the default, then you can configure
like this:
5. Build LLVM:
make
6. Install LLVM (optional):
make install
7. Make a build directory llvm-gcc-4.2-objects for llvm-gcc and make it the current directory:
cd ..
mkdir llvm-gcc-4.2-objects
cd llvm-gcc-4.2-objects
8. Configure llvm-gcc (here it is configured to install into /usr/local). The
--enable-checking flag turns on sanity checks inside the compiler. To turn off these checks,
configure without --enable-checking.
If you have a multi-compiler setup, then you can configure like this:
export CC=PATH_TO_C_AND_ADA_COMPILER
export CXX=PATH_TO_C++_COMPILER
../llvm-gcc-4.2/configure --prefix=/usr/local --enable-languages=ada,c \
--enable-checking --enable-llvm=$PWD/../llvm-objects \
--disable-bootstrap --disable-multilib
9. Build and install the compiler:
make
make install
License Information
The LLVM GCC frontend is licensed to you under the GNU General Public License and the GNU Lesser
General Public License. Please see the files COPYING and COPYING.LIB for more details.
1. Overview
2. Compile Flags
3. C++ Features
4. Shared Library
5. Dependencies
Overview
LLVM sets certain default configure options to make sure our developers don't break things for constrained
platforms. These settings are not optimal for most desktop systems, and we hope that packagers (e.g., Redhat,
Debian, MacPorts, etc.) will tweak them. This document lists settings we suggest you tweak.
LLVM's API changes with each release, so users are likely to want, for example, both LLVM-2.6 and
LLVM-2.7 installed at the same time to support apps developed against each.
Compile Flags
LLVM runs much more quickly when it's optimized and assertions are removed. However, such a build is
currently incompatible with users who build without defining NDEBUG, and the lack of assertions makes it
hard to debug problems in user code. We recommend allowing users to install both optimized and debug
versions of LLVM in parallel. The following configure flags are relevant:
--disable-assertions
Builds LLVM with NDEBUG defined. Changes the LLVM ABI. Also available by setting
DISABLE_ASSERTIONS=0|1 in make's environment. This defaults to enabled regardless of the
optimization setting, but it slows things down.
--enable-debug-symbols
Builds LLVM with -g. Also available by setting DEBUG_SYMBOLS=0|1 in make's environment.
This defaults to disabled when optimizing, so you should turn it back on to let users debug their
programs.
--enable-optimized
(For svn checkouts) Builds LLVM with -O2 and, by default, turns off debug symbols. Also available
by setting ENABLE_OPTIMIZED=0|1 in make's environment. This defaults to enabled when not in
a checkout.
C++ Features
RTTI
LLVM disables RTTI by default. Add REQUIRES_RTTI=1 to your environment while running
make to re-enable it. This will allow users to build with RTTI enabled and still inherit from LLVM
classes.
Shared Library
Configure with --enable-shared to build libLLVM-major.minor.(so|dylib) and link the tools
against it. This saves lots of binary size at the cost of some startup time.
Dependencies
--enable-libffi
Depend on libffi to allow the LLVM interpreter to call external functions.
--with-oprofile
Depend on libopagent (>=version 0.9.4) to let the LLVM JIT tell oprofile about function addresses
and line numbers.
Table Of Contents
-A-
ADCE
-B-
BURS
-C-
CSE
-D-
DAG Derived Pointer DSA DSE
-G-
GC
-I-
IPA IPO ISel
-L-
LCSSA LICM Load-VN
-O-
Object Pointer
-P-
PRE
-R-
RAUW Reassociation Root
-S-
Safe Point SCC SCCP SDISel SRoA Stack Map
Definitions
-A-
ADCE
Aggressive Dead Code Elimination
-B-
BURS
Bottom Up Rewriting System—A method of instruction selection for code generation. An example is
the BURG tool.
-C-
CSE
Common Subexpression Elimination. An optimization that removes common subexpression
computation. For example (a+b)*(a+b) has two subexpressions that are the same: (a+b). This
optimization would perform the addition only once and then perform the multiply (but only if it's
computationally correct/safe).
-D-
DAG
Directed Acyclic Graph
Derived Pointer
A pointer to the interior of an object, such that a garbage collector is unable to use the pointer for
reachability analysis. While a derived pointer is live, the corresponding object pointer must be kept in
a root, otherwise the collector might free the referenced object. With copying collectors, derived
pointers pose an additional hazard that they may be invalidated at any safe point. This term is used in
opposition to object pointer.
DSA
Data Structure Analysis
DSE
Dead Store Elimination
-G-
GC
Garbage Collection. The practice of using reachability analysis instead of explicit memory
management to reclaim unused memory.
-H-
Heap
In garbage collection, the region of memory which is managed using reachability analysis.
-I-
IPA
Inter-Procedural Analysis. Refers to any variety of code analysis that occurs between procedures,
functions or compilation units (modules).
IPO
Inter-Procedural Optimization. Refers to any variety of code optimization that occurs between
procedures, functions or compilation units (modules).
ISel
Instruction Selection.
-L-
LCSSA
Loop-Closed Static Single Assignment Form
LICM
Loop Invariant Code Motion
Load-VN
Load Value Numbering
-O-
Object Pointer
A pointer to an object such that the garbage collector is able to trace references contained within the
object. This term is used in opposition to derived pointer.
-P-
PRE
Partial Redundancy Elimination
-R-
RAUW
An abbreviation for Replace All Uses With. The functions User::replaceUsesOfWith(),
Value::replaceAllUsesWith(), and Constant::replaceUsesOfWithOnConstant() implement the
replacement of one Value with another by iterating over its def/use chain and fixing up all of the
pointers to point to the new value. See also def/use chains.
Reassociation
Rearranging associative expressions to promote better redundancy elimination and other optimization.
For example, changing (A+B-A) into (B+A-A), permitting it to be optimized into (B+0) then (B).
Root
In garbage collection, a pointer variable lying outside of the heap from which the collector begins its
reachability analysis. In the context of code generation, "root" almost always refers to a "stack root" --
a local or temporary variable within an executing function.
-S-
Safe Point
In garbage collection, it is necessary to identify stack roots so that reachability analysis may proceed.
It may be infeasible to provide this information for every instruction, so instead the information
is calculated only at designated safe points. With a copying collector, derived pointers must not be
retained across safe points and object pointers must be reloaded from stack roots.
SDISel
Selection DAG Instruction Selection.
SCC
Strongly Connected Component
SCCP
Sparse Conditional Constant Propagation
SRoA
Scalar Replacement of Aggregates
SSA
Static Single Assignment
Stack Map
In garbage collection, metadata emitted by the code generator which identifies roots within the stack
frame of an executing function.
1. Introduction
2. General Information
♦ The C++ Standard Template Library
3. Important and useful LLVM APIs
♦ The isa<>, cast<> and dyn_cast<> templates
♦ Passing strings (the StringRef and Twine classes)
◊ The StringRef class
◊ The Twine class
♦ The DEBUG() macro and -debug option
◊ Fine grained debug info with DEBUG_TYPE and the -debug-only option
♦ The Statistic class & -stats option
Written by Chris Lattner, Dinakar Dhurjati, Gabor Greif, Joel Stanley, Reid Spencer and Owen Anderson
Introduction
This document is meant to highlight some of the important classes and interfaces available in the LLVM
source-base. This manual is not intended to explain what LLVM is, how it works, and what LLVM code looks
like. It assumes that you know the basics of LLVM and are interested in writing transformations or otherwise
analyzing or manipulating the code.
This document should get you oriented so that you can find your way in the continuously growing source
code that makes up the LLVM infrastructure. Note that this manual is not intended to serve as a replacement
for reading the source code, so if you think there should be a method in one of these classes to do something,
but it's not listed, check the source.
The first section of this document describes general information that is useful to know when working in the
LLVM infrastructure, and the second describes the Core LLVM classes. In the future this manual will be
extended with information describing how to use extension libraries, such as dominator information, CFG
traversal routines, and useful utilities like the InstVisitor template.
General Information
This section contains general information that is useful if you are working in the LLVM source-base, but that
isn't specific to any particular API.
1. Dinkumware C++ Library reference - an excellent reference for the STL and other parts of the
standard C++ library.
2. C++ In a Nutshell - This is an O'Reilly book in the making. It has a decent Standard Library
Reference that rivals Dinkumware's, and is unfortunately no longer free since the book has been
published.
3. C++ Frequently Asked Questions
4. SGI's STL Programmer's Guide - Contains a useful Introduction to the STL.
5. Bjarne Stroustrup's C++ Page
6. Bruce Eckel's Thinking in C++, 2nd ed. Volume 2 Revision 4.0 (even better, get the book).
You are also encouraged to take a look at the LLVM Coding Standards guide which focuses on how to write
maintainable code more than where to put your curly braces.
isa<>:
The isa<> operator works exactly like the Java "instanceof" operator. It returns true or false
depending on whether a reference or pointer points to an instance of the specified class. This can be
very useful for constraint checking of various sorts.
cast<>:
The cast<> operator is a "checked cast" operation. It converts a pointer or reference from a base
class to a derived class, causing an assertion failure if it is not really an instance of the right type.
Note that you should not use an isa<> test followed by a cast<>; for that use the dyn_cast<>
operator.
dyn_cast<>:
The dyn_cast<> operator is a "checking cast" operation. It checks to see if the operand is of the
specified type, and if so, returns a pointer to it (this operator does not work with references). If the
operand is not of the correct type, a null pointer is returned. Thus, this works very much like the
dynamic_cast<> operator in C++, and should be used in the same circumstances. Typically, the
dyn_cast<> operator is used in an if statement or some other flow control statement like this:
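For example (AllocationInst here is just an illustrative LLVM class, and Val is assumed to be a Value*):

```
if (AllocationInst *AI = dyn_cast<AllocationInst>(Val)) {
  // ...use AI, which is known to be an AllocationInst here...
}
```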
This form of the if statement effectively combines together a call to isa<> and a call to cast<>
into one statement, which is very convenient.
Note that the dyn_cast<> operator, like C++'s dynamic_cast<> or Java's instanceof
operator, can be abused. In particular, you should not use big chained if/then/else blocks to
check for lots of different variants of classes. If you find yourself wanting to do this, it is much
cleaner and more efficient to use the InstVisitor class to dispatch over the instruction type
directly.
cast_or_null<>:
The cast_or_null<> operator works just like the cast<> operator, except that it allows for a
null pointer as an argument (which it then propagates). This can sometimes be useful, allowing you to
combine several null checks into one.
dyn_cast_or_null<>:
The dyn_cast_or_null<> operator works just like the dyn_cast<> operator, except that it
allows for a null pointer as an argument (which it then propagates). This can sometimes be useful,
allowing you to combine several null checks into one.
These five templates can be used with any classes, whether they have a v-table or not. To add support for
these templates, you simply need to add classof static methods to the class you are interested in casting
to. Describing this is currently outside the scope of this document, but there are lots of examples in the LLVM
source base.
Passing strings (the StringRef and Twine classes)
Although LLVM generally does not do much string manipulation, we do have several important APIs
which take strings. Two important examples are the Value class -- which has names for instructions,
functions, etc. -- and the StringMap class which is used extensively in LLVM and Clang.
These are generic classes, and they need to be able to accept strings which may have embedded null
characters. Therefore, they cannot simply take a const char *, and taking a const std::string&
requires clients to perform a heap allocation which is usually unnecessary. Instead, many LLVM APIs use a
const StringRef& or a const Twine& for passing strings efficiently.
The StringRef class
The StringRef class represents a reference to a constant string (a character array and a length) and
supports the common operations available on std::string, but does not require heap allocation.
It can be implicitly constructed using a C style null-terminated string, an std::string, or explicitly with a
character pointer and length. For example, the StringRef find function is declared as:
iterator find(StringRef Key);
Similarly, APIs which need to return a string may return a StringRef instance, which can be used directly
or converted to an std::string using the str member function. See "llvm/ADT/StringRef.h" for
more information.
You should rarely use the StringRef class directly; because it contains pointers to external memory, it is
not generally safe to store an instance of the class (unless you know that the external storage will not be
freed).
The Twine class
The Twine class is effectively a lightweight rope which points to temporary (stack allocated) objects. Twines
can be implicitly constructed as the result of the plus operator applied to strings (i.e., a C string, an
std::string, or a StringRef). The twine delays the actual concatenation of strings until it is actually
required, at which point it can be efficiently rendered directly into a character array. This avoids unnecessary
heap allocation involved in constructing the temporary results of string concatenation. See
"llvm/ADT/Twine.h" for more information.
As with a StringRef, Twine objects point to external memory and should almost never be stored or
mentioned directly. They are intended solely for use when defining a function which should be able to
efficiently accept concatenated strings.
The DEBUG() macro and -debug option
Often when working on your pass you will put a bunch of debugging printouts and other code into your
pass. After you get it working, you want to remove it, but you may need it again in the future (to work out
new bugs that you run into). Naturally, because of this, you don't want to delete the debug printouts, but you
don't want them to always be noisy. A standard compromise is to comment them out, allowing you to enable
them if you need them in the future.
The "llvm/Support/Debug.h" file provides a macro named DEBUG() that is a much nicer solution to
this problem. Basically, you can put arbitrary code into the argument of the DEBUG macro, and it is only
executed if 'opt' (or any other tool) is run with the '-debug' command line argument:
Using the DEBUG() macro instead of a home-brewed solution allows you to not have to create "yet another"
command line option for the debug output for your pass. Note that DEBUG() macros are disabled for
optimized builds, so they do not cause a performance impact at all (for the same reason, they should also not
contain side-effects!).
One additional nice thing about the DEBUG() macro is that you can enable or disable it directly in gdb. Just
use "set DebugFlag=0" or "set DebugFlag=1" from the gdb command line if the program is running. If the
program hasn't been started yet, you can always just run it with -debug.
Fine grained debug info with DEBUG_TYPE and the -debug-only option
Sometimes you may find yourself in a situation where enabling -debug just turns on too much information
(such as when working on the code generator). If you want to enable debug information with more
fine-grained control, you can define the DEBUG_TYPE macro and use the -debug-only option as follows:
#undef DEBUG_TYPE
DEBUG(errs() << "No debug type\n");
#define DEBUG_TYPE "foo"
DEBUG(errs() << "'foo' debug type\n");
#undef DEBUG_TYPE
#define DEBUG_TYPE "bar"
DEBUG(errs() << "'bar' debug type\n");
#undef DEBUG_TYPE
#define DEBUG_TYPE ""
DEBUG(errs() << "No debug type (2)\n");
Of course, in practice, you should only set DEBUG_TYPE at the top of a file, to specify the debug type for the
entire module (if you do this before you #include "llvm/Support/Debug.h", you don't have to
insert the ugly #undef's). Also, you should use names more meaningful than "foo" and "bar", because there
is no system in place to ensure that names do not conflict. If two different modules use the same string, they
will all be turned on when the name is specified. This allows, for example, all debug information for
instruction scheduling to be enabled with -debug-only=InstrSched, even if the source lives in multiple
files.
The DEBUG_WITH_TYPE macro is also available for situations where you would like to set DEBUG_TYPE,
but only for one specific DEBUG statement. It takes an additional first parameter, which is the type to use. For
example, the preceding example could be written as:

DEBUG_WITH_TYPE("", errs() << "No debug type\n");
DEBUG_WITH_TYPE("foo", errs() << "'foo' debug type\n");
DEBUG_WITH_TYPE("bar", errs() << "'bar' debug type\n");
DEBUG_WITH_TYPE("", errs() << "No debug type (2)\n");
Often you may run your pass on some big program, and you're interested to see how many times it makes a
certain transformation. Although you can do this with hand inspection, or some ad-hoc method, this is a real
pain and not very useful for big programs. Using the Statistic class makes it very easy to keep track of
this information, and the calculated information is presented in a uniform manner with the rest of the passes
being executed.
There are many examples of Statistic uses, but the basics of using it are as follows:
1. Define your statistic like this:

#define DEBUG_TYPE "mypassname"   // This goes before any #includes.
STATISTIC(NumXForms, "The # of times I did stuff");

The STATISTIC macro defines a static variable, whose name is specified by the first argument. The
pass name is taken from the DEBUG_TYPE macro, and the description is taken from the second
argument. The variable defined ("NumXForms" in this case) acts like an unsigned integer.
2. Whenever you make a transformation, bump the counter:

++NumXForms;   // I did stuff!
That's all you have to do. To get 'opt' to print out the statistics gathered, use the '-stats' option:

$ opt -stats -mypassname < program.bc > /dev/null

When running opt on a C file from the SPEC benchmark suite, it gives a report with one line per statistic,
showing the count, the name of the pass, and a description of what was counted.
Obviously, with so many optimizations, having a unified framework for this stuff is very nice. Making your
pass fit well into the framework makes it more maintainable and useful.
LLVM provides several callbacks that are available in a debug build to do exactly that. If you call the
Function::viewCFG() method, for example, the current LLVM tool will pop up a window containing
the CFG for the function where each basic block is a node in the graph, and each node contains the
instructions in the block. Similarly, there also exist Function::viewCFGOnly() (which does not include the
instructions), MachineFunction::viewCFG() and MachineFunction::viewCFGOnly(),
and SelectionDAG::viewGraph(). Within GDB, for example, you can usually use
something like call DAG.viewGraph() to pop up a window. Alternatively, you can sprinkle calls to
these functions in your code in places you want to debug.
Getting this to work requires a small amount of configuration. On Unix systems with X11, install the graphviz
toolkit, and make sure 'dot' and 'gv' are in your path. If you are running on Mac OS X, download and install
the Mac OS X Graphviz program, and add /Applications/Graphviz.app/Contents/MacOS/ (or
wherever you install it) to your path. Once your system and path are set up, rerun the LLVM configure
script and rebuild LLVM to enable this functionality.
SelectionDAG has been extended to make it easier to locate interesting nodes in large complex graphs.
From gdb, if you call DAG.setGraphColor(node, "color"), then the next call to
DAG.viewGraph() will highlight the node in the specified color (choices of colors can be found at
colors.) More complex node attributes can be provided with call DAG.setGraphAttrs(node,
"attributes") (choices can be found at Graph Attributes.) If you want to restart and clear all the current
graph attributes, then you can call DAG.clearGraphAttrs().
The first step is a choose your own adventure: do you want a sequential container, a set-like container, or a
map-like container? The most important thing when choosing a container is the algorithmic properties of how
you plan to access the container. Based on that, you should use:
• a map-like container if you need efficient look-up of a value based on another value. Map-like
containers also support efficient queries for containment (whether a key is in the map). Map-like
containers generally do not support efficient reverse mapping (values to keys). If you need that, use
two maps. Some map-like containers also support efficient iteration through the keys in sorted order.
Map-like containers are the most expensive sort; only use them if you need one of these capabilities.
• a set-like container if you need to put a bunch of stuff into a container that automatically eliminates
duplicates. Some set-like containers support efficient iteration through the elements in sorted order.
Set-like containers are more expensive than sequential containers.
• a sequential container provides the most efficient way to add elements and keeps track of the order
they are added to the collection. They permit duplicates and support efficient iteration, but do not
support efficient look-up based on a key.
• a string container is a specialized sequential container or reference structure that is used for character
or byte arrays.
• a bit container provides an efficient way to store and perform set operations on sets of numeric id's,
while automatically eliminating duplicates. Bit containers require a maximum of 1 bit for each
identifier you want to store.
Once the proper category of container is determined, you can fine tune the memory use, constant factors, and
cache behaviors of access by intelligently picking a member of the category. Note that constant factors and
cache behavior can be a big deal. If you have a vector that usually only contains a few elements (but could
contain many), for example, it's much better to use SmallVector than vector. Doing so avoids (relatively)
expensive malloc/free calls, which dwarf the cost of adding the elements to the container.
"llvm/ADT/SmallVector.h"
SmallVector<Type, N> is a simple class that looks and smells just like vector<Type>: it supports
efficient iteration, lays out elements in memory order (so you can do pointer arithmetic between elements),
supports efficient push_back/pop_back operations, supports efficient random access to its elements, etc.
This is good for vectors that are "usually small" (e.g. the number of predecessors/successors of a block is
usually less than 8). On the other hand, this makes the size of the SmallVector itself large, so you don't want
to allocate lots of them (doing so will waste a lot of space). As such, SmallVectors are most useful when on
the stack.
SmallVector also provides a nice portable and efficient replacement for alloca.
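As a quick sketch of the kind of use this enables (the function name and the choice of 8 inline slots here are illustrative, not from the manual): the inline buffer lives inside the SmallVector itself, so collecting a handful of elements normally costs no heap allocation at all.

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/CFG.h"
using namespace llvm;

// Most blocks have few successors, so 8 inline slots usually avoid
// any malloc; larger blocks spill to the heap transparently.
void collectSuccessors(BasicBlock *BB,
                       SmallVector<BasicBlock*, 8> &Result) {
  for (succ_iterator SI = succ_begin(BB), E = succ_end(BB); SI != E; ++SI)
    Result.push_back(*SI);
}
```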
<vector>
std::vector is well loved and respected. It is useful when SmallVector isn't: when the size of the vector is often
large (thus the small optimization will rarely be a benefit) or if you will be allocating many instances of the
vector itself (which would waste space for elements that aren't in the container). vector is also useful when
interfacing with code that expects vectors :).
One worthwhile note about std::vector: avoid code like this:

for ( ... ) {
   std::vector<foo> V;
   use V;
}

Instead, write this as:

std::vector<foo> V;
for ( ... ) {
   use V;
   V.clear();
}
Doing so will save (at least) one heap allocation and free per iteration of the loop.
<deque>
std::deque is, in some senses, a generalized version of std::vector. Like std::vector, it provides constant time
random access and other similar properties, but it also provides efficient access to the front of the list. It does
not guarantee contiguity of elements within memory.
In exchange for this extra flexibility, std::deque has significantly higher constant factor costs than std::vector.
If possible, use std::vector or something cheaper.
<list>
std::list is an extremely inefficient class that is rarely useful. It performs a heap allocation for every element
inserted into it, thus having an extremely high constant factor, particularly for small data types. std::list also
only supports bidirectional iteration, not random access iteration.
In exchange for this high cost, std::list supports efficient access to both ends of the list (like std::deque, but
unlike std::vector or SmallVector). In addition, the iterator invalidation characteristics of std::list are stronger
than those of a vector class: inserting or removing an element from the list does not invalidate iterators or
pointers to other elements in the list.
llvm/ADT/ilist.h
ilist<T> implements an 'intrusive' doubly-linked list. It is intrusive, because it requires the element to store
and provide access to the prev/next pointers for the list.
ilist has the same drawbacks as std::list, and additionally requires an ilist_traits
implementation for the element type, but it provides some novel characteristics. In particular, it can efficiently
store polymorphic objects, the traits class is informed when an element is inserted or removed from the list,
and ilists are guaranteed to support a constant-time splice operation.
These properties are exactly what we want for things like Instructions and basic blocks, which is why
these are implemented with ilists.
• ilist_traits
• iplist
• llvm/ADT/ilist_node.h
• Sentinels
ilist_traits
ilist_traits<T> is ilist<T>'s customization mechanism. iplist<T> (and consequently
ilist<T>) publicly derive from this traits class.
iplist
iplist<T> is ilist<T>'s base and as such supports a slightly narrower interface. Notably, inserters from
T& are absent.
ilist_traits<T> is a public base of this class and can be used for a wide variety of customizations.
llvm/ADT/ilist_node.h
ilist_node<T> implements the forward and backward links that are expected by the ilist<T> (and
analogous containers) in the default manner.
ilist_node<T>s are meant to be embedded in the node type T, usually T publicly derives from
ilist_node<T>.
Sentinels
ilists have another specialty that must be considered. To be a good citizen in the C++ ecosystem, they need
to support the standard container operations, such as begin and end iterators, etc. Also, the operator--
must work correctly on the end iterator in the case of non-empty ilists.
The only sensible solution to this problem is to allocate a so-called sentinel along with the intrusive list, which
serves as the end iterator, providing the back-link to the last element. However, conforming to the C++
convention, it is illegal to operator++ beyond the sentinel, and the sentinel must not be dereferenced.
These constraints leave the ilist some implementation freedom as to how to allocate and store the
sentinel. The corresponding policy is dictated by ilist_traits<T>. By default a T gets heap-allocated
whenever the need for a sentinel arises.
While the default policy is sufficient in most cases, it may break down when T does not provide a default
constructor. Also, in the case of many instances of ilists, the memory overhead of the associated sentinels
is wasted. To alleviate the situation with numerous and voluminous T-sentinels, sometimes a trick is
employed, leading to ghostly sentinels.
Ghostly sentinels are obtained by specially-crafted ilist_traits<T> which superpose the sentinel with
the ilist instance in memory. Pointer arithmetic is used to obtain the sentinel, which is relative to the
ilist's this pointer. The ilist is augmented by an extra pointer, which serves as the back-link of the
sentinel. This is the only field in the ghostly sentinel which can be legally accessed.
There are also various STL adapter classes such as std::queue, std::priority_queue, std::stack, etc. These
provide simplified access to an underlying container but don't affect the cost of the container itself.
A sorted 'vector'
If you intend to insert a lot of elements, then do a lot of queries, a great approach is to use a vector (or other
sequential container) with std::sort+std::unique to remove duplicates. This approach works really well if your
usage pattern has these two distinct phases (insert then query), and can be coupled with a good choice of
sequential container.
This combination provides several nice properties: the result data is contiguous in memory (good for cache
locality), has few allocations, is easy to address (iterators in the final vector are just indices or pointers), and
can be efficiently queried with a standard binary or radix search.
"llvm/ADT/SmallSet.h"
If you have a set-like data structure that is usually small and whose elements are reasonably small, a
SmallSet<Type, N> is a good choice. This set has space for N elements in place (thus, if the set is
dynamically smaller than N, no malloc traffic is required) and accesses them with a simple linear search.
When the set grows beyond 'N' elements, it allocates a more expensive representation that guarantees efficient
access (for most types, it falls back to std::set, but for pointers it uses something far better, SmallPtrSet).
The magic of this class is that it handles small sets extremely efficiently, but gracefully handles extremely
large sets without loss of efficiency. The drawback is that the interface is quite small: it supports insertion,
queries and erasing, but does not support iteration.
"llvm/ADT/SmallPtrSet.h"
SmallPtrSet has all the advantages of SmallSet (and a SmallSet of pointers is transparently implemented with
a SmallPtrSet), but also supports iterators. If more than 'N' insertions are performed, a single quadratically
probed hash table is allocated and grows as needed, providing extremely efficient access (constant time
insertion/deleting/queries with low constant factors) and is very stingy with malloc traffic.
Note that, unlike std::set, the iterators of SmallPtrSet are invalidated whenever an insertion occurs. Also, the
values visited by the iterators are not visited in sorted order.
"llvm/ADT/DenseSet.h"
DenseSet is a simple quadratically probed hash table. It excels at supporting small values: it uses a single
allocation to hold all of the pairs that are currently inserted in the set. DenseSet is a great way to unique small
values that are not simple pointers (use SmallPtrSet for pointers). Note that DenseSet has the same
requirements for the value type that DenseMap has.
"llvm/ADT/FoldingSet.h"
FoldingSet is an aggregate class that is really good at uniquing expensive-to-create or polymorphic objects. It
is a combination of a chained hash table with intrusive links (uniqued objects are required to inherit from
FoldingSetNode) that uses SmallVector as part of its ID process.
Consider a case where you want to implement a "getOrCreateFoo" method for a complex object (for example,
a node in the code generator). The client has a description of *what* it wants to generate (it knows the opcode
and all the operands), but we don't want to 'new' a node, then try inserting it into a set only to find out it
already exists, at which point we would have to delete it and return the node that already exists.
To support this style of client, FoldingSet performs a query with a FoldingSetNodeID (which wraps
SmallVector) that can be used to describe the element that we want to query for. The query either returns the
element matching the ID or it returns an opaque ID that indicates where insertion should take place.
Construction of the ID usually does not require heap traffic.
Because FoldingSet uses intrusive links, it can support polymorphic objects in the set (for example, you can
have SDNode instances mixed with LoadSDNodes). Because the elements are individually allocated, pointers
to the elements are stable: inserting or removing elements does not invalidate any pointers to other elements.
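A minimal sketch of the "getOrCreateFoo" pattern described above. The MyNode type and its single Opcode field are hypothetical; FindNodeOrInsertPos and InsertNode are the FoldingSet query/insert entry points.

```cpp
#include "llvm/ADT/FoldingSet.h"
using namespace llvm;

// A hypothetical uniqued node type: it must derive from FoldingSetNode
// and describe itself via Profile().
struct MyNode : FoldingSetNode {
  unsigned Opcode;
  explicit MyNode(unsigned Op) : Opcode(Op) {}
  void Profile(FoldingSetNodeID &ID) const { ID.AddInteger(Opcode); }
};

MyNode *getOrCreateNode(FoldingSet<MyNode> &Set, unsigned Opcode) {
  FoldingSetNodeID ID;
  ID.AddInteger(Opcode);                 // Describe what we want...
  void *InsertPos = 0;
  if (MyNode *N = Set.FindNodeOrInsertPos(ID, InsertPos))
    return N;                            // ...found an existing node: no 'new'.
  MyNode *N = new MyNode(Opcode);        // Otherwise create it once...
  Set.InsertNode(N, InsertPos);          // ...and insert at the remembered spot.
  return N;
}
```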
<set>
std::set is a reasonable all-around set class, which is decent at many things but great at nothing. std::set
allocates memory for each element inserted (thus it is very malloc intensive) and typically stores three
pointers per element in the set (thus adding a large amount of per-element space overhead). It offers
guaranteed log(n) performance, which is not particularly fast from a complexity standpoint (particularly if the
elements of the set are expensive to compare, like strings), and has extremely high constant factors for lookup,
insertion and removal.
The advantages of std::set are that its iterators are stable (deleting or inserting an element from the set does
not affect iterators or pointers to other elements) and that iteration over the set is guaranteed to be in sorted
order. If the elements in the set are large, then the relative overhead of the pointers and malloc traffic is not a
big deal, but if the elements of the set are small, std::set is almost never a good choice.
"llvm/ADT/SetVector.h"
LLVM's SetVector<Type> is an adapter class that combines your choice of a set-like container along with a
Sequential Container. The important property that this provides is efficient insertion with uniquing (duplicate
elements are ignored) with iteration support. It implements this by inserting elements into both a set-like
container and the sequential container, using the set-like container for uniquing and the sequential container
for iteration.
The difference between SetVector and other sets is that the order of iteration is guaranteed to match the order
of insertion into the SetVector. This property is really important for things like sets of pointers. Because
pointer values are non-deterministic (e.g. vary across runs of the program on different machines), iterating
over the pointers in the set will not be in a well-defined order.
The drawback of SetVector is that it requires twice as much space as a normal set and has the sum of constant
factors from the set-like container and the sequential container that it uses. Use it *only* if you need to iterate
over the elements in a deterministic order. SetVector is also expensive to delete elements out of (linear time),
unless you use its "pop_back" method, which is faster.
SetVector is an adapter class that defaults to using std::vector and std::set for the underlying containers, so it
is quite expensive. However, "llvm/ADT/SetVector.h" also provides a SmallSetVector class, which
defaults to using a SmallVector and SmallSet of a specified size. If you use this, and if your sets are
dynamically smaller than N, you will save a lot of heap traffic.
"llvm/ADT/UniqueVector.h"
UniqueVector is similar to SetVector, but it retains a unique ID for each element inserted into the set. It
internally contains a map and a vector, and it assigns a unique ID for each value inserted into the set.
UniqueVector is very expensive: its cost is the sum of the cost of maintaining both the map and vector, it has
high complexity, high constant factors, and produces a lot of malloc traffic. It should be avoided.
std::multiset is useful if you're not interested in elimination of duplicates, but has all the drawbacks of std::set.
A sorted vector (where you don't delete duplicate entries) or some other approach is almost always better.
"llvm/ADT/StringMap.h"
Strings are commonly used as keys in maps, and they are difficult to support efficiently: they are variable
length, inefficient to hash and compare when long, expensive to copy, etc. StringMap is a specialized
container designed to cope with these issues. It supports mapping an arbitrary range of bytes to an arbitrary
other object.
The StringMap implementation uses a quadratically-probed hash table, where the buckets store a pointer to
the heap allocated entries (and some other stuff). The entries in the map must be heap allocated because the
strings are variable length. The string data (key) and the element object (value) are stored in the same
allocation with the string data immediately after the element object. This container guarantees the
"(char*)(&Value+1)" points to the key string for a value.
The StringMap is very fast for several reasons: quadratic probing is very cache efficient for lookups, the hash
value of strings in buckets is not recomputed when looking up an element, StringMap rarely has to touch the
memory for unrelated objects when looking up a value (even when hash collisions happen), hash table growth
does not recompute the hash values for strings already in the table, and each pair in the map is stored in a
single allocation (the string data is stored in the same allocation as the Value of a pair).
StringMap also provides query methods that take byte ranges, so it only ever copies a string if a value is
inserted into the table.
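A small sketch of the copy-only-on-insert behavior (the function is illustrative, and the exact insertion API has varied across LLVM versions):

```cpp
#include "llvm/ADT/StringMap.h"
using namespace llvm;

// Count occurrences of identifiers. The key bytes are copied into the
// entry's own allocation only when an entry is first created; lookups
// that find an existing entry never copy the string.
void countName(StringMap<unsigned> &Counts, StringRef Name) {
  ++Counts[Name];
}
```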
"llvm/ADT/IndexedMap.h"
IndexedMap is a specialized container for mapping small dense integers (or values that can be mapped to
small dense integers) to some other type. It is internally implemented as a vector with a mapping function that
maps the keys to the dense integer range.
This is useful for cases like virtual registers in the LLVM code generator: they have a dense mapping that is
offset by a compile-time constant (the first virtual register ID).
"llvm/ADT/DenseMap.h"
DenseMap is a simple quadratically probed hash table. It excels at supporting small keys and values: it uses a
single allocation to hold all of the pairs that are currently inserted in the map. DenseMap is a great way to map
pointers to pointers, or map other small types to each other.
There are several aspects of DenseMap that you should be aware of, however. The iterators in a DenseMap are
invalidated whenever an insertion occurs, unlike std::map. Also, because DenseMap allocates space for a large
number of key/value pairs (it starts with 64 by default), it will waste a lot of space if your keys or values are
large. Finally, you must implement a partial specialization of DenseMapInfo for the key that you want, if it
isn't already supported. This is required to tell DenseMap about two special marker values (which can never
be inserted into the map) that it needs internally.
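A small sketch of the pointer-to-pointer case (the function name is illustrative); pointer keys already have a DenseMapInfo specialization, so no extra setup is needed here:

```cpp
#include "llvm/ADT/DenseMap.h"
using namespace llvm;

void recordReplacement(DenseMap<Instruction*, Instruction*> &Repl,
                       Instruction *Old, Instruction *New) {
  // Insertion may grow the table, so any outstanding iterators
  // into Repl are invalid after this line.
  Repl[Old] = New;
}
```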
"llvm/ADT/ValueMap.h"
ValueMap is a wrapper around a DenseMap mapping Value*s (or subclasses) to another type. When a Value
is deleted or RAUW'ed, ValueMap will update itself so the new version of the key is mapped to the same
value, just as if the key were a WeakVH. You can configure exactly how this happens, and what else happens
on these two events, by passing a Config parameter to the ValueMap template.
<map>
std::map has similar characteristics to std::set: it uses a single allocation per pair inserted into the map, it
offers log(n) lookup with an extremely large constant factor, imposes a space penalty of 3 pointers per pair in
the map, etc.
std::map is most useful when your keys or values are very large, if you need to iterate over the collection in
sorted order, or if you need stable iterators into the map (i.e. they don't get invalidated if an insertion or
deletion of another element takes place).
std::multimap is useful if you want to map a key to multiple values, but has all the drawbacks of std::map. A
sorted vector or some other approach is almost always better.
String-like containers
TODO: const char* vs stringref vs smallstring vs std::string. Describe twine, xref to #string_apis.
One additional option is std::vector<bool>: we discourage its use for two reasons: 1) the
implementation in many common compilers (e.g. commonly available versions of GCC) is extremely
inefficient, and 2) the C++ standards committee is likely to deprecate this container and/or change it
significantly. In any case, please don't use it.
BitVector
The BitVector container provides a dynamic size set of bits for manipulation. It supports individual bit
setting/testing, as well as set operations. The set operations take time O(size of bitvector), but operations are
performed one word at a time, instead of one bit at a time. This makes the BitVector very fast for set
operations compared to other containers. Use the BitVector when you expect the number of set bits to be high
(i.e. a dense set).
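A short sketch of word-at-a-time set operations (the sizes and bit positions are illustrative):

```cpp
#include "llvm/ADT/BitVector.h"
using namespace llvm;

void bitVectorDemo() {
  BitVector Live(128), Killed(128);  // 128 bits each, initially all false
  Live.set(5);
  Live.set(64);
  Killed.set(64);
  Killed.flip();     // complement, processed one word at a time
  Live &= Killed;    // intersection: clears bit 64, keeps bit 5
}
```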
SmallBitVector
The SmallBitVector container provides the same interface as BitVector, but it is optimized for the case where
only a small number of bits, less than 25 or so, are needed. It also transparently supports larger bit counts, but
slightly less efficiently than a plain BitVector, so SmallBitVector should only be used when larger counts are
rare.
At this time, SmallBitVector does not support set operations (and, or, xor), and its operator[] does not provide
an assignable lvalue.
SparseBitVector
The SparseBitVector container is much like BitVector, with one major difference: Only the bits that are set,
are stored. This makes the SparseBitVector much more space efficient than BitVector when the set is sparse,
as well as making set operations O(number of set bits) instead of O(size of universe). The downside to the
SparseBitVector is that setting and testing of random bits is O(N), and on large SparseBitVectors, this can be
slower than BitVector. In our implementation, setting or testing bits in sorted order (either forwards or
reverse) is O(1) worst case. Testing and setting bits within 128 bits (depends on size) of the current bit is also
O(1). As a general statement, testing/setting bits in a SparseBitVector is O(distance away from last set bit).
Because this is a "how-to" section, you should also read about the main classes that you will be working with.
The Core LLVM Class Hierarchy Reference contains details and descriptions of the main classes that you
should know about.
Because the pattern for iteration is common across many different aspects of the program representation, the
standard template library algorithms may be used on them, and it is easier to remember how to iterate. First
we show a few common examples of the data structures that need to be traversed. Other data structures are
traversed in very similar ways.
Note that i can be used as if it were a pointer for the purposes of invoking member functions of the
Instruction class. This is because the indirection operator is overloaded for the iterator classes. In the
above code, the expression i->size() is exactly equivalent to (*i).size() just like you'd expect.
However, this isn't really the best way to print out the contents of a BasicBlock! Since the ostream
operators are overloaded for virtually anything you'll care about, you could have just invoked the print routine
on the basic block itself: errs() << *blk << "\n";.
#include "llvm/Support/InstIterator.h"

// F is a pointer to a Function instance
for (inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I)
  errs() << *I << "\n";

Easy, isn't it? You can also use InstIterators to fill a work list with its initial contents. For example, if
you wanted to initialize a work list to contain all instructions in a Function F, all you would need to do is
something like:
std::set<Instruction*> worklist;
// or better yet, SmallPtrSet<Instruction*, 64> worklist;

for (inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I)
  worklist.insert(&*I);

The STL set worklist would now contain all instructions in the Function pointed to by F.
However, the iterators you'll be working with in the LLVM framework are special: they will automatically
convert to a ptr-to-instance type whenever they need to. Instead of dereferencing the iterator and then taking
the address of the result, you can simply assign the iterator to the proper pointer type and you get the
dereference and address-of operation as a result of the assignment (behind the scenes, this is a result of
overloading casting mechanisms). Thus the last line of the last example,

Instruction *pinst = &*i;

is semantically equivalent to

Instruction *pinst = i;
It's also possible to turn a class pointer into the corresponding iterator, and this is a constant time operation
(very efficient). The following code snippet illustrates use of the conversion constructors provided by LLVM
iterators. By using these, you can explicitly grab the iterator of something without actually obtaining it via
iteration over some structure:

void printNextInstruction(Instruction* inst) {
  BasicBlock::iterator it(inst);
  ++it; // After this line, it refers to the instruction after *inst
  if (it != inst->getParent()->end()) errs() << *it << "\n";
}
And the actual code is (remember, because we're writing a FunctionPass, our FunctionPass-derived
class simply has to override the runOnFunction method):
class OurFunctionPass : public FunctionPass {
  // runOnFunction walks every instruction in F, bumping callCounter
  // for each CallInst it encounters (loop body elided in this excerpt).
private:
  unsigned callCounter;
};
This class has "value semantics": it should be passed by value, not by reference and it should not be
dynamically allocated or deallocated using operator new or operator delete. It is efficiently
copyable, assignable and constructable, with costs equivalents to that of a bare pointer. If you look at its
definition, it has only a single pointer member.
Function *F = ...;

for (Value::use_iterator i = F->use_begin(), e = F->use_end(); i != e; ++i)
  if (Instruction *Inst = dyn_cast<Instruction>(*i)) {
    errs() << "F is used in instruction:\n";
    errs() << *Inst << "\n";
  }
Alternately, it's common to have an instance of the User Class and need to know what Values are used by it.
The list of all Values used by a User is known as a use-def chain. Instances of class Instruction are
common Users, so we might want to iterate over all of the values that a particular instruction uses (that is,
the operands of the particular Instruction):

Instruction *pi = ...;

for (User::op_iterator i = pi->op_begin(), e = pi->op_end(); i != e; ++i) {
  Value *v = *i;
  // ...
}
#include "llvm/Support/CFG.h"

BasicBlock *BB = ...;

for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
  BasicBlock *Pred = *PI;
  // ...
}
Creation of Instructions is straight-forward: simply call the constructor for the kind of instruction to
instantiate and provide the necessary parameters. For example, an AllocaInst only requires a
(const-ptr-to) Type. Thus:

AllocaInst* ai = new AllocaInst(Type::Int32Ty);
will create an AllocaInst instance that represents the allocation of one integer in the current stack frame,
at run time. Each Instruction subclass is likely to have varying default parameters which change the
semantics of the instruction, so refer to the doxygen documentation for the subclass of Instruction that you're
interested in instantiating.
Naming values
It is very useful to name the values of instructions when you're able to, as this facilitates the debugging of
your transformations. If you end up looking at generated LLVM machine code, you definitely want to have
logical names associated with the results of instructions! By supplying a value for the Name (default)
parameter of the Instruction constructor, you associate a logical name with the result of the instruction's
execution at run time. For example, say that I'm writing a transformation that dynamically allocates space for
an integer on the stack, and that integer is going to be used as some kind of index by some other code. To
accomplish this, I place an AllocaInst at the first point in the first BasicBlock of some Function,
and I'm intending to use it within the same Function. I might do:

AllocaInst* pa = new AllocaInst(Type::Int32Ty, 0, "indexLoc", pi);
where indexLoc is now the logical name of the instruction's execution value, which is a pointer to an
integer on the run time stack.
Inserting instructions
There are essentially two ways to insert an Instruction into an existing sequence of instructions that form
a BasicBlock:
Appending to the end of a BasicBlock is so common that the Instruction class and
Instruction-derived classes provide constructors which take a pointer to a BasicBlock to be
appended to. For example, code that looked like:

BasicBlock *pb = ...;
Instruction *newInst = new Instruction(...);
pb->getInstList().push_back(newInst); // Appends newInst to pb

becomes:

BasicBlock *pb = ...;
Instruction *newInst = new Instruction(..., pb);

which is much cleaner, especially if you are creating long instruction streams.
• Insertion into an implicit instruction list
Instruction instances that are already in BasicBlocks are implicitly associated with an
existing instruction list: the instruction list of the enclosing basic block. Thus, we could have
accomplished the same thing as the above code without being given a BasicBlock by doing:
pi->getParent()->getInstList().insert(pi, newInst);
In fact, this sequence of steps occurs so frequently that the Instruction class and
Instruction-derived classes provide constructors which take (as a default parameter) a pointer to
an Instruction which the newly-created Instruction should precede. That is,
Instruction constructors are capable of inserting the newly-created instance into the
BasicBlock of a provided instruction, immediately before that instruction. Using an
Instruction constructor with a insertBefore (default) parameter, the above code becomes:
Instruction* pi = ...;
Instruction* newInst = new Instruction(..., pi);
which is much cleaner, especially if you're creating a lot of instructions and adding them to
BasicBlocks.
Deleting Instructions
Deleting an instruction from an existing sequence of instructions that form a BasicBlock is very
straightforward: just call the instruction's eraseFromParent() method. For example:

Instruction *I = .. ;
I->eraseFromParent();
Replacing an Instruction with another Value
Including "llvm/Transforms/Utils/BasicBlockUtils.h" permits use of two very useful replace
functions: ReplaceInstWithValue and ReplaceInstWithInst.
• ReplaceInstWithValue
This function replaces all uses of a given instruction with a value, and then removes the original
instruction. The following example illustrates the replacement of the result of a particular
AllocaInst that allocates memory for a single integer with a null pointer to an integer.
AllocaInst *instToReplace = ...;
BasicBlock::iterator ii(instToReplace);
ReplaceInstWithValue(instToReplace->getParent()->getInstList(), ii,
Constant::getNullValue(PointerType::getUnqual(Type::Int32Ty)));
• ReplaceInstWithInst
This function replaces a particular instruction with another instruction, inserting the new instruction
into the basic block at the location where the old instruction was, and replacing any uses of the old
instruction with the new instruction. The following example illustrates the replacement of one
AllocaInst with another.
AllocaInst *instToReplace = ...;
BasicBlock::iterator ii(instToReplace);
ReplaceInstWithInst(instToReplace->getParent()->getInstList(), ii,
new AllocaInst(Type::Int32Ty, 0, "ptrToReplacedInt"));
Deleting GlobalVariables
Deleting a global variable from a module is just as easy as deleting an Instruction. First, you must have a
pointer to the global variable that you wish to delete. You use this pointer to erase it from its parent, the
module. For example:
GlobalVariable *GV = .. ;
GV->eraseFromParent();
Threads and LLVM
Note that LLVM's support for multithreading is still relatively young. Up through version 2.5, the execution
of threaded hosted applications was supported, but not threaded client access to the APIs. While this use case
is now supported, clients must adhere to the guidelines specified below to ensure proper operation in
multithreaded mode.
Note that, on Unix-like platforms, LLVM requires the presence of GCC's atomic intrinsics in order to support
threaded operation. If you need a multithreading-capable LLVM on a platform without a suitably modern
system compiler, consider compiling LLVM and LLVM-GCC in single-threaded mode, and using the
resultant compiler to build a copy of LLVM with multithreading support.
Note that both of these calls must be made in isolation. That is to say that no other LLVM API calls may be
executing at any time during the execution of llvm_start_multithreaded() or
llvm_stop_multithreaded(). It is the client's responsibility to enforce this isolation.
The return value of llvm_start_multithreaded() indicates the success or failure of the initialization.
Failure typically indicates that your copy of LLVM was built without multithreading support, usually
because GCC atomic intrinsics were not found in your system compiler. In this case, the LLVM API will not
be safe for concurrent calls. However, it will be safe for hosting threaded applications in the JIT, though care
must be taken to ensure that side exits and the like do not accidentally result in concurrent LLVM API calls.
Note that, if you use scope-based shutdown, you can use the llvm_shutdown_obj class, which calls
llvm_shutdown() in its destructor.
Note that, because no other threads are allowed to issue LLVM API calls before
llvm_start_multithreaded() returns, it is possible to have ManagedStatics of
llvm::sys::Mutexs.
Achieving Isolation with LLVMContext
Conceptually, LLVMContext provides isolation. Every LLVM entity (Modules, Values, Types,
Constants, etc.) in LLVM's in-memory IR belongs to an LLVMContext. Entities in different contexts
cannot interact with each other: Modules in different contexts cannot be linked together, Functions cannot
be added to Modules in different contexts, etc. This means that it is safe to compile on multiple
threads simultaneously, as long as no two threads operate on entities within the same context.
In practice, very few places in the API require the explicit specification of an LLVMContext, other than the
Type creation/lookup APIs. Because every Type carries a reference to its owning context, most other entities
can determine what context they belong to by looking at their own Type. If you are adding new entities to
LLVM IR, please try to maintain this interface design.
For clients that do not require the benefits of isolation, LLVM provides a convenience API
getGlobalContext(). This returns a global, lazily initialized LLVMContext that may be used in
situations where isolation is not a concern.
Advanced Topics
This section describes some of the advanced or obscure APIs that most clients do not need to be aware of.
These APIs tend to manage the inner workings of the LLVM system, and only need to be accessed in unusual
circumstances.
LLVM Type Resolution
The LLVM type system has a very simple goal: allow clients to compare types for structural equality with a
simple pointer comparison (aka a shallow compare).
Unfortunately, achieving this goal is not a simple matter. In particular, recursive types and late resolution of
opaque types make the situation very difficult to handle. Fortunately, for the most part, our implementation
allows most clients to remain completely unaware of the nasty internal details. The primary case where clients
are exposed to the inner workings is when building a recursive type. In addition to this case, the LLVM
bitcode reader, assembly parser, and linker also have to be aware of the inner workings of this system.
For our purposes below, we need three concepts. First, an "Opaque Type" is exactly as defined in the
language reference. Second, an "Abstract Type" is any type which includes an opaque type as part of its type
graph (for example "{ opaque, i32 }"). Third, a concrete type is a type that is not an abstract type (e.g.
"{ i32, float }").
// Create "{ opaque*, i32 }" using a temporary opaque type
PATypeHolder StructTy = OpaqueType::get();
std::vector<const Type*> Elts;
Elts.push_back(PointerType::getUnqual(StructTy));
Elts.push_back(Type::Int32Ty);
StructType *NewSTy = StructType::get(Elts);
// At this point, NewSTy = "{ opaque*, i32 }". Tell VMCore that
// the struct and the opaque type are actually the same.
cast<OpaqueType>(StructTy.get())->refineAbstractTypeTo(NewSTy);
// Add a name for the type to the module symbol table (optional)
MyModule->addTypeName("mylist", cast<StructType>(StructTy.get()));
This code shows the basic approach used to build recursive types: build a non-recursive type using 'opaque',
then use type unification to close the cycle. The type unification step is performed by the
refineAbstractTypeTo method, which is described next. After that, we describe the PATypeHolder
class.
In the example above, the OpaqueType object is definitely deleted. Additionally, if there is an "{ \2*, i32}"
type already created in the system, the pointer and struct type created are also deleted. Obviously whenever a
type is deleted, any "Type*" pointers in the program are invalidated. As such, it is safest to avoid having any
"Type*" pointers to abstract types live across a call to refineAbstractTypeTo (note that non-abstract
types can never move or be deleted). To deal with this, the PATypeHolder class is used to maintain a stable
reference to a possibly refined type, and the AbstractTypeUser class is used to update more complex
data structures.
PATypeHolder is an extremely lightweight object that uses a lazy union-find implementation to update
pointers. For example, the pointer from a Value to its Type is maintained by PATypeHolder objects.
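To illustrate the lazy union-find idea in isolation, here is a small self-contained sketch. All names here are invented for illustration; this is not LLVM's actual PATypeHolder code, just the generic technique it is described as using:

```cpp
#include <cassert>
#include <vector>

// Generic lazy union-find with path compression. Each slot starts as its
// own representative; unite(From, To) forwards From's representative to
// To's, and find() lazily chases (and compresses) forwarding pointers.
struct LazyUnionFind {
  std::vector<int> Parent;
  explicit LazyUnionFind(int N) : Parent(N) {
    for (int i = 0; i != N; ++i)
      Parent[i] = i; // every slot is initially its own representative
  }
  // Chase forwarding pointers lazily, halving the path as we go so that
  // later lookups become nearly constant time.
  int find(int X) {
    while (Parent[X] != X) {
      Parent[X] = Parent[Parent[X]]; // path halving
      X = Parent[X];
    }
    return X;
  }
  // Loosely models refineAbstractTypeTo: after unite(From, To), every
  // stale reference to From lazily resolves to To's representative.
  void unite(int From, int To) { Parent[find(From)] = find(To); }
};
```

The point of the laziness is that a "refinement" is a single pointer write; holders of the old value pay the (amortized, shrinking) forwarding cost only when they next look it up.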
Note that the SymbolTable class should not be directly accessed by most clients. It should only be used
when iteration over the symbol table names themselves is required, which is a very special purpose. Note that
not all LLVM Values have names, and those without names (i.e. they have an empty name) do not exist in
the symbol table.
These symbol tables support iteration over the values/types in the symbol table with
begin/end/iterator, and support querying to see if a specific name is in the symbol table (with
lookup). The ValueSymbolTable class exposes no public mutator methods; instead, simply call
setName on a value, which will autoinsert it into the appropriate symbol table. For types, use the
Module::addTypeName method to insert entries into the symbol table.
• Layout a) The Use object(s) are inside (or at a fixed offset within) the User object and there are a fixed
number of them.
• Layout b) The Use object(s) are referenced by a pointer to an array from the User object and there
may be a variable number of them.
As of v2.4 each layout still possesses a direct pointer to the start of the array of Uses. Though not mandatory
for layout a), we stick to this redundancy for the sake of simplicity. The User object also stores the number
of Use objects it has. (Theoretically this information can also be calculated given the scheme presented
below.)
Special forms of allocation operators (operator new) enforce the following memory layouts:
• Layout a) is modelled by prepending the User object by the Use[] array.
...---.---.---.---.-------...
| P | P | P | P | User
'''---'---'---'---'-------'''
• Layout b) is modelled by pointing at the Use[] array.
.-------...
| User
'-------'''
|
v
.---.---.---.---...
| P | P | P | P |
'---'---'---'---'''
(In the above figures 'P' stands for the Use** that is stored in each Use object in the member Use::Prev)
A bit-encoding in the 2 LSBits (least significant bits) of the Use::Prev pointer makes it possible to find the
start of the User object:
• 00 —> binary digit 0
• 01 —> binary digit 1
• 10 —> stop and calculate (s)
• 11 —> full stop (S)
Given a Use*, all we have to do is to walk till we get a stop and we either have a User immediately behind
or we have to walk to the next stop picking up digits and calculating the offset:
.---.---.---.---.---.---.---.---.---.---.---.---.---.---.---.---.----------------
| 1 | s | 1 | 0 | 1 | 0 | s | 1 | 1 | 0 | s | 1 | 1 | s | 1 | S | User (or User*)
'---'---'---'---'---'---'---'---'---'---'---'---'---'---'---'---'----------------
|+15 |+10 |+6 |+3 |+1
| | | | |__>
| | | |__________>
| | |______________________>
| |______________________________________>
|__________________________________________________________>
Only the significant number of bits need to be stored between the stops, so that the worst case is 20 memory
accesses when there are 1000 Use objects associated with a User.
Reference implementation
The following literate Haskell fragment demonstrates the concept:
The reverse algorithm computes the length of the string just by examining a certain prefix:
>
> deepCheck p = check (defaultConfig { configMaxTest = 500 }) p
>
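Since most of the Haskell fragment is elided here, the following self-contained C++ sketch models the same waymarking scheme. All names are invented for illustration; LLVM's real implementation stores the tags in the 2 LSBits of Use::Prev rather than in a string:

```cpp
#include <cassert>
#include <string>

// Toy model of the waymark tags: way[i] holds the tag at distance i from
// the User (index 0 is unused). Tags are 'S' (full stop), 's' (stop), and
// the binary digits '0'/'1'.

// Lay out waymarks for N Use slots, mirroring the figure in the text: the
// digits after each stop (read toward the User, MSB first) spell the
// distance of the *next* stop from the User.
std::string buildWaymarks(int N) {
  std::string Way(N + 1, '?');
  Way[1] = 'S';
  int LastStop = 1, I = 2;
  while (I <= N) {
    // Emit the binary digits of LastStop LSB first, since we build the
    // array right-to-left; truncate silently if we run out of slots.
    for (int V = LastStop; V > 0 && I <= N; V >>= 1)
      Way[I++] = char('0' + (V & 1));
    if (I <= N) {
      Way[I] = 's';
      LastStop = I++;
    }
  }
  return Way;
}

// Recover a Use's distance from its User by reading tags only: skip digits
// until a stop. A full stop means the User is immediately behind; otherwise
// keep walking, and the digits after the stop give the next stop's distance.
int distanceToUser(const std::string &Way, int P) {
  int I = P, Steps = 0;
  while (Way[I] == '0' || Way[I] == '1') {
    --I;
    ++Steps;
  }
  if (Way[I] == 'S')
    return Steps + 1; // User is immediately behind the full stop
  --I;
  ++Steps; // step past the regular stop
  int Bits = 0;
  while (Way[I] == '0' || Way[I] == '1') {
    Bits = 2 * Bits + (Way[I] - '0');
    --I;
    ++Steps;
  }
  return Steps + Bits; // steps walked + distance of the stop we reached
}
```

For 16 slots this reproduces the tag sequence in the figure above (reading from the User outward: S, 1, s, 1, 1, s, 0, 1, 1, s, 0, 1, 0, 1, s, 1), and every position decodes back to its own distance.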
Tagging considerations
To maintain the invariant that the 2 LSBits of each Use** in Use never change after being set up, setters of
Use::Prev must re-tag the new Use** on every modification. Accordingly, getters must strip the tag bits.
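This tag-and-strip discipline can be sketched as follows (the enum and helper names are illustrative, not LLVM's exact code; the scheme assumes at least 4-byte pointer alignment so the 2 LSBits are free):

```cpp
#include <cassert>
#include <cstdint>

// Two tag bits packed into the low bits of an aligned pointer value,
// as described for Use::Prev.
enum PrevTag { zeroDigitTag = 0, oneDigitTag = 1, stopTag = 2, fullStopTag = 3 };

// Setters must re-tag the new pointer on every modification...
uintptr_t setPrev(uintptr_t NewPrev, PrevTag Tag) {
  assert((NewPrev & 3) == 0 && "pointer must be at least 4-byte aligned");
  return NewPrev | uintptr_t(Tag);
}

// ...and getters must strip the tag bits to recover the real pointer.
uintptr_t getPrev(uintptr_t Tagged) { return Tagged & ~uintptr_t(3); }
PrevTag getTag(uintptr_t Tagged) { return PrevTag(Tagged & 3); }
```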
For layout b), instead of the User we find a pointer (User* with LSBit set). Following this pointer brings us
to the User. A portable trick ensures that the first word of a User (if interpreted as a pointer) never has the
LSBit set. (Portability relies on the fact that all known compilers place the vptr in the first word of their
instances.)
The Core LLVM classes are the primary means of representing the program being inspected or transformed.
The core LLVM classes are defined in header files in the include/llvm/ directory, and implemented in
the lib/VMCore directory.
All other types are subclasses of DerivedType. Types can be named, but this is not a requirement. There
exists exactly one instance of a given shape at any one time. This allows type equality to be performed with
address equality of the Type instance. That is, given two Type* values, the types are identical if the pointers
are identical.
IntegerType
Subclass of DerivedType that represents integer types of any bit width. Any bit width between
IntegerType::MIN_INT_BITS (1) and IntegerType::MAX_INT_BITS (~8 million) can
be represented.
◊ static const IntegerType* get(unsigned NumBits): get an integer type of
a specific bit width.
◊ unsigned getBitWidth() const: Get the bit width of an integer type.
SequentialType
This is subclassed by ArrayType and PointerType
◊ const Type * getElementType() const: Returns the type of each of the elements
in the sequential type.
ArrayType
This is a subclass of SequentialType and defines the interface for array types.
◊ unsigned getNumElements() const: Returns the number of elements in the array.
PointerType
Subclass of SequentialType for pointer types.
VectorType
Subclass of SequentialType for vector types. A vector type is similar to an ArrayType but is
distinguished because it is a first class type whereas ArrayType is not. Vector types are used for
vector operations and are usually small vectors of an integer or floating point type.
StructType
Subclass of DerivedTypes for struct types.
FunctionType
Subclass of DerivedTypes for function types.
◊ bool isVarArg() const: Returns true if it is a vararg function.
◊ const Type * getReturnType() const: Returns the return type of the function.
◊ const Type * getParamType (unsigned i): Returns the type of the ith
parameter.
◊ const unsigned getNumParams() const: Returns the number of formal
parameters.
OpaqueType
Subclass of DerivedType for abstract types. This class defines no content and is used as a placeholder
for some other type. Note that OpaqueType is used (temporarily) during type resolution for forward
references of types. Once the referenced type is resolved, the OpaqueType is replaced with the actual
type. OpaqueType can also be used for data abstraction. At link time opaque types can be resolved to
actual types of the same name.
The Module class represents the top level structure present in LLVM programs. An LLVM module is
effectively either a translation unit of the original program or a combination of several translation units
merged by the linker. The Module class keeps track of a list of Functions, a list of GlobalVariables,
and a SymbolTable. Additionally, it contains a few helpful member functions that try to make common
operations easy.
Constructing a Module is easy. You can optionally provide a name for it (probably based on the name of the
translation unit).
These are forwarding methods that make it easy to access the contents of a Module object's
Function list.
• Module::FunctionListType &getFunctionList()
Returns the list of Functions. This is necessary to use when you need to update the list or perform a
complex action that doesn't have a forwarding method.
These are forwarding methods that make it easy to access the contents of a Module object's
GlobalVariable list.
• Module::GlobalListType &getGlobalList()
Returns the list of GlobalVariables. This is necessary to use when you need to update the list or
perform a complex action that doesn't have a forwarding method.
• SymbolTable *getSymbolTable()
Return a reference to the SymbolTable for this Module.
• Function *getFunction(const std::string &Name)
Look up the specified function in the Module SymbolTable. If it does not exist, return null.
• Function *getOrInsertFunction(const std::string &Name, const
FunctionType *T)
Look up the specified function in the Module SymbolTable. If it does not exist, add an external
declaration for the function and return it.
• std::string getTypeName(const Type *Ty)
If there is at least one entry in the SymbolTable for the specified Type, return it. Otherwise return
the empty string.
• bool addTypeName(const std::string &Name, const Type *Ty)
Insert an entry in the SymbolTable mapping Name to Ty. If there is already an entry for this name,
true is returned and the SymbolTable is not modified.
The Value class is the most important class in the LLVM source base. It represents a typed value that may
be used (among other things) as an operand to an instruction. There are many different types of Values, such
as Constants and Arguments; even Instructions and Functions are Values.
A particular Value may be used many times in the LLVM representation for a program. For example, an
incoming argument to a function (represented with an instance of the Argument class) is "used" by every
instruction in the function that references the argument. To keep track of this relationship, the Value class
keeps a list of all of the Users that are using it (the User class is a base class for all nodes in the LLVM graph
that can refer to Values). This use list is how LLVM represents def-use information in the program, and is
accessible through the use_* methods, shown below.
Because LLVM is a typed representation, every LLVM Value is typed, and this Type is available through
the getType() method. In addition, all LLVM values can be named. The "name" of the Value is a
symbolic string printed in the LLVM code:
The name of this instruction is "foo". NOTE that the name of any value may be missing (an empty string), so
names should ONLY be used for debugging (making the source code easier to read, debugging printouts),
they should not be used to keep track of values or map between them. For this purpose, use a std::map of
pointers to the Value itself instead.
One important aspect of LLVM is that there is no distinction between an SSA variable and the operation that
produces it. Because of this, any reference to the value produced by an instruction (or the value available as an
incoming argument, for example) is represented as a direct pointer to the instance of the class that represents
this value. Although this may take some getting used to, it simplifies the representation and makes it easier to
manipulate.
These methods are the interface to access the def-use information in LLVM. As with all other
iterators in LLVM, the naming conventions follow the conventions defined by the STL.
• Type *getType() const
This method returns the Type of the Value.
• bool hasName() const
std::string getName() const
void setName(const std::string &Name)
This family of methods is used to access and assign a name to a Value; be aware of the precaution
above.
• void replaceAllUsesWith(Value *V)
This method traverses the use list of a Value changing all Users of the current value to refer to "V"
instead. For example, if you detect that an instruction always produces a constant value (for example
through constant folding), you can replace all uses of the instruction with the constant like this:
Inst->replaceAllUsesWith(ConstVal);
The User class is the common base class of all LLVM nodes that may refer to Values. It exposes a list of
"Operands" that are all of the Values that the User is referring to. The User class itself is a subclass of
Value.
The operands of a User point directly to the LLVM Value that it refers to. Because LLVM uses Static
Single Assignment (SSA) form, there can only be one definition referred to, allowing this direct connection.
This connection provides the use-def information in LLVM.
• Value *getOperand(unsigned i)
unsigned getNumOperands()
These two methods expose the operands of the User in a convenient form for direct access.
• User::op_iterator - Typedef for iterator over the operand list
op_iterator op_begin() - Get an iterator to the start of the operand list.
op_iterator op_end() - Get an iterator to the end of the operand list.
Together, these methods make up the iterator based interface to the operands of a User.
The Instruction class is the common base class for all LLVM instructions. It provides only a few
methods, but is a very commonly used class. The primary data tracked by the Instruction class itself is
the opcode (instruction type) and the parent BasicBlock the Instruction is embedded into. To
represent a specific type of instruction, one of many subclasses of Instruction are used.
Because the Instruction class subclasses the User class, its operands can be accessed in the same way as
for other Users (with the getOperand()/getNumOperands() and op_begin()/op_end()
methods).
An important file for the Instruction class is the llvm/Instruction.def file. This file contains
some meta-data about the various different types of instructions in LLVM. It describes the enum values that
are used as opcodes (for example Instruction::Add and Instruction::ICmp), as well as the
concrete sub-classes of Instruction that implement the instruction (for example BinaryOperator and
CmpInst). Unfortunately, the use of macros in this file confuses doxygen, so these enum values don't show
up correctly in the doxygen output.
• BinaryOperator
This subclass represents all two-operand instructions whose operands must be the same type, except
for the comparison instructions.
• CastInst
This subclass is the parent of the 12 casting instructions. It provides common operations on cast
instructions.
• CmpInst
This subclass represents the two comparison instructions, ICmpInst (integer operands) and
FCmpInst (floating point operands).
• TerminatorInst
This subclass is the parent of all terminator instructions (those which can terminate a block).
• BasicBlock *getParent()
Returns the BasicBlock that this Instruction is embedded into.
• Instruction *clone() const
Returns another instance of the specified instruction, identical in all ways to the original except that
the instruction has no parent (i.e. it's not embedded into a BasicBlock), and it has no name.
Global values (GlobalVariables or Functions) are the only LLVM values that are visible in the bodies
of all Functions. Because they are visible at global scope, they are also subject to linking with other globals
defined in different translation units. To control the linking process, GlobalValues know their linkage
rules. Specifically, GlobalValues know whether they have internal or external linkage, as defined by the
LinkageTypes enumeration.
If a GlobalValue has internal linkage (equivalent to being static in C), it is not visible to code outside
the current translation unit, and does not participate in linking. If it has external linkage, it is visible to
external code, and does participate in linking. In addition to linkage information, GlobalValues keep track
of which Module they are currently part of.
Because GlobalValues are memory objects, they are always referred to by their address. As such, the
Type of a global is always a pointer to its contents. It is important to remember this when using the
GetElementPtrInst instruction because this pointer must be dereferenced first. For example, if you have
• Module *getParent()
This returns the Module that the GlobalValue is currently embedded into.
The Function class represents a single procedure in LLVM. It is actually one of the more complex classes
in the LLVM hierarchy because it must keep track of a large amount of data. The Function class keeps
track of a list of BasicBlocks, a list of formal Arguments, and a SymbolTable.
The list of BasicBlocks is the most commonly used part of Function objects. The list imposes an
implicit ordering of the blocks in the function, which indicates how the code will be laid out by the backend.
Additionally, the first BasicBlock is the implicit entry node for the Function. It is not legal in LLVM to
explicitly branch to this initial block. There are no implicit exit nodes, and in fact there may be multiple exit
nodes from a single Function. If the BasicBlock list is empty, this indicates that the Function is
actually a function declaration: the actual body of the function hasn't been linked in yet.
In addition to a list of BasicBlocks, the Function class also keeps track of the list of formal
Arguments that the function receives. This container manages the lifetime of the Argument nodes, just like
the BasicBlock list does for the BasicBlocks.
The SymbolTable is a very rarely used LLVM feature that is only used when you have to look up a value
by name. Aside from that, the SymbolTable is used internally to make sure that there are not conflicts
between the names of Instructions, BasicBlocks, or Arguments in the function body.
Note that Function is a GlobalValue and therefore also a Constant. The value of the function is its address
(after linking) which is guaranteed to be constant.
• bool isDeclaration()
Return whether or not the Function has a body defined. If the function is "external", it does not
have a body, and thus must be resolved by linking with a function defined in a different translation
unit.
• Function::iterator - Typedef for basic block list iterator
Function::const_iterator - Typedef for const_iterator.
begin(), end() size(), empty()
These are forwarding methods that make it easy to access the contents of a Function object's
BasicBlock list.
• Function::BasicBlockListType &getBasicBlockList()
Returns the list of BasicBlocks. This is necessary to use when you need to update the list or
perform a complex action that doesn't have a forwarding method.
• Function::arg_iterator - Typedef for the argument list iterator
Function::const_arg_iterator - Typedef for const_iterator.
arg_begin(), arg_end() arg_size(), arg_empty()
These are forwarding methods that make it easy to access the contents of a Function object's
Argument list.
• Function::ArgumentListType &getArgumentList()
Returns the list of Arguments. This is necessary to use when you need to update the list or perform a
complex action that doesn't have a forwarding method.
• BasicBlock &getEntryBlock()
Returns the entry BasicBlock for the function. Because the entry block for the function is always
the first block, this returns the first block of the Function.
• Type *getReturnType()
FunctionType *getFunctionType()
This traverses the Type of the Function and returns the return type of the function, or the
FunctionType of the actual function.
• SymbolTable *getSymbolTable()
Return a pointer to the SymbolTable for this Function.
Global variables are represented with the (surprise surprise) GlobalVariable class. Like functions,
GlobalVariables are also subclasses of GlobalValue, and as such are always referenced by their
address.
• GlobalVariable(const Type *Ty, bool isConstant, LinkageTypes Linkage,
Constant *Initializer = 0, const std::string &Name = "", Module *Parent = 0)
Create a new global variable of the specified type. If isConstant is true then the global variable
will be marked as unchanging for the program. The Linkage parameter specifies the type of linkage
(internal, external, weak, linkonce, appending) for the variable. If the linkage is InternalLinkage,
WeakAnyLinkage, WeakODRLinkage, LinkOnceAnyLinkage or LinkOnceODRLinkage, then the
resultant global variable will have internal linkage. AppendingLinkage concatenates together all
instances (in different translation units) of the variable into a single variable but is only applicable to
arrays. See the LLVM Language Reference for further details on linkage types. Optionally an
initializer, a name, and the module to put the variable into may be specified for the global variable as
well.
• bool isConstant() const
Returns true if this is a global variable that is known not to be modified at runtime.
• bool hasInitializer()
Returns true if this GlobalVariable has an initializer.
• Constant *getInitializer()
Returns the initial value for a GlobalVariable. It is not legal to call this method if there is no
initializer.
This class represents a single entry multiple exit section of the code, commonly known as a basic block by the
compiler community. The BasicBlock class maintains a list of Instructions, which form the body of
the block. Matching the language definition, the last element of this list of instructions is always a terminator
instruction (a subclass of the TerminatorInst class).
In addition to tracking the list of instructions that make up the block, the BasicBlock class also keeps track
of the Function that it is embedded into.
Note that BasicBlocks themselves are Values, because they are referenced by instructions like branches
and can go in the switch tables. BasicBlocks have type label.
These methods and typedefs are forwarding functions that have the same semantics as the standard
library methods of the same names. These methods expose the underlying instruction list of a basic
block in a way that is easy to manipulate. To get the full complement of container operations
(including operations to update the list), you must use the getInstList() method.
• BasicBlock::InstListType &getInstList()
This method is used to get access to the underlying container that actually holds the Instructions. This
method must be used when there isn't a forwarding function in the BasicBlock class for the
operation that you would like to perform. Because there are no forwarding functions for "updating"
operations, you need to use this if you want to update the contents of a BasicBlock.
• Function *getParent()
Returns a pointer to Function the block is embedded into, or a null pointer if it is homeless.
• TerminatorInst *getTerminator()
Returns a pointer to the terminator instruction that appears at the end of the BasicBlock. If there is
no terminator instruction, or if the last instruction in the block is not a terminator, then a null pointer
is returned.
1. Overview
2. Create a project from the Sample Project
3. Source tree layout
4. Writing LLVM-style Makefiles
1. Required Variables
2. Variables for Building Subdirectories
3. Variables for Building Libraries
4. Variables for Building Programs
5. Miscellaneous Variables
5. Placement of object code
6. Further help
Overview
The LLVM build system is designed to facilitate the building of third party projects that use LLVM header
files, libraries, and tools. In order to use these facilities, a Makefile from a project must do the following
things:
1. Set make variables. There are several variables that a Makefile needs to set to use the LLVM build
system:
♦ PROJECT_NAME - The name by which your project is known.
♦ LLVM_SRC_ROOT - The root of the LLVM source tree.
♦ LLVM_OBJ_ROOT - The root of the LLVM object tree.
♦ PROJ_SRC_ROOT - The root of the project's source tree.
♦ PROJ_OBJ_ROOT - The root of the project's object tree.
♦ PROJ_INSTALL_ROOT - The root installation directory.
♦ LEVEL - The relative path from the current directory to the project's root
($PROJ_OBJ_ROOT).
2. Include Makefile.config from $(LLVM_OBJ_ROOT).
3. Include Makefile.rules from $(LLVM_SRC_ROOT).
There are two ways that you can set all of these variables:
1. You can write your own Makefiles which hard-code these values.
2. You can use the pre-made LLVM sample project. This sample project includes Makefiles, a configure
script that can be used to configure the location of LLVM, and the ability to support multiple object
directories from a single source directory.
This document assumes that you will base your project on the LLVM sample project found in
llvm/projects/sample. If you want to devise your own build system, studying the sample project and
LLVM Makefiles will probably provide enough information on how to write your own Makefiles.
1. Copy the llvm/projects/sample directory to any place of your choosing. You can place it
anywhere you like. Rename the directory to match the name of your project.
You must be using Autoconf version 2.59 or later and your aclocal version should be 1.9 or later.
6. Run configure in the directory in which you want to place object code. Use the following options
to tell your project where it can find LLVM:
--with-llvmsrc=<directory>
Tell your project where the LLVM source tree is located.
--with-llvmobj=<directory>
Tell your project where the LLVM object tree is located.
--prefix=<directory>
Tell your project where it should get installed.
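Putting the options together, an out-of-tree build might look like this (all paths here are hypothetical examples, not prescribed locations):

```
% cd /tmp/myproj-objdir
% /tmp/myproj/configure --with-llvmsrc=/usr/src/llvm \
      --with-llvmobj=/usr/obj/llvm --prefix=/usr/local
```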
That's it! Now all you have to do is type gmake (or make if you're on a GNU/Linux system) in the root of your
object directory, and your project should build.
Underneath your top level directory, you should have the following directories:
lib
This subdirectory should contain all of your library source code. For each library that you build, you
will have one directory in lib that will contain that library's source code.
Libraries can be object files, archives, or dynamic libraries. The lib directory is just a convenient
place for libraries as it places them all in a directory from which they can be linked later.
include
This subdirectory should contain any header files that are global to your project. By placing your
header files in include, they will be found automatically by the LLVM build system. For example, if
you have a file include/jazz/note.h, then your source files can include it simply with
#include "jazz/note.h".
tools
This subdirectory should contain all of your source code for executables. For each program that you
build, you will have one directory in tools that will contain that program's source code.
test
This subdirectory should contain tests that verify that your code works correctly. Automated tests are
especially useful.
Currently, the LLVM build system provides basic support for tests. The LLVM system provides the
following:
◊ LLVM provides a Tcl procedure that is used by Dejagnu to run tests. It can be found in
llvm/lib/llvm-dg.exp. This test procedure uses RUN lines in the actual test case to
determine how to run the test. See the TestingGuide for more details. You can easily write
Makefile support similar to the Makefiles in llvm/test to use Dejagnu to run your
project's tests.
◊ LLVM contains an optional package called llvm-test which provides benchmarks and
programs that are known to compile with the LLVM GCC front ends. You can use these
programs to test your code, gather statistics information, and compare it to the current LLVM
performance statistics.
Currently, there is no way to hook your tests directly into the llvm/test testing harness.
You will simply need to find a way to use the source provided within that directory on your
own.
Typically, you will want to build your lib directory first followed by your tools directory.
Required Variables
LEVEL
This variable is the relative path from this Makefile to the top directory of your project's source code.
For example, if your source code is in /tmp/src, then the Makefile in /tmp/src/jump/high
would set LEVEL to "../..".
DIRS
This is a space separated list of subdirectories that should be built. They will be built, one at a time, in
the order specified.
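Putting these two required variables together, a project's top-level Makefile can be as small as the following sketch (the directory names are illustrative, taken from the layout conventions described above):

```make
# Top-level Makefile of a hypothetical project; LEVEL is "." at the root.
LEVEL = .

# Build the library directory before the tools directory.
DIRS = lib tools

# Pull in the LLVM Makefile system rules.
include $(LEVEL)/Makefile.common
```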
PARALLEL_DIRS
This is a list of subdirectories that can be built in parallel. They will be built after the
directories in DIRS have been built.
LIBRARYNAME
This variable contains the base name of the library that will be built. For example, to build a library
named libsample.a, LIBRARYNAME should be set to sample.
BUILD_ARCHIVE
By default, a library is a .o file that is linked directly into a program. To build an archive (also
known as a static library), set the BUILD_ARCHIVE variable.
SHARED_LIBRARY
If SHARED_LIBRARY is defined in your Makefile, a shared (or dynamic) library will be built.
TOOLNAME
This variable contains the name of the program that will be built. For example, to build an executable
named sample, TOOLNAME should be set to sample.
USEDLIBS
This variable holds a space separated list of libraries that should be linked into the program. These
libraries must either be LLVM libraries or libraries that come from your lib directory. The libraries
must be specified by their base name. For example, to link libsample.a, you would set USEDLIBS to
sample.
Similarly, to link a dynamic library such as libsample.so, you would add it to the LIBS variable
instead; for example, you would have the following line in your Makefile:
LIBS += -lsample
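Combining TOOLNAME, USEDLIBS, and LIBS, a minimal sketch of a tool Makefile (reusing the sample names from the examples above) might look like this:

```make
# tools/sample/Makefile: builds the 'sample' executable.
LEVEL = ../..
TOOLNAME = sample

# Link the project's static library libsample.a ...
USEDLIBS = sample
# ... or, to link the dynamic library libsample.so instead:
# LIBS += -lsample

include $(LEVEL)/Makefile.common
```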
Miscellaneous Variables
ExtraSource
This variable contains a space separated list of extra source files that need to be built. It is useful for
including the output of Lex and Yacc programs.
CFLAGS
CPPFLAGS
This variable can be used to add options to the C and C++ compiler, respectively. It is typically used
to add options that tell the compiler the location of additional directories to search for header files.
It is highly suggested that you append to CFLAGS and CPPFLAGS as opposed to overwriting them.
The master Makefiles may already have useful options in them that you may not want to overwrite.
Libraries
All libraries (static and dynamic) will be stored in PROJ_OBJ_ROOT/<type>/lib, where type is
Debug, Release, or Profile for a debug, optimized, or profiled build, respectively.
Executables
All executables will be stored in PROJ_OBJ_ROOT/<type>/bin, where type is Debug,
Release, or Profile for a debug, optimized, or profiled build, respectively.
Further Help
If you have any questions or need any help creating an LLVM project, the LLVM team would be more than
happy to help. You can always post your questions to the LLVM Developers Mailing List.
John Criswell
The LLVM Compiler Infrastructure
Last modified: $Date: 2009-08-13 15:08:52 -0500 (Thu, 13 Aug 2009) $
1. Introduction
2. General Concepts
1. Projects
2. Variable Values
3. Including Makefiles
1. Makefile
2. Makefile.common
3. Makefile.config
4. Makefile.rules
4. Comments
3. Tutorial
1. Libraries
1. Bitcode Modules
2. Loadable Modules
2. Tools
1. JIT Tools
3. Projects
4. Targets Supported
1. all
2. all-local
3. check
4. check-local
5. clean
6. clean-local
7. dist
8. dist-check
9. dist-clean
10. install
11. preconditions
12. printvars
13. reconfigure
14. spotless
15. tags
16. uninstall
5. Using Variables
1. Control Variables
2. Override Variables
3. Readable Variables
4. Internal Variables
Introduction
This document provides usage information about the LLVM makefile system. While loosely patterned after
the BSD makefile system, LLVM has taken a departure from BSD in order to implement additional features
needed by LLVM. Although makefile systems such as automake were attempted at one point, it has become
clear that the features needed by LLVM depart too far from the makefile norm for a more limited tool.
Consequently, LLVM simply requires GNU Make 3.79, a widely portable makefile processor. LLVM
unabashedly makes heavy use of the features of GNU Make so the dependency on GNU Make is firm. If
you're not familiar with make, it is recommended that you read the GNU Makefile Manual.
While this document is rightly part of the LLVM Programmer's Manual, it is treated separately here because
of the volume of content and because it is often an early source of bewilderment for new developers.
General Concepts
The LLVM Makefile System is the component of LLVM that is responsible for building the software, testing
it, generating distributions, checking those distributions, installing and uninstalling, etc. It consists of several
files throughout the source tree. These files and other general concepts are described in this section.
Projects
The LLVM Makefile System is quite generous. It not only builds its own software, but it can build yours too.
Built into the system is knowledge of the llvm/projects directory. Any directory under projects that
has both a configure script and a Makefile is assumed to be a project that uses the LLVM Makefile
system. Building software that uses LLVM requires neither the LLVM Makefile System nor placement
in the llvm/projects directory. However, doing so will allow your project to get up and running quickly
by utilizing the built-in features that are used to compile LLVM. LLVM compiles itself using the same
features of the makefile system as used for projects.
For complete details on setting up your project's configuration, simply mimic the
llvm/projects/sample project, or consult the Projects.html page for further details.
Variable Values
To use the makefile system, you simply create a file named Makefile in your directory and declare values
for certain variables. The variables and values that you select determine what the makefile system will do.
These variables enable rules and processing in the makefile system that automatically Do The Right Thing™.
Including Makefiles
Setting variables alone is not enough. You must include into your Makefile additional files that provide the
rules of the LLVM Makefile system. The various files involved are described in the sections that follow.
Makefile
Each directory to participate in the build needs to have a file named Makefile. This is the file first read by
make. It has three sections:
1. Settable Variables - Required variables that must be set first.
2. include $(LEVEL)/Makefile.common - includes the LLVM Makefile system.
3. Override Variables - Overrides of variables set by the LLVM Makefile system.
Makefile.common
Every project must have a Makefile.common file at its top source directory. This file serves three
purposes:
1. It includes the project's configuration makefile to obtain values determined by the configure
script. This is done by including the $(LEVEL)/Makefile.config file.
2. It specifies any other (static) values that are needed throughout the project. Only values that are used
in all or a large proportion of the project's directories should be placed here.
3. It includes the standard rules for the LLVM Makefile system,
$(LLVM_SRC_ROOT)/Makefile.rules. This file is the "guts" of the LLVM Makefile system.
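Taken together, the three purposes above mean a project's Makefile.common often reduces to a sketch like the following (the extra include path is a hypothetical example of a project-wide static setting):

```make
# Makefile.common: obtain configured values first ...
include $(LEVEL)/Makefile.config

# ... then any project-wide static settings, e.g. a common header search path:
# CPPFLAGS += -I$(PROJ_SRC_ROOT)/include

# ... and finally the standard LLVM Makefile system rules.
include $(LLVM_SRC_ROOT)/Makefile.rules
```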
Makefile.config
Every project must have a Makefile.config at the top of its build directory. This file is generated by
the configure script from the pattern provided by the Makefile.config.in file located at the top of
the project's source directory. Its contents depend largely on what configuration items the project uses, but
most projects can get what they need by simply relying on LLVM's configuration found in
$(LLVM_OBJ_ROOT)/Makefile.config.
Makefile.rules
This file, located at $(LLVM_SRC_ROOT)/Makefile.rules is the heart of the LLVM Makefile System.
It provides all the logic, dependencies, and rules for building the targets supported by the system. What it does
largely depends on the values of make variables that have been set before Makefile.rules is included.
Comments
User Makefiles need not have comments in them unless the construction is unusual or it does not strictly
follow the rules and patterns of the LLVM makefile system. Makefile comments are invoked with the pound
(#) character. The # character and any text following it, to the end of the line, are ignored by make.
Tutorial
This section provides some examples of the different kinds of modules you can build with the LLVM
makefile system. In general, each directory you provide will build a single object although that object may be
composed of additionally compiled components.
Libraries
Only a few variable definitions are needed to build a regular library. Normally, the makefile system will build
all the software into a single libname.o (pre-linked) object. This means the library is not searchable and
that the distinction between compilation units has been dissolved. Optionally, you can ask for a shared library
(.so) or an archive library (.a) to be built. Archive libraries are the default. For example:
LIBRARYNAME = mylib
SHARED_LIBRARY = 1
ARCHIVE_LIBRARY = 1
says to build a library named "mylib" with both a shared library (mylib.so) and an archive library
(mylib.a) version. The contents of all the libraries produced will be the same; they are just constructed
differently. Note that you normally do not need to specify the sources involved. The LLVM Makefile system
will infer the source files from the contents of the source directory.
Bitcode Modules
In some situations, it is desirable to build a single bitcode module from a variety of sources, instead of an
archive, shared library, or bitcode library. Bitcode modules can be specified in addition to any of the other
types of libraries by defining the MODULE_NAME variable. For example:
LIBRARYNAME = mylib
BYTECODE_LIBRARY = 1
MODULE_NAME = mymod
will build a module named mymod.bc from the sources in the directory. This module will be an aggregation
of all the bitcode modules derived from the sources. The example will also build a bitcode archive containing
a bitcode module for each compiled source file. The difference is subtle, but important depending on how the
module or library is to be linked.
Loadable Modules
In some situations, you need to create a loadable module. Loadable modules can be loaded into programs like
opt or llc to specify additional passes to run or targets to support. Loadable modules are also useful for
debugging a pass or providing a pass with another package if that pass can't be included in LLVM.
LLVM provides complete support for building such a module. All you need to do is use the
LOADABLE_MODULE variable in your Makefile. For example, to build a loadable module named MyMod
that uses the LLVM libraries LLVMSupport.a and LLVMSystem.a, you would specify:
LIBRARYNAME := MyMod
LOADABLE_MODULE := 1
LINK_COMPONENTS := support system
Use of the LOADABLE_MODULE facility implies several things:
1. There will be no "lib" prefix on the module. This differentiates it from a standard shared library of the
same name.
2. The SHARED_LIBRARY variable is turned on.
3. The LINK_LIBS_IN_SHARED variable is turned on.
A loadable module is loaded by LLVM via the facilities of libtool's libltdl library, which is part of the
lib/System implementation.
Tools
For building executable programs (tools), you must provide the name of the tool and the names of the libraries
you wish to link with the tool. For example:
TOOLNAME = mytool
USEDLIBS = mylib
LINK_COMPONENTS = support system
says that we are to build a tool named mytool and that it requires three libraries: mylib, LLVMSupport.a,
and LLVMSystem.a.
Note that two different variables are used to indicate which libraries are linked: USEDLIBS and LLVMLIBS.
This distinction is necessary to support projects. LLVMLIBS refers to the LLVM libraries found in the LLVM
object directory. USEDLIBS refers to the libraries built by your project. In the case of building LLVM tools,
USEDLIBS and LLVMLIBS can be used interchangeably since the "project" is LLVM itself and USEDLIBS
refers to the same place as LLVMLIBS.
Also note that there are two different ways of specifying a library: with a .a suffix and without. Without the
suffix, the entry refers to the re-linked (.o) file, which will include all symbols of the library. This is useful, for
example, to include all passes from a library of passes. If the .a suffix is used, then the library is linked as a
searchable library (with the -l option); in this case, only the symbols that are unresolved at that point will be
resolved from the library.
JIT Tools
Many tools will want to use the JIT features of LLVM. To do this, you simply specify that you want an
execution 'engine', and the makefiles will automatically link in the appropriate JIT for the host or an
interpreter if none is available:
TOOLNAME = my_jit_tool
USEDLIBS = mylib
LINK_COMPONENTS = engine
Of course, any additional libraries may be listed as other components. To get a full understanding of how this
changes the linker command, it is recommended that you:
cd examples/Fibonacci
make VERBOSE=1
Targets Supported
This section describes each of the targets that can be built using the LLVM Makefile system. Any target can
be invoked from any directory but not all are applicable to a given directory (e.g. "check", "dist" and "install"
will always operate as if invoked from the top level directory).
Target Name     Implied Targets  Description
all                              Compile the software recursively. Default target.
all-local                        Compile the software in the local directory only.
check                            Change to the test directory in a project and run the test suite there.
check-local                      Run a local test suite. Generally this is only defined in the Makefile
                                 of the project's test directory.
clean                            Remove built objects recursively.
clean-local                      Remove built objects from the local directory only.
dist            all              Prepare a source distribution tarball.
dist-check      all              Prepare a source distribution tarball and check that it builds.
dist-clean      clean            Clean source distribution tarball temporary files.
install         all              Copy built objects to installation directory.
preconditions   all              Check to make sure configuration and makefiles are up to date.
printvars       all              Print variables defined by the makefile system (for debugging).
tags                             Make C and C++ tags files for emacs and vi.
uninstall                        Remove built objects from installation directory.
all (default)
When you invoke make with no arguments, you are implicitly instructing it to seek the "all" target (goal).
This target is used for building the software recursively and will do different things in different directories.
For example, in a lib directory, the "all" target will compile source files and generate libraries. But, in a
tools directory, it will link libraries and generate executables.
check
This target can be invoked from anywhere within a project's directories but always invokes the
check-local target in the project's test directory, if it exists and has a Makefile. A warning is
produced otherwise. If TESTSUITE is defined on the make command line, it will be passed down to the
invocation of make check-local in the test directory. The intended usage for this is to assist in
running specific suites of tests. If TESTSUITE is not set, the implementation of check-local should run
all normal tests. It is up to the project to define what different values for TESTSUITE will do. See the
TestingGuide for further details.
check-local
This target should be implemented by the Makefile in the project's test directory. It is invoked by the
check target elsewhere. Each project is free to define the actions of check-local as appropriate for that
project. The LLVM project itself uses Dejagnu to run a suite of feature and regression tests. Other projects may
choose to use dejagnu or any other testing mechanism.
clean
This target cleans the build directory, recursively removing all things that the Makefile builds. The cleaning
rules have been guarded so they shouldn't go awry (via a stray rm -f $(UNSET_VARIABLE)/*, which
would attempt to erase the entire directory structure).
clean-local
This target does the same thing as clean but only for the current (local) directory.
dist
This target builds a distribution tarball. It first builds the entire project using the all target and then tars up
the necessary files and compresses it. The generated tarball is sufficient for a casual source distribution, but
probably not for a release (see dist-check).
dist-check
This target does the same thing as the dist target but also checks the distribution tarball. The check is made
by unpacking the tarball to a new directory, configuring it, building it, installing it, and then verifying that the
installation results are correct (by comparing to the original build). This target can take a long time to run but
should be done before a release goes out to make sure that the distributed tarball can actually be built into a
working release.
dist-clean
This is a special form of the clean target. It performs a normal clean but also removes things
pertaining to building the distribution.
install
This target finalizes shared objects and executables and copies all libraries, headers, executables and
documentation to the directory given with the --prefix option to configure. When completed, the
prefix directory will have everything needed to use LLVM.
The LLVM makefiles can generate complete internal documentation for all the classes by using doxygen.
By default, this feature is not enabled because it takes a long time and generates a massive amount of data
(>100MB). If you want this feature, you must configure LLVM with the --enable-doxygen switch and ensure
that a modern version of doxygen (1.3.7 or later) is available in your PATH. You can download doxygen from
its web site.
preconditions
This utility target checks to see if the Makefile in the object directory is older than the Makefile in the
source directory and copies it if so. It also reruns the configure script if that needs to be done and rebuilds
the Makefile.config file similarly. Users may overload this target to ensure that sanity checks are run
before any building of targets as all the targets depend on preconditions.
printvars
This utility target just causes the LLVM makefiles to print out some of the makefile variables so that you can
double check how things are set.
reconfigure
This utility target will force a reconfigure of LLVM or your project. It simply runs
$(PROJ_OBJ_ROOT)/config.status --recheck to rerun the configuration tests and rebuild the
configured files. This isn't generally useful as the makefiles will reconfigure themselves whenever it's
necessary.
spotless
This utility target, only available when $(PROJ_OBJ_ROOT) is not the same as $(PROJ_SRC_ROOT),
will completely clean the $(PROJ_OBJ_ROOT) directory by removing its content entirely and reconfiguring
the directory. This returns the $(PROJ_OBJ_ROOT) directory to a completely fresh state. All content in the
directory except configured files and top-level makefiles will be lost.
tags
This target will generate a TAGS file in the top-level source directory. It is meant for use with emacs,
XEmacs, or Vim. The TAGS file provides an index of symbol definitions so that the editor can jump to
the definition quickly.
uninstall
This target is the opposite of the install target. It removes the header, library and executable files from the
installation directories. Note that the directories themselves are not removed because it is not guaranteed that
LLVM is the only thing installing there (e.g. --prefix=/usr).
Variables
Variables are used to tell the LLVM Makefile System what to do and to obtain information from it. Variables
are also used internally by the LLVM Makefile System. Variable names that contain only uppercase letters
and underscores are intended for use by the end user. All other variables are internal to the
LLVM Makefile System and should not be relied upon nor modified. The sections below describe how to use
the LLVM Makefile variables.
Control Variables
Variables listed in the table below should be set before the inclusion of $(LEVEL)/Makefile.common.
These variables provide input to the LLVM make system that tell it what to do for the current directory.
BUILD_ARCHIVE
If set to any value, causes an archive (.a) library to be built.
BUILT_SOURCES
Specifies a set of source files that are generated from other source files. These sources will be built
before any other target processing to ensure they are present.
TOOL_VERBOSE
Implies VERBOSE and also passes -v to the GCC compilers, which causes them to print out the
command lines used to invoke sub-tools (compiler, assembler, linker).
USEDLIBS
Specifies the list of project libraries that will be linked into the tool or library.
VERBOSE
Tells the Makefile system to produce detailed output of what it is doing instead of just summary
comments. This will generate a LOT of output.
Override Variables
Override variables can be used to override the default values provided by the LLVM makefile system. These
variables can be set on the make command line, in your Makefile, or in the environment. Variables marked
(configured) receive their default value from the configure script; variables marked (defaulted)
have a default provided by the makefile system.
AR (defaulted)
Specifies the path to the ar tool.
PROJ_OBJ_DIR
The directory into which the products of build rules will be placed. This might be the same as
PROJ_SRC_DIR but typically is not.
PROJ_SRC_DIR
The directory which contains the source files to be built.
BZIP2(configured)
The path to the bzip2 tool.
CC(configured)
The path to the 'C' compiler.
CFLAGS
Additional flags to be passed to the 'C' compiler.
CXX
Specifies the path to the C++ compiler.
CXXFLAGS
Additional flags to be passed to the C++ compiler.
DATE(configured)
Specifies the path to the date program or any program that can generate the current date and time on
its standard output
DOT(configured)
Specifies the path to the dot tool or false if there isn't one.
ECHO(configured)
Specifies the path to the echo tool for printing output.
EXEEXT(configured)
Provides the extension to be used on executables built by the makefiles. The value may be empty on
platforms that do not use file extensions for executables (e.g. Unix).
INSTALL(configured)
Specifies the path to the install tool.
LDFLAGS(configured)
Allows users to specify additional flags to pass to the linker.
LIBS(configured)
The list of libraries that should be linked into the tools.
Readable Variables
Variables listed in the table below can be used by the user's Makefile but should not be changed. Changing the
value will generally cause the build to go wrong, so don't do it.
bindir
The directory into which executables will ultimately be installed. This value is derived from the
--prefix option given to configure.
Internal Variables
Variables listed below are used by the LLVM Makefile System and considered internal. You should not use
these variables under any circumstances.
Reid Spencer
The LLVM Compiler Infrastructure
Last modified: $Date: 2010-02-23 04:00:53 -0600 (Tue, 23 Feb 2010) $
1. Introduction
2. Quick Start Guide
1. Boolean Arguments
2. Argument Aliases
3. Selecting an alternative from a set of possibilities
4. Named alternatives
5. Parsing a list of options
6. Collecting options as a set of flags
7. Adding freeform text to help output
3. Reference Guide
1. Positional Arguments
◊ Specifying positional options with hyphens
◊ Determining absolute position with getPosition
◊ The cl::ConsumeAfter modifier
2. Internal vs External Storage
3. Option Attributes
4. Option Modifiers
◊ Hiding an option from -help output
◊ Controlling the number of occurrences required and allowed
◊ Controlling whether or not a value must be specified
◊ Controlling other formatting options
◊ Miscellaneous option modifiers
◊ Response files
5. Top-Level Classes and Functions
◊ The cl::ParseCommandLineOptions function
◊ The cl::ParseEnvironmentOptions function
◊ The cl::SetVersionPrinter function
◊ The cl::opt class
◊ The cl::list class
◊ The cl::bits class
◊ The cl::alias class
◊ The cl::extrahelp class
6. Builtin parsers
◊ The Generic parser<t> parser
◊ The parser<bool> specialization
◊ The parser<boolOrDefault> specialization
◊ The parser<string> specialization
◊ The parser<int> specialization
◊ The parser<double> and parser<float> specializations
4. Extension Guide
1. Writing a custom parser
2. Exploiting external storage
3. Dynamically adding command line options
Introduction
This document describes the CommandLine argument processing library. It will show you how to use it, and
what it can do. The CommandLine library uses a declarative approach to specifying the command line options
that your program takes.
Although there are a lot of command line argument parsing libraries out there in many different languages,
none of them fit well with what I needed. By looking at the features and problems of other libraries, I
designed the CommandLine library to have the following features:
1. Speed: The CommandLine library is very quick and uses little resources. The parsing time of the
library is directly proportional to the number of arguments parsed, not the number of options
recognized. Additionally, command line argument values are captured transparently into user defined
global variables, which can be accessed like any other variable (and with the same performance).
2. Type Safe: As a user of CommandLine, you don't have to worry about remembering the type of
arguments that you want (is it an int? a string? a bool? an enum?) and keep casting it around. Not only
does this help prevent error prone constructs, it also leads to dramatically cleaner source code.
3. No subclasses required: To use CommandLine, you instantiate variables that correspond to the
arguments that you would like to capture, you don't subclass a parser. This means that you don't have
to write any boilerplate code.
4. Globally accessible: Libraries can specify command line arguments that are automatically enabled in
any tool that links to the library. This is possible because the application doesn't have to keep a list of
arguments to pass to the parser. This also makes supporting dynamically loaded options trivial.
5. Cleaner: CommandLine supports enum and other types directly, meaning that there is less room for
error and more safety built into the library. You don't have to worry about whether your integral command
line argument accidentally got assigned a value that is not valid for your enum type.
6. Powerful: The CommandLine library supports many different types of arguments, from simple
boolean flags to scalar arguments (strings, integers, enums, doubles), to lists of arguments. This is
possible because CommandLine is...
7. Extensible: It is very simple to add a new argument type to CommandLine. Simply specify the parser
that you want to use with the command line option when you declare it. Custom parsers are no
problem.
8. Labor Saving: The CommandLine library cuts down on the amount of grunt work that you, the user,
have to do. For example, it automatically provides a -help option that shows the available command
line options for your tool. Additionally, it does most of the basic correctness checking for you.
9. Capable: The CommandLine library can handle lots of different forms of options often found in real
programs. For example, positional arguments, ls style grouping options (to allow processing 'ls
-lad' naturally), ld style prefix options (to parse '-lmalloc -L/usr/lib'), and interpreter style
options.
This document will hopefully let you jump in and start using CommandLine in your utility quickly and
painlessly. Additionally it should be a simple reference manual to figure out how stuff works. If it is failing in
some area (or you want an extension to the library), nag the author, Chris Lattner.
To start out, you need to include the CommandLine header file into your program:
#include "llvm/Support/CommandLine.h"
Additionally, you need to add this as the first line of your main program:

int main(int argc, char **argv) {
  cl::ParseCommandLineOptions(argc, argv);
  ...
}

... which actually parses the arguments and fills in the variable declarations.
Now that you are ready to support command line arguments, we need to tell the system which ones we want,
and what type of arguments they are. The CommandLine library uses a declarative syntax to model command
line arguments with the global variable declarations that capture the parsed values. This means that for every
command line option that you would like to support, there should be a global variable declaration to capture
the result. For example, in a compiler, we would like to support the Unix-standard '-o <filename>' option
to specify where to put the output. With the CommandLine library, this is represented like this:

cl::opt<std::string> OutputFilename("o", cl::desc("Specify output filename"), cl::value_desc("filename"));
This declares a global variable "OutputFilename" that is used to capture the result of the "o" argument
(first parameter). We specify that this is a simple scalar option by using the "cl::opt" template (as opposed
to the "cl::list" template), and tell the CommandLine library that the data type that we are parsing is a
string.
The second and third parameters (which are optional) are used to specify what to output for the "-help"
option. In this case, we get a line that looks like this:
OPTIONS:
-help - display available options (-help-hidden for more)
-o <filename> - Specify output filename
Because we specified that the command line option should parse using the string data type, the variable
declared is automatically usable as a real string in all contexts where a normal C++ string object may be used.
For example:
...
std::ofstream Output(OutputFilename.c_str());
if (Output.good()) ...
...
There are many different options that you can use to customize the command line option handling library, but
the above example shows the general interface to these options. The options can be specified in any order, and
are specified with helper functions like cl::desc(...), so there are no positional dependencies to
remember. The available options are discussed in detail in the Reference Guide.
Continuing the example, we would like to have our compiler take an input filename as well as an output
filename, but we do not want the input filename to be specified with a hyphen (i.e., not -filename.c). To
support this style of argument, the CommandLine library allows for positional arguments to be specified for
the program. These positional arguments are filled with command line parameters that are not in option form.
We use this feature like this:

cl::opt<std::string> InputFilename(cl::Positional, cl::desc("<input file>"), cl::Required);
Again, the CommandLine library does not require the options to be specified in any particular order, so the
above declaration is equivalent to:

cl::opt<std::string> InputFilename(cl::Positional, cl::Required, cl::desc("<input file>"));
By simply adding the cl::Required flag, the CommandLine library will automatically issue an error if the
argument is not specified, which shifts all of the command line option verification code out of your
application into the library. This is just one example of how using flags can alter the default behaviour of the
library, on a per-option basis. By adding one of the declarations above, the -help option synopsis is now
extended to:
USAGE: compiler [options] <input file>

OPTIONS:
-help - display available options (-help-hidden for more)
-o <filename> - Specify output filename
Boolean Arguments
In addition to input and output filenames, we would like the compiler example to support three boolean flags:
"-f" to force writing binary output to a terminal, "--quiet" to enable quiet mode, and "-q" for backwards
compatibility with some of our users. We can support these by declaring options of boolean type like this:

cl::opt<bool> Force ("f", cl::desc("Enable binary output on terminals"));
cl::opt<bool> Quiet ("quiet", cl::desc("Don't print informational messages"));
cl::opt<bool> Quiet2("q", cl::desc("Don't print informational messages"), cl::Hidden);
This does what you would expect: it declares three boolean variables ("Force", "Quiet", and "Quiet2") to
recognize these options. Note that the "-q" option is specified with the "cl::Hidden" flag. This modifier
prevents it from being shown by the standard "-help" output (note that it is still shown in the
"-help-hidden" output).
The CommandLine library uses a different parser for different data types. For example, in the string case, the
argument passed to the option is copied literally into the content of the string variable... we obviously cannot
do that in the boolean case, however, so we must use a smarter parser. In the case of the boolean parser, it
allows no options (in which case it assigns the value of true to the variable), or it allows the values "true" or
"false" to be specified, allowing any of the following inputs:

compiler -f            # No value, 'Force' == true
compiler -f=true       # Value specified, 'Force' == true
compiler -f=TRUE       # Value specified, 'Force' == true
compiler -f=FALSE      # Value specified, 'Force' == false

With these declarations, the "-help" output for the program is:
OPTIONS:
-f - Enable binary output on terminals
-o - Override output filename
-quiet - Don't print informational messages
-help - display available options (-help-hidden for more)
and the "-help-hidden" output also lists the hidden "-q" option:

OPTIONS:
-f - Enable binary output on terminals
-o - Override output filename
-q - Don't print informational messages
-quiet - Don't print informational messages
-help - display available options (-help-hidden for more)
This brief example has shown you how to use the 'cl::opt' class to parse simple scalar command line
arguments. In addition to simple scalar arguments, the CommandLine library also provides primitives to
support CommandLine option aliases, and lists of options.
Argument Aliases
So far, the example works well, except for the fact that we need to check the quiet condition like this now:
...
if (!Quiet && !Quiet2) printInformationalMessage(...);
...
... which is a real pain! Instead of defining two values for the same condition, we can use the "cl::alias"
class to make the "-q" option an alias for the "-quiet" option, instead of providing a value itself:

cl::opt<bool> Force ("f", cl::desc("Enable binary output on terminals"));
cl::opt<bool> Quiet ("quiet", cl::desc("Don't print informational messages"));
cl::alias     QuietA("q", cl::desc("Alias for -quiet"), cl::aliasopt(Quiet));
The third line (which is the only one we modified from above) defines a "-q" alias that updates the "Quiet"
variable (as specified by the cl::aliasopt modifier) whenever it is specified. Because aliases do not hold
state, the only thing the program has to query is the Quiet variable now. Another nice feature of aliases is
that they automatically hide themselves from the -help output (although, again, they are still visible in the
-help-hidden output).
...
if (!Quiet) printInformationalMessage(...);
...
Selecting an alternative from a set of possibilities
So far, we have seen how the CommandLine library handles builtin types like std::string, bool and int, but
how does it handle things it doesn't know about, like enums?
The answer is that it uses a table-driven generic parser (unless you specify your own parser, as described in
the Extension Guide). This parser maps literal strings to whatever type is required, and requires you to tell it
what this mapping should be.
Let's say that we would like to add four optimization levels to our optimizer, using the standard flags "-g",
"-O1", "-O2", and "-O3". We could easily implement this with boolean options like above, but there are
several problems with this strategy:
1. A user could specify more than one of the options at a time, for example, "compiler -O3 -O2".
The CommandLine library would not be able to catch this erroneous input for us.
2. We would have to test 4 different variables to see which ones are set.
3. This doesn't map to the numeric levels that we want... so we cannot easily see if some level >= "-O1"
is enabled.
To cope with these problems, we can use an enum value, and have the CommandLine library fill it in with the
appropriate level directly, which is used like this:
enum OptLevel {
g, O1, O2, O3
};
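The option declaration itself was elided in this copy; it presumably used the generic parser's cl::values list, along these lines (a sketch, not verbatim):

```cpp
cl::opt<OptLevel> OptimizationLevel(cl::desc("Choose optimization level:"),
  cl::values(
    clEnumVal(g , "No optimizations, enable debugging"),
    clEnumVal(O1, "Enable trivial optimizations"),
    clEnumVal(O2, "Enable default optimizations"),
    clEnumVal(O3, "Enable expensive optimizations"),
   clEnumValEnd));
```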
...
if (OptimizationLevel >= O2) doPartialRedundancyElimination(...);
...
This declaration defines a variable "OptimizationLevel" of the "OptLevel" enum type. This variable
can be assigned any of the values that are listed in the declaration (Note that the declaration list must be
terminated with the "clEnumValEnd" argument!). The CommandLine library enforces that the user can
only specify one of the options, and it ensures that only valid enum values can be specified. The
"clEnumVal" macros ensure that the command line arguments match the enum values. With this option
added, our help output now is:
OPTIONS:
Choose optimization level:
-g - No optimizations, enable debugging
-O1 - Enable trivial optimizations
-O2 - Enable default optimizations
-O3 - Enable expensive optimizations
In this case, it is sort of awkward that flag names correspond directly to enum names, because we probably
don't want an enum definition named "g" in our program. Because of this, we can alternatively write this
example like this:
enum OptLevel {
Debug, O1, O2, O3
};
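The matching declaration (a sketch) uses clEnumValN for the renamed value, keeping "-g" as the flag while naming the enumerator Debug:

```cpp
cl::opt<OptLevel> OptimizationLevel(cl::desc("Choose optimization level:"),
  cl::values(
    clEnumValN(Debug, "g", "No optimizations, enable debugging"),
    clEnumVal(O1, "Enable trivial optimizations"),
    clEnumVal(O2, "Enable default optimizations"),
    clEnumVal(O3, "Enable expensive optimizations"),
   clEnumValEnd));
```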
...
if (OptimizationLevel == Debug) outputDebugInfo(...);
...
By using the "clEnumValN" macro instead of "clEnumVal", we can directly specify the name that the flag
should get. In general a direct mapping is nice, but sometimes you can't or don't want to preserve the mapping,
which is when you would use it.
Named Alternatives
Another useful argument form is a named alternative style. We shall use this style in our compiler to specify
different debug levels that can be used. Instead of each debug level being its own switch, we want to support
the following options, of which only one can be specified at a time: "--debug-level=none",
"--debug-level=quick", "--debug-level=detailed". To do this, we use the exact same format
as our optimization level flags, but we also specify an option name. For this case, the code looks like this:
enum DebugLev {
nodebuginfo, quick, detailed
};
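The accompanying declaration was elided in this copy; a sketch, whose only structural difference from the optimization-level example is the explicit option name ("debug_level"):

```cpp
cl::opt<DebugLev> DebugLevel("debug_level", cl::desc("Set the debugging level:"),
  cl::values(
    clEnumValN(nodebuginfo, "none", "disable debug information"),
    clEnumVal(quick,    "enable quick debug information"),
    clEnumVal(detailed, "enable detailed debug information"),
   clEnumValEnd));
```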
This definition defines an enumerated command line variable of type "enum DebugLev", which works
exactly the same way as before. The difference here is just the interface exposed to the user of your program
and the help output by the "-help" option:
OPTIONS:
Choose optimization level:
-g - No optimizations, enable debugging
-O1 - Enable trivial optimizations
-O2 - Enable default optimizations
-O3 - Enable expensive optimizations
-debug_level - Set the debugging level:
=none - disable debug information
=quick - enable quick debug information
=detailed - enable detailed debug information
-help - display available options (-help-hidden for more)
Again, the only structural difference between the debug level declaration and the optimization level
declaration is that the debug level declaration includes an option name ("debug_level"), which
automatically changes how the library processes the argument. The CommandLine library supports both
forms so that you can choose the form most appropriate for your application.
enum Opts {
// 'inline' is a C++ keyword, so name it 'inlining'
dce, constprop, inlining, strip
};
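The cl::list declaration was elided in this copy; a sketch, with clEnumValN mapping the 'inlining' enumerator back to the "inline" flag name:

```cpp
cl::list<Opts> OptimizationList(cl::desc("Available Optimizations:"),
  cl::values(
    clEnumValN(dce, "dce", "Dead Code Elimination"),
    clEnumVal(constprop, "Constant Propagation"),
    clEnumValN(inlining, "inline", "Procedure Integration"),
    clEnumVal(strip, "Strip Symbols"),
   clEnumValEnd));
```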
This defines a variable that is conceptually of the type "std::vector<enum Opts>". Thus, you can
access it with standard vector methods:
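For example (a sketch; the pass-dispatch bodies are placeholders):

```cpp
for (unsigned i = 0; i != OptimizationList.size(); ++i)
  switch (OptimizationList[i]) {
  case dce:       /* run dead code elimination  */ break;
  case constprop: /* run constant propagation   */ break;
  default: break; // ... remaining optimizations ...
  }
```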
Note that the "cl::list" template is completely general and may be used with any data types or other
arguments that you can use with the "cl::opt" template. One especially useful way to use a list is to
capture all of the positional arguments together if there may be more than one specified. In the case of a
linker, for example, the linker takes several '.o' files, and needs to capture them into a list. This is naturally
specified as:
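A sketch of such a declaration:

```cpp
cl::list<std::string> InputFilenames(cl::Positional,
                                     cl::desc("<Input files>"), cl::OneOrMore);
```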
This variable works just like a "vector<string>" object. As such, accessing the list is simple, just like
above. In this example, we used the cl::OneOrMore modifier to inform the CommandLine library that it is
an error if the user does not specify any .o files on our command line. Again, this just reduces the amount of
checking we have to do.
With cl::bits, each time the option is specified on the command line, the bit for the corresponding enum
value is set: bits |= 1 << (unsigned)enum;
Options that are specified multiple times are redundant: any instances after the first are discarded.
Reworking the above list example, we could replace cl::list with cl::bits:
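A sketch of the reworked declaration (same values as the cl::list version):

```cpp
cl::bits<Opts> OptimizationBits(cl::desc("Available Optimizations:"),
  cl::values(
    clEnumValN(dce, "dce", "Dead Code Elimination"),
    clEnumVal(constprop, "Constant Propagation"),
    clEnumValN(inlining, "inline", "Procedure Integration"),
    clEnumVal(strip, "Strip Symbols"),
   clEnumValEnd));
```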
To test to see if constprop was specified, we can use the cl::bits::isSet function:
if (OptimizationBits.isSet(constprop)) {
...
}
It's also possible to get the raw bit vector using the cl::bits::getBits function:
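For example (sketch):

```cpp
unsigned bits = OptimizationBits.getBits();
```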
Finally, if external storage is used, then the location specified must be of type unsigned. In all other ways a
cl::bits option is equivalent to a cl::list option.
OPTIONS:
...
-help - display available options (-help-hidden for more)
-o <filename> - Specify output filename
Reference Guide
Now that you know the basics of how to use the CommandLine library, this section will give you the detailed
information you need to tune how command line options work, as well as information on more "advanced"
command line option processing capabilities.
Positional Arguments
Positional arguments are those arguments that are not named, and are not specified with a hyphen. Positional
arguments should be used when an option is specified by its position alone. For example, the standard Unix
grep tool takes a regular expression argument, and an optional filename to search through (which defaults to
standard input if a filename is not specified). Using the CommandLine library, this would be specified as:
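A sketch of the two declarations (the cl::init("-") default names standard input, following Unix convention):

```cpp
cl::opt<std::string> Regex   (cl::Positional, cl::desc("<regular expression>"),
                              cl::Required);
cl::opt<std::string> Filename(cl::Positional, cl::desc("<input file>"),
                              cl::init("-"));
```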
Given these two option declarations, the -help output for our grep replacement would look like this:
OPTIONS:
-help - display available options (-help-hidden for more)
... and the resultant program could be used just like the standard grep tool.
Positional arguments are sorted by their order of construction. This means that command line options will be
ordered according to how they are listed in a .cpp file, but will not have an ordering defined if the positional
arguments are defined in multiple .cpp files. The fix for this problem is simply to define all of your positional
arguments in one .cpp file.
Note that a user of our grep replacement runs into trouble as soon as the pattern to search for itself begins
with a hyphen: the parser tries to interpret it as an option and fails. The solution for this problem is the same
for both your tool and the system version: use the '--' marker. When the user specifies '--' on the command
line, it is telling the program that all options after the '--' should be treated as positional arguments, not
options. Thus, we can use it like this:
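For example, searching for a pattern that itself starts with a hyphen ('spiffygrep' is a hypothetical name for our grep replacement, and the messages are illustrative):

```
$ spiffygrep '-foo' test.txt
Unknown command line argument '-foo'.  Try: spiffygrep -help

$ spiffygrep -- -foo test.txt
  ...output...
```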
So, generally, the problem is that you have two cl::list variables that interact in some way. To ensure the
correct interaction, you can use the cl::list::getPosition(optnum) method. This method returns
the absolute position (as found on the command line) of the optnum item in the cl::list.
Note that, for compatibility reasons, the cl::opt class also supports an unsigned getPosition() method
that provides the absolute position of that (scalar) option on the command line, so the same approach works
between a cl::opt and a cl::list as it does between two lists.
As a concrete example, let's say we are developing a replacement for the standard Unix Bourne shell
(/bin/sh). To run /bin/sh, first you specify options to the shell itself (like -x which turns on trace
output), then you specify the name of the script to run, then you specify arguments to the script. These
arguments to the script are parsed by the Bourne shell command line option processor, but are not interpreted
as options to the shell itself. Using the CommandLine library, we would specify this as:
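A sketch of the declarations ('spiffysh' is the hypothetical shell replacement used below):

```cpp
cl::opt<std::string>  Script(cl::Positional, cl::desc("<input script>"),
                             cl::init("-"));
cl::list<std::string> Argv(cl::ConsumeAfter,
                           cl::desc("<program arguments>..."));
cl::opt<bool>         Trace("x", cl::desc("Enable trace output"));
```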
OPTIONS:
-help - display available options (-help-hidden for more)
-x - Enable trace output
At runtime, if we run our new shell replacement as `spiffysh -x test.sh -a -x -y bar', the
Trace variable will be set to true, the Script variable will be set to "test.sh", and the Argv list will
contain ["-a", "-x", "-y", "bar"], because they were specified after the last positional argument
(which is the script name).
There are several limitations to when cl::ConsumeAfter options can be specified. For example, only one
cl::ConsumeAfter can be specified per program, there must be at least one positional argument
specified, there must not be any cl::list positional arguments, and the cl::ConsumeAfter option should be
a cl::list option.
Sometimes, however, it is nice to separate the command line option processing code from the storage of the
value parsed. For example, let's say that we have a '-debug' option that we would like to use to enable debug
information across the entire body of our program. In this case, the boolean value controlling the debug code
should be globally accessible (in a header file, for example) yet the command line option processing code
should not be exposed to all of these clients (requiring lots of .cpp files to #include CommandLine.h).
To do this, set up your .h file with your option, like this for example:
// DebugFlag - This boolean is set to true if the '-debug' command line option
// is specified. This should probably not be referenced directly; instead, use
// the DEBUG macro below.
extern bool DebugFlag;

// DEBUG macro - This macro should be used by code to emit debug information.
// If the '-debug' option is specified on the command line, and if this is a
// debug build, then the code specified as the option to the macro will be
// executed. Otherwise it will not be.
#ifdef NDEBUG
#define DEBUG(X)
#else
#define DEBUG(X) do { if (DebugFlag) { X; } } while (0)
#endif
This allows clients to blissfully use the DEBUG() macro, or the DebugFlag explicitly if they want to. Now
we just need to be able to set the DebugFlag boolean when the option is set. To do this, we pass an
additional argument to our command line argument processor, and we specify where to fill in with the
cl::location attribute:
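A sketch of the .cpp side: the 'true' template argument requests external storage, and cl::location names the global declared in the header:

```cpp
bool DebugFlag;                  // the actual value
static cl::opt<bool, true>       // the parser
Debug("debug", cl::desc("Enable debug output"), cl::Hidden,
      cl::location(DebugFlag));
```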
In the above example, we specify "true" as the second argument to the cl::opt template, indicating that
the template should not maintain a copy of the value itself. In addition to this, we specify the
cl::location attribute, so that DebugFlag is automatically set.
Option Attributes
This section describes the basic attributes that you can specify on options.
• The option name attribute (which is required for all options, except positional options) specifies what
the option name is. This option is specified in simple double quotes:
cl::opt<bool> Quiet("quiet");
• The cl::desc attribute specifies a description for the option to be shown in the -help output for
the program.
• The cl::value_desc attribute specifies a string that can be used to fine tune the -help output
for a command line option. Look here for an example.
• The cl::init attribute specifies an initial value for a scalar option. If this attribute is not specified
then the command line option value defaults to the value created by the default constructor for the
type. Warning: If you specify both cl::init and cl::location for an option, you must
specify cl::location first, so that when the command-line parser sees cl::init, it knows
where to put the initial value. (You will get an error at runtime if you don't put them in the right
order.)
• The cl::location attribute specifies where to store the value for a parsed command line option if using
external storage. See the section on Internal vs External Storage for more information.
• The cl::aliasopt attribute specifies which option a cl::alias option is an alias for.
• The cl::values attribute specifies the string-to-value mapping to be used by the generic parser. It
takes a clEnumValEnd terminated list of (option, value, description) triplets that specify the option
name, the value mapped to, and the description shown in the -help for the tool. Because the generic
parser is used most frequently with enum values, two macros are often useful:
1. The clEnumVal macro is used as a nice simple way to specify a triplet for an enum. This
macro automatically makes the option name be the same as the enum name.
2. The clEnumValN macro is used when the option name should differ from the enum name.
For this macro, the first argument is the enum value, the second is the flag name, and the
third is the description.
Option Modifiers
Option modifiers are the flags and expressions that you pass into the constructors for cl::opt and
cl::list. These modifiers give you the ability to tweak how options are parsed and how -help output is
generated to fit your application well.
It is not possible to specify two options from the same category (you'll get a runtime error) to a single option,
except for options in the miscellaneous category. The CommandLine library specifies defaults for all of these
settings that are the most useful in practice and the most common, which means that you usually shouldn't have
to worry about these.
• The cl::NotHidden modifier (which is the default for cl::opt and cl::list options)
indicates the option is to appear in both help listings.
• The cl::Hidden modifier (which is the default for cl::alias options) indicates that the option
should not appear in the -help output, but should appear in the -help-hidden output.
• The cl::ReallyHidden modifier indicates that the option should not appear in any help output.
• The cl::Optional modifier (which is the default for the cl::opt and cl::alias classes)
indicates that your program will allow either zero or one occurrence of the option to be specified.
• The cl::ZeroOrMore modifier (which is the default for the cl::list class) indicates that your
program will allow the option to be specified zero or more times.
• The cl::Required modifier indicates that the specified option must be specified exactly one time.
If an option is not specified, then the value of the option is equal to the value specified by the cl::init
attribute. If the cl::init attribute is not specified, the option value is initialized with the default
constructor for the data type.
If an option is specified multiple times for an option of the cl::opt class, only the last value will be
retained.
• The cl::ValueOptional modifier (which is the default for bool typed options) specifies that it
is acceptable to have a value, or not. A boolean argument can be enabled just by appearing on the
command line, or it can have an explicit '-foo=true'. If an option is specified with this mode, it is
illegal for the value to be provided without the equal sign. Therefore '-foo true' is illegal. To get
this behavior, you must use the cl::ValueRequired modifier.
• The cl::ValueRequired modifier (which is the default for all other types except for unnamed
alternatives using the generic parser) specifies that a value must be provided. This mode informs the
command line library that if an option is not provided with an equal sign, then the next argument
provided must be the value. This allows things like '-o a.out' to work.
• The cl::ValueDisallowed modifier (which is the default for unnamed alternatives using the
generic parser) indicates that it is a runtime error for the user to specify a value. This can be provided
to disallow users from providing options to boolean options (like '-foo=true').
In general, the default values for this option group work just like you would want them to. As mentioned
above, you can specify the cl::ValueDisallowed modifier to a boolean argument to restrict your command line
parser. These options are mostly useful when extending the library.
• The cl::NormalFormatting modifier (which is the default for all options) specifies that this option
is "normal".
• The cl::Positional modifier specifies that this is a positional argument that does not have a
command line option associated with it. See the Positional Arguments section for more information.
• The cl::ConsumeAfter modifier specifies that this option is used to capture "interpreter style"
arguments. See this section for more information.
• The cl::Prefix modifier specifies that this option prefixes its value. With 'Prefix' options, the
equal sign does not separate the value from the option name specified. Instead, the value is everything
after the prefix, including any equal sign if present. This is useful for processing odd arguments like
-lmalloc and -L/usr/lib in a linker tool or -DNAME=value in a compiler tool. Here, the 'l',
'D' and 'L' options are normal string (or list) options, that have the cl::Prefix modifier added to
allow the CommandLine library to recognize them. Note that cl::Prefix options must not have the
cl::ValueDisallowed modifier specified.
• The cl::Grouping modifier is used to implement Unix-style tools (like ls) that have lots of single
letter arguments but only require a single dash. For example, the 'ls -labF' command actually
enables four different options, all of which are single letters.
The CommandLine library does not restrict how you use the cl::Prefix or cl::Grouping modifiers,
but it is possible to specify ambiguous argument settings. Thus, it is possible to have multiple letter options
that are prefix or grouping options, and they will still work as designed.
To do this, the CommandLine library uses a greedy algorithm to parse the input option into (potentially
multiple) prefix and grouping options. The strategy basically looks like this:
In outline: first try the input as a single option name; if that fails, strip characters off the end of the input
until the remaining string matches a prefix or grouping option (so the longest match wins), parse that option,
and repeat on whatever is left, reporting an error if nothing matches.
• The cl::CommaSeparated modifier indicates that any commas specified for an option's value
should be used to split the value up into multiple values for the option. For example, these two options
are equivalent when cl::CommaSeparated is specified: "-foo=a -foo=b -foo=c" and
"-foo=a,b,c". This option only makes sense to be used in a case where the option is allowed to
accept one or more values (i.e. it is a cl::list option).
• The cl::PositionalEatsArgs modifier (which only applies to positional arguments, and only
makes sense for lists) indicates that positional argument should consume any strings after it (including
strings that start with a "-") up until another recognized positional argument. For example, if you have
two "eating" positional arguments, "pos1" and "pos2", the string "-pos1 -foo -bar baz
-pos2 -bork" would cause the "-foo -bar baz" strings to be applied to the "-pos1" option
and the "-bork" string to be applied to the "-pos2" option.
• The cl::Sink modifier is used to handle unknown options. If there is at least one option with
cl::Sink modifier specified, the parser passes unrecognized option strings to it as values instead of
signaling an error. As with cl::CommaSeparated, this modifier only makes sense with a cl::list
option.
Response files
Some systems, such as certain variants of Microsoft Windows and some older Unices have a relatively low
limit on command-line length. It is therefore customary to use the so-called 'response files' to circumvent this
restriction. These files are mentioned on the command-line (using the "@file" syntax). The program reads
these files and inserts the contents into argv, thereby working around the command-line length limits.
Response files are enabled by an optional fourth argument to cl::ParseEnvironmentOptions and
cl::ParseCommandLineOptions.
The cl::ParseCommandLineOptions function requires two parameters (argc and argv), but may
also take an optional third parameter which holds additional extra text to emit when the -help option is
invoked, and a fourth boolean parameter that enables response files.
The cl::ParseEnvironmentOptions function has mostly the same effects, except that it reads its options
from an environment variable instead of the command line. It takes four parameters: the name of the program
(since argv may not be available, it can't just look in argv[0]), the name of the environment variable to
examine, the optional additional extra text to emit when the -help option is invoked, and the boolean switch
that controls whether response files should be read.
cl::ParseEnvironmentOptions will break the environment variable's value up into words and then
process them using cl::ParseCommandLineOptions. Note: Currently
cl::ParseEnvironmentOptions does not support quoting, so an environment variable containing
-option "foo bar" will be parsed as three words, -option, "foo, and bar", which is different from
what you would get from the shell with the same input.
namespace cl {
template <class DataType, bool ExternalStorage = false,
class ParserClass = parser<DataType> >
class opt;
}
The first template argument specifies what underlying data type the command line argument is, and is used to
select a default parser implementation. The second template argument is used to specify whether the option
should contain the storage for the option (the default) or whether external storage should be used to contain
the value parsed for the option (see Internal vs External Storage for more information).
The third template argument specifies which parser to use. The default value selects an instantiation of the
parser class based on the underlying data type of the option. In general, this default works well for most
applications, so this option is only used when using a custom parser.
namespace cl {
template <class DataType, class Storage = bool,
class ParserClass = parser<DataType> >
class list;
}
This class works the exact same as the cl::opt class, except that the second argument is the type of the
external storage, not a boolean value. For this class, the marker type 'bool' is used to indicate that internal
storage should be used.
namespace cl {
template <class DataType, class Storage = bool,
class ParserClass = parser<DataType> >
class bits;
}
This class works the exact same as the cl::list class, except that the second argument must be of type
unsigned if external storage is used.
namespace cl {
class alias;
}
The cl::aliasopt attribute should be used to specify which option this is an alias for. Alias arguments
default to being Hidden, and use the aliased option's parser to do the conversion from string to data.
namespace cl {
struct extrahelp;
}
To use the extrahelp, simply construct one with a const char* parameter to the constructor. The text
passed to the constructor will be printed at the bottom of the help message, verbatim. Note that multiple
cl::extrahelp can be used, but this practice is discouraged. If your tool needs to print additional help
information, put all that help into a single cl::extrahelp instance.
For example:
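The example was elided in this copy; a sketch (MoreHelp is an arbitrary variable name, and the text is illustrative):

```cpp
static cl::extrahelp MoreHelp("\nADDITIONAL HELP:\n\n"
                              "  text printed verbatim at the bottom of -help\n");
```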
Builtin parsers
Parsers control how the string value taken from the command line is translated into a typed value, suitable for
use in a C++ program. By default, the CommandLine library uses an instance of parser<type> if the
command line option specifies that it uses values of type 'type'. Because of this, custom option processing is
specified with specializations of the 'parser' class.
The CommandLine library provides the following builtin parser specializations, which are sufficient for most
applications. It can, however, also be extended to work with new data types and new ways of interpreting the
same data. See the Writing a Custom Parser for more details on this type of library extension.
• The generic parser<t> parser can be used to map string values to any data type, through the use
of the cl::values property, which specifies the mapping information. The most common use of this
parser is for parsing enum values, which allows you to use the CommandLine library for all of the
error checking to make sure that only valid enum values are specified (as opposed to accepting
arbitrary strings). Despite this, however, the generic parser class can be used for any data type.
• The parser<bool> specialization is used to convert boolean strings to a boolean value. Currently
accepted strings are "true", "TRUE", "True", "1", "false", "FALSE", "False", and "0".
• The parser<boolOrDefault> specialization is used for cases where the value is boolean, but
we also need to know whether the option was specified at all. boolOrDefault is an enum with 3
values, BOU_UNSET, BOU_TRUE and BOU_FALSE. This parser accepts the same strings as
parser<bool>.
• The parser<string> specialization simply stores the parsed string into the string value specified.
No conversion or modification of the data is performed.
• The parser<int> specialization uses the C strtol function to parse the string input. As such, it
will accept a decimal number (with an optional '+' or '-' prefix) which must start with a non-zero digit.
It accepts octal numbers, which are identified with a '0' prefix digit, and hexadecimal numbers with a
prefix of '0x' or '0X'.
• The parser<double> and parser<float> specializations use the standard C strtod
function to convert floating point strings into floating point values. As such, a broad range of string
formats is supported, including exponential notation (e.g. 1.7e15), and locales are properly supported.
Extension Guide
Although the CommandLine library has a lot of functionality built into it already (as discussed previously),
one of its true strengths lies in its extensibility. This section discusses how the CommandLine library works
under the covers and illustrates how to do some simple, common extensions.
Writing a custom parser
One of the simplest and most common extensions is the use of a custom parser. There are two ways to do
this:
1. Specialize the cl::parser template for your custom data type.
This approach has the advantage that users of your custom data type will automatically use your
custom parser whenever they define an option with a value type of your data type. The disadvantage
of this approach is that it doesn't work if your fundamental data type is something that is already
supported.
2. Write an independent class, using it explicitly from options that need it.
This approach works well in situations where you would like to parse an option using special syntax
for a not-very-special data-type. The drawback of this approach is that users of your parser have to be
aware that they are using your parser instead of the builtin ones.
To guide the discussion, we will discuss a custom parser that accepts file sizes, specified with an optional unit
after the numeric size. For example, we would like to parse "102kb", "41M", "1G" into the appropriate integer
value. In this case, the underlying data type we want to parse into is 'unsigned'. We choose approach #2
above because we don't want to make this the default for all unsigned options.
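The class declaration that the next paragraphs describe was elided in this copy; a sketch:

```cpp
struct FileSizeParser : public cl::basic_parser<unsigned> {
  // parse - Return true on error.
  bool parse(cl::Option &O, const char *ArgName, const std::string &ArgValue,
             unsigned &Val);
};
```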
Our new class inherits from the cl::basic_parser template class to fill in the default, boilerplate code
for us. We give it the data type that we parse into, the last argument to the parse method, so that clients of
our custom parser know what object type to pass in to the parse method. (Here we declare that we parse into
'unsigned' variables.)
For most purposes, the only method that must be implemented in a custom parser is the parse method. The
parse method is called whenever the option is invoked, passing in the option itself, the option name, the
string to parse, and a reference to a return value. If the string to parse is not well-formed, the parser should
output an error message and return true. Otherwise it should return false and set 'Val' to the parsed value. In
our example, we implement parse as:
bool FileSizeParser::parse(cl::Option &O, const char *ArgName,
                           const std::string &Arg, unsigned &Val) {
  const char *ArgStart = Arg.c_str();
  char *End;

  // Parse integer part, leaving 'End' pointing to the first non-integer char
  Val = (unsigned)strtol(ArgStart, &End, 0);

  while (1) {
    switch (*End++) {
    case 0: return false;   // No error
    case 'i':               // Ignore the 'i' in KiB if people use that
    case 'b': case 'B':     // Ignore B suffix
      break;

    case 'g': case 'G': Val *= 1024*1024*1024; break;
    case 'm': case 'M': Val *= 1024*1024;      break;
    case 'k': case 'K': Val *= 1024;           break;

    default:
      // Print an error message if unrecognized character!
      return O.error("'" + Arg + "' value invalid for file size argument!");
    }
  }
}
This function implements a very simple parser for the kinds of strings we are interested in. Although it has
some holes (it allows "123KKK" for example), it is good enough for this example. Note that we use the option
itself to print out the error message (the error method always returns true) in order to get a nice error
message (shown below). Now that we have our parser class, we can use it like this:
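A sketch of the option declaration, selecting our parser via the third template argument:

```cpp
static cl::opt<unsigned, false, FileSizeParser>
MFS("max-file-size", cl::desc("Maximum file size to accept"),
    cl::value_desc("size"));
```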
OPTIONS:
-help - display available options (-help-hidden for more)
...
-max-file-size=<size> - Maximum file size to accept
And we can test that our parser works correctly now (the test program just prints out the max-file-size
argument value):
$ ./test
MFS: 0
$ ./test -max-file-size=123MB
MFS: 128974848
$ ./test -max-file-size=3G
MFS: 3221225472
$ ./test -max-file-size=dog
-max-file-size option: 'dog' value invalid for file size argument!
It looks like it works. The error message that we get is nice and helpful, and we seem to accept reasonable file
sizes. This wraps up the "custom parser" tutorial.
Chris Lattner
LLVM Compiler Infrastructure
Last modified: $Date: 2010-02-26 14:18:32 -0600 (Fri, 26 Feb 2010) $
1. Introduction
2. Mechanical Source Issues
1. Source Code Formatting
1. Commenting
2. Comment Formatting
3. #include Style
4. Source Code Width
5. Use Spaces Instead of Tabs
6. Indent Code Consistently
2. Compiler Issues
1. Treat Compiler Warnings Like Errors
2. Write Portable Code
3. Use of class/struct Keywords
3. Style Issues
1. The High Level Issues
1. A Public Header File is a Module
2. #include as Little as Possible
3. Keep "internal" Headers Private
4. Use Early Exits and 'continue' to Simplify Code
5. Don't use "else" after a return
6. Turn Predicate Loops into Predicate Functions
2. The Low Level Issues
1. Assert Liberally
2. Do not use 'using namespace std'
3. Provide a virtual method anchor for classes in headers
4. Don't evaluate end() every time through a loop
5. #include <iostream> is forbidden
6. Avoid std::endl
7. Use raw_ostream
Written by Misha Brukman, Brad Jones, Nate Begeman, and Chris Lattner
When you come to this realization, stop and think. Do you really need to extend LLVM? Is it a new
fundamental capability that LLVM does not support in its current incarnation, or can it be synthesized from
already pre-existing LLVM elements? If you are not sure, ask on the LLVM-dev list. The reason is that
extending LLVM can become quite involved: you need to update all the different passes that you intend to
use with your extension, and there are many LLVM analyses and transformations, so it may be quite a bit of
work.
Adding an intrinsic function is far easier than adding an instruction, and is transparent to optimization passes.
If your added functionality can be expressed as a function call, an intrinsic function is the method of choice
for LLVM extension.
Before you invest a significant amount of effort into a non-trivial extension, ask on the list if what you are
looking to do can be done with already-existing infrastructure, or if maybe someone else is already working
on it. You will save yourself a lot of time and effort by doing so.
Once the intrinsic has been added to the system, you must add code generator support for it. Generally you
must do the following steps:
Also, you need to implement (or modify) any analyses or passes that you want to understand this new
instruction.
1. llvm/include/llvm/Type.h: add enum for the new type; add static Type* for this type
2. llvm/lib/VMCore/Type.cpp: add mapping from TypeID => Type*; initialize the static
Type*
3. llvm/lib/AsmReader/Lexer.l: add ability to parse in the type from text assembly
4. llvm/lib/AsmReader/llvmAsmParser.y: add a token for that type
1. llvm/include/llvm/Type.h: add enum for the new type; add a forward declaration of the type
also
2. llvm/include/llvm/DerivedTypes.h: add new class to represent new class in the
hierarchy; add forward declaration to the TypeMap value type
3. llvm/lib/VMCore/Type.cpp: add support for derived type to:
std::string getTypeDescription(const Type &Ty,
std::vector<const Type*> &TypeStack)
bool TypesEqual(const Type *Ty, const Type *Ty2,
std::map<const Type*, const Type*> & EqTypes)
1. Abstract
2. Introduction
3. Library Descriptions
4. Library Dependencies
5. Linkage Rules Of Thumb
1. Always link LLVMCore, LLVMSupport, LLVMSystem
2. Never link both archive and re-linked
Warning: This document is out of date, please see llvm-config for more information.
Abstract
Amongst other things, LLVM is a toolkit for building compilers, linkers, runtime executives, virtual
machines, and other program execution related tools. In addition to the LLVM tool set, the functionality of
LLVM is available through a set of libraries. To use LLVM as a toolkit for constructing tools, a developer
needs to understand what is contained in the various libraries, what they depend on, and how to use them.
Fortunately, there is a tool, llvm-config, to aid with this. This document describes the contents of the
libraries and how to use llvm-config to generate command line options.
Introduction
If you're writing a compiler, virtual machine, or any other utility based on LLVM, you'll need to figure out
which of the many library files you will need to link with to be successful. An understanding of the contents
of these libraries will be useful in coming up with an optimal specification for the libraries to link with. The
purpose of this document is to reduce some of the trial and error that the author experienced in using LLVM.
LLVM produces two types of libraries: archives (ending in .a) and objects (ending in .o). However, both are
libraries. Libraries ending in .o are known as re-linked libraries because they contain all the compilation
units of the library linked together as a single .o file. Furthermore, several of the libraries have both forms of
library. The re-linked libraries are used whenever you want to include all symbols from the library. The
archive libraries are used whenever you want to only resolve outstanding symbols at that point in the link
without including everything in the library.
If you're using the LLVM Makefile system to link your tools, you will use the LLVMLIBS make variable (see
the Makefile Guide for details). This variable specifies which LLVM libraries to link into your tool and the
order in which they will be linked. You specify re-linked libraries by naming the library without a suffix. You
specify archive libraries by naming the library with a .a suffix but without the lib prefix. The order in
which the libraries appear in the LLVMLIBS variable definition is the order in which they will be linked.
Getting this order correct for your tool can sometimes be challenging.
Library Descriptions
The table below categorizes each library.
To understand the relationships between libraries, the llvm-config tool can be very useful. If all you know is
that you want certain libraries to be available, you can generate the complete set of libraries to link with using
one of four options, as below:
1. --ldflags. This generates the command line options necessary to be passed to the ld tool in order
to link with LLVM. Most notably, the -L option is provided to specify a library search directory that
contains the LLVM libraries.
2. --libs. This generates a list of -l options, one per required library, suitable for the linker command
line.
3. --libnames. This is similar to --libs, but generates the bare file names of the libraries, without
-l or paths.
4. --libfiles. This is similar to --libs, but generates the full path to each library file.
If you wish to delve further into how llvm-config generates the correct order (based on library
dependencies), please see the tool named GenLibDeps.pl in the utils source directory of LLVM.
libLLVMAnalysis.a
◊ libLLVMCore.a
◊ libLLVMSupport.a
◊ libLLVMSystem.a
◊ libLLVMTarget.a
libLLVMArchive.a
◊ libLLVMBCReader.a
◊ libLLVMCore.a
◊ libLLVMSupport.a
◊ libLLVMSystem.a
libLLVMAsmParser.a
◊ libLLVMCore.a
◊ libLLVMSystem.a
libLLVMBCReader.a
◊ libLLVMCore.a
◊ libLLVMSupport.a
◊ libLLVMSystem.a
libLLVMBCWriter.a
◊ libLLVMCore.a
◊ libLLVMSupport.a
◊ libLLVMSystem.a
libLLVMCodeGen.a
◊ libLLVMAnalysis.a
◊ libLLVMCore.a
◊ libLLVMScalarOpts.a
◊ libLLVMSupport.a
◊ libLLVMSystem.a
◊ libLLVMTarget.a
◊ libLLVMTransformUtils.a
libLLVMCore.a
◊ libLLVMSupport.a
◊ libLLVMSystem.a
libLLVMDebugger.a
◊ libLLVMBCReader.a
◊ libLLVMCore.a
◊ libLLVMSupport.a
◊ libLLVMSystem.a
libLLVMInstrumentation.a
◊ libLLVMCore.a
◊ libLLVMScalarOpts.a
◊ libLLVMSupport.a
◊ libLLVMTransformUtils.a
libLLVMLinker.a
◊ libLLVMArchive.a
◊ libLLVMBCReader.a
◊ libLLVMCore.a
◊ libLLVMSupport.a
◊ libLLVMSystem.a
1. Introduction
2. Qualification Criteria
3. Release Timeline
4. Release Process
Introduction
This document collects information about successfully releasing LLVM (including subprojects llvm-gcc and
Clang) to the public. It is the release manager's responsibility to ensure that a high quality build of LLVM is
released.
Release Timeline
LLVM is released on a time based schedule (currently every 6 months). We do not have dot releases because
of the nature of LLVM's incremental development philosophy. The release schedule is roughly as follows:
1. Set code freeze and branch creation date for 6 months after last code freeze date. Announce release
schedule to the LLVM community and update the website.
2. Create release branch and begin release process.
3. Send out pre-release for first round of testing. Testing will last 7-10 days. During the first round of
testing, regressions should be found and fixed. Patches are merged from mainline to the release
branch.
4. Generate and send out the second pre-release. Bugs found during this time will not be fixed unless
absolutely critical. Bugs introduced by patches merged in will be fixed, and if so, a third round of
testing is needed.
5. The release notes should be updated during the first and second round of pre-release testing.
6. Finally, release!
Release Process
1. Verify that the current Subversion HEAD is in decent shape by examining nightly tester or buildbot
results.
2. Request all developers to refrain from committing. Offenders get commit rights taken away
(temporarily).
3. Create the release branch for llvm, llvm-gcc4.2, clang, and the test-suite. The branch
name will be release_XX, where XX is the major and minor release numbers. Clang will have a
different release number than llvm/llvm-gcc4.2, since its first release came years later (still deciding
if this will be true or not). These branches can be created without checking out anything from
Subversion.
svn co https://llvm.org/svn/llvm-project/llvm/branches/release_XX
svn co https://llvm.org/svn/llvm-project/llvm-gcc-4.2/branches/release_XX
svn co https://llvm.org/svn/llvm-project/test-suite/branches/release_XX
svn co https://llvm.org/svn/llvm-project/cfe/branches/release_XX
In addition, the version number of all the Bugzilla components must be updated for the next release.
1. debug: ENABLE_OPTIMIZED=0
2. release: ENABLE_OPTIMIZED=1
3. release-asserts: ENABLE_OPTIMIZED=1 DISABLE_ASSERTIONS=1
Build LLVM
Build the debug, release (optimized), and release-asserts versions of LLVM on all supported platforms.
Directions to build LLVM are here.
1. Build the LLVM GCC front-end by following the directions in the README.LLVM file. The
frontend must be compiled with c, c++, objc (mac only), objc++ (mac only) and fortran support.
2. Please bootstrap as well.
3. Be sure to build with LLVM_VERSION_INFO=X.X, where X.X is the major and minor release
numbers.
4. Copy the installation directory to a directory named for the specific target. For example on Red Hat
Enterprise Linux, the directory would be named llvm-gcc4.2-2.6-x86-linux-RHEL4.
Archive and compress the new directory.
Architecture | OS          | compiler
x86-32       | Mac OS 10.5 | gcc 4.0.1
x86-32       | Linux       | gcc 4.2.X, gcc 4.3.X
x86-32       | FreeBSD     | gcc 4.2.X
x86-32       | mingw       | gcc 3.4.5
x86-64       | Mac OS 10.5 | gcc 4.0.1
Qualify LLVM-GCC
LLVM-GCC is qualified when front-end specific tests in the llvm dejagnu test suite all pass and there are no
regressions in the test-suite.
Qualify Clang
Clang is qualified when front-end specific tests in the llvm dejagnu test suite all pass, clang's own test suite
passes cleanly, and there are no regressions in the test-suite.
Specific Target Qualification Details
Architecture | OS          | llvm-gcc baseline | clang baseline | tests
x86-32       | Mac OS 10.5 | last release      | none           | llvm dejagnu, clang tests, test-suite (including spec)
x86-32       | Linux       | last release      | none           | llvm dejagnu, clang tests, test-suite (including spec)
x86-32       | FreeBSD     | none              | none           | llvm dejagnu, clang tests, test-suite
x86-32       | mingw       | last release      | none           | QT
x86-64       | Mac OS 10.5 | last release      | none           | llvm dejagnu, clang tests, test-suite (including spec)
x86-64       | Linux       | last release      | none           | llvm dejagnu, clang tests, test-suite (including spec)
x86-64       | FreeBSD     | none              | none           | llvm dejagnu, clang tests, test-suite
Community Testing
Once all testing has been completed and appropriate bugs filed, the pre-release tarballs may be put on the
website and the LLVM community is notified. Ask that all LLVM developers test the release in two ways:
1. Download llvm-X.X, llvm-test-X.X, and the appropriate llvm-gcc4 and/or clang binary. Build LLVM.
Run "make check" and the full llvm-test suite (make TEST=nightly report).
2. Download llvm-X.X, llvm-test-X.X, and the llvm-gcc4 and/or clang source. Compile everything. Run
"make check" and the full llvm-test suite (make TEST=nightly report).
Ask LLVM developers to submit the report and make check results to the list. Attempt to verify that there are
no regressions from the previous release. The results are not used to qualify a release, but to spot other
potential problems. For unsupported targets, verify that make check at least is clean.
During the first round of testing, all regressions must be fixed before the second pre-release is created.
If this is the second round of testing, the goal is only to ensure that the bug fixes previously merged in have
not created new major problems. This is not the time to solve additional and unrelated bugs. If no patches are
merged in, the release is determined to be ready and the release manager may move on to the next step.
• Patches applied to the release branch are only applied by the release manager.
• During the first round of testing, patches that fix regressions or that are small and relatively risk free
(verified by the appropriate code owner) are applied to the branch. Code owners are asked to be very
conservative in approving patches for the branch and we reserve the right to reject any patch that does not
fix a regression as previously defined.
• During the remaining rounds of testing, only patches that fix regressions may be applied.
FIXME: Add a note if anything needs to be done to the clang website. Eventually the websites will hopefully
be merged.
Update Documentation
Review the documentation and ensure that it is up to date. The Release Notes must be updated to reflect bug
fixes, new known issues, and changes in the list of supported platforms. The Getting Started Guide should be
updated to reflect the new release version number tag available from Subversion and changes in basic system
requirements. Merge both changes from mainline into the release branch.
All LLVM passes are subclasses of the Pass class, which implement functionality by overriding virtual
methods inherited from Pass. Depending on how your pass works, you should inherit from the
ModulePass, CallGraphSCCPass, FunctionPass, LoopPass, or BasicBlockPass class, which
gives the system more information about what your pass does, and how it can be combined with other
passes. One of the main features of the LLVM Pass Framework is that it schedules passes to run in an
efficient way based on the constraints that your pass meets (which are indicated by which class it derives
from).
We start by showing you how to construct a pass, everything from setting up the code, to compiling, loading,
and executing it. After the basics are down, more advanced features are discussed.
# Make the shared library become a loadable module so the tools can
# dlopen/dlsym on the resulting library.
LOADABLE_MODULE = 1
# Tell the build system which LLVM libraries your pass needs. You'll probably
# need at least LLVMSystem.a, LLVMSupport.a, LLVMCore.a but possibly several
# others too.
LLVMLIBS = LLVMCore.a LLVMSupport.a LLVMSystem.a
This makefile specifies that all of the .cpp files in the current directory are to be compiled and linked
together into a Debug/lib/Hello.so shared object that can be dynamically loaded by the opt or
bugpoint tools via their -load options.
Now that we have the build scripts set up, we just need to write the code for the pass itself.
#include "llvm/Pass.h"
#include "llvm/Function.h"
#include "llvm/Support/raw_ostream.h"
These are needed because we are writing a Pass, we are operating on Functions, and we will be doing
some printing.
Next we have:
using namespace llvm;
... which is required because the functions from the include files live in the llvm namespace.
Next we have:
namespace {
... which starts out an anonymous namespace. Anonymous namespaces are to C++ what the "static"
keyword is to C (at global scope). It makes the things declared inside of the anonymous namespace only
visible to the current file. If you're not familiar with them, consult a decent C++ book for more information.
This declares a "Hello" class that is a subclass of FunctionPass. The different builtin pass subclasses are
described in detail later, but for now, know that FunctionPass's operate on a function at a time.
This declares the pass identifier used by LLVM to identify the pass. This allows LLVM to avoid using
expensive C++ runtime type information.
We declare a "runOnFunction" method, which overrides an abstract virtual method inherited from
FunctionPass. This is where we are supposed to do our thing, so we just print out our message with the
name of each function.
char Hello::ID = 0;
Lastly, we register our class Hello, giving it a command line argument "hello", and a name "Hello
World Pass". The last two RegisterPass arguments are optional; their default value is false. If a pass
walks the CFG without modifying it, then the third argument is set to true. If a pass is an analysis pass (for
example, the dominator tree pass), then true is supplied as the fourth argument.
#include "llvm/Pass.h"
#include "llvm/Function.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;
namespace {
  struct Hello : public FunctionPass {
    static char ID;
    Hello() : FunctionPass(&ID) {}
    virtual bool runOnFunction(Function &F) {
      errs() << "Hello: " << F.getName() << '\n';
      return false;
    }
  };
  char Hello::ID = 0;
  RegisterPass<Hello> X("hello", "Hello World Pass");
}
Now that it's all together, compile the file with a simple "gmake" command in the local directory and you
should get a new "Debug/lib/Hello.so" file. Note that everything in this file is contained in an
anonymous namespace: this reflects the fact that passes are self contained units that do not need external
interfaces (although they can have them) to be useful.
To test it, follow the example at the end of the Getting Started Guide to compile "Hello World" to LLVM. We
can now run the bitcode file (hello.bc) for the program through our transformation like this (of course, any
bitcode file will work):
To see what happened to the other string you registered, try running opt with the -help option:
OPTIONS:
Optimizations available:
...
-funcresolve - Resolve Functions
-gcse - Global Common Subexpression Elimination
-globaldce - Dead Global Elimination
-hello - Hello World Pass
-indvars - Canonicalize Induction Variables
-inline - Function Integration/Inlining
-instcombine - Combine redundant instructions
...
The pass name gets added as the information string for your pass, giving some documentation to users of opt.
Now that you have a working pass, you would go ahead and make it do the cool transformations you want.
Once you get it all working and tested, it may become useful to find out how fast your pass is. The
PassManager provides a nice command line option (--time-passes) that allows you to get information
about the execution time of your pass along with the other passes you queue up. For example:
---User Time--- --System Time-- --User+System-- ---Wall Time--- --- Pass Name ---
0.0100 (100.0%) 0.0000 ( 0.0%) 0.0100 ( 50.0%) 0.0402 ( 84.0%) Bitcode Writer
0.0000 ( 0.0%) 0.0100 (100.0%) 0.0100 ( 50.0%) 0.0031 ( 6.4%) Dominator Set Constru
0.0000 ( 0.0%) 0.0000 ( 0.0%) 0.0000 ( 0.0%) 0.0013 ( 2.7%) Module Verifier
0.0000 ( 0.0%) 0.0000 ( 0.0%) 0.0000 ( 0.0%) 0.0033 ( 6.9%) Hello World Pass
0.0100 (100.0%) 0.0100 (100.0%) 0.0200 (100.0%) 0.0479 (100.0%) TOTAL
As you can see, our implementation above is pretty fast :). The additional passes listed are automatically
inserted by the 'opt' tool to verify that the LLVM emitted by your pass is still valid and well formed LLVM,
which hasn't been broken somehow.
Now that you have seen the basics of the mechanics behind passes, we can talk about some more details of
how they work and how to use them.
The Hello World example used the FunctionPass class, but we did not discuss why or when a particular
subclass should be chosen. Here we talk about the classes available, from the most general to the most
specific.
When choosing a superclass for your Pass, you should choose the most specific class possible, while still
being able to meet the requirements listed. This gives the LLVM Pass Infrastructure information necessary to
optimize how passes are run, so that the resultant compiler isn't unnecessarily slow.
Although this pass class is very infrequently used, it is important for providing information about the current
target machine being compiled for, and other static information that can affect the various transformations.
ImmutablePasses never invalidate other transformations, are never invalidated, and are never "run".
A module pass can use function level passes (e.g. dominators) using the getAnalysis interface
getAnalysis<DominatorTree>(llvm::Function *) to provide the function to retrieve analysis
result for, if the function pass does not require any module or immutable passes. Note that this can only be
done for functions for which the analysis ran, e.g. in the case of dominators you should only ask for the
DominatorTree for function definitions, not declarations.
To write a correct ModulePass subclass, derive from ModulePass and overload the runOnModule
method with the following signature:
The runOnModule method performs the interesting work of the pass. It should return true if the module was
modified by the transformation and false otherwise.
TODO: explain briefly what SCC, Tarjan's algo, and B-U mean.
1. ... not allowed to modify any Functions that are not in the current SCC.
2. ... not allowed to inspect any Functions other than those in the current SCC and the direct callees of
the SCC.
3. ... required to preserve the current CallGraph object, updating it to reflect any changes made to the
program.
4. ... not allowed to add or remove SCC's from the current Module, though they may change the contents
of an SCC.
5. ... allowed to add or remove global variables from the current Module.
6. ... allowed to maintain state across invocations of runOnSCC (including global data).
Implementing a CallGraphSCCPass is slightly tricky in some cases because it has to handle SCCs with
more than one node in it. All of the virtual methods described below should return true if they modified the
program, or false if they didn't.
The doInitialization method is allowed to do most of the things that CallGraphSCCPass's are not
allowed to do. They can add and remove functions, get pointers to functions, etc. The doInitialization
method is designed to do simple initialization type of stuff that does not depend on the SCCs being processed.
The doInitialization method call is not scheduled to overlap with any other pass executions (thus it
should be very fast).
The runOnSCC method performs the interesting work of the pass, and should return true if the module was
modified by the transformation, false otherwise.
The doFinalization method is an infrequently used method that is called when the pass framework has
finished calling runOnSCC for every SCC in the program being compiled.
Implementing a FunctionPass is usually straightforward (See the Hello World pass for example).
FunctionPass's may overload three virtual methods to do their work. All of these methods should return
true if they modified the program, or false if they didn't.
The doInitialization method is allowed to do most of the things that FunctionPass's are not allowed
to do. They can add and remove functions, get pointers to functions, etc. The doInitialization method
is designed to do simple initialization type of stuff that does not depend on the functions being processed. The
doInitialization method call is not scheduled to overlap with any other pass executions (thus it should
be very fast).
A good example of how this method should be used is the LowerAllocations pass. This pass converts
malloc and free instructions into platform dependent malloc() and free() function calls. It uses the
doInitialization method to get a reference to the malloc and free functions that it needs, adding
prototypes to the module if necessary.
The runOnFunction method must be implemented by your subclass to do the transformation or analysis
work of your pass. As usual, a true value should be returned if the function is modified.
The doFinalization method is an infrequently used method that is called when the pass framework has
finished calling runOnFunction for every function in the program being compiled.
LoopPass subclasses are allowed to update the loop nest using the LPPassManager interface. Implementing a
loop pass is usually straightforward. LoopPass's may overload three virtual methods to do their work. All
these methods should return true if they modified the program, or false if they didn't.
The doInitialization method is designed to do simple initialization type of stuff that does not depend
on the functions being processed. The doInitialization method call is not scheduled to overlap with
any other pass executions (thus it should be very fast). LPPassManager interface should be used to access
Function or Module level analysis information.
The runOnLoop method must be implemented by your subclass to do the transformation or analysis work of
your pass. As usual, a true value should be returned if the function is modified. LPPassManager interface
should be used to update loop nest.
BasicBlockPasses are useful for traditional local and "peephole" optimizations. They may override the
same doInitialization(Module &) and doFinalization(Module &) methods that
FunctionPass's have, but also have the following virtual methods that may also be implemented:
The doInitialization method is allowed to do most of the things that BasicBlockPass's are not
allowed to do, but that FunctionPass's can. The doInitialization method is designed to do simple
initialization that does not depend on the BasicBlocks being processed. The doInitialization method
call is not scheduled to overlap with any other pass executions (thus it should be very fast).
Override this function to do the work of the BasicBlockPass. This function is not allowed to inspect or
modify basic blocks other than the parameter, and is not allowed to modify the CFG. A true value must be
returned if the basic block is modified.
The doFinalization method is an infrequently used method that is called when the pass framework has
finished calling runOnBasicBlock for every BasicBlock in the program being compiled. This can be used
to perform per-function finalization.
Pass registration
In the Hello World example pass we illustrated how pass registration works, and discussed some of the
reasons that it is used and what it does. Here we discuss how and why passes are registered.
As we saw above, passes are registered with the RegisterPass template, which requires you to pass at
least two parameters. The first parameter is the name of the pass that is to be used on the command line to
specify that the pass should be added to a program (for example, with opt or bugpoint). The second
argument is the name of the pass, which is to be used for the -help output of programs, as well as for debug
output generated by the --debug-pass option.
If you want your pass to be easily dumpable, you should implement the virtual print method:
The print method must be implemented by "analyses" in order to print a human readable version of the
analysis results. This is useful for debugging an analysis itself, as well as for other people to figure out how an
analysis works. Use the opt -analyze argument to invoke this method.
The llvm::OStream parameter specifies the stream to write the results on, and the Module parameter
gives a pointer to the top level module of the program that has been analyzed. Note however that this pointer
may be null in certain circumstances (such as calling the Pass::dump() from a debugger), so it should
only be used to enhance debug output, it should not be depended on.
Typically this functionality is used to require that analysis results are computed before your pass is run.
Running arbitrary transformation passes can invalidate the computed analysis results, which is what the
invalidation set specifies. If a pass does not implement the getAnalysisUsage method, it defaults to not
having any prerequisite passes, and invalidating all other passes.
Some analyses chain to other analyses to do their job. For example, an AliasAnalysis implementation is
required to chain to other alias analysis passes. In cases where analyses chain, the
addRequiredTransitive method should be used instead of the addRequired method. This informs
the PassManager that the transitively required pass should be alive as long as the requiring pass is.
The AnalysisUsage class provides several methods which are useful in certain circumstances that are
related to addPreserved. In particular, the setPreservesAll method can be called to indicate that the
pass does not modify the LLVM program at all (which is true for analyses), and the setPreservesCFG
method can be used by transformations that change instructions in the program but do not modify the CFG or
terminator instructions (note that this property is implicitly set for BasicBlockPass's).
addPreserved is particularly useful for transformations like BreakCriticalEdges. This pass knows
how to update a small set of loop and dominator related analyses if they exist, so it can preserve them, despite
the fact that it hacks on the CFG.
For example:
// This example modifies the program, but does not modify the CFG
void LICM::getAnalysisUsage(AnalysisUsage &AU) const {
AU.setPreservesCFG();
AU.addRequired<LoopInfo>();
}
This method call returns a reference to the pass desired. You may get a runtime assertion failure if you attempt
to get an analysis that you did not declare as required in your getAnalysisUsage implementation. This
method can be called by your run* method implementation, or by any other local method invoked by your
run* method. A module level pass can use function level analysis info using this interface. For example:
In the above example, runOnFunction for DominatorTree is called by the pass manager before returning a
reference to the desired pass.
If your pass is capable of updating analyses if they exist (e.g., BreakCriticalEdges, as described
above), you can use the getAnalysisIfAvailable method, which returns a pointer to the analysis if it
is active. For example:
...
if (DominatorSet *DS = getAnalysisIfAvailable<DominatorSet>()) {
// A DominatorSet is active. This code will update it.
}
...
In particular, some analyses are defined such that there is a single simple interface to the analysis results, but
multiple ways of calculating them. Consider alias analysis for example. The most trivial alias analysis returns
"may alias" for any alias query. The most sophisticated analysis is a flow-sensitive, context-sensitive
interprocedural analysis that can take a significant amount of time to execute (and obviously, there is a lot of
room between these two extremes for other implementations). To cleanly support situations like this, the
LLVM Pass Infrastructure supports the notion of Analysis Groups.
Although Pass Registration is optional for normal passes, all analysis group implementations must be
registered, and must use the RegisterAnalysisGroup template to join the implementation pool. Also, a
default implementation of the interface must be registered with RegisterAnalysisGroup.
As a concrete example of an Analysis Group in action, consider the AliasAnalysis analysis group. The default
implementation of the alias analysis interface (the basicaa pass) just does a few simple checks that don't
require significant analysis to compute (such as: two different globals can never alias each other, etc). Passes
that use the AliasAnalysis interface (for example the gcse pass), do not care which implementation of
alias analysis is actually provided, they just use the designated interface.
From the user's perspective, commands work just like normal. Issuing the command 'opt -gcse ...' will
cause the basicaa class to be instantiated and added to the pass sequence. Issuing the command 'opt
-somefancyaa -gcse ...' will cause the gcse pass to use the somefancyaa alias analysis (which
doesn't actually exist, it's just a hypothetical example) instead.
Using RegisterAnalysisGroup
The RegisterAnalysisGroup template is used to register the analysis group itself as well as add pass
implementations to the analysis group. First, an analysis should be registered, with a human readable name
provided for it. Unlike registration of passes, there is no command line argument to be specified for the
Analysis Group Interface itself, because it is "abstract":
Once the analysis is registered, passes can declare that they are valid implementations of the interface by
using the following code:
namespace {
// Analysis Group implementations must be registered normally...
RegisterPass<FancyAA>
B("somefancyaa", "A more complex alias analysis implementation");

// Declare that we implement the AliasAnalysis interface
RegisterAnalysisGroup<AliasAnalysis> C(B);
}
This just shows a class FancyAA that is registered normally, then uses the RegisterAnalysisGroup
template to "join" the AliasAnalysis analysis group. Every implementation of an analysis group should
join using this template. A single pass may join multiple different analysis groups with no problem.
namespace {
// Analysis Group implementations must be registered normally...
RegisterPass<BasicAliasAnalysis>
D("basicaa", "Basic Alias Analysis (default AA impl)");

// Declare that we implement the AliasAnalysis interface, as the default
RegisterAnalysisGroup<AliasAnalysis, true> E(D);
}
Pass Statistics
The Statistic class is designed to be an easy way to expose various success metrics from passes. These
statistics are printed at the end of a run, when the -stats option is given on the command line.
See the Statistics section in the Programmer's Manual for details.
The PassManager does two main things to try to reduce the execution time of a series of passes:
1. Share analysis results - The PassManager attempts to avoid recomputing analysis results as much as
possible. This means keeping track of which analyses are available already, which analyses get
invalidated, and which analyses need to be run for a pass. An important part of this work is that the
PassManager tracks the exact lifetime of all analysis results, allowing it to free memory allocated
to holding analysis results as soon as they are no longer needed.
2. Pipeline the execution of passes on the program - The PassManager attempts to get better cache
and memory usage behavior out of a series of passes by pipelining the passes together. This means
that, given a series of consecutive FunctionPass's, it will execute all of the FunctionPass's on
the first function, then all of the FunctionPass's on the second function, etc... until the entire
program has been run through the passes.
This improves the cache behavior of the compiler, because it is only touching the LLVM program
representation for a single function at a time, instead of traversing the entire program. It reduces the
memory consumption of the compiler because, for example, only one DominatorSet needs to be
calculated at a time. This also makes it possible to implement some interesting enhancements in the
future.
The effectiveness of the PassManager is influenced directly by how much information it has about the
behaviors of the passes it is scheduling. For example, the "preserved" set is intentionally conservative in the
face of an unimplemented getAnalysisUsage method. Not implementing when it should be implemented
will have the effect of not allowing any analysis results to live across the execution of your pass.
The PassManager class exposes a --debug-pass command line option that is useful for debugging
pass execution, seeing how things work, and diagnosing when you should be preserving more analyses than
you currently are (To get information about all of the variants of the --debug-pass option, just type 'opt
-help-hidden').
By using the --debug-pass=Structure option, for example, we can see how our Hello World pass
interacts with other passes. Let's try it out with the gcse and licm passes:
$ opt -load ../../../Debug/lib/Hello.so -gcse -licm --debug-pass=Structure < hello.bc > /dev/nu
Module Pass Manager
Function Pass Manager
Dominator Set Construction
Immediate Dominators Construction
This output shows us when passes are constructed and when the analysis results are known to be dead
(prefixed with '--'). Here we see that GCSE uses dominator and immediate dominator information to do its
job. The LICM pass uses natural loop information, which uses dominator sets, but not immediate dominators.
Because the immediate dominator information is no longer useful after the GCSE pass, it is immediately destroyed. The
dominator sets are then reused to compute natural loop information, which is then used by the LICM pass.
After the LICM pass, the module verifier runs (which is automatically added by the 'opt' tool), which uses
the dominator set to check that the resultant LLVM code is well formed. After it finishes, the dominator set
information is destroyed, after being computed once, and shared by three passes.
Let's see how this changes when we run the Hello World pass in between the two passes:
$ opt -load ../../../Debug/lib/Hello.so -gcse -hello -licm --debug-pass=Structure < hello.bc >
Module Pass Manager
Function Pass Manager
Dominator Set Construction
Immediate Dominators Construction
Global Common Subexpression Elimination
-- Dominator Set Construction
-- Immediate Dominators Construction
-- Global Common Subexpression Elimination
Hello World Pass
-- Hello World Pass
Dominator Set Construction
Natural Loop Construction
Loop Invariant Code Motion
-- Natural Loop Construction
-- Loop Invariant Code Motion
Module Verifier
-- Dominator Set Construction
-- Module Verifier
Bitcode Writer
-- Bitcode Writer
Hello: __main
Hello: puts
Hello: main
Here we see that the Hello World pass has killed the Dominator Set pass, even though it doesn't modify the
code at all! To fix this, we need to add the following getAnalysisUsage method to our pass:

// We do not modify the program, so we preserve all analyses
virtual void getAnalysisUsage(AnalysisUsage &AU) const {
  AU.setPreservesAll();
}

Now when we run our pass, we get this output:
$ opt -load ../../../Debug/lib/Hello.so -gcse -hello -licm --debug-pass=Structure < hello.bc >
Pass Arguments: -gcse -hello -licm
Module Pass Manager
Function Pass Manager
Dominator Set Construction
Immediate Dominators Construction
Global Common Subexpression Elimination
-- Immediate Dominators Construction
-- Global Common Subexpression Elimination
Hello World Pass
-- Hello World Pass
Natural Loop Construction
Loop Invariant Code Motion
-- Loop Invariant Code Motion
-- Natural Loop Construction
Module Verifier
-- Dominator Set Construction
-- Module Verifier
Bitcode Writer
-- Bitcode Writer
Hello: __main
Hello: puts
Hello: main
This shows that we no longer accidentally invalidate dominator information, and therefore do not have
to compute it twice.
The PassManager automatically determines when to compute analysis results, and how long to keep them
around for. Because the lifetime of the pass object itself is effectively the entire duration of the compilation
process, we need some way to free analysis results when they are no longer useful. The releaseMemory
virtual method is the way to do this.
If you are writing an analysis or any other pass that retains a significant amount of state (for use by another
pass which "requires" your pass and uses the getAnalysis method) you should implement releaseMemory
to, well, release the memory allocated to maintain this internal state. This method is called after the run*
method for the class, before the next call of run* in your pass.
The fundamental mechanisms for pass registration are the MachinePassRegistry class and subclasses of
MachinePassRegistryNode.
Implement your register allocator machine pass. In your register allocator .cpp file, add the following include:
#include "llvm/CodeGen/RegAllocRegistry.h"
Also in your register allocator .cpp file, define a creator function in the form:
FunctionPass *createMyRegisterAllocator() {
return new MyRegisterAllocator();
}
Note that the signature of this function should match the type of
RegisterRegAlloc::FunctionPassCtor. In the same file, add the "installing" declaration, in the
form:

static RegisterRegAlloc myRegAlloc("myregalloc",
                                   "  my register allocator help string",
                                   createMyRegisterAllocator);

Note that the two spaces before the help string produce a tidy result on the -help query.
$ llc -help
...
-regalloc - Register allocator to use (default=linearscan)
=linearscan - linear scan register allocator
=local - local register allocator
=simple - simple register allocator
=myregalloc - my register allocator help string
...
And that's it. The user is now free to use -regalloc=myregalloc as an option. Registering instruction
schedulers is similar except use the RegisterScheduler class. Note that the
RegisterScheduler::FunctionPassCtor is significantly different from
RegisterRegAlloc::FunctionPassCtor.
To force the load/linking of your register allocator into the llc/lli tools, add your creator function's global
declaration to "Passes.h" and add a "pseudo" call line to
llvm/Codegen/LinkAllCodegenComponents.h.
To create your own registry, first define a subclass of MachinePassRegistryNode with an appropriate
FunctionPassCtor type. Then you need to declare the registry. Example: if your pass registry is
RegisterMyPasses, then define:
MachinePassRegistry RegisterMyPasses::Registry;
And finally, declare the command line option for your passes. Example:
cl::opt<RegisterMyPasses::FunctionPassCtor, false,
RegisterPassParser<RegisterMyPasses> >
MyPassOpt("mypass",
cl::init(&createDefaultMyPass),
cl::desc("my pass option help"));
Here the command option is "mypass", with createDefaultMyPass as the default creator.
For sake of discussion, I'm going to assume that you are debugging a transformation invoked by opt,
although nothing described here depends on that.
$ gdb opt
GNU gdb 5.0
Copyright 2000 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.6"...
(gdb)
Note that opt has a lot of debugging information in it, so it takes time to load. Be patient. Since we cannot set
a breakpoint in our pass yet (the shared object isn't loaded until runtime), we must execute the process, and
have it stop before it invokes our pass, but after it has loaded the shared object. The most foolproof way of
doing this is to set a breakpoint in PassManager::run and then run the process with the arguments you
want:
Once opt stops in the PassManager::run method, you are free to set breakpoints in your pass so
that you can trace through execution or do other standard debugging stuff.
Miscellaneous Problems
Once you have the basics down, there are a couple of problems that GDB has, some with solutions, some
without.
• Inline functions have bogus stack information. In general, GDB does a pretty good job getting stack
traces and stepping through inline functions. When a pass is dynamically loaded however, it somehow
completely loses this capability. The only solution I know of is to de-inline a function (move it from
the body of a class to a .cpp file).
• Restarting the program breaks breakpoints. After following the information above, you have
succeeded in getting some breakpoints planted in your pass. The next thing you know, you restart the
program (i.e., you type 'run' again), and you start getting errors about breakpoints being unsettable.
The only way I have found to "fix" this problem is to delete the breakpoints that are already set in
your pass, run the program, and re-set the breakpoints once execution stops in
PassManager::run.
Hopefully these tips will help with common case debugging situations. If you'd like to contribute some tips of
your own, just contact Chris.
Multithreaded LLVM
Multiple CPU machines are becoming more common and compilation can never be fast enough: obviously we
should allow for a multithreaded compiler. Because of the semantics defined for passes above (specifically
they cannot maintain state across invocations of their run* methods), a nice clean way to implement a
multithreaded compiler would be for the PassManager class to create multiple instances of each pass
object, and allow the separate instances to be hacking on different parts of the program at the same time.
This implementation would prevent each of the passes from having to implement multithreaded constructs,
requiring only the LLVM core to have locking in a few places (for global resources). Although this is a simple
extension, we simply haven't had time (or multiprocessor machines, thus a reason) to implement this. Despite
that, we have kept the LLVM passes SMP ready, and you should too.
Chris Lattner
The LLVM Compiler Infrastructure
Last modified: $Date: 2010-02-18 08:37:52 -0600 (Thu, 18 Feb 2010) $
1. Introduction
♦ Audience
♦ Prerequisite Reading
♦ Basic Steps
♦ Preliminaries
2. Target Machine
3. Target Registration
4. Register Set and Register Classes
♦ Defining a Register
♦ Defining a Register Class
♦ Implement a subclass of TargetRegisterInfo
5. Instruction Set
♦ Instruction Operand Mapping
♦ Implement a subclass of TargetInstrInfo
♦ Branch Folding and If Conversion
6. Instruction Selector
♦ The SelectionDAG Legalize Phase
◊ Promote
◊ Expand
◊ Custom
◊ Legal
♦ Calling Conventions
7. Assembly Printer
8. Subtarget Support
9. JIT Support
♦ Machine Code Emitter
♦ Target JIT Info
Introduction
This document describes techniques for writing compiler backends that convert the LLVM Intermediate
Representation (IR) to code for a specified machine or other languages. Code intended for a specific machine
can take the form of either assembly code or binary code (usable for a JIT compiler).
The backend of LLVM features a target-independent code generator that may create output for several types
of target CPUs — including X86, PowerPC, Alpha, and SPARC. The backend may also be used to generate
code targeted at SPUs of the Cell processor or GPUs to support the execution of compute kernels.
Audience
The audience for this document is anyone who needs to write an LLVM backend to generate code for a
specific hardware or software target.
Prerequisite Reading
• LLVM Language Reference Manual — a reference manual for the LLVM assembly language.
• The LLVM Target-Independent Code Generator — a guide to the components (classes and code
generation algorithms) for translating the LLVM internal representation into machine code for a
specified target. Pay particular attention to the descriptions of code generation stages: Instruction
Selection, Scheduling and Formation, SSA-based Optimization, Register Allocation, Prolog/Epilog
Code Insertion, Late Machine Code Optimizations, and Code Emission.
• TableGen Fundamentals — a document that describes the TableGen (tblgen) application that
manages domain-specific information to support LLVM code generation. TableGen processes input
from a target description file (.td suffix) and generates C++ code that can be used for code
generation.
• Writing an LLVM Pass — The assembly printer is a FunctionPass, as are several SelectionDAG
processing steps.
To follow the SPARC examples in this document, have a copy of The SPARC Architecture Manual, Version 8
for reference. For details about the ARM instruction set, refer to the ARM Architecture Reference Manual. For
more about the GNU Assembler format (GAS), see Using As, especially for the assembly printer. Using As
contains a list of target machine dependent features.
Basic Steps
To write a compiler backend for LLVM that converts the LLVM IR to code for a specified target (machine or
other language), follow these steps:
• Create a subclass of the TargetMachine class that describes characteristics of your target machine.
Copy existing examples of specific TargetMachine class and header files; for example, start with
SparcTargetMachine.cpp and SparcTargetMachine.h, but change the file names for
your target. Similarly, change code that references "Sparc" to reference your target.
• Describe the register set of the target. Use TableGen to generate code for register definition, register
aliases, and register classes from a target-specific RegisterInfo.td input file. You should also
write additional code for a subclass of the TargetRegisterInfo class that represents the class register
file data used for register allocation and also describes the interactions between registers.
• Describe the instruction set of the target. Use TableGen to generate code for target-specific
instructions from target-specific versions of TargetInstrFormats.td and
TargetInstrInfo.td. You should write additional code for a subclass of the TargetInstrInfo
class to represent machine instructions supported by the target machine.
• Describe the selection and conversion of the LLVM IR from a Directed Acyclic Graph (DAG)
representation of instructions to native target-specific instructions. Use TableGen to generate code
that matches patterns and selects instructions based on additional information in a target-specific
version of TargetInstrInfo.td. Write code for XXXISelDAGToDAG.cpp, where XXX
identifies the specific target, to perform pattern matching and DAG-to-DAG instruction selection.
Also write code in XXXISelLowering.cpp to replace or remove operations and data types that
are not supported natively in a SelectionDAG.
• Write code for an assembly printer that converts LLVM IR to a GAS format for your target machine.
You should add assembly strings to the instructions defined in your target-specific version of
TargetInstrInfo.td. You should also write code for a subclass of AsmPrinter that performs the
LLVM-to-assembly conversion and a trivial subclass of TargetAsmInfo.
• Optionally, add support for subtargets (i.e., variants with different capabilities). You should also write
code for a subclass of the TargetSubtarget class, which allows you to use the -mcpu= and -mattr=
command-line options.
• Optionally, add JIT support and create a machine code emitter (subclass of TargetJITInfo) that is used
to emit binary code directly into memory.
In the .cpp and .h files, initially stub up these methods and then implement them later. Initially, you may
not know which private members the class will need and which components will need to be subclassed.
Preliminaries
To actually create your compiler backend, you need to create and modify a few files. The absolute minimum
is discussed here. But to actually use the LLVM target-independent code generator, you must perform the
steps described in the LLVM Target-Independent Code Generator document.
First, you should create a subdirectory under lib/Target to hold all the files related to your target. If your
target is called "Dummy," create the directory lib/Target/Dummy.
In this new directory, create a Makefile. It is easiest to copy a Makefile of another target and modify it.
It should at least contain the LEVEL, LIBRARYNAME and TARGET variables, and then include
$(LEVEL)/Makefile.common. The library can be named LLVMDummy (for example, see the MIPS
target). Alternatively, you can split the library into LLVMDummyCodeGen and LLVMDummyAsmPrinter,
the latter of which should be implemented in a subdirectory below lib/Target/Dummy (for example, see
the PowerPC target).
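Putting those requirements together, a minimal Makefile for the hypothetical "Dummy" target might look like this (the LEVEL value assumes the directory sits three levels below the top of the tree):

```make
LEVEL = ../../..
LIBRARYNAME = LLVMDummy
TARGET = Dummy
include $(LEVEL)/Makefile.common
```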
Note that these two naming schemes are hardcoded into llvm-config. Using any other naming scheme
will confuse llvm-config and produce a lot of (seemingly unrelated) linker errors when linking llc.
To make your target actually do something, you need to implement a subclass of TargetMachine. This
implementation should typically be in the file lib/Target/Dummy/DummyTargetMachine.cpp, but
any file in the lib/Target directory will be built and should work. To use LLVM's target-independent code
generator, you should do what all current machine backends do: create a subclass of
LLVMTargetMachine. (To create a target from scratch, create a subclass of TargetMachine.)
To get LLVM to actually build and link your target, you need to add it to the TARGETS_TO_BUILD variable.
To do this, you modify the configure script to know about your target when parsing the
--enable-targets option. Search the configure script for TARGETS_TO_BUILD, add your target to the
lists there (some creativity required), and then reconfigure. Alternatively, you can change
autoconf/configure.ac and regenerate configure by running ./autoconf/AutoRegen.sh.
Target Machine
LLVMTargetMachine is designed as a base class for targets implemented with the LLVM
target-independent code generator. The LLVMTargetMachine class should be specialized by a concrete
target class that implements the various virtual methods. LLVMTargetMachine is defined as a subclass of
TargetMachine in include/llvm/Target/TargetMachine.h. The TargetMachine class
implementation (TargetMachine.cpp) also processes numerous command-line options.
For a target machine XXX, the implementation of XXXTargetMachine must have access methods to obtain
objects that represent target components. These methods are named get*Info, and are intended to obtain
the instruction set (getInstrInfo), register set (getRegisterInfo), stack frame layout
(getFrameInfo), and similar information.
For instance, for the SPARC target, the header file SparcTargetMachine.h declares prototypes for
several get*Info and getTargetData methods that simply return a class member.
namespace llvm {

class Module;

class SparcTargetMachine : public LLVMTargetMachine {
  ...
protected:
  virtual const TargetAsmInfo *createTargetAsmInfo() const;
public:
  SparcTargetMachine(const Module &M, const std::string &FS);
  ...
};

} // end namespace llvm
• getInstrInfo()
• getRegisterInfo()
• getFrameInfo()
• getTargetData()
• getSubtargetImpl()
For some targets, you also need to support the following methods:
• getTargetLowering()
• getJITInfo()
• An upper-case "E" in the string indicates a big-endian target data model. A lower-case "e" indicates
a little-endian model.
• "p:" is followed by pointer information: size, ABI alignment, and preferred alignment. If only two
figures follow "p:", then the first value is pointer size, and the second value is both ABI and
preferred alignment.
• Then a letter for numeric type alignment: "i", "f", "v", or "a" (corresponding to integer, floating
point, vector, or aggregate). "i", "v", or "a" are followed by ABI alignment and preferred alignment.
"f" is followed by three values: the first indicates the size of a long double, then ABI alignment, and
then ABI preferred alignment.
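For example, a big-endian, 32-bit target might use a string with the following shape (the values are illustrative, not taken from any real target):

```
E-p:32:32-i64:32:64-f128:128:128
```

Reading left to right: "E" selects big-endian; "p:32:32" gives 32-bit pointers whose ABI and preferred alignments are both 32 bits; "i64:32:64" gives 64-bit integers a 32-bit ABI alignment and a 64-bit preferred alignment; "f128:128:128" describes a 128-bit long double with 128-bit alignments.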
Target Registration
You must also register your target with the TargetRegistry, which is what other LLVM tools use to
look up and use your target at runtime. The TargetRegistry can be used directly, but for most
targets there are helper templates which should take care of the work for you.
All targets should declare a global Target object which is used to represent the target during registration.
Then, in the target's TargetInfo library, the target should define that object and use the RegisterTarget
template to register the target. For example, the Sparc registration code looks like this:
Target llvm::TheSparcTarget;
This allows the TargetRegistry to look up the target by name or by target triple. In addition, most targets
will also register additional features which are available in separate libraries. These registration steps are
separate, because some clients may wish to only link in some parts of the target -- the JIT code generator does
not require the use of the assembly printer, for example. The assembly printer is registered separately, in the
same style, using the RegisterAsmPrinter template.
You also need to define register classes to categorize related registers. A register class should be added for
groups of registers that are all treated the same way for some instruction. Typical examples are register classes
for integer, floating-point, or vector registers. A register allocator allows an instruction to use any register in a
specified register class to perform the instruction in a similar manner. Register classes allocate virtual registers
to instructions from these sets, and register classes let the target-independent register allocator automatically
choose the actual registers.
Much of the code for registers, including register definition, register aliases, and register classes, is generated
by TableGen from XXXRegisterInfo.td input files and placed in XXXGenRegisterInfo.h.inc
and XXXGenRegisterInfo.inc output files. Some of the code in the implementation of
XXXRegisterInfo requires hand-coding.
Defining a Register
The XXXRegisterInfo.td file typically starts with register definitions for a target machine. The
Register class (specified in Target.td) is used to define an object for each register. The specified string
n becomes the Name of the register. The basic Register object does not have any subregisters and does not
specify any aliases.
For example, in the X86RegisterInfo.td file, there are register definitions that utilize the Register class,
such as:
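The definition of AL in that era's X86RegisterInfo.td had roughly this shape (quoted from memory; consult the file itself for the exact form):

```
def AL : Register<"AL">, DwarfRegNum<[0, 0, 0]>;
```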
This defines the register AL and assigns it values (with DwarfRegNum) that are used by gcc, gdb, or a
debug information writer (such as DwarfWriter in llvm/lib/CodeGen/AsmPrinter) to identify a
register. For register AL, DwarfRegNum takes an array of 3 values representing 3 different modes: the first
element is for X86-64, the second for exception handling (EH) on X86-32, and the third is generic. -1 is a
special Dwarf number that indicates the gcc number is undefined, and -2 indicates the register number is
invalid for this mode.
From the previously described line in the X86RegisterInfo.td file, TableGen generates a descriptor
for the register (and its alias set) in the X86GenRegisterInfo.inc file.
From the register info file, TableGen generates a TargetRegisterDesc object for each register.
TargetRegisterDesc is defined in include/llvm/Target/TargetRegisterInfo.h with the
following fields:
struct TargetRegisterDesc {
  const char *AsmName;      // Assembly language name for the register
  const char *Name;         // Printable name for the reg (for debugging)
  const unsigned *AliasSet; // Register Alias Set
  ...
};
TableGen uses the entire target description file (.td) to determine text names for the register (in the
AsmName and Name fields of TargetRegisterDesc) and the relationships of other registers to the
defined register (in the other TargetRegisterDesc fields). In this example, other definitions establish the
registers "AX", "EAX", and "RAX" as aliases for one another, so TableGen generates a null-terminated array
(AL_AliasSet) for this register alias set.
The Register class is commonly used as a base class for more complex classes. In Target.td, the
Register class is the base for the RegisterWithSubRegs class that is used to define registers that need
to specify subregisters in the SubRegs list, as shown here:
class RegisterWithSubRegs<string n,
list<Register> subregs> : Register<n> {
let SubRegs = subregs;
}
In SparcRegisterInfo.td, additional register classes are defined for SPARC: a Register subclass,
SparcReg, and further subclasses: Ri, Rf, and Rd. SPARC registers are identified by 5-bit ID numbers, which
is a feature common to these subclasses. Note the use of 'let' expressions to override values that are initially
defined in a superclass (such as SubRegs field in the Rd class).
In the SparcRegisterInfo.td file, there are register definitions that utilize these subclasses of
Register, such as:
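From memory, those SPARC definitions have roughly the following shape; the subregister lists for D0 and D1 are the important part, and the exact ID and Dwarf numbers should be checked against SparcRegisterInfo.td:

```
def G0 : Ri< 0, "G0">, DwarfRegNum<[0]>;
def F0 : Rf< 0, "F0">, DwarfRegNum<[32]>;
def D0 : Rd< 0, "F0", [F0, F1]>;
def D1 : Rd< 2, "F2", [F2, F3]>;
```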
The last two registers shown above (D0 and D1) are double-precision floating-point registers that are aliases
for pairs of single-precision floating-point sub-registers. In addition to aliases, the sub-register and
super-register relationships of the defined register are in fields of a register's TargetRegisterDesc.
Using SparcRegisterInfo.td with TableGen generates several output files that are intended for
inclusion in other source code that you write. SparcRegisterInfo.td generates
SparcGenRegisterInfo.h.inc, which should be included in the header file for the implementation of
the SPARC register implementation that you write (SparcRegisterInfo.h). In
SparcGenRegisterInfo.h.inc a new structure is defined called SparcGenRegisterInfo that
uses TargetRegisterInfo as its base. It also specifies types, based upon the defined register classes:
DFPRegsClass, FPRegsClass, and IntRegsClass.
IntRegsClass::IntRegsClass() : TargetRegisterClass(IntRegsRegClassID,
IntRegsVTs, IntRegsSubclasses, IntRegsSuperclasses, IntRegsSubRegClasses,
IntRegsSuperRegClasses, 4, 4, 1, IntRegs, IntRegs + 32) {}
}
Instruction Set
During the early stages of code generation, the LLVM IR code is converted to a SelectionDAG with nodes
that are instances of the SDNode class containing target instructions. An SDNode has an opcode, operands,
type requirements, and operation properties: for example, whether an operation is commutative, or whether it
loads from memory. The various operation node types are described in the
include/llvm/CodeGen/SelectionDAGNodes.h file (values of the NodeType enum in the ISD
namespace).
TableGen uses several target description (.td) input files to generate much of the code for instruction
definition.
There is also a target-specific XXX.td file, where XXX is the name of the target. The XXX.td file includes
the other .td input files, but its contents are only directly important for subtargets.
You should describe a concrete target-specific class XXXInstrInfo that represents machine instructions
supported by a target machine. XXXInstrInfo contains an array of XXXInstrDescriptor objects,
each of which describes one instruction. An instruction descriptor defines:
• Opcode mnemonic
• Number of operands
• List of implicit register definitions and uses
• Target-independent properties (such as memory access, is commutable)
• Target-specific flags
The Instruction class (defined in Target.td) is mostly used as a base for more complex instruction classes.
class Instruction {
string Namespace = "";
dag OutOperandList; // A dag containing the MI def operand list.
dag InOperandList; // A dag containing the MI use operand list.
string AsmString = ""; // The .s format to print the instruction with.
list<dag> Pattern; // Set to the DAG pattern for this instruction
list<Register> Uses = [];
list<Register> Defs = [];
...
}
A SelectionDAG node (SDNode) should contain an object representing a target-specific instruction that is
defined in XXXInstrInfo.td. The instruction objects should represent instructions from the architecture
manual of the target machine (such as the SPARC Architecture Manual for the SPARC target).
A single instruction from the architecture manual is often modeled as multiple target instructions, depending
upon its operands. For example, a manual might describe an add instruction that takes a register or an
immediate operand. An LLVM target could model this with two instructions named ADDri and ADDrr.
You should define a class for each instruction category and define each opcode as a subclass of the category
with appropriate parameters such as the fixed binary encoding of opcodes and extended opcodes. You should
map the register bits to the bits of the instruction in which they are encoded (for the JIT). Also you should
specify how the instruction should be printed when the automatic assembly printer is used.
As is described in the SPARC Architecture Manual, Version 8, there are three major 32-bit formats for
instructions. Format 1 is only for the CALL instruction. Format 2 is for branch on condition codes and SETHI
(set high bits of a register) instructions. Format 3 is for other instructions.
Each of these formats has corresponding classes in SparcInstrFormat.td. InstSP is a base class for
other instruction classes. Additional base classes are specified for more precise formats: for example in
SparcInstrFormat.td, F2_1 is for SETHI, and F2_2 is for branches. There are three other base
classes: F3_1 for register/register operations, F3_2 for register/immediate operations, and F3_3 for
floating-point operations. SparcInstrInfo.td also adds the base class Pseudo for synthetic SPARC
instructions.
SparcInstrInfo.td largely consists of operand and instruction definitions for the SPARC target. In
SparcInstrInfo.td, the following target description file entry, LDrr, defines the Load Integer
instruction for a Word (the LD SPARC opcode) from a memory address to a register. The first parameter, the
value 3 (binary 11), is the operation value for this category of operation. The second parameter (binary 000000) is the
specific operation value for LD/Load Word. The third parameter is the output destination, which is a register
operand and defined in the Register target description file (IntRegs).
The fourth parameter is the input source, which uses the address operand MEMrr that is defined earlier in
SparcInstrInfo.td:
The fifth parameter is a string that is used by the assembly printer and can be left as an empty string until the
assembly printer interface is implemented. The sixth and final parameter is the pattern used to match the
instruction during the SelectionDAG Select Phase described in The LLVM Target-Independent Code
Generator. This parameter is detailed in the next section, Instruction Selector.
Writing these definitions for so many similar instructions can involve a lot of cut and paste. In td files, the
multiclass directive enables the creation of templates to define several instruction classes at once (using
the defm directive). For example in SparcInstrInfo.td, the multiclass pattern F3_12 is defined
to create two instruction classes each time F3_12 is invoked:
So when the defm directive is used for the XOR and ADD instructions, as seen below, it creates four
instruction objects: XORrr, XORri, ADDrr, and ADDri.
SparcInstrInfo.td also includes definitions for condition codes that are referenced by branch
instructions. The following definitions in SparcInstrInfo.td indicate the bit location of the SPARC
condition code. For example, the 10th bit represents the 'greater than' condition for integers, and the 22nd bit
represents the 'greater than' condition for floats.
(Note that Sparc.h also defines enums that correspond to the same SPARC condition codes. Care must be
taken to ensure the values in Sparc.h correspond to the values in SparcInstrInfo.td. I.e.,
SPCC::ICC_NE = 9, SPCC::FCC_U = 23 and so on.)
The instruction templates in SparcInstrFormats.td show the base class for F3_1 is InstSP.
class InstSP<dag outs, dag ins, string asmstr, list<dag> pattern> : Instruction {
field bits<32> Inst;
let Namespace = "SP";
bits<2> op;
let Inst{31-30} = op;
dag OutOperandList = outs;
dag InOperandList = ins;
let AsmString = asmstr;
let Pattern = pattern;
}
F3 binds the op field and defines the rd, op3, and rs1 fields. F3 format instructions bind their operands to
the rd, op3, and rs1 fields.
F3_1 binds the op3 field and defines the rs2 field. F3_1 format instructions bind their operands to the
rd, rs1, and rs2 fields. This results in the XNORrr instruction binding its $dst, $b, and $c operands to the
rd, rs1, and rs2 fields, respectively.
• isMoveInstr — Return true if the instruction is a register to register move; false, otherwise.
• isLoadFromStackSlot — If the specified machine instruction is a direct load from a stack slot,
return the register number of the destination and the FrameIndex of the stack slot.
Several implementations of AnalyzeBranch (for ARM, Alpha, and X86) can be examined as models for
your own AnalyzeBranch implementation. Since SPARC does not implement a useful
AnalyzeBranch, the ARM target implementation is shown below.
In the simplest case, if a block ends without a branch, then it falls through to the successor block. No
destination blocks are specified for either TBB or FBB, so both parameters return NULL. The start of the
AnalyzeBranch (see code below for the ARM target) shows the function parameters and the code for the
simplest case.
If a block ends with a single unconditional branch instruction, then AnalyzeBranch (shown below) should
return the destination of that branch in the TBB parameter.
A block may end with a single conditional branch instruction that falls through to the successor block if the
condition evaluates to false. In that case, AnalyzeBranch (shown below) should return the destination of
that conditional branch in the TBB parameter and a list of operands in the Cond parameter to evaluate the
condition.
If a block ends with both a conditional branch and an ensuing unconditional branch, then AnalyzeBranch
(shown below) should return the conditional branch destination (assuming it corresponds to a conditional
evaluation of 'true') in the TBB parameter and the unconditional branch destination in the FBB
(corresponding to a conditional evaluation of 'false'). A list of operands to evaluate the condition should be
returned in the Cond parameter.
For the last two cases (ending with a single conditional branch or ending with one conditional and one
unconditional branch), the operands returned in the Cond parameter can be passed to methods of other
instructions to create new branches or perform other operations. An implementation of AnalyzeBranch
requires the helper methods RemoveBranch and InsertBranch to manage subsequent operations.
AnalyzeBranch should return false, indicating success, in most circumstances. AnalyzeBranch should
return true only when it cannot determine what to do, for example if a block has three terminating branches
or ends with a terminator it cannot handle, such as an indirect branch.
Instruction Selector
LLVM uses a SelectionDAG to represent LLVM IR instructions, and nodes of the SelectionDAG
ideally represent native target instructions. During code generation, instruction selection passes are performed
to convert non-native DAG instructions into native target-specific instructions. TableGen generates the
code for instruction selection using the following target description input files:
The implementation of an instruction selection pass must include a header that declares the FunctionPass
class or a subclass of FunctionPass. In XXXTargetMachine.cpp, a Pass Manager (PM) should add
each instruction selection pass into the queue of passes to run.
The LLVM static compiler (llc) is an excellent tool for visualizing the contents of DAGs. To display the
SelectionDAG before or after specific processing phases, use the command line options for llc, described
at SelectionDAG Instruction Selection Process.
To describe instruction selector behavior, you should add patterns for lowering LLVM code into a
SelectionDAG as the last parameter of the instruction definitions in XXXInstrInfo.td. For example,
in SparcInstrInfo.td, this entry defines a register store operation, and the last parameter describes a
pattern with the store DAG operator.
From XXXInstrInfo.td, TableGen also generates (in XXXGenDAGISel.inc) the SelectCode method that is
used to call the appropriate processing method for an instruction. In this example, SelectCode calls
Select_ISD_STORE for the ISD::STORE opcode.
SDNode *SelectCode(SDValue N) {
...
MVT::ValueType NVT = N.getNode()->getValueType(0);
The pattern for STrr is matched, so elsewhere in XXXGenDAGISel.inc, code for STrr is created for
Select_ISD_STORE. The Emit_22 method is also generated in XXXGenDAGISel.inc to complete the
processing of this instruction.
In the constructor for the XXXTargetLowering class, first use the addRegisterClass method to
specify which types are supported and which register classes are associated with them. The code for the
register classes is generated by TableGen from XXXRegisterInfo.td and placed in
XXXGenRegisterInfo.h.inc. For example, the implementation of the constructor for the
SparcTargetLowering class (in SparcISelLowering.cpp) starts with the following code:
addRegisterClass(MVT::i32, SP::IntRegsRegisterClass);
addRegisterClass(MVT::f32, SP::FPRegsRegisterClass);
addRegisterClass(MVT::f64, SP::DFPRegsRegisterClass);
These callbacks are used to determine whether an operation works with a specified type (or types).
In all cases, the third parameter is a LegalizeAction enum value: Promote, Expand, Custom, or
Legal. SparcISelLowering.cpp contains examples of all four LegalizeAction values.
Promote
For an operation without native support for a given type, the specified type may be promoted to a larger type
that is supported. For example, SPARC does not support a sign-extending load for Boolean values (i1 type),
so in SparcISelLowering.cpp the third parameter below, Promote, changes i1 type values to a larger
type before loading.
Expand
For a type without native support, a value may need to be broken down further, rather than promoted. For an
operation without native support, a combination of other operations may be used to similar effect. In SPARC,
the floating-point sine and cosine trig operations are supported by expansion to other operations, as indicated
by the third parameter, Expand, to setOperationAction:
Custom
For some operations, simple type promotion or operation expansion may be insufficient. In some cases, a
special intrinsic function must be implemented.
For example, a constant value may require special treatment, or an operation may require spilling and
restoring registers in the stack and working with register allocators.
As seen in SparcISelLowering.cpp code below, to perform a type conversion from a floating point
value to a signed integer, first the setOperationAction should be called with Custom as the third
parameter:
In the LowerOperation method, for each Custom operation, a case statement should be added to indicate
what function to call. In the following code, an FP_TO_SINT opcode will call the LowerFP_TO_SINT
method:
Finally, the LowerFP_TO_SINT method is implemented, using an FP register to convert the floating-point
value to an integer.
Legal
The Legal LegalizeAction enum value simply indicates that an operation is natively supported. Legal
represents the default condition, so it is rarely specified explicitly. In SparcISelLowering.cpp, CTPOP
(an operation to count the bits set in an integer) is natively supported only on SPARC v9. The following code
enables the Expand conversion technique for non-v9 SPARC implementations.
Calling Conventions
To support target-specific calling conventions, XXXGenCallingConv.td uses interfaces (such as
CCIfType and CCAssignToReg) that are defined in lib/Target/TargetCallingConv.td. TableGen
can take the target descriptor file XXXGenCallingConv.td and generate the header file
XXXGenCallingConv.inc, which is typically included in XXXISelLowering.cpp. You can use the
interfaces in TargetCallingConv.td to specify:
The following example demonstrates the use of the CCIfType and CCAssignToReg interfaces. If the
CCIfType predicate is true (that is, if the current argument is of type f32 or f64), then the action is
performed. In this case, the CCAssignToReg action assigns the argument value to the first available
register: either R0 or R1.
CCDelegateTo is another commonly used interface, which tries to find a specified sub-calling convention,
and, if a match is found, it is invoked. In the following example (in X86CallingConv.td), the definition
of RetCC_X86_32_C ends with CCDelegateTo. After the current value is assigned to the register ST0 or
ST1, the RetCC_X86Common is invoked.
CCIfCC is an interface that attempts to match the given name to the current calling convention. If the name
identifies the current calling convention, then a specified action is invoked. In the following example (in
X86CallingConv.td), if the Fast calling convention is in use, then RetCC_X86_32_Fast is invoked.
If the SSECall calling convention is in use, then RetCC_X86_32_SSE is invoked.
Assembly Printer
During the code emission stage, the code generator may utilize an LLVM pass to produce assembly output.
To do this, implement a printer that converts LLVM IR to GAS-format assembly language for your target
machine, using the following steps:
• Define all the assembly strings for your target, adding them to the instructions defined in the
XXXInstrInfo.td file. (See Instruction Set.) TableGen will produce an output file
(XXXGenAsmWriter.inc) with an implementation of the printInstruction method for the
XXXAsmPrinter class.
• Write XXXTargetAsmInfo.h, which contains the bare-bones declaration of the
XXXTargetAsmInfo class (a subclass of TargetAsmInfo).
• Write XXXTargetAsmInfo.cpp, which contains target-specific values for TargetAsmInfo
properties and sometimes new implementations for methods.
• Write XXXAsmPrinter.cpp, which implements the AsmPrinter class that performs the
LLVM-to-assembly conversion.
The code in XXXTargetAsmInfo.h is usually a trivial declaration of the XXXTargetAsmInfo class for
use in XXXTargetAsmInfo.cpp. Similarly, XXXTargetAsmInfo.cpp usually has a few declarations
of XXXTargetAsmInfo replacement values that override the default values in TargetAsmInfo.cpp.
For example in SparcTargetAsmInfo.cpp:
The X86 assembly printer implementation (X86TargetAsmInfo) is an example where the target-specific
TargetAsmInfo class uses an overridden method: ExpandInlineAsm.
#include "llvm/CodeGen/AsmPrinter.h"
#include "llvm/CodeGen/MachineFunctionPass.h"
The XXXAsmPrinter implementation must also include the code generated by TableGen that is output in
the XXXGenAsmWriter.inc file. The code in XXXGenAsmWriter.inc contains an implementation of
the printInstruction method that may call these methods:
• printOperand
• printMemOperand
• printCCOperand (for conditional statements)
• printDataDirective
• printDeclare
• printImplicitDef
• printInlineAsm
The printOperand method is implemented with a long switch/case statement for the type of operand:
register, immediate, basic block, external symbol, global address, constant pool index, or jump table index.
For an instruction with a memory address operand, the printMemOperand method should be implemented
to generate the proper output. Similarly, printCCOperand should be used to print a conditional operand.
doFinalization should be overridden in XXXAsmPrinter, and it should be called to shut down the
assembly printer. During doFinalization, global variables and constants are printed to output.
Subtarget Support
Subtarget support is used to inform the code generation process of instruction set variations for a given chip
set. For example, the LLVM SPARC implementation provided covers three major versions of the SPARC
microprocessor architecture: Version 8 (V8, which is a 32-bit architecture), Version 9 (V9, a 64-bit
architecture), and the UltraSPARC architecture. V8 has 16 double-precision floating-point registers that are
also usable as either 32 single-precision or 8 quad-precision registers. V8 is also purely big-endian. V9 has 32
double-precision floating-point registers that are also usable as 16 quad-precision registers, but cannot be used
as single-precision registers. The UltraSPARC architecture combines V9 with UltraSPARC Visual Instruction
Set extensions.
If subtarget support is needed, you should implement a target-specific XXXSubtarget class for your
architecture. This class should process the command-line options -mcpu= and -mattr=.
TableGen uses definitions in the Target.td and Sparc.td files to generate code in
SparcGenSubtarget.inc. In Target.td, shown below, the SubtargetFeature interface is
defined. The first 4 string parameters of the SubtargetFeature interface are a feature name, an attribute
set by the feature, the value of the attribute, and a description of the feature. (The fifth parameter is a list of
features whose presence is implied, and its default value is an empty array.)
In the Sparc.td file, the SubtargetFeature is used to define the following features.
Elsewhere in Sparc.td, the Proc class is defined and then is used to define particular SPARC processor
subtypes that may have the previously described features.
From Target.td and Sparc.td files, the resulting SparcGenSubtarget.inc specifies enum values to
identify the features, arrays of constants to represent the CPU features and CPU subtypes, and the
ParseSubtargetFeatures method that parses the features string and sets the specified subtarget options. The
generated SparcGenSubtarget.inc file should be included in the SparcSubtarget.cpp. The
target-specific implementation of the XXXSubtarget method should follow this pseudocode:
JIT Support
The implementation of a target machine optionally includes a Just-In-Time (JIT) code generator that emits
machine code and auxiliary structures as binary output that can be written directly to memory. To do this,
implement JIT code generation by performing the following steps:
• Write an XXXCodeEmitter.cpp file that contains a machine function pass that transforms
target-machine instructions into relocatable machine code.
There are several different approaches to writing the JIT support code. For instance, TableGen and target
descriptor files may be used to create a JIT code generator, but they are not mandatory. For the Alpha and
PowerPC target machines, TableGen is used to generate XXXGenCodeEmitter.inc, which contains the
binary coding of machine instructions and the getBinaryCodeForInstr method to access those codes.
Other JIT implementations do not use TableGen for this.
The implementations of these case statements often first emit the opcode and then get the operand(s). Then
depending upon the operand, helper methods may be called to process the operand(s). For example, in
X86CodeEmitter.cpp, for the X86II::AddRegFrm case, the first data emitted (by emitByte) is the
opcode added to the register operand. Then an object representing the machine operand, MO1, is extracted.
The helper methods such as isImmediate, isGlobalAddress, isExternalSymbol,
isConstantPoolIndex, and isJumpTableIndex determine the operand type.
(X86CodeEmitter.cpp also has private methods such as emitConstant, emitGlobalAddress,
emitExternalSymbolAddress, emitConstPoolAddress, and emitJumpTableAddress that
emit the data into the output stream.)
case X86II::AddRegFrm:
MCE.emitByte(BaseOpcode + getX86RegNum(MI.getOperand(CurOp++).getReg()));
if (CurOp != NumOps) {
const MachineOperand &MO1 = MI.getOperand(CurOp++);
unsigned Size = X86InstrInfo::sizeOfImm(Desc);
if (MO1.isImmediate())
emitConstant(MO1.getImm(), Size);
else {
unsigned rt = Is64BitMode ? X86::reloc_pcrel_word
: (IsPIC ? X86::reloc_picrel_word : X86::reloc_absolute_word);
if (Opcode == X86::MOV64ri)
rt = X86::reloc_absolute_dword; // FIXME: add X86II flag?
if (MO1.isGlobalAddress()) {
bool NeedStub = isa<Function>(MO1.getGlobal());
bool isLazy = gvNeedsLazyPtr(MO1.getGlobal());
emitGlobalAddress(MO1.getGlobal(), rt, MO1.getOffset(), 0,
NeedStub, isLazy);
} else if (MO1.isExternalSymbol())
emitExternalSymbolAddress(MO1.getSymbolName(), rt);
else if (MO1.isConstantPoolIndex())
emitConstPoolAddress(MO1.getIndex(), rt);
else if (MO1.isJumpTableIndex())
emitJumpTableAddress(MO1.getIndex(), rt);
}
}
break;
In the previous example, XXXCodeEmitter.cpp uses the variable rt, which is a RelocationType enum
that may be used to relocate addresses (for example, a global address with a PIC base offset). The
RelocationType enum for that target is defined in the short target-specific XXXRelocations.h file.
The RelocationType is used by the relocate method defined in XXXJITInfo.cpp to rewrite
addresses for referenced global symbols.
For example, X86Relocations.h specifies the following relocation types for the X86 addresses. In all
four cases, the relocated value is added to the value already in memory. For reloc_pcrel_word and
reloc_picrel_word, there is an additional initial adjustment.
enum RelocationType {
reloc_pcrel_word = 0, // PC relative; adjusted for the PC location
reloc_picrel_word = 1, // PIC base relative; adjusted for the PIC base
reloc_absolute_word = 2, // absolute; added to the value in memory
reloc_absolute_dword = 3 // absolute; added to the value in memory
};
• getLazyResolverFunction — Initializes the JIT, gives the target a function that is used for
compilation.
• emitFunctionStub — Returns a native function with a specified address for a callback function.
• relocate — Changes the addresses of referenced globals, based on relocation types.
• Callback functions that are wrappers to a function stub, used when the real target is not initially
known.
TargetJITInfo::LazyResolverFn AlphaJITInfo::getLazyResolverFunction(
JITCompilerFn F) {
JITCompilerFunction = F;
return AlphaCompilationCallback;
}
For the X86 target, the getLazyResolverFunction implementation is a little more complicated,
because it returns a different callback function for processors with SSE instructions and XMM registers.
The callback function initially saves and later restores the callee register values, incoming arguments, and
frame and return address. The callback function needs low-level access to the registers or stack, so it is
typically implemented with assembler.
1. Introduction
♦ Required components in the code generator
♦ The high-level design of the code generator
♦ Using TableGen for target description
2. Target description classes
♦ The TargetMachine class
♦ The TargetData class
♦ The TargetLowering class
♦ The TargetRegisterInfo class
♦ The TargetInstrInfo class
♦ The TargetFrameInfo class
♦ The TargetSubtarget class
♦ The TargetJITInfo class
3. Machine code description classes
♦ The MachineInstr class
♦ The MachineBasicBlock class
♦ The MachineFunction class
4. Target-independent code generation algorithms
♦ Instruction Selection
◊ Introduction to SelectionDAGs
◊ SelectionDAG Code Generation Process
◊ Initial SelectionDAG Construction
◊ SelectionDAG LegalizeTypes Phase
◊ SelectionDAG Legalize Phase
◊ SelectionDAG Optimization Phase: the DAG Combiner
◊ SelectionDAG Select Phase
◊ SelectionDAG Scheduling and Formation Phase
◊ Future directions for the SelectionDAG
♦ Live Intervals
◊ Live Variable Analysis
◊ Live Intervals Analysis
♦ Register Allocation
◊ How registers are represented in LLVM
◊ Mapping virtual registers to physical registers
◊ Handling two address instructions
◊ The SSA deconstruction phase
◊ Instruction folding
◊ Built in register allocators
♦ Code Emission
◊ Generating Assembly Code
◊ Generating Binary Machine Code
5. Target-specific Implementation Notes
♦ Tail call optimization
♦ Sibling call optimization
♦ The X86 backend
♦ The PowerPC backend
◊ LLVM PowerPC ABI
◊ Frame Layout
◊ Prolog/Epilog
◊ Dynamic Allocation
Written by Chris Lattner, Bill Wendling, Fernando Magno Quintao Pereira and Jim Laskey
Introduction
The LLVM target-independent code generator is a framework that provides a suite of reusable components for
translating the LLVM internal representation to the machine code for a specified target—either in assembly
form (suitable for a static compiler) or in binary machine code format (usable for a JIT compiler). The LLVM
target-independent code generator consists of five main components:
1. Abstract target description interfaces which capture important properties about various aspects of the
machine, independently of how they will be used. These interfaces are defined in
include/llvm/Target/.
2. Classes used to represent the machine code being generated for a target. These classes are intended to
be abstract enough to represent the machine code for any target machine. These classes are defined in
include/llvm/CodeGen/.
3. Target-independent algorithms used to implement various phases of native code generation (register
allocation, scheduling, stack frame representation, etc). This code lives in lib/CodeGen/.
4. Implementations of the abstract target description interfaces for particular targets. These machine
descriptions make use of the components provided by LLVM, and can optionally provide custom
target-specific passes, to build complete code generators for a specific target. Target descriptions live
in lib/Target/.
5. The target-independent JIT components. The LLVM JIT is completely target-independent (it uses the
TargetJITInfo structure to interface with target-specific components). The code for the
target-independent JIT lives in lib/ExecutionEngine/JIT.
Depending on which part of the code generator you are interested in working on, different pieces of this will
be useful to you. In any case, you should be familiar with the target description and machine code
representation classes. If you want to add a backend for a new target, you will need to implement the target
description classes for your new target and understand the LLVM code representation. If you are interested in
implementing a new code generation algorithm, it should only depend on the target-description and machine
code representation classes, ensuring that it is portable.
This design has two important implications. The first is that LLVM can support completely non-traditional
code generation targets. For example, the C backend does not require register allocation, instruction selection,
or any of the other standard components provided by the system. As such, it implements only the basic
target description interfaces and otherwise does its own thing. Another example of a code generator like this
is a (purely hypothetical) backend that converts LLVM to the GCC RTL form and uses GCC to emit machine
code for a target.
This design also implies that it is possible to design and implement radically different code generators in the
LLVM system that do not make use of any of the built-in components. Doing so is not recommended at all,
but could be required for radically different targets that do not fit into the LLVM machine description model:
1. Instruction Selection — This phase determines an efficient way to express the input LLVM code in
the target instruction set. This stage produces the initial code for the program in the target instruction
set, making use of virtual registers in SSA form and of physical registers that represent any required
register assignments due to target constraints or calling conventions. This step turns the LLVM code
into a DAG of target instructions.
2. Scheduling and Formation — This phase takes the DAG of target instructions produced by the
instruction selection phase, determines an ordering of the instructions, then emits the instructions as
MachineInstrs with that ordering. Note that we describe this in the instruction selection section
because it operates on a SelectionDAG.
3. SSA-based Machine Code Optimizations — This optional stage consists of a series of
machine-code optimizations that operate on the SSA-form produced by the instruction selector.
Optimizations like modulo-scheduling or peephole optimization work here.
4. Register Allocation — The target code is transformed from an infinite virtual register file in SSA
form to the concrete register file used by the target. This phase introduces spill code and eliminates all
virtual register references from the program.
5. Prolog/Epilog Code Insertion — Once the machine code has been generated for the function and the
amount of stack space required is known (used for LLVM alloca's and spill slots), the prolog and
epilog code for the function can be inserted and "abstract stack location references" can be eliminated.
This stage is responsible for implementing optimizations like frame-pointer elimination and stack
packing.
6. Late Machine Code Optimizations — Optimizations that operate on "final" machine code can go
here, such as spill code scheduling and peephole optimizations.
7. Code Emission — The final stage actually puts out the code for the current function, either in the
target assembler format or in machine code.
The code generator is based on the assumption that the instruction selector will use an optimal pattern
matching selector to create high-quality sequences of native instructions. Alternative code generator designs
based on pattern expansion and aggressive iterative peephole optimization are much slower. This design
permits efficient compilation (important for JIT environments) and aggressive optimization (used when
generating code offline) by allowing components of varying levels of sophistication to be used for any step of
compilation.
In addition to these stages, target implementations can insert arbitrary target-specific passes into the flow. For
example, the X86 target uses a special pass to handle the 80x87 floating point stack architecture. Other targets
with unusual requirements can be supported with custom passes as needed.
As LLVM continues to be developed and refined, we plan to move more and more of the target description to
the .td form. Doing so gives us a number of advantages. The most important is that it makes it easier to port
the compiler to new targets.
All of the target description classes (except the TargetData class) are designed to be subclassed by the
concrete target implementation, and have virtual methods implemented. To get to these implementations, the
TargetMachine class provides accessors that should be implemented by the target.
Registers in the code generator are represented by unsigned integers. Physical registers
(those that actually exist in the target description) are unique small numbers, and virtual registers are generally
large. Note that register #0 is reserved as a flag value.
Each register in the processor description has an associated TargetRegisterDesc entry, which provides
a textual name for the register (used for assembly output and debugging dumps) and a set of aliases (used to
indicate whether one register overlaps with another).
In addition to the per-register description, the TargetRegisterInfo class exposes a set of processor
specific register classes (instances of the TargetRegisterClass class). Each register class contains sets
of registers that have the same properties (for example, they are all 32-bit integer registers). Each SSA virtual
register created by the instruction selector has an associated register class. When the register allocator runs, it
replaces virtual registers with a physical register in the set.
The target-specific implementations of these classes are auto-generated from a TableGen description of the
register file.
The opcode number is a simple unsigned integer that only has meaning to a specific backend. All of the
instructions for a target should be defined in the *InstrInfo.td file for the target. The opcode enum
values are auto-generated from this description. The MachineInstr class does not have any information
about how to interpret the instruction (i.e., what the semantics of the instruction are); for that you must refer to
the TargetInstrInfo class.
By convention, the LLVM code generator orders instruction operands so that all register definitions come
before the register uses, even on architectures that are normally printed in other orders. For example, the
SPARC add instruction: "add %i1, %i2, %i3" adds the "%i1", and "%i2" registers and stores the result
into the "%i3" register. In the LLVM code generator, the operands should be stored as "%i3, %i1, %i2":
with the destination first.
Keeping destination (definition) operands at the beginning of the operand list has several advantages. In
particular, the debugging printer will print the instruction like this:
Also if the first operand is a def, it is easier to create instructions whose only def is the first operand.
// Create a 'DestReg = mov 42' (rendered in X86 assembly as 'mov DestReg, 42')
// instruction. The '1' specifies how many operands will be added.
MachineInstr *MI = BuildMI(X86::MOV32ri, 1, DestReg).addImm(42);
// Create the same instr, but insert it at the end of a basic block.
MachineBasicBlock &MBB = ...
BuildMI(MBB, X86::MOV32ri, 1, DestReg).addImm(42);
// Create the same instr, but insert it before a specified iterator point.
MachineBasicBlock::iterator MBBI = ...
BuildMI(MBB, MBBI, X86::MOV32ri, 1, DestReg).addImm(42);
The key thing to remember with the BuildMI functions is that you have to specify the number of operands
that the machine instruction will take. This allows for efficient memory allocation. Also, operands default to
be uses of values, not definitions. If you need to add a definition operand (other than the
optional destination register), you must explicitly mark it as such:
MI.addReg(Reg, RegState::Define);
In any case, the instruction selector should emit code that copies a virtual register into or out of a physical
register when needed.
The X86 instruction selector produces this machine code for the div and ret (use "llc X.bc
-march=x86 -print-machineinstrs" to get this):
;; Start of div
%EAX = mov %reg1024 ;; Copy X (in reg1024) into EAX
%reg1027 = sar %reg1024, 31
%EDX = mov %reg1027 ;; Sign extend X into EDX
idiv %reg1025 ;; Divide by Y (in reg1025)
%reg1026 = mov %EAX ;; Read the result (Z) out of EAX
;; Start of ret
%EAX = mov %reg1026 ;; 32-bit return value goes in EAX
ret
By the end of code generation, the register allocator has coalesced the registers and deleted the resultant
identity moves producing the following code:
;; X is in EAX, Y is in ECX
mov %EAX, %EDX
sar %EDX, 31
idiv %ECX
ret
This approach is extremely general (if it can handle the X86 architecture, it can handle anything!) and allows
all of the target specific knowledge about the instruction stream to be isolated in the instruction selector. Note
that physical registers should have a short lifetime for good code generation, and all physical registers are
assumed dead on entry to and exit from basic blocks (before register allocation). Thus, if you need a value to
be live across basic block boundaries, it must live in a virtual register.
After register allocation, machine code is no longer in SSA-form because there are no virtual registers left in
the code.
Instruction Selection
Instruction Selection is the process of translating LLVM code presented to the code generator into
target-specific machine instructions. There are several well-known ways to do this in the literature. LLVM
uses a SelectionDAG based instruction selector.
Portions of the DAG instruction selector are generated from the target description (*.td) files. Our goal is
for the entire instruction selector to be generated from these .td files, though currently there are still things
that require custom C++ code.
Introduction to SelectionDAGs
The SelectionDAG provides an abstraction for code representation in a way that is amenable to instruction
selection using automatic techniques (e.g. dynamic-programming based optimal pattern matching selectors). It
is also well-suited to other phases of code generation; in particular, instruction scheduling (SelectionDAGs
are very close to scheduling DAGs post-selection). Additionally, the SelectionDAG provides a host
representation where a large variety of very-low-level (but target-independent) optimizations may be
performed; ones which require extensive information about the instructions efficiently supported by the target.
The SelectionDAG is a Directed-Acyclic-Graph whose nodes are instances of the SDNode class. The primary
payload of the SDNode is its operation code (Opcode) that indicates what operation the node performs and
the operands to the operation. The various operation node types are described at the top of the
include/llvm/CodeGen/SelectionDAGNodes.h file.
Although most operations define a single value, each node in the graph may define multiple values. For
example, a combined div/rem operation will define both the quotient and the remainder. Many other
situations require multiple values as well. Each node also has some number of operands, which are edges to
the node defining the used value. Because nodes may define multiple values, edges are represented by
instances of the SDValue class, which is a <SDNode, unsigned> pair, indicating the node and result
value being used, respectively. Each value produced by an SDNode has an associated MVT (Machine Value
Type) indicating what the type of the value is.
SelectionDAGs contain two different kinds of values: those that represent data flow and those that represent
control flow dependencies. Data values are simple edges with an integer or floating point value type. Control
edges are represented as "chain" edges which are of type MVT::Other. These edges provide an ordering
between nodes that have side effects (such as loads, stores, calls, returns, etc). All nodes that have side effects
should take a token chain as input and produce a new one as output. By convention, token chain inputs are
always operand #0, and chain results are always the last value produced by an operation.
A SelectionDAG has designated "Entry" and "Root" nodes. The Entry node is always a marker node with an
Opcode of ISD::EntryToken. The Root node is the final side-effecting node in the token chain. For
example, in a single basic block function it would be the return node.
SelectionDAG-based instruction selection consists of the following steps:
1. Build initial DAG — This stage performs a simple translation from the input LLVM code to an illegal
SelectionDAG.
2. Optimize SelectionDAG — This stage performs simple optimizations on the SelectionDAG to
simplify it, and recognize meta instructions (like rotates and div/rem pairs) for targets that support
these meta operations. This makes the resultant code more efficient and the select instructions from
DAG phase (below) simpler.
3. Legalize SelectionDAG Types — This stage transforms SelectionDAG nodes to eliminate any types
that are unsupported on the target.
4. Optimize SelectionDAG — The SelectionDAG optimizer is run to clean up redundancies exposed by
type legalization.
5. Legalize SelectionDAG Ops — This stage transforms SelectionDAG nodes to eliminate any operations
that are unsupported on the target.
6. Optimize SelectionDAG — The SelectionDAG optimizer is run to eliminate inefficiencies introduced
by operation legalization.
7. Select instructions from DAG — Finally, the target instruction selector matches the DAG operations
to target instructions. This process translates the target-independent input DAG into another DAG of
target instructions.
8. SelectionDAG Scheduling and Formation — The last phase assigns a linear order to the instructions
in the target-instruction DAG and emits them into the MachineFunction being compiled. This step
uses traditional prepass scheduling techniques.
After all of these steps are complete, the SelectionDAG is destroyed and the rest of the code generation passes
are run.
One great way to visualize what is going on here is to take advantage of a few llc command line options.
The following options pop up a window displaying the SelectionDAG at specific times (if you only get errors
printed to the console while using this, you probably need to configure your system to add support for it).
• -view-dag-combine1-dags displays the DAG after being built, before the first optimization
pass.
• -view-legalize-dags displays the DAG before Legalization.
• -view-dag-combine2-dags displays the DAG before the second optimization pass.
• -view-isel-dags displays the DAG before the Select phase.
• -view-sched-dags displays the DAG before Scheduling.
The -view-sunit-dags option displays the Scheduler's dependency graph. This graph is based on the final
SelectionDAG, with nodes that must be scheduled together bundled into a single scheduling-unit node, and
with immediate operands and other nodes that aren't relevant for scheduling omitted.
There are two main ways of converting values of unsupported scalar types to values of supported types:
converting small types to larger types ("promoting"), and breaking up large integer types into smaller ones
("expanding"). For example, a target might require that all f32 values are promoted to f64 and that all i1/i8/i16
values are promoted to i32. The same target might require that all i64 values be expanded into pairs of i32
values. These changes can insert sign and zero extensions as needed to make sure that the final code has the
same behavior as the input.
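As an illustration of these two conversions, the following self-contained sketch (plain C++, not LLVM's legalizer code) shows an i64 value "expanded" into a lo/hi pair of i32 values, and an i8 "promoted" to i32 with sign extension:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// "Expanding": break an unsupported i64 value into a (lo, hi) pair of i32
// values, as a type legalizer would for a 32-bit-only target.
inline std::pair<uint32_t, uint32_t> expandI64(uint64_t V) {
    return {static_cast<uint32_t>(V), static_cast<uint32_t>(V >> 32)};
}

// "Promoting": widen an i8 to i32; sign extension preserves the value.
inline int32_t promoteI8(int8_t V) {
    return static_cast<int32_t>(V); // implicit sign extension
}
```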
There are two main ways of converting values of unsupported vector types to values of supported types:
splitting vector types, multiple times if necessary, until a legal type is found, and extending vector types by
adding elements to the end to round them out to legal types ("widening"). If a vector gets split all the way
down to single-element parts with no supported vector type being found, the elements are converted to scalars
("scalarizing").
A target implementation tells the legalizer which types are supported (and which register class to use for
them) by calling the addRegisterClass method in its TargetLowering constructor.
Targets often have weird constraints, such as not supporting every operation on every supported datatype (e.g.
X86 does not support byte conditional moves and PowerPC does not support sign-extending loads from a
16-bit memory location). Legalize takes care of this by open-coding another sequence of operations to
emulate the operation ("expansion"), by promoting one type to a larger type that supports the operation
("promotion"), or by using a target-specific hook to implement the legalization ("custom").
A target implementation tells the legalizer which operations are not supported (and which of the above three
actions to take) by calling the setOperationAction method in its TargetLowering constructor.
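The table that setOperationAction populates can be pictured as a map from an (operation, type) pair to an action, with unlisted pairs treated as legal. This is a hypothetical miniature for illustration, not the real TargetLowering interface:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

enum class LegalizeAction { Legal, Promote, Expand, Custom };

// Hypothetical miniature of TargetLowering's operation-action table.
class MiniLowering {
    std::map<std::pair<std::string, std::string>, LegalizeAction> Actions;
public:
    void setOperationAction(const std::string &Op, const std::string &Ty,
                            LegalizeAction A) {
        Actions[{Op, Ty}] = A;
    }
    // Operations not explicitly registered are assumed to be Legal.
    LegalizeAction getOperationAction(const std::string &Op,
                                      const std::string &Ty) const {
        auto It = Actions.find({Op, Ty});
        return It == Actions.end() ? LegalizeAction::Legal : It->second;
    }
};
```

An X86-like target, for instance, would mark byte conditional moves as Expand so the legalizer open-codes an equivalent sequence.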
Prior to the existence of the Legalize passes, we required that every target selector supported and handled
every operator and type even if they were not natively supported. The introduction of the Legalize phases
allows all of the canonicalization patterns to be shared across targets, and makes it very easy to optimize the
canonicalized code because it is still in the form of a DAG.
This LLVM code corresponds to a SelectionDAG that looks basically like this:
If a target supports floating point multiply-and-add (FMA) operations, one of the adds can be merged with the
multiply. On the PowerPC, for example, the output of the instruction selector might look like this DAG:
The FMADDS instruction is a ternary instruction that multiplies its first two operands and adds the third (as
single-precision floating-point numbers). The FADDS instruction is a simple binary single-precision add
instruction. To perform this pattern match, the PowerPC backend includes the following instruction
definitions:
The portion of the instruction definition in bold indicates the pattern used to match the instruction. The DAG
operators (like fmul/fadd) are defined in the lib/Target/TargetSelectionDAG.td file. "F4RC"
is the register class of the input and result values.
The TableGen DAG instruction selector generator reads the instruction patterns in the .td file and
automatically builds parts of the pattern matching code for your target. It has the following strengths:
If none of the single-instruction patterns for loading an immediate into a register match, this will be
used. This rule says "match an arbitrary i32 immediate, turning it into an ORI ('or a 16-bit
immediate') and an LIS ('load 16-bit immediate, where the immediate is shifted to the left 16 bits')
instruction". To make this work, the LO16/HI16 node transformations are used to manipulate the
input immediate (in this case, take the high or low 16-bits of the immediate).
• While the system does automate a lot, it still allows you to write custom C++ code to match special
cases if there is something that is hard to express.
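The effect of the LO16/HI16 transformations above can be sketched as plain bit manipulation. Note that this simplified version ignores the sign-extension behavior of the real PowerPC LIS instruction:

```cpp
#include <cassert>
#include <cstdint>

// Split a 32-bit immediate for a hypothetical LIS+ORI materialization:
// LIS loads Hi16(V) << 16, and ORI then ors in Lo16(V).
inline uint32_t Hi16(uint32_t V) { return V >> 16; }
inline uint32_t Lo16(uint32_t V) { return V & 0xFFFF; }

// What the two-instruction sequence reconstructs.
inline uint32_t materialize(uint32_t V) {
    return (Hi16(V) << 16) | Lo16(V);
}
```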
While it has many strengths, the system currently has some limitations, primarily because it is a work in
progress and is not yet finished:
• Overall, there is no way to define or match SelectionDAG nodes that define multiple values (e.g.
SMUL_LOHI, LOAD, CALL, etc). This is the biggest reason that you currently still have to write
custom C++ code for your instruction selector.
• There is no great way to support matching complex addressing modes yet. In the future, we will
extend pattern fragments to allow them to define multiple values (e.g. the four operands of the X86
addressing mode, which are currently matched with custom C++ code). In addition, we'll extend
fragments so that a fragment can match multiple different patterns.
• We don't automatically infer flags like isStore/isLoad yet.
• We don't automatically generate the set of supported registers and operations for the Legalizer yet.
• We don't have a way of tying in custom legalized nodes yet.
Despite these limitations, the instruction selector generator is still quite useful for most of the binary and
logical operations in typical instruction sets. If you run into any problems or can't figure out how to do
something, please let Chris know!
Note that this phase is logically separate from the instruction selection phase, but is tied to it closely in the
code because it operates on SelectionDAGs.
Live Intervals
Live Intervals are the ranges (intervals) where a variable is live. They are used by some register allocator
passes to determine if two or more virtual registers which require the same physical register are live at the
same point in the program (i.e., they conflict). When this situation occurs, one virtual register must be spilled.
Physical registers may be live in to or out of a function. Live in values are typically arguments in registers.
Live out values are typically return values in registers. Live in values are marked as such, and are given a
dummy "defining" instruction during live intervals analysis. If the last basic block of a function is a return,
then it's marked as using all live out values in the function.
PHI nodes need to be handled specially, because the calculation of the live variable information from a depth
first traversal of the CFG of the function won't guarantee that a virtual register used by the PHI node is
defined before it's used. When a PHI node is encountered, only the definition is handled, because the uses
will be handled in other basic blocks.
For each PHI node of the current basic block, we simulate an assignment at the end of the current basic block
and traverse the successor basic blocks. If a successor basic block has a PHI node and one of the PHI node's
operands is coming from the current basic block, then the variable is marked as alive within the current basic
block and all of its predecessor basic blocks, until the basic block with the defining instruction is encountered.
More to come...
Register Allocation
The Register Allocation problem consists of mapping a program Pv, that can use an unbounded number of
virtual registers, to a program Pp that contains a finite (possibly small) number of physical registers. Each
target architecture has a different number of physical registers. If the number of physical registers is not
enough to accommodate all the virtual registers, some of them will have to be mapped into memory. These
virtuals are called spilled virtuals.
Some architectures contain registers that share the same physical location. A notable example is the X86
platform. For instance, in the X86 architecture, the registers EAX, AX and AL share the first eight bits. These
physical registers are marked as aliased in LLVM. Given a particular architecture, you can check which
registers are aliased by inspecting its RegisterInfo.td file. Moreover, the method
TargetRegisterInfo::getAliasSet(p_reg) returns an array containing all the physical registers
aliased to the register p_reg.
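A miniature stand-in for getAliasSet, using a fixed x86 subset purely for illustration (in LLVM the real alias data is generated from the target's .td files):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical miniature of TargetRegisterInfo::getAliasSet: for a physical
// register, return every other physical register sharing storage with it.
inline std::vector<std::string> getAliasSet(const std::string &R) {
    static const std::map<std::string, std::vector<std::string>> Table = {
        {"EAX", {"AX", "AL", "AH"}},
        {"AX",  {"EAX", "AL", "AH"}},
        {"AL",  {"EAX", "AX"}},
        {"AH",  {"EAX", "AX"}},
    };
    auto It = Table.find(R);
    return It == Table.end() ? std::vector<std::string>{} : It->second;
}
```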
Physical registers, in LLVM, are grouped in Register Classes. Elements in the same register class are
functionally equivalent, and can be interchangeably used. Each virtual register can only be mapped to physical
registers of a particular class. For instance, in the X86 architecture, some virtuals can only be allocated to 8 bit
registers. A register class is described by TargetRegisterClass objects. To discover if a virtual register
is compatible with a given physical register, this code can be used:
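The original code listing is not reproduced in this extract. As a rough self-contained sketch of the idea, with hypothetical Mini* types standing in for TargetRegisterClass (the real check uses TargetRegisterClass::contains on the class returned for the virtual register):

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical model: a register class as a set of physical registers.
struct MiniRegClass {
    std::set<std::string> Members;
    bool contains(const std::string &PReg) const {
        return Members.count(PReg) != 0;
    }
};

// A virtual register is compatible with a physical register if the physical
// register belongs to the virtual register's class.
inline bool compatibleClass(const MiniRegClass &ClassOfVReg,
                            const std::string &PReg) {
    return ClassOfVReg.contains(PReg);
}
```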
Sometimes, mostly for debugging purposes, it is useful to change the number of physical registers available in
the target architecture. This must be done statically, inside the TargetRegisterInfo.td file. Just grep
for RegisterClass, the last parameter of which is a list of registers. Commenting some out is one
simple way to avoid them being used. A more polite way is to explicitly exclude some registers from the
allocation order. See the definition of the GR8 register class in
lib/Target/X86/X86RegisterInfo.td for an example of this.
Virtual registers are also denoted by integer numbers. Contrary to physical registers, different virtual registers
never share the same number. The smallest virtual register is normally assigned the number 1024. This may
change in the future.
Before register allocation, the operands of an instruction are mostly virtual registers, although physical
registers may also be used. In order to check if a given machine operand is a register, use the boolean function
MachineOperand::isRegister(). To obtain the integer code of a register, use
MachineOperand::getReg(). An instruction may define or use a register. For instance, ADD
reg:1026 := reg:1025 reg:1024 defines the register 1026, and uses registers 1025 and 1024.
Given a register operand, the method MachineOperand::isUse() informs if that register is being used
by the instruction. The method MachineOperand::isDef() informs if that register is being defined.
We will call physical registers present in the LLVM bitcode before register allocation pre-colored registers.
Pre-colored registers are used in many different situations, for instance, to pass parameters of functions calls,
and to store results of particular instructions. There are two types of pre-colored registers: the ones implicitly
defined, and those explicitly defined. Explicitly defined registers are normal operands, and can be accessed
with MachineInstr::getOperand(int)::getReg(). In order to check which registers are
implicitly defined by an instruction, use the TargetInstrInfo::get(opcode)::ImplicitDefs,
where opcode is the opcode of the target instruction. One important difference between explicit and implicit
physical registers is that the latter are defined statically for each instruction, whereas the former may vary
depending on the program being compiled. For example, an instruction that represents a function call will
always implicitly define or use the same set of physical registers. To read the registers implicitly used by an
instruction, use TargetInstrInfo::get(opcode)::ImplicitUses. Pre-colored registers impose
constraints on any register allocation algorithm. The register allocator must make sure that none of them is
overwritten by the values of virtual registers while still alive.
The direct mapping provides more flexibility to the developer of the register allocator; however, it is more
error prone, and demands more implementation work. Basically, the programmer will have to specify where
load and store instructions should be inserted in the target function being compiled in order to get and store
values in memory. To assign a physical register to a virtual register present in a given operand, use
MachineOperand::setReg(p_reg). To insert a store instruction, use
TargetRegisterInfo::storeRegToStackSlot(...), and to insert a load instruction, use
TargetRegisterInfo::loadRegFromStackSlot.
The indirect mapping shields the application developer from the complexities of inserting load and store
instructions. In order to map a virtual register to a physical one, use
VirtRegMap::assignVirt2Phys(vreg, preg). In order to map a certain virtual register to
memory, use VirtRegMap::assignVirt2StackSlot(vreg). This method will return the stack slot
where vreg's value will be located. If it is necessary to map another virtual register to the same stack slot, use
VirtRegMap::assignVirt2StackSlot(vreg, stack_location). One important point to
consider when using the indirect mapping is that even if a virtual register is mapped to memory, it still needs
to be mapped to a physical register. This physical register is the location where the virtual register is
supposed to be found before being stored or after being reloaded.
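The indirect mapping can be pictured with a hypothetical miniature of VirtRegMap (not the real class, which carries considerably more state):

```cpp
#include <cassert>
#include <map>

// Hypothetical miniature of VirtRegMap: every virtual register maps to a
// physical register, and spilled virtuals additionally map to a stack slot.
class MiniVirtRegMap {
    std::map<unsigned, unsigned> Virt2Phys;
    std::map<unsigned, int> Virt2Slot;
    int NextSlot = 0;
public:
    void assignVirt2Phys(unsigned VReg, unsigned PReg) {
        Virt2Phys[VReg] = PReg;
    }
    // Allocate a fresh stack slot for VReg and return it.
    int assignVirt2StackSlot(unsigned VReg) {
        return Virt2Slot[VReg] = NextSlot++;
    }
    // Map VReg to an existing stack slot (slot sharing).
    void assignVirt2StackSlot(unsigned VReg, int Slot) {
        Virt2Slot[VReg] = Slot;
    }
    bool hasPhys(unsigned VReg) const { return Virt2Phys.count(VReg) != 0; }
    int getStackSlot(unsigned VReg) const { return Virt2Slot.at(VReg); }
};
```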
If the indirect strategy is used, after all the virtual registers have been mapped to physical registers or stack
slots, it is necessary to use a spiller object to place load and store instructions in the code. Every virtual that
has been mapped to a stack slot will be stored to memory after being defined and will be loaded before being
used. The implementation of the spiller tries to recycle load/store instructions, avoiding unnecessary
instructions. For an example of how to invoke the spiller, see
RegAllocLinearScan::runOnMachineFunction in
lib/CodeGen/RegAllocLinearScan.cpp.
In order to produce correct code, LLVM must convert three address instructions that represent two address
instructions into true two address instructions. LLVM provides the pass TwoAddressInstructionPass
for this specific purpose. It must be run before register allocation takes place. After its execution, the resulting
code may no longer be in SSA form. This happens, for instance, in situations where an instruction such as %a
= ADD %b %c is converted to two instructions such as:
%a = MOVE %b
%a = ADD %a %c
Notice that, internally, the second instruction is represented as ADD %a[def/use] %c. I.e., the register
operand %a is both used and defined by the instruction.
There are many ways in which PHI instructions can safely be removed from the target code. The most
traditional PHI deconstruction algorithm replaces PHI instructions with copy instructions. That is the strategy
adopted by LLVM. The SSA deconstruction algorithm is implemented in
lib/CodeGen/PHIElimination.cpp. In order to invoke this pass, the identifier
PHIEliminationID must be marked as required in the code of the register allocator.
Instruction folding
Instruction folding is an optimization performed during register allocation that removes unnecessary copy
instructions. For instance, a sequence of instructions such as:
• Simple — This is a very simple implementation that does not keep values in registers across
instructions. This register allocator immediately spills every value right after it is computed, and
reloads all used operands from memory to temporary registers before each instruction.
• Local — This register allocator is an improvement on the Simple implementation. It allocates registers
on a basic block level, attempting to keep values in registers and reusing registers as appropriate.
• Linear Scan — The default allocator. This is the well-known linear scan register allocator. Whereas
the Simple and Local algorithms use a direct mapping implementation technique, the Linear Scan
implementation uses a spiller in order to place loads and stores.
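The essence of linear scan can be conveyed by a toy allocator over live intervals. This is a deliberately simplified sketch (spill-on-conflict, no interval splitting or spill-weight heuristics), not LLVM's implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <vector>

// A live interval [Start, End) for a virtual register.
struct Interval { unsigned VReg, Start, End; };

// Toy linear scan: walk intervals by increasing start point, recycling
// physical registers whose intervals have expired. Spilled virtuals are
// simply absent from the returned assignment.
std::map<unsigned, unsigned>
linearScan(std::vector<Interval> Ivs, unsigned NumPhysRegs) {
    std::sort(Ivs.begin(), Ivs.end(),
              [](const Interval &A, const Interval &B) {
                  return A.Start < B.Start;
              });
    std::map<unsigned, unsigned> Assignment;
    std::vector<Interval> Active; // intervals currently holding a register
    std::vector<unsigned> Free;
    for (unsigned R = 0; R < NumPhysRegs; ++R) Free.push_back(R);

    for (const Interval &I : Ivs) {
        // Expire intervals that end before this one starts.
        for (auto It = Active.begin(); It != Active.end();) {
            if (It->End <= I.Start) {
                Free.push_back(Assignment[It->VReg]);
                It = Active.erase(It);
            } else {
                ++It;
            }
        }
        if (Free.empty()) continue; // no register available: spill I
        Assignment[I.VReg] = Free.back();
        Free.pop_back();
        Active.push_back(I);
    }
    return Assignment;
}
```

With one physical register and the intervals below, the middle interval conflicts with a live one and is spilled, while the other two reuse the same register.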
The type of register allocator used in llc can be chosen with the command line option -regalloc=...:
Code Emission
To Be Written
Tail call optimization is currently supported on x86/x86-64 and PowerPC. It is performed when:
• Caller and callee have the calling convention fastcc or cc 10 (GHC call convention).
• The call is a tail call - in tail position (ret immediately follows call and ret uses value of call or is
void).
• Option -tailcallopt is enabled.
• Platform specific constraints are met.
x86/x86-64 constraints:
PowerPC constraints:
Example:
declare fastcc i32 @tailcallee(i32 inreg %a1, i32 inreg %a2, i32 %a3, i32 %a4)
Implications of -tailcallopt:
To support tail call optimization in situations where the callee has more arguments than the caller a 'callee
pops arguments' convention is used. This currently causes each fastcc call that is not tail call optimized
(because one or more of above constraints are not met) to be followed by a readjustment of the stack. So
performance might be worse in such cases.
Sibling call optimization is currently performed on x86/x86-64 when the following constraints are met:
• Caller and callee have the same calling convention. It can be either c or fastcc.
• The call is a tail call - in tail position (ret immediately follows call and ret uses value of call or is
void).
• Caller and callee have matching return type or the callee result is not used.
• If any of the callee arguments are being passed on the stack, they must be available in the caller's own
incoming argument stack and the frame offsets must be the same.
Example:
• i686-pc-linux-gnu — Linux
• i386-unknown-freebsd5.3 — FreeBSD 5.3
• i686-pc-cygwin — Cygwin on Win32
• i686-pc-mingw32 — MingW on Win32
• i386-pc-mingw32msvc — MingW crosscompiler on Linux
• i686-apple-darwin* — Apple Darwin on X86
• x86_64-unknown-linux-gnu — Linux
• x86_StdCall — stdcall calling convention seen on Microsoft Windows platform (CC ID = 64).
• x86_FastCall — fastcall calling convention seen on Microsoft Windows platform (CC ID = 65).
In order to represent this, LLVM tracks no less than 5 operands for each memory reference of this form. This
means that the "load" form of 'mov' has the following MachineOperands in this order:
Index:        0    |    1        2         3            4          5
Meaning:   DestReg | BaseReg   Scale   IndexReg  Displacement  Segment
OperandTy: VirtReg | VirtReg  UnsImm    VirtReg    SignExtImm  PhysReg
Stores, and all other instructions, treat the five memory operands in the same way and in the same order. If the
segment register is unspecified (regno = 0), then no segment override is generated. "Lea" operations do not
have a segment register specified, so they only have 4 operands for their memory reference.
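The address computed by such a memory reference can be written down directly; the segment override (operand 5) is omitted here for simplicity, and all values are hypothetical:

```cpp
#include <cassert>
#include <cstdint>

// The x86 memory reference Base + Scale*Index + Displacement, modeled
// as plain arithmetic on register/immediate values.
inline uint64_t effectiveAddress(uint64_t Base, uint64_t Scale,
                                 uint64_t Index, int64_t Disp) {
    return Base + Scale * Index + Disp;
}
```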
While these address spaces may seem similar to TLS via the thread_local keyword, and often use the
same underlying hardware, there are some fundamental differences.
Special address spaces, in contrast, apply to static types. Every load and store has a particular address space in
its address operand type, and this is what determines which address space is accessed. LLVM ignores these
special address space qualifiers on global variables, and does not provide a way to directly allocate storage in
them. At the LLVM IR level, the behavior of these special address spaces depends in part on the underlying
OS or runtime environment, and they are specific to x86 (and LLVM doesn't yet handle them correctly in
some cases).
Some operating systems and runtime environments use (or may in the future use) the FS/GS-segment registers
for various low-level purposes, so care should be taken when considering them.
Instruction naming
An instruction name consists of the base name, a default operand size, and a character per operand with an
optional special size. For example:
Frame Layout
The size of a PowerPC frame is usually fixed for the duration of a function's invocation. Since the frame is
fixed size, all references into the frame can be accessed via fixed offsets from the stack pointer. The exception
to this is when dynamic alloca or variable sized arrays are present, in which case a base pointer (r31) is used as
a proxy for the stack pointer, and the stack pointer is free to grow or shrink. A base pointer is also used if
llvm-gcc is not passed the -fomit-frame-pointer flag. The stack pointer is always aligned to 16 bytes, so that space allocated
for altivec vectors will be properly aligned.
Linkage
Parameter area
Dynamic area
Locals area
Previous Frame
The linkage area is used by a callee to save special registers prior to allocating its own frame. Only three
entries are relevant to LLVM. The first entry is the previous stack pointer (sp), aka link. This allows probing
tools like gdb or exception handlers to quickly scan the frames in the stack. A function epilog can also use the
link to pop the frame from the stack. The third entry in the linkage area is used to save the return address from
the lr register. Finally, as mentioned above, the last entry is used to save the previous frame pointer (r31.) The
entries in the linkage area are the size of a GPR, thus the linkage area is 24 bytes long in 32 bit mode and 48
bytes in 64 bit mode.
32 bit linkage area
0 Saved SP (r1)
4 Saved CR
8 Saved LR
12 Reserved
16 Reserved
20 Saved FP (r31)
64 bit linkage area
0 Saved SP (r1)
8 Saved CR
16 Saved LR
24 Reserved
32 Reserved
40 Saved FP (r31)
The parameter area is used to store arguments being passed to a callee function. Following the PowerPC
ABI, the first few arguments are actually passed in registers, with the space in the parameter area unused.
However, if there are not enough registers or the callee is a thunk or vararg function, these register arguments
can be spilled into the parameter area. Thus, the parameter area must be large enough to store all the
parameters for the largest call sequence made by the caller. The size must also be minimally large enough to
spill registers r3-r10. This allows callees blind to the call signature, such as thunks and vararg functions,
enough space to cache the argument registers. Therefore, the parameter area is minimally 32 bytes (64 bytes
in 64 bit mode.) Also note that since the parameter area is a fixed offset from the top of the frame, a callee
can access its spilled arguments using fixed offsets from the stack pointer (or base pointer.)
Combining the information about the linkage and parameter areas and the alignment requirement, a stack frame
is minimally 64 bytes in 32 bit mode and 128 bytes in 64 bit mode.
The locals area is where the llvm compiler reserves space for local variables.
The saved registers area is where the llvm compiler spills callee saved registers on entry to the callee.
Prolog/Epilog
The llvm prolog and epilog are the same as described in the PowerPC ABI, with the following exceptions.
Callee saved registers are spilled after the frame is created. This allows the llvm epilog/prolog support to be
common with other targets. The base pointer callee saved register r31 is saved in the TOC slot of the linkage
area. This simplifies allocation of space for the base pointer and makes it convenient to locate programmatically
and during debugging.
Dynamic Allocation
TODO - More to come.
Chris Lattner
The LLVM Compiler Infrastructure
Last modified: $Date: 2010-03-11 18:12:20 -0600 (Thu, 11 Mar 2010) $
• Introduction
1. Basic concepts
2. An example record
3. Running TableGen
• TableGen syntax
1. TableGen primitives
1. TableGen comments
2. The TableGen type system
3. TableGen values and expressions
2. Classes and definitions
1. Value definitions
2. 'let' expressions
3. Class template arguments
4. Multiclass definitions and instances
3. File scope entities
1. File inclusion
2. 'let' expressions
• TableGen backends
1. todo
Introduction
TableGen's purpose is to help a human develop and maintain records of domain-specific information. Because
there may be a large number of these records, it is specifically designed to allow writing flexible descriptions
and for common features of these records to be factored out. This reduces the amount of duplication in the
description, reduces the chance of error, and makes it easier to structure domain specific information.
The core part of TableGen parses a file, instantiates the declarations, and hands the result off to a
domain-specific "TableGen backend" for processing. The current major user of TableGen is the LLVM code
generator.
Note that if you work on TableGen much and use emacs or vim, you can find an emacs "TableGen
mode" and a vim language file in the llvm/utils/emacs and llvm/utils/vim directories of your
LLVM distribution, respectively.
Basic concepts
TableGen files consist of two key parts: 'classes' and 'definitions', both of which are considered 'records'.
TableGen records have a unique name, a list of values, and a list of superclasses. The list of values is the
main data that TableGen builds for each record; it is this that holds the domain specific information for the
application. The interpretation of this data is left to a specific TableGen backend, but the structure and format
rules are taken care of and are fixed by TableGen.
TableGen definitions are the concrete form of 'records'. These generally do not have any undefined values,
and are marked with the 'def' keyword.
TableGen classes are abstract records that are used to build and describe other records. These 'classes' allow
the end-user to build abstractions for the domain they are targeting (such as "Register", "RegisterClass",
and "Instruction" in the LLVM code generator).
TableGen multiclasses are groups of abstract records that are instantiated all at once. Each instantiation can
result in multiple TableGen definitions. If a multiclass inherits from another multiclass, the definitions in the
sub-multiclass become part of the current multiclass, as if they were declared in the current multiclass.
An example record
With no other arguments, TableGen parses the specified file and prints out all of the classes, then all of the
definitions. This is a good way to see what the various definitions fully expand to. Running this on the
X86.td file prints this (at the time of this writing):
...
def ADD32rr { // Instruction X86Inst I
string Namespace = "X86";
dag OutOperandList = (outs GR32:$dst);
dag InOperandList = (ins GR32:$src1, GR32:$src2);
string AsmString = "add{l}\t{$src2, $dst|$dst, $src2}";
list<dag> Pattern = [(set GR32:$dst, (add GR32:$src1, GR32:$src2))];
list<Register> Uses = [];
list<Register> Defs = [EFLAGS];
list<Predicate> Predicates = [];
int CodeSize = 3;
int AddedComplexity = 0;
bit isReturn = 0;
bit isBranch = 0;
bit isIndirectBranch = 0;
bit isBarrier = 0;
bit isCall = 0;
bit canFoldAsLoad = 0;
bit mayLoad = 0;
bit mayStore = 0;
bit isImplicitDef = 0;
bit isTwoAddress = 1;
bit isConvertibleToThreeAddress = 1;
bit isCommutable = 1;
bit isTerminator = 0;
bit isReMaterializable = 0;
bit isPredicable = 0;
bit hasDelaySlot = 0;
bit usesCustomInserter = 0;
bit hasCtrlDep = 0;
bit isNotDuplicable = 0;
bit hasSideEffects = 0;
bit neverHasSideEffects = 0;
InstrItinClass Itinerary = NoItinerary;
string Constraints = "";
string DisableEncoding = "";
bits<8> Opcode = { 0, 0, 0, 0, 0, 0, 0, 1 };
Format Form = MRMDestReg;
bits<6> FormBits = { 0, 0, 0, 0, 1, 1 };
ImmType ImmT = NoImm;
bits<3> ImmTypeBits = { 0, 0, 0 };
bit hasOpSizePrefix = 0;
bit hasAdSizePrefix = 0;
bits<4> Prefix = { 0, 0, 0, 0 };
This definition corresponds to a 32-bit register-register add instruction in the X86 architecture. The string after
the 'def' keyword indicates the name of the record—"ADD32rr" in this case—and the comment at the end of the line
indicates the superclasses of the definition. The body of the record contains all of the data that TableGen
assembled for the record, indicating that the instruction is part of the "X86" namespace, the pattern indicating
how the instruction should be emitted into the assembly file, that it is a two-address instruction, has a
particular encoding, etc. The contents and semantics of the information in the record are specific to the needs of
the X86 backend, and are only shown as an example.
As you can see, a lot of information is needed for every instruction supported by the code generator, and
specifying it all manually would be unmaintainable, prone to bugs, and tiring to do in the first place. Because
we are using TableGen, all of the information was derived from the following definition:
This definition makes use of the custom class I (extended from the custom class X86Inst), which is defined
in the X86-specific TableGen file, to factor out the common features that instructions of its class share. A key
feature of TableGen is that it allows the end-user to define the abstractions they prefer to use when describing
their information.
Running TableGen
TableGen runs just like any other LLVM tool. The first (optional) argument specifies the file to read. If a
filename is not specified, tblgen reads from standard input.
To be useful, one of the TableGen backends must be used. These backends are selectable on the command
line (type 'tblgen -help' for a list). For example, to get a list of all of the definitions that subclass a
particular type (which can be useful for building up an enum list of these records), use the -print-enums
option:
The default backend prints out all of the records, as described above.
If you plan to use TableGen, you will most likely have to write a backend that extracts the information
specific to what you need and formats it in the appropriate way.
TableGen syntax
TableGen doesn't care about the meaning of data (that is up to the backend to define), but it does care about
syntax, and it enforces a simple type system. This section describes the syntax and the constructs allowed in a
TableGen file.
TableGen primitives
TableGen comments
TableGen supports BCPL style "//" comments, which run to the end of the line, and it also supports nestable
"/* */" comments.
TableGen supports a mixture of very low-level types (such as bit) and very high-level types (such as dag).
This flexibility is what allows it to describe a wide range of information conveniently and compactly. The
TableGen types are:
bit
A 'bit' is a boolean value that can hold either 0 or 1.
int
The 'int' type represents a simple 32-bit integer value, such as 5.
string
The 'string' type represents an ordered sequence of characters of arbitrary length.
bits<n>
A 'bits' type is an arbitrary, but fixed, size integer that is broken up into individual bits. This type is
useful because it can handle some bits being defined while others are undefined.
list<ty>
This type represents a list whose elements are some other type. The contained type is arbitrary: it can
even be another list type.
Class type
Specifying a class name in a type context means that the defined value must be a subclass of the
specified class. This is useful in conjunction with the list type, for example, to constrain the
elements of the list to a common base class (e.g., a list<Register> can only contain definitions
derived from the "Register" class).
dag
This type represents a nestable directed graph of elements.
code
This represents a big hunk of text. NOTE: I don't remember why this is distinct from string!
To date, these types have been sufficient for describing things that TableGen has been used for, but it is
straightforward to extend this list if needed.
?
uninitialized field
0b1001011
binary integer value
07654321
octal integer value (indicated by a leading 0)
7
decimal integer value
0x7F
hexadecimal integer value
"foo"
string value
[{ ... }]
code fragment
[ X, Y, Z ]<type>
list value. <type> is the type of the list element and is usually optional. In rare cases, TableGen is
unable to deduce the element type in which case the user must specify it explicitly.
{ a, b, c }
initializer for a "bits<3>" value
value
value reference
value{17}
access to one bit of a value
value{15-17}
access to multiple bits of a value
DEF
reference to a record definition
CLASS<val list>
reference to a new anonymous definition of CLASS with the specified template arguments.
X.Y
reference to the subfield of a value
list[4-7,17,2-3]
A slice of the 'list' list, including elements 4,5,6,7,17,2, and 3 from it. Elements may be included
multiple times.
(DEF a, b)
a dag value. The first element is required to be a record definition, the remaining elements in the list
may be arbitrary other values, including nested `dag' values.
!strconcat(a, b)
A string value that is the result of concatenating the 'a' and 'b' strings.
!cast<type>(a)
A symbol of type type obtained by looking up the string 'a' in the symbol table. If the type of 'a' does
not match type, TableGen aborts with an error. !cast<string> is a special case in that the argument
must be an object defined by a 'def' construct.
!nameconcat<type>(a, b)
Shorthand for !cast<type>(!strconcat(a, b))
!subst(a, b, c)
If 'a' and 'b' are of string type or are symbol references, substitute 'b' for 'a' in 'c'.
Note that all of the values have rules specifying how they convert to values for different types. These rules
allow you to assign a value like "7" to a "bits<4>" value, for example.
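That conversion can be pictured with a short standalone C++ sketch (an illustration of the bit decomposition, not TableGen's implementation; the helper name toBits4 is made up for this example):

```cpp
#include <array>

// Sketch (not TableGen source): decompose an integer initializer such
// as "7" into the individual bits of a bits<4>-style value.
// Index 0 holds the least significant bit.
std::array<int, 4> toBits4(unsigned V) {
  std::array<int, 4> Bits{};
  for (int I = 0; I < 4; ++I)
    Bits[I] = (V >> I) & 1;
  return Bits;
}
```

So "7" becomes the bit pattern {0, 1, 1, 1} when written most-significant bit first.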
class C { bit V = 1; }
def X : C;
def Y : C {
string Greeting = "hello";
}
This example defines two definitions, X and Y, both of which derive from the C class. Because of this, they
both get the V bit value. The Y definition also gets the Greeting member.
In general, classes are useful for collecting together the commonality between a group of records and isolating
it in a single place. Also, classes permit the specification of default values for their subclasses, allowing the
subclasses to override them as they wish.
Value definitions
Value definitions define named entries in records. A value must be defined before it can be referred to as the
operand for another value definition or before the value is reset with a let expression. A value is defined by
specifying a TableGen type and a name. If an initial value is available, it may be specified after the type with
an equal sign. Value definitions require terminating semicolons.
'let' expressions
A record-level let expression is used to change the value of a value definition in a record. This is primarily
useful when a superclass defines a value that a derived class or definition wants to override. Let expressions
consist of the 'let' keyword followed by a value name, an equal sign ("="), and a new value. For example, a
new class could be added to the example above, redefining the V field for all of its subclasses:
class D : C { let V = 0; }
def Z : D;
In this case, the Z definition will have a zero value for its "V" value, despite the fact that it derives (indirectly)
from the C class, because the D class overrode its value.
In this case, template arguments are used as a space efficient way to specify a list of "enumeration values",
each with a "Value" field set to the specified integer.
The more esoteric forms of TableGen expressions are useful in conjunction with template arguments. As an
example:
// other stuff...
}
// Example uses
def bork : Value<Mod>;
def zork : Value<Ref>;
def hork : Value<ModRef>;
This is obviously a contrived example, but it shows how template arguments can be used to decouple the
interface provided to the user of the class from the actual internal data representation expected by the class. In
this case, running tblgen on the example prints the following definitions:
This shows that TableGen was able to dig into the argument and extract a piece of information that was
requested by the designer of the "Value" class. For more realistic examples, please see existing users of
TableGen, such as the X86 backend.
def ops;
def GPR;
def Imm;
class inst<int opc, string asmstr, dag operandlist>;
The names of the resultant definitions have the multiclass fragment names appended to them, so this defines
ADD_rr, ADD_ri, SUB_rr, etc. A defm may inherit from multiple multiclasses, instantiating definitions
from each multiclass. Using a multiclass this way is exactly equivalent to instantiating the classes multiple
times yourself, e.g. by writing:
def ops;
def GPR;
def Imm;
class inst<int opc, string asmstr, dag operandlist>;
include "foo.td"
'let' expressions
"Let" expressions at file scope are similar to "let" expressions within a record, except they can specify a value
binding for multiple records at a time, and may be useful in certain other cases. File-scope let expressions are
really just another way that TableGen allows the end-user to factor out commonality from the records.
File-scope "let" expressions take a comma-separated list of bindings to apply, and one or more records to bind
the values in. Here are some examples:
let isCall = 1 in
// All calls clobber the non-callee saved registers...
let Defs = [EAX, ECX, EDX, FP0, FP1, FP2, FP3, FP4, FP5, FP6, ST0,
MM0, MM1, MM2, MM3, MM4, MM5, MM6, MM7,
XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7, EFLAGS] in {
def CALLpcrel32 : Ii32<0xE8, RawFrm, (outs), (ins i32imm:$dst,variable_ops),
"call\t${dst:call}", []>;
def CALL32r : I<0xFF, MRM2r, (outs), (ins GR32:$dst, variable_ops),
"call\t{*}$dst", [(X86call GR32:$dst)]>;
def CALL32m : I<0xFF, MRM2m, (outs), (ins i32mem:$dst, variable_ops),
"call\t{*}$dst", []>;
}
File-scope "let" expressions are often useful when a couple of definitions need to be added to several records,
and the records do not otherwise need to be opened, as in the case with the CALL* instructions above.
(implicit a)
an implicitly defined physical register. This tells the dag instruction selection emitter that the input pattern's
extra definitions match implicit physical register definitions.
(parallel (a), (b))
Chris Lattner
LLVM Compiler Infrastructure
Last modified: $Date: 2010-02-27 17:47:46 -0600 (Sat, 27 Feb 2010) $
1. Introduction
2. AliasAnalysis Class Overview
♦ Representation of Pointers
♦ The alias method
♦ The getModRefInfo methods
♦ Other useful AliasAnalysis methods
3. Writing a new AliasAnalysis Implementation
♦ Different Pass styles
♦ Required initialization calls
♦ Interfaces which may be specified
♦ AliasAnalysis chaining behavior
♦ Updating analysis results for transformations
♦ Efficiency Issues
4. Using alias analysis results
♦ Using the MemoryDependenceAnalysis Pass
♦ Using the AliasSetTracker class
♦ Using the AliasAnalysis interface directly
5. Existing alias analysis implementations and clients
♦ Available AliasAnalysis implementations
♦ Alias analysis driven transformations
♦ Clients for debugging and evaluation of implementations
6. Memory Dependence Analysis
Introduction
Alias Analysis (aka Pointer Analysis) is a class of techniques which attempt to determine whether or not two
pointers ever can point to the same object in memory. There are many different algorithms for alias analysis
and many different ways of classifying them: flow-sensitive vs flow-insensitive, context-sensitive vs
context-insensitive, field-sensitive vs field-insensitive, unification-based vs subset-based, etc. Traditionally,
alias analyses respond to a query with a Must, May, or No alias response, indicating that two pointers always
point to the same object, might point to the same object, or are known to never point to the same object.
The LLVM AliasAnalysis class is the primary interface used by clients and implementations of alias
analyses in the LLVM system. This class is the common interface between clients of alias analysis
information and the implementations providing it, and is designed to support a wide range of implementations
and clients (but currently all clients are assumed to be flow-insensitive). In addition to simple alias analysis
information, this class exposes Mod/Ref information from those implementations which can provide it,
allowing for powerful analyses and transformations to work well together.
This document contains information necessary to successfully implement this interface, use it, and to test both
sides. It also explains some of the finer points about what exactly results mean. If you feel that something is
unclear or should be added, please let me know.
Representation of Pointers
Most importantly, the AliasAnalysis class provides several methods which are used to query whether or
not two memory objects alias, whether function calls can modify or read a memory object, etc. For all of these
queries, memory objects are represented as a pair of their starting address (a symbolic LLVM Value*) and a
static size.
Representing memory objects as a starting address and a size is critically important for correct Alias Analyses.
For example, consider this (silly, but possible) C code:
int i;
char C[2];
char A[10];
/* ... */
for (i = 0; i != 10; ++i) {
C[0] = A[i]; /* One byte store */
C[1] = A[9-i]; /* One byte store */
}
In this case, the basicaa pass will disambiguate the stores to C[0] and C[1] because they are accesses to
two distinct locations one byte apart, and the accesses are each one byte. In this case, the LICM pass can use
store motion to remove the stores from the loop. In contrast, the following code:
int i;
char C[2];
char A[10];
/* ... */
for (i = 0; i != 10; ++i) {
((short*)C)[0] = A[i]; /* Two byte store! */
C[1] = A[9-i]; /* One byte store */
}
In this case, the two stores to C do alias each other, because the access to the &C[0] element is a two byte
access. If size information wasn't available in the query, even the first case would have to conservatively
assume that the accesses alias.
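The role of the size in these queries can be sketched with a standalone overlap check (an illustration only, not the basicaa implementation): two accesses at known byte offsets from the same base may alias exactly when their byte ranges overlap.

```cpp
#include <cstdint>

// Sketch (not the basicaa implementation): two accesses from the same
// base pointer, each described by a (byte offset, byte size) pair, can
// alias only if their half-open ranges [Off, Off+Size) overlap.
bool mayOverlap(uint64_t Off1, uint64_t Size1,
                uint64_t Off2, uint64_t Size2) {
  return Off1 < Off2 + Size2 && Off2 < Off1 + Size1;
}
```

With the one-byte stores above, mayOverlap(0, 1, 1, 1) is false, so C[0] and C[1] are disambiguated; widening the first store to two bytes gives mayOverlap(0, 2, 1, 1), which is true.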
The MayAlias response is used whenever the two pointers might refer to the same object. If the two memory
objects overlap, but do not start at the same location, return MayAlias.
The MustAlias response may only be returned if the two memory objects are guaranteed to always start at
exactly the same location. A MustAlias response implies that the pointers compare equal.
The AliasAnalysis class also provides a getModRefInfo method for testing dependencies between
function calls. This method takes two call sites (CS1 and CS2) and returns NoModRef if the two calls refer to
disjoint memory locations, Ref if CS1 reads memory written by CS2, Mod if CS1 writes to memory read or
written by CS2, or ModRef if CS1 might read or write memory accessed by CS2. Note that this relation is not
commutative.
The onlyReadsMemory method returns true for a function if analysis can prove that (at most) the function
only reads from non-volatile memory. Functions with this property are side-effect free, only depending on
their input arguments and the state of memory when they are called. This property allows calls to these
functions to be eliminated and moved around, as long as there is no store instruction that changes the contents
of memory. Note that all functions that satisfy the doesNotAccessMemory method also satisfy
onlyReadsMemory.
Additionally, you must invoke the InitializeAliasAnalysis method from your analysis run method
(run for a Pass, runOnFunction for a FunctionPass, or InitializePass for an
ImmutablePass). For example (as part of a Pass):
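The shape of that call can be sketched with stand-in types (a standalone mock, not the LLVM headers; in the real API run takes a Module & and the base class is llvm::AliasAnalysis): the pass hands itself to InitializeAliasAnalysis before doing any other work.

```cpp
// Standalone mock of the required pattern. The types are stand-ins for
// LLVM's, kept minimal so the sketch compiles by itself.
struct AliasAnalysisBase {
  bool Initialized = false;
  // In LLVM this wires the pass into the analysis chain.
  void InitializeAliasAnalysis(AliasAnalysisBase *P) { P->Initialized = true; }
};

struct MyAliasAnalysis : AliasAnalysisBase {
  bool run(/* Module &M */) {
    InitializeAliasAnalysis(this); // must happen before any queries
    // ... perform analysis, answer alias() / getModRefInfo() queries ...
    return false; // an analysis does not modify the module
  }
};
```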
In addition to analysis queries, you must make sure to unconditionally pass LLVM update notification
methods to the superclass as well if you override them, which allows all alias analyses in a chain to be
updated.
The AliasAnalysis interface exposes two methods which are used to communicate program changes
from the clients to the analysis implementations. Various alias analysis implementations should use these
methods to ensure that their internal data structures are kept up-to-date as the program changes (for example,
when an instruction is deleted), and clients of alias analysis must be sure to call these interfaces appropriately.
First you initialize the AliasSetTracker by using the "add" methods to add information about various
potentially aliasing instructions in the scope you are interested in. Once all of the alias sets are completed,
your pass should simply iterate through the constructed alias sets, using the AliasSetTracker
begin()/end() methods.
The AliasSets formed by the AliasSetTracker are guaranteed to be disjoint, calculate mod/ref
information and volatility for the set, and keep track of whether or not all of the pointers in the set are Must
aliases. The AliasSetTracker also makes sure that sets are properly folded due to call instructions, and can
provide a list of pointers in each set.
The AliasSetTracker class must maintain a list of all of the LLVM Value*'s that are in each AliasSet. Since
the hash table already has entries for each LLVM Value* of interest, the AliasSets thread the linked list
through these hash-table nodes to avoid having to allocate memory unnecessarily, and to make merging alias
sets extremely efficient (the linked list merge is constant time).
You shouldn't need to understand these details if you are just a client of the AliasSetTracker, but if you look at
the code, hopefully this brief description will help make sense of why things are designed the way they are.
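The layout can be illustrated with a simplified standalone sketch (the names here are hypothetical; the real implementation is LLVM's AliasSetTracker): each node carries its own Next link, so merging two sets is a pointer splice rather than an allocation or a copy.

```cpp
#include <cstddef>

// Simplified sketch of threading a set's member list through the
// hash-table nodes themselves, giving a constant-time merge.
struct Node {
  int Value;            // stands in for the LLVM Value* of interest
  Node *Next = nullptr; // intrusive link threaded through the node
};

struct Set {
  Node *Head = nullptr, *Tail = nullptr;
  std::size_t Size = 0;
  void add(Node *N) {
    (Tail ? Tail->Next : Head) = N;
    Tail = N;
    ++Size;
  }
  // Constant-time merge: splice the other list onto this one.
  void mergeFrom(Set &Other) {
    if (!Other.Head) return;
    (Tail ? Tail->Next : Head) = Other.Head;
    Tail = Other.Tail;
    Size += Other.Size;
    Other.Head = Other.Tail = nullptr;
    Other.Size = 0;
  }
};
```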
• Distinct globals, stack allocations, and heap allocations can never alias.
• Globals, stack allocations, and heap allocations never alias the null pointer.
• Different fields of a structure do not alias.
• Indexes into arrays with statically differing subscripts cannot alias.
• Many common standard C library functions never access memory or only read memory.
• Pointers that obviously point to constant globals are reported via pointsToConstantMemory.
• Function calls cannot modify or reference stack allocations if they never escape from the function
that allocates them (a common case for automatic arrays).
The real power of this pass is that it provides context-sensitive mod/ref information for call instructions. This
allows the optimizer to know that calls to a function do not clobber or read the value of the global, allowing
loads and stores to be eliminated.
Note that this pass is somewhat limited in its scope (it only supports non-address-taken globals), but it is a
very quick analysis.
Note that -steens-aa is available in the optional "poolalloc" module; it is not part of the LLVM core.
This algorithm is capable of responding to a full variety of alias analysis queries, and can provide
context-sensitive mod/ref information as well. The only major facility not implemented so far is support for
must-alias information.
Note that -ds-aa is available in the optional "poolalloc" module; it is not part of the LLVM core.
• It uses mod/ref information to hoist or sink load instructions out of loops if there are no instructions in
the loop that modify the memory loaded.
• It uses mod/ref information to hoist function calls out of loops that do not write to memory and are
loop-invariant.
• It uses alias information to promote memory objects that are loaded and stored to in loops to live in a
register instead. It can do this if there are no may aliases to the loaded/stored memory location.
will print out how many queries (and what responses are returned) by the -licm pass (of the -ds-aa pass)
and how many queries are made of the -basicaa pass by the -ds-aa pass. This can be useful when
debugging a transformation or an alias analysis implementation.
Chris Lattner
LLVM Compiler Infrastructure
Last modified: $Date: 2010-03-01 13:24:17 -0600 (Mon, 01 Mar 2010) $
1. Introduction
♦ Goals and non-goals
2. Getting started
♦ In your compiler
♦ In your runtime library
♦ About the shadow stack
3. Core support
♦ Specifying GC code generation: gc "..."
♦ Identifying GC roots on the stack: llvm.gcroot
♦ Reading and writing references in the heap
◊ Write barrier: llvm.gcwrite
◊ Read barrier: llvm.gcread
4. Compiler plugin interface
♦ Overview of available features
♦ Computing stack maps
♦ Initializing roots to null: InitRoots
♦ Custom lowering of intrinsics: CustomRoots, CustomReadBarriers, and
CustomWriteBarriers
♦ Generating safe points: NeededSafePoints
♦ Emitting assembly code: GCMetadataPrinter
5. Implementing a collector runtime
♦ Tracing GC pointers from heap objects
6. References
Introduction
Garbage collection is a widely used technique that frees the programmer from having to know the lifetimes of
heap objects, making software easier to produce and maintain. Many programming languages rely on garbage
collection for automatic memory management. There are two primary forms of garbage collection:
conservative and accurate.
Conservative garbage collection often does not require any special support from either the language or the
compiler: it can handle non-type-safe programming languages (such as C/C++) and does not require any
special information from the compiler. The Boehm collector is an example of a state-of-the-art conservative
collector.
Accurate garbage collection requires the ability to identify all pointers in the program at run-time (which
requires that the source-language be type-safe in most cases). Identifying pointers at run-time requires
compiler support to locate all places that hold live pointer variables at run-time, including the processor stack
and registers.
Conservative garbage collection is attractive because it does not require any special compiler support, but it
does have problems. In particular, because the conservative garbage collector cannot know that a particular
word in the machine is a pointer, it cannot move live objects in the heap (preventing the use of compacting
and generational GC algorithms) and it can occasionally suffer from memory leaks due to integer values that
happen to point to objects in the program. In addition, some aggressive compiler transformations can break
conservative garbage collectors (though these seem rare in practice).
Accurate garbage collectors do not suffer from any of these problems, but they can suffer from degraded
scalar optimization of the program. In particular, because the runtime must be able to identify and update all
pointers active in the program, some optimizations are less effective. In practice, however, the locality and
performance benefits of using aggressive garbage collection techniques dominate any low-level losses.
This document describes the mechanisms and interfaces provided by LLVM to support accurate garbage
collection.
• semi-space collectors
• mark-sweep collectors
• generational collectors
• reference counting
• incremental collectors
• concurrent collectors
• cooperative collectors
We hope that the primitive support built into the LLVM IR is sufficient to support a broad class of garbage
collected languages including Scheme, ML, Java, C#, Perl, Python, Lua, Ruby, other scripting languages, and
more.
However, LLVM does not itself provide a garbage collector; this should be part of your language's runtime
library. LLVM provides a framework for compile-time code generation plugins. The role of these plugins is to
generate code and data structures which conform to the binary interface specified by the runtime library.
This is similar to the relationship between LLVM and DWARF debugging info, for example. The difference
primarily lies in the lack of an established standard in the domain of garbage collection; hence the plugins.
The aspects of the binary interface with which LLVM's GC support is concerned are:
• Creation of GC-safe points within code where collection is allowed to execute safely.
• Computation of the stack map. For each safe point in the code, object references within the stack
frame must be identified so that the collector may traverse and perhaps update them.
• Write barriers when storing object references to the heap. These are commonly used to optimize
incremental scans in generational collectors.
• Emission of read barriers when loading object references. These are useful for interoperating with
concurrent collectors.
There are additional areas that LLVM does not directly address:
In general, LLVM's support for GC does not include features which can be adequately addressed with other
features of the IR and does not specify a particular binary interface. On the plus side, this means that you
should be able to integrate LLVM with an existing runtime. On the other hand, it leaves a lot of work for the
developer of a novel language.
Getting started
Using a GC with LLVM implies many things, for example:
To help with several of these tasks (those indicated with a *), LLVM includes a highly portable, built-in
ShadowStack code generator. It is compiled into llc and works even with the interpreter and C backends.
In your compiler
To turn the shadow stack on for your functions, first call:
F.setGC("shadow-stack");
for each function your compiler emits. Since the shadow stack is built into LLVM, you do not need to load a
plugin.
Your compiler must also use @llvm.gcroot as documented. Don't forget to create a root for each
intermediate value that is generated when evaluating an expression. In h(f(), g()), the result of f()
could easily be collected if evaluating g() triggers a collection.
There's no need to use @llvm.gcread and @llvm.gcwrite over plain load and store for now. You
will need them when switching to a more advanced GC.
In your runtime
The shadow stack doesn't imply a memory allocation algorithm. A semispace collector or building atop
malloc are great places to start, and can be implemented with very little code.
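A bump-pointer allocation path over one semispace really is only a few lines. Here is a standalone sketch (illustrative only; a real semispace collector would flip spaces and copy live objects when allocation fails):

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative bump-pointer allocator over a single semispace. On
// failure a real collector would stop the mutator, copy live objects
// (found via visitGCRoots) into the other space, and retry.
struct Semispace {
  uint8_t *Base, *Top, *End;
  Semispace(uint8_t *Mem, std::size_t Size)
      : Base(Mem), Top(Mem), End(Mem + Size) {}
  void *allocate(std::size_t Bytes) {
    Bytes = (Bytes + 7) & ~std::size_t(7); // round up to 8-byte alignment
    if (Top + Bytes > End)
      return nullptr; // out of space: time to collect
    void *Result = Top;
    Top += Bytes;
    return Result;
  }
};
```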
/// @brief The map for a single function's stack frame. One of these is
/// compiled as constant data into the executable for each function.
///
/// Storage of metadata values is elided if the %metadata parameter to
/// @llvm.gcroot is null.
struct FrameMap {
int32_t NumRoots; //< Number of roots in stack frame.
int32_t NumMeta; //< Number of metadata entries. May be < NumRoots.
const void *Meta[0]; //< Metadata for each root.
};
/// @brief A link in the dynamic shadow stack. One of these is embedded in the
/// stack frame of each function on the call stack.
struct StackEntry {
StackEntry *Next; //< Link to next stack entry (the caller's).
const FrameMap *Map; //< Pointer to constant FrameMap.
void *Roots[0]; //< Stack roots (in-place array).
};
/// @brief The head of the singly-linked list of StackEntries. Functions push
/// and pop onto this in their prologue and epilogue.
///
/// Since there is only a global list, this technique is not threadsafe.
StackEntry *llvm_gc_root_chain;
/// @brief Calls Visitor(root, meta) for each GC root on the stack.
/// root and meta are exactly the values passed to
/// @llvm.gcroot.
///
/// Visitor could be a function to recursively mark live objects. Or it
/// might copy them to another heap or generation.
///
/// @param Visitor A function to invoke for every GC root on the stack.
void visitGCRoots(void (*Visitor)(void **Root, const void *Meta)) {
  for (StackEntry *R = llvm_gc_root_chain; R; R = R->Next) {
    unsigned i = 0;

    // For roots [0, NumMeta), the metadata pointer is in the FrameMap.
    for (unsigned e = R->Map->NumMeta; i != e; ++i)
      Visitor(&R->Roots[i], R->Map->Meta[i]);

    // For roots [NumMeta, NumRoots), the metadata pointer is null.
    for (unsigned e = R->Map->NumRoots; i != e; ++i)
      Visitor(&R->Roots[i], NULL);
  }
}
Still, it's an easy way to get started. After your compiler and runtime are up and running, writing a plugin will
allow you to take advantage of more advanced GC features of LLVM in order to improve performance.
IR features
This section describes the garbage collection facilities provided by the LLVM intermediate representation.
The exact behavior of these IR features is specified by the binary interface implemented by a code generation
plugin, not by this document.
These facilities are limited to those strictly necessary; they are not intended to be a complete interface to any
garbage collector. A program will need to interface with the GC library using the facilities provided by its
runtime library.
Setting gc "name" on a function triggers a search for a matching code generation plugin "name"; it is that
plugin which defines the exact nature of the code generated to support GC. If none is found, the compiler will
raise an error.
Specifying the GC style on a per-function basis allows LLVM to link together programs that use different
garbage collection algorithms (or none at all).
A compiler which uses mem2reg to raise imperative code using alloca into SSA form need only add a call
to @llvm.gcroot for those variables which are pointers into the GC heap.
It is also important to mark intermediate values with llvm.gcroot. For example, consider h(f(),
g()). Beware leaking the result of f() in the case that g() triggers a collection.
The first argument must be a value referring to an alloca instruction or a bitcast of an alloca. The second
contains a pointer to metadata that should be associated with the pointer, and must be a constant or global
value address. If your target collector uses tags, use a null pointer for metadata.
The %metadata argument can be used to avoid requiring heap objects to have 'isa' pointers or tag bits.
[Appel89, Goldberg91, Tolmach94] If specified, its value will be tracked along with the location of the
pointer in the stack frame.
This block (which may be located in the middle of a function or in a loop nest), could be compiled to this
LLVM code:
Entry:
;; In the entry block for the function, allocate the
;; stack space for X, which is an LLVM pointer.
%X = alloca %Object*
...
Barriers often require access to the object pointer rather than the derived pointer (which is a pointer to the
field within the object). Accordingly, these intrinsics take both pointers as separate arguments for
completeness. In this snippet, %object is the object pointer, and %derived is the derived pointer:
;; An array type.
%class.Array = type { %class.Object, i32, [0 x %class.Object*] }
...
LLVM does not enforce this relationship between the object and derived pointer (although a plugin might).
However, it would be an unusual collector that violated it.
Many important algorithms require write barriers, including generational and concurrent collectors.
Additionally, write barriers could be used to implement reference counting.
Read barriers are needed by fewer algorithms than write barriers, and may have a greater performance impact
since pointer reads are more frequent than writes.
This is not the appropriate place to implement a garbage collected heap or a garbage collector itself. That code
should exist in the language's runtime library. The compiler plugin is responsible for generating code which
conforms to the binary interface defined by the library, most essentially the stack map.
#include "llvm/CodeGen/GCStrategy.h"
#include "llvm/CodeGen/GCMetadata.h"
#include "llvm/Support/Compiler.h"
namespace {
class VISIBILITY_HIDDEN MyGC : public GCStrategy {
public:
MyGC() {}
};
GCRegistry::Add<MyGC>
X("mygc", "My bespoke garbage collector.");
}
Using the LLVM makefiles (like the sample project), this code can be compiled as a plugin using a simple
makefile:
# lib/MyGC/Makefile
LEVEL := ../..
LIBRARYNAME = MyGC
LOADABLE_MODULE = 1
include $(LEVEL)/Makefile.common
Once the plugin is compiled, code using it may be compiled using llc -load=MyGC.so (though
MyGC.so may have some other platform-specific extension):
$ cat sample.ll
define void @f() gc "mygc" {
entry:
ret void
}
$ llvm-as < sample.ll | llc -load=MyGC.so
It is also possible to statically link the collector plugin into tools, such as a language-specific compiler
front-end.
Columns: Algorithm | Done | shadow stack | refcount | mark-sweep | copying | incremental | threaded | concurrent
stack map: ✔ ✘ ✘ ✘ ✘ ✘
initialize roots: ✔ ✘ ✘ ✘ ✘ ✘ ✘ ✘
derived pointers: NO ✘* ✘*
custom lowering: ✔
gcroot: ✔ ✘ ✘
gcwrite: ✔ ✘ ✘ ✘
gcread: ✔ ✘
safe points in calls: ✔ ✘ ✘ ✘ ✘ ✘
safe points before calls: ✔ ✘ ✘
safe points for loops: NO ✘ ✘
safe points before escape: ✔ ✘ ✘
emit code at safe points: NO ✘ ✘
output assembly: ✔ ✘ ✘ ✘ ✘ ✘
output JIT: NO ✘ ✘ ✘ ✘ ✘
output obj: NO ✘ ✘ ✘ ✘ ✘
live analysis: NO ✘ ✘ ✘ ✘ ✘
register map: NO ✘ ✘ ✘ ✘ ✘
* Derived pointers only pose a hazard to copying collectors.
✘ in gray denotes a feature which could be utilized if available.
To be clear, the collection techniques above are defined as:
Shadow Stack
The mutator carefully maintains a linked list of stack roots.
Reference Counting
The mutator maintains a reference count for each object and frees an object when its count falls to
zero.
Mark-Sweep
When the heap is exhausted, the collector marks reachable objects starting from the roots, then
deallocates unreachable objects in a sweep phase.
Copying
As reachability analysis proceeds, the collector copies objects from one heap area to another,
compacting them in the process. Copying collectors enable highly efficient "bump pointer" allocation
and can improve locality of reference.
Incremental
(Including generational collectors.) Incremental collectors generally have all the properties of a
copying collector (regardless of whether the mature heap is compacting), but bring the added
complexity of requiring write barriers.
Threaded
Denotes a multithreaded mutator; the collector must still stop the mutator ("stop the world") before
beginning reachability analysis. Stopping a multithreaded mutator is a complicated problem. It
generally requires highly platform specific code in the runtime, and the production of carefully
designed machine code at safe points.
Concurrent
In this technique, the mutator and the collector run concurrently, with the goal of eliminating pause
times. In a cooperative collector, the mutator further aids with collection should a pause occur,
allowing collection to take advantage of multiprocessor hosts. The "stop the world" problem of
threaded collectors is generally still present to a limited extent. Sophisticated marking algorithms are
necessary. Read barriers may be necessary.
As the matrix indicates, LLVM's garbage collection infrastructure is already suitable for a wide variety of
collectors, but does not currently extend to multithreaded programs. This will be added in the future as there is
interest.
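Of the techniques defined above, reference counting is the simplest to sketch in standalone code (illustrative only; cycle collection and the write-barrier hookup via llvm.gcwrite custom lowering are ignored):

```cpp
#include <cstddef>

// Illustrative reference counting (ignores cycles). In an LLVM-based
// system the retain/release calls would typically be emitted by a
// custom lowering of the write barrier.
struct Object {
  std::size_t RefCount = 0;
  bool Freed = false;
};

void retain(Object *O) { ++O->RefCount; }

void release(Object *O) {
  if (--O->RefCount == 0)
    O->Freed = true; // a real runtime would deallocate here
}
```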
The stack map consists of the location and identity of each GC root in each function in the module. For
each root:
• getFrameSize(): The overall size of the function's initial stack frame, not accounting for any
dynamic allocation.
• roots_size(): The count of roots in the function.
To access the stack map, use GCFunctionMetadata::roots_begin() and -end() from the
GCMetadataPrinter:
If the llvm.gcroot intrinsic is eliminated before code generation by a custom lowering pass, LLVM will
compute an empty stack map. This may be useful for collector plugins which implement reference counting or
a shadow stack.
When set, LLVM will automatically initialize each root to null upon entry to the function. This prevents the
GC's sweep phase from visiting uninitialized pointers, which will almost certainly cause it to crash. This
initialization occurs before custom lowering, so the two may be used together.
Since LLVM does not yet compute liveness information, there is no means of distinguishing an uninitialized
stack root from an initialized one. Therefore, this feature should be used by all GC plugins. It is enabled by
default.
If any of these flags are set, then LLVM suppresses its default lowering for the corresponding intrinsics and
instead calls performCustomLowering.
• llvm.gcroot: Leave it alone. The code generator must see it or the stack map will not be
computed.
• llvm.gcread: Substitute a load instruction.
• llvm.gcwrite: Substitute a store instruction.
#include "llvm/Module.h"
#include "llvm/IntrinsicInst.h"

bool MyGC::performCustomLowering(Function &F) {
  bool MadeChange = false;
  ...
  return MadeChange;
}
namespace GC {
/// PointKind - The type of a collector-safe point.
///
enum PointKind {
Loop, //< Instr is a loop (backwards branch).
Return, //< Instr is a return instruction.
PreCall, //< Instr is a call instruction.
PostCall //< Instr is the return address of a call.
};
}
A collector can request any combination of the four by setting the NeededSafePoints mask:
MyGC::MyGC() {
NeededSafePoints = 1 << GC::Loop
| 1 << GC::Return
| 1 << GC::PreCall
| 1 << GC::PostCall;
}
Almost every collector requires PostCall safe points, since these correspond to the moments when the
function is suspended during a call to a subroutine.
Threaded programs generally require Loop safe points to guarantee that the application will reach a safe point
within a bounded amount of time, even if it is executing a long-running loop which contains no function calls.
Threaded collectors may also require Return and PreCall safe points to implement "stop the world"
techniques using self-modifying code, where it is important that the program not exit the function without
reaching a safe point (because only the topmost function has been patched).
MyGC::MyGC() {
UsesMetadata = true;
}
Note that LLVM does not currently have analogous APIs to support code generation in the JIT, nor using the
object writers.
#include "llvm/CodeGen/GCMetadataPrinter.h"
#include "llvm/Support/Compiler.h"
namespace {
class VISIBILITY_HIDDEN MyGCPrinter : public GCMetadataPrinter {
public:
  virtual void beginAssembly(std::ostream &OS, AsmPrinter &AP,
                             const TargetAsmInfo &TAI);
  virtual void finishAssembly(std::ostream &OS, AsmPrinter &AP,
                              const TargetAsmInfo &TAI);
};

GCMetadataPrinterRegistry::Add<MyGCPrinter>
X("mygc", "My bespoke garbage collector.");
}
The collector should use AsmPrinter and TargetAsmInfo to print portable assembly code to the
std::ostream. The collector itself contains the stack map for the entire module, and may access the
GCFunctionInfo using its own begin() and end() methods. Here's a realistic example:
#include "llvm/CodeGen/AsmPrinter.h"
#include "llvm/Function.h"
#include "llvm/Target/TargetMachine.h"
#include "llvm/Target/TargetData.h"
#include "llvm/Target/TargetAsmInfo.h"
// Emit the symbol by which the stack map entry can be found.
std::string Symbol;
Symbol += TAI.getGlobalPrefix();
Symbol += "__gcmap_";
Symbol += MD.getFunction().getName();
if (const char *GlobalDirective = TAI.getGlobalDirective())
OS << GlobalDirective << Symbol << "\n";
OS << TAI.getGlobalPrefix() << Symbol << ":\n";
// Emit PointCount.
AP.EmitInt32(MD.size());
AP.EOL("safe point count");
Chris Lattner
LLVM Compiler Infrastructure
Last modified: $Date: 2009-08-05 10:42:44 -0500 (Wed, 05 Aug 2009) $
• Introduction
1. Philosophy behind LLVM debugging information
2. Debug information consumers
3. Debugging optimized code
• Debugging information format
1. Debug information descriptors
◊ Compile unit descriptors
◊ Global variable descriptors
◊ Subprogram descriptors
◊ Block descriptors
◊ Basic type descriptors
◊ Derived type descriptors
◊ Composite type descriptors
◊ Subrange descriptors
◊ Enumerator descriptors
◊ Local variables
2. Debugger intrinsic functions
◊ llvm.dbg.declare
◊ llvm.dbg.value
• Object lifetimes and scoping
• C/C++ front-end specific debug information
1. C/C++ source file information
2. C/C++ global variable information
3. C/C++ function information
4. C/C++ basic types
5. C/C++ derived types
6. C/C++ struct/union types
7. C/C++ enumeration types
Written by Chris Lattner and Jim Laskey
Introduction
This document is the central repository for all information pertaining to debug information in LLVM. It
describes the actual format that the LLVM debug information takes, which is useful for those interested in
creating front-ends or dealing directly with the information. Further, this document provides specific
examples of the debug information generated for C/C++.
• Debugging information should have very little impact on the rest of the compiler. No transformations,
analyses, or code generators should need to be modified because of debugging information.
• LLVM optimizations should interact in well-defined and easily described ways with the debugging
information.
• Because LLVM is designed to support arbitrary programming languages, LLVM-to-LLVM tools
should not need to know anything about the semantics of the source-level-language.
• Source-level languages are often widely different from one another. LLVM should not put any
restrictions on the flavor of the source language, and the debugging information should work with any
language.
• With code generator support, it should be possible to use an LLVM compiler to compile a program to
native machine code and standard debugging formats. This allows compatibility with traditional
machine-code level debuggers, like GDB or DBX.
The approach used by the LLVM implementation is to use a small set of intrinsic functions to define a
mapping between LLVM program objects and the source-level objects. The description of the source-level
program is maintained in LLVM metadata in an implementation-defined format (the C/C++ front-end
currently uses working draft 7 of the DWARF 3 standard).
When a program is being debugged, a debugger interacts with the user and turns the stored debug information
into source-language specific information. As such, a debugger must be aware of the source-language, and is
thus tied to a specific language or family of languages.
Currently, debug information is consumed by the DwarfWriter to produce dwarf information used by the gdb
debugger. Other targets could use the same information to produce stabs or other debug forms.
It would also be reasonable to use debug information to feed profiling tools for analysis of generated code, or,
tools for reconstructing the original source from generated code.
• LLVM debug information always provides information to accurately read the source-level state
of the program, regardless of which LLVM optimizations have been run, and without any
modification to the optimizations themselves. However, some optimizations may impact the ability to
modify the current state of the program with a debugger, such as setting program variables, or calling
functions that have been deleted.
• LLVM optimizations gracefully interact with debugging information. If they are not aware of debug
information, they are automatically disabled as necessary in the cases that would invalidate the debug
info. This retains the LLVM features, making it easy to write new transformations.
• As desired, LLVM optimizations can be upgraded to be aware of the LLVM debugging information,
allowing them to update the debugging information as they perform aggressive optimizations. This
means that, with effort, the LLVM optimizers could optimize debug code just as well as non-debug
code.
• LLVM debug information does not prevent many important optimizations from happening (for
example inlining, basic block reordering/merging/cleanup, tail duplication, etc), further reducing the
performance penalty of compiling with debug information.
Basically, the debug information allows you to compile a program with "-O0 -g" and get full debug
information, allowing you to arbitrarily modify the program as it executes from a debugger. Compiling a
program with "-O3 -g" gives you full debug information that is always available and accurate for reading
(e.g., you get accurate stack traces despite tail call elimination and inlining), but you might lose the ability to
modify the program and call functions that were optimized out of the program, or inlined away completely.
The LLVM test suite provides a framework to test the optimizer's handling of debugging information. These
tests check whether debugging information influences the optimization passes; if it does, that is reported as a
failure. See TestingGuide for more information on the LLVM test infrastructure and how to run various tests.
To do this, most of the debugging information (descriptors for types, variables, functions, source files, etc) is
inserted by the language front-end in the form of LLVM metadata.
Debug information is designed to be agnostic about the target debugger and debugging information
representation (e.g. DWARF/Stabs/etc). It uses a generic pass to decode the information that represents
variables, types, functions, namespaces, etc: this allows for arbitrary source-language semantics and
type-systems to be used, as long as there is a module written for the target debugger to interpret the
information.
To provide basic functionality, the LLVM debugger does have to make some assumptions about the
source-level language being debugged, though it keeps these to a minimum. The only common features that
the LLVM debugger assumes exist are source files, and program objects. These abstract objects are used by a
debugger to form stack traces, show information about local variables, etc.
This section of the documentation first describes the representation aspects common to any source-language.
The next section describes the data layout conventions used by the C and C++ front-ends.
Consumers of LLVM debug information expect the descriptors for program objects to start in a canonical
format, but the descriptors can include additional information appended at the end that is source-language
specific.
The fields of debug descriptors used internally by LLVM are restricted to only the simple data types int,
uint, bool, float, double, mdstring and mdnode.
!1 = metadata !{
uint, ;; A tag
...
}
The first field of a descriptor is always an uint containing a tag value identifying the content of the
descriptor. The remaining fields are specific to the descriptor. The values of tags are loosely bound to the tag
values of DWARF information entries. However, that does not restrict the use of the information supplied to
DWARF targets. To facilitate versioning of debug information, the tag is augmented with the current debug
version (LLVMDebugVersion = 7 << 16 or 0x70000 or 458752.)
These descriptors contain a source language ID for the file (we use the DWARF 3.0 ID numbers, such as
DW_LANG_C89, DW_LANG_C_plus_plus, DW_LANG_Cobol74, etc), three strings describing the
filename, working directory of the compiler, and an identifier string for the compiler that produced it.
Compile unit descriptors provide the root context for objects declared in a specific source file. Global
variables and top level functions would be defined using this context. Compile unit descriptors also provide
context for source line correspondence.
Each input file is encoded as a separate compile unit in LLVM debugging information output. However, the
code generator emits only one compile unit, marked as the main compile unit, in an object file's debugging
information section. Most, if not all, target-specific tool chains expect only one compile unit entry per
object file.
These descriptors provide debug information about global variables. They provide details such as the name,
type and where the variable is defined.
Subprogram descriptors
!2 = metadata !{
i32, ;; Tag = 46 + LLVMDebugVersion
;; (DW_TAG_subprogram)
i32, ;; Unused field.
metadata, ;; Reference to context descriptor
metadata, ;; Name
metadata, ;; Display name (fully qualified C++ name)
metadata, ;; MIPS linkage name (for C++)
metadata, ;; Reference to compile unit where defined
i32, ;; Line number where defined
metadata, ;; Reference to type descriptor
i1, ;; True if the global is local to compile unit (static)
i1 ;; True if the global is defined in the compile unit (not extern)
}
These descriptors provide debug information about functions, methods and subprograms. They provide details
such as name, return types and the source location where the subprogram is defined.
Block descriptors
!3 = metadata !{
i32, ;; Tag = 13 + LLVMDebugVersion (DW_TAG_lexical_block)
metadata ;; Reference to context descriptor
}
These descriptors provide debug information about nested blocks within a subprogram. The array of member
descriptors is used to define local variables and deeper nested blocks.
The type encoding provides the details of the type. The values are typically one of the following:
DW_ATE_address = 1
DW_ATE_boolean = 2
DW_ATE_float = 4
DW_ATE_signed = 5
DW_ATE_signed_char = 6
DW_ATE_unsigned = 7
DW_ATE_unsigned_char = 8
These descriptors are used to define types derived from other types. The value of the tag varies depending on
the meaning. The following are possible tag values:
DW_TAG_formal_parameter = 5
DW_TAG_member = 13
DW_TAG_pointer_type = 15
DW_TAG_reference_type = 16
DW_TAG_typedef = 22
DW_TAG_const_type = 38
DW_TAG_volatile_type = 53
DW_TAG_restrict_type = 55
DW_TAG_member is used to define a member of a composite type or subprogram. The type of the member is
the derived type. DW_TAG_formal_parameter is used to define a member which is a formal argument of
a subprogram.
DW_TAG_pointer_type,DW_TAG_reference_type, DW_TAG_const_type,
DW_TAG_volatile_type and DW_TAG_restrict_type are used to qualify the derived type.
Derived type location can be determined from the compile unit and line number. The size, alignment and
offset are expressed in bits and can be 64 bit values. The alignment is used to round the offset when embedded
in a composite type (for example, to keep doubles on 64-bit boundaries). The offset is the bit offset if
embedded in a composite type.
These descriptors are used to define types that are composed of 0 or more elements. The value of the tag
varies depending on the meaning. The following are possible tag values:
DW_TAG_array_type = 1
DW_TAG_enumeration_type = 4
DW_TAG_structure_type = 19
DW_TAG_union_type = 23
DW_TAG_vector_type = 259
DW_TAG_subroutine_type = 21
DW_TAG_inheritance = 28
The vector flag indicates that an array type is a native packed vector.
For C++ classes (tag = DW_TAG_structure_type), member descriptors provide information about base
classes, static members and member functions. If a member is a derived type descriptor and has a tag of
DW_TAG_inheritance, then the type represents a base class. If the member is a global variable
descriptor, then it represents a static member. And, if the member is a subprogram descriptor, then it represents
a member function. For static members and member functions, getName() returns the member's linkage or
C++ mangled name, while getDisplayName() returns the simplified version of the name.
The first member of subroutine (tag = DW_TAG_subroutine_type) type elements is the return type for
the subroutine. The remaining elements are the formal arguments to the subroutine.
Subrange descriptors
%llvm.dbg.subrange.type = type {
i32, ;; Tag = 33 + LLVMDebugVersion (DW_TAG_subrange_type)
i64, ;; Low value
i64 ;; High value
}
These descriptors are used to define ranges of array subscripts for an array composite type. The low value
defines the lower bound, typically zero for C/C++. The high value is the upper bound. Both values are 64 bit.
High - low + 1 is the size of the array. If low == high, the array is unbounded.
Enumerator descriptors
!6 = metadata !{
i32, ;; Tag = 40 + LLVMDebugVersion
;; (DW_TAG_enumerator)
metadata, ;; Name
i64 ;; Value
}
These descriptors are used to define the members of an enumeration composite type; each associates a name
with a value.
Local variables
!7 = metadata !{
i32, ;; Tag (see below)
metadata, ;; Context
metadata, ;; Name
metadata, ;; Reference to compile unit where defined
i32, ;; Line number where defined
metadata ;; Type descriptor
}
These descriptors are used to define variables local to a sub program. The value of the tag depends on the
usage of the variable:
DW_TAG_auto_variable = 256
DW_TAG_arg_variable = 257
DW_TAG_return_variable = 258
An auto variable is any variable declared in the body of the function. An argument variable is any variable
that appears as a formal argument to the function. A return variable is used to track the result of a function and
has no source correspondent.
The context is either the subprogram or block where the variable is defined. Name is the source variable name.
Compile unit and line indicate where the variable was defined. Type descriptor defines the declared type of
the variable.
llvm.dbg.declare
void %llvm.dbg.declare( { } *, metadata )
This intrinsic provides information about a local element (e.g., a variable). The first argument is the alloca for
the variable, cast to a { }*. The second argument is the %llvm.dbg.variable containing the description of
the variable.
llvm.dbg.value
void %llvm.dbg.value( metadata, i64, metadata )
This intrinsic provides information when a user source variable is set to a new value. The first argument is the
new value (wrapped as metadata). The second argument is the offset in the user source variable where the new
value is written. The third argument is the %llvm.dbg.variable containing the description of the user
source variable.
In order to handle this, the LLVM debug format uses the metadata attached to LLVM instructions to encode
line number and scoping information. Consider the following C fragment, for example:
1. void foo() {
2. int X = 21;
3. int Y = 22;
4. {
5. int Z = 23;
6. Z = X;
7. }
8. X = Y;
9. }
This example illustrates a few important details about LLVM debugging information. In particular, it shows
how the llvm.dbg.declare intrinsic and location information, which are attached to an instruction, are
applied together to allow a debugger to analyze the relationship between statements, variable definitions, and
the code used to implement the function.
The first intrinsic %llvm.dbg.declare encodes debugging information for the variable X. The metadata
!dbg !7 attached to the intrinsic provides scope information for the variable X.
Here !7 is metadata providing location information. It has four fields: line number, column number, scope,
and original scope. The original scope represents inline location if this instruction is inlined inside a caller,
and is null otherwise. In this example, scope is encoded by !1. !1 represents a lexical block inside the scope
!2, where !2 is a subprogram descriptor. This way the location information attached to the intrinsics
indicates that the variable X is declared at line number 2 at a function level scope in function foo.
The second intrinsic %llvm.dbg.declare encodes debugging information for variable Z. The metadata
!dbg !14 attached to the intrinsic provides scope information for the variable Z.
Here !14 indicates that Z is declared at line number 5 and column number 9 inside of lexical scope !13.
The lexical scope itself resides inside of lexical scope !1 described above.
The scope information attached with each instruction provides a straightforward way to find instructions
covered by a scope.
This section describes the forms used to represent C and C++ programs. Other languages could pattern
themselves after this (which itself is tuned to representing programs in the same way that DWARF 3 does), or
they could choose to provide completely different forms if they don't fit into the DWARF model. As support
for debugging information gets added to the various LLVM source-language front-ends, the information used
should be documented here.
The following sections provide examples of various C/C++ constructs and the debug information that would
best describe those constructs.
#include "MyHeader.h"
...
;;
;; Define the compile unit for the source file "/Users/mine/sources/MySource.cpp".
;;
!3 = metadata !{
i32 458769, ;; Tag
i32 0, ;; Unused
i32 4, ;; Language Id
metadata !"MySource.cpp",
metadata !"/Users/mine/sources",
metadata !"4.2.1 (Based on Apple Inc. build 5649) (LLVM build 00)",
i1 true, ;; Main Compile Unit
i1 false, ;; Optimized compile unit
metadata !"", ;; Compiler flags
i32 0} ;; Runtime version
;;
;; Define the compile unit for the header file "/Users/mine/sources/MyHeader.h".
;;
!1 = metadata !{
i32 458769, ;; Tag
i32 0, ;; Unused
i32 4, ;; Language Id
metadata !"MyHeader.h",
metadata !"/Users/mine/sources",
metadata !"4.2.1 (Based on Apple Inc. build 5649) (LLVM build 00)",
i1 false, ;; Main Compile Unit
i1 false, ;; Optimized compile unit
metadata !"", ;; Compiler flags
i32 0} ;; Runtime version
...
;;
;; Define the global itself.
;;
%MyGlobal = global int 100
...
;;
;; List of debug info of globals
;;
!llvm.dbg.gv = !{!0}
;;
;; Define the global variable descriptor. Note the reference to the global
;; variable anchor and the global variable itself.
;;
!0 = metadata !{
i32 458804, ;; Tag
i32 0, ;; Unused
metadata !1, ;; Context
metadata !"MyGlobal", ;; Name
metadata !"MyGlobal", ;; Display Name
metadata !"MyGlobal", ;; Linkage Name
metadata !1, ;; Compile Unit
i32 1, ;; Line Number
metadata !2, ;; Type
i1 false, ;; Is a local variable
i1 true, ;; Is this a definition
i32* @MyGlobal ;; The global variable
}
;;
;; Define the basic type of 32 bit signed integer. Note that since int is an
;;
;; Define the anchor for subprograms. Note that the second field of the
;; anchor is 46, which is the same as the tag for subprograms
;; (46 = DW_TAG_subprogram.)
;;
!0 = metadata !{
i32 458798, ;; Tag
i32 0, ;; Unused
metadata !1, ;; Context
metadata !"main", ;; Name
metadata !"main", ;; Display name
metadata !"main", ;; Linkage name
metadata !1, ;; Compile unit
i32 1, ;; Line number
metadata !2, ;; Type
i1 false, ;; Is local
i1 true ;; Is definition
}
;;
;; Define the subprogram itself.
;;
define i32 @main(i32 %argc, i8** %argv) {
...
}
bool
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"bool", ;; Name
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 8, ;; Size in Bits
i64 8, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 2 ;; Encoding
}
char
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"char", ;; Name
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 8, ;; Size in Bits
i64 8, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 6 ;; Encoding
}
unsigned char
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"unsigned char",
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 8, ;; Size in Bits
i64 8, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 8 ;; Encoding
}
short
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"short int",
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 16, ;; Size in Bits
i64 16, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 5 ;; Encoding
}
unsigned short
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"short unsigned int",
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 16, ;; Size in Bits
i64 16, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 7 ;; Encoding
}
int
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"int", ;; Name
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 32, ;; Size in Bits
i64 32, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 5 ;; Encoding
}
unsigned int
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"unsigned int",
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 32, ;; Size in Bits
i64 32, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 7 ;; Encoding
}
long long
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"long long int",
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 64, ;; Size in Bits
i64 64, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 5 ;; Encoding
}
float
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"float",
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 32, ;; Size in Bits
i64 32, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 4 ;; Encoding
}
double
!2 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"double",;; Name
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 64, ;; Size in Bits
i64 64, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 4 ;; Encoding
}
;;
;; Define the typedef "IntPtr".
;;
!2 = metadata !{
i32 458774, ;; Tag
metadata !1, ;; Context
metadata !"IntPtr", ;; Name
metadata !3, ;; Compile unit
i32 0, ;; Line number
i64 0, ;; Size in bits
i64 0, ;; Align in bits
i64 0, ;; Offset in bits
i32 0, ;; Flags
metadata !4 ;; Derived From type
}
;;
;; Define the pointer type.
;;
!4 = metadata !{
i32 458767, ;; Tag
metadata !1, ;; Context
metadata !"", ;; Name
metadata !1, ;; Compile unit
struct Color {
unsigned Red;
unsigned Green;
unsigned Blue;
};
;;
;; Define basic type for unsigned int.
;;
!5 = metadata !{
i32 458788, ;; Tag
metadata !1, ;; Context
metadata !"unsigned int",
metadata !1, ;; Compile Unit
i32 0, ;; Line number
i64 32, ;; Size in Bits
i64 32, ;; Align in Bits
i64 0, ;; Offset in Bits
i32 0, ;; Flags
i32 7 ;; Encoding
}
;;
;; Define the Red field.
;;
!4 = metadata !{
i32 458765, ;; Tag
metadata !1, ;; Context
metadata !"Red", ;; Name
metadata !1, ;; Compile Unit
i32 2, ;; Line number
i64 32, ;; Size in bits
i64 32, ;; Align in bits
i64 0, ;; Offset in bits
i32 0, ;; Flags
metadata !5 ;; Derived From type
}
;;
;; Define the Green field.
;;
!6 = metadata !{
i32 458765, ;; Tag
metadata !1, ;; Context
metadata !"Green", ;; Name
metadata !1, ;; Compile Unit
i32 3, ;; Line number
i64 32, ;; Size in bits
i64 32, ;; Align in bits
i64 32, ;; Offset in bits
i32 0, ;; Flags
metadata !5 ;; Derived From type
}
;;
;; Define the Blue field.
;;
!7 = metadata !{
i32 458765, ;; Tag
metadata !1, ;; Context
metadata !"Blue", ;; Name
metadata !1, ;; Compile Unit
i32 4, ;; Line number
i64 32, ;; Size in bits
i64 32, ;; Align in bits
i64 64, ;; Offset in bits
i32 0, ;; Flags
metadata !5 ;; Derived From type
}
;;
;; Define the array of fields used by the composite type Color.
;;
!3 = metadata !{metadata !4, metadata !6, metadata !7}
enum Trees {
Spruce = 100,
Oak = 200,
Maple = 300
};
;;
;; Define composite type for enum Trees
;;
!2 = metadata !{
i32 458756, ;; Tag
metadata !1, ;; Context
metadata !"Trees", ;; Name
metadata !1, ;; Compile unit
i32 1, ;; Line number
i64 32, ;; Size in bits
i64 32, ;; Align in bits
i64 0, ;; Offset in bits
i32 0, ;; Flags
null, ;; Derived From type
metadata !3, ;; Elements
i32 0 ;; Runtime language
}
;;
;; Define the array of enumerators used by composite type Trees.
;;
!3 = metadata !{metadata !4, metadata !5, metadata !6}
;;
;; Define Spruce enumerator.
;;
!4 = metadata !{i32 458792, metadata !"Spruce", i64 100}
;;
;; Define Oak enumerator.
;;
!5 = metadata !{i32 458792, metadata !"Oak", i64 200}
;;
;; Define Maple enumerator.
;;
!6 = metadata !{i32 458792, metadata !"Maple", i64 300}
• Introduction
1. Itanium ABI Zero-cost
Exception Handling
2. Setjmp/Longjmp Exception
Handling
3. Overview
• LLVM Code Generation
1. Throw
2. Try/Catch
3. Cleanups
4. Throw Filters
5. Restrictions
• Exception Handling Intrinsics
1. llvm.eh.exception
2. llvm.eh.selector
3. llvm.eh.typeid.for
4. llvm.eh.sjlj.setjmp
5. llvm.eh.sjlj.longjmp
6. llvm.eh.sjlj.lsda
7. llvm.eh.sjlj.callsite
• Asm Table Formats
1. Exception Handling Frame
2. Exception Tables
• ToDo
Written by Jim Laskey
Introduction
This document is the central repository for all information pertaining to exception handling in LLVM. It
describes the format that LLVM exception handling information takes, which is useful for those interested in
creating front-ends or dealing directly with the information. Further, this document provides specific
examples of what exception handling information is used for in C/C++.
The Itanium ABI Exception Handling Specification defines a methodology for providing outlying data in the
form of exception tables without inlining speculative exception handling code in the flow of an application's
main algorithm. Thus, the specification is said to add "zero-cost" to the normal execution of an application.
A more complete description of the Itanium ABI exception handling runtime support can be found at
Itanium C++ ABI: Exception Handling. A description of the exception frame format can be found at
Exception Frames, with details of the DWARF 3 specification at DWARF 3 Standard. A description for the
C++ exception table formats can be found at Exception Handling Tables.
Setjmp/Longjmp (SJLJ) based exception handling uses LLVM intrinsics llvm.eh.sjlj.setjmp and
llvm.eh.sjlj.longjmp to handle control flow for exception handling.
For each function which does exception processing, be it try/catch blocks or cleanups, that function registers
itself on a global frame list. When exceptions are being unwound, the runtime uses this list to identify which
functions need processing.
Landing pad selection is encoded in the call site entry of the function context. The runtime returns to the
function via llvm.eh.sjlj.longjmp, where a switch table transfers control to the appropriate landing
pad based on the index stored in the function context.
In contrast to DWARF exception handling, which encodes exception regions and frame information in
out-of-line tables, SJLJ exception handling builds and removes the unwind frame context at runtime. This
results in faster exception handling at the expense of slower execution when no exceptions are thrown. As
exceptions are, by their nature, intended for uncommon code paths, DWARF exception handling is generally
preferred to SJLJ.
Overview
When an exception is thrown in LLVM code, the runtime does its best to find a handler suited to processing
the circumstance.
The runtime first attempts to find an exception frame corresponding to the function where the exception was
thrown. If the programming language (e.g. C++) supports exception handling, the exception frame contains a
reference to an exception table describing how to process the exception. If the language (e.g. C) does not
support exception handling, or if the exception needs to be forwarded to a prior activation, the exception
frame contains information about how to unwind the current activation and restore the state of the prior
activation. This process is repeated until the exception is handled. If the exception is not handled and no
activations remain, then the application is terminated with an appropriate error message.
Because different programming languages have different behaviors when handling exceptions, the exception
handling ABI provides a mechanism for supplying personalities. An exception handling personality is defined
by way of a personality function (e.g. __gxx_personality_v0 in C++), which receives the context of
the exception, an exception structure containing the exception object type and value, and a reference to the
exception table for the current function. The personality function for the current compile unit is specified in a
common exception frame.
The organization of an exception table is language dependent. For C++, an exception table is organized as a
series of code ranges defining what to do if an exception occurs in that range. Typically, the information
associated with a range defines which types of exception objects (using C++ type info) that are handled in that
range, and an associated action that should take place. Actions typically pass control to a landing pad.
A landing pad corresponds to the code found in the catch portion of a try/catch sequence. When execution
resumes at a landing pad, it receives the exception structure and a selector corresponding to the type of
exception thrown. The selector is then used to determine which catch should actually process the exception.
From the C++ developer's perspective, exceptions are defined in terms of the throw and try/catch
statements. In this section we will describe the implementation of LLVM exception handling in terms of C++
examples.
Throw
Languages that support exception handling typically provide a throw operation to initiate the exception
process. Internally, a throw operation breaks down into two steps. First, a request is made to allocate
exception space for an exception structure. This structure needs to survive beyond the current activation. This
structure will contain the type and value of the object being thrown. Second, a call is made to the runtime to
raise the exception, passing the exception structure as an argument.
In C++, the allocation of the exception structure is done by the __cxa_allocate_exception runtime
function. The exception raising is handled by __cxa_throw. The type of the exception is represented using
a C++ RTTI structure.
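In the LLVM IR of this era (typed pointers, no landingpad instruction), the two steps might look like the following sketch. All symbol names (@_ZTIi, the struct type) are illustrative assumptions, not taken from this document:

```llvm
; Illustrative lowering of the C++ statement "throw 1;"
%exc = call i8* @__cxa_allocate_exception(i64 4)     ; space for an i32
%payload = bitcast i8* %exc to i32*
store i32 1, i32* %payload                           ; value being thrown
call void @__cxa_throw(i8* %exc,
    i8* bitcast (%struct.type_info* @_ZTIi to i8*),  ; RTTI for "int"
    i8* null)                                        ; destructor (none for int)
unreachable
```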
Try/Catch
A call within the scope of a try statement can potentially raise an exception. In those circumstances, the
LLVM C++ front-end replaces the call with an invoke instruction. Unlike a call, the invoke has two
potential continuation points: where to continue when the call succeeds as per normal; and where to continue
if the call raises an exception, either by a throw or the unwinding of a throw.
The place where an invoke continues after an exception is raised is called a landing pad.
LLVM landing pads are conceptually alternative function entry points where an exception structure reference
and a type info index are passed in as arguments. The landing pad saves the exception structure reference and
then proceeds to select the catch block that corresponds to the type info of the exception object.
Two LLVM intrinsic functions are used to convey information about the landing pad to the back end.
1. llvm.eh.exception takes no arguments and returns a pointer to the exception structure. This
only returns a sensible value if called after an invoke has branched to a landing pad. Due to code
generation limitations, it must currently be called in the landing pad itself.
2. llvm.eh.selector takes a minimum of three arguments. The first argument is the reference to
the exception structure. The second argument is a reference to the personality function to be used for
this try/catch sequence. Each of the remaining arguments is either a reference to the type info for
a catch statement, a filter expression, or the number zero (0) representing a cleanup. The exception
is tested against the arguments sequentially from first to last. The result of the llvm.eh.selector
is a positive number if the exception matched a type info, a negative number if it matched a filter, and
zero if it matched a cleanup. If nothing is matched, the behaviour of the program is undefined. This
only returns a sensible value if called after an invoke has branched to a landing pad. Due to codegen
limitations, it must currently be called in the landing pad itself. If a type info matched, then the
selector value is the index of the type info in the exception table, which can be obtained using the
llvm.eh.typeid.for intrinsic.
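Together, the two intrinsics give a landing pad of roughly the following shape. This is a sketch in the pre-landingpad-instruction IR style described here; @may_throw, @_ZTIi, and the block names are illustrative:

```llvm
declare i8* @llvm.eh.exception() nounwind
declare i32 @llvm.eh.selector(i8*, i8*, ...) nounwind

  invoke void @may_throw()
      to label %normal unwind label %lpad

lpad:
  %eh_ptr = call i8* @llvm.eh.exception()
  %eh_sel = call i32 (i8*, i8*, ...)* @llvm.eh.selector(i8* %eh_ptr,
      i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*),
      i8* bitcast (%struct.type_info* @_ZTIi to i8*))   ; catch (int)
```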
Once the landing pad has the type info selector, the code branches to the code for the first catch. The catch
then checks the value of the type info selector against the index of type info for that catch. Since the type info
index is not known until all the type info have been gathered in the backend, the catch code will call the
llvm.eh.typeid.for intrinsic to determine the index for a given type info. If the catch fails to match the
selector then control is passed on to the next catch. Note: Since the landing pad will not be used if there is no
match in the list of type info on the call to llvm.eh.selector, then neither the last catch nor catch all
need to perform the check against the selector.
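The dispatch check can be sketched as follows, where %eh_sel holds the result of the llvm.eh.selector call and @_ZTIi (the type info for "int") and the block names are illustrative:

```llvm
%tid = call i32 @llvm.eh.typeid.for(
    i8* bitcast (%struct.type_info* @_ZTIi to i8*))
%matches = icmp eq i32 %eh_sel, %tid
br i1 %matches, label %catch.int, label %next.catch
```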
Finally, the entry and exit of catch code is bracketed with calls to __cxa_begin_catch and
__cxa_end_catch.
• __cxa_begin_catch takes an exception structure reference as an argument and returns the value
of the exception object.
• __cxa_end_catch takes no arguments. This function:
1. Locates the most recently caught exception and decrements its handler count,
2. Removes the exception from the "caught" stack if the handler count goes to zero, and
3. Destroys the exception if the handler count goes to zero, and the exception was not re-thrown
by throw.
Note: a rethrow from within the catch may replace this call with a __cxa_rethrow.
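A catch block bracketed by these runtime calls can be sketched like this, where %eh_ptr is the saved exception structure reference and the block names are illustrative:

```llvm
catch.int:
  %obj = call i8* @__cxa_begin_catch(i8* %eh_ptr)
  ; ...code for the body of the catch clause...
  call void @__cxa_end_catch()
  br label %try.cont
```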
Cleanups
To handle destructors and cleanups in try code, control may not run directly from a landing pad to the first
catch. Control may actually flow from the landing pad to cleanup code and then to the first catch. Since the
required clean up for each invoke in a try may be different (e.g. intervening constructor), there may be
several landing pads for a given try. If cleanups need to be run, an i32 0 should be passed as the last
llvm.eh.selector argument. However, when using DWARF exception handling with C++, an i8*
null must be passed instead.
Throw Filters
C++ allows the specification of which exception types can be thrown from a function. To represent this, a
top-level landing pad may exist to filter out invalid types. To express this in LLVM code, the landing pad will call
llvm.eh.selector. The arguments are a reference to the exception structure, a reference to the
personality function, the length of the filter expression (the number of type infos plus one), followed by the
type infos themselves. llvm.eh.selector will return a negative value if the exception does not match
any of the type infos. If no match is found then a call to __cxa_call_unexpected should be made,
otherwise _Unwind_Resume. Each of these functions requires a reference to the exception structure. Note
that the most general form of an llvm.eh.selector call can contain any number of type infos, filter
expressions and cleanups (though having more than one cleanup is pointless). The LLVM C++ front-end can
generate such llvm.eh.selector calls due to inlining creating nested exception handling scopes.
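A filter selector for a hypothetical "void f() throw(int)" might be sketched as below; %eh_ptr is the exception structure reference and @_ZTIi is illustrative:

```llvm
%sel = call i32 (i8*, i8*, ...)* @llvm.eh.selector(i8* %eh_ptr,
    i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*),
    i32 2,                                            ; filter length: 1 type info + 1
    i8* bitcast (%struct.type_info* @_ZTIi to i8*))   ; allowed type: int
```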
Restrictions
The semantics of the invoke instruction require that any exception that unwinds through an invoke call should
result in a branch to the invoke's unwind label. However such a branch will only happen if the
llvm.eh.selector matches. Thus in order to ensure correct operation, the front-end must only generate
llvm.eh.selector calls that are guaranteed to always match whatever exception unwinds through the
invoke. For most languages it is enough to pass zero, indicating the presence of a cleanup, as the last
llvm.eh.selector argument. However for C++ this is not sufficient, because the C++ personality
function will terminate the program if it detects that unwinding the exception only results in matches with
cleanups. For C++ a null i8* should be passed as the last llvm.eh.selector argument instead. This
is interpreted as a catch-all by the C++ personality function, and will always match.
llvm.eh.exception
i8* %llvm.eh.exception( )
This intrinsic takes no arguments and returns a pointer to the exception structure.
llvm.eh.selector
This intrinsic is used to compare the exception with the given type infos, filters and cleanups.
llvm.eh.selector takes a minimum of three arguments. The first argument is the reference to the
exception structure. The second argument is a reference to the personality function to be used for this try catch
sequence. Each of the remaining arguments is either a reference to the type info for a catch statement, a filter
expression, or the number zero representing a cleanup. The exception is tested against the arguments
sequentially from first to last. The result of the llvm.eh.selector is a positive number if the exception
matched a type info, a negative number if it matched a filter, and zero if it matched a cleanup. If nothing is
matched, the behaviour of the program is undefined. If a type info matched then the selector value is the index
of the type info in the exception table, which can be obtained using the llvm.eh.typeid.for intrinsic.
llvm.eh.typeid.for
i32 %llvm.eh.typeid.for(i8*)
This intrinsic returns the type info index in the exception table of the current function. This value can be used
to compare against the result of llvm.eh.selector. The single argument is a reference to a type info.
llvm.eh.sjlj.setjmp
i32 %llvm.eh.sjlj.setjmp(i8*)
The SJLJ exception handling uses this intrinsic to force register saving for the current function and to store the
address of the following instruction for use as a destination address by llvm.eh.sjlj.longjmp. The
buffer format and the overall functioning of this intrinsic is compatible with the GCC __builtin_setjmp
implementation, allowing code built with the two compilers to interoperate.
The single parameter is a pointer to a five word buffer in which the calling context is saved. The front end
places the frame pointer in the first word, and the target implementation of this intrinsic should place the
destination address for a llvm.eh.sjlj.longjmp in the second word. The following three words are
available for use in a target-specific manner.
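The buffer layout can be seen from the GCC builtins the intrinsic is documented to be compatible with. The following is a small C sketch using __builtin_setjmp/__builtin_longjmp (real GCC/Clang builtins); the demo/jump_back names and control flow are illustrative, not part of LLVM:

```c
/* Five-word buffer, matching the layout described above:
   word 0: frame pointer, word 1: destination address,
   words 2-4: target-specific. */
static void *buf[5];

static int phase = 0;

static void jump_back(void) {
    phase = 1;
    __builtin_longjmp(buf, 1);   /* value argument must be the constant 1 */
}

/* Returns 2: __builtin_setjmp yields non-zero on the longjmp path. */
int demo(void) {
    if (__builtin_setjmp(buf) == 0) {
        jump_back();             /* never returns normally */
        return -1;
    }
    return phase + 1;
}
```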
llvm.eh.sjlj.lsda
i8* %llvm.eh.sjlj.lsda( )
Used for SJLJ based exception handling, the llvm.eh.sjlj.lsda intrinsic returns the address of the
Language Specific Data Area (LSDA) for the current function. The SJLJ front-end code stores this address in
the exception handling function context for use by the runtime.
llvm.eh.sjlj.callsite
void %llvm.eh.sjlj.callsite(i32)
For SJLJ based exception handling, the llvm.eh.sjlj.callsite intrinsic identifies the callsite value
associated with the following invoke instruction. This is used to ensure that landing pad entries in the LSDA
are generated in the matching order.
Exception Tables
An exception table contains information about what actions to take when an exception is thrown in a
particular part of a function's code. There is one exception table per function; leaf routines and
functions that make only calls to non-throwing functions do not need an exception table.
ToDo
1. Testing/Testing/Testing.
Chris Lattner
LLVM Compiler Infrastructure
Last modified: $Date: 2010-01-27 19:45:32 -0600 (Wed, 27 Jan 2010) $
• Description
• Design Philosophy
♦ Automatic Debugger Selection
♦ Crash debugger
♦ Code generator debugger
♦ Miscompilation debugger
• Advice for using bugpoint
Description
bugpoint narrows down the source of problems in LLVM tools and passes. It can be used to debug three
types of failures: optimizer crashes, miscompilations by optimizers, or bad native code generation (including
problems in the static and JIT compilers). It aims to reduce large test cases to small, useful ones. For example,
if opt crashes while optimizing a file, it will identify the optimization (or combination of optimizations) that
causes the crash, and reduce the file down to a small example which triggers the crash.
For detailed case scenarios, such as debugging opt, llvm-ld, or one of the LLVM code generators, see the
How To Submit a Bug Report document.
Design Philosophy
bugpoint is designed to be a useful tool without requiring any hooks into the LLVM infrastructure at all. It
works with any and all LLVM passes and code generators, and does not need to "know" how they work.
Because of this, it may appear to do stupid things or miss obvious simplifications. bugpoint is also
designed to trade off programmer time for computer time in the compiler-debugging process; consequently, it
may take a long period of (unattended) time to reduce a test case, but we feel it is still worth it. Note that
bugpoint is generally very quick unless debugging a miscompilation where each test of the program (which
requires executing it) takes a long time.
Automatic Debugger Selection
Otherwise, if the -output option was not specified, bugpoint runs the test program with the C backend
(which is assumed to generate good code) to generate a reference output. Once bugpoint has a reference
output for the test program, it tries executing it with the selected code generator. If the selected code generator
crashes, bugpoint starts the crash debugger on the code generator. Otherwise, if the resulting output differs
from the reference output, it assumes the difference resulted from a code generator failure, and starts the code
generator debugger.
Finally, if the output of the selected code generator matches the reference output, bugpoint runs the test
program after all of the LLVM passes have been applied to it. If its output differs from the reference output, it
assumes the difference resulted from a failure in one of the LLVM passes, and enters the miscompilation
debugger. Otherwise, there is no problem bugpoint can debug.
Crash debugger
Next, bugpoint tries removing functions from the test program, to reduce its size. Usually it is able to
reduce a test program to a single function, when debugging intraprocedural optimizations. Once the number of
functions has been reduced, it attempts to delete various edges in the control flow graph, to reduce the size of
the function as much as possible. Finally, bugpoint deletes any individual LLVM instructions whose
absence does not eliminate the failure. At the end, bugpoint should tell you what passes crash, give you a
bitcode file, and give you instructions on how to reproduce the failure with opt or llc.
Miscompilation debugger
The miscompilation debugger works similarly to the code generator debugger. It works by splitting the test
program into two pieces, running the optimizations specified on one piece, linking the two pieces back
together, and then executing the result. It attempts to narrow down the list of passes to the one (or few) which
are causing the miscompilation, then reduce the portion of the test program which is being miscompiled. The
miscompilation debugger assumes that the selected code generator is working properly.
Advice for using bugpoint
1. In the code generator and miscompilation debuggers, bugpoint only works with programs that have
deterministic output. Thus, if the program outputs argv[0], the date, time, or any other "random"
data, bugpoint may misinterpret differences in these outputs as the result of a
miscompilation. Programs should be temporarily modified to disable outputs that are likely to vary
from run to run.
2. In the code generator and miscompilation debuggers, debugging will go faster if you manually modify
the program or its inputs to reduce the runtime, but still exhibit the problem.
3. bugpoint is extremely useful when working on a new optimization: it helps track down regressions
quickly. To avoid having to relink bugpoint every time you change your optimization, however,
have bugpoint dynamically load your optimization with the -load option.
4. bugpoint can generate a lot of output and run for a long period of time. It is often useful to capture
the output of the program to file. For example, in the C shell, you can run:
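In the C shell, |& pipes both stdout and stderr; for example (the bugpoint arguments here are illustrative):

```sh
$ bugpoint -run-llc test.bc |& tee bugpoint.log
```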
to get a copy of bugpoint's output in the file bugpoint.log, as well as on your terminal.
5. bugpoint cannot debug problems with the LLVM linker. If bugpoint crashes before you see its
"All input ok" message, you might try llvm-link -v on the same set of input files. If that also
to get a list of passes that are used with -O2 and then pass this list to bugpoint.
Chris Lattner
LLVM Compiler Infrastructure
Last modified: $Date: 2009-10-12 13:12:47 -0500 (Mon, 12 Oct 2009) $
• Introduction
• Compiling with LLVMC
• Using LLVMC to generate toolchain drivers
Introduction
LLVMC is a generic compiler driver, which plays the same role for LLVM as the gcc program does for GCC
- the difference being that LLVMC is designed to be more adaptable and easier to customize. Most of
LLVMC's functionality is implemented via plugins, which can be loaded dynamically or compiled in. This
tutorial describes the basic usage and configuration of LLVMC.
This will invoke llvm-g++ under the hood (you can see which commands are executed by using the -v
option). For further help on command-line LLVMC usage, refer to the llvmc --help output.
$ cd $LLVM_DIR/tools/llvmc
$ cp -r example/Simple plugins/Simple
Here we link our plugin with the LLVMC core statically to form an executable file called mygcc. It is also
possible to build our plugin as a dynamic library to be loaded by the llvmc executable (or any other
// Tool descriptions
def gcc : Tool<
[(in_language "c"),
(out_language "executable"),
(output_suffix "out"),
(cmd_line "gcc $INFILE -o $OUTFILE"),
(sink)
]>;
// Language map
def LanguageMap : LanguageMap<[LangToSuffixes<"c", ["c"]>]>;
// Compilation graph
def CompilationGraph : CompilationGraph<[Edge<"root", "gcc">]>;
As you can see, this file consists of three parts: tool descriptions, language map, and the compilation graph
definition.
At the heart of LLVMC is the idea of a compilation graph: vertices in this graph are tools, and edges represent
a transformation path between two tools (for example, assembly source produced by the compiler can be
transformed into executable code by an assembler). The compilation graph is basically a list of edges; a
special node named root is used to mark graph entry points.
Tool descriptions are represented as property lists: most properties in the example above should be
self-explanatory; the sink property means that all options lacking an explicit description should be
forwarded to this tool.
The LanguageMap associates a language name with a list of suffixes and is used for deciding which
toolchain corresponds to a given input file.
To learn more about LLVMC customization, refer to the reference manual and plugin source code in the
plugins directory.
Mikhail Glushenkov
LLVM Compiler Infrastructure
Last modified: $Date: 2008-12-11 11:34:48 -0600 (Thu, 11 Dec 2008) $
• Introduction
• Compiling with LLVMC
• Predefined options
• Compiling LLVMC plugins
• Compiling standalone LLVMC-based drivers
• Customizing LLVMC: the compilation graph
• Describing options
♦ External options
• Conditional evaluation
• Writing a tool description
♦ Actions
• Language map
• Option preprocessor
• More advanced topics
♦ Hooks and environment variables
♦ How plugins are loaded
♦ Debugging
♦ Conditioning on the executable name
Introduction
LLVMC is a generic compiler driver, designed to be customizable and extensible. It plays the same role for
LLVM as the gcc program does for GCC - LLVMC's job is essentially to transform a set of input files into a
set of targets depending on configuration rules and user options. What makes LLVMC different is that these
transformation rules are completely customizable - in fact, LLVMC knows nothing about the specifics of
transformation (even the command-line options are mostly not hard-coded) and regards the transformation
structure as an abstract graph. The structure of this graph is completely determined by plugins, which can be
either statically or dynamically linked. This makes it possible to easily adapt LLVMC for other purposes - for
example, as a build tool for game resources.
Because LLVMC employs TableGen as its configuration language, you need to be familiar with it to
customize LLVMC.
On the other hand, when using LLVMC as a linker to combine several C++ object files you should provide
the --linker option since it's impossible for LLVMC to choose the right linker in that case:
$ llvmc -c hello.cpp
$ llvmc hello.o
[A lot of link-time errors skipped]
$ llvmc --linker=c++ hello.o
$ ./a.out
hello
By default, LLVMC uses llvm-gcc to compile the source code. It is also possible to choose the clang
compiler with the -clang option.
Predefined options
LLVMC has some built-in options that can't be overridden in the configuration libraries:
$ cd $LLVMC_DIR/plugins
$ cp -r Simple MyPlugin
$ cd MyPlugin
$ ls
Makefile PluginMain.cpp Simple.td
As you can see, our basic plugin consists of only two files (not counting the build script). Simple.td
contains a TableGen description of the compilation graph; its format is documented in the following sections.
PluginMain.cpp is just a helper file used to compile the auto-generated C++ code produced from
TableGen source. It can also contain hook definitions (see below).
The first thing that you should do is to change the LLVMC_PLUGIN variable in the Makefile to avoid
conflicts (since this variable is used to name the resulting library):
LLVMC_PLUGIN=MyPlugin
$ mv Simple.td MyPlugin.td
To build your plugin as a dynamic library, just cd to its source directory and run make. The resulting file will
be called plugin_llvmc_$(LLVMC_PLUGIN).$(DLL_EXTENSION) (in our case,
plugin_llvmc_MyPlugin.so). This library can then be loaded with the -load option. Example:
$ cd $LLVMC_DIR/plugins/Simple
$ make
$ llvmc -load $LLVM_DIR/Release/lib/plugin_llvmc_Simple.so
$ cd $LLVMC_DIR/example/
$ cp -r Skeleton mydriver
$ cd mydriver
$ vim Makefile
[...]
$ make
If you're compiling LLVM with different source and object directories, then you must perform the following
additional steps before running make:
# LLVMC_SRC_DIR = $LLVM_SRC_DIR/tools/llvmc/
# LLVMC_OBJ_DIR = $LLVM_OBJ_DIR/tools/llvmc/
$ cp $LLVMC_SRC_DIR/example/mydriver/Makefile \
$ cd $LLVMC_DIR
$ make LLVMC_BUILTIN_PLUGINS=MyPlugin LLVMC_BASED_DRIVER_NAME=mydriver
This works with both srcdir == objdir and srcdir != objdir, but assumes that the plugin source directory was
placed under $LLVMC_DIR/plugins.
Sometimes, you will want a 'bare-bones' version of LLVMC that has no built-in plugins. It can be compiled
with the following command:
$ cd $LLVMC_DIR
$ make LLVMC_BUILTIN_PLUGINS=""
include "llvm/CompilerDriver/Common.td"
Internally, LLVMC stores information about possible source transformations in the form of a graph. Nodes in this
graph represent tools, and edges between two nodes represent a transformation path. A special "root" node is
used to mark entry points for the transformations. LLVMC also assigns a weight to each edge (more on this
later) to choose between several alternative edges.
The definition of the compilation graph (see file plugins/Base/Base.td for an example) is just a list of
edges:
def CompilationGraph : CompilationGraph<[
Edge<"llvm_gcc_c", "llc">,
Edge<"llvm_gcc_cpp", "llc">,
...
OptionalEdge<"llvm_gcc_assembler", "llvm_gcc_cpp_linker",
(case (input_languages_contain "c++"), (inc_weight),
(or (parameter_equals "linker", "g++"),
(parameter_equals "linker", "c++")), (inc_weight))>,
...
]>;
The default edges are assigned a weight of 1, and optional edges get a weight of 0 + 2*N where N is the
number of tests that evaluated to true in the case expression. It is also possible to provide an integer
parameter to inc_weight and dec_weight - in this case, the weight is increased (or decreased) by the
provided value instead of the default 2. It is also possible to change the default weight of an optional edge by
using the default clause of the case construct.
When passing an input file through the graph, LLVMC picks the edge with the maximum weight. To avoid
ambiguity, there should be only one default edge between two nodes (with the exception of the root node,
which gets a special treatment - there you are allowed to specify one default edge per language).
When multiple plugins are loaded, their compilation graphs are merged together. Since multiple edges that
have the same end nodes are not allowed (i.e. the graph is not a multigraph), an edge defined in several
plugins will be replaced by the definition from the plugin that was loaded last. Plugin load order can be
controlled by using the plugin priority feature described above.
To get a visual representation of the compilation graph (useful for debugging), run llvmc
--view-graph. You will need dot and gsview installed for this to work properly.
Describing options
Command-line options that the plugin supports are defined by using an OptionList:
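A sketch of such a list follows (the option names and help strings are illustrative, not from this document):

```tablegen
def Options : OptionList<[
(switch_option "E", (help "Stop after the preprocessing stage")),
(parameter_option "linker", (help "Choose linker (possible values: gcc, g++)")),
(prefix_list_option "L", (help "Add a directory to the library search path"))
]>;
```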
As you can see, the option list is just a list of DAGs, where each DAG is an option description consisting of
the option name and some properties. A plugin can define more than one option list (they are all merged
together in the end), which can be handy if one wants to separate option groups syntactically.
◊ help - help string associated with this option. Used for -help output.
◊ required - this option must be specified exactly once (or, in case of the list
options without the multi_val property, at least once). Incompatible with
zero_or_one and one_or_more.
◊ one_or_more - the option must be specified at least once. Useful only
for list options in conjunction with multi_val; for ordinary lists it is
synonymous with required. Incompatible with required and
zero_or_one.
◊ optional - the option can be specified zero or one time. Useful only for
list options in conjunction with multi_val. Incompatible with required
and one_or_more.
◊ hidden - the description of this option will not appear in the -help output
(but will appear in the -help-hidden output).
◊ really_hidden - the option will not be mentioned in any help output.
◊ comma_separated - Indicates that any commas specified for an option's
value should be used to split the value up into multiple values for the option.
This property is valid only for list options. In conjunction with
forward_value can be used to implement option forwarding in style of
gcc's -Wa,.
◊ multi_val n - this option takes n arguments (can be useful in some
special cases). Usage example: (parameter_list_option "foo",
(multi_val 3)); the command-line syntax is '-foo a b c'. Only list
options can have this attribute; you can, however, use the one_or_more,
optional and required properties.
◊ init - this option has a default value, either a string (if it is a parameter), or
a boolean (if it is a switch; as in C++, boolean constants are called true and
false). List options can't have init attribute. Usage examples:
(switch_option "foo", (init true)); (prefix_option
"bar", (init "baz")).
◊ extern - this option is defined in some other plugin, see below.
External options
Sometimes, when linking several plugins together, one plugin needs to access options defined in some other
plugin. Because of the way options are implemented, such options must be marked as extern. This is what
the extern option property is for. Example:
...
(switch_option "E", (extern))
...
If an external option has additional attributes besides 'extern', they are ignored. See also the section on plugin
priorities.
Conditional evaluation
The 'case' construct is the main means by which programmability is achieved in LLVMC. It can be used to
calculate edge weights, program actions and modify the shell commands to be executed. The 'case' expression
is designed after the similarly-named construct in functional languages and takes the form (case
(test_1), statement_1, (test_2), statement_2, ... (test_N), statement_N).
The statements are evaluated only if the corresponding tests evaluate to true.
Examples:
(case
(switch_on "A"), "cmdline1",
(switch_on "B"), "cmdline2",
(default), "cmdline3")
Note the slight difference in 'case' expression handling in contexts of edge weights and command line
specification - in the second example the value of the "B" switch is never checked when switch "A" is
enabled, and the whole expression always evaluates to "cmdline1" in that case.
You should, however, try to avoid doing that because it hurts readability. It is usually better to split tool
descriptions and/or use TableGen inheritance instead.
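A tool definition in the syntax used throughout this document can be sketched as follows (treat the property values as illustrative):

```tablegen
def llvm_gcc_cpp : Tool<[
(in_language "c++"),
(out_language "llvm-assembler"),
(output_suffix "bc"),
(cmd_line "llvm-g++ -c -x c++ $INFILE -o $OUTFILE -emit-llvm"),
(sink)
]>;
```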
This defines a new tool called llvm_gcc_cpp, which is an alias for llvm-g++. As you can see, a tool
definition is just a list of properties; most of them should be self-explanatory. The sink property means that
this tool should be passed all command-line options that aren't mentioned in the option list.
♦ in_language - input language name. Can be either a string or a list, in case the tool
supports multiple input languages.
♦ out_language - output language name. Multiple output languages are not allowed.
♦ output_suffix - output file suffix. Can also be changed dynamically, see documentation
on actions.
♦ cmd_line - the actual command used to run the tool. You can use $INFILE and
$OUTFILE variables, output redirection with >, hook invocations ($CALL), environment
variables (via $ENV) and the case construct.
♦ join - this tool is a "join node" in the graph, i.e. it gets a list of input files and joins them
together. Used for linkers.
♦ sink - all command-line options that are not handled by other tools are passed to this tool.
♦ actions - A single big case expression that specifies how this tool reacts on
command-line options (described in more detail below).
Actions
A tool often needs to react to command-line options, and this is precisely what the actions property is for.
The next example illustrates this feature:
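A sketch of a linker-like tool using the actions property (the option and command names are illustrative):

```tablegen
def llvm_gcc_linker : Tool<[
(in_language "object-code"),
(out_language "executable"),
(output_suffix "out"),
(cmd_line "llvm-gcc $INFILE -o $OUTFILE"),
(join),
(actions (case
    (switch_on "pthread"), (append_cmd "-lpthread"),
    (not_empty "L"), (forward "L"),
    (switch_on "dummy"),
        [(append_cmd "-dummy1"), (append_cmd "-dummy2")]))
]>;
```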
The actions tool property is implemented on top of the omnipresent case expression. It associates one or
more different actions with given conditions - in the example, the actions are forward, which forwards a
given option unchanged, and append_cmd, which appends a given string to the tool execution command.
Multiple actions can be associated with a single condition by using a list of actions (used in the example to
append some dummy options). The same case construct can also be used in the cmd_line property to
modify the tool command line.
The "join" property used in the example means that this tool behaves like a linker.
• Possible actions:
◊ forward - Forward the option unchanged. Example: (forward
"Wall").
◊ forward_as - Change the option's name, but forward the argument
unchanged. Example: (forward_as "O0",
"--disable-optimization").
◊ forward_value - Forward only option's value. Cannot be used with
switch options (since they don't have values), but works fine with lists.
Example: (forward_value "Wa,").
◊ forward_transformed_value - As above, but applies a hook to the
option's value before forwarding (see below). When
forward_transformed_value is applied to a list option, the hook
must have signature std::string hooks::HookName (const
std::vector<std::string>&). Example:
(forward_transformed_value "m", "ConvertToMAttr").
◊ output_suffix - Modify the output suffix of this tool. Example:
(output_suffix "i").
◊ stop_compilation - Stop compilation after this tool processes its input.
Used without arguments. Example: (stop_compilation).
Language map
If you are adding support for a new language to LLVMC, you'll need to modify the language map, which
defines mappings from file extensions to language names. It is used to choose the proper toolchain(s) for a
given input file set. Language map definition looks like this:
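A minimal sketch following the LangToSuffixes syntax shown in the earlier Simple example (the suffix lists here are illustrative):

```tablegen
def LanguageMap : LanguageMap<[
LangToSuffixes<"c++", ["cc", "cp", "cxx", "cpp"]>,
LangToSuffixes<"c", ["c"]>
]>;
```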
For example, without those definitions the following command wouldn't work:
$ llvmc hello.cpp
llvmc: Unknown suffix: cpp
The language map entries are needed only for the tools that are linked from the root node. Since a tool can't
have multiple output languages, for inner nodes of the graph the input and output languages should match.
This is enforced at compile-time.
Option preprocessor
It is sometimes useful to run error-checking code before processing the compilation graph. For example, if
optimization options "-O1" and "-O2" are implemented as switches, we might want to output a warning if the
user invokes the driver with both of these options enabled.
The OptionPreprocessor feature is reserved specially for these occasions. Example (adapted from the
built-in Base plugin):
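A sketch of such a preprocessor (an approximation of the Base plugin's behavior, not its exact code): keep only the strongest -O switch, and default to -O2 when none is given.

```tablegen
def Preprocess : OptionPreprocessor<
(case (not (any_switch_on ["O0", "O1", "O2"])),
           (set_option "O2"),
      (and (switch_on "O2"), (any_switch_on ["O0", "O1"])),
           (unset_option ["O0", "O1"]),
      (and (switch_on "O1"), (switch_on "O0")),
           (unset_option "O0"))
>;
```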
Here, OptionPreprocessor is used to unset all spurious -O options so that they are not forwarded to the
compiler. If no optimization options are specified, -O2 is enabled.
OptionPreprocessor is basically a single big case expression, which is evaluated only once right after
the plugin is loaded. The only allowed actions in OptionPreprocessor are error, warning, and two
special actions: unset_option and set_option. As their names suggest, they can be used to set or unset
a given option. To set an option with set_option, use the two-argument form: (set_option
"parameter", VALUE). Here, VALUE can be either a string, a string list, or a boolean constant.
For convenience, set_option and unset_option also work on lists. That is, instead of
[(unset_option "A"), (unset_option "B")] you can use (unset_option ["A",
"B"]). Obviously, (set_option ["A", "B"]) is valid only if both A and B are switches.
To change the command line string based on user-provided options use the case expression (documented
above):
(cmd_line
(case
(switch_on "E"),
"llvm-g++ -E -x c $INFILE -o $OUTFILE",
(default),
"llvm-g++ -c -x c $INFILE -o $OUTFILE -emit-llvm"))
Plugins are loaded in order of their (increasing) priority, starting with 0. Therefore, the plugin with the highest
priority value will be loaded last.
Debugging
When writing LLVMC plugins, it can be useful to get a visual view of the resulting compilation graph. This
can be achieved via the command line option --view-graph, which assumes that Graphviz and Ghostview are
installed. There is also a --write-graph option that creates a Graphviz source file
(compilation-graph.dot) in the current directory.
Another useful llvmc option is --check-graph. It checks the compilation graph for common errors like
mismatched output/input language names, multiple default edges and cycles. These checks can't be performed
at compile-time because the plugins can load code dynamically. When invoked with --check-graph,
llvmc doesn't perform any compilation tasks and returns the number of encountered errors as its status code.
Conditioning on the executable name
It is possible to make a plugin's behaviour depend on the name of the driver executable, which is exported as
the variable llvmc::ProgramName and can be consulted from hook code:
namespace llvmc {
  extern const char* ProgramName;
}

namespace hooks {

std::string MyHook() {
  // ...
  if (strcmp(ProgramName, "mydriver") == 0) {
    // ...
  }
  // ...
}

} // end namespace hooks
In general, you're encouraged not to make the behaviour dependent on the executable file name, and use
command-line switches instead. See for example how the Base plugin behaves when it needs to choose the
correct linker options (think g++ vs. gcc).
Mikhail Glushenkov
LLVM Compiler Infrastructure
1. Abstract
2. Overview
3. Bitstream Format
1. Magic Numbers
2. Primitives
3. Abbreviation IDs
4. Blocks
5. Data Records
6. Abbreviations
7. Standard Blocks
4. Bitcode Wrapper Format
5. LLVM IR Encoding
1. Basics
2. MODULE_BLOCK Contents
3. PARAMATTR_BLOCK Contents
4. TYPE_BLOCK Contents
5. CONSTANTS_BLOCK Contents
6. FUNCTION_BLOCK Contents
7. TYPE_SYMTAB_BLOCK Contents
8. VALUE_SYMTAB_BLOCK Contents
9. METADATA_BLOCK Contents
10. METADATA_ATTACHMENT Contents
Abstract
This document describes the LLVM bitstream file format and the encoding of the LLVM IR into it.
Overview
What is commonly known as the LLVM bitcode file format (also, sometimes anachronistically known as
bytecode) is actually two things: a bitstream container format and an encoding of LLVM IR into the container
format.
The bitstream format is an abstract encoding of structured data, very similar to XML in some ways. Like
XML, bitstream files contain tags, and nested structures, and you can parse the file without having to
understand the tags. Unlike XML, the bitstream format is a binary encoding, and unlike XML it provides a
mechanism for the file to self-describe "abbreviations", which are effectively size optimizations for the
content.
LLVM IR files may be optionally embedded into a wrapper structure that makes it easy to embed extra data
along with LLVM IR files.
This document first describes the LLVM bitstream format, describes the wrapper format, then describes the
record structure used by LLVM IR files.
Bitstream Format
The bitstream format is literally a stream of bits, with a very simple structure. This structure is built from a
handful of concepts (magic numbers, encoding primitives, abbreviation IDs, blocks, and data records), each
described in the sections below.
Note that the llvm-bcanalyzer tool can be used to dump and inspect arbitrary bitstreams, which is very useful
for understanding the encoding.
Magic Numbers
The first two bytes of a bitcode file are 'BC' (0x42, 0x43). The second two bytes are an application-specific
magic number. Generic bitcode tools can look at only the first two bytes to verify the file is bitcode, while
application-specific programs will want to look at all four.
Primitives
A bitstream literally consists of a stream of bits, which are read in order starting with the least significant bit
of each byte. The stream is made up of a number of primitive values that encode a stream of unsigned integer
values. These integers are encoded in two ways: either as Fixed Width Integers or as Variable Width Integers
(VBRs). A fixed-width integer is emitted directly, using a field width specified elsewhere in the stream. A
variable-width integer of width N (vbrN) is emitted in chunks of N bits: the low N-1 bits of each chunk carry
part of the value (least significant part first), and the high bit of each chunk indicates whether another chunk
follows.
For example, the value 27 (0x1B) is encoded as 1011 0011 when emitted as a vbr4 value. The first set of four
bits indicates the value 3 (011) with a continuation piece (indicated by a high bit of 1). The next word
indicates a value of 24 (011 << 3) with no continuation. The sum (3+24) yields the value 27.
6-bit characters
6-bit characters encode common characters into a fixed 6-bit field. They represent the following characters
with the following 6-bit values:
'a' .. 'z' — 0 .. 25
'A' .. 'Z' — 26 .. 51
'0' .. '9' — 52 .. 61
'.' — 62
'_' — 63
This encoding is only suitable for encoding characters and strings that consist only of the above characters. It
is completely incapable of encoding characters not in the set.
Word Alignment
Occasionally, it is useful to emit zero bits until the bitstream is a multiple of 32 bits. This ensures that the bit
position in the stream can be represented as a multiple of 32-bit words.
Abbreviation IDs
A bitstream is a sequential series of Blocks and Data Records. Both of these start with an abbreviation ID
encoded as a fixed-bitwidth field. The width is specified by the current block, as described below. The value
of the abbreviation ID specifies either a builtin ID (which has a special meaning, defined below) or one of the
abbreviation IDs defined for the current block by the stream itself. The builtin abbrev IDs are:
• 0 — END_BLOCK — This abbrev ID marks the end of the current block.
• 1 — ENTER_SUBBLOCK — This abbrev ID marks the beginning of a new block.
• 2 — DEFINE_ABBREV — This defines a new abbreviation.
• 3 — UNABBREV_RECORD — This ID specifies the definition of an unabbreviated record.
Abbreviation IDs 4 and above are defined by the stream itself, and specify an abbreviated record encoding.
Blocks
Blocks in a bitstream denote nested regions of the stream, and are identified by a content-specific id number
(for example, LLVM IR uses an ID of 12 to represent function bodies). Block IDs 0-7 are reserved for
standard blocks whose meaning is defined by Bitcode; block IDs 8 and greater are application specific. Nested
blocks capture the hierarchical structure of the data encoded within them, and various properties are associated
with blocks as the file is parsed. Block definitions allow the reader to skip blocks in constant time if it only
wants a summary of them, or if it wants to efficiently skip data it does not understand. The LLVM IR
reader uses this mechanism to skip function bodies, lazily reading them on demand.
When reading and encoding the stream, several properties are maintained for the block. In particular, each
block maintains:
1. A current abbrev id width. This value starts at 2 at the beginning of the stream, and is set every time a
block record is entered. The block entry specifies the abbrev id width for the body of the block.
2. A set of abbreviations. Abbreviations may be defined within a block, in which case they are only
defined in that block (neither subblocks nor enclosing blocks see the abbreviation). Abbreviations can
also be defined inside a BLOCKINFO block, in which case they are defined in all blocks that match
the ID that the BLOCKINFO block is describing.
As sub-blocks are entered, these properties are saved and the new sub-block has its own set of abbreviations,
and its own abbrev id width. When a sub-block is popped, the saved values are restored.
ENTER_SUBBLOCK Encoding
[ENTER_SUBBLOCK, blockid (vbr8), newabbrevlen (vbr4), <align 32 bits>, blocklen (32 bits)]
The ENTER_SUBBLOCK abbreviation ID specifies the start of a new block record. The blockid value is
encoded as an 8-bit VBR identifier, and indicates the type of block being entered, which can be a standard
block or an application-specific block. The newabbrevlen value is a 4-bit VBR, which specifies the abbrev
id width for the sub-block. The blocklen value is a 32-bit aligned value that specifies the size of the
subblock in 32-bit words. This value allows the reader to skip over the entire block in one jump.
END_BLOCK Encoding
[END_BLOCK, <align32bits>]
The END_BLOCK abbreviation ID specifies the end of the current block record. Its end is aligned to 32-bits to
ensure that the size of the block is an even multiple of 32-bits.
Data Records
Data records consist of a record code and a number of (up to) 64-bit integer values. The interpretation of the
code and values is application specific and may vary between different block types. Records can be encoded
either using an unabbrev record, or with an abbreviation. In the LLVM IR format, for example, there is a
record which encodes the target triple of a module. The code is MODULE_CODE_TRIPLE, and the values of
the record are the ASCII codes for the characters in the string.
UNABBREV_RECORD Encoding
[UNABBREV_RECORD, code (vbr6), numops (vbr6), op0 (vbr6), op1 (vbr6), ...]
An UNABBREV_RECORD provides a default fallback encoding, which is both completely general and
extremely inefficient. It can describe an arbitrary record by emitting the code and operands as VBRs.
For example, emitting an LLVM IR target triple as an unabbreviated record requires emitting the
UNABBREV_RECORD abbrevid, a vbr6 for the MODULE_CODE_TRIPLE code, a vbr6 for the length of the
string, which is equal to the number of operands, and a vbr6 for each character. Because there are no letters
with values less than 32, each letter would need to be emitted as at least a two-part VBR, which means that
each letter would require at least 12 bits. This is not an efficient encoding, but it is fully general.
An abbreviated record is an abbreviation ID followed by a set of fields that are encoded according to the
abbreviation definition. This allows records to be encoded significantly more densely than records encoded
with the UNABBREV_RECORD type, and allows the abbreviation types to be specified in the stream itself,
which allows the files to be completely self describing. The actual encoding of abbreviations is defined below.
The record code, which is the first field of an abbreviated record, may be encoded in the abbreviation
definition (as a literal operand) or supplied in the abbreviated record (as a Fixed or VBR operand value).
Abbreviations
Abbreviations are an important form of compression for bitstreams. The idea is to specify a dense encoding
for a class of records once, then use that encoding to emit many records. It takes space to emit the encoding
into the file, but the space is recouped (hopefully plus some) when the records that use it are emitted.
Abbreviations can be determined dynamically per client, per file. Because the abbreviations are stored in the
bitstream itself, different streams of the same format can contain different sets of abbreviations according to
the needs of the specific stream. As a concrete example, LLVM IR files usually emit an abbreviation for
binary operators. If a specific LLVM module contained no or few binary operators, the abbreviation does not
need to be emitted.
DEFINE_ABBREV Encoding
[DEFINE_ABBREV, numabbrevops (vbr5), abbrevop0, abbrevop1, ...]
A DEFINE_ABBREV record adds an abbreviation to the list of currently defined abbreviations in the scope of
this block. This definition only exists inside this immediate block — it is not visible in subblocks or enclosing
blocks. Abbreviations are implicitly assigned IDs sequentially starting from 4 (the first application-defined
abbreviation ID). Any abbreviations defined in a BLOCKINFO record for the particular block type receive IDs
first, in order, followed by any abbreviations defined within the block itself. Abbreviated data records
reference this ID to indicate what abbreviation they are invoking.
An abbreviation definition consists of the DEFINE_ABBREV abbrevid followed by a VBR that specifies the
number of abbrev operands, then the abbrev operands themselves. Abbreviation operands come in three
forms. They all start with a single bit that indicates whether the abbrev operand is a literal operand (when the
bit is 1) or an encoding operand (when the bit is 0).
1. Literal operands — [1 (1 bit), litvalue (vbr8)] — Literal operands specify that the value in the result is
always a single specific value. This specific value is emitted as a vbr8 after the bit indicating that it is
a literal operand.
2. Encoding info without data — [0 (1 bit), encoding (3 bits)] — Operand encodings that do not have extra
data are just emitted as their code.
3. Encoding info with data — [0 (1 bit), encoding (3 bits), value (vbr5)] — Operand encodings that do have
extra data are emitted as their code, followed by the extra data.
• Fixed (code 1): The field should be emitted as a fixed-width value, whose width is specified by the
operand's extra data.
• VBR (code 2): The field should be emitted as a variable-width value, whose width is specified by the
operand's extra data.
• Array (code 3): This field is an array of values. The array operand has no extra data, but expects
another operand to follow it, indicating the element type of the array. When reading an array in an
abbreviated record, the first integer is a vbr6 that indicates the array length, followed by the encoded
elements of the array. An array may only occur as the last operand of an abbreviation (except for the
one final operand that gives the array's type).
• Char6 (code 4): This field should be emitted as a char6-encoded value. This operand type takes no
extra data. Char6 encoding is normally used as an array element type.
• Blob (code 5): This field is emitted as a vbr6, followed by padding to a 32-bit boundary (for
alignment) and an array of 8-bit objects. The array of bytes is further followed by tail padding to
ensure that its total length is a multiple of 4 bytes. This makes it very efficient for the reader to decode
the data without having to make a copy of it: it can use a pointer to the data in the mapped in file and
poke directly at it. A blob may only occur as the last operand of an abbreviation.
For example, target triples in LLVM modules are encoded as a record of the form [TRIPLE, 'a', 'b',
'c', 'd']. Consider if the bitstream emitted the following abbrev entry:
[0, Fixed, 4]
[0, Array]
[0, Char6]
When emitting a record with this abbreviation, the abbreviation ID is emitted first, followed by the fields
encoded per the operand list above: the record code as a 4-bit fixed value, a vbr6 array length, and one char6
value per character of the triple.
Standard Blocks
In addition to the basic block structure and record encodings, the bitstream also defines specific built-in block
types. These block types specify how the stream is to be decoded or other metadata. In the future, new
standard blocks may be added. Block IDs 0-7 are reserved for standard blocks.
#0 - BLOCKINFO Block
The BLOCKINFO block allows the description of metadata for other blocks. The currently specified records
are:
[SETBID (#1), blockid]
[BLOCKNAME (#2), ...name...]
[SETRECORDNAME (#3), RecordID, ...name...]
The SETBID record (code 1) indicates which block ID is being described. SETBID records can occur
multiple times throughout the block to change which block ID is being described. There must be a SETBID
record prior to any other records.
Standard DEFINE_ABBREV records can occur inside BLOCKINFO blocks, but unlike their occurrence in
normal blocks, the abbreviation is defined for blocks matching the block ID we are describing, not the
BLOCKINFO block itself. The abbreviations defined in BLOCKINFO blocks receive abbreviation IDs as
described in DEFINE_ABBREV.
The BLOCKNAME record (code 2) can optionally occur in this block. The elements of the record are the bytes
of the string name of the block. llvm-bcanalyzer can use this to dump out bitcode files symbolically.
The SETRECORDNAME record (code 3) can also optionally occur in this block. The first operand value is a
record ID number, and the rest of the elements of the record are the bytes for the string name of the record.
llvm-bcanalyzer can use this to dump out bitcode files symbolically.
Note that although the data in BLOCKINFO blocks is described as "metadata," the abbreviations they contain
are essential for parsing records from the corresponding blocks. It is not safe to skip them.
Bitcode Wrapper Format
Bitcode files for LLVM IR may optionally be wrapped in a simple wrapper structure. This structure contains a
simple header that indicates the offset and size of the embedded BC file, which allows the resulting file to be
embedded in another file that contains other data. The wrapper format is:
[Magic (32 bits), Version (32 bits), Offset (32 bits), Size (32 bits), CPUType (32 bits)]
Each of the fields is a 32-bit field stored in little endian form (as with the rest of the bitcode file fields). The
Magic number is always 0x0B17C0DE and the version is currently always 0. The Offset field is the offset in
bytes to the start of the bitcode stream in the file, and the Size field is the size in bytes of the stream.
CPUType is a target-specific value that can be used to encode the CPU of the target.
LLVM IR Encoding
LLVM IR is encoded into a bitstream by defining blocks and records. It uses blocks for things like constant
pools, functions, symbol tables, etc. It uses records for things like instructions, global variable descriptors,
type descriptions, etc. This document does not describe the set of abbreviations that the writer uses, as these
are fully self-described in the file, and the reader is not allowed to build in any knowledge of this.
Basics
LLVM IR Magic Number
The magic number for LLVM IR files is:
[0x0 (4 bits), 0xC (4 bits), 0xE (4 bits), 0xD (4 bits)]
When combined with the bitcode magic number and viewed as bytes, this is "BC 0xC0DE".
Signed VBRs
Variable Width Integer encoding is an efficient way to encode arbitrary sized unsigned values, but is
extremely inefficient for encoding signed values, as signed values are otherwise treated as maximally large
unsigned values. As a result, signed VBR values of a specific width are emitted as follows:
• Positive values are emitted as VBRs of the specified width, but with their value shifted left by one.
• Negative values are emitted as VBRs of the specified width, but the negated value is shifted left by
one, and the low bit is set.
With this encoding, small positive and small negative values can both be emitted efficiently. Signed VBR
encoding is used in CST_CODE_INTEGER and CST_CODE_WIDE_INTEGER records within
CONSTANTS_BLOCK blocks.
LLVM IR Blocks
LLVM IR is defined with the following blocks:
• 8 — MODULE_BLOCK — This is the top-level block that contains the entire module, and describes a
variety of per-module information.
• 9 — PARAMATTR_BLOCK — This enumerates the parameter attributes.
• 10 — TYPE_BLOCK — This describes all of the types in the module.
• 11 — CONSTANTS_BLOCK — This describes constants for a module or function.
• 12 — FUNCTION_BLOCK — This describes a function body.
• 13 — TYPE_SYMTAB_BLOCK — This describes the type symbol table.
• 14 — VALUE_SYMTAB_BLOCK — This describes a value symbol table.
• 15 — METADATA_BLOCK — This describes metadata items.
• 16 — METADATA_ATTACHMENT — This contains records associating metadata with function
instruction values.
MODULE_BLOCK Contents
The MODULE_BLOCK block (id 8) is the top-level block for LLVM bitcode files, and each bitcode file must
contain exactly one. In addition to records (described below) containing information about the module, a
MODULE_BLOCK block may contain the following sub-blocks:
• BLOCKINFO
• PARAMATTR_BLOCK
• TYPE_BLOCK
• TYPE_SYMTAB_BLOCK
• VALUE_SYMTAB_BLOCK
• CONSTANTS_BLOCK
• FUNCTION_BLOCK
• METADATA_BLOCK
MODULE_CODE_VERSION Record
[VERSION, version#]
The VERSION record (code 1) contains a single value indicating the format version. Only version 0 is
supported at this time.
MODULE_CODE_TRIPLE Record
[TRIPLE, ...string...]
The TRIPLE record (code 2) contains a variable number of values representing the bytes of the target
triple specification string.
MODULE_CODE_DATALAYOUT Record
[DATALAYOUT, ...string...]
The DATALAYOUT record (code 3) contains a variable number of values representing the bytes of the
target datalayout specification string.
MODULE_CODE_ASM Record
[ASM, ...string...]
The ASM record (code 4) contains a variable number of values representing the bytes of module asm
strings, with individual assembly blocks separated by newline (ASCII 10) characters.
MODULE_CODE_SECTIONNAME Record
[SECTIONNAME, ...string...]
The SECTIONNAME record (code 5) contains a variable number of values representing the bytes of a single
section name string. There should be one SECTIONNAME record for each section name referenced (e.g., in
global variable or function section attributes) within the module. These records can be referenced by the
1-based index in the section fields of GLOBALVAR or FUNCTION records.
MODULE_CODE_DEPLIB Record
[DEPLIB, ...string...]
The DEPLIB record (code 6) contains a variable number of values representing the bytes of a single
dependent library name string, one of the libraries mentioned in a deplibs declaration. There should be one
DEPLIB record for each library name referenced.
MODULE_CODE_GLOBALVAR Record
[GLOBALVAR, pointer type, isconst, initid, linkage, alignment, section,
visibility, threadlocal]
The GLOBALVAR record (code 7) marks the declaration or definition of a global variable. The operand fields
give the global's pointer type, whether it is constant, its initializer (as a value index plus 1, or 0 if the variable
has no initializer), and encodings of its linkage, alignment, section, visibility, and thread-local mode.
MODULE_CODE_FUNCTION Record
[FUNCTION, type, callingconv, isproto, linkage, paramattr, alignment,
section, visibility, gc]
The FUNCTION record (code 8) marks the declaration or definition of a function. The operand fields are:
• type: The type index of the function type describing this function
• callingconv: The calling convention number:
♦ ccc: code 0
♦ fastcc: code 8
♦ coldcc: code 9
♦ x86_stdcallcc: code 64
♦ x86_fastcallcc: code 65
♦ arm_apcscc: code 66
♦ arm_aapcscc: code 67
♦ arm_aapcs_vfpcc: code 68
• isproto: Non-zero if this entry represents a declaration rather than a definition
• linkage: An encoding of the linkage type for this function
• paramattr: If nonzero, the 1-based parameter attribute index into the table of
PARAMATTR_CODE_ENTRY entries.
• alignment: The logarithm base 2 of the function's requested alignment, plus 1
• section: If non-zero, the 1-based section index in the table of MODULE_CODE_SECTIONNAME
entries.
• visibility: An encoding of the visibility of this function
• gc: If present and nonzero, the 1-based garbage collector index in the table of
MODULE_CODE_GCNAME entries.
MODULE_CODE_ALIAS Record
[ALIAS, alias type, aliasee val#, linkage, visibility]
The ALIAS record (code 9) marks the definition of an alias. The operand fields give the alias's type, the
value index of the aliasee, and encodings of the alias's linkage and visibility.
MODULE_CODE_PURGEVALS Record
[PURGEVALS, numvals]
The PURGEVALS record (code 10) resets the module-level value list to the size given by the single operand
value. Module-level value list items are added by GLOBALVAR, FUNCTION, and ALIAS records. After a
PURGEVALS record is seen, new value indices will start from the given numvals value.
MODULE_CODE_GCNAME Record
[GCNAME, ...string...]
The GCNAME record (code 11) contains a variable number of values representing the bytes of a single garbage
collector name string. There should be one GCNAME record for each garbage collector name referenced in
function gc attributes within the module. These records can be referenced by 1-based index in the gc fields of
FUNCTION records.
PARAMATTR_BLOCK Contents
The PARAMATTR_BLOCK block (id 9) ...
PARAMATTR_CODE_ENTRY Record
[ENTRY, paramidx0, attr0, paramidx1, attr1...]
TYPE_BLOCK Contents
The TYPE_BLOCK block (id 10) ...
CONSTANTS_BLOCK Contents
The CONSTANTS_BLOCK block (id 11) ...
FUNCTION_BLOCK Contents
The FUNCTION_BLOCK block (id 12) ...
In addition to the record types described below, a FUNCTION_BLOCK block may contain the following
sub-blocks:
• CONSTANTS_BLOCK
• VALUE_SYMTAB_BLOCK
• METADATA_ATTACHMENT
VALUE_SYMTAB_BLOCK Contents
The VALUE_SYMTAB_BLOCK block (id 14) ...
METADATA_BLOCK Contents
The METADATA_BLOCK block (id 15) ...
METADATA_ATTACHMENT Contents
The METADATA_ATTACHMENT block (id 16) ...
Chris Lattner
The LLVM Compiler Infrastructure
Last modified: $Date: 2010-01-20 11:53:51 -0600 (Wed, 20 Jan 2010) $
• Abstract
• Keeping LLVM Portable
1. Don't Include System Headers
2. Don't Expose System Headers
3. Allow Standard C Header Files
4. Allow Standard C++ Header Files
5. High-Level Interface
6. No Exposed Functions
7. No Exposed Data
8. No Duplicate Implementations
9. No Unused Functionality
10. No Virtual Methods
11. Minimize Soft Errors
12. No throw() Specifications
13. Code Organization
14. Consistent Semantics
15. Tracking Bugzilla Bug: 351
Abstract
This document provides some details on LLVM's System Library, located in the source at lib/System and
include/llvm/System. The library's purpose is to shield LLVM from the differences between operating
systems for the few services LLVM needs from the operating system. Much of LLVM is written using
portability features of standard C++. However, in a few areas, system dependent facilities are needed and the
System Library is the wrapper around those system calls.
By centralizing LLVM's use of operating system interfaces, we make it possible for the LLVM tool chain and
runtime libraries to be more easily ported to new platforms since (theoretically) only lib/System needs to
be ported. This library also unclutters the rest of LLVM from #ifdef use and special cases for specific
operating systems. Such uses are replaced with simple calls to the interfaces provided in
include/llvm/System.
Note that the System Library is not intended to be a complete operating system wrapper (such as the Adaptive
Communications Environment (ACE) or Apache Portable Runtime (APR)), but only provides the
functionality necessary to support LLVM.
The System Library was written by Reid Spencer who formulated the design based on similar work
originating from the eXtensible Programming System (XPS). Several people helped with the effort;
especially, Jeff Cohen and Henrik Bach on the Win32 port.
High-Level Interface
For example, consider what is needed to execute a program, wait for it to complete, and return its result code.
On Unix, this involves the following operating system calls: getenv, fork, execve, and wait. The
correct thing for lib/System to provide is a function, say ExecuteProgramAndWait, that implements the
functionality completely. What we don't want is wrappers for the individual operating system calls involved.
There must not be a one-to-one relationship between operating system calls and the System library's interface.
Any such interface function will be suspicious.
No Unused Functionality
There must be no functionality specified in the interface of lib/System that isn't actually used by LLVM.
We're not writing a general purpose operating system wrapper here, just enough to satisfy LLVM's needs.
And, LLVM doesn't need much. This design goal aims to keep the lib/System interface small and
understandable which should foster its actual use and adoption.
No Duplicate Implementations
The implementation of a function for a given platform must be written exactly once. This implies that it must
be possible to apply a function's implementation to multiple operating systems if those operating systems can
share the same implementation. This rule applies to the set of operating systems supported for a given class of
operating system (e.g. Unix, Win32).
No Exposed Functions
Any functions defined by system libraries (i.e. not defined by lib/System) must not be exposed through the
lib/System interface, even if the header file for that function is not exposed. This prevents inadvertent use of
system specific functionality.
For example, the stat system call is notorious for having variations in the data it provides. lib/System
must not declare stat nor allow it to be declared. Instead it should provide its own interface to discovering
information about files and directories. Those interfaces may be implemented in terms of stat but that is
strictly an implementation detail. The interface provided by the System Library must be implemented on all
platforms (even those without stat).
No Exposed Data
Any data defined by system libraries (i.e. not defined by lib/System) must not be exposed through the
lib/System interface, even if the header file declaring that data is not exposed. As with functions, this prevents
inadvertent use of data that might not exist on all platforms.
Minimize Soft Errors
lib/System must always attempt to minimize soft errors. This is a design requirement because the
minimization of soft errors can affect the granularity and the nature of the interface. In general, if you find that
you're wanting to throw soft errors, you must review the granularity of the interface because it is likely you're
trying to implement something that is too low level. The rule of thumb is to provide interface functions that
can't fail, except when faced with hard errors.
For a trivial example, suppose we wanted to add an "OpenFileForWriting" function. For many operating
systems, if the file doesn't exist, attempting to open the file will produce an error. However, lib/System should
not simply throw that error if it occurs because it's a soft error. The problem is that the interface function,
OpenFileForWriting, is too low level. It should be OpenOrCreateFileForWriting. In the case of the soft
"doesn't exist" error, this function would simply create the file and then open it for writing.
This design principle needs to be maintained in lib/System because it avoids the propagation of soft error
handling throughout the rest of LLVM. Hard errors will generally just cause a termination for an LLVM tool
so don't be bashful about throwing them.
The rules of thumb, then: design interfaces coarse enough that soft errors cannot arise, and don't hesitate to
throw hard errors.
No throw Specifications
None of the lib/System interface functions may be declared with C++ throw() specifications on them. This
requirement makes sure that the compiler does not insert additional exception handling code into the interface
functions. This is a performance consideration: lib/System functions are at the bottom of many call chains and
as such can be frequently called. We need them to be as efficient as possible. However, no routines in the
system library should actually throw exceptions.
Code Organization
Implementations of the System Library interface are separated by their general class of operating system.
Currently only Unix and Win32 classes are defined but more could be added for other operating system
classifications. To distinguish which implementation to compile, the code in lib/System uses the
LLVM_ON_UNIX and LLVM_ON_WIN32 #defines provided via configure through the
llvm/Config/config.h file. Each source file in lib/System, after implementing the generic (operating system
independent) functionality needs to include the correct implementation using a set of #if
defined(LLVM_ON_XYZ) directives. For example, if we had lib/System/File.cpp, we'd expect to see in
that file:
#if defined(LLVM_ON_UNIX)
#include "Unix/File.cpp"
#endif
#if defined(LLVM_ON_WIN32)
#include "Win32/File.cpp"
#endif
The implementation in lib/System/Unix/File.cpp should handle all Unix variants. The implementation in
lib/System/Win32/File.cpp should handle all Win32 variants. What this does is quickly differentiate the basic
class of operating system that will provide the implementation. The specific details for a given platform must
still be determined through the use of #ifdef.
Consistent Semantics
The implementation of a lib/System interface can vary drastically between platforms. That's okay as long as
the end result of the interface function is the same. For example, a function to create a directory is pretty
straightforward on all operating systems. System V IPC, on the other hand, isn't even supported on all
platforms. Instead of "supporting" System V IPC, lib/System should provide an interface to the basic concept
of inter-process communications. The implementations might use System V IPC if that was available or
named pipes, or whatever gets the job done effectively for a given operating system. In all cases, the interface
and the implementation must be semantically consistent.
Bug 351
See bug 351 for further details on the progress of this work
Reid Spencer
LLVM Compiler Infrastructure
Last modified: $Date: 2009-07-17 16:11:24 -0500 (Fri, 17 Jul 2009) $
• Description
• Design Philosophy
♦ Example of link time optimization
♦ Alternative Approaches
• Multi-phase communication between LLVM and linker
♦ Phase 1 : Read LLVM Bytecode Files
♦ Phase 2 : Symbol Resolution
♦ Phase 3 : Optimize Bitcode Files
♦ Phase 4 : Symbol Resolution after optimization
• libLTO
♦ lto_module_t
♦ lto_code_gen_t
Description
LLVM features powerful intermodular optimizations which can be used at link time. Link Time Optimization
(LTO) is another name for intermodular optimization when performed during the link stage. This document
describes the interface and design between the LTO optimizer and the linker.
Design Philosophy
The LLVM Link Time Optimizer provides complete transparency, while doing intermodular optimization, in
the compiler tool chain. Its main goal is to let the developer take advantage of intermodular optimizations
without making any significant changes to the developer's makefiles or build system. This is achieved through
tight integration with the linker. In this model, the linker treats LLVM bitcode files like native object files
and allows mixing and matching among them. The linker uses libLTO, a shared object, to handle LLVM
bitcode files. This tight integration between the linker and LLVM optimizer helps to do optimizations that are
not possible in other models. The linker input allows the optimizer to avoid relying on conservative escape
analysis.
--- a.h ---
extern int foo1(void);
extern void foo2(void);
extern void foo4(void);

--- a.c ---
#include "a.h"

static signed int i = 0;

void foo2(void) {
  i = -1;
}

static int foo3() {
  foo4();
  return 10;
}

int foo1(void) {
  int data = 0;

  if (i < 0)
    data = foo3();

  data = data + 42;
  return data;
}

--- main.c ---
#include <stdio.h>
#include "a.h"

void foo4(void) {
  printf ("Hi\n");
}

int main() {
  return foo1();
}
In this example, the linker recognizes that foo2() is an externally visible symbol defined in an LLVM bitcode
file. The linker completes its usual symbol resolution pass and finds that foo2() is not used anywhere. This
information is used by the LLVM optimizer, which removes foo2(). As soon as foo2() is removed, the
optimizer recognizes that the condition i < 0 is always false, which means foo3() is never used; hence, the
optimizer removes foo3() as well. This, in turn, enables the linker to remove foo4(). This example
illustrates the advantage of tight integration with the linker: here, the optimizer cannot remove foo3()
without the linker's input.
Alternative Approaches
The lto* functions are all implemented in a shared object libLTO. This allows the LLVM LTO code to be
updated independently of the linker tool. On platforms that support it, the shared object is lazily loaded.
After this phase, the linker continues linking as if it never saw LLVM bitcode files.
libLTO
libLTO is a shared object that is part of the LLVM tools, and is intended for use by a linker. libLTO
provides an abstract C interface to use the LLVM interprocedural optimizer without exposing details of
LLVM's internals. The intention is to keep the interface as stable as possible even when the LLVM optimizer
continues to evolve. It should even be possible for a completely different compilation technology to provide a
different libLTO that works with their object files and the standard linker tool.
lto_module_t
A non-native object file is handled via an lto_module_t. The linker can check whether a file (on disk or in a
memory buffer) is one that libLTO can process using one of:
lto_module_is_object_file(const char*)
lto_module_is_object_file_for_target(const char*, const char*)
lto_module_is_object_file_in_memory(const void*, size_t)
lto_module_is_object_file_in_memory_for_target(const void*, size_t, const char*)
If the object file can be processed by libLTO, the linker creates an lto_module_t by using one of:
lto_module_create(const char*)
lto_module_create_from_memory(const void*, size_t)
and releases the handle when done with:
lto_module_dispose(lto_module_t)
The linker can introspect the non-native object file by getting the number of symbols and getting the name and
attributes of each symbol via:
lto_module_get_num_symbols(lto_module_t)
lto_module_get_symbol_name(lto_module_t, unsigned int)
lto_module_get_symbol_attribute(lto_module_t, unsigned int)
lto_code_gen_t
Once the linker has loaded each non-native object file into an lto_module_t, it can request libLTO to
process them all and generate a native object file. This is done in a couple of steps. First, a code generator is
created with:
lto_codegen_create()
Then, each non-native object file is added to the code generator with:
lto_codegen_add_module(lto_code_gen_t, lto_module_t)
The linker then has the option of setting some codegen options. Whether or not to generate DWARF debug
info is set with:
lto_codegen_set_debug_model(lto_code_gen_t)
The kind of position independence required is set with:
lto_codegen_set_pic_model(lto_code_gen_t)
And each symbol that is referenced by a native object file, or that must otherwise not be optimized away, is
preserved with:
lto_codegen_add_must_preserve_symbol(lto_code_gen_t, const char*)
After all these settings are done, the linker requests that a native object file be created from the modules with
the settings using:
lto_codegen_compile(lto_code_gen_t, size_t*)
which returns a pointer to a buffer containing the generated native object file. The linker then parses that and
links it with the rest of the native object files.
1. Introduction
2. How to build it
3. Usage
♦ Example of link time optimization
♦ Quickstart for using LTO with autotooled projects
4. Licensing
The LLVM gold plugin implements the gold plugin interface on top of libLTO. The same plugin can also be
used by other tools such as ar and nm.
How to build it
You need to build gold with plugin support and build the LLVMgold plugin.
mkdir binutils
cd binutils
cvs -z 9 -d :pserver:anoncvs@sourceware.org:/cvs/src login
{enter "anoncvs" as the password}
cvs -z 9 -d :pserver:anoncvs@sourceware.org:/cvs/src co src
mkdir build
cd build
../src/configure --enable-gold --enable-plugins
make all-gold
That should leave you with binutils/build/gold/ld-new which supports the -plugin option.
• Build the LLVMgold plugin: Configure LLVM with
--with-binutils-include=/path/to/binutils/src/include and run make.
Usage
The linker takes a -plugin option that points to the path of the plugin .so file. To find out what link
command gcc would run in a given situation, run gcc -v [...] and look for the line where it runs
collect2. Replace that with ld-new -plugin /path/to/LLVMgold.so to test it out. Once you're
ready to switch to using gold, back up your existing /usr/bin/ld and then replace it with ld-new.
You can produce bitcode files from llvm-gcc using -emit-llvm or -flto, or the -O4 flag which is
synonymous with -O3 -flto.
llvm-gcc has a -use-gold-plugin option which looks for the gold plugin in the same directories as it
looks for cc1 and passes the -plugin option to ld. It will not look for an alternate linker, which is why you
need gold to be the installed system linker in your path.
--- a.c ---
#include <stdio.h>
extern void foo1(void);
extern void foo4(void);

void foo2(void) {
  printf("Foo2\n");
}

void foo3(void) {
  foo4();
}

int main(void) {
  foo1();
}

--- b.c ---
#include <stdio.h>
extern void foo2(void);

void foo1(void) {
  foo2();
}

void foo4(void) {
  printf("Foo4");
}

--- command lines ---
$ llvm-gcc -flto a.c -c -o a.o              # <-- a.o is LLVM bitcode file
$ llvm-gcc b.c -c -o b.o                    # <-- b.o is native object file
$ llvm-gcc -use-gold-plugin a.o b.o -o main # <-- link with LLVMgold plugin
Gold informs the plugin that foo3 is never referenced outside the IR, leading LLVM to delete that function.
However, unlike in the libLTO example, gold does not currently eliminate foo4.
The environment variable settings may work for non-autotooled projects too, but you may need to set the LD
environment variable as well.
Licensing
Gold is licensed under the GPLv3. LLVMgold uses the interface file plugin-api.h from gold which
means that the resulting LLVMgold.so binary is also GPLv3. This can still be used to link non-GPLv3
programs just as much as gold could without the plugin.
Nick Lewycky
The LLVM Compiler Infrastructure
Last modified: $Date: 2009-01-01 23:10:51 -0800 (Thu, 01 Jan 2009) $
1. Introduction
2. Quickstart
3. Example with clang and lli
Depending on the architecture, this can impact the debugging experience in different ways. For example, on
most 32-bit x86 architectures, you can simply compile with -fno-omit-frame-pointer for GCC and
-fdisable-fp-elim for LLVM. When GDB creates a backtrace, it can properly unwind the stack, but the stack
frames owned by JITed code have ??'s instead of the appropriate symbol name. However, on Linux x86_64 in
particular, GDB relies on the DWARF CFA debug information to unwind the stack, so even if you compile
your program to leave the frame pointer untouched, GDB will usually be unable to unwind the stack past any
JITed code stack frames.
In order to communicate the necessary debug info to GDB, an interface for registering JITed code with
debuggers has been designed and implemented for GDB and LLVM. At a high level, whenever LLVM
generates new machine code, it also generates an object file in memory containing the debug information.
LLVM then adds the object file to the global list of object files and calls a special function
(__jit_debug_register_code) marked noinline that GDB knows about. When GDB attaches to a process, it puts
a breakpoint in this function and loads all of the object files in the global list. When LLVM calls the
registration function, GDB catches the breakpoint signal, loads the new object file from LLVM's memory, and
resumes the execution. In this way, GDB can get the necessary debug information.
At the time of this writing, LLVM only supports architectures that use ELF object files and it only generates
symbols and DWARF CFA information. However, it would be easy to add more information to the object
file, so we don't need to coordinate with GDB to get better debug information.
Quickstart
In order to debug code JITed by LLVM, you need to install a recent version of GDB. The interface was added
on 2009-08-19, so you need a snapshot of GDB more recent than that. Either download a snapshot of GDB or
check out the sources from CVS and build them as described in the GDB documentation.
You can then use -jit-emit-debug in the LLVM command line arguments to enable the interface.
#include <stdio.h>

void foo() {
  printf("%d\n", *(int *)NULL);  /* crash here */
}

void bar() {
  foo();
}

void baz() {
  bar();
}

int main() {
  baz();
}
Here are the commands to run that application under GDB and print the stack trace at the crash:
# Compile foo.c to bitcode. You can use either clang or llvm-gcc with this
# command line. Both require -fexceptions, or the calls are all marked
# 'nounwind' which disables DWARF CFA info.
$ clang foo.c -fexceptions -emit-llvm -c -o foo.bc
# Run foo.bc under lli with -jit-emit-debug. If you built lli in debug mode,
# -jit-emit-debug defaults to true.
$ $GDB_INSTALL/gdb --args lli -jit-emit-debug foo.bc
...
As you can see, GDB can correctly unwind the stack and has the appropriate function names.
Reid Kleckner
The LLVM Compiler Infrastructure
Last modified: $Date: 2009-01-01 23:10:51 -0800 (Thu, 01 Jan 2009) $
• Introduction
• Quick start
• Basic CMake usage
• Options and variables
♦ Frequently-used CMake variables
♦ LLVM-specific variables
• Executing the test suite
• Cross compiling
• Embedding LLVM in your project
• Compiler/Platform specific topics
♦ Microsoft Visual C++
Introduction
CMake is a cross-platform build-generator tool. CMake does not build the project, it generates the files
needed by your build tool (GNU make, Visual Studio, etc) for building LLVM.
If you are really anxious about getting a functional LLVM build, go to the Quick start section. If you are a
CMake novice, start on Basic CMake usage and then go back to the Quick start once you know what you are
doing. The Options and variables section is a reference for customizing your build. If you already have
experience with CMake, this is the recommended starting point.
Quick start
Here we use the command-line, non-interactive CMake interface.
mkdir mybuilddir
cd mybuilddir
4. Execute this command on the shell replacing path/to/llvm/source/root with the path to the root of your
LLVM source tree:
cmake path/to/llvm/source/root
CMake will detect your development environment, perform a series of tests, and generate the files
required for building LLVM. CMake will use default values for all build parameters. See the Options
and variables section for fine-tuning your build.
This can fail if CMake can't detect your toolset, or if it thinks that the environment is not sane enough.
In that case, make sure that the toolset you intend to use is the only one reachable from the shell
and that the shell itself is the correct one for your development environment. CMake will refuse to
build MinGW makefiles if you have a POSIX shell reachable through the PATH environment
variable, for instance. You can force CMake to use a given build tool; see the Usage section.
CMake comes with extensive documentation in the form of html files and in the cmake executable itself.
Execute cmake --help for further help options.
CMake needs to know which build tool it shall generate files for (GNU make, Visual Studio, Xcode, etc). If
not specified on the command line, it tries to guess based on your environment. Once the build tool is
identified, CMake uses the corresponding generator for creating files for your build tool. You can explicitly
specify the generator with the command line option -G "Name of the generator". To see the available
generators on your platform, execute
cmake --help
This will list the generator names at the end of the help text. Generator names are case-sensitive.
For a given development platform there can be more than one adequate generator. If you use Visual Studio,
"NMake Makefiles" is a generator you can use for building with NMake. By default, CMake chooses the
most specific generator supported by your development environment. If you want an alternative generator,
you must tell this to CMake with the -G option.
TODO: explain variables and cache. Move explanation here from #options section.
You can set a variable after the initial CMake invocation to change its value. You can also undefine a
variable with cmake's -U option.
Variables are stored in the CMake cache. This is a file named CMakeCache.txt at the root of the build
directory. Do not hand-edit it.
Variables are listed below with their type appended after a colon. It is correct to write the variable and the
type on the CMake command line:
cmake -DVARIABLE:TYPE=value path/to/llvm/source
CMAKE_BUILD_TYPE:STRING
LLVM-specific variables
LLVM_TARGETS_TO_BUILD:STRING
Semicolon-separated list of targets to build, or all for building all targets. Case-sensitive. For Visual
C++ it defaults to X86; in other cases it defaults to all. Example:
-DLLVM_TARGETS_TO_BUILD="X86;PowerPC;Alpha".
LLVM_BUILD_TOOLS:BOOL
Build LLVM tools. Defaults to ON. Targets for building each tool are generated in any case. You can
build a tool separately by invoking its target. For example, you can build llvm-as with a
makefile-based system by executing make llvm-as at the root of your build directory.
LLVM_BUILD_EXAMPLES:BOOL
Build LLVM examples. Defaults to OFF. Targets for building each example are generated in any
case. See documentation for LLVM_BUILD_TOOLS above for more details.
LLVM_ENABLE_THREADS:BOOL
Build with threads support, if available. Defaults to ON.
LLVM_ENABLE_ASSERTIONS:BOOL
Enables code assertions. Defaults to OFF if and only if CMAKE_BUILD_TYPE is Release.
LLVM_ENABLE_PIC:BOOL
Add the -fPIC flag for the compiler command-line, if the compiler supports this flag. Some systems,
like Windows, do not need this flag. Defaults to ON.
LLVM_ENABLE_WARNINGS:BOOL
Enable all compiler warnings. Defaults to ON.
LLVM_ENABLE_PEDANTIC:BOOL
Enable pedantic mode. This disables compiler-specific extensions, if possible. Defaults to ON.
LLVM_ENABLE_WERROR:BOOL
Stop and fail build, if a compiler warning is triggered. Defaults to OFF.
LLVM_BUILD_32_BITS:BOOL
Build 32-bit executables and libraries on 64-bit systems. This option is available only on some
64-bit Unix systems. Defaults to OFF.
LLVM_TARGET_ARCH:STRING
LLVM target to use for native code generation. This is required for JIT generation. It defaults to
"host", meaning that it shall pick the architecture of the machine where LLVM is being built. If you
are cross-compiling, set it to the target architecture name.
LLVM_TABLEGEN:STRING
Full path to a native TableGen executable (usually named tblgen). This is intended for
cross-compiling: if the user sets this variable, no native TableGen will be created.
TODO
Cross compiling
See this wiki page for generic instructions on how to cross-compile with CMake. It goes into detailed
explanations and may seem daunting, but it is not. On the wiki page there are several examples including
toolchain files. Go directly to this section for a quick solution.
Also see the LLVM-specific variables section for variables used when cross-compiling.
Oscar Fuentes
LLVM Compiler Infrastructure
Last modified: $Date: 2008-12-31 03:59:36 +0100 (Wed, 31 Dec 2008) $
1. Hardware
1. Alpha
2. ARM
3. Itanium
4. MIPS
5. PowerPC
6. SPARC
7. X86
8. Other lists
2. Application Binary Interface (ABI)
1. Linux
2. OS X
3. Miscellaneous resources
Hardware
Alpha
• Alpha manuals
ARM
Itanium (ia64)
• Itanium documentation
MIPS
PowerPC
IBM - Official manuals and docs
SPARC
• SPARC resources
• SPARC standards
X86
AMD - Official manuals and docs
• IA-32 manuals
• Intel Itanium documentation
ABI
Linux
OS X
Miscellaneous resources
Misha Brukman
LLVM Compiler Infrastructure
Last modified: $Date: 2008-12-11 11:34:48 -0600 (Thu, 11 Dec 2008) $