authorCarlo Zancanaro <carlo@pc-4w14-0.cs.usyd.edu.au>2012-10-15 17:10:06 +1100
committerCarlo Zancanaro <carlo@pc-4w14-0.cs.usyd.edu.au>2012-10-15 17:10:06 +1100
commitbe1de4be954c80875ad4108e0a33e8e131b2f2c0 (patch)
tree1fbbecf276bf7c7bdcbb4dd446099d6d90eaa516 /clang/docs/InternalsManual.html
parentc4626a62754862d20b41e8a46a3574264ea80e6d (diff)
parentf1bd2e48c5324d3f7cda4090c87f8a5b6f463ce2 (diff)
Merge branch 'master' of ssh://bitbucket.org/czan/honours
Diffstat (limited to 'clang/docs/InternalsManual.html')
-rw-r--r--  clang/docs/InternalsManual.html  | 2011
1 files changed, 2011 insertions, 0 deletions
diff --git a/clang/docs/InternalsManual.html b/clang/docs/InternalsManual.html
new file mode 100644
index 0000000..bd6af8d
--- /dev/null
+++ b/clang/docs/InternalsManual.html
@@ -0,0 +1,2011 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
+ "http://www.w3.org/TR/html4/strict.dtd">
+<html>
+<head>
+<title>"Clang" CFE Internals Manual</title>
+<link type="text/css" rel="stylesheet" href="../menu.css">
+<link type="text/css" rel="stylesheet" href="../content.css">
+<style type="text/css">
+td {
+ vertical-align: top;
+}
+</style>
+</head>
+<body>
+
+<!--#include virtual="../menu.html.incl"-->
+
+<div id="content">
+
+<h1>"Clang" CFE Internals Manual</h1>
+
+<ul>
+<li><a href="#intro">Introduction</a></li>
+<li><a href="#libsupport">LLVM Support Library</a></li>
+<li><a href="#libbasic">The Clang 'Basic' Library</a>
+ <ul>
+ <li><a href="#Diagnostics">The Diagnostics Subsystem</a></li>
+ <li><a href="#SourceLocation">The SourceLocation and SourceManager
+ classes</a></li>
+ <li><a href="#SourceRange">SourceRange and CharSourceRange</a></li>
+ </ul>
+</li>
+<li><a href="#libdriver">The Driver Library</a>
+</li>
+<li><a href="#pch">Precompiled Headers</a>
+<li><a href="#libfrontend">The Frontend Library</a>
+</li>
+<li><a href="#liblex">The Lexer and Preprocessor Library</a>
+ <ul>
+ <li><a href="#Token">The Token class</a></li>
+ <li><a href="#Lexer">The Lexer class</a></li>
+ <li><a href="#AnnotationToken">Annotation Tokens</a></li>
+ <li><a href="#TokenLexer">The TokenLexer class</a></li>
+ <li><a href="#MultipleIncludeOpt">The MultipleIncludeOpt class</a></li>
+ </ul>
+</li>
+<li><a href="#libparse">The Parser Library</a>
+</li>
+<li><a href="#libast">The AST Library</a>
+ <ul>
+ <li><a href="#Type">The Type class and its subclasses</a></li>
+ <li><a href="#QualType">The QualType class</a></li>
+ <li><a href="#DeclarationName">Declaration names</a></li>
+ <li><a href="#DeclContext">Declaration contexts</a>
+ <ul>
+ <li><a href="#Redeclarations">Redeclarations and Overloads</a></li>
+ <li><a href="#LexicalAndSemanticContexts">Lexical and Semantic
+ Contexts</a></li>
+ <li><a href="#TransparentContexts">Transparent Declaration Contexts</a></li>
+ <li><a href="#MultiDeclContext">Multiply-Defined Declaration Contexts</a></li>
+ </ul>
+ </li>
+ <li><a href="#CFG">The CFG class</a></li>
+ <li><a href="#Constants">Constant Folding in the Clang AST</a></li>
+ </ul>
+</li>
+<li><a href="#Howtos">Howto guides</a>
+ <ul>
+ <li><a href="#AddingAttributes">How to add an attribute</a></li>
+ <li><a href="#AddingExprStmt">How to add a new expression or statement</a></li>
+ </ul>
+</li>
+</ul>
+
+
+<!-- ======================================================================= -->
+<h2 id="intro">Introduction</h2>
+<!-- ======================================================================= -->
+
+<p>This document describes some of the more important APIs and internal design
+decisions made in the Clang C front-end. The purpose of this document is to
+both capture some of this high level information and also describe some of the
+design decisions behind it. This is meant for people interested in hacking on
+Clang, not for end-users. The description below is categorized by
+libraries, and does not describe any of the clients of the libraries.</p>
+
+<!-- ======================================================================= -->
+<h2 id="libsupport">LLVM Support Library</h2>
+<!-- ======================================================================= -->
+
+<p>The LLVM libsupport library provides many underlying libraries and
+<a href="http://llvm.org/docs/ProgrammersManual.html">data-structures</a>,
+including command line option processing, various containers and a system
+abstraction layer, which is used for file system access.</p>
+
+<!-- ======================================================================= -->
+<h2 id="libbasic">The Clang 'Basic' Library</h2>
+<!-- ======================================================================= -->
+
+<p>This library certainly needs a better name. The 'basic' library contains a
+number of low-level utilities for tracking and manipulating source buffers,
+locations within the source buffers, diagnostics, tokens, target abstraction,
+and information about the subset of the language being compiled for.</p>
+
+<p>Part of this infrastructure is specific to C (such as the TargetInfo class);
+other parts could be reused for other non-C-based languages (SourceLocation,
+SourceManager, Diagnostics, FileManager). When and if there is future demand
+we can figure out if it makes sense to introduce a new library, move the general
+classes somewhere else, or introduce some other solution.</p>
+
+<p>We describe the roles of these classes in order of their dependencies.</p>
+
+
+<!-- ======================================================================= -->
+<h3 id="Diagnostics">The Diagnostics Subsystem</h3>
+<!-- ======================================================================= -->
+
+<p>The Clang Diagnostics subsystem is an important part of how the compiler
+communicates with the human. Diagnostics are the warnings and errors produced
+when the code is incorrect or dubious. In Clang, each diagnostic produced has
+(at the minimum) a unique ID, an English translation associated with it, a <a
+href="#SourceLocation">SourceLocation</a> to "put the caret", and a severity (e.g.
+<tt>WARNING</tt> or <tt>ERROR</tt>). They can also optionally include a number
+of arguments to the diagnostic (which fill in "%0"'s in the string) as well as a
+number of source ranges that relate to the diagnostic.</p>
+
+<p>In this section, we'll be giving examples produced by the Clang command line
+driver, but diagnostics can be <a href="#DiagnosticClient">rendered in many
+different ways</a> depending on how the DiagnosticClient interface is
+implemented. A representative example of a diagnostic is:</p>
+
+<pre>
+t.c:38:15: error: invalid operands to binary expression ('int *' and '_Complex float')
+ <span style="color:darkgreen">P = (P-42) + Gamma*4;</span>
+ <span style="color:blue">~~~~~~ ^ ~~~~~~~</span>
+</pre>
+
+<p>In this example, you can see the English translation and the severity
+(error), as well as the source location (the caret ("^") and file/line/column
+info), the source ranges "~~~~", and the arguments to the diagnostic ("int *"
+and "_Complex float"). You'll have to believe me that there is a unique ID backing the
+diagnostic :).</p>
+
+<p>Getting all of this to happen involves several steps and many moving
+pieces; this section describes them and talks about best practices when adding
+a new diagnostic.</p>
+
+<!-- ============================= -->
+<h4>The Diagnostic*Kinds.td files</h4>
+<!-- ============================= -->
+
+<p>Diagnostics are created by adding an entry to one of the <tt>
+clang/Basic/Diagnostic*Kinds.td</tt> files, depending on what library will
+be using it. From this file, tblgen generates the unique ID of the diagnostic,
+the severity of the diagnostic and the English translation + format string.</p>
+
+<p>There is little sanity with the naming of the unique IDs right now. Some
+start with err_, warn_, ext_ to encode the severity into the name. Since the
+enum is referenced in the C++ code that produces the diagnostic, it is somewhat
+useful for it to be reasonably short.</p>
+
+<p>The severity of the diagnostic comes from the set {<tt>NOTE</tt>,
+<tt>WARNING</tt>, <tt>EXTENSION</tt>, <tt>EXTWARN</tt>, <tt>ERROR</tt>}. The
+<tt>ERROR</tt> severity is used for diagnostics indicating the program is never
+acceptable under any circumstances. When an error is emitted, the AST for the
+input code may not be fully built. The <tt>EXTENSION</tt> and <tt>EXTWARN</tt>
+severities are used for extensions to the language that Clang accepts. This
+means that Clang fully understands and can represent them in the AST, but we
+produce diagnostics to tell the user their code is non-portable. The difference
+is that the former are ignored by default, and the latter warn by default. The
+<tt>WARNING</tt> severity is used for constructs that are valid in the currently
+selected source language but that are dubious in some way. The <tt>NOTE</tt>
+level is used to staple more information onto previous diagnostics.</p>
+
+<p>These <em>severities</em> are mapped into a smaller set (the
+Diagnostic::Level enum, {<tt>Ignored</tt>, <tt>Note</tt>, <tt>Warning</tt>,
+<tt>Error</tt>, <tt>Fatal</tt> }) of output <em>levels</em> by the diagnostics
+subsystem based on various configuration options. Clang internally supports a
+fully fine-grained mapping mechanism that allows you to map almost any
+diagnostic to the output level that you want. The only diagnostics that cannot
+be mapped are <tt>NOTE</tt>s, which always follow the severity of the previously
+emitted diagnostic, and <tt>ERROR</tt>s, which can only be mapped to
+<tt>Fatal</tt> (it is not possible to turn an error into a warning,
+for example).</p>
+
+<p>Diagnostic mappings are used in many ways. For example, if the user
+specifies <tt>-pedantic</tt>, <tt>EXTENSION</tt> maps to <tt>Warning</tt>; if
+they specify <tt>-pedantic-errors</tt>, it turns into <tt>Error</tt>. This is
+used to implement options like <tt>-Wunused-macros</tt>, <tt>-Wundef</tt>, etc.
+</p>
+
+<p>
+Mapping to <tt>Fatal</tt> should only be used for diagnostics that are
+considered so severe that error recovery won't be able to recover sensibly from
+them (thus spewing a ton of bogus errors). One example of this class of error
+is failure to #include a file.
+</p>
+
+<!-- ================= -->
+<h4>The Format String</h4>
+<!-- ================= -->
+
+<p>The format string for the diagnostic is very simple, but it has some power.
+It takes the form of a string in English with markers that indicate where and
+how arguments to the diagnostic are inserted and formatted. For example, here
+are some simple format strings:</p>
+
+<pre>
+ "binary integer literals are an extension"
+ "format string contains '\\0' within the string body"
+ "more '<b>%%</b>' conversions than data arguments"
+ "invalid operands to binary expression (<b>%0</b> and <b>%1</b>)"
+ "overloaded '<b>%0</b>' must be a <b>%select{unary|binary|unary or binary}2</b> operator"
+ " (has <b>%1</b> parameter<b>%s1</b>)"
+</pre>
+
+<p>These examples show some important points of format strings. You can use any
+ plain ASCII character in the diagnostic string except "%" without a problem,
+ but these are C strings, so you have to use and be aware of all the C escape
+ sequences (as in the second example). If you want to produce a "%" in the
+ output, use the "%%" escape sequence, like the third diagnostic. Finally,
+ Clang uses the "%...[digit]" sequences to specify where and how arguments to
+ the diagnostic are formatted.</p>
+
+<p>Arguments to the diagnostic are numbered according to how they are specified
+ by the C++ code that <a href="#producingdiag">produces them</a>, and are
+ referenced by <tt>%0</tt> .. <tt>%9</tt>. If you have more than 10 arguments
+ to your diagnostic, you are doing something wrong :). Unlike printf, there
+ is no requirement that arguments to the diagnostic end up in the output in
+ the same order as they are specified, you could have a format string with
+ <tt>"%1 %0"</tt> that swaps them, for example. The text in between the
+ percent and digit are formatting instructions. If there are no instructions,
+ the argument is just turned into a string and substituted in.</p>
+
+<p>Here are some "best practices" for writing the English format string:</p>
+
+<ul>
+<li>Keep the string short. It should ideally fit in the 80 column limit of the
+ <tt>DiagnosticKinds.td</tt> file. This avoids the diagnostic wrapping when
+ printed, and forces you to think about the important point you are conveying
+ with the diagnostic.</li>
+<li>Take advantage of location information. The user will be able to see the
+ line and location of the caret, so you don't need to tell them that the
+ problem is with the 4th argument to the function: just point to it.</li>
+<li>Do not capitalize the diagnostic string, and do not end it with a
+ period.</li>
+<li>If you need to quote something in the diagnostic string, use single
+ quotes.</li>
+</ul>
+
+<p>Diagnostics should never take random English strings as arguments: you
+shouldn't use <tt>"you have a problem with %0"</tt> and pass in things like
+<tt>"your argument"</tt> or <tt>"your return value"</tt> as arguments. Doing
+this prevents <a href="#translation">translating</a> the Clang diagnostics to
+other languages (because they'll get random English words in their otherwise
+localized diagnostic). The exceptions to this are C/C++ language keywords
+(e.g. auto, const, mutable, etc) and C/C++ operators (<tt>/=</tt>). Note
+that things like "pointer" and "reference" are not keywords. On the other
+hand, you <em>can</em> include anything that comes from the user's source code,
+including variable names, types, labels, etc. The 'select' format can be
+used to achieve this sort of thing in a localizable way, see below.</p>
+
+<!-- ==================================== -->
+<h4>Formatting a Diagnostic Argument</h4>
+<!-- ==================================== -->
+
+<p>Arguments to diagnostics are fully typed internally, and come from a couple
+different classes: integers, types, names, and random strings. Depending on
+the class of the argument, it can be optionally formatted in different ways.
+This gives the DiagnosticClient information about what the argument means
+without requiring it to use a specific presentation (consider this MVC for
+Clang :).</p>
+
+<p>Here are the different diagnostic argument formats currently supported by
+Clang:</p>
+
+<table>
+<tr><td colspan="2"><b>"s" format</b></td></tr>
+<tr><td>Example:</td><td><tt>"requires %1 parameter%s1"</tt></td></tr>
+<tr><td>Class:</td><td>Integers</td></tr>
+<tr><td>Description:</td><td>This is a simple formatter for integers that is
+ useful when producing English diagnostics. When the integer is 1, it prints
+ as nothing. When the integer is not 1, it prints as "s". This allows some
+ simple grammatical forms to be handled correctly, and eliminates the
+ need to use gross things like <tt>"requires %1 parameter(s)"</tt>.</td></tr>
+
+<tr><td colspan="2"><b>"select" format</b></td></tr>
+<tr><td>Example:</td><td><tt>"must be a %select{unary|binary|unary or binary}2
+ operator"</tt></td></tr>
+<tr><td>Class:</td><td>Integers</td></tr>
+<tr><td>Description:</td><td><p>This format specifier is used to merge multiple
+ related diagnostics together into one common one, without requiring the
+ difference to be specified as an English string argument. Instead of
+ specifying the string, the diagnostic gets an integer argument and the
+ format string selects the numbered option. In this case, the "%2" value
+ must be an integer in the range [0..2]. If it is 0, it prints 'unary'; if
+ it is 1, it prints 'binary'; if it is 2, it prints 'unary or binary'. This
+ allows other language translations to substitute reasonable words (or entire
+ phrases) based on the semantics of the diagnostic instead of having to do
+ things textually.</p>
+ <p>The selected string does undergo formatting.</p></td></tr>
+
+<tr><td colspan="2"><b>"plural" format</b></td></tr>
+<tr><td>Example:</td><td><tt>"you have %1 %plural{1:mouse|:mice}1 connected to
+ your computer"</tt></td></tr>
+<tr><td>Class:</td><td>Integers</td></tr>
+<tr><td>Description:</td><td><p>This is a formatter for complex plural forms.
+ It is designed to handle even the requirements of languages with very
+ complex plural forms, as many Baltic languages have. The argument consists
+ of a series of expression/form pairs, separated by ':', where the first form
+ whose expression evaluates to true is the result of the modifier.</p>
+ <p>An expression can be empty, in which case it is always true. See the
+ example at the top. Otherwise, it is a series of one or more numeric
+ conditions, separated by ','. If any condition matches, the expression
+ matches. Each numeric condition can take one of three forms.</p>
+ <ul>
+ <li>number: A simple decimal number matches if the argument is the same
+ as the number. Example: <tt>"%plural{1:mouse|:mice}4"</tt></li>
+ <li>range: A range in square brackets matches if the argument is within
+ the range. The range is inclusive on both ends. Example:
+ <tt>"%plural{0:none|1:one|[2,5]:some|:many}2"</tt></li>
+ <li>modulo: A modulo operator is followed by a number, an
+ equals sign, and either a number or a range. The tests are the
+ same as for plain
+ numbers and ranges, but the argument is taken modulo the number first.
+ Example: <tt>"%plural{%100=0:even hundred|%100=[1,50]:lower half|:everything
+ else}1"</tt></li>
+ </ul>
+ <p>The parser is very unforgiving. A syntax error, even whitespace, will
+ abort, as will a failure to match the argument against any
+ expression.</p></td></tr>
+
+<tr><td colspan="2"><b>"ordinal" format</b></td></tr>
+<tr><td>Example:</td><td><tt>"ambiguity in %ordinal0 argument"</tt></td></tr>
+<tr><td>Class:</td><td>Integers</td></tr>
+<tr><td>Description:</td><td><p>This is a formatter which represents the
+ argument number as an ordinal: the value <tt>1</tt> becomes <tt>1st</tt>,
+ <tt>3</tt> becomes <tt>3rd</tt>, and so on. Values less than <tt>1</tt>
+ are not supported.</p>
+ <p>This formatter is currently hard-coded to use English ordinals.</p></td></tr>
+
+<tr><td colspan="2"><b>"objcclass" format</b></td></tr>
+<tr><td>Example:</td><td><tt>"method %objcclass0 not found"</tt></td></tr>
+<tr><td>Class:</td><td>DeclarationName</td></tr>
+<tr><td>Description:</td><td><p>This is a simple formatter that indicates the
+ DeclarationName corresponds to an Objective-C class method selector. As
+ such, it prints the selector with a leading '+'.</p></td></tr>
+
+<tr><td colspan="2"><b>"objcinstance" format</b></td></tr>
+<tr><td>Example:</td><td><tt>"method %objcinstance0 not found"</tt></td></tr>
+<tr><td>Class:</td><td>DeclarationName</td></tr>
+<tr><td>Description:</td><td><p>This is a simple formatter that indicates the
+ DeclarationName corresponds to an Objective-C instance method selector. As
+ such, it prints the selector with a leading '-'.</p></td></tr>
+
+<tr><td colspan="2"><b>"q" format</b></td></tr>
+<tr><td>Example:</td><td><tt>"candidate found by name lookup is %q0"</tt></td></tr>
+<tr><td>Class:</td><td>NamedDecl*</td></tr>
+<tr><td>Description</td><td><p>This formatter indicates that the fully-qualified name of the declaration should be printed, e.g., "std::vector" rather than "vector".</p></td></tr>
+
+</table>
+
+<p>It is really easy to add format specifiers to the Clang diagnostics system,
+but they should be discussed before they are added. If you are creating a lot
+of repetitive diagnostics and/or have an idea for a useful formatter, please
+bring it up on the cfe-dev mailing list.</p>
+
+<!-- ===================================================== -->
+<h4 id="producingdiag">Producing the Diagnostic</h4>
+<!-- ===================================================== -->
+
+<p>Now that you've created the diagnostic in the DiagnosticKinds.td file, you
+need to write the code that detects the condition in question and emits the
+new diagnostic. Various components of Clang (e.g. the preprocessor, Sema,
+etc) provide a helper function named "Diag". It creates a diagnostic and
+accepts the arguments, ranges, and other information that goes along with
+it.</p>
+
+<p>For example, the binary expression error comes from code like this:</p>
+
+<pre>
+ if (various things that are bad)
+ Diag(Loc, diag::err_typecheck_invalid_operands)
+ &lt;&lt; lex-&gt;getType() &lt;&lt; rex-&gt;getType()
+ &lt;&lt; lex-&gt;getSourceRange() &lt;&lt; rex-&gt;getSourceRange();
+</pre>
+
+<p>This shows the use of the Diag method: it takes a location (a <a
+href="#SourceLocation">SourceLocation</a> object) and a diagnostic enum value
+(which matches the name from DiagnosticKinds.td). If the diagnostic takes
+arguments, they are specified with the &lt;&lt; operator: the first argument
+becomes %0, the second becomes %1, etc. The diagnostic interface allows you to
+specify arguments of many different types, including <tt>int</tt> and
+<tt>unsigned</tt> for integer arguments, <tt>const char*</tt> and
+<tt>std::string</tt> for string arguments, <tt>DeclarationName</tt> and
+<tt>const IdentifierInfo*</tt> for names, <tt>QualType</tt> for types, etc.
+SourceRanges are also specified with the &lt;&lt; operator, but do not have a
+specific ordering requirement.</p>
+
+<p>As you can see, adding and producing a diagnostic is pretty straightforward.
+The hard part is deciding exactly what you need to say to help the user, picking
+a suitable wording, and providing the information needed to format it correctly.
+The good news is that the call site that issues a diagnostic should be
+completely independent of how the diagnostic is formatted and in what language
+it is rendered.
+</p>
+
+<!-- ==================================================== -->
+<h4 id="fix-it-hints">Fix-It Hints</h4>
+<!-- ==================================================== -->
+
+<p>In some cases, the front end emits diagnostics when it is clear
+that some small change to the source code would fix the problem. For
+example, a missing semicolon at the end of a statement or a use of
+deprecated syntax that is easily rewritten into a more modern form.
+Clang tries very hard to emit the diagnostic and recover gracefully
+in these and other cases.</p>
+
+<p>However, for these cases where the fix is obvious, the diagnostic
+can be annotated with a hint (referred to as a "fix-it hint") that
+describes how to change the code referenced by the diagnostic to fix
+the problem. For example, it might add the missing semicolon at the
+end of the statement or rewrite the use of a deprecated construct
+into something more palatable. Here is one such example from the C++
+front end, where we warn about the right-shift operator changing
+meaning from C++98 to C++11:</p>
+
+<pre>
+test.cpp:3:7: warning: use of right-shift operator ('&gt;&gt;') in template argument will require parentheses in C++11
+A&lt;100 &gt;&gt; 2&gt; *a;
+      ^
+  (       )
+</pre>
+
+<p>Here, the fix-it hint is suggesting that parentheses be added,
+and showing exactly where those parentheses would be inserted into the
+source code. The fix-it hints themselves describe what changes to make
+to the source code in an abstract manner, which the text diagnostic
+printer renders as a line of "insertions" below the caret line. <a
+href="#DiagnosticClient">Other diagnostic clients</a> might choose
+to render the code differently (e.g., as markup inline) or even give
+the user the ability to automatically fix the problem.</p>
+
+<p>All fix-it hints are described by the <code>FixItHint</code> class,
+instances of which should be attached to the diagnostic using the
+&lt;&lt; operator in the same way that highlighted source ranges and
+arguments are passed to the diagnostic. Fix-it hints can be created
+with one of three constructors:</p>
+
+<dl>
+ <dt><code>FixItHint::CreateInsertion(Loc, Code)</code></dt>
+ <dd>Specifies that the given <code>Code</code> (a string) should be inserted
+ before the source location <code>Loc</code>.</dd>
+
+ <dt><code>FixItHint::CreateRemoval(Range)</code></dt>
+ <dd>Specifies that the code in the given source <code>Range</code>
+ should be removed.</dd>
+
+ <dt><code>FixItHint::CreateReplacement(Range, Code)</code></dt>
+ <dd>Specifies that the code in the given source <code>Range</code>
+ should be removed, and replaced with the given <code>Code</code> string.</dd>
+</dl>
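+
+<p>For example, a diagnostic about a missing semicolon could attach an
+insertion hint at the location where the semicolon belongs. The following
+sketch shows the general pattern; the diagnostic ID and the
+<code>EndLoc</code> variable are illustrative placeholders rather than a
+quote from the Clang sources:</p>
+
+<pre>
+  // Report the problem and attach a hint that inserts the missing ';'.
+  // The fix-it travels with the diagnostic to whatever client renders it.
+  Diag(EndLoc, diag::err_expected_semi_declaration)
+    &lt;&lt; FixItHint::CreateInsertion(EndLoc, ";");
+</pre>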
+
+<!-- ============================================================= -->
+<h4><a name="DiagnosticClient">The DiagnosticClient Interface</a></h4>
+<!-- ============================================================= -->
+
+<p>Once code generates a diagnostic with all of the arguments and the rest of
+the relevant information, Clang needs to know what to do with it. As previously
+mentioned, the diagnostic machinery goes through some filtering to map a
+severity onto a diagnostic level, then (assuming the diagnostic is not mapped to
+"<tt>Ignore</tt>") it invokes an object that implements the DiagnosticClient
+interface with the information.</p>
+
+<p>It is possible to implement this interface in many different ways. For
+example, the normal Clang DiagnosticClient (named 'TextDiagnosticPrinter') turns
+the arguments into strings (according to the various formatting rules), prints
+out the file/line/column information and the string, then prints out the line of
+code, the source ranges, and the caret. However, this behavior isn't required.
+</p>
+
+<p>Another implementation of the DiagnosticClient interface is the
+'TextDiagnosticBuffer' class, which is used when Clang is in -verify mode.
+Instead of formatting and printing out the diagnostics, this implementation just
+captures and remembers the diagnostics as they fly by. Then -verify compares
+the list of produced diagnostics to the list of expected ones. If they disagree,
+it prints out its own output.
+</p>
+
+<p>There are many other possible implementations of this interface, and this is
+why we prefer diagnostics to pass down rich structured information in arguments.
+For example, an HTML output might want declaration names to be linkified to where
+they come from in the source. Another example is that a GUI might let you click
+on typedefs to expand them. This application would want to pass significantly
+more information about types through to the GUI than a simple flat string. The
+interface allows this to happen.</p>
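+
+<p>As a rough illustration, a client that wants machine-readable output could
+be sketched as follows. The base class and the exact signature of
+<code>HandleDiagnostic</code> have shifted between Clang versions (newer trees
+call the class <code>DiagnosticConsumer</code>), so treat this as an outline
+of the idea rather than a drop-in implementation:</p>
+
+<pre>
+  class XMLDiagClient : public DiagnosticClient {
+  public:
+    virtual void HandleDiagnostic(Diagnostic::Level Level,
+                                  const DiagnosticInfo &amp;Info) {
+      // Format the message text, then emit Level, Info.getLocation(),
+      // and the text in whatever structured form this client wants.
+      llvm::SmallString&lt;128&gt; Message;
+      Info.FormatDiagnostic(Message);
+      // ... write an XML record here ...
+    }
+  };
+</pre>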
+
+<!-- ====================================================== -->
+<h4><a name="translation">Adding Translations to Clang</a></h4>
+<!-- ====================================================== -->
+
+<p>Not possible yet! Diagnostic strings should be written in UTF-8; the client
+can translate to the relevant code page if needed. Each translation completely
+replaces the format string for the diagnostic.</p>
+
+
+<!-- ======================================================================= -->
+<h3 id="SourceLocation">The SourceLocation and SourceManager classes</h3>
+<!-- ======================================================================= -->
+
+<p>Strangely enough, the SourceLocation class represents a location within the
+source code of the program. Important design points include:</p>
+
+<ol>
+<li>sizeof(SourceLocation) must be extremely small, as these are embedded into
+ many AST nodes and are passed around often. Currently it is 32 bits.</li>
+<li>SourceLocation must be a simple value object that can be efficiently
+ copied.</li>
+<li>We should be able to represent a source location for any byte of any input
+ file. This includes in the middle of tokens, in whitespace, in trigraphs,
+ etc.</li>
+<li>A SourceLocation must encode the current #include stack that was active when
+ the location was processed. For example, if the location corresponds to a
+ token, it should contain the set of #includes active when the token was
+ lexed. This allows us to print the #include stack for a diagnostic.</li>
+<li>SourceLocation must be able to describe macro expansions, capturing both
+ the ultimate instantiation point and the source of the original character
+ data.</li>
+</ol>
+
+<p>In practice, the SourceLocation works together with the SourceManager class
+to encode two pieces of information about a location: its spelling location
+and its instantiation location. For most tokens, these will be the same.
+However, for a macro expansion (or tokens that came from a _Pragma directive)
+these will describe the location of the characters corresponding to the token
+and the location where the token was used (i.e. the macro instantiation point
+or the location of the _Pragma itself).</p>
+
+<p>The Clang front-end inherently depends on the location of a token being
+tracked correctly. If it is ever incorrect, the front-end may get confused and
+die. The reason for this is that the notion of the 'spelling' of a Token in
+Clang depends on being able to find the original input characters for the token.
+This concept maps directly to the "spelling location" for the token.</p>
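+
+<p>As a small illustration, the two views of a location are queried through
+the SourceManager. This is a sketch assuming a <code>SourceManager SM</code>
+is in scope; the accessor names follow the terminology above, though they
+have been renamed in some Clang versions:</p>
+
+<pre>
+  // Loc may point into a macro expansion or a _Pragma.
+  SourceLocation SpellingLoc = SM.getSpellingLoc(Loc);           // where the characters are
+  SourceLocation InstantiationLoc = SM.getInstantiationLoc(Loc); // where they were used
+  // PresumedLoc folds in #line directives and exposes file/line/column.
+  PresumedLoc PLoc = SM.getPresumedLoc(Loc);
+</pre>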
+
+
+<!-- ======================================================================= -->
+<h3 id="SourceRange">SourceRange and CharSourceRange</h3>
+<!-- ======================================================================= -->
+<!-- mostly taken from
+ http://lists.cs.uiuc.edu/pipermail/cfe-dev/2010-August/010595.html -->
+
+<p>Clang represents most source ranges by [first, last], where first and last
+each point to the beginning of their respective tokens. For example
+consider the SourceRange of the following statement:</p>
+<pre>
+x = foo + bar;
+^first    ^last
+</pre>
+
+<p>To map from this representation to a character-based
+representation, the 'last' location needs to be adjusted to point to
+(or past) the end of that token with either
+<code>Lexer::MeasureTokenLength()</code> or
+<code>Lexer::getLocForEndOfToken()</code>. For the rare cases
+where character-level source range information is needed, we use
+the <code>CharSourceRange</code> class.</p>
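+
+<p>For example, a token-oriented <code>SourceRange</code> can be widened into
+a character range roughly as follows (a sketch assuming a
+<code>SourceManager SM</code> and <code>LangOptions LangOpts</code> are
+available):</p>
+
+<pre>
+  // Convert a [first, last] token range into a half-open character range.
+  SourceLocation EndOfLast =
+      Lexer::getLocForEndOfToken(Range.getEnd(), /*Offset=*/0, SM, LangOpts);
+  CharSourceRange Chars =
+      CharSourceRange::getCharRange(Range.getBegin(), EndOfLast);
+  // CharSourceRange::getTokenRange(Range) instead keeps the token-based
+  // form but records explicitly that the endpoints are token locations.
+</pre>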
+
+
+<!-- ======================================================================= -->
+<h2 id="libdriver">The Driver Library</h2>
+<!-- ======================================================================= -->
+
+<p>The clang Driver and library are documented <a
+href="DriverInternals.html">here</a>.<p>
+
+<!-- ======================================================================= -->
+<h2 id="pch">Precompiled Headers</h2>
+<!-- ======================================================================= -->
+
+<p>Clang supports two implementations of precompiled headers. The
+ default implementation, precompiled headers (<a
+ href="PCHInternals.html">PCH</a>) uses a serialized representation
+ of Clang's internal data structures, encoded with the <a
+ href="http://llvm.org/docs/BitCodeFormat.html">LLVM bitstream
+ format</a>. Pretokenized headers (<a
+ href="PTHInternals.html">PTH</a>), on the other hand, contain a
+ serialized representation of the tokens encountered when
+ preprocessing a header (and anything that header includes).</p>
+
+
+<!-- ======================================================================= -->
+<h2 id="libfrontend">The Frontend Library</h2>
+<!-- ======================================================================= -->
+
+<p>The Frontend library contains functionality useful for building
+tools on top of the clang libraries, for example several methods for
+outputting diagnostics.</p>
+
+<!-- ======================================================================= -->
+<h2 id="liblex">The Lexer and Preprocessor Library</h2>
+<!-- ======================================================================= -->
+
+<p>The Lexer library contains several tightly-connected classes that are involved
+with the nasty process of lexing and preprocessing C source code. The main
+interface to this library for outside clients is the large <a
+href="#Preprocessor">Preprocessor</a> class.
+It contains the various pieces of state that are required to coherently read
+tokens out of a translation unit.</p>
+
+<p>The core interface to the Preprocessor object (once it is set up) is the
+Preprocessor::Lex method, which returns the next <a href="#Token">Token</a> from
+the preprocessor stream. There are two types of token providers that the
+preprocessor is capable of reading from: a buffer lexer (provided by the <a
+href="#Lexer">Lexer</a> class) and a buffered token stream (provided by the <a
+href="#TokenLexer">TokenLexer</a> class).
+
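+<p>A client that only wants to observe the token stream can drive the
+preprocessor with a simple loop. This sketch assumes a fully initialized
+<code>Preprocessor PP</code>:</p>
+
+<pre>
+  // Pull tokens until end-of-file; Lex handles macro expansion,
+  // #include processing, and the rest of the preprocessor machinery.
+  Token Tok;
+  PP.EnterMainSourceFile();
+  do {
+    PP.Lex(Tok);
+    // ... inspect Tok here ...
+  } while (Tok.isNot(tok::eof));
+</pre>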
+
+<!-- ======================================================================= -->
+<h3 id="Token">The Token class</h3>
+<!-- ======================================================================= -->
+
+<p>The Token class is used to represent a single lexed token. Tokens are
+intended to be used by the lexer/preprocessor and parser libraries, but are not
+intended to live beyond them (for example, they should not live in the ASTs).</p>
+
+<p>Tokens most often live on the stack (or some other location that is efficient
+to access) as the parser is running, but occasionally do get buffered up. For
+example, macro definitions are stored as a series of tokens, and the C++
+front-end periodically needs to buffer tokens up for tentative parsing and
+various pieces of look-ahead. As such, the size of a Token matters. On a 32-bit
+system, sizeof(Token) is currently 16 bytes.</p>
+
+<p>Tokens occur in two forms: "<a href="#AnnotationToken">Annotation
+Tokens</a>" and normal tokens. Normal tokens are those returned by the lexer,
+annotation tokens represent semantic information and are produced by the parser,
+replacing normal tokens in the token stream. Normal tokens contain the
+following information:</p>
+
+<ul>
+<li><b>A SourceLocation</b> - This indicates the location of the start of the
+token.</li>
+
+<li><b>A length</b> - This stores the length of the token as stored in the
+SourceBuffer. For tokens that include them, this length includes trigraphs and
+escaped newlines which are ignored by later phases of the compiler. By pointing
+into the original source buffer, it is always possible to get the original
+spelling of a token completely accurately.</li>
+
+<li><b>IdentifierInfo</b> - If a token takes the form of an identifier, and if
+identifier lookup was enabled when the token was lexed (e.g. the lexer was not
+reading in 'raw' mode) this contains a pointer to the unique hash value for the
+identifier. Because the lookup happens before keyword identification, this
+field is set even for language keywords like 'for'.</li>
+
+<li><b>TokenKind</b> - This indicates the kind of token as classified by the
+lexer. This includes things like <tt>tok::starequal</tt> (for the "*="
+operator), <tt>tok::ampamp</tt> for the "&amp;&amp;" token, and keyword values
+(e.g. <tt>tok::kw_for</tt>) for identifiers that correspond to keywords. Note
+that some tokens can be spelled multiple ways. For example, C++ supports
+"operator keywords", where things like "and" are treated exactly like the
+"&amp;&amp;" operator. In these cases, the kind value is set to
+<tt>tok::ampamp</tt>, which is good for the parser, which doesn't have to
+consider both forms. For something that cares about which form is used (e.g.
+the preprocessor 'stringize' operator) the spelling indicates the original
+form.</li>
+
+<li><b>Flags</b> - There are currently four flags tracked by the
+lexer/preprocessor system on a per-token basis:
+
+ <ol>
+ <li><b>StartOfLine</b> - This was the first token that occurred on its input
+ source line.</li>
+ <li><b>LeadingSpace</b> - There was a space character either immediately
+ before the token or transitively before the token as it was expanded
+ through a macro. The definition of this flag is very closely defined by
+ the stringizing requirements of the preprocessor.</li>
+ <li><b>DisableExpand</b> - This flag is used internally to the preprocessor to
+ represent identifier tokens which have macro expansion disabled. This
+ prevents them from being considered as candidates for macro expansion ever
+ in the future.</li>
+ <li><b>NeedsCleaning</b> - This flag is set if the original spelling for the
+ token includes a trigraph or escaped newline. Since this is uncommon,
+ many pieces of code can fast-path on tokens that did not need cleaning.</li>
+ </ol>
+</li>
+</ul>
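+
+<p>Once a client has a lexed <code>Token</code> in hand (say, from the
+preprocessor loop above), these fields are read through simple accessors; a
+sketch:</p>
+
+<pre>
+  SourceLocation Loc = Tok.getLocation();  // start of the token
+  unsigned Length = Tok.getLength();       // length in the source buffer
+  // Non-null for identifiers and also for keywords such as 'for',
+  // since the lookup happens before keyword identification.
+  IdentifierInfo *II = Tok.getIdentifierInfo();
+  if (Tok.is(tok::kw_for)) { /* keyword classified by its TokenKind */ }
+  bool StartsLine = Tok.isAtStartOfLine();   // StartOfLine flag
+  bool HadSpace   = Tok.hasLeadingSpace();   // LeadingSpace flag
+</pre>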
+
+<p>One interesting (and somewhat unusual) aspect of normal tokens is that they
+don't contain any semantic information about the lexed value. For example, if
+the token was a pp-number token, we do not represent the value of the number
+that was lexed (this is left for later pieces of code to decide). Additionally,
+the lexer library has no notion of typedef names vs variable names: both are
+returned as identifiers, and the parser is left to decide whether a specific
+identifier is a typedef or a variable (tracking this requires scope information
+among other things). The parser can do this translation by replacing tokens
+returned by the preprocessor with "Annotation Tokens".</p>
+
+<!-- ======================================================================= -->
+<h3 id="AnnotationToken">Annotation Tokens</h3>
+<!-- ======================================================================= -->
+
+<p>Annotation Tokens are tokens that are synthesized by the parser and injected
+into the preprocessor's token stream (replacing existing tokens) to record
+semantic information found by the parser. For example, if "foo" is found to be
+a typedef, the "foo" <tt>tok::identifier</tt> token is replaced with an
+<tt>tok::annot_typename</tt>. This is useful for a couple of reasons: 1) this
+makes it easy to handle qualified type names (e.g. "foo::bar::baz&lt;42&gt;::t")
+in C++ as a single "token" in the parser. 2) if the parser backtracks, the
+reparse does not need to redo semantic analysis to determine whether a token
+sequence is a variable, type, template, etc.</p>
+
+<p>Annotation Tokens are created by the parser and reinjected into the parser's
+token stream (when backtracking is enabled). Because they can only exist in
+tokens that the preprocessor-proper is done with, it doesn't need to keep around
+flags like "start of line" that the preprocessor uses to do its job.
+Additionally, an annotation token may "cover" a sequence of preprocessor tokens
+(e.g. <tt>a::b::c</tt> is five preprocessor tokens). As such, the valid fields
+of an annotation token are different than the fields for a normal token (but
+they are multiplexed into the normal Token fields):</p>
+
+<ul>
+<li><b>SourceLocation "Location"</b> - The SourceLocation for the annotation
+token indicates the first token replaced by the annotation token. In the example
+above, it would be the location of the "a" identifier.</li>
+
+<li><b>SourceLocation "AnnotationEndLoc"</b> - This holds the location of the
+last token replaced with the annotation token. In the example above, it would
+be the location of the "c" identifier.</li>
+
+<li><b>void* "AnnotationValue"</b> - This contains an opaque object
+that the parser gets from Sema. The parser merely preserves the
+information for Sema to later interpret based on the annotation token
+kind.</li>
+
+<li><b>TokenKind "Kind"</b> - This indicates the kind of Annotation token this
+is. See below for the different valid kinds.</li>
+</ul>
+
+<p>Annotation tokens currently come in three kinds:</p>
+
+<ol>
+<li><b>tok::annot_typename</b>: This annotation token represents a
+resolved typename token that is potentially qualified. The
+AnnotationValue field contains the <tt>QualType</tt> returned by
+Sema::getTypeName(), possibly with source location information
+attached.</li>
+
+<li><b>tok::annot_cxxscope</b>: This annotation token represents a C++
+scope specifier, such as "A::B::". This corresponds to the grammar
+productions "::" and ":: [opt] nested-name-specifier". The
+AnnotationValue pointer is a <tt>NestedNameSpecifier*</tt> returned by
+the Sema::ActOnCXXGlobalScopeSpecifier and
+Sema::ActOnCXXNestedNameSpecifier callbacks.</li>
+
+<li><b>tok::annot_template_id</b>: This annotation token represents a
+C++ template-id such as "foo&lt;int, 4&gt;", where "foo" is the name
+of a template. The AnnotationValue pointer is a pointer to a malloc'd
+TemplateIdAnnotation object. Depending on the context, a parsed
+template-id that names a type might become a typename annotation token
+(if all we care about is the named type, e.g., because it occurs in a
+type specifier) or might remain a template-id token (if we want to
+retain more source location information or produce a new type, e.g.,
+in a declaration of a class template specialization). template-id
+annotation tokens that refer to a type can be "upgraded" to typename
+annotation tokens by the parser.</li>
+
+</ol>
+
+<p>As mentioned above, annotation tokens are not returned by the preprocessor,
+they are formed on demand by the parser. This means that the parser has to be
+aware of cases where an annotation could occur and form it where appropriate.
+This is somewhat similar to how the parser handles Translation Phase 6 of C99:
+String Concatenation (see C99 5.1.1.2). In the case of string concatenation,
+the preprocessor just returns distinct tok::string_literal and
+tok::wide_string_literal tokens and the parser eats a sequence of them wherever
+the grammar indicates that a string literal can occur.</p>
+
+<p>In order to do this, whenever the parser expects a tok::identifier or
+tok::coloncolon, it should call the TryAnnotateTypeOrScopeToken or
+TryAnnotateCXXScopeToken methods to form the annotation token. These methods
+will maximally form the specified annotation tokens and replace the current
+token with them, if applicable. If the current token is not valid for an
+annotation token, it will remain an identifier or :: token.</p>
+
+
+
+<!-- ======================================================================= -->
+<h3 id="Lexer">The Lexer class</h3>
+<!-- ======================================================================= -->
+
+<p>The Lexer class provides the mechanics of lexing tokens out of a source
+buffer and deciding what they mean. The Lexer is complicated by the fact that
+it operates on raw buffers that have not had spelling eliminated (this is a
+necessity to get decent performance), but this is countered with careful coding
+as well as standard performance techniques (for example, the comment handling
+code is vectorized on X86 and PowerPC hosts).</p>
+
+<p>The lexer has a couple of interesting modal features:</p>
+
+<ul>
+<li>The lexer can operate in 'raw' mode. This mode has several features that
+ make it possible to quickly lex the file (e.g. it stops identifier lookup,
+ doesn't specially handle preprocessor tokens, handles EOF differently, etc).
+ This mode is used for lexing within an "<tt>#if 0</tt>" block, for
+ example.</li>
+<li>The lexer can capture and return comments as tokens. This is required to
+ support the -C preprocessor mode, which passes comments through, and is
+ used by the diagnostic checker to identify expected-error annotations.</li>
+<li>The lexer can be in ParsingFilename mode, which happens when preprocessing
+ after reading a #include directive. This mode changes the parsing of '&lt;'
+ to return an "angled string" instead of a bunch of tokens for each thing
+ within the filename.</li>
+<li>When parsing a preprocessor directive (after "<tt>#</tt>") the
+ ParsingPreprocessorDirective mode is entered. This changes the lexer to
+ return an EOD token at a newline.</li>
+<li>The Lexer uses a LangOptions object to know whether trigraphs are enabled,
+ whether C++ or ObjC keywords are recognized, etc.</li>
+</ul>
+
+<p>In addition to these modes, the lexer keeps track of a couple of other
+ features that are local to a lexed buffer, which change as the buffer is
+ lexed:</p>
+
+<ul>
+<li>The Lexer uses BufferPtr to keep track of the current character being
+ lexed.</li>
+<li>The Lexer uses IsAtStartOfLine to keep track of whether the next lexed token
+ will start with its "start of line" bit set.</li>
+<li>The Lexer keeps track of the current #if directives that are active (which
+ can be nested).</li>
+<li>The Lexer keeps track of an <a href="#MultipleIncludeOpt">
+ MultipleIncludeOpt</a> object, which is used to
+ detect whether the buffer uses the standard "<tt>#ifndef XX</tt> /
+ <tt>#define XX</tt>" idiom to prevent multiple inclusion. If a buffer does,
+ subsequent includes can be ignored if the XX macro is defined.</li>
+</ul>
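+
+<p>For instance, a tool that only needs raw tokens can construct a Lexer over
+a file buffer directly and lex it without any preprocessing. This is a
+sketch; the constructor arguments are assumed, and a valid
+<code>FileID FID</code>, <code>SourceManager SM</code>, and
+<code>LangOptions LangOpts</code> are presumed to exist:</p>
+
+<pre>
+  // Raw mode: no identifier lookup, no macro expansion, no directives.
+  Lexer RawLex(FID, SM.getBuffer(FID), SM, LangOpts);
+  Token Tok;
+  do {
+    RawLex.LexFromRawLexer(Tok);
+    // ... examine the raw token ...
+  } while (Tok.isNot(tok::eof));
+</pre>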
+
+<!-- ======================================================================= -->
+<h3 id="TokenLexer">The TokenLexer class</h3>
+<!-- ======================================================================= -->
+
+<p>The TokenLexer class is a token provider that returns tokens from a list
+of tokens that came from somewhere else. It is typically used for two things: 1)
+returning tokens from a macro definition as it is being expanded, and 2) returning
+tokens from an arbitrary buffer of tokens. The latter is used by _Pragma and
+will most likely be used to handle unbounded look-ahead for the C++ parser.</p>
+
+<!-- ======================================================================= -->
+<h3 id="MultipleIncludeOpt">The MultipleIncludeOpt class</h3>
+<!-- ======================================================================= -->
+
+<p>The MultipleIncludeOpt class implements a really simple little state machine
+that is used to detect the standard "<tt>#ifndef XX</tt> / <tt>#define XX</tt>"
+idiom that people typically use to prevent multiple inclusion of headers. If a
+buffer uses this idiom and is subsequently #include'd, the preprocessor can
+simply check to see whether the guarding condition is defined or not. If so,
+the preprocessor can completely ignore the include of the header.</p>
+
+
+
+<!-- ======================================================================= -->
+<h2 id="libparse">The Parser Library</h2>
+<!-- ======================================================================= -->
+
+<!-- ======================================================================= -->
+<h2 id="libast">The AST Library</h2>
+<!-- ======================================================================= -->
+
+<!-- ======================================================================= -->
+<h3 id="Type">The Type class and its subclasses</h3>
+<!-- ======================================================================= -->
+
+<p>The Type class (and its subclasses) is an important part of the AST. Types
+are accessed through the ASTContext class, which implicitly creates and uniques
+them as they are needed. Types have a couple of non-obvious features: 1) they
+do not capture type qualifiers like const or volatile (See
+<a href="#QualType">QualType</a>), and 2) they implicitly capture typedef
+information. Once created, types are immutable (unlike decls).</p>
+
+<p>Typedefs in C make semantic analysis a bit more complex than it would
+be without them. The issue is that we want to capture typedef information
+and represent it in the AST perfectly, but the semantics of operations need to
+"see through" typedefs. For example, consider this code:</p>
+
+<code>
+void func() {<br>
+&nbsp;&nbsp;typedef int foo;<br>
+&nbsp;&nbsp;foo X, *Y;<br>
+&nbsp;&nbsp;typedef foo* bar;<br>
+&nbsp;&nbsp;bar Z;<br>
+&nbsp;&nbsp;*X; <i>// error</i><br>
+&nbsp;&nbsp;**Y; <i>// error</i><br>
+&nbsp;&nbsp;**Z; <i>// error</i><br>
+}<br>
+</code>
+
+<p>The code above is illegal, and thus we expect there to be diagnostics emitted
+on the annotated lines. In this example, we expect to get:</p>
+
+<pre>
+<b>test.c:6:1: error: indirection requires pointer operand ('foo' invalid)</b>
+*X; // error
+<span style="color:blue">^~</span>
+<b>test.c:7:1: error: indirection requires pointer operand ('foo' invalid)</b>
+**Y; // error
+<span style="color:blue">^~~</span>
+<b>test.c:8:1: error: indirection requires pointer operand ('foo' invalid)</b>
+**Z; // error
+<span style="color:blue">^~~</span>
+</pre>
+
+<p>While this example is somewhat silly, it illustrates the point: we want to
+retain typedef information where possible, so that we can emit errors about
+"<tt>std::string</tt>" instead of "<tt>std::basic_string&lt;char, std:...</tt>".
+Doing this requires properly keeping typedef information (for example, the type
+of "X" is "foo", not "int"), and requires properly propagating it through the
+various operators (for example, the type of *Y is "foo", not "int"). In order
+to retain this information, the type of these expressions is an instance of the
+TypedefType class, which indicates that the type of these expressions is a
+typedef for foo.
+</p>
+
+<p>Representing types like this is great for diagnostics, because the
+user-specified type is always immediately available. There are two problems
+with this: first, various semantic checks need to make judgements about the
+<em>actual structure</em> of a type, ignoring typedefs. Second, we need an
+efficient way to query whether two types are structurally identical to each
+other, ignoring typedefs. The solution to both of these problems is the idea of
+canonical types.</p>
+
+<!-- =============== -->
+<h4>Canonical Types</h4>
+<!-- =============== -->
+
+<p>Every instance of the Type class contains a canonical type pointer. For
+simple types with no typedefs involved (e.g. "<tt>int</tt>", "<tt>int*</tt>",
+"<tt>int**</tt>"), the type just points to itself. For types that have a
+typedef somewhere in their structure (e.g. "<tt>foo</tt>", "<tt>foo*</tt>",
+"<tt>foo**</tt>", "<tt>bar</tt>"), the canonical type pointer points to their
+structurally equivalent type without any typedefs (e.g. "<tt>int</tt>",
+"<tt>int*</tt>", "<tt>int**</tt>", and "<tt>int*</tt>" respectively).</p>
+
+<p>This design provides a constant time operation (dereferencing the canonical
+type pointer) that gives us access to the structure of types. For example,
+we can trivially tell that "bar" and "foo*" are the same type by dereferencing
+their canonical type pointers and doing a pointer comparison (they both point
+to the single "<tt>int*</tt>" type).</p>
+
+<p>Canonical types and typedef types bring up some complexities that must be
+carefully managed. Specifically, the "isa/cast/dyncast" operators generally
+shouldn't be used in code that is inspecting the AST. For example, when type
+checking the indirection operator (unary '*' on a pointer), the type checker
+must verify that the operand has a pointer type. It would not be correct to
+check that with "<tt>isa&lt;PointerType&gt;(SubExpr-&gt;getType())</tt>",
+because this predicate would fail if the subexpression had a typedef type.</p>
+
+<p>The solution to this problem is a set of helper methods on Type, used to
+check their properties. In this case, it would be correct to use
+"<tt>SubExpr-&gt;getType()-&gt;isPointerType()</tt>" to do the check. This
+predicate will return true if the <em>canonical type is a pointer</em>, which is
+true any time the type is structurally a pointer type. The only hard part here
+is remembering not to use the <tt>isa/cast/dyncast</tt> operations.</p>
+
+<p>The second problem we face is how to get access to the pointer type once we
+know it exists. To continue the example, the result type of the indirection
+operator is the pointee type of the subexpression. In order to determine the
+type, we need to get the instance of PointerType that best captures the typedef
+information in the program. If the type of the expression is literally a
+PointerType, we can return that; otherwise, we have to dig through the
+typedefs to find the pointer type. For example, if the subexpression had type
+"<tt>foo*</tt>", we could return that type as the result. If the subexpression
+had type "<tt>bar</tt>", we want to return "<tt>foo*</tt>" (note that we do
+<em>not</em> want "<tt>int*</tt>"). In order to provide all of this, Type has
+a getAsPointerType() method that checks whether the type is structurally a
+PointerType and, if so, returns the best one. If not, it returns a null
+pointer.</p>
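+
+<p>Putting the two pieces together, the indirection check sketched above looks
+roughly like this. <code>OpLoc</code> and <code>SubExpr</code> stand in for
+data the caller already has, and the diagnostic ID simply follows the message
+shown earlier:</p>
+
+<pre>
+  // Type checking unary '*': the operand must structurally be a pointer.
+  QualType OpTy = SubExpr-&gt;getType();
+  if (!OpTy-&gt;isPointerType()) {
+    Diag(OpLoc, diag::err_typecheck_indirection_requires_pointer)
+      &lt;&lt; OpTy &lt;&lt; SubExpr-&gt;getSourceRange();
+    return QualType();
+  }
+  // getAsPointerType() digs through typedefs ("bar" becomes "foo*"),
+  // so the result below is "foo" rather than the canonical "int".
+  QualType ResultTy = OpTy-&gt;getAsPointerType()-&gt;getPointeeType();
+</pre>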
+
+<p>This structure is somewhat mystical, but after meditating on it, it will
+make sense to you :).</p>
+
+<!-- ======================================================================= -->
+<h3 id="QualType">The QualType class</h3>
+<!-- ======================================================================= -->
+
+<p>The QualType class is designed as a trivial value class that is
+small, passed by-value and is efficient to query. The idea of
+QualType is that it stores the type qualifiers (const, volatile,
+restrict, plus some extended qualifiers required by language
+extensions) separately from the types themselves. QualType is
+conceptually a pair of "Type*" and the bits for these type qualifiers.</p>
+
+<p>By storing the type qualifiers as bits in the conceptual pair, it is
+extremely efficient to get the set of qualifiers on a QualType (just return the
+field of the pair), add a type qualifier (which is a trivial constant-time
+operation that sets a bit), and remove one or more type qualifiers (just return
+a QualType with the bitfield set to empty).</p>
+
+<p>Further, because the bits are stored outside of the type itself, we do not
+need to create duplicates of types with different sets of qualifiers (i.e. there
+is only a single heap allocated "int" type: "const int" and "volatile const int"
+both point to the same heap allocated "int" type). This reduces the heap size
+used to represent bits and also means we do not have to consider qualifiers when
+uniquing types (<a href="#Type">Type</a> does not even contain qualifiers).</p>
+
+<p>In practice, the two most common type qualifiers (const and
+restrict) are stored in the low bits of the pointer to the Type
+object, together with a flag indicating whether extended qualifiers
+are present (which must be heap-allocated). This means that QualType
+is exactly the same size as a pointer.</p>
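+
+<p>In code, qualifier manipulation is just bit twiddling on the QualType
+itself; a sketch using an <code>ASTContext Ctx</code>:</p>
+
+<pre>
+  // "volatile const int": the qualifiers live on the QualType, not on
+  // the single heap-allocated "int" Type node.
+  QualType T = Ctx.IntTy;
+  T.addConst();
+  T.addVolatile();
+  // T.isConstQualified() and T.isVolatileQualified() are now true, and
+  // T.getUnqualifiedType() refers to the very same "int" Type object.
+</pre>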
+
+<!-- ======================================================================= -->
+<h3 id="DeclarationName">Declaration names</h3>
+<!-- ======================================================================= -->
+
+<p>The <tt>DeclarationName</tt> class represents the name of a
+ declaration in Clang. Declarations in the C family of languages can
+ take several different forms. Most declarations are named by
+ simple identifiers, e.g., "<code>f</code>" and "<code>x</code>" in
+ the function declaration <code>f(int x)</code>. In C++, declaration
+ names can also name class constructors ("<code>Class</code>"
+ in <code>struct Class { Class(); }</code>), class destructors
+ ("<code>~Class</code>"), overloaded operator names ("operator+"),
+ and conversion functions ("<code>operator void const *</code>"). In
+ Objective-C, declaration names can refer to the names of Objective-C
+ methods, which involve the method name and the parameters,
+ collectively called a <i>selector</i>, e.g.,
+ "<code>setWidth:height:</code>". Since all of these kinds of
+ entities - variables, functions, Objective-C methods, C++
+ constructors, destructors, and operators - are represented as
+ subclasses of Clang's common <code>NamedDecl</code>
+ class, <code>DeclarationName</code> is designed to efficiently
+ represent any kind of name.</p>
+
+<p>Given
+ a <code>DeclarationName</code> <code>N</code>, <code>N.getNameKind()</code>
+ will produce a value that describes what kind of name <code>N</code>
+ stores. There are 8 options (all of the names are inside
+ the <code>DeclarationName</code> class):</p>
+<dl>
+ <dt>Identifier</dt>
+ <dd>The name is a simple
+ identifier. Use <code>N.getAsIdentifierInfo()</code> to retrieve the
+ corresponding <code>IdentifierInfo*</code> pointing to the actual
+ identifier. Note that C++ overloaded operators (e.g.,
+ "<code>operator+</code>") are represented as special kinds of
+ identifiers. Use <code>IdentifierInfo</code>'s <code>getOverloadedOperatorID</code>
+ function to determine whether an identifier is an overloaded
+ operator name.</dd>
+
+ <dt>ObjCZeroArgSelector, ObjCOneArgSelector,
+ ObjCMultiArgSelector</dt>
+ <dd>The name is an Objective-C selector, which can be retrieved as a
+ <code>Selector</code> instance
+ via <code>N.getObjCSelector()</code>. The three possible name
+ kinds for Objective-C reflect an optimization within
+ the <code>DeclarationName</code> class: both zero- and
+ one-argument selectors are stored as a
+ masked <code>IdentifierInfo</code> pointer, and therefore require
+ very little space, since zero- and one-argument selectors are far
+ more common than multi-argument selectors (which use a different
+ structure).</dd>
+
+ <dt>CXXConstructorName</dt>
+ <dd>The name is a C++ constructor
+ name. Use <code>N.getCXXNameType()</code> to retrieve
+ the <a href="#QualType">type</a> that this constructor is meant to
+ construct. The type is always the canonical type, since all
+ constructors for a given type have the same name.</dd>
+
+ <dt>CXXDestructorName</dt>
+ <dd>The name is a C++ destructor
+ name. Use <code>N.getCXXNameType()</code> to retrieve
+ the <a href="#QualType">type</a> whose destructor is being
+ named. This type is always a canonical type.</dd>
+
+ <dt>CXXConversionFunctionName</dt>
+ <dd>The name is a C++ conversion function. Conversion functions are
+ named according to the type they convert to, e.g., "<code>operator void
+ const *</code>". Use <code>N.getCXXNameType()</code> to retrieve
+ the type that this conversion function converts to. This type is
+ always a canonical type.</dd>
+
+ <dt>CXXOperatorName</dt>
+ <dd>The name is a C++ overloaded operator name. Overloaded operators
+ are named according to their spelling, e.g.,
+ "<code>operator+</code>" or "<code>operator new
+ []</code>". Use <code>N.getCXXOverloadedOperator()</code> to
+ retrieve the overloaded operator (a value of
+ type <code>OverloadedOperatorKind</code>).</dd>
+</dl>
+
+<p><code>DeclarationName</code>s are cheap to create, copy, and
+ compare. They require only a single pointer's worth of storage in
+ the common cases (identifiers, zero-
+ and one-argument Objective-C selectors) and use dense, uniqued
+ storage for the other kinds of
+ names. Two <code>DeclarationName</code>s can be compared for
+ equality (<code>==</code>, <code>!=</code>) using a simple bitwise
+ comparison, can be ordered
+ with <code>&lt;</code>, <code>&gt;</code>, <code>&lt;=</code>,
+ and <code>&gt;=</code> (which provide a lexicographical ordering for
+ normal identifiers but an unspecified ordering for other kinds of
+ names), and can be placed into LLVM <code>DenseMap</code>s
+ and <code>DenseSet</code>s.</p>
+
+<p><code>DeclarationName</code> instances can be created in different
+ ways depending on what kind of name the instance will store. Normal
+ identifiers (<code>IdentifierInfo</code> pointers) and Objective-C selectors
+ (<code>Selector</code>) can be implicitly converted
+ to <code>DeclarationName</code>s. Names for C++ constructors,
+ destructors, conversion functions, and overloaded operators can be retrieved from
+ the <code>DeclarationNameTable</code>, an instance of which is
+ available as <code>ASTContext::DeclarationNames</code>. The member
+ functions <code>getCXXConstructorName</code>, <code>getCXXDestructorName</code>,
+ <code>getCXXConversionFunctionName</code>, and <code>getCXXOperatorName</code>, respectively,
+ return <code>DeclarationName</code> instances for the four kinds of
+ C++ special function names.</p>
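+
+<p>As a brief, hedged illustration (the helper function name is made up), the
+following sketch builds a <code>DeclarationName</code> for a plain identifier
+and for an overloaded operator, then inspects their kinds:</p>
+
+<pre>
+#include "clang/AST/ASTContext.h"
+#include "clang/AST/DeclarationName.h"
+#include &lt;cassert>
+
+void nameExample(clang::ASTContext &amp;Ctx) {
+  // IdentifierInfo* converts implicitly to DeclarationName.
+  clang::DeclarationName FName = &amp;Ctx.Idents.get("f");
+  assert(FName.getNameKind() == clang::DeclarationName::Identifier);
+
+  // Special names come from the DeclarationNameTable.
+  clang::DeclarationName Plus =
+      Ctx.DeclarationNames.getCXXOperatorName(clang::OO_Plus);
+  assert(Plus.getNameKind() == clang::DeclarationName::CXXOperatorName);
+  assert(Plus.getCXXOverloadedOperator() == clang::OO_Plus);
+}
+</pre>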
+
+<!-- ======================================================================= -->
+<h3 id="DeclContext">Declaration contexts</h3>
+<!-- ======================================================================= -->
+<p>Every declaration in a program exists within some <i>declaration
+ context</i>, such as a translation unit, namespace, class, or
+ function. Declaration contexts in Clang are represented by
+ the <code>DeclContext</code> class, from which the various
+ declaration-context AST nodes
+ (<code>TranslationUnitDecl</code>, <code>NamespaceDecl</code>, <code>RecordDecl</code>, <code>FunctionDecl</code>,
+ etc.) will derive. The <code>DeclContext</code> class provides
+ several facilities common to each declaration context:</p>
+<dl>
+ <dt>Source-centric vs. Semantics-centric View of Declarations</dt>
+ <dd><code>DeclContext</code> provides two views of the declarations
+ stored within a declaration context. The source-centric view
+ accurately represents the program source code as written, including
+ multiple declarations of entities where present (see the
+ section <a href="#Redeclarations">Redeclarations and
+ Overloads</a>), while the semantics-centric view represents the
+ program semantics. The two views are kept synchronized by semantic
+ analysis while the ASTs are being constructed.</dd>
+
+ <dt>Storage of declarations within that context</dt>
+ <dd>Every declaration context can contain some number of
+ declarations. For example, a C++ class (represented
+ by <code>RecordDecl</code>) contains various member functions,
+ fields, nested types, and so on. All of these declarations will be
+ stored within the <code>DeclContext</code>, and one can iterate
+ over the declarations via
+ [<code>DeclContext::decls_begin()</code>,
+ <code>DeclContext::decls_end()</code>). This mechanism provides
+ the source-centric view of declarations in the context.</dd>
+
+ <dt>Lookup of declarations within that context</dt>
+ <dd>The <code>DeclContext</code> structure provides efficient name
+ lookup for names within that declaration context. For example,
+ if <code>N</code> is a namespace we can look for the
+ name <code>N::f</code>
+ using <code>DeclContext::lookup</code>. The lookup itself is
+ based on a lazily-constructed array (for declaration contexts
+ with a small number of declarations) or hash table (for
+ declaration contexts with more declarations). The lookup
+ operation provides the semantics-centric view of the declarations
+ in the context.</dd>
+
+ <dt>Ownership of declarations</dt>
+ <dd>The <code>DeclContext</code> owns all of the declarations that
+ were declared within its declaration context, and is responsible
+ for the management of their memory as well as their
+ (de-)serialization.</dd>
+</dl>
+
+<p>All declarations are stored within a declaration context, and one
+ can query
+ information about the context in which each declaration lives. One
+ can retrieve the <code>DeclContext</code> that contains a
+ particular <code>Decl</code>
+ using <code>Decl::getDeclContext</code>. However, see the
+ section <a href="#LexicalAndSemanticContexts">Lexical and Semantic
+ Contexts</a> for more information about how to interpret this
+ context information.</p>
+
+<h4 id="Redeclarations">Redeclarations and Overloads</h4>
+<p>Within a translation unit, it is common for an entity to be
+declared several times. For example, we might declare a function "f"
+ and then later re-declare it as part of an inlined definition:</p>
+
+<pre>
+void f(int x, int y, int z = 1);
+
+inline void f(int x, int y, int z) { /* ... */ }
+</pre>
+
+<p>The representation of "f" differs in the source-centric and
+ semantics-centric views of a declaration context. In the
+ source-centric view, all redeclarations will be present, in the
+ order they occurred in the source code, making
+ this view suitable for clients that wish to see the structure of
+ the source code. In the semantics-centric view, only the most recent "f"
+ will be found by the lookup, since it effectively replaces the first
+ declaration of "f".</p>
+
+<p>In the semantics-centric view, overloading of functions is
+ represented explicitly. For example, given two declarations of a
+ function "g" that are overloaded, e.g.,</p>
+<pre>
+void g();
+void g(int);
+</pre>
+<p>the <code>DeclContext::lookup</code> operation will return
+ a <code>DeclContext::lookup_result</code> that contains a range of iterators
+ over declarations of "g". Clients that perform semantic analysis on a
+ program and are not concerned with the actual source code will
+ primarily use this semantics-centric view.</p>
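+
+<p>As a hedged sketch (assuming the pair-of-iterators form of
+<code>lookup_result</code> described above; the helper name is illustrative),
+a client could list all declarations of "g" like so:</p>
+
+<pre>
+#include "clang/AST/ASTContext.h"
+#include "clang/AST/DeclBase.h"
+#include "llvm/Support/raw_ostream.h"
+
+void listOverloads(clang::ASTContext &amp;Ctx, clang::DeclContext *DC) {
+  clang::DeclarationName GName = &amp;Ctx.Idents.get("g");
+  clang::DeclContext::lookup_result R = DC->lookup(GName);
+  for (clang::DeclContext::lookup_iterator I = R.first, E = R.second;
+       I != E; ++I)
+    llvm::errs() &lt;&lt; (*I)->getNameAsString() &lt;&lt; "\n";
+}
+</pre>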
+
+<h4 id="LexicalAndSemanticContexts">Lexical and Semantic Contexts</h4>
+<p>Each declaration has two potentially different
+ declaration contexts: a <i>lexical</i> context, which corresponds to
+ the source-centric view of the declaration context, and
+ a <i>semantic</i> context, which corresponds to the
+ semantics-centric view. The lexical context is accessible
+ via <code>Decl::getLexicalDeclContext</code> while the
+ semantic context is accessible
+ via <code>Decl::getDeclContext</code>, both of which return
+ <code>DeclContext</code> pointers. For most declarations, the two
+ contexts are identical. For example:</p>
+
+<pre>
+class X {
+public:
+ void f(int x);
+};
+</pre>
+
+<p>Here, the semantic and lexical contexts of <code>X::f</code> are
+ the <code>DeclContext</code> associated with the
+ class <code>X</code> (itself stored as a <code>RecordDecl</code> AST
+ node). However, we can now define <code>X::f</code> out-of-line:</p>
+
+<pre>
+void X::f(int x = 17) { /* ... */ }
+</pre>
+
+<p>This definition of <code>X::f</code> has different lexical and semantic
+ contexts. The lexical context corresponds to the declaration
+ context in which the actual declaration occurred in the source
+ code, e.g., the translation unit containing <code>X</code>. Thus,
+ this declaration of <code>X::f</code> can be found by traversing
+ the declarations provided by
+ [<code>decls_begin()</code>, <code>decls_end()</code>) in the
+ translation unit.</p>
+
+<p>The semantic context of <code>X::f</code> corresponds to the
+ class <code>X</code>, since this member function is (semantically) a
+ member of <code>X</code>. Lookup of the name <code>f</code> into
+ the <code>DeclContext</code> associated with <code>X</code> will
+ then return the definition of <code>X::f</code> (including
+ information about the default argument).</p>
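+
+<p>A small sketch of the difference (the function name is illustrative), using
+the out-of-line definition of <code>X::f</code> above:</p>
+
+<pre>
+#include "clang/AST/DeclCXX.h"
+#include &lt;cassert>
+
+void contextExample(clang::CXXMethodDecl *OutOfLineDef) {
+  clang::DeclContext *SemaDC = OutOfLineDef->getDeclContext();        // class X
+  clang::DeclContext *LexDC  = OutOfLineDef->getLexicalDeclContext(); // translation unit
+  assert(llvm::isa&lt;clang::CXXRecordDecl>(SemaDC));
+  assert(llvm::isa&lt;clang::TranslationUnitDecl>(LexDC));
+}
+</pre>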
+
+<h4 id="TransparentContexts">Transparent Declaration Contexts</h4>
+<p>In C and C++, there are several contexts in which names that are
+ logically declared inside another declaration will actually "leak"
+ out into the enclosing scope from the perspective of name
+ lookup. The most obvious instance of this behavior is in
+ enumeration types, e.g.,</p>
+<pre>
+enum Color {
+ Red,
+ Green,
+ Blue
+};
+</pre>
+
+<p>Here, <code>Color</code> is an enumeration, which is a declaration
+ context that contains the
+ enumerators <code>Red</code>, <code>Green</code>,
+ and <code>Blue</code>. Thus, traversing the list of declarations
+ contained in the enumeration <code>Color</code> will
+ yield <code>Red</code>, <code>Green</code>,
+ and <code>Blue</code>. However, outside of the scope
+ of <code>Color</code> one can name the enumerator <code>Red</code>
+ without qualifying the name, e.g.,</p>
+
+<pre>
+Color c = Red;
+</pre>
+
+<p>There are other entities in C++ that provide similar behavior. For
+ example, linkage specifications that use curly braces:</p>
+
+<pre>
+extern "C" {
+ void f(int);
+ void g(int);
+}
+// f and g are visible here
+</pre>
+
+<p>For source-level accuracy, we treat the linkage specification and
+ the enumeration type as declaration contexts in which their enclosed
+ declarations ("Red", "Green", and "Blue"; "f" and "g")
+ are declared. However, these declarations are visible outside of the
+ scope of the declaration context.</p>
+
+<p>These language features (and several others, described below) have
+ roughly the same set of
+ requirements: declarations are declared within a particular lexical
+ context, but the declarations are also found via name lookup in
+ scopes enclosing the declaration itself. This feature is implemented
+ via <i>transparent</i> declaration contexts
+ (see <code>DeclContext::isTransparentContext()</code>), whose
+ declarations are visible in the nearest enclosing non-transparent
+ declaration context. This means that the lexical context of the
+ declaration (e.g., an enumerator) will be the
+ transparent <code>DeclContext</code> itself, as will the semantic
+ context, but the declaration will be visible in every outer context
+ up to and including the first non-transparent declaration context (since
+ transparent declaration contexts can be nested).</p>
+
+<p>The transparent <code>DeclContexts</code> are:</p>
+<ul>
+ <li>Enumerations (but not C++11 "scoped enumerations"):
+ <pre>
+enum Color {
+ Red,
+ Green,
+ Blue
+};
+// Red, Green, and Blue are in scope
+ </pre></li>
+ <li>C++ linkage specifications:
+ <pre>
+extern "C" {
+ void f(int);
+ void g(int);
+}
+// f and g are in scope
+ </pre></li>
+ <li>Anonymous unions and structs:
+ <pre>
+struct LookupTable {
+ bool IsVector;
+ union {
+ std::vector&lt;Item&gt; *Vector;
+ std::set&lt;Item&gt; *Set;
+ };
+};
+
+LookupTable LT;
+LT.Vector = 0; // Okay: finds Vector inside the unnamed union
+ </pre>
+ </li>
+ <li>C++11 inline namespaces:
+<pre>
+namespace mylib {
+ inline namespace debug {
+ class X;
+ }
+}
+mylib::X *xp; // okay: mylib::X refers to mylib::debug::X
+</pre>
+</li>
+</ul>
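+
+<p>The following rough sketch (function name illustrative) shows the
+observable effect for the <code>Color</code> enumeration above:</p>
+
+<pre>
+#include "clang/AST/Decl.h"
+#include &lt;cassert>
+
+void transparentExample(clang::EnumDecl *Color) {
+  // A (non-scoped) enum is a transparent DeclContext.
+  assert(Color->isTransparentContext());
+  // The enumerators' own context is the enum itself...
+  clang::EnumConstantDecl *Red = *Color->enumerator_begin();
+  assert(Red->getDeclContext() == Color);
+  // ...but name lookup still finds them in the enclosing, non-transparent
+  // context (e.g., the translation unit).
+}
+</pre>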
+
+
+<h4 id="MultiDeclContext">Multiply-Defined Declaration Contexts</h4>
+<p>C++ namespaces have the interesting--and, so far, unique--property that
+the namespace can be defined multiple times, and the declarations
+provided by each namespace definition are effectively merged (from
+the semantic point of view). For example, the following two code
+snippets are semantically indistinguishable:</p>
+<pre>
+// Snippet #1:
+namespace N {
+ void f();
+}
+namespace N {
+ void f(int);
+}
+
+// Snippet #2:
+namespace N {
+ void f();
+ void f(int);
+}
+</pre>
+
+<p>In Clang's representation, the source-centric view of declaration
+ contexts will actually have two separate <code>NamespaceDecl</code>
+ nodes in Snippet #1, each of which is a declaration context that
+ contains a single declaration of "f". However, the semantics-centric
+ view provided by name lookup into the namespace <code>N</code> for
+ "f" will return a <code>DeclContext::lookup_result</code> that contains
+ a range of iterators over declarations of "f".</p>
+
+<p><code>DeclContext</code> manages multiply-defined declaration
+ contexts internally. The
+ function <code>DeclContext::getPrimaryContext</code> retrieves the
+ "primary" context for a given <code>DeclContext</code> instance,
+ which is the <code>DeclContext</code> responsible for maintaining
+ the lookup table used for the semantics-centric view. Given the
+ primary context, one can follow the chain
+ of <code>DeclContext</code> nodes that define additional
+ declarations via <code>DeclContext::getNextContext</code>. Note that
+ these functions are used internally within the lookup and insertion
+ methods of the <code>DeclContext</code>, so the vast majority of
+ clients can ignore them.</p>
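+
+<p>For completeness, here is a tiny hedged sketch of the primary-context
+relationship for the two definitions of namespace <code>N</code> in Snippet #1
+(the function name is illustrative):</p>
+
+<pre>
+#include "clang/AST/Decl.h"
+#include &lt;cassert>
+
+void primaryContextExample(clang::NamespaceDecl *First,
+                           clang::NamespaceDecl *Second) {
+  // Both NamespaceDecls funnel lookups through the same primary context.
+  assert(First->getPrimaryContext() == Second->getPrimaryContext());
+}
+</pre>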
+
+<!-- ======================================================================= -->
+<h3 id="CFG">The <tt>CFG</tt> class</h3>
+<!-- ======================================================================= -->
+
+<p>The <tt>CFG</tt> class is designed to represent a source-level
+control-flow graph for a single statement (<tt>Stmt*</tt>). Typically
+instances of <tt>CFG</tt> are constructed for function bodies (usually
+an instance of <tt>CompoundStmt</tt>), but can also be instantiated to
+represent the control-flow of any class that subclasses <tt>Stmt</tt>,
+which includes simple expressions. Control-flow graphs are especially
+useful for performing
+<a href="http://en.wikipedia.org/wiki/Data_flow_analysis#Sensitivities">flow-
+or path-sensitive</a> program analyses on a given function.</p>
+
+<!-- ============ -->
+<h4>Basic Blocks</h4>
+<!-- ============ -->
+
+<p>Concretely, an instance of <tt>CFG</tt> is a collection of basic
+blocks. Each basic block is an instance of <tt>CFGBlock</tt>, which
+simply contains an ordered sequence of <tt>Stmt*</tt> (each referring
+to statements in the AST). The ordering of statements within a block
+indicates unconditional flow of control from one statement to the
+next. <a href="#ConditionalControlFlow">Conditional control-flow</a>
+is represented using edges between basic blocks. The statements
+within a given <tt>CFGBlock</tt> can be traversed using
+the <tt>CFGBlock::*iterator</tt> interface.</p>
+
+<p>
+A <tt>CFG</tt> object owns the instances of <tt>CFGBlock</tt> within
+the control-flow graph it represents. Each <tt>CFGBlock</tt> within a
+CFG is also uniquely numbered (accessible
+via <tt>CFGBlock::getBlockID()</tt>). Currently the number is
+based on the order in which the blocks were created, but no assumptions
+should be made about how <tt>CFGBlock</tt>s are numbered other than that
+their numbers are unique and that they are numbered from 0..N-1 (where N
+is the number of basic blocks in the CFG).</p>
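+
+<p>A rough sketch of walking a CFG's blocks (the function name is
+illustrative):</p>
+
+<pre>
+#include "clang/Analysis/CFG.h"
+#include "llvm/Support/raw_ostream.h"
+
+void printBlockIDs(clang::CFG &amp;cfg) {
+  for (clang::CFG::iterator I = cfg.begin(), E = cfg.end(); I != E; ++I) {
+    clang::CFGBlock *B = *I;
+    llvm::errs() &lt;&lt; "Block B" &lt;&lt; B->getBlockID() &lt;&lt; "\n";
+  }
+}
+</pre>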
+
+<!-- ===================== -->
+<h4>Entry and Exit Blocks</h4>
+<!-- ===================== -->
+
+Each instance of <tt>CFG</tt> contains two special blocks:
+an <i>entry</i> block (accessible via <tt>CFG::getEntry()</tt>), which
+has no incoming edges, and an <i>exit</i> block (accessible
+via <tt>CFG::getExit()</tt>), which has no outgoing edges. Neither
+block contains any statements, and they serve the role of providing a
+clear entrance and exit for a body of code such as a function body.
+The presence of these empty blocks greatly simplifies the
+implementation of many analyses built on top of CFGs.
+
+<!-- ===================================================== -->
+<h4 id ="ConditionalControlFlow">Conditional Control-Flow</h4>
+<!-- ===================================================== -->
+
+<p>Conditional control-flow (such as that induced by if-statements
+and loops) is represented as edges between <tt>CFGBlock</tt>s.
+Because different C language constructs can induce control-flow,
+each <tt>CFGBlock</tt> also records an extra <tt>Stmt*</tt> that
+represents the <i>terminator</i> of the block. A terminator is simply
+the statement that caused the control-flow, and is used to identify
+the nature of the conditional control-flow between blocks. For
+example, in the case of an if-statement, the terminator refers to
+the <tt>IfStmt</tt> object in the AST that represented the given
+branch.</p>
+
+<p>To illustrate, consider the following code example:</p>
+
+<code>
+int foo(int x) {<br>
+&nbsp;&nbsp;x = x + 1;<br>
+<br>
+&nbsp;&nbsp;if (x > 2) x++;<br>
+&nbsp;&nbsp;else {<br>
+&nbsp;&nbsp;&nbsp;&nbsp;x += 2;<br>
+&nbsp;&nbsp;&nbsp;&nbsp;x *= 2;<br>
+&nbsp;&nbsp;}<br>
+<br>
+&nbsp;&nbsp;return x;<br>
+}
+</code>
+
+<p>After invoking the parser+semantic analyzer on this code fragment,
+the AST of the body of <tt>foo</tt> is referenced by a
+single <tt>Stmt*</tt>. We can then construct an instance
+of <tt>CFG</tt> representing the control-flow graph of this function
+body by a single call to a static class method:</p>
+
+<code>
+&nbsp;&nbsp;Stmt* FooBody = ...<br>
+&nbsp;&nbsp;CFG* FooCFG = <b>CFG::buildCFG</b>(FooBody);
+</code>
+
+<p>It is the responsibility of the caller of <tt>CFG::buildCFG</tt>
+to <tt>delete</tt> the returned <tt>CFG*</tt> when the CFG is no
+longer needed.</p>
+
+<p>Along with providing an interface to iterate over
+its <tt>CFGBlock</tt>s, the <tt>CFG</tt> class also provides methods
+that are useful for debugging and visualizing CFGs. For example, the
+method
+<tt>CFG::dump()</tt> dumps a pretty-printed version of the CFG to
+standard error. This is especially useful when one is using a
+debugger such as gdb. For example, here is the output
+of <tt>FooCFG->dump()</tt>:</p>
+
+<code>
+&nbsp;[ B5 (ENTRY) ]<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (0):<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Successors (1): B4<br>
+<br>
+&nbsp;[ B4 ]<br>
+&nbsp;&nbsp;&nbsp;&nbsp;1: x = x + 1<br>
+&nbsp;&nbsp;&nbsp;&nbsp;2: (x > 2)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;<b>T: if [B4.2]</b><br>
+&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (1): B5<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Successors (2): B3 B2<br>
+<br>
+&nbsp;[ B3 ]<br>
+&nbsp;&nbsp;&nbsp;&nbsp;1: x++<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (1): B4<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Successors (1): B1<br>
+<br>
+&nbsp;[ B2 ]<br>
+&nbsp;&nbsp;&nbsp;&nbsp;1: x += 2<br>
+&nbsp;&nbsp;&nbsp;&nbsp;2: x *= 2<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (1): B4<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Successors (1): B1<br>
+<br>
+&nbsp;[ B1 ]<br>
+&nbsp;&nbsp;&nbsp;&nbsp;1: return x;<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (2): B2 B3<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Successors (1): B0<br>
+<br>
+&nbsp;[ B0 (EXIT) ]<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (1): B1<br>
+&nbsp;&nbsp;&nbsp;&nbsp;Successors (0):
+</code>
+
+<p>For each block, the pretty-printed output displays the number of
+<i>predecessor</i> blocks (blocks that have outgoing
+control-flow to the given block) and <i>successor</i> blocks (blocks
+that have incoming control-flow from the given
+block). We can also clearly see the special entry and exit blocks at
+the beginning and end of the pretty-printed output. For the entry
+block (block B5), the number of predecessor blocks is 0, while for the
+exit block (block B0) the number of successor blocks is 0.</p>
+
+<p>The most interesting block here is B4, whose outgoing control-flow
+represents the branching caused by the sole if-statement
+in <tt>foo</tt>. Of particular interest is the second statement in
+the block, <b><tt>(x > 2)</tt></b>, and the terminator, printed
+as <b><tt>if [B4.2]</tt></b>. The second statement represents the
+evaluation of the condition of the if-statement, which occurs before
+the actual branching of control-flow. Within the <tt>CFGBlock</tt>
+for B4, the <tt>Stmt*</tt> for the second statement refers to the
+actual expression in the AST for <b><tt>(x > 2)</tt></b>. Thus
+pointers to subclasses of <tt>Expr</tt> can appear in the list of
+statements in a block, and not just subclasses of <tt>Stmt</tt> that
+refer to proper C statements.</p>
+
+<p>The terminator of block B4 is a pointer to the <tt>IfStmt</tt>
+object in the AST. The pretty-printer outputs <b><tt>if
+[B4.2]</tt></b> because the condition expression of the if-statement
+has an actual place in the basic block, and thus the terminator is
+essentially
+<i>referring</i> to the expression that is the second statement of
+block B4 (i.e., B4.2). In this manner, conditions for control-flow
+(which also include conditions for loops and switch statements) are
+hoisted into the actual basic block.</p>
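+
+<p>As a hedged sketch (the exact accessor spellings have varied across Clang
+versions; <tt>getTerminator()</tt> converting to a <tt>Stmt*</tt> is assumed
+here), a client can inspect a block's terminator and outgoing edges:</p>
+
+<pre>
+#include "clang/Analysis/CFG.h"
+#include "llvm/Support/raw_ostream.h"
+
+void inspectBranch(clang::CFGBlock *B) {
+  if (clang::Stmt *Term = B->getTerminator()) {
+    llvm::errs() &lt;&lt; "terminator: " &lt;&lt; Term->getStmtClassName() &lt;&lt; "\n";
+    for (clang::CFGBlock::succ_iterator I = B->succ_begin(),
+                                        E = B->succ_end(); I != E; ++I)
+      llvm::errs() &lt;&lt; "  successor: B" &lt;&lt; (*I)->getBlockID() &lt;&lt; "\n";
+  }
+}
+</pre>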
+
+<!-- ===================== -->
+<!-- <h4>Implicit Control-Flow</h4> -->
+<!-- ===================== -->
+
+<!--
+<p>A key design principle of the <tt>CFG</tt> class was to not require
+any transformations to the AST in order to represent control-flow.
+Thus the <tt>CFG</tt> does not perform any "lowering" of the
+statements in an AST: loops are not transformed into guarded gotos,
+short-circuit operations are not converted to a set of if-statements,
+and so on.</p>
+-->
+
+
+<!-- ======================================================================= -->
+<h3 id="Constants">Constant Folding in the Clang AST</h3>
+<!-- ======================================================================= -->
+
+<p>There are several places where constants and constant folding matter a lot to
+the Clang front-end. First, in general, we prefer the AST to retain the source
+code as close to how the user wrote it as possible. This means that if they
+wrote "5+4", we want to keep the addition and two constants in the AST, we don't
+want to fold to "9". This means that constant folding in various ways turns
+into a tree walk that needs to handle the various cases.</p>
+
+<p>However, there are places in both C and C++ that require constants to be
+folded. For example, the C standard defines what an "integer constant
+expression" (i-c-e) is with very precise and specific requirements. The
+language then requires i-c-e's in a lot of places (for example, the size of a
+bitfield, the value for a case statement, etc). For these, we have to be able
+to constant fold the constants, to do semantic checks (e.g. verify bitfield size
+is non-negative and that case statements aren't duplicated). We aim for Clang
+to be very pedantic about this, diagnosing cases when the code does not use an
+i-c-e where one is required, but accepting the code unless running with
+<tt>-pedantic-errors</tt>.</p>
+
+<p>Things get a little bit more tricky when it comes to compatibility with
+real-world source code. Specifically, GCC has historically accepted a huge
+superset of expressions as i-c-e's, and a lot of real world code depends on this
+unfortunate accident of history (including, e.g., the glibc system headers). GCC
+accepts anything its "fold" optimizer is capable of reducing to an integer
+constant, which means that the definition of what it accepts changes as its
+optimizer does. One example is that GCC accepts things like "case X-X:" even
+when X is a variable, because it can fold this to 0.</p>
+
+<p>Another issue is how constants interact with the extensions we support, such
+as __builtin_constant_p, __builtin_inf, __extension__ and many others. C99
+obviously does not specify the semantics of any of these extensions, and the
+definition of i-c-e does not include them. However, these extensions are often
+used in real code, and we have to have a way to reason about them.</p>
+
+<p>Finally, this is not just a problem for semantic analysis. The code
+generator and other clients have to be able to fold constants (e.g. to
+initialize global variables) and have to handle a superset of what C99 allows.
+Further, these clients can benefit from extended information. For example, we
+know that "foo()||1" always evaluates to true, but we can't replace the
+expression with true because it has side effects.</p>
+
+<!-- ======================= -->
+<h4>Implementation Approach</h4>
+<!-- ======================= -->
+
+<p>After trying several different approaches, we've finally converged on a
+design (Note, at the time of this writing, not all of this has been implemented,
+consider this a design goal!). Our basic approach is to define a single
+recursive evaluation method (<tt>Expr::Evaluate</tt>), which is
+implemented in <tt>AST/ExprConstant.cpp</tt>. Given an expression with 'scalar'
+type (integer, fp, complex, or pointer) this method returns the following
+information:</p>
+
+<ul>
+<li>Whether the expression is an integer constant expression, a general
+ constant that was folded but has no side effects, a general constant that
+ was folded but that does have side effects, or an uncomputable/unfoldable
+ value.
+</li>
+<li>If the expression was computable in any way, this method returns the APValue
+ for the result of the expression.</li>
+<li>If the expression is not evaluatable at all, this method returns
+ information on one of the problems with the expression. This includes a
+ SourceLocation for where the problem is, and a diagnostic ID that explains
+ the problem. The diagnostic should have ERROR type.</li>
+<li>If the expression is not an integer constant expression, this method returns
+ information on one of the problems with the expression. This includes a
+ SourceLocation for where the problem is, and a diagnostic ID that explains
+ the problem. The diagnostic should have EXTENSION type.</li>
+</ul>
+
+<p>This information gives various clients the flexibility that they want, and we
+will eventually have some helper methods for various extensions. For example,
+Sema should have a <tt>Sema::VerifyIntegerConstantExpression</tt> method, which
+calls Evaluate on the expression. If the expression is not foldable, the error
+is emitted, and it would return true. If the expression is not an i-c-e, the
+EXTENSION diagnostic is emitted. Finally it would return false to indicate that
+the AST is ok.</p>
+
+<p>Other clients can use the information in other ways, for example, codegen can
+just use expressions that are foldable in any way.</p>
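+
+<p>As a hedged illustration (the evaluator's entry points have shifted over
+time; <tt>EvaluateAsInt</tt> and <tt>isIntegerConstantExpr</tt> are assumed
+here as representative helpers, and the function name is made up), a client
+might distinguish "foldable" from "strict i-c-e" like this:</p>
+
+<pre>
+#include "clang/AST/Expr.h"
+#include "llvm/Support/raw_ostream.h"
+
+bool isFoldableToInt(const clang::Expr *E, const clang::ASTContext &amp;Ctx) {
+  llvm::APSInt Result;
+  if (!E->EvaluateAsInt(Result, Ctx))
+    return false;                          // not foldable at all
+  if (!E->isIntegerConstantExpr(Ctx))
+    llvm::errs() &lt;&lt; "folds, but is not a strict i-c-e\n";
+  return true;
+}
+</pre>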
+
+<!-- ========== -->
+<h4>Extensions</h4>
+<!-- ========== -->
+
+<p>This section describes how some of the various extensions Clang supports
+interact with constant evaluation:</p>
+
+<ul>
+<li><b><tt>__extension__</tt></b>: The expression form of this extension causes
+ any evaluatable subexpression to be accepted as an integer constant
+ expression.</li>
+<li><b><tt>__builtin_constant_p</tt></b>: This returns true (as an integer
+ constant expression) if the operand evaluates to either a numeric value
+ (that is, not a pointer cast to integral type) of integral, enumeration,
+ floating or complex type, or if it evaluates to the address of the first
+ character of a string literal (possibly cast to some other type). As a
+ special case, if <tt>__builtin_constant_p</tt> is the (potentially
+ parenthesized) condition of a conditional operator expression ("?:"), only
+ the true side of the conditional operator is considered, and it is evaluated
+ with full constant folding.</li>
+<li><b><tt>__builtin_choose_expr</tt></b>: The condition is required to be an
+ integer constant expression, but we accept any constant as an "extension of
+ an extension". This only evaluates one operand depending on which way the
+ condition evaluates.</li>
+<li><b><tt>__builtin_classify_type</tt></b>: This always returns an integer
+ constant expression.</li>
+<li><b><tt>__builtin_inf,nan,..</tt></b>: These are treated just like a
+ floating-point literal.</li>
+<li><b><tt>__builtin_abs,copysign,..</tt></b>: These are constant folded as
+ general constant expressions.</li>
+<li><b><tt>__builtin_strlen</tt></b> and <b><tt>strlen</tt></b>: These are
+ constant folded as integer constant expressions if the argument is a string
+ literal.</li>
+</ul>
+
+
+<!-- ======================================================================= -->
+<h2 id="Howtos">How to change Clang</h2>
+<!-- ======================================================================= -->
+
+<!-- ======================================================================= -->
+<h3 id="AddingAttributes">How to add an attribute</h3>
+<!-- ======================================================================= -->
+
+<p>To add an attribute, you'll have to add it to the list of attributes, add it
+to the parsing phase, and look for it in the AST scan.
+<a href="http://llvm.org/viewvc/llvm-project?view=rev&amp;revision=124217">r124217</a>
+has a good example of adding a warning attribute.</p>
+
+<p>(Beware that this hasn't been reviewed/fixed by the people who designed the
+attributes system yet.)</p>
+
+<h4><a
+href="http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Basic/Attr.td?view=markup">include/clang/Basic/Attr.td</a></h4>
+
+<p>Each attribute gets a <tt>def</tt> inheriting from <tt>Attr</tt> or one of
+its subclasses. <tt>InheritableAttr</tt> means that the attribute also applies
+to subsequent declarations of the same name.</p>
+
+<p><tt>Spellings</tt> lists the strings that can appear in
+<tt>__attribute__((here))</tt> or <tt>[[here]]</tt>. All such strings
+will be synonymous. If you want to allow the <tt>[[]]</tt> C++11
+syntax, you have to define a list of <tt>Namespaces</tt>, which will
+let users write <tt>[[namespace::spelling]]</tt>. Using the empty
+string for a namespace will allow users to write just the spelling
+with no "<tt>::</tt>".</p>
+
+<p><tt>Subjects</tt> restricts the kinds of AST node to which this attribute
+can appertain (roughly, attach).</p>
+
+<p><tt>Args</tt> names the arguments the attribute takes, in order. If
+<tt>Args</tt> is <tt>[StringArgument&lt;"Arg1">, IntArgument&lt;"Arg2">]</tt>
+then <tt>__attribute__((myattribute("Hello", 3)))</tt> will be a valid use.</p>
+
+<h4>Boilerplate</h4>
+
+<p>Add an element to the <tt>AttributeList::Kind</tt> enum in <a
+href="http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Sema/AttributeList.h?view=markup">include/clang/Sema/AttributeList.h</a>
+named <tt>AT_lower_with_underscores</tt>. That is, a CamelCased
+<tt>AttributeName</tt> in <tt>Attr.td</tt> should become
+<tt>AT_attribute_name</tt>.</p>
+
+<p>Add a case to the <tt>StringSwitch</tt> in <tt>AttributeList::getKind()</tt>
+in <a
+href="http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Sema/AttributeList.cpp?view=markup">lib/Sema/AttributeList.cpp</a>
+for each spelling of your attribute. Less common attributes should come toward
+the end of that list.</p>
+
+<p>Write a new <tt>HandleYourAttr()</tt> function in <a
+href="http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Sema/SemaDeclAttr.cpp?view=markup">lib/Sema/SemaDeclAttr.cpp</a>,
+and add a case to the switch in <tt>ProcessNonInheritableDeclAttr()</tt> or
+<tt>ProcessInheritableDeclAttr()</tt> forwarding to it.</p>
+
+<p>If your attribute causes extra warnings to fire, define a <tt>DiagGroup</tt>
+in <a
+href="http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Basic/DiagnosticGroups.td?view=markup">include/clang/Basic/DiagnosticGroups.td</a>
+named after the attribute's <tt>Spelling</tt> with "_"s replaced by "-"s. If
+you're only defining one diagnostic, you can skip <tt>DiagnosticGroups.td</tt>
+and use <tt>InGroup&lt;DiagGroup&lt;"your-attribute">></tt> directly in <a
+href="http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Basic/DiagnosticSemaKinds.td?view=markup">DiagnosticSemaKinds.td</a></p>
+
+<h4>The meat of your attribute</h4>
+
+<p>Find an appropriate place in Clang to do whatever your attribute needs to do.
+Check for the attribute's presence using <tt>Decl::getAttr&lt;YourAttr>()</tt>.</p>
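+
+<p>For example (using the existing <tt>annotate</tt> attribute purely as an
+illustration; a new attribute would get its own generated <tt>YourAttr</tt>
+class and be used the same way):</p>
+
+<pre>
+#include "clang/AST/Attr.h"
+#include "llvm/Support/raw_ostream.h"
+
+void checkAnnotation(clang::Decl *D) {
+  if (clang::AnnotateAttr *A = D->getAttr&lt;clang::AnnotateAttr>())
+    llvm::errs() &lt;&lt; "annotated with: " &lt;&lt; A->getAnnotation() &lt;&lt; "\n";
+}
+</pre>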
+
+<p>Update the <a href="LanguageExtensions.html">Clang Language Extensions</a>
+document to describe your new attribute.</p>
+
+<!-- ======================================================================= -->
+<h3 id="AddingExprStmt">How to add an expression or statement</h3>
+<!-- ======================================================================= -->
+
+<p>Expressions and statements are among the most fundamental constructs within a
+compiler, because they interact with many different parts of the AST,
+semantic analysis, and IR generation. Therefore, adding a new
+expression or statement kind into Clang requires some care. The following list
+details the various places in Clang where an expression or statement needs to be
+introduced, along with patterns to follow to ensure that the new
+expression or statement works well across all of the C languages. We
+focus on expressions, but statements are similar.</p>
+
+<ol>
+ <li>Introduce parsing actions into the parser. Recursive-descent
+ parsing is mostly self-explanatory, but there are a few things that
+ are worth keeping in mind:
+ <ul>
+ <li>Keep as much source location information as possible! You'll
+ want it later to produce great diagnostics and support Clang's
+ various features that map between source code and the AST.</li>
+ <li>Write tests for all of the "bad" parsing cases, to make sure
+ your recovery is good. If you have matched delimiters (e.g.,
+ parentheses, square brackets, etc.), use
+ <tt>Parser::BalancedDelimiterTracker</tt> to give nice diagnostics when
+ things go wrong.</li>
+ </ul>
+ </li>
+
+ <li>Introduce semantic analysis actions into <tt>Sema</tt>. Semantic
+ analysis should always involve two functions: an <tt>ActOnXXX</tt>
+ function that will be called directly from the parser, and a
+ <tt>BuildXXX</tt> function that performs the actual semantic
+ analysis and will (eventually!) build the AST node. It's fairly
+  common for the <tt>ActOnXXX</tt> function to do very little (often
+ just some minor translation from the parser's representation to
+ <tt>Sema</tt>'s representation of the same thing), but the separation
+ is still important: C++ template instantiation, for example,
+ should always call the <tt>BuildXXX</tt> variant. Several notes on
+ semantic analysis before we get into construction of the AST:
+ <ul>
+ <li>Your expression probably involves some types and some
+ subexpressions. Make sure to fully check that those types, and the
+ types of those subexpressions, meet your expectations. Add
+ implicit conversions where necessary to make sure that all of the
+ types line up exactly the way you want them. Write extensive tests
+ to check that you're getting good diagnostics for mistakes and
+ that you can use various forms of subexpressions with your
+ expression.</li>
+ <li>When type-checking a type or subexpression, make sure to first
+ check whether the type is "dependent"
+ (<tt>Type::isDependentType()</tt>) or whether a subexpression is
+ type-dependent (<tt>Expr::isTypeDependent()</tt>). If any of these
+ return true, then you're inside a template and you can't do much
+ type-checking now. That's normal, and your AST node (when you get
+ there) will have to deal with this case. At this point, you can
+ write tests that use your expression within templates, but don't
+ try to instantiate the templates.</li>
+ <li>For each subexpression, be sure to call
+ <tt>Sema::CheckPlaceholderExpr()</tt> to deal with "weird"
+ expressions that don't behave well as subexpressions. Then,
+ determine whether you need to perform
+ lvalue-to-rvalue conversions
+  (<tt>Sema::DefaultLvalueConversion</tt>) or
+ the usual unary conversions
+ (<tt>Sema::UsualUnaryConversions</tt>), for places where the
+ subexpression is producing a value you intend to use.</li>
+ <li>Your <tt>BuildXXX</tt> function will probably just return
+ <tt>ExprError()</tt> at this point, since you don't have an AST.
+ That's perfectly fine, and shouldn't impact your testing.</li>
+ </ul>
+ </li>
+
+  <li>Introduce an AST node for your new expression. This starts with
+  declaring the node in <tt>include/clang/Basic/StmtNodes.td</tt> and
+  creating a new class for your expression in the appropriate
+  <tt>include/clang/AST/Expr*.h</tt> header (a minimal skeleton of such a
+  class is sketched after this list). It's best to look at the class
+ for a similar expression to get ideas, and there are some specific
+ things to watch for:
+ <ul>
+ <li>If you need to allocate memory, use the <tt>ASTContext</tt>
+ allocator to allocate memory. Never use raw <tt>malloc</tt> or
+ <tt>new</tt>, and never hold any resources in an AST node, because
+ the destructor of an AST node is never called.</li>
+
+ <li>Make sure that <tt>getSourceRange()</tt> covers the exact
+ source range of your expression. This is needed for diagnostics
+ and for IDE support.</li>
+
+ <li>Make sure that <tt>children()</tt> visits all of the
+ subexpressions. This is important for a number of features (e.g., IDE
+ support, C++ variadic templates). If you have sub-types, you'll
+ also need to visit those sub-types in the
+ <tt>RecursiveASTVisitor</tt>.</li>
+
+ <li>Add printing support (<tt>StmtPrinter.cpp</tt>) and dumping
+ support (<tt>StmtDumper.cpp</tt>) for your expression.</li>
+
+ <li>Add profiling support (<tt>StmtProfile.cpp</tt>) for your AST
+ node, noting the distinguishing (non-source location)
+ characteristics of an instance of your expression. Omitting this
+ step will lead to hard-to-diagnose failures regarding matching of
+ template declarations.</li>
+ </ul>
+ </li>
+
+ <li>Teach semantic analysis to build your AST node! At this point,
+ you can wire up your <tt>Sema::BuildXXX</tt> function to actually
+ create your AST. A few things to check at this point:
+ <ul>
+ <li>If your expression can construct a new C++ class or return a
+ new Objective-C object, be sure to update and then call
+ <tt>Sema::MaybeBindToTemporary</tt> for your just-created AST node
+ to be sure that the object gets properly destructed. An easy way
+ to test this is to return a C++ class with a private destructor:
+ semantic analysis should flag an error here with the attempt to
+ call the destructor.</li>
+ <li>Inspect the generated AST by printing it using <tt>clang -cc1
+ -ast-print</tt>, to make sure you're capturing all of the
+ important information about how the AST was written.</li>
+ <li>Inspect the generated AST under <tt>clang -cc1 -ast-dump</tt>
+ to verify that all of the types in the generated AST line up the
+ way you want them. Remember that clients of the AST should never
+ have to "think" to understand what's going on. For example, all
+ implicit conversions should show up explicitly in the AST.</li>
+ <li>Write tests that use your expression as a subexpression of
+ other, well-known expressions. Can you call a function using your
+ expression as an argument? Can you use the ternary operator?</li>
+ </ul>
+ </li>
+
+  <li>Teach code generation to create IR for your AST node. This step
+ is the first (and only) that requires knowledge of LLVM IR. There
+ are several things to keep in mind:
+ <ul>
+ <li>Code generation is separated into scalar/aggregate/complex and
+ lvalue/rvalue paths, depending on what kind of result your
+ expression produces. On occasion, this requires some careful
+ factoring of code to avoid duplication.</li>
+
+ <li><tt>CodeGenFunction</tt> contains functions
+ <tt>ConvertType</tt> and <tt>ConvertTypeForMem</tt> that convert
+ Clang's types (<tt>clang::Type*</tt> or <tt>clang::QualType</tt>)
+ to LLVM types.
+  Use the former for values, and the latter for memory locations:
+ test with the C++ "bool" type to check this. If you find
+ that you are having to use LLVM bitcasts to make
+ the subexpressions of your expression have the type that your
+ expression expects, STOP! Go fix semantic analysis and the AST so
+ that you don't need these bitcasts.</li>
+
+ <li>The <tt>CodeGenFunction</tt> class has a number of helper
+ functions to make certain operations easy, such as generating code
+ to produce an lvalue or an rvalue, or to initialize a memory
+ location with a given value. Prefer to use these functions rather
+ than directly writing loads and stores, because these functions
+ take care of some of the tricky details for you (e.g., for
+ exceptions).</li>
+
+ <li>If your expression requires some special behavior in the event
+ of an exception, look at the <tt>push*Cleanup</tt> functions in
+ <tt>CodeGenFunction</tt> to introduce a cleanup. You shouldn't
+ have to deal with exception-handling directly.</li>
+
+ <li>Testing is extremely important in IR generation. Use <tt>clang
+ -cc1 -emit-llvm</tt> and <a
+ href="http://llvm.org/cmds/FileCheck.html">FileCheck</a> to verify
+ that you're generating the right IR.</li>
+ </ul>
+ </li>
+
+ <li>Teach template instantiation how to cope with your AST
+ node, which requires some fairly simple code:
+ <ul>
+ <li>Make sure that your expression's constructor properly
+ computes the flags for type dependence (i.e., the type your
+ expression produces can change from one instantiation to the
+ next), value dependence (i.e., the constant value your expression
+ produces can change from one instantiation to the next),
+ instantiation dependence (i.e., a template parameter occurs
+ anywhere in your expression), and whether your expression contains
+ a parameter pack (for variadic templates). Often, computing these
+ flags just means combining the results from the various types and
+ subexpressions.</li>
+
+ <li>Add <tt>TransformXXX</tt> and <tt>RebuildXXX</tt> functions to
+ the
+ <tt>TreeTransform</tt> class template in <tt>Sema</tt>.
+ <tt>TransformXXX</tt> should (recursively) transform all of the
+ subexpressions and types
+ within your expression, using <tt>getDerived().TransformYYY</tt>.
+ If all of the subexpressions and types transform without error, it
+ will then call the <tt>RebuildXXX</tt> function, which will in
+ turn call <tt>getSema().BuildXXX</tt> to perform semantic analysis
+ and build your expression.</li>
+
+ <li>To test template instantiation, take those tests you wrote to
+ make sure that you were type checking with type-dependent
+ expressions and dependent types (from step #2) and instantiate
+ those templates with various types, some of which type-check and
+ some that don't, and test the error messages in each case.</li>
+ </ul>
+ </li>
+
+ <li>There are some "extras" that make other features work better.
+ It's worth handling these extras to give your expression complete
+ integration into Clang:
+ <ul>
+ <li>Add code completion support for your expression in
+ <tt>SemaCodeComplete.cpp</tt>.</li>
+
+ <li>If your expression has types in it, or has any "interesting"
+ features other than subexpressions, extend libclang's
+ <tt>CursorVisitor</tt> to provide proper visitation for your
+ expression, enabling various IDE features such as syntax
+ highlighting, cross-referencing, and so on. The
+ <tt>c-index-test</tt> helper program can be used to test these
+ features.</li>
+ </ul>
+ </li>
+</ol>
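+
+<p>To tie several of the AST-node requirements from step 3 together, here is a
+minimal, <i>hypothetical</i> skeleton of such a class. Every name containing
+"MyNew" is made up, and the exact <tt>Expr</tt> constructor arguments reflect
+the interface at the time of writing, so treat this as a sketch rather than a
+template to copy verbatim:</p>
+
+<pre>
+// Hypothetical AST node: ASTContext allocation, accurate source range, and
+// children() visitation, as described in step 3 above.
+class MyNewExpr : public Expr {
+  Stmt *SubExpr;
+  SourceLocation KWLoc, RParenLoc;
+
+  MyNewExpr(QualType T, Expr *Sub, SourceLocation KWLoc, SourceLocation RParenLoc)
+    : Expr(MyNewExprClass, T, VK_RValue, OK_Ordinary,
+           T->isDependentType(), Sub->isValueDependent(),
+           Sub->isInstantiationDependent(),
+           Sub->containsUnexpandedParameterPack()),
+      SubExpr(Sub), KWLoc(KWLoc), RParenLoc(RParenLoc) {}
+
+public:
+  // Allocate with the ASTContext allocator; never with plain operator new.
+  static MyNewExpr *Create(ASTContext &amp;C, QualType T, Expr *Sub,
+                           SourceLocation KWLoc, SourceLocation RParenLoc) {
+    return new (C) MyNewExpr(T, Sub, KWLoc, RParenLoc);
+  }
+
+  // Cover the full extent of the expression, for diagnostics and IDE support.
+  SourceRange getSourceRange() const { return SourceRange(KWLoc, RParenLoc); }
+
+  // Visit every subexpression.
+  child_range children() { return child_range(&amp;SubExpr, &amp;SubExpr + 1); }
+
+  static bool classof(const Stmt *S) {
+    return S->getStmtClass() == MyNewExprClass;
+  }
+};
+</pre>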
+
+</div>
+</body>
+</html>