Post-Silicon and Runtime Verification
for Modern Processors
Ilya Wagner • Valeria Bertacco
The growing complexity of modern processor designs and their shrinking produc-
tion schedules cause an increasing number of errors to escape into released products.
Many of these escaped bugs can have dramatic effects on the security and stabil-
ity of consumer systems, undermine the image of the manufacturing company and
cause substantial financial grief. Moreover, recent trends towards multi-core pro-
cessor chips, with complex memory subsystems and sometimes non-deterministic
communication delays, further exacerbate the problem with more subtle, yet more
devastating, escaped bugs. This worsening situation calls for high-efficiency and
high-coverage verification methodologies for systems under development, a goal
that is unachievable with today’s pre-silicon simulation and formal validation so-
lutions. In light of this, functional post-silicon validation and runtime verification
are becoming vitally important components of a modern microprocessor develop-
ment process. Post-silicon validation leverages orders of magnitude performance
improvements over pre-silicon simulation while providing very high coverage. Run-
time verification solutions augment the hardware with on-chip monitors and check-
ing modules that can detect erroneous executions in systems deployed in the field
and recover from them dynamically.
The purpose of this book is to present and discuss the state of the art in post-
silicon and runtime verification techniques: two very recent and fast growing trends
in the world of microprocessor design and verification. The first part of this book
begins with a high-level overview of the various verification activities that a proces-
sor is subjected to as it moves through its life-cycle, from architectural conception
to silicon deployment. When a chip is being designed, and before early hardware
prototypes are manufactured, the verification landscape is dominated by two main
groups of techniques: simulation-based validation and formal verification. Simula-
tion solutions leverage a model of the design’s structure, often written in specialized
hardware programming languages, and validate a design by providing input stimuli
to the model and evaluating its responses to those stimuli. Formal techniques, on the
other hand, treat a design as a mathematical description of its functionality and fo-
cus on proving a wide range of properties of its functional behavior. Unfortunately,
these two categories of validation methods are becoming increasingly inadequate
in coping with the complexity of modern multi-core systems. This is exactly where
post-silicon and runtime validation techniques, the primary scope of this book, can
lend a much needed hand.
Throughout the book we present a range of recent solutions in these two domains,
designed specifically to identify functional bugs located in different components of
a modern processor, from individual computational cores to the memory subsystem
and on-chip fabrics for inter-core communication. We transition into the second part
of the book by presenting mainstream post-silicon validation and test activities that
are currently being deployed in industrial development environments and outline
important performance bottlenecks of these techniques. We then present Reversi,
our proposed methodology to alleviate these bottlenecks in processor cores. Basic
principles of inter-core communication through shared memory are overviewed in
the following chapter, which also details new approaches to validation of commu-
nication invariants in silicon prototypes. We conclude the discussion of functional
post-silicon validation with a novel technique, targeted specifically at modern multi-
cores, called Dacota.
The recently proposed approaches to validation that we collected in part two
of this book have an enormous potential to improve verification performance and
coverage; however, there still is a chance that complex and subtle errors evade them
and escape into end-user silicon systems. Runtime solutions, the focus of the third
part of this work, are designed to address these situations and to guarantee that
a processor performs correctly even in presence of escaped design bugs without
degrading user experience. To better analyze these techniques we investigate the
taxonomy of escaped bugs reported for some of the processor designs available
today, and we also classify runtime approaches into two major groups: checker- and
patching-based. In the remainder of part three we detail several runtime verification
methods within both categories, first relating to individual cores and then to multi-
core systems. We conclude the book with a glance towards the future, discussing
modern trends in processor architecture and silicon technology, and their potential
impacts on the verification of upcoming designs.
Acknowledgements
We would like to acknowledge several people who made the writing of this book pos-
sible. First, and foremost, we express our gratitude to our colleagues, who worked
with us on the research presented in this book. In particular, we would like to thank
Professor Todd Austin, who was a vital member of our runtime verification research
and provided critical advice in many other projects. Andrew DeOrio has contributed
immensely to our post-silicon validation research and has helped us greatly in the
experimental evaluations of several other techniques. Both Todd and Andrew have
also worked tirelessly on the original development of these works and provided
valuable insights on the presentation of the material in this manuscript.
We would also like to thank many students working in the Advanced Com-
puter Architecture Lab, and, especially, all those who have devoted their research
to hardware verification: Kai-hui Chang, Stephen Plaza, Andrea Pellegrini, De-
bapriya Chatterjee and Rawan Abdel-Khalek have worked particularly closely with us,
among many others. Every day these individuals are relentlessly advancing the the-
ory and the practice of this exciting and challenging field. We also acknowledge all
of the faculty and staff at the Computer Science and Engineering Department of The
University of Michigan, as well as all engineers and researchers in academia and in-
dustry who provided us with valuable feedback on our work at conferences and
workshops, reviewed and critiqued our papers and published their findings, upon
which much of our research was built. These are truly the giants, on whose shoul-
ders we stand.
We also thank our families, who faithfully supported us throughout the years of
research that led to the publication of this book. Each and every one of them was
constantly ready to offer technical advice or a heartwarming consolation in difficult
times, and to celebrate the moments of our successes. Indeed, without their trust and
encouragement this writing would have been impossible.
Finally, we would like to acknowledge our editors, Mr. Alex Greene, Ms. Ciara
Vincent and Ms. Katelyn Chin from Springer, who worked closely with us on this
publication with truly angelic patience. Time and again, they encouraged us to con-
tinue writing and gave us valuable advice on many aspects of this book.
Chapter 1
VERIFICATION OF A MODERN PROCESSOR
Abstract. Over the past four decades, microprocessors have come to be a vital and
inseparable part of the modern world, becoming the digital brain of numerous elec-
tronic devices and gadgets that make today’s lifestyle possible. Processors are ca-
pable of performing computation at astonishingly high speeds and are extremely
integrated, occupying only a few square centimeters of silicon die. However, this
computational power comes at a price: the task of verifying a modern micropro-
cessor and guaranteeing the correctness of its operation is increasingly challenging,
even for the most established processor vendors. To deliver ever higher performance
to end-users, processor manufacturers are forced to design progressively more com-
plex circuits and employ immense verification teams to eliminate critical design
bugs in a timely manner. Unfortunately, too often size doesn’t seem to matter in
verification, as schedules continue to slip, and microprocessors find their way to the
marketplace with design errors. In this chapter we overview the life-cycle of a mi-
croprocessor, discuss the challenges of verifying these devices, and show examples
of hardware errors that have escaped into production silicon because of insufficient
validation, and their impact.
Over the past four decades microprocessors have permeated our world, ushering in
the digital age and enabling numerous technologies, without which today’s life style
would be all but impossible. Processors are microscopic circuits printed onto silicon
dies and consisting of hundreds of millions of transistors interconnected by wires.
What distinguishes microprocessors from other integrated circuits is their ability to
execute arbitrary software programs. In other words, processors make digital de-
vices programmable and flexible, so a single device can efficiently perform various
operations, depending on the program that is running on it. In our everyday activi-
ties we encounter and use these tiny devices hundreds of times, often without even
realizing it. Processors allow us to untether our phones from the wired network and
Conceptually, these verification approaches can be divided into three families, based on
where they intervene in a processor life-cycle: pre-silicon, post-silicon and runtime
verification solutions.
Pre-silicon techniques are heavily deployed in the early stages of a processor’s
design, before any silicon prototype of the device is available, and can be classified
as simulation-based or formal solutions. Simulation-based methods are the most
common approaches to locate design errors in microprocessors. Random instruc-
tion sequences are generated and fed into a detailed software model, also called a
hardware description of a design, results are computed by simulation of this model
and then checked for correctness. This approach is used extensively in the industry,
yet it suffers from a number of drawbacks. First, the simulation speed of the detailed
hardware description is several orders of magnitude slower than the actual proces-
sor’s performance. Therefore, only relatively short test sequences can be checked in
this phase; for instance, it is almost impossible to simulate an operating system boot
sequence, or the complete execution of a software application running on the pro-
cessor. More importantly, simulation-based validation is a non-exhaustive process:
the number of configurations and possible behaviors of a modern microprocessor is
too large to allow for the system to be fully checked in a reasonable time.
Another family of pre-silicon solutions, formal verification techniques, addresses the
non-exhaustive nature of simulation, using sophisticated mathematical derivations
to reason about a design. If all possible behaviors of the processor could be described
with mathematical formulas, then it would be possible to prove the correctness of
the device’s operation as a theorem. In practice, in the best scenarios it is possible
to guarantee that a design will not exhibit a certain erroneous behavior, or that it
will never produce a result that differs from a known-correct reference model. The
primary drawback of formal techniques, however, is that they are far from capable
of dealing with the complexity of modern designs, and thus their use is limited to
only a few, small components within the processor.
After a microprocessor is sufficiently verified at the pre-silicon stage, a proto-
type is manufactured and subjected to post-silicon validation where tests can, for
the first time, be executed on the actual hardware. The key advantage of post-silicon
validation is that its raw performance enables significantly faster verification than
pre-silicon software-based simulation, and thus it can deliver much better correct-
ness guarantees. Unfortunately, post-silicon validation presents a challenge when
it comes to detection and diagnosis of errors because of the limited observability
provided by this technique, since at this stage it is impossible to monitor the internal
components of the hardware prototype. Therefore, errors cannot be detected until
they generate an invalid result, or cause the system to hang. The limited observabil-
ity leads to extremely involved and time-consuming debugging procedures, with the
result that today post-silicon validation and debugging have become the single largest
cost factor for processor companies such as Intel.
Due to the limitations of pre- and post-silicon verification, and shrinking time-
lines for product delivery, processor manufacturers have started to accept the fact
that bugs do slip into production hardware and thus they are beginning to explore
runtime verification solutions that can repair a device directly at the customer’s site.
[Figure 1.1: the design and verification flow -- an architectural model is refined into a register-transfer-level model in an HDL, synthesized into a netlist, placed and routed, and taped out; pre-silicon verification covers the model stages, post-silicon validation covers the silicon prototype, and runtime verification covers the end-user system after release.]
Fig. 1.1 Modern microprocessor design and verification flow. In the pre-silicon phase an ar-
chitectural model, derived from the original design specification, is converted into an RTL imple-
mentation in a hardware description language (HDL). The RTL model can then be synthesized,
producing a design netlist. Place and route software calculates where individual logic gates and
wire connections should be placed on the silicon die and then a prototype of the processor is man-
ufactured. Once a prototype becomes available, it is subjected to post-silicon validation. Only after
the hardware is shown to be sufficiently stable in this validation phase is the processor released
and deployed in the market.
available, they can be inserted into a computer system for post-silicon validation
(as opposed to the pre-silicon verification that occurs before the tape-out). One of
the distinguishing features of post-silicon verification is its high performance: the
same performance as the final product, which is orders of magnitude higher than
simulation speeds in the pre-silicon domain. Typically, at this stage engineers try
to evaluate the hardware in real-life settings, such as booting operating sys-
tems, executing legacy programs, etc. The prototype is also subjected to additional
random tests, in an attempt to create a diverse pool of scenarios where differences
between the hardware and the architectural model can be identified to flag any re-
maining errors. When a bug is found at this stage, the RTL model is modified to
correct the issue and the design often must be manufactured again (this process is
called re-spin).
A processor design usually goes through several re-spins, as bugs are progres-
sively exposed and fixed in manufactured prototypes. Ultimately, the design is sta-
bilized and it can go into production. Unfortunately, due to the complexity of any
modern processor, it is impossible to exhaustively verify its behavior either in pre-
silicon or in post-silicon, thus subtle, but sometimes critical, bugs often slip through
all validation stages. Until recently, if a critical functional bug was exposed in end-
users’ hardware, manufacturers had no choice but to recall the device. Today
vendors are starting to develop measures to avoid such costly recalls and allow their
products to be patched in the field. Researchers in academia have also proposed
solutions to ensure correctness of processor operation with special on-die check-
ers. Patching- and checker-based techniques are cumulatively classified as runtime
verification approaches. In Chapter 2 we review the current techniques deployed in
pre-silicon, post-silicon and runtime phases in more detail, while in the remainder
of the book we concentrate on several new promising solutions in the two latter
domains.
fort, the engineering team had to resort to large server farms that processed batches
of tests around the clock.
Pre-silicon validation, however, was not limited to simulation: the Pentium 4 val-
idation project pioneered an extensive use of formal validation techniques for the
microprocessor domain. Yet, those were only applied to a few critical blocks, such
as the floating point unit and decoder logic, since the full design, comprising 42
million transistors, was too much for formal tools to handle. Thus, these mathe-
matical approaches were selectively directed towards blocks that had been sources
of errors in past designs. The effort paid off well, and several critical bugs were
exposed and fixed before tape-out. One of those had a probability of occurrence of
1 in 5×10^20 floating-point instructions, thus it was very likely that it would have
escaped a simulation-only verification methodology; its discovery averted a disaster
that could have been critical for the company.
While the “quality” of bugs caught by formal verification was high, the quantity
was fairly low (about 6% of total pre-silicon bugs), since only a few blocks were
targeted. Simulation-based validation with randomized tests at cluster-level, on the
other hand, provided for the majority of errors exposed (3,411 out of a total of 7,855),
due to its scalability and relatively high speed. To make randomized test generation
as effective as possible, engineers tracked 2.5 million unit-level types of behavior
inside the device and directed the testing sequences to cover as many of these con-
ditions as possible. Directed assembly-level tests (over 12,000 in total) also led to
a very high error discovery rate, accounting for more than 2,000 bugs. Note that a
number of additional issues were also exposed in testbenches and validation tools,
and had to be addressed during the verification process. These issues are not ac-
counted for in the reported 7,855 bugs. Post-silicon validation issues are also not
accounted for in the total. This phase of the design cycle of Pentium 4 was only
ten months long, yet during this time the device executed orders of magnitude more
cycles than in the three years of pre-silicon effort. Operating at speeds of 1 GHz and
up, these prototypes underwent testing at different temperatures and voltages, ran
scores of applications and random tests, and communicated with a range of periph-
eral devices. In addition to time and engineering efforts, post-silicon validation also
incurs high equipment costs: verification engineers commonly have to build and de-
bug specialized in-house testing and analysis platforms and purchase test-pattern
generators, optical probing machines and logic analyzers, with costs in the range of
hundreds of thousands of dollars.
The comprehensive report discussed in the previous section was compiled for a
processor designed in the late 1990s; nevertheless, it provides an accurate picture of
industrial-scale validation, which we can use as a baseline reference for outlining
future trends in verification. Since the early 1970s, Moore’s law has implacably
pushed device density up, and since the release of the first Pentium 4, proces-
[Table: example multi-core designs (fragment)]
UltraSPARC T1 Niagara     2005   90   300   378    8   crossbar
Intel Polaris prototype   2007   65   100   275   80   2D mesh
future designs is not expected to be greater (and will probably be worse) than that of
the Pentium 4, despite significant improvements in the simulation hardware hosts.
As the number of features grows, in some cases full-system simulation may become
infeasible. Likewise, the increasing capabilities of formal verification tools in the future
will be outpaced by the complexity of critical modules requiring formal analysis.
In this worsening situation, the total number of bugs in processor products and the
speed with which they are discovered in the field are rapidly increasing. Researchers
have already reported that the escaped-bug discovery rate in Core 2 Duo designs is three
times larger than that of the Pentium 4 design [CMA08]. Note that these are func-
tional errors that have evaded all validation efforts and found their way into the final
product, deployed in millions of computer systems world-wide. Thus, they entail a
very high risk of having a critical impact on the user base and the design house. It is
already clear that because of the expanding gap between complexity and verification
effort, in the future errors will continue to slip into silicon, potentially causing much
more damage than the infamous FDIV bug, which resulted in a $420 million loss for
Intel in the mid-1990s [Mar08]. For instance, as recently as 2007, an error in the trans-
lation look-aside buffer of the third level cache in the Phenom processor by AMD
forced the manufacturer to delay the market release by several months. Not only did
this delay the distribution of the product to the market, but it also created much nega-
tive publicity for the company and influenced the price of its stock [Val07]. From
this grim picture we draw the conclusion that new validation solutions are critically
needed to enable the continued evolution of microprocessor designs in the future.
This concern is also voiced in the International Technology Roadmap for Semicon-
ductors (ITRS), which states that “without major breakthroughs, verification will
be a non-scalable, show-stopping barrier to further progress in the semiconductor
industry.” ITRS also reports that there are no solutions available to provide high-
quality verification of integrated circuits and a sufficiently low rate of escapes beyond
the year 2016 [ITR07].
1.5 Summary
Microprocessors entered our world four decades ago and since that time have
played a vital role in our everyday life. Throughout these forty years, the capabilities
of these amazing devices have been increasing exponentially, and today processors
are capable of performing billions of computations per second, and executing multi-
ple software applications in parallel. This performance level was enabled by rapidly
growing complexity of integrated circuits, which, in turn, exacerbated the problem
of their verification. To validate modern processors, consisting of hundreds of mil-
lions of transistors, manufacturing companies are forced to employ large verification
teams, develop new validation technology and invest into costly testing and analysis
equipment. Traditionally, this verification activity is broken down into three major
steps: pre-silicon verification, post-silicon validation, and runtime verification. The
former two are conducted internally by the vendors on a software model of the de-
vice and a silicon prototype, respectively, and are primarily carried out through
execution of test sequences. However, as we discussed in this chapter, the com-
plexity of modern processors prohibits their exhaustive verification, and thus errors
often slip into final products, causing damage to both end-users and vendor com-
panies. To protect hardware deployed in the field from such errors, researchers have
recently started to propose techniques for runtime verification and to rethink post-
silicon validation strategies to derive better quality of results from them. Some of
the emerging techniques in these domains will be presented in detail in the later
chapters of this book, however, first, we must take a deeper look into a traditional
processor verification cycle to understand advantages and limitations of each of its
steps.
References
[Ben01] Bob Bentley. Validating the Intel® Pentium® 4 microprocessor. In DAC, Pro-
ceedings of the Design Automation Conference, pages 224–228, June 2001.
[Ben05] Bob Bentley. Validating a modern microprocessor. In CAV, Proceedings of the
International Conference on Computer Aided Verification, pages 2–4, July 2005.
[CMA08] Kypros Constantinides, Onur Mutlu, and Todd Austin. Online design bug detection:
RTL analysis, flexible mechanisms, and evaluation. In MICRO, Proceedings of the
International Symposium on Microarchitecture, pages 282–293, November 2008.
[Int04] Intel Corporation. Intel® Pentium® Processor Invalid Instruction Erratum
Overview, July 2004. http://www.intel.com/support/processors/
pentium/sb/cs-013151.htm.
[ITR07] International Technology Roadmap for Semiconductors executive summary, 2007.
http://www.itrs.net/Links/2007ITRS/Home2007.htm.
[Kas08] Kris Kaspersky. Remote code execution through Intel CPU bugs. In HITB, Pro-
ceedings of Hack In The Box Conference, October 2008.
[Mar08] John Markoff. Burned once, Intel prepares new chip fortified by constant tests.
New York Times, November 2008. http://www.nytimes.com/2008/11/
17/technology/companies/17chip.html.
[Val07] Theo Valich. AMD delays Phenom 2.4 GHz due to TLB errata. The In-
quirer, November 2007. http://www.theinquirer.net/inquirer/
news/995/1025995/amd-delays-phenom-ghz-due-tlb.
Chapter 2
THE VERIFICATION UNIVERSE
Abstract. In this chapter we take the reader through a typical microprocessor’s life-
cycle, from its first high-level specification to a finished product deployed in an end-
user’s system, and overview the verification techniques that are applied at each step
of this flow. We first discuss pre-silicon verification, the process of validating a
model of the processor at various levels of abstraction, from an architectural speci-
fication to a gate-level netlist. Throughout the pre-silicon phase, two main families
of techniques are commonly used: formal methods and simulation-based solutions.
While the former provide mathematical guarantees of design correctness, the lat-
ter are significantly more scalable and, consequently, are more commonly used in
the industry today. After the first few prototypes of a processor are manufactured,
validation enters the post-silicon domain, where tests can run on the actual silicon
hardware. The raw performance of in-hardware execution is one of the major advan-
tages of post-silicon validation, while lack of internal observability and limited de-
buggability are its main drawbacks. To alleviate this, designers often augment their
creations with special features for silicon state acquisition, which we review here.
After an arduous process of pre- and post-silicon validation, the device is released to
the market and finds its way into a final system. Yet, it may still contain subtle bugs,
which could not be exposed earlier by designers due to very compressed production
timelines. To combat these escaped errors, vendors and researchers in industry and
academia have begun investigating alternative dynamic verification techniques: with
minimal impact on the processor’s performance, these solutions monitor its health
and invoke specialized correction mechanisms when errors manifest at runtime. As
we show in this chapter, all three phases of verification, pre-silicon, post-silicon and
runtime, have their unique advantages and limitations, which must be taken into ac-
count by design houses to attain sufficient verification coverage within their time
and cost budgets and to avoid major catastrophes caused by releasing faulty proces-
sor products to the commercial market.
ual blocks and the entire system will be verified, how bugs will be diagnosed and
tracked, and how verification thoroughness, also known as coverage, will be mea-
sured. As with the specification, the test plan is not created once and forever, but
rather evolves and morphs, as designers add, modify and refine the processor’s features.
At the same time, with the specification at hand, engineers may begin implement-
ing the design at the architectural (or ISA) level, describing the way the processor
interacts with the rest of the computer system. For instance, they determine the for-
mat of all supported instructions, interrupts, communication protocols, etc., but do
not concern themselves with the details of the inner structure of the design. This is
somewhat similar to devising the API of a software application without detailed de-
scriptions of individual functions. Note that operating systems, user-level programs
and many hardware components in a computing platform interact with the proces-
sor at the ISA level, and for the most part, need not be aware of its internals. In the
end, the specification is transformed into an architectural simulator of the processor,
which is typically a program written in a high-level programming language. As an
example, Simics [MCE+02] and Bochs [Boc07] are fairly popular architectural sim-
ulators. An architectural simulator enables the engineers to evaluate the high level
functionality of the design by simulating applications and it also provides early es-
timations of its performance. The latter, however, are very approximate, since at
this point the exact latency of various operations is unknown. More importantly,
at this stage of the development, the architectural simulator becomes the embodi-
ment of the specification and in later verification stages it is referred to as a golden
model of the design, to which detailed implementations (netlist-level and circuit-
level) can be efficiently compared. The architectural simulator can then be refined
into a microarchitectural description , where the detailed functionality of individual
sub-modules of the processor is specified (examples of microarchitectural simula-
tors are SimpleScalar [BA08] and GEMS [MSB+ 05]). With a microarchitectural
simulator designers define the internal behavior of the processor and characterize
performance-oriented features such as pipelining, out-of-order execution, branch
prediction, etc. The microarchitectural description, however, does not specify how
these blocks will be implemented in silicon and is usually also written in a high-level
language, such as C, C++ or SystemC. Nevertheless, with the detailed information
on instruction latency and throughput now available, the performance of the device
can be benchmarked through simulation of software applications and other tests.
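As a concrete illustration of what the architectural level means here, the short sketch below (in Python, with an instruction set, register count and encoding invented purely for illustration) models a processor only through its architecturally visible state -- registers and memory -- with no pipelines, caches or timing, which is exactly the level of detail a golden model needs.

    # Toy architectural (ISA-level) simulator: a hypothetical three-instruction machine.
    # Architectural state is just a register file and a memory; no micro-architecture.
    def run_program(program, memory):
        regs = [0] * 8
        for op, a, b, c in program:
            if op == "addi":                      # regs[a] = regs[b] + immediate c
                regs[a] = (regs[b] + c) & 0xFFFFFFFF
            elif op == "load":                    # regs[a] = memory[regs[b] + c]
                regs[a] = memory.get(regs[b] + c, 0)
            elif op == "store":                   # memory[regs[b] + c] = regs[a]
                memory[regs[b] + c] = regs[a]
        return regs, memory

    # Load a word, add 5 to it, store it back.
    prog = [("load", 1, 0, 0), ("addi", 2, 1, 5), ("store", 2, 0, 0)]
    print(run_program(prog, {0: 37}))             # memory location 0 becomes 42

Because only architecturally visible state is modeled, the same kind of interpreter can later serve as the reference against which RTL simulations or silicon prototypes are checked.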
[Figure 2.1: a simulation-based verification framework -- directed tests, a random test generator and a regression suite feed a logic simulator containing the test environment and the design under test (an RTL module such as CPU_DUT); the simulation results are compared against the golden model (e.g., a PC mismatch at cycle 2765), inspected as waveforms, checked by assertions and checkers (e.g., an assertion violated at cycle 83), and used to collect coverage metrics such as code and branch coverage.]
Fig. 2.1 A typical framework for simulation-based verification. Inputs to a logic simulator are
typically manually-written directed tests and/or randomized sequences produced automatically by
a pseudo-random test generator. Tests are fed into a logic simulator, which, in turn, uses them as
stimuli to the integrated test environment and design description. The test environment emulates
the behavior of blocks surrounding the design under test. The simulator computes the outputs and
internal signal values of the design from the test’s inputs. These outputs can then be analyzed with
a variety of tools: they can be compared with the output of a golden architectural model, can be
viewed as waveforms, particularly for error diagnosis, can be monitored by assertions and checkers
to detect violations of invariants in the design behavior and, finally, can be used to track coverage,
thus evaluating the thoroughness of the test.
dedicated languages for hardware verification (HVLs), some examples of which are
OpenVera [HKM01], the e language [HMN01] and SystemVerilog [IEE07]. The
latter implements a unified framework for both hardware design and verification,
providing an object-oriented programming model, a rich assertion specification lan-
guage, and even automatic coverage collection.
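The structure of Figure 2.1 can be condensed into a short sketch: generate stimulus, run it on a model of the design under test, compare against the golden model, and accumulate coverage. The adder models and the planted bug below are invented stand-ins for a real DUT and reference model; only the shape of the loop is the point.

    import random

    def golden_add(a, b):                    # golden (architectural) model
        return (a + b) & 0xFF

    def dut_add(a, b):                       # design-under-test stand-in with a planted carry bug
        result = (a + b) & 0xFF
        return result ^ 0x80 if (a + b) > 0xFF else result

    coverage = set()
    for test in range(1000):                 # constrained-random stimulus
        a, b = random.randrange(256), random.randrange(256)
        coverage.add(a + b > 0xFF)           # a trivial coverage point: was overflow exercised?
        if dut_add(a, b) != golden_add(a, b):
            print(f"mismatch for inputs {a}, {b} at test {test}")
            break
    print(f"coverage points hit: {len(coverage)} of 2")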
Formal verification encompasses a variety of techniques that have one common prin-
ciple: to prove with mathematical guarantees that the design abides to a certain
specification under all valid input sequences. If the design does not adhere to the
specification, the formal techniques should produce a counterexample execution se-
quence, which describes one way of violating the specification. To this end, the
design under verification and the specification must be represented as mathemati-
cal/logic formulas subjected to formal analysis. One of the most important advan-
tages of these solutions over simulation-based techniques is the ability to prove cor-
rectness for all legal stimuli sequences. As we described in the previous section,
only explicitly simulated behaviors of the design can be tested for correctness and,
consequently, only the presence of bugs can be established. Formal verification ap-
proaches, however, can reason about absence of errors in a design, without the need
to exhaustively check all of its behaviors one by one. The body of work in the field
of formal verification is immense and diverse: for decades researchers in industry
and academia have been developing several families of solutions and algorithms,
which are all too numerous to be fully discussed in this book. Fortunately for the
readers, sources such as [Ber05, PF05, PBG05, BAWR07, WGR05, CGP08, KG99]
and many others, describe the solutions in this domain in much greater detail. In this
book, we overview some of the most notable techniques in the field in the hope of
stirring the readers’ curiosity to investigate this research field further. However, before
we begin this survey, we first must take a look at two main computation engines,
which empower a large fraction of formal methods, namely SAT solvers and binary
decision diagrams (BDDs).
SAT is a short-hand notation for Boolean satisfiability, which is the classic the-
oretical computer science NP-complete problem of determining if there exists an
assignment of Boolean variables that evaluates a given Boolean formula to true, or
showing that no such assignment exists. Therefore, given Boolean formulas describ-
ing a logic design and a property to be verified, a SAT-based algorithm constructs
an instance of a SAT problem in which a satisfying assignment of variables rep-
resents a violation of the property. For example, given a design that arbitrates bus
accesses between two masters, one can use a SAT-solver to prove that it will never
assert both grant lines at the same time. Boolean satisfiability can also be applied to
sequential circuits, which are in this case “unrolled” into a larger design by replica-
tion of the combinational logic part and elimination of the internal state elements.
Engineers can then check if certain erroneous states can be reached in this “un-
rolled” design or if invariants of execution are always satisfied. As we will describe
in the subsequent section, SAT techniques can also be used for equivalence check-
ing, i.e., establishing if two representations of a circuit behave the same way for
all input sequences. Today, there is a variety of stand-alone SAT-solver applications
available: typically, they either implement a variation of the Davis-Putnam algorithm
[DP60] as, for instance, GRASP [MSS99] and MiniSAT [ES03], or use stochastic
methods to search for satisfiable variable assignments as in WalkSAT [SKC95].
Heuristics and inference procedures used in these engines can dramatically improve
the performance of the solver; however, the satisfiability problem remains NP-
complete and, as of today, its solution can require time exponential in the size of
the input. As a result, solutions based on SAT solvers cannot be guaranteed
to provide results within reasonable time. Furthermore, when handling sequential
designs, these techniques tend to dramatically increase the size of the SAT problem,
due to the aforementioned circuit “unrolling”, further exacerbating the problem.
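To make the encoding step concrete, the sketch below checks the two-master arbiter property mentioned earlier on a toy model. The arbiter, its CNF encoding and the brute-force search are all invented for illustration (a real flow would hand the clauses to a solver such as MiniSAT); the essential point is the formulation: the property violation is conjoined with the design constraints, and unsatisfiability of the result proves the property.

    from itertools import product

    # CNF clauses in DIMACS-style integers: 1=req0, 2=req1, 3=grant0, 4=grant1.
    arbiter = [[-3, 1], [3, -1],                 # grant0 <-> req0 (fixed-priority arbiter)
               [-4, 2], [-4, -1], [4, -2, 1]]    # grant1 <-> req1 AND NOT req0
    violation = [[3], [4]]                       # both grant lines asserted at once

    def satisfiable(cnf, num_vars=4):
        # Brute-force stand-in for a SAT solver: try every assignment.
        for bits in product([False, True], repeat=num_vars):
            assign = {v + 1: bits[v] for v in range(num_vars)}
            if all(any(assign[abs(lit)] == (lit > 0) for lit in clause) for clause in cnf):
                return True
        return False

    # UNSAT means no request pattern can ever assert both grants: the property holds.
    print("property violated" if satisfiable(arbiter + violation) else "property proven")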
[Figure 2.2: (a) the full decision tree and (b) the reduced ordered binary decision diagram for the function f = A|(B&C).]
Fig. 2.2 Binary decision diagrams. a. A full-size decision tree for the logic function f =
A|(B&C): each layer of nodes represents a logic variable and edges represent assignments to the
variable. In the figure, solid edges represent a 1 assignment, while dashed edges represent a 0 value
of the variable. The leaf nodes of the tree are either of 1- or 0-type and represent the value that the
entire Boolean expression assumes for the assignment in the corresponding path from root to leaf. Note that
the size of the full-size decision tree is exponential in the number of variables in the formula. b.
Reduced ordered binary decision diagram for the logic function f = A|(B&C). In this data structure
redundant nodes are removed, creating a compact representation for the same Boolean expression.
Binary decision diagrams (BDDs), the second main computational engine used
in formal verification, are data structures to represent Boolean functions [Bry86].
BDDs are acyclic directed graphs: each node represents one variable in the formula
and has two outgoing edges, one for each possible variable assignment – 0 and 1
(see Figure 2.2). There are two types of terminal (also called “leaf”) nodes in the
graph, 0 and 1, which correspond to the value assumed by the entire function for
a given variable assignment. Thus, a path from the root of the graph to a leaf node
corresponds to a variable assignment. The corresponding leaf node at the end of the
path represents the value that the logic function assumes for the assignment. BDDs
are reduced to contain fewer nodes and edges and, as a result, can often represent
complex Boolean functions compactly [Bry86, BRB90]. An example of this is il-
lustrated in Figure 2.2.b, where redundant nodes and edges in the complete binary
decision tree for the function f = A|(B&C) are removed, creating a structure that is
linear in the number of variables. BDD software packages, such as CUDD
[Som09], contain routines that allow fast and efficient manipulation of BDDs and
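The reduction idea can be sketched in a few lines: functions are built by Shannon expansion over a fixed variable order, and a unique table ensures that identical sub-functions share a single node, which is what collapses the exponential decision tree of Figure 2.2.a into the compact diagram of Figure 2.2.b. This toy is purely didactic and bears no relation to the algorithms inside a production package such as CUDD.

    # Toy reduced ordered BDD: nodes are (variable, low-child, high-child) triples.
    unique = {}                                # unique table enforcing node sharing
    ZERO, ONE = "0", "1"

    def mk_node(var, low, high):
        if low == high:                        # redundant test: skip the node entirely
            return low
        return unique.setdefault((var, low, high), (var, low, high))

    def build(expr, order, env=None):
        # Shannon expansion of a Python boolean expression string over 'order'.
        env = env or {}
        if not order:
            return ONE if eval(expr, {}, env) else ZERO
        var, rest = order[0], order[1:]
        low = build(expr, rest, {**env, var: False})
        high = build(expr, rest, {**env, var: True})
        return mk_node(var, low, high)

    build("A or (B and C)", ["A", "B", "C"])
    print("internal BDD nodes:", len(unique))  # 3, versus 7 in the full decision tree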
The reachability analysis problem has been mentioned before and consists of char-
acterizing the set of states that a design can attain during execution. To this end the
combinational function of the design is typically represented as a binary decision
diagram and the reset state of the circuit is used to initialize the “reached set”. Then
BDD manipulation functions are used to compute all states that the system can at-
tain after one clock cycle of execution, which are then added to the reached set.
The process continues until the host runs out of memory or the reached set stops in-
creasing, in which case the obtained set contains all states that a system can achieve
throughout its execution and can subsequently be analyzed for the absence of errors and
the satisfaction of runtime invariants. Reachability analysis is a very powerful verification
tool, although one must keep in mind that this technique, as other BDD-based solu-
tions, suffers from the memory explosion problem and has limited scalability. See
also publications such as [CBM89, RS95] for more information on this approach.
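The fixed-point iteration itself is simple; what the BDD representation adds is the ability to manipulate enormous state sets implicitly. The sketch below performs the same computation explicitly, state by state, on a toy machine whose transition function is invented for the example, and then checks a simple invariant on the resulting reached set.

    def image(state):
        # One-step successors of a toy 3-bit state machine (invented transition function).
        return {(state + 2) % 8}

    reached = {0}                          # the reset state initializes the reached set
    frontier = {0}
    while frontier:                        # iterate until the reached set stops growing
        successors = set()
        for s in frontier:
            successors |= image(s)
        frontier = successors - reached
        reached |= frontier

    print("reached set:", sorted(reached))            # [0, 2, 4, 6]
    print("error state 7 reachable?", 7 in reached)   # invariant checked on the fixed point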
Symbolic simulation [CJB79, Bry85, BBB+ 87, BBS90, Ber05] has many similari-
ties with logic simulation, in that output functions are computed based on input val-
ues. The main difference here is that inputs are now symbolic Boolean variables and,
consequently, outputs are Boolean functions. For instance, the symbolic simulation
of the small logic block in Figure 2.3 produces the Boolean expression A&(B|C) for
the output, which we show as the BDD depicted at the output wire. Expressions for
the state of the design and the values of its outputs are updated at each simulation
cycle. The Boolean expressions altogether are a compact representation of all the
possible behaviors that the design can manifest, for all possible inputs, within the
simulated cycles. To find the response of the circuit for a concrete input sequence (a
sequence of 0s and 1s), one simply needs to evaluate the computed expressions with
appropriate values for the primary inputs. Consequently, symbolic simulation can
be used to compute a reached state set (within a fixed number of simulation cycles)
or prove a bounded time property. As with other solutions, however, the reliance on
BDDs brings the possibility of the simulator exhausting the memory resources of
the host before errors in the circuit can be identified.
[Figure 2.3: a small design with primary inputs A, B and C whose symbolic output is F = A & (B | C), shown as a binary decision diagram at the output wire.]
Fig. 2.3 An example of symbolic simulation. In symbolic simulation, outputs and internal design
nodes are expressed as Boolean functions of the primary inputs, which, in turn, are described by
Boolean variables. In the circuit shown in the figure, symbols A, B and C are applied to the primary
inputs and the output assumes the resulting expression A&(B|C). Functions in symbolic simulation
are typically represented by binary decision diagrams, as the one shown at the circuit’s output.
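The flavor of symbolic simulation can be conveyed with expressions kept as plain strings (a real symbolic simulator would keep them as BDDs): gates build output expressions rather than output bits, and a concrete response is obtained afterwards by evaluating the expression for a particular input assignment.

    # Gate models that construct expressions instead of computing bit values.
    def AND(a, b): return f"({a} and {b})"
    def OR(a, b):  return f"({a} or {b})"

    # Symbolic simulation of the circuit of Figure 2.3: out = A & (B | C).
    A, B, C = "A", "B", "C"                # primary inputs carry symbolic variables
    out = AND(A, OR(B, C))
    print("symbolic output:", out)         # (A and (B or C))

    # Concrete response for one input vector: evaluate the expression.
    print(eval(out, {}, {"A": True, "B": False, "C": True}))   # True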
As discussed above, modern digital logic designers have access to a wide va-
riety of high-quality formal tools that can be used for different types of analyses
and proofs. Yet all of them have limitations, requiring either deep knowledge of
declarative languages for specification of formal properties or being prone to ex-
hausting memory and time resources. Consequently, processor designers, who work
with extremely large and complex system descriptions, must continue to rely mostly
on logic simulation for verification of their products at the pre-silicon level. The
guarantees for correctness that formal techniques provide, however, have not been
overlooked by the microprocessor industry, and formal tools are often deployed to
verify the most critical blocks in modern processors, particularly control units. In
addition, researchers have designed methods to merge the power of formal analysis
with the scalability of simulation-based techniques, creating hybrid or semi-formal
solutions, such as the ones surveyed in [BAWR07]. Hybrid solutions use a variety
of techniques to leverage the scalability of simulation tools and the depth of analy-
sis of formal techniques to reach better coverage quality with a manageable amount
of resources. They often use a tight integration among several tools and make an
effort to have them exchange relevant information. Although hybrid solutions com-
bine the best of the formal and simulation worlds, their performance is still outpaced
by the growing complexity of processor designs and shortening production sched-
ules. Thus, the verification gap continues to grow and designs go into the silicon
prototype stage with latent bugs.
The RTL simulation and formal analysis described above are, perhaps, the most
important and effort-consuming steps in the entire processor design flow. After these
steps are completed, the register-transfer level description of the design must be
transformed and further refined before a prototype can be manufactured. The next
step in the design flow is logic synthesis, which automatically converts an RTL
description into a gate-level netlist (see [HS06] for more details of logic synthesis
algorithms and tools). To this end, expressions in the RTL source code are expanded
and mapped to structures of logic gates, based on the target library specific to the
manufacturing process to be used. These, in turn, are further transformed through
local and global optimizations in order to attain a design with better characteristics,
such as lower timing delay, smaller area, etc. Again, with the increasing level of
detail, the code base grows further and, correspondingly, the performance of the
circuit’s simulation is reduced. Formal techniques are often deployed in this stage
to check that transformations and optimizations applied during synthesis do not alter
the functionality of the design. This task is called equivalence checking and it is most
often deployed to prove the equivalence of the combinational portion of a circuit. In
contrast, sequential equivalence checking has not yet reached sufficient robustness
to be commonly deployed in the industry. The basis of combinational equivalence
checking relies on the construction of a miter circuit connecting two netlists from
before and after a given transformation. The corresponding primary inputs in both
netlists are tied together, while the corresponding outputs are connected through xor
gates, as shown in Figure 2.4. Several techniques can then be used to prove that
the miter circuit outputs 0 under all possible input stimuli, thus indicating that the
outputs of the two netlists are always identical when subjected to identical inputs.
Among these are BDD-based techniques, used in a fashion similar to symbolic
simulation, SAT solvers proving that the miter circuit cannot be satisfied (that is,
can never evaluate to 1), and graph isomorphism algorithms, which make it possible to
prune the size of the circuits whose equivalence must be proven.
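The miter construction is easy to illustrate on a pair of toy netlists; the two functions below (an original form and an "optimized" form that should be equivalent by distributivity) and the exhaustive enumeration are only stand-ins for real gate-level netlists and for the BDD- or SAT-based engines mentioned above.

    from itertools import product

    def netlist_a(a, b, c):                 # pre-synthesis version
        return (a and b) or (a and c)

    def netlist_b(a, b, c):                 # post-synthesis, "optimized" version
        return a and (b or c)

    def miter(a, b, c):
        # Corresponding inputs tied together, corresponding outputs xor-ed.
        return netlist_a(a, b, c) != netlist_b(a, b, c)

    # The two netlists are equivalent iff the miter output is 0 for every input vector.
    equivalent = not any(miter(*vec) for vec in product([False, True], repeat=3))
    print("netlists equivalent:", equivalent)          # True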
After the netlist is synthesized, optimized and validated, it must undergo the
place and route phase, where a specific location on silicon is identified for each
logic gate in the design and all connecting wires are routed among the placed gates
[AMS08]. Only after this phase is the processor finally described at the level of
individual transistors, and thus engineers can validate properties such as electrical
drive strength and timing with accurate SPICE (Simulation Program with Integrated
Circuit Emphasis) techniques. It is typical at this stage to only be able to analyze a
small portion of the design at a time, and to focus mostly on the critical paths through
the processor’s sub-modules.
[Figure 2.4: two netlists, Design A and Design B, share the primary inputs A, B, C and D; their corresponding outputs are xor-ed to form the miter output.]
Fig. 2.4 Miter circuit construction for equivalence checking. To check that two versions A and
B of a design implement the same combinational logic function, a miter is built by tying corre-
sponding primary inputs together and xor-ing corresponding primary outputs. Several techniques,
including formal verification engines such as Binary Decision Diagrams and SAT solvers, can then be
invoked to prove that the output of such a miter circuit can never evaluate to 1 under all
possible input combinations.
After the design is deemed sufficiently bug-free, that is, satisfactory coverage
levels are reached and the design is stable, the device is taped out, i.e., sent to the
fabrication facility to be manufactured in silicon. Once the first few prototypes are
available, the verification transitions into the post-silicon phase, which we describe
in the following section. It is important to remember, that the pre- and post-silicon
phases of processor validation are not disjoint ventures - much of the verification
collateral, generated in early design stages, can and should be reused for validation
of the manufactured hardware. For instance, random test generators, directed tests
and regression suites can be shared between the two. Moreover, in the case of ran-
domized tests, architectural simulators can be used to check the output of the actual
hardware prototypes for correctness. Finally, RTL simulation and emulation pro-
vide valuable support in silicon debugging. As we explain later, observability of the
design’s internal signals in post-silicon validation is extremely limited; therefore, it
is very difficult to diagnose the internal design conditions that manifest as an error.
To alleviate this, test sequences exposing bugs may be replayed in RTL simulation
or emulated, to narrow the region affected by the problem and to diagnose the root
cause of the error.
In summary, the pre-silicon verification of a modern processor is a complex and
arduous task, which requires deep understanding of the design and the capabilities
of validation tools, as well as good planning and management skills. After each de-
sign transformation, several important questions must be answered: what needs to
be verified? how do we verify it? and how do we know when verification is sat-
isfactory? Time is another important concern in this process, since designers must
meet stringent schedules, so their products remain competitive in the market. There-
fore, they must often forgo guarantees of full correctness for the circuit, and rely on
coverage and other indirect measures of validation thoroughness and completeness.
Exacerbating the process further is the fact that the design process is not a straight-
forward one - some modules and verification collaterals may be inherited from pre-
vious generations of the product, and some may be only available in a low level
description. For instance, performance-critical units may be first created at gate or
transistor level (so called full-custom design), and the corresponding RTL is gener-
ated later. Moreover, often the verification steps discussed in this section must be
revisited several times, e.g., when a change in one unit triggers modifications to oth-
er components. Thus, pre-silicon verification goes hand in hand with design, and it
is often hard to separate them from each other.
the device can also be evaluated using detailed transistor-level descriptions, as dis-
cussed in the previous section but, when using such models, these analyses can only
be conducted on fairly small portions of the design and on very short execution se-
quences, making it difficult to attain high-quality measures of electrical aspects.
For instance, the operational region of the device cannot be evaluated precisely in
pre-silicon and must be checked after the device is manufactured. The actual oper-
ational region of the design is compared to the requirements imposed by the speci-
fication, driven by the market sector targeted by this product. Processors for mobile
platforms, for example, usually must operate at lower voltages, to reduce power con-
sumption, while server processors, for which performance is paramount, can tolerate
higher temperatures, but also must run at much higher frequency. Other electrical
properties that are also typically checked at this stage include drive strength of the
I/O pins, power consumption, etc.
Most of the electrical defects targeted during post-silicon validation are first dis-
covered as functional errors. For instance, incorrectly sized transistors on the die
may result in unanticipated critical paths. Consequently, when the device is running
at high frequency, occasionally data will not propagate through these paths within a
single cycle, resulting in incorrect computation results. Similarly, jitter in the clock
signal may cause internal flip-flops to latch erroneous data values, or cross-talk be-
tween buses may corrupt messages in flight. Such bugs are frequently first found
as failures of test sequences to which the prototype is subjected. Designers proceed
then to investigate the nature of the bug: by executing the same test sequence on
other prototypes, they can determine if the bug is of an electrical or functional nature.
In fact, functional bugs will manifest on all prototypes, while electrical ones will
only occur in a fraction of the chips. Bugs that manifest only in a very small portion
of the prototypes are deemed to be due to manufacturing defects. Because each of
these issues is diagnosed with distinct methods, a correct classification is key to
shortening the diagnosis time.
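The classification heuristic just described -- functional bugs reproduce on essentially every part, electrical bugs on a fraction, manufacturing defects on isolated parts -- amounts to a simple triage rule over reproduction rates. The thresholds and failure counts below are purely illustrative.

    def classify_failure(failing_parts, total_parts):
        # Toy triage rule based on the fraction of prototypes that fail a given test.
        rate = failing_parts / total_parts
        if rate > 0.95:
            return "likely functional bug (reproduces on essentially all parts)"
        if rate > 0.05:
            return "likely electrical bug (reproduces on a fraction of parts)"
        return "likely manufacturing defect (isolated parts only)"

    for failing in (100, 30, 1):
        print(f"{failing}/100 parts fail -> {classify_failure(failing, 100)}")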
In debugging electrical defects, engineers try to first determine the boundaries
of the failure, i.e., discover the range of conditions that trigger the problem, so it
can be reproduced and analyzed in the future. To this end shmoo plots that depict
failure occurrences as a function of frequency and supply voltage are created. Typ-
ically, multiple shmoo plots for different temperature settings are created, adding
the third dimension to the failure region of the processor. This data can then be an-
alyzed for characteristic patterns of various bug types. For instance, failures at high
temperatures are strong indicators of transistor leakage and critical path violations,
while errors that occur at low temperatures are often due to race conditions, charge
sharing, etc. [JG04]. Designers then try to pinpoint the area of the circuit where
the error occurs by adjusting the operating parameters of individual sub-modules
with techniques such as on-die clock shrinks, clock skewing circuits [JPG01] and
optical probing, which relies on lasers to measure voltage across individual transis-
tors on the die [EWR00]. In optical probing, the silicon substrate on the back side
of the die is etched and infrared light is pulsed at a precise point of the die. The
silicon substrate is partially transparent to this wavelength, while doped regions of
transistors reflect the laser back. If electrical charge is present in these regions, the
power of the reflected light changes, allowing the engineers to measure the voltage.
Note that in this case, the top side of the processor, where multiple metal layers
reside, does not need to be physically etched or probed, so integrity of the die is not
violated. Unfortunately, the laser cannot get to the back side of the die through a
heat sink, which, therefore, must be removed around the sampling point. While pro-
viding good spatial and timing resolution, optical probing remains a very expensive
and often ineffective way of testing, since it requires sophisticated apparatus and en-
ables access to only a single location at a time. Finally, when the issue is narrowed
down to a small block, transistor-level simulation can be leveraged to establish the
root cause of the bug and determine ways to remedy it.
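A shmoo plot is nothing more than a pass/fail grid swept over operating conditions. The sketch below tabulates such a grid for an invented failure model in which the maximum working frequency shrinks as the supply voltage drops -- the classic signature of a speed-path (critical-path) problem.

    # Toy shmoo plot: '+' means the test passes, 'X' means it fails (invented model).
    def passes(freq_mhz, vdd):
        max_freq = 2000 * vdd / 1.2            # speed path scales with supply voltage
        return freq_mhz <= max_freq

    frequencies = list(range(1000, 2600, 200))
    print("Vdd\\MHz " + "".join(f"{f:6}" for f in frequencies))
    for vdd in (1.3, 1.2, 1.1, 1.0, 0.9):
        row = "".join(("     +" if passes(f, vdd) else "     X") for f in frequencies)
        print(f"{vdd:7} {row}")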
As we mentioned above, in addition to electrical bugs, two types of issues can
be discovered in manufactured prototypes: fabrication defects and functional errors,
which are the target of structural testing and functional post-silicon validation, re-
spectively. Although similar at first glance, these approaches have a very important
difference in that testing assumes that the pre-silicon netlist is functionally correct
and tries to establish whether each prototype faithfully and fully implements it. Validation,
on the other hand, checks whether the prototype's functionality adheres to the specifica-
tion, that is, whether the processor can properly execute software and correctly interact
with other components of a computer system. In the following section we provide an
overview of structural testing approaches and discuss silicon state acquisition techniques,
which are typically deployed in complex designs to improve testability. Incidentally, most
of these acquisition solutions can also be used in post-silicon validation, discussed
in Section 2.2.2.
[Schematic for Fig. 2.5: a D flip-flop augmented with a scan multiplexer; inputs Data and S_in, output S_out, controls S_en and Clock.]
Fig. 2.5 Scan flip-flop design. To insert a regular D flip-flop in a scan chain, the flip-flop is aug-
mented with a scan multiplexer, which selects the source of the data to be stored. During regular
operation, the S_en (scan enable) signal is de-asserted and the flip-flop stores bits from the Data
line. When scan enable is asserted, however, the flip-flop samples the scan-in (S_in) signal instead.
This allows designers to create a scan chain by connecting the scan-out (S_out) output to the scan-in
input of the next flip-flop in the chain. The last output in the chain is connected to a dedicated
circuit output port, so that the internal state of the system may be shifted out by asserting the S_en
signal and pulsing the clock. Likewise, since the S_in input of the first chain element is driven
from a circuit's primary input, engineers can quickly pre-set an arbitrary internal state in the de-
sign for testing and debugging purposes. Note that during scan chain operations the regular design
functionality is suspended.
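The shift mechanism summarized in the caption can be captured by a small behavioral model. The Python sketch below (not taken from the book) represents each scan flip-flop as one entry of a list: with scan enable de-asserted the flip-flops sample their Data lines, and with it asserted the chain shifts one bit toward S_out on every clock.

```python
# A behavioral sketch (illustrative only) of the scan chain in Fig. 2.5.
# Each flip-flop is one entry of a list; scan_enable selects between
# capturing functional data and shifting the chain one bit toward S_out.

class ScanChain:
    def __init__(self, length: int):
        self.state = [0] * length                    # current flip-flop contents

    def clock(self, data_in, scan_enable: bool, scan_in: int = 0) -> int:
        """Advance one clock cycle and return the bit appearing on S_out."""
        scan_out = self.state[-1]
        if scan_enable:
            # Shift mode: each flop samples its neighbor; the first flop
            # samples the chain's S_in from a primary input.
            self.state = [scan_in] + self.state[:-1]
        else:
            # Functional mode: every flop samples its own Data line.
            self.state = list(data_in)
        return scan_out

chain = ScanChain(4)
chain.clock(data_in=[1, 0, 1, 1], scan_enable=False)     # one functional cycle
dumped = [chain.clock(data_in=[0, 0, 0, 0], scan_enable=True) for _ in range(4)]
print(dumped)   # [1, 1, 0, 1]: the captured state, last flip-flop first
```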
[Schematic for Fig. 2.6: a primary flip-flop paired with a shadow flip-flop through capture and update multiplexers; inputs Data and S_in, output S_out, controls Capture, Update, Clock and S_clk.]
Fig. 2.6 Hold-scan flip-flop design. Hold-scan flip-flops provide the ability to overlap system ex-
ecution with scan activity. The component comprises two scan flip-flops: a primary and a shadow
flip-flop. The shadow flip-flop can capture and hold the state of the primary storage element.
Shadow elements are connected in a chain and can transmit the captured values without the need
to suspend regular system operation. Similarly, they can be used to load a system state, which is
then transferred to the primary flip-flops by asserting the update signal. Note that shadow flip-
flops operate on a separate clock (S_clk), so that the transmit frequency can be decoupled from the
system's operating frequency.
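The same style of behavioral model can illustrate the overlap that hold-scan cells provide: the shadow chain snapshots the primary state on capture, shifts out on its own clock while the design keeps running, and can push a loaded pattern back into the primary flops on update. The sketch below is again illustrative only and abstracts away all timing detail of the two clock domains.

```python
# A behavioral sketch (illustrative only) of the hold-scan chain in Fig. 2.6.
# The primary flops keep running on the functional clock while the shadow
# chain captures, shifts on S_clk, or updates the primary state independently.

class HoldScanChain:
    def __init__(self, length: int):
        self.primary = [0] * length     # functional state
        self.shadow = [0] * length      # snapshot / scan state

    def clock(self, data_in):
        """Functional clock edge: primary flops sample their Data lines."""
        self.primary = list(data_in)

    def capture(self):
        """Capture asserted: snapshot the primary state into the shadow flops."""
        self.shadow = list(self.primary)

    def shift(self, scan_in: int = 0) -> int:
        """S_clk edge in shift mode: move the shadow chain one bit toward S_out."""
        scan_out = self.shadow[-1]
        self.shadow = [scan_in] + self.shadow[:-1]
        return scan_out

    def update(self):
        """Update asserted: transfer the shadow contents into the primary flops."""
        self.primary = list(self.shadow)

hs = HoldScanChain(3)
hs.clock([1, 0, 0])                      # functional cycle
hs.capture()                             # snapshot without stopping the design
hs.clock([0, 1, 1])                      # the design keeps running...
print([hs.shift() for _ in range(3)])    # ...while [0, 0, 1] shifts out (last flop first)
```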