Post-Silicon and Runtime Verification
for Modern Processors

Ilya Wagner • Valeria Bertacco

Ilya Wagner
Platform Validation Engineering Group
Intel Corporation
Hillsboro, Oregon, USA
[email protected]

Valeria Bertacco
Department of Electrical Engineering and Computer Science
University of Michigan
Ann Arbor, Michigan, USA
[email protected]

ISBN 978-1-4419-8033-5 e-ISBN 978-1-4419-8034-2


DOI 10.1007/978-1-4419-8034-2
Springer New York Dordrecht Heidelberg London
© Springer Science+Business Media, LLC 2011
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street,
New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis.
Use in connection with any form of information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they
are not identified as such, is not to be taken as an expression of opinion as to whether or not they are
subject to proprietary rights.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


To my niece Ellie, who showed me
the miracle of learning.
Ilya Wagner

To all my students, who make working
in the field of verification such a rewarding
experience.
Valeria Bertacco
Preface

The growing complexity of modern processor designs and their shrinking produc-
tion schedules cause an increasing number of errors to escape into released products.
Many of these escaped bugs can have dramatic effects on the security and stabil-
ity of consumer systems, undermine the image of the manufacturing company and
cause substantial financial grief. Moreover, recent trends towards multi-core pro-
cessor chips, with complex memory subsystems and sometimes non-deterministic
communication delays, further exacerbate the problem with more subtle, yet more
devastating, escaped bugs. This worsening situation calls for high-efficiency and
high-coverage verification methodologies for systems under development, a goal
that is unachievable with today’s pre-silicon simulation and formal validation so-
lutions. In light of this, functional post-silicon validation and runtime verification
are becoming vitally important components of a modern microprocessor develop-
ment process. Post-silicon validation leverages orders of magnitude performance
improvements over pre-silicon simulation while providing very high coverage. Run-
time verification solutions augment the hardware with on-chip monitors and check-
ing modules that can detect erroneous executions in systems deployed in the field
and recover from them dynamically.
The purpose of this book is to present and discuss the state of the art in post-
silicon and runtime verification techniques: two very recent and fast growing trends
in the world of microprocessor design and verification. The first part of this book
begins with a high-level overview of the various verification activities that a proces-
sor is subjected to as it moves through its life-cycle, from architectural conception
to silicon deployment. When a chip is being designed, and before early hardware
prototypes are manufactured, the verification landscape is dominated by two main
groups of techniques: simulation-based validation and formal verification. Simula-
tion solutions leverage a model of the design’s structure, often written in specialized
hardware programming languages, and validate a design by providing input stimuli
to the model and evaluating its responses to those stimuli. Formal techniques, on the
other hand, treat a design as a mathematical description of its functionality and fo-
cus on proving a wide range of properties of its functional behavior. Unfortunately,
these two categories of validation methods are becoming increasingly inadequate

in coping with the complexity of modern multi-core systems. This is exactly where
post-silicon and runtime validation techniques, the primary scope of this book, can
lend a much needed hand.
Throughout the book we present a range of recent solutions in these two domains,
designed specifically to identify functional bugs located in different components of
a modern processor, from individual computational cores to the memory subsystem
and on-chip fabrics for inter-core communication. We transition into the second part
of the book by presenting mainstream post-silicon validation and test activities that
are currently being deployed in industrial development environments and outline
important performance bottlenecks of these techniques. We then present Reversi,
our proposed methodology to alleviate these bottlenecks in processor cores. Basic
principles of inter-core communication through shared memory are overviewed in
the following chapter, which also details new approaches to validation of commu-
nication invariants in silicon prototypes. We conclude the discussion of functional
post-silicon validation with a novel technique, targeted specifically at modern multi-
cores, called Dacota.
The recently proposed approaches to validation that we collected in part two
of this book have an enormous potential to improve verification performance and
coverage; however, there still is a chance that complex and subtle errors evade them
and escape into end-user silicon systems. Runtime solutions, the focus of the third
part of this work, are designed to address these situations and to guarantee that
a processor performs correctly even in the presence of escaped design bugs, without
degrading user experience. To better analyze these techniques we investigate the
taxonomy of escaped bugs reported for some of the processor designs available
today, and we also classify runtime approaches into two major groups: checker- and
patching-based. In the remainder of part three we detail several runtime verification
methods within both categories, first relating to individual cores and then to multi-
core systems. We conclude the book with a glance towards the future, discussing
modern trends in processor architecture and silicon technology, and their potential
impacts on the verification of upcoming designs.
Acknowledgements

We would like to acknowledge several people who made the writing of this book pos-
sible. First, and foremost, we express our gratitude to our colleagues, who worked
with us on the research presented in this book. In particular, we would like to thank
Professor Todd Austin, who was a vital member of our runtime verification research
and provided critical advice in many other projects. Andrew DeOrio has contributed
immensely to our post-silicon validation research and has helped us greatly in the
experimental evaluations of several other techniques. Both Todd and Andrew have
also worked tirelessly on the original development of these works and provided
valuable insights on the presentation of the material in this manuscript.
We would also like to thank many students working in the Advanced Com-
puter Architecture Lab, and, especially, all those who have devoted their research
to hardware verification: Kai-hui Chang, Stephen Plaza, Andrea Pellegrini, De-
bapriya Chatterjee, and Rawan Abdel-Khalek have worked particularly closely with us
among many others. Every day these individuals are relentlessly advancing the the-
ory and the practice of this exciting and challenging field. We also acknowledge all
of the faculty and staff at the Computer Science and Engineering Department of The
University of Michigan, as well as all engineers and researchers in academia and in-
dustry who provided us with valuable feedback on our work at conferences and
workshops, reviewed and critiqued our papers and published their findings, upon
which much of our research was built. These are truly the giants, on whose shoul-
ders we stand.
We also thank our families, who faithfully supported us throughout the years of
research that led to the publication of this book. Each and every one of them was
constantly ready to offer technical advice or a heartwarming consolation in difficult
times, and celebrate the moments of our successes. Indeed, without their trust and
encouragement this writing would be absolutely impossible.
Finally, we would like to acknowledge our editors, Mr. Alex Greene, Ms. Ciara
Vincent and Ms. Katelyn Chin from Springer, who worked closely with us on this
publication with truly angelic patience. Time and again, they encouraged us to con-
tinue writing and gave us valuable advice on many aspects of this book.

Contents

Part I  VERIFICATION OF A MODERN PROCESSOR

1  VERIFICATION OF A MODERN PROCESSOR
   1.1  The Birth of the Microprocessor
   1.2  Verification Throughout the Processor Life-cycle
   1.3  Verification of a Modern Processor: a Case Study
   1.4  Looking Ahead
   1.5  Summary
   References

2  THE VERIFICATION UNIVERSE
   2.1  Pre-silicon Verification
        2.1.1  From specification to microarchitectural description
        2.1.2  Verification through logic simulation
        2.1.3  Formal verification
        2.1.4  Logic optimization and equivalence verification
        2.1.5  Emulation and beyond
   2.2  Post-silicon Validation
        2.2.1  Structural testing
        2.2.2  Functional post-silicon validation
   2.3  Runtime Verification
   2.4  Summary
   References

Part II  FUNCTIONAL POST-SILICON VERIFICATION

3  POST-SILICON VALIDATION OF PROCESSOR CORES
   3.1  Traditional post-silicon validation in industry
   3.2  The Reversi Test Generation System
   3.3  Reversible and Non-reversible Instructions
        3.3.1  Arithmetic and logic instructions
        3.3.2  Load/store instructions
        3.3.3  Branch instructions
        3.3.4  Control register manipulation
        3.3.5  Floating point instructions
        3.3.6  Limitations
        3.3.7  Reversi Generator
   3.4  Example
   3.5  Experimental Framework
        3.5.1  Performance Evaluation
        3.5.2  Design Error Coverage
   3.6  Summary
   References

4  POST-SILICON VERIFICATION OF MULTI-CORE PROCESSORS
   4.1  Overview of Multi-core Processor Architectures
   4.2  The Challenge of Multi-core Processor Verification
   4.3  Cache Coherence Verification Using String Matching
   4.4  Verification of Memory Consistency Through Constraint Graph Analysis
   4.5  Summary
   References

5  CONSISTENCY VERIFICATION USING DATA COLORING
   5.1  Introduction
   5.2  Dacota Overview
   5.3  Activity Logging
        5.3.1  Access vector
        5.3.2  Core Activity Log
        5.3.3  Activity Logging Example
   5.4  Policy Validation Algorithm
        5.4.1  Access Log Aggregation
        5.4.2  Graph Construction
        5.4.3  Consistency Graph Analysis
        5.4.4  Error Detection Examples
        5.4.5  Checking Algorithm Requirements
   5.5  Strengths and Limitations
        5.5.1  Debugging with Dacota
        5.5.2  Design Considerations
   5.6  Experimental Evaluation of Dacota
        5.6.1  Design Error Coverage
        5.6.2  Performance Evaluation
        5.6.3  Area Evaluation
   5.7  Summary
   References

Part III  RUNTIME VERIFICATION FOR MODERN MICROPROCESSORS

6  RUNTIME VERIFICATION WITH PATCHING AND HARDWARE CHECKERS
   6.1  Analysis of Escaped Errors in Commercial Processors
   6.2  Classification of Runtime Verification Solutions
   6.3  DIVA: Dynamic Verification of Microprocessors
        6.3.1  Checker core operation
        6.3.2  DIVA in action
        6.3.3  Benefits and limitations
   6.4  Runtime Verification of Simple Cores with Argus
   6.5  Hardware Patching Approaches for Runtime Verification
   6.6  Conclusions
   References

7  HARDWARE PATCHING WITH FIELD-REPAIRABLE CONTROL LOGIC
   7.1  Introduction
   7.2  Field-Repairable Control Logic Overview
        7.2.1  Pattern Generation
        7.2.2  Matching Flawed Configurations
        7.2.3  Pattern Compression Algorithm
        7.2.4  Processor Recovery
        7.2.5  Example
   7.3  Design Flow
        7.3.1  Overview of the Design Framework
        7.3.2  Verification Methodology
        7.3.3  Control Signal Selection
        7.3.4  Automatic Signal Selection
        7.3.5  Performance-Critical Execution
   7.4  Trusted Hardware Design with Semantic Guardians
        7.4.1  Combining Semantic Guardians and Hardware Patching
   7.5  Experimental Evaluation
        7.5.1  Experimental Framework
        7.5.2  Design Defects
        7.5.3  Specificity of the Matcher
        7.5.4  State Matcher Area and Timing Overheads
        7.5.5  Performance Impact of Degraded Mode
        7.5.6  Semantic Guardian Framework Analysis
   7.6  Summary
   References

8  RUNTIME VERIFICATION IN MULTI-CORES
   8.1  Dynamic Verification of Memory Consistency
   8.2  Caspar: A Multi-core Patching Solution
   8.3  Caspar’s Design
        8.3.1  Detection and Coverage
        8.3.2  Recovery and Bypass
        8.3.3  Checkpointing
   8.4  Post-silicon Debugging with Caspar
   8.5  Experimental Evaluation
        8.5.1  Error Resiliency Analysis
        8.5.2  Checkpointing Overhead
        8.5.3  Caspar Recovery Performance
        8.5.4  Area Overhead
   8.6  Summary
   References

9  ENSURING CORRECTNESS IN FUTURE MICROPROCESSORS
   9.1  Advances and Trends in Processor Validation
   9.2  A Proactive Approach to Verification

References

Index
Acronyms

ALU Arithmetic-logic unit. A hardware block that performs integer arithmetic
and logic functions, such as addition, subtraction, logic AND, etc.
API Application programming interface. A set of functions and routines which
describe the interface of a software application. API description is typi-
cally limited to the application interface, and does not specify the way the
functionality of the program is actually implemented.
ATPG Automatic test pattern generation. A technique for post-silicon validation
of a manufactured circuit, which attempts to expose errors in fabrication of
individual gates and interconnect in the design. ATPG tools use a software
representation of the netlist to derive input stimuli that can expose a variety
of manufacturing defects and then apply these tests to the prototype.
BDD Binary decision diagrams. A data structure for compact representation and
fast operations on Boolean logic functions. BDDs are commonly used as
an underlying engine of formal verification approaches, such as symbolic
simulation, reachability analysis and model checking.
BIOS Basic input-output system. A firmware residing on the motherboard, that
tests and configures various modules of a computer system upon startup.
BMC Bounded model checking. A pre-silicon verification technique which es-
tablishes adherence of design’s behavior, within a finite number of clock
cycles, to formal specifications, often written as temporal logic formulas.
CPI Cycles per instruction. A metric of processor performance, which mea-
sures the average number of clock cycles that the device needs to perform
one operation.
CPU Central processing unit. Electronic circuit capable of executing software
programs, synonymous to the word “microprocessor”.
DFT Design for testability. A class of techniques, which enable testing and de-
bugging of digital circuits, e.g., scan-chains, boundary-scan, on-chip logic
analyzers, etc.
ECC Error-correcting code. A special redundant encoding of the data that al-
lows the information to be recovered even if some of its bits are corrupted.
Typically used to protect computer storage, such as memory and caches.
EDA Electronic design automation. A generic name for computer-aided tools
for electronics design, as well as for the companies producing and de-
ploying such tools.
FPGA Field-programmable gate array. A digital circuit, which can be programmed
to implement arbitrary logic functions. FPGAs are commonly used to em-
ulate behavior of complex devices, such as microprocessors, before the
first prototype is manufactured.
FPU Floating point unit. A hardware block that performs floating point compu-
tation inside of the processor.
FRCL Field-repairable control logic. A hardware patching solution, which aug-
ments a processor with a programmable matcher to detect and recover
from erroneous control logic states.
FSM Finite state machine. A graph description of hardware block operation.
Consists of graph vertices, describing states of the machine, and edges,
identifying legal transitions between the states.
GPU Graphics processing unit. An electronic circuit dedicated to processing
graphical information, before it is sent to a computer display. GPU chips
are commonly placed on the motherboard or implemented as a separate
graphics card.
HDL Hardware description language. A class of programming languages that
are used to describe functionality and organization of digital hardware, so
the behavior of the design can be simulated.
ILP Instruction level parallelism. A potential overlap in the execution of in-
dependent instructions. ILP measures how many operations in a program
can be performed in parallel.
ISA Instruction set architecture. A complete list of all instructions and oper-
ations that a processor can execute. ISA also often includes a complete
specification of the processor interface, in terms of communication proto-
cols, interrupt handling, etc.
JTAG Joint test action group. Initially, an industry group that designed boundary-
scan technique for circuit board testing. Later, the term JTAG became synony-
mous with the boundary scan architecture.
NMR N-modular redundancy. A resiliency technique, where N modules (identi-
cal or heterogeneous) compute the same function in parallel. Errors in one
unit can then be detected and corrected through a majority voting scheme.
OCLA On-chip logic analyzer. A circuit, residing on a processor die, which can
be programmed to monitor for specific activity of the processor and record
the internal state of the chip upon occurrence of the trigger.
OS Operating system. A software supervisor that manages the hardware and
user-level programs of a computer system. An operating system provides
applications with access to the hardware and coordinates the resource
sharing in a computer system.
PCI Peripheral component interconnect. A type of computer bus that connects
peripheral devices of a system, e.g., graphics and network cards, to the
motherboard and the processor.
PSMI Periodic state management interrupt. A technology developed at Intel,
which allows program execution to be periodically stopped and the in-
ternal state of a silicon prototype collected for post-silicon debugging.
QoS Quality of service. A network control mechanism that distributes re-
sources to different classes of traffic to achieve deterministic or statistical
guarantees of system performance.
ROB Re-order buffer. A hardware block for instruction re-ordering used in
complex out-of-order processor architectures. To hide the effects of long-
latency operations a processor may issue instructions in an order different
than that of the original program and use an ROB to ensure that the in-
struction stream is committed in strict program order.
ROM Read-only memory. A class of storage of computer devices, where stored
data cannot be modified. In microprocessor domain read-only memory is
typically implemented as a hard-wired lookup table.
RTG Random test generator. A software that creates randomized sequences of
inputs to the design for testing purposes. Typically the test stimuli are not
entirely random, but are constrained to a subset of all valid inputs to the
design.
RTL Register transfer level. A register-transfer level description of a logic
device consists of memory elements (registers) and functions that transfer
the data between them. An RTL description is typically implemented in a
hardware description language.
SAT Boolean satisfiability. A class of problems that establishes whether there exists
an assignment to the variables of a Boolean formula that evaluates it to true.
SAT-solvers and their derivatives are often used by formal verification ap-
proaches, such as equivalence checking.
SEU Single event upset. A change or corruption of the state of a latch (or a
flip-flop), due to an energetic particle strike.
SPICE Simulation program with integrated circuit emphasis. A class of simu-
lation software that can evaluate the behavior of an integrated circuit
from the electrical standpoint. SPICE techniques solve complex differen-
tial equations, which describe how voltages and currents at various points
in the design change over time, thus operating at a lower abstraction level
than logic simulation solutions.
STG State transition graph. A graph description of states and transitions be-
tween the states that hardware can assume at runtime. Synonymous to
finite state machine.
SoC System-on-a-chip. A hardware device that integrates multiple components
of a computer system, e.g., processing cores, non-volatile memory, periph-
eral controllers, etc. on a single silicon die.
TLB Translation look-aside buffer. A hardware module that acts as a fast cache
to a larger and slower lookup table, thus increasing processor performance
for accesses that hit in the TLB.
Part I
VERIFICATION OF A MODERN
PROCESSOR
This part of the book provides a high-level overview of the design and verification
cycle of a modern processor, starting from the early design phases, to prototype
manufacturing and testing and to product release. In the second chapter we explore
each of the three major phases of the verification universe: pre-silicon verification,
post-silicon validation and runtime techniques. For each phase, we discuss several
key solutions and investigate their main advantages and drawbacks. Most impor-
tantly, we compare performance and coverage of verification methods across dif-
ferent phases, making the case for functional post-silicon and runtime verification
techniques.
Chapter 1
VERIFICATION OF A MODERN
PROCESSOR

Abstract. Over the past four decades, microprocessors have come to be a vital and
inseparable part of the modern world, becoming the digital brain of numerous elec-
tronic devices and gadgets that make today’s lifestyle possible. Processors are ca-
pable of performing computation at astonishingly high speeds and are extremely
integrated, occupying only a few square centimeters of silicon die. However, this
computational power comes at a price: the task of verifying a modern micropro-
cessor and guaranteeing the correctness of its operation is increasingly challenging,
even for the most established processor vendors. To deliver ever higher performance
to end-users, processor manufacturers are forced to design progressively more com-
plex circuits and employ immense verification teams to eliminate critical design
bugs in a timely manner. Unfortunately, too often size doesn’t seem to matter in
verification, as schedules continue to slip, and microprocessors find their way to the
marketplace with design errors. In this chapter we overview the life-cycle of a mi-
croprocessor, discuss the challenges of verifying these devices, and show examples
of hardware errors that have escaped into production silicon because of insufficient
validation, and their impact.

1.1 The Birth of the Microprocessor

Over the past four decades microprocessors have permeated our world, ushering in
the digital age and enabling numerous technologies, without which today’s life style
would be all but impossible. Processors are microscopic circuits printed onto silicon
dies and consisting of hundreds of millions of transistors interconnected by wires.
What distinguishes microprocessors from other integrated circuits is their ability to
execute arbitrary software programs. In other words, processors make digital de-
vices programmable and flexible, so a single device can efficiently perform various
operations, depending on the program that is running on it. In our everyday activi-
ties we encounter and use these tiny devices hundreds of times, often without even
realizing it. Processors allow us to untether our phones from the wired network and
enable mobile communications, while their counterparts, deployed by phone com-
panies, made communications richer and much more reliable. Processors monitor
the health of hospital patients, control airplanes, tally election votes and predict
weather. And, of course, they power millions of personal computers of all shapes
and sizes, as well as the backbone of the Internet, a vital and inseparable part of
modern life. The computational power of these devices grows every year at an as-
tonishing pace: not long ago processors were only capable of executing just a few
thousands operations per second, while today they can perform billions of complex
computations per second. Finally, in the past few years, hardware design houses
have introduced multi-core processors, that is, systems comprising multiple proces-
sors (cores) on a single silicon die. These systems can execute several programs
concurrently, thereby multiplying the overall performance delivered to the user.
However, to be so powerful, processors implement extremely complex architec-
tures, making the design and manufacturing of these devices a major challenge for
the semiconductor industry. Companies such as Intel, IBM and AMD are forced to
dedicate hundreds of engineers for years at a time to continue to advance micropro-
cessor technology and deliver the next generation processors to end-users. More-
over, as these designs grow in complexity, it becomes increasingly harder to verify
them and ensure that they operate properly. Design houses report that today verifica-
tion efforts significantly outweigh design activities, and that they often staff their
teams with two verification engineers per designer. Unfortunately, the complexity
and the number of features in each new generation of CPUs have quickly outpaced
the capabilities of even the largest industrial teams. As a consequence, today it is
impossible to provide high-quality verification of microprocessors with traditional
means, and products released to the public are becoming less and less reliable. Fur-
thermore, early in the development process engineers must assess the verifiability
of all the features that they want to introduce into the new product: if proposed fea-
tures cannot pass high-quality validation on time and within budget, they cannot be
deployed in the final product and are removed from the design plan, resulting in a
reduced set of capabilities and performance of the new system.
The consequences of this trend of diminishing quality in verification can be dra-
matic: indeed, the impact of bugs in production microprocessors can range widely
from innocuous to devastating, for several reasons. For instance, it is possible for a
computer system to become compromised, in terms of safety and security, because
of a hardware bug. As a result, a system with a buggy processor becomes vulner-
able to security attacks. Attacks of this type could be perpetrated even on systems
running completely correct software, since they rely exclusively on underlying hard-
ware flaws [Int04, Kas08]. Moreover, bugs can have a disastrous financial impact on
the manufacturing company by triggering a costly recall of faulty hardware, as was
the case in a past Intel processor [Mar08]; or causing significant delays in product
release, similar to what happened with AMD’s Phenom, released in 2007 [Val07].
The impact in both cases is estimated in billions of dollars, due to the large volume
of defective components that a functional bug always entails. To prevent devastat-
ing errors from seeping into the released designs, a variety of techniques have been
devised to detect and correct issues during system’s design and manufacturing. Con-
ceptually, these verification approaches can be divided into three families, based on
where they intervene in a processor life-cycle: pre-silicon, post-silicon and runtime
verification solutions.
Pre-silicon techniques are heavily deployed in the early stages of a processor’s
design, before any silicon prototype of the device is available, and can be classified
as simulation-based or formal solutions. Simulation-based methods are the most
common approaches to locate design errors in microprocessors. Random instruc-
tion sequences are generated and fed into a detailed software model, also called a
hardware description of a design, results are computed by simulation of this model
and then checked for correctness. This approach is used extensively in the industry,
yet it suffers from a number of drawbacks. First, the simulation speed of the detailed
hardware description is several orders of magnitude slower than the actual proces-
sor’s performance. Therefore, only relatively short test sequences can be checked in
this phase; for instance, it is almost impossible to simulate an operating system boot
sequence, or the complete execution of a software application running on the pro-
cessor. More importantly, simulation-based validation is a non-exhaustive process:
the number of configurations and possible behaviors of a modern microprocessor is
too large to allow for the system to be fully checked in a reasonable time.
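To make the structure of this flow concrete, the following toy sketch (ours, and not taken from the book or from any vendor's environment) shows the skeleton of a simulation-based random-testing loop. The three-instruction "ISA", the register file, and the injected subtraction bug are all hypothetical; the same executable stands in for both the golden architectural model (bug-free) and the detailed "RTL" model (with the injected bug), whereas a real flow would drive an HDL simulator and a separate reference model.

    import random

    # Toy "ISA": each instruction is (opcode, dest, src1, src2_or_immediate),
    # operating on eight 8-bit registers.
    OPCODES = ["LI", "ADD", "SUB", "XOR"]

    def step(regs, instr, buggy=False):
        op, rd, a, b = instr
        if op == "LI":                        # load immediate: rd <- b
            regs[rd] = b & 0xFF
        elif op == "ADD":
            regs[rd] = (regs[a] + regs[b]) & 0xFF
        elif op == "SUB":
            res = (regs[a] - regs[b]) & 0xFF
            if buggy and regs[a] < regs[b]:   # injected design bug: borrow mishandled
                res = (regs[a] - regs[b]) & 0x7F
            regs[rd] = res
        elif op == "XOR":
            regs[rd] = regs[a] ^ regs[b]

    def run(program, buggy=False):
        regs = [0] * 8
        for instr in program:
            step(regs, instr, buggy)
        return regs

    def random_program(length=30):
        # Constrained-random stimulus: only legal opcodes and register indices.
        prog = []
        for _ in range(length):
            op = random.choice(OPCODES)
            rd, a = random.randrange(8), random.randrange(8)
            b = random.randrange(256) if op == "LI" else random.randrange(8)
            prog.append((op, rd, a, b))
        return prog

    # Validation loop: simulate the "RTL" model and compare it to the golden model.
    for test_id in range(1000):
        prog = random_program()
        if run(prog, buggy=True) != run(prog, buggy=False):
            print(f"Test {test_id} exposed a mismatch; failing program:")
            for instr in prog:
                print("   ", instr)
            break
    else:
        print("No mismatch found: random testing is inherently non-exhaustive.")

Even in this miniature setting the two defining traits of simulation-based validation are visible: a mismatch is only flagged if some random program happens to exercise the buggy condition, and no finite number of tests can prove that no such program exists.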
Another family of pre-silicon solutions, formal verification techniques, solves the
non-exhaustive nature of simulation, using sophisticated mathematical derivations
to reason about a design. If all possible behaviors of the processor could be described
with mathematical formulas, then it would be possible to prove the correctness of
the device’s operation as a theorem. In practice, in the best scenarios it is possible
to guarantee that a design will not exhibit a certain erroneous behavior, or that it
will never produce a result that differs from a known-correct reference model. The
primary drawback of formal techniques, however, is that they are far from capable
of dealing with the complexity of modern designs, and thus their use is limited to
only a few, small components within the processor.
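As a minimal illustration of the "prove it for every input" idea (again our own toy example, not one from the book), consider exhaustively checking a property of a small hypothetical block. Real formal tools reason symbolically (with BDDs or SAT solvers) rather than by enumeration, but the sketch shows both the strength of the approach, a guarantee over all inputs, and why it only scales to small components.

    # Illustrative only: exhaustive "proof" of a property of a tiny hypothetical block.
    # Property: an 8-bit saturating adder never wraps around and never exceeds 255.

    def sat_add8(a, b):
        """Hypothetical design under verification: 8-bit saturating adder."""
        s = a + b
        return 255 if s > 255 else s

    # Check the complete input space: 2^16 combinations for two 8-bit operands.
    for a in range(256):
        for b in range(256):
            s = sat_add8(a, b)
            assert 0 <= s <= 255, f"range property violated for a={a}, b={b}"
            assert s == min(a + b, 255), f"functional property violated for a={a}, b={b}"
    print("Both properties hold for all 65,536 input combinations.")

    # For two 64-bit operands the input space has 2^128 points, so explicit
    # enumeration is hopeless; formal tools instead reason symbolically, and even
    # they remain limited to a few small blocks of a modern processor.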
After a microprocessor is sufficiently verified at the pre-silicon stage, a proto-
type is manufactured and subjected to post-silicon validation where tests can, for
the first time, be executed on the actual hardware. The key advantage of post-silicon
validation is that its raw performance enables significantly faster verification than
pre-silicon software-based simulation, thus it could deliver much better correct-
ness guarantees. Unfortunately, post-silicon validation presents a challenge when
it comes to detection and diagnosis of errors because of the limited observability
provided by this technique, since at this stage it is impossible to monitor the internal
components of the hardware prototype. Therefore, errors cannot be detected until
they generate an invalid result, or cause the system to hang. The limited observabil-
ity leads to extremely involved and time consuming debugging procedures, with the
result that today post-silicon validation and debugging has become the single largest
cost factor for processor companies such as Intel.
Due to the limitations of pre- and post-silicon verification, and shrinking time-
lines for product delivery, processor manufacturers have started to accept the fact
that bugs do slip into production hardware and thus they are beginning to explore
runtime verification solutions that can repair a device directly at the customer’s site.
“Patching” microprocessor bugs, however, is a non-trivial task, since the function-
ality of the device is already embodied in the transistors comprising the silicon die,
and it cannot be easily modified at this point. To enable in-the-field patching, design-
ers create special processor components dedicated to detecting erroneous behaviors
and recovering from them. Runtime verification is currently in its early research
stage: a few techniques have been recently proposed by academic research, while
problem-specific solutions are starting to appear in commercial products.

1.2 Verification Throughout the Processor Life-cycle

A traditional microprocessor’s design and manufacturing flow (shown in Figure 1.1)
consists of a series of steps that considers a high-level description of processor oper-
ation (specification), refines and transforms it, and, finally, implements the specified
functionalities on a silicon die. After each step, the design is progressively verified,
to ensure that, after all transformation and concretization steps, the behavior of the
device still adheres to the original specification. The process starts with a high-level
specification of the microprocessor’s required characteristics and functionalities, of-
ten described in a natural language, and/or diagrams describing its basic structure
and how the device should interface to other digital systems. This specification is
then converted to an architectural model of the device, typically written in a high-
level programming language (such as C). This model represents the first formalized
reference of the final system’s behavior. Implementation in a hardware description
language (HDL) can then start. The HDL description of the design describes the op-
eration of individual sub-modules of the processor, as well as their interactions, and
is also known as the register-transfer level (RTL) model. This RTL model is then
verified to establish its equivalence to the architectural model through simulation-
based and formal techniques. The outcome of simulation-based tests is compared
to those of the known-correct (or “golden”) architectural model and discrepancies,
indicators of errors, are identified and fixed. In addition to simulation-based tech-
niques, in pre-silicon verification, engineers often employ formal methods, which
can check correctness of a design using mathematical proofs and can thus guarantee
the absence of certain types of errors. Unfortunately, formal methods cannot handle
complex RTL models due to their limited scalability, therefore, their usage is limited
to a few, small critical blocks.
Once the RTL model is sufficiently validated, designers use synthesis software
that maps HDL into individual logic gates, registers and wires, generating a netlist
of the circuit. Since conversion from an RTL model to a netlist may incur errors, spe-
cialized verification solutions intervene again to check that this new transformation
is still equivalent to the previous model. Place and route software applications then
calculate how individual logic elements in the netlist can be placed on the silicon
die to produce a design that fulfills the required characteristics of power, area, delay,
etc. After placement, the final description of the design is taped-out, i.e., sent to a
fabrication facility to be manufactured. When the first hardware prototypes become
[Figure 1.1 diagram: pre-silicon phase (architectural model → register-transfer level model in HDL → synthesis → netlist → place & route → tape-out), post-silicon validation on the silicon prototype, and runtime verification in the released end-user system.]

Fig. 1.1 Modern microprocessor design and verification flow. In the pre-silicon phase an ar-
chitectural model, derived from the original design specification, is converted into an RTL imple-
mentation in a hardware description language (HDL). The RTL model can then be synthesized,
producing a design netlist. Place and route software calculates where individual logic gates and
wire connections should be placed on the silicon die and then a prototype of the processor is man-
ufactured. Once a prototype becomes available, it is subjected to post-silicon validation. Only after
the hardware is shown to be sufficiently stable in this validation phase, the processor is released
and deployed in the market.
available, they can be inserted into a computer system for post-silicon validation
(as opposed to the pre-silicon verification that occurs before the tape-out). One of
the distinguishing features of post-silicon verification is its high performance: the
same performance as the final product, which is orders of magnitude higher than
simulation speeds in the pre-silicon domain. Typically, at this stage engineers try
to evaluate the hardware in real life-like settings, such as booting operating sys-
tems, executing legacy programs, etc. The prototype is also subjected to additional
random tests, in an attempt to create a diverse pool of scenarios where differences
between the hardware and the architectural model can be identified to flag any re-
maining errors. When a bug is found at this stage, the RTL model is modified to
correct the issue and the design often must be manufactured again (this process is
called re-spin).
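Returning to the equivalence-verification step mentioned above, checking that the synthesized netlist still implements the same function as the RTL model, the sketch below shows the idea on a deliberately tiny example. The 4-bit adder, its two descriptions, and the exhaustive "miter" comparison are all hypothetical; commercial equivalence checkers operate on real netlists and rely on SAT/BDD engines rather than enumeration.

    # Illustrative miter-style equivalence check between a behavioral (RTL-like)
    # description and a gate-level (netlist-like) implementation of a 4-bit adder.

    def rtl_adder4(a, b):
        """Behavioral description: plain addition, 5-bit result."""
        return (a + b) & 0x1F

    def netlist_adder4(a, b):
        """Gate-level description: ripple-carry adder built from XOR/AND/OR gates."""
        carry, result = 0, 0
        for i in range(4):
            x, y = (a >> i) & 1, (b >> i) & 1
            result |= (x ^ y ^ carry) << i
            carry = (x & y) | (x & carry) | (y & carry)
        return result | (carry << 4)

    # Miter: the designs are equivalent iff their outputs never differ on any input.
    mismatches = [(a, b) for a in range(16) for b in range(16)
                  if rtl_adder4(a, b) != netlist_adder4(a, b)]
    print("equivalent" if not mismatches else f"NOT equivalent, e.g. inputs {mismatches[0]}")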
A processor design usually goes through several re-spins, as bugs are progres-
sively exposed and fixed in manufactured prototypes. Ultimately, the design is sta-
bilized and it can go into production. Unfortunately, due to the complexity of any
modern processor, it is impossible to exhaustively verify its behavior either in pre-
silicon or in post-silicon, thus subtle, but sometimes critical, bugs often slip through
all validation stages. Until recently, if a critical functional bug was exposed in end-
user’s hardware, manufacturers had no other choice but to recall the device. Today
vendors are starting to develop measures to avoid such costly recalls and allow their
products to be patched in the field. Researchers in academia have also proposed
solutions to ensure correctness of processor operation with special on-die check-
ers. Patching- and checker-based techniques are cumulatively classified as runtime
verification approaches. In Chapter 2 we review the current techniques deployed in
pre-silicon, post-silicon and runtime phases in more detail, while in the remainder
of the book we concentrate on several new promising solutions in the two latter
domains.

1.3 Verification of a Modern Processor: a Case Study

The importance and difficulty of microprocessor verification is often underestimated
today by casual users and electronic design industry professionals alike. One of the
main causes of this is the fact that processor vendors rarely release exact data about
their internal validation techniques and experiences into the public domain, and are
even more cautious about disclosing information about any encountered bugs. This is
understandable from the business point of view to a certain degree, since a large
portion of the tools and methodology used in validation by design companies are
developed in-house and are used as a competitive advantage against rival vendors.
Likewise, information about any sort of bug, even those that had been fixed before
product release, casts a negative image on the manufacturer and could potentially
reveal confidential details about the inner-workings of the product. Nevertheless,
there are a few publications from processor vendors that shed light on the validation
of their products and give some degree of appreciation of the challenges faced by
verification engineers.
One of the most comprehensive of such studies reports the pre-silicon validation
effort on the Pentium 4 project, which was a new processor designed based on In-
tel’s 7th generation NetBurst architecture [Ben05, Ben01]. In this work, authored
by Bob Bentley, the daunting task of verification engineers is vividly described.
The design process for Pentium 4 was started at Intel in 1996 and continued until
tape-out three years later. During this time the validation team increased from 10 to
70 people, the majority of whom had to be trained to use such verification tools as
model checking, cluster-level simulation, etc. By tape-out time, the processor had
undergone a transformation from a high-level idea and an architectural description
to an RTL design of more than one million lines and was validated at cluster- and
chip-level for more than 200 billion simulation cycles. This number is an astonish-
ing feat if one takes into account that the chip-level simulation speed did not exceed
5Hz, due to sheer size of the RTL code base: that corresponds to more than a thou-
sand years of cumulative simulation time. Consequently, the majority of pre-silicon
simulation was done at cluster level, which not only allowed for faster simulation,
but added much needed controllability to individual processor blocks. Moreover,
as the RTL code was modified, regression suites were executed time and again to
maintain code stability throughout the development. To aid with this verification ef-
fort, the engineering team had to resort to large server farms that processed batches
of tests around the clock.
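As a back-of-the-envelope check on these figures (our own arithmetic, not a number from the original report): simulating 200 × 10^9 cycles at roughly 5 cycles per second on a single machine would require

    200 × 10^9 cycles ÷ 5 cycles/second = 4 × 10^10 seconds ≈ 1,270 years,

which is precisely why the workload had to be spread across server farms and why most of the simulation budget was spent on faster cluster-level models.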
Pre-silicon validation, however, was not limited to simulation: the Pentium 4 val-
idation project pioneered an extensive use of formal validation techniques for the
microprocessor domain. Yet, those were only applied to a few critical blocks, such
as the floating point unit and decoder logic, since the full design, comprising 42
million transistors, was too much for formal tools to handle. Thus, these mathe-
matical approaches were selectively directed towards blocks that had been sources
of errors in past designs. The effort paid off well, and several critical bugs were
exposed and fixed before tape-out. One of those had a probability of occurrence of
1 in 5 × 10^20 floating-point instructions, thus it was very likely that it would have
escaped a simulation-only verification methodology; its discovery averted a disaster
that could have been critical for the company.
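A rough estimate (ours, not Intel's) shows why simulation alone would almost certainly have missed this bug: even if every one of the roughly 200 billion simulated cycles had executed a floating-point instruction, the expected number of times a 1-in-5 × 10^20 condition would be exercised is only

    2 × 10^11 ÷ (5 × 10^20) = 4 × 10^-10,

that is, essentially zero. Only an exhaustive, formal analysis of the affected block could realistically expose such a corner case.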
While the “quality” of bugs caught by formal verification was high, the quantity
was fairly low (about 6% of total pre-silicon bugs), since only a few blocks were
targeted. Simulation-based validation with randomized tests at cluster-level, on the
other hand, provided for the majority of errors exposed (3,411 out of total of 7,855),
due to its scalability and relatively high speed. To make randomized test generation
as effective as possible, engineers tracked 2.5 million unit-level types of behavior
inside the device and directed the testing sequences to cover as many of these con-
ditions as possible. Directed assembly-level tests (over 12,000 in total) also led to
a very high error discovery rate, accounting for more than 2,000 bugs. Note that a
number of additional issues were also exposed in testbenches and validation tools,
and had to be addressed during the verification process. These issues are not ac-
counted for in the reported 7,855 bugs. Post-silicon validation issues are also not
accounted for in the total. This phase of the design cycle of Pentium 4 was only
ten months long, yet during this time the device executed orders of magnitude more
cycles than in the three years of pre-silicon effort. Operating at speeds of 1GHz and
up, these prototypes underwent testing at different temperatures and voltages, ran
scores of applications and random tests, and communicated with a range of periph-
eral devices. In addition to time and engineering efforts, post-silicon validation also
incurs high equipment costs: verification engineers commonly have to build and de-
bug specialized in-house testing and analysis platforms and purchase test-pattern
generators, optical probing machines and logic analyzers, with costs in the range of
hundreds of thousands of dollars.

1.4 Looking Ahead

The comprehensive report discussed in the previous section was compiled for a
processor designed in the late 90’s, nevertheless, it provides an accurate picture of
industrial-scale validation, which we can use as a baseline reference for outlining
future trends in verification. From the early 1970s, Moore’s law has implacably
pushed the device density up, and since the release of the first Pentium 4, proces-

Table 1.1 Characteristics of processor designs developed during 2000-2008.

Year  Name                        Technology  Transistors  Die area  Cores  Inter-core
                                  (nm)        (mil.)       (mm2)            comm. medium
2000  Intel Pentium 4 Willamette  180         42           217       1      —
2004  Intel Pentium 4 Prescott    90          125          109       1      —
2005  IBM Cell                    90          234          221       9      bus
2005  UltraSPARC T1 Niagara       90          300          378       8      crossbar
2006  Intel Core 2 Merom          65          291          143       2      bus
2007  AMD Phenom X4 Agena         65          463          285       4      bus
2007  Intel Polaris prototype     65          100          275       80     2D mesh
2007  Tilera TILE64               90          615          430       64     2D mesh
2008  Intel Core i7 Bloomfield    45          731          263       4      crossbar

sor complexity has increased by at least an order of magnitude. In Table 1.1 we
report the characteristics of several notable processor designs developed in the past
decade for comparison, analysis, and as an indicator of future trends. The first crit-
ical point to note is the fact that, since the first Pentium 4, silicon fabrication tech-
nology has shrunk from 180 to 45nm. Moreover, various advanced features, such
as support for simultaneous multi-threading, multiple power savings techniques and
virtualization, have become commonplace. More importantly, modern designs of-
ten contain multiple computing cores, sometimes heterogeneous, communicating
with each other through on-die media. This diversification of on-chip resources is
expected to continue well into the future: designers have already begun to integrate
peripheral components, such as memory controllers and graphics, into the main pro-
cessor. Thus, we can foresee that the CPU of the 21st century will be comparable
to a complex system on a chip (SoC), with a wide range of functionalities and in-
terdependent components, all requiring thorough validation. Production timelines
of these integrated circuits, however, are not expected to increase, since end-users
continue to demand higher performance and broader functionality at the same pace.
Consequently, designers will be forced to staff larger validation teams and increase
investments into simulation servers, prototyping tools, etc. Worse still, the perfor-
mance of traditional verification and design tools in the future will be lagging behind
the complexity of final silicon products even more. For instance, due to an increased
number of on-die components, the performance of a full chip-level simulation for
future designs is not expected to be greater (and will probably be worse) than that of
the Pentium 4, despite significant improvements in the simulation hardware hosts.
As the number of features grows, in some cases full-system simulation may become
unfeasible. Likewise, increasing capabilities of formal verification tools in the future
will be outpaced by the complexity of critical modules requiring formal analysis.
In this worsening situation, the total number of bugs in processor products and the
speed with which they are discovered in the field are rapidly increasing. Researchers
have already reported that the escape bug discovery rate in Core 2 Duo designs is 3
times larger than that of the Pentium 4 design [CMA08]. Note that these are func-
tional errors that have evaded all validation efforts and found their way into the final
product, deployed in millions of computer systems world-wide. Thus, they entail a
very high risk of having a critical impact on the user base and the design house. It is
already clear that because of the expanding gap between complexity and verification
effort, in the future errors will continue to slip into silicon, potentially causing much
more damage than the infamous FDIV bug, which resulted in a $420 million loss for
Intel in the mid-90s [Mar08]. For instance, as recently as 2007, an error in the trans-
lation look-aside buffer of the third level cache in the Phenom processor by AMD
forced the manufacturer to delay the market release by several months. Not only did this
delay the distribution of the product to the market, but it also created much negative
publicity for the company and influenced the price of its stock [Val07]. From
this grim picture we draw the conclusion that new validation solutions are critically
needed to enable the continued evolution of microprocessor designs in the future.
This concern is also voiced in the International Technology Roadmap for
Semiconductors (ITRS), which states that “without major breakthroughs, verification will
be a non-scalable, show-stopping barrier to further progress in the semiconductor
industry.” ITRS also reports that there are no solutions available to provide high-
quality verification of integrated circuits and a sufficiently low rate of escapes beyond
the year 2016 [ITR07].

1.5 Summary

Microprocessors entered our world four decades ago and have since
played a vital role in our everyday life. Throughout these forty years, the capabilities
of these amazing devices have been increasing exponentially, and today processors
are capable of performing billions of computations per second, and executing multi-
ple software applications in parallel. This performance level was enabled by the rapidly
growing complexity of integrated circuits, which, in turn, exacerbated the problem
of their verification. To validate modern processors, consisting of hundreds of mil-
lions of transistors, manufacturing companies are forced to employ large verification
teams, develop new validation technology and invest into costly testing and analysis
equipment. Traditionally, this verification activity is broken down into three major
steps: pre-silicon verification, post-silicon validation, and runtime verification. The
former two are conducted internally by the vendors on a software model of the device
and a silicon prototype, respectively, and are primarily carried out through
execution of test sequences. However, as we discussed in this chapter, the com-
plexity of modern processors prohibits their exhaustive verification, and thus errors
often slip into final products, causing damage to both end-users and vendor com-
panies. To protect hardware deployed in the field from such errors, researchers have
recently started to propose techniques for runtime verification and to rethink post-
silicon validation strategies to derive better quality of results from them. Some of
the emerging techniques in these domains will be presented in detail in the later
chapters of this book; first, however, we must take a deeper look into a traditional
processor verification cycle to understand advantages and limitations of each of its
steps.

References

[Ben01] Bob Bentley. Validating the Intel® Pentium® 4 microprocessor. In DAC, Pro-
ceedings of the Design Automation Conference, pages 224–228, June 2001.
[Ben05] Bob Bentley. Validating a modern microprocessor. In CAV, Proceedings of the
International Conference on Computer Aided Verification, pages 2–4, July 2005.
[CMA08] Kypros Constantinides, Onur Mutlu, and Todd Austin. Online design bug detection:
RTL analysis, flexible mechanisms, and evaluation. In MICRO, Proceedings of the
International Symposium on Microarchitecture, pages 282–293, November 2008.
[Int04] Intel Corporation. Intel® Pentium® Processor Invalid Instruction Erratum
Overview, July 2004. http://www.intel.com/support/processors/
pentium/sb/cs-013151.htm.
[ITR07] International Technology Roadmap for Semiconductors executive summary, 2007.
http://www.itrs.net/Links/2007ITRS/Home2007.htm.
[Kas08] Kris Kaspersky. Remote code execution through Intel CPU bugs. In HITB, Pro-
ceedings of Hack In The Box Conference, October 2008.
[Mar08] John Markoff. Burned once, Intel prepares new chip fortified by constant tests.
New York Times, November 2008. http://www.nytimes.com/2008/11/
17/technology/companies/17chip.html.
[Val07] Theo Valich. AMD delays Phenom 2.4 GHz due to TLB errata. The In-
quirer, November 2007. http://www.theinquirer.net/inquirer/
news/995/1025995/amd-delays-phenom-ghz-due-tlb.
Chapter 2
THE VERIFICATION UNIVERSE

Abstract. In this chapter we take the reader through a typical microprocessor’s life-
cycle, from its first high-level specification to a finished product deployed in an end-
user’s system, and overview the verification techniques that are applied at each step
of this flow. We first discuss pre-silicon verification, the process of validating a
model of the processor at various levels of abstraction, from an architectural speci-
fication to a gate-level netlist. Throughout the pre-silicon phase, two main families
of techniques are commonly used: formal methods and simulation-based solutions.
While the former provide mathematical guarantees of design correctness, the lat-
ter are significantly more scalable and, consequently, are more commonly used in
the industry today. After the first few prototypes of a processor are manufactured,
validation enters the post-silicon domain, where tests can run on the actual silicon
hardware. The raw performance of in-hardware execution is one of the major advan-
tages of post-silicon validation, while lack of internal observability and limited de-
buggability are its main drawbacks. To alleviate this, designers often augment their
creations with special features for silicon state acquisition, which we review here.
After an arduous process of pre- and post-silicon validation, the device is released to
the market and finds its way into a final system. Yet, it may still contain subtle bugs,
which could not be exposed earlier by designers due to very compressed production
timelines. To combat these escaped errors, vendors and researchers in industry and
academia have begun investigating alternative dynamic verification techniques: with
minimal impact on the processor’s performance, these solutions monitor its health
and invoke specialized correction mechanisms when errors manifest at runtime. As
we show in this chapter, all three phases of verification, pre-silicon, post-silicon and
runtime, have their unique advantages and limitations, which must be taken into ac-
count by design houses to attain sufficient verification coverage within their time
and cost budgets and to avoid major catastrophes caused by releasing faulty proces-
sor products to the commercial market.


2.1 Pre-silicon Verification

Pre-silicon verification is a multi-step process, which is aimed at establishing if
a design adheres to its specification and fulfills the designer's intentions. Pre-silicon
verification, or design-time verification, is conducted before any silicon prototype
is available (hence its name), and operates over a range of different descriptions of
a digital design: architectural, RTL, gate-level, etc. The higher levels of abstraction
of the design allow engineers to check, often through mathematical proofs, funda-
mental properties of a circuit’s operation, such as absence of erroneous behaviors
and adherence to formally specified invariants in its functionality. The design is
then progressively refined to include more detail, requiring the designers to check
correctness after each transformation. However, with the growing level of detail,
the time and computational effort required to validate the design's functionality
increases as well; thus, in practice, only the most critical blocks of a modern processor
are fully verified at all levels of abstraction before tape-out.
The history of pre-silicon verification of digital circuits goes hand in hand with
the evolution of these devices over the last several decades: what began as a fairly
simple in-house activity quickly became a large industry of its own. Today, tens of
electronic design automation (EDA) companies and a handful of the largest digi-
tal design houses offer designers a wide variety of software and hardware tools to
address various phases of pre-silicon verification; while researchers in industry and
academia alike publish thousands of papers yearly on the subject. Some of these
techniques can be applied to any logic design, while others are domain-specific
heuristics that improve the performance of verification for certain types of designs.
Since it would be impossible for us to cover the entire spectrum of solutions avail-
able, in this section we simply focus on presenting a concise overview of the most
vital verification techniques, and provide a range of references at the end of the
chapter for the readers interested in deepening their knowledge on the subject.

2.1.1 From specification to microarchitectural description

In today's microprocessor industry, verification begins early in the design cycle,
when architects of the new design develop a detailed set of specifications, outlining
the major component blocks and describing their functionality. This specification
document is then used as the yardstick to verify various pre-production implemen-
tations of the design at varying levels of detail. Note, however, that the document is
typically written by several people in a natural human language, such as English,
and, therefore, may contain hidden ambiguities or contradictions. Consequently,
when bugs are found during the verification of a behavioral model, these might be
due to a poor implementation or may be caused by errors in the specification itself,
which then must be clarified and updated. As soon as a design specification becomes
available, verification engineers begin crafting a test plan that outlines how individ-
ual blocks and the entire system will be verified, how bugs will be diagnosed and
tracked, and how verification thoroughness, also known as coverage, will be mea-
sured. As with the specification, the test plan is not created once and forever, but
rather evolves and morphs, as designers add, modify and refine the processor's features.
At the same time, with specification at hand, engineers may begin implement-
ing the design at the architectural (or ISA) level, describing the way the processor
interacts with the rest of the computer system. For instance, they determine the for-
mat of all supported instructions, interrupts, communication protocols, etc., but do
not concern themselves with the details of the inner structure of the design. This is
somewhat similar to devising the API of a software application without detailed de-
scriptions of individual functions. Note that operating systems, user-level programs
and many hardware components in a computing platform interact with the proces-
sor at the ISA level, and for the most part, need not be aware of its internals. In the
end, the specification is transformed into an architectural simulator of the processor,
which is typically a program written in a high-level programming language. As an
example, Simics [MCE+ 02] and Bochs [Boc07] are fairly popular architectural sim-
ulators. An architectural simulator enables the engineers to evaluate the high level
functionality of the design by simulating applications and it also provides early es-
timations of its performance. The latter, however, are very approximate, since at
this point the exact latency of various operations is unknown. More importantly,
at this stage of the development, the architectural simulator becomes the embodi-
ment of the specification and in later verification stages it is referred to as a golden
model of the design, to which detailed implementations (netlist-level and circuit-
level) can be efficiently compared. The architectural simulator can then be refined
into a microarchitectural description, where the detailed functionality of individual
sub-modules of the processor is specified (examples of microarchitectural simula-
tors are SimpleScalar [BA08] and GEMS [MSB+ 05]). With a microarchitectural
simulator designers define the internal behavior of the processor and characterize
performance-oriented features such as pipelining, out-of-order execution, branch
prediction, etc. The microarchitectural description, however, does not specify how
these blocks will be implemented in silicon and is usually also written in a high-level
language, such as C, C++ or SystemC. Nevertheless, with the detailed information
on instruction latency and throughput now available, the performance of the device
can be benchmarked through simulation of software applications and other tests.
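To make this distinction concrete, the sketch below shows, in Python, the skeleton of an ISA-level simulator for a deliberately tiny, invented instruction set (three instructions, four registers). It models only architectural state, i.e., the register file and program counter, with no notion of pipelines or latencies, and is in no way representative of the internals of full-featured simulators such as Simics or Bochs.

# Toy architectural (ISA-level) simulator for a hypothetical 4-register ISA.
# Everything here (instruction set, register file size) is illustrative only.

def run_program(program, steps=100):
    regs = [0, 0, 0, 0]          # architectural register file
    pc = 0                       # program counter
    for _ in range(steps):
        if pc >= len(program):
            break
        op, *args = program[pc]
        if op == "li":           # load immediate: li rd, imm
            rd, imm = args
            regs[rd] = imm
        elif op == "add":        # add rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "bnez":       # branch to target if rs != 0
            rs, target = args
            if regs[rs] != 0:
                pc = target
                continue
        pc += 1                  # default: fall through to the next instruction
    return regs

# A tiny test program: r2 = r0 + r1
print(run_program([("li", 0, 2), ("li", 1, 3), ("add", 2, 0, 1)]))   # [2, 3, 5, 0]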

2.1.2 Verification through logic simulation

The microarchitectural level design is then further refined into a register-transfer
level (RTL) implementation, typically written in a hardware design language (HDL),
such as Verilog [IEE01b], SystemVerilog [IEE07] or VHDL [IEE04]. At this level,
the functionality of individual blocks is further broken down into logic/mathematical
operations and storage elements to contain the results of computation. Because of
this refinement, the size of the code base becomes significantly larger, while the sim-
ulation performance of the design decreases by 4-5 orders of magnitude, compared
to the architectural level. With a register-transfer level description it is possible to
simulate and evaluate the behavior of the system very accurately, tracking the inter-
action of all components. The vast majority of the verification effort is dedicated to
testing and simulating this level of the design description and most functional design
bugs are exposed and corrected at this stage. However, one of the main drawbacks
of simulation-based techniques is the fact that they can only identify the presence
of errors but cannot guarantee their absence. In other words, with logic simulation
designers can check if the processor behaves properly, i.e., adheres to its specifica-
tion, when running specific program sequences, designed and selected by the design
team. However, there still may be latent bugs manifesting only when running other
programs, that had not been simulated and verified. Nevertheless, this technique
remains the primary method to validate RTL designs, especially in the micropro-
cessor industry, due to its ability to scale and operate even on very complex proces-
sor descriptions. During logic simulation, internal design signals are monitored and
evaluated one by one, making the time required to complete the simulation directly
proportional to the size of the design and the length of the executed test sequence.
Therefore, full processor designs can be simulated for millions of cycles, often pro-
viding satisfactory confidence that the system is error-free. Examples of commercial
logic simulators available today include VCS [Syn09] and ModelSim [Men08]. In
this chapter we overview the basic components of a logic simulation framework,
while the reader is encouraged to investigate surveys, such as [GTKS05, Mei93],
and works on functional verification, such as [Mic03, WGR05], for a more detailed
analysis of a range of simulation tools, setups and techniques.
In a typical framework for RTL simulation, illustrated in Figure 2.1, the design
under validation is wrapped in a test environment that models the behavior of other
components external to the system under verification. For instance, when the entire
processor is simulated, the test environment represents systems external to the CPU,
such as memory, peripherals, etc. The test environment and the design are executed
together within the logic simulator. The simulator itself is a software application
parsing the RTL code of the design and the test environment and computing the cir-
cuit’s output values when subjected to a given test. The outputs are generated by
evaluating each internal design component throughout the entire simulated time in-
terval. A simulator obtains the values of primary inputs to the design from a test and
then evaluates the circuit’s internal logic, as well as its primary outputs, in discrete
timesteps. The obtained data can then be saved into a file for later comparison with
the outputs of an architectural simulator, used as a golden model, or can be presented
to a verification engineer in form of waveforms, i.e., a diagram of the signals’ values
over time. The latter form is often adopted when diagnosing incorrect test responses
to investigate the root cause of a bug, since all signal transitions can be visualized.
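The sketch below illustrates this checking loop in miniature: a hypothetical two-bit counter is simulated cycle by cycle and its output is compared against a trivial golden model after every step. Both the netlist-style next-state function and the reference model are invented for illustration; a production flow would, of course, involve an HDL simulator and a full architectural reference.

# Minimal cycle-based simulation of a 2-bit synchronous counter,
# checked every cycle against a golden (reference) model.

def dut_next_state(state, reset):
    """Gate-level-style next-state logic: increment modulo 4 unless reset."""
    q1, q0 = state
    if reset:
        return (0, 0)
    n0 = 1 - q0                       # toggle the low bit
    n1 = q1 ^ q0                      # carry into the high bit
    return (n1, n0)

def golden(cycle, reset_cycles):
    """Architectural reference: counter value expected at a given cycle."""
    active = cycle - reset_cycles     # cycles elapsed since reset was released
    return max(active, 0) % 4

state = (0, 0)
for cycle in range(10):
    reset = cycle < 2                 # hold reset for the first two cycles
    state = dut_next_state(state, reset)
    value = state[0] * 2 + state[1]
    expected = golden(cycle + 1, 2)
    assert value == expected, f"mismatch at cycle {cycle}: {value} != {expected}"
print("all cycles match the golden model")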
Test sequences supplied to the simulator can be deterministic, such as manually-
generated assembly tests, or may be pseudo-random. The former are usually short
and designed to validate specific features or modules of the device, as required by
the test plan. Randomized tests are most commonly focused on system-level aspects
and interactions, subjecting the design to a variety of stressful stimuli. To this end,
verification engineers leverage pseudo-random test generators (also called RTGs)
that can be tuned to produce streams of valid instructions with a wide range of prop-
erties, e.g., focused activity on certain functional units, specific inter-instruction de-
pendencies, and so on [BLL+ 04, AAF+ 04]. Some RTGs have the ability to monitor
a number of pre-selected internal design signals and use the information to dynam-
ically adjust the test they generate so as to boost stimulus quality [WBA07, FZ03].
Furthermore, the most relevant random and directed tests are combined into regres-
sion suites: sets of tests simulated each time the design is modified, to guarantee
that the changes did not produce new bugs.
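A heavily simplified sketch of a pseudo-random test generator is given below: the instruction mnemonics, register names, weights and dependency-biasing knob are all invented, and real RTGs add constraint solving, exception scenarios and coverage feedback on top of this basic idea.

import random

# Toy pseudo-random test generator: emits a stream of assembly-like instructions
# with tunable weights, optionally biasing generation towards data dependencies.

MNEMONICS = {            # mnemonic -> weight (higher = generated more often)
    "add":    4,
    "mul":    2,
    "load":   3,
    "store":  2,
    "branch": 1,
}

def gen_test(length=20, dependency_bias=0.5, seed=None):
    rng = random.Random(seed)
    ops = list(MNEMONICS)
    weights = [MNEMONICS[m] for m in ops]
    last_dest = None
    program = []
    for _ in range(length):
        op = rng.choices(ops, weights=weights)[0]
        dest = f"r{rng.randrange(16)}"
        # With some probability, reuse the previous destination as a source
        # to create read-after-write dependencies between instructions.
        if last_dest is not None and rng.random() < dependency_bias:
            src = last_dest
        else:
            src = f"r{rng.randrange(16)}"
        program.append(f"{op} {dest}, {src}, r{rng.randrange(16)}")
        last_dest = dest
    return program

for line in gen_test(length=5, seed=42):
    print(line)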
As mentioned before, simulation-based techniques cannot provide guarantees
of design correctness for the scenarios that are not explicitly tested. To gauge the
need for additional verification throughout an industrial-scale digital system devel-
opment, designers track coverage, which is a measure of thoroughness of verifica-
tion. For example, achieving 100% code coverage is a typical goal, thus requiring
that every statement in the RTL code is activated in simulation at least once. Code cov-
erage alone is a fairly weak metric to evaluate the completeness of a test suite; thus,
more sophisticated measures such as functional and transaction coverage are com-
monly used in addition to code coverage [Glu06]. As discussed in detail in [Piz04],
functional coverage metrics make it possible to evaluate the testing quality of high-level
functions in a design's block in a thorough and systematic fashion.
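The following sketch gives a flavor of functional coverage collection: a handful of hypothetical coverage bins for an imagined load/store interface are evaluated over a simulated transaction trace, and the coverage figure is simply the fraction of bins hit at least once. The bin names and transaction format are invented for illustration.

# Toy functional-coverage collector: each bin is a named predicate over a
# simulated transaction; coverage is the fraction of bins hit at least once.

BINS = {
    "aligned_load":     lambda t: t["op"] == "load" and t["addr"] % 8 == 0,
    "unaligned_load":   lambda t: t["op"] == "load" and t["addr"] % 8 != 0,
    "store_after_load": None,   # cross-event bin, handled separately below
}

def collect(transactions):
    hits = {name: 0 for name in BINS}
    prev_op = None
    for t in transactions:
        for name, pred in BINS.items():
            if pred is not None and pred(t):
                hits[name] += 1
        if prev_op == "load" and t["op"] == "store":
            hits["store_after_load"] += 1
        prev_op = t["op"]
    covered = sum(1 for c in hits.values() if c > 0)
    return hits, 100.0 * covered / len(BINS)

trace = [{"op": "load", "addr": 16}, {"op": "store", "addr": 24},
         {"op": "load", "addr": 3}]
hits, pct = collect(trace)
print(hits, f"functional coverage: {pct:.0f}%")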
One of the shortcomings of the framework just described is its reliance on the
architectural simulator to detect errors. Because modern microprocessors are ex-
tremely complex systems, with vast amounts of internal state, subtle errors in inner
blocks can often take hundreds or thousands of cycles to manifest at the architectural
level. Tracing an error backwards from an erroneous architectural event to the actual
malfunction is a tedious and time-consuming process; therefore, the ability to detect
issues as soon as they occur is a major advantage in terms of length of the diagnosis
effort. One way to achieve this is to use high-level behavioral models of individual
blocks, instead of a monolithic architectural simulator. These models can be derived
from the microarchitectural description of the processor and converted to dedicated
checking units, running concurrently with the RTL simulation and comparing their
output to the corresponding block’s simulated output. Sometimes, block-level errors
can be identified with simple scoreboards, which could, for instance, keep track of
request-response pairs and thus detect rogue messages. Alternatively, designers can
use assertions or checkers to encode invariants of the design’s operation and make
sure that they are upheld throughout the simulation. More information on this topic
can also be found in [FKL04, YPA06]. Assertions on input signals are also helpful
in detecting improper communication between blocks or errors arising from misin-
terpretation of the specification. Digital logic designers may interpret the specifica-
tion in different ways while developing their components. However, if they encode
their assumptions on the behavior of the surrounding components in the form of
assertions, mismatches can be detected early in the development process, as soon
as blocks are integrated. Finally, assertions may be used to evaluate coverage by
monitoring the input signals into the assertion. To enable these advanced verifica-
tion concepts, in the past few years, the electronic design industry has developed

Fig. 2.1 A typical framework for simulation-based verification. Inputs to a logic simulator are
typically manually-written directed tests and/or randomized sequences produced automatically by
a pseudo-random test generator. Tests are fed into a logic simulator, which, in turn, uses them as
stimuli to the integrated test environment and design description. The test environment emulates
the behavior of blocks surrounding the design under test. The simulator computes outputs and
internal signal values of the design from the test's inputs. These outputs can then be analyzed with
a variety of tools: they can be compared with the output of a golden architectural model, can be
viewed as waveforms, particularly for error diagnosis, can be monitored by assertions and checkers
to detect violations of invariants in the design behavior and, finally, can be used to track coverage,
thus evaluating the thoroughness of the test.

dedicated languages for hardware verification (HVLs), some examples of which are
OpenVera [HKM01], the e language [HMN01] and SystemVerilog [IEE07]. The
latter implements a unified framework for both hardware design and verification,
providing an object-oriented programming model, a rich assertion specification lan-
guage, and even automatic coverage collection.
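As a small illustration of the scoreboard idea mentioned above, the sketch below pairs requests with responses for a hypothetical block and flags rogue, mismatching or missing responses. The field names are invented, and an industrial testbench (in SystemVerilog, e or OpenVera) would naturally be far richer.

# Toy scoreboard: every request must receive exactly one matching response.
# Responses without an outstanding request are flagged as "rogue".

class Scoreboard:
    def __init__(self):
        self.pending = {}            # request id -> expected response payload
        self.errors = []

    def on_request(self, req_id, expected):
        self.pending[req_id] = expected

    def on_response(self, req_id, payload):
        if req_id not in self.pending:
            self.errors.append(f"rogue response for id {req_id}")
        elif self.pending.pop(req_id) != payload:
            self.errors.append(f"data mismatch for id {req_id}")

    def final_check(self):
        for req_id in self.pending:
            self.errors.append(f"no response for id {req_id}")
        return self.errors

sb = Scoreboard()
sb.on_request(1, expected=0xAB)
sb.on_response(1, payload=0xAB)      # matches: no error
sb.on_response(7, payload=0x00)      # rogue: never requested
print(sb.final_check())              # ['rogue response for id 7']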

2.1.3 Formal verification

Formal verification encompasses a variety of techniques that have one common prin-
ciple: to prove with mathematical guarantees that the design abides to a certain
specification under all valid input sequences. If the design does not adhere to the
specification, the formal techniques should produce a counterexample execution se-
quence, which describes one way of violating the specification. To this end, the
design under verification and the specification must be represented as mathemati-
cal/logic formulas subjected to formal analysis. One of the most important advan-
tages of these solutions over simulation-based techniques is the ability to prove cor-
rectness for all legal stimuli sequences. As we described in the previous section,
only explicitly simulated behaviors of the design can be tested for correctness and,
consequently, only the presence of bugs can be established. Formal verification ap-
proaches, however, can reason about absence of errors in a design, without the need
to exhaustively check all of its behaviors one by one. The body of work in the field
of formal verification is immense and diverse: for decades researchers in industry
and academia have been developing several families of solutions and algorithms,
which are all too numerous to be fully discussed in this book. Fortunately for the
readers, sources such as [Ber05, PF05, PBG05, BAWR07, WGR05, CGP08, KG99]
and many others describe the solutions in this domain in much greater detail. In this
book, we overview some of the most notable techniques in the field in the hope of
stirring the readers' curiosity to investigate this research field further. However, before
we begin this survey, we first must take a look at two main computation engines,
which empower a large fraction of formal methods, namely SAT solvers and binary
decision diagrams (BDDs).
SAT is a short-hand notation for Boolean satisfiability, which is the classic the-
oretical computer science NP-complete problem of determining if there exists an
assignment of Boolean variables that evaluates a given Boolean formula to true, or
showing that no such assignment exists. Therefore, given Boolean formulas describ-
ing a logic design and a property to be verified, a SAT-based algorithm constructs
an instance of a SAT problem, in which a satisfying assignment of variables rep-
resents a violation of the property. For example, given a design that arbitrates bus
accesses between two masters, one can use a SAT-solver to prove that it will never
assert both grant lines at the same time. Boolean satisfiability can also be applied to
sequential circuits, which are in this case “unrolled” into a larger design by replica-
tion of the combinational logic part and elimination of the internal state elements.
Engineers can then check if certain erroneous states can be reached in this “un-
rolled” design or if invariants of execution are always satisfied. As we will describe
in the subsequent section, SAT techniques can also be used for equivalence check-
ing, i.e., establishing if two representations of a circuit behave the same way for
all input sequences. Today, there is a variety of stand-alone SAT-solver applications
available, typically they either implement a variation of the Davis-Putnam algorithm
[DP60] as, for instance, GRASP [MSS99] and MiniSAT [ES03], or use stochastic
methods to search for satisfying variable assignments, as in WalkSAT [SKC95].
Heuristics and inference procedures used in these engines can dramatically improve
the performance of the solver; however, in the worst case the satisfiability problem
remains NP-complete, that is, as of today, it requires time exponential in the input size to
complete execution. As a result, solutions based on SAT solvers cannot be guaranteed
to provide results within reasonable time. Furthermore, when handling sequential
designs, these techniques tend to dramatically increase the size of the SAT problem,
due to the aforementioned circuit “unrolling”, further exacerbating the problem.
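The arbiter example can be pictured as in the sketch below: the mutual-exclusion property is negated and checked for a satisfying input assignment. For readability, the search is a brute-force enumeration standing in for a real SAT solver operating on a CNF encoding, and the two-request priority arbiter itself is invented.

from itertools import product

# Hypothetical combinational arbiter: request 0 has priority over request 1.
def grants(req0, req1):
    grant0 = req0
    grant1 = req1 and not req0
    return grant0, grant1

# Property to prove: grant0 and grant1 are never asserted together.
# We check satisfiability of the *negation* (both grants high) over all inputs;
# a real flow would hand an equivalent CNF formula to a SAT solver instead.
def find_violation():
    for req0, req1 in product([False, True], repeat=2):
        g0, g1 = grants(req0, req1)
        if g0 and g1:
            return {"req0": req0, "req1": req1}   # counterexample assignment
    return None                                   # UNSAT: the property holds

print(find_violation())   # None -> mutual exclusion holds for this arbiter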

Fig. 2.2 Binary decision diagrams. a. A full-size decision tree for the logic function f =
A|(B&C): each layer of nodes represents a logic variable and edges represent assignments to the
variable. In the figure, solid edges represent a 1 assignment, while dashed edges represent a 0 value
of the variable. The leaf nodes of the tree are either of 1- or 0-type and represent the value that the
entire Boolean expression assumes for the assignment along the corresponding path from root to leaf. Note that
the size of the full-size decision tree is exponential in the number of variables in the formula. b.
Reduced ordered binary decision diagram for the logic function f = A|(B&C). In this data structure
redundant nodes are removed, creating a compact representation for the same Boolean expression.

Binary decision diagrams (BDDs), the second main computational engine used
in formal verification, are data structures to represent Boolean functions [Bry86].
BDDs are acyclic directed graphs: each node represents one variable in the formula
and has two outgoing edges, one for each possible variable assignment – 0 and 1
(see Figure 2.2). There are two types of terminal (also called “leaf”) nodes in the
graph, 0 and 1, which correspond to the value assumed by the entire function for
a given variable assignment. Thus, a path from the root of the graph to a leaf node
corresponds to a variable assignment. The corresponding leaf node at the end of the
path represents the value that the logic function assumes for the assignment. BDDs
are reduced to contain fewer nodes and edges and, as a result, can often represent
complex Boolean functions compactly [Bry86, BRB90]. An example of this is il-
lustrated in Figure 2.2.b, where redundant nodes and edges in the complete binary
decision tree for the function f = A|(B&C) are removed, creating a structure that is
linear in the number of variables. BDD software packages as, for instance CUDD
[Som09], contain routines that allow a fast and efficient manipulation of BDDs and
on-the-fly tree minimization algorithms. Within formal verification, binary decision
diagrams can be used for equivalence checking, i.e., testing if two circuits imple-
ment the same logical function, in reachability analysis and symbolic simulation, as
well as in other techniques. In reachability analysis, BDDs are used to represent the
set of states that a design can attain under all possible inputs. This set can be later
analyzed to detect the presence of erroneous states. In symbolic simulation, on the
other hand, decision diagrams store formulas representing the design’s state and the
functionality of the outputs in terms of all possible input values. There are, how-
ever, a few drawbacks to BDD-based solutions, the most important one being the
“memory explosion problem”: a situation, where functions cannot be represented
compactly by the data structure, causing a prohibitively large number of nodes to be
created in the host computer’s memory. As a consequence, the host may terminate
before completion of the formal proof. The engineering team is then forced to re-
structure the design or apply simplifying assumptions or constraints to the system,
abstracting away some of the possible behaviors.
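The sketch below builds the reduced diagram of Figure 2.2.b for f = A|(B&C) by Shannon expansion over a fixed variable order, using a unique table for node sharing. It is meant only to show the sharing and reduction rules, and omits the apply/compose machinery that packages such as CUDD provide.

# Toy reduced ordered BDD for f = A | (B & C), built by Shannon expansion over
# the fixed variable order A, B, C. Nodes are (var, low, high) tuples, terminals
# are the integers 0 and 1; the unique table enforces node sharing and the
# reduction rule drops any node whose low and high children coincide.

ORDER = ["A", "B", "C"]
unique_table = {}

def mk_node(var, low, high):
    if low == high:                       # redundant test: reduce it away
        return low
    key = (var, low, high)
    return unique_table.setdefault(key, key)

def build(func, level=0, assignment=None):
    assignment = assignment or {}
    if level == len(ORDER):               # all variables assigned: a terminal
        return 1 if func(assignment) else 0
    var = ORDER[level]
    low = build(func, level + 1, {**assignment, var: 0})
    high = build(func, level + 1, {**assignment, var: 1})
    return mk_node(var, low, high)

f = lambda a: a["A"] or (a["B"] and a["C"])
root = build(f)
print(root)                 # ('A', ('B', 0, ('C', 0, 1)), 1)
print(len(unique_table))    # 3 decision nodes survive, as in Figure 2.2.b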
As we already mentioned above, SAT-solvers and BDD libraries enable design-
ers to establish how circuits operate under all possible input conditions and conduct
a variety of analyses. Let us now briefly review some of the formal verification
solutions that are employed in today’s semiconductor industry, outlining their ad-
vantages and limitations. Note that only major families of solutions are presented
here, since a comprehensive survey is beyond the scope of this book.

Theorem proving is a powerful technique that uses automatic reasoning to derive
proofs for “theorems” about the system under study. Theorems are mathematical for-
mulas that describe the design’s functionality and its properties in abstract form.
Theorem provers may use a variety of theories to derive the mathematical trans-
formations required to prove a given theorem. More information may be found in
[KK94], which also provides a collection of references on this subject. Theorem
provers have found wide acceptance in both the software and hardware verifica-
tion domains, however, these systems are still not fully automatic and often human
assistance is required in directing the proof derivation process. Consequently, en-
gineers spend valuable time investigating conjectures produced by the prover tool
and assisting it in reaching its goal. Furthermore, the behavior of the design and the
properties are usually specified in abstract form, requiring even more engineering
effort. As the result, theorem provers are often applied to high-level protocol or ar-
chitectural descriptions to ensure that such invariants as absence of deadlock and
fairness are upheld. The RTL implementation can then be compared to the formally
verified architectural model using more scalable techniques, such as simulation.

The reachability analysis problem has been mentioned before and consists of char-
acterizing the set of states that a design can attain during execution. To this end the
combinational function of the design is typically represented as a binary decision
diagram and the reset state of the circuit is used to initialize the “reached set”. Then
BDD manipulation functions are used to compute all states that the system can at-
tain after one clock cycle of execution, which are then added to the reached set.
The process continues until the host runs out of memory or the reached set stops in-
creasing, in which case the obtained set contains all states that a system can achieve
throughout the execution and can be subsequently analyzed for absence of errors and
presence of runtime invariants. Reachability analysis is a very powerful verification
tool, although one must keep in mind that this technique, as other BDD-based solu-
tions, suffers from the memory explosion problem and has limited scalability. See
also publications such as [CBM89, RS95] for more information on this approach.
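The fixpoint at the heart of reachability analysis can be sketched as follows; for readability, states are enumerated explicitly with Python sets rather than encoded as BDDs, and the toy transition function (a saturating counter with an enable input) is invented.

# Explicit-state stand-in for symbolic reachability: repeatedly apply the
# transition relation to the reached set until it stops growing (a fixpoint).

def next_states(state, inputs):
    """Toy transition function: a saturating 3-bit counter with an enable input."""
    (count,) = state
    return {(min(count + 1, 7),)} if inputs["en"] else {(count,)}

def reachable(initial, input_space):
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        new = set()
        for state in frontier:
            for inputs in input_space:
                new |= next_states(state, inputs)
        frontier = new - reached      # only genuinely new states go back in
        reached |= frontier
    return reached

states = reachable({(0,)}, [{"en": 0}, {"en": 1}])
print(sorted(states))                 # all counter values 0..7 are reachable
assert (9,) not in states             # e.g., check an "erroneous" state is absent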

Model checking, described in detail in [CGP08, CBRZ01], is another powerful
technique that establishes if states and transitions within the design adhere to a for-
mal specification. Properties in this technique are typically written as formulas in
temporal logic, which reasons about events in time; for instance, a property may
require that a given system’s event will eventually occur, independently of the type
of execution. However, frequently, formal properties have a limited time window in
which they need to be considered, in which case bounded model checking (BMC)
can be used. In bounded model checking a property is considered only over a finite
number of clock cycles of execution: if no violation is exposed then the property
holds. Solutions in this domain may use either BDDs or SAT solvers as the under-
lying engine. A variety of property specification languages exists today, including
PSL [Acc04], recently extended to PSL/Sugar [CVK04], System Verilog Assertion
language (SVA) [VR05], which is a subset of SystemVerilog, the proprietary Blue-
spec language [Arv03], TLA+ [Lam02], etc. However, since these languages are
declarative, they are often fairly complex to use in describing non-trivial properties
and in general are more difficult to use than imperative languages. Indeed, indus-
try experts report that, when a property violation is exposed in model checking, it
is more often the case that the problem lies in the property description than in the
design itself. The scalability of model checking may also be limited due to the
underlying engines' complexity and memory demands.
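The sketch below mimics bounded model checking on a toy, invented design (a FIFO occupancy counter with no overflow protection): the transition function is unrolled for a fixed number of cycles and every input sequence of that length is searched for a violation of the safety property. Exhaustive enumeration here stands in for the SAT or BDD engine that a real BMC tool would employ.

from itertools import product

# Toy design: a 2-entry FIFO occupancy counter with push/pop inputs.
def step(occupancy, push, pop):
    occupancy += (1 if push else 0) - (1 if pop and occupancy > 0 else 0)
    return occupancy

# Safety property: occupancy never exceeds the FIFO capacity of 2.
def bmc(depth=4):
    for inputs in product([(p, q) for p in (0, 1) for q in (0, 1)], repeat=depth):
        occ = 0
        for cycle, (push, pop) in enumerate(inputs):
            occ = step(occ, push, pop)
            if occ > 2:
                return cycle, inputs          # counterexample trace
    return None                               # property holds up to `depth`

print(bmc())   # finds an overflow trace: the toy design lacks a "full" check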

Symbolic simulation [CJB79, Bry85, BBB+ 87, BBS90, Ber05] has many similari-
ties with logic simulation, in that output functions are computed based on input val-
ues. The main difference here is that inputs are now symbolic Boolean variables and,
consequently, outputs are Boolean functions. For instance, the symbolic simulation
of the small logic block in Figure 2.3 produces the Boolean expression A&(B|C) for
the output, which we show in the BDD depicted by the output wire. Expressions for
the state of the design and the values of its outputs are updated at each simulation
cycle. The Boolean expressions altogether are a compact representation of all the
possible behaviors that the design can manifest, for all possible inputs, within the
simulated cycles. To find the response of the circuit for a concrete input sequence (a
sequence of 0s and 1s), one simply needs to evaluate the computed expressions with
appropriate values for the primary inputs. Consequently, symbolic simulation can
be used to compute a reached state set (within a fixed number of simulation cycles)
or prove a bounded time property. As with other solutions, however, the reliance on
BDDs brings the possibility of the simulator exhausting the memory resources of
the host before errors in the circuit can be identified.
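A minimal rendition of this idea is shown below: signals carry expression trees over symbolic inputs instead of concrete values, and evaluating the output's tree for one concrete input vector reproduces what ordinary logic simulation would compute. The tuple-based expression representation is a stand-in for the BDDs used by actual symbolic simulators.

# Sketch of symbolic simulation: signals carry expression trees over symbolic
# inputs instead of concrete 0/1 values. Evaluating a tree for a concrete
# input assignment reproduces what ordinary logic simulation would compute.

def AND(x, y): return ("and", x, y)
def OR(x, y):  return ("or", x, y)

def evaluate(expr, assignment):
    if isinstance(expr, str):                 # a symbolic primary input
        return assignment[expr]
    op, x, y = expr
    vx, vy = evaluate(x, assignment), evaluate(y, assignment)
    return (vx and vy) if op == "and" else (vx or vy)

# Symbolically simulate the circuit of Figure 2.3: F = A & (B | C)
A, B, C = "A", "B", "C"
F = AND(A, OR(B, C))
print(F)                                       # ('and', 'A', ('or', 'B', 'C'))

# Concrete replay: evaluate the symbolic output for one input vector
print(evaluate(F, {"A": 1, "B": 0, "C": 1}))   # 1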

Fig. 2.3 An example of symbolic simulation. In symbolic simulation, outputs and internal design
nodes are expressed as Boolean functions of the primary inputs, which, in turn, are described by
Boolean variables. In the circuit shown in the figure, symbols A, B and C are applied to the primary
inputs and the output assumes the resulting expression A&(B|C). Functions in symbolic simulation
are typically represented by binary decision diagrams, as the one shown at the circuit’s output.

As discussed above, modern digital logic designers have access to a wide va-
riety of high-quality formal tools that can be used for different types of analyses
and proofs. Yet all of them have limitations, requiring either deep knowledge of
declarative languages for specification of formal properties or being prone to ex-
hausting memory and time resources. Consequently, processor designers, who work
with extremely large and complex system descriptions, must continue to rely mostly
on logic simulation for verification of their products at the pre-silicon level. The
guarantees for correctness that formal techniques provide, however, have not been
overlooked by the microprocessor industry, and formal tools are often deployed to
verify the most critical blocks in modern processors, particularly control units. In
addition, researchers have designed methods to merge the power of formal analysis
with the scalability of simulation-based techniques, creating hybrid or semi-formal
solutions, such as the ones surveyed in [BAWR07]. Hybrid solutions use a variety
of techniques to leverage the scalability of simulation tools and the depth of analy-
sis of formal techniques to reach better coverage quality with a manageable amount
of resources. They often use a tight integration among several tools and make an
effort to have them exchange relevant information. Although hybrid solutions com-
bine the best of the formal and simulation worlds, their performance is still outpaced
by the growing complexity of processor designs and shortening production sched-
ules. Thus, the verification gap continues to grow and designs go into the silicon
prototype stage with latent bugs.

2.1.4 Logic optimization and equivalence verification

The RTL simulation and formal analysis described above are, perhaps, the most
important and effort-consuming steps in the entire processor design flow. After these
steps are completed, the register-transfer level description of the design must be
transformed and further refined before a prototype can be manufactured. The next
step in the design flow is logic synthesis, which automatically converts an RTL
description into a gate-level netlist (see [HS06] for more details of logic synthesis
algorithms and tools). To this end, expressions in the RTL source code are expanded
and mapped to structures of logic gates, based on the target library specific to the
manufacturing process to be used. These, in turn, are further transformed through
local and global optimizations in order to attain a design with better characteristics,
such as lower timing delay, smaller area, etc. Again, with the increasing level of
detail, the code base grows further and, correspondingly, the performance of the
circuit’s simulation is reduced. Formal techniques are often deployed in this stage
to check that transformations and optimizations applied during synthesis do not alter
the functionality of the design. This task is called equivalence checking and it is most
often deployed to prove the equivalence of the combinational portion of a circuit. In
contrast, sequential equivalence checking has not yet reached sufficient robustness
to be commonly deployed in the industry. The basis of combinational equivalence
checking relies on the construction of a miter circuit connecting two netlists from
before and after a given transformation. The corresponding primary inputs in both
netlists are tied together, while the corresponding outputs are connected through xor
gates, as shown in Figure 2.4. Several techniques can then be used to prove that
the miter circuit outputs 0 under all possible input stimuli, thus indicating that the
outputs of the two netlists are always identical when subjected to identical inputs.
Among these are BDD-based techniques, used in a fashion similar to symbolic
simulation; SAT solvers, which prove that the miter circuit cannot be satisfied (that is,
can never evaluate to 1); and graph isomorphism algorithms, which help
prune the size of the circuits whose equivalence must be proven.
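The miter-based check can be summarized with the sketch below, where two invented versions of the same function (one before and one after a factoring optimization) are compared; exhaustive input enumeration plays the role that a SAT solver or BDD engine would play on realistic netlists.

from itertools import product

# Two versions of the same combinational function, e.g., before and after a
# synthesis optimization (both are invented examples).
def design_a(a, b, c):
    return (a and b) or (a and c)          # original RTL-style expression

def design_b(a, b, c):
    return a and (b or c)                  # optimized, factored form

# Miter: tie inputs together and XOR the corresponding outputs. The designs
# are equivalent iff the miter output is 0 for every input combination; a
# real flow would prove this with a SAT solver or BDDs instead of enumeration.
def miter_unsat(d1, d2, n_inputs):
    for vec in product([False, True], repeat=n_inputs):
        if d1(*vec) != d2(*vec):           # XOR of the outputs evaluates to 1
            return False, vec              # counterexample: designs differ
    return True, None

print(miter_unsat(design_a, design_b, 3))  # (True, None): equivalent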
After the netlist is synthesized, optimized and validated, it must undergo the
place and route phase, where a specific location on silicon is identified for each
logic gate in the design and all connecting wires are routed among the placed gates
[AMS08]. Only after this phase the processor is finally described at the level of
individual transistors, and thus engineers can validate properties such as electrical
drive strength and timing with accurate SPICE (Simulation Program with Integrated
Circuit Emphasis) techniques. It is typical at this stage to only be able to analyze a
small portion of the design at a time, and to focus mostly on the critical paths through
the processor’s sub-modules.
Fig. 2.4 Miter circuit construction for equivalence checking. To check that two versions A and
B of a design implement the same combinational logic function, a miter is built by tying corre-
sponding primary inputs together and xor-ing corresponding primary outputs. Several techniques,
including formal verification engines such as Binary Decision Diagrams and SAT solvers, can then
be invoked to prove that the outputs of such a miter circuit can never evaluate to 1 under all
possible input combinations.

2.1.5 Emulation and beyond

As mentioned before, RTL simulation is several orders of magnitude slower than
simulation at the architectural level. However, its performance can be improved by
leveraging emulation techniques. In emulation, also called fast-prototyping, a de-
sign is mapped into programmable hardware components, such as FPGAs. These
components can be configured after their manufacturing to implement any logic
function in hardware and, thus, can be used to create early prototypes of complex
systems before they are manufactured. Several companies provide fast prototyping
solutions based on FPGAs or other specialized reconfigurable components. It is typ-
ical for these systems to have lower performance and lower device density than the
final manufactured version of the system under verification. On the other hand, em-
ulation can be conducted on a hardware prototype at much better performance than
in a software simulator. Modern FPGAs can run at speeds of hundreds of mega-
hertz (about one order magnitude slower than today’s processors) and one may need
dozens of them to model a single processor. Note also that, when a block of logic is
mapped to an FPGA, its internal signals become invisible to the engineer, thus error
diagnosis may become more challenging, unless other means of access to internal
signals are specifically implemented in the prototype. Despite these shortcomings,
the emulation of a processor design is an important and useful step in pre-silicon
verification, since it allows a detailed netlist description to be executed at fairly high
performance, enabling testing with longer sequences of stimuli and achieving higher
design coverage.

After the design is deemed sufficiently bug-free, that is, satisfactory coverage
levels are reached and the design is stable, the device is taped out, i.e., sent to the
fabrication facility to be manufactured in silicon. Once the first few prototypes are
available, the verification transitions into the post-silicon phase, which we describe
in the following section. It is important to remember that the pre- and post-silicon
phases of processor validation are not disjoint ventures: much of the verification
collateral generated in early design stages can and should be reused for validation
of the manufactured hardware. For instance, random test generators, directed tests
and regression suites can be shared between the two. Moreover, in the case of ran-
domized tests, architectural simulators can be used to check the output of the actual
hardware prototypes for correctness. Finally, RTL simulation and emulation pro-
vide valuable support in silicon debugging. As we explain later, observability of the
design's internal signals in post-silicon validation is extremely limited; therefore, it
is very difficult to diagnose internal design conditions that manifest into an error.
To alleviate this, test sequences exposing bugs may be replayed in RTL simulation
or emulated, to narrow the region affected by the problem and to diagnose the root
cause of the error.
In summary, the pre-silicon verification of a modern processor is a complex and
arduous task, which requires deep understanding of the design and the capabilities
of validation tools, as well as good planning and management skills. After each de-
sign transformation, several important questions must be answered: what needs to
be verified? how do we verify it? and how do we know when verification is sat-
isfactory? Time is another important concern in this process, since designers must
meet stringent schedules, so their products remain competitive in the market. There-
fore, they must often forgo guarantees of full correctness for the circuit, and rely on
coverage and other indirect measures of validation thoroughness and completeness.
Exacerbating the process further is the fact that the design process is not a
straightforward one: some modules and verification collaterals may be inherited from
previous generations of the product, and some may be only available in a low-level
description. For instance, performance-critical units may be first created at gate or
transistor level (so called full-custom design), and the corresponding RTL is gener-
ated later. Moreover, often the verification steps discussed in this section must be
revisited several times, e.g., when a change in one unit triggers modifications to oth-
ers components. Thus, pre-silicon verification goes hand in hand with design, and it
is often hard to separate them from each other.

2.2 Post-silicon Validation

Post-silicon validation commences when the first prototype of a design becomes
available. With the actual silicon in hand, designers can test many physical char-
acteristics of the device, which could not be validated with models of the design.
For instance, engineers can determine the operational region of the design in terms
of such parameters as temperature, voltage and frequency. Physical properties of
the device can also be evaluated using detailed transistor-level descriptions, as dis-
cussed in the previous section but, when using such models, these analyses can only
be conducted on fairly small portions of the design and on very short execution se-
quences, making it difficult to attain high-quality measures of electrical aspects.
For instance, the operational region of the device cannot be evaluated precisely in
pre-silicon and must be checked after the device is manufactured. The actual oper-
ational region of the design is compared to the requirements imposed by the speci-
fication, driven by the market sector targeted by this product. Processors for mobile
platforms, for example, usually must operate at lower voltages, to reduce power con-
sumption, while server processors, for which performance is paramount, can tolerate
higher temperatures, but also must run at much higher frequency. Other electrical
properties that are also typically checked at this stage include drive strength of the
I/O pins, power consumption, etc.
Most of the electrical defects targeted during post-silicon validation are first dis-
covered as functional errors. For instance, incorrectly sized transistors on the die
may result in unanticipated critical paths. Consequently, when the device is running
at high frequency, occasionally data will not propagate through these paths within a
single cycle, resulting in incorrect computation results. Similarly, jitter in the clock
signal may cause internal flip-flops to latch erroneous data values, or cross-talk be-
tween buses may corrupt messages in flight. Such bugs are frequently first found
as failures of test sequences to which the prototype is subjected. Designers then
proceed to investigate the nature of the bug: by executing the same test sequence on
other prototypes they can determine whether the bug is of an electrical or functional nature.
In fact, functional bugs will manifest on all prototypes, while electrical ones will
only occur in a fraction of the chips. Bugs that manifest only in a very small portion
of the prototypes are deemed to be due to manufacturing defects. Because each of
these issues is diagnosed with distinct methods, a correct classification is key to
shortening the diagnosis time.
In debugging electrical defects, engineers try to first determine the boundaries
of the failure, i.e., discover the range of conditions that trigger the problem, so it
can be reproduced and analyzed in the future. To this end shmoo plots that depict
failure occurrences as a function of frequency and supply voltage are created. Typ-
ically, multiple shmoo plots for different temperature settings are created, adding
the third dimension to the failure region of the processor. This data can then be an-
alyzed for characteristic patterns of various bug types. For instance, failures at high
temperatures are strong indicators of transistor leakage and critical path violations,
while errors that occur at low temperatures are often due to race conditions, charge
sharing, etc. [JG04]. Designers then try to pinpoint the area of the circuit where
the error occurs by adjusting the operating parameters of individual sub-modules
with techniques such as on-die clock shrinks, clock skewing circuits [JPG01] and
optical probing, which relies on lasers to measure voltage across individual transis-
tors on the die [EWR00]. In optical probing, the silicon substrate on the back side
of the die is etched and infrared light is pulsed at a precise point of the die. The
silicon substrate is partially transparent to this wavelength, while doped regions of
transistors reflect the laser back. If electrical charge is present in these regions, the
power of the reflected light changes, allowing the engineers to measure the voltage.
Note that in this case, the top side of the processor, where multiple metal layers
reside, does not need to be physically etched or probed, so integrity of the die is not
violated. Unfortunately, the laser cannot get to the back side of the die through a
heat sink, which, therefore, must be removed around the sampling point. While pro-
viding good spatial and timing resolution, optical probing remains a very expensive
and often ineffective way of testing, since it requires sophisticated apparatus and en-
ables access to only a single location at a time. Finally, when the issue is narrowed
down to a small block, transistor-level simulation can be leveraged to establish the
root cause of the bug and determine ways to remedy it.
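The sketch below produces a toy text-mode shmoo plot: supply voltage and clock frequency are swept over a grid and each cell is marked pass or fail. The pass/fail model is a made-up voltage-versus-frequency threshold; on real silicon every cell would come from running a test program on the prototype at those operating conditions.

# Toy shmoo plot: sweep supply voltage and clock frequency, mark pass/fail.
# The pass/fail model below is invented purely to produce a plausible plot;
# on actual hardware each cell would be the outcome of a real test run.

def passes(voltage, freq_ghz):
    # Pretend the longest path needs more voltage headroom at higher frequency.
    return voltage >= 0.9 + 0.15 * (freq_ghz - 1.0)

voltages = [0.9, 1.0, 1.1, 1.2, 1.3]
freqs = [1.0, 1.5, 2.0, 2.5, 3.0]

print("V\\GHz " + " ".join(f"{f:4.1f}" for f in freqs))
for v in voltages:
    row = " ".join("   *" if passes(v, f) else "   ." for f in freqs)
    print(f"{v:5.2f} {row}")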
As we mentioned above, in addition to electrical bugs, two types of issues can
be discovered in manufactured prototypes: fabrication defects and functional errors,
which are the target of structural testing and functional post-silicon validation, re-
spectively. Although similar at first glance, these approaches have a very important
difference in that testing assumes that the pre-silicon netlist is functionally correct
and tries to establish if each prototype faithfully and fully implements it. Validation,
on the other hand, checks whether the prototype's functionality adheres to the specifica-
tion, that is, if the processor can properly execute software and correctly interact
with other components of a computer system. In the following section we overview
structural testing approaches and discuss silicon state acquisition techniques, which
are typically deployed in complex designs to improve testability. Incidentally, most
of these acquisition solutions can be also used in post-silicon validation, discussed
in Section 2.2.2.

2.2.1 Structural testing

Modern integrated circuits are manufactured with a photolithographic process,
which “prints” individual transistors, as well as metal interconnect between them,
in multiple layers onto a silicon substrate. However, due to the nanometer-scale of
transistors and wires, their features’ boundaries may be blurry or distorted. This
may cause, for example, two metal paths to be shorted together, or result in mis-
alignment of the doped regions of a transistor, impairing the overall functionality
of the circuit. The phase of post-silicon validation that tries to uncover these faults
is called structural testing. In this framework, the gate-level netlist of the design is
assumed as the golden model of functionality and used by automatic test pattern
generators (ATPGs) in producing test sequences that check these design aspects.
An ATPG analyzes the combinational logic of the netlist and infers test vectors that
would expose a range of possible defects generated in the manufacturing process.
In a sense, ATPG-based structural testing can be thought of as equivalence checking
between the pre-production and the printed netlists. The type of faults that ATPGs
can discover include shorts and opens, “stuck-at” defects, where a wire’s value never
changes, and violations of the circuit’s internal propagation delays. ATPG patterns
are then applied to the silicon prototype and the responses are compared to those
predicted by simulation on the pre-production netlist. For example, to test an implementation
of a two-input logical AND gate, an ATPG solution would check that the
output of the silicon prototype is consistent with the truth table of the function, that
is, that the output value is one only when both inputs are high. Modern testing
tools typically do not subject the design to an exhaustive set of test vectors, which
may be prohibitively large, but use advanced heuristics to minimize the set of test
patterns without loss of defect coverage. Note, however, that ATPG testers cannot
discover functional errors in the circuit, that is, discrepancies between design intent
and the implemented silicon prototype.
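
To make the preceding AND-gate example concrete, the short Python sketch below illustrates the single stuck-at fault model in the simplest possible setting. It is purely illustrative: the gate model, fault list and function names are our own and do not correspond to any particular ATPG tool. For each node stuck at 0 or 1, it enumerates the input patterns whose faulty response differs from the fault-free one, which is exactly the property an ATPG-generated test vector must have.

from itertools import product

def and_gate(a, b, stuck=None):
    # Two-input AND gate with an optional single stuck-at fault.
    # Nodes are 'a' and 'b' (inputs) and 'y' (output); 'stuck' is a
    # (node, value) pair forcing that node to 0 or 1.
    if stuck is not None and stuck[0] == 'a':
        a = stuck[1]
    if stuck is not None and stuck[0] == 'b':
        b = stuck[1]
    y = a & b
    if stuck is not None and stuck[0] == 'y':
        y = stuck[1]
    return y

# Single stuck-at fault list: every node stuck at 0 and stuck at 1.
faults = [(node, value) for node in ('a', 'b', 'y') for value in (0, 1)]

# A vector detects a fault if the faulty response differs from the good one.
for fault in faults:
    detecting = [(a, b) for a, b in product((0, 1), repeat=2)
                 if and_gate(a, b) != and_gate(a, b, stuck=fault)]
    print("stuck-at", fault, "detected by", detecting)

A production ATPG flow would then apply covering heuristics to select a minimal set of such vectors that still detects every modeled fault, which is the test-compaction step mentioned above.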
In addition to its inability to discover functional errors in the circuit, the scala-
bility of structural testing is severely limited in sequential designs: i.e., systems that
have internal storage elements in addition to combinational logic blocks. This is es-
pecially pronounced in complex microprocessor designs, where data is retained by
internal storage elements for many cycles. As a consequence, it is virtually impos-
sible for an ATPG technique to create test vectors to be applied to primary inputs of
the circuit that can test the behavior of logic functions deep inside the design and con-
trol all types of manufacturing faults. Likewise, it may be impossible to propagate
the information about the error to primary outputs to be observed by the designer.
Faced with the dual problem of controllability and observability in such complex
sequential circuits, it is mainstream today to augment the design with structures that
allow comparatively easier access to internal logic nodes through I/O pins. This is
commonly referred to as design for testability, or DFT. In particular, DFT tech-
niques often provide ways to sample and write state elements of the circuits, such
as flip-flops and latches, so combinational logic can efficiently be tested by ATPG
patterns. Furthermore, as discussed later in this chapter, DFT techniques play an
important role in functional post-silicon validation and debug, where they are used
to analyze the internal behavior of a prototype that leads to an error. Research on
structural testing and DFT has been carried out for decades in both industry and
academia and it would be impossible to overview the most successful techniques
within the scope of this section. Therefore, we will limit ourselves to briefly discussing
a handful of the most notable solutions for silicon state acquisition, and recommend
two textbooks as starting points for a more in-depth study: “Digital Systems Testing
and Testable Design” [ABF94] and “Essentials in Electronic Testing” [BA00].
One of the most basic and classic techniques in the DFT domain is the scan chain,
an example of which is shown in Figure 2.5. To include a scan chain in
a design, flip-flops are augmented with an additional data input (scan in) and output
(scan out), as well as an enable line. The scan in and scan out lines of different stor-
age elements are then connected serially into a scan chain, the ends of which are tied
to special I/O pins of the device. When the scan enable signal is de-asserted, the flip-
flops act as regular storage cells and the processor operates in normal mode. When
enable is asserted, on the other hand, the scan chain reconfigures itself into a serial
register, so that data can be passed serially from one flip-flop to another. With this
tool at hand, verification engineers can suspend execution and scan out the values of
the chained storage elements through a single output wire and analyze them. Moreover,
an arbitrary internal state can be pre-set through the scan in functionality, enabling
[Fig. 2.5 schematic: two chained D flip-flops, each fed through a scan multiplexer selecting between the Data line and S_in under control of S_en; each stage drives S_out, and all flip-flops share the Clock.]
Fig. 2.5 Scan flip-flop design. To insert a regular D flip-flop in a scan chain, the flip-flop is aug-
mented with a scan multiplexer, which selects the source of data to be stored. During regular
operation, the S_en (scan enable) signal is de-asserted and the flip-flop stores bits from the Data
line. When enable is asserted, however, the flip-flop samples the scan in (S_in) signal instead. This
allows the designers to create a scan chain by connecting the scan out (S_out) output to the scan
in input of the next flip-flop in the chain. The last output in this chain is connected to a dedicated
circuit output port, so that the internal state of the system may be shifted out by asserting the S_en
signal and pulsing the clock. Likewise, since the S_in input of the first chain element is driven
from a circuit’s primary input, engineers can quickly pre-set an arbitrary internal state in the de-
sign for testing and debugging purposes. Note that during scan chain operations the regular design
functionality is suspended.

fine-grain controllability of the device, in addition to observability. Particular imple-
mentations of the scan technique vary in different designs and include full-scan
(all state elements are connected), partial scan (a subset of flip-flops is connected),
and multiple parallel chains. The drawbacks of the approach are a somewhat slower
latch operation, the additional area and interconnect overheads, and, most impor-
tantly, the need to suspend operation of the device under test to shift the state out or
bring a new state in, an activity that requires several clock cycles.
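
The serial shift behavior described above can be captured in a few lines of behavioral code. The Python model below is a minimal sketch under our own naming, not derived from any specific design; it shows how a state is shifted out of, and a new state shifted into, a chain of scan flip-flops one bit per clock pulse while normal operation is suspended.

class ScanChain:
    # Behavioral model of one scan chain of n_flops flip-flops. When
    # scan_enable is asserted, each clock pulse shifts the chain by one
    # position: one bit enters at scan_in and one bit leaves at scan_out.
    # Functional (capture) mode is deliberately not modeled here.
    def __init__(self, n_flops):
        self.state = [0] * n_flops
        self.scan_enable = False

    def clock(self, scan_in=0):
        # One clock pulse in scan mode; returns the bit shifted out.
        assert self.scan_enable, "functional mode is not modeled"
        scan_out = self.state[-1]
        self.state = [scan_in] + self.state[:-1]
        return scan_out

# Dump the captured state and load a new one, one bit per cycle;
# regular operation is suspended for the whole transfer.
chain = ScanChain(n_flops=8)
chain.state = [1, 0, 1, 1, 0, 0, 1, 0]   # state at the point of interest
chain.scan_enable = True

new_state = [0, 1, 1, 0, 1, 0, 0, 1]
shifted_out = [chain.clock(scan_in=bit) for bit in reversed(new_state)]
print("old state:", list(reversed(shifted_out)))  # [1, 0, 1, 1, 0, 0, 1, 0]
print("new state:", chain.state)                  # [0, 1, 1, 0, 1, 0, 0, 1]

As the example makes plain, a chain of n flip-flops requires n scan-clock pulses for a full dump or load, which is the source of the suspension overhead noted above.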
Modern scan chains often rely on an even more complex scan flip-flop design
that overcomes the limitation of having to suspend execution while routing state
in and out of the system. Hold-scan flip-flops consist of two basic scan flip-flops
connected together, called a primary and a shadow flip-flop, as shown in Figure 2.6.
When the capture line is asserted, the shadow element samples and stores the value
of the main latch. This flip-flop can be reconfigured into a scan chain connection
at any later time, so the sampled data can be routed outside of the device. Since
the chain comprises only shadow flip-flops and operates on an independent clock,
the device can continue to function normally, while the captured snapshot is being
transferred out. Similarly, a new state can be loaded into the shadow latches without
interrupting the operation of the device and then propagated to the main flip-flops
by asserting the update line. Although hold-scan flip-flops have a significantly larger
area than the baseline design of Figure 2.5, they provide designers with much more
flexibility in the debugging process and, consequently, are frequently deployed
in microprocessor systems [KDF+04].
[Fig. 2.6 schematic: a primary and a shadow scan flip-flop pair with Capture and Update multiplexers; signals Data, Clock, S_in, S_out, S_clk, Capture and Update.]

Fig. 2.6 Hold-scan flip-flop design. Hold-scan flip-flops provide the ability to overlap system ex-
ecution with scan activity. The component comprises two scan flip-flops, a primary and a shadow
flip-flop. The shadow flip-flop can capture and hold the state of the primary storage element.
Shadow elements are connected in a chain and can transmit the captured values without the need
to suspend regular system operation. Similarly, they can be used to load a system state, which is
then transferred to the primary flip-flops by asserting the update signal. Note that shadow flip-
flops operate on a separate clock (S_clk), so that the transmit frequency can be decoupled from the
system’s operating frequency.
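
In the same behavioral style, the following Python sketch (again hypothetical, with class and signal names of our choosing) models the defining property of the hold-scan scheme: capture snapshots the primary flip-flops into the shadow chain, the shadow chain is then shifted on its own scan clock while the primary state continues to evolve, and update pushes a shadow-loaded state back into the primary flip-flops.

class HoldScanChain:
    # Behavioral model of a chain of hold-scan flip-flops. The primary
    # flip-flops keep operating on the system clock, while the shadow
    # flip-flops form a separate shift register on an independent scan
    # clock, so scan traffic does not suspend normal execution.
    def __init__(self, n_flops):
        self.primary = [0] * n_flops
        self.shadow = [0] * n_flops

    def system_clock(self, data_in):
        # Normal operation: primary flip-flops capture functional data.
        self.primary = list(data_in)

    def capture(self):
        # Assert 'capture': shadow elements snapshot the primary state.
        self.shadow = list(self.primary)

    def scan_clock(self, scan_in=0):
        # One pulse of the independent scan clock shifts the shadow chain.
        scan_out = self.shadow[-1]
        self.shadow = [scan_in] + self.shadow[:-1]
        return scan_out

    def update(self):
        # Assert 'update': primary flip-flops load the shadow contents.
        self.primary = list(self.shadow)

chain = HoldScanChain(4)
chain.system_clock([1, 0, 1, 1])
chain.capture()                              # snapshot is taken here ...
chain.system_clock([0, 1, 1, 0])             # ... while the design keeps running
snapshot = [chain.scan_clock() for _ in range(4)]
print("snapshot (tail of chain first):", snapshot)  # [1, 1, 0, 1]
print("primary kept running:", chain.primary)       # [0, 1, 1, 0]

The symmetric load path would shift a new state into the shadow chain with scan_clock and commit it with update, again without stalling the primary flip-flops.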

Boundary-scan is another technique often used in structural testing and validation,
to allow individual modules of the processor to be tested in isolation [IEE01a].
Boundary-scan was developed by the Joint Test Action Group (JTAG), which was
formed by the industry to enable testing of complex circuit boards. JTAG calls for
inputs and outputs of the chip, or of a block of logic, to be tied to dedicated scan
storage elements, as shown in Figure 2.7. These cells can be configured to perform
several distinct functions by means of specialized control logic. For instance, inputs
to the block can be captured to verify operation of other modules or can be used to
provide stimulus to the block. The response to the stimulus can then be observed