Test-Driven Development: A Practical Guide
By David Astels
Test-Driven Development: A Practical Guide presents TDD from the perspective of the
working programmer: real projects, real challenges, real solutions, ...real code. Dave Astels
explains TDD through a start-to-finish project written in Java and using JUnit. He introduces
powerful TDD tools and techniques; shows how to utilize refactoring, mock objects, and
"programming by intention"; even introduces TDD frameworks for C++, C#/.NET, Python,
VB6, Ruby, and Smalltalk. Invaluable for anyone who wants to write better code... and have
more fun doing it!
By David Astels
Copyright
The Coad Series
About the Series
About the Series Editor
FOREWORD
PREFACE
EXTREME PROGRAMMING
WHO SHOULD READ THIS BOOK?
THE STRUCTURE OF THIS BOOK
CONVENTIONS USED IN THIS BOOK
ACKNOWLEDGEMENTS
Part I. BACKGROUND
Chapter 1. TEST-DRIVEN DEVELOPMENT
WHAT IS TEST-DRIVEN DEVELOPMENT?
LET THE COMPUTER TELL YOU
A QUICK EXAMPLE
SUMMARY
Chapter 2. REFACTORING
WHAT IS REFACTORING?
WHEN TO REFACTOR
HOW TO REFACTOR
SOME IMPORTANT REFACTORINGS
REFACTORING TO PATTERNS
SUMMARY
Chapter 3. PROGRAMMING BY INTENTION
NAMES
SIMPLICITY
WARRANTED ASSUMPTIONS
HOW TO PROGRAM BY INTENTION
"NO COMMENT"
SUMMARY
Part II. TOOLS AND TECHNIQUES
Chapter 4. JUNIT
ARCHITECTURAL OVERVIEW
THE ASSERTIONS
WRITING A TESTCASE
RUNNING YOUR TESTS
USING SETUP() AND TEARDOWN()
USING TESTSUITE
HOW DOES IT ALL FIT TOGETHER?
WHERE DO TESTS BELONG?
TIPS
SUMMARY
Chapter 5. JUNIT EXTENSIONS
STANDARD EXTENSIONS
ADDING MISSING ASSERTS WITH MockObjects
PERFORMANCE AND SCALABILITY WITH JUnitPerf
DAEDALOS JUNIT EXTENSIONS
WRITING XML-BASED TESTS WITH XMLUnit
GARGOYLE SOFTWARE JUNIT EXTENSIONS
Chapter 6. JUNIT-RELATED TOOLS
JESTER
NOUNIT
CLOVER
ECLIPSE
IDEA
Chapter 7. MOCK OBJECTS
MOCK OBJECTS
AN ILLUSTRATIVE EXAMPLE
USES FOR MOCKOBJECTS
WOULDN'T IT BE NICE?
A COMMON EXAMPLE
THE MOCKOBJECTS FRAMEWORK
MOCKMAKER
EASYMOCK
EasyMock Summary
SUMMARY
Chapter 8. DEVELOPING A GUI TEST-FIRST
THE EXAMPLE
THE AWT ROBOT
BRUTE FORCE
JFCUNIT
JEMMY
ULTRA-THIN GUI
SUMMARY
Part III. A JAVA PROJECT: TEST-DRIVEN END TO END
Chapter 9. THE PROJECT
OVERVIEW
USER STORIES AND TASKS
Chapter 10. MOVIE LIST
MAKE A MOVIE CONTAINER
MAKE A MOVIE LIST GUI
ADD A MOVIE IN THE GUI
RETROSPECTIVE
Chapter 11. MOVIES CAN BE RENAMED
SUPPORT MOVIE NAME EDITING
MOVIE RENAME GUI
RETROSPECTIVE
Chapter 12. MOVIES ARE UNIQUE
MOVIES ARE UNIQUE
ERROR MESSAGE ON NON-UNIQUENESS
RETROSPECTIVE
Chapter 13. RATINGS
ADD A SINGLE RATING TO MOVIE
SHOW THE RATING IN THE GUI
EDIT THE RATING
RETROSPECTIVE
Chapter 14. CATEGORIES
ADD A CATEGORY
SHOW THE CATEGORY IN THE GUI
ADD A SELECTION OF CATEGORY
RETROSPECTIVE
Chapter 15. FILTER ON CATEGORY
GET A SUBLIST BASED ON CATEGORY
SUPPORT AN ALL CATEGORY
ADD A CATEGORY SELECTOR TO THE GUI
HANDLE CHANGING A MOVIE'S CATEGORY
INTERFACE CLEANUP
RETROSPECTIVE
Chapter 16. PERSISTENCE
WRITE TO A FLAT FILE
SAVE-AS IN GUI
SAVE IN GUI
READ FROM A FLAT FILE
LOAD IN GUI
RETROSPECTIVE
Chapter 17. SORTING
COMPARE MOVIES
SORT A MOVIELIST
ASK A MOVIELISTEDITOR FOR SORTED LISTS
ADD A WAY TO SORT TO THE GUI
RETROSPECTIVE
Chapter 18. MULTIPLE RATINGS
MULTIPLE RATINGS
RATING SOURCE
REVISED PERSISTENCE
SHOW MULTIPLE RATINGS IN THE GUI
ADD A RATING IN THE GUI
REMOVE THE SINGLE-RATING FIELD
RETROSPECTIVE
Chapter 19. REVIEWS
ADD A REVIEW TO RATINGS
SAVE REVIEW
LOAD REVIEW
DISPLAY REVIEW
ADD A REVIEW
RETROSPECTIVE
Chapter 20. PROJECT RETROSPECTIVE
THE DESIGN
TEST VS. APPLICATION
TEST QUALITY
OUR USE OF MOCKS
GENERAL COMMENTS
DEBUGGING
LIST OF TESTS
SUMMARY
Part IV. XUNIT FAMILY MEMBERS
Chapter 21. RUBYUNIT
THE FRAMEWORK
EXAMPLE
Chapter 22. SUNIT
THE FRAMEWORK
BOOLEAN ASSERTIONS
BLOCK ASSERTIONS
EXCEPTION ASSERTIONS
FAILURE
EXAMPLE
Chapter 23. CPPUNIT
THE FRAMEWORK
EXAMPLE
Chapter 24. NUNIT
THE FRAMEWORK
EXAMPLE
Chapter 25. PYUNIT
THE FRAMEWORK
EXAMPLE
Chapter 26. VBUNIT
THE FRAMEWORK
EXAMPLE
Part V. APPENDICES
Appendix A. EXTREME PROGRAMMING
THE AGILE REVOLUTION
EXTREME PROGRAMMING
THE FOUR VARIABLES
THE VALUES
THE PRACTICES
SUMMARY
Appendix B. AGILE MODELING
THE MYTHS SURROUNDING MODELING
AN INTRODUCTION TO AGILE MODELING (AM)
WHAT ARE AGILE MODELS?
Appendix C. ONLINE RESOURCES
FORUMS
INFORMATION ON AGILE PROCESSES
INFORMATION ON EXTREME PROGRAMMING
JUNIT-RELATED SOFTWARE
JUNIT-RELATED INFORMATION
TOOLS
OTHER XUNIT FAMILY MEMBERS
COMPANIES
MISCELLANEOUS
Appendix D. ANSWERS TO EXERCISES
BIBLIOGRAPHY
Index
Copyright
Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book can be obtained from the Library of Congress
Company and product names mentioned herein are the trademarks or registered trademarks of their
respective owners.
All rights reserved. No part of this book may be reproduced, in any form or by any means, without
permission in writing from the publisher.
First Printing
Dedication
To my parents who, though they may not have always approved or understood, let me try.
David J. Anderson
Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results
David Astels
Test-Driven Development: A Practical Guide
Peter is globally recognized as an accomplished business strategist, model builder, and thought leader.
As business strategist, Peter formulates long-term competitive strategies for Borland. Previously, as
Chairman, CEO, and President of TogetherSoft, he led that company to 11.6-times revenue growth over
two years, profitably overall. As model builder, Peter has built hundreds of models for nearly every
business imaginable, fully focused on building in strategic business advantage. As thought leader, Peter
writes books (six to date) on building better software faster; he speaks at industry events worldwide;
and he is the Editor-in-Chief of The Coad Series, published by Prentice Hall. In addition, Peter was an
invited speaker on business strategy at the 2003 "Future in Review" conference.
Coad attended the Stanford Executive Program for Growing Companies and received a Master of Science
in Computer Science (USC) and a Bachelor of Science with Honors in Electrical Engineering (OSU).
FOREWORD
My responsibility in writing this foreword is to help you decide whether to read this book. If you are
interested in improving your programs and your programming skill, this book can help you.
Test-Driven Development is a practice that can make your programs better. If you're like me, using the
techniques in Dave's book, you will find that your programs are more clear, that they come into being
more easily, and that you'll have fewer defects than you used to.
I'm not saying that TDD is some kind of magic potion; quite the contrary. TDD isn't magic, it is
something that you yourself do. By focusing attention on tests first, you'll be designing your program
more from the viewpoint of the user. By doing the tests one at a time, you'll be creating a simple
design that's focused exactly on the problem. As you work up all these little tests, you'll drive out most
of the defects that otherwise slip into your code. Finally, by saving the tests, you make the program
easier to maintain and improve as time goes on.
Dave's book is full of examples of Test-Driven Development. There's an extended example to show you
how TDD works over a longer haul. There are small examples showing how to use most of the
TDD-related tools that are available. There are even examples in most of the languages where TDD is used,
though the book's main focus is on examples in Java. This is a book about practice, with real examples
rather than dry theory.
But wait! There's more! Dave also gives a good introduction to refactoring, and to programming by
intention. And he introduces Mock Objects, an advanced and powerful technique in testing and
Test-Driven Development. Plus, he has a section on one of the tricky areas in TDD, creating GUIs in
test-first fashion.
You'll also find a quick and valuable summary of eXtreme Programming, a look at Agile Modeling, and a
comprehensive list of online resources relating to all the book's topics.
All these things are good and serve as reasons to buy this book. The core value of Dave's book, the real
meat, is in the code. Test-Driven Development is a technique that we use as we program. No matter
what design or modeling we have done before we begin programming, TDD helps us make the code
better. I'm sure that it will help you, if you'll give this book, and what it teaches, a chance.
Test-Driven Development has made my programs better, and those of many other programmers as well.
It's a technique that is worth adding to your bag of tricks. This book will help you improve as a
programmer. That's why I'm recommending it.
Ron Jeffries
www.XProgramming.com
Pinckney, Michigan
18 December 2002
PREFACE
This isn't a book about testing.
This is a book about a way of programming that leads to simple, clear, robust code. Code that is easy
to design, write, read, understand, extend, and maintain.
This is a book about thinking, designing, communicating, and programming. It's just a really nice side
effect that we end up with a comprehensive[1] suite of tests.
This book explores Test-Driven Development, Test-First Programming, call it what you will: programming
by writing the tests first, then writing the code needed to make the tests pass. Specifically, working in
the smallest steps possible: write just enough of a test to fail, write just enough code to make it pass,
refactor to clean up the mess you made getting the test to pass.
This book focuses on the Java programming language and uses Java examples throughout. It is assumed
that the reader has at least an intermediate understanding of Java (and a working Java system if you
want to try out the examples for yourself). Example code and other support material is available at my
website[URL 54].
Even though the focus is on Java, Part IV looks at other prominent members of the xUnit family for
several popular languages. This is done by taking the first task from Chapter 10 and recasting it in the
various languages. This provides a good comparison of the different frameworks.
EXTREME PROGRAMMING
Test-Driven Development is a core part of the agile process formalized by Kent Beck called eXtreme
Programming (XP). XP is probably the most agile of the agile processes, being extremely low overhead,
and extremely low ceremony. However, it is extremely high discipline, very effective, and incredibly
resilient to change.
That being said, you do not need to adopt XP in order to practice TDD and gain the benefit from it. TDD
is worth doing on its own. The quality of your code will improve. Of course, if you are doing XP it's well
worth it to get really good at TDD.
TDD is one of the main design tools that we have in XP. [2] As I mentioned earlier, the fact that we end
up with a set of tests is a very pleasant by-product. Because we have those tests, we can have
confidence we haven't inadvertently broken anything if the tests ran successfully before the change and
after it. Conversely, if a test fails after we make a change we know exactly what broke and are in the
best position to find the problem and fix it. The only thing that could have caused the failure was the
change we made since the last time the tests ran clean.
All this means is that because the tests are there, we can safely use another of the XP practices:
refactoring. As we will see in Chapter 2, refactoring is the process of making changes to the structure of
code without changing its external behavior. The tests let you confirm that you haven't changed the
behavior. This gives you the courage necessary to make (sometimes drastic) changes to working code.
The result is that the code is cleaner, more extensible, more maintainable, and more understandable.
Appendix A talks a bit more about eXtreme Programming. For more exhaustive information, you can
browse the bibliography and explore the online XP resources listed in Appendix C.
WHO SHOULD READ THIS BOOK?
Read this book if you want to adopt eXtreme Programming. As stated earlier, being able to do TDD well
is worth the time and effort it takes to get good at it. TDD is at the heart of XP, so doing TDD well
makes the entire process that much more effective.
Read this book if you want to write code that is clearer, more robust, easier to extend, and as slim (as
opposed to bloated) as possible.
Read this book if you know there must be a better way than spending weeks or months drawing
pictures before writing a line of code.
Finally, read this book if you want to know how to make programming fun again.
In terms of what you should know before reading this book, it would help if you had at least an
intermediate understanding of Java. Having a good background in another OO language or two (such as
Smalltalk, C++, Python, or Ruby) will, however, enable you to get even more out of this book.
As this book goes to print there is one other TDD book available[9] (although I'm sure many will follow).
I was aware of that book being written as I wrote much of this one, and it was always a partial goal to
be complementary to it. From it you will get the philosophy and metaphysics of TDD, mixed with enough
pragmatics to make it real. If you are so inclined, I encourage you to read it first. The book you hold in
your hands is, as the title says, a practical guide to doing TDD. It's focused on one language (not the
best language, but arguably one that is very popular and well supported for TDD), and presents not only
concepts and principles, but tools and techniques.
THE STRUCTURE OF THIS BOOK
I Background In Part I we examine some topics that relate to the main body of material
in the book (i.e., TDD in Java). We start with an introduction to TDD. This is followed by
chapters on refactoring and programming by intention. These two techniques are also
prominent in XP and are required and enabled by TDD.
II Tools and Techniques In Part II we take an in-depth look at various tools that are
useful for practicing TDD with Java, and how to use them. We start with a tutorial
introduction to JUnit, the de facto standard Java TDD framework. We continue by
exploring some of the standard (i.e., included in the distribution) and nonstandard
extensions to JUnit. Next, we explore some tools that support the use of JUnit and other
tools that are completely independent of JUnit but work well with it. The final chapters in
this part examine specific techniques or issues and the related tools.
III A Java Project: Test-Driven End to End This is a practical hands-on book. To that
end, Part III (which makes up the bulk of the book) is built around the development of a
real system, not a toy example. We work through this project test-first. Along the way
we draw on material from the previous parts of the book.
IV xUnit Family Members JUnit is just one member of a large and growing family of
programmer test frameworks. In Part IV we have a look at some of the other members
of the family. We don't look at all of them, but we go over several for the more popular
languages. So that we get a good comparison, we go through the same set of stories
(i.e., requirements) for each. Specifically, these are the initial stories from the Java
project. This lets us compare the various members with JUnit as well.
D Answers to Exercises Many of the chapters in this book contain exercises for the
reader. This appendix contains all exercises in the book, with answers.
CONVENTIONS USED IN THIS BOOK
Source Code
This book contains a large amount of source code. When one or more complete lines of code are being
presented, they are indented and set in a sans-serif font, like this:
When only part of a line is being presented, it is set in the same font, but kept in the body of the text.
This often includes class names (Movie), methods (equals()), and constants (true, "a string", 42).
In general, when a method is referred to, parameters are not included, but empty parentheses are, so
that it is obvious that it is a method as opposed to some other type of identifier, for example:
aMethod().
In blocks of code, package and import statements are generally left out.
Terms relating to the filesystem are set in a serif, monospaced font. This includes items like filenames
(filter.properties) and commands and their output:
$ java \
>     -classpath bin:/usr/local/java/lib/MockMaker.jar \
>     mockmaker.MockMaker \
>     com.saorsa.tddbook.samples.mockobjects.IntCalculator \
>     > src/com/saorsa/tddbook/samples/mockobjects/MockIntCalculator.java
I've used a couple of different callout mechanisms to highlight information that is important to take note
of, or is interesting but doesn't fit in the body of the text for some reason.
Throughout the book there are small bits of wisdom that you may find
especially useful.
I've used sidebars to separate short passages that are not directly related to the main body of text.
Sidebars get placed as LaTeX sees fit, usually at the top of a page. There's one around here
somewhere as an example.
Terminology
I learned OO in the context of Smalltalk and I've used Smalltalk terminology from the beginning. If
you're not familiar with Smalltalk, I include a few terms that I use, and how they map to Java and
C++:
instance variable A variable whose scope is an object. Each object has a separate copy
of this variable. (Java: field or member variable, C++: data member.)
class variable A variable whose scope is a class. All instances of the class share a single
copy of the variable. (Java: static field, C++: static data member.)
senders Methods that send a specific message, that is, call a specific method (commonly
called references to a method).
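To make the mapping concrete, here is a small illustrative sketch (the class and all names in it are mine, for illustration only, not from the book): a Smalltalk instance variable becomes a Java field, and a class variable becomes a static field.

```java
// Hypothetical example of the terminology mapping.
class Counter {
    private int instanceCount = 0;        // instance variable: each object has its own copy
    private static int classCount = 0;    // class variable: one copy shared by all instances

    public void increment() {
        instanceCount++;   // affects only this object's copy
        classCount++;      // affects the single shared copy
    }

    public int getInstanceCount() { return instanceCount; }
    public static int getClassCount() { return classCount; }
}

class TerminologyDemo {
    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        a.increment();
        b.increment();
        System.out.println(a.getInstanceCount());     // each object saw one increment
        System.out.println(Counter.getClassCount());  // the class saw both
    }
}
```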
ACKNOWLEDGEMENTS
A preface is not complete without acknowledging the other people who make a book possible. Being an
author is like being at the peak of a pyramid... you are supported (and your work made possible)
in various ways by a multitude of other people. This is my chance to acknowledge and thank them...
by name for the ones I'm aware of.
Kent Beck for making TDD and XP household words—at least in my household—and for his support of
this book.
Miroslav Novak for first turning me on to this new way of programming that a bunch of smart people
were talking about on something called a Wiki. Miroslav may be my junior in terms of time spent
programming, but I've learned more from him than I sometimes care to admit.
Patrick Wilson-Welsh for several things: for always reminding me of the big picture when I got mired
down in the details of the moment; for being the best sounding board and copy editor that an author
could ask for; and for having the courage to leave an established life in Washington, D.C. to move to
small-town Canada to become my co-founder and first apprentice.
Dave Thomas of "The Pragmatic Programmers"[URL 55] for letting me use the macros he wrote for the
book "The Pragmatic Programmer"[25]. That book was inspiring in its layout and typesetting as well as
catalytic in bringing about a turning point in my thoughts about programming.
Hand in hand with "The Pragmatic Programmer" went "Software Craftsmanship"[34] by Pete McBreen. I
mean that literally... I read them back-to-back. Pete provides a wonderful introduction to and
discussion of software as a craft. A fabulous book, it was another contributing factor to my
career-shaking epiphany (the third being XP). Thanks, Pete.
Peter Coad, to whom I owe a great debt for taking me under his wing in many ways and helping me to
get this project off the ground. I have to thank him also for letting me charge ahead with a TDD edition
of The Coad Letter[URL 61].
Paul Petralia, my acquisitions editor at Prentice Hall, and the fine crew that works with him. Thanks for
letting us convince you that this book isn't about "Testing," and for believing in it wholeheartedly once
we had accomplished that.
Craig Larman must be mentioned here for his encouragement, support, and advice. I still have great
memories of spending a day with Craig at his home outside Dallas, discussing UML and Together[URL
34] and drinking homemade Chai.
And a big thanks to Ron Jeffries for writing the foreword for me, as well as being generally supportive of
my XP-related endeavors, specifically (well, what comes to mind as I write this) this book, and the TDD
Coad Letter. Also, for doing so much to bring XP so far.
Special thanks and a hearty acknowledgement to members of the TDD Yahoo! group that sent me their
JUnit tips: Darren Hobbs, J. B. Rainsberger, and Derek Weber.
Very special thanks to those that contributed to the book by writing and letting me use material on
subjects that they are the experts in, specifically (in order of appearance):
Bob Payne for stepping in at the last minute with the chapter on PyUnit,
Thanks to all the folks in the XP community who gave me feedback (in no particular order): Kay
Pentecost, Edmund Schweppe, Aldo Bergamini, Mike Clark, Francesco Cirillo, and my friends, colleagues,
and past co-authors: Randy Miller and Miroslav Novak. As with all authors, I'm sure I've missed
someone. Sorry about that.
I need to acknowledge and thank my reviewers as well: Alan Francis and William Wake.
And yes, as Kent Beck says in the preface of his TDD book[9], it is cliché to thank our families, but they
heartily deserve it. To my wife, Kate, for saying "I'll clean up the kitchen. You go write." To my kids,
Tasha and Jason, for being understanding when I had to write, and for thinking that it's so cool to have
a Dad who writes books. Finally, to my youngest child, Leah, who is too young to notice what I'm doing
but simply smiles when she sees me and gives me a hug when I pick her up.
This book was produced using a variety of open source software. All my computers run Red Hat Linux.
The manuscript was prepared using GNU Emacs, and typeset using LaTeX. Image manipulation was
done with Gimp. The xdvi previewer was used extensively. The PDF version was created using dvips and
ps2pdf. Several packages were used with LaTeX, some off the shelf (lgrind, draft-copy, and fixme),
several courtesy of Dave Thomas (for exercises, extended cross reference support, and url references),
and several of my own (chapter heading quotes, story/task/test management, sidebars, and tips).
Part I: BACKGROUND
This part of the book provides an introduction to Test-Driven Development and associated
techniques and issues: refactoring and programming by intention.
Chapter 1. TEST-DRIVEN DEVELOPMENT
From programmers to users, everyone involved in software development agrees: testing is good. So why
are so many systems so badly tested? There are several problems with the traditional approach to
testing:
If testing is not comprehensive enough, errors can make it into production and cause potentially
devastating problems.
Testing is often done after all the code is written, or at least after the programmer who wrote the
code has moved on to other things. When you are no longer living and breathing a particular
program, it takes some time and effort to get back into the code enough to deal with the problems.
Often tests are written by programmers other than those who wrote the code. Since they may
not understand all of the details of the code, it is possible that they may miss important tests.
If the test writers base their tests on documentation or other artifacts other than the code, any
extent to which those artifacts are out of date will cause problems.
If tests are not automated, they most likely will not be performed frequently, regularly, or in
exactly the same way each time.
Finally, it is quite possible with traditional approaches to fix a problem in a way that creates
problems elsewhere. The existing test infrastructure may or may not find these new problems.
Test-Driven Development addresses these problems:
The programmer does the testing, working with tests while the code is freshly in mind. In fact,
the code is based on the tests, which guarantees testability, helps ensure exhaustive test
coverage, and keeps the code and tests in sync. All tests are automated. They are run quite
frequently and identically each time.
Exhaustive test coverage means that if a bug is introduced during debugging, the test scaffolding
finds it immediately and pinpoints its location. And the test-debug cycle is kept quite short: there
are no lengthy delays between the discovery of a bug and its repair.
Finally, when the system is delivered, the exhaustive test scaffolding is delivered with it, making
future changes and extensions to it easier.
So from a pure testing standpoint, before we begin to discuss non-testing benefits, TDD is superior to
traditional testing approaches. You get more thoroughly tested code, period.
But you do indeed get much more than that. You get simpler designs. You get systems that reveal
intent (describe themselves) clearly. The tests themselves help describe the system. You get extremely
low-defect systems that start out robust, are robust at the end, and stay robust all the time. At the end
of every day, the latest build is robust.
These are benefits for all project stakeholders. But perhaps the most immediate and most tangible
benefit is exclusively the programmer's: more fun. TDD gives you, the programmer, small, regular,
frequent doses of positive feedback while you work. You have tangible evidence that you are making
progress, and that your code works.
There is a potential problem with all this, of course. It is more addictive than caffeine. Once you're
hooked, you'll want to program this way, and only this way, from then on. And I for one certainly hope
you do.
This chapter will give you an overview of Test-Driven Development, including a short example of a
programming session.
You have Programmer Tests to test that your classes exhibit the proper behavior. Programmer Tests are
written by the developer who writes the code being tested. They're called Programmer Tests because
although they are similar to unit tests, they are written for a different reason. Unit tests are written to
test that the code you've written works. Programmer Tests are written to define what it means for the
code to work. Finally, they're called Programmer Tests to differentiate them from the tests that the
Customer writes (called, logically enough, Customer Tests) to test that the system behaves as required
from the point of view of a user.
Using Test-Driven Development implies, in theory, that you have an exhaustive test suite. This is
because there is no code unless there is a test that requires it in order to pass. You write the test, then
(and not until then) write the code that is tested by the test. There should be no code in the system
which was not written in response to a test. Hence, the test suite is, by definition, exhaustive.
One of eXtreme Programming's tenets is that a feature does not exist until there is a suite of tests to go
with it. The reason for this is that everything in the system has to be testable as part of the safety net
that gives you confidence and courage. Confidence that all the code tests clean gives you the courage
(not to mention the simple ability) to refactor and integrate. How can you possibly make changes to the
code without some way to confidently tell whether you have broken the previous behavior? How can you
integrate if you don't have a suite of tests that will immediately (or at least in a short time) tell you if
you have inadvertently broken some other part of the code?
Now we're getting eXtreme. What do I mean by write the tests first? I mean that when you have a task
to do (i.e., some bit of functionality to implement) you write code that will test that the functionality
works as required before you implement the functionality itself.
Furthermore, you write a little bit of test, followed by just enough code to make that test pass, then a
bit more test, and a bit more code, test, code, test, code, etc.
To pass this test we need a MovieList class that has a size() method.
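The test listing itself did not survive this extraction. A plausible self-contained sketch of the step being described, with the JUnit plumbing replaced by a plain main method (the class and method names follow the text; everything else is assumed):

```java
// Hypothetical reconstruction: the smallest MovieList that passes a
// "size of an empty list is 0" test.
class MovieList {
    public int size() {
        return 0;  // simplest thing that could possibly work, for now
    }
}

class MovieListSizeTest {
    public static void main(String[] args) {
        MovieList emptyList = new MovieList();
        if (emptyList.size() != 0) {
            throw new AssertionError("Size of empty list should be 0.");
        }
        System.out.println("green");  // all tests pass: green bar
    }
}
```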
When you are working this way, you want to work in small increments... sometimes increments that
seem ridiculously small. When you grasp the significance of this, you will be on your way to mastering
TDD. Later we'll explore the important and sometimes unexpected benefits and side effects of testing
and coding in tiny increments.
LET THE COMPUTER TELL YOU
If you need to add a class or method the compiler will tell you. It provides a better To Do list than you
could, and faster. In the previous example when I compile [1] after writing the test (with nothing else
written) I get the error:
[1] Modern Java programming environments will alert me to these missing items even before I
compile. Furthermore, they will offer solutions and do the work of creating the stubs for me.
This immediately tells me that I need to create a new MovieList class, so I do:
public int size() {
    return 0;
}
Now it will compile. Run the test, and it works. Because Java requires a return statement when a return
type is defined, we need to combine the steps of creating the method and adding the simplest return
statement. I have made a habit of always stubbing methods to return the simplest value possible (i.e.,
0, false, or null).
What!?! Just return 0? That can't be right. Ah... but it is right. It is the simplest thing that could
possibly work to pass the test we just wrote. As we write more tests we will likely need to revisit the
size() method, generalizing and refactoring, but for now return 0 is all that is required.
A QUICK EXAMPLE
Let's take a peek into the development of the project from later in the book. We have a Movie class
which now needs to accept multiple ratings (e.g., 3 as in "3 stars out of 5") and give access to the
average.
As we go through the example, we will be alluding to a metaphor for the TDD flow originated by William
Wake: The TDD Traffic Light[URL 9][URL 61].
We start by writing a test, and we start the test by making an assertion that we want to be true:
Now we need to set the stage for that assertion to be true. To do that we'll add some ratings to the
Movie:
When we compile this, the compiler complains that addRating(int) and getAverageRating() are
undefined. This is our yellow light. Now we make it compile by adding the following code to Movie:
Note that since we are using Java, we must provide a return value for getAverageRating() since
we've said it returns an int.
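The listings for the test and the stubbed methods are not reproduced in this extraction. A self-contained sketch of the state at this point might look like the following, with the JUnit plumbing replaced by a plain main method (the class and method names and the ratings 3 and 5 come from the surrounding text; the rest is assumed):

```java
// Hedged reconstruction of the "compiles but fails" state: the stubs
// return the simplest legal values, so the average comes back as 0.
class Movie {
    public void addRating(int newRating) {
        // stub: does nothing yet
    }

    public int getAverageRating() {
        return 0;  // stub: Java demands a return value for an int method
    }
}

class RedLightExample {
    public static void main(String[] args) {
        Movie movie = new Movie();
        movie.addRating(3);
        movie.addRating(5);
        int average = movie.getAverageRating();
        if (average != 4) {
            // Mimics the JUnit failure message quoted in the text.
            System.out.println(
                "Bad average rating. expected:<4> but was:<" + average + ">");
        }
    }
}
```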
Now it compiles, but the test fails. This is the red light (aka red bar). The term derives from the JUnit
interfaces, which present a progress bar that advances as tests are run. As long as all tests pass, the bar
is green. As soon as a test fails, the bar turns red and remains red. The message we get is:
Bad average rating. expected:<4> but was:<0>
Now we have to make the test pass, so we add code to getAverageRating():
Recompile and rerun the test. Green light! Now we refactor to remove the duplication and other smells
that we introduced when we made the test pass.
You're probably thinking "Duplication... what duplication?" It's not always obvious at first. We'll start by
looking for constants that we used in making the test work. Sure enough, look at
getAverageRating(). It returns a constant. Remember that we set the test up to get the desired
result. How did we do that? In this case we gave the movie two ratings: 3 and 5. The average is
the 4 that we are returning. So, that 4 is duplicated: we provide the information required to compute it,
as well as returning it as a constant. Returning a constant when we can compute its value is a form of
duplication. Let's get rid of it.
Our first step is to rewrite that constant into something related to the provided information:
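Rewritten in terms of the provided information, the method might look like this:

```java
// The constant 4 expressed as the computation it stands for.
public class Movie {
    public void addRating(int newRating) {
    }

    public int getAverageRating() {
        return (3 + 5) / 2; // same value, now related to the two ratings
    }
}
```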
Compile and run the tests. We're OK. We have the courage to continue. The 3 and 5 duplicate the
arguments to addRating(), so let's capture them. Since we add the constants, we can simply
accumulate the arguments. First we add a variable to accumulate them:
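A sketch of this intermediate step (the field name ratingTotal is an assumption):

```java
// Accumulate the ratings as they arrive; the divisor 2 is still a constant.
public class Movie {
    private int ratingTotal = 0; // assumed name

    public void addRating(int newRating) {
        ratingTotal += newRating;
    }

    public int getAverageRating() {
        return ratingTotal / 2; // the leftover constant, dealt with next
    }
}
```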
Compile, test, it works! We're not finished yet, though. While we were refactoring we introduced another
constant: the 2 in getAverageRating(). The duplication here is a little subtler. The 2 is the number
of ratings we added, i.e., the number of times addRating() was called. We need to keep track of that in
order to get rid of the 2.
Like before, start by defining a place for it:
Compile, run the tests, green. Now, increment it every time addRating() is called:
Compile, run the tests, green. OK, finally we replace the constant 2 with numberOfRatings:
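Putting the steps together, the finished code might look like this (field names are assumptions based on the text):

```java
// Final state of the walkthrough: no constants left to duplicate the test data.
public class Movie {
    private int ratingTotal = 0;      // assumed field names
    private int numberOfRatings = 0;

    public void addRating(int newRating) {
        ratingTotal += newRating;
        numberOfRatings++; // how many times addRating() was called
    }

    public int getAverageRating() {
        return ratingTotal / numberOfRatings;
    }
}
```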
Compile, run the tests, green. OK, we're done. If we want to reinforce our confidence in what we did,
we can add more calls to addRating() and check against the appropriate expected average.
I need to underline the fact that I recompiled and ran the tests after each little change above. This
cannot be stressed enough. Running tests after each small change gives us confidence and reassurance.
The result is that we have courage to continue, one little step at a time. If at any point a test fails, we
know exactly what change caused the failure: the last one. We back it out and rerun the tests. The tests
should pass again. Now we can try again... with courage.
The above example shows one school of thought when it comes to cleaning up code. In it we worked to
get rid of the duplication that was embodied in the constant. Another school of thought would leave the
constant 4 in place and write another test that added different ratings, and a different number of them.
This second test would be designed to require a different returned average. This would force us to
refactor and generalize in order to get the test to pass.
Which approach should you take? It really depends on how comfortable you are with what you are
attempting. Remember that you do have the test to safeguard you. As long as the test runs, you know
that you haven't broken anything. In either case you will want to write that second test: either to drive
the generalization, or to verify it.
SUMMARY
We've explored what Test-Driven Development is: tests first.
We've seen how to leverage feedback from the computer to keep track of what we should do next: if we
need to create a class, method, variable, etc., the system will let us know.
We've even seen a quick example of TDD in action, step by step, building some code to maintain the
average rating of a movie.
However, before we can jump in and start practicing it in earnest, we need to make sure we have a few
basic techniques and skills that TDD builds on. The next few chapters will explore these.
Consider XP projects that clearly take a TDD approach. Modeling is definitely an important
part of XP. XP practitioners work with user stories, and user stories are clearly agile models.
XP practitioners also create CRC cards whenever they need to, also agile models. In
eXtreme Programming Explained [8], Kent Beck even includes sketches of class diagrams.
Heresy? No. Just common sense. If creating a model can help our software development
efforts then that's what we do. It's as simple as that.
Creating agile models can help our TDD efforts because they can reveal the need for some
tests. As agile modelers sketch a diagram they will always be thinking, "How can I test
this?" in the back of their minds because they will be following the practice Consider
Testability. This will lead to new test cases. Furthermore, we are likely to find that some of
our project stakeholders or even other developers simply don't think in terms of tests;
instead, they are visual thinkers. There's nothing wrong with this as long as we recognize it
as an issue and act accordingly—use visual modeling techniques with visual thinkers and
test-driven techniques with people who have a testing mindset.
TDD can also improve our agile modeling efforts. Following a test-first approach, agile
developers quickly discover whether their ideas actually work or not—the tests will either
validate their models or not—providing rapid feedback regarding the ideas captured within
the models. This fits in perfectly with AM's practice of Prove it With Code.
Agile Modeling (AM) and Test-Driven Development (TDD) go hand in hand. The most
effective developers have a wide range of techniques in their intellectual toolboxes, and AM
and TDD should be among those techniques.
Chapter 2. REFACTORING
Urge the necessity and state of times, And be not peevish-fond in great designs.
This chapter is here because refactoring is so important to TDD. We'll have a quick look at some of the
ideas and techniques—just enough to get a firm grasp before tackling the project later in the
book. For everything we ever wanted to know about refactoring, check out [16], [29], [41], [URL 4],
and [URL 10].
WHAT IS REFACTORING?
Refactoring is the process of making changes to existing, working code without changing its external
behavior. In other words, changing how it does it, but not what it does. The goal is to improve the
internal structure.
After doing the simplest thing possible to make a test pass (breaking any/all rules in the
process), we refactor to clean up, mostly removing duplication we introduced getting the test to
pass.
If we are practicing TDD, then we have the safety net of tests in place that allows us to refactor
with confidence.
WHEN TO REFACTOR
Generally speaking, we refactor whenever we need to. However, there are three situations when we
must refactor:
1. when there is duplication
2. when we perceive that the code and/or its intent isn't clear
3. when we detect code smells, that is, subtle (or not so subtle) indications that there is a problem.
Duplication
Duplicated code in its various forms is the death of good code. We absolutely must get rid of it.
Duplication is such a problem that several people have warned against it at length, including Ron Jeffries
et al. ("Say Everything Once and Only Once"[27]) and Dave Thomas & Andrew Hunt ("Don't Repeat
Yourself"[25]).
If we see duplicated code, we need to look closely to see what it's doing. It's likely that there is
something useful or important being done. We need to put it in one place. We can do this by extracting
duplicate code into separate methods that can then be called from multiple locations. If the duplication is
in an inheritance hierarchy, we may be able to push the duplicated code up the hierarchy. If the
structure of some code is duplicated but not the details, we can extract the differing parts and make a
template method of the common structure.
Some cases of duplication will be simple, others won't be. Some will be obvious, but some will be very
subtle.
An example of a simple case of duplication is an expression (or sequence of expressions) that appears in
multiple methods. In this case, it is simply a matter of moving the duplicated expression(s) into a
separate method and replacing the original occurrences with calls to the new method.
Here's an example of simple duplication. The majority of these two methods are identical. The only
difference is the first line of saveAs():
We can remove the duplication by chaining to save() after getting the file in saveAs(), like so:
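The listings for this example are not reproduced above; the following is a hypothetical sketch of the refactored shape, with assumed class, field, and method names:

```java
import java.io.File;

// Hypothetical sketch of the save()/saveAs() relationship described above.
// Before refactoring, saveAs() repeated everything save() did after picking
// a new file; after refactoring it just records the file and chains to save().
public class MovieListEditor {
    private File currentFile = new File("movies.dat");
    private String lastAction = "";

    public void saveAs(File newFile) {
        currentFile = newFile;
        save(); // the duplicated body now lives in one place
    }

    public void save() {
        // ... write the list out to currentFile ...
        lastAction = "saved " + currentFile.getName();
    }

    public String getLastAction() {
        return lastAction;
    }
}
```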
Another form of duplication occurs at runtime when the same information is maintained redundantly. In
the following example, we've moved from keeping track of how many times add has been called
(satisfying initial tests that verify the list size) to storing the added movies. To satisfy a test that verifies
that the list contains the movies that were added and not ones that weren't added we needed to add
storage for the added movies. We properly just added the required code to get the new test passing,
without much regard for the code that is there already.
Coding: when you wear this hat you are adding new
functionality to the system. You are not refactoring.
When you are getting a test to pass, you wear your coding hat. Once
the test is passing, you switch hats and refactor to clean up.
Now we need to refactor the old code out since it is now maintaining redundant information. Here's the
code before refactoring:
public class MovieList {
    private Collection movies = new ArrayList();
    private int numberOfMovies = 0;

    public int size() {
        return movies.size();
    }

    public void add(Movie movieToAdd) {
        movies.add(movieToAdd);
        numberOfMovies++;
    }
}
After removing the maintenance of the redundant information, the code looks like this:
public class MovieList {
    private Collection movies = new ArrayList();

    public int size() {
        return movies.size();
    }

    public void add(Movie movieToAdd) {
        movies.add(movieToAdd);
    }
}
Unclear Intent
The code is the most important deliverable. Because of that, it must be as clear and understandable as
possible. It's not always possible to write it this way at the beginning, so we refactor the code to make
it more understandable. One powerful and simple way to clarify intent is to choose better names for
things. We can often pick a name out of the air that is close but not exactly what we want. One trick is
to keep a thesaurus on hand and use it when we are having trouble coming up with a name that
communicates our exact intent.
Using TDD helps make our intent clear when writing code by forcing us to think about the interface of a
class rather than its implementation. We have a chance to decide what makes the most sense from the
point of view of a user of the class, without getting mired in implementation details. Chapter 3 goes into
detail on how to make intent clear when coding.
Code Smells
The concept of code smells [1] is widely used by the eXtreme Programming community to refer to
characteristics of code that indicate less than acceptable quality. In this section we will briefly overview
a few of the more important smells.
[1] The term code smells was first used by Fowler and Beck in [ 16].
When we find smells in our code, we refactor to get rid of them. In fact, one very useful feature of [16]
is a cross reference of smells to indicated refactorings.
One thing to remember is that many of these smells don't always indicate a problem, but they do
indicate something that we should have a closer look at to see if there is a problem.
Comments
Most comments are written either because they are required by the process being used or to
compensate for poorly written code. If we see a comment or feel compelled to write a comment,
consider refactoring or rewriting the code first. See Chapter 3 for details on how to make code more
understandable.
Here's some code that has comments that serve to break the method into separate functional units.
Here, the comments indicate that those functional units should be extracted into separate methods. Look
on page 29 to see this done.
public void init() {
    // set the layout
    getContentPane().setLayout(new FlowLayout());
    // ...
Data Class
This is a class that is essentially an evolved record or struct. It simply contains data and has little
behavior. Don't be fooled by accessor methods that are only there to support the data. A real stink is
the existence of public instance variables (fields/attributes).
If we look around the code, we may find other classes that operate on instances of the data class. We
need to merge the two either by moving data into the other class or moving behavior into the data
class.
public class Point {
    private int x;
    private int y;

    public Point() {
        this(0, 0);
    }

    public Point(int initialX, int initialY) {
        x = initialX;
        y = initialY;
    }

    public int getX() {
        return x;
    }

    public int getY() {
        return y;
    }

    public void setX(int newX) {
        x = newX;
    }

    public void setY(int newY) {
        y = newY;
    }
}
public class Point {
    public int x;
    public int y;

    public Point() {
        this(0, 0);
    }

    public Point(int initialX, int initialY) {
        x = initialX;
        y = initialY;
    }
}
This version should, initially, be refactored to encapsulate the instance variables. That would result in the
first version of Point. Data classes are often manipulated by code that lives in other classes. The next
step in this example would be to look at the code that uses Point to see if there is code that would be
better placed in Point. For example:
public class Shape {
    private Point center;

    public Shape() {
        center = new Point();
    }

    public void translate(int dX, int dY) {
        center.setX(center.getX() + dX);
        center.setY(center.getY() + dY);
    }
}
public class Point {
    // ...
    public void translate(int dX, int dY) {
        x += dX;
        y += dY;
    }
}

public class Shape {
    // ...
    public void translate(int dX, int dY) {
        center.translate(dX, dY);
    }
}
Duplicated Code
This is one of the worst smells. We discussed duplication earlier in this chapter.
Inappropriate Intimacy
This is the case where a class knows too much about another's internal details. To deal with this,
methods should be moved so that the pieces that need to know about each other are together.
For example, consider this method in a class MovieList which writes the list to a Writer:
public void writeTo(Writer destination) throws IOException {
    Iterator movieIterator = movies.iterator();
    while (movieIterator.hasNext()) {
        Movie movieToWrite = (Movie) movieIterator.next();
        destination.write(movieToWrite.getName());
        destination.write('|');
        destination.write(movieToWrite.getCategory().toString());
        destination.write('|');
        try {
            destination.write(Integer.toString(movieToWrite.getRating()));
        } catch (UnratedException ex) {
            destination.write("-1");
        }
        destination.write('\n');
    }
}
This method has and uses full knowledge of the structure of the class Movie. If the structure of Movie
changes (e.g., multiple ratings are supported), then this method, in a different class, must change to
match. In short, this method uses too much knowledge about Movie. The solution is to extract the
portion that deals with writing a single Movie and move it to the Movie class. The final result is:
public class Movie {
    // ...
    public void writeTo(Writer destination) throws IOException {
        destination.write(getName());
        destination.write('|');
        destination.write(getCategory().toString());
        destination.write('|');
        try {
            destination.write(Integer.toString(getRating()));
        } catch (UnratedException ex) {
            destination.write("-1");
        }
        destination.write('\n');
    }
}
This can also be a common problem with inheritance, where subclasses know more than they should
about the implementation details of their ancestors. To deal with this we can decouple the relationship
by replacing the inheritance with delegation or by making the details of the ancestors private.
Large Class
If we find a class that is disproportionately larger than most of the other classes in the system, look
carefully at it. Why is it so large? Does it try to do too much? Does it know too much? Is much of the
behavior conditional? If so we might be able to extract subclasses and use polymorphism to see that the
appropriate code is executed. If there are definite sets of subfunctionality they may be able to be
extracted into classes of their own.
There are two ways to quickly find classes that might be too large:
1. Run a lines-of-code metric and look for inordinately large results. See Figure 2.1.
2. Visualize the code as UML and look for large classes. See Figure 2.2.
Lazy Class
This is the opposite of Large Class. A Lazy Class doesn't pull its weight and should be merged with
another class as appropriate. You can use the same techniques as you do when looking for large classes,
but look for the other extreme, that is, unusually small classes.
Long Method
Just as an overly large class is a problem, so is an overly large method. Extracting functionally cohesive
blocks of code into their own methods reduces the size of the method and makes the code more
understandable. What's long? If you look at line count, then anything more than 10 lines, plus or minus.
Another thing to consider is cognitive length—how many things is the method trying to do? In this case,
long is anything more than one. Consider the following example of a method that isn't really long in
terms of line count (it's borderline), but it's doing four different things. See page 29 to see how to
refactor it.
public void init() {
    getContentPane().setLayout(new FlowLayout());
    movieList = new JList(myEditor.getMovies());
    JScrollPane scroller = new JScrollPane(movieList);
    getContentPane().add(scroller);
    movieField = new JTextField(16);
    getContentPane().add(movieField);
    addButton = new JButton("Add");
    addButton.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            myEditor.add(movieField.getText());
            movieList.setListData(myEditor.getMovies());
        }
    });
    getContentPane().add(addButton);
}
Switch Statements
The free use of switch statements (aka case statements) indicate a lack of deep understanding of object-
oriented principles. Switch statements often result in Shotgun Surgery. We can often use polymorphism
to accomplish the same goal in a much better, cleaner way.
Shotgun Surgery
This isn't so much a smell from the code, but more from how we work with the code. The problem is
evidenced when we need to make a single functional change and find that we need to make changes to
code in several places. An example is having to add a clause in switch statements in several methods.
This is a disaster waiting to happen. The solution is to move things that change together to the same
place. This can often be done by using the Replace Conditional with Polymorphism refactoring (see page
32).
HOW TO REFACTOR
First of all, we really need to have automated tests in place that will give us feedback as to whether
we've broken anything as we are refactoring. The key to refactoring is that we don't want to change
behavior, and having tests that verify that behavior will let us know as soon as it has changed.
Refactoring is done in small steps, running the tests after each one. That way we know as soon as we
have broken something. By taking small steps, we will know exactly what caused the break: the last
step we took. We must back it out and try again.
It really helps if we can use a good refactoring tool. The original refactoring tool was the refactoring
browser for Smalltalk[URL 38]. There are several tools available for Java that provide refactoring
support. Some are plugins for IDEs (such as jFactor[URL 39] and RefactorIt[URL 40]). Some IDEs are
starting to include refactoring support as core features. These include Eclipse[URL 32] (which has
included refactoring support from even early builds), IDEA[URL 33], Together[URL 34], and JBuilder[URL
35].
Refactoring has been done almost exclusively in source code, but it doesn't need to be limited to that.
Working with code using a UML tool (specifically class and sequence diagrams) has advantages in some
cases. Some smells are easy or easier to find when we can see the system graphically. Likewise, some
refactorings can be easily performed on a class diagram. See [5], [11] for more information.
SOME IMPORTANT REFACTORINGS
Extract Class
When a class gets too big, or its behavior is unfocused, we need to split it into pieces that have
cohesive behavior, creating new classes as required. Extract Class deals with extracting one of these
sets of behavior into a new class. Another reason why we might want to use this refactoring is if we will
need multiple implementations of some behavior. In this case, we can extract the variable code into a
separate class. Once there, we can extract an interface and then go about writing the required
implementations.
In the following example, the code to write a list of movies is in the MovieList and Movie classes:
public class Movie {
    // ...
    public void writeTo(Writer destination) throws IOException {
        destination.write(getName());
        destination.write('|');
        destination.write(getCategory().toString());
        destination.write('|');
        try {
            destination.write(Integer.toString(getRating()));
        } catch (UnratedException ex) {
            destination.write("-1");
        }
        destination.write('\n');
    }
}
We'd like to extract the writing functionality to a separate class so that it is easier to replace or
enhance. Rather than have it spread over two classes, we'd like it in one place:
    try {
        destination.write(Integer.toString(aMovie.getRating()));
Now, if we need to change the format of the output, it's all in one place. Also, notice that we've
simplified the public interface in the process. Now only the top level call (i.e., writeMovieList()) is
exposed. The details are now hidden.
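A sketch of what the extracted class might look like. The class name MovieListWriter, the simplified Movie stand-in, and the single-rating format are assumptions for illustration; only writeMovieList() is public, as the text notes.

```java
import java.io.IOException;
import java.io.Writer;
import java.util.Iterator;
import java.util.List;

// Sketch of the extracted writer class: all output formatting in one place.
public class MovieListWriter {
    public void writeMovieList(List movies, Writer destination) throws IOException {
        for (Iterator i = movies.iterator(); i.hasNext();) {
            writeMovie((Movie) i.next(), destination); // details hidden
        }
    }

    private void writeMovie(Movie aMovie, Writer destination) throws IOException {
        destination.write(aMovie.getName());
        destination.write('|');
        destination.write(aMovie.getCategory());
        destination.write('|');
        destination.write(Integer.toString(aMovie.getRating()));
        destination.write('\n');
    }

    // Minimal stand-in for the book's Movie class.
    public static class Movie {
        private final String name;
        private final String category;
        private final int rating;

        public Movie(String name, String category, int rating) {
            this.name = name;
            this.category = category;
            this.rating = rating;
        }

        public String getName() { return name; }
        public String getCategory() { return category; }
        public int getRating() { return rating; }
    }
}
```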
Extract Interface
We might want to extract an interface for several reasons. We may want to abstract away from a
concrete implementation so that we can use a technique like Mock Objects more easily. It is often
advantageous to have interfaces defining the important behavior groups in a system.
Suppose we had a class that implements the management of a list of movies called MovieList.
As we are going about our development we get to a point where we are developing a class that is
responsible for being a bridge between a user interface and the movie list. To better isolate that new
class we may want to mock (see Chapter 7) the movie list. Our existing class (MovieList) is concrete,
but to make a mock we should have an interface (and sometimes we need one, e.g., if we want to use
EasyMock). So, we need to extract an interface from MovieList. It's generally a good idea to keep
interfaces small and focused, so only extract what you need and/or what makes sense.
Here's the class; it's still simple, but we'd like to be able to mock it:
public class MovieList {
    private Collection movies = new ArrayList();

    public int size() {
        return movies.size();
    }

    public void add(Movie movieToAdd) {
        movies.add(movieToAdd);
    }
}
To extract an interface, we create an interface with abstract methods corresponding to those in the
concrete class that are of interest. In this case, we're interested in all the methods in MovieList:
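The extracted interface might look something like this. The naming split (MovieList for the interface, MovieListImpl for the concrete class) is an assumption for illustration, and Movie is stubbed so the sketch compiles.

```java
import java.util.ArrayList;
import java.util.Collection;

// The extracted interface: just the methods clients need.
interface MovieList {
    int size();
    void add(Movie movieToAdd);
}

// Stub so the sketch is self-contained.
class Movie {
}

// The concrete class now declares that it implements the interface.
class MovieListImpl implements MovieList {
    private Collection movies = new ArrayList();

    public int size() {
        return movies.size();
    }

    public void add(Movie movieToAdd) {
        movies.add(movieToAdd);
    }
}
```

With the interface in place, a mock MovieList can be substituted wherever the bridge class expects one.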
Extract Method
When a method gets too long or the logic is too complex to be easily understood, part of it can be
pulled out into a method of its own. Here's another example from the code we will see in Chapter 8.
Here's some code that builds a user interface:
public void init() {
    getContentPane().setLayout(new FlowLayout());
    movieList = new JList(myEditor.getMovies());
    JScrollPane scroller = new JScrollPane(movieList);
    getContentPane().add(scroller);
    movieField = new JTextField(16);
    getContentPane().add(movieField);
    addButton = new JButton("Add");
    addButton.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            myEditor.add(movieField.getText());
            movieList.setListData(myEditor.getMovies());
        }
    });
    getContentPane().add(addButton);
}
This method does several different things. It sets a layout and it creates and adds three components: a
list, a field, and a button. That's rather busy. To make the situation more extreme, I've made sure there
are no blank lines to split up the different bits of functionality. One approach would be to add blank lines
and explanatory comments, like this:
public void init() {
    // set the layout
    getContentPane().setLayout(new FlowLayout());
    // create the list
    movieList = new JList(myEditor.getMovies());
    JScrollPane scroller = new JScrollPane(movieList);
    getContentPane().add(scroller);
    // ...
Is that any better? Maybe a little, but the method is even longer! All we've really done is try to mask
the smell with some deodorant comments. A better solution is to use Extract Method to pull each of
the different bits of functionality out into its own method:
public void init() {
    setLayout();
    initMovieList();
    initMovieField();
    initAddButton();
}
Notice how much clearer this is. Each functional unit is in its own method. The init() method hides
the details, but from the decomposition and naming it is self-evident what is happening. For details
on any one aspect, you can simply look at the appropriate method.
Replace Type Code with Subclasses
We can use this refactoring when we have a class that indicates subtypes using a type-code (e.g., an
employee is either an Engineer or a Salesman). We make a subclass for each alternative. Doing this will
often help break up complex conditionals and switch statements that decide based on the type code.
Here's a simple example:
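The listing for this example is incomplete; a minimal reconstruction of the kind of type-code class being discussed (the constant and field names are assumptions based on the surrounding text):

```java
// Hypothetical reconstruction of a class that encodes its subtypes in a
// type-code field instead of using subclasses.
public class Employee {
    public static final int ENGINEER = 0;
    public static final int SALESMAN = 1;

    private int employeeType;

    public Employee(int type) {
        employeeType = type;
    }

    public int getType() {
        return employeeType;
    }
}
```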
This isn't very object-oriented. Replacing employeeType with subclasses we get the following code,
which is also shown as UML in Figure 2.3:
Replace Conditional with Polymorphism
When we find switch statements, consider creating subclasses to handle the different cases and get rid
of the switch. If there are subclasses already, maybe the conditional behavior can be pushed into them?
We can revisit the previous example for an illustration. Here's more of the original code:
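A sketch of the kind of switch-based method meant here (all names beyond those in the text are assumptions):

```java
// Before refactoring: departmentName() switches on the type code.
public class Employee {
    public static final int ENGINEER = 0;
    public static final int SALESMAN = 1;

    private int employeeType;

    public Employee(int type) {
        employeeType = type;
    }

    public String departmentName() {
        switch (employeeType) {
            case ENGINEER:
                return "Engineering";
            case SALESMAN:
                return "Sales";
            default:
                throw new IllegalStateException("unknown employee type");
        }
    }
}
```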
After extracting subclasses for each type of employee (as in the previous example) we can use
polymorphism to replace the switch statement in departmentName():
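A sketch of the polymorphic replacement: each subclass answers its own department name, so the switch disappears (class names follow the text; the rest is assumed):

```java
// After refactoring: the decision is made once, when the object is created.
public abstract class Employee {
    public abstract String departmentName();
}

class Engineer extends Employee {
    public String departmentName() {
        return "Engineering";
    }
}

class Salesman extends Employee {
    public String departmentName() {
        return "Sales";
    }
}
```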
Form Template Method
We can use this when we have a similar method in multiple classes that have a common structure but
different details. We want to end up with a method having the common structure in the superclass
(which may have to be created) and the extracted detail methods in subclasses. Polymorphism takes
care of calling the proper detail methods.
We'll reuse our employment example again, with a different twist. Here are methods that compute an
XML representation of employees:
public class Engineer extends Employee {
    public String asXML() {
        StringBuffer buf = new StringBuffer();
        buf.append("<employee name=\"");
        buf.append(getName());
        buf.append("\" department=\"Engineering\">");
        // ...
        return buf.toString();
    }
    // ...
}
Notice how all three methods are almost identical. This is a sure indication that refactoring is required.
To refactor to a template method, we first push one of the methods up to the superclass, in this case
that is Employee:
Next, we extract the differences into separate methods. In this case that's the department name. See
the description of Replace Conditional with Polymorphism on page 32 on how this is done. Now, given
that we have departmentName() methods in place, the next step is to call that polymorphic method
from the template method in Employee:
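A sketch of the resulting template method: the common asXML() structure lives in Employee and calls the polymorphic departmentName() for the varying part (getName() and the placeholder subclass details are assumptions):

```java
// The template method: fixed structure in the superclass, details supplied
// polymorphically by subclasses.
public abstract class Employee {
    public abstract String getName();
    public abstract String departmentName();

    public String asXML() {
        StringBuffer buf = new StringBuffer();
        buf.append("<employee name=\"");
        buf.append(getName());
        buf.append("\" department=\"");
        buf.append(departmentName()); // the extracted detail method
        buf.append("\">");
        return buf.toString();
    }
}

class Engineer extends Employee {
    public String getName() {
        return "Dave"; // placeholder
    }

    public String departmentName() {
        return "Engineering";
    }
}
```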
Finally, delete the asXML() methods in each of the Employee subclasses. All the tests should pass as
before.
Introduce Explaining Variable
When we have a complex expression that is difficult to understand, we can extract parts of it and store
the intermediate results in well-named temporary variables. This breaks the expression into easy-to-
understand pieces, as well as making the overall expression clearer.
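The listing the next paragraph refers to is not reproduced; a hypothetical example of the kind of expression in question, an order-total computation with assumed names and rates:

```java
// Hypothetical "before" code: one dense expression, with getSubtotal()
// called three times.
public class Order {
    private double subtotal;

    public Order(double subtotal) {
        this.subtotal = subtotal;
    }

    public double getSubtotal() {
        return subtotal;
    }

    public double getTotal() {
        return getSubtotal() - Math.max(0, getSubtotal() - 500) * 0.05
                + getSubtotal() * 0.07;
    }
}
```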
The above code is quite simple, but it could certainly be clearer. There's also a potential performance
issue since getSubtotal() is called repeatedly. We can refactor this, breaking the complex expression
up into simpler pieces and using well-named variables to better communicate the intent:
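Continuing the hypothetical order-total example (same assumed names and rates), the refactored version might read:

```java
// Hypothetical "after" code: the expression broken into well-named
// temporaries, and getSubtotal() called only once.
public class Order {
    private double subtotal;

    public Order(double subtotal) {
        this.subtotal = subtotal;
    }

    public double getSubtotal() {
        return subtotal;
    }

    public double getTotal() {
        double basePrice = getSubtotal();
        double quantityDiscount = Math.max(0, basePrice - 500) * 0.05;
        double tax = basePrice * 0.07;
        return basePrice - quantityDiscount + tax;
    }
}
```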
Replace Constructor with Factory Method
If we have several constructors that create different flavors of the class, it can be confusing since, in
Java and C++ (as opposed to Smalltalk), all constructors have the same name. Instead of using hard-
coded constructors, we can use static factory methods. This lets us give each one a meaningful name.
public class Rating {
    private int value = 0;
    private String source = null;
    private String review = null;

    public Rating(int aRating) {
        this(aRating, "Anonymous", "");
    }

    public Rating(int aRating, String aRatingSource) {
        this(aRating, aRatingSource, "");
    }

    public Rating(int aRating, String aRatingSource, String aReview) {
        value = aRating;
        source = aRatingSource;
        review = aReview;
    }
    // ...
}
This isn't too bad. We're chaining constructors to provide default arguments to the next more specific
one. In this way the functionality of the constructor is implemented only in the most specific one.
However, we can communicate the intent more clearly by refactoring to convert these to factory
methods:
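A sketch of the factory-method version. The factory names and accessors are assumptions; the point is that the constructor becomes private, so instances can only be created through meaningfully named factories.

```java
// Rating rewritten with named factory methods in place of overloaded
// constructors. Names are illustrative assumptions.
public class Rating {
    private int value = 0;
    private String source = null;
    private String review = null;

    private Rating(int aRating, String aRatingSource, String aReview) {
        value = aRating;
        source = aRatingSource;
        review = aReview;
    }

    public static Rating newAnonymousRating(int aRating) {
        return new Rating(aRating, "Anonymous", "");
    }

    public static Rating newSourcedRating(int aRating, String aRatingSource) {
        return new Rating(aRating, aRatingSource, "");
    }

    public static Rating newReviewedRating(int aRating, String aRatingSource, String aReview) {
        return new Rating(aRating, aRatingSource, aReview);
    }

    public int getValue() {
        return value;
    }

    public String getSource() {
        return source;
    }
}
```

A call site would change accordingly, e.g. from new Rating(5) to Rating.newAnonymousRating(5).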
Part of refactoring this is changing all references to use the factory methods. When the tests all pass again
we can delete the unnecessary constructors and mark the ones that remain as private, converting
constructor calls into calls to the corresponding factory methods.
Making the constructors private also gives us more control over where and how instances are created.
Replace Inheritance with Delegation
Inheritance should only be used when the subclasses are special kinds of the superclass, or they extend
the superclass and not just override parts of it[12]. If inheritance is being used just to reuse some of the
capabilities of the superclass (e.g., subclassing Vector to be able to store objects sequentially), it
should be replaced with delegation by making the Vector an instance variable and using it to store the
objects.
For example, consider this code for a class representing a company department that contains a
collection of employees:
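The listing for this example is not shown; a hypothetical sketch of the inheritance-for-reuse mistake being described (details assumed):

```java
import java.util.Vector;

// Department subclasses Vector just to inherit collection behavior.
// Anything at all can be add()ed to it, which is exactly the problem
// discussed below.
public class Department extends Vector {
    private String name;

    public Department() {
        this("unnamed");
    }

    public Department(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
```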
The idea here is to get for free the collection maintenance support from Vector. Big mistake. There is
no type-safety at all. Anything can be added to the department, and anything coming out of it has to be
cast to Employee. Consider the following code:
public class Company {
    private Map departments = new HashMap();

    public Company() {
        departments.put("Engineering", new Department());
        departments.put("Sales", new Department());
    }

    public int rollCall() {
        int absentees = 0;
        Collection allDepartments = departments.values();
        Iterator deptIterator = allDepartments.iterator();
        while (deptIterator.hasNext()) {
            Department aDepartment = (Department) deptIterator.next();
            Department.DepartmentIterator employees =
                aDepartment.iterator();
            // ...
        }
        return absentees;
    }
}
Not a very good design. Any code can get a department object and add anything it wants to it. By
extending Vector to reuse the collection implementation we've totally discarded any control over what
goes into Department. Talk about equal-opportunity hiring!
What can we do? Well, the first step would be to refactor out the collection management and provide a
type-safe iterator:
public void hire(Employee newHire) {
    employees.add(newHire);
}

    public boolean hasNext() {
        return underlying.hasNext();
    }

    public Employee next() {
        return (Employee) underlying.next();
    }
}
}
Now the only way to add to the collection is to use the hire() method. This provides type-safety.
Since we now have that, we can create the custom iterator to encapsulate the cast.
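Putting the pieces together, a delegation-based Department along the lines described might look like the following (the Employee class and some method names are assumptions filled in around the fragments above; on modern Java, generics would remove the cast entirely):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Replace Inheritance with Delegation: the collection is an instance
// variable rather than a superclass, hire() is the only way in, and a
// small custom iterator hides the Employee cast from callers.
public class Department {
    public static class Employee {                 // minimal stand-in
        private final String name;
        public Employee(String name) { this.name = name; }
        public String getName() { return name; }
    }

    private final List employees = new ArrayList(); // delegate, not superclass

    public void hire(Employee newHire) {
        employees.add(newHire);                     // only Employees get in
    }

    public int size() {
        return employees.size();
    }

    // Type-safe iterator wrapping the raw underlying one
    public class DepartmentIterator {
        private final Iterator underlying = employees.iterator();
        public boolean hasNext() { return underlying.hasNext(); }
        public Employee next() { return (Employee) underlying.next(); }
    }

    public DepartmentIterator iterator() {
        return new DepartmentIterator();
    }
}
```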
Replace Magic Number with Symbolic Constant
Having hard-coded literal values embedded in code is a very bad thing. They are harder to see,
changing them is shotgun surgery, and they are blatant duplication. We can use a well-named symbolic
constant instead. Then when the value has to change, the change is in only one place. This refactoring is
really more general than its name suggests, and applies to any literal values, such as strings.
    control.verify();
}
The smell here is the duplication of the literal string "LostInSpace". We start by introducing a
constant to hold the value:
Then we replace every occurrence of the literal string with LOST_IN_SPACE. The resulting code is:
    control.verify();
}
Now all uses of the string are synchronized, so we no longer need to worry about references being
the same.
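A minimal sketch of the shape this refactoring takes (the surrounding class and the exact literal are illustrative; the book's actual listing is elided from this excerpt):

```java
// Replace Magic Number with Symbolic Constant: the literal lives in
// exactly one place, behind a name that says what it means.
public class Titles {
    public static final String LOST_IN_SPACE = "LostInSpace";

    public static String describe() {
        // Every use refers to the constant, never to a raw literal.
        return "Now showing: " + LOST_IN_SPACE;
    }
}
```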
Replace Nested Conditional with Guard Clauses
I know I said these were my ten favorite refactorings, but I thought I'd throw in a bonus.
Many people have been taught that a method should have only a single exit point (i.e., return
statement). There is no valid reason for this, certainly not at the expense of clarity. In a method that
should exit under multiple conditions, the rule leads to complex, nested conditional statements. A better, and
much clearer, alternative is to use guard clauses to return under those conditions.
public int fib(int i) {
    int result;
    if (i == 0) {
        result = 0;
    } else if (i <= 2) {
        result = 1;
    } else {
        result = fib(i - 1) + fib(i - 2);
    }
    return result;
}
It's correct, but very ugly and crowded. And the conditional isn't even deeply nested. Now let's refactor
it to use guard clauses instead of the conditional:
public int fib(int i) {
    if (i == 0) return 0;
    if (i <= 2) return 1;
    return fib(i - 1) + fib(i - 2);
}
This is one case where I will drop the braces surrounding the body of
an if: guard clauses that perform one very simple action, typically a
return. The intent is revealed much more clearly by using this format
rather than the more canonical:
public int fib(int i) {
    if (i == 0) {
        return 0;
    }
    if (i <= 2) {
        return 1;
    }
    return fib(i - 1) + fib(i - 2);
}
Using guard clauses results in code that is much cleaner, shorter, and clearer.
REFACTORING TO PATTERNS
Design patterns are distillations of proven design ideas. To be good programmers we should know as
many patterns as possible, and be constantly learning new ones. More than that, we need to know when
to use them and, just as important, when not to.
If we don't know design patterns and/or don't use them, we are at risk of under-engineering. We won't
see similarities as easily, and will find ourselves solving the same problems again and again. Knowing
patterns helps us recognize recurring problems and gives us approaches to solving them.
The danger of design patterns is getting caught up in them. Many programmers overdo it when they
discover design patterns. They begin seeing Composites around every requirement. Worse yet, they
design by using patterns at the outset.
So, how should we be using patterns? The answer (if you haven't guessed yet, then you should reread
this chapter before continuing) is as targets for refactoring. The thing to remember is that we shouldn't
drop patterns into the design in final form, rather we should gradually evolve into using them via
refactoring. . . but only when we need to.
SUMMARY
Now we know a bit about refactoring: what it is, some of the specific techniques, and some of the
indicators that tell us that refactoring may be required.
It is very important to engrain the habit of refactoring and to become proficient at refactoring. Both are
accomplished through practice. We can start by pulling out some recent code and sniffing through it for
code smells with a newly critical nose. Then we can apply specific refactorings as appropriate.
The problem with refactoring working code is that our changes might break it. That's where TDD comes
in. If we are practicing TDD, all of the code is checked by our tests to ensure its proper behavior. We
must always refactor as we do any TDD coding: in tiny increments. After making each component
change in a refactoring, we run the tests again to catch any small bugs that might have been
introduced. The level of confidence this gives us must be experienced to be understood.
Pretty soon we will have the courage to continually whittle down our code to the leanest, sparest
possible implementation. As we become proficient at refactoring through TDD, we'll find that we develop
a more discriminating eye (and nose!) for coding excellence, and a deeper level of software craft.
Stay alert for opportunities to consolidate duplicate fixture code within test methods
into a single setUp() method.
Stay alert for fixtures that are not used uniformly by all the test methods in the
TestCase. Look for opportunities to split TestCases to keep them focused on
specific fixtures.
Stay alert for ways to collect TestCases into TestSuites. Make sure that
TestCases are not themselves serving as TestSuites (organized around structure
or functionality instead of common fixture requirements).
Look for ways to keep test methods independent of each other. Look for
opportunities to mock state-sensitive resources that bind your test methods
together.
For more on test code smells and test code refactoring, see [39], [40].
...
This chapter discusses programming by intention, a central idea in XP. It means making your intent clear
when you write code.
Have you ever had to work on a piece of code and found that it was hard to understand? Maybe the
logic was convoluted, the identifiers meaningless, or worse, misleading. You say there was
documentation? Was it readable? Did it make sense? Was it up to date with the code? Are you sure?
How do you know?
The main idea of programming by intention is communication, specifically, the communication of our
intent to whoever may be reading the code. The goal is to enable them to understand what we had in
mind when we wrote the code.
Let's look at what we need to do in order to have our code be as understandable and intent-revealing
as possible.
NAMES
By names I mean the identifiers we choose to name the various classes, variables, methods, etc., that
we create as we work. We need to choose names that are semantically transparent, that is, they say
what they mean and mean what they say.
When I'm talking about choosing names, I like to pull out the quote from Romeo and Juliet that appears
at the beginning of this chapter. We can use whatever word/name we like to refer to a thing, but if it
does not convey the meaning that we intend then it is an enemy of clarity and may serve to confuse
people who read or work on this code in the future. The lesson here is to use names that make sense. .
. call it what it is.
My oldest daughter used to have a cute, although somewhat annoying, habit of coming up with her own
names for songs and stories that she liked. She would focus on one line from the whole song/story and
derive a name from that. Many nights at bedtime she would ask frantically for a new favorite, while we
went through a game of Twenty Questions to try and deduce what she was talking about. This is a
danger of using non-obvious names for things. Sure, it may make sense to us, but what about everyone
else?
There are several patterns that we can use when choosing names; we discuss them in detail next.
Use nouns or noun phrases for class names. Name classes for what they represent or
what they do:
Use either adjectives or generic nouns and noun phrases for interfaces. Interfaces
are a bit different. If an adjective is used for an interface name it usually ends with -
able, for example, Runnable, Serializable. My advice is to avoid conventions that
prepend or append "I" to the name when possible. Sometimes there isn't a good name
for an interface other than "interface to something." In that case, ISomething is
acceptable.
Use accepted conventions for accessors and mutators. Many languages have
generally accepted conventions for how to retrieve and modify instance variables. For
instance, if you are working in Java, you are advised to use the JavaBeans conventions
of getX and setX to retrieve/modify a variable named x.
To access a boolean variable, use isX in both Java and Smalltalk. An alternative in some
cases is to use the form isX() where X refers to an optional part of the object; for
example, either of the following could be used:
Note that this applies to computed properties as well as actual instance variables, for
example, when a circular queue class needs a method to query whether it is full, we
suggest isFull.
Sometimes it is clearer to drop the get prefix. For example, I tend to use size rather
than getSize to fetch the size of a composite object. This is often the case when the
value you are asking for is a property of the object as opposed to one of its attributes.
public int size() {
    return movies.size();
}
In this case, we are also conforming to the convention used in the Java class libraries. It
is a good idea to conform to the naming idioms of the environment you are working in
because that is what others who will be reading your code will be accustomed to.
Don't put redundant information in method names. If I needed a method to add an
instance of X, I would tend to call the method add rather than addX, since the parameter
will be an X. This has the benefit that if a later refactoring changes the type of the
argument (even a simple renaming of the class), the method name doesn't fall out of sync
with its argument. With the type in the name, the innocent redundancy would become
misleading and confusing. Another benefit is that, if you later need to support adding
different types, you gain clarity through the ability to overload the method name.
There are always exceptions. Sometimes it is just clearer, and reads better, if you have
type information in the method name.
Use nouns and noun phrases for variable names. Variables hold things, so nouns
make sense, for example, rating, movies, connectionToReviewServer.
// ...
}
Choose names carefully, but don't spend too much time at it. Remember, they are easy to change. If
later we decide that a different name would communicate the intent better, it can be changed. That's
one of the most basic refactorings. Using a tool that has automated refactoring support helps.
SIMPLICITY
When we are explaining something unfamiliar to someone we speak as simply as we can. Likewise,
when we write code we should write it as simply as we can. Keep in mind that someone will be reading
that code later, trying to understand what it does and how it does it. We should strive to keep our code
as clear and intent-revealing as possible.
So how do we keep our code clear? One way is to use simple algorithms as much as possible. Always
assume that the simplest way of doing something is the best. If it proves not to be, we can always
change it. One of the phrases heard a lot in the eXtreme Programming world is "What is The Simplest
Thing That Could Possibly Work?"
"What," you may ask, "does that mean?" Well, consider this simple test:
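The test itself is elided from this excerpt. Based on the MovieList example used throughout the book, it presumably resembles the sketch below; the version here bundles a stand-in MovieList so it runs on its own, where a real JUnit test method would simply call assertEquals:

```java
// Hypothetical reconstruction of the elided test. In JUnit 3.8 form it
// would be roughly:
//     public void testEmptyListSize() {
//         MovieList emptyList = new MovieList();
//         assertEquals("Size of empty list should be 0", 0, emptyList.size());
//     }
public class EmptyListSketch {
    static class MovieList {             // stand-in for the class under test
        public int size() { return 0; }  // the simplest thing that could possibly work
    }

    public static boolean testEmptyListSize() {
        MovieList emptyList = new MovieList();
        return emptyList.size() == 0;
    }

    public static void main(String[] args) {
        if (!testEmptyListSize()) {
            throw new AssertionError("Size of empty list should be 0");
        }
        System.out.println("test passed");
    }
}
```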
In order to get this test to pass once it is compiling, we do the simplest thing that could possibly work.
Specifically:
public int size() {
    return 0;
}
When we do this we know that the size() method is too simple, but at this point we don't know what
it should be, and returning zero is the simplest thing that makes the test pass. We will have to
generalize this method later, but not until we write tests for non-empty list behavior.
Another way to keep code simple and intent-revealing is to change unclear code so that it is clear. This
is one way that refactoring is used. Chapter 2 has a more in-depth discussion of this incredibly
important topic.
There are many refactorings that deal with increasing the intent-revealing quality of a piece of code.
While it could be argued that most of Fowler's refactorings make code clearer, we only consider a
handful here. See [15] and [16] for more information on these and the remainder. The simplest of these
include renaming classes, methods, variables, etc. Some of the more complex intent-revealing
refactorings are explained next.
Extract Class If we have a class that is doing too much or that has multiple
responsibilities, we extract related responsibilities into separate classes.
Replace Temp with Query Rather than computing a value and storing it in a temporary
variable, we extract the computation into a method that returns the result, and call that
instead.
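A before-and-after sketch of Replace Temp with Query (the Order class and the prices are invented for illustration):

```java
// Replace Temp with Query: the temporary variable becomes a method
// that computes the value on demand, so it is usable (and readable)
// from anywhere in the class.
public class Order {
    private final double quantity;
    private final double itemPrice;

    public Order(double quantity, double itemPrice) {
        this.quantity = quantity;
        this.itemPrice = itemPrice;
    }

    // Before:
    //     double basePrice = quantity * itemPrice;
    //     if (basePrice > 1000) return basePrice * 0.95;
    //     else return basePrice * 0.98;
    // After: the temp is now a query method.
    private double basePrice() {
        return quantity * itemPrice;
    }

    public double discountedPrice() {
        return basePrice() > 1000 ? basePrice() * 0.95
                                  : basePrice() * 0.98;
    }
}
```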
WARRANTED ASSUMPTIONS
The test in the section titled Programming by Intention was the first test written in the project it came
from. It was written as shown and no other code had been written yet. Why is this significant? Well, if
we look at the test code we will see that it makes some assumptions:
3. MovieList has a method named size which takes no parameters and returns an int.
Making these assumptions as we write tests (and later when we write the real code) gives us the ability
to design interfaces from the point of view of code that will use them. This allows us to choose names
that make sense and that read well, that is, are understandable and clear. It also allows us to decide
what behavior is required from a much better point of view: again, that of the client. This limits the
behavior to exactly what is required, which lets us write less code, which lets us work faster, which lets
us deliver sooner and more often.
How does this let us write less code? Primarily by deferring the addition of any code until it is proven to
be required. This not only includes implementation detail, but also the existence of classes and methods.
You do not create a class until you have a test that creates an instance of it. Likewise, you do not add
a method until you have a test that calls it.
When you are writing code (especially tests), write what you want, without worrying about how to do it.
Make up the support classes, methods, and variables you want and worry about their implementation
later. This keeps you focused on what you have in mind at the time. The compiler will remind you about
the assumptions you made.
Common Vocabulary
Everyone involved with the project should use a common vocabulary to talk about the domain and the
system. This can be based on the reality of the situation or it can be metaphorical if that eases
understanding.
The original XP project at Chrysler is a prime example. The people at Chrysler understood
factory and assembly line terminology. The programmers used that as a metaphor for the
payroll system they were developing. An employee's paycheck was the product,
constructed from dollar parts. It passed through various workstations, each of which
performed some operation such as converting hour parts to dollar parts, taking a
deduction, adding a bonus, etc.
Choosing names If we choose names from the common vocabulary, everyone knows
what we mean. Having a common vocabulary means that everyone is using the same
terms to mean the same thing. A great deal of confusion and miscommunication can be
avoided, and a great deal of time and money can be saved. Not only should the
terminology be used by everyone involved, it should also make sense in a fairly obvious
way.
Test First
One of the biggest effects of working test first is that we have a large suite of programmer tests that
make it possible for us to refactor mercilessly. Also, by writing the tests first, we have time to think
about what we need to do before we have to think about how to do it.
Another benefit to writing the test first is that when the test passes, we're done. This has at least two
advantages:
1. it helps us work faster, because when the test passes we're done and can move on to
something else, and
2. because we stop when we have the test passing, it's easier to avoid overengineering a more
complex solution than is required. This improves the overall quality of our code, as well as
making it smaller and more understandable.
Make Assumptions
This is closely related to writing tests first, but isn't limited to writing tests. You can make assumptions
during implementations as well. By making assumptions as we work, we first decide what we need and
what it should be called, and then create it. This lets us think about what we want to accomplish before
worrying about the technical details of how to accomplish it. It also allows us to focus on the task at
hand. If we made some assumptions that turn out to be invalid (i.e., what we assumed existed isn't
there yet), we will find out when we try to compile. At that point we can deal with it; we have finished
what we were doing when we had to make the assumption. The alternative is to deal with our
assumptions as we make them. If we do that we risk losing track of what we were originally doing.
Keep a notepad or stack of blank cards next to the workstation for jotting down notes and To Do items.
Refactor
By refactoring as we see the need for it, we can change what we have already done to make our
intention clearer. Refactoring was discussed earlier in this chapter as well as in Chapter 2.
This technique helps us to defer work until it is required, by allowing us not to worry about keeping
track of the assumptions we make. When we make an assumption which turns out to be false (i.e., the
class, method, etc., that we are using in our code does not yet exist) the computer will inform us. In
some cases (e.g., Java and C++) the compiler will report that something is missing and we will have to
correct the issue (by stubbing the missing classes or methods). In other cases (such as Smalltalk) we
will receive notification at runtime that a class, method, etc., is missing.
For example, consider the following test which was the first written for a project:
If this is the first code written in a project, compiling will produce an error message about the missing pieces. We respond by stubbing out just enough to compile:
public int size() {
    return 0;
}
By leveraging the compiler like this, you can stop worrying about the assumptions you made that have
to be dealt with. . . let the compiler tell you. This is also a way to avoid doing more than you absolutely
need to.
I've said it before, but it's worth repeating: Strive for simplicity. If code starts out simple it's easier to
keep it simple than to make complex code into simple code. You may not always do the simplest thing,
but your code will be simpler and clearer if you take the time to figure out what the simplest thing
would be. An earlier section discussed simplicity in more detail.
There is a difference of opinion on this point. Some advise always doing the simplest thing that could
possibly work. I agree that forcing yourself to do the simplest thing is preferred when you are just
starting out. Get into the habit of being simple. Once you have that, actually doing the simplest thing is
not as important as long as you are aware of what it is.
"NO COMMENT"
There are valid reasons to write comments. We'll talk about these later. However, as we discussed in
Chapter 2, most comments are not written for valid reasons. Fowler's Refactoring[16] calls comments
"deodorant". . . they are there to try to hide the bad smell of the code. The code is unclear, the code is
poorly written, names are badly chosen, the logic is obtuse, etc. Comments were added to try to explain
what the code does. The code should have been, and should be, refactored to make the comments
unnecessary.
Don't get me wrong. I'm not saying "Don't write documentation." Nobody really doing XP will say that.
Sometimes it is important to the customer to have specific documentation written. Also, I'm not saying
"Never write comments." What I am saying is "Never write unnecessary comments." Most comments are
unnecessary if the code is written so that the intent is clear. If and when we do write comments, we need
to make sure they communicate why and not how.
Valid Comments
As I mentioned above, there are several valid reasons for us to write comments.
Incomplete code This type of comment serves as a note to what we were in the midst of
working on, or how we see the code evolving. There generally is not much need for this
type of comment, since tasks should be no larger than what can be accomplished in a
single day. If you find that you do need to make such a note, choose a standard tag for it
so you can quickly search for loose ends. And stay away from generic not done yet
comments. I'd suggest something like:
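The suggested wording is elided from this excerpt; a tagged, searchable note in that spirit (the tag itself is an assumption) might read:

```java
// TODO: ratings are wired up; reviews remain to be implemented
```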
A valid use for this type of comment might be to note code that could benefit from being
refactored. Maybe we saw the need to refactor but didn't have time to do it. Make a note
so that someone will spot it and do the refactoring when there is time. It might be
prudent to use standard wording for these "refactoring To Do" comments so that a global
search can be performed as a rough code-debt metric. I suggest something similar to the
following:
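Again the suggested wording is elided here; a searchable refactoring IOU (the REFACTOR tag is an assumption) could look like:

```java
// REFACTOR: collapse the three near-identical lookups into a single query method
```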
Refactoring doesn't make it clear enough This isn't really a valid comment, rather it is
more like the previous type. . . it's an IOU. If refactoring doesn't clean up the code, either
someone else should try their hand at it, or (more likely) the code in question should be
scrapped and rewritten.
// A circular queue is used here for performance reasons: to avoid
// having to move elements around.
Class comment This is the one case that is often useful, in spite of the general
disapproval of comments. Not much is required. A simple note at the beginning of the class
briefly explaining why the class exists and what it is used for will suffice. Avoid writing a
how to use this class tutorial. That's one of the things the tests are for.
/**
 * This class represents a single movie title. It is responsible for
 * maintaining its own ratings, reviews, etc.
 */
A final note related to comments. Programmers often sign their work by including a comment in the file
header noting who wrote it. That's fine; credit where credit is due and all that, except:
1. If we are practicing XP, specifically collective code ownership, everyone will likely work on that
code. It is owned by the team.
2. A record of who worked on each file will be maintained by the source code control system (SCCS).
It is redundant to include the information in the file itself as well. Another comment on this:
Please don't include an expandable change log in the file (e.g., $Log$ in CVS). This clutters the
code, makes the files larger, and duplicates information that is easy to extract from the SCCS.
SUMMARY
In this chapter we've seen how even seemingly minor details, like picking a name for something, can
have a great impact on the code quality. We've discussed several techniques for choosing names,
achieving simplicity, staying focused, and making your code intent-revealing. The benefit to practicing
these techniques is code that is simple, understandable, and maintainable.
Sooner or later, someone will read our code. We will ourselves reread our code, sometimes after a long
absence. Any readers, including us, will be hindered or thwarted by code that is awkward, complex,
obscure, or in the worst case, misleading.
Well-crafted, simple, clear, intent-revealing code will speed us in our work instead. Such code also helps
teach readers how to program well. It teaches good solutions to problems, and teaches how to express
those solutions succinctly.
Learning to code with this clear intent is well worth any programmer's effort.
Chapter 4. JUNIT
I do owe them still
In this chapter we'll be looking at and learning a bit about the de facto tool for Test-Driven Development
using Java: the JUnit programmer test framework. Specifically, we'll be looking at the latest released
version as of this writing: JUnit 3.8.
The esteem in which developers hold JUnit is evidenced by the fact that it has repeatedly won honors at
the JavaWorld Editors' Choice Awards in the Best Java Performance Monitoring/Testing Tool category.
ARCHITECTURAL OVERVIEW
First let's look at how JUnit is structured. Figure 4.1 shows a high-level UML class diagram of the core of
JUnit: the junit.framework package.
Test This is the interface that all types of test classes must implement. Currently in the
framework there are only two such classes: TestCase and TestSuite.
TestCase This class is the main class you will be extending when you write your tests. It
is the simplest type of Test. A concrete TestCase (i.e., a class that extends
TestCase) has methods that implement individual tests as well as optional setUp and
tearDown methods (see the section later on "Writing a TestCase" for details).
Assert This is the superclass of TestCase that provides all of the assert methods that
are available to you when you write your tests.
TestFailure This class simply encapsulates an error or failure that occurs during a test
run. It keeps track of the Test that failed and the exception that was responsible for the
error or failure (in the case of a failure this will be an AssertionFailedError).
TestResult This class accumulates the results of running the tests. It is also responsible
for informing any interested parties of the start and end of a test, as well as failures and
errors. It is this facility that enables the Graphical User Interface's (GUI) progress bar to
move for each test, and the results to be displayed.
TestListener This is an interface that is implemented by any class that wishes to track
the progress of a test run. Methods are declared for notification of the start and end of
each test, as well as the occurrence of errors and failures.
THE ASSERTIONS
When you are writing test methods you make heavy use of the capabilities you inherit (by way of
TestCase) from Assert. For all Assert methods there are two versions: one with an initial String
parameter that contains a message to be displayed in the case of failure, and one without.
fail
The simplest method (really we'll be talking about pairs of methods, but from here on I'll just consider
them single methods) is fail().
void fail()
void fail(String message)
Calling fail() causes an immediate test failure. This is useful in cases where the test can get to a
point that indicates something went wrong. An example is when you expect a method call to throw a
specific exception. You would place a call to fail() immediately after the method call inside the try
block. The following example demonstrates this:
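The example itself is elided from this excerpt; here is a self-contained sketch of the pattern (the MovieList API is hypothetical, and a one-line stand-in for fail() is included so the sketch runs without JUnit on the classpath; in a real TestCase, fail() is inherited from Assert):

```java
import java.util.ArrayList;
import java.util.List;

// The fail()-after-the-call pattern: if the expected exception is NOT
// thrown, execution reaches fail() and the test fails immediately.
public class FailSketch {
    static void fail(String message) {          // stand-in for Assert.fail()
        throw new AssertionError(message);
    }

    static class MovieList {                    // hypothetical class under test
        private final List movies = new ArrayList();
        public void remove(Object movie) {
            if (!movies.remove(movie)) {
                throw new IllegalArgumentException("movie not in list");
            }
        }
    }

    public static boolean removingFromEmptyListThrows() {
        MovieList emptyList = new MovieList();
        try {
            emptyList.remove("Star Wars");
            fail("remove() on an empty list should have thrown");
            return false;                        // unreachable
        } catch (IllegalArgumentException expected) {
            return true;                         // the exception we wanted
        }
    }

    public static void main(String[] args) {
        System.out.println(removingFromEmptyListThrows() ? "pass" : "fail");
    }
}
```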
If condition is false, the test fails. The assertFalse() method does the opposite, failing if the
value of the condition is true. This is provided as a cleaner alternative to using assertTrue() with a
negated condition.
Here's an example:
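The example is likewise elided from this excerpt; a sketch in the spirit of the book's MovieList example follows (with two-line stand-ins for the inherited assert methods so it runs on its own):

```java
import java.util.ArrayList;
import java.util.List;

// assertTrue/assertFalse as a matched pair: assertFalse is cleaner
// than assertTrue with a negated condition.
public class BooleanAssertSketch {
    static void assertTrue(String message, boolean condition) {  // stand-in
        if (!condition) throw new AssertionError(message);
    }

    static void assertFalse(String message, boolean condition) { // stand-in
        assertTrue(message, !condition);
    }

    public static void main(String[] args) {
        List movies = new ArrayList();
        assertTrue("new list should be empty", movies.isEmpty());
        movies.add("Star Wars");
        assertFalse("list with one movie should not be empty", movies.isEmpty());
        System.out.println("ok");
    }
}
```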
Next are two method pairs for checking for null or not null:
Each of these takes an Object and checks whether it is or isn't null, then fails if it isn't as expected.
These methods succeed if the two supplied arguments are the same object (for assertSame()) or
different objects (for assertNotSame()).
This is really a family of methods, one for each Java type. Each method tests for equality using the
appropriate mechanism (i.e., == for primitives and the equals() method for objects). For this reason
I've omitted types for expected and actual.
There is a set of methods (with and without an initial message parameter) for the following types:
boolean, byte, char, double, float, int, long, Object, and short.
The version for Object makes use of the equals method of the expected argument.
The versions for double and float have an additional parameter (of type double or float,
respectively) that specifies the maximum difference allowed between the expected and actual values.
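A sketch of the tolerance semantics (the two-line stand-in below mirrors the documented behavior: fail only when the difference between expected and actual exceeds the delta):

```java
// Floating-point assertEquals: exact comparison of doubles is fragile,
// so the comparison allows a caller-specified maximum difference.
public class DeltaSketch {
    static boolean withinDelta(double expected, double actual, double delta) {
        return Math.abs(expected - actual) <= delta;
    }

    static void assertEquals(double expected, double actual, double delta) {
        if (!withinDelta(expected, actual, delta)) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        // 0.1 + 0.2 is not exactly 0.3 in binary floating point,
        // but it is well within a tolerance of 1e-9.
        assertEquals(0.3, 0.1 + 0.2, 1e-9);
        System.out.println("ok");
    }
}
```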
WRITING A TESTCASE
TestCase is the class most often used in JUnit. You begin by creating a subclass of TestCase to which
you will add related test methods. The examples here will be taken from the project that forms the core
of the book: keeping track of a list of movies you would like to see, with ratings, recommendations, and
so forth. We start with the idea of supporting a list of movies. Here is what the skeletal test class looks
like:[1]
[1] Note that prior to JUnit 3.8 a constructor was required which took a String parameter, which
was then passed to the superclass' constructor. As of version 3.8 this is no longer required.
Now that we have a skeleton set up we can start writing tests. We do this by writing methods that
follow the basic pattern:
set up preconditions
check postconditions
1. write the assertions that test what you want as a result (one small assertion at a time)
3. set up any preconditions (this may be done implicitly by setUp(); more on this later).
Test methods should be short and to the point. Each method should test a specific bit of functionality. In
keeping with the TDD mantra of test a little, code a little, don't write all the tests then go about writing
code. Write one test method... now make it run... write another test... make it run... and so on. And try
to focus on keeping those tests as small as you can. Time for another example: time to write the first
test. This is often an awkward point: How do you start testing? What test should you start with? My
advice is to find the simplest thing to test. If there is a basis case, test that first, then move
incrementally to the larger cases.
In this example our basis case is an empty list of movies. Here's a test method for empty list behavior,
from which we can learn several things:
Next, notice the assert... method calls. The final thing to notice is that the test refers to classes
(MovieList and Movie) as well as methods (size(), isEmpty(), and movies()), none of which
have been written yet. This is Test-Driven Development in action.
As we'll explore later on, you also want to be on the lookout for opportunities to refactor your test code
just as you do with the production code. Test code has characteristic smells of its own. At XP2001 there
was a great paper on refactoring test code[40].
When you run tests there are two types of feedback you get for each test:
if the test failed, whether it was an error or a failure, and what caused it
Additionally, you get feedback on the test run as a whole: pass (the diligently sought-after green bar) or
fail (the not-so-pleasant, but still invaluable, red bar). The overall pass/fail feedback is commonly
referred to as a greenbar/redbar due to the implementation of the graphical test runners: the progress
bar is green as long as tests are passing but turns red if one test fails (and stays red). This provides
immediate and undeniable feedback.
As mentioned above, when a test fails it can be due to an error or a failure. An error indicates that the
test terminated unsuccessfully due to an unexpected exception being thrown. A failure indicates an
unsuccessful termination due to an assertion failure.
JUnit's Text UI
We'll start with the simplest thing that could possibly work: a command line, textual TestRunner. You simply run junit.textui.TestRunner, passing it the name of the Test to run, as shown in Figure 4.2. As each test is run, a character is printed to the console (actually System.out): . for a pass, E for an error, or F for a failure.
The junit.swingui.TestRunner, shown in Figure 4.3, provides a much more elaborate interface for running tests. A notable addition with respect to the AWT GUI is the test hierarchy tab that allows you to open test suites to recursively see the component tests. Each test case is annotated with a green check or red X, indicating the pass/fail status of that particular test. Selecting a test and clicking on the Run button will rerun that particular test.
There are two tasks in the optional task jar for using JUnit from within an ANT build file. Both support a variety of attributes that we won't cover here. The ANT documentation does a more than adequate job.
The first task is junit, which runs a set of JUnit tests. The screen output from a junit task is minimal, and not overly useful. Here's a build.xml snippet that shows basic use of the junit task; it specifies an XML formatter that is needed in support of the task we'll discuss in a moment:
<junit printsummary="on">
    <classpath>
    </classpath>
</junit>
ANT provides another task for use with JUnit: junitreport. The purpose of this task is to take status reports (in XML) that are generated by one or more junit tasks, consolidate them, and generate a nested set of HTML pages that present the test results.
Using the junitreport task increases the value of running the entire test suite as part of an automated build. The HTML output can be copied (by another ANT task) to a known place on the development team's intranet site. By running the build regularly, you can make full and detailed status information (as described by the tests) available in an up-to-date form. Also very useful is the high-level test count. Coupled with a continuous integration and build process (e.g., CruiseControl [URL 37]) you can literally see the test count increase over the course of hours. In an XP setting, this feedback is quite valuable.
Figures 4.4 and 4.5 show parts of the resulting report (top level and details, respectively). Here's a build.xml snippet to build a report from the output of the previous snippet:
<junitreport>
    <fileset dir=".">
    </fileset>
</junitreport>
Figure 4.6 shows an Eclipse session with a JUnit view open (center right). The Eclipse TestRunner is a view within the workbench. It can be made into a fast view (i.e., docked in the perspective bar and popped back on demand), run in debug mode, and invoked by running a class as a JUnit Test. This tight integration also provides the ability to double-click a line of the failure trace and have an editor open on the corresponding file, positioned at the line causing the failure. Nifty. Clicking on the name of a test in the Failures or Hierarchy tabs opens an editor on the source of that test. Having JUnit this integrated speeds up the programming experience greatly: make a change, a quick keystroke to save and compile, and another quick keystroke to rerun the last set of tests. These features, coupled with Eclipse's other TDD-supporting features (and the fact that it is open source), make it a great choice of IDE for TDD.
public void setUp() {
    emptyList = new MovieList();
}
The counterpart of setUp() is tearDown(), in which you perform any required cleanup of the test fixture. This is most useful if, as part of the fixture, you need to allocate limited resources that will not be reclaimed automatically, such as databases and communication connections.
The setUp() method is called before each test method, and tearDown() is called after each one. This keeps tests isolated and reduces the chance of side effects from test to test. If you want setUp() and tearDown() to be common for all tests in a TestCase, there is a decorator that does just that. Chapter 5 discusses this and other extensions.
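The per-test lifecycle described above can be sketched with a minimal stand-in. This is not the real JUnit framework, only an illustration of the setUp/test/tearDown ordering; the class and method names below are hypothetical:

```java
// Minimal stand-in illustrating JUnit's per-test lifecycle (not the real API).
abstract class LifecycleTestCase {
    protected void setUp() {}
    protected void tearDown() {}

    // JUnit brackets every test method with setUp()/tearDown(), so each
    // test gets a fresh fixture and cannot leak state to the next one.
    void runBare(Runnable testBody) {
        setUp();
        try {
            testBody.run();
        } finally {
            tearDown();
        }
    }
}

public class Main {
    public static void main(String[] args) {
        LifecycleTestCase tc = new LifecycleTestCase() {
            protected void setUp()    { System.out.println("setUp"); }
            protected void tearDown() { System.out.println("tearDown"); }
        };
        // Two "test methods": each one is wrapped by setUp/tearDown.
        tc.runBare(() -> System.out.println("testOne"));
        tc.runBare(() -> System.out.println("testTwo"));
    }
}
```

Running this prints setUp, testOne, tearDown, then setUp, testTwo, tearDown: the fixture is rebuilt for every test, which is exactly what keeps tests isolated.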
USING TESTSUITE
As mentioned earlier, TestSuites are used to collect a selection of Tests so that they can be run as a unit. Note that a TestSuite may recursively contain other TestSuites.
When you start out with JUnit you will likely be using TestSuites without even knowing it. The framework uses reflection to find and collect all of the test methods in a given TestCase whose signatures match public void testXXX().
When you use this facility, you have no control over, or any guarantee of, the order in which the test methods will be run. Tests should be written in such a way that they are order independent, so this should not be a problem. If, however, you have a situation where you want or need tests to run in a specific order, you can supply a public static Test suite() method which builds the TestSuite as required. Here is an example:
I can't stress enough that test methods within a TestCase should be completely independent. You should seldom have to resort to writing your own suite() method. In xUnit implementations for languages that lack a reflection mechanism, you will have to build suites by hand, as it were.
If you find that you want or need to have test methods run in a specific order you should see if there is
a way to remove that requirement. It may be that you are relying on an external resource whose state
is modified by each subsequent test. Try to find a way to tease apart those dependencies, setting up the
appropriate fixture individually for each test. You might be able to do this by mocking such an external
resource. For more on mock objects, see Chapter 7.
Another, more valid, use of TestSuite, in my opinion, is to create smaller TestSuites which include a specific subset of the tests. There are two main reasons to do this:
1. You can create custom suites of tests that relate directly to the task you are working on. By
doing this you can save time running tests by focusing the tests that are run on the area of the
code you are working on. But always keep that suite and the code it tests in sync. As you find
yourself needing to involve code that your suite doesn't test, be sure to add the tests for that
code to your suite so that you can proceed, protected and confident.
2. You will most likely want to have a TestSuite that tests a complete package or module. My practice is to have a suite in each package called TestAll that includes all tests in that package and all subpackages. By doing this, I can easily test any subtree of a project. Here's an example:
Let's begin by considering TestCase. It is used to group related tests together. But what does related mean? It is often misunderstood to mean all tests for a specific class or specific group of related classes. This misunderstanding is reinforced by some of the IDE plugins that will generate a TestCase for a specified class, creating a test method for each method in the target class. These test creation facilities are overly simplistic at best, and misleading at worst. They reinforce the view that you should have a TestCase for each class being tested, and a test for each method in those classes. But that approach has nothing to do with TDD, so we won't discuss it further.
This structural correspondence of tests misses the point. You should write tests for behaviors, not
methods. A test method should test a single behavior. Examples include:
public void testSize() {
    assertEquals("Size of an empty list should be zero.",
                 0,
                 emptyList.size());
}

public void testIsEmpty() {
    assertTrue("Empty list should report empty.",
               emptyList.isEmpty());
}

public void testIterator() {
    Iterator emptyListIterator = emptyList.iterator();
    assertFalse("Iterator from empty list should be empty.",
                emptyListIterator.hasNext());
}
}
TestCase is a mechanism to allow fixture reuse. Each TestCase subclass represents a fixture, and contains a group of tests that run in the context of that fixture. A fixture is the set of preconditions and assumptions with which a test is run. It is the runtime context for the test, embodied in the instance variables of the TestCase, the code in the setUp() method, and any variables and setup code local to the test method. In the above example, the fixture consisted of emptyList, a MovieList with no elements.
An instance of a TestCase is created for each individual test method. When the TestCase object is run, it builds the fixture (using setUp()), runs its single test, and tears down the fixture (using tearDown()). Figure 4.7 shows this. Note that the two test case objects are the same object, but with methods from the super and subclass. By allowing TestCase to contain multiple tests (i.e., public void testXXX() methods) you are sharing a fixture definition.
Figure 4.7. Sequence of running a single TestCase.
This hardwired fixture-orientation makes TestCase a bad choice for grouping tests structurally or conceptually—TestSuite is better for that, as we'll see later. So instead of using TestCase to group tests for a given class, try thinking about it as a way to group tests that need to be set up in exactly the same way.
A measure of how well your TestCase is mapping to the requirements of a single fixture is how uniformly that fixture (as described by the setUp() method) is used by all of the test methods. Whenever you discover that your setUp() method contains code for some of your test methods, and different code for other test methods, consider it a smell that indicates you should refactor the TestCase into two or more TestCases.
Running a cohesion metric on your test code can help bring this to light.
Once you get the hang of defining TestCases this narrowly, you will find that they are easier to understand and maintain. And again, as we will see shortly, there are still lots of ways to organize the tests for a specific class (or for any other natural grouping) so that they are easy to identify.
For example, here is a TestCase similar to the one above, but with multiple fixtures:
This is better, but notice that it leaves a little smell behind: overly long test method names that echo the fact that we are testing a list with one item. Because the class name makes it clear what size list we are testing, we can shorten these method names to describe just the behaviors being tested: testSize() and testIsEmpty().
Both TestSuite and TestCase implement the Test interface. The TestRunners work with objects that are Tests; hence, we can point a runner at any Test (i.e., TestSuite or TestCase) and recursively run all the contained tests. At runtime, after the Tests have been recursively collected, the structure looks something like that shown in Figure 4.9. The internal nodes are TestSuites and the leaves are TestCases (remember, each TestCase instance represents a single test method).
One option is to put the test code in the same package as the code being tested. This allows you to reference package-scoped members from your test code. A disadvantage is that it clutters your packages with test classes, making it less clear what is production code.
A second issue is whether you put test code in the same source tree as the code being tested. My
personal preference is to answer this with a resounding "yes." There is no real technical advantage either
way, but I just like to keep the tests tightly bound to the tested code. Everything is in one place, and it
reinforces the philosophy that code without tests doesn't exist. Also, not all tools handle multiple source
trees very well.
Keep in mind that some organizations will have policies about where test code must reside, keeping test
code separate from production code, or about what code is where in general. If that's the case, you'll
generally have to go along with it.
As for the question of using the same or parallel source tree, I use a
single source tree. This keeps things simple, binds the test code
closely to the corresponding production code, and works with all tools.
TIPS
As with any technology or technique, there are certain idioms or rules of thumb that are helpful to know
when you are practicing TDD. I've polled the TDD community and collected a list of what the leading
practitioners consider good ideas.
Some things are easier to test, or it's obvious how to write the tests. The types of things that this could
include are:
proper handling of null (but only in cases where null would be a potential value)
generally, the basis case for recursive or iterative structures and computations.
By tackling these easy tests first you quickly get into the test-first rhythm.
Use assertEquals
You should use the assertEquals() method as much as possible[31]. This is a matter of clarity.
All of the methods that are inherited from Assert have a form that takes a message as the initial argument. Use it![31] If you feel that the test is clear and failure would be unambiguous, please disregard this advice. However, if you find that the failure isn't clear, including a message does help.
For example, if the test is as simple as possible, having only a single assertion, and is descriptively
named then there is little reason to have a message in the assertion. It is simply redundant. However, if
you have multiple assertions in a single test, use the message to aid in identifying them in the
TestRunner output. If an assertion fails, the message will indicate which one.
Failure to use the message argument is one of the smells reported by Arie van Deursen et al. at XP2001[40].
This message argument provides an explanation of the failure to the user. It makes it easier to
understand why the test is failing. This can be especially useful when you are not running JUnit in an
integrated environment where you can double-click on the error trace and jump to the assert line that
failed. It also serves as documentation for the tests, stating explicitly what the failure condition is for
that test.
In any case, the message shouldn't just echo what the test code, or the built-in error message, says.
Most importantly, keep the number of assertions to a minimum in each test method. Doing this will keep
your tests small and focused. This leads to easy-to-understand tests. The "extreme" of this is one
assertion per test.
Here's an initial version of a test from Chapter 8 which has one monolithic test method:
Notice that this one method, testWidgets(), sets up a fixture and tests three different things: the list, the field, and the button. Oops! Time to refactor, making a fixture and separating those tests. The result is shown below. Notice how much cleaner and more understandable it is.
protected void setUp() {
    control = EasyMock.niceControlFor(MovieListEditor.class);
    mockEditor = (MovieListEditor) control.getMock();
    control.activate();
    window = new MovieListWindow(mockEditor);
    window.init();
    window.show();
}
This is similar to testing null handling, but includes items such as empty strings, 0, and MAX_INT. Don't forget about domain-specific boundary conditions. These are often more restrictive than the natural ones. This is also a bootstrap or tester's-block technique. These tests tend to be easy to write, so if you're casting about for what to write, try these if any remain untested.
Try writing tests in such a way as to make them as independent as possible. Conditions that cause one
test to fail shouldn't (ideally) cause other tests to fail as well.
When you are extracting interfaces, keep them small. Interfaces are meant to be focused, so keep them that way. An interface that declares too much is a smell. Interfaces with fewer than three declarations are great.
An advantage of small, focused interfaces is that they make it easier to create and maintain mock
implementations.
A few lines of output per test look fine when running a small number of tests, but when you're running a suite of 1,000-plus tests, it just becomes so much screen garbage. Worse than that, it can actually slow down the tests. One approach that you can take if there are log messages being generated is to reassign the log's output stream in the setUp() and tearDown() methods.
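That stream-reassignment trick can be sketched as follows, with System.out standing in for a logger's output stream (adapt the idea to whatever logging API you actually use):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class Main {
    static PrintStream savedOut;

    // setUp(): divert the noisy stream into a throwaway buffer.
    static void setUp() {
        savedOut = System.out;
        System.setOut(new PrintStream(new ByteArrayOutputStream()));
    }

    // tearDown(): restore the real stream so later output is visible.
    static void tearDown() {
        System.setOut(savedOut);
    }

    public static void main(String[] args) {
        setUp();
        System.out.println("chatty log message");  // swallowed by the buffer
        tearDown();
        System.out.println("test run output");     // reaches the console
    }
}
```

Only "test run output" reaches the console; the chatty message during the "test" is silently discarded.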
If you can check something visually, try to do the same with an assertion instead. After all, these tests
are supposed to be automated, with programmatic assertions to do the work.
When you are testing code that talks to a database or communications system (serial, wireless, network,
etc.) use interfaces to decouple the code from the actual resources. Then you can use mocks (see
Chapter 7 for more on mocks).
This decoupling is a good idea in any event, regardless of whether you are using TDD.
In each of your test cases, or suites for that matter, put in a main() method that runs that test in the textui TestRunner. The following would suffice:
Doing this lets you easily run any test from the command line or other tool.
When you are writing a test, start by writing what you are testing: the assert. Then work backward and
fill in whatever is needed to set things up for the assert.
The text of various JUnit failure reports uses the toString() method of the expected and/or actual objects involved in the assertion. If meaningful implementations of toString() are provided, these failure reports will be more informative, saving time and effort.
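As a sketch of the payoff (Movie is a hypothetical class), here is the kind of report text an equality failure produces when toString() is meaningful, rather than the default ClassName@hashcode noise:

```java
// Hypothetical Movie class with a meaningful toString().
class Movie {
    private final String title;
    Movie(String title) { this.title = title; }
    public String toString() { return "Movie<" + title + ">"; }
}

public class Main {
    public static void main(String[] args) {
        Movie expected = new Movie("Star Wars");
        Movie actual = new Movie("Star Trek");
        // A JUnit equality failure embeds toString() of both objects,
        // producing a readable report along these lines:
        System.out.println("expected:<" + expected + "> but was:<" + actual + ">");
    }
}
```

The failure message now names the two movies directly, so you can often diagnose the problem without even opening a debugger.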
SUMMARY
There are a variety of frameworks available for writing unit/programmer tests in a variety of languages.
The most effective are those that allow you to develop tests in the same language and same
environment as the code being tested. In this book we use the xUnit family of test frameworks—JUnit,
in particular, since we are working in Java.
We looked at the architecture of JUnit, its assertion statement syntax, and the structure and style of good JUnit TestCases and test methods. We looked at how to run JUnit tests and interpret the results. We looked at the JUnit plugin for the Eclipse IDE, and their TDD-friendly tight integration with each other. We discussed automatically running tests with an automated build system. We then discussed the differences between, and best uses of, TestCase and TestSuite. We then discussed the topic of organizing test code in relation to production code, and finally closed with a number of JUnit-related TDD tips.
Part IV provides a look at the xUnit implementation for some other languages.
The xUnit family of frameworks are useful in any development context, but in eXtreme Programming projects their use is nearly universal. In a sense, XP and xUnit grew up together. The articulator of XP, Kent Beck, was also the author of SUnit (the original member of the xUnit family) and an author of JUnit. In the spirit of the Agile development process movement, JUnit is barely sufficient for the task. It provides just enough to accomplish the task, and no more.
Another reason for JUnit's success is the fact that it is open source. This has allowed its users to create
extensions and tools that work with JUnit to provide more complete or wide-reaching solutions to the
challenge of testing. We cover a number of those extensions in the next chapter.
This chapter discusses some of the useful extensions that have been made to JUnit. Some come with
JUnit while others are independent efforts. I have the good fortune to have become friends with the
creators of several of the extensions we will be looking at in this chapter. They have agreed to write
about the extensions they've created.
STANDARD EXTENSIONS
This section looks at the set of extensions that come as part of JUnit, in the package junit.extensions.
ActiveTestSuite
This is a simple extension that allows you to write/run tests that must interoperate in realtime. When the suite is run, it runs each test in its own thread and waits until all terminate before completing. This is a subclass of TestSuite, so to use this capability we simply need to instantiate ActiveTestSuite and add tests to it rather than to a TestSuite.
ExceptionTestCase
There are two ways to test that an Exception is being thrown when it should be. One is to do it explicitly, failing immediately following the statement that should throw it:
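The explicit idiom can be sketched as follows. Account and its withdraw() rule are hypothetical, and a plain println stands in for JUnit's fail(); in a real TestCase the commented line would be the actual fail() call:

```java
// Hypothetical class under test.
class Account {
    void withdraw(int amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be non-negative");
        }
    }
}

public class Main {
    public static void main(String[] args) {
        Account account = new Account();
        try {
            account.withdraw(-5);
            // In a TestCase this would be: fail("Expected IllegalArgumentException");
            System.out.println("FAIL: no exception thrown");
        } catch (IllegalArgumentException expected) {
            // Reaching the catch block means the behavior is correct.
            System.out.println("exception thrown as expected");
        }
    }
}
```

The expected exception is visible right in the test body, which is exactly the communication-of-intent advantage discussed below.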
There is another way, which I do not recommend: ExceptionTestCase. To do that, write the test ignoring the exception that should be thrown and create a TestCase using ExceptionTestCase:
Then you create the tests, passing the Exception to be thrown as well as the name of the test to run:
To use this effectively you will need to maintain your own suite() method in any test cases that are subclasses of ExceptionTestCase, in order to supply the Exception that should be thrown. For the above example you would need the following:
public static Test suite() {
    TestSuite suite = new TestSuite();
    suite.addTest(new ThrowingTest("testForThrow", SpecialException.class));
    return suite;
}
This approach is more limited than doing the Exception check explicitly (as in the first code snippet). Firstly, having the exception explicit in the test communicates intent better. Secondly, you can no longer rerun a single test method.
TestDecorator
This extension is used to wrap some additional behavior around a test run. To do this, you subclass TestDecorator and override the run() method to do whatever is required around a call to basicRun(). The following two extensions are built using TestDecorator. Note that any Test can be decorated.
I think a simple example is in order. Let's assume that we would like a test to be a bit more instrumented when run on the command line. We can use TestDecorator to achieve this.
public void run(TestResult result) {
    System.out.println("Starting " + fTest.toString());
    basicRun(result);
}
}
public void testMethod() {
    // ...
}
OK (1 tests)
RepeatedTest
This class lets us run a test a specified number of times. To use it, we simply create an instance, passing it the test to be repeated and the number of times to repeat it. RepeatedTest is a subclass of TestDecorator so we can wrap it around any instance of Test.
Here's an example of a use of RepeatedTest which will run testMethod() ten times:
public void testMethod() {
}
..........
Time: 0.096
OK (10 tests)
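The decorator mechanics behind RepeatedTest can be sketched with minimal stand-ins. The Test interface and Repeated class below are simplified illustrations, not the real junit.framework/junit.extensions code:

```java
// Minimal stand-ins for the decorator idea (not the real JUnit interfaces).
interface Test {
    void run();
}

class SimpleTest implements Test {
    public void run() { System.out.println("running test"); }
}

// Wraps any Test and runs it a fixed number of times, in the same
// spirit as junit.extensions.RepeatedTest wrapping a decorated test.
class Repeated implements Test {
    private final Test fTest;
    private final int fTimes;
    Repeated(Test test, int times) { fTest = test; fTimes = times; }
    public void run() {
        for (int i = 0; i < fTimes; i++) {
            fTest.run();  // delegate to the decorated test each iteration
        }
    }
}

public class Main {
    public static void main(String[] args) {
        new Repeated(new SimpleTest(), 3).run();  // runs the test 3 times
    }
}
```

Because the decorator itself implements Test, it can in turn be wrapped by other decorators, which is what makes combinations like a timed, repeated test possible.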
TestSetup
This class is a decorator that allows us to set up and tear down a fixture that stays in place for the duration of running the decorated test. Recall that the overriding goal is to have tests totally independent and not sharing a fixture. There are resources that are expensive enough that we don't want them to be allocated and freed for each test. That would cause the tests to run too slowly. For these cases, we can use TestSetup to do one-time setup and teardown for an entire set of tests (that are in a TestSuite). Resources that this could be done for include such things as database or network connections. Another approach is the use of TestResource, which is described later.
Use this decorator by subclassing it and overriding setUp() and tearDown(). Here's a trivial example. It uses a RepeatedTest to run a trivial test five times (to show the local fixture), and decorates that with an instance of a TestSetup subclass, SetupExample (to show the global fixture).
public void setUp() {
    System.out.println("Setting up global fixture...");
}

public void tearDown() {
    System.out.println("Tearing down global fixture...");
}
}

public class TestForSetupExample extends TestCase {
    public void testMethod() {
        System.out.println("testMethod");
    }
    public void setUp() {
        System.out.println("\nlocal setUp");
    }
    public void tearDown() {
        System.out.println("local tearDown");
    }
}
local tearDown
Tearing down global fixture...
Time: 0.037
OK (5 tests)
AssertMo extends Assert from the JUnit framework and adds several assertion methods that are lacking in the standard JUnit distribution: [1]
[1] In keeping with the philosophy of the folks who wrote MockObjects, these assert methods are available only in a version with the initial message parameter.
assertEquals
This is another version of assertEquals that asserts that the length of two arrays is the same, and that each pair of objects at a given index is equal.
assertExcludes
assertIncludes
assertStartsWith
This section was contributed by Mike Clark of Clarkware Consulting [URL 56], the developer
and maintainer of JUnitPerf [URL 15].
Introduction
Good mountain bikers first learn to meticulously pick their way through every obstacle on every trail. Great
mountain bikers must then learn to disregard smaller obstacles, crudely plowing over them to conserve
energy for the toughest terrain. Similarly, good programmers optimize every line of code using a variety of
techniques, but great programmers learn to write simple code first. They save optimizations for hot spots
and performance-critical sections of code. The trick is identifying the most important code sections and
continually measuring them over time as the software undergoes change.
JUnitPerf is an open source JUnit extension that uses the decorator pattern to adorn standard JUnit tests as performance or scalability tests. A TimedTest measures the elapsed time of an existing test. A LoadTest runs an existing test with a simulated number of concurrent users and iterations. Using JUnitPerf tests we can transparently leverage our existing tests to ensure that the functionality being tested continues to meet performance and scalability requirements over time.
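The timing-decoration idea can be sketched with minimal stand-ins. This is not the real JUnitPerf API (whose TimedTest decorates junit.framework.Test objects); it is just the shape of the technique:

```java
// Minimal stand-in for a test (not the real JUnit Test interface).
interface Test {
    void run();
}

// Decorates a test and fails if its elapsed time exceeds a budget,
// in the spirit of JUnitPerf's TimedTest.
class Timed implements Test {
    private final Test fTest;
    private final long fMaxMillis;
    Timed(Test test, long maxMillis) { fTest = test; fMaxMillis = maxMillis; }
    public void run() {
        long start = System.currentTimeMillis();
        fTest.run();
        long elapsed = System.currentTimeMillis() - start;
        if (elapsed > fMaxMillis) {
            throw new AssertionError("Maximum elapsed time exceeded! Expected "
                    + fMaxMillis + " ms, but was " + elapsed + " ms.");
        }
    }
}

public class Main {
    public static void main(String[] args) {
        Test quick = () -> { /* stands in for the functional test body */ };
        new Timed(quick, 1000).run();  // well under budget, so no error
        System.out.println("timed test passed");
    }
}
```

Because the wrapped test runs unchanged, the functional assertions still apply: if the decorated test fails, the timed test fails too.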
Our team is building a Web-based application for the medical industry. Today's particular story involves
querying the database for all prescriptions written by a given doctor, then displaying the prescription
information to the user.
We started by writing clean and simple code that expressed intent, driven by tests that validate that the
code is doing the right things. We've been wisely deferring any premature code optimizations because we
don't want to speculate and overspend on performance until we identify a problem. But now that we've
demonstrated working code, our customer decides there's value in improving the performance and scalability
of this story.
We don't want to compromise the reliability of prescription queries by complicating the code for performance
and scalability. JUnitPerf tests will fail if the tests they decorate fail, so we have confidence that tweaking
performance won't adversely affect existing functionality.
Our Web page is taking approximately 5.0 seconds to display 100 rows of prescription information. That's
less than ideal for our users. So our customer writes a story stating that the response time of a Web page
containing up to 100 prescriptions should not exceed 3.0 seconds. We've got our work cut out for us!
Many things happen behind that curtain of a Web page. We need to know which sections of the code contribute the most to the page's overall response time. We will then focus on tuning those performance-critical areas first. We fire up our favorite profiling tool, hit the Web page, and let the profiler hunt down the busiest code. Alas, it identifies the Pharmacy.findPrescriptionsByPrescriber() method, contributing approximately 3.5 seconds to the total response time.
Thankfully, we already have the following JUnit test for that method:
public void testFindPrescriptions() throws Exception {
    PrescriptionDatabase database = new PrescriptionDatabase();
    Pharmacy pharmacy = new Pharmacy(database);
    Collection prescriptions = pharmacy.findPrescriptionsByPrescriber("Dr. Jekyl");
The test validates that indeed every prescription written by Dr. Jekyl is returned. The Pharmacy class employs a PrescriptionDatabase to perform the prescription query, without worrying about what kind of persistence strategy is being used under the hood. In this case a one-time test fixture was used to prepopulate the PrescriptionDatabase with exactly 100 prescriptions signed by the villainous Dr. Jekyl.
We're glad we wrote well-factored code from the start because in doing so we enabled the profiler to point us towards a single cohesive method that we can tune for maximum benefit. If we could only boost the performance of the Pharmacy.findPrescriptionsByPrescriber() method, our Web users would be much happier. But how do we know when to stop tuning? Write an automated test for the performance story, of course. When it passes, we're done!
With the performance story in hand, we write a PharmacyTimedTest that creates a TimedTest instance, passing in the existing PharmacyTest.testFindPrescriptions() test case method and a maximum elapsed time of 1.5 seconds. By default, the TimedTest will wait for the completion of its decorated test and then fail if the maximum elapsed time was exceeded. The Web page contributes another 1.5 seconds in presentation overhead, so when this test passes we'll be on target for a 3.0 second total response time.
As a convenient way to run the PharmacyTimedTest, we define a suite() method called by a TestRunner in the main() method, as follows:
We run the PharmacyTimedTest and it fails with the following output:
.
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 3516 ms
F
Time: 3.516
There was 1 failure:
1) testFindPrescriptions(PharmacyTest)
junit.framework.AssertionFailedError:
Maximum elapsed time exceeded! Expected 1500 ms, but was 3516 ms.
FAILURES!!!
Tests run: 1, Failures: 1, Errors: 0
The expected elapsed time was exceeded—we expected 1.5 seconds but it actually took 3.5 seconds. We're
not surprised, and now we have a goal to work towards, as well as a test that tells us if we're getting
warmer or colder.
We then go after some low-hanging fruit: we tune the underlying SQL a bit and optimize some of the object-relational mapping logic necessary to create Prescription objects from database rows. Then we run the PharmacyTimedTest again and it passes with the following output:
.
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 1452 ms
Time: 1.452
OK (1 tests)
Much better! We're making progress. Had the test failed we would have continued to optimize until it
passed, but no further. Now that we have quantitative performance criteria in the form of an automated
test, the test will continue to keep its performance in check as we refactor the code.
Scaling Up
We've successfully demonstrated single-user response time with a repeatable test. We haven't thought much
about multiuser performance because up until now the scalability requirements were unknown. Instead,
we've been relying on simple, modular code capable of responding when the customer writes a scalability
story. Now that it's a bit later in the project our customer has a better estimate of the expected load on the
production system.
It is expected that multiple pharmacies will be hitting the same Web page concurrently. So the customer
writes a scalability story that states that the response time of a Web page containing up to 100
prescriptions should not exceed 10.0 seconds under a load of 10 concurrent users.
The LoadTest will run the TimedTest with 10 concurrent users (threads), and each user will run the
PharmacyTest.testFindPrescriptions() test case method once. We calculate 7.0 seconds to be a
sufficient response time under load based on the Web page incurring 1.5 seconds of additional overhead.
We write the following test that will fail if any user's response time exceeds 7.0 seconds:
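The test listing itself is missing from this extract. JUnitPerf expresses it by composing a LoadTest of 10 users around a TimedTest with a 7-second budget; since JUnitPerf is unavailable here, the sketch below reimplements the same composition with plain threads. All names are illustrative stand-ins, and the key property matches the text: each user's own elapsed time is checked against the budget.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for LoadTest(TimedTest(test, 7000), 10): run the test once per
// concurrent user and count the users whose individual run blew the budget.
public class PharmacyLoadTestSketch {

    /** Returns how many of the concurrent users exceeded maxMillis individually. */
    public static int runConcurrently(Runnable testCase, int users, long maxMillis) {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicInteger failures = new AtomicInteger();
        for (int i = 0; i < users; i++) {
            pool.execute(() -> {
                long start = System.nanoTime();
                testCase.run();  // stand-in for testFindPrescriptions()
                long elapsed = (System.nanoTime() - start) / 1_000_000;
                if (elapsed > maxMillis) failures.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return failures.get();
    }
}
```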
We chose this order of test decoration because the scalability story explicitly related to each user's individual
response time under load. If instead the story were expressed in terms of throughput, we could switch the
order of test decoration such that the TimedTest decorated the LoadTest which in turn decorated the
PharmacyTest. This would fail if the cumulative response time of all concurrent users exceeded a
threshold.
We run the PharmacyLoadTest and it fails with the following output:
..........
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 1522 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 2353 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 3175 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 4016 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 4837 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 5678 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6519 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 7371 ms
F
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 8212 ms
F
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 9063 ms
F
Time: 9.303
FAILURES!!!
Tests run: 10, Failures: 3, Errors: 0
Again, we're not surprised because we didn't concern ourselves with scalability when we wrote the original
code. We notice that the response times increased for each successive user until the last three users failed
the test. It appears to be a classic case of resource contention creating a bottleneck in our system. We
quickly conclude that concurrent users must be contending for a single database connection. Ideally, we
want to maximize our time by making a change with low cost and a high reward, so we decide to make a
minor change to the PrescriptionDatabase class to use a database connection pool with 10 active
connections. This should help the query scale.
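The pooling change itself can be sketched in a few lines. This is a minimal, generic stand-in, not the book's PrescriptionDatabase code: the connection type is a placeholder, and a production pool would also handle validation, timeouts, and leaked connections.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of the fix: instead of every user contending for one
// connection, callers borrow from a fixed pool and return when done.
public class ConnectionPoolSketch<C> {
    private final BlockingQueue<C> idle;

    public ConnectionPoolSketch(List<C> connections) {
        // pre-populate the pool with, e.g., 10 open connections
        idle = new ArrayBlockingQueue<>(connections.size(), false, connections);
    }

    /** Blocks until a connection is free; users no longer serialize on one connection. */
    public C borrow() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for a connection", e);
        }
    }

    /** Returns a connection to the pool for the next user. */
    public void release(C connection) {
        idle.offer(connection);
    }
}
```

The even response times in the passing output below are exactly what a fair-enough pool produces: each user waits roughly the same amount for a connection instead of queueing behind all the others.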
We run the PharmacyLoadTest again and it passes with the following output:
..........
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6830 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6790 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6780 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6850 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6910 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6850 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6840 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6870 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6910 ms
TimedTest (WAITING): testFindPrescriptions(PharmacyTest): 6950 ms
Time: 7
OK (10 tests)
Excellent! The scalability tests are passing and the decorated test continues to pass as well. We notice that
the response times are nearly consistent for each user, thereby confirming our theory about resource
contention. If scalability starts to degrade in the future, the test will let us know by failing.
As we write more performance and scalability tests, they may take a while to run, and we don't want these
types of tests slowing down our development pace. We'll organize them in a special test suite to be run at
least once a day. We won't run this test suite during our test-driven development cycle, unless of course
we're fiddling with performance-critical code.
Conclusion
It's all too easy to optimize early only to find out that the optimized code isn't executed frequently. Equally
troubling and expensive are intended optimizations that instead degrade performance and scalability. It's
always best to write simple code, identify performance-critical sections of code when the reward is high, and
then create benchmarks in the form of repeatable JUnitPerf tests. The tests help reduce the stress of tuning
and retain their value over time by failing whenever the response time and throughput of the software move
in the wrong direction. They can be run automatically to continually measure performance in the face of
change. Moreover, the tests tell us when to stop tuning and deliver the software.
This section was contributed by Jens Uwe Pipka of Daedalos Consulting GmbH [URL 57],
the author and maintainer of The Daedalos JUnit Extensions[URL 17].
The Daedalos JUnit Extensions, or in short djux, provide additional tools to speed up programmer testing.
The core components are test resources. Using test resources, it is possible to make static resources
available for all tests during the whole development cycle. So, time-consuming initializations are done
only once and remain active over a complete series of test runs. Furthermore, djux includes additional
tools for database tests as well as for the integration of external test programs. You can download the
latest version from the djux homepage.
We will present an example of how you can implement and use your own test resources and show how
tests are sped up. We will implement a test that checks entries inside a database. To do this, the test
operates as follows:
the test checks if certain data exists in the database (this unit test is implemented using the
database checker that also comes with djux)
We start with a standard JUnit test. Our test case is named MyDatabaseTest and contains a class
variable for the database name. We also need instance variables for the database connection as well as
for our database checker. Furthermore, the constructor registers the appropriate driver to access our
database. Here's the code:
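The listing itself is missing from this extract. Below is a hypothetical reconstruction of the skeleton the text describes: a class variable for the database name, instance variables for the connection and the checker, and a constructor that registers the JDBC driver. The URL and driver class are placeholders, and a plain Object stands in for djux's DatabaseChecker, which is not on the classpath here.

```java
// Hypothetical skeleton; in the real code this extends junit.framework.TestCase
// and dbChecker is a com.daedalos.dbcheck.DatabaseChecker.
public class MyDatabaseTest {
    static final String DATABASENAME = "jdbc:somedb:books"; // placeholder URL
    java.sql.Connection connection;  // opened in setUp(), closed in tearDown()
    Object dbChecker;                // djux DatabaseChecker in the real code

    public MyDatabaseTest() {
        try {
            // the constructor registers the appropriate JDBC driver
            Class.forName("com.example.jdbc.Driver"); // placeholder driver class
        } catch (ClassNotFoundException e) {
            // the placeholder driver is absent in this sketch;
            // a real test would let this fail loudly
        }
    }
}
```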
Next, we implement the setUp() method to open the database connection and initialize the database
checker. This is done by asking the DriverManager for the connection and passing it to the database
checker's constructor.
We also need to implement tearDown() to close the database connection properly:
After implementing the necessary actions to open and close the database connection, we can start with
the actual test. First, the existence of the table BOOKS is checked. Using the djux database checker, this
can be done fairly easily:
Now we can run our first test. Assuming that the database is set up properly and the table BOOKS really
exists, the test is successful. Now we can go on with defining additional tests; for example, one that
checks that the table does not contain any records:
Running the unit test again, it takes some time before the results are displayed. This is because for each
test a new connection is opened and closed. This gets even worse if you are working with a remote
database. This overhead can be reduced by implementing an appropriate TestSetup instead of using
the setUp() and tearDown() inside the test case, but the connection is still opened and closed for
each test run. Furthermore, it gets even more complex if several test cases refer to the same database
connection. It then becomes necessary to implement specific TestSuites considering the different test
runs.
Here is where the idea of test resources comes into play: using them, it is possible to open the database
connection once and access it during the whole testing cycle. It is no longer necessary to reopen the
database connection before a test run is executed.
But what does this mean for our implementation? Let's have a look at our test case. We only have to
change the setUp() and tearDown() methods. Instead of handling the database connection there, we
implement the functionality inside a test resource.
Now we are going to define the connection to our database as a TestResource. We start refactoring
our existing test case and implement a new class, DbConnectionTR, that inherits from
com.daedalos.junit.TestResource. This new class holds the database connection. Therefore, the
constructor is used to register the appropriate driver. Furthermore, we add an instance variable,
connection, that holds the database connection. Finally, we define a constant for the database name.
Now we have to implement the inherited abstract methods start() and stop(). These methods are
called when the test resource is started at the beginning and stopped at the end of the test cycle,
respectively. We want to open the database connection when the TestResource is started and close it
at the end of the development cycle. So, we move this functionality from the test case to the
corresponding methods in the test resource.
The method start() tries to open the database connection. If it is successful, the return value is true;
otherwise false is returned. The return value is essential to enable the TestResourceFactory to
manage the state of the existing test resources.
public boolean start() {
    try {
        connection = DriverManager.getConnection(DATABASENAME);
    } catch (SQLException exc) {
        return false;
    }
    return true;
}
The method stop() closes the database connection. Again, the return value is true if the operation
was successful and false otherwise:
public boolean stop() {
    try {
        if (connection != null) {
            connection.close();
            connection = null;
        }
    } catch (SQLException e) {
        return false;
    }
    return true;
}
It is also possible to provide a short description of the test resource. This is an optional but useful way
to manage different test resources. To do this, we override the method getDescription() by
returning the description:
As you can see, the implementation is exactly as in our test case before. All we have done is move the
functionality to another class.
Finally, we have to adapt our test case itself. The test case has to inherit from the class
com.daedalos.junit.TestResourceTestCase, which is in turn a subclass of TestCase. In
addition to moving the implementation for opening and closing the database to the test resource class,
we must change the database access inside our test case. Instead of the database connection being
accessed directly as in the original implementation, it is now accessed via the DbConnectionTR class.
This is done by using the TestResourceFactory, which is the single interface for handling test resources
inside your tests; it uses the singleton pattern. To use a specific TestResource inside a test case, call
the method prepareTestResource(). The argument is the name of the test resource, in our example
the String "DbConnectionTR". After registering the test resource, you can access
it using the method getTestResource(), which also expects the test resource name as its argument.
public void setUp() {
    prepareTestResource("DbConnectionTR");
    TestResourceFactory trFactory = TestResourceFactory.current();
    DbConnectionTR dbTestResource =
        (DbConnectionTR) trFactory.getTestResource("DbConnectionTR");
    dbChecker = new DatabaseChecker(dbTestResource);
}
Now, the test methods themselves are not modified. The most important changes are made inside the
setUp() method. Specifically, the database connection is not opened anymore. Instead, the
DbConnectionTR is added to the active test resources. Afterwards, the connection is obtained from
the TestResourceFactory directly. It is necessary to cast the TestResource to its concrete class if
you want to access user-defined methods. The other modifications are rather easy: the constructor now
does nothing but delegate the work to its parent constructor, and tearDown() is no longer
necessary.
That's it! Now you can run your TestCase inside your TestRunner, for example,
junit.swingui.TestRunner. Running the test the first time, nothing seems to have changed: it still
takes a fair amount of time for the test to go green. But try it again: Instantly, the test is green again.
This is because, of course, the database connection must be opened by your first test run. But thereafter
the connection stays open, until you finally close the whole test runner.
In the preceding example, the new test resource is accessed via its fully qualified class name, including
the entire package name. Happily, it is possible to map your test resources to logical names instead,
which keeps your test cases cleaner. For example, my.example.DBConnectionTestResource can
be mapped to dbAccess.
For this purpose, the TestResourceFactory provides the method registerTestResource(). The
first argument specifies the logical name you want to use for the test resource; the second, the class it
refers to. Registering the TestResource inside the TestResourceFactory is only necessary
once. If it is not done directly through the TestResourceFactory, it can also be done by calling the
mapper class RegisterTestResource. For the earlier example, you can register your test resource by
executing the following command:
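The command listing itself is not reproduced in this extract. Based on the argument order described above, the djux call presumably pairs the logical name with the implementing class, along the lines of registerTestResource("dbAccess", my.example.DBConnectionTestResource.class). The class below is a simplified, map-backed stand-in for that factory idea, not djux's real TestResourceFactory.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in: a registry mapping a logical name ("dbAccess")
// to the test resource class it refers to.
public class TestResourceFactorySketch {
    private static final Map<String, Class<?>> registry = new HashMap<>();

    /** One-time mapping: logical name first, resource class second. */
    public static void registerTestResource(String logicalName, Class<?> resourceClass) {
        registry.put(logicalName, resourceClass);
    }

    /** Later lookups use the short logical name instead of the full class name. */
    public static Class<?> getTestResourceClass(String logicalName) {
        return registry.get(logicalName);
    }
}
```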
Until now, a test case that uses a test resource is defined as a subclass of
TestResourceTestCase. This class ensures that a specific test resource is accessible inside the test
case, and that it is started automatically when, for example, it is accessed for the first time.
Nevertheless, it is often better to manage the test resources manually in a more comprehensive test
environment. For this purpose, djux provides a specialized test runner based on the original swing test
runner. This test runner also includes a test resource browser that shows the current state of test
resources and enables the user to manage test resources.
Then you can open the Test Resource Browser using the TestResourceBrowser entry in the JUnit menu. A
new window pops up that shows the state of the currently registered TestResources (see Figure 5.1).
You can start, stop, and reset test resources with this GUI. Furthermore, it is possible to register new
test resources with their logical and class names, as well as remove them. This is much handier than the
manual operation described earlier. You also have the option to "auto"-start your test resources, so that
all registered test resources are started when you start the TRTestRunner.
public void setUp() {
    TestResourceFactory trFactory = TestResourceFactory.current();
    DbConnectionTR dbTestResource =
        (DbConnectionTR) trFactory.getTestResource("dbAccess");
    dbChecker = new DatabaseChecker(dbTestResource);
}
As you can see, this looks even more like a traditional JUnit test. After starting the dbAccess test
resource via the Test Resource Browser, our test runs again showing the green bar. If the test resource
is stopped, the unit test will fail with an Exception: "TestResource not available".
The decision about which approach to use to manage your test resources is dependent on your
environment: If you work in a graphical environment, it is much more convenient to use the
TRTestRunner, which is fully compatible with the Swing TestRunner and offers additional tools to manage
your test resources. But if your tests should also work with the AWT or the command-line-based test
runner, you should implement your tests that access test resources as subclasses of
TestResourceTestCase. In either case, the test resource implementation is identical, so you can use
them in both environments.
ExtensibleTestCase
Even though JUnit is a comprehensive testing tool, it is often necessary to also include external testing
tools. To simplify the integration of external testing tools, djux provides an easy-to-use mechanism to
plug in external testing tools as additional test cases: the ExtensibleTestCase.
Apart from setting the expected return value, ExtensibleTestCases are handled like any other standard
test cases and need no additional setup. Therefore, ExtensibleTestCases can be executed using the
standard JUnit TestRunner.
DatabaseChecker
In the previous example, we used the DatabaseChecker. This is a djux test utility that allows
executing simple database checks. In particular, the database checker provides checks concerning the
database structure as well as its content. It provides a simple and easy way to verify if the application
behavior has the expected effect on the persistent database content. If a middleware persistence
framework is used, this cross-check becomes particularly important.
To introduce the database checker in more detail, we continue with our previous example. Here we
develop an application that provides a book list that allows adding and removing books. The
requirements are that the book list be stored immediately in a database table BOOKS each time it is
changed. The table contains the columns TITLE, AUTHOR, and ISBN.
In the example above, we have already checked that the table BOOKS exists. We continue defining the
test case for the new user story. Therefore, a new test case BookListTest is created as a subclass of
TestCase. It contains an instance dbChecker of class DatabaseChecker that is initialized in the
setUp() method of our test by instantiating the class com.daedalos.dbcheck.DatabaseChecker.
Note that the constructor expects a valid database connection that is used for all database access. Again,
we use the database test resource that we developed in the previous example. We also use an instance
variable booklist that manages the new class representing the book list.
public void setUp() {
    TestResourceFactory trFactory = TestResourceFactory.current();
    DbConnectionTestResource dbTestResource =
        (DbConnectionTestResource) trFactory.getTestResource("dbAccess");
    dbChecker = new DatabaseChecker(dbTestResource.getConnection());
    booklist = new BookList();
}
Now we are going to define the interface of the BookList class by defining the test cases for our user
story. Because all changes have a direct effect on the database, DatabaseChecker is a valuable tool
to verify the functionality of the BookList class.
First, we have to initialize the book list. After this, it should contain no entries. To check this, we use
the database checker method rowCount(tableName) that returns the number of rows for a specific
table. In our example, we have to check the table BOOKS:
Next, a new book is added. If the book already exists, it should not be added to the book list. To check
if a book is added correctly, we use the method rowExists(tableName, whereClause). It returns
whether or not a row exists in the specified table for the given where clause. The method returns true
if there is exactly one row that fulfills the where clause. If more than one row exists or the entry
is not found, the return value is false.
Here, we add the same book two times. It is the task of the BookList class to meet the condition that
there is only one entry for each book. If this doesn't work, the database checker method
rowExists(tableName, whereClause) would cause the test to fail, because more than one entry
is found in the database. To check the simple existence of a specific entry, where more than one entry
is allowed, the database checker provides the method multipleRowsExists(tableName,
whereClause), which returns true if any row matches.
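The contrast between the two checks can be captured in a runnable sketch. The class below is a stand-in, not djux's DatabaseChecker: rows are plain objects and the where clause is a predicate, but the semantics match the text, with rowExists() true only for exactly one match and multipleRowsExists() true for at least one.

```java
import java.util.List;
import java.util.function.Predicate;

// In-memory stand-in for the DatabaseChecker semantics described in the text.
public class DatabaseCheckerSketch<R> {
    private final List<R> table;

    public DatabaseCheckerSketch(List<R> table) {
        this.table = table;
    }

    /** Analogue of rowCount(tableName). */
    public int rowCount() {
        return table.size();
    }

    /** True only when exactly one row fulfills the where clause. */
    public boolean rowExists(Predicate<R> whereClause) {
        return table.stream().filter(whereClause).count() == 1;
    }

    /** True when at least one row matches, duplicates allowed. */
    public boolean multipleRowsExists(Predicate<R> whereClause) {
        return table.stream().anyMatch(whereClause);
    }
}
```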
Now we want to implement a test for removing a book. We add some books and then remove the
second book. Again, we verify each step with the help of the database checker and the methods we
used before:
Here we check that the number of books in the book list grows as each new book is added. Similarly,
we verify that it decreases when a book is removed. But the test is not
complete until it checks whether the correct book is removed. We accomplish this by verifying that the
book list still contains the entries for the other two books. We use the method
getStringColumn(tableName, columnName, whereClause), which returns the value of the
specified column of the first row that matches the given where clause in the specified table. Thus, we
verify that for each ISBN the expected author and title are still available inside the book list.
With this test case it is now possible to implement the appropriate domain class BookList that contains
the mapping between the Java code and the database content. This small example shows how the
database checker can be used to verify if this mapping works as expected and the database content is
consistent with the application behavior.
This section was contributed by Tim Bacon of ThoughtWorks UK [URL 58], the author and
maintainer of xmlUnit[URL 18].
What is XMLUnit?
XMLUnit enables JUnit-style assertions to be made about the content and structure of XML. It is an open
source project hosted at sourceforge.net that grew out of a need to test a system that generated and
received custom XML messages. The problem that we faced was how to verify that the system generated
the correct message from a known set of inputs. Obviously, we could use a DTD or a schema to validate
the message output, but this approach wouldn't allow us to distinguish between valid XML with correct
content (e.g., element <foo>bar</foo>) and valid XML with incorrect content (e.g., element
<foo>baz</foo>). What we really wanted was an assertXMLEquals() method so we could
compare the message that we expected the system to generate and the message that the system
actually generated. And that was the beginning of XMLUnit.
Quick tour
XMLUnit provides a single JUnit extension class, XMLTestCase, and a set of supporting classes that
allow assertions to be made about:
XMLUnit can also treat HTML content (even badly-formed HTML) as valid XML to allow these assertions
to be made about Web pages.
Glossary
As with many projects, some words in XMLUnit have particular meanings, so a quick review would
probably be helpful. A piece of XML is either a DOM Document, a String containing marked-up content,
or a Reader that allows access to marked-up content in some underlying source. XMLUnit compares the
expected control XML to some actual test XML. The comparison can reveal that two pieces of XML are
identical, similar, or different. The unit of measurement used by the comparison is a difference, and
differences can be either recoverable or unrecoverable. Two pieces of XML are identical if there are no
differences between them, similar if there are only recoverable differences between them, and different if
there are any unrecoverable differences between them.
Configuring XMLUnit
There are many Java XML parsers available, and XMLUnit should work with any JAXP-compliant parser
library, such as Xerces from the Apache XML project. To use the XSL and XPath features of XMLUnit,
a Trax-compliant transformation engine is required, such as Xalan, also from the Apache XML project. To
configure XMLUnit to use your parser and transformation engine, set three System properties before any
tests are run:
Alternatively, there are static methods on the XMLUnit class that can be called directly. The advantage
of this approach is that you can specify a different parser class for control and test XML and change the
current parser class at any time in your tests, should you need to make assertions about the
compatibility of different parsers.
Let's say we have two pieces of XML that we wish to compare and assert that they are equal. We could
write a simple test class like this:
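The test class listing is missing from this extract; in XMLUnit it would extend XMLTestCase and call assertXMLEqual() with the control and test strings. Since XMLUnit is not on the classpath here, the sketch below hand-rolls only the crudest part of that comparison with the JDK's own DOM parser, comparing root element names. XMLUnit's real Diff engine performs a full recursive comparison, so treat this purely as an illustration of the control-versus-test idea.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Crude stand-in for assertXMLEqual(): parse both pieces of XML and
// compare only the root element tag names.
public class XmlEqualSketch {

    static String rootName(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            return doc.getDocumentElement().getNodeName();
        } catch (Exception e) {
            throw new RuntimeException("could not parse: " + xml, e);
        }
    }

    /** Very rough analogue of XMLUnit's comparison, root elements only. */
    public static boolean roughlyEqual(String controlXML, String testXML) {
        return rootName(controlXML).equals(rootName(testXML));
    }
}
```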
The assertXMLEqual test will pass if the control and test XML are either similar or identical.
Obviously, in this case, the pieces of XML are different and the test will fail with the message:
Comparing test xml to control xml
[different] Expected element tag name 'uuid' but was 'localId' -
comparing <uuid...> to <localId...>
When comparing pieces of XML, XMLUnit creates an instance of the Diff class behind the scenes.
The Diff class stores the result of an XML comparison and makes it available through the methods
similar() and identical(). The XMLTestCase class tests the value of similar() in the
assertXMLEqual() method and the value of identical() in assertXMLIdentical(). However,
it is easy to create a Diff directly in test cases:
This test fails as two pieces of XML are considered similar but not identical if their nodes occur in a
different sequence. The failure message reported by JUnit from the call to myDiff.toString() looks
like this:
For efficiency reasons, a Diff stops the comparison process as soon as the first difference is found. To
get all the differences between two pieces of XML, an instance of the DetailedDiff class, a subclass of
Diff, is required. Consider this test:
    DetailedDiff myDiff = new DetailedDiff(new Diff(myControlXML, myTestXML));
    List allDifferences = myDiff.getAllDifferences();
    assertEquals(myDiff.toString(), 2, allDifferences.size());
}
This test fails with the following message as each of the three news items differs between the control
and test XML:
XMLUnit can test XSL transformations at a high level using the Transform class that wraps a
javax.xml.transform.Transformer instance. Knowing the input XML, input stylesheet, and
expected output XML, we can assert that the output of the transformation matches the expected output
as follows:
The getResultString() and getResultDocument() methods of the Transform class can be used
to access the result of the XSL transformation programmatically if required, for example, as seen here:
XML parsers that validate a piece of XML against a DTD are common; however, they rely on a DTD
reference being present in the XML, and they can only validate against a single DTD. When writing a
system that exchanges XML messages with third parties there are times when you would like to validate
the XML against a DTD that is not available to the recipient of the message and so cannot be referenced
in the message itself. XMLUnit provides a Validator class for this purpose.
Xpath Tests
One of the strengths of XML is the ability to programmatically extract specific parts of a document using
XPath expressions. The XMLTestCase class offers a number of XPath-related assertion methods, as
demonstrated in this test:
When an XPath expression is evaluated against a piece of XML, a NodeList is created that contains the
matching Nodes. The methods in the previous test—assertXPathExists(),
assertNotXPathExists(), assertXPathsEqual(), and assertXPathsNotEqual()—use these
NodeLists. However, the contents of a NodeList can be flattened (or String-ified) to a single value,
and XMLUnit also allows assertions to be made about this single value, as in this test:
Xpaths are especially useful where a document is made up largely of known, unchanging content with
only a small amount of changing content created by the system. One of the main areas where constant
boilerplate markup is combined with system-generated markup is of course in Web applications. The
power of XPath expressions can make testing Web page output quite trivial, and XMLUnit supplies a
means of converting even very badly formed HTML into XML to aid this approach to testing.
The HTMLDocumentBuilder class uses the Swing HTML parser to convert marked-up content to Sax
events. The TolerantSaxDocumentBuilder class handles the Sax events to build up a DOM
document in a tolerant fashion, that is, without mandating that opened elements are closed. (In a purely
XML world this class would have no purpose, as there are plenty of Sax event handlers that can build
DOM documents from well-formed content.) The following test illustrates the use of these classes:
One of the key points about using Xpaths with HTML content is that extracting values in tests requires
the values to be identifiable. (This is just another way of saying that testing HTML is easier when it is
written to be testable.) In the previous example id attributes were used to identify the list item values
that needed to be testable; however, class attributes or span and div tags can also be used to identify
specific content for testing.
This test fails as there are five text nodes, and JUnit supplies the following message:
Expected node test to pass, but it failed!
Counted 5 node(s) but expected 4
Note that if your DOM implementation does not support the DocumentTraversal interface, then
XMLUnit will throw an IllegalArgumentException informing you that you cannot use the NodeTest
or NodeTester classes. Unfortunately, even if your DOM implementation does support
DocumentTraversal, attributes are not exposed by iteration; however, they can be examined from
the Element node that contains them.
While the previous test could have been performed easily using XPath, there are times when node
iteration is more powerful. In general, this is true when there are programmatic relationships between
nodes that can be more easily tested iteratively. The following test uses a custom NodeTester class to
illustrate the potential:
public void testText(Text text) throws NodeTestException {
    int val = Integer.parseInt(text.getData());
    if (nextVal != val) {
        throw new NodeTestException("Incorrect sequence value", text);
    }
    nextVal = val + lastVal;
    priorVal = lastVal;
    lastVal = val;
}
As expected, the test fails because the XML contains the wrong value for the last number in the
sequence.
Expected node test to pass, but it failed!
Incorrect sequence value [#text: 9]
This section was contributed by Mike Bowler of Gargoyle Software Inc. [URL 59], the author
and maintainer of the Gargoyle JUnit Extensions [URL 19].
As a consultant, I find myself being asked to write the same code over and over again. Many of my clients
share the same needs and require the same base utility classes. Because each client owns the code that I
wrote when working for them, I am unable to reuse that code for the next client and consequently end up
rewriting the same classes repeatedly.
In 1998, I became sufficiently frustrated with this situation that I started writing these classes on my own
time and releasing them as open source. This way I would only have to write each class once and could
reuse them for each of my clients. This collection of classes has become the GSBase project.
The testing classes within GSBase originally started out as support for the testing of GSBase itself but have
shown themselves to be generally useful for testing other projects.
RecursiveTestSuite
The standard mechanism for running tests in JUnit is to build nested hierarchies of test suites. In addition
to being very monotonous work, this approach is extremely error prone, as it is very easy to accidentally
miss a test in a suite. I wanted JUnit to figure out where all my tests were and just execute them. Because
JUnit itself doesn't do this, I created RecursiveTestSuite, a special TestSuite that will recursively
walk through the directory structure looking for subclasses of TestCase.
If you are using Ant for your builds, then the Ant junit target provides
the same kind of functionality as RecursiveTestSuite. If that target
had been available when I first started using JUnit, then writing this class
would have been unnecessary.
The most common usage of RecursiveTestSuite is to create a subclass like this:
The reason for creating a subclass is that you need a class that can be passed into the test runners that
will initialize everything the way you want. When you have this class, you can invoke the gui test runner
with this command:
The first parameter that is passed into RecursiveTestSuite is the directory to start at. This is the root
of your classpath.
The second parameter is a test filter. The main advantage to recursively walking the directories looking for
tests is that you won't miss any by mistake. The main disadvantage is that you may include tests that you
don't want to run. The test filter is intended to solve that problem. It allows you to exclude certain tests
from the full run. The AcceptAllTestFilter specified in the example will accept all of the tests. If you
want to selectively exclude tests, then you need to implement the TestFilter interface yourself.
For example, if you have your infrastructure code separate from your business logic and you want to be
able to run the tests for only one or the other, then you might write a test filter like this:
In each case, the filter only accepts tests located within certain package structures. It checks the class
name of the test class to see if it starts with a given prefix and accepts or rejects the class based on its
package.
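Such a filter might look like the following sketch. Note that the real GSBase TestFilter interface is not shown in this excerpt, so the accept(String) signature and the package names here are assumptions made for illustration:

```java
// Hypothetical sketch of a package-prefix test filter; the actual GSBase
// TestFilter interface may differ.
interface TestFilter {
    boolean accept(String testClassName);
}

// A filter that accepts only tests from the infrastructure packages.
class InfrastructureTestFilter implements TestFilter {
    public boolean accept(String testClassName) {
        return testClassName.startsWith("com.example.infrastructure.");
    }
}

public class TestFilterDemo {
    public static void main(String[] args) {
        TestFilter filter = new InfrastructureTestFilter();
        // The class names here stand in for real test classes.
        System.out.println(
                filter.accept("com.example.infrastructure.CacheTest") + " "
                + filter.accept("com.example.business.InvoiceTest"));
    }
}
```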
OrderedTestSuite
Assuming that your tests are in a class that subclasses TestCase, there are two ways to identify which
tests should be run. The easiest way is to use the TestSuite constructor that takes a class. This will
automatically create a new test for every method that follows a specific naming convention.
The other way is to use the default constructor for TestSuite and then manually add each test that you
want run.
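The naming-convention scan can be sketched with plain reflection. This is an illustration of what JUnit's TestSuite(Class) constructor does internally, assuming only the JDK; the MovieTest class is invented for the example:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TestDiscoveryDemo {
    // A stand-in test case class; JUnit's TestSuite(Class) performs a
    // scan very much like the one in main().
    static class MovieTest {
        public void testRating() { }
        public void testTitle() { }
        public void setUp() { }   // not a test: wrong prefix
    }

    public static void main(String[] args) {
        List<String> tests = new ArrayList<String>();
        for (Method m : MovieTest.class.getDeclaredMethods()) {
            // The JUnit convention: no arguments, name starts with "test".
            if (m.getName().startsWith("test")
                    && m.getParameterTypes().length == 0) {
                tests.add(m.getName());
            }
        }
        Collections.sort(tests);  // reflection order is unspecified
        System.out.println(tests);
    }
}
```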
EventCatcher
When testing objects that fire events, it becomes necessary to collect events to ensure that the right
events were fired at the right time.
final List collectedEvents = new ArrayList();
final JFrame frame = new JFrame("test");
frame.addWindowListener(new WindowAdapter() {
    public void windowClosing(final WindowEvent event) {
        collectedEvents.add(event);
    }
});
The typical approach requires creating many inner classes for the various types of events that you want to
listen for. This quickly becomes very tedious and introduces the potential for error as very similar code is
written over and over again.
EventCatcher is the solution to this problem. This is a single object that can listen for any kind of event
that might be fired.
In this example, the event catcher will register itself as a listener for every type of event that is thrown
by frame. Note that this will work for any event so long as the object has a method for registering
listeners that follows the form addXXXListener().
An EventCatcherRecord will be created for each event that is caught. This will contain the event itself,
the method that was invoked on the listener, and the name of the thread on which this method was
called. These records can be retrieved with EventCatcher.getEventRecordAt(int index).
The preceding example caught every event fired by the JFrame. Although convenient, sometimes that is
too much information; sometimes you only want to catch one specific kind of event. EventCatcher
provides support for this by generating listener objects which you can then manually add to the object you
wish to listen to.
The getListener() method takes a class object that is a listener interface such as ActionListener
or WindowListener. It returns an object that implements that interface and that will log all events that
are passed into it.
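A universal listener of this sort can be built on java.lang.reflect.Proxy. The following is a minimal sketch of the idea, not the actual EventCatcher source; the TemperatureListener interface is invented for the example:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class EventCatcherSketch {
    // A sample listener interface; any listener interface would work.
    interface TemperatureListener {
        void temperatureChanged(int newValue);
    }

    public static void main(String[] args) {
        final List<String> log = new ArrayList<String>();
        // One InvocationHandler records every call made on the generated
        // listener, which is essentially what a getListener()-style method
        // could return.
        TemperatureListener listener =
            (TemperatureListener) Proxy.newProxyInstance(
                TemperatureListener.class.getClassLoader(),
                new Class<?>[] { TemperatureListener.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method,
                                         Object[] a) {
                        log.add(method.getName() + "(" + a[0] + ")");
                        return null;  // all listener methods return void
                    }
                });
        listener.temperatureChanged(21);
        listener.temperatureChanged(19);
        System.out.println(log);
    }
}
```

The same handler works for any listener interface, which is what lets a single object stand in for the many hand-written inner classes described above.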
This will call every accessor on each object and compare the results. If the values of all the properties are
the same, then we assume that the objects themselves are in fact equal. This isn't perfect (it is possible
to get false positives), but it's accurate enough in most cases.
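A property-by-property comparison of this kind can be sketched with reflection. This is an illustrative reconstruction, not the GSBase implementation; the Movie class below is invented for the example:

```java
import java.lang.reflect.Method;

public class PropertyEqualsDemo {
    static class Movie {
        private final String title;
        private final int rating;
        Movie(String title, int rating) {
            this.title = title;
            this.rating = rating;
        }
        public String getTitle() { return title; }
        public int getRating() { return rating; }
    }

    // Compare two objects by invoking every no-argument getXXX() accessor
    // and checking that the results match. False positives are possible if
    // some state is not exposed through accessors.
    static boolean propertiesEqual(Object a, Object b) throws Exception {
        if (a.getClass() != b.getClass()) return false;
        for (Method m : a.getClass().getMethods()) {
            if (m.getName().startsWith("get")
                    && m.getParameterTypes().length == 0) {
                if (!m.invoke(a).equals(m.invoke(b))) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(
                propertiesEqual(new Movie("Alien", 5), new Movie("Alien", 5))
                + " "
                + propertiesEqual(new Movie("Alien", 5), new Movie("Alien", 4)));
    }
}
```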
EqualsTester
It is not a trivial exercise to fully test the equals contract, so many people don't bother. Instead, they
assume that equals() and hashCode() are simple enough that they couldn't break and therefore don't
need to be tested. Ironically enough, it is very common for this contract to be implemented incorrectly.
Peter Haggar goes into depth on the common pitfalls when implementing the equals contract in his book
Practical Java [23].
In my experience, the most common problems when implementing equals are as follows:
According to the contract, the equals() method must return false when passed null, but many
implementations don't explicitly handle null and consequently throw a NullPointerException.
The problem with this is that the wrong method has been overridden and it will not be called in all
situations.
movie1.equals(movie2) will call the method declared above and will correctly compare the two
movies.
movie1.equals(movieAsObject) will call the superclass's equals(Object) method, which
will not return the expected result.
// Wrong comparison
public boolean equals(final Object object) {
    if (object == null) return false;
    if (!(object instanceof Movie)) return false;
    final Movie otherMovie = (Movie) object;
    return getName().equals(otherMovie.getName());
}
This will return false positives in some situations when comparing against subclasses. If the class in
question is final, then instanceof is safe to use, but if the class can be subclassed then
getClass() should be used instead.
The reason that false positives are possible is that instanceof checks that the object
is an instance of this class or a subclass of this class. In the case of a subclass, this object is
not able to properly perform a comparison, as it only knows about its own fields and is
unable to compare fields that are present in the subclass but not in the
superclass. As a result, it is possible for a.equals(b) to not equal b.equals(a).
public class Movie {
    private final String name;
    public Movie(final String newName) {
        name = newName;
    }
In this case, a.equals(b) will return true because b is an instance of Movie and the names are
the same. The reverse is not true, however: b.equals(a) will return false, as a is not an
instance of RatedMovie. This violates the portion of the equals contract that says
a.equals(b) must be the same as b.equals(a). Instead of using instanceof, the
comparison should have been written as follows, using the getClass() method.
Had both equals() methods been written this way, then both of them would have returned false,
which would have satisfied the contract.
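The asymmetry can be demonstrated with a small, self-contained version of the Movie/RatedMovie pair. The names follow the discussion above, but the code itself is an illustrative reconstruction, not the book's listing:

```java
public class EqualsSymmetryDemo {
    static class Movie {
        final String name;
        Movie(String name) { this.name = name; }
        @Override public boolean equals(Object o) {
            // Wrong: instanceof accepts subclasses, breaking symmetry.
            if (!(o instanceof Movie)) return false;
            return name.equals(((Movie) o).name);
        }
        @Override public int hashCode() { return name.hashCode(); }
    }

    static class RatedMovie extends Movie {
        final int rating;
        RatedMovie(String name, int rating) {
            super(name);
            this.rating = rating;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof RatedMovie)) return false;
            RatedMovie other = (RatedMovie) o;
            return name.equals(other.name) && rating == other.rating;
        }
        @Override public int hashCode() { return name.hashCode() + rating; }
    }

    public static void main(String[] args) {
        Movie a = new Movie("Star Wars");
        Movie b = new RatedMovie("Star Wars", 5);
        // a.equals(b) is true but b.equals(a) is false: symmetry violated.
        System.out.println(a.equals(b) + " " + b.equals(a));
    }
}
```

Replacing the instanceof check in both classes with `if (o == null || getClass() != o.getClass()) return false;` makes both calls return false, satisfying the contract.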
The contract states that if two objects return true for equals(), then they must return the same
hash code. By default, the value returned by hashCode() is fairly random, so if you override
equals(), then to be safe you must also override hashCode().
While it is possible for two objects to return different values from equals() at different times (as
property values on the object change, for example), hashCode values must remain the same for
the entire lifetime of the object. If the value of hashCode were to change after the object had
been placed into a hash table, then unpredictable behavior would result.
Note that it is perfectly acceptable to return a constant value (e.g., return 2;) from the
hashCode() method. Be aware that if you do return a constant and this object is used as a key
in a hash table, then performance will be extremely bad. Many objects are not used as keys, and
therefore performance is not an issue.
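A minimal sketch of why the override matters, assuming nothing beyond the JDK: the Broken class below overrides equals() but not hashCode(), so two equal objects almost certainly report different hash codes, which is exactly the contract violation described above.

```java
public class HashCodeContractDemo {
    static class Broken {
        final String name;
        Broken(String name) { this.name = name; }
        @Override public boolean equals(Object o) {
            return o instanceof Broken && name.equals(((Broken) o).name);
        }
        // hashCode() deliberately NOT overridden: two equal Broken objects
        // fall back to distinct identity hash codes, so they fail as
        // hash-table keys.
    }

    static class Fixed {
        final String name;
        Fixed(String name) { this.name = name; }
        @Override public boolean equals(Object o) {
            return o instanceof Fixed && name.equals(((Fixed) o).name);
        }
        @Override public int hashCode() { return name.hashCode(); }
    }

    public static void main(String[] args) {
        Broken b1 = new Broken("Star Wars");
        Broken b2 = new Broken("Star Wars");
        Fixed f1 = new Fixed("Star Wars");
        Fixed f2 = new Fixed("Star Wars");
        // Equal objects must report equal hash codes; Broken violates this.
        System.out.println("broken: " + b1.equals(b2) + " "
                + (b1.hashCode() == b2.hashCode()));
        System.out.println("fixed:  " + f1.equals(f2) + " "
                + (f1.hashCode() == f2.hashCode()));
    }
}
```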
Many developers will also put in a check for this as a quick return. There is nothing wrong with that, but
it should be noted that this check is done purely for performance reasons; adding it does not
make the method any more "correct."
public void testEquals() {
    final Movie a = new Movie("A");
    final Movie b = new Movie("A");
    final Movie c = new Movie("B");
    final Movie d = new Movie("A") { };
    new EqualsTester(a, b, c, d);
}
The four values that you pass into the EqualsTester are as follows:
1. A control object, against which the other three are compared.
2. Another object of the same class that has the same values as the first. We expect this object to
return true when compared to the first.
3. Another object of the same class that has different values than the first. We expect this object to
return false when compared to the first.
4. An instance of a subclass of the first class that has the same values as the first object. If the first
class is final, then this must be null, as it is not possible to subclass a final class.
If any part of the equals contract is broken, then an AssertionFailedError will be thrown; this is the
same as calling the fail() method in JUnit.
Detailed exceptions
The problem with this code is that while we are checking to ensure that a NullPointerException was
thrown, we aren't checking to ensure that the exception was thrown from the place we expected.
    if (newCurrency == null) {
        throw new NullPointerException("newCurrency");
    }
}
The next best solution would be to put the name of the null parameter in the exception and check that
from the test code.
The problem with this is that expecting to get structured data out of a message field is always dangerous.
Inevitably someone will change the message while they are debugging something else.
    if (newCurrency == null) {
        throw new DetailedNullPointerException("newCurrency");
    }
}
In this case, if getLog() returns null then a regular NullPointerException will be thrown, which
will not be caught by the test and will result in a failing test.
    if (newCurrency == null) {
        throw new DetailedNullPointerException("newCurrency");
    }
    if (newValue < 0) {
        throw new DetailedIllegalArgumentException("newValue",
                                                   newValue,
                                                   "Must be greater than zero");
    }
}
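One way such an exception might look is sketched below. This is a hypothetical reconstruction: GSBase's actual class and accessor names are not shown in this excerpt, so getParameterName() and the message format are assumptions. The point is that the test checks structured data rather than parsing the free-text message:

```java
public class DetailedExceptionDemo {
    // Hypothetical sketch of a detailed exception carrying the parameter
    // name as structured data rather than only as message text.
    static class DetailedNullPointerException extends NullPointerException {
        private final String parameterName;
        DetailedNullPointerException(String parameterName) {
            super(parameterName + " must not be null");
            this.parameterName = parameterName;
        }
        String getParameterName() { return parameterName; }
    }

    static void setCurrency(Object newCurrency) {
        if (newCurrency == null) {
            throw new DetailedNullPointerException("newCurrency");
        }
    }

    public static void main(String[] args) {
        try {
            setCurrency(null);
        } catch (DetailedNullPointerException e) {
            // A test can assert on the parameter name, which survives
            // message rewording during debugging.
            System.out.println("caught: " + e.getParameterName());
        }
    }
}
```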
BaseTestCase
BaseTestCase is the common superclass that all of the GSBase test cases inherit from. It includes
common methods that we would like to see in JUnit, such as assertInstanceOf() and
assertCollectionsEqual(). It also overrides some existing methods in TestCase that don't
normally provide enough information. For example, assertSame() doesn't tell you what the two objects
are in the case that they are different. BaseTestCase overrides assertSame() to print out more
information.
assertInstanceOf() asserts that the specified object is an instance of a given class.
assertCollectionsEquals() compares two collections for equality independent of order. Often, you
don't care what order items have been added to a collection so long as all the items are present in the
correct quantities. Calling equals() on the collection will in some cases compare against a specific
ordering.
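Order-independent comparison can be sketched by counting elements, as below. This is one possible implementation for illustration, not the GSBase source:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectionsEqualDemo {
    // Two lists hold the same elements (in the same quantities) exactly
    // when their element-count maps are equal, regardless of order.
    static boolean sameElements(List<?> a, List<?> b) {
        return counts(a).equals(counts(b));
    }

    static Map<Object, Integer> counts(List<?> items) {
        Map<Object, Integer> m = new HashMap<Object, Integer>();
        for (Object item : items) {
            Integer n = m.get(item);
            m.put(item, n == null ? 1 : n + 1);
        }
        return m;
    }

    public static void main(String[] args) {
        List<String> a = Arrays.asList("kirk", "spock", "kirk");
        List<String> b = Arrays.asList("spock", "kirk", "kirk");
        List<String> c = Arrays.asList("spock", "kirk");
        // Note that List.equals() itself is order-sensitive.
        System.out.println(sameElements(a, b) + " " + sameElements(a, c)
                + " " + a.equals(b));
    }
}
```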
notImplemented() indicates that a given test has not yet been completed. This is a simple way to mark
a class so that you get a simple reminder every time you run the tests. This method will print out a
message indicating that the test is not complete and will provide the name of the test.
In this chapter we'll discuss a few tools that help with practicing TDD and/or using JUnit. The first three
assist in verifying that you can have confidence in your tests: one by giving us feedback on how precise
our tests are, and the remaining two by letting us know how much of our code is being tested. We'll
then look at two IDEs that provide support for TDD: Eclipse and IDEA.
There are a great number of useful tools out there, with more written every day. I've selected a few that
are handy and specifically related to TDD. Furthermore, I've mostly picked open source tools; IDEA and
Clover are the exceptions.
JESTER
Jester is a tool written by Ivan Moore for finding code that is not tested. The way it works is by making
systematic changes to your application's source code (as opposed to your tests' source code), recompiling,
and running the test suite. If all the tests still pass after making the change, then there is potentially a
problem and it is reported.
The Jester Web site [URL 21] has links to download as well as more information regarding installation and
use. There was also a paper at XP2001 about Jester [35].
Jester makes changes textually, not syntactically. The changes it makes are selected to change the logic of
the code, for example, replacing true with false, forcing a conditional to true or false, etc. The
rationale is that if such a major logical change is made it should cause a test to fail. If all the tests still
pass, then more than likely there is a problem.
As packaged, Jester makes a small number of simple changes to your code, shown in Figure 6.1. By
editing the file mutations.cfg, you can add more. Additionally, it changes numeric constants.
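The textual approach can be sketched in a few lines. This is a toy illustration of the idea, not Jester's code; Jester applies such a change to the real source, recompiles, and reruns the tests to see whether anything fails:

```java
public class MutationSketch {
    // Flip the first occurrence of a token and return the changed source.
    static String mutate(String source, String from, String to) {
        int at = source.indexOf(from);
        if (at < 0) return source;  // nothing to mutate
        return source.substring(0, at) + to
                + source.substring(at + from.length());
    }

    public static void main(String[] args) {
        String code = "if (rating == 5) { ... }";
        // Two of the packaged mutations: == becomes !=, and a conditional
        // is forced to false.
        System.out.println(mutate(code, "==", "!="));
        System.out.println(mutate(code, "if (", "if (false && "));
    }
}
```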
Let's see how we can use Jester to improve our tests. Here are a few tests that drove some initial
development of the Movie class:
Figure 6.1. The simple changes Jester makes, as packaged (original -> mutated):

    true   ->  false
    if (   ->  if (true ||
    if (   ->  if (false &&
    ==     ->  !=
    ++     ->  --
public void setUp() {
    starWars = new Movie("Star Wars");
}
/**
 * Copyright (c) 2002 Saorsa Development Inc.
 * @author Saorsa Development Inc.
 */
public class Movie {
    private String title;
    private int rating;

    public int getRating() {
        return rating;
    }

/**
 * This Exception is thrown when a <code>Movie</code>'s rating is
 * set to a value outside of the allowed range
 */
To run Jester, you first have to set your CLASSPATH (not by using the -cp option of the java command)
to include everything that your test needs to run. For this example the directory structure is:
.
TestMovie.class
src
    Movie.java
    RatingOutOfRangeException.java
Be sure to run Jester only on a copy of your code. Jester makes changes
to the source files. Don't let it loose on files you can't replace. But then
again, you have everything under version control anyway, right? Right!
[1] Edited slightly for formatting and clarity. The '#' characters show the bit of code that Jester
changed.
For File ./src/Movie.java:
2 mutations survived out of 4 changes
Score = 50
./src/Movie.java
- changed source on line 2 (char index = 21)
  from "2"
  to "3"
/**
 * Copyright (c) #2#002 Saorsa Development Inc.
 * @author Saors
One shortcoming of the current implementation of Jester is immediately obvious: It makes (and counts)
changes in comments.[2] This leads to false positives in the results since changes to comments will, of
course, not cause tests to fail. Future versions of Jester will no doubt address this and other shortcomings,
doing more syntactic analysis of the code when deciding what changes to make.
[2] As the writing of this book finished, Jester had some experimental code to skip comments.
So, the comment changes account for two out of the three potential problems. What's the third one?
Looking at the output we can see that this change was to the constant that sets the upper limit to valid
ratings. The requirement was that valid ratings were in the range 0–5, inclusive. This code looks OK, so
why did a change to the upper limit (changing it to 6) not break a test? You would think that a change to
a limit should break a test—after all, boundary conditions should be tested! Hmm... let's look at the test
that checks for rating above the acceptable range:
Score = 0
./src/RatingOutOfRangeException.java
- changed source on line 2 (char index = 21)
  from "2"
  to "3"
/**
 * Copyright (c) #2#002 Saorsa Development Inc.
 * @author Saors
That's better. The spurious comment changes are still there, but everything else is as it should be. This
example showed one way in which Jester can be used to improve a test.
Another way in which Jester can help us is if we found that a method contained several changes that did
not cause a test to fail. In this case, it is likely that we need more tests. Let's give that a try. We'll add a
toString() method to Movie:[3]
[3] If we were doing this for real, we'd write a test first. Since we want to have an untested method,
we'll just write the method. Don't try this at home!
What can we learn from this output? Well, Jester made four changes to line 34 of Movie.java (the if
statement in the toString() method) that didn't cause any test to fail. That indicates that the
toString() method isn't being tested well enough. Well, that's no surprise, since we added it explicitly to get this
kind of result. But consider how useful it would be to run Jester occasionally (daily?) to catch this kind of
hole in our tests. This example is somewhat overstated, since if we are practicing TDD, it's not likely that
this much code would not be tested. But we can easily get ourselves into a situation where a branch of a
conditional isn't being tested well enough. Jester can help you find cases like that. This is especially useful
for teams that are in the process of transitioning to TDD and adding tests to their existing code.
In addition to the console output, Jester creates an XML file that contains comparable information. Here's
an example:
Jester also has Python scripts that take that XML file and use it to create a set of HTML pages that detail
each change that it was able to make without breaking tests. Table 6.1 and Figure 6.2 show an example of
this. This is the same information, but presented in a more meaningful way.
NOUNIT
NoUnit allows you to see how good your JUnit tests are. It generates a report from your code to
graphically show you how many of your project's methods are being tested, and how well.
The report (see Table 6.2 for a sample) includes a line for each method in your code indicating:
whether it's called directly, indirectly, or not at all from any test method (indicated by the
color of the third and fourth columns and the icon in the third column)
how many times it's called for all test methods reported on (the second column)
the minimum number of methods between it and a test method (the first column)
Figure 6.2. Sample Jester change report for the Movie example.
NoUnit works by analyzing the class files from a project. It can answer several questions about your
tests:
If you are practicing Test-Driven Development, NoUnit isn't as useful as it would be if you were writing
tests after the code. Doing TDD implies there is no code in the system that wasn't written to get a test
to pass. That means that everything should be covered by some test.
Even so, it is worth running NoUnit on your code occasionally to make sure something bad hasn't
happened. If you are working with legacy code, NoUnit can be very useful as you are adding tests to
existing code. Since NoUnit works with class files, and not the Java source code, you can even use it to
analyze the test coverage of third-party libraries.
To see NoUnit in action, let's run it on the code we used in the final Jester example (i.e., including the
untested toString() method):
$ pwd
/home/dave/projects/books/tdd/samples/nounit
$ ls
Movie.class
Movie.java
output
RatingOutOfRangeException.class
RatingOutOfRangeException.java
TestMovie.class
TestMovie.java
$ $NOUNIT_HOME/batch/nounit .
Generating Report:
XML in: ./output/project.xml
XSLT file: /home/dave/workspace/nounit/batch/../xslt/no-unit.xsl
Output file: ./output/index.html
Transformation Successful
The results were shown in Table 6.2. It is much clearer in that report that the toString() method is
not being tested. Also notice that it has identified that the constructors for the exception class are only
called indirectly.
NoUnit excels at finding gaping holes in your test suite, that is, methods that are not tested, or only
tested indirectly. Jester is better at finding smaller holes—conditional branches that aren't being tested.
The result of running NoUnit is an HTML page containing the report and legend, ready for posting on our
project's intranet site. In the example, the table has been extracted from the entire page. The NoUnit
Web site[URL 20] gives instructions on downloading, installing, and using NoUnit.
There are some problems with NoUnit that I found; it has some bugs and shortcomings. Fortunately,
there is a fairly large online user community that has posted bug reports, solutions, and workarounds at
the NoUnit SourceForge site. Since NoUnit is open source, you can easily fix the problems on your own,
especially in the cases where someone else has figured it out and posted instructions.
In Chapter 20 we will use NoUnit on the project we develop, and in the process bring to light some of its
shortcomings.
CLOVER
Clover is a classic code coverage tool. During a run it measures what code is executed and how many
times, down to the statement level. The results can be reported in a variety of formats, including a nice
HTML format that lets you drill down to the source code level. At the package and class levels it shows
the percentage of the code that was executed. At the source level it shows how often each line was
executed, and visually highlights lines that were not executed.
Clover works by instrumenting the source code. When the code is then compiled and run it writes
execution information to a logfile which is later processed to generate the report. This instrumentation is
transparent and automatic. All that you need to do is add some additional lines to your Ant build script.
Clover is very nicely integrated with Ant, and hence with anything that can use Ant to perform builds.
The instrumentation is done to a copy of your source; your source tree is untouched by the process.
Clover's Ant integration is done very nicely. Clover substitutes its own compiler adapter in place of the
standard javac adapter. The Clover adapter takes care of instrumenting the code in your source tree,
placing the results in a temporary location (that it cleans up afterward), and then using the original
compiler adapter to compile the instrumented code, placing the resulting class files in your project's build
directory. Totally transparent. Nice job.
A benefit of Clover's approach (i.e., instrumenting the source) is that no modification or extensions are
required to the runtime environment.
I won't go into detail here on how you hook Clover into your build.xml; the Clover site [URL 22] has
a nicely done user guide and tutorial walking you through it. What I will show you is some examples of
the HTML report. These were generated from the tutorial example that ships with Clover, the JUnit
Money example. We'll start at the top level with the project report, shown in Figure 6.3. You can see
the coverage results for the overall project, as well as for each top level package, of which there is a
single one in this example.
By selecting the package, we can see the package level report, which shows us results for each class
(and subpackages, if there were any). See Figure 6.4.
Figure 6.4. Package level Clover report.
Selecting a class shows us the code with execution counts. Figure 6.5 shows us a piece of the Money
class that Clover has found never to be executed: one branch in the equals() method that handles
comparing with a zero-value Money. Notice the color highlighting on the execution counts of the
offending lines.[4]
[4] In reality the highlighting is pink, but that didn't have enough contrast when converted to
grayscale... so I've taken artistic license.
We're discussing Clover here because it is especially useful for us if the execution used for gathering
coverage data is the run of the top level test suite. Clover's report will tell us if there is production code
that is not getting tested. If we're using TDD, theoretically there shouldn't be any. However, something
almost always slips through, whether it's a toString() method, or an event handler that never gets
used. Using a coverage analyzer like Clover in conjunction with Jester will provide you with a fairly
complete picture of how comprehensive and effective your tests are.
One thing about Clover that sets it apart from most of the others we've looked at: it's a commercial
product. Even so, the price is very reasonable. Given the data it provides it's worth what it costs.
ECLIPSE
Eclipse[URL 32] is a fairly new development environment that is becoming very popular. It was written
by Object Technology International (OTI), a subsidiary of IBM. OTI is the company behind IBM's
Smalltalk and Java tools. IBM's Websphere Application Development Studio (WSAD) is built on top of
Eclipse. Eclipse is an open source project that is freely available for download.
What makes Eclipse so interesting is how XP- and TDD-friendly it is. It should be no surprise that one of
the people central to the Eclipse effort is Erich Gamma (coauthor of Design Patterns [19] and JUnit).
JUnit integration
Eclipse is architected as a common core platform and a series of plugins. In fact, the Java development
environment is itself a set of Eclipse plugins. The JUnit plugin is of special interest, in particular for the
way it is integrated into the Java toolset. The JUnit plugin's features include:
clickable test lists and stack traces (clicking opens the corresponding file/line in an
editor)
an integration with the Run menus, allowing a class to be run as a JUnit test, including in the
debugger
the ability to keep JUnit running to increase the speed of running tests, and to allow the
rerunning of individual tests.
Having JUnit integrated this tightly with the development environment greatly streamlines the TDD
experience.
Problem indication
Eclipse indicates problems that will prevent code from compiling. Offending code is underlined with a
wavy red line, and the line is flagged in the left margin with a yellow lightbulb. You can have Eclipse
offer corrections by clicking on the lightbulb, or by positioning the caret on the offending code and typing
CTRL-1. Select a correction from the list to have it applied. Figure 6.7 shows an example of this in
action.
Refactoring support
Eclipse supports several automated refactorings that greatly speed development. Refactorings provided
include:
Extracting a method
Renaming a package
Renaming a method
Renaming a field
Self-encapsulating a field
Being able to use automated refactorings increases both your confidence, since the refactorings are
proven not to introduce errors, and your speed, since you don't have to perform each step of the
refactoring manually.
Eclipse includes all of the standard features expected in an IDE, such as code assist, incremental
compilation, documentation lookup, etc., all of which serve to streamline the programming process.
IDEA
with Bryan Dollery
I've asked Bryan to talk about IDEA IDE from IntelliJ. Bryan is a well-known consultant
in New Zealand and an outspoken IDEA user and advocate.
IDEA [URL 33] has many features that will help with practicing TDD; in this section we will look at
several of them.
If I use a class that hasn't yet been imported, IDEA will use a tool-tip to tell me, and offer to import it
for me — very simple, very fast. Once the class is imported its methods and attributes are available to
me for code completion.
If I use a class that doesn't exist, something that we all do at first with TDD, then IDEA puts a lightbulb
in the gutter which, if I click on it, offers me a number of code generation options, including the option
to create the class for me.
If I have an object and attempt to call a method that doesn't exist, IDEA will use the lightbulb again to
tell me that it can help out. Clicking on it gives me the option to create the method. Here is where IDEA
starts to show its real intelligence. When generating a method, IDEA has to make certain assumptions:
the return type, the parameter types, and their names.
It will then put a red box (a live template query) around void, with my cursor in it. It's telling me that
void was an assumption, and that I need to either accept it by pressing enter or tab, or change it. Once
I'm happy with the return type, I can press enter to move to the next assumption, the type for the
parameter. The final assumption here is the name for the parameter, which I don't like, so I can change
it. Of course, if I provide it with more information, say, by assigning the return type to a variable, then
IDEA will make better assumptions.
To run the tests I have a few choices available to me. I can compile, test, run, or debug, at a class
level or a method level. If I right-click on a method then it assumes that I'm interested in that method
and offers choices based on the method, but if I right-click on the class name (or a gap between
methods) then I'll be offered these choices for the whole class (which is what I usually want). I can also
choose to run all the tests within a given package.
To run or debug I don't need a main method, only the test methods. Being able to debug at a test-
method level is very useful. I don't have to play around getting to the method I really want to test; it's
all done for me.
The integration with JUnit is very tight. If the GUI runner shows a stack trace I can double-click on a
line of the trace and be taken straight to that line in the editor. Fix the error, recompile, and alt-tab
back to the GUI runner to rerun the tests. I can also choose to run the text-runner, the output of which
appears in IDEA's messages window.
However, refactoring is the jewel in the crown for IDEA. Look at your current editor right now, and open
the refactoring menu. If it's not at least half the size of your screen then you're really missing out.
There are 22 refactorings listed on its menu, but some of those are multirefactorings. Take, for example,
the rename refactoring. It works on variables of any scope, methods, classes, and packages — that
makes it four similar refactorings in one. When it renames a class, it also renames the file it's in, and
when it renames a package it'll rename the directory and ensure that the change is recorded in CVS —
this is a very bright tool, nothing is left unfinished.
One of my favorites is Change Signature — I can use it to add, remove, reorder, and rename parameters
to a method — all at once. If I change a parameter's name it'll do a rename refactoring automatically
for me, before it does the rest of the changes. If I add a parameter it asks for a default value. If I
reorder the parameters it'll ensure that the method is called correctly throughout my project.
IDEA attempts to transparently automate repetitive and common tasks. It leaves nothing undone, and
asks for clarification when it's guessing. It's highly polished, looks great, and will probably speed up your
coding significantly.
What is the Holy Grail of TDD? What do you strive for when you write tests? Something like:
small, focused tests: each test exercises one specific piece of behavior
independent tests: the fixture is built and cleaned up for each test, allowing tests to run in any
order
fast tests: the suite runs quickly enough that we can run it frequently
There is a potential conflict here. Small, focused tests mean that you will have lots of tests, each very
small and focused. To keep them independent you need to have a clean fixture for each. That means
you need to rebuild (and re-cleanup) the fixture for each test, every time it is run. OK, so far so good.
The last goal is the problem. We want the tests to be fast... as fast as possible... so that we can run them
frequently. For trivial fixtures, that's OK. But what happens if your test fixtures get complex and
time-consuming to build and clean up?
You may simply have a lot of fixture to put in place, or you may be working in the context of a large,
complex system. This could be a database, a workflow system, or some system for which you are
developing an extension (e.g., an IDE). Your fixture may involve getting the system into a specific state
so that it responds the way your tests require. This may be impossible to do quickly.
In light of such problematic test resources, how do we reconcile our three goals of focus, independence,
and speed? Mock objects provide one approach that has proven successful. Mock objects are used when
it is difficult or impossible to create the required state for a problematic resource, or when access to that
resource is limited. There are other interesting uses for mock objects. In this chapter we'll talk about
mock objects, what they are, how to use them, a framework for working with them, and some tools to
make that easier.
MOCK OBJECTS
Mock objects (we will follow convention and call them mocks) take the place of real objects for the
purposes of testing some functionality that interacts with and is dependent on the real objects. An
excellent paper on mocks was presented at XP2000[32], and there is a site dedicated to the idea[URL 23].
The basic idea behind mocks is to create lightweight, controllable replacements for objects you need in
order to write your tests. Mocks also enable you to specify and test your code's interaction with the
mocks themselves.
AN ILLUSTRATIVE EXAMPLE
We're going to consider a simple, somewhat contrived example of a situation where mocks can help.
First we will do it the hard way: creating our mocks entirely by hand. Later we will explore excellent
tools for automating much of the business of mock creation and setup.
Our example involves writing an adventure game in which the player tries to rid the world of foul
creatures such as Orcs. When a player attacks an Orc they roll a 20-sided die to see if they hit or not.
A roll of 13 or higher is a hit, in which case they roll the 20-sided die again to determine the effect of
the hit. If the initial roll was less than 13, they miss.
Our task at the moment is to test the code in Player that governs this process. Here's Player (I
know, we aren't test driving this code... assume we inherited it):
public class Player {
    Die myD20 = null;

    public Player(Die d20) {
        myD20 = d20;
    }

    private boolean miss() {
        return false;
    }
}
Here's the initial take at a test. We'll start simply with the case where the attack misses. We'll assume
that a Die class already exists:
public class Die {
    private int sides = 0;
    private Random generator = null;

    public int roll() {
        return generator.nextInt(sides) + 1;
    }
}
The problem is that there is a random number generator involved. Sometimes the test passes, other
times it fails. We need to be able to control the return value in order to control the preconditions for the
test. This is a case where we cannot (or rather, should not) get the actual test resource into the state
we need for the test. So instead, we use a simple mock object for that. Specifically, we can mock the
Die class to return the value we want. But first we need to extract an interface from Die, and use it in
place of Die:[1]
[1] There are other ways of approaching this, such as creating a subclass that returns the constant,
but I like working with interfaces.
public class Player {
    Rollable myD20 = null;

    public Player(Rollable d20) {
        myD20 = d20;
    }
    //...
}
Now we can create a mock for a 20-sided die that always returns a value that will cause an attack to
miss, say, 10:
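The mock listing itself did not survive in this copy of the text; a minimal sketch of what such a mock looks like (the Rollable interface follows the extraction described above, and the class name MockD20 is an assumption):

```java
// Sketch (names assumed): a hand-rolled mock whose roll() is always 10 --
// below the to-hit threshold of 13, so the attack always misses.
interface Rollable {
    int roll();
}

class MockD20 implements Rollable {
    public int roll() {
        return 10; // a constant, controllable "roll"
    }
}
```

A Player constructed with this mock will take the miss branch every time, making the test deterministic.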
There, the test always passes now. Next, we write a corresponding test with a successful attack:
Now, these two mocks are almost identical, so we can refactor and merge them into a single
parameterized class:
public int roll() {
    return returnValue;
}
}
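Only a fragment of the merged class survives above; a self-contained sketch of the parameterized mock (the field and constructor names are assumptions, and Rollable is repeated here for self-containment):

```java
// Sketch of the two nearly identical mocks merged into one
// parameterized class (names assumed):
interface Rollable {
    int roll();
}

class MockDie implements Rollable {
    private final int returnValue;

    public MockDie(int returnValue) {
        this.returnValue = returnValue;
    }

    public int roll() {
        return returnValue; // whatever value the test configured
    }
}
```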
Next, we want to write tests for the cases where an attack hurts the Orc, and where it kills it. For this
we will need to extend MockDie so that we can specify a sequence of return values (successful attack
followed by the amount of damage). In order to maintain the current behavior, MockDie repeatedly
loops through the return value sequence as roll() is called.
public MockDie() {
}

public int roll() {
    int val = ((Integer) returnValues.get(nextReturnedIndex++)).intValue();
    if (nextReturnedIndex >= returnValues.size()) {
        nextReturnedIndex = 0;
    }
    return val;
}
}
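The fields and the addRoll() method were dropped from the listing above; a complete, self-contained sketch of the looping MockDie (the supporting names are assumptions consistent with the roll() fragment):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: addRoll() queues return values and roll() cycles through
// them repeatedly, so the same mock can drive a whole attack sequence.
interface Rollable {
    int roll();
}

class MockDie implements Rollable {
    private final List<Integer> returnValues = new ArrayList<Integer>();
    private int nextReturnedIndex = 0;

    public void addRoll(int value) {
        returnValues.add(value);
    }

    public int roll() {
        int val = returnValues.get(nextReturnedIndex++);
        if (nextReturnedIndex >= returnValues.size()) {
            nextReturnedIndex = 0; // loop back through the sequence
        }
        return val;
    }
}
```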
Using the Rollable interface allows us to easily and cleanly create mocks without impacting the
Player class at all. We can see this in the class diagram shown in Figure 7.1.
Figure 7.1. Class diagram of the Player and Die aspects of the example.
So far we've written simple mocks, really just stubs that return some predefined values in response to a
method call. Now we'll start writing and using a real mock, one that expects and can verify specific calls.
Again, so far, we're doing it all by hand.
public class Orc {
    private Game game = null;
    private int health = 0;

    public Orc(Game theGame, int hitPoints) {
        game = theGame;
        health = hitPoints;
    }

    public void injure(int damage) {
        health -= damage;
        if (health <= 0) {
            die();
        }
    }

    private void die() {
        game.hasDied(this);
    }
}

MockDie d20 = new MockDie();
d20.addRoll(18);
d20.addRoll(10);

MockDie d20 = new MockDie();
d20.addRoll(18);
d20.addRoll(15);

    if (deadOrc != null) {
        Assert.fail("Only expected one dead orc.");
    }
    deadOrc = orc;
}

public void verify() {
    Assert.assertEquals("Doomed Orc didn't die.", orcExpectedToDie, deadOrc);
}
}
When an Orc dies it reports the fact to the game. In the case of the tests, this is a MockGame that
checks that the dead orc is the one that was expected and that this is the only dead orc so far. If either
of these checks fails, then the mock causes the test to fail immediately by calling Assert.fail(). When
the mock is set up by the test it is told what orc it should expect to die. At the end of the test the
MockGame can be asked to verify that the expected Orc died.
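Pulling the surviving fragments together, a self-contained sketch of the MockGame and its collaborators might look like this (Game, Orc, and the method names are reconstructed from the fragments and prose above; treat the details as approximate, and note that plain AssertionError stands in for JUnit's Assert calls):

```java
// Minimal stand-ins reconstructed from the fragments above.
interface Game {
    void hasDied(Orc orc);
}

class Orc {
    private final Game game;
    private int health;

    public Orc(Game theGame, int hitPoints) {
        game = theGame;
        health = hitPoints;
    }

    public void injure(int damage) {
        health -= damage;
        if (health <= 0) {
            game.hasDied(this); // report the death to the game
        }
    }
}

class MockGame implements Game {
    private Orc orcExpectedToDie = null;
    private Orc deadOrc = null;

    public void setExpectedDeadOrc(Orc orc) {
        orcExpectedToDie = orc;
    }

    public void hasDied(Orc orc) {
        if (deadOrc != null) {
            throw new AssertionError("Only expected one dead orc.");
        }
        deadOrc = orc;
    }

    public void verify() {
        if (orcExpectedToDie != deadOrc) {
            throw new AssertionError("Doomed Orc didn't die.");
        }
    }
}
```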
This is a trivial mock, but it should give you a taste of what is possible. Figure 7.2 shows this part of
the class diagram, while Figure 7.3 shows the sequence diagram for testKill().
Figure 7.2. Class diagram of the Orc and Game aspects of the example.
USES FOR MOCKOBJECTS
Using mocks helps to enforce interface-centric design. Programming against a mock removes the
possibility of depending on the implementation of the objects being used.
By setting expectations in a mock, we can verify that the code we are working on properly uses the
mocked interface.
By setting return values in the mocks, we can provide specific information to the code under
development, then test that the resulting behavior is correct.
By mocking things like communications or database subsystems, we can avoid the overhead involved in
setting up and tearing down connections, etc.
If the code we are writing needs to interact with some tricky resource, we can create a proxy layer to
isolate us from the real resource. Then we can use a mock of that layer. This allows us to develop code
without needing access to the actual device or system.
We can use mocks for classes that we haven't written yet but that the code we are working on needs to
interact with. This lets us defer implementing those classes, letting us focus on the interface between
them and the code we are writing. This allows us to leave implementation decisions until we know more.
If all you need for your test is to simulate the behavior, a mock will suffice.
By mocking the component(s) that the code being written has to interact with, we can focus on it in
isolation. This lets us go faster, because complex interactions with other components are fully under our
control.
To promote interface-based design
Mocks are easiest to use when you adopt an interface-heavy style of programming. [2]
[2] This is simply a better way of programming. By using interfaces, you make it easier to decouple
your classes. In their book on Java design techniques[12], Coad et al. dedicate more pages to
interface-centric design than any of the other techniques they discuss. They boil down the reasons to
use interfaces to three characteristics that an interface-rich system has: flexibility, extensibility, and
pluggability. Using mocks is made easier if you freely use interfaces, because if methods expect an
interface, you can easily create a mock implementation and use that in place of a real
implementation. It's no coincidence that Extract Interface is a very important refactoring.
This is related closely to the previous point. In general, inheritance is overused. As a result, monolithic
functionality builds up in an inheritance hierarchy. A class is and always will be whatever it inherits.
Mocking any aspect of a class in an inheritance hierarchy is difficult because of all of the baggage that
the class inherits. The desire to mock certain aspects (e.g., persistence) tends to yield smaller classes
that get smart by collaborating with other classes. Then different implementations of these classes,
including mocks, can be easily interchanged.
To refine interfaces
Using a mock for a class you will eventually need to implement gives you an early chance to think about
and refine the interface. This is especially true when using test-first design. You have to think about the
interface from the view of a real class that will be using it — because you begin with a real test class
that uses it.
You can easily create a mock that will return values that don't usually occur, or that will throw
exceptions on demand. This allows you to easily test exception handling. For example, you can mock a
FileOutputStream and have it throw an IOException upon writing the 10th byte. In general, you
can write a mock to replicate any situation that might be hard (or even impossible) to arrange to
happen for the purposes of a test.
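A sketch of such an exception-throwing mock, written against plain OutputStream rather than FileOutputStream so it stays self-contained (the class name and failure message are illustrative):

```java
import java.io.IOException;
import java.io.OutputStream;

// Sketch: a stream that "succeeds" for nine writes and throws
// IOException on the tenth, simulating a failure that would be hard
// to arrange with a real file.
class FailingOutputStream extends OutputStream {
    private int bytesWritten = 0;

    @Override
    public void write(int b) throws IOException {
        bytesWritten++;
        if (bytesWritten >= 10) {
            throw new IOException("simulated failure on byte " + bytesWritten);
        }
        // otherwise, silently succeed
    }
}
```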
WOULDN'T IT BE NICE?
Wouldn't it be nice
if the mock could be told what method calls to expect from the object(s) being tested? Then you could
test the object behavior from the other side as well: not just the response to a call (i.e., the return
value and state change), but also what methods in the mock were called from the class being developed.
Wouldn't it be nice
if the mock could be told what value(s) to return from those calls? That would let you verify more
complex interactions.
Wouldn't it be nice
if it could be told to return different values each time a method is called? Again, this would let you test
even more complex interactions.
Wouldn't it be nice
if you could ask it to verify that everything that was expected to happen did, indeed, happen, and to
throw an exception if any expectation was violated? Some conditions could be detected immediately
(such as a method being called too many times, or with the wrong arguments), but some (such as a
method not being called enough times) couldn't until the mock was asked to check what really happened
against what was expected.
Wouldn't it be nice
if the mock threw an Exception as soon as it knew it was being used in a way that wasn't desired? That
would report the failure as close to the point of failure as possible, making it easier and faster to find
and fix the problem.
Wouldn't it be nice
if you could do all that? Well, (as you might have guessed), you can indeed do all of that with the tools
we'll be talking about in the remainder of this chapter.
A COMMON EXAMPLE
In order to compare the different tools that we will explore in the remainder of this chapter we will adopt a
common example that will be used in each section. The example we will use is derived from code in the
MockObjects distribution. Specifically, we'll be using parts of the calculator example in the package
com.mockobjects.examples.calcserver.
We will assume that the calculation engine has been developed and implements the following interface:
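The interface listing did not survive in this copy; the sketch below is inferred from the calls used later in the chapter (calculate(1, 1, "/add") returning 2), with the interface and parameter names being assumptions:

```java
// Assumed shape of the calculation engine's interface, inferred from
// calculate(1, 1, "/add") returning 2 later in the chapter:
interface Calculator {
    int calculate(int lhs, int rhs, String operator);
}
```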
The code we want to write takes an XML representation of a calculation and evaluates it using a concrete
Calculator. The initial test case will pass the following XML string to an instance of XmlCalculator and verify
the result (i.e., 2):
<argument>1</argument>
<argument>1</argument>
</calculate>
There would, of course, need to be various other tests but this will suffice for the purpose of this chapter.
Verifiable provides an interface for classes that can be asked at the end of a test to
confirm that the correct behavior occurred. As shown in the diagram, it is the root of the
core of the framework.
MockObject is extended by mocks created with the framework. Its purpose is primarily to
provide some convenience methods.
ReturnValues can be used to manage a list of return values that are to be returned by
successive calls to a given method in a mock.
ExpectationSegment is used for substring expectations. That is, you expect that the
actual value contains a certain string.
Verification for this expectation checks that the expected string is a substring of the actual
string.
ExpectationCounter lets you set up an expectation that something (e.g., a method call)
should happen a specific number of times. You set it up with the expected count, increment
it as required, and verify that the count was, in fact, as expected. If this expectation is set
to fail immediately upon violation of the expectation, it will fail when incremented past the
expected value.
Methods are defined to add expected and actual values, singly and in multiples as arrays,
enumerations, and iterators.
When you use mocks in your tests you will notice yourself working in a common pattern:
If we look ahead at the example we can see this pattern. First, we create the mock:
Let's return to our example. We need to use the MockObjects framework to write a mock for the calculator
engine. Here it is, as it appears copied exactly from the examples package of the MockObjects distribution:
This mock used simple ExpectationValue instances to manage the pairs of expected and actual values.
As such, it only supports a single call of the calculate() method.[3]
[3] When we look at MockMaker in the next section we'll see a mock that can support multiple calls
using the ExpectationCounter, ReturnValues, and ExpectationList classes.
create an ExpectationValue for each parameter of the calculate() method
have a variable to hold the return value
have an implementation of the interface's method to set actual values and return the preset return
value
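The mock listing itself is missing from this copy. The sketch below follows the three-part recipe above, using a minimal stand-in for the framework's ExpectationValue class (all names here, including the Calculator interface, are assumptions):

```java
// Stand-in for com.mockobjects' ExpectationValue: holds an
// expected/actual pair and can verify that they match.
class ExpectationValue {
    private final String name;
    private Object expected;
    private Object actual;

    ExpectationValue(String name) { this.name = name; }
    void setExpected(Object value) { expected = value; }
    void setActual(Object value) { actual = value; }
    void verify() {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            throw new AssertionError(
                name + " expected " + expected + " but was " + actual);
        }
    }
}

interface Calculator {
    int calculate(int lhs, int rhs, String operator);
}

// The mock shape described above: one ExpectationValue per parameter,
// a preset return value, and an implementation that records actuals.
class MockCalculator implements Calculator {
    private final ExpectationValue lhs = new ExpectationValue("lhs");
    private final ExpectationValue rhs = new ExpectationValue("rhs");
    private final ExpectationValue operator = new ExpectationValue("operator");
    private int result;

    void setExpectedCalculate(int l, int r, String op) {
        lhs.setExpected(l);
        rhs.setExpected(r);
        operator.setExpected(op);
    }

    void setupResult(int value) { result = value; }

    public int calculate(int l, int r, String op) {
        lhs.setActual(l);
        rhs.setActual(r);
        operator.setActual(op);
        return result; // the preset return value
    }

    void verify() {
        lhs.verify();
        rhs.verify();
        operator.verify();
    }
}
```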
Now we can write our test case using this mock to drive the development of XmlCalculator:
MOCKMAKER
MockMaker[URL 24] is a tool for creating mock classes from a given interface or class. MockMaker builds mock
classes using the MockObjects framework.
Building mocks by hand using MockObjects is certainly better and easier than building them from scratch each
time. It's still a fair bit of work. After you've built a few that way, you'll notice that they all look very much
alike. They let a method be called some number of times, with specific parameters each time, returning
specific values each time. The same folks who created the MockObjects framework have written a tool that
automates this drudgery: MockMaker.
MockMaker takes an interface or class and creates a mock for it. We will take the example shown earlier and
use MockMaker to create the mock. To keep things simple, we'll use MockMaker from the command line, as
so:
$ java \
>     -classpath bin:/usr/local/java/lib/MockMaker.jar \
>     mockmaker.MockMaker \
>     com.saorsa.tddbook.samples.mockobjects.IntCalculator \
>     > src/com/saorsa/tddbook/samples/mockobjects/MockIntCalculator.java
[4] After adding a package statement, shortening the expectation names, and reformatting to make it fit on
the page better.
Notice the differences between this and the mock in the previous section. This one supports an arbitrary
number of method calls rather than a single call. This is more complex and uses a different approach to
manage expectations about parameters and return values. Specifically:
As before, we need methods to set expected parameter values and return values. An additional
method is required to set the expected number of calls.
We have a similar implementation of the interface's method to set the actuals and return an
appropriate value.
Since MockMaker generates a fairly general mock we can write more involved tests such as:
One potential drawback of generating mocks by hand or with MockMaker is that you have another class that
has to be managed. If the interface that is being mocked changes, the mock has to be changed or
regenerated. The classes have to be stored, version controlled, packaged, etc. Next, we'll look at a
MockObjects-based framework that gets around all of this in exchange for a few cycles at runtime.
EASYMOCK
EasyMock [URL 25] is a framework for creating mock objects dynamically at runtime. It was written and
is maintained by Tammo Freese, who presented it in a paper at XP2002[18].
Using EasyMock does, indeed, make things easier. We take the following test case from the MockMaker
example in the previous section and rewrite it to use EasyMock. Then we'll look at how EasyMock is used
in the example as well as some of its other capabilities.
protected void setUp() {
    control = EasyMock.controlFor(IntCalculator.class);
    mockCalculator = (IntCalculator) control.getMock();
}
Basic Features
When creating a mock with EasyMock, you start by creating an instance of MockControl for the
interface you want to mock. This is done with a call to a static factory method in the EasyMock class:
Now we can specify our expectations by using the newly created mock and the control. You set method
call expectations by making the expected method call on the mock, with the expected arguments. For
example:
We can specify the result to be returned from this method call by using the control:
That's pretty much it. We continue this until our expectations have been specified. Then we make one
more call to the control to switch the mock from specification mode to usage mode:
We then use the mock in our test as usual. At the end we ask the control to verify:
control.verify();
That's the basic idea. There are some additional capabilities we can take advantage of. If we set an
expectation that a void method is to be called, then we can skip the step of setting the return value
and it will be assumed to be void. If, however, we prefer to make that explicit, we can do so with the
following instead of calling setReturnValue():
Advanced Features
So far, all of the expectations we've looked at simply specify the methods to be called, the expected
arguments, and the value to return (or none if it's void). There are no expectations about how many
times the method should be called. The only thing we set an expectation for is that it will be called. We
can set an expectation about how many times a method is to be called by passing that number to the
return-value-setting method of the control. For example, if we expected calculate(1, 1, "/add") to
be called three times, returning 2 each time, we could use:
setReturnValue(<type> value, int count)
If more than the expected number of calls is made, an exception will be thrown on the first call that
exceeds the expectation. If fewer than the expected number of calls are made, an exception will be
thrown by the control when it is asked to verify the expectations. The following two examples show this.
First, an example of setting an expectation for two calls, and making three.
We get the following exception trace on the third call to calculate() (trimmed for clarity and space):
The second example is of setting the same expectation (two calls) and only making one.
Sometimes we will want a mock method to throw an exception when called rather than return a value.
To do this, use setThrowable() rather than setVoidCallable() or setReturnValue(). Here's an
example:
Note that you can specify the number of expected calls as with setVoidCallable() and
setReturnValue().
One nice feature of EasyMock is that after making the method call to set the expected arguments, you
can make repeated calls to setVoidCallable(), setReturnValue(), and setThrowable() (with
the count parameter) to specify behavior for each subsequent call. I don't have a good example from our
Calculator code, but here's one from the EasyMock documentation:
Throwing an AssertionFailedError is the default behavior for methods in the mocked interface for
which you haven't set an expectation, or methods called with arguments other than those set in an
expectation. You can change the default for methods called with arguments other than those specified by
using one of the following, after the expectation-setting call to the mock:
control.setDefaultReturnValue(<type> toBeReturned)
control.setDefaultThrowable(Throwable throwable)
To change the default behavior for methods for which you set no expectation, you can use a nice control
for which the default behavior is to return the appropriate null value (i.e., 0, null, or false). We get a
nice control by using the following factory method in EasyMock:
EasyMock Summary
One very important advantage of EasyMock is that the mock is specified in the test. This means that the
mock is tightly bound to the test it serves.
1. There's a performance hit that you take at runtime. It takes some time to build the mock on the
fly. As you specify the mock, you not only specify what methods are mocked, but also the
expectations. This rolls two steps into one. This makes the mocks easier to build and more
localized, but means that unique mocks need to be built for each test, since each test will have
different expectations to verify.
2. You are limited to the mocks as EasyMock creates them. When you handcraft mocks or use
MockMaker you can tweak the mock as required.
3. Not being able to tune expectations may lead to over-specification in tests. There may be cases
where you only really want to test one parameter. If you are hand-coding you just write an
expectation-setting method that does just that. If you are using MockMaker you can tweak the
generated methods, or add what you need. With EasyMock you are stuck with expectations fully
specifying exactly the methods in the interface being mocked.
4. EasyMock-generated mocks are passive: you specify expected calls and provide return values.
There is no way to have the mock make calls into your objects.
This generally isn't a problem. When it is, use MockMaker or build it by hand.
5. You can only mock interfaces with EasyMock whereas MockMaker will generate mocks for classes
as well. This can be an issue if you are creating mocks for legacy code that wasn't designed in
an interface-centric way.
6. EasyMock only works with Java 1.3.1 or later due to its reliance on new classes in the
java.lang.reflect package, specifically InvocationHandler and Proxy.
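That reliance on Proxy and InvocationHandler is also a good way to understand how EasyMock can manufacture mocks at runtime. A miniature, self-contained illustration of the mechanism (this is not EasyMock's actual code; EasyMock layers expectation recording and verification on top of this idea):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Proxy manufactures an implementation of any interface at runtime;
// an InvocationHandler decides what each call returns.
interface Calculator {
    int calculate(int lhs, int rhs, String operator);
}

class TinyMock {
    @SuppressWarnings("unchecked")
    static <T> T returning(Class<T> iface, final Object value) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) {
                return value; // every call returns the canned value
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}
```

Because the mock is synthesized per test, no generated class files need to be stored or version controlled, which is exactly the trade-off described above.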
SUMMARY
Mock objects can, indeed, provide a way to achieve the three goals that were defined at the beginning
of this chapter, namely:
focused tests
independent tests
fast tests.
We looked at several reasons for using mocks, and the tradeoffs of four different mocking techniques:
coding them from scratch by hand, coding them by hand using the MockObjects framework, stubbing
them out using MockMaker, and finally automatically creating them (most easily, but with the biggest
performance hit and most constraints) using EasyMock.
Mock objects are a fairly young concept. New tools and techniques are continually being developed.
EasyMock is one of the recent developments in this area. It makes it fast, easy, and cheap to take full
advantage of mocks. There are no extra classes to create and maintain, and the mock-related code is
part of the test which uses it. We will use EasyMock extensively in our approach to test-driving user
interfaces later in this book.
There are still cases where you might want to build a mock class by hand (likely using the MockObjects
framework). Even then you can get a head start by using a tool like MockMaker to generate the skeletal
mock class for you. You can then add custom behavior.
I'll close this chapter with a slight warning: Mocks are great, and can be invaluable at times, but don't
overuse them or become overly reliant on them. They are but one tool in our bag of tricks, and we have
many others.
Stubs A class with methods that do nothing. They simply are there to allow
the system to compile and run.
Fakes A class with methods that return a fixed value or values that can
either be hardcoded or set programmatically.
Mocks A class in which you can set expectations regarding what methods
are called, with what parameters, how often, etc. You can also set return
values for various calling situations. A mock will also provide a way to verify
that the expectations were met.
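One way to see the distinctions is to sketch all three for the same one-method collaborator (the names here are purely illustrative):

```java
interface Mailer {
    boolean send(String message);
}

// Stub: does nothing useful; exists so the system compiles and runs.
class StubMailer implements Mailer {
    public boolean send(String message) { return false; }
}

// Fake: returns a value that can be set programmatically.
class FakeMailer implements Mailer {
    private boolean result;
    void setResult(boolean result) { this.result = result; }
    public boolean send(String message) { return result; }
}

// Mock: records expectations about calls and can verify them.
class MockMailer implements Mailer {
    private String expectedMessage;
    private String actualMessage;
    void setExpectedMessage(String message) { expectedMessage = message; }
    public boolean send(String message) {
        actualMessage = message;
        return true;
    }
    void verify() {
        if (expectedMessage == null ? actualMessage != null
                                    : !expectedMessage.equals(actualMessage)) {
            throw new AssertionError("expected send(\"" + expectedMessage
                    + "\") but got \"" + actualMessage + "\"");
        }
    }
}
```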
There are several approaches that can be taken when developing the GUI test-first. We will illustrate
them using the same approach we used in Chapter 7: define a common example and apply each
approach to it.
THE EXAMPLE
The example we will be working with in this chapter will be the first bit of GUI for the project we'll be
working through later in the book. Specifically, we need a GUI to a movie list with the ability to add
new movies.
First, we'll brainstorm and decide what it should look like. We need the following components:
1. a list of movies
2. a text field in which to enter a movie name
3. a button to indicate that we would like to add the movie we entered in the text field.
We're intentionally limiting ourselves to a very simple GUI example in this chapter—only what suffices to
explore techniques for developing GUIs test-first. In later chapters we will see these techniques applied
to the development of more elaborate interfaces.
For the purposes of the examples we will assume that this is the interface that we use to create a mock
to which the GUI class talks:
public interface MovieListEditor {
    Vector getMovies();
    void add(String string);
    void delete(int i);
}
We'll examine several alternative approaches, starting with the simpler ones and moving to the more
elegant, flexible ones. We'll end the survey by considering the alternative which we'll make use of later
in the book.
The Robot is for programmatically feeding key and mouse events to an application. The problem is that
it deals with pixel locations, not components. This is at the wrong level for automated programmer tests.
Future changes to the production GUI code could too easily break our test code. Robot could be used
for automating customer tests, though. It could be used either programmatically or using a capture and
replay approach.
BRUTE FORCE
The first approach we will explore is to give our tests direct access to the visual components in the GUI.
We can do this in one of several ways:
1. make them public instance variables so that the test can access them directly
2. provide an accessor for each component so that the test can get hold of them
3. make the test class an inner class of the class that creates the GUI
All of these approaches have the drawback that the components cannot be local to a method; they must
be instance variables. Also, all but the last require extra production code that is used only for testing.
Since I learned Object Oriented Programming in a Smalltalk environment, I find the idea of making
instance variables public unthinkable. This rules out option one. Also, I like to keep my test code
separate from production code, so I avoid option three as well. The last option is very overhead
intensive to do manually. We'll explore some frameworks later in this chapter that make use of
reflection, but hide the details.
That leaves the second approach: adding accessors for the components.
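As an illustrative sketch of that second option (the class and accessor names here are mine, not the book's production code), the window keeps its widgets in instance variables and exposes each one through an accessor that exists purely for the tests:

```java
import java.util.Vector;
import javax.swing.JList;
import javax.swing.JTextField;

// Sketch of the brute-force accessor approach: the components must be
// instance variables, and each gets an accessor used only by the tests.
class MovieListWindow {
    private JList movieList = new JList(new Vector());
    private JTextField movieField = new JTextField(16);

    public JList getMovieList() { return movieList; }        // test-only accessor
    public JTextField getMovieField() { return movieField; } // test-only accessor
}
```

A test can then grab the widgets directly, at the price of extra production code that serves no purpose at run time.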
Keep in mind that the code was developed incrementally: enough new test to fail, followed by enough
new code to pass. During development, both the code and the tests were refactored as needed. Notice
how the tests are in two TestCase classes. Notice the different setUp() methods.
Given that preamble, here's the test code. First we have a TestCase that verifies that the required
components are present. Notice how we've used EasyMock. For this test we aren't concerned with the
interaction of the GUI with the underlying object (i.e., the mock) so we've used niceControlFor(),
which provides default implementations for all of the methods in MovieListEditor.
public void testList() {
    JList movieList = window.getMovieList();
    assertNotNull("Movie list should be non null", movieList);
    assertTrue("Movie list should be showing", movieList.isShowing());
}

public void testField() {
    JTextField movieField = window.getMovieField();
    assertNotNull("Movie field should be non null", movieField);
    assertTrue("Movie field should be showing", movieField.isShowing());
}
The other TestCase tests for correct operation of the GUI and uses a more involved mock. Here we
build the mock in each test method, setting different method call expectations and return values in each.
Note, however, the common mock creation code in setUp().
for (int i = 0; i < movieNames.size(); i++) {
    assertEquals("Movie list contains bad name",
                 movieNames.get(i),
                 movieListModel.getElementAt(i));
}
control.verify();
}
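The EasyMock-generated mock itself isn't shown in the fragment above. As a rough, hand-rolled stand-in (not the book's code), a recording fake of the MovieListEditor interface makes the pattern concrete: the fake is created once, and each test checks the calls and canned data it cares about.

```java
import java.util.Vector;

// The interface from the example, restated so this sketch is self-contained.
interface MovieListEditor {
    Vector getMovies();
    void add(String string);
    void delete(int i);
}

// Hand-rolled stand-in for the EasyMock-built mock: it records calls and
// hands back canned data, so a GUI test can verify that the window talked
// to the editor as expected.
class MockMovieListEditor implements MovieListEditor {
    Vector movies = new Vector();
    int addCount = 0;
    int deleteCount = 0;

    public Vector getMovies() { return movies; }
    public void add(String name) { addCount++; movies.add(name); }
    public void delete(int i) { deleteCount++; movies.remove(i); }
}
```

The difference from EasyMock is only mechanical: expectations here are checked by inspecting the counters after the fact rather than declared up front.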
public void init() {
    setLayout();
    initMovieList();
    initMovieField();
    initAddButton();
    initDeleteButton();
    pack();
}

private void setLayout() {
    getContentPane().setLayout(new FlowLayout());
}
JFCUNIT
Now let's try a similar but more elegant solution that doesn't require us to explicitly make the GUI class
internals (components) accessible. JFCUnit also takes care of the threading issues involved with testing
Swing code that is driven by events that are processed by the AWT thread. It automatically blocks the
AWT thread while each test is run and unblocks it afterwards. As well, by calling awtSleep(), JFCUnit
gives you the ability to unblock the AWT thread within a test and allows it to process any events that
the test has placed in the queue. Once all events have been processed, the AWT thread is again blocked
and your test continues. One way or another, control is given back to the test (and the AWT thread is
blocked) after a specified period of time. You can set this period of time using the setSleepTime()
method.
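JFCUnit itself isn't shown here, but the core idea, letting the event thread drain its queue before the test continues, can be approximated with nothing but the JDK. This sketch (my illustration, not JFCUnit code) posts work to the event queue and then blocks until everything queued so far has run:

```java
import javax.swing.SwingUtilities;

class EventQueueFlush {
    static volatile boolean handled = false;

    public static void main(String[] args) throws Exception {
        // Post work to the AWT event queue, as a button click would.
        SwingUtilities.invokeLater(new Runnable() {
            public void run() { handled = true; }
        });
        // Block the calling thread until everything queued so far has been
        // processed, roughly what awtSleep() achieves with its timeout.
        SwingUtilities.invokeAndWait(new Runnable() {
            public void run() { /* queue drained once we get here */ }
        });
        System.out.println(handled); // the earlier event has been processed
    }
}
```

Because the event thread processes runnables in order, the invokeAndWait() call cannot return until the earlier "click" has been handled.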
JFCUnit also provides a helper class named, aptly, JFCTestHelper. This class provides a variety of
methods you can use to find various parts of the GUI being tested, to track what windows are open, and
to clean up any open/showing windows.
Let's have a look at the APIs for these classes before tackling the example. These are the two main
classes that provide you with the functionality required to test Swing GUIs. There is also an assortment
of support classes for encapsulating event data that we will use in conjunction with the methods below.
junit.extensions.jfcunit.JFCTestCase
awtSleep() suspends the test, for no longer than the specified sleep time (the default or
that set by setSleepTime()), allowing the AWT thread to run and process UI events
that are in its queue.
resetSleepTime() resets the awtSleep() maximum sleep time to the default.
setSleepTime(long time) sets the maximum time (in milliseconds) for which
awtSleep() will sleep.
sleep(long delay) suspends the test for at least delay milliseconds, allowing the AWT
thread to run.
junit.extensions.jfcunit.JFCTestHelper
cleanUp() can be used in the tearDown() method to close and dispose of any windows
left open when a test fails partway through.
getWindows() gets all currently visible windows, with various selection criteria.
Our test classes aren't too different with JFCUnit than without it. The main differences are in how we
access the GUI components. Using JFCUnit we don't need direct access to them. Instead, we use the
JFCTestHelper class to find them for us. Since the GUI is simple, we have only a single instance of
some classes of components (i.e., one list and one field) so we can find them by class and not worry
about naming them. To find the buttons we can either request all the buttons and search the resulting
collection for the instance we want, or we can name them. Here, we'll do the latter. Note that the
search-based approach can be a better solution if we are adding tests to an existing GUI for which we
don't have (or can't change) the source code (and hence, don't know or can't set the widget names).
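The helper's name-based lookup isn't magic. A simplified version of what such a finder has to do (an illustrative sketch, not JFCUnit's implementation) is a recursive walk of the container tree:

```java
import java.awt.Component;
import java.awt.Container;
import javax.swing.JButton;
import javax.swing.JPanel;

class ComponentFinder {
    // Depth-first search for a component with the given name, the kind of
    // walk a helper like findNamedComponent() must perform internally.
    public static Component findNamed(Container root, String name) {
        for (Component child : root.getComponents()) {
            if (name.equals(child.getName())) return child;
            if (child instanceof Container) {
                Component found = findNamed((Container) child, name);
                if (found != null) return found;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        JPanel panel = new JPanel();
        JButton add = new JButton("Add");
        add.setName("addButton"); // naming makes the widget findable
        panel.add(add);
        System.out.println(findNamed(panel, "addButton") == add); // prints: true
    }
}
```

This also shows why naming components pays off: the search needs nothing from the production class beyond the name it already carries.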
There are more differences in the TestOperation class. Notice how we are now manipulating the GUI
components at a higher level by providing events to them, rather than directly manipulating their
internal state.
for (int i = 0; i < movieNames.size(); i++) {
    assertEquals("Movie list contains bad name",
                 movieNames.get(i),
                 movieListModel.getElementAt(i));
}

JButton addButton =
    (JButton)helper.findNamedComponent("addButton",
                                       window,
                                       0);
addButton.doClick();
Finally, here's the GUI class. It is almost identical to what we had before, except for the component
access methods that we don't need.
JEMMY
Jemmy is from the NetBeans world where it takes the form of a module that plugs into the NetBeans
platform. The developers of Jemmy made a wise decision and made Jemmy capable of being used in a
stand-alone fashion, independent of NetBeans. In fact, Jemmy can be used directly from a standard
JUnit TestCase. We'll look at how Jemmy is structured and used, then put it into practice in our
example.
How it works
Jemmy works a bit differently than JFCUnit. Using Jemmy, you create operators that wrap a
corresponding Swing object.
Each component in the Swing API has a corresponding Operator class in the Jemmy API.
Before creating operators for individual components, you should create one for the main window of the
GUI you are testing, for example:
Now, to find the Add button from the example, we create a JButtonOperator in the window we
found with a specific label text:
For most operators there are a variety of constructors. Using JButtonOperator as an example, we'll
look at them below. With the exception of the first case, all constructors have an initial parameter that is
the operator for a parent (recursively) container (e.g., the main window JFrame).
One approach that's useful when we are test-driving a GUI is to give every component a unique name
(using setName()). Then we can use a ComponentChooser that searches for a given name. The
chooser will look something like:
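Jemmy's actual ComponentChooser type isn't reproduced in this excerpt. The shape of a name-matching chooser can be sketched with an equivalent home-grown interface (the interface and class here are my stand-ins, not Jemmy's API):

```java
import java.awt.Component;

// Home-grown stand-in for the chooser idea: a predicate over components
// plus a description used in failure messages.
interface Chooser {
    boolean checkComponent(Component comp);
    String getDescription();
}

// Matches any component whose name (as set via setName()) is equal to
// the name this chooser was built with.
class NameChooser implements Chooser {
    private final String name;

    NameChooser(String name) { this.name = name; }

    public boolean checkComponent(Component comp) {
        return name.equals(comp.getName());
    }

    public String getDescription() {
        return "component named " + name;
    }
}
```

The search machinery then just walks the component tree asking the chooser about each candidate.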
Once we have operators for the components we need to use, we can use those operators to access and
manipulate those components. Each operator provides methods that match those of the wrapped
component and forwards calls as appropriate.
Although simple to use, the Jemmy API is large and involved. To make it easier to learn and use, there
are a few simple ideas to keep in mind:
Find components by constructing an appropriate operator with the information required to find
what you need.
Operator constructors throw an exception that will cause the test to fail if a required component
isn't present, so you don't need to check the result.
Most component methods are present in the operators and are forwarded through to the
component.
The Example
Rather than go through the same exercise again with Jemmy, since it is quite similar to the JFCUnit
example, I will simply present the Jemmy-based tests that result in the comparable UI code.
for (int i = 0; i < movieNames.size(); i++) {
    assertEquals("Movie list contains bad name",
                 movieNames.get(i),
                 movieListModel.getElementAt(i));
}
Let's have a look at Figure 8.3: testAdd() in operation. It shows the typical sequence of a
Jemmy-based test:
3. Create operators for the fields involved in the test, manipulating them as required (entering
text, making selections, etc.) to set the required state.
ULTRA-THIN GUI
When you practice TDD, you generally end up with your model well-encapsulated. When you get around
to adding a GUI, you find that all you need are components that make calls into the model. . . exactly
what the rules of good OO design tell you you should have. This makes test-driven development of the
GUI either trivial or beside the point.
This is the traditional approach to dealing with the GUI in a TDD context. It is still a fairly valid option.
If you make the GUI as thin as possible, then user interaction simply delegates to a domain object and
other components simply provide a getter/setter to get/set their value. All the action happens in the
domain objects. There is no logic in the GUI and so nothing that needs testing. The domain class can be
tested (through TDD) as any other. In fact, it should be already.
We can combine this approach with one of the previous ones to put functional tests in place for the GUI
as well. You could mock the domain class that responds to and controls the GUI to behave in a specific
way, although you will need to use more than EasyMock since the user action handlers will almost
always need to query or control the GUI components. Recall that mocks generated using EasyMock are
passive. They cannot do anything other than maintain and verify expectations and provide return values.
To mock the domain classes you will need to have the mock query and/or manipulate the appropriate
components in response to user action (e.g., clicking the Add button calls add() in the model, which
should query the data fields and update the list box).
One way to approach this is to use what Mike Feathers has called making a humble dialog [14].
No special tools are required since all processing and computation is done in a non-GUI class
where it can easily be test-driven using standard tools.
I prefer it.
OK, we start with one simple test: Given a collection of movie names, fill in the listbox in the GUI.
public void testList() {
    Vector movieNames = new Vector() {
        { add("Star Wars"); add("Star Trek"); add("Stargate"); }
    };
    MockControl control = EasyMock.controlFor(MovieEditorView.class);
    MovieEditorView mockView = (MovieEditorView) control.getMock();
This is another good summary view of the TDD process. We write one simple test, with nothing else
written. That test, while simple, drives the creation of an interface and a class. To support the above
test, we need to create a couple of things.
Notice how we do the simplest thing. The setMovieNames() method simply sends its argument on to
the view. It doesn't save it anywhere. That isn't needed to pass the test, so we don't do it.
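To make that concrete, here is a minimal sketch of the two things the test drives out. The names follow the test above; the constructor signature and the method bodies are my guess at "the simplest thing":

```java
import java.util.Vector;

// The view interface that the test forces into existence.
interface MovieEditorView {
    void setMovieNames(Vector movieNames);
}

// Simplest editor that passes: forward the names straight to the view.
// Nothing is saved, because the test doesn't yet require it.
class MovieListEditor {
    public MovieListEditor(Vector movieNames, MovieEditorView view) {
        view.setMovieNames(movieNames);
    }
}
```

The editor knows nothing about Swing; the view could be a mock, a frame, or a web page.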
Now we add another test, this one for the add-a-movie operation. In the process, we noticed the
need for a common fixture, so we refactored that into a setUp() method. Here's the test case now:
public void testList() {
    mockView.setMovieNames(movieNames);
    control.setVoidCallable(1);

public void testAdding() {
    String LOST_IN_SPACE = "Lost In Space";
    Vector movieNamesWithAddition = new Vector(movieNames);

public void add() {
    String newMovieName = myView.getNewName();
    myNames.add(newMovieName);
    myView.setMovieNames(myNames);
}
}
Figure 8.4 shows the sequence diagram for the add() method we just wrote. While this is a very
simple example, we can see the clean separation of the interface and the application. The user clicks on
the add button, which is part of the interface, which simply delegates to the MovieListEditor. The
interaction with the interface is very simple: get the contents of a field and set the contents of a list.
First, look at setMovieNames(). See how we store the argument in myNames, then send the argument
to the view? Recall that in getting testAdding() working we added the assignment to save the
names. Now we clean up. We start by noting that by the time we send the argument to the view, we've
already saved it in myNames. We've just found a smaller bit of duplication. To get rid of it, we can send
the saved value of the argument rather than the argument itself. Compile and run the tests. Green bar!
Now setMovieNames() is:
Now, look at add(). See how it sends myNames to the view as well? Let's extract that line into a
separate method:
public void add() {
    String newMovieName = myView.getNewName();
    myNames.add(newMovieName);
    updateNamesInView();
}
Compile and test. Green bar. The next step is to use the new method in setMovieNames():
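Putting the refactoring together, the editor ends up along these lines. This is a sketch assuming the names used in the text; the view interface slice is my guess at what the editor needs:

```java
import java.util.Vector;

// Guessed-at slice of the view interface used by this sketch.
interface MovieListEditorView {
    String getNewName();
    void setMovieNames(Vector names);
}

// After the refactoring: both paths funnel through updateNamesInView(),
// so the "send the names to the view" knowledge lives in exactly one place.
class MovieListEditor {
    private Vector myNames = new Vector();
    private MovieListEditorView myView;

    public MovieListEditor(MovieListEditorView aView) { myView = aView; }

    public void setMovieNames(Vector names) {
        myNames = names;
        updateNamesInView();
    }

    public void add() {
        myNames.add(myView.getNewName());
        updateNamesInView();
    }

    private void updateNamesInView() {
        myView.setMovieNames(myNames);
    }
}
```

Any later operation that changes myNames (delete, rename, and so on) can reuse the same extracted method.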
First, a test:
control.verify();
}
Notice how we are making the operations of the editor very simple: add() and delete()...no
parameters, returning nothing. These are just actions forwarded directly from the GUI, which is then
queried for its state as necessary. This, coupled with using an interface to define the GUI, gives us very
good separation between the domain code and the GUI. This separation is worth striving for, and this
approach helps a great deal in achieving it.
Time to clean up. OK, nothing to do this time. Since the updateNamesInView() method was already
there, we just used it when writing delete(). We could have explicitly written the call to update the
view, then refactored to replace it with the update method, but why? We've done that duplication
elimination already, and we learned something in the process.
Now we have to create a real GUI to use in place of the mock. Notice that with this degree of
separation it will be very thin, and very simple, since it is almost completely passive.
In implementing the GUI we discover an oversight. It needs to know about the editor so that it can
delegate to it. We haven't done this yet since we were driving the editor from the tests, not the mock
view. But this is simple enough to add. We just need a method to set the editor. For the test with the
mock it doesn't really matter since the calls that the GUI would be making in response to button clicks
are coming from the test itself. Here's the new method in the interface:
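The elided declaration is presumably just one line, something like this sketch of the view interface with the new mutator (the empty editor class is included only so the fragment compiles on its own):

```java
// Sketch: the view gains a way to be told about its editor, so that
// user-action handlers can delegate to it.
interface MovieListEditorView {
    void setEditor(MovieListEditor editor);
}

// Empty placeholder standing in for the real editor class.
class MovieListEditor { }
```
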
Where is the best place to call this? The constructor of MovieListEditor is a good choice:
public MovieListEditor(MovieListEditorView aView) {
    myView = aView;
    myView.setEditor(this);
}
To accommodate this call being made to the mock in the tests, we need to set up the mock to accept it
with any argument value. We can do this using the setDefaultVoidCallable() method on the
mock control. And since this requirement applies to all of the tests, we can put it in setUp() after
creating the mock. Here's the new version of setUp(); notice the last two lines:
protected void setUp() {
    movieNames = new Vector() {
        { add("Star Wars"); add("Star Trek"); add("Stargate"); }
    };
Now, with these tweaks made, we can write the GUI code. It is very much like the previous version,
except simpler. Notice that it is mostly just building the GUI. Other than that there are just accessors
and mutators for component contents, and enough code to delegate user actions to the associated
editor.
public void init() {
    setLayout();
    initMovieList();
    initMovieField();
    initAddButton();
    initDeleteButton();
    pack();
}

private void setLayout() {
    getContentPane().setLayout(new FlowLayout());
}
Note that we could use any of the earlier GUI-oriented techniques to test-drive the creation of the GUI
in this execution as well. For this simple example it would have been overkill. In Part III we'll see how
that approach works.
SUMMARY
In this chapter we've explored approaches to test-driving the development of a graphical user interface.
At one extreme was a brute force approach, which required that the tests have direct access to the
components making up the GUI. This is very coupled, and requires a significant amount of overhead in
the GUI classes. At the other end of the spectrum was an elegant, flexible, two-stage approach. In that
we separated the user interface into two layers: one containing the logic of the user interaction, and one
containing the presentation code.
Using the latter technique, we can test-drive very simple, well decoupled interfaces. Using a mocked
view in place of a real GUI component we can design and develop the logical layer of the user interface,
which can be used as the basis of any sort of interface: Swing, Web, or whatever is the fad of the day.
In fact, it can easily, and cheaply, support multiple presentation layers.
When using JFCUnit or Jemmy we can drive the actual Swing-based GUI component. This approach
(using Jemmy) will be put into practice in Part III of this book. Jemmy has several advantages, including
not needing any extra helper utilities and being useful for adding tests to existing GUI components.
First, we need an overall vision. That is the purpose of Chapter 9. Each of the remaining chapters in this part of the book
will cover the development of a single story. Sections within each chapter will focus on a
specific task within the story.
I've written this part of the book as though I were sitting with you, the reader, and
working side-by-side on the project. The style is intentionally a little more casual and
conversational. I've tried to make you feel as though you are here with me as I write.
You have to provide your own coffee, though.
The code shown here is real. It was copied directly from the IDE. Sometimes we make
mistakes or oversights that we then catch, sometimes they get caught later. It's not
perfect, but it doesn't have to be. It does have to be understandable and malleable,
though. The final code is available at the companion Web site [URL 65].
OVERVIEW
This part of the book is written around the Test-Driven Development of a single project: an application
to help keep track of movies you would like to see and help in choosing a movie to see.
The following chapters are an annotated walk through the development of the application. It's real code;
how it was really written. I wrote this part of the book as the code was being written. Code was copied
and pasted directly from Eclipse into Emacs in real time. So sometimes you'll see things that aren't quite
right. In the last chapter of this part I'll talk a bit about what might have been done differently.
This project was done using eXtreme Programming, so requirements are written as user stories.
However, the stories are ordered with a linear path through the project in mind, rather than by some
measure of business value or importance to a customer. The stories are then decomposed into tasks.
Once you are at the task level, it makes little difference how you got there. I feel very strongly that,
while it is a cornerstone of XP, TDD can be used in many development processes. The caveat is that Big
Design Up Front (BDUF) conflicts with TDD, so you do need to be using a fairly agile process that lets
you evolve the design organically. In particular, you need to be practicing an agile approach to modeling
and design (see Appendix B).
As we work on each task, we'll start by sketching out what tests we'll want to see passing. These tests
are identified at the beginning of the task as we tackle each one. For your convenience, a full list of
these tests is provided in Chapter 20, with references to where each is defined and implemented. At
times we'll discover more tests as we work. These are taken as they come and no extra effort is made
to make them stand out.
Project Vision
Make it easy to keep track of movies you want to see. Provide support for ratings, reviews,
recommendations, etc., to help in deciding what to watch.
Provide a way to keep a list of movies and a way to add movies to it. Ordering of the list isn't a
concern.
Task 1-1. Make a container for movies with a way to add to it. Doesn't have to be
ordered or sorted.
Task 1-2. Make a GUI that shows a list of movies. List order is just what is in the
underlying collection. List should scroll as required.
Task 1-3. Provide a text field and "Add" button for adding movies to the list via the GUI.
Task 2-2. On the GUI, selecting a movie fills in the text field with its name. An "Update"
button will update the movie name with the contents of the field.
A movie appears only once in the list; the names are unique and case-insensitive. Attempts to add a
duplicate should fail and result in an error.
Task 3-1. Enforce uniqueness of movies during add and update operations. Attempts to
add or rename with the same name as an existing movie should raise an exception.
Task 3-2. The GUI provides an error message dialog when a non-unique add or rename
is attempted.
Story 4. Ratings
Task 4-1. Add support for a single rating to Movie. Support the concept of an unrated
movie.
Task 4-2. Show the rating in the GUI, next to the movie name.
Story 5. Categories
Task 5-1. Add support for a movie to have a single category, from a fixed list of
alternatives.
Task 5-2. Show the category in the GUI. There should be a field that gets filled in when
a movie is selected.
Task 5-3. Allow the category of a movie to be changed by selecting it from a fixed list
of possibilities.
Filter the movies in the list by category. There needs to be a way to select the category to show,
including ALL.
Task 6-1. Given a category, ask the movie list for a list of its movies that belong to that
category.
Task 6-2. Get the entire list using an ALL category.
Task 6-3. Add a combo box to the GUI to select a category. When a category is
selected, update the list to include only movies of that category.
Task 6-4. When the list is filtered, changing a movie to a different category should
remove it from the list.
Story 7. Persistence
Task 7-2. Provide, in the GUI, the capability to save the movie collection to a specific
file.
Task 7-3. Provide, in the GUI, the capability to save the movie collection to the same
file it was previously saved to.
Task 7-5. Provide, in the GUI, the capability to load the movie collection from a file.
Story 8. Sorting
In the GUI, allow the movie list to be sorted by selecting from a set of options on a menu. Two
orderings are required: ascending by name, and descending by rating.
Task 8-1. Create the ability to compare two movies based on either name (ascending
order) or rating (descending order). This is needed for sorting.
Task 8-2. Add the capability to ask a MovieList to provide its list sorted on a specific
attribute; for the moment this is one of name or rating.
Task 8-4. Add a View menu to the GUI with an option for each attribute that can be
sorted on. Selecting one of them results in the visible movie list being sorted by that
attribute.
Movies can have more than one rating, each with an identified source. Selecting a movie should show all
of its ratings. There should also be a way to add a rating. A movie's displayed rating should now be the
average of all of its individual ratings.
Task 9-1. Add support for multiple ratings rather than just one. Provide access to the
average of all the ratings.
Task 9-2. Create a class to represent the source for a rating. Keep in mind that a
single source can have many ratings, but not the reverse.
Task 9-3. Revamp the persistence capabilities to handle the changes to the rating
structure.
Task 9-4. In the GUI, show a list of all ratings for the selected movie.
Task 9-5. In the GUI, provide a way to add a rating to the selected movie. Be sure to
handle an empty source and map it to Anonymous.
Task 9-6. The single-rating field no longer serves any purpose. Remove it and update
the system appropriately.
Story 10. Reviews
Movies can have a review attached to each rating. If there is a review there must be a source.
Task 10-5. Add support to the GUI to enter a review as part of adding a rating.
We begin, of course, with a single test. What should that test be? Well, it needs to be something simple
since there is absolutely no code yet. How do we figure out what the first test should be? Look at the
task. We need to keep a list of movies. Should we write a test for a movie? Let's not get ahead of
ourselves. The task doesn't say much about movies. It just says that we have some and we want some
place to keep them. The some place to keep them is the focus of the task, and should be our focus as
well.
OK, so we want a test of something that holds movies. What should we test? Look back at the tips at
the end of Chapter 4. Can we get ideas from that? You bet! How about Test the simple stuff first,
specifically, empty collection or null object behavior. OK, let's start with a test for an empty list of
movies. Now that we've gotten a start we can come up with more tests that we'll need for this task:
Test 2. Adding a movie to an empty list should result in a list with a size of one.
Test 3. Adding two movies to an empty list should result in a list with a size of two.
Test 4. If we add a movie to a list, we should be able to ask if it's there and receive a
positive response. Conversely, we should receive a negative response when we ask about
a movie that we haven't added.
Here it is, with the basic overhead. Notice that we've paid attention to another tip: we've included a
main() method.
Eclipse provides a wizard for creating test cases. One of the options is
to automatically include exactly the main method we would like to
have.
public class TestMovieList extends TestCase {
This is about as simple as it gets. We create a new instance of MovieList, and assert that it contains zero
movies.
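Restated without the JUnit scaffolding (so the fragment stands alone), that first test looks something like the sketch below. Note that the MovieList stub is included here only so the sketch compiles; at this point in the book no such class exists yet, which is exactly why the compile fails:

```java
// Test 1, restated without the JUnit machinery: a freshly created
// MovieList should report a size of zero.
class TestEmptyMovieList {
    static void testEmptyList() {
        MovieList emptyList = new MovieList();
        if (emptyList.size() != 0) {
            throw new AssertionError("Empty list should have 0 elements");
        }
    }

    public static void main(String[] args) {
        testEmptyList();
        System.out.println("ok");
    }
}

// The stub the test drives out: just enough to compile and pass.
class MovieList {
    public int size() { return 0; }
}
```

Returning the constant 0 looks like cheating, but it is the simplest thing that passes; later tests will force a real implementation.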
Try compiling...it won't. We need to add some code stubs to make it compile. Let's look at what we've
written and get a list of assumptions we've made.
Now we'll add a user interface to what we have so far. For this task all we need is to display a list of
movies. We are going to tackle this in two phases: the logic, and the presentation.
Test 5. The logical layer should send the appropriate list of movies to the view for
display.
Test 6. The GUI should have a listbox and should display a list of movies in it as
requested.
Test 5: The logical layer should send the appropriate list of movies to
the view for display
We will use the humble dialog approach described in Chapter 8 to drive the development of the GUI.
This task is pretty simple: we just need to display the list.
Here's the test for the interface model class's ability to display a list:
public void testList() {
    control = EasyMock.controlFor(MovieListEditorView.class);
    mockView = (MovieListEditorView) control.getMock();
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    control.verify();
}
}
What are we testing here? We want a class that will sit between a MovieList and the interface
(whatever that might be) for the purposes of presenting and editing the movie list. We'll call this new
class MovieListEditor. We'll use EasyMock to mock the interface so that we can easily check that
the editor object is using it properly. This test just checks that when a MovieListEditor is asked to
manage the display and editing of a MovieList, it sends the expected collection of movies to the
interface for display.
The setUp() method creates a few movies, and uses them to build a Vector and a MovieList. The
Vector is equal to the one that should be sent to the display by the MovieListEditor being tested.
Once the mock is set up, the test creates a MovieListEditor. We pass the constructor everything it
needs to do its job, specifically, the MovieList to be edited and the interface object (in this case, a
mock).
This interface is fine the way it is, but MovieListEditor needs some work before the test will pass.
Fortunately, it doesn't take much to get the test to pass:
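"Not much" presumably amounts to something like the sketch below. The getMovies() accessor on MovieList and the Vector-based storage are my assumptions, made so the fragment is self-contained:

```java
import java.util.Vector;

// Guessed-at slice of the view interface that the test mocks.
interface MovieListEditorView {
    void setMovies(Vector movies);
}

// Minimal MovieList for this sketch: holds its movies in a Vector.
class MovieList {
    private Vector movies = new Vector();
    public void add(Object movie) { movies.add(movie); }
    public Vector getMovies() { return movies; }
}

// Just enough editor to pass the test: hand the movies to the view.
class MovieListEditor {
    public MovieListEditor(MovieList movieList, MovieListEditorView view) {
        view.setMovies(movieList.getMovies());
    }
}
```

Again, the editor has no Swing dependency at all, which is the point of the layering.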
Now we can build a real interface to use in place of the mock in order to actually run the application.
Test 6: The GUI should have a listbox and should display a list of
movies in it as requested
We will use Jemmy to drive the development of the actual GUI layer. Mostly, we will be concerned with
functional issues, not aesthetic ones. For this task all we need is a window with a list box that shows
the names of the movies in the underlying MovieList.
Here's our Jemmy-based test case (most of it is setup, the actual test just grabs the list elements and
checks them):
public void init() {
    setTitle("Movie List");
    getContentPane().setLayout(new FlowLayout());
    movieList = new JList(new Vector());
    JScrollPane scroller = new JScrollPane(movieList,
        ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS,
        ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER);
    getContentPane().add(scroller);
    pack();
}
Visual Inspection
Now that we have a real GUI we have something to look at. Notice how quickly we've gotten to the
point of having something on the screen. Not much, mind you, but something. Something to look at.
Something to show the customer. Something real.
So let's have a look (see Figure 10.1). Oh dear! There's the window. There's the scrollpane. Instead of
the list of movies we'd expect, there's a list of object identifiers. The test tells us that the movies that
should be in the list box are, indeed, there. So, what's wrong?
public class Movie {
}
Well, that's the problem. There's nothing to be displayed. The items are in the list but there's nothing to
render. We need to give Movie something to identify it that can be placed in the list box. Now it's time
for you to sit back smugly and say that you said it should have had a name from the start. I thought
so, too, but it wasn't needed until now. We both knew that it would be eventually. Now it is, so now
we'll do something about it. But, we add no code without a failing test that requires it. So. . .
That looks good, but it causes our previous tests to fail because they used the default constructor (e.g.,
starWars = new Movie();). We can either add a default constructor, or we can fix the other tests.
My opinion is that we fix the other tests rather than incur code debt. Why is this code debt? Because
we've just made the default constructor obsolete. Now that each Movie has a name that is set in the
constructor, can you imagine a use for a Movie without a name? Me neither. Do we do it now or after
we get the new test running? Now. Those tests will fail with errors if we don't. That will violate our goal
of only having one test failing at a time.
So, we go back and bring the tests up to date. (We won't include the code here, as the change is so
simple: you just need to change the default constructor calls to ones taking a movie name.) All of the
code in the book is available online anyway [URL 65].
Now we're back to testMovieName(). We need a getName() method that returns the name. The
simplest thing that will make this test pass is:

public String getName() {
    return "Star Wars";
}
Green bar. Now we have to clean up by refactoring to remove duplication. The duplication is the string
literal "Star Wars". It's being passed into the constructor and returned by getName(). To remove
the duplication we can store the value that is passed in and return that:
public class Movie {
    private String name;
Green bar, still. We're finished with the tweak to Movie. Let's have another look at the GUI again.
Same thing. We need a way for the list box to get the name from each Movie. We could write a
custom renderer, or we could add a toString() method. Guess what's simpler? You got it, we'll go
with toString(). But first a test:
public void testToString() {
    assertEquals("starWars should have toString of \"Star Wars\".",
                 "Star Wars",
                 starWars.toString());
}
The two tests look a lot alike, but they do test different methods, and toString() may not always be
the same as getName(). YAGNI (You Ain't Gonna Need It)? My choice is to leave it as is.
public String toString() {
    return name;
}
How's the GUI look now? Have a look at Figure 10.2. That's more like it. Now we're done. And that
finishes off the task. High five! Snack time!
Test 7. When the logical layer is asked to add a movie, it should request the required data
from the view and update the movie list to include a new movie based on the data provided.
Test 8. The GUI should have a field for the movie name and an add button. It should answer
the contents of the name field when asked, and request that the logical layer add a movie
when the add button is pushed.
To add a movie to the list, we go back to our humble dialog approach and revisit TestGUI.
Test 7: When the logical layer is asked to add a movie, it should request
the required data from the view and update the movie list to include a new
movie based on the data provided
Before we turn to making the test work, notice the duplication in setting up the control and mock between
this and our previous test. Let's start by refactoring that into setUp(). We won't show the updated code
here just for that change... watch for it later.

Now let's get back to our green bar. The failure we're getting is an EasyMock expectation failure on verify
(i.e., found by the verify() call):

Looking at the testAdding() method we can see that the first expected call to set the initial movie list
was satisfied, but that the expectations to get the name of the movie to add and to set the larger movie list
were not satisfied. Let's take them one at a time.

We'll start with the expected call to getNewName. The task says to add a text field in which the user can
enter the name of the movie to be added. So in the add() method of MovieListEditor we need to get
the contents of that field using getNewName. To do this we need access to the MovieListEditorView
that was passed into the constructor. This means we need to store it in an instance variable. So
MovieListEditor is now:
That satisfies the expected call to getNewName(). Now to set the new list value. Notice that
getNewName() is set up to return the name of the movie to be added. If we use that returned value to
create a new Movie, add it to the list, and then pass the list to the view with setMovies(), we just
might get to the green bar again! Let's try it:
public void add() {
    Movie newMovie = new Movie(view.getNewName());
}
OK, we need the MovieList that was passed to the constructor. We'll have to save it somewhere:
public void add() {
    Movie newMovie = new Movie(view.getNewName());
    movies.add(newMovie);
    view.setMovies(new Vector(movies.getMovies()));
}
That should do it. Let's see. Red bar!?! OK, what's the failure?
Hmm. . . wasn't it just failing because that call wasn't being made? Now we're making it and it's complaining
that the call wasn't expected. Consider what's happening under the covers. If we dig through the EasyMock
and MockObjects code, we find that the expected and actual arguments to the expected method calls are
compared using equals(). Our argument is an instance of Vector, and Vector.equals() compares
elements pairwise using equals(). What does equals() mean for Movie? We haven't defined
Movie.equals(), so it's using what it inherits from Object, which considers two objects equal only if they are
the same physical object.
Looking at our code we see that we created a Movie in the test and another in MovieListEditor.add(),
and these are what get compared. So, there's the problem. We just need to add an equals() method to Movie.
As always, we start with a test, this one in TestMovie:
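The book's listing for that test and the resulting equals() is not reproduced in this excerpt, but a minimal sketch of the shape it takes, assuming the name-based Movie class built so far (the names and assertions here are illustrative, not the book's verbatim code):

```java
// Sketch: a name-based equals() for Movie, as the surrounding text
// describes. The matching hashCode() is shown in the text just below.
public class Movie {
    private String name;

    public Movie(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Two movies are considered equal when their names are equal.
    public boolean equals(Object other) {
        if (!(other instanceof Movie)) {
            return false;
        }
        return name.equals(((Movie) other).getName());
    }

    public int hashCode() {
        return name.hashCode();
    }
}
```

With this in place, the mock's pairwise Vector comparison sees two independently constructed movies with the same name as equal.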
public int hashCode() {
    return name.hashCode();
}
OK. That finishes off the logic part of the interface change. Now we'll turn our attention to the Swing-based
view and TestSwingMovieListEditorView. Before we start we need to add a stub for the method we
added to the interface:
Test 8: The GUI should have a field for the movie name and an add button.
It should answer the contents of the name field when asked, and request
that the logical layer add a movie when the add button is pushed
When we run this we get a red bar. There's no JTextField component in the interface yet, so new
JTextFieldOperator(mainWindow) is failing. We need to add the text field in the init() method:
Now our test is failing because the button can't be found. We need to add that to the interface (don't forget,
it's found by name):
Structurally, the components we need are there. Now we just need to hook the add button to the add()
method of MovieListEditor. Adding an ActionListener to the button is easy enough. The problem is
what should it do? We don't have an instance of MovieListEditor. The way the code is now, we create a
view and hand that in to the MovieListEditor constructor. We can add a call there to the supplied view in
order to tell it what editor instance it's hooked up to. To support that we'll need several things:
Now we can add the call to setEditor() to the MovieListEditor constructor:
Now we can add the ActionListener to our add button:
One more thing remains before we get to the green bar. Recall that we had to stub getNewName() because we
had added it to the interface to support adding in MovieListEditor. Now we need to make it real. In the
process we need to store the field component in an instance variable so that it is accessible:
Green bar! Time to clean up. The first thing that I see is that the SwingMovieListEditorView.init()
method has gotten rather large. What do you think? It's doing several things:
public void init() {
    setTitle();
    setLayout();
    initList();
    initField();
    initAddButton();
    pack();
}
Now that we have the Swing GUI working and refactored, we need to run all of our tests. Uh oh! The tests in
TestGUI are failing! A quick look at the trace tells us that the mock view is getting an unexpected call to
setEditor(). We need to add an expectation for this call that doesn't care what the argument is. We can
do this by adding the following to the end of TestGUI.setUp():
RETROSPECTIVE
In this chapter we've implemented the first story—taking the system from nothing to a list of named
movies with a way to add new movies, as well as a GUI. Some things we designed and built from the
start, while other things we didn't do until we had a need. There was no requirement to have an
equals() method in Movie, but we needed it to support an argument expectation. No doubt we'll need
it at some point, but we didn't bother to consider it until we had an immediate, concrete need.
In addition to test-driving the core logic of the growing application, we implemented a simple GUI to
expose the functionality we built. This is an important point. It's imperative to develop the system in as
many wide-ranging slices as possible. Even after the first story you should have something to show.
So far we have the class Movie that has a name. When we construct an instance, we supply a name,
and we can retrieve the name with the getName() method. For this task we need a way to set the
name of a Movie after it has been constructed.
Now, a question comes to mind: "Can we have a movie without a name?" Ask the customer... "No,
every movie has a name." We need to capture that in a test!
Test 9. Changing the name of a movie results in it using the new name hereafter.
Before starting, run the tests to give us confidence. Green bar. Then let's go.
Test 9: Changing the name of a movie results in it using the new name
hereafter
Everything compiles. Run the tests (we can just run TestMovie at the moment). Red bar. Our new test
failed. Good. Now we add enough code to rename() to pass the test. Easy enough:
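The listing itself is omitted from this excerpt, but the obvious minimal implementation, assuming the name field Movie already has, is simply to overwrite the stored name (a sketch, not the book's verbatim code):

```java
// Sketch: just enough code in Movie to make the renaming test pass.
public class Movie {
    private String name;

    public Movie(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Replace the stored name with the new one.
    public void rename(String newName) {
        name = newName;
    }
}
```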
Green bar.
First, we'll add tests for the constructor to verify that it won't accept empty or null names. One test at a
time; first let's test with null. Constructing a Movie with a null name should throw an
IllegalArgumentException. If nothing is thrown, the test should fail. If an unexpected exception is
thrown, JUnit will notice and the test will fail. Here's the test:
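The JUnit listing isn't reproduced in this excerpt. The pattern it uses is try/fail/catch: call the constructor, fail() if no exception arrives, and let the catch clause swallow the expected IllegalArgumentException. Here is a dependency-free sketch of the same idea (in the book it is a JUnit TestCase method):

```java
// Sketch: the null-name guard in the constructor. The test below
// mirrors the try/fail/catch idiom without depending on JUnit.
public class Movie {
    private String name;

    public Movie(String name) {
        if (name == null) {
            throw new IllegalArgumentException("Movie name must not be null.");
        }
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
```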
Now for our newly added renaming capability. Again, one test at a time:
Compile, test, green. Excellent! Next, the "renaming with an empty name" test:
Again, we have duplication in the empty name check between the constructor and rename(). As we did
before, we will extract the empty name check into a separate method:
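That extraction might look something like this sketch (the guard method's name is an assumption, not the book's exact code):

```java
// Sketch: the null/empty-name guard extracted so that the constructor
// and rename() share a single check.
public class Movie {
    private String name;

    public Movie(String name) {
        checkName(name);
        this.name = name;
    }

    public void rename(String newName) {
        checkName(newName);
        name = newName;
    }

    public String getName() {
        return name;
    }

    // One home for the duplicated validation logic.
    private void checkName(String candidate) {
        if (candidate == null || candidate.length() == 0) {
            throw new IllegalArgumentException("Movie name must not be null or empty.");
        }
    }
}
```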
Now we must make the newly added rename capability available on the GUI. The task calls for two
things:

1. selecting a movie in the list, which causes its name to appear in the text field, and

2. clicking an Update button that will cause the selected movie to be renamed with what is in the
text field.
Test 14. Indicating, to the logical layer, that a selection is made from the list causes the
view to be given a value for the name field, that is, the selected movie's name.
Test 15. Selecting from the list causes the name field to be filled in with the selected
movie's name.
Test 16. When an update is requested, the selected movie is renamed to whatever is
answered by the view as the new name.
Test 17. When the update button is pushed, the selected movie is renamed to whatever
is in the name field.
Let's tackle them one at a time, in the order above. First, we need a test for filling in the text field with
the name of the selected movie. As before, we will work at the logic level first, then the GUI.
Test 14: Indicating, to the logical layer, that a selection is made from the
list causes the view to be given a value for the name field, that is, the
selected movie's name
        control.verify();
    }
}
We need to stub two methods to get this to compile. First, we need to add setNewName() to
MovieListEditorView:
Now it compiles and fails: the expected call to setNewName() didn't happen. That needs to happen in
response to calling select() on the MovieListEditor. So we add enough to select() to get the
test to pass:
Green bar! But look at that string literal. Yuck! That duplicates the name of the second movie in the list
we set up. Let's get it from there:
Oops! That breaks our test. But it's the null that getMovie() is returning that is causing the problem.
We'll have to expand on that:
OK, our test is working again. Just to be sure, let's add to our test. We'll make it select another movie
in the list and make sure that that name is sent to the view:
    control.verify();
}
Compile, run, green bar. That verifies that the work we just did really does work for the general case.
Now we're done with the logical layer; time to turn our attention to the GUI.
Test 15: Selecting from the list causes the name field to be filled in with
the selected movie's name
This will be fairly simple since it doesn't involve adding additional components. Here's the test (in
TestGUI):
Now it compiles, and fails. We need to add some code to hook up list selection to the underlying logic
(while we're there and dealing with selection, we'll set the selection mode):
Compile, run, red bar. One more thing to do. We need to fill in the setNewName() method so that it puts
its argument into the text field:
Compile, run, green bar! Now we need to add the Update button.
    control.verify();
}
The test fails due to the expected calls to getNewName() and setMovies() not happening. This is
hardly surprising since update() does nothing. Let's add some code to it:
public void update() {
    if (selectedMovie != null) {
        selectedMovie.rename(view.getNewName());
        view.setMovies(new Vector(movies.getMovies()));
    }
}
We haven't defined selectedMovie yet. We can just grab the Movie whose name we fetch in select()
(keeping in mind that we will get calls of select(-1) due to the way JList works):
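The mechanics might look like this sketch, with the view and MovieList details stripped away so it stands alone (the class and method names here are assumptions; the select(-1) handling is the part JList forces on us):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: tracking the selected movie, including the -1 index that
// JList reports when the selection is cleared.
public class SelectionTracker {
    private List movies = new ArrayList();
    private Object selectedMovie = null;

    public void add(Object movie) {
        movies.add(movie);
    }

    public void select(int index) {
        if (index == -1) {
            selectedMovie = null;   // JList cleared the selection
        } else {
            selectedMovie = movies.get(index);
        }
    }

    public Object getSelectedMovie() {
        return selectedMovie;
    }
}
```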
You will find that there is a significant amount of code that you will
write in GUI classes that isn't strictly test-driven. This is, in my
opinion, an unavoidable aspect of developing a GUI. The code I'm
referring to is the framework-related details. An example here is
handling list selections with index -1 denoting the clearing of all
selection, dealing with layout managers, and other visual aspects of
the interface. These things are the subject of other types of testing
that don't relate strictly to the functional aspects of the application. At
the functional level, all we really care about is that the required
components are present and that they operate as required. Once the
tests are in place, you can tweak the visual aspects, confident that
you are not breaking the functionality. It's a lot like optimization: get
it right, then make it look good.
Test 17: When the update button is pushed, the selected movie is
renamed to whatever is in the name field
Now we can move on to the Swing layer. This will involve adding a new button for updating. Here's the
test (notice the click on the 0th list item to reset the text field):
This compiles, but fails, because there isn't an update button in the GUI yet. We add it:
public void init() {
    setTitle();
    setLayout();
    initList();
    initField();
    initAddButton();
    initUpdateButton();
    pack();
}
Now the test fails due to an assertion failure: the name isn't being changed. We next need to hook up
the update button to the underlying editor:
Green bar! Now have a look. Is there anything that needs to be cleaned up?
In MovieListEditor there is some duplication between the methods that update the movie list on the
interface, specifically, the line:
This appears in three different methods; time to refactor, to be sure. We'll use Extract Method to put it
in its own method:
In each method where that line occurred, we'll replace it with a call to updateMovieList(); for
example:
One last thing we should do is add a main() method to our Swing GUI class so that we can run our
application stand-alone:

Now there's some duplication between this method and start() in the same class:
public static void start() {
    SwingMovieListEditorView window = new SwingMovieListEditorView();
    window.init();
    window.show();
}
Since, in this case, all of the code in start() is included in main(), we can replace those lines in
main() with a call to start() if we have start() return the window it creates:
RETROSPECTIVE
In this chapter we've added a small bit of functionality to the system: the ability to rename a movie.
This extends to the GUI where we can now select a movie in the list, edit its name, and update it.
Now that we're getting the hang of this, we'll pick up the pace a bit.
This task adds a bit of error detection/prevention functionality to what we did in the last chapter: it
disallows movies with duplicate names.
Test 18. Attempting to add a duplicate movie throws an exception and leaves the list
unchanged.
Test 19. Asking the movie list to rename a movie results in its name being changed.
Test 20. Asking the logical layer to add a duplicate movie causes it to inform the
presentation layer that the operation would result in a duplicate.
Test 21. Asking the logical layer to update a movie that would result in a duplicate
causes it to inform the presentation layer that the operation would result in a duplicate.
Test 22. Trying to add a movie that is the same as one in the list results in the display
of a "Duplicate Movie" error dialog.
Test 23. Trying to rename a movie to the name of one in the list results in the display
of a "Duplicate Movie" error dialog.
First, we'll tackle adding a new Movie that is the same as one in the list.
The test we're about to write needs to start with a populated list. OK, that should go into the fixture. All
the TestCases we have so far have a specific fixture that serves their tests. The biggest fixture has
two movies in it. Is this enough? I'd like something a bit bigger... say, three movies. Should we try to
do this with TestMovieListWithTwoMovies, or should we create a new TestCase (aka fixture)?
When in doubt, create a new fixture.
First, we verify that the test fails. It does—good. Here's the extended MovieList.add() method that
lets the test pass:
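That listing is elided in this excerpt; a self-contained sketch of the shape it takes, relying on the name-based Movie.equals() established earlier (treating DuplicateMovieException as unchecked is an assumption of this sketch):

```java
import java.util.Vector;

// Sketch stand-in for the book's exception class.
class DuplicateMovieException extends RuntimeException {
    DuplicateMovieException(String name) {
        super(name);
    }
}

// Minimal Movie with name-based equality, as built earlier.
class Movie {
    private String name;

    Movie(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public boolean equals(Object other) {
        return (other instanceof Movie)
            && name.equals(((Movie) other).getName());
    }

    public int hashCode() {
        return name.hashCode();
    }
}

public class MovieList {
    private Vector movies = new Vector();

    // Vector.contains() uses equals(), so the name-based
    // Movie.equals() does the duplicate detection for us.
    public void add(Movie movie) {
        if (movies.contains(movie)) {
            throw new DuplicateMovieException(movie.getName());
        }
        movies.add(movie);
    }

    public int size() {
        return movies.size();
    }
}
```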
Test 19: Asking the movie list to rename a movie results in its name
being changed
Next, we'll move to renaming. It's a bit different because renaming was written with the focus on a
specific Movie in isolation. Now we are considering renaming in the context of a MovieList. So, we
need to work at the MovieList level. That will require some sort of rename method in that class. But
we can't do that without a test requiring it. We now need to digress for a short while to take care of
that:
OK, now we're ready to add the test for renaming a duplicate:
Now everything compiles and we have a red bar! The failure is because the rename is succeeding when
it shouldn't. We need to add code to MovieList.rename() to throw a DuplicateMovieException
when a rename would result in a duplicate movie:
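The listing is elided in this excerpt, but the paragraph that follows describes the approach: copy the movie, rename the copy, and see whether the copy now collides with anything in the list. A sketch of that logic (simplified Movie, unchecked exception; names are assumptions):

```java
import java.util.Vector;

class DuplicateMovieException extends RuntimeException {
    DuplicateMovieException(String name) {
        super(name);
    }
}

class Movie {
    private String name;

    Movie(String name) {
        this.name = name;
    }

    // Copy constructor: the copy starts with the same name.
    Movie(Movie other) {
        this.name = other.name;
    }

    public void rename(String newName) {
        name = newName;
    }

    public String getName() {
        return name;
    }

    public boolean equals(Object o) {
        return (o instanceof Movie) && name.equals(((Movie) o).getName());
    }

    public int hashCode() {
        return name.hashCode();
    }
}

public class MovieList {
    private Vector movies = new Vector();

    public void add(Movie movie) {
        movies.add(movie);
    }

    // Rename a copy first; if the renamed copy matches an existing
    // entry, the real rename would create a duplicate, so refuse.
    public void rename(Movie movie, String newName) {
        Movie trial = new Movie(movie);
        trial.rename(newName);
        if (movies.contains(trial)) {
            throw new DuplicateMovieException(newName);
        }
        movie.rename(newName);
    }
}
```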
The approach we've used is to create a copy of the movie to be renamed, rename it, and check for the
result in the MovieList. To do that, we need to add a copy constructor to Movie. But first we need a
test (in TestMovie):
OK, everything compiles and testCopyConstructor() passes. We have a copy constructor in place
for Movie now, so we can go back to our renaming test. Green bar there as well. Run
AllTests... green bar.
We should revisit the GUI rename code we wrote earlier and bring it up to spec with regard to routing the
renaming through MovieList rather than calling Movie.rename() directly. All that needs to be
changed is MovieListEditor.update(). Because we have tests in place, we can make changes and
immediately see if they work. Here's the updated method:

This won't compile until we deal with the DuplicateMovieException that can be thrown by
MovieList.rename(). Because there is no test to require anything more advanced, we'll just ignore
it:
Now we need to percolate the "rename would cause a duplicate" error up through the GUI. As usual, we will
start at the logic layer.
Test 20: Asking the logical layer to add a duplicate movie causes it to
inform the presentation layer that the operation would result in a duplicate
We need to add a duplicateException method to the MovieListEditorView interface that will take
a String indicating the potential duplicate name:
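A sketch of the interface after this addition, together with a trivial recording implementation showing how the logic layer can report the problem (the signatures are assumptions based on the text; the book's tests use an EasyMock mock rather than a hand-rolled stub):

```java
import java.util.Vector;

// Sketch: the view interface grows a duplicateException() callback.
interface MovieListEditorView {
    void setMovies(Vector movies);
    String getNewName();
    void duplicateException(String duplicateName);
}

// A hand-rolled stand-in for the mock used in the book's test.
public class RecordingView implements MovieListEditorView {
    private String newName;
    private String reportedDuplicate = null;

    public RecordingView(String newName) {
        this.newName = newName;
    }

    public void setMovies(Vector movies) {
        // not needed for this sketch
    }

    public String getNewName() {
        return newName;
    }

    public void duplicateException(String duplicateName) {
        reportedDuplicate = duplicateName;
    }

    public String getReportedDuplicate() {
        return reportedDuplicate;
    }
}
```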
The compiler tells us that MovieListEditor.add() can throw DuplicateMovieException, which
needs handling. Well, we don't want to handle it in the test case; it's what should be causing the call to
duplicateException. Let's revisit MovieListEditor.add():
public void add() {
    Movie newMovie = new Movie(view.getNewName());
    try {
        movies.add(newMovie);
        updateMovieList();
    } catch (DuplicateMovieException e) {
        view.duplicateException(newMovie.getName());
    }
}
Green bar. Can we clean anything up? Yes, we get the name from the view twice in add(). Let's pull that
out into a local variable:
public void add() {
    String newName = view.getNewName();
    Movie newMovie = new Movie(newName);
    try {
        movies.add(newMovie);
        updateMovieList();
    } catch (DuplicateMovieException e) {
        view.duplicateException(newName);
    }
}
Test 21: Asking the logical layer to update a movie that would result in a
duplicate causes it to inform the presentation layer that the operation
would result in a duplicate
public void update() {
    if (selectedMovie != null) {
        try {
            movies.rename(selectedMovie, view.getNewName());
            updateMovieList();
        } catch (DuplicateMovieException e) {
        }
    }
}
It's just ignoring the exception. That's no longer the simplest thing that could possibly work. We need to do
something. We need to pass the information on to the view that there was a problem so that it can inform
the user:
public void update() {
    if (selectedMovie != null) {
        try {
            movies.rename(selectedMovie, view.getNewName());
            updateMovieList();
        } catch (DuplicateMovieException e) {
            view.duplicateException(view.getNewName());
        }
    }
}
But that doesn't work, since it's making two calls to view.getNewName() when the mock only expected one.

This is a problem that can occur when you are using mocks:
overspecification. Do we really care that we call getNewName() an extra
time? Not really. It shouldn't cause the test to fail. We could have
specified the mock differently, setting up a default expectation for the call
to getNewName(). See page 167 for more information.
Anyway, this is the same situation that we encountered earlier, so we can extract the getNewName() call
into a local variable, which is clearer. The result is:
public void update() {
    if (selectedMovie != null) {
        String newName = view.getNewName();
        try {
            movies.rename(selectedMovie, newName);
            updateMovieList();
        } catch (DuplicateMovieException e) {
            view.duplicateException(newName);
        }
    }
}
Green bar. On to the Swing GUI. The first thing we notice is that SwingMovieListEditorView doesn't
compile, since MovieListEditor.add() no longer throws DuplicateMovieException. We can
remove the try-catch wrapper around the call to add():
Test 22: Trying to add a movie that is the same as one in the list results in
the display of a "Duplicate Movie" error dialog
Notice that since the expected error dialog is modal, we need to use the
pushNoBlock() method in JButtonOperator rather than doClick()
as we have up until now. If we don't, doClick() wouldn't return until
the user manually dismissed the dialog. By then, the dialog would be
gone and the next step in the test (finding the dialog) would fail.
Test 23: Trying to rename a movie to the name of one in the list results in
the display of a "Duplicate Movie" error dialog
Take a minute to look at the tests we just added. They are very similar. We can refactor some of that
commonality out by extracting a couple of methods for the message dialog check and the check that the list
is unchanged. The result is:
movies = new Vector();
movies.add(starWars);
movies.add(starTrek);
movies.add(stargate);
Now those two lines can be removed from each of the tests. This cleans things up nicely.
RETROSPECTIVE
In this chapter we added some error checking. Specifically, we checked whether an add or rename
operation would result in a duplicate movie in the list. If not, the action is performed; otherwise, a
DuplicateMovieException is thrown. This percolates to the GUI where an error dialog is presented
to the user. Since this is a modal dialog, we had to use pushNoBlock() rather than doClick() in
Jemmy to click the Add and Update buttons.
This is a straightforward task; we're simply adding a piece of information to a class. Note that the task
does not require changing the rating.
Test 25. A rated movie answers positive when asked if it is rated, and it can answer its
rating when asked.
Test 26. Asking an unrated movie for its rating throws an exception.
Test 25: A rated movie answers positive when asked if it is rated, and it
can answer its rating when asked
Now we write a test for a movie with a rating. Here's the start of the test:
1. by a constructor, and/or

2. by a mutator method.
Since we have no requirement yet to be able to change a movie's rating, let's go with the first option.
We will add another constructor that takes a rating in addition to a name. Our test is:
Now we have to add a constructor and a getRating() method to Movie to get the test to compile:

public int getRating() {
    return 5;
}
Now, of course, hasRating() always returns false, which causes the first assert to fail. We could go
in baby steps, but by now we're feeling more confident and experienced, so we'll take a bigger step.
What needs to be done? It's pretty obvious that we should grab the rating that is passed to the
constructor, store it somewhere, and use that to determine the value returned from hasRating() and
getRating(). Now the question is, "What does it mean for a movie to be unrated?" A valid rating is
between 0 and 5, inclusive. The simplest thing is to assign -1 to denote unrated. Here's the changed
code in Movie:
private int rating;

public int getRating() {
    return rating;
}
Our tests all pass now. Can we refactor anything? I don't think so. One thing that would be nice would
be to move rating from being an int to an object of some sort. In the future, maybe, when
we need it. For now, however, an int works fine.
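Putting the pieces together, the -1 convention looks roughly like this sketch (the named constant and constructor chaining are stylistic assumptions, not the book's verbatim code):

```java
// Sketch: -1 as the "unrated" sentinel described in the text.
public class Movie {
    private static final int UNRATED = -1;

    private String name;
    private int rating;

    public Movie(String name) {
        this(name, UNRATED);
    }

    public Movie(String name, int rating) {
        this.name = name;
        this.rating = rating;
    }

    public boolean hasRating() {
        return rating != UNRATED;
    }

    public int getRating() {
        return rating;
    }

    public String getName() {
        return name;
    }
}
```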
Test 26: Asking an unrated movie for its rating throws an exception
Now we need the rating on the GUI. This is all in the presentation layer, since the movie list is given a
list of movies. This means that we have nothing to do at the logic level (i.e., MovieListEditor) and
we can move directly to the Swing classes. After getting more information about what is
desired/possible/easy, it is decided that we will display an N-of-5-star indicator to the left of the movie
name in the list. This will require a custom renderer for the list. We can easily write tests for this. We'll
need a new test case, TestCustomListRenderer.
Test 27. When asked for the renderer component, the renderer returns itself.
Test 28. When given a movie to render, the resulting text and rating image correspond
to the movie being rendered.
Test 29. When rendering an unselected item, the renderer uses its list's unselected
colors.
Test 30. When rendering a selected item, the renderer uses its list's selected colors.
We'll need an instance of the renderer to test, a JList to pass in, and a couple of Movies to test
with. Let's start by creating a fixture:
private Movie fotr = null;
private Movie starTrek = null;
private CustomMovieListRenderer renderer = null;
private JList list = null;

protected void setUp() {
    fotr = new Movie("Fellowship of The Ring", 5);
    starTrek = new Movie("Star Trek", 3);
    renderer = new CustomMovieListRenderer();
    list = new JList();
}
Test 27: When asked for the renderer component, the renderer returns
itself
Test 28: When given a movie to render, the resulting text and rating
image correspond to the movie being rendered
The next step is to make sure the text and rating image are getting set properly:
To support this we need some icons that we can get at based on the rating. We'll do this with a
static variable on the renderer:
Test 29: When rendering an unselected item, the renderer uses its list's
unselected colors
Next, we need to deal with the colors. First, the unselected colors:
Test 30: When rendering a selected item, the renderer uses its list's
selected colors
if (isSelected) {
    setBackground(list.getSelectionBackground());
    setForeground(list.getSelectionForeground());
} else {
    setBackground(list.getBackground());
    setForeground(list.getForeground());
}
Green bar all the way. We should refactor the code that sets the list colors into setUp(). Here's the
new version of the affected code:
protected void setUp() {
    fotr = new Movie("Fellowship of The Ring", 5);
    starTrek = new Movie("Star Trek", 3);
    renderer = new CustomMovieListRenderer();
    list = new JList();
Now all we need to do is use this renderer in the Swing GUI. Figure 13.1 shows the new version of the
GUI, and here's the updated code:
JScrollPane scroller =
    new JScrollPane(
        movieList,
        ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS,
        ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER);
getContentPane().add(scroller);
}
Test 32. Updating a movie changes its rating if a different rating was selected for it.
Test 33. Selecting a movie from the list updates the displayed rating.
Test 34. Updating a movie in the GUI changes its rating if a different rating was
selected for it, and updates the display accordingly.
Next we'll extend testSelecting() to check that a selection in the movie list causes an update of
the rating:
    control.verify();
}
The first step to getting to a green bar again is to add a method to the MovieListEditorView
interface:
Next, we need to add to MovieListEditor.select() to set the rating as well as the name:
    try {
        view.setNewRating(selectedMovie.getRating() + 1);
    } catch (UnratedException e) {
        view.setNewRating(0);
    }
}
}
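The + 1 in that code encodes a convention: combo index 0 means "unrated," and a rating r sits at index r + 1. As a standalone sketch of that mapping (the class and method names are ours, not from the application, and -1 stands in for the unrated case that the application signals with UnratedException):

```java
// Sketch of the rating <-> combo-index convention: index 0 represents
// "unrated"; a rating r maps to index r + 1. Illustrative names only.
class RatingIndexSketch {
    static final int UNRATED_INDEX = 0;

    // Maps a rating to a combo box index; a negative rating means unrated.
    static int ratingToIndex(int rating) {
        return (rating < 0) ? UNRATED_INDEX : rating + 1;
    }

    // Maps a combo box index back to a rating; index 0 yields -1 (unrated).
    static int indexToRating(int index) {
        return (index <= 0) ? -1 : index - 1;
    }

    public static void main(String[] args) {
        System.out.println(ratingToIndex(5));  // 6
        System.out.println(indexToRating(0)); // -1
    }
}
```

Keeping the conversion in two small functions like these is one way to avoid scattering the +1/-1 arithmetic through the GUI code.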
Now we see that several tests are failing due to the additional (and unexpected) calls to
setNewRating(). We can either add expectations to these tests or have them ignore those calls. We'll
do the latter since it's the simplest thing that will work. We just need to add the following line to the
beginning of each failing test:
Upon looking over the code, I feel that some renaming is in order. The methods for getting and setting
the name and rating fields are misleadingly named. Let's change them like so: getNewName() changes to
getNameField(). For brevity, we'll just show the interface:
Now for the flip side of editing the rating...updating the movie behind it. We'll start by extending
testUpdating() by adding ratings to the Movies we create and adding expectations for calls to
setRatingField() and getRatingField():
control.verify();
}
    try {
        movies.rename(selectedMovie, newName);
        selectedMovie.setRating(view.getRatingField());
        updateMovieList();
    } catch (DuplicateMovieException e) {
        view.duplicateException(newName);
    }
}
}
Now we need to add a setRating() method to Movie. We won't test-drive that, as it will be a simple
mutator and it will be tested indirectly. In fact, I won't even bother showing it here. Most IDEs will have
a way to generate accessors and mutators for instance variables.
Green bar. Now we can turn to the Swing layer. We can take the same approach to editing the rating as
to editing the name: When a movie is selected, use a field to edit its value. In this case, since the range
is constrained, we'll use a combo box.
Test 33: Selecting a movie from the list updates the displayed rating
We'll start with a test to check that selecting a movie in the list sets the rating selection combo box to
the appropriate value:
Now we need to make it pass. We need a combo box in the GUI and methods to set and get its value:
public void init() {
    setTitle();
    setLayout();
    initList();
    initField();
    initRatingCombo();
    initAddButton();
    initUpdateButton();
    pack();
}
Green bar. OK. We talk to the customer. We need to be able to select a movie, change the rating, and
click update. We try it. Uh oh. We get the duplicate movie message. Hmmm... think think... talk talk...
what we need to do is raise that error only if the update makes the selected movie equal to a different
movie in the list.
We need to go back to TestMovieListEditor and create a test to drive that behavior:
control.verify();
}
Now we can tweak the MovieListEditor.update() method to make the bar green again:
    try {
        movies.rename(selectedMovie, newName);
        updateMovie();
    } catch (DuplicateMovieException e) {
        view.duplicateException(newName);
    }
}
}
}
Test 34: Updating a movie in the GUI changes its rating if a different
rating was selected for it, and updates the display accordingly
Now back to the Swing GUI. We need a test that selects a movie, changes the rating, and updates:
Green bar. Life is good. Finally, Figure 13.2 shows the current state of the GUI.
RETROSPECTIVE
We've done a few things in this chapter. One interesting thing was developing a custom list cell renderer
test-first. We've also extended our Jemmy testing repertoire by adding combo boxes.
ADD A CATEGORY
Add support for a movie to have a single category, from a fixed list of alternatives.
To add support for categories through the application we will want these tests:
Test 35. A movie that hasn't explicitly been given a category should answer that it is
uncategorized when asked for its category.
Test 36. If a movie is given a category when it is created, it answers that when asked
for its category.
Test 37. Trying to create a movie with an invalid category (i.e., not from the predefined
set) throws an exception.
We begin by adding support for a category to Movie. Notice that the fleshed out task description
mentions that the categories are from a fixed list. That means that the category of a movie can't be just
anything (like an arbitrary string), but has to be picked from a fixed set of alternatives. So let's pick an
initial list to work with: Science Fiction, Drama, Comedy, Horror, Western, and Fantasy. We will also
need an Uncategorized setting.
Test 35: A movie that hasn't explicitly been given a category should
answer that it is uncategorized when asked for its category
We'll start with the last case, since it doesn't require actually setting the category of a Movie:
Now, when we are testing for the other categories, we need a way to set the category of a Movie. The
simplest thing is to add a constructor that takes a category string. So here is the test:
We know we'll have to store the category value and return it from getCategory(), so let's take a
bigger step and just do it. And here's the new code:
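The listing itself is not reproduced in this excerpt; a minimal sketch of what Movie might look like at this point (field names and the exact constructor bodies are our guesses, and the category is still just a string here):

```java
// Sketch of Movie with a stored category, string-valued at this stage.
// A movie created without a category answers "Uncategorized".
class Movie {
    private String name;
    private int rating;
    private String category;

    public Movie(String name, int rating) {
        this.name = name;
        this.rating = rating;
        this.category = "Uncategorized";
    }

    public Movie(String name, String category, int rating) {
        this.name = name;
        this.category = category;
        this.rating = rating;
    }

    public String getCategory() {
        return category;
    }

    public static void main(String[] args) {
        System.out.println(new Movie("Star Wars", 5).getCategory()); // Uncategorized
    }
}
```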
Green bar. The constructors of Movie are proliferating, so let's make a more general one and retrofit
our tests to use it:
While we're at it we notice that some refactoring can be applied to TestMovie. We do that as well,
removing unneeded instance creation from the tests and using the starWars that is created in setUp().
Exercises
1. Refactor TestMovie to use the single, general Constructor Method that we just added to
Movie.
2. Refactor the tests again, this time removing the unneeded Movie creation.
Test 37: Trying to create a movie with an invalid category (i.e., not from
the predefined set) throws an exception
Now, we need to enforce the constraint that only a fixed set of alternatives can be used:
So how do we constrain the category values to a closed, fixed set of alternatives? We could use guard
statements and the string comparison:
This smells! Imagine changing the spelling of a category once we have a large, complex system built.
We could refactor the string literals to constants and use them everywhere:
This still has a smell. It would be nice to refactor this in a way that would let us leverage Java to
enforce the constraints for us. We can do that by refactoring to a type-safe enumeration. Joshua
Kerievsky discusses this refactoring in [29].
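In the pre-Java-5 idiom this book uses (the enum keyword did not exist yet), a type-safe enumeration is a class whose only instances are its own public constants: the private constructor means the compiler itself rules out invalid categories. A sketch, assuming the constant names SCIFI and HORROR that appear later in the fixtures (the remaining names are our guesses):

```java
// Type-safe enumeration sketch: the private constructor guarantees that
// the only Category instances are the constants declared here.
class Category {
    public static final Category UNCATEGORIZED = new Category("Uncategorized");
    public static final Category SCIFI = new Category("Science Fiction");
    public static final Category DRAMA = new Category("Drama");
    public static final Category COMEDY = new Category("Comedy");
    public static final Category HORROR = new Category("Horror");
    public static final Category WESTERN = new Category("Western");
    public static final Category FANTASY = new Category("Fantasy");

    private final String name;

    private Category(String name) {
        this.name = name;
    }

    public String toString() {
        return name;
    }

    public static void main(String[] args) {
        System.out.println(Category.SCIFI); // Science Fiction
    }
}
```

Because each constant is a singleton, categories can be compared with ==, and there is simply no way to construct a Category outside the fixed set.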
Next, we update the related tests. One advantage of using a type-safe enumeration is that we no longer
need to test for bad values; the compiler enforces that for us. Here are the remaining tests:
And, finally, here are the affected bits of Movie (note that the string constants have been deleted):
Now we can go back to the customer and get a list of the categories that should be supported and add
them to Category.
Exercises
3. The customer identified these categories: Science Fiction, Horror, Comedy, Western,
Drama, Fantasy, Kids, Adult, Mystery, Thriller. Add these to Category.
Now that we have categories supported in Movie, we can add support for them to the interface layers.
We'll start with these tests:
Test 38. Telling the logical layer that a movie is selected causes the presentation layer to
be told the category to display.
Test 39. Selecting a movie in the GUI causes the category field to be updated.
To get the category to reflect on the interface, we'll start, as usual, with MovieListEditor.
Test 38: Telling the logical layer that a movie is selected causes the
presentation layer to be told the category to display
control.verify();
}
To support this, we have to update setUp() as well. Note: We don't need to create a new fixture
(which would require a new TestCase), but the current fixture needs to be more complete. Here it is:
movies = new Vector();
movies.add(starWars);
movies.add(starTrek);
movies.add(stargate);
movies.add(theShining);
Now we have the test. Let's make it compile and get to the red bar as quickly as we can. We need to
add a setCategoryField() method to MovieListEditorView. Now we have the red bar we were
aiming for.
Exercises
Test 39: Selecting a movie in the GUI causes the category field to be
updated
First, we need a test that checks that selecting updates the category field as expected:
public void init() {
    setTitle();
    setLayout();
    initList();
    initNameField();
    initRatingCombo();
    initCategoryField();
    initAddButton();
    initUpdateButton();
    pack();
}
Exercises
6. We used the toString() method to get the value for the category field, as well as
the value from the expected Category to compare against the field contents. What's
the problem that we have with the system in its current state? (Hint: look at
Category.) Fix it.
For this task we need only be concerned with the interface layers. Here's our list of tests:
Test 40. Telling the logical layer to update and providing it with data that indicates a
category change results in the GUI layer being given a new set of movies with that
change reflected.
Test 41. Selecting a movie from the list, changing the value of the category, and
pressing Update updates the data for that movie. When that movie is selected again, the
new category is displayed.
Test 40: Telling the logical layer to update and providing it with data
that indicates a category change results in the GUI layer being given a
new set of movies with that change reflected
For this test we can extend TestMovieListEditor.testUpdating() to check that the movie
category gets changed:
This in turn requires a setCategory() mutator in Movie, which we won't bother showing here. Next,
we need to add an expectation to testUpdatingWithSameName() for the call to
getCategoryField(). OK. Green bar!
Exercises
Test 41: Selecting a movie from the list, changing the value of the
category, and pressing Update updates the data for that movie. When
that movie is selected again, the new category is displayed
Now that we've finished with the logic layer for this task, we can turn to the Swing layer. We'll start by
adding a test for updating the category:
This requires some slight changes to Category to collect and fetch all the defined categories:
OK, that gets our new test to pass, but we've broken our earlier testSelectUpdatesCategory test.
It needs to be rewritten to use a combo box for the category field:
Green bar! We're done. Figure 14.1 shows the current GUI.
RETROSPECTIVE
There wasn't much that was really new in this chapter. We did make use of a type-safe enumeration to
encapsulate a closed, fixed set of alternative values. There's a good discussion of this pattern in [10],
and one of the refactorings in [29] deals with replacing a type with a type-safe enumeration.
Now we need to have MovieList generate sublists based on categories. Here are some tests:
Test 42. Requesting a sublist filtered on a specific category answers a list containing all
movies of that category, and only those movies.
Let's start by setting up a fixture with a MovieList containing a selection of movies with various
categories and another MovieList containing just fantasy movies:
Compile, run, red bar. Now to make it green. But first, consider the failure message:
That doesn't tell us much other than something was expected, but nothing was provided. The instance
identifier of the expected MovieList is essentially unreadable, and generally meaningless. We can make
that more useful by adding a reasonable toString() to MovieList:
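One reasonable toString() simply delegates to the underlying collection. A sketch, assuming MovieList keeps its Movies in a Vector named movies (the book's listing is not shown in this excerpt):

```java
import java.util.Vector;

// Sketch: a toString() that delegates to the collection makes test failure
// messages readable instead of the default ClassName@hashcode form.
class Movie {
    private final String name;
    Movie(String name) { this.name = name; }
    public String toString() { return name; }
}

class MovieList {
    private final Vector movies = new Vector();

    public void add(Movie movie) { movies.add(movie); }

    // Vector's own toString() renders "[Star Wars, Star Trek]".
    public String toString() { return movies.toString(); }

    public static void main(String[] args) {
        MovieList list = new MovieList();
        list.add(new Movie("Star Wars"));
        list.add(new Movie("Star Trek"));
        System.out.println(list); // [Star Wars, Star Trek]
    }
}
```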
Exercises
OK, back to the pursuit of the green bar. The simplest thing is to make a new MovieList and
selectively add movies to it that match the filter criteria, in this case, a category. We'll move a bit faster
this time and skip the fake it step. I can imagine a time when we might want something more elaborate,
possibly using a Decorator pattern where you could decorate a MovieList with multiple filters and/or
sorters. Such a possibility is shown in Figure 15.1. This would involve a class structure something like
that shown in Figure 15.2. For now, though, the simplest thing will suffice:
Looks like we need an equals() method for MovieList. Sure enough, so we'll write one.
Exercises
9. Write an equals() method for MovieList. Start by adding a test to
TestMovieListWithPopulatedList.
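One possible shape for such an equals() (a sketch, not the book's listing) is to delegate to the underlying Vector, which compares element by element using Movie.equals(). The Vector-based representation and name-based Movie equality are assumptions here:

```java
import java.util.Vector;

// Sketch of MovieList.equals(): two lists are equal when their movie
// collections are equal, element by element.
class Movie {
    private final String name;
    Movie(String name) { this.name = name; }
    public boolean equals(Object other) {
        return other instanceof Movie && name.equals(((Movie) other).name);
    }
    public int hashCode() { return name.hashCode(); }
}

class MovieList {
    private final Vector movies = new Vector();

    public void add(Movie movie) { movies.add(movie); }

    public boolean equals(Object other) {
        if (!(other instanceof MovieList)) return false;
        return movies.equals(((MovieList) other).movies);
    }
    public int hashCode() { return movies.hashCode(); }

    public static void main(String[] args) {
        MovieList a = new MovieList();
        a.add(new Movie("Star Wars"));
        MovieList b = new MovieList();
        b.add(new Movie("Star Wars"));
        System.out.println(a.equals(b)); // true
    }
}
```

Note that overriding equals() obliges us to override hashCode() as well, so equal lists hash equally.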
That does it—green bar! Now beef up the test to check for other subsets:
Exercises
We can now ask a MovieList to return a sublist for a given category. The story requires that we be
able to fetch the entire list for the ALL category. Here's the test:
Test 43. Asking for a subset for the ALL category answers the original list.
Test 43: Asking for a subset for the ALL category answers the original
list
Red bar. Now we need to update categorySublist() to handle the ALL category:
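The updated method might look like the following sketch (the book's listing is not reproduced here; the Category.ALL constant, the Vector field, and the decision to return the receiver itself for ALL are assumptions consistent with Test 43's "answers the original list"):

```java
import java.util.Vector;

// Sketch of categorySublist() with the ALL guard. Category is reduced
// to the bare minimum needed for the example.
class Category {
    public static final Category ALL = new Category();
    public static final Category SCIFI = new Category();
    public static final Category HORROR = new Category();
    private Category() {}
}

class Movie {
    private final Category category;
    Movie(Category category) { this.category = category; }
    public Category getCategory() { return category; }
}

class MovieList {
    private final Vector movies = new Vector();

    public void add(Movie movie) { movies.add(movie); }
    public int size() { return movies.size(); }

    public MovieList categorySublist(Category category) {
        if (category == Category.ALL) {
            return this; // Test 43: ALL answers the original list
        }
        MovieList sublist = new MovieList();
        for (int i = 0; i < movies.size(); i++) {
            Movie movie = (Movie) movies.get(i);
            if (movie.getCategory() == category) {
                sublist.add(movie);
            }
        }
        return sublist;
    }
}
```

Because Category is a type-safe enumeration, the == comparison against both ALL and the movie's category is safe.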
Now another GUI task. We need to extend the GUI logic layer to handle category changes and respond
by filtering the displayed list of movies.
Test 44. When the logical layer is told to filter on a specific category, the presentation
layer is given a new list to display containing movies for the specified category.
Test 45. Telling the logical layer to select a specific movie in a filtered list, rather than
from the complete list, actually selects the appropriate movie, in spite of being selected
from a sublist.
Test 46. Telling the logical layer to update a specific movie in a filtered list rather than
from the complete list actually updates the appropriate movie properly.
Test 47. A rename performed in the context of the logical layer is done in the underlying
full list; that is, potential duplicates in the full list are detected.
Test 48. Selecting a category to filter on in the GUI causes the displayed list of movies
to update accordingly.
Since we need movies with assorted categories, we'll start by creating a new TestCase to hold a more
elaborate fixture, rather than extending the one we have:
We need a test that will start with the entire list, select a category, then select the ALL category to get
the entire list back. Here's the test:
Add a stub for filterOnCategory() to get to the red bar. It fails because the stub does nothing;
specifically, it does not cause a corresponding call to setMovies(). We need to fill it in:
We also need to add a field, filteredMovies, to hold the filtered movie list, which we maintain and
use to fill the list in the view.
Exercises
Green bar.
Test 45: Telling the logical layer to select a specific movie in a filtered
list, rather than from the complete list, actually selects the appropriate
movie, in spite of being selected from a sublist
The next test verifies that selecting a movie in a filtered list is really selecting the appropriate movie:
When we run it, it fails. We need to look at MovieListEditor.select(). There we see that movie
selection is still dealing with the underlying movie list, not the filtered one. We need to change that:
Test 46: Telling the logical layer to update a specific movie in a filtered
list rather than from the complete list actually updates the appropriate
movie properly
Another test to make sure that the existing functionality still works ok:
Green bar. That's good. One more test and I'll be happy.
Test 47: A rename performed in the context of the logical layer is done
in the underlying full list; that is, potential duplicates in the full list are
detected
This one will test that renames are being done in the context of the master list, not the filtered one.
We'll do this by selecting a sublist and renaming one of the movies in it to be a duplicate of a movie
not in the sublist. This should cause an error:
As usual, we need to extend category selection to the Swing GUI. Like the last test, we'll create a new
fixture/TestCase, very similar to the last.
Test 48: Selecting a category to filter on in the GUI causes the displayed
list of movies to update accordingly
for (int i = 0; i < fantasyMovies.size(); i++) {
    assertEquals("Fantasy list contains bad movie at index " + i,
                 fantasyMovies.get(i),
                 fantasyListModel.getElementAt(i));
}
for (int i = 0; i < thrillerMovies.size(); i++) {
    assertEquals("Thriller list contains bad movie at index " + i,
                 thrillerMovies.get(i),
                 thrillerListModel.getElementAt(i));
}
for (int i = 0; i < movies.size(); i++) {
    assertEquals("Movie list contains bad movie at index " + i,
                 movies.get(i),
                 allListModel.getElementAt(i));
}
}
This immediately gives us a red bar since there is no combo box for selecting the category on which to
filter. That's easy enough to add:
public void init() {
    setTitle();
    setLayout();
    initCategoryFilterField();
    initList();
    initNameField();
    initRatingCombo();
    initCategoryField();
    initAddButton();
    initUpdateButton();
    pack();
}
Now we get a red bar due to the list box not getting updated when the category filter setting changes.
We need to add a selection handler to the new combo box:
This task lies completely in the logic layer of the GUI. One simple test is all that is required.
Test 49. Changing the category of a movie in a filtered list causes that movie to be
removed from the list. If that category is filtered on, that movie will be in the list.
Here it is:
Test 49: Changing the category of a movie in a filtered list causes that
movie to be removed from the list. If that category is filtered on, that
movie will be in the list
And how do we do that? The filtered list is just a MovieList with no retention of the filtering criteria.
The simplest thing that could work is to regenerate the filtered list on each update:
To support this we need a field to hold the current Category on which to filter:
The test still fails since we're not maintaining the value in the new field yet. Let's take care of that:
The use of filterOnCategory() is confusing. It's called from the GUI when the filter category is
changed, and from updateMovie() whenever the update button is pushed. There is no reason to set
the filter category if it hasn't changed, so we should extract the common behavior into a separate
method and call that from updateMovie(). We'll also call it from filterOnCategory() after setting
filterCategory. Here's the new code:
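The shape of that refactoring might be as follows. This is a sketch with simplified stand-ins (a Movie with a string category, an array instead of MovieList, and a shownCount() accessor instead of a view); only the two method names and the division of labor between them follow the text:

```java
import java.util.ArrayList;
import java.util.List;

class Movie {
    final String name;
    final String category;
    Movie(String name, String category) { this.name = name; this.category = category; }
}

// filterOnCategory() records the new criterion and delegates;
// updateMovieList() regenerates the filtered list from the retained
// filterCategory, so it can also be called after every movie update.
class MovieListEditorSketch {
    static final String ALL = "ALL";
    private final Movie[] allMovies;
    private String filterCategory = ALL;
    private List shownMovies;

    MovieListEditorSketch(Movie[] allMovies) {
        this.allMovies = allMovies;
        updateMovieList();
    }

    // Called from the GUI when the filter combo changes.
    public void filterOnCategory(String category) {
        filterCategory = category;  // remember the criterion...
        updateMovieList();          // ...then rebuild the displayed list
    }

    // Also called after an update, so a category edit moves the movie
    // in or out of the displayed list immediately.
    public void updateMovieList() {
        shownMovies = new ArrayList();
        for (int i = 0; i < allMovies.length; i++) {
            if (ALL.equals(filterCategory)
                    || allMovies[i].category.equals(filterCategory)) {
                shownMovies.add(allMovies[i]);
            }
        }
    }

    public int shownCount() { return shownMovies.size(); }
}
```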
The bar's still green, so we're done here. Before we're happy, though, we have to run all the tests. We
get a green bar on AllTests, but the Swing tests have some failures. We investigate and find that the
problem is the search for the rating combo box. Before we added the category filter combo box, the
rating combo box was the first one returned by default when Jemmy was asked for a
JComboBoxOperator. Now life is more complex. It's time to add a bit of infrastructure; specifically, we
will add a name to each component and search for them by name. To do this, we need to create a
ComponentChooser subclass (part of Jemmy) to do the name checking. We will use this chooser for
finding components. Here it is:
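The chooser itself is not reproduced in this excerpt, and it implements Jemmy's own ComponentChooser interface, which we won't reconstruct here. Independent of that API, the core idea is matching on Component.getName() while walking the component tree. A stdlib-only sketch of that matching logic (class and method names are ours):

```java
import java.awt.Component;
import java.awt.Container;

// Sketch of name-based component lookup, the idea behind the Jemmy
// chooser: walk the AWT component tree and match on getName().
class ComponentFinder {
    public static Component findByName(Container root, String name) {
        Component[] children = root.getComponents();
        for (int i = 0; i < children.length; i++) {
            if (name.equals(children[i].getName())) {
                return children[i];
            }
            if (children[i] instanceof Container) {
                Component match = findByName((Container) children[i], name);
                if (match != null) {
                    return match;
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Container root = new Container();
        Container ratingCombo = new Container(); // stands in for a JComboBox
        ratingCombo.setName("ratingCombo");      // hypothetical component name
        root.add(ratingCombo);
        System.out.println(findByName(root, "ratingCombo") == ratingCombo); // true
    }
}
```

With names attached via setName(), the search no longer depends on the order in which components were added, which is exactly what broke the rating combo lookup.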
OK, all is green, all is good. Now we're confident that we are finished.
INTERFACE CLEANUP
The GUI has been gradually getting more complex as we've added functionality to the application. It's
time to go back and rework the layout as we've outgrown the simple FlowLayout that we've been
using. Figure 15.3 shows the current state of the GUI.
1. a list pane for the category filter combo and the movie list,
2. a detail pane for the information about the new/selected movie, and
3. a button pane for the Add and Update buttons.
For each pane we will use a BoxLayout—the button pane will be laid out horizontally, while the other
panes will be vertical. The panes themselves will be laid out by a vertical BoxLayout. Here's the
updated GUI creation code:
public void init() {
    setTitle();
    setLayout();
    getContentPane().add(initListPane());
    getContentPane().add(initDetailPane());
    getContentPane().add(initButtonPane());
    pack();
}
The component initialization methods are tweaked to return the component rather than add it to the
main window. Here's an example:
RETROSPECTIVE
We started off by considering possible solutions to the problem of presenting a filtered list to the user.
We explored one model that used the Decorator pattern to compose filters. In the end we decided that
that was overkill at the moment, and did something much simpler.
We did more work on the interface to support category filtering. In the process we moved to a more
sophisticated way of having Jemmy find components. We finished off by refactoring the structure of the
GUI, using a more elaborate layout approach based on nested BoxLayouts.
There's a bit of a smell to the code. It feels like MovieListEditor knows too much about the filtering
of the list. Maybe that knowledge should be extracted into a class that sits between the
MovieListEditor and the MovieList. That's starting to sound like the decorator-based solution we
thought about at the beginning of this chapter. I'll resist pursuing that now. If we get pushed farther in
that direction, we'll have to stop and do some big refactoring to move forward. Stay tuned.
We need to persist data. After some discussion, it was decided that the way in which the data was
stored didn't matter much...the choice is ours. Since our data is structurally very simple, we'll use a
simple solution: a flat ASCII file.
Java has a lovely IO abstraction facility that we will use to decouple our persistence facility from physical
files: streams. Not only does this make our design cleaner, but it also makes writing tests easier. Here
are the tests we'll want:
Test 51. Writing a list of one movie should write that movie to the stream in the format
<name>|<category>|<rating>.
Test 52. Writing a list containing several movies writes each movie to the stream, one
per line, in the format of the previous test.
Our first task is to write a MovieList to a stream. We begin, of course, with a test. Where should
the test go? My feeling is that since we are starting on a new area of functionality we should start with
a new fixture.
With this in place, the test compiles and, somewhat surprisingly, passes. Maybe it shouldn't be so
surprising. Writing an empty list results in empty output. Currently, writeTo() does nothing at all, so
the output is empty.
Test 51: Writing a list of one movie should write that movie to the
stream in the format <name>|<category>|<rating>
Notice how we've made a design decision while specifying the test. We've defined and specified the file
format that will be used to store our movie data.
Red bar right away! Now we implement MovieList.writeTo() as simply as we can get away with.
    try {
        destination.write(Integer.toString(movieToWrite.getRating()));
    } catch (UnratedException ex) {
        destination.write("-1");
    }
    destination.write('\n');
    destination.flush();
}
}
This is about as simple as we can get. Well, OK... we could have written a string literal. The test only
requires that the first (and only) movie in the collection is written, so that's all we write code to do.
Notice that we need the if (size() > 0) check to keep testWritingEmptyList() passing.
Green, but not yet clean. First, look at the two tests. We can rename emptyList and extract a fixture
to setUp().
Exercises
12. Extract the fixture code from the two tests into setUp().
Now let's turn our attention to MovieList.writeTo(). The way we have MovieList dealing with
the internal details of Movie stinks of Inappropriate Intimacy. That code should be moved to Movie,
where the structural details can be encapsulated. First, we will extract the Movie writing code to a
separate method, then move it to Movie. First to extract it:
    try {
        destination.write(Integer.toString(movieToWrite.getRating()));
    } catch (UnratedException ex) {
        destination.write("-1");
    }
    destination.write('\n');
}
And in Movie:
    try {
        destination.write(Integer.toString(getRating()));
    } catch (UnratedException ex) {
        destination.write("-1");
    }
    destination.write('\n');
}
Test 52: Writing a list containing several movies writes each movie to
the stream, one per line, in the format of the previous test
This test fails, which drives us to generalize MovieList.writeTo() to write the entire collection of
Movies:
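The generalized method simply delegates each record to Movie.writeTo(). A self-contained sketch with simplified classes (the rating -1 stands in for the UnratedException case, and the pipe characters are still written inline at this point):

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import java.util.Vector;

// Sketch: MovieList.writeTo() loops over the collection and delegates
// each record to Movie.writeTo(), which owns the
// <name>|<category>|<rating> line format.
class Movie {
    private final String name;
    private final String category;
    private final int rating;

    Movie(String name, String category, int rating) {
        this.name = name;
        this.category = category;
        this.rating = rating;
    }

    public void writeTo(Writer destination) throws IOException {
        destination.write(name);
        destination.write('|');
        destination.write(category);
        destination.write('|');
        destination.write(Integer.toString(rating)); // -1 means unrated
        destination.write('\n');
    }
}

class MovieList {
    private final Vector movies = new Vector();

    public void add(Movie movie) { movies.add(movie); }

    public void writeTo(Writer destination) throws IOException {
        for (int i = 0; i < movies.size(); i++) {
            ((Movie) movies.get(i)).writeTo(destination);
        }
        destination.flush();
    }

    public static void main(String[] args) throws IOException {
        MovieList list = new MovieList();
        list.add(new Movie("Star Wars", "Science Fiction", 5));
        list.add(new Movie("Stargate", "Science Fiction", -1));
        StringWriter out = new StringWriter();
        list.writeTo(out);
        System.out.print(out); // one pipe-delimited line per movie
    }
}
```

Writing to a Writer rather than a file is what makes this easy to test: a StringWriter captures the output in memory.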
Green! Now there's a bit of cleanup to do on Movie.writeTo(): we should extract the writing of the
separator to its own method:
    try {
        destination.write(Integer.toString(getRating()));
    } catch (UnratedException ex) {
        destination.write("-1");
    }
    destination.write('\n');
}

private void writeSeparator(Writer destination) throws IOException {
    destination.write('|');
}
SAVE-AS IN GUI
Provide, in the GUI, the capability to save the movie collection to a specific file.
For the GUI, we will start with the "Save As" functionality. We are ordering our tasks so that we do this task before
simple saving, since saving requires a place to which the list was previously saved (or, as we will see, from which it
was loaded). Here are the tests:
Test 53. Telling the logical layer to "save–as" causes it to ask the presentation layer to specify a file
into which it writes the movie list.
Test 54. Selecting "Save As" from the "File" menu prompts for a file using the standard file chooser.
The list is written into the selected file.
Test 55. If the file selection is cancelled, nothing is written to any file.
Test 53: Telling the logical layer to "save–as" causes it to ask the presentation
layer to specify a file into which it writes the movie list
public void testSaving() throws Exception {
    String expected = "Star Wars|Science Fiction|5\n" +
                      "Star Trek|Science Fiction|3\n" +
                      "Stargate|Science Fiction|-1\n" +
                      "The Shining|Horror|2\n";
    File outputFile = File.createTempFile("testSaveAs", ".dat");
    outputFile.deleteOnExit();
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.getFile("*.dat");
    control.setReturnValue(outputFile, 1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.saveAs();
    control.verify();
    assertFileEquals("Saved file", expected, outputFile);
}

private void assertFileEquals(String message, String expected, File outputFile)
        throws Exception {
    FileReader reader = new FileReader(outputFile);
    for (int charIndex = 0; charIndex < expected.length(); charIndex++) {
        char characterRead = (char) reader.read();
        assertEquals(message + " has wrong character at index " + charIndex + ".",
                     expected.charAt(charIndex),
                     characterRead);
    }
}
Exercises
13. Add the method stubs and declarations we need in order to get the test compiling.
We fail because the mock isn't getting the expected call to getFile(). Let's add some code to
MovieListEditor.saveAs():
public void saveAs() {
    File outputFile = view.getFile("*.dat");
}
Now the failure is due to the saved file having a size of 0. We need to add more code to write out the list to the
specified file:
public void saveAs() throws IOException {
    File outputFile = view.getFile("*.dat");
    FileWriter writer = new FileWriter(outputFile);
    movies.writeTo(writer);
    writer.close();
}
That does it. Before we move on, there is some refactoring we should do to our tests. I'd like to extract the file-
oriented tests into a separate test case since they're inherently different from the rest of the tests. However, we need an
extension of the existing fixture. Do we duplicate the fixture? Do we extend the existing test case? No to both:
Duplication is to be avoided as much as possible, and if we extend a test case, we inherit all of its tests, which isn't
what we want. The solution is to extract the fixture into a common, abstract superclass and extend it as required in
the subclasses. So here's the superclass that houses the common fixture:
public abstract class CommonTestMovieListEditor extends TestCase {
    protected MockControl control = null;
    protected MovieListEditorView mockView = null;
    protected Vector movies = null;
    protected Movie starWars = null;
    protected Movie starTrek = null;
    protected Movie stargate = null;
    protected Movie theShining = null;
    protected MovieList movieList = null;

    protected void setUp() throws Exception {
        starWars = new Movie("Star Wars", Category.SCIFI, 5);
        starTrek = new Movie("Star Trek", Category.SCIFI, 3);
        stargate = new Movie("Stargate", Category.SCIFI, -1);
        theShining = new Movie("The Shining", Category.HORROR, 2);
        movies = new Vector();
        movies.add(starWars);
        movies.add(starTrek);
        movies.add(stargate);
        movies.add(theShining);
        movieList = new MovieList();
        movieList.add(starWars);
        movieList.add(starTrek);
        movieList.add(stargate);
        movieList.add(theShining);
        control = EasyMock.controlFor(MovieListEditorView.class);
        mockView = (MovieListEditorView) control.getMock();
        mockView.setEditor(null);
        control.setDefaultVoidCallable();
    }
}
Our previous test case is the same except that now it extends CommonTestMovieListEditor and doesn't have a
fixture (i.e., no instance variables or setUp() method). We create a new TestCase for the file-related tests and
move testSaving() and assertFileEquals() to it:
public class TestMovieListEditorFileOperations extends CommonTestMovieListEditor {
    private String expected;
    private File outputFile;

    protected void setUp() throws Exception {
        super.setUp();
        expected = "Star Wars|Science Fiction|5\n" +
                   "Star Trek|Science Fiction|3\n" +
                   "Stargate|Science Fiction|-1\n" +
                   "The Shining|Horror|2\n";
        outputFile = File.createTempFile("testSaveAs", ".dat");
        outputFile.deleteOnExit();
    }

    public void testSaving() throws Exception {
        mockView.setMovies(movies);
        control.setVoidCallable(1);
        mockView.getFile("*.dat");
        control.setReturnValue(outputFile, 1);
        control.activate();
        MovieListEditor editor = new MovieListEditor(movieList, mockView);
        editor.saveAs();
        assertFileEquals("Saved file", expected, outputFile);
        control.verify();
    }

    private void assertFileEquals(String message, String expected, File outputFile)
            throws Exception {
        assertEquals(message + " is wrong size.",
                     expected.length(),
                     outputFile.length());
        FileReader reader = new FileReader(outputFile);
        for (int charIndex = 0; charIndex < expected.length(); charIndex++) {
            char characterRead = (char) reader.read();
            assertEquals(message + " has wrong character at " + charIndex + ".",
                         expected.charAt(charIndex),
                         characterRead);
        }
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(TestMovieListEditorFileOperations.class);
    }
}
Test 54: Selecting "Save As" from the "File" menu prompts for a file using the
standard file chooser. The list is written into the selected file
Now to add support for the save as functionality to the Swing GUI. We're going to gear down for a bit and take some
smaller steps again. Here's the start of the test:
public void testSaving() {
    FileAssert.assertSize("Saved list has wrong size.",
                          savedText.length(),
                          outputFile);
    FileAssert.assertEquals("Saved file", savedText, outputFile);
}
Notice that we've started with what we want to be true, written as assertions. Also notice that we're using a class that
doesn't exist yet: FileAssert. Our next step is to create that class and the methods in it that we call. The code for
those asserts is derived from the assertFileEquals() method that we wrote for a previous test. Before we go
further with this test, we'll revisit that previous one to refactor and extract the file-related assertion method into a
helper class. We're doing this because we see utility in having those assertions generally available, not buried in a
TestCase.
First, extract the method into a separate class:
public class FileAssert extends Assert {
    public static void assertFileEquals(String message, String expected, File outputFile)
            throws Exception {
        assertEquals(message + " is wrong size.",
                     expected.length(),
                     outputFile.length());
        FileReader reader = new FileReader(outputFile);
        for (int charIndex = 0; charIndex < expected.length(); charIndex++) {
            char characterRead = (char) reader.read();
            assertEquals(message + " has wrong character at " + charIndex + ".",
                         expected.charAt(charIndex),
                         characterRead);
        }
    }
}
Next, we change the test that called the original version so that it calls this version:
public void testSaving() throws Exception {
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.getFile("*.dat");
    control.setReturnValue(outputFile, 1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.saveAs();
    FileAssert.assertFileEquals("SaveAs-ed file", expected, outputFile);
    control.verify();
}
Green. Now we can delete the original copy in TestMovieListEditorFileOperations. Still green. This method
was fine as a helper method, but now it smells like it's doing too much. Let's split up the size and content checks:
public class FileAssert extends Assert {
    public static void assertFileEquals(String message, String expected, File outputFile)
            throws Exception {
        assertSize(message, expected.length(), outputFile);
        assertEquals(message, expected, outputFile);
    }

    public static void assertEquals(String message, String expected, File outputFile)
            throws FileNotFoundException, IOException {
        FileReader reader = new FileReader(outputFile);
        for (int charIndex = 0; charIndex < expected.length(); charIndex++) {
            char characterRead = (char) reader.read();
            assertEquals(message + " has wrong character at " + charIndex + ".",
                         expected.charAt(charIndex),
                         characterRead);
        }
    }

    public static void assertSize(String message, int expectedSize, File outputFile) {
        assertEquals(message + " is wrong size.", expectedSize, outputFile.length());
    }
}
Compile, test, green! The message handling is messy; let's clean that up so that the two nested asserts act more like
the built-in ones:
public class FileAssert extends Assert {
    public static void assertFileEquals(String message, String expected, File outputFile)
            throws Exception {
        assertSize(message + " is wrong size.", expected.length(), outputFile);
        assertEquals(message + " has wrong contents.", expected, outputFile);
    }

    public static void assertEquals(String message, String expected, File outputFile)
            throws FileNotFoundException, IOException {
        FileReader reader = new FileReader(outputFile);
        for (int charIndex = 0; charIndex < expected.length(); charIndex++) {
            char characterRead = (char) reader.read();
            assertEquals(message + " at index: " + charIndex,
                         expected.charAt(charIndex),
                         characterRead);
        }
    }

    public static void assertSize(String message, int expectedSize, File outputFile) {
        assertEquals(message, expectedSize, outputFile.length());
    }
}
Still green. The next step is to rewrite the original test method to use the two new asserts:
public void testSaving() throws Exception {
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.getFile("*.dat");
    control.setReturnValue(outputFile, 1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.saveAs();
    FileAssert.assertSize("SaveAs-ed file has wrong size.",
                          expected.length(),
                          outputFile);
    FileAssert.assertEquals("SaveAs-ed file has wrong contents",
                            expected,
                            outputFile);
    control.verify();
}
It still gives us a green bar, so we can delete assertFileEquals(). Everything is still green, so we're done. Now
we can go back to the Swing test we were writing, with the required assertions written and available. The next thing
we need to do is build the fixture that we'll need. This involves creating the list of movies that we will be saving, the
expected contents of the saved file, a file to save to, a running view, and a MovieListEditor instance:
protected void setUp() throws Exception {
    SwingMovieListEditorView.start();
    movieList = new MovieList();
    movieList.add(new Movie("Star Wars", Category.SCIFI, 5));
    movieList.add(new Movie("Star Trek", Category.SCIFI, 3));
    movieList.add(new Movie("Stargate", Category.SCIFI, -1));
    savedText = "Star Wars|Science Fiction|5\n" +
                "Star Trek|Science Fiction|3\n" +
                "Stargate|Science Fiction|-1\n";
    // ...
Now we have a compiling test with fixture and assertions in place. As expected, it fails. Now we need to add to the
test to manipulate the GUI to set things up for the assertions. This is fairly simple in this case. We just need to select
Save As... from the File menu, and enter the name of the temporary file we created into the filename field of the
resulting file chooser dialog:
public void testSaving() throws Exception {
    JMenuBarOperator menubar = new JMenuBarOperator(mainWindow);
    JMenuOperator fileMenu = new JMenuOperator(menubar, "File");
    fileMenu.push();
    JMenuItemOperator saveAsItem = new JMenuItemOperator(mainWindow,
                                                         "Save As...");
    saveAsItem.pushNoBlock();
    JFileChooserOperator fileChooser = new JFileChooserOperator();
    fileChooser.setSelectedFile(outputFile);
    JButtonOperator saveButton = new JButtonOperator(fileChooser, "Save");
    saveButton.push();
    // ...
This fails immediately when trying to find the menu bar, since we don't have one yet. Let's add a menu bar to our GUI:
public void init() {
    setTitle();
    setLayout();
    setJMenuBar(initMenuBar());
    //...
}

private JMenuBar initMenuBar() {
    JMenuBar menuBar = new JMenuBar();
    JMenu fileMenu = new JMenu("File");
    menuBar.add(fileMenu);
    JMenuItem saveAsItem = new JMenuItem("Save As...");
    fileMenu.add(saveAsItem);
    return menuBar;
}
Now the test gets to the point of waiting for the JFileChooser. What causes the file chooser to open? Remember
that the saveAs() method in MovieListEditor called getFile() on its view. That's what should use a
JFileChooser to get a File from the user. Furthermore, the Save As... menu item should cause the saveAs()
method to be called on the editor. Here's the menu item handler:
saveAsItem.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        try {
            myEditor.saveAs();
        } catch (IOException ex) {
            // TODO: deal with this
        }
    }
});

public File getFile(String pattern) {
    int returnVal = fileChooser.showSaveDialog(this);
    if (returnVal == JFileChooser.APPROVE_OPTION) {
        return fileChooser.getSelectedFile();
    } else {
        return null;
    }
}
Green. Notice how we are handling the cancelling of the file chooser dialog: we return null to the editor. It's time to
revisit MovieListEditor's saveAs() method to account for this. But first, of course, a test that will fail because
we aren't handling getFile() returning null when the file chooser is cancelled.
Test 55: If the file selection is cancelled, nothing is written to any file
public void testCancelledSaving() throws Exception {
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.getFile();
    control.setReturnValue(null, 1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    assertFalse("Editor should not have saved.", editor.saveAs());
    control.verify();
}
This fails with an error: a NullPointerException is thrown when it tries to create a FileWriter around null.
Something else we've done is expect a boolean value to be returned from saveAs(), indicating whether saving took
place successfully. It would be a good idea to take advantage of this in our earlier testSaving() in
TestMovieListEditorFileOperations.
Exercises
We'll fix it and get back to green by checking for null being returned from getFile() (and returning the
appropriate value):
public boolean saveAs() throws IOException {
    File outputFile = view.getFile();
    if (outputFile == null) {
        return false;
    }
    FileWriter writer = new FileWriter(outputFile);
    movies.writeTo(writer);
    writer.close();
    return true;
}
Now, for completeness, we'll go back to the Swing-based tests and add a testCancelledSaving() for the GUI level:
public void testCancelledSaving() throws Exception {
    JMenuBarOperator menubar = new JMenuBarOperator(mainWindow);
    JMenuOperator fileMenu = new JMenuOperator(menubar, "File");
    fileMenu.push();
    JMenuItemOperator saveAsItem = new JMenuItemOperator(mainWindow,
                                                         "Save As...");
    saveAsItem.pushNoBlock();
    JFileChooserOperator fileChooser = new JFileChooserOperator();
    fileChooser.setSelectedFile(outputFile);
    JButtonOperator cancelButton = new JButtonOperator(fileChooser, "Cancel");
    cancelButton.push();
    FileAssert.assertSize("Saved list has wrong size.", 0, outputFile);
}
This also tests that we can select a file in the file chooser and cancel. It verifies that nothing happens to the file.
SAVE IN GUI
Provide, in the GUI, the capability to save the movie collection to the same file it was
previously saved to.
We took it slowly for the last task since there were a few new ideas involved. Now we'll pick it up a bit
since this task requires only a refinement of what we've done so far on this story. Specifically, we need
to add the ability to resave to the same file without asking the user for a filename again.
Test 56. Telling the logical layer to "Save" causes it to save the current list to the same
file as the previous "Save As" operation.
Test 57. Selecting "Save" from the "File" menu causes the list to be written into the
previously selected file.
Since the required setup for testing this involves what we've already done for testing the Save As
functionality, we'll extend the testSaving() tests.
Test 56: Telling the logical layer to "Save" causes it to save the current
list to the same file as the previous "Save As" operation
    // ...
    editor.add();
    assertTrue("Editor should have resaved.", editor.save());
    FileAssert.assertEquals("Saved file", extendedExpected, outputFile);
    control.verify();
}
Here we've added to the test to add a movie to the list, resave it, and verify the result. To support this,
we need to extend the fixture:
Add the required stub for MovieListEditor.save() and give it a try:
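The stub's listing isn't reproduced here; a minimal sketch, assuming only what the surrounding text tells us (it must compile and simply return false), would be:

```java
// Sketch of the stub: just enough to compile; it always reports
// that nothing was saved.
public boolean save() throws IOException {
    return false;
}
```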
Of course it fails because save() doesn't do anything yet. Most notably, it just returns false. We need
to write some code for save():
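A plausible sketch of save(), reusing the writing logic already shown in saveAs() and assuming the outputFile instance variable discussed next:

```java
// Sketch: write the list to the remembered file, exactly as
// saveAs() did, but without asking the view for a file.
public boolean save() throws IOException {
    FileWriter writer = new FileWriter(outputFile);
    movies.writeTo(writer);
    writer.close();
    return true;
}
```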
To make this compile we need to have an instance variable for outputFile. Add it, compile, and run
the tests. It still fails because we don't initialize the outputFile instance variable. We'll do this in
saveAs(); we need to use the new instance variable rather than the local we had before:
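A sketch of the revised saveAs(), assuming only the change described (the local becomes the outputFile instance variable):

```java
// Sketch: identical to the earlier saveAs(), except the chosen file
// is now stored in the outputFile instance variable for reuse by save().
public boolean saveAs() throws IOException {
    outputFile = view.getFile();
    if (outputFile == null) {
        return false;
    }
    FileWriter writer = new FileWriter(outputFile);
    movies.writeTo(writer);
    writer.close();
    return true;
}
```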
Now the test fails because the resulting file is wrong. Hmm. . . that seems unlikely. Everything looks
good. Run the test in the debugger (set a breakpoint on entry to MovieListEditor.save()) and we
find that the Movie that got added is uncategorized and unrated. Hmm. . . a quick look at
MovieListEditor.add() reveals the problem. It was creating a movie with just a name. Ponder,
ponder, scratch, scratch. . . oh, yes. Our thought was to create a template movie to probe for a
duplicate. We missed going back and filling in the rest of the data. A quick look at the test for adding
(which was written when we had only a name) shows that the mock has no expectations regarding the
category and rating being fetched during an add. Not a nice feeling, discovering that, but it happens.
Because we are practicing TDD, we have a better chance of finding that sort of thing early.
First things first, though. We'll fix add() to get our current test passing:
public void add() {
    String newName = view.getNameField();
    Movie newMovie = new Movie(newName,
                               view.getCategoryField(),
                               view.getRatingField());
    try {
        movies.add(newMovie);
        updateMovieList();
    } catch (DuplicateMovieException e) {
        view.duplicateException(newName);
    }
}
Exercises
Now to clean up MovieListEditor. If we compare save() and saveAs() we see that, other than
the initial fetch of outputFile in saveAs(), they are identical. That's an easy cleanup: we just have
to have saveAs() set outputFile and call save():
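A sketch of the cleanup just described:

```java
// Sketch: saveAs() now only fetches and remembers the file,
// then delegates the actual writing to save().
public boolean saveAs() throws IOException {
    outputFile = view.getFile();
    if (outputFile == null) {
        return false;
    }
    return save();
}
```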
Test 57: Selecting "Save" from the "File" menu causes the list to be
written into the previously selected file
The next step is to do something very similar to the GUI testSaving(). But first we'll do some
damage control and fix up the add-related GUI tests, which we see are also out of date. Actually, I'll
make that an exercise for you.
Exercises
Now the test fails because the file is not being updated. We next need to call save() on the associated
MovieListEditor in response to the Save item being selected:
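A sketch of the handler, mirroring the Save As... handler shown earlier; the saveItem variable is assumed to be the "Save" JMenuItem:

```java
// Sketch: same shape as the saveAsItem handler, delegating to save().
saveItem.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        try {
            myEditor.save();
        } catch (IOException ex) {
            // TODO: deal with this
        }
    }
});
```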
The test now runs green. However, initMenuBar() has grown too large and is starting to smell. A
couple of Extract Method applications (one for each JMenuItem setup) will take care of it:
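A sketch of the result; the helper method names are illustrative, not taken from the original listing:

```java
// Sketch: each menu item's creation and wiring is extracted into
// its own method, leaving initMenuBar() with just the menu structure.
private JMenuBar initMenuBar() {
    JMenuBar menuBar = new JMenuBar();
    JMenu fileMenu = new JMenu("File");
    menuBar.add(fileMenu);
    fileMenu.add(initSaveAsItem());
    fileMenu.add(initSaveItem());
    return menuBar;
}

private JMenuItem initSaveAsItem() {
    JMenuItem saveAsItem = new JMenuItem("Save As...");
    saveAsItem.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            try {
                myEditor.saveAs();
            } catch (IOException ex) {
                // TODO: deal with this
            }
        }
    });
    return saveAsItem;
}

private JMenuItem initSaveItem() {
    JMenuItem saveItem = new JMenuItem("Save");
    saveItem.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            try {
                myEditor.save();
            } catch (IOException ex) {
                // TODO: deal with this
            }
        }
    });
    return saveItem;
}
```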
OK. Green and clean! There's still some duplication in the menu item creation and setup code, but that's
largely unavoidable in GUI construction code where you are creating multiples of the same type of
component.
This task requires us to add support for reading a previously written MovieList back into the system.
Test 58. Reading from an empty stream results in an empty list of movies.
Test 59. Reading from a stream containing the data for a single movie results in a list
containing the single movie.
Test 60. Reading from a stream containing the data for several movies results in a list
containing those movies.
We'll start with reading a MovieList from a Reader. This is a different style of test than the writing
tests and will need a different fixture, so we need a new TestCase.
Test 58: Reading from an empty stream results in an empty list of movies
To make this compile, we need to stub MovieList.read() in the simplest way possible:
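A minimal sketch of such a stub (whether read() is static is an assumption):

```java
// Sketch: the simplest stub just answers an empty list,
// so the caller never receives null.
public static MovieList read(Reader reader) {
    return new MovieList();
}
```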
In this case, you should return what's called a null object. Examples of
this include empty collections.
Green.
Test 59: Reading from a stream containing the data for a single movie
results in a list containing the single movie
The next test will be to read a single movie. We start simply by verifying that exactly one Movie was
read:
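A sketch of such a test, using the pipe-delimited format established by the saving tests; the size() accessor on MovieList is an assumption:

```java
// Sketch: read one line of movie data and check the resulting count.
public void testReadingOneMovie() throws Exception {
    String oneMovieText = "Star Wars|Science Fiction|5\n";
    MovieList movieList = MovieList.read(new StringReader(oneMovieText));
    assertEquals("Wrong number of movies.", 1, movieList.size());
}
```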
Now we need to write some code to read exactly one Movie. We will take a similar approach to the one
we ended up with in the writing code: we will delegate the reading of a Movie to the Movie class:
    // ...
    if (newMovie != null) {
        newList.add(newMovie);
    }
    return newList;
}
Notice the interface design we've done while writing this method. We've decided that if a Movie can't be
read, null is returned instead of a Movie instance. We need this (and the conditional addition to the list)
in order to keep the previous test from failing.
We need to convert from the name of a category (that of the stored Movie) to an instance of the
Category class. We can't just create an instance with the name because Category is a type-safe enum
and as such has a constrained set of instances. If you recall, this is enforced by having a private
constructor. So we need a static method in Category to return the instance corresponding to a name.
This is simply a matter of using a map to store a mapping between the names and the instances:
// ...
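A self-contained sketch of the idea; the lookup method name (named()) and the particular category constants are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class Category {
    // The map must be declared before the constants below, since the
    // private constructor registers each new instance in it.
    private static final Map instances = new HashMap();

    public static final Category SCIFI = new Category("Science Fiction");
    public static final Category HORROR = new Category("Horror");

    private final String name;

    // Private constructor: the set of instances is fixed at class-load time.
    private Category(String name) {
        this.name = name;
        instances.put(name, this);
    }

    // Answer the unique instance corresponding to a stored category name.
    public static Category named(String name) {
        return (Category) instances.get(name);
    }

    public String toString() {
        return name;
    }
}
```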
Now we can compile everything (most notably Movie.readFrom()) and run the tests: green bar!
Next, we have to extend the test to make sure that the Movie that was read was what was expected:
Green bar. We just had to be sure. Also, this will guard us against unintentional changes to the behavior
of the reading code in the future.
Test 60: Reading from a stream containing the data for several movies
results in a list containing those movies
LOAD IN GUI
Provide, in the GUI, the capability to load the movie collection from a file.
Test 61. With the list empty, telling the logical layer to load from a file that contains
data for a set of movies results in the list containing those movies.
Test 62. Choosing "Load" from the "File" menu and selecting a file containing a specific
set of movie data causes the corresponding movies to be placed in the list.
Test 63. With a set of movies loaded, cancelling the load of a different set leaves the
originally loaded set unchanged.
Test 61: With the list empty, telling the logical layer to load from a file
that contains data for a set of movies results in the list containing those
movies
Our first test starts with an empty list, loads a file, and verifies that the expected list was sent to the
view:
    // ...
    control.verify();
}
After adding the required fixture and stubs, we get a red bar due to the false returned by the stubbed
load(). Let's fake it and have load() return true. The bar's still red, but we have a more helpful
failure:
public boolean load()
        throws FileNotFoundException, IOException, DuplicateMovieException {
    File inputFile = view.getFileToLoad();
    if (inputFile == null) {
        return false;
    }
    // ...
We haven't provided any data to be loaded, so load() results in an empty list. This is good, but not
what we're interested in testing here.
Hmm. This is starting to look like a completely different fixture. Maybe we should split
TestMovieListEditorFileOperations into two test cases, one for saving and one for loading.
Exercises
Now let's provide some data to load by setting up the input file in setUp():
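A sketch of that setup, reusing the movie data and pipe-delimited format from earlier fixtures; the inputFile field name is an assumption:

```java
// Sketch: write known movie data into a temp file for load() to read.
protected void setUp() throws Exception {
    super.setUp();
    inputFile = File.createTempFile("testLoad", ".dat");
    inputFile.deleteOnExit();
    FileWriter writer = new FileWriter(inputFile);
    writer.write("Star Wars|Science Fiction|5\n"
               + "Star Trek|Science Fiction|3\n"
               + "Stargate|Science Fiction|-1\n");
    writer.close();
}
```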
Test 62: Choosing "Load" from the "File" menu and selecting a file
containing a specific set of movie data causes the corresponding
movies to be placed in the list
Exercises
Next, we need to add an Open option to the File menu (in SwingMovieListEditorView). Since the
format of this will be almost identical to the two existing menu items, we'll take a big step and just do
it:
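A sketch following the pattern of the existing menu items; the item label and the editor method it invokes are assumptions based on Tests 61-63:

```java
// Sketch: a third File-menu item, wired the same way as Save and Save As...
JMenuItem loadItem = new JMenuItem("Load...");
fileMenu.add(loadItem);
loadItem.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        try {
            myEditor.load();
        } catch (Exception ex) {
            // TODO: deal with this
        }
    }
});
```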
Green!
Test 63: With a set of movies loaded, cancelling the load of a different
set leaves the originally loaded set unchanged
RETROSPECTIVE
In this chapter we added persistence to our application. We did it in the simplest way that could possibly
work: a plain ASCII file, one movie per line, with movie instance variables separated by a delimiter
character. This is basically the same as a Comma Separated Value file except that commas could
reasonably appear in a movie title, so we chose something more unlikely. This let us avoid having to do
something like wrap movie titles in quote characters.
In the process, we've added a File menu to the GUI. Figure 16.1 shows it.
Maybe later we will have need for structured storage or even a relational database. But that is maybe
and later—YAGNI. We did what was barely sufficient for today. Maybe we won't need anything more.
"But," you say, "there's a story for it." So. . . we may not get to that. The customer might decide that
they don't need it; the project may be cancelled before then. Things change. Don't make things more
complex than you have to...until you have to. That's one of the secrets of simple design.
The first step is to start with the innermost stuff. In this case, we are talking about the user being able
to sort the list of movies they see. As we peel off the layers of the onion, we pass through the GUI
layer being able to change the order of the displayed list, the MovieListEditor updating its view's list
in response to a change in the sorting requirements, the sorting of a MovieList, and finally we arrive
at comparing two Movies. Now we'll back out through the layers, testing and implementing as we go.
COMPARE MOVIES
Create the ability to compare two movies based on either name (ascending order) or
rating (descending order). This is needed for sorting.
The first step, at the innermost layer, is adding the ability to compare two Movies. We'll use the
standard Java approach to this. Since, in this case, we need to be able to sort on different attributes
(i.e., name and rating), making Movie implement Comparable doesn't suffice. We will need to create a
Comparator for each ordering we need. Here are the tests:
Test 64. Comparing two movies with the same name, based on name, should answer 0.
Test 65. Comparing two movies, the first having a lexically smaller name, based on
name, should answer a negative integer.
Test 66. Comparing two movies, the first having a lexically greater name, based on
name, should answer a positive integer.
Test 67. Comparing two movies with the same rating, based on rating, should answer 0.
Test 68. Comparing two movies, the first having a lower rating, based on rating, should
answer a negative integer.
Test 69. Comparing two movies, the first having a higher rating, based on rating, should
answer a positive integer.
Test 70. For the purposes of comparison, an unrated movie always has a lower rating.
Test 64: Comparing two movies with the same name, based on name,
should answer 0
Compile, run, green! The test is expecting 0 to be returned, which is what the stub does. The next test
will drive us to generalize.
Test 65: Comparing two movies, the first having a lexically smaller
name, based on name, should answer a negative integer
This test, of course, fails. This is good (have I emphasized this enough?) because it lets us drive the
implementation. Now we need to do some work on MovieNameComparator.compare(). Since we
are ordering by name, and name is a String, and String implements Comparable, it makes sense
to delegate the comparison to the name:
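A sketch of the delegation (this is the dense version that the text shortly goes on to clean up):

```java
// Sketch: compare two Movies by delegating to String.compareTo()
// on their names.
public int compare(Object anObject, Object anotherObject) {
    return ((Movie) anObject).getName()
            .compareTo(((Movie) anotherObject).getName());
}
```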
Green. Now let's do some refactoring. First, we've introduced some duplication in the test class. We can
extract a common fixture:
protected void setUp() {
    starTrek = new Movie("Star Trek", Category.SCIFI, 3);
    comparatorToTest = new MovieNameComparator();
}
Next, have a look at compare(). Something smells very obtuse and ugly. Let's clean it up by extracting
some explaining variables, running the tests after each change. Here's the end result:
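A self-contained sketch of the explaining-variables refactoring; the stub Movie class below stands in for the book's Movie, included only to make the example runnable:

```java
import java.util.Comparator;

// Stand-in for the book's Movie class: just enough to run the comparator.
class Movie {
    private final String name;
    Movie(String name) { this.name = name; }
    public String getName() { return name; }
}

class MovieNameComparator implements Comparator {
    // Each step of the comparison now has a named, explaining variable.
    public int compare(Object anObject, Object anotherObject) {
        Movie aMovie = (Movie) anObject;
        Movie anotherMovie = (Movie) anotherObject;
        String aName = aMovie.getName();
        String anotherName = anotherMovie.getName();
        return aName.compareTo(anotherName);
    }
}
```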
Test 66: Comparing two movies, the first having a lexically greater
name, based on name, should answer a positive integer
As expected, this passes right away. Next, let's look at the other ordering we need: by rating. This is
very similar to what we've just done. The only twist is that the ordering is descending rather than
ascending. First, the fixture:
And now, each of the tests. We'll just go through them one after another, since they're so similar to
what we just did.
Test 67: Comparing two movies with the same rating, based on rating,
should answer 0
Test 68: Comparing two movies, the first having a lower rating, based
on rating, should answer a negative integer
Test 70: For the purposes of comparison, an unrated movie always has
a lower rating
Here's the only difference: With ratings we have the concept of an unrated movie; we didn't have a
corresponding unnamed movie to deal with earlier.
Now we have a way of determining two orderings on a list of Movies. The next step is to use that to
provide MovieList with the capability of sorting.
SORT A MOVIELIST
Add the capability to ask a MovieList to provide its list sorted on a specific attribute; for the
moment this is one of name or rating.
Sorting is different from the category filtering we did earlier. It doesn't change the number of items, only
their order. With that in mind, let's approach this by making sorting an in-place operation. By that
I mean that we will ask a MovieList to sort itself, and it will (potentially) modify the order of its items.
Test 72. Sorting a list, by name, that is already sorted has no effect on it.
Test 73. Sorting a list, by name, that is in reverse order puts it in order.
Test 74. Sorting a list, by rating, that is already sorted has no effect on it.
Test 75. Sorting a list, by rating, that is in reverse order puts it in order.
Test 76. Sorting a list, by rating, that has duplicates puts it in order; the duplicates being in
their original order relative to one another.
Test 77. Sorting a randomly ordered list by name puts it in order, by name.
Test 78. Sorting a randomly ordered list by rating puts it in order, by rating.
We'll tackle name sorting first. We start simply with what it means to sort an empty list.
Not much, but it is enough to make the test compile, and pass.
The next test isn't much different. It tests that sorting an already sorted list doesn't change the order.
Test 72: Sorting a list, by name, that is already sorted has no effect on it
To compile we need to add iterator() to MovieList. We'll skip the stub stage and go right to the real
implementation since it's so obvious: it should return an iterator over the underlying collection:
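A sketch of that one-liner, assuming the underlying collection is the movies Vector used elsewhere in the chapter:

```java
// Sketch: expose an iterator over the underlying collection.
public Iterator iterator() {
    return movies.iterator();
}
```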
Exercises
The next test will take a reverse ordered list, sort it, and verify that it has been sorted.
Test 73: Sorting a list, by name, that is in reverse order puts it in order
Next, we need a reversed list. The straightforward way to do this is something like:
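A sketch of that construction; the fixture field names are illustrative:

```java
// Sketch: build the reverse-ordered fixture by walking the
// sorted fixture list backwards.
reversedMovies = new Vector();
for (int i = sortedMovies.size() - 1; i >= 0; i--) {
    reversedMovies.add(sortedMovies.get(i));
}
```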
If we're doing that we might as well incorporate the construction of the sorted list we used in the last test.
So now we have:
Green!
We need to clean up the tests a bit, though. The most recent test introduced some duplication, specifically,
the code that tests that a MovieList is in sorted order. We can use Extract Method to take care of that:
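One plausible shape for the extracted assertion (the helper name is an assumption); it walks the list checking each adjacent pair of names:

```java
// Sketch: assert that the list's movies appear in ascending name order.
private void assertListSortedByName(String message, MovieList list) {
    Movie previous = null;
    for (Iterator iterator = list.iterator(); iterator.hasNext(); ) {
        Movie current = (Movie) iterator.next();
        if (previous != null) {
            assertTrue(message,
                       previous.getName().compareTo(current.getName()) <= 0);
        }
        previous = current;
    }
}
```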
There, that cleans things up nicely. Next, we need to take care of sorting a MovieList based on rating.
This will be very similar to what we just did, so as we did in the first task, we'll show it all in one step
here, even though we did it one test at a time.
Test 74: Sorting a list, by rating, that is already sorted has no effect on it
Test 75: Sorting a list, by rating, that is in reverse order puts it in order
As expected, everything works. We already have the Comparators working as well as the basic sorting
mechanism. Using a different (but tested) Comparator should work without any problems.
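The mechanism can be sketched as follows (Movie is a pared-down stand-in here; the comparator class names follow the ones the text mentions, but the bodies are assumptions):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Minimal Movie stand-in with just the fields the comparators need.
class Movie {
    final String name;
    final int rating;
    Movie(String name, int rating) { this.name = name; this.rating = rating; }
}

class MovieNameComparator implements Comparator<Movie> {
    public int compare(Movie a, Movie b) { return a.name.compareTo(b.name); }
}

class MovieRatingComparator implements Comparator<Movie> {
    public int compare(Movie a, Movie b) { return a.rating - b.rating; }
}

class MovieList {
    private final List<Movie> movies = new ArrayList<Movie>();

    public void add(Movie movie) { movies.add(movie); }
    public Movie get(int i) { return movies.get(i); }

    // Collections.sort is a stable sort, which is what Test 76 (duplicate
    // ratings keep their original relative order) relies on.
    public void sortUsing(Comparator<Movie> comparator) {
        Collections.sort(movies, comparator);
    }
}
```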
Hmmm... that seemed too easy. Let's try another test. Movie names are unique, but ratings aren't. Let's
write a test that takes a list with duplicate ratings and sorts it.
Test 76: Sorting a list, by rating, that has duplicates puts it in order; the
duplicates being in their original order relative to one another
Green bar. I'm still not confident enough. I want to see it sorting a jumbled list, using both Comparators.
First, a tweak to the fixture:
Test 77: Sorting a randomly ordered list by name puts it in order, by name
Test 78: Sorting a randomly ordered list by rating puts it in order, by rating
That passes all the tests. I'm finally confident enough to move on. I do mean all the tests. The AllTests
suite is getting bigger:
In the last task we did all the work required to show that the sorting capabilities work in the various
situations. For this task, we just need to test that MovieListEditor responds properly to a request to
sort the list. Actually, one test might suffice:
Test 79. Asking the logical layer to sort its list causes the associated view's list to be
updated to one that is in order (order depends on what is being sorted on).
Test 79: Asking the logical layer to sort its list causes the associated
view's list to be updated to one that is in order (order depends on what
is being sorted on)
Add an empty stub for MovieListEditor.sortUsing() and run it. This, of course, fails because
sortUsing() does nothing. We need it to sort the list and update the display:
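A minimal sketch of that step (the view interface is reduced to the single method this needs, and the names are assumptions based on the surrounding text):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// The one view method this step exercises.
interface MovieListEditorView {
    void setMovies(List<String> movies);
}

class MovieListEditor {
    private final List<String> movies;
    private final MovieListEditorView view;

    MovieListEditor(List<String> movies, MovieListEditorView view) {
        this.movies = movies;
        this.view = view;
    }

    public void sortUsing(Comparator<String> comparator) {
        Collections.sort(movies, comparator);          // sort the list...
        view.setMovies(new ArrayList<String>(movies)); // ...and update the display
    }
}
```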
Green bar! That was simple. There's a lesson here that reinforces and validates the way we've been
working: Do the real work in the model without worrying about any sort of user interface. You will spend
most of your time there, and it's the easiest part of a system to write good tests for. It also keeps the
interface layers as thin as possible. This, in turn, makes it easier to replace the interface with something
else. Splitting the interface into operation and presentation layers helps too.
The last task for the sorting story is to add access to the new sorting capabilities to the Swing GUI.
Test 80. Selecting "Sort by name" from the "View" menu causes the displayed list to be sorted
by name.
Test 81. Selecting "Sort by rating" from the "View" menu causes the displayed list to be sorted
by rating.
Since this is conceptually different than our previous GUI tests, and requires a slightly different fixture, we'll
create a new TestCase:
protected void setUp() throws Exception {
    super.setUp();
    SwingMovieListEditorView.start();
    stargate = new Movie("Stargate", Category.SCIFI, -1);
    theShining = new Movie("The Shining", Category.HORROR, 2);
    starWars = new Movie("Star Wars", Category.SCIFI, 5);
    starTrek = new Movie("Star Trek", Category.SCIFI, 3);

    movies = new Vector();
    movies.add(stargate);
    movies.add(theShining);
    movies.add(starWars);
    movies.add(starTrek);

    sortedMovies = new Vector();
    sortedMovies.add(starTrek);
    sortedMovies.add(starWars);
    sortedMovies.add(stargate);
    sortedMovies.add(theShining);

    movieList = new MovieList();
    movieList.add(stargate);
    movieList.add(theShining);
    movieList.add(starWars);
    movieList.add(starTrek);

    mainWindow = new JFrameOperator("Movie List");
    editor = new MovieListEditor(movieList,
        (SwingMovieListEditorView)mainWindow.getWindow());
    menubar = new JMenuBarOperator(mainWindow);
    viewMenu = new JMenuOperator(menubar, "View");
    viewMenu.push();
}

protected void tearDown() throws Exception {
    super.tearDown();
    mainWindow.dispose();
}
}
Test 80: Selecting "Sort by name" from the "View" menu causes the
displayed list to be sorted by name
public void testSortingByName() {
    JMenuItemOperator sortByNameItem =
        new JMenuItemOperator(mainWindow,
            new NameBasedChooser("sortByName"));
    sortByNameItem.push();
    JListOperator movieList =
        new JListOperator(mainWindow,
            new NameBasedChooser("movieList"));
    ListModel listModel = movieList.getModel();
    assertEquals("Movie list is the wrong size", sortedMovies.size(),
        listModel.getSize());
This fails with an error when looking for the View menu. We need to add it to the GUI:
private JMenuBar initMenuBar() {
    //...
    JMenu viewMenu = new JMenu("View");
    menuBar.add(viewMenu);
    viewMenu.add(initSortByNameItem());
    return menuBar;
}

private JMenuItem initSortByNameItem() {
    JMenuItem sortByNameItem = new JMenuItem("Sort by name");
    sortByNameItem.setName("sortByName");
    return sortByNameItem;
}
Now we see a failure due to the list not being sorted after the Sort by name menu item is selected. To deal
with this, we need to add an action listener which will forward the request to the associated editor:
sortByNameItem.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        myEditor.sortUsing(new MovieNameComparator());
    }});
Green.
Test 81: Selecting "Sort by rating" from the "View" menu causes the
displayed list to be sorted by rating
protected void setUp() throws Exception {
    //...
    ratingSortedMovies = new Vector();
    ratingSortedMovies.add(starWars);
    ratingSortedMovies.add(starTrek);
    ratingSortedMovies.add(theShining);
    ratingSortedMovies.add(stargate);
    //...
}
public void testSortingByRating() {
    JMenuItemOperator sortByRatingItem =
        new JMenuItemOperator(mainWindow,
            new NameBasedChooser("sortByRating"));
    sortByRatingItem.push();
    JListOperator movieList =
        new JListOperator(mainWindow,
            new NameBasedChooser("movieList"));
    ListModel listModel = movieList.getModel();
    assertEquals("Movie list is the wrong size",
        ratingSortedMovies.size(),
        listModel.getSize());
And here's the new menu item code to make it pass (done in one bigger step):
private JMenuBar initMenuBar() {
    //...
    viewMenu.add(initSortByRatingItem());
    //...
}

private JMenuItem initSortByRatingItem() {
    JMenuItem sortByRatingItem = new JMenuItem("Sort by rating");
    sortByRatingItem.setName("sortByRating");
    sortByRatingItem.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            myEditor.sortUsing(new MovieRatingComparator());
        }});
    return sortByRatingItem;
}
RETROSPECTIVE
Working on this story illustrated how most of the work can (and should) be done in the guts of the
application. Sometimes this is called the model or problem domain code. Do all you can there, and put
good, well-crafted interfaces in place. This makes it easy, and sometimes trivial, to expose new
functionality in the interface layer(s).
The new GUI features are shown in Figures 17.1 and 17.2.
Here's an interesting story that extends an earlier one. We've previously added a rating to Movie; now
we have to extend that to handle multiple ratings and report the average. Furthermore, we need to be
able to identify the source of each rating. We'll take it one step at a time.
MULTIPLE RATINGS
Add support for multiple ratings rather than just one. Provide access to the average of all the
ratings.
The first task is simply to support more than one rating value, and provide access to an average. Also, we have
to maintain the current behavior (i.e., keep all the existing tests passing) unless it is overridden by the new
behavior. For example, setRating() doesn't make sense anymore.
Test 83. Add a single rating to an unrated movie. Asking for the average rating answers the
rating just added.
Test 84. Add several ratings: 3, 5, and 4. Asking for the average rating answers 4.
Test 85. Add several ratings: 3, 5, 5, and 3. Asking for the average rating answers 4.
Test 86. Writing movies uses the format: <name> | <category> | <total rating> | <number of
ratings>.
Test 87. Reading movies uses the format: <name> | <category> | <total rating> | <number of
ratings>.
Simply chaining to the more general constructor will get this working:
public Movie(String name, Category category) {
    this(name, category, -1);
}
Test 83: Add a single rating to an unrated movie. Asking for the average
rating answers the rating just added
This causes the test to fail, throwing an UnratedException. Let's have addRating() do something:
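A first, deliberately naive cut might look like this (field and method names follow the book's Movie, but this intermediate body is a guess at the step being described):

```java
// Just enough to stop the UnratedException and make Test 83 pass.
class Movie {
    private int rating = 0;
    private int numberOfRatings = 0;

    public void addRating(int newRating) {
        rating = newRating;   // with one rating, the average is the rating itself
        numberOfRatings++;
    }

    public boolean hasRating() {
        return numberOfRatings > 0;
    }

    public int getRating() {
        return rating;
    }
}
```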
Green. Hmm... we have some common fixture in the two tests that should be refactored out. But it's not used
by the previous tests in TestMovie, and what's more, the new tests don't use the existing fixture. We need to
create a new TestCase to correspond with the new fixture:
public static void main(String[] args) {
    junit.textui.TestRunner.run(TestMovieRatings.class);
}
}
Run it to make sure it works as it should (i.e., both tests should pass). Good. Now we can refactor the fixture
into a setUp() method.
Exercises
Test 84: Add several ratings: 3, 5, and 4. Asking for the average rating
answers 4
Red bar! Good, now we have some work to do. The failure is:
Green. Nice. Not an overly elegant solution, but it works...for now. Let's try a few other tests. Remember that
averaging integers involves rounding. That can cause trouble if we don't do it right. Let's try another test with
some other numbers, some that might cause some problems.
Test 85: Add several ratings: 3, 5, 5, and 3. Asking for the average rating
answers 4
This fails:
(3 + 5 + 5 + 3) / 4 = 4
There, this test brings to light a rounding error. Our simple implementation has the potential to make a
rounding error each time a rating is added. This is compounded with each rating. We can do one of three
things:
1. Keep the running average as a floating point, rounding only when the average rating is requested (and
even then, only rounding the value that is returned, not the stored average).
2. Store all ratings, taking all into account when the average is requested.
3. Keep track of a running total and a count of the number of ratings the total represents. Calculate the
average when asked to, from those two items.
public class Movie {
    //...
    private int rating = 0;
    //...
    private int numberOfRatings = 0;

    public Movie(Movie original) {
        name = original.name;
        rating = original.rating;
        numberOfRatings = original.numberOfRatings;
        category = original.category;
    }
    //...
    public boolean hasRating() {
        return numberOfRatings > 0;
    }
    //...
    public int getRawRating() {
        return hasRating() ? (rating / numberOfRatings) : -1;
    }
The next item to work on is reading and writing. Here we are changing the behavior so we need to go back to
the tests, and add support for the rating count.
Test 86: Writing movies uses the format: <name> | <category> | <total rating>
| <number of ratings>
Here's the changed code of the two affected tests in TestMovieListWriting:
These two tests now fail since Movie doesn't write out its rating counter. Here's the new Movie.writeTo()
method that will get these tests to pass:
Test 87: Reading movies uses the format: <name> | <category> | <total
rating> | <number of ratings>
We do the same thing with reading. First the tests, really just the fixture, to force the calculation of an
average... thus requiring the reading of the total and count:
This causes a failure as Movie.readFrom() only reads the total, which it takes as the rating itself. We must
update that method like so:
Notice how we need a new constructor to create the new Movie. We shouldn't need it elsewhere, so we make
it private:
Green again! It feels a little hacky to me, though. We'll have to keep an eye on this. A quick run of all the
(non-Swing) tests shows one failure in TestMovieListEditorSaving. Running the Swing tests yields a
similar failure.
Exercises
RATING SOURCE
Create a class to represent the source for a rating. Keep in mind that a single source
can have many ratings, but not the reverse.
We have implemented multiple ratings, and the averaging of them. Now we will turn to the task of
attaching a source (e.g., The New York Times) to each rating. What should this source look like? Should
it be a new class? Or can we get away with a simple string? For now, all we know is that we need to
attach to each rating an indication of where it came from. For now, at least, we can get by with a
string.
Test 89. If we add a rating with a source to a movie, we should be able to later retrieve
the source associated with the rating.
From discussions with the customer, we learn that we need to be able to support anonymous ratings.
That fits nicely with our work so far, and gives us a good place to start:
Two stubs are needed to make this compile. First, a ratings() method for Movie:
public class Rating {
    public int getValue() {
        return 0;
    }
The test now fails with a NullPointerException due to the null returned by ratings(). Now we
need to do something more here. What should it be? Do we feel we know enough to be confident taking
a larger step? Why not? We can always roll back to where we are now if it doesn't work out.
We know that we want to accumulate a list of ratings, so we can add that to addRating(). We'll start
by adding an instance variable to hold the list:
public class Rating {
    private int value;
    private String source;
Green. Have we introduced any duplication? Yes—have a look at these two lines in addRating():
numberOfRatings++;
ratings.add(new Rating(aRating));
A quick senders check tells us that this is only used in updateMovie(), which in turn is only called
from the interface code. We have two choices here:
2. we can ignore it for now, and deal with it when we get to the interface tasks.
We'll opt for the latter. The crux of the matter is that setRating() is a leftover from the single-rating
system. We are in the process of changing that to a multiple-rating system so, in all likelihood,
setRating() will become obsolete anyway. Since it has no impact on what we are currently focused
on, we can ignore it for now. It would be a good idea, however, to add a short comment to the method
to note what we decided about it. Something similar to this will suffice:
OK, now let's finish the job and remove numberOfRatings (after confirming that the tests all pass).
Green. One more thing we notice during this exercise: the method hasRating() is now:
We can rewrite that in a way that communicates the intent much better:
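The intent-revealing version presumably asks the collection directly; a minimal sketch, with Integer stand-ins for Rating to keep it small:

```java
import java.util.ArrayList;
import java.util.List;

// With numberOfRatings gone, hasRating() can ask the ratings collection
// itself; isEmpty() says what we mean better than a size() comparison.
class Movie {
    private final List<Integer> ratings = new ArrayList<Integer>();

    public void addRating(int value) {
        ratings.add(value);
    }

    public boolean hasRating() {
        return !ratings.isEmpty();
    }
}
```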
Everything still passes. Now we need a test for a rating with an identified source.
We need a new addRating() method in Movie to add a rating with a known source. We start with a
simple stub that does nothing except give us the desired red bar. Now we can add code to it:
OK, green!
Now we can eliminate some duplication we've added. First, look back at the two addRating()
methods:
These are much the same, the only difference being the construction of the Rating. If we extract the
common parts to a new method, we end up with:
We can rewrite the first one to chain to the more general one, like this:
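A sketch of the chained result (the "Anonymous" source string is a placeholder assumption, not the book's literal):

```java
import java.util.ArrayList;
import java.util.List;

// The two addRating() variants differ only in how the Rating is built,
// so the sourceless one chains to the general one.
class Rating {
    final int value;
    final String source;
    Rating(int value, String source) {
        this.value = value;
        this.source = source;
    }
}

class Movie {
    private final List<Rating> ratings = new ArrayList<Rating>();

    public void addRating(int value) {
        addRating(value, "Anonymous");   // chain to the general version
    }

    public void addRating(int value, String source) {
        ratings.add(new Rating(value, source));
    }

    public List<Rating> ratings() {
        return ratings;
    }
}
```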
We still have duplication lurking in the total rating that is being maintained. We need to remove it. In
most cases, this is simply a matter of removing the reference to rating, since it is used in parallel with
the ratings collection. In the cases where it is used in a calculation, we need to do a bit more work.
The obvious case is getRating(). This becomes:
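Something along these lines (a sketch with Integer stand-ins for Rating; rounding once, when the value is asked for, which is the approach the rounding discussion above settled on):

```java
import java.util.ArrayList;
import java.util.List;

// getRating() delegates to the calculated average now that the parallel
// running total is gone.
class Movie {
    private final List<Integer> ratings = new ArrayList<Integer>();

    public void addRating(int value) {
        ratings.add(value);
    }

    public int getRating() {
        return (int) Math.round(calculateAverageRating());
    }

    private double calculateAverageRating() {
        int total = 0;
        for (int value : ratings) {
            total += value;
        }
        return (double) total / ratings.size();
    }
}
```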
The setRating() method needs some attention as well, but only a bit. The significant thing is that to
maintain the current behavior we need a single rating after calling it. We can do this by throwing out
the current set of ratings (recall that this method is a leftover from the single-rating system and will be
gone once the GUI is updated to handle multiple ratings):
The final issue is persistence. We had hoped this could be ignored until we upgraded it to handle
multiple ratings, but we want to clean up the code now. We could just go ahead and leave the
persistence-related tests failing until we revamp the whole area. That could work, but it's not a good
idea to have failing tests lying around. Recall that one of the guidelines is to only ever have one failing
test... the one you are actively working with. So writeTo() becomes:
public void writeTo(Writer destination) throws IOException {
    destination.write(name);
    writeSeparator(destination);
    destination.write(getCategory().toString());
    writeSeparator(destination);
    int ratingToWrite = 0;
    if (hasRating()) {
        ratingToWrite = calculateTotalRating();
    } else {
        ratingToWrite = 0;
    }
    destination.write(Integer.toString(ratingToWrite));
To get the required calculateTotalRating() method we can extract it from
calculateAverageRating(), like so:
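The extraction might come out like this (a sketch, again with Integer stand-ins for Rating):

```java
import java.util.ArrayList;
import java.util.List;

// calculateTotalRating() pulled out of calculateAverageRating() so that
// writeTo() can reuse the total.
class Movie {
    private final List<Integer> ratings = new ArrayList<Integer>();

    public void addRating(int value) {
        ratings.add(value);
    }

    public int calculateTotalRating() {
        int total = 0;
        for (int value : ratings) {
            total += value;
        }
        return total;
    }

    public double calculateAverageRating() {
        return (double) calculateTotalRating() / ratings.size();
    }
}
```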
Now we have only one reading-related test failing: testReadingOneMovie. The problem is related to
the constructor used:
for (int i = 0; i < count; i++) {
    ratings.add(new Rating(rating));
}
}
We faked it before, now we need to do a better job of faking it. Let's think. We are given the total
desired rating value, and the number of ratings that have to add up to that total. What if we did:
if (count > 0) {
    int singleRating = rating / count;
    int lastRating = rating - (singleRating * (count - 1));
    for (int i = 0; i < count - 1; i++) {
        addRating(singleRating);
    }
    addRating(lastRating);
}
}
We just need to finish cleaning out the rating instance variable, and we're done. The core code is
completely converted to using multiple ratings.
REVISED PERSISTENCE
Revamp the persistence capabilities to handle the changes to the rating structure.
The core of the system is using multiple ratings throughout. Now it's time to upgrade the persistence
capabilities. Until now, a flat text file has sufficed to store our data. Now that each Movie has an
arbitrarily sized collection of Ratings, we need something more structured. We could develop our own
file format to handle this, but it's easier and more standard to use XML. We could have used XML from
the beginning almost as easily, but it would have been overkill if we didn't have a real reason for it. . .
which we didn't.
<movielist>
    <movie name="name" category="category">
        <ratings>
            <rating value="value" source="source" />
            ...
        </ratings>
    </movie>
    ...
</movielist>
Test 91. Writing a list containing one movie outputs in the adopted XML format.
Test 92. Writing a list containing one movie with multiple ratings outputs in the adopted
XML format.
Test 93. Writing a list containing multiple movies outputs in the adopted XML format.
Test 95. Reading an appropriate XML stream containing a single movie definition results
in a list containing the single movie defined in the stream.
Test 96. Reading an appropriate XML stream containing more than one movie definition
results in a list containing the movies defined in the stream.
Since we are moving to a more structured storage solution, and it requires more overhead, we'll move
the reading and writing (i.e., the persistence) to a class specializing in that functionality.
We'll start by tossing out the existing version of TestMovieListWriting and writing a new test for
the new storage format. Since we're using XML, let's use the XMLUnit that Tim Bacon told us about in
Chapter 5. Here's the new TestCase. Notice that we've started out with a small fixture. Based on
previous experience, we know we'll need it, and XMLUnit does need some setup:
protected void setUp() {
    String parser = "org.apache.xerces.jaxp.DocumentBuilderFactoryImpl";
    XMLUnit.setControlParser(parser);
    XMLUnit.setTestParser(parser);
    destination = new StringWriter();
    writer = new XMLMovieListWriter(destination);
    movieList = new MovieList();
}
}
And here's the new version of the test for writing an empty MovieList:
This one simple test, with its fixture, lays out the design of the saving system, as shown in Figure 18.1.
This corresponds to the following stubs:
There is no root element, since we are not outputting XML... in fact, we're not outputting anything yet.
Let's start by faking it (note that we need to add the throws clause to MovieListWriter.write() as
well):
That works fine. Now to drive moving from faking to making we need another test.
Test 91: Writing a list containing one movie outputs in the adopted XML
format
When we run this, the call to assertXMLEqual fails. We need to move from faking it to making it... at
least a bit. To pass this test we need:
Well... it passes the test. It is pretty smelly, though. Let's start by extracting some methods. First, we'll
pull out the body of the inner conditional that outputs a rating:
This is much cleaner and will be easier to work with as we add more tests to drive the development.
Here's the next test, one for multiple ratings.
Test 92: Writing a list containing one movie with multiple ratings
outputs in the adopted XML format
This fails because we're only outputting the first rating. We need to generalize this. It's simply a matter
of changing the if to a while:
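The shape of that generalization step can be sketched as follows (the surrounding classes are pared-down stand-ins; only the if-to-while change is the point):

```java
import java.util.Iterator;
import java.util.List;

class Rating {
    final int value;
    final String source;
    Rating(int value, String source) {
        this.value = value;
        this.source = source;
    }
}

class RatingsWriter {
    public String write(List<Rating> ratings) {
        StringBuilder out = new StringBuilder("<ratings>");
        Iterator<Rating> each = ratings.iterator();
        while (each.hasNext()) {          // was: if (each.hasNext())
            Rating rating = each.next();
            out.append("<rating value=\"").append(rating.value)
               .append("\" source=\"").append(rating.source).append("\" />");
        }
        return out.append("</ratings>").toString();
    }
}
```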
"<ratings>" +
    "<rating value=\"4\" " +
        "source=\"NY Times\" />" +
"<ratings>" +
    "<rating value=\"5\" " +
        "source=\"Kent\" />" +
    "<rating value=\"5\" " +
        "source=\"Ron\" />" +
"</ratings>" +
"</movie>" +
"</movielist>";
Movie starWars = new Movie("Star Wars", Category.SCIFI);
starWars.addRating(4, "NY Times");
starWars.addRating(5, "Jason");
Similar to the previous test, this fails because only the first movie is getting output. Again, we must
generalize, this time changing the other conditional to a loop:
This code is pretty ugly and the intent is lost in the multitude of destination.write() calls. We can
use java.text.MessageFormat to clean it up some, and clarify the intent:
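One way that cleanup could look (a sketch: the template strings follow the adopted XML format, but the helper names and exact templates are assumptions):

```java
import java.text.MessageFormat;

// One MessageFormat template per element instead of a run of
// destination.write() calls.
class XMLTemplates {
    private static final String MOVIE_FORMAT =
        "<movie name=\"{0}\" category=\"{1}\">";
    private static final String RATING_FORMAT =
        "<rating value=\"{0}\" source=\"{1}\" />";

    static String movieTag(String name, String category) {
        return MessageFormat.format(MOVIE_FORMAT, name, category);
    }

    static String ratingTag(int value, String source) {
        // Pass the value as a String so MessageFormat applies no
        // locale-specific number formatting.
        return MessageFormat.format(RATING_FORMAT, Integer.toString(value), source);
    }
}
```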
There, much nicer. The next step is to remove all of the previous writing-related code from MovieList
and Movie.
Finally, we need to fix the writing-related code in MovieListEditor to use the new writing code.
We'll start by updating the tests. We need to convert them to expect XML output.
Exercises
Now we must turn to reading. As before, we'll replace the old version of TestMovieListReading
with a new, XML-based version. We know we'll need a similar fixture, so we'll convert the existing one
as a starting point:
Next, we need a starting point. It makes sense to follow the path we did the first time, so we'll start
with a test for reading an empty list.
We need some stubs. We'll follow the same structure as we did when we designed the writer: a general
interface, and a concrete implementation to handle reading XML:
Now we can compile and run the test. It fails with a NullPointerException because we stubbed
read() to return null. Now we can fake it:
The next test requires reading a single Movie, driving the code toward something real.
That's a rather long test, but it really just crawls through the structure that was read and verifies that all
is as expected. This, of course, fails since we faked read(), and it simply returns an empty
MovieList. Let's start by adding the code required to set up and use JDOM to process the XML data:
try {
    theMovielistDocument = builder.build(source);
} catch (JDOMException e) {
    throw new IOException(e.getMessage());
}
MovieList theNewList = processDocument(theMovielistDocument);
return theNewList;
}

private MovieList processDocument(Document theMovielistDocument) {
    return new MovieList();
}
This still fails in the same place, because even though we are now processing the input, we still just
return an empty MovieList. This is a good sign, though, because it does mean that our input XML is
valid, and is being processed. Next, we need to process it and build our objects from the resulting
document. Let's do that a small bit at a time, working our way through the tests as we do. First, we
need to get the right number of Movies:
Movie newMovie = new Movie(name, category);
newMovieList.add(newMovie);
}
return newMovieList;
}
Now the test gets to the check of the average rating. This fails due to an UnratedException being
thrown. This tells us that one movie is being read and the name and category are correct. Next, we'll
add code to process the ratings:
try {
    value = ratingElement.getAttribute("value").getIntValue();
} catch (DataConversionException e) {
    throw new IOException(e.getMessage());
}
String source = ratingElement.getAttributeValue("source");
newMovie.addRating(value, source);
}
newMovieList.add(newMovie);
}
return newMovieList;
}
Green! Now we need to refactor to clean up the mess. We'll extract the processing of movies and
ratings, resulting in:
Note that we had to rename Movie.primAddRating() to addRating() and change its visibility to
public. You may sometimes (I won't go as far as to say often) find that methods that are the result of
refactoring eventually become useful as part of the public interface of the class.
That takes care of reading a single movie. Next, we need to test the reading of multiple movies.
Test 96: Reading an appropriate XML stream containing more than one
movie definition results in a list containing the movies defined in the
stream
This compiles and passes without a hitch. Notice that since we already have a test that verifies that a
movie can be read properly, all we need to do here is verify that a collection can be read.
Now we can move up a level and upgrade the reading-related MovieListEditor tests. Very little
changes in the test, just the creation of the test file in setUp():
The test fails with a file format exception, since MovieListEditor is using the previous reading code.
We need to change its load() method to use the new code:
public boolean load()
    throws FileNotFoundException, IOException, DuplicateMovieException {
    File inputFile = view.getFileToLoad();
    if (inputFile == null) {
        return false;
    }
There—green bar. Now run AllTests to be sure—green bar. The last step is to remove the
readFrom() methods from Movie and MovieList. Run AllTests again for safety and we're done.
//...
saveItem.push();
assertXMLEqual("Resaved file has wrong contents.",
    extendedSavedText,
    contentsOf(outputFile));
}
To show all of a movie's ratings we need to add a list to the GUI in which to show them. For now, we'll
leave the existing rating combo in place so that we don't break the add and update tests. We'll take
care of that in a later task.
This task doesn't require any new tests; we need to update to use the new storage format, though.
First, we'll work on the MovieListEditor. We need to extend the test testSelecting() in
TestMovieListEditor:
All we had to add was the expectations for a call to setRatings() corresponding to each selected
movie. This fails because those calls are not being made. A simple addition to
MovieListEditor.select() does the job (the call to setRatings()):
try {
    view.setRatingField(selectedMovie.getRating() + 1);
} catch (UnratedException e) {
    view.setRatingField(0);
}
}
}
Now testSelecting() passes, but three other tests fail because they were not expecting the
setRatings() call. Instead of editing them to expect it, we can have their mock view ignore those
calls. This is reasonable as they aren't performing tests related to the ratings. There are a handful of
tests that need the following lines added at the beginning:
//...
Now the test fails when checking the ratings since nothing is being done with the new list box. We need
to add some code to setRatings() to take care of that:
That makes it work. For aesthetic reasons, we'd like to use the same format as we did in the movie list,
namely, the rating star icons followed by the text. We'll do something similar to what we did in Chapter
13. We've done this before, so we'll skip the details here, showing only the results. Here's the test case:
protected void setUp() {
    fiveStars = new Rating(5, "Dave");
    threeStars = new Rating(3, "Jason");
    renderer = new RatingRenderer();
    list = new JList();
    list.setBackground(Color.BLUE);
    list.setForeground(Color.RED);
    list.setSelectionBackground(Color.RED);
    list.setSelectionForeground(Color.BLUE);
}

    return this;
}
}
Next, we can just change the GUI code to use the new custom renderer for the rating list:
Test 97. Telling the logical layer to select a movie and add a rating to it, providing a
specific rating value and source, causes that rating to be added to the movie that was
selected.
Test 98. Selecting a movie, setting the rating value and source to specific values, and
pressing the "Add Rating" button causes that rating to be added to the selected movie.
Test 97: Telling the logical layer to select a movie and add a rating to it,
providing a specific rating value and source, causes that rating to be
added to the movie that was selected
We need to add some methods to the MovieListEditorView interface to fetch the values for the new
rating:
OK, this compiles, but fails. The expected calls to the new methods are not being made. Also, the
second call to setRatings() with the new rating is not happening. Here's the failure trace:
But these are exactly the ratings that were expected earlier. What's up? Rating doesn't have an
equals() method. That's what.
We need to digress and write a test for comparing Ratings. Hold on a sec ... won't that leave us with
more than one test failing (the new test for Rating.equals() as well as the currently failing test)?
Yes and no. Yes, in that we'll have two failures reported. No, in that this test should be passing. It's the
comparison that is failing; all requirements of the test are being satisfied. So let's press on and write the
test for Rating.equals():
public int hashCode() {
    int result = 17;
    result = 37 * result + value;
    result = 37 * result + source.hashCode();
    return result;
}
This lets testEquals() pass. Now go back to our previous test. It now passes with flying colors ...
well, OK, just green. The next step is to add rating addition to the Swing GUI. Here's our test.
Test 98: Selecting a movie, setting the rating value and source to
specific values, and pressing the "Add Rating" button causes that rating
to be added to the selected movie
public void testAddingRatings() {
    JListOperator movieList =
        new JListOperator(mainWindow, new NameBasedChooser("movieList"));
    movieList.clickOnItem(0, 1);
This selects Star Wars from the movie list, selects a rating value, enters a rating source, pushes the Add
Rating button, verifies that Star Wars has the expected new rating, and verifies that it was added to the
ratings list.
The initial failure of this test is due to the lack of a rating source field. We easily add one:
Now the test fails due to an incorrect rating being added. We need to fill in the accessors that fetch the
relevant field values:
Now the test passes. Is there anything to clean up? Yes, when we added the component code, we did it
quick and dirty. We need to arrange the rating addition components in a nicer way and pull their setup
into a separate method. We did something very similar before, so here's the final code:
Now that everything has been updated to use multiple ratings, we can remove support for the early
single-rating system.
We will start by removing setRating() in Movie. From there you compile and run the tests. When you
find compile errors or failing tests, you need to investigate and either remove references to already
removed code, or tweak tests to not use deleted functionality. I'll leave this as an exercise to the
reader. Starting with the removal of Movie.setRating(), it took about 30 minutes to clean the
system of the single-rating support. Compile and run the tests (all the tests) continuously during this
process to be sure you don't overdo it.
RETROSPECTIVE
This has been a rather involved story. Looking back, we should have split it into two or more smaller
stories. We did a lot of work in this chapter. We moved from a simple, single numeric rating per movie
to a multirating system, each with a source annotation.
As part of the process, we had to revamp our persistence mechanism, moving it to its own set of
classes as well as moving to a more structured format. This provides an example of letting the
immediate requirements drive the addition of complexity to the system.
More and more, XML is a commonplace standard. It is the basis of many pieces of internet infrastructure.
Maybe we should consider XML as the simplest thing when our data has any amount of structure. This
story certainly would have been easier to do if we had used XML from the beginning rather than a flat
text file.
Here are the tests we want to pass when we've finished this task:
Test 99. A rating created without a review should answer an empty string when asked
for its review.
Test 100. A rating created with a review should answer that string when asked for its
review.
We need to extend the concept of a rating. Before doing this, we should make sure that Rating is well
encapsulated. When we look at how ratings are added to movies, we find these methods in Movie:
If we look further, we find that the last two methods reflect the constructors for Rating:
This is starting to smell like Shotgun Surgery, in that if we add a constructor that takes a String
review, we will likely want to add a corresponding addRating method. This is not a good trend. We
need to clean this up before moving forward.
To do this we must replace calls to the complex addRating methods with a call to
addRating(Rating) with a nested Rating construction. For example, the first line below needs to be
replaced with the second:
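The two example lines are elided in this excerpt. The shape of the replacement, with assumed argument values and trimmed-down stand-in classes, is along these lines:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins; names and argument values are assumptions.
class Rating {
    final int value;
    final String source;
    Rating(int value, String source) { this.value = value; this.source = source; }
}

class Movie {
    private final List<Rating> ratings = new ArrayList<Rating>();

    // The surviving method: callers construct the Rating themselves.
    public void addRating(Rating rating) { ratings.add(rating); }
    public int ratingCount() { return ratings.size(); }
}

class ReplacementExample {
    public static void demo(Movie movie) {
        // Before (removed convenience overload): movie.addRating(5, "Dave");
        // After: a nested Rating construction passed to addRating(Rating).
        movie.addRating(new Rating(5, "Dave"));
    }
}
```

The point of the change is that Movie no longer mirrors every Rating constructor with its own overload; callers build the Rating and hand it over.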
After changing each line, we compile and rerun all the tests. This keeps us from introducing errors
accidentally. If we do introduce one, we know immediately, while the code is still fresh in mind and in the editor.
Once we have replaced all references to a method, we can delete it. That will leave only one
addRating() method: the one that takes a Rating.
Now we need to design our review handling. First, we think about what it means not to have a review.
We should maintain backward compatibility if it is possible, and in this case it should be. If we use the
current constructors that don't take a review, the resulting Rating shouldn't have a review. Now, we
write this up as a test:
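The test listing is elided here. A self-contained sketch of Test 99 follows; the book's version would be a JUnit TestCase using assertEquals(), and the Rating stub below is an assumption:

```java
// Hypothetical stand-in for Rating at this point in the story: created
// without a review, it answers an empty string.
class Rating {
    private final int value;
    private final String source;

    Rating(int value, String source) {
        this.value = value;
        this.source = source;
    }

    // Test 99: a rating created without a review answers an empty string.
    public String getReview() { return ""; }
}

class TestRatingReviewSketch {
    public static void testCreatedWithoutReviewAnswersEmptyString() {
        Rating aRating = new Rating(3, "Dave");
        if (!"".equals(aRating.getReview()))
            throw new AssertionError("Expected an empty review");
    }
}
```

Returning the empty string unconditionally is the "fake it" stage; Test 100 will force a real implementation.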
Now we should move aRating into the common fixture, and remove it from testEquals():
Test 100: A rating created with a review should answer that string when
asked for its review
Next, we need to ask, "What if a Rating like that has a review?" and "How does a Rating get a
review?" The answer to the first question is that it should be able to answer it when asked. The answer
to the second is that we'll start with the simplest approach, which is via a constructor. If we need to set
the review after creation, we'll add the capability then. We express these answers as a test:
Since we're happy with how things are going and we feel confident, we'll take a bigger step and rework
Rating's constructors to accommodate the review argument, as well as update getReview() to use
the new field:
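The reworked constructors are not shown in this excerpt. A sketch of what they might look like, with assumed field names:

```java
// Hypothetical sketch of the reworked Rating constructors; field and
// parameter names are assumptions consistent with the surrounding text.
class Rating {
    private final int value;
    private final String source;
    private final String review;

    // The original constructor keeps its old behavior: no review means
    // an empty review string, preserving backward compatibility.
    public Rating(int value, String source) {
        this(value, source, "");
    }

    public Rating(int value, String source, String review) {
        this.value = value;
        this.source = source;
        this.review = review;
    }

    public String getReview() { return review; }
}
```

Chaining the two-argument constructor to the three-argument one keeps the "no review" decision in exactly one place.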
Our new test passes, and all the previous ones still do. This might be a good time to rerun AllTests.
Green.
SAVE REVIEW
Add support to persistence for saving reviews.
Test 101. If a rating has an attached review, the review is saved as the text of the
<rating> element.
We already have tests in place for saving without any reviews. Nothing there will change. We also don't
need to do anything relating to the saving mechanism in the interface components. All we need to worry
about is that if there are reviews, they get saved properly. We can do this at the lowest level, in
TestMovieListWriting. We just need a single test that verifies that saving with a review works.
Test 101: If a rating has an attached review, the review is saved as the
text of the <rating> element
This fails since the XML comparison fails. To get it passing, we need to work on
XMLMovieListWriter:
Notice how we've added a second format for ratings and renamed the previous one to better reveal its
intended use. We also need to add a query (hasReview()) to Rating:
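The hasReview() listing is elided. One plausible implementation, following the empty-string-means-no-review convention established earlier (the trimmed-down constructor is just for this sketch):

```java
// Trimmed-down sketch: only what's needed to illustrate hasReview().
class Rating {
    private final String review;

    Rating(String review) { this.review = review; }

    // Assumed implementation: a rating "has" a review when the review
    // text is non-empty.
    public boolean hasReview() {
        return review.length() > 0;
    }
}
```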
With all of this in place, the test passes. Now we need to clean up. When we added the second rating
format we introduced some duplication in the format strings. Ratings should have a common core format
whether there is a review or not. We need to refactor to eliminate the duplication and make the
intended commonality obvious:
Make sure all the tests still pass, and that should do it.
LOAD REVIEW
Add support to persistence for loading reviews.
Similarly to the previous task, we only need to concern ourselves with reading a MovieList that
contains a review.
Test 102. Reading an XML stream that contains text for a rating element results in that
rating having an attached review.
We need to add a test to TestMovieListReading. Just for fun, we'll take a slightly different
approach this time. The existing test for reading a movie uses a fixture with a movie that has two
ratings. Let's add a review to one of them.
Test 102: Reading an XML stream that contains text for a rating element
results in that rating having an attached review
// ...
assertFalse("First rating should not have a review.",
            firstRating.hasReview());
// ...
Rating secondRating = (Rating) ratingIterator.next();
// ...
assertTrue("Second rating should have a review.",
           secondRating.hasReview());
As expected, this test fails on the assertion that the second rating has a review. To correct that, we
need to revisit processRating() in XMLMovieListReader:
Green.
DISPLAY REVIEW
Add support to the GUI to display reviews.
Now that we can handle reviews internally, it's time to make them available to the user.
Test 103. Telling the logical layer that a rating is selected that has no review results in the
view being told to put an empty string in the review field.
Test 104. Telling the logical layer that a rating is selected that has a review results in the
view being told to put the review string in the review field.
Test 103: Telling the logical layer that a rating is selected that has no
review results in the view being told to put an empty string in the review
field
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.select(0);
    editor.selectRating(0);
    control.verify();
}
Now we can run the new test. It fails because the expected call to setRatingReviewField() was not
made. A simple change will get the test passing:
public void selectRating(int i) {
    view.setRatingReviewField("");
}
Now we've faked it. Our next test will drive us to make it work properly.
Test 104: Telling the logical layer that a rating is selected that has a review
results in the view being told to put the review string in the review field
This test is very similar to the previous one, except that a review is present:
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.select(0);
    editor.selectRating(1);
    control.verify();
}
Now we need to make selectRating() do more than set an empty string; it has to fetch the review
corresponding to the selected rating and send it to the view:
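The listing is elided here. A self-contained sketch of the idea, with trimmed-down stand-ins for the view interface and domain classes (all names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins trimmed to just what selectRating() needs.
interface MovieListEditorView {
    void setRatingReviewField(String review);
}

class Rating {
    private final String review;
    Rating(String review) { this.review = review; }
    String getReview() { return review; }
}

class Movie {
    private final List<Rating> ratings = new ArrayList<Rating>();
    void addRating(Rating r) { ratings.add(r); }
    Rating getRating(int i) { return ratings.get(i); }
}

class MovieListEditor {
    private final MovieListEditorView view;
    private final Movie selectedMovie;

    MovieListEditor(MovieListEditorView view, Movie selected) {
        this.view = view;
        this.selectedMovie = selected;
    }

    public void selectRating(int i) {
        // Fetch the review for the selected rating and push it to the view.
        view.setRatingReviewField(selectedMovie.getRating(i).getReview());
    }
}
```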
The compiler (or the editor if we are using an advanced IDE) tells us that we need to write
getRating(int). When we look at Movie we see that we have getRating(), which returns the average
rating. Now we add getRating(int). This is confusing: the former returns an int that is the average of
all the ratings, while the latter returns the Rating at the given index. We make a note to refactor to make
the intent clearer by renaming getRating() to getAverageRating(). While we're at it,
getRawRating() smells funny. We'll have a closer look at it as well.
But let's get back to the problem at hand. Here's our new getRating(int):
public Rating getRating(int i) {
    return (Rating) ratings.get(i);
}
OK, green!
Now let's do those refactorings. The rename is easy, so we won't go into detail here. Just be sure to run
AllTests after each small change if you are refactoring by hand, and afterward if you are using an
automated rename.
The next item on our list is a bit more involved. The method in question, getRawRating(), is used only in
MovieRatingComparator:
Its only use is to return -1 if the movie is unrated rather than throwing an exception. We have both a query
to check if there is a rating, and an exception that is thrown if the rating of an unrated movie is
requested. We should really have only one. I suggest that we refactor to get rid of the exception. We can do
this by renaming getRawRating() to getAverageRating(), so that we now have:
The rename took care of the above compare() method, so it's not an issue anymore. This rename broke the
compilation of a few methods. The first is in CustomMovieListRenderer:
We can simplify this further since movieToRender.getAverageRating() now returns -1 if the movie is
unrated. The icon index calculation will now work in that case, so we have:
Here, we can't just strip off the exception-related code... our test relies on it. But we can easily rewrite the
test to use the tweaked behavior:
This test is redundant. The test we just updated tested the very same behavior. We can simply delete this
test.
Now we run AllTests to verify that everything passes. OK, done. We've just cleaned up our code and made
it clearer. High five!
Test 105. Selecting a rating that doesn't have a review puts an empty string in the review
field.
Test 106. Telling the logical layer that a rating is selected that has a review results in the
view being told to put the review string in the review field.
Our first test will verify correct behavior when there is no review as well as drive the evolution of the GUI
itself.
Test 105: Selecting a rating that doesn't have a review puts an empty string
in the review field
This fails because there isn't a TextArea named "review". Let's fix that in
SwingMovieListEditorView:
    JScrollPane scroller =
        new JScrollPane(ratingReviewField,
                        ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS,
                        ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER);
    return scroller;
}
The method initRatingsPane() is getting rather large; we should make a note to clean it up (by
extracting the construction of the rating detail pane) later.
This addition makes our test pass so we can do that refactoring. Here's the result:
Test 106: Telling the logical layer that a rating is selected that has a review
results in the view being told to put the review string in the review field
Now we need a test that verifies the behavior when there is a review to be displayed. First we need to tweak
the fixture to add a rating with a review:
This fails since setRatingReviewField() currently does nothing. It needs to fill in the new JTextArea:
The test still fails. Let's look at the code for the rating list:
    JScrollPane scroller =
        new JScrollPane(ratingList,
                        ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS,
                        ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER);
    return scroller;
}
In playing with the application, we see that there are a couple of visual, aesthetic bits of weirdness:
1. Widgets resize oddly when the window size is changed; combo boxes and text fields get the extra height,
but the review text area doesn't.
2. When a rating is selected, its review is displayed in the rating-detail area, but its value and source
aren't. This looks odd.
The first issue we can fix simply by overriding the maximumSize() method of the combo boxes and text
fields, like this:
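The listing is elided in this excerpt. A sketch of the idea follows; note that in Swing the method to override is getMaximumSize(). Capping each widget's maximum height at its preferred height keeps it from absorbing extra vertical space when the window grows. The factory-method packaging is an assumption for illustration:

```java
import java.awt.Dimension;
import javax.swing.JComboBox;
import javax.swing.JTextField;

// Hypothetical sketch: anonymous subclasses that cap the maximum height
// at the preferred height so the layout gives extra space elsewhere.
class FixedHeightWidgets {
    public static JComboBox ratingValueField() {
        return new JComboBox() {
            public Dimension getMaximumSize() {
                return new Dimension(Integer.MAX_VALUE,
                                     getPreferredSize().height);
            }
        };
    }

    public static JTextField ratingSourceField() {
        return new JTextField() {
            public Dimension getMaximumSize() {
                return new Dimension(Integer.MAX_VALUE,
                                     getPreferredSize().height);
            }
        };
    }
}
```

With BoxLayout-style layouts, components grow up to their maximum size, so this override is what actually stops the odd vertical stretching.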
The second issue is more involved, and is more functional. We need tests to drive it. We can do this quickly
and easily by extending the tests we just wrote. First, in TestMovieListEditor:
public void testRatingSelectionWithoutReview() {
    // ...
    mockView.setRatings(starWars.getRatings());
    control.setVoidCallable(1);
    mockView.setRatingValueField(5);
    control.setVoidCallable(1);
    mockView.setRatingSourceField("Anonymous");
    control.setVoidCallable(1);
    mockView.setRatingReviewField("");
    control.setVoidCallable(1);
    // ...
}
Now the test fails since these methods are not being called. We now need to add code to
MovieListEditor.selectRating():
This drives us to implement the two methods we added to the interface earlier:
That gets the test to pass. Things look much better now.
ADD A REVIEW
Add support to the GUI to enter a review as part of adding a rating.
There are no new tests for this task, just some enhancements of existing ones.
We start by extending testAddingRating in TestMovieListEditor, since that behavior now
includes a review:
// ...
mockView.getRatingValueField();
control.setReturnValue(2, 1);
mockView.getRatingSourceField();
control.setReturnValue("Dave", 1);
mockView.getRatingReviewField();
control.setReturnValue("Not bad.", 1);
// ...
}
The test now fails because this new method is not being called. To get the test to pass we need to
extend addRating() in MovieListEditor:
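The extended addRating() is not shown here. A self-contained sketch, with the view accessor names taken from the mock expectations above and the domain stubs assumed:

```java
import java.util.ArrayList;
import java.util.List;

// The accessor names come from the mock expectations in the text; the
// domain stubs are trimmed-down assumptions.
interface MovieListEditorView {
    int getRatingValueField();
    String getRatingSourceField();
    String getRatingReviewField();
}

class Rating {
    final int value;
    final String source;
    final String review;
    Rating(int value, String source, String review) {
        this.value = value;
        this.source = source;
        this.review = review;
    }
}

class Movie {
    final List<Rating> ratings = new ArrayList<Rating>();
    void addRating(Rating r) { ratings.add(r); }
}

class MovieListEditor {
    private final MovieListEditorView view;
    private final Movie selectedMovie;

    MovieListEditor(MovieListEditorView view, Movie selected) {
        this.view = view;
        this.selectedMovie = selected;
    }

    public void addRating() {
        // Build the Rating from all three view fields, now including the review.
        selectedMovie.addRating(new Rating(view.getRatingValueField(),
                                           view.getRatingSourceField(),
                                           view.getRatingReviewField()));
    }
}
```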
That does it. We're done. Figure 19.1 shows the final state of the application GUI.
RETROSPECTIVE
Adding reviews to the system would seem, at first glance, to be a large task. It touches all aspects of
the system from persistence to GUI. In fact, it was very simple and straightforward to add. This is one
of the benefits of Test-Driven Development, refactoring, and programming by intention: it is easier to
extend the code. Making a seemingly profound addition or change is no longer something to be feared.
THE DESIGN
Figure 20.1 shows the class diagram. Our design is fairly simple—we don't see any classes that are
overly large or small, and none that are heavy on instance variables or operations. Classes are not
overly connected. Overall, the design looks well balanced. Even though this is a small project, we should
be able to maintain this level of design balance through slavishly practicing TDD and refactoring. In the
next sections we will look more objectively at the design by collecting metrics that give us a more
mathematical view of the design's quality. One thing to consider when looking at a design is how code is
distributed:
Are there any data or function classes? These are classes that consist mostly or entirely of
instance variables or methods, respectively.
To determine the answers to these questions, we can collect metrics on the code. We did this using
TogetherControlCenter. For each class, we collected five metrics: [1]
This isn't a valid metric for measuring productivity or progress. Rather, it provides a high-level view of
where the code is. It gives a quick way to find classes that might be too large (and are possibly doing
too much and should be broken up) or too small (and are possibly not doing enough to justify their
autonomy and should be merged into another class). See Figure 20.2.
It is most interesting to compare the previous two metrics, as shown in Figure 20.3. An extreme
imbalance between these values is an indication that there may be a problem. A class that is mostly or all
attributes could be a data class. Conversely, a class that is all or mostly operations could be a function
class. However, keep in mind that there should generally be more operations than attributes. Another
point to remember is that accessors and mutators (aka, getters and setters) will show up as operations.
This is misleading in that they are really just ways of getting to the attributes and do not often provide
any functionality. As such, they should be considered part of the attribute.
The percentage of methods that do not access a specific attribute, averaged over all attributes in the
class. A high value of cohesion (a low lack of cohesion) implies that the class is well designed. See Figure
20.4.
The thing to remember is that these are just metrics. They don't tell you anything conclusive about the
quality of your code. All they can do is give you indications of how things are going, and where to look
for potential problems.
This is intended to measure the complexity of a class, assuming that a class with more methods than
another is more complex, and that a method with more parameters than another is also likely to be
more complex. See Figure 20.4.
Our classes fare quite well according to this metric. A comparatively high value for WMPC2 could indicate
a class that is too complex and should be split up.
Based on these metrics, our project is in good shape. The Movie class is significantly more complex
than most, but is also very cohesive. Since Movie is arguably the core class in the system, this is not
unexpected.
The graph in Figure 20.5 shows a comparison between the amount of application code and test code,
broken into GUI and non-GUI-related classes in each case. This is a very raw measurement, but it gives
an overall impression of how much we are testing. There should be at least as much test code as
application code; twice as much is better.
From the graph we can see that for the GUI there is a bit more test code than application code. GUIs
tend to be heavy in terms of the amount of code. Properly done (i.e., highly decoupled) GUI code is
primarily component creation, layout, event handling (deferring to the domain code), and presentation.
Since this is straightforward code it doesn't need to be tested as heavily as complex application logic.
We can also see that there is approximately twice as much test code as application code for the non-
GUI portions. This is fine, but we should start to be concerned if the test:application ratio is much more
than 2:1.
TEST QUALITY
Jester Results
Running Jester on our application code results in a variety of changes that do not cause any test to fail.
Once we trim off changes made to comments and the boilerplate (e.g., standard optimizations in
equals(), toString(), hashCode(), etc.), we are left with a small number of potential
shortcomings, which are discussed in the following sections.
MovieListEditor
1. The case when no movie is selected. This includes deselecting a selected movie (by calling
select(-1)) as well as performing operations on the selected movie when one isn't selected
(specifically update() and addRating()).
These cases are tested somewhat in the GUI tests, which were not run as part of this analysis. These
results do point out problems with our test coverage that need to be remedied.
XMLMovieListWriter
This is an interesting case that can actually lead to simplifying the code. If you recall, before we
added support for reviews, the XML element rating had no text, only attributes. When we added
reviews, we placed the review in the text of the rating tag. We chose to add a second output format
to handle the case where the Rating being written had a review:
Jester found that it could change the condition of the if to be always true and the tests still passed.
This implies that we can always simply use the first option of the if statement. Why is that? Remember
that the tests compare XML structure, not text. The structure is the same in each case when there is no
review: either an element with no text, or an element with empty text. These are the same once the
XML is parsed.
Much simpler and cleaner. This is an unexpected benefit of running Jester on our code: the uncovering of
some duplication.
RatingRenderer
The issue here is due to a shortcut we took when we skipped testing the color based on selected status.
If you recall, we did that because this code was identical to the code in the movie renderer which we
tested more thoroughly. Maybe we shouldn't have taken that shortcut?
There are a few places where we could use an additional test or two. We didn't write any tests related
to problems that can occur when opening files, for example.
NoUnit Results
It took a while to work through several bugs in NoUnit that prevented it from running on our project
code. These were all known bugs with workarounds available. After making the required changes, we had
NoUnit running and providing results. These results were interesting, but not entirely surprising. The
output of NoUnit's analysis is too large to reproduce here, but you can see it on our Web site with other
support material for this book [URL 65].
The anonymous subclasses (e.g., event handlers) and component construction methods in the GUI are
tested either not at all, or not very well. These functions are typically called from the GUI framework,
and not our test code. In the GUI-based test cases they are called indirectly from the test case, again
through the Swing framework.
Likewise, the custom renderers are not well tested. They are mostly boilerplate and their methods are
called from the GUI framework.
Other methods that tend not to be well tested are support methods such as toString(), equals(),
and hashCode(). Remember that we didn't explicitly test the latter two. We used EqualsTester to
do that, so NoUnit didn't pick up the calls to them.
As we scan through the NoUnit output, we can see that the primary functional methods of the core
classes are all marked green, indicating that they are tested directly.
Clover Results
Running Clover on the project was easy, requiring only a few simple additions to the Ant build script.
Results, shown in Figure 20.6, were good. Areas not tested were primarily methods such as toString() and
hashCode(). One nice feature of Clover that we didn't make use of is the ability to mark sections of
code that should be ignored when calculating coverage. This allows us to mark methods like
toString() and not have them included in the coverage results. In some cases the untested code was
error-handling code that needs to be addressed. For example, there was no testing of error handling in
the read and write code.
Our tests are in good shape. Jester found a couple of places where we could extend our test coverage—
one where we were lazy and one where we missed testing for specific GUI behavior (the deselect case).
The latter would best be dealt with by specifying what should happen when list items (both movie and
rating) are deselected, and hence there is no selection. With that story in place, we would write tests
and evolve the behavior to match. Jester also found an opportunity for simplification. This in itself is
reason enough to use it occasionally.
NoUnit's report was reassuring: all of the important functionality was directly tested. Things found not to
be tested were largely because they were called indirectly through Jemmy and the Swing framework.
Thus, while NoUnit didn't detect the calls, they are being tested.
Likewise, the results from Clover weren't bad. They could be better, though. We can learn from this. The
biggest problem was that we checked test coverage and quality once at the end. This is something that
really should be done regularly, preferably as part of the build process. By keeping our eye on this
feedback we can make corrections while they are minor. The same applies to running audits and metrics
on the project. If we have an automated tool to do this (which is really a requirement), there's no
excuse not to be doing it frequently. This, again, lets us take corrective action before we have a
significant problem to deal with.
There are dangers in overusing mocks. The primary one is that it can make refactoring more difficult
since mocks of the refactored classes need to be modified to match the new behavior. Hand-built mocks
(whether using MockObjects with or without MockMaker, or not) need to be maintained like any other
class in the system. On the other hand, using EasyMock can lead to overspecification in tests if you
aren't careful.
Like all powerful tools, mocks are incredibly helpful and beneficial when used responsibly. Anything used
to excess has the potential to create problems. Mocks are no different.
GENERAL COMMENTS
One purpose in writing this book was to aid you, the reader, in your quest to improve your skills and your
code. Another purpose was to let me think through the issues involved in TDD, mainly by having to express
them clearly and understandably. I've written this section very near the end of the process, and have taken
the opportunity to read through the book, looking for things that I've learned or become more convinced of.
I have, for the most part, resisted the temptation to go back and revise the code in this book. As such, it
serves as a fairly accurate picture of how the development went. And as such, there are a few things I
would have done differently.
One thing is that I would have used mocks more during the development of the GUI. This was a very
simple application so we incurred no real penalty from using the real application code behind the GUI in our
tests. Had it been a more involved application, the tests would have been noticeably slow. Mocks would help
avoid that.
Another thing—one more obvious and somewhat more embarrassing in hindsight—jumped out and hit me as
I read through the book. Think back to the first task, specifically the test for containment in the list. I'll
replicate it here for convenience, hoping that my editors don't refactor away the duplication:
1. Checking for the containment of two movies is redundant and doesn't add much, if anything.
2. There are three assertions in this single test. I feel even more strongly now that you should always
strive for a single assertion per test. In a perfect world, setUp() would build the fixture, and each
test method would make a single assertion about it. Test methods this simple cease to require the
message parameter for the assert.
DEBUGGING
We didn't talk about the time we spent laboriously single-stepping through our code searching for bugs.
Or about painstakingly poking through data structures looking for that bad value. We didn't talk about it
because we didn't do it. The write-up of the project reflects exactly what happened, down to each line of
code we wrote. We did omit most of the times we ran the tests... because we did it constantly (and
painlessly). We ran the tests after every change, after every compile. And many times just to be sure.
So... what about the debugger? Well, honestly, we never used it. I was telling someone about Eclipse
and they asked what the debugger was like. After a moment of thought, I said something like, "Pretty
good...I guess...I haven't used it more than a couple times...and that was mainly out of curiosity." I'd
been using Eclipse almost daily for over a year at that point. He couldn't believe it.
So what's going on here? Well, once you get good at TDD you don't have much use for a debugger. This
is because you are working in very small increments, adding very little code at a time.
You write a test, then write just enough code to make it pass. Once it passes, you're done with that
chunk of work. When you're refactoring, you start with all the tests passing. After each small change
you rerun the tests (likely, a localized subset of the entire suite). If they don't all pass, the problem is
in that last little change you made. Because your tests are watching your code for bugs in such tiny
increments, you never have much trouble finding a bug. It's hard to believe how well this works until
you try it yourself.
Even if you were feeling brave (or foolhardy) and made a big change between tests, you simply back
that change out and go through it again in smaller steps... running the tests after each step. (This is a
common and humbling lesson when you are first learning TDD.)
So, yes, TDD sounds like a lot of extra work upfront (as my friend above commented), and certainly it is
more work upfront than many of us are accustomed to. But it actually saves much more work on the
backend (debugging) than it costs upfront. As nonintuitive as it may be, TDD saves you time. You'll find
that once you acquire the feel for it, you'll work much faster than when you were using the code and
debug approach. And you don't have to wait weeks to discover this higher speed: it will be obvious after
just a few hours, and certainly after a couple of days. You just won't be spending all that backend time
sleuthing for bugs. You'll be making rapid, steady, bug-free progress, task after task. As I keep saying,
you will find it exhilarating.
LIST OF TESTS
Test 1: defined on Pg. 207 and implemented on Pg. 207
Adding a movie to an empty list should result in a list with a size of one.
Adding two movies to an empty list should result in a list with a size of
two.
The logical layer should send the appropriate list of movies to the view for
display.
Test 6: defined on Pg. 219 and implemented on Pg. 221
The GUI should have a listbox and should display a list of movies in it as
requested.
When the logical layer is asked to add a movie, it should request the
required data from the view and update the movie list to include a new
movie based on the data provided.
The GUI should have a field for the movie name and an add button. It
should answer the contents of the name field when asked, and request
that the logical layer add a movie when the add button is pushed.
Changing the name of a movie results in it using the new name hereafter.
Indicating, to the logical layer, that a selection is made from the list
causes the view to be given a value for the name field, that is, the
selected movie's name.
Selecting from the list causes the name field to be filled in with the
selected movie's name.
Asking the logical layer to add a duplicate movie causes it to inform the
presentation layer that the operation would result in a duplicate.
Trying to add a movie that is the same as one in the list results in the
display of a "Duplicate Movie" error dialog.
Trying to rename a movie to the name of one in the list results in the
display of a "Duplicate Movie" error dialog.
When asked for the renderer component, the renderer returns itself.
When given a movie to render, the resulting text and rating image
correspond to the movie being rendered.
When rendering an unselected item, the renderer uses its list's unselected
colors.
When rendering a selected item, the renderer uses its list's selected
colors.
Updating a movie changes its rating if a different rating was selected for
it.
Updating a movie in the GUI changes its rating if a different rating was
selected for it, and updates the display accordingly.
A movie that hasn't explicitly been given a category should answer that it
is uncategorized when asked for its category.
Trying to create a movie with an invalid category (i.e., not from the
predefined set) throws an exception.
Telling the logical layer that a movie is selected causes the presentation
layer to be told the category to display.
Telling the logical layer to update and providing it with data that indicates
a category change results in the GUI layer being given a new set of
movies with that change reflected.
Selecting a movie from the list, changing the value of the category, and
pressing Update updates the data for that movie. When that movie is
selected again, the new category is displayed.
Test 42: defined on Pg. 289 and implemented on Pg. 290
Asking for a subset for the ALL category answers the original list.
Telling the logical layer to select a specific movie in a filtered list rather
than from the complete list actually selects the appropriate movie, in spite
of being selected from a sublist.
Telling the logical layer to update a specific movie in a filtered list rather
than from the complete list actually updates the appropriate movie
properly.
Test 47: defined on Pg. 294 and implemented on Pg. 298
Writing a list of one movie should write that movie to the stream in the
format <name> | <category> | <rating>.
Test 52: defined on Pg. 309 and implemented on Pg. 312
Writing a list containing several movies writes each movie to the stream,
one per line, in the format of the previous test.
Selecting "Save As" from the "File" menu prompts for a file using the
standard file chooser. The list is written into the selected file.
Telling the logical layer to "Save" causes it to save the current list to the
same file as the previous "Save As" operation.
Test 57: defined on Pg. 322 and implemented on Pg. 325
Selecting "Save" from the "File" menu causes the list to be written into
the previously selected file.
Reading from a stream containing the data for a single movie results in a
list containing the single movie.
Reading from a stream containing the data for several movies results in a
list containing those movies.
With the list empty, telling the logical layer to load from a file that
contains data for a set of movies results in the list containing those
movies.
With a set of movies loaded, cancelling the load of a different set leaves
the originally loaded set unchanged.
Comparing two movies with the same name, based on name, should
answer 0.
Comparing two movies, the first having a lexically smaller name, based on
name, should answer a negative integer.
Comparing two movies, the first having a lexically greater name, based on
name, should answer a positive integer.
Comparing two movies, the first having a lower rating, based on rating,
should answer a negative integer.
Comparing two movies, the first having a higher rating, based on rating,
should answer a positive integer.
Asking the logical layer to sort its list causes the associated view's list to
be updated to one that is in order (order depends on what is being sorted
on).
Selecting "Sort by name" from the "View" menu causes the displayed list
to be sorted by name.
Selecting "Sort by rating" from the "View" menu causes the displayed list
to be sorted by rating.
Add a single rating to an unrated movie. Asking for the average rating
answers the rating just added.
Add several ratings: 3, 5, and 4. Asking for the average rating answers 4.
Add several ratings: 3, 5, 5, and 3. Asking for the average rating answers
4.
Writing a list containing one movie outputs in the adopted XML format.
Writing a list containing one movie with multiple ratings outputs in the
adopted XML format.
Telling the logical layer to select a movie and add a rating to it, and
providing specific rating value and source, causes that rating to be added
to the movie that was selected.
Selecting a movie, setting the rating value, and source to specific values
and pressing the "Add Rating" button causes that rating to be added to
the selected movie.
Test 99: defined on Pg. 397 and implemented on Pg. 398
A rating created with a review should answer that string when asked for
its review.
If a rating has an attached review, the review is saved as the text of the
<rating> element.
Reading an XML stream that contains text for a rating element results in
that rating having an attached review.
Telling the logical layer that a rating is selected that has no review results
in the view being told to put an empty string in the review field.
Test 104: defined on Pg. 403 and implemented on Pg. 404
Telling the logical layer that a rating is selected that has a review results
in the view being told to put the review string in the review field.
Selecting a rating that doesn't have a review puts an empty string in the
review field.
SUMMARY
In this chapter we've taken a step back and had a look at the project we just did. We generated a class
diagram, collected some metrics, and analyzed the quality and coverage of our tests. Our code fared
well. What does this indicate? Are we especially good programmers? Maybe. It's nice to think so.
However, most of the credit goes to how we worked. Test-Driven Development makes us work in small
steps. This focuses us on one small problem at a time. This in turn allows us to design and code the
simplest solution that we can find. This keeps our design simple. Constant refactoring keeps the
combination of each simplest design decision as simple as possible. These techniques let us evolve a
simple, clean, yet wonderfully expressive design. Furthermore, the design evolves very quickly and has
an incredibly high level of quality.
The ordering of these chapters is arbitrary, as they are all mutually independent. The
structure of each chapter is the same. We start by looking at architectural and API issues
specific to the family member. The bulk of each chapter works through the same task
using the language of the chapter.
Any special tricks or techniques available or applicable to each language are discussed as
we go.
The task we'll use is the first task of the first story in the example project:
Make a container for movies with a way to add to it. Doesn't have to be ordered or
sorted.
Test 2. Adding a movie to an empty list should result in a list with a size
of one.
Test 3. If we add a movie to a list, we should be able to ask if it's there
and receive a positive response.
Test 4. Asking about the presence of a movie that wasn't added should
result in a negative response.
In Chapter 26, the chapter on Visual Basic (it being a different sort of beast), Kay takes
a slightly different approach. As in most cases, the differences are themselves instructive.
THE FRAMEWORK
You can download RubyUnit from [URL 46]. RubyUnit includes all the expected asserts, such as:

assert(boolean)

assert_match(string, regex) asserts that string matches against the regular
expression regex.
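The semantics of these asserts can be sketched in plain Ruby. This is a framework-free stand-in, not RubyUnit's actual implementation (RubyUnit raises its own assertion-failure error rather than a RuntimeError):

```ruby
# Hypothetical stand-ins for the asserts described above.
def assert(boolean, message = "assertion failed")
  raise message unless boolean
end

def assert_equal(expected, actual)
  assert(expected == actual, "expected #{expected.inspect}, got #{actual.inspect}")
end

# Matches the description: the string is checked against the regex.
def assert_match(string, regex)
  assert(string =~ regex, "#{string.inspect} does not match #{regex.inspect}")
end

assert(1 + 1 == 2)
assert_equal(4, 2 + 2)
assert_match("Star Wars", /Wars/)  # passes: the string matches
```

A failed assert raises, which is all an xUnit runner needs in order to count the test as red.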
In addition to having to require the classes being tested, test case classes must include the following:

Additionally, our test case must extend RUNIT::TestCase. A useful idiom is to use a template similar
to the following for test cases. As when using JUnit, this allows the test case class to be run in order to
run its tests. Here's the template:
EXAMPLE
class TestMovieList < RUNIT::TestCase
  def testEmptyListSize
    @list = MovieList.new
    assert_equal(0, @list.size)
  end
end
class MovieList
end

class MovieList
  def size
    0
  end
end
Green.
Test 2: Adding a movie to an empty list should result in a list with a size
of one.
class Movie
  def initialize(aName)
  end
end

We run the test and it fails because we've faked MovieList.size() to return 0. Now we can fake it
one step better by counting the number of movies that are added:
class MovieList
  def initialize
    @numberOfMovies = 0
  end

  def size
    @numberOfMovies
  end

  def add(movieToAdd)
    @numberOfMovies += 1
  end
end
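The fake-counting version can be exercised directly in plain Ruby (no RubyUnit required) to see exactly what "fake it" buys us: the size is right even though nothing is stored.

```ruby
# The fake-it MovieList from the text: it counts additions without
# keeping the movies themselves.
class MovieList
  def initialize
    @numberOfMovies = 0
  end

  def size
    @numberOfMovies
  end

  def add(movieToAdd)
    @numberOfMovies += 1  # count the movie, but don't keep it
  end
end

list = MovieList.new
list.add("Star Wars")
list.size  # => 1, even though the list has no idea what was added
```

This is enough to pass every size-based test so far; the containment tests below are what force the fake to become real.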
Test 3: If we add a movie to a list, we should be able to ask if it's there
and receive a positive response.
This test should verify that the movie that was added is really there; again we'll fake it at first:
We next need a stub containsMovieNamed? method in MovieList that fakes it for the purposes of
this test:
def testSize
  assert_equal(0, @list.size)
end

def testSize
  assert_equal(1, @list.size)
end

def testContains
  assert(@list.containsMovieNamed?("Star Wars"))
end
Notice how we've renamed the test methods when we refactored to remove the fixture-related prefix.
Having a prefix like that is a good indication that we're mixing fixtures.
Test 4: Asking about the presence of a movie that wasn't added should
result in a negative response.
Now we need to test that a movie that we didn't add is, in fact, not reported as being in the list:
This test drives us to move from fake to make in terms of keeping track of what movies have been
added to the list:
class MovieList
  def initialize
    @movies = Hash.new
    @numberOfMovies = 0
  end

  def size
    @numberOfMovies
  end

  def add(movieToAdd)
    @movies.store(movieToAdd.name, movieToAdd)
    @numberOfMovies += 1
  end

  def containsMovieNamed?(aName)
    @movies.include?(aName)
  end
end
class Movie
  attr_reader :name

  def initialize(aName)
    @name = aName
  end
end
As you can see, we are now duplicating the information about how many movies have been added. We
keep track of it explicitly (as before) and it is also available as an attribute of the Hash. To remove it,
we first change size to return the information fetched from the Hash:
def size
  @movies.size
end
The tests still pass, so we can remove all of the code related to explicitly counting how many movies
have been added. Now MovieList is:
class MovieList
  def initialize
    @movies = Hash.new
  end

  def size
    @movies.size
  end

  def add(movieToAdd)
    @movies.store(movieToAdd.name, movieToAdd)
  end

  def containsMovieNamed?(aName)
    @movies.include?(aName)
  end
end
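Putting the pieces together, the finished task can be exercised end to end in plain Ruby. This sketch assumes Movie stores the name passed to initialize (as its attr_reader implies):

```ruby
class Movie
  attr_reader :name

  def initialize(aName)
    @name = aName  # assumed: the reader implies the name is stored
  end
end

class MovieList
  def initialize
    @movies = Hash.new
  end

  def size
    @movies.size
  end

  def add(movieToAdd)
    @movies.store(movieToAdd.name, movieToAdd)
  end

  def containsMovieNamed?(aName)
    @movies.include?(aName)  # Hash#include? checks the keys, i.e., the names
  end
end

list = MovieList.new
list.add(Movie.new("Star Wars"))
list.size                              # => 1
list.containsMovieNamed?("Star Wars")  # => true
list.containsMovieNamed?("Star Trek")  # => false
```

Note how the Hash does double duty: keying on the name makes containment a key lookup, and its size replaces the explicit counter we just deleted.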
THE FRAMEWORK
The SUnit framework is included in the recent releases of VisualWorks Smalltalk [URL 43], Squeak [URL
44], and probably others that I'm not as familiar with. Versions for other varieties of Smalltalk can be
found at the xprogramming.com Website [URL 11].
This framework takes advantage of Smalltalk's blocks to simplify the assertion mechanism. There are
several assertion calls which we will briefly list here.
BOOLEAN ASSERTIONS
assert: aBoolean asserts that aBoolean is true.
BLOCK ASSERTIONS
should: aBlock asserts that aBlock value evaluates to true.
EXCEPTION ASSERTIONS
should: aBlock raise: anExceptionalEvent asserts that executing aBlock signals an
exception identified by anExceptionalEvent. Not signaling an exception, or signaling
a different exception, causes this assertion to fail.
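The semantics of should:raise: have a direct analogue in plain Ruby: pass the code as a block and check that evaluating it signals the expected exception. A hedged sketch (not SUnit code, just the same idea; the helper name is hypothetical):

```ruby
# True only if the block raises an instance of expected_error.
# Not raising, or raising something else, counts as failure,
# mirroring should:raise:.
def raises?(expected_error)
  yield
  false                  # no exception signaled: the assertion fails
rescue expected_error
  true                   # the expected exception: the assertion passes
rescue StandardError
  false                  # a different exception: the assertion fails
end

raises?(ZeroDivisionError) { 1 / 0 }        # => true
raises?(ZeroDivisionError) { [].fetch(3) }  # => false (IndexError instead)
raises?(ZeroDivisionError) { 1 + 1 }        # => false (nothing raised)
```

As in Smalltalk, passing the code as a block defers its evaluation, so the assertion machinery can run it inside its own exception handler.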
FAILURE
signalFailure: aString fails immediately, providing aString as a message.
EXAMPLE
setUp
    emptyList := MovieList new
When we try to accept the test method, it complains that MovieList is undefined. We define it as a
stub class with nothing in it:
Now testEmptyListSize is accepted. When we run the test, it passes. Why? We haven't even written
the size method for MovieList.
The problem is that size is a method in Object, from which we inherit. This is the Smalltalk way. Let's
write a stub size method with no body and try the test again. OK. Our test now fails. Now we can
write a bit of code to pass the test:
MovieList >> size
    ^0
Accept, test, green. Onward.
Test 2: Adding a movie to an empty list should result in a list with a size
of one.
This test adds one item to the list and checks that size returns 1. This is a different fixture (i.e., not
an empty list), so we start a new TestCase:
setUp
    theList := MovieList new.
    theList add: (Movie name: 'Star Wars')

testSize
    self should: [theList size = 1]
Running the tests now results in TestOneItemList>>testSize failing with an error because the
method MovieList>>add: isn't understood (i.e., is undefined). We create a stub and now it fails with
an assertion failure since everything runs, but we aren't really doing anything yet. This is the state we
want to be in for the red bar. Now we can write some code to get the green bar back. We start by
capturing the fact that a movie was added:
MovieList >> size
    ^numberOfMovies
Now TestOneItemList>>testSize passes, but TestEmptyList>>testSize fails. Why is that?
Because we aren't initializing numberOfMovies to 0.
To fix this, we can add an explicit creation method (this is a class method):
Test 3: If we add a movie to a list, we should be able to ask if it's there
and receive a positive response.
The next test checks that any movies that are added are reported as being in the list:
For this to work, we need a containsMovieNamed: method in MovieList. We can start off by
faking it:
Test 4: Asking about the presence of a movie that wasn't added should
result in a negative response.
This test drives us to add the ability to remember what movies have been added to the list. Let's take a
bigger step and add a collection to hold onto them:
init
    numberOfMovies := 0.
    movies := Dictionary new

add: aMovie
    numberOfMovies := numberOfMovies + 1.
    movies at: aMovie name put: aMovie

size
    ^numberOfMovies
If we run the tests now, they all fail (with an error) because Movie doesn't understand name. Further,
Movie isn't storing its name. That's easy enough to fix:
There, green. Now we have a little refactoring to do. We're keeping an explicit count of how many
movies have been added, but this information is available from the collection of movies. The first step is
to return the size of the collection:
MovieList >> size
    ^movies size
All tests still pass. Now we can go through MovieList and remove all references to numberOfMovies.
The result is:
init
    movies := Dictionary new

size
    ^movies size

add: aMovie
    movies at: aMovie name put: aMovie
THE FRAMEWORK
You can download CppUnit from [URL 48]. CppUnit has quite a different feel to it from the xUnit
members we've looked at so far. This is largely due to the fact that C++, being compiled to native code,
does not support reflection. This means that we have to do all the work ourselves. Instead of being able
to have the test case class figure out what the tests are, we have to do it ourselves, like so:
Assertions are also handled differently. In this case, they are implemented as macros:
This isn't the most elegant looking solution, but it does have the beneficial effect that assertions are
visually obvious.
EXAMPLE
public:
    TestEmptyMovieList(std::string name);
    virtual void registerTests(TestSuite *suite);
    void setUp();
    void testSize();
};
class MovieList {
public:
    MovieList();
    int size();
};
This passes because our stub implementation of MovieList::size() returns 0 to keep the compiler
happy.
Test 2: Adding a movie to an empty list should result in a list with a size
of one.
This test will add a movie and verify that the size is now 1. As in earlier chapters, we'll move to a new
fixture for this:
public:
    TestOneItemList(std::string name);
    virtual void registerTests(TestSuite *suite);
    void setUp();
    void testSize();
};
class Movie {
public:
    Movie(std::string aName);
};
.F
!!!FAILURES!!!
Test Results:
Run: 1   Failures: 1   Errors: 0
Now we can fake it one step better by counting the number of movies that are added:
class MovieList {
private:
    int numberOfMovies;
public:
    MovieList();
    int size();
    void add(Movie aMovie);
};
Our tests pass and there isn't anything in MovieList to clean up.
Test 3: If we add a movie to a list, we should be able to ask if it's there
and receive a positive response.
Our next test should verify that the movie that was added is really there; again, we'll fake it at first:
We next need a stub containsMovieNamed method in MovieList that fakes it for the purposes of this
test:
Test 4: Asking about the presence of a movie that wasn't added should
result in a negative response.
Now we need to test that a movie that we didn't add is, in fact, not reported as being in the list:
...F
!!!FAILURES!!!
Test Results:
Run: 3   Failures: 1   Errors: 0
This test drives us to move from fake to make in terms of keeping track of what movies have been added
to the list:
class MovieList {
private:
    int numberOfMovies;
    map<std::string, Movie> movies;
public:
    MovieList();
    int size();
    void add(Movie aMovie);
    bool containsMovieNamed(std::string name);
};
We also need to extend Movie to retain its name (and add a default constructor to satisfy the
requirements of map):
class Movie {
private:
    std::string name;
public:
    Movie();
    Movie(std::string aName);
    std::string getName();
};

Movie::Movie()
{
    name = "";
}
As you can see, we are now duplicating the information about how many movies have been added. We
keep track of it explicitly (as before) and it is also available as an attribute of the map. To remove it, we
first change size to return the information fetched from the map:
int MovieList::size() {
    return movies.size();
}
The tests still pass, so we can remove all of the code related to explicitly counting how many movies have
been added. Now MovieList is:
class MovieList {
private:
    map<std::string, Movie> movies;
public:
    MovieList();
    int size();
    void add(Movie aMovie);
    bool containsMovieNamed(std::string name);
};
This chapter presents NUnit, an xUnit family member for the Microsoft .NET framework. I had the good
fortune to cross paths with James as I was thinking about this chapter. I had a few questions about
NUnit which I asked James, explaining my reasons. He responded and kindly offered to write and
contribute the chapter. So, here's the chapter on NUnit, by one of its authors.
One thing to note: While C# and .NET are Microsoft creations, there is an open source implementation
(that is progressing nicely) from Ximian, the folks behind Gnome. It's named The Mono Project and can
be found at [URL 66].
THE FRAMEWORK
The example is written using V1.0 of the .NET Framework and V2.0 of NUnit. See [URL 52] for the site
from which NUnit can be downloaded.
History
NUnit V1.x was written by Philip Craig in the summer of 2000. Philip released an early version in
September of 2000 and was keeping the program up to date with each new version of the .NET
Framework as they were being released. In February of 2002 a small group gathered to develop a new
version of NUnit, one that looked less like JUnit and tried to leverage some of the capabilities of the
.NET platform. The group consisted of Alexei Vorontsov, Mike Two, and James Newkirk. The one feature
in .NET that we focused on was attributes. Attributes are descriptive elements that provide information
about programming elements such as types, fields, methods, and properties that can be retrieved via
reflection at runtime. Looking at JUnit, we saw areas where naming conventions and inheritance were
used to identify test fixtures and test methods and thought that attributes provided a cleaner and more
explicit approach to the same problems. This group released NUnit V2.0 in October of 2002.
TestFixture
In order to write your first test in NUnit, you must create a class that is marked with the
[TestFixture] attribute. The attribute definition is contained in an assembly called
nunit.framework.dll. Also, you must provide a using statement that identifies the NUnit classes,
which are contained in the NUnit.Framework namespace. One thing to note is that the only
requirement for the class is that it has a default constructor; this means that you can place the
[TestFixture] attribute on any class, and it does not have to use inheritance.
Test
A test method in NUnit must be a method that is public, returns void, and has no parameters. It must
also be marked with the [Test] attribute and be in a class marked with a [TestFixture] attribute.
[Test]
public void BlueSky()
{}
SetUp
NUnit, like many xUnit implementations, has a way to allow the programmer to extract common fixture
setup code and have it executed prior to each test being run.
A method can be marked with the custom attribute [SetUp] to identify it as the method that NUnit will
use to build the fixture for the tests in this TestFixture. This allows for common code to be removed
from the individual tests.
[SetUp]
public void CreateList()
{
    list = new MovieList();
}
Assertion
The NUnit framework, like other xUnit frameworks, comes with the normal set of expected assertions.
Since inheritance is not used to identify the test fixtures, the assertions in NUnit are static methods of a
class called Assertion. This class is also defined in the NUnit.Framework namespace. The following
is a sample of assert methods contained in the framework (see the Assertion class for the complete set):
All of the assertion methods also take an initial parameter, which is a message that will be printed if
there is a failure.
EXAMPLE
The task is to write a class that contains a list of movies. The list should be able to determine if certain
movies are in the list, and it should also be able to tell you the number of movies contained in the list.
We'll start by creating a test fixture, called TestMovieList, to hold all of the tests. The method that
will create the list and test that the count of movies is equal to 0 is called EmptyList.
As you can see here, they are each marked with their respective attributes.
using System;
using NUnit.Framework;

[TestFixture]
public class TestMovieList
{
    [Test]
    public void EmptyList()
    {
        MovieList list = new MovieList();
        Assertion.AssertEquals(0, list.Count);
    }
}
When I compile this test it fails since we have yet to implement the MovieList class. The simplest
version of MovieList that is needed to get this to compile is as follows:

using System;

public class MovieList
{
    public int Count
    {
        get { return 0; }
    }
}

Compiling and running this yields a green bar. The first step is done. Let's go on to the next test.
Test 2: Adding a movie to an empty list should result in a list with a size
of one.
The test for this step is also in the TestMovieList fixture and looks like this:

[Test]
public void OneMovieList()
{
    MovieList list = new MovieList();
    Movie starWars = new Movie("Star Wars");
    list.Add(starWars);
    Assertion.AssertEquals(1, list.Count);
}
When I compile this it fails for a couple of reasons. First, I have not defined a class called Movie and I
have not defined a method in MovieList called Add. Let's go ahead and implement them, keeping in
mind the goal here is to get this test to pass, nothing else.
The simplest version of Movie that is needed to get this to compile is as follows:
using System;

public class Movie
{
    public Movie(string title)
    {}
}
Nothing else is needed. Remember, you need to focus on what is needed now, not in the future.
Obviously, the Movie class is nothing more than a placeholder at this point. We also need to implement
the Add method in MovieList. At first, I choose to implement Add like the following:

public void Add(Movie movieToAdd)
{}

Compiling and running this yields a failure. Obviously, we have to implement Add for the test to pass.
Since we only have tests that verify that the size of the list is correct, we can still fake it somewhat by
keeping track of the number of movies that were added to the list and not the contents.
using System;

public class MovieList
{
    private int numberOfMovies;

    public MovieList()
    {
        numberOfMovies = 0;
    }

    public int Count
    {
        get { return numberOfMovies; }
    }

    public void Add(Movie movieToAdd)
    {
        numberOfMovies += 1;
    }
}
Compiling and running this yields a green bar. However, there is some code duplication in the test code.
Each test method creates a MovieList. We can extract this commonality into a SetUp method.
Performing this refactoring on TestMovieList looks like this:
using System;
using NUnit.Framework;

[TestFixture]
public class TestMovieList
{
    private MovieList list;

    [SetUp]
    public void CreateList()
    {
        list = new MovieList();
    }

    [Test]
    public void EmptyList()
    {
        Assertion.AssertEquals(0, list.Count);
    }

    [Test]
    public void OneMovieList()
    {
        Movie starWars = new Movie("Star Wars");
        list.Add(starWars);
        Assertion.AssertEquals(1, list.Count);
    }
}
Compiling and running this test yields a successful result, and a scan of the code indicates that there is
nothing else to refactor. It's time to move on to the next test.
Test 3: If we add a movie to a list, we should be able to ask if it's there
and receive a positive response.
The test method for this test is called OneMovieListContains and is defined as follows:
[Test]
public void OneMovieListContains()
{
    string title = "Star Wars";
    Movie starWars = new Movie(title);
    list.Add(starWars);
    Assertion.Assert(list.ContainsMovieNamed(title));
}
When this is compiled it fails due to MovieList not having implemented the method
ContainsMovieNamed. Implementing a stub to get the test to pass is simple and looks like this:

public bool ContainsMovieNamed(string title)
{
    return true;
}
Compiling and running this yields a successful result. At this point, most people question my judgment
and wonder if this is just silly. Remember, we are not done with the task; we are moving forward step-
by-step doing just enough to get the tests to pass. We will have tests in the future that expose that
this implementation is naive, but I intend to wait until the tests expose that naiveté instead of
speculating about it.
Before we move on, we have some more code duplication to correct in the test code. Looking at the test
code, we see that the OneMovieListContains and OneMovieList methods have common setup
code. However, that setup code is different from the EmptyList setup code. The way to correct this is
to split the test class into two different classes, one called TestEmptyMovieList and the other called
TestOneItemMovieList. Once we move the methods into their new classes, they can also be renamed.
Performing this refactoring yields the following result:
using System;
using NUnit.Framework;

[TestFixture]
public class TestEmptyMovieList
{
    private MovieList list;

    [SetUp]
    public void CreateList()
    {
        list = new MovieList();
    }

    [Test]
    public void Count()
    {
        Assertion.AssertEquals(0, list.Count);
    }
}

[TestFixture]
public class TestOneItemMovieList
{
    private readonly static string title = "Star Wars";
    private MovieList list;

    [SetUp]
    public void CreateList()
    {
        list = new MovieList();
        Movie starWars = new Movie(title);
        list.Add(starWars);
    }

    [Test]
    public void Count()
    {
        Assertion.AssertEquals(1, list.Count);
    }

    [Test]
    public void Contains()
    {
        Assertion.Assert(list.ContainsMovieNamed(title));
    }
}
Once this refactoring is complete we compile and run; everything still works so we are finished with this
step.
Test 4: Asking about the presence of a movie that wasn't added should
result in a negative response.
The last test that we outlined earlier is to verify that a movie is not in the list. The test method can be
implemented in either test fixture class. In this example, it is implemented in the TestOneItemMovieList
test fixture and is called DoesNotContain:
[Test]
public void DoesNotContain()
{
    Assertion.Assert(!list.ContainsMovieNamed("Star Trek"));
}
Compiling and running the test yields the following output from NUnit:
NUnit version 2.0.6
....F
Tests run: 4, Failures: 1, Not run: 0, Time: 0.0300432 seconds
Failures:
1) TestOneItemMovieList.DoesNotContain :
    at TestOneItemMovieList.DoesNotContain() in testmovielist.cs:line 51
This test informs us that the implementation of ContainsMovieNamed is not sufficient given the
current tests. In order to get this test to pass we need to change the implementation of MovieList to
save the Movie objects that are added so that the ContainsMovieNamed method can determine whether
they are in the list. A way to implement this is with a Hashtable from the System.Collections
namespace in the .NET Framework.
using System;
using System.Collections;

public MovieList()
{
    movies = new Hashtable();
    numberOfMovies = 0;
}

public int Count
{
    get { return numberOfMovies; }
}

public void Add(Movie movieToAdd)
{
    movies.Add(movieToAdd.Name, movieToAdd);
    numberOfMovies += 1;
}
Compiling this yields a compilation failure because the property Name is not defined in the Movie
class. It's finally time to add some functionality to the Movie class.
using System;

public class Movie
{
    private string title;

    public Movie(string title)
    {
        this.title = title;
    }

    public string Name
    {
        get { return title; }
    }
}
Compiling and running this yields the familiar green bar. Upon further review, however, there is a smell
in the MovieList class: the Hashtable can perform the function of keeping track of
the number of elements, so we do not need the numberOfMovies variable. Making this change to
MovieList looks like this:
using System;
using System.Collections;

public int Count
{
    get { return movies.Count; }
}

public void Add(Movie movieToAdd)
{
    movies.Add(movieToAdd.Name, movieToAdd);
}
Doing this refactoring has simplified the code dramatically. Reviewing the existing code along with our
to-do list indicates that we are finished with the task, so let's check it back into the repository and
move on to the next task.
This chapter presents PyUnit, the xUnit framework for the Python programming language. Bob Payne is
the president of Electroglide Inc., a small software development firm specializing in eXtreme
Programming consulting, implementation, development, and training. He is the co-founder of the
Washington D.C. XP users group, an active presenter at XP conferences, and an agitator for Agile
Methodologies in general.
THE FRAMEWORK
PyUnit (unittest.py) is a core module in current Python distributions. This example was coded using
Python 2.2.2 on Windows, but should run unchanged on virtually any Python platform. Distributions and
information on Python can be found at [URL 67]. Development of the framework is ongoing under the
supervision of Steve Purcell at [URL 53]. If you are doing Web development there are several test
frameworks that are similar to the HTTPUnit framework in Java; these include:
WebUnit — WebUnit is a unit testing framework for testing Web applications in PyUnit
tests. WebUnit was developed by Steve Purcell, the author of PyUnit. Information about
WebUnit can be found at [URL 69].
PyUnit has all the standard xUnit asserts and provides some name mappings to allow the use of
logically equivalent asserts. The asserts and their logical equivalents are listed below with their
parameter listings (the msg parameter is an optional message):
Testing Equivalence
Equality tests are more liberal in Python than in Java. Python generally tests for value equivalence of
built-in objects and utilizes the overloadable equivalence operator for other classes.
failIfEqual(first, second, msg=None) fails if the two objects are equal as determined
by the '==' operator. Its logical equivalent is assertNotEqual(first, second, msg=None).
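A short sketch of how Python's value equivalence interacts with these asserts (the example is mine, not the book's; the Movie class is illustrative, and the modern assertEqual/assertNotEqual spellings are used):

```python
import unittest

class Movie:
    """Illustrative class that overloads the equivalence operator."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # '==' on Movie objects now compares by value (the name).
        return isinstance(other, Movie) and self.name == other.name

class TestEquivalence(unittest.TestCase):
    def testBuiltinsCompareByValue(self):
        # Two distinct list objects with equal contents satisfy '=='.
        self.assertEqual([1, 2, 3], [1, 2, 3])

    def testOverloadedEquivalence(self):
        # assertEqual uses '==', so the overloaded __eq__ is consulted.
        self.assertEqual(Movie("Star Wars"), Movie("Star Wars"))
        self.assertNotEqual(Movie("Star Wars"), Movie("Star Trek"))

suite = unittest.TestLoader().loadTestsFromTestCase(TestEquivalence)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```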
Testing Exceptions
PyUnit provides assertions to test exception handling in applications. These assertions test whether
specific exceptions are or are not raised when a given callable object is invoked. If a different type of
exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error,
exactly as for an unexpected exception.
PyUnit also provides a fail() method to allow programmers to create unit tests that leverage complex
conditional blocks that would not easily map to the other assertion structures.
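A minimal sketch of the exception assertions and fail() just described, using modern unittest spellings (the examples are mine, not the book's):

```python
import unittest

class TestExceptions(unittest.TestCase):
    def testExpectedException(self):
        # assertRaises passes only if the callable raises the named exception.
        self.assertRaises(ValueError, int, "not a number")

    def testFailForComplexConditions(self):
        # fail() suits conditional logic that no single assert captures.
        value = int("42")
        if not 0 < value < 100:
            self.fail("value out of expected range")

suite = unittest.TestLoader().loadTestsFromTestCase(TestExceptions)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```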
Tests in PyUnit are collected in test methods on test objects. Each test method should have one or more
assertions in the body of the method, and method names should begin with "test". Test objects must
import the unittest module, extend unittest.TestCase, and their names should begin with
"Test". Optional setUp() and tearDown() methods on test objects are used to build a fixture and
clean it up after each test method is run.
import unittest
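The conventions just described can be sketched as a self-contained module (the MovieList here is a stand-in of my own, not the book's listing):

```python
import unittest

class MovieList:
    # Stand-in class under test.
    def __init__(self):
        self.movies = []

    def size(self):
        return len(self.movies)

class TestMovieList(unittest.TestCase):
    def setUp(self):
        # Builds the fixture before each test method runs.
        self.movieList = MovieList()

    def tearDown(self):
        # Cleans up after each test method.
        self.movieList = None

    def testEmptyListHasSizeZero(self):
        self.assertEqual(0, self.movieList.size())

# unittest.main() would collect TestMovieList by introspection when the
# script is run; here we build and run the suite explicitly instead.
suite = unittest.TestLoader().loadTestsFromTestCase(TestMovieList)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```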
Tests can be collected into test suites explicitly or through introspection. While each has its advantages,
I find that utilizing introspection reduces the chance that a test will be missed. The above example
utilizes one form of introspection by calling unittest.main(). When the script is run, Python will
look for all objects beginning with "Test" and add them to a test suite; in turn, each method beginning
with "test" will be evaluated when each TestCase in the suite is run. Order of test execution cannot
be guaranteed.
EXAMPLE
We must now write our first failing test before we can code.
import unittest
from movie.Movie import *
FAILED (errors=1)
Now we will do the simplest thing that will give us a passing test. We will stub out size in the
MovieList object to return zero.
class MovieList:
OK
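The body of the book's listing for this step did not survive extraction; given the narrative, the stub presumably resembled:

```python
class MovieList:
    def size(self):
        # Fake it: hard-code zero, the simplest thing that could work.
        return 0
```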
Test 2: Adding a movie to an empty list should result in a list with a size
of one
It is now our responsibility to expand and test for other behaviors of the MovieList object. We have also
refactored the test to remove redundancy, moving the creation of a MovieList instance and its cleanup
into the setUp() and tearDown() methods of the test object.
class TestMovieList(unittest.TestCase):
    def setUp(self):
        self.movieList = MovieList()
    def tearDown(self):
        self.movieList = None
FAILED (errors=1)
Stub out the addMovie(movie) method with a pass statement. This method returns None when it is
called and does nothing.
class MovieList:
Red bar— Movie object is not defined; we must stub out a Movie class. Before we do this we need to
add a TestMovie class to test the behavior of a Movie and stub out Movie. This class only needs to
retain its name for now, so that is the only behavior we will test for.
Create the code for the Movie class, a simple class with only a constructor method.
class Movie:
    def __init__(self, name=None):
        self.name = name
FAILED (failures=1)
Modify MovieList to track the size. Yes, we know this is a stupid way to do it, but we also know that
first we will make it work and then we will make it elegant.
class MovieList:
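Again the listing's body was lost; a plausible reconstruction of the naive, counter-based version the text describes (the Movie stub is carried along from the previous step):

```python
class Movie:
    # Stub retained from the previous step; only the name matters so far.
    def __init__(self, name=None):
        self.name = name

class MovieList:
    def __init__(self):
        # Deliberately naive: count additions by hand for now.
        self.numberOfMovies = 0

    def size(self):
        return self.numberOfMovies

    def addMovie(self, movie):
        self.numberOfMovies = self.numberOfMovies + 1
```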
Green bar— All three of our tests are running at 100 percent. We could now integrate our code with the
main codebase, depending on our check-in strategy. Now let's add tests for the functionality to see if a
movie has already been added to the MovieList.
def setUp(self):
    self.movieList = MovieList()
def tearDown(self):
    self.movieList = None
Red bar— Stub out the functionality in MovieList by simply returning True when the
containsMovie(name) method is called.
class MovieList:
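A plausible sketch of this fake-it stub, reconstructed from the narrative (the class body here is mine):

```python
class MovieList:
    def __init__(self):
        self.numberOfMovies = 0

    def size(self):
        return self.numberOfMovies

    def addMovie(self, movie):
        self.numberOfMovies = self.numberOfMovies + 1

    def containsMovie(self, name):
        # Fake it: always claim the movie is present.
        return True
```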
Green bar— All four of our tests pass. Refactor the test objects to be a bit more modular. Notice that we
pulled the test for a zero-length list into another test object, and created and then added "Star Wars" to the
list of movies. The setUp() and tearDown() methods are called for each of the test method
invocations in that class, so we do not have to worry about test crosstalk in the object.
def setUp(self):
    self.movieList = MovieList()
    self.movieList.addMovie(Movie('Star Wars'))
def tearDown(self):
    self.movieList = None
Green bar— Now we must extend the test to express the true intent of the containsMovie(name)
method by testing to ensure that we get False if the movie has not been added.
Test 4: Asking about the presence of a movie that wasn't added should
result in a negative response
def setUp(self):
    self.movieList = MovieList()
    self.movieList.addMovie(Movie('Star Wars'))
def tearDown(self):
    self.movieList = None
Red bar— Extend MovieList to check if a movie has been added:
class MovieList:
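The listing's body is missing here too; a reconstruction consistent with the narrative, in which containsMovie(name) now consults the stored movies instead of always returning True:

```python
class Movie:
    def __init__(self, name=None):
        self.name = name

class MovieList:
    def __init__(self):
        self.numberOfMovies = 0
        self.movies = []

    def size(self):
        return self.numberOfMovies

    def addMovie(self, movie):
        self.movies.append(movie)
        self.numberOfMovies = self.numberOfMovies + 1

    def containsMovie(self, name):
        # Check the stored movies rather than faking the answer.
        return name in [movie.name for movie in self.movies]
```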
Green bar— All tests pass. We are done. Or are we? We still must refactor to make the implementation
of our objects as clear as possible and remove any redundancy. Our MovieList class is using a
counting scheme rather than checking the length of our internal movie list.
class MovieList:
Green bar— Done. Let's be a bit more thorough and expand our tests to check if multiple movies in the
list bother us.
def setUp(self):
    self.movieList = MovieList()
    self.movieList.addMovie(Movie('Star Wars'))
def tearDown(self):
    self.movieList = None
Green bar— This last test is somewhat like an acceptance test since it tests multiple scenarios. Many
people would argue against this level of redundancy, but for my money I believe that tests should be
fragile and that some level of acceptance test should be expressed while unit-testing the application. The
redundant asserts could certainly be factored out into private methods if one wishes.
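The final test object's listing was also lost; an acceptance-style sketch along the lines the text describes (movie titles beyond 'Star Wars' are my own, and the MovieList shown is the refactored, length-based version):

```python
import unittest

class Movie:
    def __init__(self, name=None):
        self.name = name

class MovieList:
    def __init__(self):
        self.movies = []

    def size(self):
        # The list already knows its length; the counter was redundant.
        return len(self.movies)

    def addMovie(self, movie):
        self.movies.append(movie)

    def containsMovie(self, name):
        return name in [movie.name for movie in self.movies]

class TestMultipleMovies(unittest.TestCase):
    def setUp(self):
        self.movieList = MovieList()
        self.movieList.addMovie(Movie('Star Wars'))

    def testSeveralMoviesDoNotInterfere(self):
        # Acceptance-style test: exercise several scenarios in sequence.
        self.movieList.addMovie(Movie('Star Trek'))
        self.movieList.addMovie(Movie('Stargate'))
        self.assertEqual(3, self.movieList.size())
        self.assertTrue(self.movieList.containsMovie('Star Wars'))
        self.assertTrue(self.movieList.containsMovie('Stargate'))
        self.assertFalse(self.movieList.containsMovie('Star Search'))

suite = unittest.TestLoader().loadTestsFromTestCase(TestMultipleMovies)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```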
This chapter presents vbUnit, the xUnit family member for the Visual Basic platform. Kay Pentecost has
agreed to write this chapter, as she has been doing TDD in VB for some time now.
THE FRAMEWORK
This chapter presents vbUnit3 Professional, the xUnit family member for Visual Basic. There are several
vbUnit implementations. The one used in this chapter is vbUnit3 Professional, version 3.06.02. vbUnit3
Professional was written by Bodo Maass, and is a commercial product. It's nicely integrated with the VB
IDE, and very easy to use, so I think it's worth it. There is also a free version of vbUnit Basic on Maass'
site at [URL 51]. Other versions of vbUnit are available. You can find them at [URL 11].
Installing the vbUnit3 Professional software installs it as an add-in to Visual Basic. This adds three menu
items to the add-ins menu: Show vbUnit Window (which changes to Hide vbUnit Window when the
window is displayed), Run vbUnit, and vbUnit Options.
To create a new project with vbUnit, select New Project from the File menu. On the New Project window,
select vbUnitTestDLL as the project type. This opens a vbUnit project (the default name is
vbUnitTestDLL) with two classes: the first test fixture (vbuTestFixture) and the test suite
(vbuTestSuite). You can and should rename these classes and files to be meaningful in your project.
The fixture implements the IFixture interface and instantiates a variable for the IAssert class. This
gives us a selection of assertion methods, including:
As expected, the Assert methods each have the optional message parameter.
Public Sub TestFail()
    mAssert.Verify False, "hello"
End Sub
To get your first green bar, you need to change the preceding False to True.
EXAMPLE
We start by opening a new project in VB6. We select vbUnitTestDLL as the type of project. This creates
a project (that I renamed to vbuMovie), a test fixture (renamed to vbuMovieFixture), and a test
suite (renamed to vbuMovieSuite). Figure 26.1 shows the environment. The vbUnit window is docked
above the immediate window.
We expect that clicking the Run button will give us a red bar, but instead it gives us a message saying
that a user-defined type is not found. That's because we changed the name from vbuTestFixture to
vbuMovieFixture. The test suite vbuMovieSuite is looking for the old name. We can fix that by
changing the code in the vbuMovieSuite to look for vbuMovieFixture.
Now, clicking the Run button on the vbUnit window gives us a red bar. It failed. But we haven't really
done anything yet...so what's wrong?
Nothing's wrong. I like to start with the failing test provided by vbUnit...just to confirm that it's set up
and working correctly (see Figure 26.2).
To find the method that failed, we double-click on the test result line:
Aha! If Verify is False, which it is here, the test will fail. Now we've confirmed that vbUnit is working
correctly...so we can change the False to True, and get a green bar—the passing test.
So now, having warmed up with a small test that vbUnit is correctly installed and the project is running,
we're ready to start the real work.
Our first test will be the test for an empty MovieList. And it looks pretty much like the first Java test:
mAssert.LongsEqual 0, EmptyList.Size, "Size of empty list should be 0."
This won't compile, of course. We have no class MovieList yet. The error message is:
So, we add a class to the test and name it MovieList. Now the test fails again because we have no
method called Size:
At this point, the test requires a MovieList class, which we gave it, and a method in MovieList
called Size. We know that the method should return zero for this case. And that's all we really know.
Oh, we figure the MovieList object will eventually hold a list of movies. And the size of the
MovieList will be the number of movies it holds. If we were doing some other style of programming,
we might create some movies, put them in the MovieList, and count them, wouldn't we? Well, then
we'd have to decide what a movie looked like, and how to put movies into the list, and so on and so on
and so on.
But all the test is asking for is a Size method that returns zero. So that's all we have to do. We do the
simplest thing that could possibly work, and create a Size method that returns zero.
And run the test. And if we typed everything correctly, we get a green bar!
Once the test runs, we can look at the code for opportunities to refactor, but there's not much here to
do. We're ready to do the next test, which we'll call TestSizeAfterAddingOne, just as we saw in
Chapter 10. This test will confirm that when we add a movie to the list, the list knows there's one movie
in there. Our assertion will be:
The compiler will complain until we have a Movie class and an Add method, so we add a Movie class
and put an empty Add method in MovieList. Now, running the tests, we get a red bar.
It failed because the Size method is only returning zero, and the Add method doesn't do anything.
So we're going to let the Add method increment a counter that tracks how many movies we've added.
Notice that at this point we don't care anything else about the movie or the movie list—just that when
we add a movie the movie list knows it has one more movie.
So we'll create a private variable, NumberOfMovies.
Now we can increment NumberOfMovies in the Add method, and have the Size method return the
number of movies.
Do we think the next test will run correctly the first time? Let's try it:
As we expected, this gives us a green bar. Now, looking at the code for opportunities to refactor, we
see a lot of duplication. What's duplication? Duplication appears whenever we do something more than
once...even when we use different names in different methods. We're dimensioning a new
MovieList, for example, in almost every method:
In each case, it's just a MovieList, although we named methods to tell us what kind of a MovieList
they are.
So, we can move this to the Setup method in vbuMovieFixture. We also dimension and instantiate
two Movie objects, so we can put that code in the Setup, too. We'll put the Dim statements in the
General section of the class so they have class-wide scope.
When we press the run button, if it still gives us a green bar, we can delete the duplicated code so the
dimension statements and the instantiations go away. All that is left in each test is the MovieList.Add
method, and the assert that we are testing.
Now we know how many movies are in the MovieList...but we don't know anything about the
MovieList other than that.
So our next test will be to see if, once we put a movie in the MovieList, the MovieList knows it's in
there.
Public Sub TestContents()
    movieList.Add StarWars
    mAssert.Verify movieList.ContainsMovieName("Star Wars")
End Sub
The compiler will remind us that we haven't written a contains method yet. So we create a method in
MovieList called ContainsMovieName.
ContainsMovieName = True
As I do this, I really like the feeling I get when the green bar appears. The more functionality I add, the
more confidence the green bar gives me.
We don't actually have to refactor yet. We can add another test to check that a movie that has not
been added returns False when we call ContainsMovieName. This time, we don't call the Add
method.
When this fails, we can write the functionality that actually checks the contents of the MovieList.
We haven't dimensioned Movies yet. Movies will be a collection that will hold the movies for MovieList.
So we dimension Movies in the general section of the MovieList class, so it's available to the entire
class.
Movies.Add
We see that we're incrementing NumberOfMovies to get the MovieList.Size. Now, however, we
have the collection Movies, so we can use the Collection's Count method to get the number of
movies. We add the line:
Size = Movies.Count
The first thing we need here is a movie title. So we'll add Title properties to Movie:
Private mTitle As String
Public Property Get Title() As String
    Title = mTitle
End Property
Green bar!!
This runs and gives us a green bar, so we can delete the NumberOfMovies dimension statement, and
the assignment of NumberOfMovies to Size.
And here we have to do some thinking. Is a Movie ever going to exist without a title? No. We don't
want any titleless movies hanging around...the only time we'd have a titleless movie would be some
sort of movie object we use as an iterator, and it would never be directly instantiated. So a Movie must
have a title. We can't do that with a constructor like we could in other object-oriented languages.
To make sure that every movie has a title, we need some sort of factory object to create movies. Let's
call it Studio, and give it a method of MakeNewMovie.
Now, of course, we hear our pair yelling "Test first, test first!"
OK. Point taken. Create a new test fixture called vbuTestStudioFixture. Here's the test:
mAssert.StringsEqual "Star Wars", StarWars.Title, "Title should be 'Star Wars'"
Dim Studio As New Studio
Green bar!
Now we can modify the IFixture_Setup in vbuTestMovie to use the new functionality:
OK, now we have three classes: Studio, Movie, and MovieList. These classes allow us to add
movies to a list and check that the list contains them...and the code is pretty straightforward and clean.
First, Movie:
Option Explicit
Option Explicit
Finally, Studio:
Option Explicit
There are more things we can do: we can change MovieList into a real collection; we can set the
Movie class to be PublicNotCreatable so that programs outside of the DLL can only create
Movies by going through Studio; we can add functionality to Movie, MovieList, and Studio as it's
needed.
Part V: APPENDICES
This appendix does not try to be an exhaustive overview of eXtreme Programming (XP). Its purpose is
to provide a very brief introduction to XP as background to the discussion of Test-Driven Development.
In 1994 The Standish Group published a report that brought these problems into the light. The now-
famous Chaos Report [21] documented that only 16 percent of projects were completed successfully. A
follow-up report [22] included the significant finding that the likelihood of a project succeeding
diminished the more people worked on it, the more it cost, or the longer it took. What did successful
projects look like? They were low cost, short, and done by small teams.
Around this time, several thought leaders in the field began to develop software development processes
that increased the chances of success. Two of the most notable were Ward Cunningham and Kent Beck,
who called their idea "eXtreme Programming."
In 2001 several of the people involved in this development approach met to discuss this trend and their
ideas. This meeting led to the Agile Manifesto [URL 6], which is included as part of this chapter, in a
sidebar. The group that wrote the manifesto forms the core of what is now The Agile Alliance [URL 5].
For more information on Agile Software Development, see [13], [24], [33].
How are agile methods different from traditional ones? At first glance it seems that there is nothing new
here. Most seasoned developers will see practices that they have been using for some time. So what's the
big deal?
Traditional (i.e., non-agile) methodologies advocate and usually try to enforce a defined and repeatable
process. This is based on the belief that software can be manufactured. In essence, these are industrial-
era approaches.
Agile methodologies are, in contrast, post-industrial. They use introspection, retrospection, and
adaptation to allow people to self-organize based on the application of a set of practices that lead to an
emerging process. This idea is expressed in one of the Agile Alliance's principles: "The best architectures,
requirements, and designs emerge from self-organizing teams" [URL 6]. See the sidebar for all of the
principles behind the Agile Manifesto.
That is, while there is value in the items on the right, we value the items on the left more.
© 2001, the above authors. This declaration may be freely copied in any form, but only in
its entirety through this notice.
EXTREME PROGRAMMING
eXtreme Programming, commonly referred to simply as XP, is one of the most agile of the agile
processes.
XP is defined by four values and a set of practices. However, XP is more than this, more than just doing
the practices. It is a way of creating software that embodies the values. The practices are activities that
we do in order to learn XP. Beck sees the practices as being much the same as etudes that help one
learn and internalize techniques in playing a musical instrument.
Cost
Every project has a cost. This includes equipment, facilities, and man-hours. Especially man-hours.
The really interesting thing is how these variables interact. If you increase cost by adding people to the
project, velocity will often decrease (due to the costs of integrating the additional people) and, hence,
time will increase. On the other hand, increasing the cost by buying faster or better equipment or
facilities can boost your velocity and thus decrease time. You cannot decrease time simply by dumping
more money into a project, especially early in the project. In fact, you are limited in what you can
achieve by controlling cost.
Time is often dictated by external constraints such as trade shows, investor requirements, and business
commitments. Because of this, time is often relatively fixed.
Quality can be variable, but lowering the quality of the system has serious negative impact. No worker is
happy for long if they are coerced into producing goods of poor quality. This can lead to a serious morale
issue. Also, low quality code is harder to extend and maintain. If you try to shorten the time required by
limiting quality, it will surely backfire.
Scope is the thing that is most easily controlled. This entails adding or removing functionality from the
requirements of the project. By using a few simple techniques, XP makes this work very well. One
important technique is to always be working on the most important outstanding feature, according to the
customer. That ensures that if unimplemented features are later removed (thereby reducing scope), the
most important things have already been done.
One mistake that people make is thinking that they can control all four variables. This is impossible.
Cost, time, quality, and scope are interrelated such that the value of any one depends on the values of
the others. What this means in practical terms is that you can control at most three of the four, but
never all four. If you attempt to control all four, your project will most likely fail and will certainly be a
miserable one to work on.
THE VALUES
The reason XP is so successful lies in the four values it is built upon:
1. Communication
2. Simplicity
3. Feedback
4. Courage
Communication
These practices promote open, honest communication between programmers, between programmers and
customers, and with management.
Simplicity
The value of simplicity cannot be overstated. Keeping the code and the design as simple as possible
increases the clarity, understandability, extendability, and maintainability of the system.
Exactly what is simplicity? In the context of XP, Beck [8] defines the simplest system by using four criteria
(in order of decreasing importance). The code is simple if it:
Feedback
Without honest, continuous feedback everything else falls apart. Feedback is what keeps everyone in
sync. It is what enables the programmers to deliver the system that the customer really wants.
Feedback takes many forms and occurs at many levels on many timescales. This can range from the
feedback you get from running your tests on a minute-by-minute basis, the ongoing feedback from your
pair-programming partner, the feedback from the team in the daily meeting, through to feedback from
the customer during iteration planning when they tell you how you're doing in terms of overall direction.
Courage
Courage is required to do XP. It takes courage to make sweeping changes to working code to make it
clearer or just better. Having a broad test suite gives you confidence, but courage still has to come from
inside. It takes courage to tell your customer that they can't have another feature added without
deferring to something else. It takes courage to throw away code.
THE PRACTICES
In this section we'll have a brief look at each of the XP practices. For more detail see [6], [8], [26],
and [URL 12].
Whole Team
Software development is a cooperative game[13]. There are several players, including programmers,
coach, customers, management, and users. All these players are on the same team, and the goal is for
everyone to win. Everyone works together to do that. On a practical note, the core players
(programmers, coach, and ideally customer) sit in the same room so there are minimal barriers to
communication.
Planning Game
There are two aspects to the planning game: release planning and iteration planning. The former deals
with what can be accomplished by a certain date, while the latter deals with what to do next on a day-
to-day basis.
Release Planning
In an XP project, requirements exist in the form of stories that are written by the customer. Each story
briefly describes a single aspect of functionality (sometimes called a feature) that is testable. The
programmers estimate the cost of each story. Based on that information and their knowledge of the
priority [1] of each story, the customer sketches out the order in which stories should be done. We accept
that this plan is inaccurate and we will adjust it as we go, as the velocity of the team changes, and as
the customer changes their mind about scheduled stories or comes up with new ones. This set of stories
is then divided into iterations (each of which last between one and three weeks) based on how much
work the team can get done per iteration.
[1] The priority of a story is based on its business value to the customer.
Iteration Planning
Each release is made up of a series of iterations. At the beginning of each iteration the development
team sits down and brainstorms each story that is scheduled for that iteration. The result is a set of
tasks for each story. These tasks are then estimated. [2] Then each programmer signs up for a selection
of tasks that they can accomplish in this iteration. They know how much work they got done during the
previous iteration and assume that they will get the same amount done this time. This same rule of
thumb is used to estimate how much the team as a whole can accomplish in an iteration. This is then
used in release planning to divide stories into iterations. They sign up for that much work. This is the
concept known as yesterday's weather...odds are that you'll get the same amount done today as you
did yesterday. If a programmer finds that they have time left toward the end of the iteration, they will
look for more work. If a programmer is running out of time (i.e., has too much work), they will defer
some or pass it to another programmer who has time left.
[2] Task estimates are at a finer level of granularity than story estimates, and are more accurate
since there is now more information to work with.
The key thing to remember about the planning game is that the planning is more important than the
plan.
Small Releases
By keeping releases small and frequent we get more rapid feedback from users. Releases should be no
larger than several months' worth of work. As mentioned before, releases are split into iterations. While
the size of releases can vary depending on external constraints, iterations are a fixed size: between one
and three weeks. Historically, two weeks has been the most popular iteration size, but recent experiences
are favoring one-week iterations.
Customer Tests
As stated in previous sections, the customer writes the requirements as a set of stories. They also write
a test (or tests) for each story so that everyone knows when that story has been successfully
implemented in full. These are the customer tests and are also known as acceptance tests since they
indicate when a story's implementation is acceptable. Ideally, the customer has their tests ready for the
programmers to use (i.e., run) before the associated stories are scheduled.
Simple Design
Design for today, not for some potential future. By working test-first we constrain our design to what is
required to pass the test. By refactoring as required, we keep the design clean and simple. That's one
reason why I claim this is a book about design, not about testing.
Pair Programming
Every line of code (test and production) is written by two programmers sitting at a single computer. The
most common, uninformed objection to pair programming is that you have two people doing one
person's work. But it's more than two people writing code together. There is an ongoing, real-time code
review in progress. The partner (the person not typing) can be thinking strategically, while the driver
(the person typing) thinks tactically. Because of this, the partner can suggest what the next test should
be, what names should be used, where refactoring is warranted, etc. Also, because there is always
someone available to talk to, problems can be talked through and brainstormed as needed.
There have been studies on the effectiveness of pair programming. One was presented at the XP2000
conference [37, Chapter 14], which found that using pair programming took only 15 percent more time
and that the extra development cost would be repaid in quicker and cheaper testing, QA, and support.
Another study was presented at XP2002 [38], which found that pair programming increases job satisfaction
among programmers.
The social aspect of pair programming should not be undervalued either. Programmers are people, and
people are social animals.
Test-Driven Development
Write tests first and let that drive your design and coding. That's what this book is all about. One of the
beneficial side effects of TDD is that you end up with a comprehensive test suite. This gives you
confidence to refactor. If the tests pass before refactoring (and they must) and they still pass after, you can be
confident that you haven't changed the behavior of the code.
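The red-green rhythm can be sketched even without a framework; the Money class below is a hypothetical stand-in (the book's real examples use JUnit, covered in Chapter 4), with a bare check in place of a JUnit assertion:

```java
// Hypothetical Money example: one test-first step in plain Java.
class Money {
    private final int amount;

    Money(int amount) {
        this.amount = amount;
    }

    // Written after the test below was watched to fail:
    // just enough production code to make it pass.
    Money add(Money other) {
        return new Money(this.amount + other.amount);
    }

    int amount() {
        return amount;
    }
}

public class MoneyTest {
    public static void main(String[] args) {
        // The test came first ("red"); add() was then written to pass it ("green").
        Money sum = new Money(5).add(new Money(7));
        if (sum.amount() != 12) {
            throw new AssertionError("expected 12, got " + sum.amount());
        }
        System.out.println("green bar");
    }
}
```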
Design Improvement
As you program, you write a small test (just enough to fail), then write a small bit of code (just enough
to pass the failing test). This can lead to poorly designed or poorly structured code. So the next step is
to refactor to make the code clean again.
By refactoring as required, we continuously make the design better, a little at a time. This makes the
code as simple, clear, and intent-revealing as possible.
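As a sketch of one such step (the Report class and its methods are hypothetical, not from the book's project), here is the result of a small Extract Method refactoring; the duplication it removed is described in the comments:

```java
// Hypothetical example of one small refactoring: Extract Method.
// Before, both section methods built their header string inline,
// duplicating the "== title ==" formatting in two places.
// After extracting header(), the duplication is gone and the intent is named.
public class Report {
    static String header(String title) {
        return "== " + title + " ==";
    }

    static String salesSection() {
        return header("Sales");
    }

    static String returnsSection() {
        return header("Returns");
    }

    public static void main(String[] args) {
        // Tests run before and after the refactoring guard the behavior.
        if (!salesSection().equals("== Sales ==")) throw new AssertionError();
        if (!returnsSection().equals("== Returns ==")) throw new AssertionError();
        System.out.println("behavior unchanged");
    }
}
```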
Collective Code Ownership
Everybody owns everything. Anybody can change any code as required. This, especially combined with
pair programming, allows and encourages everyone to get familiar with all (or at least most) of the
codebase. It also enables refactoring in that if anyone sees a problem, they can fix it. It lets the team
work as fast as possible since you never have to wait for the owner of the code to make the change you
need.
Continuous Integration
In XP you never go for very long without integrating your work with the codebase. There are several
beneficial results:
Integration is never a headache because there are minimal differences between the codebase and
your work.
The codebase is always very close to the most current state of the system, typically within
hours.
Your working copy is never very far from the baseline codebase in the repository so you never
have the chance to diverge much.
It is very important to stay this close to the codebase because of collective ownership. Anyone is able to
work on any part of the code. The longer you go without integrating, the more likely it is that someone
has made changes that conflict with yours.
At the very least, you should integrate after you finish each task. It is even better to integrate after
each test-code-refactor cycle. That way you are integrating every 15–30 minutes. When you're doing
this, it's especially important to have your tests run quickly, because before integrating you have to
verify that all the tests run.
Never, never, NEVER leave anything unintegrated at the end of the day. If you can't integrate before you
leave for the day, throw it out and start fresh in the morning. If a task takes longer than a day to do,
either the task is too complex (and has to be broken up) or your solution is too complex (and you need
to find a simpler one). In either case, start over.
Coding Standard
In order to support collective ownership, everyone on the team needs to conform to the same coding
standard. This allows everyone to be comfortable working on any of the code since it all looks the same.
What this standard is doesn't matter. The main thing is that everyone adheres to it. For obvious reasons,
it makes sense to use a standard close to something that is common in the industry.
Common Vocabulary
Everyone involved in the project must use a common vocabulary for talking about the project. This has
often been accomplished through the use of a metaphor.
The metaphor is a shared, simple description of the system. It must be shared by everyone involved and
so should be cast in terms that everyone understands. For example, the C3 project at Chrysler (the
original XP project) used a metaphor of an assembly line to describe the payroll system they were
building. Everyone at Chrysler understood assembly lines.
Not only does the metaphor provide a common understanding of the system's operation, it also provides
a source for names in the system. This tightly couples the code itself with the shared understanding that
has developed.
Sustainable Pace
If you can't stay awake, you can't program. If you can't think, you can't program. Our industry has
made a hero of the programmer that works until the wee hours of the morning, kept awake (and
possibly alive) on a diet of pizza, chips, and cola. Nobody can last long that way. On an XP project, you
stop work when you are ready to. When you've put in a good day you go home, get some rest, and
have some fun. That way you can come back the next morning rested, refreshed, and ready to work at
your top effectiveness. By doing this, you can keep up the pace indefinitely, and not worry about burning
out before the project is completed.
SUMMARY
eXtreme Programming is one of the most revolutionary movements in the history of programming. It
rejects the pomp and ceremony of the heavy methodologies that are standard in our industry. It turns
the focus away from processes that spell out what everyone on the project has to do, and when. It
eschews the use of complex tools and technological solutions in favor of people doing the simplest thing
that could work. Instead of trying to nail down and predict all requirements and timelines at the
beginning of the project, it encourages the customers to think up new requirements and change direction
throughout the project. Instead of trying to fully design the system before the first line of code is written,
it evolves the system in small steps, just enough at any one time.
In short, it lets programmers program, have a life, and have fun doing it.
Some people will tell us that if you're taking a test-driven development (TDD) approach, you don't
need to model. Those people are completely and utterly wrong. These are probably the exact same
people who also told us that you don't model on an eXtreme Programming (XP) project, and they
were completely and utterly wrong about that, too. The best thing that can be said about them, I
guess, is that at least they're consistent. There are several myths surrounding modeling that need
to be dispelled.
Model = Documentation. The reality is that the concepts of model and document are
orthogonal—we can have models that aren't documents and documents that aren't
models. A sketch on the back of a paper napkin is a model, as is a drawing on a white
board, as is a collection of Class Responsibility Collaboration (CRC) cards, as is a low-
fidelity user interface prototype built from flip chart paper and sticky notes. These are all
valuable models, yet questionable documents.
You Can Think Everything Through From the Start. Project teams suffering from this
myth often produce significant amounts of documentation instead of what their users
actually want—working software that meets their needs. We need to recognize that we
can't think all the minutiae through, that the programmers likely won't follow the detailed
guidance provided by the models anyway, and that our models need to evolve over time
in an iterative and incremental manner. The fundamental reality is that only our source code is
ever truly in sync with our software.
Modeling Implies a Prescriptive Software Process. The reality is that we can model in
an agile manner, as we'll see later in this chapter.
You Must "Freeze" Requirements. The good news is that by freezing our requirements
early in the life cycle, the customer is likely to get exactly what they asked for; the bad
news is that they likely won't get what they actually need. The reality is that change
happens, so we need to embrace this fact and act accordingly.
Your Design is Carved in Stone. This is similar to the desire to freeze requirements,
the main difference being that management wants to ensure that every developer is
marching to the same tune by following the design. The result is that developers either
build the wrong things, or build the right things the wrong way, to conform to the
design. The reality is that nobody is perfect; even the best designers aren't going to get
it right the first time. Furthermore, if we don't freeze the requirements, then by
implication we cannot freeze the design—changes to the requirements will force changes
to the design. The best way to look at it is that the design isn't finished for a given
release until we've shipped the code.
You Must Use a CASE Tool. I often create a model to think through an issue, such as
how we might architect one aspect of the system, allowing myself and/or my coworkers
to move forward and implement what we just modeled. As a result, I often don't need a
significant CASE tool to support my modeling efforts—a white board or stack of index
cards often suffices. My experience is that CASE tools are fine as long as they provide
the best value for our investment in them, but that most of the time I simply don't need
one to model successfully. Yes, I'll often use tools such as Together/CC [URL 34] because
it generates significant amounts of Java scaffolding code and ERWin [URL 45] because it
generates database schemas. Both of these tools help me to fulfill the true purpose of
software development: the creation of software that meets the needs of my users. Having
said that, the vast majority of my modeling efforts are still done by using simple tools,
not sophisticated modeling tools.
Modeling is a Waste of Time. The reality is that we are very often more productive
sketching a diagram, developing a low-fidelity prototype, or creating a few index cards, in
order to think something through before we code it. Productive developers model before
they code. Furthermore, modeling is a great way to promote communication between
team members and project stakeholders because you're talking through issues, coming to
a better understanding of what needs to be built and building bonds between everyone
involved with the project in the process.
The World Revolves Around Data Modeling. Many organizations hobble their new
development efforts by starting with a data model. Often this is because the organization
has always done things that way, because the data community has a political death grip on
the IT department and does everything in its power to control the software development
projects, or because the legacy database(s) are such a mess that we have
no other choice. The reality is that we have a wide variety of models at our disposal—use
cases, business rules, activity diagrams, class diagrams, component diagrams, user
interface flow diagrams, and CRC models to name a few—and data models are merely
one such option. We need to use the right model(s) for the job.
All Developers Know How to Model. Modeling skills are gained over years of
experience and only when a developer chooses to gain them. As the agile community will
tell us, people need to work together and to balance off one another's strengths.
Everyone should have the humility to understand that they don't know everything and
therefore they can always learn something important from everyone else: modelers can
learn details of a certain technology from a programmer and programmers can learn
valuable design and architecture techniques from modelers. My personal philosophy is that
everyone is a novice, including myself.
By understanding the myths surrounding modeling and dealing with them effectively, we put ourselves,
our project team, and our organization in a position where we can develop software effectively. Agile
Modeling (AM) describes an effective approach to modeling and documentation that works well within a
TDD environment.
An agile modeler is anyone who models following the AM methodology, applying AM's practices in
accordance with its principles and values. An agile developer is someone who follows an agile approach
to software development. An agile modeler is an agile developer. Not all agile developers are agile
modelers. The goals of AM are threefold:
1. To define and show how to put into practice a collection of values, principles, and practices
pertaining to effective, lightweight modeling. What makes AM a catalyst for improvement isn't the
modeling techniques themselves—such as use case models, class models, data models, or user
interface models—but how to apply them productively.
2. To address the issue of how to apply modeling techniques on software projects taking an agile
approach, such as eXtreme Programming (XP) [8] or Feature Driven Development (FDD) [36].
Sometimes it is significantly more productive for a developer to draw some bubbles and lines to
think through an idea, or to compare several different approaches to solving a problem, than it is
to simply start writing code. There is a danger in being too code-centric—sometimes a quick
sketch can avoid significant churn when we are coding.
3. To address how we can improve our modeling activities following a near-agile approach to
software development, and in particular project teams that have adopted an instantiation of the
Unified Process such as the Rational Unified Process (RUP) [30] or the Enterprise Unified Process
(EUP) [3]. Although we must follow an agile software process to truly be agile modelers,
we may still adopt and benefit from many of AM's practices on non-agile projects.
AM Values
The values of AM include those of XP—communication, simplicity, feedback, and courage—and extend them
with humility. It is critical to have effective communication within our development team as well as with
and between all project stakeholders. We should strive to develop the simplest solution possible that
meets all of our needs and to obtain feedback regarding our efforts often and early. Furthermore, we
should have the courage to make and stick to our decisions, and have the humility to admit that we
may not know everything, that others have value to add to our project efforts.
AM Principles
The principles of AM include the importance of assuming simplicity when we are modeling and embracing
change as we are working because requirements do in fact change over time. We should recognize that
incremental change of our system over time enables agility and that we should strive to obtain rapid
feedback on our work to ensure that it accurately reflects the needs of our project stakeholders. Agile
modelers realize that software is the primary goal, although they balance this with the recognition that
enabling the next effort is the secondary goal. We should model with a purpose: if we don't know why
we are working on something, then we shouldn't be doing it. We also need multiple models in our
development toolkit in order to be effective. A critical concept is that models are not necessarily
documents, a realization that enables us to travel light by discarding most of our models once they have
fulfilled their purpose. Agile modelers believe that content is more important than representation, that
there are many ways we can model the same concept yet still get it right. To be effective modelers we
need to know our models. To be effective teammates we should realize that everyone can learn from
everyone else, we should work with people's instincts, and that open and honest communication is often
the best policy to follow to ensure effective teamwork. Finally, a focus on quality work is important
because nobody likes to produce sloppy work, and that local adaptation of AM to meet the exact needs
of our environment is important. The following summarizes the principles of AM.
Assume Simplicity
As we develop we should assume that the simplest solution is the best solution.
Content Is More Important Than Representation
Any given model could have several ways to represent it. For example, a UI specification could be
created using sticky notes on a large sheet of paper (an essential or low-fidelity prototype), as a sketch
on paper or a whiteboard, as a "traditional" prototype built using a prototyping tool or programming
language, or as a formal document including both a visual representation as well as a textual description
of the UI.
Embrace Change
Accept the fact that change happens. Revel in it; change is one of the things that make software
development exciting.
Enabling the Next Effort Is Your Secondary Goal
Our project can still be considered a failure even when we deliver a working system to our users—part
of fulfilling the needs of our project stakeholders is to ensure that our system is robust enough so that it
can be extended over time. As Alistair Cockburn [13] likes to say, when we are playing the software
development game our secondary goal is to set up to play the next game.
Everyone Can Learn from Everyone Else
Agile modelers have the humility to recognize that they can never truly master something; there is
always opportunity to learn more and to extend their knowledge. They take the opportunity to work with
and learn from others, to try new ways of doing things, to reflect on what seems to work and what
doesn't.
Incremental Change
To embrace change we need to take an incremental approach to our own development efforts, to
change our system a small portion at a time instead of trying to get everything accomplished in one big
release. We can make a big change as a series of small, incremental changes.
Know Your Models
Because we have multiple models that we can apply as agile modelers we need to know their strengths
and weaknesses to be effective in their use.
Local Adaptation
It is doubtful that AM can be applied out of the box; instead, we will need to modify it to reflect the
environment, including the nature of the organization, the team, the project stakeholders, and the
project itself.
Maximize Stakeholder Investment
Our project stakeholders are investing resources—time, money, facilities, and so on—to have software
developed that meets their needs. Stakeholders deserve to invest their resources the best way possible
and not to have them frittered away by our team. Furthermore, stakeholders deserve to have the final
say in how those resources are invested or not invested. If it was our money, would we want it any
other way?
Model With a Purpose
If we cannot identify why and for whom we are creating a model, then why are we bothering to work on
it at all?
Multiple Models
We have a wide range of modeling artifacts available to us. These artifacts include, but are not limited
to, the diagrams of the Unified Modeling Language (UML), structured development artifacts such as data
models, and low-tech artifacts such as essential user interface models [1].
Open and Honest Communication
People need to be free, and to perceive that they are free, to offer suggestions. Open and honest
communication enables people to make better decisions because the quality of the information that they
are basing them on is more accurate.
Quality Work
Agile developers understand that they should invest the effort to make permanent artifacts, such as
source code, user documentation, and technical system documentation of sufficient quality.
Rapid Feedback
Feedback is one of the five values of AM, and because the time between an action and the feedback on
that action is critical, agile modelers prefer rapid feedback over delayed feedback whenever possible.
Software Is Your Primary Goal
The primary goal of software development is to produce high-quality software that meets the needs of
our project stakeholders in an effective manner.
Travel Light
Traveling light means that we create just enough models and documentation to get by.
Work With People's Instincts
As we gain experience at developing software, our instincts become sharper, and what our instincts are
telling us subconsciously can often be an important input into our modeling efforts.
AM Practices
To model in an agile manner we must apply AM's practices appropriately. Fundamental practices include
creating several models in parallel, applying the right artifact(s) for the situation, and iterating to another
artifact to continue moving forward at a steady pace. Modeling in small increments, and not attempting
to create the magical all encompassing model from our ivory tower, is also fundamental to our success
as an agile modeler. Because models are only abstract representations of software, abstractions that may
not be accurate, we should strive to prove it with code to show that our ideas actually work in practice
and not just in theory. Active stakeholder participation is critical to the success of our modeling efforts
because our project stakeholders know what they want and can provide us with the feedback that we
require. There are two fundamental reasons why we create models: either we model to understand an
issue (such as how to design part of the system) or we model to communicate what our team is doing
(or has done). The principle of assume simplicity is supported by the practices of creating simple content
by focusing only on the aspects that we need to model and not attempting to create a highly detailed
model, depicting models simply via use of simple notations, and using the simplest tools to create our
models. We travel light by discarding temporary models and updating models only when it hurts.
Communication is enabled by displaying models publicly, either on a wall or internal Web site, through
collective ownership of our project artifacts, through applying modeling standards, and by modeling with
others. Our development efforts are greatly enhanced when we consider testability, apply patterns
gently, and reuse existing artifacts. Because we often need to integrate with other systems, including
legacy databases as well as Web-based services, we will find that we need to formalize contract models
with the owners of those systems. The following summarizes the practices of AM.
Apply Modeling Standards
Developers should agree to and follow a common set of modeling standards on a software project. A
good source of modeling standards and guidelines is the book The Elements of UML Style [4] and [URL
64].
Apply Patterns Gently
Effective modelers learn and then appropriately apply common architectural, design, and analysis
patterns in their models. However, both Martin Fowler [17] and Joshua Kerievsky [28] believe that
developers should consider easing into the application of a pattern and apply it gently.
Apply the Right Artifact(s)
This practice is AM's equivalent of the adage use the right tool for the job; in this case we want to
create the right model(s) to get the job done. Each artifact—such as a UML state chart, an essential use
case, source code, or data flow diagram (DFD)—has its own specific strengths and weaknesses, and
therefore is appropriate for some situations but not others.
Collective Ownership
Everyone can work on any model, and ideally any artifact on the project, if they need to.
Consider Testability
When we are modeling we should be constantly asking ourselves "How are we going to test this?"
because if we can't test the software that we are building, we shouldn't be building it.
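One concrete way to keep a design testable, sketched here with hypothetical names rather than code from the book, is to model a dependency as an interface so a test can substitute a stub for the real collaborator:

```java
// Hypothetical sketch: depending on an interface keeps Notifier testable,
// because a test can hand it a stub instead of a real mail server.
interface MailService {
    void send(String to, String message);
}

class Notifier {
    private final MailService mail;

    Notifier(MailService mail) {
        this.mail = mail;
    }

    void remind(String user) {
        mail.send(user, "Your story is due");
    }
}

public class TestabilityExample {
    public static void main(String[] args) {
        // A recording stub answers "How are we going to test this?"
        final StringBuilder log = new StringBuilder();
        MailService stub = new MailService() {
            public void send(String to, String message) {
                log.append(to).append(": ").append(message);
            }
        };
        new Notifier(stub).remind("pat");
        if (!log.toString().equals("pat: Your story is due")) {
            throw new AssertionError(log.toString());
        }
        System.out.println("testable");
    }
}
```

Asking the testability question while modeling tends to surface exactly these seams before the design hardens.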
Create Several Models in Parallel
Because each type of model has its strengths and weaknesses, no single model is sufficient for our
modeling needs. By working on several at once we can easily iterate back and forth between them and
use each model for what it is best suited.
Create Simple Content
We should keep the actual content of our models—requirements, analysis, architecture, or design—as
simple as we possibly can while still fulfilling the needs of our project stakeholders. The implication is
that we should not add additional aspects to our models unless they are justifiable.
Depict Models Simply
We should use a subset of the modeling notation available to us—a simple model that shows the key
features that we are trying to understand, perhaps a class model depicting the primary responsibilities of
classes and the relationships between them, often proves to be sufficient.
Discard Temporary Models
The vast majority of the models that we create are temporary working models—design sketches,
low-fidelity prototypes, index cards, potential architecture/design alternatives, and so on—models
that have fulfilled their purpose but no longer add value now that they have done so.
Display Models Publicly
This supports the principle of open and honest communication on our team because all of the current
models are quickly accessible to them, as well as with our project stakeholders because we aren't hiding
anything from them.
Formalize Contract Models
Contract models are often required when an external group controls an information resource that our
system requires, such as a database, legacy application, or information service. A contract model is
formalized with both parties mutually agreeing to it and ready to mutually change it over time if
required.
Iterate to Another Artifact
Whenever we find that we are having difficulties working on one artifact (perhaps we are working on a
use case and find that we are struggling to describe the business logic), that's a sign that we should
iterate to another artifact. By iterating to another artifact we immediately become unstuck because we
are making progress working on that other artifact.
Model in Small Increments
With incremental development we model a little, code a little, test a little, then deliver a little. No more
big design upfront (BDUF) where we invest weeks or even months creating models and documents.
Model to Communicate
One reason why we model is to communicate with people external to our team or to create a contract
model.
Model to Understand
The most important application of modeling is to explore the problem space, to identify and analyze the
requirements for the system, or to compare and contrast potential design alternatives to identify the
potentially most simple solution that meets the requirements.
Prove It with Code
A model is an abstraction, one that should accurately reflect an aspect of whatever you are building. To
determine if it will actually work, we should validate that our model works by writing the corresponding
code.
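For example (the Account class and its rule are hypothetical, invented for illustration rather than taken from the book), a modeled business rule can be proven in a few lines:

```java
// Hypothetical sketch: suppose the model claims an Account may never go negative.
// A few lines of code prove the rule works before we build further on the design.
class Account {
    private int balance;

    void deposit(int amount) {
        balance += amount;
    }

    boolean withdraw(int amount) {
        if (amount > balance) {
            return false;  // the rule from the model: no overdrafts
        }
        balance -= amount;
        return true;
    }

    int balance() {
        return balance;
    }
}

public class ProveItWithCode {
    public static void main(String[] args) {
        Account account = new Account();
        account.deposit(10);
        if (account.withdraw(20)) throw new AssertionError("overdraft allowed");
        if (account.balance() != 10) throw new AssertionError();
        System.out.println("the model holds in code");
    }
}
```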
Reuse Existing Resources
There is a wealth of existing information and resources that agile modelers can take advantage of by reusing them.
Update Only When It Hurts
We should update an artifact such as a model or document only when we absolutely need to (when not
having the model updated is more painful than the effort of updating it).
Use the Simplest Tools
The vast majority of models can be drawn on a whiteboard, on paper, or even on the back of a napkin.
Note that AM has nothing against CASE tools—if investing in a CASE tool is the most effective use of our
resources, then we should do so and then use it to the best of its ability.
At its core, AM is simply a collection of techniques that reflect the principles and values shared by many
experienced software developers. If there is such a thing as agile modeling, then are there also agile
models? Yes.
Agile models are just barely good enough when they exhibit the following traits:
Agile models fulfill their purpose
Sometimes we model to communicate (perhaps we need to convey the scope of our effort to senior
management), and sometimes we model to understand (perhaps we need to determine a design
strategy for implementing a collection of Java classes). If we don't know why we are creating a model,
then we shouldn't create it; that wouldn't be agile.
Agile models are understandable
Agile models are understandable by their intended audience. A requirements model will be written in the
language of the business that our users comprehend, whereas a technical architecture model will likely
use technical terms that developers are familiar with. For example, the whiteboard sketch in Figure B.1
is straightforward and easy to understand—it isn't perfect but it gets the message across. The modeling
notation that we use affects understandability—UML use case diagrams are of no value to our users if
they don't understand what the notation represents. In this case we would either need to use another
approach or educate them in the modeling technique. Style issues, such as avoiding crossing lines, will
also affect understandability—messy diagrams are harder to read than clean ones [4]. The level of detail
in our models and simplicity also affect understandability.
Agile models are sufficiently accurate
Models often do not need to be 100 percent accurate; they just need to be accurate enough. For
example, if a street map is missing a street, or it shows that a street is open but we discover it's closed
for repairs, do we throw away the map and start driving mayhem through the city? Not likely. We might
decide to update our map; we could pull out a pen and do it ourselves or go to the local store and
purchase the latest version (which still might be out of date), or we could simply accept that the map
isn't perfect but still use it because it is good enough for our purposes. We don't discard our street map
the minute we find an inaccuracy because we don't expect the map to be perfect nor do we need it to
be. Some project teams can tolerate inaccuracies whereas others can't: the nature of the project, the
nature of the individual team members, and the nature of the organization will decide this.
Agile models are sufficiently consistent
An agile model does not need to be perfectly consistent with itself or with other artifacts to be useful. If
a use case clearly invokes another in one of its steps, then the corresponding use case diagram should
indicate that with an association between the two use cases that is tagged with the UML stereotype of
<<include>>. If the diagram lacks that association, it is clearly inconsistent with the use case specification, yet the world hasn't
come to an end. In an ideal world, all of our artifacts would be perfectly consistent but the world isn't
ideal nor does it need to be. There is clearly an entropy issue to consider regarding accuracy and
consistency. If we have an artifact that we wish to keep as official documentation, then we will need to
invest the resources to update it as time goes on, otherwise it will quickly become out of date and
useless. The data model of Figure B.2 is missing a few recently added columns, yet it still provides very
good insight into our database schema. There is a fine line between spending too much time updating
documents and not enough. As an aside, Figure B.2 follows the proposed UML notation for data modeling
described at [URL 7].
Agile models are sufficiently detailed
A road map doesn't indicate each individual house on each street. That would be too much detail and
thus would make the map difficult to work with. However, when a street is being built, I would imagine
the builder has a detailed map of the street that shows each building, the sewers, electrical boxes, and
so on in enough detail that makes the map useful to him. This map doesn't depict the individual patio
stones that make up the walkway to each house; once again, that would be too much detail. Sufficient detail
depends on the audience and the purpose for which they are using a model—drivers need maps that
show streets, builders need maps that show civil engineering details. Similarly, Figure B.1 clearly doesn't
provide a detailed description of the XYZ business process, nor is it perfect, but it does depict the
process at a sufficient level of detail. I've worked on many projects where a couple of diagrams drawn
on a whiteboard, updated as the project went along, were sufficient to describe the architecture.
A fundamental aspect of any project artifact is that it should add positive value. Does the benefit that an
architecture model brings to our project outweigh the costs of developing and (optionally) maintaining it?
An architecture model helps to solidify the vision our project team is working towards, which
clearly has value. But, if the costs of that model outweigh the benefits, then it no longer provides
positive value. Perhaps it was unwise to invest $100,000 developing a detailed and heavily documented
architecture model when a $5,000 investment resulting in whiteboard diagrams recorded via digital
snapshots would have done the job.
Agile models are as simple as possible
We should strive to keep our models as simple as possible while still getting the job done. Simplicity is
clearly affected by the level of detail in our models, but it also can be affected by the extent of the
notation that we apply. For example, Unified Modeling Language (UML) class diagrams can include a
myriad of symbols, yet most diagrams can get by with just a portion of the notation. We often don't
need to apply all the symbols available to us, so we limit ourselves to a subset of the notation that still
allows us to get the job done. Often a CRC model is sufficient to explore the business domain or the
detailed design of our software, an example of which is depicted in Figure B.3, so we don't even need to
create a UML class diagram.
Therefore, the definition of an agile model is this: a model that fulfills its purpose and no more; is
understandable to its intended audience; is as simple as possible; is sufficiently accurate, consistent,
and detailed; and whose creation and maintenance provide positive value to the project. In other words,
an agile model is just barely good enough.
Amazon
Prev don't be afraid of buying books Next
FORUMS
Here you will find a selection of online places to talk with others about TDD and/or XP. This includes
mailing lists, newsgroups, forums, etc.
groups.yahoo.com/group/junit
groups.yahoo.com/group/extremeprogramming
groups.yahoo.com/group/testdrivendevelopment
groups.yahoo.com/group/refactoring
This is a forum for discussions about refactoring, including tools associated with
refactoring. It is a place to share and discuss new and old refactorings in a variety of
software languages.
www.agilealliance.org
www.agilemanifesto.org
This site contains the text of the manifesto, information about the manifesto and its
authors, and a list of signatories (and a way to become one).
www.agiledata.org
A good source of information about how data can be treated with agility. Hosted by Scott
Ambler.
www.agilemodeling.com
The home of the Agile Modeling "movement." See Appendix B for an introduction. Hosted
by Scott Ambler.
www.xp123.com
www.refactoring.com
www.xprogramming.com
www.extremeprogramming.org
www.c2.com/cgi/wiki
The granddaddy of XP sites. This is where it all began. This is a wiki (the original) so it's
a forum for discussion as well as an information source.
JUNIT-RELATED SOFTWARE
[URL 14] JUnit Resources
www.junit.org
This is the place to get the latest versions of the programmer test frameworks in the
xUnit family. There are also extensions and articles about programmer testing.
www.clarkware.com/software/JUnitPerf.html
JUnitPerf is a collection of JUnit test decorators that help you measure the performance
and scalability of functionality contained within existing JUnit tests.
junitpp.sourceforge.net
www.daedalos.com/EN/djux
The Daedalos JUnit Extensions (djux) extend the JUnit testing framework in various ways.
They allow specifying TestResources that are available during the whole test cycle. Using
test resources speeds up unit tests, because time consuming initializations are only done
once and remain active over a complete series of test runs. Furthermore, they allow you
to integrate external testing tools as well as to perform specific database tests inside
JUnit tests.
At the site you can download the current version of the Daedalos JUnit Extensions and
find additional information about unit testing, future releases, and examples.
xmlunit.sourceforge.net
XML can be used for just about anything, so deciding if two documents are equal to each
other isn't as easy as a character for character match. XMLUnit allows you to compare
XML documents and strings.
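XMLUnit's own API is richer than this, but the underlying problem can be sketched with nothing more than the JDK's DOM classes (the class name and the example documents below are invented for illustration, and this is not how XMLUnit is implemented): two documents can differ character for character yet still be equal as XML.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XmlCompare {
    // Parse a string into a DOM Document, converting checked exceptions
    // so the method is easy to call from test code.
    static Document parse(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            doc.normalizeDocument();
            return doc;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String a = "<movie name=\"Star Wars\" rating=\"4\"/>";
        String b = "<movie rating=\"4\" name=\"Star Wars\"/>";
        // Character for character, the strings differ...
        System.out.println(a.equals(b));                    // false
        // ...but as XML they are equal: attribute order is insignificant.
        System.out.println(parse(a).isEqualNode(parse(b))); // true
    }
}
```

A character-based assertEquals would fail on these two equivalent documents; XMLUnit's comparison support exists precisely to make this kind of XML-level equality easy to assert in a test.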
gsbase.sourceforge.net
A collection of classes that are helpful when writing JUnit test cases.
nounit.sourceforge.net
NoUnit allows you to see how good your JUnit tests are. It generates a report from your
code to graphically show you how many of your project's methods are being tested, and
how well.
This is invaluable if you have code that doesn't have a suite of programmer tests.
If you are practicing TDD, then this shouldn't be required since all the code is there as a
direct result of a test requiring it. But not a bad idea, anyway... bad stuff happens.
jester.sourceforge.net
Jester finds code that is not covered by tests. It does this by making systematic (and
independent) changes to your code, compiling, and running your test suite. If changing
the code results in the tests still passing 100 percent, then a potential problem is
flagged.
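Jester itself works on real source files — it edits the code, recompiles, and reruns the whole JUnit suite — but the idea it exploits can be sketched in a few lines (everything below is an invented toy, not Jester's implementation): if a deliberately broken "mutant" version of the code still passes the tests, the tests have a hole.

```java
import java.util.function.IntBinaryOperator;

public class MutationIdea {
    // A deliberately weak "test suite" for a max() implementation.
    // Note that it never exercises the a == b case.
    static boolean testsPass(IntBinaryOperator max) {
        return max.applyAsInt(2, 3) == 3
            && max.applyAsInt(5, 1) == 5;
    }

    public static void main(String[] args) {
        IntBinaryOperator original = (a, b) -> a > b ? a : b;
        IntBinaryOperator mutantGe = (a, b) -> a >= b ? a : b; // mutate > to >=
        IntBinaryOperator mutantLt = (a, b) -> a < b ? a : b;  // mutate > to <

        System.out.println(testsPass(original)); // true: the real code passes
        System.out.println(testsPass(mutantLt)); // false: this mutant is "killed"
        // This mutant survives: the tests can't tell > from >=,
        // which is exactly the kind of gap a mutation tester would flag.
        System.out.println(testsPass(mutantGe)); // true
    }
}
```

The surviving `>=` mutant tells you to add a test for the boundary case, which is more information than a coverage percentage alone gives you.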
www.thecortex.net/clover
Clover instruments your source code, gathers execution counts on a line-by-line basis
when you run your test suite (or run the code in whatever way you want), and generates
reports from the resulting data. This is very handy if you want to find out how
comprehensive your tests are. And you should want to find that out.
www.mockobjects.com
mockmaker.sourceforge.net
MockMaker is a program for creating source code for mock object classes that work with
the MockObjects framework (required parts of the framework are included in the
MockMaker download).
www.easymock.org
mockry.sourceforge.net
JUNIT-RELATED INFORMATION
[URL 27] JUnit: A Starter Guide
www.diasparsoftware.com/articles/JUnit/jUnitStarterGuide.html
by J. B. Rainsberger
After reading how critical the JUnit community is of JUnit's documentation, we decided to
create some.
www.xprogramming.com/testfram.htm
junit.sourceforge.net/doc/testinfected/testing.htm
www.clarkware.com/articles/JUnitPrimer.html
Another great introduction to using JUnit, by Mike Clark.
www.cwi.nl/~leon/papers/xp2001/xp2001.pdf
An excellent article on smells in test code and what to do about them. Presented at
XP2001, by Arie van Deursen, Leon Moonen, Alex van den Bergh, and Gerard Kok.
TOOLS
[URL 32] The Eclipse Project
www.eclipse.org
You can find the latest versions of the open source Eclipse IDE here, as well as articles,
forums, etc. relating to Eclipse. Eclipse now includes the JUnit plugin as part of the build.
www.intellij.com/idea/
www.togethersoft.com
www.borland.com/jbuilder
jakarta.apache.org/ant
cruisecontrol.sourceforge.net
A continuous integration tool from Martin Fowler and the folks at ThoughtWorks.
chip.cs.uiuc.edu/users/brant/Refactory
www.instantiations.com/jfactor
A Java refactoring tool from Instantiations (who have been making tools for the OO world
for a long time). It is a refactoring plugin.
[URL 40] RefactorIT
www.refactorit.com
A Java refactoring tool from Aqris that works with JBuilder, Sun ONE Studio, NetBeans,
and JDeveloper.
www.xptools.com
A Java refactoring tool from XP Tools AB that helps find smells, suggests appropriate
refactorings, and performs refactorings. Available as a stand-alone application that
interoperates with most Java IDEs and as a plugin for Borland's JBuilder.
www.chive.com/products/retool
Retool is a refactoring tool from Chive Software. It is an add-in for Oracle 9i, JDeveloper,
and Borland JBuilder 4/5.
www.cincom.com/smalltalk
VisualWorks is the direct descendant of the original Smalltalk systems from XEROX PARC.
XEROX spun off ParcPlace to commercialize Smalltalk-80. This evolved into VisualWorks
and was subsequently bought by Cincom, who has continued to develop and evolve it.
[URL 44] Squeak
www.squeak.org
Squeak is available for free via the Internet at this and other sites. Each release includes
platform-independent support for color, sound, and network access, with complete source
code. Originally developed on the Macintosh, members of its user community have since
ported it to numerous other platforms.
www3.ca.com/Solutions/Product.asp?ID=260
AllFusion ERwin Data Modeler is an industry-leading data modeling solution that can help
you create and maintain databases, data warehouses, and enterprise data models.
homepage1.nifty.com/markey/ruby/rubyunit/index_e.html
On this page you can download the latest version of RubyUnit and find documentation
and related tools (e.g., a Ruby implementation of a mock objects framework).
www.xprogramming.com/testfram.htm
On this page you can download the latest version of SUnit. SUnit is now included in the
standard distribution of Cincom VisualWorks Smalltalk.
cppunit.sourceforge.net
On this page you can download the latest version of CppUnit, find documentation, and
get involved. CppUnit was originally a port of JUnit done by Michael Feathers of Object
Mentor.
unitpp.sourceforge.net
On this page you can download the latest version of Unit++. This is an alternative xUnit
implementation for C++ which takes a different approach. It isn't a port of JUnit, rather it
is a redesign and rewrite in C++. As such, it takes advantage of C++'s capabilities and
idioms.
[URL 50] Testing Framework for Perl
perlunit.sourceforge.net
www.vbunit.com
nunit.org
pyunit.sourceforge.net
COMPANIES
[URL 54] Adaption Software, Inc.
www.adaptionsoft.com
Adaption is the company founded and run by the author of this book.
Adaption Software uses its expertise in Extreme Programming (XP) and Test Driven
Development, as well as its high level of software craftsmanship to provide:
2. Training courses for individuals and organizations who wish to learn TDD or XP;
www.pragmaticprogrammer.com
The online home of The Pragmatic Programmers, the company built by Dave Thomas and
Andy Hunt.
www.clarkware.com
www.daedalos.com
The company that employs Jens-Uwe Pipka. Kent Beck worked with this company for a
couple of years as well.
www.thoughtworks.com
www.gargoylesoftware.com
www.togethersoft.com
MISCELLANEOUS
[URL 61] The Coad Letter
bdn.borland.com/coadletter
The Coad Letter focuses on these key areas: competitive strategy, adaptive process,
modeling, design, and test-driven development. This site is a specialized e-newsletter
source and online community dedicated to developing, discussing, and advancing these
topics.
www.enteract.com/~bradapp/ftp/src/libs/C++/AvlTrees.html
www.sdmagazine.com
www.modelingstyle.info
www.adaptionsoft.com/tddapg.html
www.go-mono.com
Mono includes: a compiler for the C# language, a runtime for the Common Language
Infrastructure (also referred to as the CLR), and a set of class libraries.
www.python.org
The online home of Python.
www.puffinhome.org
webunit.sourceforge.net
The online home of WebUnit: a unit testing framework for testing Web applications in
PyUnit tests. WebUnit was developed by Steve Purcell, the author of PyUnit.
Answer 1:

public class TestMovie extends TestCase {
    private Movie starWars = null;

    protected void setUp() {
        starWars = new Movie("Star Wars", null, -1);
    }

    public void testMovieName() {
        assertEquals("starWars should have name \"Star Wars\".",
                     "Star Wars",
                     starWars.getName());
    }

    public void testNullName() {
        String nullString = null;
        try {
            new Movie(nullString, null, -1);
            fail("null name should have thrown IllegalArgumentException.");
        } catch (IllegalArgumentException ex) {
        }
    }

    public void testEmptyName() {
        try {
            new Movie("", null, -1);
            fail("empty name should have thrown IllegalArgumentException.");
        } catch (IllegalArgumentException ex) {
        }
    }

    public void testToString() {
        assertEquals("starWars should have toString of \"Star Wars\".",
                     "Star Wars",
                     starWars.toString());
    }

    public void testEquals() {
        final Movie a = new Movie("Star Wars", null, -1);
        final Movie b = new Movie("Star Wars", null, -1);
        final Movie c = new Movie("Star Trek", null, -1);
        final Movie d = new Movie("Star Wars", null, -1) {
        };
        new EqualsTester(a, b, c, d);
    }

    public void testRenaming() {
        String newName = "Star Trek";
        Movie aMovie = new Movie("Star Wars", null, -1);
        aMovie.rename(newName);
        assertEquals("Renaming should change the name.",
                     newName,
                     aMovie.getName());
    }

    public void testNullRename() {
        Movie aMovie = new Movie("Star Wars", null, -1);
        try {
            aMovie.rename(null);
            fail("null rename should have thrown IllegalArgumentException.");
        } catch (IllegalArgumentException ex) {
        }
    }

    public void testEmptyRename() {
        Movie aMovie = new Movie("Star Wars", null, -1);
        try {
            aMovie.rename("");
            fail("empty rename should have thrown IllegalArgumentException.");
        } catch (IllegalArgumentException ex) {
        }
    }

    public void testCopyConstructor() {
        Movie copyOfStarWars = new Movie(starWars);
        assertNotSame("A copy should not be the same as the original.",
                      starWars,
                      copyOfStarWars);
        assertEquals("A copy should be equal to the original.",
                     starWars,
                     copyOfStarWars);
    }

    public void testUnRated() {
        assertFalse("starWars should be unrated.", starWars.hasRating());
    }

    public void testRatedMovie() throws UnratedException {
        Movie fotr = new Movie("Fellowship of the Ring", null, 5);
        assertTrue("fotr should be rated", fotr.hasRating());
        assertEquals("fotr should be rated at 5.", 5, fotr.getRating());
    }

    public void testUnratedException() {
        try {
            starWars.getRating();
            fail("getRating on an unrated Movie should throw UnratedException.");
        } catch (UnratedException ex) {
            assertEquals("UnratedException should identify the movie.",
                         starWars.getName(),
                         ex.getMessage());
        }
    }

    public void testUncategorized() {
        assertEquals("starWars should be uncategorized.",
                     "Uncategorized",
                     starWars.getCategory());
    }

    public void testScienceFiction() {
        Movie alien = new Movie("Alien", "Science Fiction", -1);
        assertEquals("alien should be Science Fiction.",
                     "Science Fiction",
                     alien.getCategory());
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(TestMovie.class);
    }
}
2. Refactor the tests again, this time removing the unneeded Movie creation.
Answer 2:

public class TestMovie extends TestCase {
    private Movie starWars = null;

    protected void setUp() {
        starWars = new Movie("Star Wars", null, -1);
    }

    public void testMovieName() {
        assertEquals("starWars should have name \"Star Wars\".",
                     "Star Wars",
                     starWars.getName());
    }

    public void testNullName() {
        String nullString = null;
        try {
            new Movie(nullString, null, -1);
            fail("null name should have thrown IllegalArgumentException.");
        } catch (IllegalArgumentException ex) {
        }
    }

    public void testEmptyName() {
        try {
            new Movie("", null, -1);
            fail("empty name should have thrown IllegalArgumentException.");
        } catch (IllegalArgumentException ex) {
        }
    }

    public void testToString() {
        assertEquals("starWars should have toString of \"Star Wars\".",
                     "Star Wars",
                     starWars.toString());
    }

    public void testEquals() {
        final Movie a = new Movie("Star Wars", null, -1);
        final Movie b = new Movie("Star Wars", null, -1);
        final Movie c = new Movie("Star Trek", null, -1);
        final Movie d = new Movie("Star Wars", null, -1) {
        };
        new EqualsTester(a, b, c, d);
    }

    public void testRenaming() {
        String newName = "Star Trek";
        starWars.rename(newName);
        assertEquals("Renaming should change the name.",
                     newName,
                     starWars.getName());
    }

    public void testNullRename() {
        try {
            starWars.rename(null);
            fail("null rename should have thrown IllegalArgumentException.");
        } catch (IllegalArgumentException ex) {
        }
    }

    public void testEmptyRename() {
        try {
            starWars.rename("");
            fail("empty rename should have thrown IllegalArgumentException.");
        } catch (IllegalArgumentException ex) {
        }
    }

    public void testCopyConstructor() {
        Movie copyOfStarWars = new Movie(starWars);
        assertNotSame("A copy should not be the same as the original.",
                      starWars,
                      copyOfStarWars);
        assertEquals("A copy should be equal to the original.",
                     starWars,
                     copyOfStarWars);
    }

    public void testUnRated() {
        assertFalse("starWars should be unrated.", starWars.hasRating());
    }

    public void testRatedMovie() throws UnratedException {
        Movie fotr = new Movie("Fellowship of the Ring", null, 5);
        assertTrue("fotr should be rated", fotr.hasRating());
        assertEquals("fotr should be rated at 5.", 5, fotr.getRating());
    }

    public void testUnratedException() {
        try {
            starWars.getRating();
            fail("getRating on an unrated Movie should throw UnratedException.");
        } catch (UnratedException ex) {
            assertEquals("UnratedException should identify the movie.",
                         starWars.getName(),
                         ex.getMessage());
        }
    }

    public void testUncategorized() {
        assertEquals("starWars should be uncategorized.",
                     "Uncategorized",
                     starWars.getCategory());
    }

    public void testScienceFiction() {
        Movie alien = new Movie("Alien", "Science Fiction", -1);
        assertEquals("alien should be Science Fiction.",
                     "Science Fiction",
                     alien.getCategory());
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(TestMovie.class);
    }
}
3. The customer identified these categories: Science Fiction, Horror, Comedy, Western, Drama, Fantasy, Kids,
Adult, Mystery, Thriller. Add these to Category.
Answer 3:

public class Category {
    private String name = null;

    private Category(String categoryName) {
        name = categoryName;
    }

    public static final Category UNCATEGORIZED = new Category("Uncategorized");
    public static final Category SCIFI = new Category("Science Fiction");
    public static final Category HORROR = new Category("Horror");
    public static final Category COMEDY = new Category("Comedy");
    public static final Category WESTERN = new Category("Western");
    public static final Category DRAMA = new Category("Drama");
    public static final Category FANTASY = new Category("Fantasy");
    public static final Category KIDS = new Category("Kids");
    public static final Category ADULT = new Category("Adult");
    public static final Category MYSTERY = new Category("Mystery");
    public static final Category THRILLER = new Category("Thriller");
}
Answer 4:

public void select(int i) {
    if (i == -1) {
        selectedMovie = null;
    } else {
        selectedMovie = movies.getMovie(i);
        view.setNameField(selectedMovie.getName());
        view.setCategoryField(selectedMovie.getCategory());
        try {
            view.setRatingField(selectedMovie.getRating() + 1);
        } catch (UnratedException e) {
            view.setRatingField(0);
        }
    }
}
Here are the updated methods. We just need to add expectations for the calls to setCategoryField():
public void testUpdating() {
    Vector newMovies = new Vector();
    newMovies.add(starWars);
    newMovies.add(new Movie("Star Trek I", Category.SCIFI, 5));
    newMovies.add(stargate);
    newMovies.add(theShining);
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.setNameField("Star Trek");
    control.setVoidCallable(1);
    mockView.setRatingField(4);
    control.setVoidCallable();
    mockView.setCategoryField(Category.SCIFI);
    control.setVoidCallable(1);
    mockView.getNameField();
    control.setReturnValue("Star Trek I", 1);
    mockView.getRatingField();
    control.setReturnValue(6, 1);
    mockView.setMovies(newMovies);
    control.setVoidCallable(1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.select(1);
    editor.update();
    control.verify();
}
public void testUpdatingWithSameName() {
    Vector newMovies = new Vector();
    newMovies.add(starWars);
    newMovies.add(new Movie("Star Trek", Category.SCIFI, 5));
    newMovies.add(stargate);
    newMovies.add(theShining);
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.setNameField("Star Trek");
    control.setVoidCallable(1);
    mockView.setRatingField(4);
    control.setVoidCallable();
    mockView.setCategoryField(Category.SCIFI);
    control.setVoidCallable(1);
    mockView.getNameField();
    control.setReturnValue("Star Trek", 1);
    mockView.getRatingField();
    control.setReturnValue(6, 1);
    mockView.setMovies(newMovies);
    control.setVoidCallable(1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.select(1);
    editor.update();
    control.verify();
}
public void testDuplicateCausingUpdate() {
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.setNameField("Star Trek");
    control.setVoidCallable(1);
    mockView.setRatingField(0);
    control.setDefaultVoidCallable();
    mockView.setCategoryField(Category.SCIFI);
    control.setVoidCallable(1);
    mockView.getNameField();
    control.setReturnValue("Star Wars", 1);
    mockView.duplicateException("Star Wars");
    control.setVoidCallable(1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.select(1);
    editor.update();
    control.verify();
}
6. We used the toString() method to get the value for the category field, as well as the value from the
expected Category to compare against the field contents. What's the problem that we have with the system
in its current state? (Hint: look at Category.) Fix it.

Answer 6:

We haven't defined Category.toString(), so it just uses the default from Object. This gives a consistent
result, and so our test passes. However, it's not human readable. To fix it we need to add toString() to
Category:

public String toString() {
    return name;
}
public void testUpdatingWithSameName() {
    Vector newMovies = new Vector();
    newMovies.add(starWars);
    newMovies.add(new Movie("Star Trek", Category.SCIFI, 5));
    newMovies.add(stargate);
    newMovies.add(theShining);
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.setNameField("Star Trek");
    control.setVoidCallable(1);
    mockView.setRatingField(4);
    control.setVoidCallable();
    mockView.setCategoryField(Category.SCIFI);
    control.setVoidCallable(1);
    mockView.getNameField();
    control.setReturnValue("Star Trek", 1);
    mockView.getRatingField();
    control.setReturnValue(6, 1);
    mockView.getCategoryField();               // this line added
    control.setReturnValue(Category.SCIFI, 1); // this line added
    mockView.setMovies(newMovies);
    control.setVoidCallable(1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.select(1);
    editor.update();
    control.verify();
}
Answer 8:

public String toString() {
    StringBuffer buf = new StringBuffer("[");
    Iterator movieIterator = movies.iterator();
    boolean first = true;
    while (movieIterator.hasNext()) {
        Movie aMovie = (Movie) movieIterator.next();
        if (!first) {
            buf.append(' ');
        }
        first = false;
        buf.append('"');
        buf.append(aMovie.getName());
        buf.append('"');
    }
    buf.append(']');
    return buf.toString();
}
    MovieList unequalList = new MovieList();
    unequalList.add(starWars);
    unequalList.add(stargate);
    new EqualsTester(movieList, equalList, unequalList, null);
}
public boolean equals(Object o) {
    if (o == this) return true;
    if (o == null) return false;
    if (o.getClass() != this.getClass()) return false;
    MovieList aMovieList = (MovieList) o;
    return movies.equals(aMovieList.movies);
}

public int hashCode() {
    return movies.hashCode();
}
Answer 10:

// ...
private MovieList scifiList = null;
private MovieList thrillerList = null;
private MovieList horrorList = null;

protected void setUp() throws Exception {
    //...
    scifiList = new MovieList();
    scifiList.add(starWars);
    scifiList.add(starTrek);
    scifiList.add(stargate);

    thrillerList = new MovieList();
    thrillerList.add(redOctober);
    thrillerList.add(congo);

    horrorList = new MovieList();
    horrorList.add(theShining);
    horrorList.add(carrie);
}

public MovieListEditor(MovieList movieList, MovieListEditorView aView) {
    movies = movieList;
    filteredMovies = movieList;
    view = aView;
    view.setEditor(this);
    updateMovieList();
}

private void updateMovieList() {
    view.setMovies(new Vector(filteredMovies.getMovies()));
}
12. Extract the fixture code from the two tests into setUp().
Answer 12:

public class TestMovieListWriter extends TestCase {
    StringWriter destination = null;
    MovieList movieList = null;

    protected void setUp() {
        destination = new StringWriter();
        movieList = new MovieList();
    }

    public void testWritingEmptyList() throws Exception {
        movieList.writeTo(destination);
        assertEquals("Writing an empty list should produce nothing.",
                     "",
                     destination.toString());
    }

    public void testWritingOneMovie() throws Exception {
        String starWarsOutput = "Star Wars|Science Fiction|4\n";
        Movie starWars = new Movie("Star Wars", Category.SCIFI, 4);
        movieList.add(starWars);
        movieList.writeTo(destination);
        assertEquals("Wrong output from writing a single movie list.",
                     starWarsOutput,
                     destination.toString());
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(TestMovieListWriter.class);
    }
}
13. Add the method stubs and declarations we need in order to get the test compiling.
Answer 13:
In MovieListEditor:
public void saveAs() {
}

In MovieListEditorView:
File getFile(String string);

And in SwingMovieListEditorView:
public File getFile(String pattern) {
    return null;
}
14. Make the required change to TestMovieListEditorFileOperations.testSaving().
Answer 14:
public void testSaving() throws Exception {
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.getFile();
    control.setReturnValue(outputFile, 1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    assertTrue("Editor should have saved", editor.saveAs());
    FileAssert.assertSize("SaveAs-ed file has wrong size.",
                          expected.length(),
                          outputFile);
    FileAssert.assertEquals("SaveAs-ed file has wrong contents",
                            expected,
                            outputFile);
    control.verify();
}
Answer 15:
public void testAdding() throws DuplicateMovieException {
    String LOSTINSPACE = "Lost In Space";
    Movie lostInSpace = new Movie(LOSTINSPACE, Category.SCIFI, 3);
    Vector moviesWithAddition = new Vector(movies);
    moviesWithAddition.add(lostInSpace);
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.getNameField();
    control.setReturnValue(LOSTINSPACE, 1);
    mockView.getCategoryField();
    control.setReturnValue(Category.SCIFI, 1);
    mockView.getRatingField();
    control.setReturnValue(3, 1);
    mockView.setMovies(moviesWithAddition);
    control.setVoidCallable(1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.add();
    control.verify();
}
public void testDuplicateCausingAdd() {
    mockView.setMovies(movies);
    control.setVoidCallable(1);
    mockView.getNameField();
    control.setReturnValue("Star Wars", 1);
    mockView.getCategoryField();
    control.setReturnValue(Category.SCIFI, 1);
    mockView.getRatingField();
    control.setReturnValue(5, 1);
    mockView.duplicateException("Star Wars");
    control.setVoidCallable(1);
    control.activate();
    MovieListEditor editor = new MovieListEditor(movieList, mockView);
    editor.add();
    control.verify();
}
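These tests follow EasyMock's record-replay-verify cycle: expected calls are recorded on the mock, activate() switches the control into replay mode, and verify() fails the test if the actual calls differ from the recording. A toy hand-rolled control (not the real EasyMock API) that shows the mechanism:

```java
import java.util.ArrayList;
import java.util.List;

// Toy record/replay/verify control illustrating the cycle the
// EasyMock-based tests rely on. Not the real EasyMock API.
class TinyControl {
    private final List expected = new ArrayList();
    private final List actual = new ArrayList();
    private boolean replaying = false;

    void call(String method) {
        if (replaying) actual.add(method); else expected.add(method);
    }

    void activate() { replaying = true; }   // switch from record to replay

    void verify() {                         // fail if calls don't match
        if (!expected.equals(actual))
            throw new AssertionError(
                "expected " + expected + " but got " + actual);
    }

    public static void main(String[] args) {
        TinyControl control = new TinyControl();
        control.call("setMovies");          // record phase
        control.call("getNameField");
        control.activate();
        control.call("setMovies");          // replay phase
        control.call("getNameField");
        control.verify();                   // passes: same call sequence
        System.out.println("verified");
    }
}
```

The real library also records return values and call counts, but the core idea is the same: the test script drives the object under test, and the mock checks the conversation afterwards.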
16. Fix up the add-related tests in TestSwingMovieListEditorView.
Answer 16:
public void testAdding() {
    String LOSTINSPACE = "Lost In Space";
    Movie lostInSpace = new Movie(LOSTINSPACE, Category.SCIFI, 3);
    movies.add(lostInSpace);
    JTextFieldOperator newMovieField =
        new JTextFieldOperator(mainWindow,
                               new NameBasedChooser("name"));
    newMovieField.enterText(LOSTINSPACE);
    JComboBoxOperator ratingCombo =
        new JComboBoxOperator(mainWindow,
                              new NameBasedChooser("rating"));
    ratingCombo.setSelectedIndex(4);
    JComboBoxOperator categoryCombo =
        new JComboBoxOperator(mainWindow,
                              new NameBasedChooser("category"));
    categoryCombo.setSelectedIndex(1);
    JButtonOperator addButton =
        new JButtonOperator(mainWindow,
                            new NameBasedChooser("add"));
    addButton.doClick();
    JListOperator movieList =
        new JListOperator(mainWindow,
                          new NameBasedChooser("movieList"));
    ListModel listModel = movieList.getModel();
    assertEquals("Movie list is the wrong size", movies.size(),
                 listModel.getSize());
    for (int i = 0; i < movies.size(); i++) {
        assertEquals("Movie list contains bad movie at index " + i,
                     movies.get(i),
                     listModel.getElementAt(i));
    }
}
public void testDuplicateCausingAdd() {
    JTextFieldOperator newMovieField =
        new JTextFieldOperator(mainWindow,
                               new NameBasedChooser("name"));
    newMovieField.enterText(starWars.getName());
    JComboBoxOperator ratingCombo =
        new JComboBoxOperator(mainWindow,
                              new NameBasedChooser("rating"));
    ratingCombo.setSelectedIndex(4);
    JComboBoxOperator categoryCombo =
        new JComboBoxOperator(mainWindow,
                              new NameBasedChooser("category"));
    categoryCombo.setSelectedIndex(1);
    JButtonOperator addButton =
        new JButtonOperator(mainWindow,
                            new NameBasedChooser("add"));
    addButton.pushNoBlock();
    checkDuplicateExceptionDialog();
    JListOperator movieList =
        new JListOperator(mainWindow,
                          new NameBasedChooser("movieList"));
    checkListIsUnchanged(movieList);
}
Answer 17:
1. Rename TestMovieListEditorFileOperations to TestMovieListEditorSaving.
2. Create TestMovieListEditorLoading, copying the loading-related parts of
TestMovieListEditorSaving:
public class TestMovieListEditorLoading
        extends CommonTestMovieListEditor {
    private Vector emptyMovies;
    private File inputFile;
    protected void setUp() throws Exception {
        super.setUp();
        inputFile = File.createTempFile("testSaving", "dat");
        inputFile.deleteOnExit();
        emptyMovies = new Vector();
    }
    public void testLoading() throws Exception {
        mockView.setMovies(emptyMovies);
        control.setVoidCallable(1);
        mockView.getFileToLoad();
        control.setReturnValue(inputFile, 1);
        mockView.setMovies(movies);
        control.setVoidCallable(1);
        control.activate();
        MovieListEditor editor = new MovieListEditor(new MovieList(),
                                                     mockView);
        assertTrue("Editor should have loaded.", editor.load());
        control.verify();
    }
    public static void main(String[] args) {
        junit.textui.TestRunner.run(TestMovieListEditorLoading.class);
    }
}
Answer 18:
Each test begins with these three lines, which need to be moved to the end of setUp() and removed from the two existing tests:
menubar = new JMenuBarOperator(mainWindow);
fileMenu = new JMenuOperator(menubar, "File");
fileMenu.push();
Answer 19:
public class TestMovieListSortingByName extends TestCase {
    private MovieList sortedList = null;
    private MovieNameComparator nameComparator = null;
    private Vector sortedMovies = null;
    private MovieList emptyList = null;
    protected void setUp() throws Exception {
        emptyList = new MovieList();
        sortedMovies = new Vector();
        sortedMovies.add(new Movie("A", Category.SCIFI, 5));
        sortedMovies.add(new Movie("B", Category.SCIFI, 4));
        sortedMovies.add(new Movie("C", Category.SCIFI, 3));
        sortedMovies.add(new Movie("D", Category.SCIFI, 2));
        sortedMovies.add(new Movie("E", Category.SCIFI, 1));
        sortedMovies.add(new Movie("F", Category.SCIFI, 0));
        sortedMovies.add(new Movie("G", Category.SCIFI, -1));
        sortedList = new MovieList();
        Iterator i = sortedMovies.iterator();
        while (i.hasNext()) {
            sortedList.add((Movie) i.next());
        }
        nameComparator = new MovieNameComparator();
    }
    //...
}
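The fixture assumes a MovieNameComparator that orders movies by name. As an illustration of the java.util.Comparator idiom it relies on, here is a minimal stand-in that compares plain Strings (a hypothetical NameComparator, not the book's class, which compares Movie names):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Vector;

// Sketch of a name-based Comparator, written in the pre-generics
// style the book's code uses.
class NameComparator implements Comparator {
    public int compare(Object a, Object b) {
        // The book's MovieNameComparator would compare
        // ((Movie) a).getName() instead.
        return ((String) a).compareTo((String) b);
    }

    public static void main(String[] args) {
        // Wrap in a Vector so the list is modifiable and sortable.
        List names = new Vector(Arrays.asList(new String[] {"C", "A", "B"}));
        Collections.sort(names, new NameComparator());
        System.out.println(names); // [A, B, C]
    }
}
```

Packaging the ordering in its own Comparator keeps MovieList free of sorting policy and lets the test fixture build the sorted expectation independently.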
20. Do the extraction of the fixture.
Answer 20:
public class TestMovieRatings extends TestCase {
    private Movie starTrek = null;
    protected void setUp() throws Exception {
        starTrek = new Movie("Star Trek", Category.SCIFI);
    }
    public void testUnratedConstructor() {
        try {
            starTrek.getRating();
            fail("Getting rating of an unrated Movie should throw UnratedMovieException");
        } catch (UnratedException ex) {
        }
    }
    public void testAddingOneRating() throws Exception {
        starTrek.addRating(3);
        assertEquals("Bad average rating of 1.", 3, starTrek.getRating());
    }
    public static void main(String[] args) {
        junit.textui.TestRunner.run(TestMovieRatings.class);
    }
}
Answer 21:
The expected saved-data strings don't include a rating count value. They need to be updated to:
public class TestMovieListEditorSaving extends CommonTestMovieListEditor {
    protected void setUp() throws Exception {
        super.setUp();
        expected = "Star Wars|Science Fiction|5|1\n" +
                   "Star Trek|Science Fiction|3|1\n" +
                   "Stargate|Science Fiction|0|0\n" +
                   "The Shining|Horror|2|1\n";
        extendedExpected = expected +
                           "The Fellowship of The Ring|Fantasy|5|1\n";
    }
    //...
}
public class TestSwingMovieListEditorFileOperations extends TestCase {
    protected void setUp() throws Exception {
        //...
        savedText = "Star Wars|Science Fiction|5|1\n" +
                    "Star Trek|Science Fiction|3|1\n" +
                    "Stargate|Science Fiction|0|0\n";
        extendedSavedText = savedText + "The Shining|Horror|2|1\n";
    }
}
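The saved-data strings use a pipe-delimited record format (name|category|rating|count). One easy mistake when working with this format in Java is forgetting that String.split() takes a regular expression, so the pipe, which is the regex alternation operator, must be escaped. A quick sketch:

```java
// Demonstrates parsing one record of the pipe-delimited format
// used by the expected strings above.
public class RecordDemo {
    public static void main(String[] args) {
        String record = "Star Wars|Science Fiction|5|1";
        // '|' is regex alternation, so it must be escaped;
        // record.split("|") would split between every character.
        String[] fields = record.split("\\|");
        System.out.println(fields.length);  // 4
        System.out.println(fields[0]);      // Star Wars
        System.out.println(fields[3]);      // 1 (the rating count)
    }
}
```

The fourth field is exactly the rating count this answer adds, which is why every expected string gained a trailing |1 or |0.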
Answer 22:
public class TestMovieListEditorSaving extends XMLTestCase {
    protected MockControl control = null;
    protected MovieListEditorView mockView = null;
    protected Vector movies = null;
    protected Movie starWars = null;
    protected Movie starTrek = null;
    protected Movie stargate = null;
    protected Movie theShining = null;
    protected MovieList movieList = null;
    private String expectedPrefix;
    private String expected;
    private File outputFile;
    private Movie fotr;
    private String extendedExpected;
    private Vector extendedMovies;
    public TestMovieListEditorSaving(String name) {
        super(name);
    }
    protected void setUp() throws Exception {
        String parser = "org.apache.xerces.jaxp.DocumentBuilderFactoryImpl";
        XMLUnit.setControlParser(parser);
        XMLUnit.setTestParser(parser);
        starWars = new Movie("Star Wars", Category.SCIFI, 5);
        starTrek = new Movie("Star Trek", Category.SCIFI, 3);
        stargate = new Movie("Stargate", Category.SCIFI, -1);
        theShining = new Movie("The Shining", Category.HORROR, 2);
        movies = new Vector();
        movies.add(starWars);
        movies.add(starTrek);
        movies.add(stargate);
        movies.add(theShining);
        movieList = new MovieList();
        movieList.add(starWars);
        movieList.add(starTrek);
        movieList.add(stargate);
        movieList.add(theShining);
        fotr = new Movie("The Fellowship of The Ring", Category.FANTASY, 5);
        extendedExpected = expectedPrefix +
            "<movie name=\"The Fellowship of The Ring\" " +
            "category=\"Fantasy\">" +
            "<ratings>" +
            "<rating value=\"5\" " +
            "source=\"Anonymous\"/>" +
            "</ratings>" +
            "</movie></movielist>";
        extendedMovies = new Vector(movies);
        extendedMovies.add(fotr);
        control = EasyMock.controlFor(MovieListEditorView.class);
        mockView = (MovieListEditorView) control.getMock();
        mockView.setEditor(null);
        control.setDefaultVoidCallable();
    }
    public void testSaving() throws Exception {
        mockView.setMovies(movies);
        control.setVoidCallable(1);
        mockView.getFileToSave();
        control.setReturnValue(outputFile, 1);
        mockView.getNameField();
        control.setReturnValue(fotr.getName(), 1);
        mockView.getCategoryField();
        control.setReturnValue(Category.FANTASY, 1);
        mockView.getRatingField();
        control.setReturnValue(fotr.getRating() + 1, 1);
        mockView.setMovies(extendedMovies);
        control.setVoidCallable(1);
        control.activate();
        MovieListEditor editor = new MovieListEditor(movieList, mockView);
        assertTrue("Editor should have saved", editor.saveAs());
        assertXMLEqual("SaveAs-ed file has wrong contents",
                       expected,
                       contentsOf(outputFile));
        editor.add();
        assertTrue("Editor should have resaved", editor.save());
        assertXMLEqual("Saved file has bad contents.",
                       extendedExpected,
                       contentsOf(outputFile));
        control.verify();
    }
    private String contentsOf(File aFile) throws IOException {
        FileInputStream fstream = new FileInputStream(aFile);
        int size = fstream.available();
        byte[] buffer = new byte[size];
        fstream.read(buffer);
        return new String(buffer);
    }
    public void testCancelledSaving() throws Exception {
        mockView.setMovies(movies);
        control.setVoidCallable(1);
        mockView.getFileToSave();
        control.setReturnValue(null, 1);
        control.activate();
        MovieListEditor editor = new MovieListEditor(movieList, mockView);
        assertFalse("Editor should not have saved.", editor.saveAs());
        control.verify();
    }
    public static void main(String[] args) {
        junit.textui.TestRunner.run(TestMovieListEditorSaving.class);
    }
}
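The contentsOf() helper above sizes its buffer with FileInputStream.available(), which happens to work for small local files but is not guaranteed to return the full remaining length in general. A more defensive variant, shown here as a hedged alternative (using the later try-with-resources idiom for brevity, so it needs Java 7+), reads in a loop until end of file:

```java
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;

public class FileContents {
    // Read a file fully, looping until EOF instead of trusting available().
    static String contentsOf(File aFile) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (Reader reader = new FileReader(aFile)) {
            char[] buf = new char[4096];
            int n;
            while ((n = reader.read(buf)) != -1) {
                sb.append(buf, 0, n);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("demo", ".txt");
        f.deleteOnExit();
        Writer w = new FileWriter(f);
        w.write("hello");
        w.close();
        System.out.println(contentsOf(f)); // prints "hello"
    }
}
```

For the temp files these tests write, either version passes; the loop simply removes a latent assumption.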
Answer 23:
public boolean save() throws IOException {
    if (outputFile == null) {
        return false;
    }
    FileWriter writer = new FileWriter(outputFile);
    new XMLMovieListWriter(writer).write(movies);
    writer.close();
    return true;
}
BIBLIOGRAPHY
[1] Scott W. Ambler. The Object Primer: The Application Developer's Guide to Object Orientation.
Cambridge University Press, New York, 2nd edition, 2001. www.ambysoft.com/theObjectPrimer.html.
[2] Scott W. Ambler. Agile Modeling: Effective Practices for Extreme Programming and the Unified
Process. John Wiley & Sons Publishing, New York, 2002.
[3] Scott W. Ambler. Enterprise unified process white paper. Technical report, Ronin International, 2002.
www.ronin-intl.com/publications/unifiedProcess.htm.
[4] Scott W. Ambler. The Elements of UML Style. Cambridge University Press, New York, 2003.
www.ambysoft.com/elementsUMLStyle.html.
[5] Dave Astels. Refactoring with UML. In Proceedings of XP2002: 3rd International Conference on
eXtreme Programming and Flexible Processes in Software Engineering, pages 67–70, 2002. Available at
ciclamino.dibe.unige.it/xp2002/atti.
[6] David Astels, Granville Miller, and Miroslav Novak. A Practical Guide to eXtreme Programming. The
Coad Series. Prentice Hall, 2002. ISBN 0-13-067482-6.
[8] Kent Beck. Extreme Programming Explained: Embrace Change. The XP Series. Addison Wesley
Longman, 2000. ISBN 0-201-61641-6.
[9] Kent Beck. Test-Driven Development: By Example. Addison Wesley Longman, 2002. ISBN 0-321-
14653-0.
[10] Joshua Bloch. Effective Java Programming Language Guide. Addison Wesley Longman, 2001. ISBN
0-201-31005-8.
[11] Marko Boger, Thorsten Strum, and Per Fragemann. Refactoring browser for UML. In Proceedings of
XP2002: 3rd International Conference on eXtreme Programming and Flexible Processes in Software
Engineering, pages 77–81, 2002. Available at ciclamino.dibe.unige.it/xp2002/atti.
[12] Peter Coad, Mark Mayfield, and Jonathan Kern. Java Design: Building Better Apps and Applets.
Yourdon Press Computing Series. Prentice Hall, 2nd edition, 1999. ISBN 0-139-11181-6.
[13] Alistair Cockburn. Agile Software Development. Addison Wesley Longman, October 2001.
[14] Michael Feathers. The humble dialog box. Published online, 2002. Available at
www.objectmentor.com/resources/articles.
[16] Martin Fowler. Refactoring: Improving the Design of Existing Code. Addison Wesley Longman, 1999.
ISBN 0-201-48567-2.
[17] Martin Fowler. Is design dead? In Extreme Programming Examined, chapter 1, pages 3-17. Addison
Wesley Longman, 2001. Also available at www.martinfowler.com/articles/designDead.html.
[18] Tammo Freese. Easymock: Dynamic Objects for jUnit. In Proceedings of XP2002: 3rd International
Conference on eXtreme Programming and Flexible Processes in Software Engineering, pages 2-5, 2002.
Available at ciclamino.dibe.unige.it/xp2002/atti.
[19] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Abstraction and Reuse in Object-
Oriented Designs. In O. Nierstrasz, editor, Proceedings of ECOOP'93, Berlin, 1993. Springer-Verlag.
[20] Erich Gamma and Kent Beck. jUnit: A cook's tour. Java Report, May 1999. Available at
junit.sourceforge.net/doc/cookstour/cookstour.htm.
[23] Peter Haggar. Practical Java Programming Language Guide. Addison Wesley Longman, February
2000.
[24] Jim Highsmith. Agile Software Development Ecosystems. Addison Wesley Longman, March 2002.
ISBN: 0-201-76043-6.
[25] Andrew Hunt and David Thomas. The Pragmatic Programmer. Addison Wesley Longman, 2000. ISBN
0-201-61622-X.
[27] Ron Jeffries, Ann Anderson, and Chet Hendrickson. Extreme Programming Installed. The XP Series.
Addison Wesley Longman, 2001. ISBN 0-201-70842-6.
[28] Joshua Kerievsky. Patterns and XP. In Extreme Programming Examined, chapter 13, pages 207-220.
Addison Wesley Longman, 2001.
[29] Joshua Kerievsky. Refactoring to Patterns. Not yet published, 2002. Drafts available online at
www.industriallogic.com/xp/refactoring.
[30] P. Kruchten. The Rational Unified Process: An Introduction. Addison Wesley Longman, 2nd edition,
2000.
[31] Tim Mackinnon. xUnit testing—a plea for assertEquals. In Proceedings of XP2002: 3rd International
Conference on eXtreme Programming and Flexible Processes in Software Engineering, pages 170-171,
2002. Available at ciclamino.dibe.unige.it/xp2002/atti.
[32] Tim Mackinnon, Steve Freeman, and Philip Craig. Endo-testing: Unit testing with mock objects. In
Extreme Programming Examined, chapter 17, pages 287-301. Addison Wesley Longman, 2001. Available
at www.mockobjects.com/endotesting.html.
[33] Robert C. Martin. Agile Software Development: Principles, Patterns, and Practices. Prentice Hall,
2003. ISBN 0-13-597444-5.
[34] Pete McBreen. Software Craftsmanship. Addison Wesley Longman, 2002. ISBN 0-201-73386-2.
[35] Ivan Moore. Jester—A junit test tester. In Proceedings of XP2001: 2nd International Conference on
eXtreme Programming and Flexible Processes in Software Engineering, pages 84–87, 2001.
[36] Stephen R. Palmer and John M. Felsing. A Practical Guide to Feature-Driven Development. Prentice
Hall, 2002.
[37] Giancarlo Succi and Michele Marchesi, editors. Extreme Programming Examined. The XP Series.
Addison Wesley Longman, 2001. This is a collection of papers from the XP2000 proceedings. ISBN 0-
201-71040-4.
[38] Giancarlo Succi, Michele Marchesi, Witold Pedrycz, and Laurie Williams. Preliminary analysis of the
effects of pair programming on job satisfaction. In Proceedings of XP2002: 3rd International Conference
on eXtreme Programming and Flexible Processes in Software Engineering, pages 212-215, 2002.
Available at ciclamino.dibe.unige.it/xp2002/atti.
[39] Arie van Deursen and Leon Moonen. The video store revisited—thoughts on refactoring and testing.
In Proceedings of XP2002: 3rd International Conference on eXtreme Programming and Flexible Processes
in Software Engineering, pages 71–76, 2002. Available at ciclamino.dibe.unige.it/xp2002/atti.
[40] Arie van Deursen, Leon Moonen, Alex van den Bergh, and Gerard Kok. Refactoring test code. In
Proceedings of XP2001: 2nd International Conference on eXtreme Programming and Flexible Processes in
Software Engineering, pages 92–95, 2001. Available at www.cwi.nl/~leon/papers/xp2001/xp2001.pdf.
[41] William C. Wake. Refactoring Workbook. Addison Wesley Longman, 2003. ISBN 0-321-10929-5.
[ SYMBOL] [ A ] [ B ] [ C ] [ D ] [ E] [ F ] [ G ] [ H ] [ I ] [ J ] [ K ] [ L ] [ M ] [ N ] [ O ] [ P ] [ Q ] [ R ] [ S ] [ T] [ U ] [ V ]
[ W] [ X ]
AbstractExpectation class
AbstractExpectationCollection class
ActiveTestSuite
add() 2nd 3rd 4th
Adding a movie 2nd
logical layer request for required data and updating of movie list 2nd
text field for movie name and add button 2nd
addRating() 2nd 3rd 4th 5th
Agile Modeling (AM), and test-driven development
ALL category 2nd
Ant integration, Clover
ANT task, running tasks as 2nd
ArrayList
Assert superclass 2nd
assertEquals() 2nd
assertFalse()
assertNotNull()
assertNotSame() 2nd
assertNull()
assertSame() 2nd
assertTrue()
fail()
Assert, starting with
assertCollectionsEquals() 2nd
assertEquals for Object()
assertEquals()
assertExcludes
assertFileEquals() 2nd 3rd
assertIncludes
assertInstanceOf()
AssertionFailedError 2nd
Assertions 2nd 3rd 4th
AssertMo
assertNotXPathExists()
assertStartsWith
assertXMLEqual
assertXMLEquals()
assertXMLIdentical()
assertXPathExists()
assertXPathsEqual()
assertXPathsNotEqual()
Associated tests, eXtreme Programming
awtSleep(), JFCTestCase
awtSleep(long sleepTime), JFCTestCase
[ SYMBOL] [ A ] [ B ] [ C ] [ D ] [ E] [ F ] [ G ] [ H ] [ I ] [ J ] [ K ] [ L ] [ M ] [ N ] [ O ] [ P ] [ Q ] [ R ] [ S ] [ T] [ U ] [ V ]
[ W] [ X ]
Bacon, Tim
BaseTestCase 2nd
Beck, Kent 2nd
Big Design Up Front (BDUF)
Block assertions, SUnit
Boilerplate markup
BookList class 2nd 3rd 4th
Boolean assertions, SUnit
Bootstrap technique
Boundary conditions, testing early
Bowler, Mike
Brute force, and GUI test-first 2nd 3rd 4th 5th 6th
[ SYMBOL] [ A ] [ B ] [ C] [ D ] [ E] [ F ] [ G ] [ H ] [ I ] [ J ] [ K ] [ L ] [ M ] [ N ] [ O ] [ P ] [ Q ] [ R ] [ S ] [ T] [ U ] [ V ]
[ W] [ X ]
calculate()
calculateAverageRating()
calculateTotalRating()
categories 2nd
adding a category 2nd
adding a selection of category 2nd
adding support for a category to Movie
constraining a fixed set of alternatives
extending testSelecting
setUp(), updating
showing a category in GUI 2nd
testing for other categories 2nd
Category class
categorySublist()
Clark, Mike
Class names
for test class
Class-level report, Clover 2nd
cleanUp(), JFCTestHelper
Clover 2nd
Ant integration
class-level report 2nd
defined
package-level report 2nd
project-level report 2nd
user guide
Code
and test-driven development 2nd 3rd
and testing 2nd
duplication 2nd 3rd
example of
removing 2nd
Code smells 2nd 3rd 4th 5th 6th 7th 8th
and refactoring
comments
concept
data classes 2nd 3rd
defined
duplicated code
examples of 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th 16th
explaining variable, introducing
Extract Class 2nd
Extract Interface 2nd 3rd 4th
Extract Method 2nd 3rd
form template method 2nd 3rd
replace conditional with polymorphism
replace constructor with factory method 2nd
replace inheritance with delegation 2nd 3rd
replace magic number with symbolic constant 2nd
replace nested conditional with guard clauses 2nd
replace type code with subclasses
in source code
inappropriate intimacy 2nd
large classes
lazy classes
long method
origin of term
Shotgun Surgery 2nd
switch statements 2nd
to patterns
Coding hat
CommonTestMovieListEditor
Conditional, replacing with polymorphism
Constructor, replacing with factory method 2nd
contains()
CountingNodeTester class
CruiseControl
Customer Tests
CustomMovieListRenderer
Daedalos JUnit extensions 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th
DatabaseChecker 2nd 3rd 4th
defined
Extensible TestCase
programmer test implementation using test resources 2nd 3rd 4th 5th
test resources, managing 2nd
TRTestRunner
Data classes 2nd 3rd
DatabaseChecker 2nd 3rd 4th
Databases/network resources, avoiding testing against
Debugging 2nd
Delegation, replacing inheritance with 2nd 3rd
delete()
Design, project 2nd 3rd
DetailedIllegalArgumentException
DetailedNullPointerException 2nd
DifferenceConstants class
DifferenceEngine class
DifferenceListener
disposeWindow(), JFCTestHelper
DocumentTraversal interface
Dollery, Bryan
Duplicate movie message
DuplicateMovieException 2nd 3rd 4th 5th
Duplication of code 2nd 3rd 4th
example of
removing 2nd
Gamma, Erich
Gargoyle Software JUnit extensions 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th
BaseTestCase 2nd
detailed exceptions 2nd 3rd
EqualsTester 2nd 3rd 4th 5th
EventCatcher 2nd
OrderedTestSuite 2nd
RecursiveTestSuite 2nd
getAverageRating() 2nd 3rd
getCategory() 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th 16th 17th 18th 19th 20th 21st
getCategoryField()
getDescription()
getMessageFromJDialog(), JFCTestHelper
getName() 2nd
getNameField()
getNewName() 2nd 3rd 4th 5th 6th
getRating()
getRating(int)
getRatingField()
getRatingReviewField()
getRawRating() 2nd
getResultDocument()
getResultString()
getShowingDialogs(), JFCTestHelper
getShowingJFileChooser(), JFCTestHelper
getShowingJFileChoosers(), JFCTestHelper
getStringColumn(tableName, columnName, whereClause)
getTestResource()
getWindow(), JFCTestHelper
getWindows(), JFCTestHelper
Graphical User Interface (GUI) 2nd
Guard clauses, replacing nested conditional with 2nd
GUI test-first
brute force 2nd 3rd 4th 5th 6th
defined
developing 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th 16th 17th 18th 19th 20th 21st 22nd 23rd 24th 25th 26th
example
tackling 2nd 3rd 4th 5th 6th
Jemmy 2nd 3rd 4th 5th
example 2nd 3rd
how it works 2nd 3rd
JButtonOperator
JFCUnit 2nd 3rd 4th 5th 6th 7th 8th
API 2nd 3rd 4th 5th 6th 7th 8th
junit.extensions.jfcunit.JFCTestCase
junit.extensions.jfcunit.JFCTestHelper
ultra-thin GUI 2nd 3rd 4th 5th 6th 7th 8th
Haggar, Peter
hashCode()
HashSet
hasRating() 2nd
HTMLDocumentBuilder class
Hunt, Andrew
Kerievsky, Joshua
oneItemList instance
OrderedTestSuite 2nd
Quality
Clover results
comments on results
debugging 2nd
general notes 2nd
Jester results
NoUnit results 2nd
RatingRenderer
testing for 2nd 3rd 4th
tests, list of 2nd 3rd 4th 5th 6th 7th 8th
XMLMovieListWriter 2nd
Rating.equals()
RatingRenderer 2nd
Ratings 2nd
editing 2nd 3rd 4th 5th 6th
editing the rating 2nd
showing in GUI 2nd 3rd 4th 5th 6th
showing rating in GUI
single rating, adding
RecursiveTestSuite 2nd
Refactoring 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th 16th 17th 18th 19th 20th 21st 22nd 23rd 24th 25th 26th 27th 28th
and Eclipse
and IDEA
and Test-Driven Development
code smells 2nd 3rd 4th 5th 6th 7th 8th
defined
duplication 2nd 3rd
example of
redundantly maintained code found at run time
removing 2nd
unclear intent
need for 2nd
procedure 2nd
situations requiring 2nd
test code
Refactoring hat
RefactorIt
RegisterTestResource
RepeatedTest 2nd
Replace Conditional with Polymorphism refactoring
Replace Constructor with Factory Method refactoring 2nd
Replace Inheritance with Delegation refactoring 2nd 3rd
Replace Magic Number with Symbolic Constant refactoring 2nd
Replace Nested Conditional with Guard Clauses refactoring 2nd
Replace Type Code with Subclasses refactoring
resetSleepTime(), JFCTestCase
retrospective 2nd 3rd
design 2nd 3rd
lack of cohesion of methods 2 (LOCOM2) 2nd
number of operations (NOO) 2nd
test vs. application 2nd
testing quality 2nd 3rd 4th
weighted methods per class 2 (WMPC2)
ReturnObjectList
ReturnObjectList class
Reviews 2nd
adding 2nd
adding to ratings
displaying 2nd
loading 2nd
saving 2nd
rowCount(tableName)
rowCount(tableName, whereClause)
rowExists(tableName, whereClause) 2nd
RubyUnit 2nd 3rd 4th 5th 6th
example 2nd 3rd 4th 5th
framework
Running tests
JUnit 2nd 3rd 4th
as an ANT task 2nd
errors/failures
runTestSilently()
Runtime structure of tests, sample of
sendKeyAction(), JFCTestHelper
sendString(), JFCTestHelper
setCategoryField()
setDefaultVoidCallable()
setExpected()
setExpectedResult()
setMovieNames() 2nd 3rd 4th
setMovies() 2nd 3rd
setNewName() 2nd 3rd
setNewRating()
setRating() 2nd 3rd 4th
setRatingField()
setReturnValue()
setSleepTime()
setSleepTime(long time), JFCTestCase
setUp() 2nd 3rd 4th 5th 6th 7th 8th 9th 10th
updating 2nd
SetupExample 2nd
setVoidCallable()
Shotgun Surgery 2nd
Simplicity
testing simple stuff first
Single rating, adding 2nd 3rd
Single-rating field, removing
size()
sleep(long delay), JFCTestCase
Smalltalk, and refactoring
Sorting 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th 16th 17th 18th 19th 20th 21st 22nd 23rd 24th 25th 26th 27th 28th 29th 30th 31st 32nd 33rd 34th 35th 36th 37th 38th 39th 40th 41st
adding a rating in the GUI 2nd 3rd 4th 5th 6th
adding support for 2nd 3rd 4th 5th 6th 7th 8th
adding way to sort to GUI 2nd
asking MovieListEditor for sorted lists 2nd
comparing movies 2nd
of a MovieList 2nd
rating source 2nd 3rd 4th 5th 6th 7th 8th 9th
removing the single-rating field
revised persistence 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th
showing in GUI 2nd 3rd 4th 5th 6th
sortUsing() 2nd
Source code, code smells in
Standard extensions 2nd 3rd 4th 5th 6th
ActiveTestSuite
ExceptionTestCase 2nd
RepeatedTest 2nd
TestDecorator 2nd
TestSetup 2nd 3rd
start()
stop()
Stubs
Subclasses, replacing type code with
SUnit 2nd 3rd 4th 5th 6th
block assertions
boolean assertions
example 2nd 3rd 4th 5th
failure
framework
Support movie name editing
constructing a Movie with a null name
constructing a Movie with an empty name
movie name change
renaming a movie to a null name
renaming a movie to an empty name
Swing GUI
SwingMovieListEditorView 2nd 3rd
Switch statements, and code smells 2nd
Symbolic constant, replacing magic number with 2nd
System.out and System.err, avoiding in tests
Validator class
Van Deursen, Arie
VBUnit 2nd 3rd 4th 5th 6th 7th 8th 9th
example 2nd 3rd 4th 5th 6th 7th 8th 9th
framework
getting started with 2nd
MovieList functionality, creating 2nd 3rd 4th 5th 6th 7th 8th
Verifiable class
verify()
Vision statement
Wake, William
Weighted methods per class 2 (WMPC2)
writeRating()
Xalan
XmlCalculator 2nd 3rd 4th
XMLMovieListReader
XMLMovieListWriter 2nd
XMLTestCase class 2nd 3rd
XMLUnit 2nd 3rd 4th 5th 6th 7th 8th 9th
configuring
defined
glossary
quick tour of 2nd
validation tests
walking the DOM tree 2nd
XML comparison tests, writing 2nd 3rd
XML transformations, comparing
XPath tests 2nd 3rd