Notes
James Aspnes
2018-02-21T21:45:23-0500
Contents
1 Course administration
1.1 Overview
1.1.1 License
1.1.2 Resources
1.1.3 Documentation
1.1.4 Questions and comments
1.2 Lecture schedule
1.3 Syllabus
1.3.1 On-line course information
1.3.2 Meeting times
1.3.3 Synopsis of the course
1.3.4 Prerequisites
1.3.5 Textbook
1.3.6 Course requirements
1.3.7 Staff
1.3.7.1 Instructor
1.3.7.2 Teaching Fellow
1.3.7.3 Peer tutors
1.3.8 Use of outside help
1.3.9 Clarifications for homework assignments
1.3.10 Late assignments
1.4 Introduction
1.4.1 Why should you learn to program in C?
1.4.2 Why should you learn about data structures and programming techniques?
2 The Zoo
2.1 Getting an account
2.2 Getting into the room
2.3 Remote use
2.3.1 Terminal access
2.3.2 GUI access
2.3.3 GUI access using FastX
2.4 Developing on your own machine
2.4.1 Linux
2.4.2 OSX
2.4.3 Windows
2.5 How to compile and run programs
2.5.1 Creating the program
2.5.2 Compiling and running a program
2.5.3 Some notes on what the program does
3.4.4.1 Compilation flags
3.4.4.2 Automated testing
3.4.4.3 Examples of some common valgrind errors
3.4.4.3.1 Uninitialized values
3.4.4.3.2 Bytes definitely lost
3.4.4.3.3 Invalid write or read operations
3.4.5 Not recommended: debugging output
3.5 Performance tuning
3.5.1 Timing under Linux
3.5.2 Profiling with valgrind
3.5.3 Profiling with gprof
3.5.3.1 Effect of optimization during compilation
3.6 Version control
3.6.1 Setting up Git
3.6.2 Editing files
3.6.3 Renaming files
3.6.4 Adding and removing files
3.6.5 Recovering files from the repository
3.6.6 Undoing bad commits
3.6.7 Looking at old versions
3.6.8 More information about Git
3.7 Submitting assignments
4.2.3.8 Non-finite numbers in C
4.2.3.9 The math library
4.3 Operator precedence
4.4 Programming style
4.5 Variables
4.5.1 Memory
4.5.2 Variables as names
4.5.2.1 Variable declarations
4.5.2.2 Variable names
4.5.3 Using variables
4.5.4 Initialization
4.5.5 Storage class qualifiers
4.5.5.1 Scope and extent
4.5.5.1.1 Additional qualifiers for global variables
4.5.6 Marking variables as constant
4.5.6.1 Pointers to const
4.6 Input and output
4.6.1 Character streams
4.6.2 Reading and writing single characters
4.6.3 Formatted I/O
4.6.4 Rolling your own I/O routines
4.6.5 File I/O
4.7 Statements and control structures
4.7.1 Simple statements
4.7.2 Compound statements
4.7.2.1 Conditionals
4.7.2.2 Loops
4.7.2.2.1 The while loop
4.7.2.2.2 The do..while loop
4.7.2.2.3 The for loop
4.7.2.2.4 Loops with break, continue, and goto
4.7.2.3 Choosing where to put a loop exit
4.8 Functions
4.8.1 Function definitions
4.8.2 When to write a function
4.8.3 Calling a function
4.8.4 The return statement
4.8.5 Function declarations and modules
4.8.6 Static functions
4.8.7 Local variables
4.8.8 Mechanics of function calls
4.9 Pointers
4.9.1 Memory and addresses
4.9.2 Pointer variables
4.9.2.1 Declaring a pointer variable
4.9.2.2 Assigning to pointer variables
4.9.2.3 Using a pointer
4.9.2.4 Printing pointers
4.9.3 The null pointer
4.9.4 Pointers and functions
4.9.5 Pointer arithmetic and arrays
4.9.5.1 Arrays
4.9.5.2 Arrays and functions
4.9.5.3 Multidimensional arrays
4.9.5.4 Variable-length arrays
4.9.6 Pointers to void
4.9.6.1 Alignment
4.9.7 Run-time storage allocation using malloc
4.9.8 Function pointers
4.9.8.1 Function pointer declarations
4.9.8.2 Callbacks
4.9.8.3 Dispatch tables
4.9.9 The restrict keyword
4.10 Strings
4.10.1 C strings
4.10.2 String constants
4.10.2.1 String encodings
4.10.3 String buffers
4.10.3.1 String buffers and the perils of gets
4.10.4 Operations on strings
4.10.5 Finding the length of a string
4.10.5.1 The strlen tarpit
4.10.6 Comparing strings
4.10.7 Formatted output to strings
4.10.8 Dynamic allocation of strings
4.10.9 Command-line arguments
4.11 Structured data types
4.11.1 Structs
4.11.1.1 Operations on structs
4.11.1.2 Layout in memory
4.11.1.3 Bit fields
4.11.2 Unions
4.11.3 Enums
4.11.3.1 Specifying particular values
4.11.3.2 What most people do
4.11.3.3 Using enum with union
4.12 Type aliases using typedef
4.12.1 Opaque structs
4.13 Macros
4.13.1 Macros with arguments
4.13.1.1 Multiple arguments
4.13.1.2 Perils of repeating arguments
4.13.1.3 Variable-length argument lists
4.13.1.4 Macros vs. inline functions
4.13.2 Macros that include other macros
4.13.3 More specialized macros
4.13.3.1 Multiple expressions in a macro
4.13.3.2 Non-syntactic macros
4.13.3.3 Multiple statements in one macro
4.13.3.4 String expansion
4.13.3.5 Big macros
4.13.4 Conditional compilation
4.13.5 Defining macros on the command line
4.13.6 The #if directive
4.13.7 Debugging macro expansions
4.13.8 Can a macro call a preprocessor command?
5.4.4 Choosing a hash function
5.4.4.1 Division method
5.4.4.2 Multiplication method
5.4.4.3 Universal hashing
5.4.5 Maintaining a constant load factor
5.4.6 Examples
5.4.6.1 A low-overhead hash table using open addressing
5.4.6.2 A string to string dictionary using chaining
5.5 Generic containers
5.5.1 Generic dictionary: interface
5.5.2 Generic dictionary: implementation
5.6 Recursion
5.6.1 Example of recursion in C
5.6.2 Common problems with recursion
5.6.2.1 Omitting the base case
5.6.2.2 Blowing out the stack
5.6.2.3 Failure to make progress
5.6.3 Tail-recursion and iteration
5.6.3.1 Binary search: recursive and iterative versions
5.6.4 Mergesort: a recursive sorting algorithm
5.6.5 Asymptotic complexity of recursive functions
5.7 Binary trees
5.7.1 Tree basics
5.7.2 Binary tree implementations
5.7.3 The canonical binary tree algorithm
5.7.4 Nodes vs leaves
5.7.5 Special classes of binary trees
5.8 Heaps
5.8.1 Priority queues
5.8.2 Expensive implementations of priority queues
5.8.3 Structure of a heap
5.8.4 Packed heaps
5.8.5 Bottom-up heapification
5.8.6 Heapsort
5.8.7 More information
5.9 Binary search trees
5.9.1 Searching for a node
5.9.2 Inserting a new node
5.9.3 Deleting a node
5.9.4 Costs
5.10 Augmented trees
5.10.1 Applications
5.11 Balanced trees
5.11.1 Tree rotations
5.11.2 AVL trees
5.11.2.1 Sample implementation
5.11.3 2–3 trees
5.11.4 Red-black trees
5.11.5 B-trees
5.11.6 Splay trees
5.11.6.1 How splaying works
5.11.6.2 Analysis
5.11.6.3 Other operations
5.11.6.4 Top-down splaying
5.11.6.5 An implementation
5.11.6.6 More information
5.11.7 Scapegoat trees
5.11.8 Skip lists
5.11.9 Implementations
5.12 Graphs
5.12.1 Basic definitions
5.12.2 Why graphs are useful
5.12.3 Operations on graphs
5.12.4 Representations of graphs
5.12.4.1 Adjacency matrices
5.12.4.2 Adjacency lists
5.12.4.2.1 An implementation
5.12.4.3 Implicit representations
5.12.5 Searching for paths in a graph
5.12.5.1 Implementation of depth-first and breadth-first search
5.12.5.2 Combined implementation of depth-first and breadth-first search
5.12.5.3 Other variations on the basic algorithm
5.13 Dynamic programming
5.13.1 Memoization
5.13.2 Dynamic programming
5.13.2.1 More examples
5.13.2.1.1 Longest increasing subsequence
5.13.2.1.2 All-pairs shortest paths
5.13.2.1.3 Longest common subsequence
5.14 Randomization
5.14.1 Generating random values in C
5.14.1.1 The rand function from the standard library
5.14.1.1.1 Supplying a seed with srand
5.14.1.2 Better pseudorandom number generators
5.14.1.3 Random numbers without the pseudo
5.14.1.4 Range issues
5.14.2 Randomized algorithms
5.14.2.1 Randomized search
5.14.2.2 Quickselect and quicksort
5.14.3 Randomized data structures
5.14.3.1 Skip lists
5.14.3.2 Universal hash families
5.15 String processing
5.15.1 Radix search
5.15.1.1 Tries
5.15.1.1.1 Searching a trie
5.15.1.1.2 Inserting a new element into a trie
5.15.1.1.3 Implementation
5.15.1.2 Patricia trees
5.15.1.3 Ternary search trees
5.15.1.4 More information
5.15.2 Radix sort
5.15.2.1 Bucket sort
5.15.2.2 Classic LSB radix sort
5.15.2.3 MSB radix sort
5.15.2.3.1 Issues with recursion depth
5.15.2.3.2 Implementing the buckets
5.15.2.3.3 Further optimization
5.15.2.3.4 Sample implementation
6.4.8.1 Storage allocation inside objects
6.4.9 Standard library
6.4.10 Things we haven’t talked about
6.5 Testing during development
6.5.1 Unit tests
6.5.1.1 What to put in the test code
6.5.1.2 Example
6.5.2 Test harnesses
6.5.2.1 Module interface
6.5.2.1.1 stack.h
6.5.2.2 Test code
6.5.2.2.1 test-stack.c
6.5.2.3 Makefile
6.5.2.3.1 Makefile
6.5.3 Stub implementation
6.5.3.1 stack.c
6.5.4 Bounded-space implementation
6.5.4.1 stack.c
6.5.5 First fix
6.5.6 Final version
6.5.6.1 stack.c
6.5.7 Moral
6.5.8 Appendix: Test macros
6.6 Algorithm design techniques
6.6.1 Basic principles of algorithm design
6.6.2 Specific techniques
6.6.3 Example: Finding the maximum
6.6.4 Example: Sorting
6.7 Bit manipulation
6.8 Persistence
6.8.1 A simple solution using text files
6.8.2 Using a binary file
6.8.3 A version that updates the file in place
6.8.4 An even better version using mmap
6.8.5 Concurrency and fault-tolerance issues: ACIDity
8 Assignments
8.1 Assignment 1, due Thursday 2018-02-08, at 11:00pm
8.1.1 Bureaucratic part.
8.1.2 Pig Esperanto
8.1.3 Your task
8.1.4 Testing your assignment
8.1.5 Submitting your assignment
8.1.6 Sample solution
8.2 Assignment 2, due Thursday 2018-02-15, at 11:00pm
8.2.1 Your task
8.2.2 Submitting your assignment
8.2.3 Sample solution
8.3 Assignment 3, due Thursday 2018-02-22, at 11:00pm
8.3.1 Your task
8.3.2 Submitting your assignment
8.3.3 Clarifications added after the original assignment was posted
8.4 Assignment 4, due Thursday 2018-03-01, at 11:00pm
8.4.1 Your task
8.4.2 Interface
8.4.3 Submitting your assignment
8.5 Assignment 5, due Thursday 2018-03-29, at 11:00pm
8.6 Assignment 6, due Thursday 2018-04-05, at 11:00pm
8.7 Assignment 7, due Thursday 2018-04-12, at 11:00pm
8.8 Assignment 8, due Thursday 2018-04-19, at 11:00pm
9.4.3.3 General
9.4.4 Sample solution
9.5 Assignment 5, due Wednesday 2015-02-25, at 11:00pm
9.5.1 Build a Turing machine!
9.5.2 Example
9.5.3 Your task
9.5.4 Submitting your assignment
9.5.5 Sample solution
9.6 Assignment 6, due Wednesday 2015-03-25, at 11:00pm
9.6.1 Sinking ships
9.6.2 Things to watch out for
9.6.3 The testShips program
9.6.4 Submitting your assignment
9.6.5 Provided source files
9.6.6 Sample solution
9.7 Assignment 7, due Wednesday 2015-04-01, at 11:00pm
9.7.1 Solitaire with big cards
9.7.2 Explanation of the testing program
9.7.3 Submitting your assignment
9.7.4 Sample solution
9.8 Assignment 8, due Wednesday 2015-04-08, at 11:00pm
9.8.1 An ordered set
9.8.2 The testOrderedSet wrapper
9.8.3 Submitting your assignment
9.8.4 Sample solution
9.9 Assignment 9, due Wednesday 2015-04-15, at 11:00pm
9.9.1 Finding a cycle in a maze
9.9.2 Input and output format
9.9.3 Submitting and testing your program
9.9.4 Sample solution
1 Course administration
1.1 Overview
This is the course information for CPSC 223: Data Structures and Programming
Techniques for the Spring 2018 semester. This document is available in two
formats, both of which should contain the same information:
• HTML
• PDF
Code examples can be downloaded from links in the text, or can be found in the
examples directory.
The links above point to www.cs.yale.edu. In case this machine is down,
a backup copy of these files can be found at
https://www.dropbox.com/sh/omg9qcxkxeiam2o/AACRAJOTj8af6V7RC1cXBHjQa?dl=0.
This document is a work in progress, and is likely to change frequently as the
semester progresses.
1.1.1 License
1.1.2 Resources
1.1.3 Documentation
me to it.
– C
– C (abridged)
– Unix
– Emacs
• Programming in C
• Valgrind documentation
• UNIXhelp for Users
1.1.4 Questions and comments
Please feel free to send questions or comments on the class or anything connected
to it to [email protected].
For questions about individual assignments, you may be able to get a faster
response using Piazza. Note that questions you ask there are visible to other
students if not specifically marked private, so be careful about broadcasting your
draft solutions.
1.2 Lecture schedule
K&R refers to the Kernighan and Ritchie book. Examples from lecture can be
found in the examples directory under 2018/lecture if the links below have not
been updated yet.
2018-01-17 Introduction. What the course is about. Getting started with C:
running the compiler, the main function, integer data types and arithmetic,
a few simple programs. Readings: Course administration, The Zoo, The
Linux programming environment, a little bit about developing on your
own machine, Structure of a C program, Basic integer types; K&R §§1.1,
1.2. Examples from lecture.
2018-01-19 Arithmetic in C. Readings: Integer constants, Integer operators;
K&R §§1.4, 2.2, 2.3, 2.5, 2.6, 2.8, 2.9. Example from lecture: eval.c.
2018-01-22 Local variables and assignment operators. Defining constants using
#define. The ++ and -- operators. Control structures: if, while, for,
switch. Readings: Variables, Statements through The for loop; K&R
§1.3, 1.4, 2.1, 2.4, and 3.1–3.6. Example from lecture: eval.c, also the
Makefile that I was using to configure make so that I could run :make
test from vim to quickly rebuild and run the eval demo.
2018-01-24 Control flow and logical operators: &&, ||, ! and ,. Operator
precedence. I/O using getchar and putchar. Goto-like control structures:
break, continue, goto, and return. Basics of functions. Readings: Rest
of Operators and Statements, Reading and writing single characters, Func-
tions through The return statement; K&R 1.5, 2.6, 2.12, 3.7, 3.8, 4.1, 4.2.
Examples from lecture.
2018-01-29 Start of pointers and arrays: pointer types and pointer variables.
The & and * operators. Using a pointer to get a value out of a function.
Array declarations. Preventing array modification with const. Strings and
char *. Readings: Pointers up through Arrays; K&R 5.1–5.4. Examples
from lecture (examples/2018/lecture/2018-01-29).
2018-01-31 Storage allocation: malloc, calloc, free, and realloc. More on
pointers and arrays: Multi-dimensional arrays, C99 variable-length arrays.
Finding storage allocation bugs using valgrind. Readings: Pointers
through Run-time storage allocation using malloc, Valgrind; K&R §§5.6–
5.9. Examples from lecture.
2018-02-05 Structured data types: structs, unions, and enums. Separating
interfaces from implementations. Readings: Structured data types; K&R
Chapter 6, §2.5 (for enums). Examples from lecture: struct.c and packed-
Vector.c include two different implementations of a bounds-checked vector
type; union.c shows how unions by themselves lead to trouble but adding
tags using structs and enums can avoid it.
2018-02-07 Managing large C programs: source files vs header files, static
functions, opaque structs and typedefs, linking, make. Readings: Function
declarations and modules, Make. Examples from lecture.
2018-02-12 Strings in C. How Unicode sort of works in C. Various implemen-
tations of strcpy. Using gdb to observe a program in action. Readings:
Strings up to Operations on strings, The GNU debugger gdb; K&R §5.5,
Appendix B3. Examples from lecture, mostly strings.c.
2018-02-14 More on strings and file I/O. Performance pitfalls and how to find
them. Readings: Rest of Strings; File I/O; K&R Chapter 7. Examples
from lecture: copy.c, strings.c, the now misleadingly-named slow.c, and a
reconstruction of slow-original.c.
2018-02-19 Start of data structures: efficiency of different data structures,
linked lists. Readings: Asymptotic notation, Linked lists. Examples
from lecture: linkedList.c, which reverses an input string using a stack
implemented as a linked list.
2018-02-21 Abstract data types: invariants and representations. Implementing
queues and deques. Readings: Abstract data types, Queues, Deques.
Examples from lecture: As a simple example of an abstract data type,
license.h, an interface for implementations license1.c and its encrypted
variant license2.c, both of which work the same with licenseTest.c, but
only one of which defeats the evil abstraction-barrier-violating hacker
program evil.c. As a more practical example, queue.h, an interface for
implementations badQueue.c and goodQueue.c, one of which explodes if
given too many elements and one of which doesn’t. Either will compile
against testQueue.c, since that file only uses information in the interface
and doesn’t care about the implementation.
2018-02-26 Set and map abstract data types. Hash tables. Readings: Hash
tables.
2018-02-28 Function pointers and applications. Readings: TBA
2018-03-05 TBA
2018-03-07 Exam 1 will be given at the usual class time in TBA. It will be a
closed-book test potentially covering all material discussed in lecture prior
to this date. Sample exams from previous years: 2005, 2012, 2015.
2018-03-27 TBA
2018-03-28 TBA
2018-04-02 TBA
2018-04-04 TBA
2018-04-09 TBA
2018-04-11 TBA
2018-04-16 TBA
2018-04-18 TBA
2018-04-23 TBA
2018-04-25 Exam 2 will be given at the usual class time in TBA. It will be
a closed-book test potentially covering all material discussed in lecture
during the semester. Sample exams from previous years: 2005, 2012, 2015.
1.3 Syllabus
1.3.1 On-line course information
On-line information about the course, including the lecture schedule, lecture
notes, and information about assignments, can be found at
http://www.cs.yale.edu/homes/aspnes/classes/223/notes.html. This document
will be updated frequently during the semester, and is also available in PDF format.
1.3.2 Meeting times
Lectures are MW 13:00–14:15 in WLH 201. The lecture schedule can be found
in the course notes. A calendar is also available.
1.3.4 Prerequisites
1.3.5 Textbook
1.3.6 Course requirements
Eight weekly homework assignments, and two in-class exams. Assignments will
be weighted equally in computing the final grade, and will together count for
60% of the total grade. Each exam will count for 20%.
1.3.7 Staff
1.3.7.1 Instructor
James Aspnes ([email protected], http://www.cs.yale.edu/homes/
aspnes/). Office: AKW 401. If my open office hours don’t work for you, please
send email to make an appointment.
1.3.7.3 Peer tutors
• Bonnie Rhee [email protected].
• Joanna Wu [email protected].
• Tony Fu [email protected].
• Aadit Vyas [email protected].
• Hannah Block [email protected].
• Ian Zhou [email protected].
• Xiu Chen [email protected].
• Sreejan Kumar [email protected].
• Melina Delgado [email protected].
• Scott Smith [email protected].
• Thomas Liao [email protected].
• Adriana Elwood [email protected].
• Elizabeth Brooks [email protected].
• Joyce Duan [email protected].
• Isabella Teng [email protected].
1.3.8 Use of outside help
Students are free to discuss homework problems and course material with each
other, and to consult with the instructor or a TA. Solutions handed in, however,
should be the student’s own work. If a student benefits substantially from
hints or solutions received from fellow students or from outside sources, then
the student should hand in their solution but acknowledge the outside sources,
and we will apportion credit accordingly. Using outside resources in solving a
problem is acceptable but plagiarism is not.
1.3.9 Clarifications for homework assignments
From time to time, ambiguities and errors may creep into homework assignments.
Questions about the interpretation of homework assignments should be sent
to the instructor at [email protected]. Clarifications will appear in the
on-line version of the assignment.
1.3.10 Late assignments
Assignments submitted after the deadline without a Dean’s Excuse are
automatically assessed a 2%/hour penalty.
1.4 Introduction
There are two purposes to this course: to teach you to program in the C
programming language, and to teach you how to choose, implement, and use
data structures and standard programming techniques.
1.4.1 Why should you learn to program in C?
• It is the de facto substandard of programming languages.
– C runs on everything.
– C lets you write programs that use very few resources.
– C gives you near-total control over the system, down to the level of
pushing around individual bits with your bare hands.
– C imposes very few constraints on programming style: unlike higher-
level languages, C doesn’t have much of an ideology. There are very
few programs you can’t write in C.
– Many of the programming languages people actually use (Visual Basic,
perl, python, ruby, PHP, etc.) are executed by interpreters written in
C (or C++, an extension to C).
• You will learn discipline.
– C makes it easy to shoot yourself in the foot.
– You can learn to avoid this by being careful about where you point it.
– Pain is a powerful teacher of caution.
• You will fail CPSC 323 if you don’t learn C really well in CPSC 223 (CS
majors only).
On the other hand, there are many reasons why you might not want to use C
later in life. It’s missing a lot of features of modern program languages, including:
• A garbage collector.
• Minimal programmer-protection features like array bounds-checking or a
strong type system.
• Non-trivial built-in data structures.
• Language support for exceptions, namespaces, object-oriented program-
ming, etc.
For most problems where minimizing programmer time and maximizing robust-
ness are more important than minimizing runtime, other languages are a better
choice. But for this class, we’ll be using C.
If you want to read a lot of flaming about what C is or is not good for, see
http://c2.com/cgi/wiki?CeeLanguage.
1.4.2 Why should you learn about data structures and programming techniques?
For small programs, you don’t need much in the way of data structures. But as
soon as you are representing reasonably complicated data, you need some place
to store it. Thinking about how you want to store and organize this data can be
a good framework for organizing the rest of your program.
Many programming environments will give you a rich collection of built-in data
structures as part of their standard library. C does not: unless you use third-
party libraries, any data structure you want in C you will have to build yourself.
For most data structures this will require an understanding of pointers and
storage allocation, mechanisms often hidden in other languages. Understanding
these concepts will give you a deeper understanding of how computers actually
work, and will both let you function in minimalist environments where you don’t
have a lot of support and let you understand what more convenient environments
are doing under their abstraction barriers.
The same applies to the various programming techniques we will discuss in this
class. While some of the issues that come up are specific to C and similar low-
level languages (particular issues involving disciplined management of storage),
some techniques will apply no matter what kinds of programs you are writing
and all will help in understanding what your computer systems are doing even if
some of the details are hidden.
2 The Zoo
The main undergraduate computing facility for Computer Science is the Zoo,
located on the third floor of AKW. The Zoo contains a large number of Linux
workstations.
You don’t need to do your work for this class in the Zoo, but that is where your
assignments will be submitted and tested, so if you do development elsewhere,
you will need to copy your files over and make sure that they work there as well.
The best place for information about the Zoo is at http://zoo.cs.yale.edu/. Below
are some points that are of particular relevance for CS223 students.
2.2 Getting into the room
The Zoo is located on the third floor of Arthur K Watson Hall, toward the
front of the building. If you are a Yale student, your ID should get you into the
building and the room. If you are not a student, you will need to get your ID
validated in AKW 008a to get in after hours.
2.3 Remote use
There are several options for remote use of the Zoo. The simplest is to use ssh
as described in the following section. This will give you a terminal session, which
is enough to run anything you need to if you are not trying to do anything fancy.
The related program scp can be used to upload and download files.
2.3.1 Terminal access
The best part of Unix is that nothing ever changes. The instructions below still
work, and will get you a terminal window in the Zoo:
Date: Mon, 13 Dec 2004 14:34:19 -0500 (EST)
From: Jim Faulkner <[email protected]>
Subject: Accessing the Zoo
Hello all,
I've been asked to write up a quick guide on how to access the Linux
computers in the Zoo. For those who need this information, please read
on.
There are 2 ways of accessing the Zoo nodes, by walking up to one and
logging in on the console (the computers are located on the 3rd floor of
AKW), or by connecting remotely via SSH. Telnet access is not allowed.
SSH clients for various operating systems are available here:
http://www.yale.edu/software/
Mac OSX comes with an SSH client by default. A good choice for an SSH
client if you run Microsoft Windows is PuTTY:
http://www.chiark.greenend.org.uk/~sgtatham/putty/
With the exception of a few legacy accounts, the Zoo uses your campus-wide
NetID and password for login access. However, you must sign up for a Zoo
account before access is allowed. To sign up for a Zoo account, go to
this web page:
http://zoo.cs.yale.edu/accounts.html
Then login with your campus-wide NetID and password. You may choose a
different shell, or set up your account to be enrolled in a class if that
is appropriate for you, but neither is necessary. Just click "Submit".
Within an hour, your Zoo account will be created, and you will receive
more information via e-mail about how to access the Zoo.
Users cannot log into zoo.cs.yale.edu (the central file server) directly,
they must log into one of the Zoo nodes. Following is the list of Zoo
nodes:
aphid.zoo.cs.yale.edu lion.zoo.cs.yale.edu
bumblebee.zoo.cs.yale.edu macaw.zoo.cs.yale.edu
cardinal.zoo.cs.yale.edu monkey.zoo.cs.yale.edu
chameleon.zoo.cs.yale.edu newt.zoo.cs.yale.edu
cicada.zoo.cs.yale.edu peacock.zoo.cs.yale.edu
cobra.zoo.cs.yale.edu perch.zoo.cs.yale.edu
cricket.zoo.cs.yale.edu python.zoo.cs.yale.edu
frog.zoo.cs.yale.edu rattlesnake.zoo.cs.yale.edu
gator.zoo.cs.yale.edu rhino.zoo.cs.yale.edu
giraffe.zoo.cs.yale.edu scorpion.zoo.cs.yale.edu
grizzly.zoo.cs.yale.edu swan.zoo.cs.yale.edu
hare.zoo.cs.yale.edu termite.zoo.cs.yale.edu
hippo.zoo.cs.yale.edu tick.zoo.cs.yale.edu
hornet.zoo.cs.yale.edu tiger.zoo.cs.yale.edu
jaguar.zoo.cs.yale.edu tucan.zoo.cs.yale.edu
koala.zoo.cs.yale.edu turtle.zoo.cs.yale.edu
ladybug.zoo.cs.yale.edu viper.zoo.cs.yale.edu
leopard.zoo.cs.yale.edu zebra.zoo.cs.yale.edu
If you have already created an account, you can SSH directly to one of
the above computers and log in with your campus-wide NetID and
password. You can also SSH to node.zoo.cs.yale.edu, which will connect
you to a random Zoo node.
Feel free to contact me if you have any questions about the Zoo.
thanks,
Jim Faulkner
Zoo Systems Administrator
2.3.2 GUI access
If for some reason you really want to replicate the full Zoo experience on your
own remote machine, you can try running an X server and forwarding your
connection.
The instructions below were written by Debayan Gupta in 2013, and may or
may not still work.
For Mac or Linux users, typing “ssh -X [email protected]” into a
terminal and then running “nautilus” will produce an X window interface.
When on Windows, I usually use XMing (I’ve included a step-by-step guide at
the end of this mail).
For transferring files, I use CoreFTP (http://www.coreftp.com). FileZilla
(https://filezilla-project.org/) is another option.
Step-by-step guide to XMIng:
You can download Xming from here: http://sourceforge.net/projects/xming/
Download and install. Do NOT launch Xming at the end of your installation.
Once you’ve installed Xming, go to your start menu and find XLaunch (it should
be in the same folder as Xming).
1. Start XLaunch, and select “Multiple Windows”. Leave “Display Number”
as its default value. Click next.
2. Select “Start a program”. Click next.
3. Type “nautilus” (or “terminal”, if you want a terminal) into the “Start
Program” text area. Select “Using PuTTY (plink.exe)”.
4. Type in the name of the computer (use “node.zoo.cs.yale.edu”) in the
“Connect to computer” text box.
5. Type in your netID in the “Login as user” text box (you can leave the
password blank). Click next.
6. Make sure “Clipboard” is ticked. Leave everything else blank. Click next.
7. Click “Save Configuration”. When saving, make sure your filename ends
with “.xlaunch” - this will let you connect with a click (you won’t need to
do all this every time you connect).
8. Click Finish.
9. You will be prompted for your password - enter it. Ignore any security
warnings.
10. You now have a remote connection to the Zoo.
For more options and information, you can go to:
http://www.straightrunning.com/XmingNotes/
2.3.3 GUI access using FastX
Another possibility may be FastX, a commercial X server that does some extra
compression.
The FastX client can be downloaded from
https://software.yale.edu/software/fastx-2-client. You may need to supply
your NetID and password to access
this page. In the past using this software required going through a complicated
procedure to get a Yale license key, but this appears to no longer be the case.
After downloading and installing FastX, you should supply node.cs.yale.edu
as the machine to connect to unless you have a particular fondness for a specific
Zoo node. As with ssh, your login will be your NetID and password.
2.4 Developing on your own machine
Because C is highly portable, there is a good chance you can develop assignment
solutions on your own machine and just upload to the Zoo for final testing and
submission. Because there are many different kinds of machines out there, I can
only offer very general advice about how to do this. (If anybody would like to
send me more detailed advice on any of these topics, I’d be happy to paste it in
below.)
You will need a text editor. I like Vim, which will run on pretty much anything,
but you should use whatever you are comfortable with.
You will also need a C compiler that can handle C99. Ideally you will have an
environment that looks enough like Linux that you can also run other command-
line tools like gdb, make, and possibly git. How you get this depends on your
underlying OS.
2.4.1 Linux
Pretty much any Linux distribution will give you this out of the box. You may
need to run your package manager to install missing utilities like the gcc C
compiler.
2.4.2 OSX
OSX is not Linux, but it is Unix under the hood. You will need a terminal
emulator (the built-in Terminal program works, but I like iTerm2). You will also
need to set up XCode to get command-line developer tools. The method for
doing this seems to vary depending on which version of XCode you have.
You may end up with c99 pointing at clang instead of gcc. Most likely the
only difference you will see is the details of the error messages. Remember to
test with gcc on the Zoo.
Other packages can be installed using Homebrew. If you are a Mac person you
probably already know more about this than I do.
2.4.3 Windows
2.5 How to compile and run programs
See the chapter on how to use the Zoo for details of particular commands. The
basic steps are
• Creating the program with a text editor of your choosing. (I like vim for
long programs and cat for very short ones.)
• Compiling it with gcc.
• Running it.
If any of these steps fail, the next step is debugging. We’ll talk about debugging
elsewhere.
2.5.1 Creating the program
Use your favorite text editor. The program file should have a name of the form
foo.c; the .c at the end tells the C compiler the contents are C source code.
Here is a typical C program:
#include <stdio.h>

/* print the numbers from 1 to 10 */
int
main(int argc, char **argv)
{
    int i;

    for (i = 1; i <= 10; i++) {
        printf("%d\n", i);    /* each number on its own line */
    }

    return 0;
}
examples/count.c
looks for programs in certain standard system directories. To make it run a
program in the current directory, we have to include the directory name.
– The return 0; on Line 15 tells the operating system that the pro-
gram worked (the convention in Unix is that 0 means success). If
the program didn’t work for some reason, we could have returned
something else to signal an error.
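As an illustration of this convention, here is a small made-up example (call it usage.c; it is not one of the files in the examples directory) that returns a nonzero status when it doesn’t like its input:

#include <stdio.h>

/* demand exactly one command-line argument */
int
main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "Usage: %s some-argument\n", argv[0]);
        return 1;   /* nonzero status reports failure to the shell */
    }

    printf("got the argument %s\n", argv[1]);

    return 0;       /* 0 reports success */
}

If this is compiled to a program called usage, then ./usage foo && echo ok prints ok, while ./usage alone does not, because the shell sees the nonzero exit status.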
When you sign up for an account in the Zoo, you are offered a choice of possible
shell programs. The examples below assume you have chosen bash, the Bourne-
again shell written by the GNU project. Other shells behave similarly for basic
commands.
When you log in to a Zoo node directly, you may not automatically get a shell
window. If you use the default login environment (which puts you into the KDE
window manager), you need to click on the picture of the display with a shell in
front of it in the toolbar at the bottom of the screen. If you run Gnome instead
(you can change your startup environment using the popup menu in the login
box), you can click on the foot in the middle of the toolbar. Either approach will
pop up a terminal emulator from which you can run emacs, gcc, and so forth.
The default login shell in the Zoo is bash, and all examples of shell command
lines given in these notes will assume bash. You can choose a different login
shell on the account sign-up page if you want to, but you are probably best off
just learning to like bash.
Most of what one does with Unix programs is manipulate the filesystem.
Unix files are unstructured blobs of data whose names are given by paths
consisting of a sequence of directory names separated by slashes: for exam-
ple /home/accts/some-user/cs223/hw1.c. At any time you are in a current
working directory (type pwd to find out what it is and cd new-directory to
change it). You can specify a file below the current working directory by
giving just the last part of the pathname. The special directory names .
and .. can also be used to refer to the current directory and its parent. So
/home/accts/some-user/cs223/hw1.c is just hw1.c or ./hw1.c if your current
working directory is /home/accts/some-user/cs223, cs223/hw1.c if your cur-
rent working directory is /home/accts/some-user, and ../cs223/hw1.c if your
current working directory is /home/accts/some-user/illegal-downloads.
All Zoo machines share a common filesystem, so any files you create or change
on one Zoo machine will show up in the same place on all the others.
rm rm file deletes a file. Deleted files cannot be recovered. Use this command
carefully.
chmod chmod changes the permissions on a file or directory. See the man page
for the full details of how this works. Here are some common chmod’s:
• chmod 644 file; owner can read or write the file, others can only
read it.
• chmod 600 file; owner can read or write the file, others can’t do
anything with it.
• chmod 755 file; owner can read, write, or execute the file, others
can read or execute it. This is typically used for programs or for
directories (where the execute bit has the special meaning of letting
somebody find files in the directory).
• chmod 700 file; owner can read, write, or execute the file, others
can’t do anything with it.
emacs, gcc, make, gdb, git See corresponding sections.
Sometimes you may have a running program that won’t die. Aside from costing
you the use of your terminal window, this may be annoying to other Zoo users,
especially if the process won’t die even if you close the terminal window or log
out.
There are various control-key combinations you can type at a terminal window
to interrupt or stop a running program.
ctrl-C Interrupt the process. Many processes (including any program you write
unless you trap SIGINT using the sigaction system call, as sketched after
this list) will die instantly when you do this. Some won’t.
ctrl-Z Suspend the process. This will leave a stopped process lying around.
Type jobs to list all your stopped processes, fg to restart the last process
(or fg %1 to start process %1 etc.), bg to keep running the stopped process
in the background, kill %1 to kill process %1 politely, kill -KILL %1 to
kill process %1 whether it wants to die or not.
ctrl-D Send end-of-file to the process. Useful if you are typing test input
to a process that expects to get EOF eventually or writing programs
using cat > program.c (not really recommended). For test input,
you are often better putting it into a file and using input redirection
(./program < test-input-file); this way you can redo the test after
you fix the bugs it reveals.
ctrl-\ Quit the process. Sends a SIGQUIT, which asks a process to quit and
dump core. Mostly useful if ctrl-C and ctrl-Z don’t work.
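The sigaction sketch promised above: this is not one of the course examples, and nothing here is needed yet, but it shows roughly what trapping SIGINT looks like. A program that installs a handler keeps running when you type ctrl-C:

#define _POSIX_C_SOURCE 200809L   /* ask for the POSIX declarations */

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* count SIGINTs instead of dying; exit after three of them */
static volatile sig_atomic_t interrupts;

static void
handle_sigint(int signum)
{
    interrupts++;    /* just record that the signal happened */
}

int
main(int argc, char **argv)
{
    struct sigaction sa;

    sa.sa_handler = handle_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;

    if (sigaction(SIGINT, &sa, 0) == -1) {
        perror("sigaction");
        return 1;
    }

    while (interrupts < 3) {
        sleep(1);    /* interrupted sleeps return early, which is fine here */
    }

    puts("all right, exiting");

    return 0;
}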
If you have a runaway process that you can’t get rid of otherwise, you can use
ps g to get a list of all your processes and their process ids. The kill command
can then be used on the offending process, e.g. kill -KILL 6666 if your evil
process has process id 6666. Sometimes the killall command can simplify
this procedure, e.g. killall -KILL evil kills all processes with command name
evil.
If you compile your own program, you will need to prefix it with ./ on the
command line to tell the shell that you want to run a program in the current
directory (called ‘.’) instead of one of the standard system directories. So for
example, if I’ve just built a program called count, I can run it by typing
$ ./count
Here the “$ ” is standing in for whatever your prompt looks like; you should
not type it.
Any words after the program name (separated by whitespace—spaces and/or
tabs) are passed in as arguments to the program. Sometimes you may wish to
pass more than one word as a single argument. You can do so by wrapping the
argument in single quotes, as in
$ ./count 'this is the first argument' 'this is the second argument'
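If you want to see exactly how the shell chopped up the command line, a tiny program can print its argument vector. This one (a made-up args.c, not in the examples directory) prints each element of argv on its own line:

#include <stdio.h>

/* print each command-line argument on its own line */
int
main(int argc, char **argv)
{
    int i;

    for (i = 0; i < argc; i++) {
        printf("argv[%d] = %s\n", i, argv[i]);
    }

    return 0;
}

Running ./args 'this is one argument' two shows the quoted words arriving as a single argv[1], with two as argv[2] and the program name ./args itself as argv[0].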
Some programs take input from standard input (typically the terminal). If
you are doing a lot of testing, you will quickly become tired of typing test input
at your program. You can tell the shell to redirect standard input from a file
by putting the file name after a < symbol, like this:
$ ./count < huge-input-file
A ‘>’ symbol is used to redirect standard output, in case you don’t want to
read it as it flies by on your screen:
$ ./count < huge-input-file > huger-output-file
A useful file for both input and output is the special file /dev/null. As input,
it looks like an empty file. As output, it eats any characters sent to it:
$ ./sensory-deprivation-experiment < /dev/null > /dev/null
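Redirection works because programs that read standard input and write standard output don’t know or care where those streams are connected. As a minimal made-up example (call it copycat.c; it is not in the examples directory), this filter copies its input to its output one character at a time:

#include <stdio.h>

/* copy standard input to standard output, one character at a time */
int
main(int argc, char **argv)
{
    int c;

    while ((c = getchar()) != EOF) {
        putchar(c);
    }

    return 0;
}

The same executable works interactively, with redirection as in ./copycat < huge-input-file > copy-of-input, or as one stage of a pipeline as described next.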
You can also pipe programs together, connecting the output of one to the input
of the next. Good programs to put at the end of a pipe are head (eats all but the
first ten lines), tail (eats all but the last ten lines), more (lets you page through
the output by hitting the space bar), and tee (shows you the output but also saves
a copy to a file). A typical command might be something like ./spew | more or
./slow-but-boring | tee boring-output. Pipes can consist of a long train
of programs, each of which processes the output of the previous one and supplies
the input to the next. A typical case might be:
$ ./do-many-experiments | sort | uniq -c | sort -nr
which, if ./do-many-experiments gives the output of one experiment on each
line, produces a list of distinct experimental outputs sorted by decreasing fre-
quency. Pipes like this can often substitute for hours of real programming.
To write your programs, you will need to use a text editor, preferably one that
knows enough about C to provide tools like automatic indentation and syntax
highlighting. There are three reasonable choices for this in the Zoo: kate, emacs,
and vim (which can also be run as vi). Kate is a GUI-style editor that comes
with the KDE window system; it plays nicely with the mouse, but Kate skills will
not translate well into other environments. Emacs and Vi have been the two
contenders for the One True Editor since the 1970s—if you learn one (or both)
you will be able to use the resulting skills everywhere. My personal preference is
to use Vi, but Emacs has the advantage of using the same editing commands as
the shell and gdb command-line interfaces.
To start Emacs, type emacs at the command line. If you are actually sitting at
a Zoo node it should put up a new window. If not, Emacs will take over the
current window. If you have never used Emacs before, you should immediately
type C-h t (this means hold down the Control key, type h, then type t without
holding down the Control key). This will pop you into the Emacs built-in
tutorial.
C-x u Undo. Undoes the last change you made to the current buffer. Type it
again to undo more things. A lifesaver. Note that it can only undo back
to the time you first loaded the file into Emacs—if you want to be able to
back out of bigger changes, use git (described below).
C-x C-s Save. Saves changes to the current buffer out to its file on disk.
C-x C-f Edit a different file.
C-x C-c Quit out of Emacs. This will ask you if you want to save any buffers
that have been modified. You probably want to answer yes (y) for each
one, but you can answer no (n) if you changed some file inside Emacs but
want to throw the changes away.
C-f Go forward one character.
C-b Go back one character.
C-n Go to the next line.
C-p Go to the previous line.
C-a Go to the beginning of the line.
C-k Kill the rest of the line starting with the current position. Useful Emacs
idiom: C-a C-k.
C-y “Yank.” Get back what you just killed.
TAB Re-indent the current line. In C mode this will indent the line according
to Emacs’s notion of how C should be indented.
M-x compile Compile a program. This will ask you if you want to save out
any unsaved buffers and then run a compile command of your choice
(see the section on compiling programs below). The exciting thing about
M-x compile is that if your program has errors in it, you can type C-x `
to jump to the next error, or at least where gcc thinks the next error is.
If you don’t find yourself liking Emacs very much, you might want to try Vim
instead. Vim is a vastly enhanced reimplementation of the classic vi editor,
which I personally find easier to use than Emacs. Type vimtutor to run the
tutorial.
One annoying feature of Vim is that it is hard to figure out how to quit. If you
don't mind losing all of your changes, you can always get out by hitting the
Escape key a few times and then typing :qa!
To run Vim, type vim or vim filename from the command line. Or you can use
the graphical version gvim, which pops up its own window.
Vim is a modal editor, meaning that at any time you are in one of several modes
(normal mode, insert mode, replace mode, operator-pending mode, etc.), and
the interpretation of keystrokes depends on which mode you are in. So typing
jjjj in normal mode moves the cursor down four lines, while typing jjjj in
insert mode inserts the string jjjj at the current position. Most of the time
you will be in either normal mode or insert mode. There is also a command
mode entered by hitting : that lets you type longer commands, similar to the
Unix command-line or M-x in Emacs.
them, dG deletes to end of file—there are many possibilities. All of these
save what you deleted into register "" so you can get them back with p.
yy Like dd, but only saves the line to register "" and doesn’t delete it. (Think
copy). All the variants of dd work with yy: 5yy, y$, yj, y%, etc.
p Pull whatever is in register "". (Think paste).
<< and >> Outdent or indent the current line one tab stop.
:make Run make in the current directory. You can also give it arguments, e.g.,
:make myprog, :make test. Use :cn to go to the next error if you get
errors.
:! Run a command, e.g., :! echo hello world or :! gdb myprogram. Returns
to Vim when the command exits (control-C can sometimes be helpful if
your command isn’t exiting when it should). This works best if you ran
Vim from a shell window; it doesn’t work very well if Vim is running in its
own window.
3.2.2.2 Settings
Unlike Emacs, Vim’s default settings are not very good for editing C programs.
You can fix this by creating a file called .vimrc in your home directory with the
following commands:
set shiftwidth=4
set autoindent
set backup
set cindent
set hlsearch
set incsearch
set showmatch
set number
syntax on
filetype plugin on
filetype indent on
examples/sample.vimrc
(You can download this file by clicking on the link.)
In Vim, you can type e.g. :help backup to find out what each setting does.
Note that because .vimrc starts with a ., it won’t be visible to ls unless you
use ls -a or ls -A.
3.3 Compilation tools

3.3.1 The GNU C compiler gcc

A C program will typically consist of one or more files whose names end with .c.
To compile foo.c, you can type gcc foo.c. Assuming foo.c contains no errors
egregious enough to be detected by the extremely forgiving C compiler, this will
produce a file named a.out that you can then execute by typing ./a.out.
If you want to debug your program using gdb or give it a different name,
you will need to use a longer command line. Here’s one that compiles foo.c
to foo (run it using ./foo) and includes the information that gdb needs:
gcc -g3 -o foo foo.c
If you want to use C99 features, you will need to tell gcc to use C99 instead
of its own default dialect of C. You can do this either by adding the argument
-std=c99 as in gcc -std=c99 -o foo foo.c or by calling gcc as c99 as in c99
-o foo foo.c.
By default, gcc doesn’t check everything that might be wrong with your program.
But if you give it a few extra arguments, it will warn you about many (but not
all) potential problems: c99 -g3 -Wall -pedantic -o foo foo.c.
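If you want something to try these command lines on, here is about the smallest foo.c
that does anything visible (a sketch; the name foo.c just matches the commands above,
and any C source file would do):

#include <stdio.h>

int
main(int argc, char **argv)
{
    puts("foo compiled and ran");
    return 0;
}

Compiling it with c99 -g3 -Wall -pedantic -o foo foo.c and then running ./foo should
print the message.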
3.3.2 Make
For complicated programs involving multiple source files, you are probably better
off using make than calling gcc directly. Make is a “rule-based expert system”
that figures out how to compile programs given a little bit of information about
their components.
For example, if you have a file called foo.c, try typing make foo and see what
happens.
In general you will probably want to write a Makefile, which is named Makefile
or makefile and tells make how to compile programs in the same directory. Here’s
a typical Makefile:
# Any line that starts with a sharp is a comment and is ignored
# by Make.

# There is no file named "all", so this first target is just a
# convenient way to say "build everything"; here that means
# hello-world, which Make knows how to build from hello-world.c
# using its built-in rules.
all: hello-world

# Command lines can do more than just build things. For example,
# "make test" will rebuild hello-world (if necessary) and then run it.
test: hello-world
	./hello-world

# This lets you type "make clean" and get rid of anything you can
# rebuild. The $(RM) variable is predefined to "rm -f"
clean:
	$(RM) hello-world *.o
examples/usingMake/Makefile
Given a Makefile, make looks at each dependency line and asks: (a) does the
target on the left-hand side exist, and (b) is it at least as new as every file it
depends on? If the answer to either question is no, make rebuilds the target, after
first rebuilding any of the files it depends on that are themselves out of date; the
commands it runs are the ones underneath a dependency line where the target
appears on the left-hand side. It has built-in
rules for doing common tasks like building .o files (which contain machine code)
from .c files (which contain C source code). If you have a fake target like all
above, it will try to rebuild everything all depends on because there is no file
named all (one hopes).
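For concreteness, here is a hello-world.c that the Makefile above could build; this is
a sketch, since the notes don't show the actual file, but any program by that name
will do:

#include <stdio.h>

int
main(int argc, char **argv)
{
    puts("hello, world");
    return 0;
}

With both files in the same directory, make test will compile hello-world using
make's built-in rule for turning a .c file into a program and then run it, and
make clean will delete it again.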
3.4 Debugging

The standard debugger on the Zoo is gdb. Also useful is the memory error
checker valgrind. Below are some notes on debugging in general and using
these programs in particular.
3.4.1 Debugging in general

The basic steps are always the same:
1. Know what your program is supposed to do.
2. Detect when it doesn’t.
3. Fix it.
A tempting mistake is to skip step 1, and just try randomly tweaking things
until the program works. Better is to see what the program is doing internally,
so you can see exactly where and when it is going wrong. A second temptation
is to attempt to intuit where things are going wrong by staring at the code or
the program’s output. Avoid this temptation as well: let the computer tell you
what it is really doing inside your program instead of guessing.
3.4.2 Assertions
Every non-trivial C program should include <assert.h>, which gives you the
assert macro (see Appendix B6 of K&R). The assert macro tests if a condition
is true and halts your program with an error message if it isn’t:
#include <assert.h>

int
main(int argc, char **argv)
{
    assert(2+2 == 5);

    return 0;
}
examples/debugging/no.c
Compiling and running this program produces the following output:
$ gcc -o no no.c
$ ./no
no: no.c:6: main: Assertion `2+2 == 5' failed.
Line numbers and everything, even if you compile with the optimizer turned
on. Much nicer than a mere segmentation fault, and if you run it under the
debugger, the debugger will stop exactly on the line where the assert failed so
you can poke around and see why.
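Assertions are also a cheap way to document and enforce assumptions inside your own
functions. Here is a small sketch (the average function and its contract are invented
for illustration):

#include <assert.h>
#include <stdio.h>

/* return the average of the first n elements of a;
 * the caller must supply a non-null array and n > 0 */
double
average(const int a[], int n)
{
    long sum = 0;

    assert(a != 0);   /* fail fast, with a useful message, if the caller lied */
    assert(n > 0);

    for(int i = 0; i < n; i++) {
        sum += a[i];
    }

    return (double) sum / n;
}

int
main(int argc, char **argv)
{
    int data[] = { 1, 2, 3, 4 };

    printf("%f\n", average(data, 4));

    return 0;
}

One caution: compiling with -DNDEBUG removes every assert, so don't put anything with
side effects that you actually need inside one.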
3.4.3 The GNU debugger gdb

The standard debugger on Linux is called gdb. This lets you run your program
under remote control, so that you can stop it and see what is going on inside.
You can also use ddd, which is a graphical front-end for gdb. There is an extensive
tutorial available for ddd, so we will concentrate on the command-line interface
to gdb here.
Warning: Though gdb is rock-solid when running on an actual Linux kernel,
if you are running on a different underlying operating system like Windows
(including Windows Subsystem for Linux) or OS X, it may not work as well,
either missing errors that it should catch or in some cases not starting at all. In
either case you can try debugging on the Zoo machines instead. For OS X, you
might also have better results using the standard OS X debugger lldb, which is
similar enough to gdb to do everything gdb can do while being different enough
that you will need to learn its own set of commands. Most IDEs that support C
also include debugging tools.
Getting back to gdb, we’ll look at a contrived example. Suppose you have the
following program bogus.c:
#include <stdio.h>

int
main(int argc, char **argv)
{
    int i;
    int sum;

    sum = 0;
    for(i = 0; i -= 1000; i++) {
        sum += i;
    }
    printf("%d\n", sum);

    return 0;
}
examples/debugging/bogus.c
Let’s compile and run it and see what happens. Note that we include the flag
-g3 to tell the compiler to include debugging information. This allows gdb to
translate machine addresses back into identifiers and line numbers in the original
program for us.
$ c99 -g3 -o bogus bogus.c
$ ./bogus
-34394132
$
That doesn’t look like the sum of 1 to 1000. So what went wrong? If we were
clever, we might notice that the test in the for loop is using the mysterious -=
operator instead of the <= operator that we probably want. But let’s suppose
we’re not so clever right now—it’s four in the morning, we’ve been working on
bogus.c for twenty-nine straight hours, and there’s a -= up there because in our
befuddled condition we know in our bones that it’s the right operator to use.
We need somebody else to tell us that we are deluding ourselves, but nobody is
around this time of night. So we’ll have to see what we can get the computer to
tell us.
The first thing to do is fire up gdb, the debugger. This runs our program in
stop-motion, letting us step through it a piece at a time and watch what it is
actually doing. In the example below gdb is run from the command line. You
can also run it directly from Emacs with M-x gdb, which lets Emacs track and
show you where your program is in the source file with a little arrow, or (if you
are logged in directly on a Zoo machine) by running ddd, which wraps gdb in a
graphical user interface.
$ gdb bogus
GNU gdb 4.17.0.4 with Linux/x86 hardware watchpoint and FPU support
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux"...
(gdb) run
Starting program: /home/accts/aspnes/tmp/bogus
-34394132
10 for(i = 0; i -= 1000; i++)
2: i = -1000
1: sum = -1000
(gdb) n
11 sum += i;
2: i = -1999
1: sum = -1000
(gdb) n
10 for(i = 0; i -= 1000; i++)
2: i = -1999
1: sum = -2999
(gdb) quit
The program is running. Exit anyway? (y or n) y
$
Here we are using break main to tell the program to stop as soon as it enters
main, display to tell it to show us the value of the variables i and sum whenever
it stops, and n (short for next) to execute the program one line at a time.
When stepping through a program, gdb displays the line it will execute next as
well as any variables you’ve told it to display. This means that any changes you
see in the variables are the result of the previous displayed line. Bearing this in
mind, we see that i drops from 0 to -1000 the very first time we hit the top of
the for loop and drops to -1999 the next time. So something bad is happening
in the top of that for loop, and if we squint at it a while we might begin to
suspect that i -= 1000 is not the nice simple test we might have hoped it was.
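For the record, here is what bogus.c was presumably meant to be, with <= in place of
the stray -=:

#include <stdio.h>

int
main(int argc, char **argv)
{
    int i;
    int sum;

    sum = 0;
    for(i = 0; i <= 1000; i++) {   /* <= is the test we wanted all along */
        sum += i;
    }
    printf("%d\n", sum);           /* prints 500500 */

    return 0;
}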
Useful for getting out of something you stepped into that you didn’t want
to step into.
cont (Or continue). Continue until (a) the end of the program, (b) a fatal
error like a Segmentation Fault or Bus Error, or (c) a breakpoint. If you
give it a numeric argument (e.g., cont 1000) it will skip over that many
breakpoints before stopping.
print Print the value of some expression, e.g. print i.
display Like print, but runs automatically every time the program stops.
Useful for watching values that change often.
backtrace Show all the function calls on the stack, with arguments. Can be
abbreviated as bt. Do bt full if you also want to see local variables in
each function.
set disable-randomization off Not something you will need every day, but
you should try this before running your program if it is producing seg-
mentation faults outside of gdb but not inside. Normally the Linux kernel
randomizes the position of bits of your program before running it, to make
its response to buffer overflow attacks less predictable. By default, gdb
turns this off so that the behavior of your program is consistent from one
execution to the next. But sometimes this means that a pointer that had
been bad with address randomization (causing a segmentation fault) turns
out not to be bad without. This option will restore the standard behavior
outside gdb and give you some hope of finding what went wrong.
The key to all debugging is knowing what your code is supposed to do. If you
don’t know this, you can’t tell the lunatic who thinks he’s Napoleon from the
lunatic who really is Napoleon. If you’re confused about what your code is supposed
to be doing, you need to figure out what exactly you want it to do. If you can
figure that out, often it will be obvious what is going wrong. If it isn’t obvious,
you can always go back to gdb.
For example, here is a program that fails an assertion:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

int
main(int argc, char **argv)
{
    int x;

    x = 3;

    assert(x+x == 4);

    return 0;
}
examples/debugging/assertFailed.c
With gdb in action:
$ gcc -g3 -o assertFailed assertFailed.c
22:59:39 (Sun Feb 15) zeniba aspnes ~/g/classes/223/notes/examples/debugging
$ gdb assertFailed
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from assertFailed...done.
(gdb) run
Starting program: /home/aspnes/g/classes/223/notes/examples/debugging/assertFailed
assertFailed: assertFailed.c:12: main: Assertion `x+x == 4' failed.
(gdb) up
#3 0xb7e3c6c7 in __assert_fail_base (fmt=0xb7f7a8b4 "%s%s%s:%u: %s%sAssertion `%s' failed.\
assertion=assertion@entry=0x804850f "x+x == 4", file=file@entry=0x8048500 "assertFailed.
line=line@entry=12, function=function@entry=0x8048518 <__PRETTY_FUNCTION__.2355> "main")
92 assert.c: No such file or directory.
(gdb) up
#4 0xb7e3c777 in __GI___assert_fail (assertion=0x804850f "x+x == 4", file=0x8048500 "assert
function=0x8048518 <__PRETTY_FUNCTION__.2355> "main") at assert.c:101
101 in assert.c
(gdb) up
#5 0x0804845d in main (argc=1, argv=0xbffff434) at assertFailed.c:12
12 assert(x+x == 4);
(gdb) print x
$1 = 3
Here we see that x has value 3, which may or may not be the right value, but
certainly violates the assertion.
Here is a program that produces a segmentation fault by using an array index that
is wildly out of range:

#include <stdio.h>

int
main(int argc, char **argv)
{
    int a[1000];
    int i;

    i = -1771724;

    printf("%d\n", a[i]);

    return 0;
}
examples/debugging/segmentationFault.c
$ gcc -g3 -o segmentationFault segmentationFault.c
23:04:18 (Sun Feb 15) zeniba aspnes ~/g/classes/223/notes/examples/debugging
$ gdb segmentationFault
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
[...]
Reading symbols from segmentationFault...done.
(gdb) run
Starting program: /home/aspnes/g/classes/223/notes/examples/debugging/segmentationFault
(gdb) n
11 i *= 37;
1: i = 0
(gdb) n
10 for(i = 0; i < 10; i += 0) {
1: i = 0
(gdb) n
11 i *= 37;
1: i = 0
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

int
main(int argc, char **argv)
{
    int x;
    int a[10];
    int i;

    x = 5;

    for(i = 0; i < 20; i++) {
        a[i] = 37;   /* BAD: writes past the end of a, eventually clobbering x */
    }

    assert(x == 5);

    return 0;
}
examples/debugging/mysteryChange.c
In the debugging session below, it takes a couple of attempts to catch the change
in x before hitting the failed assertion.
$ gcc -g3 -o mysteryChange mysteryChange.c
23:15:41 (Sun Feb 15) zeniba aspnes ~/g/classes/223/notes/examples/debugging
$ gdb mysteryChange
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
[...]
Reading symbols from mysteryChange...done.
(gdb) run
Starting program: /home/aspnes/g/classes/223/notes/examples/debugging/mysteryChange
mysteryChange: mysteryChange.c:18: main: Assertion `x == 5' failed.
The program being debugged has been started already.
Start it from the beginning? (y or n) y
3.4.4 Valgrind
The valgrind program can be used to detect some (but not all) common errors
in C programs that use pointers and dynamic storage allocation. On the Zoo,
you can run valgrind on your program by putting valgrind at the start of the
command line:
valgrind ./my-program arg1 arg2 < test-input
This will run your program and produce a report of any allocations and de-
allocations it did. It will also warn you about common errors like using uninitialized
memory, dereferencing pointers to strange places, writing off the end of blocks
allocated using malloc, or failing to free blocks.
You can suppress all of the output except errors using the -q option, like this:
valgrind -q ./my-program arg1 arg2 < test-input
You can also turn on more tests, e.g.
valgrind -q --tool=memcheck --leak-check=yes ./my-program arg1 arg2 < test-input
See valgrind --help for more information about the (many) options, or look
at the documentation at http://valgrind.org/ for detailed information about
what the output means. For some common valgrind messages, see the examples
section below.
If you want to run valgrind on your own machine, you may be able to find a
version that works at http://valgrind.org. Unfortunately, this is only likely to
work if you are running a Unix-like operating system. This does include Linux
(either on its own or inside Windows Subsystem for Linux) and OSX, but it does
not include stock Windows.
3.4.4.3 Examples of some common valgrind errors

3.4.4.3.1 Uninitialized values

#include <stdio.h>
#include <string.h>

int
main(int argc, char **argv)
{
    char a[2];

    a[0] = 'a';
    if(!strcmp(a, "a")) {
        puts("a is \"a\"");
    }

    return 0;
}
examples/valgrindErrors/uninitialized.c
Run without valgrind, we see no errors, because we got lucky and it turned out
our hand-built string was null-terminated anyway:
$ ./uninitialized
a is "a"
But valgrind is not fooled:
$ valgrind -q ./uninitialized
==4745== Conditional jump or move depends on uninitialised value(s)
==4745== at 0x4026663: strcmp (mc_replace_strmem.c:426)
==4745== by 0x8048435: main (uninitialized.c:10)
==4745==
==4745== Conditional jump or move depends on uninitialised value(s)
==4745== at 0x402666C: strcmp (mc_replace_strmem.c:426)
==4745== by 0x8048435: main (uninitialized.c:10)
==4745==
==4745== Conditional jump or move depends on uninitialised value(s)
==4745== at 0x8048438: main (uninitialized.c:10)
==4745==
Here we get a lot of errors, but they are all complaining about the same call to
strcmp. Since it’s unlikely that strcmp itself is buggy, we have to assume that
we passed some uninitialized location into it that it is looking at. The fix is to
add an assignment a[1] = '\0' so that no such location exists.
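With that one extra assignment the program looks like this (a sketch of the fix just
described):

#include <stdio.h>
#include <string.h>

int
main(int argc, char **argv)
{
    char a[2];

    a[0] = 'a';
    a[1] = '\0';   /* now every byte strcmp looks at is initialized */

    if(!strcmp(a, "a")) {
        puts("a is \"a\"");
    }

    return 0;
}

Running valgrind -q on this version should produce no complaints.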
3.4.4.3.2 Bytes definitely lost

Here is a program that allocates a block and never frees it:

#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
    char *s;

    s = malloc(26);

    return 0;
}
examples/valgrindErrors/missing_free.c
With no extra arguments, valgrind will not look for this error. But if we turn
on --leak-check=yes, it will complain:
$ valgrind -q --leak-check=yes ./missing_free
==4776== 26 bytes in 1 blocks are definitely lost in loss record 1 of 1
==4776== at 0x4024F20: malloc (vg_replace_malloc.c:236)
==4776== by 0x80483F8: main (missing_free.c:9)
==4776==
Here the stack trace in the output shows where the bad block was allocated: inside
malloc (specifically the paranoid replacement malloc supplied by valgrind),
which was in turn called by main in line 9 of missing_free.c. This lets us go
back and look at what block was allocated in that line and try to trace forward
to see why it wasn’t freed. Sometimes this is as simple as forgetting to include
a free statement anywhere, but in more complicated cases it may be because
I somehow lose the pointer to the block by overwriting the last variable that
points to it or by embedding it in some larger structure whose components I
forget to free individually.
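In a program this small the fix is just to free the block before returning; a sketch:

#include <stdlib.h>

int
main(int argc, char **argv)
{
    char *s;

    s = malloc(26);

    /* ... do whatever we allocated s for ... */

    free(s);   /* now the leak check has nothing to report */

    return 0;
}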
3.4.4.3.3 Invalid write or read operations

Valgrind also complains about reads and writes of memory that isn't allocated to you,
either off the end of a block you got from malloc or inside a block that has already
been freed. An example of the first:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

int
main(int argc, char **argv)
{
    char *s;

    s = malloc(1);
    s[0] = 'a';
    s[1] = '\0';

    puts(s);

    return 0;
}
examples/valgrindErrors/invalid_operations.c
==7141== Invalid write of size 1
==7141== at 0x804843B: main (invalid_operations.c:12)
==7141== Address 0x419a029 is 0 bytes after a block of size 1 alloc'd
==7141== at 0x4024F20: malloc (vg_replace_malloc.c:236)
==7141== by 0x8048428: main (invalid_operations.c:10)
==7141==
==7141== Invalid read of size 1
==7141== at 0x4026063: __GI_strlen (mc_replace_strmem.c:284)
==7141== by 0x409BCE4: puts (ioputs.c:37)
==7141== by 0x8048449: main (invalid_operations.c:14)
==7141== Address 0x419a029 is 0 bytes after a block of size 1 alloc'd
==7141== at 0x4024F20: malloc (vg_replace_malloc.c:236)
==7141== by 0x8048428: main (invalid_operations.c:10)
==7141==
An example of the second:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
int
main(int argc, char **argv)
{
char *s;
s = malloc(2);
free(s);
s[0] = 'a';
s[1] = '\0';
puts(s);
return 0;
}
examples/valgrindErrors/freed_block.c
==7144== Invalid write of size 1
==7144== at 0x804846D: main (freed_block.c:13)
==7144== Address 0x419a028 is 0 bytes inside a block of size 2 free'd
==7144== at 0x4024B3A: free (vg_replace_malloc.c:366)
==7144== by 0x8048468: main (freed_block.c:11)
==7144==
==7144== Invalid write of size 1
==7144== at 0x8048477: main (freed_block.c:14)
==7144== Address 0x419a029 is 1 bytes inside a block of size 2 free'd
==7144== at 0x4024B3A: free (vg_replace_malloc.c:366)
==7144== by 0x8048468: main (freed_block.c:11)
==7144==
==7144== Invalid read of size 1
==7144== at 0x4026058: __GI_strlen (mc_replace_strmem.c:284)
==7144== by 0x409BCE4: puts (ioputs.c:37)
==7144== by 0x8048485: main (freed_block.c:16)
[... more lines of errors deleted ...]
In both cases the problem is that we are operating on memory that is not
guaranteed to be allocated to us. For short programs like these, we might get
lucky and have the program work anyway. But we still want to avoid bugs like
this because we might not get lucky.
How do we know which case is which? If I write off the end of an existing block, I’ll
see something like Address 0x419a029 is 0 bytes after a block of size 1 alloc'd,
telling me that I am working on an address after a block that is still al-
located. When I try to write to a freed block, the message changes
to Address 0x419a029 is 1 bytes inside a block of size 2 free'd,
where the free'd part tells me I freed something I probably shouldn’t have.
Fixing the first class of bugs is usually just a matter of allocating a bigger block
(but don’t just do this without figuring out why you need a bigger block, or
you’ll just be introducing random mutations into your code that may cause
other problems elsewhere). Fixing the second class of bugs usually involves
figuring out why you freed this block prematurely. In some cases you may need
to re-order what you are doing so that you don’t free a block until you are
completely done with it.
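For the first program above, that means allocating room for the terminating null as
well; a sketch of the repaired version:

#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
    char *s;

    s = malloc(2);   /* one byte for 'a', one for the terminating '\0' */

    s[0] = 'a';
    s[1] = '\0';

    puts(s);

    free(s);

    return 0;
}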
3.4.5 Not recommended: debugging output

A tempting alternative to running your program under a debugger is to scatter
printf statements through it and watch what it prints. This is almost always
more work and less informative than using gdb, but if you do it anyway, a few
precautions help:

1. Send debugging output to stderr rather than stdout, so that it doesn't get
   mixed into your program's real output (which otherwise appears to arrive
   out of order). It also helps that output to stderr is usually unbuffered,
   avoiding the problem of lost output.
2. If you must output to stdout, put fflush(stdout) after any output
operation you suspect is getting lost in the buffer. The fflush function
forces any buffered output to be emitted immediately.
3. Keep all arguments passed to printf as simple as possible and
beware of faults in your debugging code itself. If you write
printf("a[key] == %d\n", a[key]) and key is some bizarre value,
you will never see the result of this printf because your program will
segfault while evaluating a[key]. Naturally, this is more likely to occur if
the argument is a[key]->size[LEFTOVERS].cleanupFunction(a[key])
than if it’s just a[key], and if it happens it will be harder to figure out
where in this complex chain of array indexing and pointer dereferencing
the disaster happened. Better is to wait for your program to break in
gdb, and use the print statement on increasingly large fragments of the
offending expression to see where the bogus array index or surprising null
pointer is hiding.
4. Wrap your debugging output in an #ifdef so you can turn it on and off
easily.
Bearing in mind that this is a bad idea, here is an example of how one might do
it as well as possible:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

/* deliberately dereference a bad address so that init() blows up */
void
init(void)
{
    int x;

    x = *((int *) 0xbad1dea); /* if we are lucky, maybe the optimizer will remove it? */
}

int
main(int argc, char **argv)
{
    init();
#ifdef DEBUGGING_OUTPUT
/*
* this type of debugging output is not recommended
* but if you do it anyway:
*
* 1. Use stderr, which flushes automatically.
* 2. Be wary of buffered data on stdout.
* 3. Wrap your debugging statement in an #ifdef,
* so it is not active by default.
*/
fputs("Returned from init() in main()\n", stderr);
#endif
return 0;
}
examples/debugging/usingPrintf.c
Note that we get much more useful information if we run this under gdb (which
will stop exactly on the bad line in init), but not seeing the result of the fputs
at least tells us something.
3.5 Performance tuning

3.5.1 Timing under Linux

The simplest way to see how long a program takes is to put time in front of its
command line; when the command finishes, the shell prints a summary along these
lines:

real 0m0.010s
user 0m0.006s
sys 0m0.004s
This measures “real time” (what it sounds like), “user time” (the amount of time
the program runs), and “system time” (the amount of time the operating system
spends supporting your program, e.g. by loading it from disk and doing I/O).
Real time need not be equal to the sum of user time and system time, since the
operating system may be simultaneously running other programs.
Particularly for fast programs, times can vary from one execution to the next,
e.g.
$ time wc /usr/share/dict/words
45378 45378 408865 /usr/share/dict/words
real 0m0.009s
user 0m0.008s
sys 0m0.001s
$ time wc /usr/share/dict/words
45378 45378 408865 /usr/share/dict/words
real 0m0.009s
user 0m0.007s
sys 0m0.002s
This arises because of measurement errors and variation in how long different
operations take. But usually the variation will not be much.
Note also that time is often a builtin operation of your shell, so the output
format may vary depending on what shell you use.
The problem with time is that it only tells you how much time your whole
program took, but not where it spent its time. This is similar to looking at a
program without a debugger: you can’t see what’s happening inside. If you want
to see where your program is spending its time, you need to use a profiler.
3.5.2 Profiling with valgrind

The specific profiler we will use in this section is callgrind, a tool built into
valgrind, which we’ve been using elsewhere to detect pointer disasters and
storage leaks. Full documentation for callgrind can be found at http://valgrind.
org/docs/manual/cl-manual.html, but we’ll give an example of typical use here.
Here is an example of a program that is unreasonably slow for what it is doing.
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <string.h>
return dest;
}
dest[j] = '\0';
return dest;
}
putchar('\n');
}
int
main(int argc, char **argv)
{
char *buffer;
char *half;
buffer = malloc(BUFFER_SIZE);
half = malloc(BUFFER_SIZE);
free(half);
free(buffer);
return 0;
}
examples/profiling/slow.c
This program defines several functions for processing null-terminated strings:
replicate, which concatenates many copies of some string together, and
copyEvenCharacters, which copies every other character in a string to a given
buffer. Unfortunately, both functions contain a hidden inefficiency arising from
their use of the standard C library string functions.
The runtime of the program is not terrible, but not as sprightly as we might
expect given that we are working on less than half a megabyte of text:
$ time ./slow
abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd
acacacacacacacacacac
abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd[399960 more]
acacacacacacacacacacacacacacacacacacacac[199960 more]
real 0m3.171s
user 0m3.164s
sys 0m0.001s
So we’d like to make it faster.
In this particular case, the programmer was kind enough to identify the problems
in the original code in comments, but we can’t always count on that. Fortunately,
we can use the callgrind tool built into valgrind to find out where our program
is spending most of its time.
To run callgrind, call valgrind with the --tool=callgrind option, like this:
$ time valgrind --tool=callgrind ./slow
==5714== Callgrind, a call-graph generating cache profiler
==5714== Copyright (C) 2002-2017, and GNU GPL'd, by Josef Weidendorfer et al.
==5714== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==5714== Command: ./slow
==5714==
==5714== For interactive control, run 'callgrind_control -h'.
abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd
acacacacacacacacacac
abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd[399960 more]
acacacacacacacacacacacacacacacacacacacac[199960 more]
==5714==
==5714== Events : Ir
==5714== Collected : 15339385208
==5714==
==5714== I refs: 15,339,385,208
real 1m31.965s
user 1m31.515s
sys 0m0.037s
I’ve included time at the start of the command line to make it clear just how
much of a slowdown you can expect from using valgrind for this purpose. Note
that valgrind only prints a bit of summary data while executing. To get a full
report, we use a separate program callgrind_annotate:
$ callgrind_annotate --auto=yes --inclusive=yes > slow.callgrind
Here I sent the output to a file slow.callgrind so I could look at it in more
detail in my favorite text editor, since the actual report is pretty huge. The
--auto=yes argument tells callgrind_annotate to show how many instructions
were executed as part of each line of source code, and the --inclusive=yes
argument tells it to charge instructions executed in some function both to that
function and to all of the functions responsible for calling it. This is usually
what you want when trying to figure out where things are going wrong.
The first thing to look at in slow.callgrind is the table showing which functions
are doing most of the work:
--------------------------------------------------------------------------------
Ir file:function
--------------------------------------------------------------------------------
15,339,385,208 ???:0x0000000000000dd0 [/usr/lib64/ld-2.25.so]
15,339,274,304 ???:_start [/home/accts/aspnes/g/classes/223/notes/examples/profiling/slow]
15,339,274,293 /usr/src/debug/glibc-2.25-123-gedcf13e25c/csu/../csu/libc-start.c:(below mai
15,339,273,103 slow.c:main [/home/accts/aspnes/g/classes/223/notes/examples/profiling/slow]
15,339,273,103 /home/accts/aspnes/g/classes/223/notes/examples/profiling/slow.c:main
11,264,058,263 slow.c:copyEvenCharacters [/home/accts/aspnes/g/classes/223/notes/examples/p
11,260,141,740 /usr/src/debug/glibc-2.25-123-gedcf13e25c/string/../sysdeps/x86_64/strlen.S:
4,075,049,055 slow.c:replicate [/home/accts/aspnes/g/classes/223/notes/examples/profiling/
4,074,048,083 /usr/src/debug/glibc-2.25-123-gedcf13e25c/string/../sysdeps/x86_64/multiarch
108,795 /usr/src/debug/glibc-2.25-123-gedcf13e25c/elf/rtld.c:_dl_start [/usr/lib64/l
Since each function is charged for work done by its children, the top of the
list includes various setup functions included automatically by the C com-
piler, followed by main. Inside main, we see that the majority of the work is
done in copyEvenCharacters, with a substantial chunk in replicate. The
suspicious similarity in numbers suggests that most of these instructions in
copyEvenCharacters are accounted for by calls to strlen and in replicate
by calls to __strcat_sse3, which happens to be an assembly-language imple-
mentation of strcat (hence the .S in the source file name) that uses the special
SSE instructions in the x86 instruction set to speed up copying.
We can confirm this suspicion by looking at later parts of the file, which annotate
the source code with instruction counts.
The annotated version of slow.c includes this annotated version of replicate,
showing roughly 4 billion instructions executed in __strcat_sse3:
. char *
. replicate(char *dest, const char *src, int n)
12 {
. /* truncate dest */
4 dest[0] = '\0';
.
. /* BAD: each call to strcat requires walking across dest */
400,050 for(int i = 0; i < n; i++) {
600,064 strcat(dest, src);
836 => /usr/src/debug/glibc-2.25-123-gedcf13e25c/elf/../sysdeps/x86_64/dl-trampoline.
4,074,048,083 => /usr/src/debug/glibc-2.25-123-gedcf13e25c/string/../sysdeps/x86_64/multiar
. }
.
2 return dest;
4 }
Similarly, the annotated version of copyEvenCharacters shows that 11 billion
instructions were executed in strlen:
. char *
. copyEvenCharacters(char *dest, const char *src)
12 {
. int i;
. int j;
.
. /* BAD: Calls strlen on every pass through the loop */
2,000,226 for(i = 0, j = 0; i < strlen(src); i += 2, j++) {
11,260,056,980 => /usr/src/debug/glibc-2.25-123-gedcf13e25c/string/../sysdeps/x86_64/strlen
825 => /usr/src/debug/glibc-2.25-123-gedcf13e25c/elf/../sysdeps/x86_64/dl-trampoline.
2,000,200 dest[j] = src[i];
. }
.
10 dest[j] = '\0';
.
2 return dest;
8 }
This gives a very strong hint for fixing the program: cut down on the cost of
calling strlen and strcat.
Fixing copyEvenCharacters is trivial. Because the length of src doesn’t change,
we can call strlen once and save the value in a variable:
char *
copyEvenCharacters(char *dest, const char *src)
{
    int i;
    int j;
    size_t len;

    /* GOOD: call strlen only once, outside the loop */
    len = strlen(src);

    for(i = 0, j = 0; i < len; i += 2, j++) {
        dest[j] = src[i];
    }

    dest[j] = '\0';

    return dest;
}
Fixing replicate is trickier. The trouble with using strcat is that every time
we call strcat(dest, src), strcat has to scan down the entire dest string
to find the end, which (a) gets more expensive as dest gets longer, and (b)
involves passing over the same non-null initial characters over and over again
each time we want to add a few more characters. The effect of this is that we
turn what should be an O(n)-time process of generating a string of n characters
into something that looks more like O(n^2). We can fix this by using pointer
arithmetic to keep track of the end of dest ourselves, which also allows us to
replace strcat with memcpy, which is likely to be faster since it doesn’t have to
check for nulls. Here’s the improved version:
char *
replicate(char *dest, const char *src, int n)
{
    size_t len = strlen(src);
    char *tail = dest;   /* always points at the end of the string so far */
    int i;

    /* GOOD: memcpy to the end we are tracking, instead of calling strcat */
    for(i = 0; i < n; i++) {
        memcpy(tail, src, len);
        tail += len;
    }
    *tail = '\0';

    return dest;
}
The result of applying both of these fixes can be found in fast.c. This runs much
faster than slow:
abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd
acacacacacacacacacac
abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd[399960 more]
acacacacacacacacacacacacacacacacacacacac[199960 more]
real 0m0.003s
user 0m0.001s
sys 0m0.001s
3.5.3 Profiling with gprof

If you can’t use valgrind for profiling, don’t like the output you get from it,
or are annoyed by the huge slowdown when profiling your program, you may
be able to get similar results from an older program gprof, which is closely
tied to the gcc compiler. Unlike valgrind, which simulates an x86 CPU one
machine-code instruction at a time, gprof works by having gcc add extra code
to your program to track function calls and do sampling at runtime to see where
your program is spending its time. The cost of this approach is that you get a
bit less accuracy. I have also found gprof to be tricky to get working right on
some operating systems.
Here’s a short but slow program for calculating the number of primes less than
some limit passed as argv[1]:
#include <stdio.h>
#include <stdlib.h>

/* return 1 if n is prime, 0 otherwise */
int
isPrime(int n)
{
    int factor;

    if(n < 2) { return 0; }
    /* BAD: tries every possible smaller factor */
    for(factor = 2; factor < n; factor++) {
        if(n % factor == 0) { return 0; }
    }
    return 1;
}

/* return the number of primes less than n */
int
countPrimes(int n)
{
    int i;
    int count;

    count = 0;
    for(i = 0; i < n; i++) {
        if(isPrime(i)) { count++; }
    }
    return count;
}

int
main(int argc, char **argv)
{
    if(argc != 2) {
        fprintf(stderr, "Usage: %s n\n", argv[0]);
        return 1;
    }
    printf("%d\n", countPrimes(atoi(argv[1])));
    return 0;
}
examples/profiling/countPrimes.c
And now we’ll time countPrimes 100000:
$ c99 -g3 -o countPrimes countPrimes.c
$ time ./countPrimes 100000
9592
real 0m4.711s
user 0m4.608s
sys 0m0.004s
This shows that the program took just under five seconds of real time, of which
most was spent in user mode and a very small fraction was spent in kernel (sys)
mode. The user-mode part corresponds to the code we wrote and any library
routines we call that don’t require special privileges from the operating system.
The kernel-mode part will mostly be I/O (not much in this case). Real time is
generally less useful than CPU time, because it depends on how loaded the CPU
is. Also, none of these times are especially precise, because the program only
gets charged for time on a context switch (when it switches between user and
kernel mode or some other program takes over the CPU for a bit) or when the
kernel decides to see what it is up to (typically every 10 milliseconds).
The overall cost is not too bad, but the reason I picked 100000 and not some
bigger number was that it didn’t terminate fast enough for larger inputs. We’d
like to see why it is taking so long, to have some idea what to try to speed up.
So we’ll compile it with the -pg option to gcc, which inserts profiling code that
counts how many times each function is called and how long (on average) each
call takes.
Because the profiler is not very smart about shared libraries, we also include
the --static option to force the resulting program to be statically linked. This
means that all the code that is used by the program is baked into the executable
instead of being linked in at run-time. (Normally we don’t do this because
it makes for big executables and big running programs, since statically-linked
libraries can’t be shared between more than one running program.)
$ c99 -pg --static -g3 -o countPrimes countPrimes.c
$ time ./countPrimes 100000
9592
real 0m4.723s
user 0m4.668s
sys 0m0.000s
Hooray! We’ve made the program slightly slower. But we also just produced
a file gmon.out that we can read with gprof. Note that we have to pass the
name of the program so that gprof can figure out which executable generated
gmon.out.
$ gprof countPrimes
Flat profile:
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
100.00      4.66     4.66   100000     0.00     0.00  isPrime
  0.00      4.66     0.00        1     0.00     4.66  countPrimes
  0.00      4.66     0.00        1     0.00     4.66  main
}
examples/profiling/countPrimesSkipEvenFactors.c
The trick is to check first if n is divisible by 2, and only test odd potential factors
thereafter. This requires some extra work to handle 2, but maybe the extra code
complexity will be worth it.
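Based on that description, the new isPrime presumably looks something like the
following sketch (the details of the actual countPrimesSkipEvenFactors.c may differ):

/* return 1 if n is prime, 0 otherwise */
int
isPrime(int n)
{
    int factor;

    if(n < 2) { return 0; }
    if(n % 2 == 0) {
        /* 2 is the only even prime */
        return n == 2;
    }

    /* n is odd, so only odd factors can divide it */
    for(factor = 3; factor < n; factor += 2) {
        if(n % factor == 0) { return 0; }
    }

    return 1;
}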
Let’s see how the timing goes:
$ c99 -pg --static -g3 -o countPrimes ./countPrimesSkipEvenFactors.c
$ time ./countPrimes 100000
9592
real 0m2.608s
user 0m2.400s
sys 0m0.004s
$ gprof countPrimes
Flat profile:
[...]
Twice as fast! And the answer is still the same, too—this is important.
Can we test even fewer factors? Suppose n has a non-trivial factor x. Then n
equals x*y for some y which is also nontrivial. One of x or y will be no bigger
than the square root of n. So perhaps we can stop when we reach the square
root of n.
Let’s try it:
#include <math.h>

/* return 1 if n is prime, 0 otherwise */
int
isPrime(int n)
{
    int factor;

    if(n < 2) { return 0; }
    if(n % 2 == 0) {
        /* special case for the only even prime */
        return n == 2;
    }
    /* else */
    for(factor = 3; factor < sqrt(n)+1; factor += 2) {
        if(n % factor == 0) return 0;
    }
    /* else */
    return 1;
}
examples/profiling/countPrimesSqrt.c
I added +1 to the return value of sqrt both to allow for factor to be equal
to the square root of n, and because the output of sqrt is not exact, and it
would be embarrassing if I announced that 25 was prime because I stopped at
4.9999999997.
Using the math library not only requires including <math.h> but also requires
compiling with the -lm flag after all .c or .o files, to link in the library routines:
$ c99 -pg --static -g3 -o countPrimes ./countPrimesSqrt.c -lm
$ time ./countPrimes 1000000
78498
real 0m1.008s
user 0m0.976s
sys 0m0.000s
$ gprof countPrimes
Flat profile:
[...]
Whoosh!
Can we optimize further? Let’s see what happens on a bigger input:
$ time ./countPrimes 1000000
78498
real 0m0.987s
user 0m0.960s
sys 0m0.000s
$ gprof countPrimes
Flat profile:
[...]
This is still very good, although we’re spending a lot of time in sqrt (more
specifically, its internal helper routine __sqrt_finite). Can we do better?
Maybe moving the sqrt out of the loop in isPrime will make a difference:
/* return 1 if n is prime, 0 otherwise */
int
isPrime(int n)
{
    int factor;
    int sqrtValue;

    if(n < 2) { return 0; }
    if(n % 2 == 0) { return n == 2; }
    /* compute the bound once instead of on every trip through the loop */
    sqrtValue = sqrt(n) + 1;
    for(factor = 3; factor < sqrtValue; factor += 2) {
        if(n % factor == 0) return 0;
    }
    return 1;
}

real 0m0.413s
user 0m0.392s
sys 0m0.000s
$ gprof countPrimes
Flat profile:
[...]
This worked! We are now spending so little time in sqrt that the profiler
doesn’t even notice it.
What if we get rid of the call to sqrt and test if factor * factor <= n instead?
This way we could dump the math library:
/* return 1 if n is prime, 0 otherwise */
int
isPrime(int n)
{
    int factor;

    if(n < 2) { return 0; }
    if(n % 2 == 0) { return n == 2; }
    for(factor = 3; factor * factor <= n; factor += 2) {
        if(n % factor == 0) return 0;
    }
    return 1;
}

real 0m0.450s
user 0m0.428s
sys 0m0.000s
This is slower, but not much slower. We might need to decide how much we care
about avoiding floating-point computation in our program.
At this point we could decide that countPrimes is fast enough, or maybe we
could look for further improvements, say, by testing out many small primes at
the beginning instead of just 2, calling isPrime only on odd values of i, or
reading a computational number theory textbook to find out how we ought
to be doing this. A reasonable strategy for code for your own use is often to
start running one version and make improvements on a separate copy while it’s
running. If the first version terminates before you are done writing new code,
it’s probably fast enough.
3.5.3.1 Effect of optimization during compilation

So far we have been compiling with no optimization at all; gcc’s -O1, -O2, and -O3
options tell it to spend progressively more effort generating fast code. In each case
below, the reported time is the sum of user and system time in seconds.3
For the smarter routines, more optimization doesn’t necessarily help, although
some of this may be experimental error since I was too lazy to get a lot of
samples by running each program more than once, and the times for the faster
programs are so small that granularity is going to be an issue.
Here’s the same table using countPrimes 10000000 on the three fastest pro-
grams:
3 Times are from running inside VirtualBox on a Windows 8.1 machine with a 3.30-GHz
AMD FX-6100 CPU, so don’t be surprised if you get different numbers on a real machine.
Version No optimization With -O1 With -O2 With -O3
countPrimesSquaring.c 9.748 9.248 9.236 9.160
Again there are the usual caveats that I am a lazy person and should probably be
doing more to deal with sampling and granularity issues, but if you believe these
numbers, we actually win by going to countPrimesSquaring once the optimizer
is turned on. I suspect that it is benefiting from strength reduction, which would
generate the product factor*factor in isPrime incrementally using addition
rather than multiplying from scratch each time.
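To make the strength-reduction guess concrete, here is roughly the transformation the
optimizer is suspected of applying, written out by hand (an illustration, not what gcc
literally emits):

/* test odd n >= 3 for primality, tracking factor*factor incrementally
 * instead of multiplying from scratch on every iteration */
int
isPrimeOddStrengthReduced(int n)
{
    int factor;
    int factorSquared = 9;                 /* 3*3, matching factor = 3 */

    for(factor = 3; factorSquared <= n; factor += 2) {
        if(n % factor == 0) { return 0; }
        factorSquared += 4 * factor + 4;   /* (factor+2)^2 = factor^2 + 4*factor + 4 */
    }

    return 1;
}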
It’s also worth noting that the optimizer works better if we leave a lot of easy
optimization lying around. For countPrimesSqrt.c, my guess is that most of
the initial gains are from avoiding function call overhead on sqrt by compiling
it in-line. But even the optimizer is not smart enough to recognize that we are
computing the same value over and over again, so we still win by pulling sqrt
out of the loop in countPrimesSqrtOutsideLoop.c.
If I wanted to see if my guesses about the optimizer were correct, I could
use gcc -S and look at the assembler code. But see earlier comments about
laziness.
3.6 Version control

When you are programming, you will make mistakes. If you program long enough,
these will eventually include true acts of boneheadedness like accidentally deleting
all of your source files. You are also likely to spend some of your time trying
out things that don’t work, at the end of which you’d like to go back to the last
version of your program that did work. All these problems can be solved by
using a version control system.
There are six respectable version control systems installed on the Zoo: rcs, cvs,
svn, bzr, hg, and git. If you are familiar with any of them, you should use
that. If you have to pick one from scratch, I recommend using git. A brief
summary of git is given below. For more details, see the tutorials available at
http://git-scm.com.
Typically you run git inside a directory that holds some project you are working
on (say, hw1). Before you can do anything with git, you will need to create the
repository, which is a hidden directory .git that records changes to your files:
$ mkdir git-demo
$ cd git-demo
$ git init
Initialized empty Git repository in /home/classes/cs223/class/aspnes.james.ja54/git-demo/.git/
Now let’s create a file and add it to the repository:
$ echo 'int main(int argc, char **argv) { return 0; }' > tiny.c
$ git add tiny.c
The git status command will tell us that Git knows about tiny.c, but hasn’t
committed the changes to the repository yet:
$ git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
# (use "git rm --cached <file>..." to unstage)
#
# new file: tiny.c
#
The git commit command will commit the actual changes, along with a message
saying what you did. For short messages, the easiest way to do this is to include
the message on the command line:
$ git commit -a -m"add very short C program"
[master (root-commit) 5393616] add very short C program
Committer: James Aspnes <[email protected]>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly:
If the identity used for this commit is wrong, you can fix it with:
$ git config --global user.name "James Aspnes"
$ git config --global user.email "[email protected]"
$ git commit --amend --author="James Aspnes <[email protected]>" -m"add a very short C prog
[master a44e1e1] add a very short C program
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 tiny.c
Note that I repeated the -m business to git commit --amend; if I hadn’t, it would
have run the default editor (vim) to let me edit my commit message. If I don’t
like vim, I can change the default using git config --global core.editor,
e.g.:
$ git config --global core.editor "emacs -nw"
I can see what commits I’ve done so far using git log:
$ git log
commit a44e1e195de4ce785cd95cae3b93c817d598a9ee
Author: James Aspnes <[email protected]>
Date: Thu Dec 29 20:21:21 2011 -0500
Suppose I edit tiny.c using my favorite editor to turn it into the classic hello-
world program:
#include <stdio.h>
int
main(int argc, char **argv)
{
puts("hello, world");
return 0;
}
I can see what files have changed using git status:
$ git status
# On branch master
# Changed but not updated:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: tiny.c
#
no changes added to commit (use "git add" and/or "git commit -a")
Notice how Git reminds me to use git commit -a to include these changes
in my next commit. I can also do git add tiny.c if I just want to include the
changes to tiny.c (maybe I made changes to a different file that I want to
commit separately), but usually that’s too much work.
If I want to know the details of the changes since my last commit, I can do
git diff:
$ git diff
diff --git a/tiny.c b/tiny.c
index 0314ff1..f8d9dcd 100644
--- a/tiny.c
+++ b/tiny.c
@@ -1 +1,8 @@
-int main(int argc, char **argv) { return 0; }
+#include <stdio.h>
+
+int
+main(int argc, char **argv)
+{
+ puts("hello, world");
+ return 0;
+}
Since I like these changes, I do a commit:
$ git commit -a -m"expand previous program to hello world"
[master 13a73be] expand previous program to hello world
1 files changed, 8 insertions(+), 1 deletions(-)
Now there are two commits in my log:
$ git log | tee /dev/null
commit 13a73bedd3a48c173898d1afec05bd6fa0d7079a
Author: James Aspnes <[email protected]>
Date: Thu Dec 29 20:34:06 2011 -0500
commit a44e1e195de4ce785cd95cae3b93c817d598a9ee
Author: James Aspnes <[email protected]>
Date: Thu Dec 29 20:21:21 2011 -0500
3.6.3 Renaming files
You can rename a file with git mv. This is just like regular mv, except that it
tells Git what you are doing.
$ git mv tiny.c hello.c
$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# renamed: tiny.c -> hello.c
#
These changes don’t get written to the repository unless you do another
git commit:
$ git commit -a -m"give better name to hello program"
[master 6d2116c] give better name to hello program
1 files changed, 0 insertions(+), 0 deletions(-)
rename tiny.c => hello.c (100%)
#
# deleted: goodbye.c
#
no changes added to commit (use "git add" and/or "git commit -a")
$ git commit -a -m"no, goodbye.c was a bad idea"
[master defa0e0] no, goodbye.c was a bad idea
1 files changed, 0 insertions(+), 8 deletions(-)
delete mode 100644 goodbye.c
If you make a mistake, you can back out using the repository. Here I will delete
my hello.c file and then get it back using git checkout -- hello.c:
$ rm hello.c
$ ls
$ git checkout -- hello.c
$ ls
hello.c
I can also get back old versions of files by putting the commit id before the --:
$ git checkout a44e1 -- tiny.c
$ ls
hello.c tiny.c
The commit id can be any unique prefix of the ridiculously long hex name shown
by git log.
Having recovered tiny.c, I will keep it around by adding it to a new commit:
$ git commit -a -m"keep tiny.c around"
[master 23d6219] keep tiny.c around
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 tiny.c
Suppose I commit a change that I didn’t want to make. For example, let’s
suppose I decide to add some punctuation to the greeting in hello.c but botch
my edit:
$ vim hello.c
$ git commit -a -m"add exclamation point"
[master f40d8d3] add exclamation point
1 files changed, 1 insertions(+), 1 deletions(-)
Only now does it occur to me to test my program:
$ c99 -o hello hello.c
$ ./hello
hello, wolrd!
Disaster!
I can use git diff to see what went wrong. The command below compares the
current working directory to HEAD^, the commit before the most recent commit:4
$ git diff HEAD^ | tee /dev/null
diff --git a/hello.c b/hello.c
index f8d9dcd..dc227a8 100644
--- a/hello.c
+++ b/hello.c
@@ -3,6 +3,6 @@
int
main(int argc, char **argv)
{
- puts("hello, world");
+ puts("hello, wolrd!");
return 0;
}
And I see my mistake leaping out at me on the new line I added (which git diff
puts a + in front of). But now what do I do? I already committed the change,
which means that I can’t get it out of the history.5
Instead, I use git revert on HEAD, the most recent commit:
$ git revert HEAD
[master fca3166] Revert "add exclamation point"
1 files changed, 1 insertions(+), 1 deletions(-)
(Not shown here is where it popped up a vim session to let me edit the commit
message; I just hit :x<ENTER> to get out of it without changing the default.)
Now everything is back to the way it was before the bad commit:
$ ./hello
hello, world
Running git log will now show me the entire history of my project, newest
commits first:
4 The pattern here is that HEAD is the most recent commit, HEAD^ the one before it, HEAD^^
the one before that, and so on. This is sometimes nicer than having to pull hex gibberish out
of the output of git log.
5 Technically I can use git reset to get rid of the commit, but git reset rewrites history and
can be dangerous, so git revert is usually the safer choice.
fca3166a697c6d72fb9e8aec913bb8e36fb5fe4e Revert "add exclamation point"
f40d8d386890103abacd0bf4142ecad62eed5aeb add exclamation point
23d6219c9380ba03d9be0672f0a7b25d18417731 keep tiny.c around
defa0e0430293ca910f077d5dd19fccc47ab0521 no, goodbye.c was a bad idea
454b24c307121b5a597375a99a37a825b0dc7e81 we need a second program to say goodbye
6d2116c4c72a6ff92b8b276eb88ddb556d1b8fdd give better name to hello program
13a73bedd3a48c173898d1afec05bd6fa0d7079a expand previous program to hello world
a44e1e195de4ce785cd95cae3b93c817d598a9ee add a very short C program
If I want to look at an old version (say, after I created goodbye.c), I can go
back to it using git checkout:
$ git checkout 454b2
Note: checking out '454b2'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b new_branch_name

To get back to the most recent version of my files, I can use git checkout master.
All Git commands take a --help argument that brings up their manual page.
There is also extensive documentation at http://git-scm.com.
3.7 Submitting assignments
/c/cs223/bin/check 2
lists the files that you have submitted for Homework #2;
deletes the named files that you had submitted previously for Homework #3
(i.e., withdraws them from submission, which is useful if you accidentally
submit the wrong file);
runs "make" on the files that you submitted previously for Homework #4;
protects the named files that you submitted previously for Homework #5 (so
they cannot be deleted accidentally); and
unprotects the named files that you submitted previously for Homework #6
(so they can be deleted); and
/c/cs223/bin/retrieve 7 Csquash.c
retrieves copies of the named files that you submitted previously for Homework #7
/c/cs223/bin/testit 8 BigTest
Unfortunately, C99 and C11 both exemplify the uselessness of standards com-
mittees in general and the ISO in particular. Because the ISO has no power to
enforce standards on compiler writers, and because they will charge you CHF
198 just to look at the C11 standard, many compiler writers have ignored much
of C99 and C11. In particular, Microsoft pretty much gave up on adding any
features after ANSI C, and support for C99 and C11 is spotty in gcc and clang,
the two dominant open source C compilers. So if you want to write portable C
code, it is safest to limit yourself to features in ANSI C.
For this class, we will permit you to use any feature of C99 that gcc supports,
which also includes all features of ANSI C. You can compile with C99 support by
using gcc --std=c99 or by calling gcc as c99, as in c99 -o hello hello.c.
Compiling with straight gcc will give you GNU’s own peculiar dialect of C, which
is basically ANSI C with some extras. For maximum portability when using
gcc, it is safest to use gcc -ansi -pedantic, which expects straight ANSI C
and will complain about any extensions.
A C program consists of one or more files (which act a little bit like modules
in more structured programming languages), each of which typically contains
definitions of functions, each of which consists of statements, which are
either compound statements like if, while, etc. or expressions that typ-
ically perform some sort of arithmetic or call other functions. Files may also
include declarations of global variables (not recommended), and functions will
often contain declarations of local variables that can only be used inside that
function.
Here is a typical small C program that sums a range of integers. Since this is
our first real program, it’s a little heavy on the comments (shown between /*
and */).
#include <stdio.h>   /* This is needed to get the declarations of fprintf and printf */
#include <stdlib.h>  /* This is needed to get the declaration of atoi */

/* Return the sum of all integers i such that start <= i < end. */
int
sumRange(int start, int end)
{
    int i;    /* loop variable */
    int sum;  /* running total */

    sum = 0;

    /* The three parts of the header for this loop mean:
     * 1. Set i to start initially.
     * 2. Keep doing the loop as long as i is less than end.
     * 3. After each iteration, add 1 to i.
     */
    for(i = start; i < end; i++) {
        sum += i; /* This adds i to sum */
    }

    return sum;
}

int
main(int argc, char **argv)
{
    int start; /* initial value in range */
    int end;   /* one past the last value in the range */

    if(argc != 3) {
        /* main was called with the wrong number of arguments */
        fprintf(stderr, "Usage: %s start end\n", argv[0]);
        return 1;
    }

    /* atoi converts the command-line strings to ints */
    start = atoi(argv[1]);
    end = atoi(argv[2]);

    printf("sumRange(%d, %d) = %d\n", start, end, sumRange(start, end));

    return 0;
}
examples/sumRange.c
This is what the program does if we compile and run it:
$ c99 -g -Wall -pedantic -o sumRange sumRange.c
$ ./sumRange 1 100
sumRange(1, 100) = 4950
The sumRange.c program contains two functions, sumRange and main. The
sumRange function does the actual work, while main is the main routine of the
program that gets called with the command-line arguments when the program
is run. Every C program must have a routine named main with these particular
arguments.
In addition, main may call three library functions, fprintf (which in this case
is used to generate error messages), printf (which generates ordinary output),
and atoi (which is used to translate the command-line arguments into numerical
values). These functions must all be declared before they can be used. In the case
of sumRange, putting the definition of sumRange before the definition of main
is enough. For the library routines, the include files stdio.h and stdlib.h
contain declarations of these functions with enough information about their
return types and arguments that the compiler knows how to generate
machine code to call them. These files are included in sumRange.c by the C
preprocessor, which pastes in the contents of any file specified by the #include
command, strips out any comments (delimited by /* and */, or by // and the
end of the line if you are using C99), and does some other tricks that allow you
to muck with the source code before the actual compiler sees it (see Macros).
You can see what the output of the preprocessor looks like by calling the C
compiler with the -E option, as in c99 -E sumRange.c.
The body of each function consists of some variable declarations followed
by a sequence of statements that tell the computer what to do. Unlike some
languages, every variable used in a C program must be declared. A declaration
specifies the type of a variable, which tells the compiler how much space to
allocate for it and how to interpret some operations on its value. Statements
may be compound statements like the if statement in main that executes
its body only if the program is called with the wrong number of command-line
arguments or the for loop in sumRange that executes its body as long as the
test in its header remains true; or they may be simple statements that consist
of a single expression followed by a semicolon.
An expression is usually either a bare function call whose value is discarded (for
example, the calls to fprintf and printf in main), or an arithmetic expression
(which may include function calls, like the calls to atoi in main) whose value
is assigned to some variable using the assignment operator = or sometimes
variants like += (which is shorthand for adding a value to an existing variable: x
+= y is equivalent to x = x+y).
When you compile a C program, after running the preprocessor, the compiler
generates assembly language code that is a human-readable description of the
ultimate machine code for your target CPU. Assembly language strips out all the
human-friendly features of your program and reduces it to simple instructions
usually involving moving things from one place to the other or performing a
single arithmetic operation. For example, the C line
x = y + 1; /* add 1 to y, store result in x */
gets translated into x86 assembly as
movl -24(%rbp), %edi
addl $1, %edi
movl %edi, -28(%rbp)
These three operations copy the value of y into the CPU register %edi, add 1
to the %edi register, and then copy the value back into x. This corresponds
directly to what you would have to do to evaluate x = y + 1 if you could only
do one very basic operation at a time and couldn’t do arithmetic operations on
memory locations: fetch y, add 1, store x. Note that the CPU doesn’t know
about the names y and x; instead, it computes their addresses by adding -24
and -28 respectively to the base pointer register %rbp. This is why it can be
hard to debug compiled code unless you tell the compiler to keep around extra
information.
For an arbitrary C program, if you are using gcc, you can see what your code
looks like in assembly language using the -S option. For example, c99 -S
sumRange.c will create a file sumRange.s that looks like this:
.file "sumRange.c"
.text
.globl sumRange
.type sumRange, @function
sumRange:
.LFB0:
.cfi_startproc
pushl %ebp
.cfi_def_cfa_offset 8
.cfi_offset 5, -8
movl %esp, %ebp
.cfi_def_cfa_register 5
subl $16, %esp
movl $0, -4(%ebp)
movl 8(%ebp), %eax
movl %eax, -8(%ebp)
jmp .L2
.L3:
movl -8(%ebp), %eax
addl %eax, -4(%ebp)
addl $1, -8(%ebp)
.L2:
movl -8(%ebp), %eax
cmpl 12(%ebp), %eax
jl .L3
movl -4(%ebp), %eax
leave
.cfi_restore 5
.cfi_def_cfa 4, 4
ret
.cfi_endproc
.LFE0:
.size sumRange, .-sumRange
.section .rodata
.LC0:
.string "Usage: %s start end\n"
.LC1:
.string "sumRange(%d, %d) = %d\n"
.text
.globl main
.type main, @function
main:
.LFB1:
.cfi_startproc
pushl %ebp
.cfi_def_cfa_offset 8
.cfi_offset 5, -8
movl %esp, %ebp
.cfi_def_cfa_register 5
andl $-16, %esp
subl $32, %esp
cmpl $3, 8(%ebp)
je .L6
movl 12(%ebp), %eax
movl (%eax), %edx
movl stderr, %eax
movl %edx, 8(%esp)
movl $.LC0, 4(%esp)
movl %eax, (%esp)
call fprintf
movl $1, %eax
jmp .L7
.L6:
movl 12(%ebp), %eax
addl $4, %eax
movl (%eax), %eax
movl %eax, (%esp)
call atoi
movl %eax, 24(%esp)
movl 12(%ebp), %eax
addl $8, %eax
movl (%eax), %eax
movl %eax, (%esp)
call atoi
movl %eax, 28(%esp)
movl 28(%esp), %eax
movl %eax, 4(%esp)
movl 24(%esp), %eax
movl %eax, (%esp)
call sumRange
movl %eax, 12(%esp)
movl 28(%esp), %eax
movl %eax, 8(%esp)
movl 24(%esp), %eax
movl %eax, 4(%esp)
movl $.LC1, (%esp)
call printf
movl $0, %eax
.L7:
leave
.cfi_restore 5
.cfi_def_cfa 4, 4
ret
.cfi_endproc
.LFE1:
.size main, .-main
.ident "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2"
.section .note.GNU-stack,"",@progbits
examples/sumRange.s
You usually don’t need to look at assembly language, but it can sometimes be
enlightening to see what the compiler is doing with your code. One thing that I
find interesting about this particular code (which is for the x86 architecture) is
that most of the instructions are movl, the x86 instruction for copying a 32-bit
quantity from one location to another: most of what this program is doing is
copying data into the places expected by the library functions it is calling. Also
noteworthy is that the beautiful compound statements like if and for that so
eloquently express the intent of the programmer get turned into a pile of jump
(jmp) and conditional jump (jl, je) instructions, the machine code versions of
the often dangerous and confusing goto statement. This is because CPUs are
dumb: they don’t know how to carry out an if branch or a loop, and all they
can do is be told to replace the value of their program counter register
with some new value instead of just incrementing it as they usually do.
Assembly language is not the last stage in this process. The assembler (as) is a
program that translates the assembly language in sumRange.s into machine code
(which will be stored in sumRange.o if we aren’t compiling a single program all
at once). Machine code is not human-readable, and is close to the raw stream of
bytes that gets stored in the computer’s memory to represent a running program.
The missing parts are that the addresses of each function and global variable are
generally left unspecified, so that they can be moved around to make room for
other functions and variables coming from other files and from system libraries.
The job of stitching all of these pieces together, putting everything in the right
place, filling in any placeholder addresses, and generating the executable file
sumRange that we can actually run is given to the linker ld.
The whole process looks like this:
sumRange.c (source code)
|
v
[preprocessor (cpp)]
|
v
preprocessed version of sumRange.c
|
v
[compiler (gcc)]
|
v
sumRange.s (assembly code)
|
v
[assembler (as)]
|
v
sumRange.o (machine code)
|
v
[linker (ld)] <- system library (glibc.a)
|
v
sumRange (executable)
The good news is, you don’t actually have to run all of these steps yourself;
instead, gcc (which you may be calling as c99) will take care of everything for
you, particularly for simple programs like sumRange.c that fit in a single file.
4.2 Numeric data types
The PDP-7 on which UNIX was first developed used 18-bit words, which conveniently
translated into six octal digits back in the pre-hexadecimal era.
an image itself might be a long sequence of such 3-byte RGB values. At the
bottom, every operation applied to these more complex data types translates
into a whole lot of copies and arithmetic operations on individual bytes and
words.
From the CPU’s point of view, even much of this manipulation consists of
operating on integers that happen to represent addresses instead of data. So
when a C program writes a zero to the 19th entry in a sequence of 4-byte integers,
somewhere in the implementation of this operation the CPU will be adding 4 · 19
to a base address for the sequence to compute where to write this value. Unlike
many higher-level languages, C allows the program direct access to address
computations via pointer types, which are tricky enough to get their own
chapter. Indeed, most of the structured types that C provides for representing
more complicated data can best be understood as a thin layer of abstraction on
top of pointers. We will see examples of these in later chapters as well.
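As a purely illustrative sketch (not something you would write in real code, and not one of the course examples), the two assignments below do the same thing; the second spells out the address computation that the compiler generates for the first:

#include <stdio.h>

int
main(int argc, char **argv)
{
    int a[32] = { 0 };   /* a sequence of 4-byte ints on typical machines */

    /* what you normally write: */
    a[19] = 0;

    /* what the CPU effectively does: add sizeof(int) * 19 = 4 * 19 bytes  */
    /* to the base address of the sequence, then store at that address;   */
    /* the cast through char * just makes the byte arithmetic visible     */
    *(int *) ((char *) a + sizeof(int) * 19) = 0;

    printf("%d\n", a[19]);   /* prints 0 */
    return 0;
}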
For now, we concentrate on integer and floating-point types, and on the operations
that can be applied to them.
Most variables in C programs tend to hold integer values, and indeed most
variables in C programs tend to be the default-width integer type int. Declaring
a variable to have a particular integer type controls how much space is used to
store the variable (any values too big to fit will be truncated) and specifies that
the arithmetic on the variable is done using integer operations.
On 32-bit architectures like the Intel i386, an int is typically 32 bits. Some 64-bit
machines might have 64-bit ints and longs, and some microcontrollers have
16-bit ints. Particularly bizarre architectures might have even wilder sizes, but
you are not likely to see this unless you program vintage 1970s supercomputers.
The general convention is that int is the most convenient size for whatever
computer you are using and should be used by default.
Many compilers also support a long long type that is usually twice the length
of a long (e.g. 64 bits on i386 machines). This type was not officially added to
the C standard prior to C99, so it may or may not be available if you insist on
following the ANSI specification strictly.
Each of these types comes in signed and unsigned variants.
This controls the interpretation of some operations (mostly comparisons and
shifts) and determines the range of the type: for example, an unsigned char
holds values in the range 0 through 255 while a signed char holds values in
the range -128 through 127, and in general an unsigned n-bit type runs from
0 through 2^n − 1 while the signed version runs from −2^(n−1) through 2^(n−1) − 1.
The representation of signed integers uses two’s-complement notation, which
means that a positive value x is represented as the unsigned value x while a
negative value −x is represented as the unsigned value 2^n − x. For example, if
we had a peculiar implementation of C that used 3-bit ints, the binary values
and their interpretation as int or unsigned int would look like this:

bits   as unsigned int   as int
000    0                 0
001    1                 1
010    2                 2
011    3                 3
100    4                 -4
101    5                 -3
110    6                 -2
111    7                 -1
The reason we get one extra negative value in a signed integer type is that this
allows us to interpret the first bit as the sign, which makes life a little easier for
whoever is implementing our CPU. Two useful features of this representation
are:
are:
1. We can convert freely between signed and unsigned values as long as we
are in the common range of both, and
2. Addition and subtraction work exactly the same way for both signed and
unsigned values. For example, on our hypothetical 3-bit machine, 1 + 5
represented as 001 + 101 = 110 gives the same answer as 1 + (−3) =
001 + 101 = 110. In the first case we interpret 110 as 6, while in the
second we interpret it as −2, but both answers are right in their respective
contexts.
Note that in order to make this work, we can’t detect overflow: when the CPU
adds two 3-bit integers, it doesn’t know if we are adding 7 + 6 = 111 + 110 =
1101 = 13 or (−1) + (−2) = 111 + 110 = 101 = (−3). In both cases the result
is truncated to 101, which gives the incorrect answer 5 when we are adding
unsigned values.
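As a quick illustration (not one of the course examples), the following program shows both the free conversion between signed and unsigned values and the silent wraparound on overflow; the commented output assumes the usual 8-bit char and 32-bit unsigned int:

#include <stdio.h>

int
main(int argc, char **argv)
{
    signed char x = -3;
    unsigned char u = x;               /* same bit pattern, reinterpreted: 256 - 3 = 253 */
    unsigned int big = 0xffffffffu;    /* the largest 32-bit unsigned value */

    printf("%d %d\n", x, u);           /* prints -3 253 */
    printf("%u\n", big + 2u);          /* wraps around silently: prints 1 */
    return 0;
}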
This can often lead to surprising uncaught errors in C programs, although using
more than 3 bits will make overflow less likely. It is usually a good idea to
pick a size for a variable that is substantially larger than the largest value you
expect the variable to hold (although most people just default to int), unless
you are very short on space or time (larger values take longer to read and write
to memory, and may make some arithmetic operations take longer).
Taking into account signed and unsigned versions, the full collection of integer
types includes signed and unsigned versions of char, short, int, and long, plus
(in C99) long long.

One thing to watch out for is that a char is too small to hold every value you
might want to put in it. In particular, if c is declared as a char rather than an
int, a loop like this may fail to detect end of file (or may stop early on a
legitimate character), because the out-of-range value EOF will not survive being
stored in a char:

while((c = getchar()) != EOF) { /* <- DON'T DO THIS! */
    putchar(c);
}
Here is a program that uses the C99 fixed-width type uint64_t (declared in
stdint.h, which inttypes.h includes) to run the 3n+1 iteration on its argument,
complaining if the next step would overflow:

#include <stdio.h>
#include <inttypes.h>

/* largest value for which 3*big + 1 still fits in a uint64_t */
#define MAX_VALUE ((UINT64_MAX - 1) / 3)

int
main(int argc, char **argv)
{
    uint64_t big;

    if(argc != 2) {
        fprintf(stderr, "Usage: %s number\n", argv[0]);
        return 1;
    }

    /* strtoumax (declared in inttypes.h) parses an unsigned integer of maximal width */
    big = strtoumax(argv[1], 0, 10);

    /* run the 3n+1 iteration until we reach 1 or overflow */
    while(big != 1) {
        if(big % 2 == 0) {
            big /= 2;
        } else if(big <= MAX_VALUE) {
            big = 3*big + 1;
        } else {
            /* overflow! */
            puts("overflow");
            return 1;
        }
    }

    puts("Reached 1");
    return 0;
}
examples/integerTypes/fixedWidth.c
The type aliases size_t and ptrdiff_t are provided in stddef.h to represent
the return types of the sizeof operator and pointer subtraction. On a 32-
bit architecture, size_t will be equivalent to the unsigned 32-bit integer type
uint32_t (or just unsigned int) and ptrdiff_t will be equivalent to the signed
32-bit integer type int32_t (int). On a 64-bit architecture, size_t will be
equivalent to uint64_t and ptrdiff_t will be equivalent to int64_t.
The place where you will most often see size_t is as an argument to malloc,
where it gives the number of bytes to allocate.
Because stdlib.h includes stddef.h, it is often not necessary to include
stddef.h explicitly.
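For example, here is a minimal sketch of such a use; the wrapper function and its name are made up for illustration:

#include <stdlib.h>

/* allocate space for n ints; malloc's argument is a byte count of type size_t */
int *
makeIntArray(size_t n)
{
    return malloc(sizeof(int) * n);   /* caller should check for 0 and eventually free() it */
}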
Integer constants can be written in decimal, octal (with a leading 0), or
hexadecimal (with a leading 0x), possibly with suffixes marking them as unsigned
(u) or long (l); character constants like 'a' are also just small integers. Some
examples and their types:

'a' int
97 int
97u unsigned int
0xbea00d1ful unsigned long, written in hexadecimal
0777 int, written in octal
Having a lot of numeric constants in your program—particularly if the same
constant shows up in more than one place—is usually a sign of bad programming.
There are a few constants, like 0 and 1, that make sense on their own, but many
constant values are either mostly arbitrary, or might change if the needs of the
program change. It’s helpful to assign these constants names that explain their
meaning, instead of requiring the user to guess why there is a 37 here or an
0x1badd00d there. This is particularly important if the constants might change
in later versions of the program, since even though you could change every 37
in your program into a 38, this might catch other 37 values that have different
intended meanings.
For example, suppose that you have a function (call it getchar) that needs to
signal that sometimes it didn’t work. The usual way is to return a value that
the function won’t normally return. Now, you could just tell the user what value
that is:
/* get a character (as an `int` ASCII code) from `stdin` */
/* return -1 on end of file */
int getchar(void);
and now the user can write
while((c = getchar()) != -1) {
...
}
But then somebody reading the code has to remember that -1 means “end
of file” and not “signed version of 0xff” or “computer room on fire, evacuate
immediately.” It’s much better to define a constant EOF that happens to equal -1,
because among other things if you change the special return value from getchar
later then this code will still work (assuming you fixed the definition of EOF):
while((c = getchar()) != EOF) {
...
}
So how do you declare a constant in C? The traditional approach is to use the
C preprocessor, the same tool that gets run before the compiler to expand out
#include directives. To define EOF, the file /usr/include/stdio.h includes
the text
#define EOF (-1)
What this means is that whenever the characters EOF appear in a C program as
a separate word (e.g. in 1+EOF*3 but not in appurtenancesTherEOF), then the
preprocessor will replace them with the characters (-1). The parentheses around
the -1 are customary to ensure that the -1 gets treated as a separate constant
and not as part of some larger expression. So from the compiler’s perspective,
EOF really is -1, but from the programmer’s perspective, it’s end-of-file. This is
a special case of the C preprocessor’s macro mechanism.
In general, any time you have a non-trivial constant in a program, it should be
#defined. Examples are things like array dimensions, special tags or return
values from functions, maximum or minimum values for some quantity, or
standard mathematical constants (e.g., /usr/include/math.h defines M_PI as
pi to umpteen digits). This allows you to write
char buffer[MAX_FILENAME_LENGTH+1];
area = M_PI*r*r;
if(status == COMPUTER_ROOM_ON_FIRE) {
evacuate();
}
instead of
char buffer[513];
area = 3.141592319*r*r;
if(status == 136) {
evacuate();
}
which is just an invitation to errors (including the one in the area computation).
Like typedefs, #defines that are intended to be globally visible are best done in
header files; in large programs you will want to #include them in many source
files. The usual convention is to write #defined names in all-caps to remind the
user that they are macros and not real variables.
A common symptom of overflow (or of converting an out-of-range value to a signed
type) is that values you thought should be large positive integers come back as
random-looking negative integers.
Division (/) of two integers also truncates: 2/3 is 0, 5/3 is 1, etc. For positive
integers it will always round down.
Prior to C99, if either the numerator or denominator was negative, the behavior
was unpredictable and depended on what your processor chose to do. In practice
this meant you should never use / if one or both arguments might be negative.
The C99 standard specified that integer division always removes the fractional
part, effectively rounding toward 0; so (-3)/2 is -1, 3/-2 is -1, and (-3)/-2 is
1.
There is also a remainder operator % with e.g. 2%3 = 2, 5%3 = 2, 27 % 2 = 1, etc.
The sign of the modulus is ignored, so 2%-3 is also 2. The sign of the dividend
carries over to the remainder: (-3)%2 and (-3)%(-2) are both -1. The reason
for this rule is that it guarantees that y == x*(y/x) + y%x is always true.
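A quick test program (not one of the course examples) that checks these rules; the comments show the expected output under C99 semantics:

#include <stdio.h>

int
main(int argc, char **argv)
{
    printf("%d %d\n", (-3)/2, (-3)%2);            /* prints -1 -1: rounds toward 0 */
    printf("%d %d\n", 3/-2, 3%-2);                /* prints -1 1: remainder takes the dividend's sign */
    printf("%d\n", (-3) == ((-3)/2)*2 + (-3)%2);  /* prints 1: the identity holds */
    return 0;
}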
The bitwise operators & (AND), | (OR), ^ (XOR), and ~ (NOT) operate on each
bit of their operands independently. For example, taking x = 0011 and y = 0101
(showing just the low four bits):

x     y     expression   value
0011  0101  x&y          0001
0011  0101  x|y          0111
0011  0101  x^y          0110
0011  0101  ~x           1100
The shift operators << and >> shift the bit sequence left or right: x << y produces
the value x · 2^y (ignoring overflow); this is equivalent to shifting every bit in x y
positions to the left and filling in y zeros for the missing positions. In the other
direction, x >> y produces the value ⌊x · 2^−y⌋ by shifting x y positions to the
right. The behavior of the right shift operator depends on whether x is unsigned
or signed; for unsigned values, it shifts in zeros from the left end always; for
signed values, it shifts in additional copies of the leftmost bit (the sign bit). This
makes x >> y have the same sign as x if x is signed.
Shifting by a negative amount (or by at least the width of the type) is undefined
behavior in C, so you cannot rely on x << -2 meaning the same thing as x >> 2;
the y = -2 rows in the tables below show only what such a direction-reversing
implementation might do, not anything the language guarantees.

Examples (unsigned char x):

x        y   x << y     x >> y
00000001 1   00000010   00000000
11111111 3   11111000   00011111
10111001 -2  00101110   11100100

Examples (signed char x):

x        y   x << y     x >> y
00000001 1   00000010   00000000
11111111 3   11111000   11111111
10111001 -2  11101110   11100100
Shift operators are often used with bitwise logical operators to set or extract
individual bits in an integer value. The trick is that (1 << i) contains a 1 in the
i-th least significant bit and zeros everywhere else. So x & (1<<i) is nonzero if
and only if x has a 1 in the i-th place. This can be used to print out an integer
in binary format (which standard printf won’t do).
The following program gives an example of this technique. For example, when
called as ./testPrintBinary 123, it will print 1111011 followed by a newline.
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>   /* for CHAR_BIT */

/* print n in binary, without leading zeros (prints a single 0 if n is 0) */
void
print_binary(unsigned int n)
{
    int i = CHAR_BIT * sizeof(unsigned int) - 1;

    /* find the highest 1 bit, then print every bit from there down */
    while(i > 0 && !(n & (1u << i))) {
        i--;
    }
    for(; i >= 0; i--) {
        putchar(n & (1u << i) ? '1' : '0');
    }
}

int
main(int argc, char **argv)
{
    if(argc != 2) {
        fprintf(stderr, "Usage: %s n\n", argv[0]);
        return 1;
    }
    print_binary(atoi(argv[1]));
    putchar('\n');
    return 0;
}
examples/integerTypes/testPrintBinary.c
In the other direction, we can set the i-th bit of x to 1 by doing x | (1 << i)
or to 0 by doing x & ~(1 << i). See the section on bit manipulation for
applications of this to build arbitrarily-large bit vectors.
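A small demonstration (not from the course examples) of setting, toggling, and clearing individual bits:

#include <stdio.h>

int
main(int argc, char **argv)
{
    unsigned int flags = 0;

    flags |= (1u << 3);      /* set bit 3 */
    printf("%u\n", flags);   /* prints 8 */

    flags ^= (1u << 3);      /* toggle bit 3 back off */
    printf("%u\n", flags);   /* prints 0 */

    flags = ~0u;             /* all bits on */
    flags &= ~(1u << 0);     /* clear bit 0 */
    printf("%x\n", flags);   /* prints fffffffe with a 32-bit unsigned int */
    return 0;
}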
4.2.2.2.4 Relational operators
Logical operators usually operate on the results of relational operators or
comparisons: these are == (equality), != (inequality), < (less than), > (greater
than), <= (less than or equal to) and >= (greater than or equal to). So, for
example,
if(size >= MIN_SIZE && size <= MAX_SIZE) {
puts("just right");
}
tests if size is in the (inclusive) range [MIN_SIZE..MAX_SIZE].
Beware of confusing == with =. The code
/* DANGER! DANGER! DANGER! */
if(x = 5) {
...
is perfectly legal C, and will set x to 5 rather than testing if it’s equal to 5.
Because 5 happens to be nonzero, the body of the if statement will always
be executed. This error is so common and so dangerous that gcc will warn
you about any tests that look like this if you use the -Wall option. Some
programmers will go so far as to write the test as 5 == x just so that if their
finger slips, they will get a syntax error on 5 = x even without special compiler
support.
/* This program can be used to show how atoi etc. handle overflow. */
/* For example, try "overflow 1000000000000". */
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
    char c;
    int i;
    long l;
    long long ll;

    if(argc != 2) {
        fprintf(stderr, "Usage: %s n\n", argv[0]);
        return 1;
    }

    c = atoi(argv[1]);
    i = atoi(argv[1]);
    l = atol(argv[1]);
    ll = atoll(argv[1]);

    /* print the value as stored in each type */
    printf("%d %d %ld %lld\n", c, i, l, ll);

    return 0;
}
examples/integerTypes/overflow.c
Real numbers are represented in C by the floating point types float, double,
and long double. Just as the integer types can’t represent all integers because
they fit in a bounded number of bytes, so also the floating-point types can’t
represent all real numbers. The difference is that the integer types can represent
values within their range exactly, while floating-point types almost always give
only an approximation to the correct value, albeit across a much larger range.
The three floating point types differ in how much space they use (32, 64, or
80 bits on x86 CPUs; possibly different amounts on other machines), and thus
how much precision they provide. Most math library routines expect and return
doubles (e.g., sin is declared as double sin(double)), but there are usually
float versions as well (float sinf(float)).
A floating-point number is represented as a mantissa multiplied by a power of
two, for example:

1 = 1 · 2^0
2 = 1 · 2^1
0.375 = 1.5 · 2^−2

etc.
The mantissa is usually represented in base 2, as a binary fraction. So (in a
very low-precision format), 1 would be 1.000 · 2^0, 2 would be 1.000 · 2^1, and
0.375 = 3/8 would be 1.100 · 2^−2, where the first 1 after the decimal point counts
as 1/2, the second as 1/4, etc. Note that for a properly-scaled (or normalized)
floating-point number in base 2 the digit before the decimal point is always 1.
For this reason it is usually dropped to save space (although this requires a
special representation for 0).
Negative values are typically handled by adding a sign bit that is 0 for positive
numbers and 1 for negative numbers.
4.2.3.3 Operators
Floating-point types in C support most of the same arithmetic and relational
operators as integer types; x > y, x / y, x + y all make sense when x and y are
floats. If you mix two different floating-point types together, the less-precise
one will be extended to match the precision of the more-precise one; this also
works if you mix integer and floating point types as in 2 / 3.0. Unlike integer
division, floating-point division does not discard the fractional part (although
it may produce round-off error: 2.0/3.0 gives 0.66666666666666663, which
is not quite exact). Be careful about accidentally using integer division when
you mean to use floating-point division: 2/3 is 0. Casts can be used to force
floating-point division (see below).
Some operators that work on integers will not work on floating-point types. These
are % (use modf from the math library if you really need to get a floating-point
remainder) and all of the bitwise operators ~, <<, >>, &, ^, and |.
You can convert floating-point numbers to and from integer types explicitly using
casts. A typical use might be:
/* return the average of the n elements of a */
double
average(int n, int a[])
{
    int sum = 0; /* sum of the elements */
    int i;
    for(i = 0; i < n; i++) {
        sum += a[i];
    }
    /* the cast forces floating-point division instead of integer division */
    return (double) sum / n;
}
In the IEEE 754 format used for a 32-bit float, the top bit is the sign bit and the
next 8 bits are the exponent, stored with a bias of 127, so that the stored value
01111111 represents an exponent of 0, 01111110 represents -1, and so forth. The
mantissa fits in the remaining 23 bits, with its leading 1 stripped off as
described above.
Certain numbers have a special representation. Because 0 cannot be represented
in the standard form (there is no 1 before the decimal point), it is given the
special representation 0 00000000 00000000000000000000000. (There is also
a -0 = 1 00000000 00000000000000000000000, which looks equal to +0 but
prints differently.) Numbers with exponents of 11111111 = 255 (nominally 2^128) represent
non-numeric quantities such as “not a number” (NaN), returned by operations like
(0.0/0.0) and positive or negative infinity. A table of some typical floating-point
numbers (generated by the program float.c) is given below:
0 = 0 = 0 00000000 00000000000000000000000
-0 = -0 = 1 00000000 00000000000000000000000
0.125 = 0.125 = 0 01111100 00000000000000000000000
0.25 = 0.25 = 0 01111101 00000000000000000000000
0.5 = 0.5 = 0 01111110 00000000000000000000000
1 = 1 = 0 01111111 00000000000000000000000
2 = 2 = 0 10000000 00000000000000000000000
4 = 4 = 0 10000001 00000000000000000000000
8 = 8 = 0 10000010 00000000000000000000000
0.375 = 0.375 = 0 01111101 10000000000000000000000
0.75 = 0.75 = 0 01111110 10000000000000000000000
1.5 = 1.5 = 0 01111111 10000000000000000000000
3 = 3 = 0 10000000 10000000000000000000000
6 = 6 = 0 10000001 10000000000000000000000
0.1 = 0.10000000149011612 = 0 01111011 10011001100110011001101
0.2 = 0.20000000298023224 = 0 01111100 10011001100110011001101
0.4 = 0.40000000596046448 = 0 01111101 10011001100110011001101
0.8 = 0.80000001192092896 = 0 01111110 10011001100110011001101
1e+12 = 999999995904 = 0 10100110 11010001101010010100101
1e+24 = 1.0000000138484279e+24 = 0 11001110 10100111100001000011100
1e+36 = 9.9999996169031625e+35 = 0 11110110 10000001001011111001110
inf = inf = 0 11111111 00000000000000000000000
-inf = -inf = 1 11111111 00000000000000000000000
nan = nan = 0 11111111 10000000000000000000000
What this means in practice is that a 32-bit floating-point value (e.g. a float) can
represent any number between 1.17549435e-38 and 3.40282347e+38, where
the e separates the (base 10) exponent. Operations that would create a smaller
value will underflow to 0 (slowly—IEEE 754 allows “denormalized” floating point
numbers with reduced precision for very small values) and operations that would
create a larger value will produce inf or -inf instead.
For a 64-bit double, the exponent and mantissa are both larger; this
gives a range from about 2.2250738585072014e-308 to 1.7976931348623157e+308,
with similar behavior on underflow and overflow.
Intel processors internally use an even larger 80-bit floating-point format for
all operations. Unless you declare your variables as long double, this should
not be visible to you from C except that some operations that might otherwise
produce overflow errors will not do so, provided all the variables involved sit in
registers (typically the case only for local variables and function parameters).
4.2.3.6 Error
In general, floating-point numbers are not exact: they are likely to contain
round-off error because of the truncation of the mantissa to a fixed number
of bits. This is particularly noticeable for large values (e.g. 1e+12 in the table
above), but can also be seen in fractions with values that aren’t powers of 2 in the
denominator (e.g. 0.1). Round-off error is often invisible with the default float
output formats, since they produce fewer digits than are stored internally, but
can accumulate over time, particularly if you subtract floating-point quantities
with values that are close (this wipes out the mantissa without wiping out the
error, making the error much larger relative to the number that remains).
The easiest way to avoid accumulating error is to use high-precision floating-point
numbers (this means using double instead of float). On modern CPUs there
is little or no time penalty for doing so, although storing doubles instead of
floats will take twice as much space in memory.
Note that a consequence of the internal structure of IEEE 754 floating-point
numbers is that small integers and fractions with small numerators and
power-of-2 denominators can be represented exactly—indeed, the IEEE 754
standard carefully defines floating-point operations so that arithmetic on such
exact integers will give the same answers as integer arithmetic would (except,
of course, for division that produces a remainder). This fact can sometimes
be exploited to get higher precision on integer values than is available from
the standard integer types; for example, a double can represent any integer
between -2^53 and 2^53 exactly, which is a much wider range than the values from
-2^31 to 2^31 − 1 that fit in a 32-bit int or long. (A 64-bit long
long does better.) So double should be considered for applications where
large precise integers are needed (such as calculating the net worth in pennies of
a billionaire.)
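A short demonstration (not from the course examples) of where this exact range ends:

#include <stdio.h>

int
main(int argc, char **argv)
{
    double big = 9007199254740992.0;   /* 2^53 */

    printf("%.1f\n", big - 1.0);       /* prints 9007199254740991.0: still exact */
    printf("%.1f\n", big + 1.0);       /* prints 9007199254740992.0: the + 1 is lost to rounding */
    return 0;
}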
One consequence of round-off error is that it is very difficult to test floating-point
numbers for equality, unless you are sure you have an exact value as described
above. It is generally not the case, for example, that (0.1+0.1+0.1) == 0.3
in C. This can produce odd results if you try writing something like
for(f = 0.0; f <= 0.3; f += 0.1): it will be hard to predict in advance
whether the loop body will be executed with f = 0.3 or not. (Even more
hilarity ensues if you write for(f = 0.0; f != 0.3; f += 0.1), which
after not quite hitting 0.3 exactly keeps looping for much longer than I am
willing to wait to see it stop, but which I suspect will eventually converge to
some constant value of f large enough that adding 0.1 to it has no effect.)
Most of the time when you are tempted to test floats for equality, you are
better off testing if one lies within a small distance from the other, e.g. by
testing fabs(x-y) <= fabs(EPSILON * y), where EPSILON is usually some
application-dependent tolerance. This isn’t quite the same as equality (for
example, it isn’t transitive), but it is usually closer to what you want.
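Here is a sketch of that approach; the helper function closeEnough and the EPSILON value are made up for illustration, and the tolerance is arbitrary. It uses fabs from the math library, so it would be compiled with -lm as described in the next section.

#include <stdio.h>
#include <math.h>

#define EPSILON 1e-9   /* relative tolerance; pick one that fits your application */

/* nonzero if x and y are within a small relative distance of each other */
int
closeEnough(double x, double y)
{
    return fabs(x - y) <= fabs(EPSILON * y);
}

int
main(int argc, char **argv)
{
    printf("%d\n", (0.1 + 0.1 + 0.1) == 0.3);          /* prints 0 on typical systems */
    printf("%d\n", closeEnough(0.1 + 0.1 + 0.1, 0.3)); /* prints 1 */
    return 0;
}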
There are two parts to using the math library. The first is to include the line
#include <math.h>
somewhere at the top of your source file. This tells the preprocessor to paste in
the declarations of the math library functions found in /usr/include/math.h.
The second step is to link to the math library when you compile. This is done
by passing the flag -lm to gcc after your C program source file(s). A typical
command might be:
c99 -o program program.c -lm
If you don’t do this, you will get errors from the compiler about missing functions.
The reason is that the math library is not linked in by default, since for many
system programs it’s not needed.
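For example, a minimal program that needs both the #include and the -lm flag might look like this (the filename triangle.c is made up); it would be compiled with something like c99 -o triangle triangle.c -lm:

#include <stdio.h>
#include <math.h>

int
main(int argc, char **argv)
{
    double x = 3.0;
    double y = 4.0;

    printf("%f\n", sqrt(x*x + y*y));   /* prints 5.000000 */
    return 0;
}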
<< >> shifts
< <= >= > inequalities
== != equality
& (binary) bitwise AND
^ bitwise XOR
| bitwise OR
&& logical AND
|| logical OR
?: ternary if (associates
right-to-left)
= += -= *= /= %= &= ^= |= <<= >>= assignment (associate
right-to-left)
, comma
#include <stdio.h>

#define COUNTDOWN_START 10 /* value to count down from */
int
main(int argc, char **argv)
{
for(int i = COUNTDOWN_START; i >= 0; i--) {
printf("%d\n", i);
}
return 0;
}
examples/style/countdown.c
#include <stdio.h>
int main(int _,char**xb){_=0xb;while(_--)printf("%d\n",_);return ++_;}
examples/style/badCountdown.c
The difference between these programs is that the first is designed to be easy to
read and understand while the second is not. Though the computer can’t tell the
difference between them, the second will be much harder to debug or modify to
accomplish some new task.
Certain formatting and programming conventions have evolved over the years to
make C code as comprehensible as possible, and as we introduce various features
of C, we will talk about how best to use them to make your programs understood
by both computers and humans.
Submitted assignments may be graded for style in addition to correctness. Below
is a checklist that has been used in past versions of the course to identify some
of the more egregious violations of reasonable coding practice. For more extreme
examples of what not to do, see the International Obfuscated C Code Contest.
Style grading checklist
Score is 20 points minus 1 for each box checked (but never less than 0)
Comments
[ ] Undocumented module.
[ ] Undocumented function other than main.
[ ] Underdocumented function: return value or args not described.
[ ] Undocumented program input and output (when main is provided).
[ ] Undocumented struct or union components.
[ ] Undocumented #define.
[ ] Failure to cite code taken from other sources.
[ ] Insufficient comments.
[ ] Excessive comments.
Naming
Whitespace
Macros
[ ] Dependent constant not written as expression of earlier constant.
[ ] Underdocumented parameterized macro.
Global variables
Functions
Code organization
[ ] Lack of modularity.
[ ] Function used in multiple source files but not declared in header file.
[ ] Internal-use-only function not declared static.
[ ] Full struct definition in header files when components should be hidden.
[ ] #include "file.c"
[ ] Substantial repetition of code.
Miscellaneous
4.5 Variables
4.5.1 Memory
Memory consists of many bytes of storage, each of which has an address which
is itself a sequence of bits. Though the actual memory architecture of a modern
computer is complex, from the point of view of a C program we can think of it
as simply a large address space that the CPU can store things in (and load
things from), provided it can supply an address to the memory. Because we
don’t want to have to type long strings of bits all the time, the C compiler lets
us give names to particular regions of the address space, and will even find free
space for us to use.
4.5.2 Variables as names
You can assign an initial value to a variable by putting in something like = 0 after the variable name.
It is good practice to put a comment after each variable declaration that explains
what the variable does (with a possible exception for conventionally-named loop
variables like i or j in short functions). Below is an example of a program with
some variable declarations in it:
#include <stdio.h>
#include <ctype.h>

/*
 * This global variable is not used; it is here only to demonstrate
 * what a global variable declaration looks like.
 */
unsigned long SpuriousGlobalVariable = 127;

int
main(int argc, char **argv)
{
    int c;         /* character read */
    int count = 0; /* number of digits found */

    /* count the digits in the input */
    while((c = getchar()) != EOF) {
        if(isdigit(c)) {
            count++;
        }
    }

    printf("%d\n", count);
    return 0;
}
examples/variables/countDigits.c
followed by $ for a string variable and % for an integer variable. These
type tags were used because BASIC interpreters didn’t have a mechanism
for declaring variable types.
IFNXG7 A typical FORTRAN variable name, back in the days of 6-character
all-caps variable names. The I at the start means it’s an integer variable.
The rest of the letters probably abbreviate some much longer description
of what the variable means. The default type based on the first letter
was used because FORTRAN programmers were lazy, but it could be
overridden by an explicit declaration.
i, j, c, count, top_of_stack, accumulatedTimeInFlight Typical names
from modern C programs. There is no type information contained in
the name; the type is specified in the declaration and remembered by
the compiler elsewhere. Note that there are two different conventions
for representing multi-word names: the first is to replace spaces with
underscores, and the second is to capitalize the first letter of each word
(possibly excluding the first letter), a style called camel case. You should
pick one of these two conventions and stick to it.
prgcGradeDatabase An example of Hungarian notation, a style of variable
naming in which the type of the variable is encoded in the first few characters.
The type is now back in the variable name again. This is not enforced by
the compiler: even though iNumberOfStudents is supposed to be an int,
there is nothing to prevent you from declaring float iNumberOfStudents
if you are teaching a class on improper chainsaw handling and want to
allow for the possibility of fractional students. See this MSDN page for a
much more detailed explanation of the system.
Not clearly an improvement on standard naming conventions, but it is
popular in some programming shops.
In C, variable names are called identifiers. These are also used to identify
things that are not variables, like functions and user-defined types.
An identifier in C must start with a lower or uppercase letter or the underscore
character _. Typically variables starting with underscores are used internally
by system libraries, so it’s dangerous to name your own variables this way.
Subsequent characters in an identifier can be letters, digits, or underscores.
So for example a, ____a___a_a_11727_a, AlbertEinstein, aAaAaAaAaAAAAAa,
and ______ are all legal identifiers in C, but $foo and 01 are not.
The basic principle of variable naming is that a variable name is a substitute for
the programmer’s memory. It is generally best to give identifiers names that
are easy to read and describe what the variable is used for. Such variables are
called self-documenting. None of the variable names in the preceding list are
any good by this standard. Better names would be total_input_characters,
dialedWrongNumber, or stepsRemaining. Non-descriptive single-character
names are acceptable for certain conventional uses, such as the use of i and j
for loop iteration variables, or c for an input character. Such names should only
be used when the scope of the variable is small, so that it’s easy to see all the
places where it is used at the same time.
C identifiers are case-sensitive, so aardvark, AArDvARK, and AARDVARK are all
different variables. Because it is hard to remember how you capitalized something
before, it is important to pick a standard convention and stick to it. The
traditional convention in C goes like this:
• Ordinary variables and functions are lowercased or camel-cased, e.g. count,
countOfInputBits.
• User-defined types (and in some conventions global variables) are capital-
ized, e.g. Stack, TotalBytesAllocated.
• Constants created with #define or enum are put in all-caps:
MAXIMUM_STACK_SIZE, BUFFER_LIMIT.
Ignoring pointers for the moment, there are essentially two things you can do to
a variable. You can assign a value to it using the = operator, as in:
x = 2; /* assign 2 to x */
y = 3; /* assign 3 to y */
or you can use its value in an expression:
x = y+1; /* assign y+1 to x */
The assignment operator is an ordinary operator, and assignment expressions
can be used in larger expressions:
x = (y=2)*3; /* sets y to 2 and x to 6 */
This feature is usually only used in certain standard idioms, since it’s confusing
otherwise.
There are also shorthand operators for expressions of the form variable = variable
operator expression. For example, writing x += y is equivalent to writing
x = x + y, x /= y is the same as x = x / y, etc.
For the special case of adding or subtracting 1, you can abbreviate still further
with the ++ and -- operators. These come in two versions, depending on whether
you want the result of the expression (if used in a larger expression) to be the
value of the variable before or after the variable is incremented:
x = 0;
y = x++; /* sets x to 1 and y to 0 (the old value) */
y = ++x; /* sets x to 2 and y to 2 (the new value) */
y = x--; /* sets x to 1 and y to 2 (the old value) */
y = --x; /* sets x to 0 and y to 0 (the new value) */
The intuition is that if the ++ comes before the variable, the increment happens
before the value of the variable is read (a preincrement); if it comes after, it
happens after the value is read (a postincrement). This is confusing enough
that it is best not to use the value of preincrement or postincrement operations
except in certain standard idioms. But using x++ or ++x by itself as a substitute
for x = x+1 is perfectly acceptable style.8
4.5.4 Initialization
It is a serious error to use the value of a variable that has never been assigned to,
because you will get whatever junk is sitting in memory at the address allocated
to the variable, and this might be some arbitrary leftover value from a previous
function call that doesn’t even represent the same type.9
Fortunately, C provides a way to guarantee that a variable is initialized as soon
as it is declared. Many of the examples in the notes do not use this mechanism,
because of bad habits learned by the instructor using early versions of C that
imposed tighter constraints on initialization. But initializing variables is a good
habit to get in the practice of doing.
For variables with simple types (that is, not arrays, structs, or unions), an
initializer looks like an assignment:
int sum = 0;
int n = 100;
int nSquared = n*n;
double gradeSchoolPi = 3.14;
const char * const greeting = "Hi!";
const int greetingLength = strlen(greeting);
For ordinary local variables, the initializer value can be any expression, including
expressions that call other functions. There is an exception for variables allocated
when the program starts (which includes global variables outside functions and
static variables inside functions), which can only be initialized to constant
expressions.
The last two examples show how initializers can set the values of variables that
are declared to be const (the variable greeting is both constant itself, because
of the second const, and points to data that is also constant, because it is of
type const char *). This is the only way to set the values of such variables
without cheating, because the compiler will complain if you try to do an ordinary
assignment to a variable declared to be constant.
8 C++ programmers will prefer ++x if they are not otherwise using the return value, because
if x is some very complicated type with overloaded ++, using preincrement avoids having to
save a copy of the old value.
9 Exception: Global variables and static local variables are guaranteed to be initialized to
an all-0 pattern, which will give the value 0 for most types.
For fixed-size arrays and structs, it is possible to supply an initializer for each
component, by enclosing the initializer values in braces, separated by commas.
For example:
int threeNumbers[3] = { 1, 2, 3 };
struct numericTitle {
int number;
const char *name;
};
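An initializer for a struct like this supplies a value for each component in order; for example (an illustrative declaration, not part of the original file):

struct numericTitle s = { 7, "The Magnificent Seven" };  /* s.number == 7, s.name points to the string */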
/* example of a local variable with static extent: */
/* returns how many times it has been called */
int
callCount(void)
{
    static int count = 0; /* initialized once, keeps its value between calls */
    return count++;
}
To declare a local variable with static extent, use the static qualifier as in the
above example. To declare a global variable with static extent, declare it outside
a function. In both cases you should provide an initializer for the variable.
extern float GlobalFloat;    /* this global variable, defined somewhere else, has type float */
static char Character = 'c'; /* global variable, can only be used by functions in this file */
(Note the convention of putting capital letters on global variables to distinguish
them from local variables.)
Typically, an extern definition would appear in a header file so that it can be
included in any function that uses the variable, while an ordinary global variable
definition would appear in a C file so it only occurs once.
The const in the declaration above applies to the characters that string points
to: string is not const itself, but is instead a pointer to const. It is still
possible to make string point somewhere else, say by doing an assignment:
string = "You cannot modify this string either.";
If you want to make it so that you can’t assign to string, put const right before
the variable name:
/* prevent assigning to string as well */
const char * const string = "You cannot modify this string.";
Now string is a const pointer to const: you can neither modify string nor
the values it points to.
Note that const only restricts what you can do using this particular variable
name. If you can get at the memory that something points to by some other
means, say through another pointer, you may be able to change the values in
these memory locations anyway:
int x = 5;
const int *p = &x;
int *q = &x;  /* another pointer to the same place, but not a pointer to const */

*q = 6;       /* legal: changes x, and thus *p, anyway */
Input and output from C programs is typically done through the standard I/O
library, whose functions etc. are declared in stdio.h. A detailed description
of the functions in this library is given in Appendix B of Kernighan and Ritchie.
We’ll talk about some of the more useful functions and about how input-output
(I/O) works on Unix-like operating systems in general.
The standard I/O library works on character streams, objects that act like
long sequences of incoming or outgoing characters. What a stream is connected
to is often not apparent to a program that uses it; an output stream might go to
a terminal, to a file, or even to another program (appearing there as an input
stream).
Three standard streams are available to all programs: these are stdin (standard
input), stdout (standard output), and stderr (standard error). Standard I/O
functions that do not take a stream as an argument will generally either read
from stdin or write to stdout. The stderr stream is used for error messages.
It is kept separate from stdout so that you can see these messages even if you
redirect output to a file:
$ ls no-such-file > /tmp/dummy-output
ls: no-such-file: No such file or directory
To read a single character from stdin, use getchar:

c = getchar();
The getchar routine will return the special value EOF (usually -1; short for end
of file) if there are no more characters to read, which can happen when you
hit the end of a file or when the user types the end-of-file key control-D to the
terminal. Note that the return value of getchar is declared to be an int since
EOF lies outside the normal character range.
To write a single character to stdout, use putchar:
putchar('!');
Even though putchar can only write single bytes, it takes an int as an argument.
Any value outside the range 0..255 will be truncated to its last byte, as in the
usual conversion from int to unsigned char.
Both getchar and putchar are wrappers for more general routines getc and
putc that allow you to specify which stream you are using. To illustrate getc
and putc, here’s how we might define getchar and putchar if they didn’t exist
already:
int
getchar2(void)
{
return getc(stdin);
}
int
putchar2(int c)
{
return putc(c, stdout);
}
Note that putc, putchar2 as defined above, and the original putchar all return
an int rather than void; this is so that they can signal whether the write
succeeded. If the write succeeded, putchar or putc will return the value written.
If the write failed (say because the disk was full), then putc or putchar will
return EOF.
Here’s another example of using putc to make a new function putcerr that
writes a character to stderr:
int
putcerr(int c)
{
return putc(c, stderr);
}
A rather odd feature of the C standard I/O library is that if you don’t like the
character you just got, you can put it back using the ungetc function. The
limitations on ungetc are that (a) you can only push one character back, and (b)
that character can’t be EOF. The ungetc function is provided because it makes
certain high-level input tasks easier; for example, if you want to parse a number
written as a sequence of digits, you need to be able to read characters until you
hit the first non-digit. But if the non-digit is going to be used elsewhere in your
program, you don’t want to eat it. The solution is to put it back using ungetc.
Here’s a function that uses ungetc to peek at the next character on stdin
without consuming it:
/* return the next character from stdin without consuming it */
int
peekchar(void)
{
int c;
c = getchar();
if(c != EOF) ungetc(c, stdin); /* puts it back */
return c;
}
Reading and writing data one character at a time can be painful. The C standard
I/O library provides several convenient routines for reading and writing formatted
data. The most commonly used one is printf, which takes as arguments a
format string followed by zero or more values that are filled in to the format
string according to patterns appearing in it.
Here are some typical printf statements:
printf("Hello\n"); /* print "Hello" followed by a newline */
printf("%c", c); /* equivalent to putchar(c) */
printf("%d", n); /* print n (an int) formatted in decimal */
printf("%u", n); /* print n (an unsigned int) formatted in decimal */
printf("%o", n); /* print n (an unsigned int) formatted in octal */
printf("%x", n); /* print n (an unsigned int) formatted in hexadecimal */
printf("%f", x); /* print x (a float or double) */
/* print total (an int) and average (a double) on two lines with labels */
printf("Total: %d\nAverage: %f\n", total, average);
For a full list of formatting codes see Table B-1 in Kernighan and Ritchie, or
run man 3 printf.
The inverse of printf is scanf. The scanf function reads formatted data from
stdin according to the format string passed as its first argument and stuffs the
results into variables whose addresses are given by the later arguments. This
requires prefixing each such argument with the & operator, which takes the
address of a variable.
Format strings for scanf are close enough to format strings for printf that you
can usually copy them over directly. However, because scanf arguments don’t go
through argument promotion (where all small integer types are converted to int
and floats are converted to double), you have to be much more careful about
specifying the type of the argument correctly. For example, while printf("%f",
x) will work whether x is a float or a double, scanf("%f", &x) will work only
if x is a float, which means that scanf("%lf", &x) is needed if x is in fact a
double.
Some examples:
scanf("%c", &c);  /* like c = getchar(); c must be a char; will NOT put EOF in c */
scanf("%d", &n); /* read an int formatted in decimal */
scanf("%u", &n); /* read an unsigned int formatted in decimal */
scanf("%o", &n); /* read an unsigned int formatted in octal */
scanf("%x", &n); /* read an unsigned int formatted in hexadecimal */
scanf("%f", &x); /* read a float */
scanf("%lf", &x); /* read a double */
/* read total (an int) and average (a float) on two lines with labels */
/* (will also work if input is missing newlines or uses other whitespace, see below) */
scanf("Total: %d\nAverage: %f\n", &total, &average);
For a full list of formatting codes, run man 3 scanf.
The scanf routine usually eats whitespace (spaces, tabs, newlines, etc.) in its
input whenever it sees a conversion specification or a whitespace character in its
format string. The one exception is that a %c conversion specifier will not eat
whitespace and will instead return the next character whether it is whitespace
or not. Non-whitespace characters that are not part of conversion specifications
must match exactly. To detect if scanf parsed everything successfully, look at
its return value; it returns the number of values it filled in, or EOF if it hits
end-of-file before filling in any values.
The printf and scanf routines are wrappers for fprintf and fscanf, which
take a stream as their first argument, e.g.:
fprintf(stderr, "BUILDING ON FIRE, %d%% BURNT!!!\n", percentage);
This sends the output to the standard error output stream stderr. Note the
use of “%%” to print a single percent in the output.
Since we can write our own functions in C, if we don’t like what the standard
routines do, we can build our own on top of them. For example, here’s a function
that reads in integer values without leading minus signs and returns the result.
It uses the peekchar routine we defined above, as well as the isdigit routine
declared in ctype.h.
/* read an integer written in decimal notation from stdin until the first
* non-digit and return it. Returns 0 if there are no digits. */
int
readNumber(void)
{
    int accumulator; /* the number so far */
    int c;           /* next character */

    accumulator = 0;

    while(isdigit(peekchar())) {
        c = getchar();           /* consume the digit */
        accumulator *= 10;       /* shift previous digits over */
        accumulator += c - '0';  /* add the new digit */
    }

    return accumulator;
}
Here’s another implementation that does almost the same thing:
int
readNumber2(void)
{
int n;
if(scanf("%u", &n) == 1) {
return n;
} else {
return 0;
}
}
The difference is that readNumber2 will consume any whitespace before the first
digit, which may or may not be what we want.
More complex routines can be used to parse more complex input. For example,
here’s a routine that uses readNumber to parse simple arithmetic expressions,
where each expression is either a number or of the form (expression+expression)
or (expression*expression). The return value is the value of the expression after
adding together or multiplying all of its subexpressions. (A complete program
including this routine and the others defined earlier that it uses can be found
in examples/IO/calc.c.)
#define EXPRESSION_ERROR (-1)

/* read an expression from stdin and return its value, */
/* or EXPRESSION_ERROR if it is malformed */
int
readExpression(void)
{
    int c;   /* next character */
    int e1;  /* value of first subexpression */
    int e2;  /* value of second subexpression */
    int op;  /* the operator */

    c = peekchar();

    if(c == '(') {
        c = getchar();          /* consume the '(' */
        e1 = readExpression();
        op = getchar();
        e2 = readExpression();

        if(getchar() != ')') {  /* missing close parenthesis */
            return EXPRESSION_ERROR;
        }
        /* else */
        switch(op) {
        case '*':
            return e1*e2;
            break;
        case '+':
            return e1+e2;
            break;
        default:
            return EXPRESSION_ERROR;
            break;
        }
    } else if(isdigit(c)) {
        return readNumber();
    } else {
        return EXPRESSION_ERROR;
    }
}
Because this routine calls itself recursively as it works its way down through
the input, it is an example of a recursive descent parser. Parsers for more
complicated languages like C are usually not written by hand like this, but are
instead constructed mechanically using a parser generator.
Reading and writing files is done by creating new streams attached to the files.
The function that does this is fopen. It takes two arguments: a filename, and a
flag that controls whether the file is opened for reading or writing. The return
value of fopen has type FILE * and can be used in putc, getc, fprintf, etc.
just like stdin, stdout, or stderr. When you are done using a stream, you
should close it using fclose.
Here’s a program that reads a list of numbers from a file whose name is given as
argv[1] and prints their sum:
#include <stdio.h>
#include <stdlib.h>
/*
* Sum integers in a file.
*
* 2018-01-24 Includes bug fixes contributed by Zhe Hua.
*/
int
main(int argc, char **argv)
{
FILE *f;
int x;
int sum;
if(argc != 2) {
fprintf(stderr, "Usage: %s filename\n", argv[0]);
exit(1);
}
f = fopen(argv[1], "r");
if(f == 0) {
/* perror is a standard C library routine */
/* that prints a message about the last failed library routine */
/* prepended by its argument */
perror(argv[1]);
exit(2);
}
/* else everything is ok */
sum = 0;
while(fscanf(f, "%d", &x) == 1) {
sum += x;
}
printf("%d\n", sum);
return 0;
}
examples/IO/sum.c
To write to a file, open it with fopen(filename, "w"). Note that as soon as
you call fopen with the "w" flag, any previous contents of the file are erased.
If you want to append to the end of an existing file, use "a" instead. You can
also add + onto the flag if you want to read and write the same file (this will
probably involve using fseek).
Some operating systems (Windows) make a distinction between text and binary
files. For text files, use the same arguments as above. For binary files, add a b,
e.g. fopen(filename, "wb") to write a binary file.
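For example, a small sketch (with a made-up filename) that appends one line to a log file; "a" creates the file if it does not exist and never erases what is already there:

#include <stdio.h>

int
main(int argc, char **argv)
{
    FILE *f = fopen("events.log", "a");

    if(f == 0) {
        perror("events.log");
        return 1;
    }

    fprintf(f, "something happened\n");
    fclose(f);
    return 0;
}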
/* leave a greeting in the current directory */
#include <stdio.h>
#include <stdlib.h>

#define FILENAME "hello.txt"     /* file to create */
#define MESSAGE  "hello, world"  /* what to put in it */

int
main(int argc, char **argv)
{
    FILE *f;

    f = fopen(FILENAME, "w");
    if(f == 0) {
        perror(FILENAME);
        exit(1);
    }

    fprintf(f, "%s\n", MESSAGE);
    fclose(f);
    return 0;
}
examples/IO/helloFile.c
The bodies of C functions (including the main function) are made up of state-
ments. These can either be simple statements that do not contain other
statements, or compound statements that have other statements inside them.
Control structures are compound statements like if/then/else, while, for, and
do..while that control how or whether their component statements are executed.
The break and continue statements jump out of the innermost enclosing loop (or
switch; see below) or to the next iteration of a loop; we’ll talk about these more
when we talk about loops. The goto statement jumps to another location in the
same function, and exists for the rare occasions when it is needed. Using it in
most circumstances is a sin.
4.7.2.1 Conditionals
These are compound statements that test some condition and execute one or
another block depending on the outcome of the condition. The simplest is the
if statement:
if(houseIsOnFire) {
/* ouch! */
scream();
runAway();
}
The body of the if statement is executed only if the expression in parentheses
at the top evaluates to true (which in C means any value that is not 0).
The braces are not strictly required, and are used only to group one or more
statements into a single statement. If there is only one statement in the body,
the braces can be omitted:
if(programmerIsLazy) omitBraces();
This style is recommended only for very simple bodies. Omitting the braces
makes it harder to add more statements later without errors.
if(underAttack)
launchCounterAttack(); /* executed only when attacked */
hideInBunker(); /* ### DO NOT INDENT LIKE THIS ### executed always */
In the example above, the lack of braces means that the hideInBunker()
statement is not part of the if statement, despite the misleading indentation.
This sort of thing is why I generally always put in braces in an if.
An if statement may have an else clause, whose body is executed if the test is
false (i.e. equal to 0).
if(happy) {
smile();
} else {
frown();
}
A common idiom is to have a chain of if and else if branches that test several
conditions:
if(temperature < 0) {
puts("brrr");
} else if(temperature < 100) {
puts("hooray");
} else {
puts("ouch!");
}
This can be inefficient if there are a lot of cases, since the tests are applied
sequentially. For tests of the form <expression> == <small constant>, the
switch statement may provide a faster alternative. Here’s a typical switch
statement:
/* print plural of cow, maybe using the obsolete dual number construction */
switch(numberOfCows) {
case 1:
puts("cow");
break;
case 2:
puts("cowen");
break;
default:
puts("cows");
break;
}
This prints the string “cow” if there is one cow, “cowen” if there are two cowen,
and “cows” if there are any other number of cows. The switch statement
evaluates its argument and jumps to the matching case label, or to the default
label if none of the cases match. Cases must be constant integer values.
The break statements inside the block jump to the end of the block. Without
them, executing the switch with numberOfCows equal to 1 would print all three
lines. This can be useful in some circumstances where the same code should be
used for more than one case:
switch(c) {
case 'a':
case 'e':
case 'i':
case 'o':
case 'u':
type = VOWEL;
break;
default:
type = CONSONANT;
break;
}
or when a case “falls through” to the next:
switch(countdownStart) {
case 3:
puts("3");
case 2:
puts("2");
case 1:
puts("1")
case 0:
puts("KABLOOIE!");
break;
default:
puts("I can't count that high!");
break;
}
Note that it is customary to include a break on the last case even though it has
no effect; this avoids problems later if a new case is added. It is also customary
to include a default case even if the other cases supposedly exhaust all the
possible values, as a check against bad or unanticipated inputs.
switch(oliveSize) {
case JUMBO:
eatOlives(SLOWLY);
break;
case COLLOSSAL:
eatOlives(QUICKLY);
break;
case SUPER_COLLOSSAL:
eatOlives(ABSURDLY);
break;
default:
/* unknown size! */
abort();
break;
}
Though switch statements are better than deeply nested if/else-if constructions,
it is often even better to organize the different cases as data rather than code.
We’ll see examples of this when we talk about function pointers.
Nothing in the C standards prevents the case labels from being buried inside
other compound statements. One rather hideous application of this fact is Duff’s
device.
4.7.2.2 Loops
There are three kinds of loops in C: the while loop, the do..while loop, and
the for loop. The while loop tests its condition at the top of each pass; here is
one that copies its input to its output one character at a time:

#include <stdio.h>

int
main(int argc, char **argv)
{
    int c;

    while((c = getchar()) != EOF) {
        putchar(c);
    }

    return 0;
}
Note that the expression inside the while argument both assigns the return
value of getchar to c and tests to see if it is equal to EOF (which is returned
when no more input characters are available). This is a very common idiom in C
programs. Note also that even though c holds a single character, it is declared
as an int. The reason is that EOF (a constant defined in stdio.h) is outside the
normal character range, and if you assign it to a variable of type char it will be
quietly truncated into something else. Because C doesn’t provide any sort of
exception mechanism for signalling unusual outcomes of function calls, designers
of library functions often have to resort to extending the output of a function to
include an extra value or two to signal failure; we’ll see this a lot when the null
pointer shows up in the chapter on pointers.
The do..while loop is like the while loop, except that the test is done at the
bottom of the loop instead of the top, which means that the body of the do..while
loop is always executed at least once.
Here’s a loop that does a random walk until it gets back to 0 (if ever). If we
changed the do..while loop to a while loop, it would never take the first step,
because pos starts at 0.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int
main(int argc, char **argv)
{
int pos = 0; /* position of random walk */
do {
pos += random() & 0x1 ? +1 : -1;
printf("%d\n", pos);
} while(pos != 0);
return 0;
}
examples/statements/randomWalk.c
The do..while loop is used much less often in practice than the while loop.
It is theoretically possible to convert a do..while loop to a while loop by making
an extra copy of the body in front of the loop, but this is not recommended
since it’s almost always a bad idea to duplicate code.
A for loop packs an initialization, a test, and an increment expression into its
header: the initialization runs once, then the body repeats as long as the test is
true, with the increment running after each pass. For example, this loop counts
down from 10 to 0:

for(i = 10; i >= 0; i--) {
    printf("%d\n", i);
}

/* this loop advances two variables at once by using the comma operator */
for(i = 0, power = 1; power < n; i++, power *= 2) {
    printf("2^%d = %d\n", i, power);
}

Any for loop can be unpacked into an equivalent while loop by doing the
initialization before the loop and the increment at the end of the body:

i = 0;
while(i < 10) {
    printf("%d\n", i);
    i++;
}
The break statement immediately exits the innermost enclosing loop. A typical
use is to stop as soon as some condition is detected, as in this loop, which stops
opening doors as soon as it finds one that is booby-trapped:

for(i = 0; i < numberOfDoors; i++) {   /* numberOfDoors is whatever bound applies */
    openDoorNumber(i);
    if(boobyTrapped()) {
        break;
    }
}
The continue statement skips to the next iteration. Here is a program with a
loop that iterates through all the integers from -10 through 10, skipping 0:
#include <stdio.h>
int
main(int argc, char **argv)
{
int n;
return 0;
}
examples/statements/inverses.c
Occasionally, one would like to break out of more than one nested loop. The
way to do this is with a goto statement.
for(i = 0; i < n; i++) {
for(j = 0; j < n; j++) {
doSomethingTimeConsumingWith(i, j);
if(checkWatch() == OUT_OF_TIME) {
goto giveUp;
}
}
}
giveUp:
puts("done");
The target for the goto is a label, which is just an identifier followed by a colon
and a statement (the empty statement ; is ok).
The goto statement can be used to jump anywhere within the same function
body, but breaking out of nested loops is widely considered to be its only
genuinely acceptable use in normal code.
4.7.2.3 Choosing where to put a loop exit
Choosing where to put a loop exit is usually pretty obvious: you want it after
any code that you want to execute at least once, and before any code that you
want to execute only if the termination test fails.
If you know in advance what values you are going to be iterating over, you will
most likely be using a for loop:
for(i = 0; i < n; i++) {
a[i] = 0;
}
Most of the rest of the time, you will want a while loop:
while(!done()) {
doSomething();
}
The do..while loop comes up mostly when you want to try something, then try
again if it failed:
do {
result = fetchWebPage(url);
} while(result == 0);
Finally, leaving a loop in the middle using break can be handy if you have
something extra to do before trying again:
for(;;) {
result = fetchWebPage(url);
if(result != 0) {
break;
}
/* else */
fprintf(stderr, "fetchWebPage failed with error code %03d\n", result);
sleep(retryDelay); /* wait before trying again */
}
(Note the empty for loop header means to loop forever; while(1) also works.)
4.8 Functions
4.8.1 Function definitions
it would be possible to call it as
helloWorld("this is a bogus argument");
without causing an error. The reason is that a function declaration with no
arguments means that the function can take an unspecified number of arguments,
and it’s up to the user to make sure they pass in the right ones. There are good
historical reasons for what may seem like obvious lack of sense in the design
of the language here, and fixing this bug would break most C code written
before 1989. But you shouldn’t ever write a function declaration with an empty
argument list, since you want the compiler to know when something goes wrong.
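As a quick illustration (the function names here are made up), it is writing void between the parentheses that actually promises the compiler that a function takes no arguments:

int noPrototype();        /* unknown arguments: noPrototype(42, "junk") will still compile */
int withPrototype(void);  /* no arguments: withPrototype(42) is a compile-time error */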
As with any kind of abstraction, there are two goals to making a function:
• Encapsulation: If you have some task to carry out that is simple to de-
scribe from the outside but messy to understand from the inside, wrapping
it in a function lets somebody carry out this task without having to know
the details. This is also useful if you want to change the implementation
later.
• Code re-use: If you find yourself writing the same lines of code in several
places (or worse, are tempted to copy a block of code to several places),
you should probably put this code in a function (or perhaps more than one
function, if there is no succinct way to describe what this block of code is
doing).
Both of these goals may be trumped by the goal of making your code under-
standable. If you can’t describe what a function is doing in a single, simple
sentence, this is a sign that maybe you need to restructure your code. Having a
function that does more than one thing (or does different things depending on
its arguments) is likely to lead to confusion. So, for example, this is not a good
function definition:
/*** ### UGLY CODE AHEAD ### ***/
/*
* If getMaximum is true, return maximum of x and y,
* else return minimum.
*/
int
computeMaximumOrMinimum(int x, int y, int getMaximum)
{
if(x > y) {
if(getMaximum) {
return x;
} else {
return y;
}
} else {
if(getMaximum) {
return y;
} else {
return x;
}
}
}
Better would be to write two functions:
/* return the maximum of x and y */
int
maximum(int x, int y)
{
if(x > y) {
return x;
} else {
return y;
}
}
/* return the minimum of x and y */
int
minimum(int x, int y)
{
    if(x < y) {
        return x;
    } else {
        return y;
    }
}
printIntWithNewline(2+5); /* this could do anything */
printf("%d\n", 2+7); /* this does exactly what it says */
As with all caveats, this caveat comes with its own caveat: what might justify
a function like this is if you want to be able to do some kind of specialized
formatting that should be consistent for all values of a particular form. So you
might write a printDistance function like the above as a stub for a fancier
function that might use different units at different scales or something.
A similar issue will come up with non-syntactic macros, which also tend to fail
the “does this make my code more or less understandable” test. Usually it is a
bad idea to try to replace common C idioms.
A function call consists of the function followed by its arguments (if any) inside
parentheses, separated by commas. For a function with no arguments, call it
with nothing between the parentheses. A function call that returns a value can
be used in an expression just like a variable. A call to a void function can only
be used as an expression by itself:
totalDistance += distSquared(x1 - x2, y1 - y2);
helloWorld();
greetings += helloWorld(); /* ERROR */
/* returns 1 if n is prime, 0 otherwise */
/* assumes n is at least 2 */
int
isPrime(int n)
{
    int i;

    for(i = 2; i*i <= n; i++) {
        if (n % i == 0) {
            /* found a factor */
            return 0;
        }
    }

    /* no factors */
    return 1;
}
examples/functions/isPrime.c
By default, functions have global scope: they can be used anywhere in your
program, even in other files. If a file doesn’t contain a declaration for a function
someFunc before it is used, the compiler will assume that it is declared like
int someFunc() (i.e., return type int and unknown arguments). This can
produce infuriating complaints later when the compiler hits the real declaration
and insists that your function someFunc should be returning an int and you are
a bonehead for declaring it otherwise.
The solution to such insulting compiler behavior is to either (a) move the
function definition before any functions that use it; or (b) put in a declaration
without a body before any functions that use it, in addition to the declaration
that appears in the function definition. (Note that this violates the no separate
but equal rule, but the compiler should tell you when you make a mistake.)
Option (b) is generally preferred, and is the only option when the function is
used in a different file.
To make sure that all declarations of a function are consistent, the usual practice
is to put them in an include file. For example, if distSquared is used in a lot
of places, we might put it in its own file distSquared.c:
#include "distSquared.h"
int
distSquared(int dx, int dy)
{
return dx*dx + dy*dy;
}
examples/functions/distSquared.c
The file distSquared.c above uses #include to include a copy of the following
header file distSquared.h:
/* Returns the square of the distance between two points separated by
dx in the x direction and dy in the y direction. */
int distSquared(int dx, int dy);
examples/functions/distSquared.h
Note that the declaration in distSquared.h doesn’t have a body. Instead, it’s
terminated by a semicolon, like a variable declaration. It’s also worth noting
that we moved the documenting comment to distSquared.h: the idea is that
distSquared.h is the public face of this (very small one-function) module, and
so the explanation of how to use the function should be there.
The reason distSquared.c includes distSquared.h is to get the compiler to
verify that the declarations in the two files match. But to use the distSquared
function, we also put #include "distSquared.h" at the top of the file that
uses it:
#include "distSquared.h"
int
tooClose(int x1, int y1, int x2, int y2)
{
return distSquared(x1 - x2, y1 - y2) < THRESHOLD;
}
examples/functions/tooClose.c
The #include on line 1 uses double quotes instead of angle brackets; this tells
the compiler to look for distSquared.h in the current directory instead of the
system include directory (typically /usr/include).
By default, all functions are global; they can be used in any file of your program
whether or not a declaration appears in a header file. To restrict access to the
current file, declare a function static, like this:
static void
helloHelper(void)
{
puts("hi!");
}
void
hello(int repetitions)
{
int i;
for(i = 0; i < repetitions; i++) {
helloHelper();
}
}
examples/functions/staticHello.c
The function hello will be visible everywhere. The function helloHelper will
only be visible in the current file.
It’s generally good practice to declare a function static unless you intend to
make it available, since not doing so can cause namespace conflicts, where
the presence of two functions with the same name either prevents the program
from linking or—even worse—causes the wrong function to be called. The latter
can happen with library functions, since C allows the programmer to override
library functions by defining a new function with the same name. Early on
in my career as a C programmer, I once had a program fail in a spectacularly
incomprehensible way because I’d written a select function without realizing
that select is a core library function in Unix.
A function may contain definitions of local variables, which are visible only
inside the function and which survive only until the function returns. These
may be declared at the start of any block (group of statements enclosed by
braces), but it is conventional to declare all of them at the outermost block of
the function.
/* Given n, compute n! = 1*2*...*n */
/* Warning: will overflow on 32-bit machines if n > 12 */
int
factorial(int n)
{
    int i;
    int product;

    product = 1;

    for(i = 1; i <= n; i++) {
        product *= i;
    }

    return product;
}
examples/functions/factorial.c
Several things happen under the hood when a function is called. Since a function
can be called from several different places, the CPU needs to store its previous
state to know where to go back to. It also needs to allocate space for function
arguments and local variables.
Some of this information will be stored in registers, memory locations built
into the CPU itself, but most will go on the stack, a region of memory that on
typical machines grows downward, even though the most recent additions to the
stack are called the “top” of the stack. The location of the top of the stack is
stored in the CPU in a special register called the stack pointer.
So a typical function call looks like this internally:
1. The current instruction pointer or program counter value, which
gives the address of the next line of machine code to be executed, is pushed
onto the stack.
2. Any arguments to the function are copied either into specially designated
registers or onto new locations on the stack. The exact rules for how to do
this vary from one CPU architecture to the next, but a typical convention
might be that the first few arguments are copied into registers and the rest
(if any) go on the stack.
3. The instruction pointer is set to the first instruction in the code for the
function.
4. The code for the function allocates additional space on the stack to hold
its local variables (if any) and to save copies of the values of any registers
it wants to use (so that it can restore their contents before returning to its
caller).
5. The function body is executed until it hits a return statement.
6. Returning from the function is the reverse of invoking it: any saved registers
are restored from the stack, the return value is copied to a standard register,
and the values of the instruction pointer and stack pointer are restored to
what they were before the function call.
From the programmer’s perspective, the important point is that both the argu-
ments and the local variables inside a function are stored in freshly-allocated
locations that are thrown away after the function exits. So after a function call
the state of the CPU is restored to its previous state, except for the return value.
Any arguments that are passed to a function are passed as copies, so changing
the values of the function arguments inside the function has no effect on the
caller. Any information stored in local variables is lost.
Under very rare circumstances, it may be useful to have a variable local to a
function that persists from one function call to the next. You can do so by
declaring the variable static. For example, here is a function that counts how
many times it has been called:
/* return the number of times the function has been called */
int
counter(void)
{
static int count = 0;
return ++count;
}
examples/functions/staticCounter.c
Static local variables are stored outside the stack with global variables, and have
unbounded extent. But they are only visible inside the function that declares
them. This makes them slightly less dangerous than global variables—there is no
fear that some foolish bit of code elsewhere will quietly change their value—but
it is still the case that they usually aren’t what you want. It is also likely that
operations on static variables will be slightly slower than operations on ordinary
(“automatic”) variables, since making them persistent means that they have to
be stored in (slow) main memory instead of (fast) registers.
4.9 Pointers
Memory in a typical modern computer is divided into two classes: a small number
of registers, which live on the CPU chip and perform specialized functions like
keeping track of the location of the next machine code instruction to execute
or the current stack frame, and main memory, which (mostly) lives outside
the CPU chip and which stores the code and data of a running program. When
the CPU wants to fetch a value from a particular location in main memory, it
must supply an address: a 32-bit or 64-bit unsigned integer on typical current
architectures, referring to one of up to 2^32 or 2^64 distinct 8-bit locations in the
memory. These integers can be manipulated like any other integer; in C, they
appear as pointers, a family of types that can be passed as arguments, stored
in variables, returned from functions, etc.
A pointer variable is a variable that holds a pointer, just like an int variable
is a variable that holds an int.
4.9.2.1 Declaring a pointer variable
The convention in C is that the declaration of a complex type looks like its use.
To declare a pointer-valued variable, write a declaration for the thing that it
points to, but include a * before the variable name:
int *pointerToInt;
double *pointerToDouble;
char *pointerToChar;
char **pointerToPointerToChar;
These declarations create four pointer variables, named pointerToInt,
pointerToDouble, pointerToChar, and pointerToPointerToChar. On a
typical 64-bit machine, each will be allocated 8 bytes, enough to represent an
address in memory.
The contents of these variables are initially arbitrary: to use them, you will need
to compute the address of something and assign it to the variable.
The & (address-of) operator computes the address of a variable, which can then
be stored in a pointer variable of the appropriate type. The unary * operator
goes the other way, turning a pointer back into the variable it points to:

int n;     /* an int variable */
int *p;    /* a pointer to an int */

p = &n;    /* p now holds the address of n */

*p = 2;         /* sets n to 2 */
*p = *p + *p;   /* sets n to 4 */
The * operator binds very tightly, so you can usually use *p anywhere you could
use the variable it points to without worrying about parentheses. However, a
few operators, such as the -- and ++ operators and the . operator used to
unpack structs, bind tighter. These require parentheses if you want the * to take
precedence.
(*p)++; /* increment the value pointed to by p */
*p++; /* WARNING: increments p itself */
The following program shows where the compiler puts variables with different
storage classes, by printing their addresses:

#include <stdio.h>
#include <stdlib.h>

int G;   /* global variable, stored in BSS segment */

int
main(int argc, char **argv)
{
    static int s; /* static local variable, stored in BSS segment */
    int a;        /* automatic variable, stored on stack */
    int *p;       /* pointer variable for malloc below */

    /* obtain a block big enough for one int from the heap */
    p = malloc(sizeof(int));

    /* print the address of each variable, the malloc'd block, and main */
    printf("&G = %p\n", (void *) &G);
    printf("&s = %p\n", (void *) &s);
    printf("&a = %p\n", (void *) &a);
    printf("&p = %p\n", (void *) &p);
    printf("p = %p\n", (void *) p);
    printf("main = %p\n", (void *) main);

    free(p);
    return 0;
}
examples/pointers/lookingAtPointers.c
When I run this on a Mac OS X 10.6 machine after compiling with gcc, the
output is:
&G = 0x100001078
&s = 0x10000107c
&a = 0x7fff5fbff2bc
&p = 0x7fff5fbff2b0
p = 0x100100080
main = 0x100000e18
The interesting thing here is that we can see how the compiler chooses to allocate
space for variables based on their storage classes. The global variable G and the
static local variable s both persist between function calls, so they get placed in
the BSS segment (see .bss) that starts somewhere around 0x100000000, typically
after the code segment containing the actual code of the program. Local variables
a and p are allocated on the stack, which grows down from somewhere near the
top of the address space. The block returned from malloc that p points to is
allocated off the heap, a region of memory that may also grow over time and
starts after the BSS segment. Finally, main appears at 0x100000e18; this is in
the code segment, which is a bit lower in memory than all the global variables.
The special value 0, known as the null pointer, may be assigned to a pointer
of any type. It may or may not be represented by the actual address 0, but
it will act like 0 in all contexts (e.g., it has the value false in an if or while
statement). Null pointers are often used to indicate missing data or failed
functions. Attempting to dereference a null pointer can have catastrophic effects,
so it’s important to be aware of when you might be supplied with one.
A simple application of pointers is to get around C’s limit on having only one
return value from a function. Because C arguments are copied, assigning a value
to an argument inside a function has no effect on the outside. So the doubler
function below doesn’t do much:
#include <stdio.h>
/* doesn't work */
void
doubler(int x)
{
x *= 2;
}
int
main(int argc, char **argv)
{
int y;
y = 1;
doubler(y); /* no effect on y */
return 0;
}
examples/pointers/badDoubler.c
However, if instead of passing the value of y into doubler we pass a pointer to
y, then the doubler function can reach out of its own stack frame to manipulate
y itself:
#include <stdio.h>
void
doubler(int *x)
{
*x *= 2;
}
int
main(int argc, char **argv)
{
int y;
y = 1;
doubler(&y); /* sets y to 2 */
return 0;
}
examples/pointers/goodDoubler.c
Generally, if you pass the value of a variable into a function (with no &), you
can be assured that the function can’t modify your original variable. When you
pass a pointer, you should assume that the function can and will change the
variable’s value. If you want to write a function that takes a pointer argument
but promises not to modify the target of the pointer, use const, like this:
void
printPointerTarget(const int *p)
{
printf("%d\n", *p);
}
The const qualifier tells the compiler that the target of the pointer shouldn't be
modified, and it will report an error if you try to assign to the target anyway:
void
printPointerTarget(const int *p)
{
*p = 5; /* produces compile-time error */
printf("%d\n", *p);
}
Passing const pointers is mostly used when passing large structures to functions,
where copying a 32-bit pointer is cheaper than copying the thing it points to.
If you really want to modify the target anyway, C lets you “cast away const”:
void
printPointerTarget(const int *p)
{
*((int *) p) = 5; /* no compile-time error */
printf("%d\n", *p);
}
There is usually no good reason to do this. The one exception might be if the
target of the pointer represents an abstract data type, and you want to modify
its representation during some operation to optimize things somehow in a way
that will not be visible outside the abstraction barrier, making it appear to leave
the target constant.
Note that while it is safe to pass pointers down into functions, it is very dangerous
to pass pointers up. The reason is that the space used to hold any local variable
of the function will be reclaimed when the function exits, but the pointer will
still point to the same location, even though something else may now be stored
there. So this function is very dangerous:
int *
dangerous(void)
{
int n;
...
return &n;
}
...
Because pointers are just numerical values, one can do arithmetic on them.
Specifically, as the short example after this list illustrates, it is permitted to
• Add an integer to a pointer or subtract an integer from a pointer. The
effect of p+n where p is a pointer and n is an integer is to compute the
address equal to p plus n times the size of whatever p points to (this is
why int * pointers and char * pointers aren’t the same).
• Subtract one pointer from another. The two pointers must have the same
type (e.g. both int * or both char *). The result is a signed integer value
of type ptrdiff_t, equal to the numerical difference between the addresses
divided by the size of the objects pointed to.
• Compare two pointers using ==, !=, <, >, <=, or >=.
• Increment or decrement a pointer using ++ or --.
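Here is a small sketch of all four operations, using an ordinary int array to supply some valid addresses (the array contents are made up for illustration):

#include <stdio.h>

int
main(int argc, char **argv)
{
    int a[5] = { 10, 20, 30, 40, 50 };
    int *p = a;       /* points to a[0] */
    int *q = a + 3;   /* points to a[3] */

    printf("%d\n", *(p + 2));   /* prints 30 */
    printf("%td\n", q - p);     /* prints 3: difference in elements, not bytes */
    printf("%d\n", p < q);      /* prints 1: p points earlier in the array */

    ++p;                        /* p now points to a[1] */
    printf("%d\n", *p);         /* prints 20 */

    return 0;
}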
4.9.5.1 Arrays
The main application of pointer arithmetic in C is in arrays. An array is a
block of memory that holds one or more objects of a given type. It is declared
by giving the type of object the array holds followed by the array name and the
size in square brackets:
int a[50]; /* array of 50 ints */
char *cp[100]; /* array of 100 pointers to char */
Declaring an array allocates enough space to hold the specified number of objects
(e.g. 200 bytes for a above and 400 for cp, assuming 4-byte ints and pointers—note that a char * is an address, so
it is much bigger than a char). The number inside the square brackets must be
a constant whose value can be determined at compile time.
The array name acts like a constant pointer to the zeroth element of the array.
It is thus possible to set or read the zeroth element using *a. But because the
array name is constant, you can’t assign to it:
*a = 12; /* sets zeroth element to 12 */

a = &n; /* #### DOESN'T WORK #### */
More common is to use square brackets to refer to a particular element of the
array. The expression a[n] is defined to be equivalent to *(a+n); the index n
(an integer) is added to the base of the array (a pointer), to get to the location
of the n-th element of a. The implicit * then dereferences this location so that
you can read its value (in a normal expression) or assign to it (on the left-hand
side of an assignment operator). The effect is to allow you to use a[n] just as
you would any other variable of type int (or whatever type a was declared as).
Note that C doesn’t do any sort of bounds checking. Given the declaration
int a[50];, only indices from a[0] to a[49] can be used safely. However, the
compiler will not blink at a[-12] or a[10000]. If you read from such a location
you will get garbage data; if you write to it, you will overwrite god-knows-what,
possibly trashing some other variable somewhere else in your program or some
critical part of the stack (like the location to jump to when you return from
a function). It is up to you as a programmer to avoid such buffer overruns,
which can lead to very mysterious (and in the case of code that gets input from
a network, security-damaging) bugs. The valgrind program can help detect such
overruns in some cases.
Another curious feature of the definition of a[n] as identical to *(a+n) is that
it doesn’t actually matter which of the array name or the index goes inside the
brackets. So all of a[0], *a, and 0[a] refer to the zeroth entry in a. Unless you
are deliberately trying to obfuscate your code, it’s best to write what you mean.
Here is a function that adds up the values in an array, given the array and its
size:

/* return the sum of the values in a, an array of size n */
int
sumArray(int n, const int *a)
{
    int i;
    int sum;

    sum = 0;
    for(i = 0; i < n; i++) {
        sum += a[i];
    }
    return sum;
}
examples/pointers/sumArray.c
Note the use of const to promise that sumArray won’t modify the contents of a.
Another way to write the function header is to declare a as an array of unknown
size:
/* return the sum of the values in a, an array of size n */
int
sumArray(int n, const int a[])
{
...
}
This has exactly the same meaning to the compiler as the previous definition.
Even though normally the declarations int a[10] and int *a mean very differ-
ent things (the first one allocates space to hold 10 ints, and prevents assigning
a new value to a), in a function argument int a[] is just syntactic sugar for
int *a. You can even modify what a points to inside sumArray by assigning to
it. This will allow you to do things that you usually don’t want to do, like write
this hideous routine:
/* return the sum of the first n values in a */
int
sumArray(int n, const int a[])
153
{
const int *an; /* pointer to first element not in a */
int sum;
sum = 0;
an = a+n;
return sum;
}
rows, of type int **. The downside of this approach is that the array is no
longer contiguous (which may affect cache performance) and it requires reading
a pointer to find the location of a particular value, instead of just doing address
arithmetic starting from the base address of the array. But elements can still be
accessed using the a[i][j] syntax. An example of this approach is given below:
/* Demo program for malloc'd two-dimensional arrays */
#include <stdio.h>
#include <stdlib.h>
if(a[i] == 0) {
/* note that 0 in a[i] will stop free2d after it frees previous rows */
free2d(a);
return 0;
}
}
return a;
}
int
main(int argc, char **argv)
{
int rows;
int cols;
int **a;
int i;
int j;
if(argc != 3) {
fprintf(stderr, "Usage: %s rows cols\n", argv[0]);
return 1;
}
/* else */
rows = atoi(argv[1]);
cols = atoi(argv[2]);
for(j = 0; j < cols; j++) {
printf("%4d", a[i][j]);
}
putchar('\n');
}
return 0;
}
examples/pointers/malloc2d.c
In C99, a function can declare an array parameter whose size is given by an
earlier parameter:

/* return the sum of the values in a, an array of size n */
int
sumArray(int n, const int a[n])
{
    int i;
    int sum;

    sum = 0;
    for(i = 0; i < n; i++) {
        sum += a[i];
    }
    return sum;
}
This doesn’t accomplish much, because the length of the array is not used.
However, it does become useful if we have a two-dimensional array, as otherwise
there is no way to compute the length of each row:
int
sumMatrix(int rows, int cols, const int m[rows][cols])
{
    int i;
    int j;
    int sum;

    sum = 0;
    for(i = 0; i < rows; i++) {
        for(j = 0; j < cols; j++) {
            sum += m[i][j];
        }
    }
    return sum;
}
Here the fact that each row of m is known to be an array of cols many ints makes
the implicit pointer computation in m[i][j] actually work. It is considerably
more difficult to do this in ANSI C; the simplest approach is to pack m into a
one-dimensional array and do the address computation explicitly:
int
sumMatrix(int rows, int cols, const int a[])
{
    int i;
    int j;
    int sum;

    sum = 0;
    for(i = 0; i < rows; i++) {
        for(j = 0; j < cols; j++) {
            sum += a[i*cols + j];
        }
    }
    return sum;
}
Variable-length arrays can sometimes be used for run-time storage allocation, as
an alternative to malloc and free (see below). A variable-length array allocated
as a local variable will be deallocated when the containing scope (usually a
function body, but maybe just a compound statement marked off by braces)
exits. One consequence of this is that you can’t return a variable-length array
from a function.
Here is an example of code using this feature:
/* reverse an array in place */
void
reverseArray(int n, int a[n])
{
    /* algorithm: copy to a new array in reverse order */
    /* then copy back */
    int i;
    int copy[n];

    for(i = 0; i < n; i++) {
        copy[i] = a[n - 1 - i];
    }
    for(i = 0; i < n; i++) {
        a[i] = copy[i];
    }
}

Without a variable-length array, the temporary copy has to come from malloc
and go back to free when we are done with it:

void
reverseArray(int n, int a[])
{
    int i;
    int *copy;

    copy = malloc(n * sizeof(int));
    for(i = 0; i < n; i++) {
        copy[i] = a[n - 1 - i];
    }
    for(i = 0; i < n; i++) {
        a[i] = copy[i];
    }
    free(copy);
}
A special pointer type is void *, a “pointer to void”. Such pointers are declared
in the usual way:
void *nothing; /* pointer to nothing */
Unlike ordinary pointers, you can’t dereference a void * pointer or do arithmetic
on it, because the compiler doesn’t know what type it points to. However, you
are allowed to use a void * as a kind of “raw address” pointer value that you
can store arbitrary pointers in. It is permitted to assign to a void * variable
from an expression of any pointer type; conversely, a void * pointer value
can be assigned to a pointer variable of any type. An example is the return
value of malloc or the argument to free, both of which are declared as void *.
(Note that K&R suggests using an explicit cast for the return value of malloc.
This is now acknowledged by the authors to be an error, which arose from
the need for a cast prior to the standardization of void * in ANSI C. See
http://cm.bell-labs.com/cm/cs/cbook/2ediffs.html.)
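A small illustration of both directions of this conversion (no casts are needed in C):

int n = 5;
int *ip = &n;
void *vp;

vp = ip;   /* any object pointer can be stored in a void * */
ip = vp;   /* and a void * can be assigned back to a typed pointer */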
int *block;
4.9.6.1 Alignment
One issue with casting pointers to and from void * is that you may violate the
alignment restrictions for a particular kind of pointer on some architectures.
Back in the 8-bit era of the 1970s, a single load or store operation would access
a single byte of memory, and because some data (chars) are still only one byte
wide, C pointers retain the ability to address individual bytes. But present-day
memory architectures typically have a wider data path, and the CPU may load
or store as many as 8 bytes (64 bits) in a single operation. This makes it natural
to organize memory into 4-byte or 8-byte words even though addresses still refer
to individual bytes. The effect of the memory architecture is that the address of
memory words must be aligned to a multiple of the word size: so with 4-byte
words, the address 0x1037ef44 (a multiple of 4) could refer to a full word, but
0x1037ef45 (one more than a multiple of 4) could only be used to refer to a
byte within a word.
What this means for a C program depends on your particular CPU and compiler.
If you try to use something like 0x1037ef45 as an int *, one of three things
might happen:
1. The CPU might load the 4 bytes starting at this address, using two accesses
to memory to piece together the full int out of fragments of words. This
is done on Intel architectures, but costs performance.
2. The CPU might quietly zero out the last two bits of the address, loading
from 0x1037ef44 even though you asked for 0x1037ef45. This happens
on some other architectures, notably ARM.
3. The CPU might issue a run-time exception.
All of these outcomes are bad, and the C standard does not specify what happens
if you try to dereference a pointer value that does not satisfy the alignment
restrictions of its target type. Fortunately, unless you are doing very nasty things
with casts, this is unlikely to come up, because any pointer value you will see in
a typical program is likely to arise in one of three ways:
1. By taking the address of some variable. This pointer will be appropriately
aligned, because the compiler allocates space for each variable (including
fields within structs) with appropriate alignment.
2. By computing an offset address using pointer arithmetic either explicitly
(p + n) or implicitly (p[n]). In either case, as long as the base pointer is
correctly aligned, the computed pointer will also be correctly aligned.
3. By obtaining a pointer to an allocated block of memory using malloc or a
similar function. Here malloc is designed to always return blocks with the
maximum possible required alignment, just to avoid problems when you
use the results elsewhere.
On many compilers, you can use __alignof(type) to get the alignment restriction
for a particular type. This was formalized in C11 without the underscores:
alignof. Usually, if your code needs to use __alignof or alignof, something
has already gone wrong.
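If you are curious anyway, here is a quick C11 sketch that prints a few alignments; the exact numbers depend on your platform:

#include <stdio.h>
#include <stdalign.h>

int
main(int argc, char **argv)
{
    printf("alignof(char) == %zu\n", (size_t) alignof(char));
    printf("alignof(int) == %zu\n", (size_t) alignof(int));
    printf("alignof(double) == %zu\n", (size_t) alignof(double));

    return 0;
}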
The other place where alignment can create issues is that if you make a struct
with components with different alignment restrictions, you may end up with
some empty space. For example, on a machine that enforces 4-byte alignment for
ints, building a struct that contains a char and an int will give you something
bigger than you might expect:
#include <stdio.h>
struct ci {
char c; /* offset 0 */
/* 3 unused bytes go here */
int i; /* offset 4 */
};
struct ic {
int i; /* offset 0 */
char c; /* offset 4 */
/* 3 unused bytes go here */
};
int
main(int argc, char **argv)
{
printf("sizeof(struct ci) == %lu\n", sizeof(struct ci));
printf("sizeof(struct ic) == %lu\n", sizeof(struct ic));
return 0;
}
examples/alignment/structPacking.c
$ c99 -Wall -o structPacking structPacking.c
$ ./structPacking
sizeof(struct ci) == 8
sizeof(struct ic) == 8
In both cases, the compiler packs in an extra 3 bytes to make the size of the
struct a multiple of the worst alignment of any of its components. If it didn’t
do this, you would have trouble as soon as you tried to make an array of these
things.
C does not generally permit arrays to be declared with variable sizes. C also
doesn’t let local variables outlive the function they are declared in. Both features
can be awkward if you want to build data structures at run time that have
unpredictable (perhaps even changing) sizes and that are intended to persist
longer than the functions that create them. To build such structures, the
standard C library provides the malloc routine, which asks the operating system
for a block of space of a given size (in bytes). With a bit of pushing and shoving,
this can be used to obtain a block of space that for all practical purposes acts
just like an array.
To use malloc, you must include stdlib.h at the top of your program. The
declaration for malloc is
void *malloc(size_t);
where size_t is an integer type (often unsigned long). Calling malloc with
an argument of n allocates and returns a pointer to the start of a block of n
bytes if possible. If the system can’t give you the space you asked for (maybe
you asked for more space than it has), malloc returns a null pointer. It is good
practice to test the return value of malloc whenever you call it.
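A typical check might look something like the following sketch (what you do on failure is up to you; a library routine might instead return 0 to its caller):

int *p;

p = malloc(n * sizeof(int));
if(p == 0) {
    fprintf(stderr, "out of memory\n");
    exit(1);
}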
Because the return type of malloc is void *, its return value can be assigned
to any variable with a pointer type. Computing the size of the block you need is
your responsibility—and you will be punished for any mistakes with difficult-
to-diagnose buffer overrun errors—but this task is made slightly easier by the
built-in sizeof operator that allows you to compute the size in bytes of any
particular data type. A typical call to malloc might thus look something like
this:
#include <stdlib.h>

/* returns a block of n ints, or 0 if the allocation fails */
int *
makeIntArray(int n)
{
    int *a;

    a = malloc(sizeof(int) * n);
    return a;
}

examples/pointers/makeIntArray.c
If you don’t want to do the multiplication yourself, or if you want to guarantee
that the allocated data is initialized to zero, you can use calloc instead of
malloc. The calloc function is also declared in stdlib.h and takes two
arguments: the number of things to allocate, and the size of each thing. Here's
a version of makeIntArray that uses calloc. Aside from zeroing out the data,
it is equivalent to the malloc version.
#include <stdlib.h>

/* returns a block of n ints, all initialized to zero, or 0 if the allocation fails */
int *
makeIntArray(int n)
{
    int *a;

    a = calloc(n, sizeof(int));
    return a;
}

examples/pointers/calloc.c
When you are done with a region allocated using malloc or calloc, you
should return the space to the system using the free routine, also defined in
stdlib.h. If you don’t do this, your program will quickly run out of space. The
free routine takes a void * as its argument and returns nothing. It is good
practice to write a matching destructor that de-allocates an object for each
constructor (like makeIntArray) that makes one.
void
destroyIntArray(int *a)
{
free(a);
}
It is a serious error to do anything at all with a block after it has been freed.
This is not necessarily because free modifies the contents of the block (although
it might), but because when you free a block you are granting the storage
allocator permission to hand the same block out in response to a future call to
malloc, and you don’t want to step on whatever other part of your program is
now trying to use that space.
It is also possible to grow or shrink a previously allocated block. This is done
using the realloc function, which is declared as
void *realloc(void *oldBlock, size_t newSize);
The realloc function returns a pointer to the resized block. It may or may not
allocate a new block. If there is room, it may leave the old block in place and
return its argument. But it may allocate a new block and copy the contents of
the old block, so you should assume that the old pointer has been freed.
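One consequence is that if you overwrite your only copy of the old pointer with the return value and realloc fails, the old block is leaked. A common defensive pattern (sketched here with made-up names) stores the result in a temporary first:

int *bigger;

bigger = realloc(a, newSize * sizeof(int));
if(bigger == 0) {
    /* a still points to the old block here; clean it up before giving up */
    free(a);
    return 0;
}
a = bigger;   /* only replace a once we know the call succeeded */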
Here’s a typical use of realloc to build an array that grows as large as it needs
to be:
/* read numbers from stdin until there aren't any more */
/* returns an array of all numbers read, or null on error */
/* returns the count of numbers read in *count */
int *
readNumbers(int *count /* RETVAL */ )
{
    int mycount; /* number of numbers read */
    int size;    /* size of block allocated so far */
    int *a;      /* block */
    int n;       /* number read */

    mycount = 0;
    size = 1;

    a = malloc(sizeof(int) * size);
    if(a == 0) return 0;

    while(scanf("%d", &n) == 1) {
        /* is there room? */
        while(mycount >= size) {
            /* double the size to avoid calling realloc for every number read */
            size *= 2;
            a = realloc(a, sizeof(int) * size);
            if(a == 0) return 0;
        }

        /* save the new number */
        a[mycount++] = n;
    }

    /* tell the caller how many numbers we got */
    *count = mycount;

    return a;
}
examples/pointers/readNumbers.c
Because errors involving malloc and its friends can be very difficult to spot, it
is recommended to test any program that uses malloc using valgrind.
A function pointer, internally, is just the numerical address for the code for a
function. When a function name is used by itself without parentheses, the value
is a pointer to the function, just as the name of an array by itself is a pointer
to its zeroth element. Function pointers can be stored in variables, structs,
unions, and arrays and passed to and from functions just like any other pointer
type. They can also be called: a variable of type function pointer can be used in
place of a function name.
Function pointers are not used as much in C as in functional languages, but
there are many common uses even in C code.
#include <stdio.h>
int
main(int argc, char **argv)
{
/* function for emitting text */
int (*say)(const char *);
say = puts;
say("hello world");
return 0;
}
4.9.8.2 Callbacks
A callback is when we pass a function pointer into a function, so that the
function we call can call our function back when some event happens or when it
needs to compute something.
A classic example is the comparison argument to qsort, from the standard
library:
/* defined in stdlib.h */
void
qsort(
void *base,
size_t n,
size_t size,
int (*cmp)(const void *key1, const void *key2)
);
This is a generic sorting routine that will sort any array in place. It needs to
know (a) the base address of the array; (b) how many elements there are; (c)
how big each element is; and (d) how to compare two elements. The only tricky
part is supplying the comparison, which could involve arbitrarily-complex code.
So we supply this code as a function with an interface similar to strcmp.
static int
compare_ints(const void *key1, const void *key2)
{
    return *((const int *) key1) - *((const int *) key2);
}

void
sort_int_array(int *a, int n)
{
    qsort(a, n, sizeof(*a), compare_ints);
}
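Assuming the two definitions above appear earlier in the same file, one might use them like this (a minimal sketch):

#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
    int a[] = { 3, 1, 4, 1, 5, 9, 2, 6 };
    int n = sizeof(a) / sizeof(*a);
    int i;

    sort_int_array(a, n);

    for(i = 0; i < n; i++) {
        printf("%d ", a[i]);
    }
    putchar('\n');

    return 0;
}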
Other examples might include things like registering an error handler for a
library, instead of just having it call abort() or something equally catastrophic,
or providing a cleanup function for freeing data passed into a data structure.
A dispatch table is an array of function pointers, indexed by some small value
such as a character code: to handle a value, we look up the corresponding entry
in the array and call it. Here is a simple example, which echoes most of the
characters in its input intact, except for echoing every lowercase vowel twice:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <limits.h>

/*
 * Demonstrate use of dispatch tables.
 */

/* echo c twice */
static int
echoTwice(int c)
{
    putchar(c);
    return putchar(c);
}

int
main(int argc, char **argv)
{
    /* this declares table as an array of function pointers */
    int (*table[UCHAR_MAX+1])(int);
    int i;
    int c;

    /* default action is to echo the character once */
    for(i = 0; i < UCHAR_MAX+1; i++) {
        table[i] = putchar;
    }

    /* lowercase vowels are echoed twice */
    table['a'] = table['e'] = table['i'] = table['o'] = table['u'] = echoTwice;

    while((c = getchar()) != EOF) {
        table[c](c);
    }

    return 0;
}
examples/pointers/dispatchTable.c
And here is the program translating Shakespeare into mock-Swedish:
$ c99 -Wall -pedantic -g3 -o dispatchTable dispatchTable.c
$ echo Now is the winter of our discontent made glorious summer by this sun of York. | ./dispatchTable
Noow iis thee wiinteer oof oouur diiscoonteent maadee glooriioouus suummeer by thiis suun oof Yoork.
In this particular case, we did a lot of work to avoid just writing a switch
statement. But being able to build a dispatch table dynamically can be very
useful sometimes. An example might be a graphical user interface where each
button has an associated function. If buttons can be added by different parts of
the program, using a table mapping buttons to functions allows a single dispatch
routine to figure out where to route button presses.
(For some applications, we might want to pass additional information in to the
function to change its behavior. This can be done by replacing the function
pointers with closures.)
In C99, it is possible to declare that a pointer variable is the only way to reach
its target as long as it is in scope. This is not enforced by the compiler; instead,
it is a promise from the programmer to the compiler that any data reached
through this pointer will not be changed by other parts of the code, which allows
the compiler to optimize code in ways that are not possible if pointers might
point to the same place (a phenomenon called pointer aliasing). For example,
consider the following short function:
// write 1 + *src to *dst and return *src
int
copyPlusOne(int * restrict dst, int * restrict src)
{
*dst = *src + 1;
return *src;
}
For this function, the output of c99 -O3 -S includes one more instruction if
the restrict qualifiers are removed. The reason is that if dst and src may
point to the same location, src needs to be re-read for the return statement, in
case it changed. But if they are guaranteed to point to different locations, the
compiler can re-use the previous value it already has in one of the CPU registers.
For most code, this feature is useless, and potentially dangerous if someone calls
your routine with aliased pointers. However, it may sometimes be possible to
increase performance of time-critical code by adding a restrict keyword. The
cost is that the code might no longer work if called with aliased pointers.
Curiously, C assumes that two pointers are never aliases if you have two arguments
with different pointer types, neither of which is char * or void *.10 This is
known as the strict aliasing rule and cannot be overridden from within the
program source code: there is no unrestrict keyword. You probably only need
to worry about this if you are casting pointers to different types and then passing
the cast pointers around in the same context as the original pointers.
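A sketch of the kind of code where this matters (the function is made up): because lp and ip have different pointer types, the compiler is entitled to assume they do not alias, so if a caller somehow passes two pointers to the same memory, the result is undefined:

long
update(long *lp, int *ip)
{
    *lp = 0;
    *ip = 1;      /* assumed not to touch *lp, since the types differ */
    return *lp;   /* may be compiled as if it always returned 0 */
}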
4.10 Strings
4.10.1 C strings
Because delimited strings are simpler and take less space, C went for delimited
strings. A string is a sequence of characters terminated by a null character '\0'.
Looking back from almost half a century later, this choice may have been a
mistake in the long run, but we are pretty much stuck with it.
Note that the null character is not the same as a null pointer, although both
appear to have the value 0 when used in integer contexts. A string is represented
by a variable of type char *, which points to the zeroth character of the string.
The programmer is responsible for allocating and managing space to store strings,
except for explicit string constants, which are stored in a special non-writable
string space by the compiler.
If you want to use counted strings instead, you can build your own
using a struct. Most scripting languages written in C (e.g. Perl,
Python, PHP, etc.) use this approach inter-
nally. (Tcl is an exception, which is one of many good reasons not to use
Tcl).
a Unicode string with a UTF-8 encoding in a comment, as illustrated in the file
unicode.c. But this use of Unicode in C is very limited.
Some issues you will quickly run into if you are trying to do something more
sophisticated:
1. You cannot use non-ASCII letters anywhere outside a string constant or comment without co
2. If you include a UTF-8 encoded string somewhere, even though both your text editor and te
3. You can't generally put a multibyte character into a `char` variable, or write it as a `c
4. You may find out that some other tools have their own ideas about what encodings to expec
There exist libraries for working with Unicode strings in C, but they are clunky.
If you need to handle a lot of non-ASCII text, you may be better off working
with a different language. However, even moving away from C is not always a
panacea, and Unicode support in other tools may be hit-or-miss.
The problem with string constants is that you can’t modify them. If you want to
build strings on the fly, you will need to allocate space for them. The traditional
approach is to use a buffer, an array of chars. Here is a particularly painful
hello-world program that builds a string by hand:
#include <stdio.h>
int
main(int argc, char **argv)
{
char hi[3];
hi[0] = 'h';
hi[1] = 'i';
hi[2] = '\0';
puts(hi);
return 0;
}
examples/strings/hi.c
Note that the buffer needs to have size at least 3 in order to hold all three
characters. A common error in programming with C strings is to forget to leave
space for the null at the end (or to forget to add the null, which can have comical
results depending on what you are using your surprisingly long string for).
Fixed-size buffers are a common source of errors in older C programs, particularly
ones written with the library routine gets. The problem is that if you do
something like
strcpy(smallBuffer, bigString);
the strcpy function will happily keep copying characters across memory long
after it has passed the end of smallBuffer. While you can avoid this to a
certain extent when you control where bigString is coming from, the situation
becomes particularly fraught if the string you are trying to store comes from
the input, where it might be supplied by anybody, including somebody who is
trying to execute a buffer overrun attack to seize control of your program.
If you do need to read a string from the input, you should allocate the receiving
buffer using malloc and expand it using realloc as needed. Below is a program
that shows how to do this, with some bad alternatives commented out:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

#define INITIAL_LINE_LENGTH (16)   /* starting buffer size; any small positive value works */

/* read a line of text from stdin and return it as a freshly-malloc'd string */
char *
getLine(void)
{
    char *line;  /* buffer holding the line so far */
    int size;    /* space allocated for line */
    int length;  /* number of characters stored so far */
    int c;       /* most recent character read */

    size = INITIAL_LINE_LENGTH;
    line = malloc(size);
    assert(line);

    length = 0;

    while((c = getchar()) != EOF && c != '\n') {
        if(length >= size - 1) {
            /* need more room: double the size and reallocate */
            size *= 2;
            line = realloc(line, size);
            assert(line);
        }
        line[length++] = c;
    }
    line[length] = '\0';

    return line;
}

int
main(int argc, char **argv)
{
    int x = 12;
    /* char name[NAME_LENGTH]; */
    char *line;
    int y = 17;

    /* read one line of input to use as the name */
    line = getLine();

    printf("Hi %s! Did you know that x == %d and y == %d?\n", line, x, y);

    free(line);

    return 0;
}
examples/strings/getLine.c
Here is a version of the library function strcpy, minus the return value:

void
strcpy2(char *dest, const char *src)
{
/* This line copies characters one at a time from *src to *dest. */
/* The postincrements increment the pointers (++ binds tighter than *) */
/* to get to the next locations on the next iteration through the loop. */
/* The loop terminates when *src == '\0' == 0. */
/* There is no loop body because there is nothing to do there. */
while(*dest++ = *src++);
}
The externally visible difference between strcpy2 and the original strcpy is
that strcpy returns a char * equal to its first argument. It is also likely that
any implementation of strcpy found in a recent C library takes advantage of
the width of the memory data path to copy more than one character at a time.
Most C programmers will recognize the while(*dest++ = *src++); idiom from
having seen it before, and experienced C programmers will generally be able to
figure out what such highly abbreviated constructions mean even if they haven't. Exposure to such
constructions is arguably a form of hazing.
Because C pointers act exactly like array names, you can also write strcpy2
using explicit array indices. The result is longer but may be more readable if
you aren’t a C fanatic.
char *
strcpy2a(char *dest, const char *src)
{
int i;

for(i = 0; src[i] != '\0'; i++) {
dest[i] = src[i];
}
/* note that the final null in src is not copied by the loop */
dest[i] = '\0';
return dest;
}
An advantage of using a separate index in strcpy2a is that we don’t trash dest,
so we can return it just like strcpy does. (In fairness, strcpy2 could have saved
a copy of the original location of dest and done the same thing.)
Note that nothing in strcpy2, strcpy2a, or the original strcpy will save you if
dest points to a region of memory that isn’t big enough to hold the string at
src, or if somebody forgot to tack a null on the end of src (in which case strcpy
will just keep going until it finds a null character somewhere). As elsewhere, it’s
your job as a programmer to make sure there is enough room. Since the compiler
has no idea what dest points to, this means that you have to remember how
much room is available there yourself.
If you are worried about overrunning dest, you could use strncpy instead. The
strncpy function takes a third argument that gives the maximum number of
characters to copy; however, if src doesn’t contain a null character in this range,
the resulting string in dest won’t either. Usually the only practical application
of strncpy is to extract the first k characters of a string, as in
/* copy the substring of src consisting of characters at positions
start..end-1 (inclusive) into dest */
/* If end-1 is past the end of src, copies only as many characters as
available. */
/* If start is past the end of src, the results are unpredictable. */
/* Returns a pointer to dest */
char *
copySubstring(char *dest, const char *src, int start, int end)
{
/* copy the substring */
strncpy(dest, src + start, end - start);
return dest;
}
Another quick and dirty way to extract a substring of a string you don’t care
about (and can write to) is to just drop a null character in the middle of the
sacrificial string. This is generally a bad idea unless you are certain you aren’t
going to need the original string again, but it’s a surprisingly common practice
among C programmers of a certain age.
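For example (a throwaway sketch):

char s[] = "hello world";

s[5] = '\0';   /* s now contains just "hello"; the " world" part is gone */
puts(s);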
A similar operation to strcpy is strcat. The difference is that strcat concate-
nates src onto the end of dest, so that if dest previously pointed to "abc" and
src to "def", dest will now point to "abcdef". Like strcpy, strcat returns
its first argument. A no-return-value version of strcat is given below.
void
strcat2(char *dest, const char *src)
{
while(*dest) dest++;
while(*dest++ = *src++);
}
Decoding this abomination is left as an exercise for the reader. There is also a
function strncat which has the same relationship to strcat that strncpy has
to strcpy.
As with strcpy, the actual implementation of strcat may be much more subtle,
and is likely to be faster than rolling your own.
The strlen function returns the number of characters in a string, not counting
the terminating null. Here is an implementation in the same style as strcpy2:

size_t
strlen2(const char *s)
{
    size_t i;

    for(i = 0; *s != '\0'; i++, s++);

    return i;
}
Note the use of the comma operator in the increment step. The comma operator
applied to two expressions evaluates both of them and discards the value of the
first; it is usually used only in for loops where you want to initialize or advance
more than one variable at once.
Like the other string routines, using strlen requires including string.h.
For example, here is a routine that copies every other character of src into dest,
calling strlen in the loop test on every pass:

/* like strcpy, but only copies characters at indices 0, 2, 4, ...
   from src to dest */
char *
copyEvenCharactersBadVersion(char *dest, const char *src)
{
    int i;
    int j;

    for(i = 0, j = 0; i < strlen(src); i += 2, j++) {
        dest[j] = src[i];
    }

    dest[j] = '\0';
    return dest;
}
The problem is that strlen has to scan all of src every time the test is done,
which adds time proportional to the length of src to each iteration of the loop.
So copyEvenCharactersBadVersion takes time proportional to the square of
the length of src.
Here’s a faster version:
/* like strcpy, but only copies characters at indices 0, 2, 4, ...
from src to dest */
char *
copyEvenCharacters(char *dest, const char *src)
{
int i;
int j;
int len; /* length of src */
len = strlen(src);

for(i = 0, j = 0; i < len; i += 2, j++) {
    dest[j] = src[i];
}

dest[j] = '\0';
return dest;
}
Because it doesn’t call strlen all the time, this version of copyEvenCharacters
will run much faster than the original even on small strings, and several million
times faster if src is megabytes long.
If you want to test if strings s1 and s2 contain the same characters, writing
s1 == s2 won’t work, since this tests instead whether s1 and s2 point to the
same address. Instead, you should use strcmp, declared in string.h. The
strcmp function walks along both of its arguments until it either hits a null
on both and returns 0, or hits two different characters, and returns a positive
integer if the first string’s character is bigger and a negative integer if the second
string’s character is bigger (a typical implementation will just subtract the two
characters). A straightforward implementation might look like this:
int
strcmp(const char *s1, const char *s2)
{
while(*s1 && *s2 && *s1 == *s2) {
s1++;
s2++;
}

return *s1 - *s2;
}
You can write formatted output to a string buffer with sprintf just like you
can write it to stdout with printf or to a file with fprintf. Make sure when
you do so that there is enough room in the buffer you are writing to, or the
usual bad things will happen.
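Here is a small sketch; it also shows the C99 snprintf variant, which takes the size of the buffer and will not write past it:

#include <stdio.h>

int
main(int argc, char **argv)
{
    char buf[64];

    /* like printf, but the result goes into buf instead of stdout */
    sprintf(buf, "%d + %d == %d", 2, 3, 2 + 3);
    puts(buf);

    /* writes at most sizeof(buf) bytes, including the terminating null */
    snprintf(buf, sizeof(buf), "greetings, %s", "world");
    puts(buf);

    return 0;
}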
When allocating space for a copy of a string s using malloc, the required space
is strlen(s)+1. Don’t forget the +1, or bad things may happen.11
11 In this case you will get lucky most of the time, since the odds are that malloc will give you
a block that is slightly bigger than strlen(s) anyway. But bugs that only manifest themselves
occasionally are even worse than bugs that kill your program every time, because they are
much harder to track down.
Because allocating space for a copy of a string is such a common operation, many
C libraries provide a strdup function that does exactly this. If you don’t have
one (it’s not required by the C standard), you can write your own like this:
/* return a freshly-malloc'd copy of s */
/* or 0 if malloc fails */
/* It is the caller's responsibility to free the returned string when done. */
char *
strdup(const char *s)
{
char *s2;
s2 = malloc(strlen(s)+1);
if(s2 != 0) {
strcpy(s2, s);
}
return s2;
}
Exercise: Write a function strcatAlloc that returns a freshly-malloc’d string
that concatenates its two arguments. Exactly how many bytes do you need to
allocate?
Now that we know about strings, we can finally do something with argc and
argv.
Recall that argv in main is declared as char **; this means that it is a pointer
to a pointer to a char, or in this case the base address of an array of pointers
to char, where each such pointer references a string. These strings correspond
to the command-line arguments to your program, with the program name itself
appearing in argv[0].12
The count argc counts all arguments including argv[0]; it is 1 if your program
is called with no arguments and larger otherwise.
Here is a program that prints its arguments. If you get confused about what
argc and argv do, feel free to compile this and play with it:
#include <stdio.h>

int
main(int argc, char **argv)
{
    int i;

    /* print each argument on its own line */
    for(i = 0; i < argc; i++) {
        puts(argv[i]);
    }

    return 0;
}

examples/strings/printArgs.c

12 Some programs (e.g. /c/cs223/bin/submit) will use this to change their behavior depending
on the name they are called by.
Like strings, C terminates argv with a null: the value of argv[argc] is always
0 (a null pointer to char). In principle this allows you to recover argc if you
lose it.
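Here is a sketch (not one of the course example files) of a loop that walks the argument list using only that terminating null, never consulting argc:
#include <stdio.h>

int
main(int argc, char **argv)
{
    char **p;

    /* stop at the null pointer that follows the last argument */
    for(p = argv; *p != 0; p++) {
        puts(*p);
    }

    return 0;
}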
C has two kinds of structured data types: structs and unions. A struct holds
multiple values in consecutive memory locations, called fields, and implements
what in type theory is called a product type: the set of possible values is the
Cartesian product of the sets of possible values for its fields. In contrast, a union
has multiple fields but they are all stored in the same location: effectively, this
means that only one field at a time can hold a value, making a union a sum
type whose set of possible values is the union of the sets of possible values for
each of its fields. Unlike what happens in more sensible programming languages,
unions are not tagged: unless you keep track of this somewhere else, you can’t
tell which field in a union is being used, and you can store a value of one type in
a union and try to read it back as a different type, and C won’t complain.13
4.11.1 Structs
A struct is a way to define a type that consists of one or more other types
pasted together. Here’s a typical struct definition:
struct string {
int length;
char *data;
};
This defines a new type struct string that can be used anywhere you would
use a simple type like int or float. When you declare a variable with type
13 There are various ways to work around this. The simplest is to put the union inside a larger
struct that carries a tag field recording which variant of the union is currently in use.
struct string, the compiler allocates enough space to hold both an int and
a char * (8 bytes on a typical 32-bit machine). You can get at the individual
components using the . operator, like this:
struct string {
int length;
char *data;
};
int
main(int argc, char **argv)
{
struct string s;
s.length = 4;
s.data = "this string is a lot longer than you think";
puts(s.data);
return 0;
}
examples/structs/structExample.c
Variables of type struct can be assigned to, passed into functions, and returned
from functions, just like any other type. Assignment copies the struct componentwise;
for example, s1 = s2; is equivalent to s1.length = s2.length; s1.data = s2.data;.
Equality testing is the exception: the == operator is not defined for structs, so to
compare two struct values you must compare their components yourself.
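If you do need such a comparison for the struct string type above, you can write a small helper; this sketch (the function name is our own) compares componentwise, so for the data field it compares the stored pointers rather than the characters they point to:
/* returns nonzero if a and b have identical components */
int
stringStructEqual(struct string a, struct string b)
{
    return a.length == b.length && a.data == b.data;
}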
These operations are not used as often as you might think: typically, instead of
copying around entire structures, C programs pass around pointers, as is done
with arrays. Pointers to structs are common enough in C that a special syntax
is provided for dereferencing them.14 Suppose we have:
struct string s; /* a struct */
struct string *sp; /* a pointer to a struct */
s.length = 4;
s.data = "another overly long string";
sp = &s;
14 Arguably the special syntax is not strictly needed: since the compiler knows that sp
has type struct string *, there is no particular reason why it can’t interpret sp.length as
sp->length. But it doesn’t do this, so you will have to remember to write sp->length instead.
Then either of the following two lines prints the contents of the string that sp points to:
puts((*sp).data);
puts(sp->data);
The second is more common, since it involves typing fewer parentheses. It is an
error to write *sp.data in this case; since . binds tighter than *, the compiler
will attempt to evaluate sp.data first and generate an error, since sp doesn’t
have a data field.
Pointers to structs are commonly used in defining abstract data types, since it
is possible to declare that a function returns e.g. a struct string * without
specifying the components of a struct string. (All pointers to structs in C
have the same size and structure, so the compiler doesn’t need to know the
components to pass around the address.) Hiding the components discourages
code that shouldn’t look at them from doing so, and can be used, for example,
to enforce consistency between fields.
For example, suppose we wanted to define a struct string * type that held
counted strings that could only be accessed through a restricted interface that
prevented (for example) the user from changing the string or its length. We
might create a file myString.h that contained the declarations:
/* make a struct string * that holds a copy of s */
/* returns 0 if malloc fails */
struct string *makeString(const char *s);

/* destructor and accessors for struct string */
void destroyString(struct string *s);
int stringLength(struct string *s);
int stringCharAt(struct string *s, int index);
The implementation file myString.c then defines struct string and the functions that operate on it:
#include <stdlib.h>
#include <string.h>
#include "myString.h"
struct string {
int length;
char *data;
};
struct string *
makeString(const char *s)
{
struct string *s2;
s2 = malloc(sizeof(struct string));
if(s2 == 0) { return 0; } /* let caller worry about malloc failures */
s2->length = strlen(s);
s2->data = malloc(s2->length);
if(s2->data == 0) {
free(s2);
return 0;
}
strncpy(s2->data, s, s2->length);
return s2;
}
void
destroyString(struct string *s)
{
free(s->data);
free(s);
}
int
stringLength(struct string *s)
{
return s->length;
}
int
stringCharAt(struct string *s, int index)
{
if(index < 0 || index >= s->length) {
return -1;
} else {
return s->data[index];
}
}
examples/myString/myString.c
In practice, we would probably go even further and replace all the
struct string * types with a new name declared with typedef.
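A sketch of what such a header might look like (the alias name MyString is our own, not from the notes):
/* hypothetical revision of myString.h using a typedef for the opaque pointer type */
typedef struct string *MyString;

MyString makeString(const char *s);
void destroyString(MyString s);
int stringLength(MyString s);
int stringCharAt(MyString s, int index);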
Because the compiler may insert padding between components to satisfy alignment
constraints, the position of each component inside a struct is not always obvious. You
can find it using the offsetof macro from stddef.h, as in this example:
#include <stdio.h>
#include <stddef.h>

int
main(int argc, char **argv)
{
    struct foo {
        int i;
        char c;
        double d;
        float f;
        char *s;
    };

    printf("d is at %lu\n", offsetof(struct foo, d));
printf("f is at %lu\n", offsetof(struct foo, f));
printf("s is at %lu\n", offsetof(struct foo, s));
return 0;
}
examples/structs/offsetof.c
4.11.2 Unions
A union is just like a struct, except that instead of allocating space to store
all the components, the compiler only allocates space to store the largest one,
and makes all the components refer to the same address. This can be used to
save space if you know that only one of several components will be meaningful
for a particular object. An example might be a type representing an object in a
LISP-like language like Scheme:
struct lispObject {
int type; /* type code */
union {
int intVal;
double floatVal;
char * stringVal;
struct {
struct lispObject *car;
struct lispObject *cdr;
} consVal;
} u;
};
Now if you wanted to make a struct lispObject that held an integer value,
you might write
struct lispObject o;
o.type = TYPE_INT;
o.u.intVal = 27;
Here TYPE_INT has presumably been defined somewhere. Note that nothing
then prevents you from writing
x = 2.7 * o.u.floatVal; /* BAD */
The effects of this will be strange, since it’s likely that the bit pattern representing
27 as an int represents something very different as a double. Avoiding such
mistakes is your responsibility, which is why most uses of union occur inside
larger structs that contain enough information to figure out which variant of
the union applies.
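One way to reduce the risk is to set the tag and the matching variant together, as in this sketch (the helper is ours; TYPE_INT is assumed to be defined as one of the type codes):
/* store an integer value in o, keeping the type tag consistent */
void
lispObjectSetInt(struct lispObject *o, int value)
{
    o->type = TYPE_INT;    /* record which variant is live... */
    o->u.intVal = value;   /* ...and write only that variant */
}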
4.11.3 Enums
C provides the enum construction for the special case where you want to have a
sequence of named constants of type int, but you don’t care what their actual
values are, as in
enum color { RED, BLUE, GREEN, MAUVE, TURQUOISE };
This will assign the value 0 to RED, 1 to BLUE, and so on. These values are
effectively of type int, although you can declare variables, arguments, and return
values as type enum color to indicate their intended interpretation.
Despite declaring a variable enum color c (say), the compiler will still allow c
to hold arbitrary values of type int.
So the following ridiculous code works just fine:
#include <stdio.h>
#include <stdlib.h>
int
main(int argc, char **argv)
{
    enum foo { FOO } x;   /* assume some enum type foo with at least one named constant */
if(argc != 1) {
fprintf(stderr, "Usage: %s\n", argv[0]);
return 1;
}
x = 127;
return 0;
}
examples/definitions/enumsAreInts.c
Compared with defining a sequence of named constants by hand with #define, if
you never plan to use the numerical values, enum may be a better choice, because
it guarantees that all the values will be distinct. A typical use is to provide type
tags for a union, as in:
enum TypeCode { TYPE_INT, TYPE_DOUBLE, TYPE_STRING };

struct LispValue {
    enum TypeCode typeCode;
    union {
        int i;
        double d;
        char *s;
    } value;
};
Here we don’t care what the numeric values of TYPE_INT, TYPE_DOUBLE, and
TYPE_STRING are, as long as we can apply switch to typeCode to figure out
what to do with one of these things.
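For example, a function that prints a LispValue could dispatch on the tag like this (a sketch; the function name is ours):
#include <stdio.h>

/* print v in a form appropriate to whichever variant it currently holds */
void
printLispValue(const struct LispValue *v)
{
    switch(v->typeCode) {
    case TYPE_INT:
        printf("%d\n", v->value.i);
        break;
    case TYPE_DOUBLE:
        printf("%f\n", v->value.d);
        break;
    case TYPE_STRING:
        printf("%s\n", v->value.s);
        break;
    }
}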
The syntax for typedef looks like a variable declaration preceded by typedef,
except that the variable is replaced by the new type name that acts like whatever
type the defined variable would have had. You can use a name defined with
typedef anywhere you could use a normal type name, as long as it is later in
the source file than the typedef definition. Typically typedefs are placed in a
header file (.h file) that is then included anywhere that needs them.
You are not limited to using typedefs only for complex types. For example, if
you were writing numerical code and wanted to declare overtly that a certain
quantity was not just any double but actually a length in meters, you could
write
typedef double LengthInMeters;
typedef double AreaInSquareMeters;
There are certain cases where the compiler needs to know the definition of a
struct:
1. When the program accesses its components.
2. When the compiler needs to know its size. This may be because you
are building an array of these structs, because they appear in a larger
struct, when you are passing the struct as an argument or assigning it
to a variable, or just because you applied sizeof to the struct.
But the compiler does not need to know the definition of a struct to know how
to create a pointer to it. This is because all struct pointers have the same size
and structure.
This allows a trick called an opaque struct, which can be used for information
hiding, where one part of your program is allowed to see the definition of a
struct but other parts are not.
The idea is to create a header file that defines all the functions that might be used
to access the struct, but does not define the struct itself. For example, suppose
we want to create a counter, where the user can call a function increment that
acts like ++ in the sense that it increments the counter and returns the new
value, but we don’t want to allow the user to change the value of the counter in
any other way. This header file defines the interface to the counter.
Here is the header file:
/* Create a new counter, initialized to 0. Call counterDestroy to get rid of it. */
struct counter * counterCreate(void);

/* Destroy a counter. */
void counterDestroy(struct counter *);

/* Increment a counter and return its new value. */
int counterIncrement(struct counter *);
Here is some test code that uses a counter through this interface:
#include "counter.h"
int
main(int argc, char **argv)
{
struct counter *c;
int value;
    c = counterCreate();

    /* exercise the interface a little */
    value = counterIncrement(c);
    value = counterIncrement(c);

    counterDestroy(c);
return 0;
}
examples/structs/opaqueStructs/testCounter.c
To make this work, we do have to provide an implementation. The obvious
way to do it is have a struct counter store the counter value in an int, but
one could imagine other (probably bad) implementations that did other things,
as long as from the outside they acted like we expect.
We only put the definition of a struct counter in this file. This means that
only functions in this file can access a counter’s components, compute the size of
a counter, and so forth. While we can’t absolutely prevent some other function
from extracting or modifying the contents of a counter (C doesn’t provide
that kind of memory protection), we can at least hint very strongly that the
programmer shouldn’t be doing this.
#include <stdlib.h>
#include <assert.h>
#include "counter.h"
struct counter {
int value;
};
struct counter *
counterCreate(void)
{
struct counter *c;
c = malloc(sizeof(struct counter));
assert(c);
c->value = 0;
return c;
}
void
counterDestroy(struct counter *c)
{
free(c);
}
int
counterIncrement(struct counter *c)
{
return ++(c->value);
}
examples/structs/opaqueStructs/counter.c
We will see this trick used over and over again when we build abstract data
types.
4.13 Macros
See K&R Appendix A12.3 for full details on macro expansion in ANSI C and
http://gcc.gnu.org/onlinedocs/cpp/Macros.html for documentation on what gcc
supports.
The short version: the command
#define FOO (12)
causes any occurrence of the word FOO in your source file to be replaced by
(12) by the preprocessor. To count as a word, FOO can’t be adjacent to other
alphanumeric characters, so for example FOOD will not expand to (12)D.
4.13.1.3 Variable-length argument lists
C99 added variadic macros that may have a variable number of arguments;
these are mostly useful for dealing with variadic functions (like printf) that
also take a variable number of arguments.
To define a variadic macro, define a macro with arguments where the last
argument is three periods: ... . The macro __VA_ARGS__ then expands to
whatever arguments matched this ellipsis in the macro call.
For example:
#include <stdio.h>

/* print a warning message to stderr; all arguments are passed on to fprintf */
#define Warning(...) (fprintf(stderr, __VA_ARGS__))

int
main(int argc, char **argv)
{
Warning("%s: this program contains no useful code\n", argv[0]);
return 1;
}
It is possible to mix regular arguments with ..., as long as ... comes last:
#define Useless(format, ...) printf(format, __VA_ARGS__)
A better alternative, when the only point of a macro is to avoid the overhead of a function call, is to use an inline function.
Like macros, inline functions should be defined in header files. Ordinary functions
always go in C files because (a) we only want to compile them once, and (b)
the linker will find them in whatever .o file they end up in anyway. But inline
functions generally don’t get compiled independently, so this doesn’t apply.
Here is a header file for an inline version of distSquared:
/* Returns the square of the distance between two points separated by
dx in the x direction and dy in the y direction. */
static inline int
distSquared(int dx, int dy)
{
return dx*dx + dy*dy;
}
examples/functions/distSquaredInline.h
This looks exactly like the original distSquared, except that we added static
inline. We want this function to be declared static because otherwise some
compilers will try to emit a non-inline definition for it in every C file this header
is included in, which could have bad results.15
The nice thing about this approach is that if we do decide to make distSquared
an ordinary function (maybe it will make debugging easier, or we realize we
want to be able to take its address), then we can just move the definition into
a .c file and take the static inline off. Indeed, this is probably the safest
thing to start with, since we can also do the reverse if we find that function call
overhead on this particular function really does account for a non-trivial part of
our running time (see profiling).
15 Compilers differ in how they handle inline functions. For an extensive discussion of the terrifying portability
issues that arise in pre-C99 C compilers, see http://www.greenend.org.uk/rjk/tech/inline.html.
4.13.3 More specialized macros
Some standard idioms have evolved over the years to deal with issues that come
up in defining complex macros. Usually, having a complex macro is a sign of
bad design, but these tools can be useful in some situations.
4.13.3.3 Multiple statements in one macro
If you want to write a macro that looks like a function call but contains multiple
statements, the correct way to do it is like
#define HiHi() do { puts("hi"); puts("hi"); } while(0)
This can safely be used in place of single statements, like this:16
if(friendly)
HiHi();
else
snarl();
Note that no construct except do..while will work here. Just using braces
will cause trouble with the semicolon before the else, and no other compound
statement besides do..while expects to be followed by a semicolon in this way.
16 To make the example work, we are violating our usual rule of always using braces in if
statements.
The # operator in a macro definition expands a macro argument to its text as a
quoted string, which lets a macro print an expression along with its value:
#include <stdio.h>

#define PrintExpr(x) (printf("%s = %d\n", #x, (x)))

int
main(int argc, char **argv)
{
    PrintExpr(2+2);

    return 0;
}
examples/macros/printExpr.c
When run, this program prints
2+2 = 4
Without using a macro, there is no way to capture the text string "2+2" so we
can print it.
This sort of trickery is mostly used in debugging. The assert macro is a more
sophisticated version, which uses the built-in macros __FILE__ (which expands
to the current source file as a quoted string) and __LINE__ (which expands to
the current source line number, not quoted) to not only print out an offending
expression, but also the location of it in the source.
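As a sketch of the same idea (ours, not one of the course examples), a home-grown debugging macro might combine these built-in macros like this:
#include <stdio.h>

/* report where in the source we got to, plus a message */
#define DebugHere(msg) \
    (fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__, (msg)))

int
main(int argc, char **argv)
{
    DebugHere("got this far");   /* prints something like "debug.c:11: got this far" */
    return 0;
}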
#include "declareSort.h"
/* note: must appear outside of any function, and has no trailing semicolon */
DeclareSort(int, int)
#define N (50)
int
main(int argc, char **argv)
{
    int a[N];
    int i;

    /* fill the array with something out of order so the sort has work to do */
    for(i = 0; i < N; i++) {
        a[i] = N - i;
    }

    int_sort(a, N);
return 0;
}
examples/macros/useDeclareSort.c
Do this too much and you will end up reinventing C++ templates, which are a
more or less equivalent mechanism for generating polymorphic code that improve
on C macros like the one above by letting you omit the backslashes.
#include <assert.h>
int
main(int argc, char **argv)
{
#ifdef SAY_HI
puts("Hi.");
#else /* matches #ifdef SAY_HI */
#ifndef BE_POLITE
puts("Go away!");
#else /* matches #ifndef BE_POLITE */
puts("I'm sorry, I don't feel like talking today.");
#endif /* matches #ifndef BE_POLITE */
#endif /* matches #ifdef SAY_HI */
#ifdef DEBUG_ARITHMETIC
assert(2+2 == 5);
#endif
return 0;
}
examples/macros/ifdef.c
You can turn these conditional compilation directives on and off at compile
time by passing the -D flag to gcc. Here is the program above, running after
compiling with different choices of options:
$ gcc -DSAY_HI -o ifdef ifdef.c
$ ./ifdef
Hi.
$ gcc -DBE_POLITE -DDEBUG_ARITHMETIC -o ifdef ifdef.c
$ ./ifdef
I'm sorry, I don't feel like talking today.
ifdef: ifdef.c:18: main: Assertion `2+2 == 5' failed.
Aborted
An example of how this mechanism can be useful is the NDEBUG macro: if you
define this before including assert.h, it turns every assert in your code into
a no-op. This can be handy if you are pretty sure your code works and you
want to speed it up in its final shipped version, or if you are pretty sure your
code doesn’t work but you want to hide the evidence. (It also means you should
not perform side-effects inside an assert unless you are happy with them not
happening.)
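For example, recompiling the ifdef.c program above with -DNDEBUG added turns off the assert that failed before (this particular invocation is ours):
$ gcc -DBE_POLITE -DDEBUG_ARITHMETIC -DNDEBUG -o ifdef ifdef.c
$ ./ifdef
I'm sorry, I don't feel like talking today.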
Using the flag -DNAME defines NAME to be 1. If you want something else, use
-DNAME=VALUE. This can be used to bake useful information into your program at
compile time, and is often used to specify filenames. Below is a simple example.
#include <stdio.h>
int
main(int argc, char **argv)
{
#ifdef MESSAGE
puts(MESSAGE);
#endif
return 0;
}
examples/macros/message.c
$ gcc -DMESSAGE='"Hi there!"' -o message message.c
$ ./message
Hi there!
Note that we had to put an extra layer of single quotes in the command line to
keep the shell from stripping off the double quotes. This is unavoidable: had we
written puts("MESSAGE") in the code, the preprocessor would have recognized
that MESSAGE appeared inside a string and would not have replaced it.17
The preprocessor also includes a more general #if directive that evaluates simple
arithmetic expressions. The limitations are that it can only do integer arithmetic
(using the widest signed integer type available to the compiler) and can only do
it to integer and character constants and the special operator defined(NAME),
which evaluates to 1 if NAME is defined and 0 otherwise. The most common use
of this is to combine several #ifdef-like tests into one:
#include <stdio.h>
int
main(int argc, char **argv)
{
#if VERBOSITY >= 3 && defined(SAY_HI)
puts("Hi!");
17 The # operator looks like it ought to be useful here, but it only works for expanding
arguments to macros and not for expanding macros themselves. Attempting to get around
this by wrapping MESSAGE in a macro that applies the # operator to its first argument will end
in tears if MESSAGE contains any special characters like commas or right parentheses. The C
preprocessor has many unfortunate limitations.
#endif
return 0;
}
examples/macros/if.c
One problem with using a lot of macros is that you can end up with no idea what
input is actually fed to the compiler after the preprocessor is done with it. You
can tell gcc to tell you how everything expands using gcc -E source_file.c.
If your source file contains any #include statements it is probably a good idea
to send the output of gcc -E to a file so you can scroll down past the thousands
of lines of text they may generate.
5.1.1 Two sorting algorithms
In mergesort, we sort a collection of cards by repeatedly merging sorted piles:
start with a pile of one card for each card, merge pairs of piles into sorted piles
of two, then merge those into sorted piles of four, and so on, until all the cards
end up in a single sorted pile. Here is a picture of the merging process on 8 cards,
one line per round of merging:
57 12 34 68
1257 3468
12345678
Suppose that we want to estimate the cost of this algorithm without actually
coding it up. We might observe that each time a card is merged into a new pile,
we need to do some small, fixed number of operations to decide that it’s the
smaller card, and then do an additional small, fixed number of operations to
physically move it to a new place. If we are really clever, we might notice that
since the size of the pile a card is in doubles with each round, there can be at
most ⌈log2 n⌉ rounds until all cards are in the same pile. So the cost of getting
a single card in the right place will be at most c log n where c counts the “small,
fixed” number of operations that we keep mentioning, and the cost of getting
every card in the right place will be at most cn log n.
In the "selection sort" algorithm, we look through all the cards to find the
smallest one, swap it to the beginning of the list, then look through the remaining
cards for the second smallest, swap it to the next position, and so on.
Here’s a picture of this algorithm in action on 8 cards:
57123486
17523486
12573486
12375486
12345786
12345786
12345687
12345678
This is a simpler algorithm to implement than mergesort, but it is usually slower
on large inputs. We can formalize this by arguing that each time we scan k cards
to find the smallest, it’s going to take some small, fixed number of operations
to test each card against the best one we found so far, and an additional small,
fixed number of operations to swap the smallest card to the right place. To
compute the total cost we have to add these costs for all cards, which will give
us a total cost that looks something like (c1·n + c2) + (c1·(n − 1) + c2) + (c1·(n − 2) + c2) + · · · + (c1·1 + c2) = c1·n(n + 1)/2 + c2·n.
For large n, it looks like this is going to cost more than mergesort. But how can
we make this claim cleanly, particularly if we don’t know the exact values of c,
c1 , and c2 ?
The idea is to replace complex running-time formulas like cn log n or c1·n(n + 1)/2 + c2·n
with an asymptotic growth rate O(n log n) or O(n^2). These asymptotic
growth rates omit the specific details of exactly how fast our algorithms
run (which we don’t necessarily know without actually coding them up) and
concentrate solely on how the cost scales as the size of the input n becomes
large.
Concentrating on the growth rate addresses two issues:
1. Different computers run at different speeds, and we’d like to be able to say
that one algorithm is better than another without having to measure its
running time on specific hardware.
2. Performance on large inputs is more important than performance on small
inputs, since programs running on small inputs are usually pretty fast.
The idea of "asymptotic notation" is to consider the shape of the worst-case
cost T (n) to process an input of size n. Here, worst-case means we consider
the input that gives the greatest cost, where cost is usually time, but may be
something else like space. To formalize the notion of shape, we define classes of
functions that behave like particular interesting functions for large inputs. The
definition looks much like a limit in calculus:
O(n) A function f (n) is in the class O(g(n)) if there exist constants N and c
such that f (n) < c · g(n) when n > N .
If f (n) is in O(g(n)) we say f (n) is "big-O" of g(n) or just f (n) = O(g(n)).18
18 This is an abuse of notation, where the equals sign is really acting like set membership.
Unpacked, this definition says that f (n) is less than a constant times g(n) when
n is large enough.
Some examples:
• Let f (n) = 3n + 12, and let g(n) = n. To show that f (n) is in O(g(n)) =
O(n), we can pick whatever constants we want for c and N (as long as
they work). So let’s make N be 100 and c be 4. Then we need to show
that if n > 100, 3n + 12 < 4n. But 3n + 12 < 4n holds precisely when
12 < n, which is implied by our assumption that n > 100.
• Let f (n) = 4n^2 + 23n + 15, and let g(n) = n^2. Now let N be 100 again
and c be 5. So we need 4n^2 + 23n + 15 < 5n^2, or 23n + 15 < n^2. But
n > 100 means that n^2 > 100n = 50n + 50n > 50n + 5000 > 23n + 15,
which proves that f (n) is in O(n^2).
• Let f (n) < 146 for all n, and let g(n) = 1. Then for N = 0 and c = 146,
f (n) < 146 = 146g(n), and f (n) is in O(1).
Writing proofs like this over and over again is a nuisance, so we can use some
basic rules of thumb to reduce messy functions f (n) to their asymptotic forms:
• If c is a constant (doesn’t depend on n), then c · f (n) = O(f (n)). This
follows immediately from being able to pick c in the definition. So we can
always get rid of constant factors: 137n^5 = O(n^5).
• If f (n) = g(n) + h(n), then the bigger of g(n) or h(n) wins. This is because
if g(n) ≤ h(n), then g(n) + h(n) ≤ 2g(n), and then big-O eats the 2. So
12n^2 + 52n + 3 = O(n^2) because n^2 dominates all the other terms.
• To figure out which of two terms dominates, the rule is
– Bigger exponents win: If a < b, then O(n^a) + O(n^b) = O(n^b).
– Polynomials beat logarithms: For any a and any b > 0, O(log^a n) +
O(n^b) = O(n^b).
– Exponentials beat polynomials: For any a and any b > 1, O(n^a) +
O(b^n) = O(b^n).
– The distributive law works: Because O(log n) dominates O(1),
O(n log n) dominates O(n).
This means that almost any asymptotic bound can be reduced down to one of a
very small list of common bounds. Ones that you will typically see in practical
algorithms, listed in increasing order, are O(1), O(log n), O(n), O(n log n), or
O(n^2).
Applying these rules to mergesort and selection sort gives us asymptotic bounds
of cn log n = O(n log n) (the constant vanishes) and c1·n(n + 1)/2 + c2·n =
c1·n^2/2 + c1·n/2 + c2·n = O(n^2) + O(n) + O(n) = O(n^2) (the constants vanish
and then O(n^2) dominates). Here we see that no matter how fast our machine
is at different low-level operations, for large enough inputs mergesort will beat
selection sort.
The general rule is that an expression O(f (n)) = O(g(n)) is true if for any choice of a function
in O(f (n)), that function is in O(g(n)). This relation is transitive and reflexive, but unlike
real equality it’s not symmetric.
5.1.3 Asymptotic cost of programs
To compute the asymptotic cost of a program, the rule of thumb is that any
simple statement costs O(1) time to evaluate, and larger costs are the result of
loops or calls to expensive functions, where a loop multiplies the cost by the
number of iterations in the loop. When adding costs together, the biggest cost
wins:
So this function takes O(1) time:
/* return the sum of the integers i with 0 <= i and i < n */
int
sumTo(int n)
{
return n*(n-1)/2;
}
But this function, which computes exactly the same value, takes O(n) time:
/* return the sum of the integers i with 0 <= i and i < n */
int
sumTo(int n)
{
    int i;
    int sum = 0;

    for(i = 0; i < n; i++) {
        sum += i;
    }

    return sum;
}
The reason it takes so long is that each iteration of the loop takes only O(1)
time, but we execute the loop n times, and n · O(1) = O(n).
Here’s an even worse version that takes O(n^2) time:
/* return the sum of the integers i with 0 <= i and i < n */
int
sumTo(int n)
{
    int i;
    int j;
    int sum = 0;

    for(i = 0; i < n; i++) {
        for(j = 0; j < i; j++) {
            sum++;
        }
    }

    return sum;
}
Here we have two nested loops. The outer loop iterates exactly n times, and
for each iteration the inner loop iterates at most n times, and the innermost
iteration costs O(1) each time, so the total is at most O(n^2). (In fact, it’s no
better than this, because for at least n/2 of the outer iterations the inner loop
does at least n/2 iterations.)
So even if we knew that the constant on the first implementation was really large
(maybe our CPU is bad at dividing by 2?), for big values of n it’s still likely to
be faster than the other two.
(This example is a little misleading, because n is not the size of the input but the
actual input value. More typical might be a statement that the cost of strlen
is O(n) where n is the length of the string.)
Big-O notation is good for upper bounds, but the inequality in the definition
means that it can’t be used for anything else: it is the case that 12 = O(n^67)
just because 12 < n^67 when n is large enough. There is an alternative definition,
called "big-Omega", that works in the other direction:
Ω(n) A function f (n) is in the class Ω(g(n)) if there exist constants N and c
such that f (n) > c · g(n) when n > N .
This is exactly the same as the definition of O(g(n)) except that the inequality
goes in the other direction. So if we want to express that some algorithm is very
expensive, we might write that it’s Ω(n^2), which says that once the size of the
input is big enough, the cost grows at least as fast as n^2.
If you want to claim that your bound is tight—both an upper and a lower
bound—use big-Theta: f (n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)).
Mostly we will just use big-O, with the understanding that when we say that a
particular algorithm is O(n), that’s the best bound we could come up with.
5.2 Linked lists
Linked lists are about the simplest data structure beyond arrays. They aren’t
very efficient for many purposes, but have very good performance for certain
specialized applications.
The basic idea is that instead of storing n items in one big array, we store each
item in its own struct, and each of these structs includes a pointer to the
next struct in the list (with a null pointer to indicate that there are no more
elements). If we follow the pointers we can eventually reach all of the elements.
For example, if we declare the struct holding each element like this:
struct elt {
struct elt *next; /* pointer to next element in the list */
int contents; /* contents of this element */
};
We can build a structure like this:
The box on the far left is not a struct elt, but a struct elt *; in order to
keep track of the list we need a pointer to the first element. As usual in C, we
will have to do all the work of allocating these elements and assigning the right
pointer values in the right places ourselves.
5.2.1 Stacks
Suppose we push a new element holding 0 onto the front of the list.
To make this work, we need to change two pointers: the head pointer and the
next pointer in the new element holding 0. These operations aren’t affected by
the size of the rest of the list and so take O(1) time.
Removal is the reverse of installation: We patch out the first element by shifting
the head pointer to the second element, then deallocate it with free. (We do
have to be careful to get any data we need out of it before calling free). This is
also an O(1) operation.
The fact that we can add and remove elements at the start of linked lists for
cheap makes them particularly useful for implementing a stack, an abstract
data type that supports operations push (insert a new element on the top of the
stack) and pop (remove and return the element at the top of the stack. Here
is an example of a simple linked-list implementation of a stack, together with
some test code:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
struct elt {
struct elt *next;
int value;
};
/*
* We could make a struct for this,
* but it would have only one component,
* so this is quicker.
*/
typedef struct elt *Stack;
#define STACK_EMPTY (0)

/* push a new value onto the top of the stack */
void
stackPush(Stack *s, int value)
{
    struct elt *e;

    e = malloc(sizeof(struct elt));
assert(e);
e->value = value;
e->next = *s;
*s = e;
}
int
stackEmpty(const Stack *s)
{
return (*s == 0);
}
int
stackPop(Stack *s)
{
int ret;
struct elt *e;
assert(!stackEmpty(s));
    ret = (*s)->value;

    /* patch out the first element before freeing it */
    e = *s;
    *s = e->next;

    free(e);
return ret;
}
/* print the contents of a stack on one line, top first */
void
stackPrint(const Stack *s)
{
    struct elt *e;

    for(e = *s; e != 0; e = e->next) {
        printf("%d ", e->value);
    }

    putchar('\n');
}
int
main(int argc, char **argv)
{
int i;
Stack s;
    s = STACK_EMPTY;

    for(i = 0; i < 5; i++) {
        printf("push %d\n", i);
        stackPush(&s, i);
        stackPrint(&s);
    }
while(!stackEmpty(&s)) {
printf("pop gets %d\n", stackPop(&s));
stackPrint(&s);
}
return 0;
}
examples/linkedLists/stack.c
Unlike most of our abstract data types, we do not include a struct representing
the linked list itself. This is because the only thing we need to keep track of a
linked list is the head pointer, and it feels a little silly to have a struct with
just one component. But we might choose to do this if we wanted to make the
linked list implementation opaque or allow for including more information later.
struct stack {
struct elt *head;
};
5.2.2 Queues
Stacks are last-in-first-out (LIFO) data structures: when we pop, we get the
last item we pushed. What if we want a first-in-first-out (FIFO) data structure?
Such a data structure is called a queue and can also be implemented by a linked
list. The difference is that if we want O(1) time for both the enqueue (push)
and dequeue (pop) operations, we must keep around pointers to both ends of
the linked list.
So now we get something that looks like this:
Enqueuing a new element typically requires (a) allocating a new struct to hold
it; (b) making the old tail struct point at the new struct; and (c) updating
the tail pointer to also point to the new struct. There is a minor complication
when the queue is empty; in this case instead of updating tail->next we must put
a pointer to the new struct in head. Dequeuing an element involves updating
the head pointer and freeing the removed struct, exactly like a stack pop.
Here is the queue above after enqueuing a new element 6. The updated pointers
are indicated by dotted lines:
Because we are only changing two pointers, each of which we can reach by
following a constant number of pointers from the main struct, we can do this
in O(1) time.
There is a slight complication when we enqueue the very first element, because
we need to update the head pointer instead of the pointer in the previous tail
(which doesn’t yet exist). This requires testing for an empty queue in the enqueue
routine, which we’ll do in the sample code below.
Dequeuing is easier because it requires updating only one pointer:
If we adopt the convention that a null in head means an empty queue, and use
this property to check if the queue is empty when enqueuing, we don’t even have
to clear out tail when we dequeue the last element.
Here is a simple implementation of a queue holding ints, together with some
test code showing how its behavior differs from a stack:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
/* standard linked list element */
struct elt {
struct elt *next;
int value;
};
struct queue {
struct elt *head; /* dequeue this next */
struct elt *tail; /* enqueue after this */
};
/* create a new empty queue */
struct queue *
queueCreate(void)
{
    struct queue *q;

    q = malloc(sizeof(struct queue));
    assert(q);
q->head = q->tail = 0;
return q;
}
/* add a new value to the back of the queue */
void
enq(struct queue *q, int value)
{
    struct elt *e;

    e = malloc(sizeof(struct elt));
    assert(e);

    e->value = value;
    e->next = 0;     /* this will be the last element in the queue */
if(q->head == 0) {
/* If the queue was empty, I become the head */
q->head = e;
} else {
/* Otherwise I get in line after the old tail */
q->tail->next = e;
}
/* I become the new tail */
q->tail = e;
}
int
queueEmpty(const struct queue *q)
{
return (q->head == 0);
}
int
deq(struct queue *q)
{
int ret;
struct elt *e;
assert(!queueEmpty(q));
    e = q->head;
    ret = e->value;

    /* patch the old head out of the queue */
    q->head = e->next;

    free(e);
return ret;
}
/* print the contents of a queue on one line, head first */
void
queuePrint(const struct queue *q)
{
    struct elt *e;

    for(e = q->head; e != 0; e = e->next) {
        printf("%d ", e->value);
    }

    putchar('\n');
}
/* free a queue and all of its remaining elements */
void
queueDestroy(struct queue *q)
{
    while(!queueEmpty(q)) {
deq(q);
}
free(q);
}
int
main(int argc, char **argv)
{
int i;
struct queue *q;
    q = queueCreate();

    for(i = 0; i < 5; i++) {
        printf("enq %d\n", i);
        enq(q, i);
        queuePrint(q);
    }
while(!queueEmpty(q)) {
printf("deq gets %d\n", deq(q));
queuePrint(q);
}
queueDestroy(q);
return 0;
}
examples/linkedLists/queue.c
It is a bit trickier to build a queue out of an array than to build a stack. The
difference is that while a stack pointer can move up and down, leaving the base of
the stack in the same place, a naive implementation of a queue would have head
and tail pointers both marching ever onward across the array leaving nothing
but empty cells in their wake. While it is possible to have the pointers wrap
around to the beginning of the array when they hit the end, if the queue size is
unbounded the tail pointer will eventually catch up to the head pointer. At this
point (as in a stack that overflows), it is necessary to allocate more space and
copy the old elements over. See the section on ring buffers for an example of
how to do this.
5.2.3 Looping over a linked list
Looping over a linked list is not hard if you have access to the next pointers.
(For a more abstract way to do this see iterators.)
Let’s imagine somebody gave us a pointer to the first struct stack in a list; call
this pointer first. Then we can write a loop like this that prints the contents
of the stack:
void
stackPrint(struct stack *first)
{
    struct stack *elt;

    for(elt = first; elt != 0; elt = elt->next) {
        printf("%d ", elt->value);
    }

    putchar('\n');
}
What if we want to loop over a linked list backwards? The next pointers all go
the wrong way, so we have to save a trail of breadcrumbs to get back. The safest
way to do this is to reverse the original list into an auxiliary list:
void
stackPrintReversed(struct stack *first)
{
struct stack *elt;
Stack s2; /* uses imperative implementation */
    s2 = stackCreate();

    /* push everything onto s2; the first element of the original list ends up on the bottom */
    for(elt = first; elt != 0; elt = elt->next) {
        stackPush(s2, elt->value);
    }
stackPrint(s2);
stackDestroy(s2);
}
Pushing all the elements from the first list onto s2 puts the first element on the
bottom, so when we print s2 out, it’s in the reverse order of the original stack.
We can also write a recursive function that prints the elements backwards. This
function effectively uses the function call stack in place of the extra stack s2
above.
void
stackPrintReversedRecursive(struct stack *first)
{
if(first != 0) {
/* print the rest of the stack */
        stackPrintReversedRecursive(first->next);

        /* then print this element */
        printf("%d ", first->value);
    }
}
Below is an implementation of this structure. We have separated the interface in
deque.h from the implementation in deque.c. This will allow us to change the
implementation if we decide we don’t like it, without affecting any other code in
the system.
A nice feature of this data structure is that we don’t need to use null pointers to
mark the ends of the deque. Instead, each end is marked by a pointer to the
dummy head element. For an empty deque, this just means that the head points
to itself. The cost of this is that to detect an empty deque we have to test for
equality with the head (which might be slightly more expensive that just testing
for null) and the head may contain some wasted space for its missing value if we
allocate it like any other element.19
To keep things symmetric, we implement the pointers as an array, indexed by
the directions DEQUE_FRONT and DEQUE_BACK (defined in deque.h). This means
we can use the same code to push or pop on either end of the deque.
typedef struct deque Deque;
19 We could avoid this by allocating a truncated
head that doesn’t include this extra space. This is probably more trouble than it is worth in
this case, but might be useful if we were creating a lot of dummy heads and the contents were
more than 4 bytes long.
/* returns DEQUE_EMPTY if deque is empty */
int dequePop(Deque *d, int direction);
#include "deque.h"
struct deque {
struct deque *next[NUM_DIRECTIONS];
int value;
};
Deque *
dequeCreate(void)
{
Deque *d;
/*
* We don't allocate the full space for this object
* because we don't use the value field in the dummy head.
*
* Saving these 4 bytes doesn't make a lot of sense here,
* but it might be more significant if value where larger.
*/
    d = malloc(offsetof(struct deque, value));
    assert(d);

    /* an empty deque: both ends point back at the dummy head */
    d->next[DEQUE_FRONT] = d->next[DEQUE_BACK] = d;

    return d;
}
void
dequePush(Deque *d, int direction, int value)
{
struct deque *e; /* new element */
e = malloc(sizeof(struct deque));
assert(e);
e->next[direction] = d->next[direction];
e->next[!direction] = d;
e->value = value;
d->next[direction] = e;
e->next[direction]->next[!direction] = e; /* preserves invariant */
}
int
dequePop(Deque *d, int direction)
{
struct deque *e;
int retval;
e = d->next[direction];
if(e == d) {
return DEQUE_EMPTY;
}
/* else remove it */
d->next[direction] = e->next[direction];
e->next[direction]->next[!direction] = d;
retval = e->value;
free(e);
return retval;
}
int
dequeIsEmpty(const Deque *d)
{
return d->next[DEQUE_FRONT] == d;
}
void
dequeDestroy(Deque *d)
{
while(!dequeIsEmpty(d)) {
dequePop(d, DEQUE_FRONT);
}
free(d);
}
examples/linkedLists/deque/deque.c
And here is some test code:
testDeque.c.
#include "deque.h"
/*
* Alternative implementation of a deque using a ring buffer.
*
* Conceptually, this is an array whose indices wrap around at
* the endpoints.
*
* The region in use is specified by a base index pointing
* to the first element, and a length count giving the number
* of elements. A size field specifies the number of slots
* in the block.
*
* Picture:
*
* ---------------------------------------------------
* |7|8|9| | | | | | | | | | | | | | | | |1|2|3|4|5|6|
* ---------------------------------------------------
* ^ ^
* | |
* base + length - 1 base
*
*/
struct deque {
size_t base; /* location of front element */
size_t length; /* length of region in use */
size_t size; /* total number of positions in contents */
int *contents;
};
#define INITIAL_SIZE (8)   /* initial number of slots; the exact value is not critical */

/* create a new empty deque with room for size elements */
static Deque *
dequeCreateInternal(size_t size)
{
    struct deque *d;

    d = malloc(sizeof(struct deque));
    assert(d);

    d->base = 0;
    d->length = 0;
    d->size = size;

    d->contents = malloc(sizeof(int) * size);
    assert(d->contents);

    return d;
}
/* return a new empty deque */
Deque *
dequeCreate(void)
{
return dequeCreateInternal(INITIAL_SIZE);
}
void
dequePush(Deque *d, int direction, int value)
{
struct deque *d2; /* replacement deque if we grow */
int *oldContents; /* old contents of d */
/*
* First make sure we have space.
*/
if(d->length == d->size) {
/* nope */
d2 = dequeCreateInternal(d->size * 2);
/* evacuate d */
while(!dequeIsEmpty(d)) {
dequePush(d2, DEQUE_BACK, dequePop(d, DEQUE_FRONT));
}
/* do a transplant from d2 to d */
/* but save old contents so we can free them */
oldContents = d->contents;
        *d = *d2; /* this is equivalent to copying the components one by one */

        free(oldContents);   /* get rid of the old contents of d */
        free(d2);            /* and the shell of the replacement deque */
    }
/*
* This requires completely different code
* depending on the direction, which is
* annoying.
*/
if(direction == DEQUE_FRONT) {
/* d->base is unsigned, so we have to check for zero first */
if(d->base == 0) {
d->base = d->size - 1;
} else {
d->base--;
}
d->length++;
d->contents[d->base] = value;
} else {
d->contents[(d->base + d->length++) % d->size] = value;
}
}
/* pop and return an element from the given end of the deque */
/* returns DEQUE_EMPTY if the deque is empty */
int
dequePop(Deque *d, int direction)
{
    int retval;

    if(dequeIsEmpty(d)) {
return DEQUE_EMPTY;
}
/* else */
if(direction == DEQUE_FRONT) {
/* base goes up by one, length goes down by one */
        retval = d->contents[d->base];

        d->base = (d->base + 1) % d->size;
        d->length--;

        return retval;
} else {
/* length goes down by one */
return d->contents[(d->base + --d->length) % d->size];
}
}
int
dequeIsEmpty(const Deque *d)
{
return d->length == 0;
}
void
dequeDestroy(Deque *d)
{
free(d->contents);
free(d);
}
examples/linkedLists/deque/ringBuffer.c
Here is a Makefile that compiles testDeque.c against both the linked list and
the ring buffer implementations. You can do make time to race them against
each other.
CC=gcc
CFLAGS=-std=c99 -Wall -pedantic -O3 -g3
test: all
./testDeque $(ITERATIONS)
valgrind -q --leak-check=yes ./testDeque $(VALGRIND_ITERATIONS)
./testRingBuffer $(ITERATIONS)
valgrind -q --leak-check=yes ./testRingBuffer $(VALGRIND_ITERATIONS)
time: all
time ./testDeque $(ITERATIONS)
time ./testRingBuffer $(ITERATIONS)
clean:
$(RM) testDeque testRingBuffer *.o
examples/linkedLists/deque/Makefile
For some applications, there is no obvious starting or ending point to a list, and a
circular list (where the last element points back to the first) may be appropriate.
Circular doubly-linked lists can also be used to build deques; a single pointer
into the list tracks the head of the deque, with some convention adopted for
whether the head is an actual element of the list (at the front, say, with its left
neighbor at the back) or a dummy element that is not considered to be part of
the list.
The selling point of circular doubly-linked lists as a concrete data structure is
that insertions and deletions can be done anywhere in the list with only local
information. For example, here are some routines for manipulating a doubly-
linked list directly. We’ll make our lives easy and assume (for the moment) that
the list has no actual contents to keep track of.
#include <stdlib.h>
#include <assert.h>

#define LEFT (0)
#define RIGHT (1)

struct elt {
    struct elt *next[2];
};

typedef struct elt *Elt;

/* create a new circular list consisting of a single element (constructor name assumed) */
Elt
listCreate(void)
{
    Elt e;

    e = malloc(sizeof(*e));
if(e) {
e->next[LEFT] = e->next[RIGHT] = e;
}
return e;
}
/* insert an element e into list in direction dir from head */
void
listInsert(Elt head, int dir, Elt e)
{
/* fill in e's new neighbors */
e->next[dir] = head->next[dir];
    e->next[!dir] = head;

    /* repoint e's new neighbors back at e */
    e->next[dir]->next[!dir] = e;
    head->next[dir] = e;
}

/* remove an element from whatever list it is in (helper name assumed) */
void
listRemove(Elt e)
{
    Elt e1 = e->next[LEFT];
    Elt e2 = e->next[RIGHT];

    /* fix up e1 and e2 */
    e1->next[RIGHT] = e2;
    e2->next[LEFT] = e1;
}
void
listDestroy(Elt e)
{
Elt target;
    Elt next;

    /* free every element other than e itself, then e */
    for(target = e->next[RIGHT]; target != e; target = next) {
        next = target->next[RIGHT];
        free(target);
    }

    free(e);
}
examples/linkedLists/circular.c
The above code might or might not actually work. What if it doesn’t? It may
make sense to include some sanity-checking code that we can run to see if our
pointers are all going to the right place:
/* assert many things about correctness of the list */
/* Amazingly, this is guaranteed to abort or return no matter
how badly screwed up the list is. */
void
listSanityCheck(Elt e)
{
Elt check;
assert(e != 0);
check = e;
    do {
        /* each neighbor should point back at us */
        assert(check->next[RIGHT]->next[LEFT] == check);
        assert(check->next[LEFT]->next[RIGHT] == check);

        /* on to the next */
        check = check->next[RIGHT];
} while(check != e);
}
What if we want to store something in this list? The simplest approach is to
extend the definition of struct elt:
struct elt {
struct elt *next[2];
char *name;
int socialSecurityNumber;
int gullibility;
};
But then we can only use the code for one particular type of data. An alternative
approach is to define a new Elt-plus struct:
struct fancyElt {
struct elt *next[2];
char *name;
int socialSecurityNumber;
int gullibility;
};
and then use pointer casts to convert the fancy structs into Elts:
struct fancyElt *e;
e = malloc(sizeof(*e));
/* fill in fields on e */
5.2.7 What linked lists are and are not good for
Linked lists are good for any task that involves inserting or deleting elements
next to an element you already have a pointer to; such operations can usually
be done in O(1) time. They generally beat arrays (even resizeable arrays) if you
need to insert or delete in the middle of a list, since an array has to copy any
elements above the insertion point to make room; if inserts or deletes always
happen at the end, an array may be better.
Linked lists are not good for any operation that requires random access, since
reaching an arbitrary element of a linked list takes as much as O(n) time. For
such applications, arrays are better if you don’t need to insert in the middle; if
you do, you should use some sort of tree.
A description of many different kinds of linked lists with pictures can be found
in the WikiPedia article on the subject.
Animated versions can be found at http://www.cs.usfca.edu/~galles/
visualization/Algorithms.html.
One of the hard parts about computer programming is that, in general, programs
are bigger than brains. Unless you have an unusually capacious brain, it is unlikely
that you will be able to understand even a modestly large program in its entirety.
So in order to be able to write and debug large programs, it is important to be
able to break them up into pieces, where each piece can be treated as a tool whose
use and description is simpler (and therefore fits in your brain better) than its
actual code. Then you can forget about what is happening inside that piece,
and just treat it as an easily-understood black box from the outside.
This process of wrapping functionality up in a box and forgetting about its
internals is called abstraction, and it is the single most important concept in
computer science. In these notes we will describe a particular kind of abstraction,
the construction of abstract data types or ADTs. Abstract data types are
data types whose implementation is not visible to their user; from the outside,
all the user knows about an ADT is what operations can be performed on it and
what those operations are supposed to do.
ADTs have an outside and an inside. The outside is called the interface; it
consists of the minimal set of type and function declarations needed to use the
ADT. The inside is called the implementation; it consists of type and function
definitions, and sometimes auxiliary data or helper functions, that are not visible
to users of the ADT. This separation between interface and implementation
is called the abstraction barrier, and allows the implementation to change
without affecting the rest of the program.
What joins the implementation to the interface is an abstraction function.
This is a function (in the mathematical sense) that takes any state of the
implementation and trims off any irrelevant details to leave behind an idealized
picture of what the data type is doing. For example, a linked list implementation
translates to a sequence abstract data type by forgetting about the pointers used
to hook up the elements and just keeping the sequence of elements themselves.
To exclude bad states of the implementation (for example, a singly-linked list that
loops back on itself instead of having a terminating null pointer), we may have a
representation invariant, which is just some property of the implementation
that is always true. Representation invariants are also useful for detecting
when we’ve bungled our implementation, and a good debugging strategy for
misbehaving abstract data type implementations is often to look for the first
point at which they violated some property that we thought was an invariant.
Some programming languages include very strong mechanisms for enforcing
abstraction barriers. C relies somewhat more on politeness, and as a programmer
you violate an abstraction barrier (by using details of an implementation that
are supposed to be hidden) at your peril. In C, the interface will typically consist
of function and type declarations contained in a header file, with implementation
made up of the corresponding function definitions (and possibly a few extra
static functions) in one or more .c files. The opaque struct technique can be
used to hide implementation details of the type.
Too much abstraction at once can be hard to take, so let’s look at a concrete
example of an abstract data type. This ADT will represent an infinite sequence of
ints. Each instance of the Sequence type supports a single operation seq_next
that returns the next int in the sequence. We will also need to provide one
or more constructor functions to generate new Sequences, and a destructor
function to tear them down.
Here is an example of a typical use of a Sequence:
void
seq_print(Sequence s, int limit)
{
    int i;

    /* print values from s until we get one that reaches limit */
    for(i = seq_next(s); i < limit; i = seq_next(s)) {
        printf("%d\n", i);
    }
}
5.3.1.1 Interface
In C, the interface of an abstract data type will usually be declared in a header
file, which is included both in the file that implements the ADT (so that the
compiler can check that the declarations match up with the actual definitions in
the implementation) and in any file that uses the ADT. Here’s a header file for sequences:
/* opaque struct: hides actual components of struct sequence,
* which are defined in sequence.c */
typedef struct sequence *Sequence;
/* constructors */
/* all our constructors return a null pointer on allocation failure */

/* returns a Sequence that generates init, init+1, init+2, ... */
Sequence seq_create(int init);

/* returns a Sequence that generates init, init+step, init+2*step, ... */
Sequence seq_create_step(int init, int step);
/* destructor */
/* destroys a Sequence, recovering all interally-allocated data */
void seq_destroy(Sequence);
/* accessor */
/* returns the first element in a sequence not previously returned */
int seq_next(Sequence);
examples/ADT/sequence/sequence.h
Here we have defined two different constructors for Sequences, one of which
gives slightly more control over the sequence than the other. If we were willing
to put more work into the implementation, we could imagine building a very
complicated Sequence type that supported a much wider variety of sequences
(for example, sequences generated by functions or sequences read from files); but
we’ll try to keep things simple for now. We can always add more functionality
later, since the users won’t notice if the Sequence type changes internally.
5.3.1.2 Implementation
The implementation of an ADT in C is typically contained in one (or sometimes
more than one) .c file. This file can be compiled and linked into any program
that needs to use the ADT. Here is our implementation of Sequence:
#include <stdlib.h>
#include "sequence.h"
struct sequence {
int next; /* next value to return */
int step; /* how much to increment next by */
};
Sequence
seq_create(int init)
{
return seq_create_step(init, 1);
}
Sequence
seq_create_step(int init, int step)
{
Sequence s;
s = malloc(sizeof(*s));
if(s == 0) return 0;
s->next = init;
s->step = step;
return s;
}
void
seq_destroy(Sequence s)
{
free(s);
}
int
seq_next(Sequence s)
{
int ret; /* saves the old value before we increment it */
ret = s->next;
s->next += s->step;
return ret;
}
examples/ADT/sequence/sequence.c
Things to note here: the definition of struct sequence appears only in this
file; this means that only the functions defined here can (easily) access the
next and step components. This protects Sequences to a limited extent from
outside interference, and defends against users who might try to “violate the
abstraction boundary” by examining the components of a Sequence directly. It
also means that if we change the components or meaning of the components in
struct sequence, we only have to fix the functions defined in sequence.c.
#include "sequence.h"
void
seq_print(Sequence s, int limit)
{
    int i;

    for(i = seq_next(s); i < limit; i = seq_next(s)) {
        printf("%d\n", i);
    }
}

int
main(int argc, char **argv)
{
Sequence s;
Sequence s2;
puts("Stepping by 1:");
s = seq_create(0);
seq_print(s, 5);
seq_destroy(s);
    puts("Now stepping by 3:");

    s2 = seq_create_step(1, 3);
seq_print(s2, 20);
seq_destroy(s2);
return 0;
}
examples/ADT/sequence/main.c
We can compile main.c and sequence.c together into a single binary with the
command c99 main.c sequence.c. Or we can build a Makefile which will
compile the two files separately and then link them. Using make may be more
efficient, especially for large programs consisting of many components, since if
we make any changes make will only recompile those files we have changed. So
here is our Makefile:
CC=c99
CFLAGS=-g3 -pedantic -Wall
all: seqprinter

seqprinter: main.o sequence.o
	$(CC) $(CFLAGS) -o $@ $^
test: seqprinter
./seqprinter
clean:
$(RM) -f seqprinter *.o
examples/ADT/sequence/Makefile
And now running make test produces this output. Notice how the built-in make
variables $@ and $^ expand out to the left-hand side and right-hand side of the
dependency line for building seqprinter.
$ make test
c99 -g3 -pedantic -Wall -c -o main.o main.c
c99 -g3 -pedantic -Wall -c -o sequence.o sequence.c
c99 -g3 -pedantic -Wall -o seqprinter main.o sequence.o
./seqprinter
Stepping by 1:
0
1
2
3
4
Now stepping by 3:
1
4
7
10
13
16
19
Now we’ve seen how to implement an abstract data type. How do we decide
when to use one, and what operations to give it? Let’s try answering the
second question first.
For example, suppose we are writing a program to keep track of student grades. Some types we might want include:
• A list of students,
• A student,
• A list of grades,
• A grade.
If grades are simple, we might be able to make them just be ints (or maybe
doubles); to be on the safe side, we should probably create a Grade type with a
typedef. The other types are likely to be more complicated. Each student might
have in addition to his or her grades a long list of other attributes, such as a name,
an email address, etc. By wrapping students up as abstract data types we can
extend these attributes if we need to, or allow for very general implementations
(say, by allowing a student to have an arbitrary list of keyword-attribute pairs).
The two kinds of lists are likely to be examples of sequence types; we’ll be seeing
a lot of ways to implement these as the course progresses. If we want to perform
the same kinds of operations on both lists, we might want to try to implement
them as a single list data type, which then is specialized to hold either students
or grades; this is not always easy to do in C, but we’ll see examples of how to
do this, too.
Whether or not this set of four types is the set we will finally use, writing it down
gives us a place to start writing our program. We can start writing interface files
for each of the data types, and then evolve their implementations and the main
program in parallel, adjusting the interfaces as we find that we have provided
too little (or too much) data for each component to do what it must.
In C, we don’t have the convenience of reusing [] for dictionary lookups (we’d
need C++ for that), but we can still get the same effect with more typing using
functions. For example, using an abstract dictionary in C might look like this:
Dict *title;
const char *user;
title = dictCreate();
dictSet(title, "Barack", "President");
user = "Barack";
printf("Welcome %s %s\n", dictGet(title, user), user);
As with other abstract data types, the idea is that the user of the dictionary type
doesn’t need to know how it is implemented. For example, we could implement
the dictionary as an array of structs that we search through, but that would
be expensive: O(n) time to find a key in the worst case.
Closely related to a dictionary is a set, which has keys but no values. It’s usually
pretty straightforward to turn an implementation of a dictionary into a set (leave
out the values) or vice versa (add values to the end of keys but don’t use them
in searching).
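For example, using the hypothetical Dict interface from the welcome example above, a set of strings can be faked by storing a dummy value; this sketch assumes dictGet returns a null pointer for keys that were never set:
/* treat a Dict as a set: a key is a member if it maps to any non-null value */
void
setAdd(Dict *s, const char *key)
{
    dictSet(s, key, "present");   /* the particular value is never looked at */
}

int
setContains(Dict *s, const char *key)
{
    return dictGet(s, key) != 0;
}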
If our keys were conveniently named 0, 1, 2, . . . , n−1, we could simply use an array,
and be able to find a record given a key in constant time. Unfortunately, naming
conventions for most objects are not so convenient, and even enumerations like
Social Security numbers are likely to span a larger range than we want to allocate.
But we would like to get the constant-time performance of an array anyway.
The solution is to feed the keys through some hash function H, which maps them
down to array indices. So in a database of people, to find “Smith, Wayland”,
we would first compute H(“Smith, Wayland”)$ = 137$ (say), and then look in
position 137 in the array. Because we are always using the same function H, we
will always be directed to the same position 137.
But what if H(“Smith, Wayland”) and H(“Hephaestos”) both equal 137? Now
we have a collision, and we have to resolve it by finding some way to either (a)
effectively store both records in a single array location, or (b) move one of the
records to a new location that we can still find later. Let’s consider these two
approaches separately.
5.4.3.1 Chaining
We can’t really store more than one record in an array location, but we can fake
it by making each array location be a pointer to a linked list. Every time we
insert a new element in a particular location, we simply add it to this list.
Since the cost of scanning a linked list is linear in its size, this means that the
worst-case cost of searching for a particular key will be linear in the number of
keys in the table that hash to the same location. Under the assumption that
the hash function is a random function (which does not mean that it returns
random values every time you call it but instead means that we picked one of
the many possible hash functions uniformly at random), on average we get n/m
elements in each list, where n is the number of elements in the table and m is
the number of lists. So on average a failed search takes O(n/m) time.
This quantity n/m is called the load factor of the hash table and is often
written as α. If we want our hash table to be efficient, we will need to keep
this load factor down. If we can guarantee that it’s a constant, then we get
constant-time searches.
Here we will describe three methods for generating hash functions. The first
two are typical methods used in practice. The last has additional desirable
theoretical properties.

5.4.4.1 Division method
#define BASE (256)

size_t
hash(const char *s, size_t m)
{
    size_t h;
    unsigned const char *us;

    /* cast s to unsigned const char * */
    /* this ensures that elements past 127 don't cause problems */
    us = (unsigned const char *) s;

    h = 0;
    while(*us != '\0') {
        h = (h * BASE + *us) % m;
        us++;
    }

    return h;
}
The division method works best when m is a prime, as otherwise regularities in
the keys can produce clustering in the hash values. (Consider, for example, what
happens if m is 256). But this can be awkward for computing hash functions
quickly, because computing remainders is a relatively slow operation.
5.4.4.2 Multiplication method
For this reason, the most commonly-used hash functions replace the modulus m
with something like 2^32 and replace the base with some small prime, relying on
the multiplier to break up patterns in the input. This yields the multiplication
method. Typical code might look something like this:
#define MULTIPLIER (37)

size_t
hash(const char *s)
{
    size_t h;
    unsigned const char *us;

    /* cast s to unsigned const char * */
    /* this ensures that elements past 127 don't cause problems */
    us = (unsigned const char *) s;

    h = 0;
    while(*us != '\0') {
        h = h * MULTIPLIER + *us;
        us++;
    }

    return h;
}
The only difference between this code and the division method code is that we’ve
renamed BASE to MULTIPLIER and dropped m. There is still some remainder-
taking happening: since C truncates the result of any operation that exceeds
the size of the integer type that holds it, the h = h * MULTIPLIER + *us; line
effectively has a hidden mod 2^32 or 2^64 at the end of it (depending on how big
your size_t is). Now we can’t use, say, 256, as the multiplier, because then the
hash value h would be determined by just the last four characters of s.
The choice of 37 is based on folklore. I like 97 myself, and 31 also has supporters.
Almost any medium-sized prime should work.
5.4.4.3 Universal hashing
The property that makes a family of hash functions {Hr } universal is that, for
any distinct keys x and y, the probability that r is chosen so that Hr (x) = Hr (y)
is exactly 1/m.
Why is this important? Recall that for chaining, the expected number of collisions
between an element x and other elements was just the sum over all particular
elements y of the probability that x collides with that particular element. If Hr
is drawn from a universal family, this probability is 1/m for each y, and we get
the same n/m expected collisions as if Hr were completely random.
Several universal families of hash functions are known. Here is a simple one
that works when the size of the keyspace and the size of the output space are
both powers of 2. Let the keyspace consist of n-bit strings and let m = 2k .
Then the random index r consists of nk independent random bits organized as
n m-bit strings a1 a2 . . . an . To compute the hash function of a particular input
x, compute the bitwise exclusive or of ai for each position i where the i-th bit of
x is 1.
We can implement this in C as
/* implements universal hashing using random bit-vectors in x */
/* assumes number of elements in x is at least BITS_PER_CHAR * MAX_STRING_SIZE */

#define BITS_PER_CHAR (8)       /* not true on all machines! */
#define MAX_STRING_SIZE (128)   /* we'll stop hashing after this many characters */
#define MAX_BITS (BITS_PER_CHAR * MAX_STRING_SIZE)

size_t
hash(const char *s, size_t x[])
{
    size_t h;
    unsigned const char *us;
    int i;
    unsigned char c;
    int shift;

    /* cast s to unsigned const char * */
    /* this ensures that elements past 127 don't cause problems */
    us = (unsigned const char *) s;

    h = 0;

    for(i = 0; *us != 0 && i < MAX_BITS; us++) {
        c = *us;
        for(shift = 0; shift < BITS_PER_CHAR; shift++, i++) {
            /* is low bit of c set? */
            if(c & 0x1) {
                h ^= x[i];
            }
            /* shift c to get new bit in lowest position */
            c >>= 1;
        }
    }

    return h;
}
As you can see, this requires a lot of bit-fiddling. It also fails if we get a lot of
strings that are identical for the first MAX_STRING_SIZE characters. Conceivably,
the latter problem could be dealt with by growing x dynamically as needed. But
we also haven’t addressed the question of where we get these random values
from—see the chapter on randomization for some possibilities.
In practice, universal families of hash functions are seldom used, since a reasonable
fixed hash function is unlikely to be correlated with any patterns in the actual
input. But they are useful for demonstrating provably good performance.
All of the running time results for hash tables depend on keeping the load factor
α small. But as more elements are inserted into a fixed-size table, the load factor
grows without bound. The usual solution to this problem is rehashing: when
the load factor crosses some threshold, we create a new hash table of size 2n or
thereabouts and migrate all the elements to it.
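Here is a minimal sketch of that policy for a chained hash table of ints. All of
the names here (struct table, rehash, maybeRehash) are made up for this
illustration and are not part of the dictionary implementation shown later in
this chapter.

#include <stdlib.h>

struct elt {
    struct elt *next;
    int key;
};

struct table {
    size_t size;         /* number of slots */
    size_t n;            /* number of elements stored */
    struct elt **slots;  /* array of chains */
};

/* move every element into a fresh table of twice the size */
static void
rehash(struct table *t)
{
    size_t newSize = t->size * 2;
    struct elt **newSlots;
    struct elt *e;
    struct elt *next;
    size_t h;
    size_t i;

    newSlots = malloc(sizeof(struct elt *) * newSize);
    for(i = 0; i < newSize; i++) newSlots[i] = 0;

    for(i = 0; i < t->size; i++) {
        for(e = t->slots[i]; e != 0; e = next) {
            next = e->next;
            h = (size_t) e->key % newSize;  /* toy hash function */
            e->next = newSlots[h];
            newSlots[h] = e;
        }
    }

    free(t->slots);
    t->slots = newSlots;
    t->size = newSize;
}

/* called after every insertion */
static void
maybeRehash(struct table *t)
{
    if(t->n > t->size) {   /* load factor has crossed 1 */
        rehash(t);
    }
}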
This approach raises the worst-case cost of an insertion to O(n). However, we
can bring the expected cost down to O(1) by rehashing only with probability
O(1/n) for each insert after the threshold is crossed. Or we can apply amortized
analysis to argue that the amortized cost (total cost divided by number of
operations) is O(1) assuming we double the table size on each rehash. Neither the
expected-cost nor the amortized-cost approaches actually change the worst-case
cost, but they make it look better by demonstrating that we at least don’t incur
that cost every time.
With enough machinery, it may be possible to deamortize the cost of rehashing
by doing a little bit of it with every insertion. The idea is to build the new hash
table incrementally, and start moving elements to it once it is fully initialized.
This requires keeping around two copies of the hash table and searching both,
and for most purposes is more trouble than it’s worth. But a mechanism like
this is often used for real-time garbage collection, where it’s important not to
have the garbage collector lock up the entire system while it does its work.
5.4.6 Examples
/* destroy an IDList */
void IDListDestroy(IDList list);
#include "idList.h"
struct idList {
int size;
int ids[1]; /* we'll actually malloc more space than this */
};
IDList
IDListCreate(int n, int unsortedIdList[])
{
IDList list;
int size;
int i;
int probe;
/* else */
list->size = size;
/* load it up */
for(i = 0; i < n; i++) {
assert(list->ids[probe] == NULL_ID);
list->ids[probe] = unsortedIdList[i];
}
return list;
}
void
IDListDestroy(IDList list)
{
free(list);
}
int
IDListContains(IDList list, int id)
{
int probe;
return 0;
}
examples/hashTables/idList/idList.c
/* destroy a dictionary */
void DictDestroy(Dict);
/* delete the most recently inserted record with the given key */
/* if there is no such record, has no effect */
void DictDelete(Dict, const char *key);
examples/hashTables/dict/dict.h
#include <stdlib.h>
#include <assert.h>
#include <string.h>
#include "dict.h"
struct elt {
struct elt *next;
char *key;
char *value;
};
struct dict {
int size; /* size of the pointer table */
int n; /* number of elements stored */
struct elt **table;
};
/* initialization code shared by DictCreate and grow */
static Dict
internalDictCreate(int size)
{
    Dict d;
    int i;
    d = malloc(sizeof(*d));
    assert(d != 0);
    d->size = size;
    d->n = 0;
    d->table = malloc(sizeof(struct elt *) * d->size);
    assert(d->table != 0);
    /* make sure all chains start out empty */
    for(i = 0; i < d->size; i++) d->table[i] = 0;
    return d;
}
Dict
DictCreate(void)
{
return internalDictCreate(INITIAL_SIZE);
}
void
DictDestroy(Dict d)
{
    int i;
    struct elt *e;
    struct elt *next;

    for(i = 0; i < d->size; i++) {
        for(e = d->table[i]; e != 0; e = next) {
            next = e->next;

            free(e->key);
            free(e->value);
            free(e);
        }
    }

    free(d->table);
    free(d);
}
h = 0;
return h;
}
static void
grow(Dict d)
{
Dict d2; /* new dictionary we'll create */
struct dict swap; /* temporary structure for brain transplant */
int i;
struct elt *e;
d2 = internalDictCreate(d->size * GROWTH_FACTOR);
DictDestroy(d2);
}
assert(key);
assert(value);
e = malloc(sizeof(*e));
assert(e);
e->key = strdup(key);
e->value = strdup(value);
h = hash_function(key) % d->size;
e->next = d->table[h];
d->table[h] = e;
d->n++;
return 0;
}
/* delete the most recently inserted record with the given key */
/* if there is no such record, has no effect */
void
DictDelete(Dict d, const char *key)
{
struct elt **prev; /* what to change when elt is deleted */
struct elt *e; /* what to delete */
free(e->key);
free(e->value);
free(e);
return;
}
}
}
examples/hashTables/dict/dict.c
And here is some (very minimal) test code.
#include <stdio.h>
#include <assert.h>
#include "dict.h"
int
main()
{
Dict d;
char buf[512];
int i;
d = DictCreate();
DictDestroy(d);
return 0;
}
examples/hashTables/dict/test_dict.c
The first rule of programming is that you should never write the same code
twice. Suppose that you happen to have lying around a dictionary type whose
keys are ints and whose values are strings. Tomorrow you realize that what
you really want is a dictionary type whose keys are strings and whose values
are ints, or one whose keys are ints but whose values are stacks. If you have
n different types that may appear as keys or values, can you avoid writing n²
different dictionary implementations to get every possible combination?
Many languages provide special mechanisms to support generic types, ones
for which part of the type is not specified. It’s as if you could declare an array
in C to be an array of some type to be specified later, and then write functions
that operate on any such array without knowing what the missing type is going
to be (templates in C++ are an example of such a mechanism). Unfortunately,
C does not provide generic types. But by aggressive use of function pointers and
void *, it is possible to fake them.
Similarly, we will want to be able to compare keys for equality (since not all keys
that hash together will necessarily be the same), and we may want to be able
to copy keys and values so that the data inside the dictionary is not modified
if somebody changes a value passed in from the outside. So we need a fair bit
of information about keys and values. We’ll organize all of this information in
a struct made up of function pointers. (This includes a few extra components
that came up while writing the implementation.)
/* Provides operations for working with keys or values */
struct dictContentsOperations {
    /* hash function */
    unsigned long (*hash)(const void *datum, void *arg);
    /* returns nonzero if *datum1 == *datum2 */
    int (*equal)(const void *datum1, const void *datum2, void *arg);
    /* make a copy of datum that survives changes to the original */
    void *(*copy)(const void *datum, void *arg);
    /* free a copy */
    void (*delete)(void *datum, void *arg);
    void *arg;   /* extra argument, passed to the functions above */
};
/*
 * DictIntOps supports int's that have been cast to (void *), e.g.:
 * d = dictCreate(DictIntOps, DictIntOps);
 * dictSet(d, (void *) 1, (void *) 2);
 * x = (int) dictGet(d, (void *) 1);
 */
struct dictContentsOperations DictIntOps;
/*
* Supports null-terminated strings, e.g.:
* d = dictCreate(DictStringOps, DictStringOps);
* dictSet(d, "foo", "bar");
* s = dictGet(d, "foo");
* Note: no casts are needed since C automatically converts
* between (void *) and other pointer types.
*/
struct dictContentsOperations DictStringOps;
/*
 * Supports fixed-size blocks of memory, e.g.:
 * int x = 1;
 * int y = 2;
 * d = dictCreate(dictMemOps(sizeof(int)), dictMemOps(sizeof(int)));
 * dictSet(d, &x, &y);
 * printf("%d", *(int *) dictGet(d, &x));
 */
struct dictContentsOperations dictMemOps(int size);
We’ll define the operations in DictIntOps to expect ints cast directly to void *,
the operations in DictStringOps to expect char * cast to void *, and the
operations in dictMemOps(size) will expect void * arguments pointing to
blocks of the given size. There is a subtle difference between a dictionary using
DictIntOps and dictMemOps(sizeof(int)); in the former case, keys and values
are the ints themselves (after being cast), while in the latter, keys and values
are pointers to ints.
Implementations of these structures can be found below.
To make a dictionary that maps strings to ints, we just call:
d = dictCreate(DictStringOps, DictIntOps);
and then we can do things like:
dictSet(d, "foo", (void *) 2);
v = (int) dictGet(d, "foo");
If we find ourselves working with an integer-valued dictionary a lot, we might
want to define a few macros or inline functions to avoid having to type casts all
the time.
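For example, something along these lines would do; the wrapper names
dictSetInt and dictGetInt are not part of the interface above, and the
intermediate cast through long is just one common way to keep the
int-to-pointer conversion quiet:

/* convenience wrappers for a dictionary whose values are ints,
   assuming the dictCreate/dictSet/dictGet interface above */
#define dictSetInt(d, key, value) (dictSet((d), (key), (void *) (long) (value)))
#define dictGetInt(d, key)        ((int) (long) dictGet((d), (key)))

With these, the example above becomes dictSetInt(d, "foo", 2) followed by
v = dictGetInt(d, "foo").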
5.5.2 Generic dictionary: implementation
To implement our generic dictionary, we just take our favorite non-generic hash
table, and replace any calls to fixed hash functions, copier, free, etc. with calls
to elements of the appropriate structure. The result is shown below.
typedef struct dict *Dict;
/* free a copy */
void (*delete)(void *datum, void *arg);
/*
* DictIntOps supports int's that have been cast to (void *), e.g.:
* d = dictCreate(DictIntOps, DictIntOps);
* dictSet(d, (void *) 1, (void *) 2);
* x = (int) dictGet(d, (void *) 1);
*/
struct dictContentsOperations DictIntOps;
/*
* Supports null-terminated strings, e.g.:
* d = dictCreate(DictStringOps, DictStringOps);
* dictSet(d, "foo", "bar");
* s = dictGet(d, "foo");
* Note: no casts are needed since C automatically converts
* between (void *) and other pointer types.
*/
struct dictContentsOperations DictStringOps;
/*
* Supports fixed-size blocks of memory, e.g.:
* int x = 1;
* int y = 2;
* d = dictCreate(dictMemOps(sizeof(int)), dictMemOps(sizeof(int)));
* dictSet(d, &x, &y);
* printf("%d", *(int *) dictGet(d, &x));
*/
struct dictContentsOperations dictMemOps(int size);
examples/generic/dict.h
#include <stdlib.h>
#include <string.h>
#include "dict.h"
struct dictElt {
unsigned long hash; /* full hash of key */
void *key;
void *value;
struct dictElt *next;
};
struct dict {
int tableSize; /* number of slots in table */
int numElements; /* number of elements */
struct dictElt **table; /* linked list heads */
/* these save arguments passed at creation */
struct dictContentsOperations keyOps;
struct dictContentsOperations valueOps;
};
Dict
dictCreate(struct dictContentsOperations keyOps,
struct dictContentsOperations valueOps)
{
Dict d;
int i;
d = malloc(sizeof(*d));
if(d == 0) return 0;
d->tableSize = INITIAL_TABLESIZE;
d->numElements = 0;
d->keyOps = keyOps;
d->valueOps = valueOps;
d->table = malloc(sizeof(*(d->table)) * d->tableSize);
    if(d->table == 0) {
        free(d);
        return 0;
    }

    /* start with all chains empty */
    for(i = 0; i < d->tableSize; i++) d->table[i] = 0;

    return d;
}
void
dictDestroy(Dict d)
{
    int i;
    struct dictElt *e;
    struct dictElt *next;

    for(i = 0; i < d->tableSize; i++) {
        for(e = d->table[i]; e != 0; e = next) {
            next = e->next;

            d->keyOps.delete(e->key, d->keyOps.arg);
            d->valueOps.delete(e->value, d->valueOps.arg);
            free(e);
        }
    }

    free(d->table);
    free(d);
}
/* find the element matching a given key, or return 0 if there is none */
static struct dictElt *
dictFetch(Dict d, const void *key)
{
    unsigned long h;
    int i;
    struct dictElt *e;

    h = d->keyOps.hash(key, d->keyOps.arg);
    i = h % d->tableSize;

    for(e = d->table[i]; e != 0; e = e->next) {
        if(e->hash == h && d->keyOps.equal(key, e->key, d->keyOps.arg)) {
            /* found it */
            return e;
        }
    }

    /* didn't find it */
    return 0;
}
d->tableSize = old_size;
return;
}
/* else */
/* clear new table */
for(i = 0; i < d->tableSize; i++) d->table[i] = 0;
void
dictSet(Dict d, const void *key, const void *value)
{
    int tablePosition;
    struct dictElt *e;

    e = dictFetch(d, key);
    if(e != 0) {
        /* change existing setting */
        d->valueOps.delete(e->value, d->valueOps.arg);
        e->value = value ? d->valueOps.copy(value, d->valueOps.arg) : 0;
    } else {
        /* create new element */
        e = malloc(sizeof(*e));
        if(e == 0) abort();

        e->hash = d->keyOps.hash(key, d->keyOps.arg);
        e->key = d->keyOps.copy(key, d->keyOps.arg);
        e->value = value ? d->valueOps.copy(value, d->valueOps.arg) : 0;

        /* link it in */
        tablePosition = e->hash % d->tableSize;
        e->next = d->table[tablePosition];
        d->table[tablePosition] = e;

        d->numElements++;
    }
}
const void *
dictGet(Dict d, const void *key)
{
struct dictElt *e;
e = dictFetch(d, key);
if(e != 0) {
return e->value;
} else {
return 0;
}
}
/* int functions */
/* We assume that int can be cast to void * and back without damage */
static unsigned long dictIntHash(const void *x, void *arg) { return (int) x; }
static int dictIntEqual(const void *x, const void *y, void *arg)
{
return ((int) x) == ((int) y);
}
static void *dictIntCopy(const void *x, void *arg) { return (void *) x; }
static void dictIntDelete(void *x, void *arg) { ; }
/* hash a block of memory of the given length */
static unsigned long
hashMem(const unsigned char *s, int len)
{
    unsigned long h;
    int i;

    h = 0;
    for(i = 0; i < len; i++) {
        h = (h << 13) + (h >> 7) + h + s[i];
    }
    return h;
}
/* string functions */
static unsigned long dictStringHash(const void *x, void *arg)
{
return hashMem(x, strlen(x));
}
static int dictStringEqual(const void *x, const void *y, void *arg)
{
return !strcmp((const char *) x, (const char *) y);
}
static void *dictStringCopy(const void *x, void *arg)
{
    const char *s;
    char *s2;

    s = x;
    s2 = malloc(sizeof(*s2) * (strlen(s)+1));
    strcpy(s2, s);
    return s2;
}
/* mem functions */
static unsigned long dictMemHash(const void *x, void *arg)
{
return hashMem(x, (int) arg);
}
static int dictMemEqual(const void *x, const void *y, void *arg)
{
return !memcmp(x, y, (size_t) arg);
}
static void *dictMemCopy(const void *x, void *arg)
{
    void *x2;
    x2 = malloc((size_t) arg);
    memcpy(x2, x, (size_t) arg);
    return x2;
}
struct dictContentsOperations
dictMemOps(int len)
{
struct dictContentsOperations memOps;
memOps.hash = dictMemHash;
memOps.equal = dictMemEqual;
memOps.copy = dictMemCopy;
memOps.delete = dictDeleteFree;
memOps.arg = (void *) len;
return memOps;
}
examples/generic/dict.c
And here is some test code and a Makefile: test-dict.c, tester.h, tester.c, Makefile.
5.6 Recursion
#include <stdio.h>
#include <assert.h>
/* all of these routines print numbers i where start <= i < stop */
void
printRangeIterative(int start, int stop)
{
    int i;

    for(i = start; i < stop; i++) printf("%d\n", i);
}
void
printRangeRecursive(int start, int stop)
{
if(start < stop) {
printf("%d\n", start);
printRangeRecursive(start+1, stop);
}
}
void
printRangeRecursiveReversed(int start, int stop)
{
if(start < stop) {
printRangeRecursiveReversed(start+1, stop);
printf("%d\n", start);
}
}
void
printRangeRecursiveSplit(int start, int stop)
{
    int mid;

    if(start < stop) {
        mid = (start + stop) / 2;

        printRangeRecursiveSplit(start, mid);
        printf("%d\n", mid);
        printRangeRecursiveSplit(mid+1, stop);
    }
}
#define Noisy(x) (puts(#x), x)
int
main(int argc, char **argv)
{
if(argc != 1) {
fprintf(stderr, "Usage: %s\n", argv[0]);
return 1;
}
Noisy(printRangeIterative(0, 10));
Noisy(printRangeRecursive(0, 10));
Noisy(printRangeRecursiveReversed(0, 10));
Noisy(printRangeRecursiveSplit(0, 10));
return 0;
}
examples/recursion/recursion.c
And here is the output:
printRangeIterative(0, 10)
0
1
2
3
4
5
6
7
8
9
printRangeRecursive(0, 10)
0
1
2
3
4
5
6
7
8
9
printRangeRecursiveReversed(0, 10)
9
8
7
6
5
4
3
2
1
0
printRangeRecursiveSplit(0, 10)
0
1
2
3
4
5
6
7
8
9
The first function printRangeIterative is simple and direct: it’s what we’ve
been doing to get loops forever. The others are a bit more mysterious.
The function printRangeRecursive is an example of solving a problem using
a divide and conquer approach. If we don’t know how to print a range of
numbers 0 through 9, maybe we can start by solving a simpler problem of
printing the first number 0. Having done that, we have a new, smaller problem:
print the numbers 1 through 9. But then we notice we already have a function
printRangeRecursive that will do that for us. So we’ll call it.
If you aren’t used to this, it has the feeling of trying to make yourself fly by
pulling very hard on your shoelaces.20 But in fact the computer will happily
generate the eleven nested instances of printRangeRecursive to make this
happen. When we hit the bottom, the call stack will look something like this:
printRangeRecursive(0, 10)
printRangeRecursive(1, 10)
printRangeRecursive(2, 10)
printRangeRecursive(3, 10)
printRangeRecursive(4, 10)
printRangeRecursive(5, 10)
printRangeRecursive(6, 10)
printRangeRecursive(7, 10)
printRangeRecursive(8, 10)
printRangeRecursive(9, 10)
printRangeRecursive(10, 10)

20 A small child of my acquaintance once explained that this wouldn’t work, because you
would hit your head on the ceiling.
This works because each call to printRangeRecursive gets its own parameters
and its own variables separate from the others, even the ones that are still in
progress. So each will print out start and then call another copy in to print
start+1 etc. In the last call, we finally fail the test start < stop, so the
function exits, then its parent exits, and so on until we unwind all the calls on
the stack back to the first one.
In printRangeRecursiveReversed, the calling pattern is exactly the same,
but now instead of printing start on the way down, we print start on
the way back up, after making the recursive call. This means that in
printRangeRecursiveReversed(0, 10), 0 is printed only after the results of
printRangeRecursiveReversed(1, 10), which gives us the countdown effect.
So far these procedures all behave very much like ordinary loops, with increas-
ing values on the stack standing in for the loop variable. More exciting is
printRangeRecursiveSplit. This function takes a much more aggressive ap-
proach to dividing up the problem: it splits the range [0, 10) into two ranges [0, 5)
and [6, 10) separated by the midpoint 5. (The notation [x, y) means all numbers z
such that x ≤ z < y.) We want to print the midpoint in the middle, of course,
and we can use printRangeRecursiveSplit recursively to print the two ranges.
Following the execution of this procedure is more complicated, with the start of
the sequence of calls looking something like this:
printRangeRecursiveSplit(0, 10)
printRangeRecursiveSplit(0, 5)
printRangeRecursiveSplit(0, 2)
printRangeRecursiveSplit(0, 1)
printRangeRecursiveSplit(0, 0)
printRangeRecursiveSplit(1, 1)
printRangeRecursiveSplit(2, 2)
printRangeRecursiveSplit(3, 5)
printRangeRecursiveSplit(3, 4)
printRangeRecursiveSplit(3, 3)
printRangeRecursiveSplit(4, 4)
printRangeRecursiveSplit(5, 5)
printRangeRecursiveSplit(6, 10)
... etc.
Here the computation has the structure of a tree instead of a list, so it is not so
obvious how one might rewrite this procedure as a loop.
5.6.2 Common problems with recursion
Like iteration, recursion is a powerful tool that can cause your program to do
much more work than you expect. While it may seem that errors in recursive
functions would be harder to track down than errors in loops, most of the time
they come down to a few basic causes.
For this reason, it’s best to try to avoid linear recursions like the one in
printRangeRecursive, where the depth of the recursion is proportional
to the number of things we are doing. Much safer are even splits like
printRangeRecursiveSplit, since the depth of the stack will now be only
logarithmic in the number of things we are doing. Fortunately, linear recursions
are often tail-recursive, where the recursive call is the last thing the recursive
function does; in this case, we can use a standard transformation (see below) to
convert the tail-recursive function into an iterative function.
Here, for example, is a variant of printRangeRecursiveSplit that computes the
split point slightly differently:

void
printRangeRecursiveSplitBad(int start, int stop)
{
    int mid;

    if(start == stop) {
        printf("%d\n", start);
    } else {
        mid = (start + stop) / 2;

        printRangeRecursiveSplitBad(start, mid);
        printRangeRecursiveSplitBad(mid, stop);
    }
}
This will get stuck on as simple a call as printRangeRecursiveSplitBad(0, 1);
it will set mid to 0, and while the recursive call to printRangeRecursiveSplitBad(0, 0)
will work just fine, the recursive call to printRangeRecursiveSplitBad(0, 1)
will put us back where we started, giving an infinite recursion.
Detecting these errors is usually not too hard (segmentation faults that produce
huge piles of stack frames when you type where in gdb are a dead give-away).
Figuring out how to make sure that you do in fact always make progress can be
trickier.
Tail recursion is when a recursive function calls itself only once, and as the
last thing it does. The printRangeRecursive function is an example of a
tail-recursive function:
void
printRangeRecursive(int start, int stop)
{
if(start < stop) {
printf("%d\n", start);
printRangeRecursive(start+1, stop);
}
}
The nice thing about tail-recursive functions is that we can always translate
them directly into iterative functions. The reason is that when we do the tail
call, we are effectively replacing the current copy of the function with a new
copy with new arguments. So rather than keeping around the old zombie parent
copy—which has no purpose other than to wait for the child to return and then
return itself—we can reuse it by assigning new values to its arguments and
jumping back to the top of the function.
Done literally, this produces this goto-considered-harmful monstrosity:
void
printRangeRecursiveGoto(int start, int stop)
{
  topOfFunction:
    if(start < stop) {
        printf("%d\n", start);
        start = start+1;
        goto topOfFunction;
    }
}
But we can almost always remove goto statements using less dangerous control
structures. In this particular case, the pattern of jumping back to just before an
if matches up exactly with what we get from a while loop:
void
printRangeRecursiveNoMore(int start, int stop)
{
while(start < stop) {
printf("%d\n", start);
start = start+1;
}
}
In functional programming languages, this transformation is usually done in
the other direction, to unroll loops into recursive functions. Since C doesn’t
like recursive functions so much (they blow out the stack!), we usually do this
transformation to get rid of recursion instead of adding it.
#include "binarySearch.h"
int
binarySearch(int target, const int *a, size_t length)
{
size_t index;
index = length/2;
if(length == 0) {
/* nothing left */
return 0;
} else if(target == a[index]) {
/* got it */
return 1;
} else if(target < a[index]) {
/* recurse on bottom half */
return binarySearch(target, a, index);
} else {
/* recurse on top half */
/* we throw away index+1 elements (including a[index]) */
return binarySearch(target, a+index+1, length-(index+1));
}
}
examples/binarySearch/binarySearchRecursive.c
This will work just fine, and indeed it finds the target element (or not) in O(log n)
time, because we can only recurse O(log n) times before running out of elements
and we only pay O(1) cost per recursive call to binarySearch. But we do have
to pay function call overhead for each call, and there is a potential to run into stack
overflow if our stack is very constrained.
Fortunately, we don’t do anything with the return value from binarySearch
but pass it on up the stack: the function is tail-recursive. This means that we
can get rid of the recursion by reusing the stack frame from the initial call. The
mechanical way to do this is wrap the body of the routine in a for(;;) loop
(so that we jump back to the top whenever we hit the bottom), and replace
each recursive call with one or more assignments to update any parameters that
change in the recursive call. The result looks like this:
#include <stddef.h>

#include "binarySearch.h"

int
binarySearch(int target, const int *a, size_t length)
{
    size_t index;

    for(;;) {
        index = length/2;

        if(length == 0) {
            /* nothing left */
            return 0;
        } else if(target == a[index]) {
            /* got it */
            return 1;
        } else if(target < a[index]) {
            /* recurse on bottom half */
            length = index;
        } else {
            /* recurse on top half */
            /* we throw away index+1 elements (including a[index]) */
            a = a + index + 1;
            length = length - (index + 1);
        }
    }
}
examples/binarySearch/binarySearchIterative.c
Here’s some simple test code to demonstrate that these two implementations in
fact do the same thing: Makefile, testBinarySearch.c.
So far the examples we have given have not been very useful, or have involved
recursion that we can easily replace with iteration. Here is an example of a
recursive procedure that cannot be as easily turned into an iterative version.
We are going to implement the mergesort algorithm on arrays. This is a classic
divide and conquer sorting algorithm that splits an array into two pieces, sorts
each piece (recursively!), then merges the results back together. Here is the code,
together with a simple test program.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
i1 = i2 = iout = 0;
if(n < 2) {
/* 0 or 1 elements is already sorted */
memcpy(out, a, sizeof(int) * n);
} else {
/* sort into temp arrays */
a1 = malloc(sizeof(int) * (n/2));
a2 = malloc(sizeof(int) * (n - n/2));
mergeSort(n/2, a, a1);
mergeSort(n - n/2, a + n/2, a2);
/* merge results */
merge(n/2, a1, n - n/2, a2, out);
#define N (20)
int
main(int argc, char **argv)
{
int a[N];
int b[N];
int i;
mergeSort(N, a, b);
return 0;
}
examples/sorting/mergesort.c
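The merge step itself might look something like the following sketch, written to
match the call merge(n/2, a1, n - n/2, a2, out) used above; the actual code in
examples/sorting/mergesort.c may differ in details.

/* merge sorted arrays a1[0..n1-1] and a2[0..n2-1] into out[0..n1+n2-1] */
static void
merge(int n1, const int a1[], int n2, const int a2[], int out[])
{
    int i1;
    int i2;
    int iout;

    i1 = i2 = iout = 0;

    while(i1 < n1 || i2 < n2) {
        if(i2 >= n2 || (i1 < n1 && a1[i1] < a2[i2])) {
            /* a1[i1] exists and is smaller, or a2 is exhausted */
            out[iout++] = a1[i1++];
        } else {
            /* a2[i2] exists and is no larger, or a1 is exhausted */
            out[iout++] = a2[i2++];
        }
    }
}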
The cost of this is pretty cheap: O(n log n), since each element of a is processed
through merge once for each array it gets put in, and the recursion only goes
O(log n) layers deep before we hit 1-element arrays.
The reason that we can’t easily transform this into an iterative version is that
the mergeSort function is not tail-recursive: not only does it call itself twice,
but it also needs to free the temporary arrays at the end. Because the algorithm
has to do these tasks on the way back up the stack, we need to keep the stack
around to track them.
One issue with recursive functions is that it becomes harder to estimate their
asymptotic complexity. Unlike loops, where we can estimate the cost by simply
multiplying the number of iterations by the cost of each iteration, the cost of a
recursive function depends on the cost of its recursive calls. This would make it
seem that we would need to be able to compute the cost of the function before
we could compute the cost of the function.
Fortunately, for most recursive functions, the size of the input drops whenever
we recurse. So the cost can be expressed in terms of a recurrence, a formula
for the cost T (n) on an input of size n in terms of the cost on smaller inputs.
Some examples:
T (n) = O(1) + T (n/2) This is the cost of binary search. To search an array of
n elements, look up the middle element (O(1) time) and, in the worst case,
recurse on an array of n/2 elements.
T (n) = 2T (n/2) + O(n) This is the cost of mergesort. Sort two half-size arrays
recursively, then merge them in O(n) time.
T (n) = O(1) + T (n − 1) This is the cost of most simple loops, if we think of
them as a recursive process. Do O(1) work on the first element, then do
T (n − 1) work on the rest.
There are standard tools for solving many of the recurrences that arise in common
algorithms, but these are overkill for our purposes, since there are only a handful
of recurrences that are likely to come up in practice and we already solved most
of them. Here is a table of some of the more common possibilities:
Recurrence                  Solution              Example
T(n) = T(n/2) + O(1)        T(n) = O(log n)       Binary search
T(n) = 2T(n/2) + O(n)       T(n) = O(n log n)     Mergesort
T(n) = T(n − 1) + O(1)      T(n) = O(n)           A simple loop
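As a quick check of the mergesort entry, we can unroll the recurrence by hand,
writing cn for the O(n) merge cost and assuming for simplicity that n is a power
of 2:

T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = ...
     = 2^k T(n/2^k) + k·cn.

Stopping at k = log₂ n, when n/2^k = 1, gives T(n) = n·T(1) + cn log₂ n =
O(n log n).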
Divide and conquer yields algorithms whose execution has a tree structure.
Sometimes we build data structures that are also trees. It is probably not
surprising that divide and conquer is the natural way to build algorithms that
use such trees as inputs.
Here is a typical binary tree. It is binary because every node has at most two
children. This particular tree is also complete because the nodes consist only
of internal nodes with exactly two children and leaves with no children. Not
all binary trees will be complete.
        0
       / \
      1   2
     / \
    3   4
       / \
      5   6
     / \
    7   8
Structurally, a complete binary tree consists of either a single node (a leaf) or a
root node with a left and right subtree, each of which is itself either a leaf or a
root node with two subtrees. The set of all nodes underneath a particular node
x is called the subtree rooted at x.
The size of a tree is the number of nodes; a leaf by itself has size 1. The height
of a tree is the length of the longest path; 0 for a leaf, at least one in any larger
tree. The depth of a node is the length of the path from the root to that node.
The height of a node is the height of the subtree of which it is the root, i.e. the
length of the longest path from that node to some leaf below it. A node u is
an ancestor of a node v if v is contained in the subtree rooted at u; we may
write equivalently that v is a descendant of u. Note that every node is both
an ancestor and a descendant of itself; if we wish to exclude the node itself, we
refer to a proper ancestor or proper descendant.
5.7.2 Binary tree implementations
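The code in this section assumes a node type with a key and two child pointers;
a minimal declaration might look like this (a sketch, not the exact declaration
used in the examples directory):

struct node {
    int key;              /* value stored at this node */
    struct node *left;    /* left subtree, or 0 if empty */
    struct node *right;   /* right subtree, or 0 if empty */
};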
Pretty much every divide and conquer algorithm for binary trees looks like this:
void
doSomethingToAllNodes(struct node *root)
{
if(root) {
doSomethingTo(root);
doSomethingToAllNodes(root->left);
doSomethingToAllNodes(root->right);
}
}
The function processes all nodes in what is called a preorder traversal, where
the “preorder” part means that the root of any tree is processed first. Moving
the call to doSomethingTo in between or after the two recursive calls yields an
inorder or postorder traversal, respectively.
In practice we usually want to extract some information from the tree. For
example, this function computes the size of a tree:
int
treeSize(struct node *root)
{
if(root == 0) {
return 0;
} else {
return 1 + treeSize(root->left) + treeSize(root->right);
}
}
and this function computes the height:
int
treeHeight(struct node *root)
{
int lh; /* height of left subtree */
int rh; /* height of right subtree */
if(root == 0) {
return -1;
} else {
lh = treeHeight(root->left);
rh = treeHeight(root->right);
return 1 + (lh > rh ? lh : rh);
}
}
Since both of these algorithms have the same structure, they both have the same
asymptotic running time. We can compute this running time by observing that
each recursive call to treeSize or treeHeight that does not get a null pointer
passed to it gets a different node (so there are n such calls), and each call that
does get a null pointer passed to it is called by a routine that doesn’t, and that
there are at most two such calls per node. Since the body of each call itself costs
O(1) (no loops), this gives a total cost of Θ(n).
So these are all Θ(n) algorithms.
For some binary trees we don’t store anything interesting in the internal nodes,
using them only to provide a route to the leaves. We might reasonably ask if an
algorithm that runs in O(n) time where n is the total number of nodes still runs
in O(m) time, where m counts only the leaves. For complete binary trees, we
can show that we get the same asymptotic performance whether we count leaves
only, internal nodes only, or both leaves and internal nodes.
Let T (n) be the number of internal nodes in a complete binary tree with n leaves.
It is easy to see that T (1) = 0 and T (2) = 1, but for larger trees there are multiple
structures and so it makes sense to write a recurrence: T (n) = 1+T (k)+T (n−k).
We can show by induction that the solution to this recurrence is exactly T (n) =
n − 1. We already have the base case T (1) = 0. For larger n, we have T (n) =
1 + T (k) + T (n − k) = 1 + (k − 1) + (n − k − 1) = n − 1.
So a complete binary tree with Θ(n) nodes has Θ(n) internal nodes and Θ(n)
leaves; if we don’t care about constant factors, we won’t care which number we
use.
So far we haven’t specified where particular nodes are placed in the binary tree.
Most applications of binary trees put some constraints on how nodes relate to
one another. Some possibilities:
• Heaps: Each node has a key that is less than the keys of both of its children.
These allow for a very simple implementation using arrays, so we will look
at these first.
• Binary search trees: Each node has a key, and a node’s key must be greater
than all keys in the subtree of its left-hand child and less than all keys in
the subtree of its right-hand child.
5.8 Heaps
A heap is a binary tree in which each element has a key (or sometimes priority)
that is less than the keys of its children. Heaps are used to implement the
priority queue abstract data type, which we’ll talk about first.
In a standard queue, elements leave the queue in the same order as they arrive.
In a priority queue, elements leave the queue in order of decreasing priority:
the DEQUEUE operation becomes a DELETE-MIN operation (or DELETE-
MAX, if higher numbers mean higher priority), which removes and returns the
highest-priority element of the priority queue, regardless of when it was inserted.
Priority queues are often used in operating system schedulers to determine which
job to run next: a high-priority job (e.g., turn on the fire suppression system)
runs before a low-priority job (floss the cat) even if the low-priority job has been
waiting longer.
5.8.2 Expensive implementations of priority queues
A heap is a binary tree in which each node has a smaller key than its children;
this property is called the heap property or heap invariant.
To insert a node in the heap, we add it as a new leaf, which may violate the heap
property if the new node has a lower key than its parent. But we can restore the
heap property (at least between this node and its parent) by swapping either
the new node or its sibling with the parent, where in either case we move up
the node with the smaller key. This may still leave a violation of the heap
property one level up in the tree, but by continuing to swap small nodes with
their parents we eventually reach the top and have a heap again. The time to
complete this operation is proportional to the depth of the heap, which will
typically be O(log n) (we will see how to enforce this in a moment).
To implement DELETE-MIN, we can easily find the value to return at the top of
the heap. Unfortunately, removing it leaves a vacuum that must be filled in by
some other element. The easiest way to do this is to grab a leaf (which probably
has a very high key), and then float it down to where it belongs by swapping it
with its smaller child at each iteration. After time proportional to the depth
(again O(log n) if we are doing things right), the heap invariant is restored.
Similar local swapping can be used to restore the heap invariant if the priority
of some element in the middle changes; we will not discuss this in detail.
It is possible to build a heap using structs and pointers, where each element
points to its parent and children. In practice, heaps are instead stored in arrays,
with an implicit pointer structure determined by array indices. For zero-based
arrays as in C, the rule is that a node at position i has children at positions
2*i+1 and 2*i+2; in the other direction, a node at position i has a parent at
position (i-1)/2 (which rounds down in C). This is equivalent to storing a heap
in an array by reading through the tree in breadth-first search order:
        0
       / \
      1   2
     / \ / \
    3  4 5  6

becomes

0 1 2 3 4 5 6
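The index arithmetic can be wrapped up in a few macros; these names are just for
illustration (the heapsort code below defines its own Child macro):

/* index arithmetic for a heap stored in a zero-based array */
#define Parent(i)     (((i) - 1) / 2)
#define LeftChild(i)  (2 * (i) + 1)
#define RightChild(i) (2 * (i) + 2)

For example, in the array above the node at position 2 has children at positions
LeftChild(2) = 5 and RightChild(2) = 6, and Parent(5) and Parent(6) are both 2.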
This approach works best if there are no gaps in the array. So to maximize
efficiency we make this “no gaps” policy part of the invariant. We can do so
because we don’t care which leaf gets added when we do an INSERT, so we
can make it be the position at the end of the array. Similarly, in a DELETE-
MIN operation we can promote the last element to the root before floating it
down. Both these operations change the number of elements in the array, and
INSERTs in particular might force us to reallocate eventually. So in the worst
case INSERT can be an expensive operation, although as with growing hash
tables, the amortized cost may still be small.
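Putting the pieces together, INSERT might look like the following sketch for a
min-heap of ints; heapInsert is illustrative only and is not part of the heapsort
code below (which builds a max-heap).

/* insert key into the min-heap a[0..n-1]; assumes the array has room
   for at least n+1 elements */
void
heapInsert(int *a, int n, int key)
{
    int i;
    int tmp;

    /* the new element goes into the first free slot at the end */
    i = n;
    a[i] = key;

    /* float it up while it is smaller than its parent */
    while(i > 0 && a[(i - 1) / 2] > a[i]) {
        tmp = a[(i - 1) / 2];
        a[(i - 1) / 2] = a[i];
        a[i] = tmp;
        i = (i - 1) / 2;
    }
}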
If we are presented with an unsorted array, we can turn it into a heap more
quickly than the O(n log n) time required to do n INSERTs. The trick is to
build the heap from the bottom up (i.e., starting with position n − 1 and working
back to position 0), so that when it comes time to fix the heap invariant at
position i we have already fixed it at all later positions (this is a form of dynamic
programming). Unfortunately, it is not quite enough simply to swap a[i] with
its smaller child when we get there, because we could find that a[0] (say) was
the largest element in the heap. But the cost of floating a[i] down to its proper
place will be proportional to its own height rather than the height of the entire
heap. Since most of the elements of the heap are close to the bottom, the total
cost will turn out to be O(n).
5.8.6 Heapsort
Here is a simple implementation of heapsort that demonstrates how both bottom-
up heapification and the DELETE-MAX procedure work by floating elements
down to their proper places:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
/* compute child 0 or 1 */
#define Child(x, dir) (2*(x)+1+(dir))
/* float the value at position pos down to its proper place in the max-heap a[0..n-1] */
static void
floatDown(int n, int *a, int pos)
{
    int x;

    x = a[pos];    /* save the value being floated down */

    for(;;) {
if(Child(pos, 1) < n && a[Child(pos, 1)] > a[Child(pos, 0)]) {
/* maybe swap with Child(pos, 1) */
if(a[Child(pos, 1)] > x) {
a[pos] = a[Child(pos, 1)];
pos = Child(pos, 1);
} else {
/* x is bigger than both kids */
break;
}
} else if(Child(pos, 0) < n && a[Child(pos, 0)] > x) {
/* swap with Child(pos, 0) */
a[pos] = a[Child(pos, 0)];
pos = Child(pos, 0);
} else {
/* done */
break;
}
}
a[pos] = x;
}
static void
heapify(int n, int *a)
{
    int i;

    /* float each element down, starting from the bottom of the heap */
    for(i = n-1; i >= 0; i--) floatDown(n, a, i);
}
/* sort an array */
void
heapSort(int n, int *a)
{
    int i;
    int tmp;

    heapify(n, a);
    for(i = n-1; i > 0; i--) {
        /* swap the max to position i, then restore the heap in a[0..i-1] */
        tmp = a[0]; a[0] = a[i]; a[i] = tmp;
        floatDown(i, a, 0);
    }
}
#define N (100)
#define MULTIPLIER (17)
int
main(int argc, char **argv)
{
int a[N];
int i;
if(argc != 1) {
fprintf(stderr, "Usage: %s\n", argv[0]);
return 1;
}
    /* fill the array with values in a scrambled order */
    for(i = 0; i < N; i++) { a[i] = (i * MULTIPLIER) % N; }
for(i = 0; i < N; i++) { printf("%d ", a[i]); }
putchar('\n');
heapSort(N, a);
return 0;
}
examples/sorting/heapsort.c
• Priority_queue
• Binary_heap
• http://mathworld.wolfram.com/Heap.html
5.9 Binary search trees

A binary search tree is a binary tree in which each node has a key, and a
node’s key must be greater than all keys in the subtree of its left-hand child and
less than all keys in the subtree of its right-hand child. This allows a node to be
searched for using essentially the same binary search algorithm used on sorted
arrays.
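Searching follows directly from the definition: compare the target to the key at
the root, and recurse into the one subtree that could contain it. A sketch of the
recursive version, assuming the struct node used earlier (the name
treeSearchRecursive is made up here):

struct node *
treeSearchRecursive(struct node *root, int target)
{
    if(root == 0 || root->key == target) {
        /* either target is not present, or we found it */
        return root;
    } else if(root->key > target) {
        /* target can only be in the left subtree */
        return treeSearchRecursive(root->left, target);
    } else {
        /* target can only be in the right subtree */
        return treeSearchRecursive(root->right, target);
    }
}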
This procedure can be rewritten iteratively, which avoids stack overflow and is
likely to be faster:
struct node *
treeSearch(struct node *root, int target)
{
while(root != 0 && root->key != target) {
if(root->key > target) {
root = root->left;
} else {
root = root->right;
}
}
return root;
}
These procedures can be modified in the obvious way to deal with keys that
aren’t ints, as long as they can be compared (e.g., by using strcmp on strings).
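For instance, a version for string keys only changes the node type and the
comparison; the struct snode type and the name streeSearch below are made up
for this sketch:

#include <string.h>

struct snode {
    char *key;
    struct snode *left;
    struct snode *right;
};

struct snode *
streeSearch(struct snode *root, const char *target)
{
    int cmp;

    while(root != 0 && (cmp = strcmp(target, root->key)) != 0) {
        if(cmp < 0) {
            root = root->left;     /* target sorts before root->key */
        } else {
            root = root->right;    /* target sorts after root->key */
        }
    }
    return root;
}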
Insertion uses essentially the same loop, except that when we reach a null
pointer we create the new node and attach it there:

/* insert a new key into a non-empty tree rooted at root */
void
treeInsert(struct node *root, int key)
{
    struct node *newNode;

    newNode = malloc(sizeof(*newNode));
assert(newNode);
newNode->key = key;
newNode->left = 0;
newNode->right = 0;
for(;;) {
if(root->key > key) {
/* try left child */
if(root->left) {
root = root->left;
} else {
/* put it in */
root->left = newNode;
return;
}
} else {
/* right child case is symmetric */
if(root->right) {
root = root->right;
} else {
/* put it in */
root->right = newNode;
return;
}
}
}
}
Note that this code happily inserts duplicate keys. It also makes no attempt
to keep the tree balanced. This may lead to very long paths if new keys are
inserted in strictly increasing or strictly decreasing order.
Deletion is more complicated. If a node has no children, we can just remove it,
and the rest of the tree stays the same. A node with one child can be spliced
out, connecting its parent directly to its child. But with two children, we can’t
do this.
The trick is to find the leftmost node in our target’s right subtree (or vice versa).
This node exists assuming the target has two children. As in a hash table, we
can then swap our target node with this more convenient node. Because it is
the leftmost node, it has no left child, so we can delete it using the no-children
or one-child case.
5.9.4 Costs
Searching for or inserting a node in a binary search tree with n nodes takes time
proportional to the depth of the node. In balanced trees, where the nodes in
each subtree are divided roughly evenly between the two child subtrees, this will
be O(log n), but for a badly unbalanced tree, this might be as much as O(n).
So making a binary search tree work efficiently requires keeping it balanced.
5.10 Augmented trees

An augmented tree is a binary search tree in which each node carries some extra
summary information about the subtree rooted at it, typically an aggregate such
as the subtree's height or size, which is kept up to date as the tree changes.

5.10.1 Applications
Storing the height field can be useful for balancing, as in AVL trees.
Storing the size allows ranking (computing the number of elements less than
a given target value) and unranking (finding an element with a particular rank).
Sample code for doing this is given in the AVL tree sample implementation
below.
Storing other aggregates like the sum of keys or values allows range queries,
where we ask, for example, for some aggregate statistic (like the sum or average)
of all the elements between some given minimum and maximum.
Assuming we keep the tree balanced and correctly maintain the aggregate data
for each subtree, all of these operations can be done in O(log n) time.
5.11 Balanced trees

Binary search trees are a fine idea, but they only work if they are balanced—if
moving from a tree to its left or right subtree reduces the size by a constant
fraction. Balanced binary trees add some extra mechanism to the basic binary
search tree to ensure balance. Finding efficient ways to balance a tree has been
studied for decades, and several good mechanisms are known. We’ll try to hit
the high points of all of them.
5.11.1 Tree rotations
The problem is that as we insert new nodes, some paths through the tree may
become very long. So we need to be able to shrink the long paths by moving
nodes elsewhere in the tree.
But how do we do this? The idea is to notice that there may be many binary
search trees that contain the same data, and that we can transform one into
another by a local modification called a rotation:
      y                 x
     / \     <==>      / \
    x   C              A   y
   / \                    / \
  A   B                  B   C
Rotations in principle let us rebalance a tree, but we still need to decide when
to do them. If we try to keep the tree in perfect balance (all paths nearly the
same length), we’ll spend so much time rotating that we won’t be able to do
anything else.
5.11.2 AVL trees
AVL trees solve this problem by enforcing the invariant that the heights of
the two subtrees sitting under each node differ by at most one. This does not
guarantee perfect balance, but it does get close. Let S(k) be the size of the
smallest AVL tree with height k. This tree will have at least one subtree of height
k − 1, but its other subtree can be of height k − 2 (and should be, to keep it as
small as possible). We thus have the recurrence S(k) = 1 + S(k − 1) + S(k − 2),
which is very close to the Fibonacci recurrence.
It is possible to solve this exactly using generating functions. But we can get
close by guessing that S(k) ≥ a^k for some constant a. This clearly works for
S(0) = 1 = a^0. For larger k, compute

S(k) = 1 + S(k−1) + S(k−2) ≥ 1 + a^(k−1) + a^(k−2) = 1 + a^k (1/a + 1/a^2) > a^k (1/a + 1/a^2).

This last quantity is at least a^k provided (1/a + 1/a^2) is at least 1. We can
solve exactly for the largest a that makes this work, but a very quick calculation
shows that a = 3/2 works: 2/3 + 4/9 = 10/9 > 1. It follows that any AVL tree
with height k has at least (3/2)^k nodes, or conversely that any AVL tree with
(3/2)^k nodes has height at most k. So the height of an arbitrary AVL tree with
n nodes is no greater than log_{3/2} n = O(log n).
How do we maintain this invariant? The first thing to do is add extra information
to the tree, so that we can tell when the invariant has been violated. AVL
trees store in each node the difference between the heights of its left and right
subtrees, which will be one of −1, 0, or 1. In an ideal world this would require
log2 3 ≈ 1.58 bits per node, but since fractional bits are difficult to represent on
modern computers a typical implementation uses two bits. Inserting a new node
into an AVL tree involves
1. Doing a standard binary search tree insertion.
2. Updating the balance fields for every node on the insertion path.
3. Performing a single or double rotation to restore balance if needed.
Implementing this correctly is tricky. Intuitively, we can imagine a version of an
AVL tree in which we stored the height of each node (using O(log log n) bits).
When we insert a new node, only the heights of its ancestors change—so step
2 requires updating O(log n) height fields. Similarly, it is only these ancestors
that can be too tall. It turns out that fixing the closest ancestor fixes all the
ones above it (because it shortens their longest paths by one as well). So just
one single or double rotation restores balance.
Deletions are also possible, but are uglier: a deletion in an AVL tree may require
as many as O(log n) rotations. The basic idea is to use the standard binary
search tree deletion trick of either splicing out a node if it has no right child, or
replacing it with the minimum value in its right subtree (the node for which is
spliced out); we then have to check to see if we need to rebalance at every node
above whatever node we removed.
Which rotations we need to do to rebalance depends on how some pair of siblings
are unbalanced. Below, we show the possible cases.
Zig-zig case: This can occur after inserting in A or deleting in C. Here we rotate
A up:
      y                  x
     / \     ===>       / \
    x   C              A   y
   / \                 |  / \
  A   B                #  B  C
  |
  #
Zig-zag case: This can occur after inserting in B or deleting in C. This requires
a double rotation.
      z                   z                   y
     / \      ===>       / \      ===>       / \
    x   C               y   C               x   z
   / \                 / \                 /|   |\
  A   y               x   B2              A B1 B2 C
     / \             / \
    B1  B2          A   B1
Zig-zag case, again: This last case comes up after deletion if both nephews of
the short node are too tall. The same double rotation we used in the previous
case works here, too. Note that one of the subtrees is still one taller than the
others, but that’s OK.
      z                   z                   y
     / \      ===>       / \      ===>       / \
    x   C               y   C               x   z
   / \                 / \                 /|   |\
  A   y               x   B2              A B1 B2 C
  |  / \             / \                            |
  # B1  B2          A   B1                          #
                    |
                    #
implemented in the treeBalance function, which fixes any violations of the AVL
balance rule.
/*
* Basic binary search tree data structure without balancing info.
*
* Convention:
*
* Operations that update a tree are passed a struct tree **,
* so they can replace the argument with the return value.
*
* Operations that do not update the tree get a const struct tree *.
*/
struct tree {
/* we'll make this an array so that we can make some operations symmetric */
struct tree *child[TREE_NUM_CHILDREN];
int key;
int height; /* height of this node */
size_t size; /* size of subtree rooted at this node */
};
/* delete minimum element from the tree and return its key */
/* do not call this on an empty tree */
int treeDeleteMin(struct tree **root);
/* return height of tree */
int treeHeight(const struct tree *root);
#include "tree.h"
int
treeHeight(const struct tree *root)
{
if(root == 0) {
return TREE_EMPTY_HEIGHT;
} else {
return root->height;
}
}
/* recompute the height of root from the heights of its children */
static int
treeComputeHeight(const struct tree *root)
{
    int i;
    int maxChildHeight;
    if(root == 0) {
        return TREE_EMPTY_HEIGHT;
    } else {
        maxChildHeight = TREE_EMPTY_HEIGHT;
        for(i = 0; i < TREE_NUM_CHILDREN; i++) {
            if(treeHeight(root->child[i]) > maxChildHeight) maxChildHeight = treeHeight(root->child[i]);
        }
        return maxChildHeight + 1;
    }
}
size_t
treeSize(const struct tree *root)
{
if(root == 0) {
return 0;
} else {
return root->size;
}
}
/* recompute the size of the subtree rooted at root from its children */
static size_t
treeComputeSize(const struct tree *root)
{
    int i;
    size_t size;
    if(root == 0) {
        return 0;
    } else {
        size = 1;
        /* add in the sizes of all children */
        for(i = 0; i < TREE_NUM_CHILDREN; i++) size += treeSize(root->child[i]);
        return size;
    }
}
/* fix aggregate data in root */
/* assumes children are correct */
static void
treeAggregateFix(struct tree *root)
{
if(root) {
root->height = treeComputeHeight(root);
root->size = treeComputeSize(root);
}
}
/*
 *      y                 x
 *     / \               / \
 *    x   C     <=>     A   y
 *   / \                   / \
 *  A   B                 B   C
 */
y = *root; assert(y);
x = y->child[direction]; assert(x);
b = x->child[!direction];
/* do the rotation */
*root = x;
x->child[!direction] = y;
y->child[direction] = b;
{
int tallerChild;
if(*root) {
for(tallerChild = 0; tallerChild < TREE_NUM_CHILDREN; tallerChild++) {
if(treeHeight((*root)->child[tallerChild]) >= treeHeight((*root)->child[!tallerC
#ifdef PARANOID_REBALANCE
treeSanityCheck(*root);
#endif
}
}
/* free all space used by the tree at *root, setting *root to TREE_EMPTY */
void
treeDestroy(struct tree **root)
{
    int i;

    if(*root) {
        for(i = 0; i < TREE_NUM_CHILDREN; i++) {
            treeDestroy(&(*root)->child[i]);
        }
        free(*root);
        *root = TREE_EMPTY;
    }
}
if(*root == 0) {
/* not already there, put it in */
e = malloc(sizeof(*e));
assert(e);
e->key = newElement;
e->child[LEFT] = e->child[RIGHT] = 0;
*root = e;
} else if((*root)->key == newElement) {
/* already there, do nothing */
return;
} else {
/* do this recursively so we can fix data on the way back out */
treeInsert(&(*root)->child[(*root)->key < newElement], newElement);
}
return t != 0;
}
/* delete minimum element from the tree and return its key */
/* do not call this on an empty tree */
int
treeDeleteMin(struct tree **root)
{
struct tree *toFree;
int retval;
if((*root)->child[LEFT]) {
/* recurse on left subtree */
retval = treeDeleteMin(&(*root)->child[LEFT]);
} else {
/* delete the root */
toFree = *root;
retval = toFree->key;
*root = toFree->child[RIGHT];
free(toFree);
}
return retval;
}
} else {
treeDelete(&(*root)->child[(*root)->key < target], target);
}
if(root != 0) {
treePrintIndented(root->child[LEFT], depth+1);
treePrintIndented(root->child[RIGHT], depth+1);
}
}
size_t
treeRank(const struct tree *t, int target)
{
    size_t rank = 0;

    while(t && t->key != target) {
        if(t->key < target) {
            /* go right */
            /* root and left subtree are all less than target */
            rank += (1 + treeSize(t->child[LEFT]));
            t = t->child[RIGHT];
        } else {
            /* go left */
            t = t->child[LEFT];
        }
    }

    /* also count the elements in the left subtree of target itself */
    return rank + treeSize(t->child[LEFT]);
}
int
treeUnrank(const struct tree *t, size_t rank)
{
    size_t leftSize;
    /* descend until the root of the current subtree has the given rank */
    while(rank != (leftSize = treeSize(t->child[LEFT]))) {
        if(rank < leftSize) { t = t->child[LEFT]; }
        else { rank -= leftSize + 1; t = t->child[RIGHT]; }
    }
    return t->key;
}
/* check that the aggregate data in the tree is correct */
void
treeSanityCheck(const struct tree *root)
{
    int i;

    if(root) {
        assert(root->height == treeComputeHeight(root));
        assert(root->size == treeComputeSize(root));

        for(i = 0; i < TREE_NUM_CHILDREN; i++) {
            treeSanityCheck(root->child[i]);
        }
    }
}
#ifdef TEST_MAIN
int
main(int argc, char **argv)
{
int key;
int i;
const int n = 10;
const int randRange = 1000;
const int randTrials = 10000;
struct tree *root = TREE_EMPTY;
if(argc != 1) {
fprintf(stderr, "Usage: %s\n", argv[0]);
return 1;
}
/* original test */
for(i = 0; i < n; i++) {
assert(!treeContains(root, i));
treeInsert(&root, i);
assert(treeContains(root, i));
treeSanityCheck(root);
#ifdef PRINT_AFTER_OPERATIONS
treePrint(root);
puts("---");
#endif
}
/* check ranks */
for(i = 0; i < n; i++) {
assert(treeRank(root, i) == i);
assert(treeUnrank(root, i) == i);
}
treeSanityCheck(root);
    /* now delete everything and check that it is gone */
    for(i = 0; i < n; i++) {
        treeDelete(&root, i);
        assert(!treeContains(root, i));
        treeSanityCheck(root);
#ifdef PRINT_AFTER_OPERATIONS
        treePrint(root);
        puts("---");
#endif
    }
treeSanityCheck(root);
treeDestroy(&root);
/* random test */
srand(1);
treeSanityCheck(root);
treeDestroy(&root);
#ifdef TEST_USE_STDIN
while(scanf("%d", &key) == 1) {
/* insert if positive, delete if negative */
if(key > 0) {
treeInsert(&root, key);
assert(treeContains(root, key));
} else if(key < 0) {
treeDelete(&root, -key);
assert(!treeContains(root, key));
}
/* else ignore 0 */
#ifdef PRINT_AFTER_OPERATIONS
treePrint(root);
puts("---");
#endif
}
treeSanityCheck(root);
treeDestroy(&root);
#endif /* TEST_USE_STDIN */
return 0;
}
#endif /* TEST_MAIN */
examples/trees/AVL/tree.c
This Makefile will compile and run some demo code in tree.c if run with make
test.
(An older implementation can be found in the directory examples/trees/oldAvlTree.)
5.11.3 2–3 trees

An early branch in the evolution of balanced trees was the 2–3 tree. Here all
paths have the same length, but internal nodes have either 2 or 3 children. So a
2–3 tree with height k has between 2^k and 3^k leaves and a comparable number
of internal nodes. The maximum path length in a tree with n nodes is at most
⌈lg n⌉, as in a perfectly balanced binary tree.
An internal node in a 2–3 tree holds one key if it has two children (including two
nil pointers) and two if it has three children. A search that reaches a three-child
node must compare the target with both keys to decide which of the three
subtrees to recurse into. As in binary trees, these comparisons take constant
time, so we can search a 2–3 tree in O(log n) time.
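A node structure for a 2–3 tree might look something like this sketch; the field
names are made up for illustration and real implementations vary:

struct twoThreeNode {
    int numKeys;                     /* 1 or 2 */
    int key[2];                      /* key[0] < key[1] when numKeys == 2 */
    struct twoThreeNode *child[3];   /* numKeys + 1 children; all 0 in a leaf */
};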
Insertion is done by expanding leaf nodes. This may cause a leaf to split when
it acquires a third key. When a leaf splits, it becomes two one-key nodes and
the middle key moves up into its parent. This may cause further splits up the
ancestor chain; the tree grows in height by adding a new root when the old root
splits. In practice only a small number of splits are needed for most insertions,
but even in the worst case this entire process takes O(log n) time.
It follows that 2–3 trees have the same performance as AVL trees. Conceptually,
they are simpler, but having to write separate cases for 2-child and 3-child nodes
doubles the size of most code that works on 2–3 trees. The real significance of
2–3 trees is as a precursor to two other kinds of trees, the red-black tree and the
B-tree.
5.11.4 Red-black trees

A red-black tree is a 2–3–4 tree (i.e. all nodes have 2, 3, or 4 children and 1, 2,
or 3 internal keys) where each node is represented by a little binary tree with a
black root and zero, one, or two red extender nodes as follows:
The invariant for a red-black tree is that
1. No two red nodes are adjacent.
2. Every path contains the same number of black nodes.
Figure 1: redblacknodes.png
For technical reasons, we include the null pointers at the bottom of the tree as
black nodes; this has no effect on the invariant, but simplifies the description of
the rebalancing procedure.
From the invariant it follows that every path has between k and 2k nodes, where
k is the black-height, the common number of black nodes on each path. From
this we can prove that the height of the tree is O(log n).
Searching in a red-black tree is identical to searching in any other binary search
tree; we simply ignore the color bit on each node. So search takes O(log n) time.
For insertions, we use the standard binary search tree insertion algorithm, and
insert the new node as a red node. This may violate the first part of the invariant
(it doesn’t violate the second because it doesn’t change the number of black
nodes on any path). In this case we need to fix up the constraint by recoloring
nodes and possibly performing a single or double rotation.
Which operations we need to do depend on the color of the new node’s uncle.
If the uncle is red, we can recolor the node’s parent, uncle, and grandparent
and get rid of the double-red edge between the new node and its parent without
changing the number of black nodes on any path. In this case, the grandparent
becomes red, which may create a new double-red edge which must be fixed
recursively. Thus up to O(log n) such recolorings may occur at a total cost of
O(log n).
If the uncle is black (which includes the case where the uncle is a null pointer),
a rotation (possibly a double rotation) and recoloring is necessary. In this case
(depicted at the bottom of the picture above), the new grandparent is always
black, so there are no more double-red edges. So at most two rotations occur
after any insertion.
Deletion is more complicated but can also be done in O(log n) recolorings and
O(1) (in this case up to 3) rotations. Because deletion is simpler in red-black
trees than in AVL trees, and because operations on red-black trees tend to have
slightly smaller constants than the corresponding operations on AVL trees, red-black
trees are more often used than AVL trees in practice.
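In code, the only change from an ordinary binary search tree node is one extra bit of color information per node; a possible layout (hypothetical, not taken from the course examples) is:
#define RED (0)
#define BLACK (1)

struct rbNode {
    int color;                 /* RED or BLACK; null pointers are treated as BLACK */
    int key;
    struct rbNode *child[2];   /* left and right children, as in the AVL tree code */
};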
Figure 2: redblackrebalance.png
5.11.5 B-trees
When a node would otherwise end up with M children, it splits into two nodes
with M/2 children each, and moves its middle key up into its parent. As in 2–3
trees this may eventually require the root to split and a new root to be created;
in practice, M is often large enough that a small fixed height is enough to span
as much data as the storage system is capable of holding.
Searches in B-trees require looking through log_M n nodes, at a cost of O(M)
time per node. If M is a constant the total time is asymptotically O(log n). But
the reason for using B-trees is that the O(M) cost of reading a block is trivial
compared to the much larger constant time to find the block on the disk; and so
it is better to minimize the number of disk accesses (by making M large) than to
reduce the CPU time.
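A sketch of what a B-tree node might look like, with made-up names; M would be chosen so that one node fills a disk block:
#define M (1024)   /* maximum number of children per node */

struct bTreeNode {
    int isLeaf;                  /* 1 if this node is a leaf */
    int numKeys;                 /* roughly between M/2 - 1 and M - 1, except at the root */
    int key[M-1];                /* keys in sorted order */
    struct bTreeNode *child[M];  /* child[i] covers keys between key[i-1] and key[i] */
};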
This is probably best understood by looking at a figure from the original paper:
The bottom two cases are the ones we will do most of the time.
Just looking at the picture, it doesn’t seem like zig-zig will improve balance
much. But if we have a long path made up of zig-zig cases, each operation will
push at least one node off of this path, cutting the length of the path in half. So
the rebalancing happens as much because we are pushing nodes off of the long
path as because the specific rotation operations improve things locally.
5.11.6.2 Analysis
Sleator and Tarjan show that any sequence of m splay operations on an n-node
splay tree has total cost at most O((m+n) log n+m). For large m (at least linear
in n), the O(m log n) term dominates, giving an amortized cost per operation of
O(log n), the same as we get from any balanced binary tree. This immediately
gives a bound on search costs, because the cost of plunging down the tree to find
the node we are looking for is proportional to the cost of splaying it up to the
root.
Splay trees have a useful “caching” property in that they pull frequently-accessed
nodes to the top and push less-frequently-accessed nodes down. The
authors show that if only k of the n nodes are accessed, the long-run amortized
cost per search drops to O(log k). For more general access sequences, it is
conjectured that the cost to perform a sufficiently long sequence of searches using
a splay tree is in fact optimal up to a constant factor (the “dynamic optimality
conjecture”), but no one has yet been able to prove this conjecture (or provide a
counterexample).21
21 See arXiv:1306.0207.
We could solve both of these problems by including parent pointers in our tree, but
this would add a lot of complexity and negate the space improvement over AVL
trees of not having to store heights.
The solution given in the Sleator-Tarjan paper is to replace the bottom-up splay
procedure with a top-down splay procedure that accomplishes the same task.
The idea is that rotating a node up from the bottom effectively splits the tree
above it into two new left and right subtrees by pushing ancestors sideways
according to the zig-zig and zig-zag patterns. But we can recognize these zig-zig
and zig-zag patterns from the top as well, and so we can construct these same
left and right subtrees from the top down instead of the bottom up. When we
do this, instead of adding new nodes to the tops of the trees, we will be adding
new nodes to the bottoms, as the right child of the rightmost node in the left
tree or the left child of the leftmost node in the right tree.
Here’s the picture, from the original paper:
To implement this, we need to keep track of the roots of the three trees, as well
as the locations in the left and right trees where we will be adding new vertices.
The roots we can just keep pointers to. For the lower corners of the trees, it
makes sense to store instead a pointer to the pointer location, so that we can
modify the pointer in the tree (and then move the pointer to point to the pointer
in the new corner). Initially, these corner pointers will just point to the left and
right tree roots, which will start out empty.
The last step (shown as Figure 12 from the paper) pastes the tree back together
by inserting the left and right trees between the new root and its children.
5.11.6.5 An implementation
Here is an implementation of a splay tree, with an interface similar to the
previous AVL tree implementation.
/*
* Basic binary search tree data structure without balancing info.
*
* Convention:
*
* Operations that update a tree are passed a struct tree **,
* so they can replace the argument with the return value.
*
* Operations that do not update the tree get a const struct tree *.
*/
struct tree {
/* we'll make this an array so that we can make some operations symmetric */
struct tree *child[TREE_NUM_CHILDREN];
int key;
};
examples/trees/splay/tree.h
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <limits.h>
#include "tree.h"
/*
* y x
* / \ / \
* x C <=> A y
* / \ / \
* A B B C
*/
y = *root; assert(y);
x = y->child[direction]; assert(x);
b = x->child[!direction];
/* do the rotation */
*root = x;
x->child[!direction] = y;
y->child[direction] = b;
}
/* link operations for top-down splay */
/* this pastes a node in as !d-most node in subtree on side d */
static inline void
treeLink(struct tree ***hook, int d, struct tree *node)
{
*hook[d] = node;
/* strictly speaking we don't need to do this, but it allows printing the partial trees */
node->child[!d] = 0;
hook[d] = &node->child[!d];
}
/* we don't need to keep following this pointer, we'll just fix it at the end */
t = *root;
/* keep going until we hit the key or we would hit a null pointer in the child */
while(t->key != target && (child = t->child[dChild = t->key < target]) != 0) {
/* child is not null */
grandchild = child->child[dGrandchild = child->key < target];
#ifdef DEBUG_SPLAY
treePrint(top[0]);
puts("---");
treePrint(t);
puts("---");
treePrint(top[1]);
puts("===");
#endif
/* insert an element into a tree pointed to by root */
void
treeInsert(struct tree **root, int newElement)
{
struct tree *e;
struct tree *t;
int d; /* which side of e to put old root on */
treeSplay(root, newElement);
t = *root;
e->key = newElement;
if(t == 0) {
e->child[LEFT] = e->child[RIGHT] = 0;
} else {
/* split tree and put e on top */
/* we know t is closest to e, so we don't have to move anything else */
d = (*root)->key > newElement;
e->child[d] = t;
e->child[!d] = t->child[!d];
t->child[!d] = 0;
}
treeSplay(root, target);
if(*root && (*root)->key == target) {
/* save pointers to kids */
left = (*root)->child[LEFT];
right = (*root)->child[RIGHT];
/* return left */
*root = left;
}
}
}
if(root != 0) {
treePrintIndented(root->child[LEFT], depth+1);
treePrintIndented(root->child[RIGHT], depth+1);
}
}
void
treePrint(const struct tree *root)
{
treePrintIndented(root, 0);
}
#ifdef TEST_MAIN
int
main(int argc, char **argv)
{
int i;
const int n = 10;
struct tree *root = TREE_EMPTY;
if(argc != 1) {
fprintf(stderr, "Usage: %s\n", argv[0]);
return 1;
}
treeDestroy(&root);
return 0;
}
#endif
examples/trees/splay/tree.c
A Makefile is provided as well. The file speedTest.c can be used to do a simple test of the efficiency
of inserting many random elements. On my machine, the splay tree version is
about 10% slower than the AVL tree for this test on a million elements. This
probably indicates a bigger slowdown for treeInsert itself, because some of the
time will be spent in rand and treeDestroy, but I was too lazy to actually test
this further.
5.11.7 Scapegoat trees
Scapegoat trees are another amortized balanced tree data structure. The idea
of a scapegoat tree is that if we ever find ourselves doing an insert at the end of
a path that is too long, we can find some subtree rooted at a node along this
path that is particularly imbalanced and rebalance it all at once at a cost of O(k)
where k is the size of the subtree. These were shown by Galperin and Rivest
(SODA 1993) to give O(log n) amortized cost for inserts, while guaranteeing
O(log n) depth, so that inserts also run in O(log n) worst-case time; they also
came up with the name “scapegoat tree”, although it turns out the same data
structure had previously been published by Andersson in 1989. Unlike splay
trees, scapegoat trees do not require modifying the tree during a search, and
unlike AVL trees, scapegoat trees do not require tracking any information in
nodes (although they do require tracking the total size of the tree and, to allow
for rebalancing after many deletes, the maximum size of the tree since the last
time the entire tree was rebalanced).
Unfortunately, scapegoat trees are not very fast, so one is probably better off
with an AVL tree.
5.11.8 Skip lists
Skip lists are yet another balanced tree data structure, where the tree is disguised
as a tower of linked lists. Since they use randomization for balance, we describe
them with other randomized data structures.
5.11.9 Implementations
AVL trees and red-black trees have been implemented for every reasonable
programming language you’ve ever heard of. For C implementations, a good
place to start is at http://adtinfo.org/.
5.12 Graphs
Graphs can be used to model any situation where we have things that are related
to each other in pairs; for example, all of the following can be represented by
graphs:
Family trees Nodes are members, with an edge from each parent to each of
their children.
Transportation networks Nodes are airports, intersections, ports, etc. Edges
are airline flights, one-way roads, shipping routes, etc.
Assignments Suppose we are assigning classes to classrooms. Let each node be
either a class or a classroom, and put an edge from a class to a classroom
if the class is assigned to that room. This is an example of a bipartite
graph, where the nodes can be divided into two sets S and T and all edges
go from S to T .
Figure 3: A graph
Figure 4: A directed graph
5.12.3 Operations on graphs
A good graph representation will allow us to answer one or both of two questions
quickly: is there an edge from u to v, and what are the successors of a given vertex
u? There are generally two standard representations of graphs that are used in
graph algorithms, depending on which question is more important.
For both representations, we simplify the representation task by insisting that
vertices be labeled 0, 1, 2, . . . , n − 1, where n is the number of vertices in the
graph. If we have a graph with different vertex labels (say, airport codes), we
can enforce an integer labeling by a preprocessing step where we assign integer
labels, and then translate the integer labels back into more useful user labels
afterwards. The preprocessing step can usually be done using a hash table in
O(n) time, which is likely to be smaller than the cost of whatever algorithm we
are running on our graph, and the savings in code complexity and running time
from working with just integer labels will pay this cost back many times over.
successors of u is also O(d+(u)), which is clearly the best possible since it takes
that long just to write them all down. Finding predecessors of a node u is
extremely expensive, requiring looking through every list of every node in time
O(n + m), where m is the total number of edges, although if this is something
we actually need to do often we can store a second copy of the graph with the
edges reversed.
Adjacency lists are thus most useful when we mostly want to enumerate outgoing
edges of each node. This is common in search tasks, where we want to find a
path from one node to another or compute the distances between pairs of nodes.
If other operations are important, we can optimize them by augmenting the
adjacency list representation; for example, using sorted arrays for the adjacency
lists reduces the cost of edge existence testing to O(log(d+(u))), and adding a
second copy of the graph with reversed edges lets us find all predecessors of u in
O(d−(u)) time, where d−(u) is u’s in-degree.
Adjacency lists also require much less space than adjacency matrices for sparse
graphs: O(n + m) vs O(n^2) for adjacency matrices. For this reason adjacency
lists are more commonly used than adjacency matrices.
5.12.4.2.1 An implementation
Here is an implementation of a basic graph type using adjacency lists.
/* basic directed graph type */
/* invoke f on all edges (u,v) with source u */
/* supplying data as final parameter to f */
/* no particular order is guaranteed */
void graphForeach(Graph g, int source,
void (*f)(Graph g, int source, int sink, void *data),
void *data);
examples/graphs/graph.h
#include <stdlib.h>
#include <assert.h>
#include "graph.h"
/* these arrays may or may not be sorted: if one gets long enough
* and you call graphHasEdge on its source, it will be */
struct graph {
int n; /* number of vertices */
int m; /* number of edges */
struct successors {
int d; /* number of successors */
int len; /* number of slots in array */
int isSorted; /* true if list is already sorted */
int list[]; /* actual list of successors starts here */
} *alist[];
};
g->n = n;
g->m = 0;
g->alist[i] = malloc(sizeof(struct successors));
assert(g->alist[i]);
g->alist[i]->d = 0;
g->alist[i]->len = 0;
g->alist[i]->isSorted= 1;
}
return g;
}
/* return the number of vertices in the graph */
int
graphVertexCount(Graph g)
{
return g->n;
}
return g->alist[source]->d;
}
static int
intcmp(const void *a, const void *b)
{
return *((const int *) a) - *((const int *) b);
}
if(! g->alist[source]->isSorted) {
qsort(g->alist[source]->list,
g->alist[source]->d,
sizeof(int),
intcmp);
}
5.12.4.3 Implicit representations
For some graphs, it may not make sense to represent them explicitly. An example
might be the word-search graph from CS223/2005/Assignments/HW10, which
consists of all words in a dictionary with an edge between any two words that
differ only by one letter. In such a case, rather than building an explicit data
structure containing all the edges, we might generate edges as needed when
computing the neighbors of a particular vertex. This gives us an implicit or
procedural representation of a graph.
Implicit representations require the ability to return a vector or list of values
from the neighborhood-computing function. There are various ways to do this, of
which the most sophisticated might be to use an iterator.
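As a small sketch (not from the course examples), here is a neighborhood-computing function for an implicit graph on vertices 0 through n − 1 in which x has edges to x + 1 and 2·x whenever those are also vertices; the caller supplies an output array with room for two entries:
/* write the successors of x into out[]; return how many there are */
int
implicitNeighbors(int n, int x, int out[])
{
    int count = 0;

    if(x + 1 < n) { out[count++] = x + 1; }
    if(x != 0 && 2 * x < n) { out[count++] = 2 * x; }  /* skip the self-loop 0 -> 0 */

    return count;
}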
on all the nodes that are reachable from s, and (b) for each such node v, a
parent pointer back to the source of the edge that brought v into the tree. Often
these two values can be combined by using a null parent pointer to represent the
absence of a mark (this usually requires making the root point to itself so that
we know it’s in the tree). Another value that may be useful is a table showing
the order in which nodes were added to the tree.
What kind of tree we get depends on what we use for the bucket—specifically,
on what edge is returned when we ask for the “best” edge. Two easy cases are:
1. The bucket is a stack. When we ask for an outgoing edge, we get the
last edge inserted. This has the effect of running along as far as possible
through the graph before backtracking, since we always keep going from
the last node if possible. The resulting algorithm is called depth-first
search and yields a depth-first search tree. If we don’t care about the
lengths of the paths we consider, depth-first search is a perfectly good
algorithm for testing connectivity. It can also be implemented without any
auxiliary data structures as a recursive procedure, as long as we don’t go
so deep as to blow out the system stack.
2. The bucket is a queue. Now when we ask for an outgoing edge, we get the
first edge inserted. This favors edges that are close to the root: we don’t
start considering edges from nodes adjacent to the root until we have already
added all the root’s successors to the tree, and similarly we don’t start
considering edges at distance k until we have already added all the closer
nodes to the tree. This gives breadth-first search, which constructs a
shortest-path tree in which every path from the root to a node in the
tree has the minimum length.
Structurally, these algorithms are almost completely identical; indeed, if we
organize the stack/queue so that it can pop from both ends, we can switch
between depth-first search and breadth-first search just by choosing which end
to pop from.
Below, we give a combined implementation of both depth-first search and breadth-
first search that does precisely this, although this is mostly for show. Typical
implementations of breadth-first search include a further optimization, where we
test an edge to see if we should add it to the tree (and possibly add it) before
inserting it into the queue. This gives the same result as the DFS-like implementation
but only requires O(n) space for the queue instead of O(m), with a smaller
constant as well since we don’t need to bother storing source edges in the queue.
An example of this approach is given below.
The running time of any of these algorithms is very fast: we pay O(1) per
vertex in setup costs and O(1) per edge during the search (assuming the input
is in adjacency-list form), giving a linear O(n + m) total cost. Often it is more
expensive to set up the graph in the first place than to run a search on it.
5.12.5.1 Implementation of depth-first and breadth-first search
Here is a simple implementation of depth-first search, using a recursive algorithm,
and breadth-first search, using an iterative algorithm that maintains a queue of
vertices. In both cases the algorithm is applied to a sample graph whose vertices
are the integers 0 through n − 1 for some n, and in which vertex x has edges to
vertices x/2, 3 · x, and x + 1, whenever these values are also integers in the range
0 through n − 1. For large graphs it may be safer to run an iterative version of
DFS that uses an explicit stack instead of a possibly very deep recursion.
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <stdint.h>
struct node {
Vertex *neighbors; /* array of outgoing edges, terminated by VERTEX_NULL */
Vertex parent; /* for search */
};
struct graph {
size_t n; /* number of vertices */
struct node *v; /* list of vertices */
};
void
graphDestroy(struct graph *g)
{
Vertex v;
free(g);
}
/* this graph has edges from x to x+1, x to 3*x, and x to x/2 (when x is even) */
struct graph *
makeSampleGraph(size_t n)
{
struct graph *g;
Vertex v;
const int allocNeighbors = 4;
int i;
g = malloc(sizeof(*g));
assert(g);
g->n = n;
g->v = malloc(sizeof(struct node) * n);
assert(g->v);
/* fill in neighbors */
g->v[v].neighbors = malloc(sizeof(Vertex) * allocNeighbors);
i = 0;
if(v % 2 == 0) { g->v[v].neighbors[i++] = v/2; }
if(3*v < n) { g->v[v].neighbors[i++] = 3*v; }
if(v+1 < n) { g->v[v].neighbors[i++] = v+1; }
g->v[v].neighbors[i++] = VERTEX_NULL;
}
return g;
}
puts("digraph G {");
puts("}");
}
{
do {
printf(" %d", u);
u = g->v[u].parent;
} while(g->v[u].parent != u);
}
puts("digraph G {");
puts("}");
}
if(g->v[child].parent == VERTEX_NULL) {
g->v[child].parent = parent;
for(i = 0; (neighbor = g->v[child].neighbors[i]) != VERTEX_NULL; i++) {
dfsHelper(g, child, neighbor);
}
}
}
void
dfs(struct graph *g, Vertex root)
{
dfsHelper(g, root, root);
}
/* compute BFS tree starting at root */
void
bfs(struct graph *g, Vertex root)
{
Vertex *q;
int head; /* deq from here */
int tail; /* enq from here */
Vertex current;
Vertex nbr;
int i;
q = malloc(sizeof(Vertex) * g->n);
assert(q);
head = tail = 0;
free(q);
}
int
main(int argc, char **argv)
{
int n;
struct graph *g;
if(argc != 3) {
fprintf(stderr, "Usage: %s action n\nwhere action =\n g - print graph\n d - print DFS tree\n b - print BFS tree\n", argv[0]);
return 1;
}
n = atoi(argv[2]);
g = makeSampleGraph(n);
switch(argv[1][0]) {
case 'g':
printGraph(g);
break;
case 'd':
dfs(g, 0);
printTree(g);
break;
case 'b':
bfs(g, 0);
printTree(g);
break;
default:
fprintf(stderr, "%s: unknown action '%c'\n", argv[0], argv[1][0]);
return 1;
}
graphDestroy(g);
return 0;
}
examples/graphSearch/search.c
The output of the program is either the graph, a DFS tree of the graph rooted
at 0, or a BFS tree of the graph rooted at 0, in a format suitable for feeding to
the GraphViz program dot, which draws pictures of graphs.
Here are the pictures for n = 20.
Figure 5: The full graph
Figure 6: DFS tree
Figure 7: BFS tree
* struct searchInfo *s;
* int n;
*
* s = searchInfoCreate(g);
*
* n = graph_vertices(g);
* for(i = 0; i < n; i++) {
* dfs(s, i);
* }
*
* ... use results in s ...
*
* searchInfoDestroy(s);
*
*/
struct searchInfo {
Graph graph;
int reached; /* count of reached nodes */
int *preorder; /* list of nodes in order first reached */
int *time; /* time[i] == position of node i in preorder list */
int *parent; /* parent in DFS or BFS forest */
int *depth; /* distance from root */
};
#include <assert.h>
#include "graph.h"
#include "genericSearch.h"
a = malloc(sizeof(*a) * n);
assert(a);
return a;
}
s = malloc(sizeof(*s));
assert(s);
s->graph = g;
s->reached = 0;
n = graphVertexCount(g);
s->preorder = createEmptyArray(n);
s->time = createEmptyArray(n);
s->parent = createEmptyArray(n);
s->depth = createEmptyArray(n);
return s;
}
/* free searchInfo data---does NOT free graph pointer */
void
searchInfoDestroy(struct searchInfo *s)
{
free(s->depth);
free(s->parent);
free(s->time);
free(s->preorder);
free(s);
}
/* stack/queue */
struct queue {
struct edge *e;
int bottom;
int top;
};
static void
pushEdge(Graph g, int u, int v, void *data)
{
struct queue *q;
q = data;
q->e[q->top].u = u;
q->e[q->top].v = v;
q->top++;
}
/* edge we are working on */
struct edge cur;
q.bottom = q.top = 0;
/* no */
assert(r->reached < graphVertexCount(r->graph));
r->parent[cur.v] = cur.u;
r->time[cur.v] = r->reached;
r->preorder[r->reached++] = cur.v;
if(cur.u == cur.v) {
/* we could avoid this if we were certain SEARCH_INFO_NULL */
/* would never be anything but -1 */
r->depth[cur.v] = 0;
} else {
r->depth[cur.v] = r->depth[cur.u] + 1;
}
free(q.e);
}
void
dfs(struct searchInfo *results, int root)
{
genericSearch(results, root, 0);
}
void
bfs(struct searchInfo *results, int root)
{
genericSearch(results, root, 1);
}
examples/graphs/genericSearch.c
And here is some test code: genericSearchTest.c. You will need to compile
genericSearchTest.c together with both genericSearch.c and graph.c to
get it to work. This Makefile will do this for you.
5.13 Dynamic programming
There are two parts to dynamic programming. The first part is a programming
technique: dynamic programming is essentially divide and conquer run in reverse:
we solve a big instance of a problem by breaking it up recursively into smaller
instances; but instead of carrying out the computation recursively from the top
down, we start from the bottom with the smallest instances of the problem,
solving each increasingly large instance in turn and storing the result in a
table. The second part is a design principle: in building up our table, we are
careful always to preserve alternative solutions we may need later, by delaying
commitment to particular choices to the extent that we can.
The bottom-up aspect of dynamic programming is most useful when a straight-
forward recursion would produce many duplicate subproblems. It is most efficient
when we can enumerate a class of subproblems that doesn’t include too many
extraneous cases that we don’t need for our original problem.
To take a simple example, suppose that we want to compute the n-th Fibonacci
number using the defining recurrence
• F(n) = F(n − 1) + F(n − 2)
• F(1) = F(0) = 1.
A naive approach would simply code the recurrence up directly:
int
fib(int n)
{
if(n < 2) {
return 1;
} else {
return fib(n-1) + fib(n-2);
}
}
The running time of this procedure is easy to compute. The recurrence is
• T(n) = T(n − 1) + T(n − 2) + Θ(1),
whose solution is Θ(a^n) where a is the golden ratio 1.6180339887498948482 . . ..
This is badly exponential.23
5.13.1 Memoization
The problem is that we keep recomputing values of fib that we’ve already
computed. We can avoid this by memoization, where we wrap our recursive
solution in a memoizer that stores previously-computed solutions in a hash
23 But it’s linear in the numerical value of the output, which means that fib(n) will actually
terminate in a reasonable amount of time on a typical modern computer when run on any n
small enough that F (n) fits in 32 bits. Running it using 64-bit (or larger) integer representations
will be slower.
table. Sensible programming languages will let you write a memoizer once and
apply it to arbitrary recursive functions. In less sensible programming languages
it is usually easier just to embed the memoization in the function definition itself,
like this:
int
memoFib(int n)
{
int ret;
if(hashContains(FibHash, n)) {
return hashGet(FibHash, n);
} else {
ret = memoFib(n-1) + memoFib(n-2);
hashPut(FibHash, n, ret);
return ret;
}
}
The assumption here is that FibHash is a global hash table that we have initialized
to map 0 and 1 to 1. The total cost of running this procedure is O(n), because
fib is called at most twice for each value k in 0 . . . n.
Memoization is a very useful technique in practice, but it is not popular with
algorithm designers because computing the running time of a complex memoized
procedure is often much more difficult than computing the time to fill a nice
clean table. The use of a hash table instead of an array may also add overhead
(and code complexity) that comes out in the constant factors. But it is always
the case that a memoized recursive procedure considers no more subproblems
than a table-based solution, and it may consider many fewer if we are sloppy
about what we put in our table (perhaps because we can’t easily predict what
subproblems will be useful).
Dynamic programming comes to the rescue. Because we know what smaller cases
we have to reduce F(n) to, instead of computing F(n) top-down, we compute it
bottom-up, hitting all possible smaller cases and storing the results in an array:
int
fib2(int n)
{
int *a;
int i;
int ret;
if(n < 2) {
return 1;
} else {
a = malloc(sizeof(*a) * (n+1));
assert(a);
a[1] = a[2] = 1;
/* each later value is the sum of the two previous ones */
for(i = 3; i <= n; i++) {
a[i] = a[i-1] + a[i-2];
}
ret = a[n];
free(a);
return ret;
}
}
Notice the recurrence is exactly the same in this version as in our original
recursive version, except that instead of computing F(n-1) and F(n-2) recursively,
we just pull them out of the array. This is typical of dynamic-programming
solutions: often the most tedious editing step in converting a recursive algorithm
to dynamic programming is changing parentheses to square brackets. As with
memoization, the effect of this conversion is dramatic; what used to be an
exponential-time algorithm is now linear-time.
This last step requires some explanation. We don’t really want to store in
table[i] the full longest increasing subsequence ending at position i, because it
may be very big. Instead, we store the index of the second-to-last element of this
sequence. Since that second-to-last element also has a table entry that stores the
index of its predecessor, by following the indices we can generate a subsequence
of length O(n), even though we only stored O(1) pieces of information in each
table entry. This is similar to the parent pointer technique used in graph search
algorithms.
Here’s what the code looks like:
/* compute a longest strictly increasing subsequence of an array of ints */
/* input is array a with given length n */
/* returns length of LIS */
/* If the output pointer is non-null, writes LIS to output pointer. */
/* Caller should provide at least sizeof(int)*n space for output */
/* If there are multiple LIS's, which one is returned is arbitrary. */
unsigned long
longest_increasing_subsequence(const int a[], unsigned long n, int *output);
examples/dynamicProgramming/lis/lis.h
#include <stdlib.h>
#include <assert.h>
#include "lis.h"
unsigned long
longest_increasing_subsequence(const int a[], unsigned long n, int *output)
{
struct lis_data {
unsigned long length; /* length of LIS ending at this point */
unsigned long prev; /* previous entry in the LIS ending at this point */
} *table;
unsigned long i;
unsigned long j;
unsigned long best_length;
/* default best is just this element by itself */
table[i].length = 1;
table[i].prev = n; /* default end-of-list value */
output[best_length - i - 1] = a[scan];
scan = table[scan].prev;
}
}
free(table);
return best_length;
}
examples/dynamicProgramming/lis/lis.c
A sample program that runs longest_increasing_subsequence on a list of
numbers passed in by stdin is given in test_lis.c. Here is a Makefile.
Implemented like this, the cost of finding an LIS is O(n^2), because to compute
each entry in the array, we have to search through all the previous entries to
find the longest path that ends at a value less than the current one. This can be
improved by using a more clever data structure. If we use a binary search tree
that stores paths keyed by their last values, and augment each node with a field
that represents the maximum length of any path in the subtree under that node,
then we can find the longest feasible path that we can append the current node
to in O(log n) time instead of O(n) time. This brings the total cost down to
only O(n log n).
endpoints i and j, which can be anything). When k = 0, this is just the length
of the i–j edge, or +∞ if there is no such edge. So we can start by computing
L(i, j, 0) for all i and j. Now given L(i, j, k) for all i and j and some k, we can compute
L(i, j, k + 1) by observing that any shortest i–j path that has intermediate
vertices in 0 . . . k either consists of a path with intermediate vertices in 0 . . . k − 1,
or consists of a path from i to k followed by a path from k to j, where both of
these paths have intermediate vertices in 0 . . . k − 1. So we get
• L(i, j, k + 1) = min(L(i, j, k), L(i, k, k) + L(k, j, k)).
Since this takes O(1) time to compute if we have previously computed L(i, j, k)
for all i and j, we can fill in the entire table in O(n^3) time.
Implementation details:
• If we want to reconstruct the shortest path in addition to computing its
length, we can store the first vertex for each i–j path. This will either be
(a) the first vertex in the i–j path for the previous k, or (b) the first vertex
in the i–k path.
• We don’t actually need to use a full three-dimensional array. It’s enough
to store one value for each pair i, j and let k be implicit. At each step we
let L[i][j] be min(L[i][j], L[i][k] + L[k][j]). The trick is that we don’t care
if L[i][k] or L[k][j] has already been updated, because that will only give
us paths with a few extra k vertices, which won’t be the shortest paths
anyway assuming no negative cycles.
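Putting the last observation into code gives the usual Floyd–Warshall triple loop. This is a sketch under the assumption that the graph is handed to us as an n-by-n matrix of edge lengths with a large value standing in for +∞; none of the names here come from the course examples:
#define GRAPH_INFINITY (1000000000)  /* stand-in for +infinity; small enough that adding two doesn't overflow */

/* d points to an n*n array of lengths, stored row-major; d[i*n + j] is */
/* replaced by the length of the shortest i-j path (no negative cycles) */
void
allPairsShortestPaths(int n, int *d)
{
    int i;
    int j;
    int k;

    for(k = 0; k < n; k++) {
        for(i = 0; i < n; i++) {
            for(j = 0; j < n; j++) {
                if(d[i*n + k] + d[k*n + j] < d[i*n + j]) {
                    d[i*n + j] = d[i*n + k] + d[k*n + j];
                }
            }
        }
    }
}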
x and y are strings. Let a and b be single characters. Then L(xa, yb) is the
maximum of:
• L(x, y) + 1, if a = b,
• L(xa, y), or
• L(x, yb).
The idea is that we either have a new matching character we couldn’t use before
(the first case), or we have an LCS that doesn’t use one of a or b (the remaining
cases). In each case the recursive call to LCS involves a shorter prefix of xa or
yb, with an ultimate base case L(x, y) = 0 if at least one of x or y is the empty
string. So we can fill in these values in a table, as long as we are careful to
make sure that the shorter prefixes are always filled first. If we are smart about
remembering which case applies at each step, we can even go back and extract
an actual LCS, by stitching together the places where a = b. Here’s a short C
program that does this:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <string.h>
#include <limits.h>
xLen = strlen(x);
yLen = strlen(y);
for(i = 0; i < xLen; i++) {
for(j = 0; j < yLen; j++) {
/* we can always do no common substring */
best[i][j].length = 0;
best[i][j].prev = 0;
best[i][j].newChar = 0;
}
}
outPos = best[xLen-1][yLen-1].length;
lcs[outPos--] = '\0';
if(p->newChar) {
assert(outPos >= 0);
lcs[outPos--] = p->newChar;
}
}
}
int
main(int argc, char **argv)
{
if(argc != 3) {
fprintf(stderr, "Usage: %s string1 string2\n", argv[0]);
return 1;
}
puts(output);
return 0;
}
examples/dynamicProgramming/lcs/lcs.c
The whole thing takes O(nm) time, where n and m are the lengths of the two input strings.
5.14 Randomization
If you want random values in a C program, there are three typical ways of getting
them, depending on how good (i.e. uniform, uncorrelated, and unpredictable)
you want them to be.
E.g.
#include <stdio.h>
#include <stdlib.h>
int
main(int argc, char **argv)
{
printf("%d\n", rand());
return 0;
}
examples/randomization/randOnce.c
The rand function, declared in stdlib.h, returns a random-looking integer in
the range 0 to RAND_MAX (inclusive) every time you call it. On machines using
the GNU C library RAND_MAX is equal to INT_MAX which is typically 2^31 − 1, but
RAND_MAX may be as small as 32767. There are no particularly strong guarantees
about the quality of random numbers that rand returns, but it should be good
enough for casual use, and it has the advantage that as part of the C standard
you can assume it is present almost everywhere.
Note that rand is a pseudorandom number generator: the sequence of values
it returns is predictable if you know its starting state (and is still predictable
from past values in the sequence even if you don’t know the starting state, if
you are clever enough). It is also the case that the initial seed is fixed, so that
the program above will print the same value every time you run it.
This is a feature: it permits debugging randomized programs. As John von
Neumann, who proposed pseudorandom number generators in his 1946 talk
“Various Techniques Used in Connection With Random Digits,” explained:
We see then that we could build a physical instrument to feed random
digits directly into a high-speed computing machine and could have
the control call for these numbers as needed. The real objection
to this procedure is the practical need for checking computations.
If we suspect that a calculation is wrong, almost any reasonable
check involves repeating something done before. At that point the
introduction of new random numbers would be intolerable.
int
main(int argc, char **argv)
{
srand(time(0));
printf("%d\n", rand());
return 0;
}
examples/randomization/srandFromTime.c
Here time(0) returns the number of seconds since the epoch (00:00:00 UTC,
January 1, 1970, for POSIX systems, not counting leap seconds). Note that this
still might give repeated values if you run it twice in the same second, and it’s
extremely dangerous if you expect to distribute your code to a lot of people who
want different results, since two of your users are likely to run it twice in the
same second. See the discussion of /dev/urandom below for a better method.
int
main(int argc, char **argv)
{
unsigned int randval;
FILE *f;
f = fopen("/dev/random", "r");
fread(&randval, sizeof(randval), 1, f);
fclose(f);
printf("%u\n", randval);
return 0;
}
examples/randomization/devRandom.c
(A similar construction can also be used to obtain a better initial seed for srand
than time(0).)
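Here is a minimal sketch of that construction; it reads a seed from /dev/urandom and falls back on the clock if that fails (this is not one of the course example files):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main(int argc, char **argv)
{
    unsigned int seed;
    FILE *f;

    f = fopen("/dev/urandom", "r");

    if(f == 0 || fread(&seed, sizeof(seed), 1, f) != 1) {
        seed = time(0);   /* fall back on the clock */
    }

    if(f) { fclose(f); }

    srand(seed);

    printf("%d\n", rand());

    return 0;
}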
Both /dev/random and /dev/urandom derive their random bits from physically
random properties of the computer, like time between keystrokes or small
variations in hard disk rotation speeds. The difference between the two is that
/dev/urandom will always give you some random-looking bits, even if it has to
generate extra ones using a cryptographic pseudo-random number generator,
while /dev/random will only give you bits that it is confident are in fact random.
Since your computer only generates a small number of genuinely random bits per
second, this may mean that /dev/random will exhaust its pool if read too often.
In this case, a read on /dev/random will block (just like reading a terminal with
no input on it) until the pool has filled up again.
Neither /dev/random nor /dev/urandom is known to be secure against a deter-
mined attacker, but they are about the best you can do without resorting to
specialized hardware.
The problem here is that there are 2^31 outputs from rand, and 6 doesn’t divide
2^31. So 1 and 2 are slightly more likely to come up than 3, 4, 5, or 6. This can
be particularly noticeable if we want a uniform variable from a larger range, e.g.
[0 . . . ⌊(2/3) · 2^31⌋].
We can avoid this with a technique called rejection sampling, where we reject
excess parts of the output range of rand. For rolling a die, the trick is to reject
anything in the last extra bit of the range that is left over after the largest
multiple of the die size. Here’s a routine that does this, returning a uniform
value in the range 0 to n-1 for any positive n, together with a program that
demonstrates its use for rolling dice:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <time.h>
return r % n;
}
int
main(int argc, char **argv)
{
int i;
srand(time(0));
putchar('\n');
return 0;
}
examples/randomization/randRange.c
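Since the excerpt above leaves out the body of the range-limiting helper, here is a minimal sketch of the rejection-sampling idea; the name randInRange is a guess at the interface, not necessarily what the example file uses:
/* return a uniform random value in the range 0..n-1; assumes 0 < n <= RAND_MAX */
/* (needs <stdlib.h> for rand and RAND_MAX) */
int
randInRange(int n)
{
    int limit;
    int r;

    /* reject anything at or above the largest multiple of n that is <= RAND_MAX */
    limit = RAND_MAX - (RAND_MAX % n);

    do {
        r = rand();
    } while(r >= limit);

    return r % n;
}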
More generally, rejection sampling can be used to get random values with
particular properties, where it’s hard to generate a value with that property
directly. Here’s a program that generates random primes:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <time.h>
/* return 1 if n is prime */
int
isprime(int n)
{
int i;
if(n % 2 == 0 || n == 1) { return 0; }
return 1;
}
return r;
}
int
main(int argc, char **argv)
{
int i;
srand(time(0));
}
return 0;
}
examples/randomization/randPrime.c
One temptation to avoid is to re-use your random values. If, for example, you
try to find a random prime by picking a random x and trying x, x+1, x+2, etc.,
until you hit a prime, some primes are more likely to come up than others.
return r;
}
This will find a winning value in 8 tries on average. In contrast, this deterministic
version will take a lot longer for nonzero patterns:
int
matchBitsDeterministic(int pattern)
{
int i;
return i;
}
The downside of the randomized approach is that it’s hard to tell when to quit
if there are no matches; if we stop after some fixed number of trials, we get a
Monte Carlo algorithm that may give the wrong answer with small probability.
The usual solution is to either accept a small probability of failure, or interleave
a deterministic backup algorithm that always works. The latter approach gives
a Las Vegas algorithm whose running time is variable but whose correctness is
not.
but it’s not hard to show that on average even the bigger pile has no more than 3/4 of the
elements.
* Returns number of elements <= pivot */
static int
splitByPivot(int n, int *a, int pivot)
{
int lo;
int hi;
int temp; /* for swapping */
return lo;
}
if(n == 1) {
return a[0];
}
/* else */
pivot = a[rand() % n]; /* we will tolerate non-uniformity */
lo = splitByPivot(n, a, pivot);
if(n <= 1) {
return;
}
/* else */
pivot = a[rand() % n]; /* we will tolerate non-uniformity */
lo = splitByPivot(n, a, pivot);
quickSort(lo, a);
quickSort(n - lo, a + lo);
}
/* shuffle an array */
void
shuffle(int n, int *a)
{
int i;
int r;
int temp;
}
}
#define N (1024)
int
main(int argc, char **argv)
{
int a[N];
int i;
shuffle(N, a);
shuffle(N, a);
quickSort(N, a);
return 0;
}
examples/randomization/quick.c
the same expected search cost as in a balanced binary tree.
The problem with this approach is that we don’t have any guarantees that the
input will be supplied in random order, and in the worst case we end up with a
linked list. The solution is to put the randomization into the algorithm itself,
making the structure of the tree depend on random choices made by the program
itself.
#include "skiplist.h"
struct skiplist {
int key;
int height; /* number of next pointers */
struct skiplist *next[1]; /* first of many */
};
return i;
}
assert(s);
s->key = key;
s->height = height;
return s;
}
Skiplist s;
int i;
return s;
}
/* free a skiplist */
void
skiplistDestroy(Skiplist s)
{
Skiplist next;
while(s) {
next = s->next[0];
free(s);
s = next;
}
}
return s->key;
}
void
skiplistInsert(Skiplist s, int key)
{
int level;
Skiplist elt;
assert(elt);
target = target->next[level];
}
}
if(s->next[level] == target) {
s->next[level] = target->next[level];
}
}
free(target);
}
examples/trees/skiplist/skiplist.c
Here is the header file, Makefile, and test code: skiplist.h, Makefile,
test_skiplist.c.
= h(y)] for any fixed keys x ≠ y is 1/m, where m is the size of the hash table.
The reason is that the cost of searching for x (with chaining) is linear in the
number of keys already in the table that collide with it. The expected number
of such collisions is the sum of Pr[h(x) = h(y)] over all keys y in the table, or
n/m if we have n keys. So this pairwise collision probability bound is enough to
get the desired n/m behavior out of our table. A family of hash functions with
this property is called universal.
How do we get a universal hash family? For strings, we can use a table of random
values, one for each position and possible character in the string. The hash
of a string is then the exclusive or of the random values hashArray[i][s[i]]
corresponding to the actual characters in the string. If our table has size a power
of two, this has the universal property, because if two strings x and y differ in
some position i, then there is only one possible value of hashArray[i][y[i]]
(mod m) that will make h(x) and h(y) equal.
Typically, to avoid having to build an arbitrarily huge table of random values,
we only hash an initial prefix of the string. Here is a hash function based on this
idea, which assumes that the dictionary data structure d includes a hashArray field that
contains the random values for this particular hash table:
static unsigned long
hash_function(Dict d, const char *s)
{
unsigned const char *us;
unsigned long h;
int i;
h = 0;
return h;
}
A modified version of the Dict hash table from the chapter on hash tables that
uses this hash function is given here: dict.c, dict.h, test_dict.c, Makefile.
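Because the body of the loop in hash_function is not shown above, here is a self-contained sketch of the same tabulation idea with made-up names (PREFIX_LENGTH and a file-scope hashArray instead of a field in the Dict):
#include <limits.h>   /* for UCHAR_MAX */

#define PREFIX_LENGTH (8)   /* how many characters of the key we hash */

/* random values, filled in (e.g. from rand()) when the table is created */
static unsigned long hashArray[PREFIX_LENGTH][UCHAR_MAX + 1];

static unsigned long
tabulationHash(const char *s)
{
    const unsigned char *us;
    unsigned long h;
    int i;

    h = 0;

    /* XOR together one random table entry per character of the prefix */
    for(us = (const unsigned char *) s, i = 0; us[i] != '\0' && i < PREFIX_LENGTH; i++) {
        h ^= hashArray[i][us[i]];
    }

    return h;
}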
Most of the time, when we’ve talked about the asymptotic performance of data
structures, we have implicitly assumed that the keys we are looking up are of
constant size. This means that computing a hash function or comparing two
keys (as in a binary search tree) takes O(1) time. What if this is not the case?
If we consider an m-character string, any reasonable hash function is going to
take O(m) time since it has to look at all of the characters. Similarly, comparing
two m-character strings may also take O(m) time. If we charge for this (as we
should!) then the cost of hash table operations goes from O(1) to O(m) and the
cost of binary search tree operations, even in a balanced tree, goes from O(log n)
to O(m log n). Even sorting becomes more expensive: a sorting algorithm that
does O(n log n) comparisons may now take O(mn log n) time. But maybe we
can exploit the structure of strings to get better performance.
5.15.1 Radix search
Radix search refers to a variety of data structures that support searching for
strings considered as sequences of digits in some large base (or radix). These
are generally faster than simple binary search trees because they usually only
require examining one byte or less of the target string at each level of the tree,
as compared to every byte in the target in a full string comparison. In many
cases, the best radix search trees are even faster than hash tables, because they
only need to look at a small part of the target string to identify it.
We’ll describe several radix search trees, starting with the simplest and working
up.
5.15.1.1 Tries
A trie is a binary tree (or more generally, a k-ary tree where k is the radix)
where the root represents the empty bit sequence and the two children of a
node representing sequence x represent the extended sequences x0 and x1 (or
generally x0, x1, . . . , x(k − 1)). So a key is not stored at a particular node but
is instead represented bit-by-bit (or digit-by-digit) along some path. Typically
a trie assumes that the set of keys is prefix-free, i.e. that no key is a prefix of
another; in this case there is a one-to-one correspondence between keys and
leaves of the trie. If this is not the case, we can mark internal nodes that
also correspond to the ends of keys, getting a slightly different data structure
known as a digital search tree. For null-terminated strings as in C, the null
terminator ensures that any set of strings is prefix-free.
Given this simple description, a trie storing a single long key would have a very
large number of nodes. A standard optimization is to chop off any path with no
branches in it, so that each leaf corresponds to the shortest unique prefix of a
key. This requires storing the key in the leaf so that we can distinguish different
keys with the same prefix.
The name trie comes from the phrase “information retrieval.” Despite the
etymology, trie is now almost always pronounced like try instead of tree to avoid
confusion with other tree data structures.
5.15.1.1.3 Implementation
A typical trie implementation in C might look like this. It uses a GET_BIT macro
similar to the one from the chapter on bit manipulation, except that we reverse
the bits within each byte to get the right sorting order for keys.
typedef struct trie_node *Trie;
/* free a trie */
void trie_destroy(Trie);
#include "trie.h"
struct trie_node {
char *key;
struct trie_node *kids[TRIE_BASE];
};
if(trie == 0) {
/* we lost */
return 0;
} else {
/* check that leaf really contains the target */
return !strcmp(trie->key, target);
}
}
s2 = malloc(strlen(s) + 1);
assert(s2);
strcpy(s2, s);
return s2;
}
t = malloc(sizeof(*t));
assert(t);
if(key) {
t->key = my_strdup(key);
assert(t->key);
} else {
t->key = 0;
}
return t;
}
int bitvalue;
Trie t;
Trie kid;
const char *oldkey;
if(trie == 0) {
return make_trie_node(key);
}
/* else */
/* first we'll search for key */
for(t = trie, bit = 0; !IsLeaf(t); bit++, t = kid) {
kid = t->kids[bitvalue = GET_BIT(key, bit)];
if(kid == 0) {
/* woohoo! easy case */
t->kids[bitvalue] = make_trie_node(key);
return trie;
}
}
/* else */
/* hard case---have to extend the trie */
oldkey = t->key;
#ifdef EXCESSIVE_TIDINESS
t->key = 0; /* not required but makes data structure look tidier */
#endif
/* then split */
t->kids[bitvalue] = make_trie_node(key);
t->kids[!bitvalue] = make_trie_node(oldkey);
return trie;
}
/* kill it */
void
trie_destroy(Trie trie)
{
int i;
if(trie) {
for(i = 0; i < TRIE_BASE; i++) {
trie_destroy(trie->kids[i]);
}
if(IsLeaf(trie)) {
free(trie->key);
}
free(trie);
}
}
static void
trie_print_internal(Trie t, int bit)
{
int i;
int kid;
if(t != 0) {
if(IsLeaf(t)) {
for(i = 0; i < bit; i++) putchar(' ');
puts(t->key);
} else {
for(kid = 0; kid < TRIE_BASE; kid++) {
trie_print_internal(t->kids[kid], bit+1);
}
}
}
}
void
trie_print(Trie t)
{
trie_print_internal(t, 0);
}
examples/trees/trie/trie.c
Here is a short test program that demonstrates how to use it:
#include <stdio.h>
#include <stdlib.h>
#include "trie.h"
size = 1;
line = malloc(size);
if(line == 0) return 0;
n = 0;
int
main(int argc, char **argv)
{
Trie t;
char *line;
t = EMPTY_TRIE;
while((line = getline()) != 0) {
if(!trie_contains(t, line)) {
puts(line);
}
free(line);
}
puts("===");
trie_print(t);
trie_destroy(t);
return 0;
}
examples/trees/trie/test_trie.c
typedef struct patricia_node *Patricia;
Now when searching for a key, instead of using the number of nodes visited so
far to figure out which bit to look at, we just read the bit out of the table. This
means in particular that we can skip over any bits that we don’t actually branch
on. We do however have to be more careful to make sure we don’t run off the
end of our target key, since it is possible that when skipping over intermediate
bits we might skip over some that distinguish our target from all keys in the
table, including longer keys. For example, a Patricia tree storing the strings
abc and abd will first test bit position 22, since that’s where abc and abd differ.
This can be trouble if we are looking for x.
Here’s the search code:
int
patricia_contains(Patricia t, const char *key)
{
int key_bits;
5.15.1.3 Ternary search trees
Ternary search trees were described by Jon Bentley and Bob Sedgewick in
an article in the April 1998 issue of Dr. Dobb’s Journal.
The basic idea is that each node in the tree stores one character from the key
and three child pointers lt, eq, and gt. If the corresponding character in the
target is equal to the character in the node, we move to the next character in
the target and follow the eq pointer out of the node. If the target is less, follow
the lt pointer but stay at the same character. If the target is greater, follow the
gt pointer and again stay at the same character. When searching for a string,
we walk down the tree until we either reach a node that matches the terminating
NUL (a hit), or follow a null pointer (a miss).
A TST acts a bit like a 256-way trie, except that instead of storing an array
of 256 outgoing pointers, we build something similar to a small binary search
tree for the next character. Note that no explicit balancing is done within these
binary search trees. From a theoretical perspective, the worst case is that we
get a 256-node deep linked-list equivalent at each step, multiplying our search
time by 256 = O(1). In practice, only those characters that actually appear in
some key at this stage will show up, so the O(1) is likely to be a small O(1),
especially if keys are presented in random order.
TSTs are one of the fastest known data structures for implementing dictionaries
using strings as keys, beating both hash tables and tries in most cases. They
can be slower than Patricia trees if there are many keys with long matching
prefixes; however, a Patricia-like optimization can be applied to give a com-
pressed ternary search tree that works well even in this case. In practice,
regular TSTs are usually good enough.
Here is a simple implementation of an insert-only TST. The C code includes two
versions of the insert helper routine; the first is the original recursive version
and the second is an iterative version generated by eliminating the tail recursion
from the first.
typedef struct tst_node *TST;
/* free a TST */
void tst_destroy(TST);
examples/trees/tst/tst.h
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include "tst.h"
struct tst_node {
char key; /* value to split on */
struct tst_node *lt; /* go here if target[index] < value */
struct tst_node *eq; /* go here if target[index] == value */
struct tst_node *gt; /* go here if target[index] > value */
};
while(t) {
if(*key < t->key) {
t = t->lt;
} else if(*key > t->key) {
t = t->gt;
} else if(*key == '\0') {
return 1;
} else {
t = t->eq;
key++;
}
}
return 0;
}
}
/* add a new key to a TST */
/* and return the new TST */
TST
tst_insert(TST t, const char *key)
{
assert(key);
#ifdef USE_RECURSIVE_INSERT
tst_insert_recursive(&t, key);
#else
tst_insert_iterative(&t, key);
#endif
return t;
}
/* free a TST */
void
tst_destroy(TST t)
{
if(t) {
tst_destroy(t->lt);
tst_destroy(t->eq);
tst_destroy(t->gt);
free(t);
}
}
examples/trees/tst/tst.c
And here is some test code, almost identical to the test code for tries: test_tst.c.
The Dr. Dobb’s article contains additional code for doing deletions and partial
matches, plus some optimizations for inserts.
that the algorithm can extract only one bit of information from every call to
compare. Since there are n! possible ways to reorder the input sequence, this
means we need at least log(n!) = Ω(n log n) calls to compare to finish the sort. If
we are sorting something like strings, this can get particularly expensive, because
calls to strcmp may take time linear in the length of the strings being compared.
In the worst case, sorting n strings of length m each could take O(nm log n)
time.
sat
bat
The second pass sorts on the second column, producing no change in the order
(all the characters are the same). The last pass sorts on the first column. This
moves the s after the two bs, but preserves the order of the two words starting
with b. The result is:
bad
bat
sat
There are three downsides to LSB radix sort:
1. All the strings have to be the same length (this is not necessarily a problem
if they are really fixed-width data types like ints).
2. The algorithm used to sort each position must be stable, which may require
more effort than most programmers would like to take.
3. It may be that the late positions in the strings don’t affect the
order, but we have to sort on them anyway. If we are sorting
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa and baaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,
we spend a lot of time matching up a’s against each other.
There is a trick we can do analogous to the Dutch flag algorithm where we sort
the array in place. The idea is that we first count the number of elements that
land in each bucket and assign a block of the array for each bucket, keeping
track in each block of an initial prefix of values that belong in the bucket with
the rest not yet processed. We then walk through the buckets swapping out any
elements at the top of the good prefix to the bucket they are supposed to be in.
This procedure puts at least one element in the right bucket for each swap, so
we reorder everything correctly in at most n swaps and O(n) additional work.
To keep track of each bucket, we use two pointers bucket[i] for the first element
of the bucket and top[i] for the first unused element. We could make these be
integer array indices, but this slows the code down by about 10%. This seems to
be a situation where our use of pointers is complicated enough that the compiler
can’t optimize out the array lookups.
#include "radixSort.h"
temp = *a;
*a = *b;
*b = temp;
}
/* this is the internal routine that assumes all strings are equal for the
* first k characters */
static void
radixSortInternal(int n, const char **a, int k)
{
int i;
int count[UCHAR_MAX+1]; /* number of strings with given character in position k */
int mode; /* most common position-k character */
const char **bucket[UCHAR_MAX+1]; /* position of character block in output */
const char **top[UCHAR_MAX+1]; /* first unused index in this character block */
if(count[mode] < n) {
} else {
/* swap with top of appropriate block */
swapStrings(top[i], top[(unsigned char) top[i][0][k]]++);
}
}
}
} else {
void
radixSort(int n, const char **a)
{
radixSortInternal(n, a, 0);
}
examples/sorting/radixSort/radixSort.c
Some additional files: radixSort.h, test_radixSort.c, Makefile, sortInput.c.
The last is a program that sorts lines on stdin and writes the result to
stdout, similar to the GNU sort utility. When compiled with -O3 and
run on my machine, this runs in about the same time as the standard sort
program when run on a 4.7 million line input file consisting of a random
shuffle of 20 copies of /usr/share/dict/words, provided sort is run with
LANG=C sort < /usr/share/dict/words to keep it from having to deal with
locale-specific collating issues. On other inputs, sort is faster. This is not
bad given how thoroughly sort has been optimized, but is a sign that further
optimization is possible.
6 Other topics not covered in detail in 2015
These are mostly leftovers from previous versions of the class where different
topics were emphasized.
6.1.1 Iterators
Suppose we have an abstract data type that represents some sort of container,
such as a list or dictionary. We’d like to be able to do something to every
element of the container; say, count them up. How can we write operations on
the abstract data type to allow this, without exposing the implementation?
To make the problem more concrete, let’s suppose we have an abstract data type
that represents the set of all non-negative numbers less than some fixed bound.
The core of its interface might look like this:
/*
* Abstract data type representing the set of numbers from 0 to
* bound-1 inclusive, where bound is passed in as an argument at creation.
*/
typedef struct nums *Nums;
/* Destructor */
void nums_destroy(Nums);
struct nums {
int bound;
};
struct nums *n;
n = malloc(sizeof(*n));
n->bound = bound;
return n;
}
/* return a malloc'd array of all elements, terminated by -1 */
int *
nums_contents(Nums n)
{
int *a;
int i;
a = malloc(sizeof(*a) * (n->bound + 1));
for(i = 0; i < n->bound; i++) a[i] = i;
a[n->bound] = -1;
return a;
}
We might use it like this:
sum = 0;
contents = nums_contents(nums);
for(p = contents; *p != -1; p++) {
sum += *p;
}
free(contents);
Despite the naturalness of the approach, returning a sequence in this case leads
to the most code complexity of the options we will examine.
Another option is to create and destroy a separate iterator object that holds
the state of the loop. But for many tasks in C, the first/done/next idiom is a
pretty good one; a minimal sketch of it for the Nums type is given below.
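This sketch is not part of the original interface; the names nums_first,
nums_done, and nums_next are illustrative only.

/* illustrative first/done/next operations for Nums */
int
nums_first(Nums n)
{
    return 0;
}

int
nums_done(Nums n, int val)
{
    return val >= n->bound;
}

int
nums_next(Nums n, int val)
{
    return val + 1;
}

/* typical use, with the loop state kept in the caller:
 *
 *   for(i = nums_first(n); !nums_done(n, i); i = nums_next(n, i)) {
 *       sum += i;
 *   }
 */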
If the function handed to such a foreach-style operation needs extra
information, there is no need to smuggle it in through variables shared with
the loop body; just build a struct containing all the variables that it uses,
and pass a pointer to this struct as f_data.
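As a concrete illustration, here is a hypothetical sketch; the name
nums_foreach and the exact callback signature are assumptions, not part of
the original interface.

/* call f(i, f_data) for every element i of the container */
void
nums_foreach(Nums n, void (*f)(int, void *), void *f_data)
{
    int i;

    for(i = 0; i < n->bound; i++) {
        f(i, f_data);
    }
}

/* everything the callback needs goes into one struct */
struct sum_data {
    int sum;
};

static void
add_to_sum(int x, void *f_data)
{
    ((struct sum_data *) f_data)->sum += x;
}

/* usage:
 *
 *   struct sum_data d = { 0 };
 *
 *   nums_foreach(nums, add_to_sum, &d);
 *   ... d.sum now holds the total ...
 */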
6.1.2 Closures
struct sequence {
    int (*next)(void *data);   /* generate the next value */
    void *data;                /* state to pass to next */
};

typedef struct sequence *Sequence;

Sequence
create_sequence(int (*next)(void *data), void *data)
{
Sequence s;
s = malloc(sizeof(*s));
assert(s);
s->next = next;
s->data = data;
return s;
}
int
sequence_next(Sequence s)
{
return s->next(s->data);
}
And here are some examples of sequences:
/* make a constant sequence that always returns x */
static int
constant_sequence_next(void *data)
{
return *((int *) data);
}
Sequence
constant_sequence(int x)
{
int *data;
data = malloc(sizeof(*data));
if(data == 0) return 0;
    *data = x;

    return create_sequence(constant_sequence_next, data);
}
struct arithmetic_sequence_data {
    int cur;    /* last value returned */
    int step;   /* how much to add each time */
};

static int
arithmetic_sequence_next(void *data)
{
struct arithmetic_sequence_data *d;
d = data;
d->cur += d->step;
return d->cur;
}
Sequence
arithmetic_sequence(int x, int a)
{
struct arithmetic_sequence_data *d;
d = malloc(sizeof(*d));
    if(d == 0) return 0;

    d->cur = x - a;    /* arithmetic_sequence_next adds step before returning */
    d->step = a;

    return create_sequence(arithmetic_sequence_next, d);
}
static int
add_sequences_next(void *data)
{
Sequence *s;
s = data;
return sequence_next(s[0]) + sequence_next(s[1]);
}
Sequence
add_sequences(Sequence s0, Sequence s1)
{
Sequence *s;
s = malloc(2*sizeof(*s));
if(s == 0) return 0;
s[0] = s0;
    s[1] = s1;

    return create_sequence(add_sequences_next, s);
}
struct iterated_function_sequence_data {
    int x;           /* next value to return */
    int (*f)(int);   /* update rule */
};

static int
iterated_function_sequence_next(void *data)
{
struct iterated_function_sequence_data *d;
int retval;
d = data;
retval = d->x;
d->x = d->f(d->x);
return retval;
}
Sequence
iterated_function_sequence(int (*f)(int), int x0)
{
struct iterated_function_sequence_data *d;
d = malloc(sizeof(*d));
if(d == 0) return 0;
d->x = x0;
    d->f = f;

    return create_sequence(iterated_function_sequence_next, d);
}
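Putting these together, here is a short usage sketch (not from the original
notes) that assumes the definitions above are collected in one file:

#include <stdio.h>

int
main(void)
{
    Sequence s;
    int i;

    /* add a constant sequence to an arithmetic sequence; values are
       computed lazily as they are requested */
    s = add_sequences(constant_sequence(2), arithmetic_sequence(1, 1));

    for(i = 0; i < 5; i++) {
        printf("%d\n", sequence_next(s));
    }

    return 0;
}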
6.1.3 Objects
Here’s an example of a hierarchy of counter objects. Each counter object has (at
least) three operations: reset, next, and destroy. To call the next operation
on counter c we pass c as the first argument, e.g. c->next(c) (one could
write a wrapper to enforce this; a sketch of such a wrapper follows).
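The wrapper below is illustrative only and not part of the notes’ code:

/* wrapper that supplies the counter as its own first argument */
static inline int
counter_next(Counter c)
{
    return c->next(c);
}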
The main trick is that we define a basic counter structure and then extend it
to include additional data, using lots of pointer conversions to make everything
work.
/* use preprocessor to avoid rewriting these */
#define COUNTER_FIELDS \
void (*reset)(struct counter *); \
int (*next)(struct counter *); \
void (*destroy)(struct counter *);
struct counter {
COUNTER_FIELDS
};
Counter
make_zero_counter(void)
{
return &Zero_counter;
}
struct ifs_counter {
    /* the same fields as struct counter, pulled in via the macro */
    COUNTER_FIELDS

    /* new fields */
int init;
int cur;
int (*f)(int); /* update rule */
};
static void
ifs_reset(Counter c)
{
struct ifs_counter *ic;
ic = (struct ifs_counter *) c;
ic->cur = ic->init;
}
static int
ifs_next(Counter c)
{
struct ifs_counter *ic;
int ret;
ic = (struct ifs_counter *) c;
ret = ic->cur;
ic->cur = ic->f(ic->cur);
return ret;
}
Counter
make_ifs_counter(int init, int (*f)(int))
{
struct ifs_counter *ic;
ic = malloc(sizeof(*ic));
ic->reset = ifs_reset;
ic->next = ifs_next;
ic->destroy = (void (*)(struct counter *)) free;
ic->init = init;
ic->cur = init;
    ic->f = f;

    return (Counter) ic;
}
void
print_powers_of_2(void)
{
int i;
Counter c;
c = make_ifs_counter(1, times2);
    c->reset(c);

    /* print the first few powers of 2; the count of 10 here is arbitrary */
    for(i = 0; i < 10; i++) {
        printf("%d\n", c->next(c));
    }

    c->destroy(c);
}
6.2 Suffix arrays
These are notes on practical implementations of suffix arrays, which are a data
structure for searching quickly for substrings of a given large string.
• Answer from the old days: Fast string searching is the key to dealing with
mountains of information. Why, a modern (c. 1960) electronic computer
can search the equivalent of hundreds of pages of text in just a few hours. . .
• More recent answer:
– We still need to search enormous corpuses of text (see http://www.
google.com).
– Algorithms for finding long repeated substrings or patterns can be
useful for data compression or for detecting plagiarism.
– Finding all occurrences of a particular substring in some huge corpus
is the basis of statistical machine translation.
– We are made out of strings over a particular finite alphabet GATC.
String search is a central tool in computational biology.
Suffix trees and suffix arrays are data structures for representing texts that
allow substring queries like “where does this pattern appear in the text” or “how
many times does this pattern occur in the text” to be answered quickly. Both
work by storing all suffixes of a text, where a suffix is a substring that runs to the
end of the text. Of course, storing actual copies of all suffixes of an n-character
text would take O(n²) space, so instead each suffix is represented by a pointer
to its first character.
A suffix array stores all the suffixes sorted in dictionary order. For example,
the suffix array of the string abracadabra is shown below. The actual contents
of the array are the indices in the left-hand column; the right-hand shows the
corresponding suffixes.
11 \0
10 a\0
7 abra\0
0 abracadabra\0
3 acadabra\0
5 adabra\0
8 bra\0
1 bracadabra\0
4 cadabra\0
6 dabra\0
9 ra\0
2 racadabra\0
A suffix tree is similar, but instead of using an array, we use some sort of tree
data structure to hold the sorted list. A common choice given an alphabet of
some fixed size k is a trie, in which each node at depth d represents a string of
length d, and its up to k children represent all (d + 1)-character extensions of the
string. The advantage of using a suffix trie is that searching for a string of length
m takes O(m) time, since we can just walk down the trie at the rate of one
node per character in m. A further optimization is to replace any long chain of
single-child nodes with a compressed edge labeled with the concatenation of all the
characters in the chain. Such compressed suffix tries can not only be searched in
linear time but can also be constructed in linear time with a sufficiently clever
algorithm (we won’t actually do this here). Of course, we could also use a simple
balanced binary tree, which might be preferable if the alphabet is large.
The disadvantage of suffix trees over suffix arrays is that they generally require
more space to store all the internal pointers in the tree. If we are indexing a
huge text (or collection of texts), this extra space may be too expensive.
assumption gives an O(n log² n) running time; this is a factor of log n slower,
but this may be acceptable if programmer time is more important.
The fastest approach is to build a suffix tree in O(n) time and extract the suffix
array by traversing the tree. The only complication is that we need the extra
space to build the tree, although we get it back when we throw the tree away.
The idea of the Burrows-Wheeler Transform is to construct an array whose rows
are all cyclic shifts of the input string in dictionary order, and return the last
column of the array. The last column will tend to have long runs of identical
characters, since whenever some substring (like the) appears repeatedly in the
input, shifts that put the first character t in the last column will put the rest
of the substring he in the first columns, and the resulting rows will tend to be
sorted together. The relative regularity of the last column means that it will
compress well with even very simple compression algorithms like run-length
encoding.
Below is an example of the Burrows-Wheeler transform in action, with $ marking
end-of-text. The transformed value of abracadabra$ is $drcraaaabba, the last
column of the sorted array; note the long run of a’s (and the shorter run of b’s).
abracadabra$ abracadabra$
bracadabra$a abra$abracad
racadabra$ab acadabra$abr
acadabra$abr adabra$abrac
cadabra$abra a$abracadabr
adabra$abrac bracadabra$a
dabra$abraca --> bra$abracada
abra$abracad cadabra$abra
bra$abracada dabra$abraca
ra$abracadab racadabra$ab
a$abracadabr ra$abracadab
$abracadabra $abracadabra
The most useful property of the Burrows-Wheeler transform is that it can be
inverted; this distinguishes it from other transforms that produce long runs like
simply sorting the characters. We’ll describe two ways to do this; the first is
less efficient, but more easily grasped, and involves rebuilding the array one
column at a time, starting at the left. Observe that the leftmost column is just
all the characters in the string in sorted order; we can recover it by sorting the
rightmost column, which we have to start off with. If we paste the rightmost and
leftmost columns together, we have the list of all 2-character substrings of the
original text; sorting this list gives the first two columns of the array. (Remember
that each copy of the string wraps around from the right to the left.) We can
then paste the rightmost column at the beginning of these two columns, sort
the result, and get the first three columns. Repeating this process eventually
reconstructs the entire array, from which we can read off the original string from
any row. The initial stages of this process for abracadabra$ are shown below:
$ a $a ab $ab abr
d a da ab dab abr
r a ra ac rac aca
c a ca ad cad ada
r a ra a$ ra$ a$a
a b ab br abr bra
a -> b ab -> br abr -> bra
a c ac ca aca cad
a d ad da ada dab
b r br ra bra rac
b r br ra bra ra$
a $ a$ $a a$a $ab
Rebuilding the entire array in this fashion takes O(n²) time and O(n²) space.
In their paper, Burrows and Wheeler showed that one can in fact reconstruct
the original string from just the first and last columns in the array in O(n) time.
Here’s the idea: Suppose that all the characters were distinct. Then after
reconstructing the first column we would know all pairs of adjacent characters.
So we could just start with the last character $ and regenerate the string by
appending at each step the unique successor to the last character so far. If all
characters were distinct, we would never get confused about which character
comes next.
The problem is what to do with pairs with duplicate first characters, like ab
and ac in the example above. We can imagine that each a in the last column is
labeled in some unique way, so that we can talk about the first a or the third a,
but how do we know which a is the one that comes before b or d?
The trick is to look closely at how the original sort works. Look at the rows in
the original transformation. If we look at all rows that start with a, the order
they are sorted in is determined by the suffix after a. These suffixes also appear
as the prefixes of the rows that end with a, since the rows that end with a are
just the rows that start with a rotated one position. It follows that all instances
of the same letter occur in the same order in the first and last columns. So if
we use a stable sort to construct the first column, we will correctly match up
instances of letters.
This method is shown in action below. Each letter is annotated uniquely with
a count of how many identical letters equal or precede it. Sorting recovers the
first column, and combining the last and first columns gives a list of unique
pairs of adjacent annotated characters. Now start with $1 and construct the full
sequence $1 a1 b1 r1 a3 c1 a4 d1 a2 b2 r2 a5 $1. The original string is
obtained by removing the end-of-string markers and annotations: abracadabra.
$1 a1
d1 a2
r1 a3
c1 a4
r2 a5
a1 b1
a2 --> b2
a3 c1
a4 d1
b1 r1
b2 r2
a5 $1
Because we are only sorting single characters, we can perform the sort in linear
time using counting sort. Extracting the original string also takes linear time if
implemented reasonably.
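Here is a sketch of that linear-time inversion (an illustration with a made-up
name, not the inverseBWT listing that appears later); it assumes the
convention used here that the text ends with a nul, which sorts before every
other character:

#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/* invert a Burrows-Wheeler transform s of length len */
char *
bwtInverseSketch(size_t len, const char *s)
{
    size_t count[UCHAR_MAX+1] = {0};   /* occurrences of each character */
    size_t offset[UCHAR_MAX+1];        /* start of each character's block in
                                          the (implicit) first column */
    size_t *successor;
    char *ret;
    size_t thread;
    size_t i;
    int c;

    successor = malloc(sizeof(*successor) * len);
    ret = malloc(len);
    assert(successor && ret);

    for(i = 0; i < len; i++) {
        count[(unsigned char) s[i]]++;
    }

    offset[0] = 0;
    for(c = 1; c <= UCHAR_MAX; c++) {
        offset[c] = offset[c-1] + count[c-1];
    }

    /* stable matching: the k-th copy of a character in the last column is
     * the same text character as the k-th copy in the first column */
    for(i = 0; i < len; i++) {
        successor[offset[(unsigned char) s[i]]++] = i;
    }

    /* the nul occupies row 0 of the first column; thread forward from it */
    thread = successor[0];

    for(i = 0; i < len; i++) {
        thread = successor[thread];
        ret[i] = s[thread];
    }

    free(successor);

    return ret;
}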
struct suffixArray {
size_t n; /* length of string INCLUDING final null */
const char *string; /* original string */
const char **suffix; /* suffix array of length n */
};
/* destructor */
void suffixArrayDestroy(SuffixArray);
suffixArraySearch(SuffixArray, const char *substring, size_t *first);
#include "suffixArray.h"
static int
saCompare(const void *s1, const void *s2)
{
return strcmp(*((const char **) s1), *((const char **) s2));
}
SuffixArray
suffixArrayCreate(const char *s)
{
size_t i;
SuffixArray sa;
sa = malloc(sizeof(*sa));
assert(sa);
sa->n = strlen(s) + 1;
    sa->string = s;

    sa->suffix = malloc(sizeof(*sa->suffix) * sa->n);
    assert(sa->suffix);

    for(i = 0; i < sa->n; i++) {
        sa->suffix[i] = s + i;
    }

    qsort(sa->suffix, sa->n, sizeof(*sa->suffix), saCompare);

    return sa;
}
void
suffixArrayDestroy(SuffixArray sa)
{
free(sa->suffix);
free(sa);
}
size_t
suffixArraySearch(SuffixArray sa, const char *substring, size_t *first)
{
size_t lo;
size_t hi;
size_t mid;
size_t len;
int cmp;
len = strlen(substring);
if(cmp == 0) {
/* we have a winner */
/* search backwards and forwards for first and last */
for(lo = mid; lo > 0 && strncmp(sa->suffix[lo-1], substring, len) == 0; lo--);
for(hi = mid; hi < sa->n - 1 && strncmp(sa->suffix[hi+1], substring, len) == 0; hi++);
if(first) {
*first = lo;
}
return hi - lo + 1;
} else if(cmp < 0) {
lo = mid;
} else {
hi = mid;
}
}
return 0;
}
char *
suffixArrayBWT(SuffixArray sa)
{
char *bwt;
size_t i;
bwt = malloc(sa->n);
    assert(bwt);

    /* the last column of the sorted rotations is the character just before
     * each suffix; the suffix that starts the string wraps around to the
     * final nul */
    for(i = 0; i < sa->n; i++) {
        if(sa->suffix[i] == sa->string) {
            bwt[i] = '\0';
        } else {
            bwt[i] = sa->suffix[i][-1];
        }
    }

    return bwt;
}
char *
inverseBWT(size_t len, const char *s)
{
/* basic trick: stable sort of s gives successor indices */
/* then we just thread through starting from the nul */
size_t *successor;
int c;
size_t count[UCHAR_MAX+1];
size_t offset[UCHAR_MAX+1];
size_t i;
char *ret;
size_t thread;
/* counting sort */
for(c = 0; c <= UCHAR_MAX; c++) {
count[c] = 0;
}
offset[0] = 0;
return ret;
}
examples/suffixArray/suffixArray.c
Here is a Makefile and test code: Makefile, testSuffixArray.c.
The output of make test shows all occurrences of a target string, the Burrows-
Wheeler transform of the source string (second-to-last line), and its inversion
(last line, which is just the original string):
$ make test
/bin/echo -n abracadabra-abracadabra-shmabracadabra | ./testSuffixArray abra
Count: 6
abra
abra-abr
abra-shm
abracada
abracada
abracada
aaarrrdddm\x00-rrrcccaaaaaaaaaaaashbbbbbb-
abracadabra-abracadabra-shmabracadabra
6.4 C++
Here we will describe some basic features of C++ that are useful for implementing
abstract data types. Like all programming languages, C++ comes with an
ideology, which in this case emphasizes object-oriented features like inheritance.
We will be ignoring this ideology and treating C++ as an improved version of C.
The goal here is not to teach you all of C++, which would take a while, but
instead to give you some hints for why you might want to learn C++ on your own.
If you decide to learn C++ for real, Bjarne Stroustrup’s The C++ Programming
Language is the definitive source. A classic tutorial aimed at C programmers
introduces C++ features one at a time (some of these features have since migrated
into C). The web site http://www.cplusplus.com has extensive tutorials and
documentation.
#include <iostream>

int
main(int argc, const char **argv)
{
std::cout << "hi\n";
return 0;
}
examples/c++/helloworld.cpp
Compile this using g++ instead of gcc. Make shows how it is done:
$ make helloworld
g++ helloworld.cpp -o helloworld
Or we could use an explicit Makefile:
CPP=g++
CPPFLAGS=-g3 -Wall
helloworld: helloworld.o
$(CPP) $(CPPFLAGS) -o $@ $^
Now the compilation looks like this:
$ make helloworld
g++ -g3 -Wall -c -o helloworld.o helloworld.cpp
g++ -g3 -Wall -o helloworld helloworld.o
The main difference from the C version:
1. #include <stdio.h> is replaced by #include <iostream>, which gets
the C++ version of the stdio library.
2. printf("hi\n") is replaced by std::cout << "hi\n". The stream
std::cout is the C++ wrapper for stdout; you should read this variable
name as cout in the std namespace. The << operator is overloaded for
streams so that it sends its right argument out on its left argument (see
the discussion of operator overloading below). You can also do things like
std::cout << 37, std::cout << 'q', std::cout << 4.7, etc. These
all do pretty much what you expect.
If you don’t like typing std:: before all the built-in functions and variables, you
can put using namespace std somewhere early in your program, like this:
#include <iostream>

using namespace std;

int
main(int argc, const char **argv)
{
cout << "hi\n";
return 0;
}
examples/c++/helloworld_using.cpp
6.4.2 References
void increment(int &x)
{
x++;
}
The int &x declaration says that x is a reference to whatever variable is passed
as the argument to increment. A reference acts exactly like a pointer that has
already had * applied to it. You can even write &x to get a pointer to the original
variable if you want to for some reason.
As with pointers, it’s polite to mark a reference with const if you don’t intend
to modify the original object:
void reportWeight(const SumoWrestler &huge)
{
cout << huge.getWeight();
}
References are also used as a return type to chain operators together; in the
expression
cout << "hi" << '\n';
the return type of the first << operator is an ostream & reference (as is cout);
this means that the '\n' gets sent to the same object. We could make the
return value be just an ostream, but then cout would be copied, which could
be expensive and would mean that the copy was no longer working on the same
internal state as the original. This same trick is used when overloading the
assignment operator.
6.4.3 Function overloading
C++ lets you define multiple functions with the same name, where the choice of
which function to call depends on the type of its arguments. Here is a program
that demonstrates this feature:
#include <iostream>
const char *
typeName(int x)
{
return "int";
}
const char *
typeName(double x)
{
return "double";
}
const char *
typeName(char x)
{
return "char";
}
int
main(int argc, const char **argv)
{
cout << "The type of " << 3 << " is " << typeName(3) << ".\n";
cout << "The type of " << 3.1 << " is " << typeName(3.1) << ".\n";
cout << "The type of " << 'c' << " is " << typeName('c') << ".\n";
return 0;
}
examples/c++/functionOverloading.cpp
And here is what it looks like when we compile and run it:
$ make functionOverloading
g++ functionOverloading.cpp -o functionOverloading
$ ./functionOverloading
The type of 3 is int.
The type of 3.1 is double.
The type of c is char.
Internally, g++ compiles three separate functions with different (and ugly) names,
and when you use typeName on an object of a particular type, g++ picks the one
whose type matches. This is similar to what happens with built-in operators in
straight C, where + means different things depending on whether you apply it
to a pair of ints, a pair of doubles, or a pointer and an int, but C++ lets you
do it with your own functions.
6.4.4 Classes
C++ allows you to declare classes that look suspiciously like structs. The main
differences between a class and a C-style struct are that (a) classes provide
member functions or methods that operate on instances of the class and
that are called using a struct-like syntax; and (b) classes can distinguish between
private members (only accessible to methods of the class) and public members
(accessible to everybody).
In C, we organize abstract data types by putting the representation in a struct
and putting the operations on the data type in functions that work on this struct,
often giving the functions a prefix that hints at the type of its target (mostly to
avoid namespace collisions). Classes in C++ make this connection between a
data structure and the operations on it much more explicit.
Here is a simple example of a C++ class in action:
#include <iostream>

using namespace std;

class Counter {
    int value;              // private: only member functions can see this

public:
    Counter();              // constructors
    Counter(int initialValue);
    ~Counter();             // destructor
    int read();             // member functions
    void increment();
};

Counter::Counter() { value = 0; }
Counter::Counter(int initialValue) { value = initialValue; }
Counter::~Counter() { cerr << "counter de-allocated with value " << value << '\n'; }
int Counter::read() { return value; }
void Counter::increment() { value++; }
int
main(int argc, const char **argv)
{
Counter c;
Counter c10(10);
return 0;
}
examples/c++/counter.cpp
Things to notice:
1. In the class Counter declaration, the public: label introduces the public
members of the class. The member value is only accessible to member
functions of Counter. This enforces much stronger information hiding
than the default in C, although one can still use void * trickery to hunt
down and extract supposedly private data in C++ objects.
2. In addition to the member function declarations in the class declara-
tion, we also need to provide definitions. These look like ordinary func-
tion definitions, except that the class name is prepended using :: as in
Counter::read.
3. Member functions are called using struct access syntax, as in c.read().
Conceptually, each instance of a class has its own member functions, so
that c.read is the function for reading c while c10.read is the function
for reading c10. Inside a member function, names of class members refer to
members of the current instance; value inside c.read is c.value (which
otherwise is not accessible, since c.value is not public).
4. Two special member functions are Counter::Counter() and Counter::Counter(int).
These are constructors, and are identifiable as such because they are
named after the class. A constructor is called whenever a new instance
of the class is created. If you create an instance with no arguments
(as in the declaration Counter c;), you get the constructor with no
arguments. If you create an instance with arguments (as in the declaration
Counter c10(10);), you get the version with the appropriate arguments.
This is just another example of function overloading. If you don’t define
any constructors, C++ supplies a default constructor that takes no
arguments and does nothing. Note that constructors don’t have a return
type (you don’t need to preface them with void).
5. The special member function Counter::~Counter() is a destructor; it is
called when an object of type Counter is de-allocated (say, when returning
from a function with a local variable of this type). This particular destructor
is not very useful. Destructors are mostly important for objects that allocate
their own storage that needs to be de-allocated when the object is; see the
section on storage allocation below.
Compiling and running this program gives the following output. Note that the
last two lines are produced by the destructor.
c starts at 0
c after one increment is 1
c10 starts at 10
c10 after two increments is 10
counter de-allocated with value 10
counter de-allocated with value 3
One subtle difference between C and C++ is that C++ uses empty parentheses
() for functions with no arguments, where C would use (void). This is a bit
of a historical artifact, having to do with C allowing () for functions whose
arguments are not specified in the declaration (which was standard practice
before ANSI C).
Curiously, C++ also allows you to declare structs, with the interpretation that
a struct is exactly like a class except that all members are public by default.
So if you change class to struct in the program above, it will do exactly the
same thing. In practice, nobody who codes in C++ does this; the feature is
mostly useful to allow C code with structs to mix with C++ code.
6.4.5 Operator overloading
Sometimes when you define a new class, you also want to define new interpre-
tations of operators on that class. Here is an example of a class that defines
elements of the max-plus algebra over ints. This gives us objects that act
like ints, except that the + operator now returns the larger of its arguments and
the * operator now returns the sum.
The mechanism in C++ for doing this is to define member functions with names
operatorsomething where something is the name of the operator we want to
define. These member functions take one less argument than the operator they
define; in effect, x + y becomes syntactic sugar for x.operator+(y) (which,
amazingly, is actually legal C++). Because these are member functions, they
are allowed to access members of other instances of the same class that would
normally be hidden.
This same mechanism is also used to define automatic type conversions out
of a type: the MaxPlus::operator int() function allows C++ to convert a
MaxPlus object to an int whenever it needs to (for example, to feed it to cout).
(Automatic type conversions into a type happen if you provide an appropriate
constructor.)
#include <iostream>
#include <algorithm> // for max
(Why use + for the maximum and * for the sum? One way to think about it: a+b
is the time to do a and b in parallel, and a*b is the time to do a and b
sequentially. Making the first case + and the second case * is what makes the
distributive law a*(b+c) = (a*b)+(a*c) work. It also allows tricks like matrix
multiplication using the standard definition. See http://maxplus.org for more
than you probably want to know about this.)
using namespace std;

class MaxPlus {
    int value;

public:
MaxPlus(int);
MaxPlus operator+(const MaxPlus &);
MaxPlus operator*(const MaxPlus &);
operator int();
};
MaxPlus::MaxPlus(int x) { value = x; }
MaxPlus
MaxPlus::operator*(const MaxPlus &other)
{
return MaxPlus(value + other.value);
}
MaxPlus
MaxPlus::operator+(const MaxPlus &other)
{
/* std::max does what you expect */
return MaxPlus(max(value, other.value));
}
int
main(int argc, const char **argv)
{
cout << "2+3 == " << (MaxPlus(2) + MaxPlus(3)) << '\n';
cout << "2*3 == " << (MaxPlus(2) * MaxPlus(3)) << '\n';
return 0;
}
examples/c++/maxPlus.cpp
Avoid the temptation to overuse operator overloading, as it can be dangerous if
used to obfuscate what an operator normally does:
MaxPlus::operator--() { godzilla.eat(tokyo); }
The general rule of thumb is that you should probably only do operator overload-
ing if you really are making things that act like numbers (yes, cout << violates
this).
Automatic type conversions can be particularly dangerous. The line
cout << (MaxPlus(2) + 3) << '\n';
is ambiguous: should the compiler convert MaxPlus(2) to an int using
MaxPlus::operator int() and use ordinary integer addition, or convert 3 to
a MaxPlus using the MaxPlus(int) constructor and use funky MaxPlus addition?
Fortunately most C++ compilers will complain about the ambiguity and fail
rather than guessing wrong.
6.4.6 Templates
One of the things we kept running into in this class was that if we defined a
container type like a hash table, binary search tree, or priority queue, we had
to either bake in the type of the data it held or do horrible tricks with void *
pointers to work around the C type system. C++ includes a semi-principled
work-around for this problem known as templates. These are essentially macros
that take a type name as an argument and are expanded as needed to produce
functions or classes with specific types (see Macros for an example of how to do
this if you only have C).
Typical use is to prefix a definition with template <class T> and then use T
as a type name throughout:
template <class T>
T add1(T x)
{
return x + ((T) 1);
}
Note the explicit cast to T of 1; this avoids ambiguities that might arise with
automatic type conversions.
If you put this definition in a program, you can then apply add1 to any type
that has a + operator and that you can convert 1 to. For example, the output of
this code fragment:
cout << "add1(3) == " << add1(3) << '\n';
cout << "add1(3.1) == " << add1(3.1) << '\n';
cout << "add1('c') == " << add1('c') << '\n';
cout << "add1(MaxPlus(0)) == " << add1(MaxPlus(0)) << '\n';
cout << "add1(MaxPlus(2)) == " << add1(MaxPlus(2)) << '\n';
is
add1(3) == 4
add1(3.1) == 4.1
add1('c') == d
add1(MaxPlus(0)) == 1
add1(MaxPlus(2)) == 2
By default, C++ will instantiate a template to whatever type fits in its argument.
If you want to force a particular version, you can put the type in angle brackets
after the name of whatever you defined. For example,
cout << "add1<int>(3.1) == " << add1<int>(3.1) << '\n';
produces
add1<int>(3.1) == 4
because add1<int> forces its argument to be converted to an int (truncating
to 3) before adding one to it.
Because templates are really macros that get expanded as needed, it is common
to put templates in header (.h) files rather than in .cpp files. See the stack
implementation below for an example of this.
6.4.7 Exceptions
#include <iostream>

using namespace std;

int fail()
{
throw "you lose";
return 5;
}
int
main(int argc, const char **argv)
{
try {
cout << fail() << '\n';
}
catch(const char *s) {
cerr << "Caught error: " << s << '\n';
}
return 0;
}
examples/c++/exception.cpp
In action:
$ make exception
g++ -g3 -Wall exception.cpp -o exception
$ ./exception
Caught error: you lose
Note the use of cerr instead of cout. This sends the error message to stderr.
A try..catch statement will catch an exception only if the type matches the
type of the argument to the catch part of the statement. This can be used to
pick and choose which exceptions you want to catch. See http://www.cplusplus.
com/doc/tutorial/exceptions/ for some examples and descriptions of some C++
standard library exceptions.
6.4.8 Storage allocation
C++ programs generally don’t use malloc and free, but instead use the built-in
C++ operators new and delete. The advantage of new and delete is that they
know about types: not only does this mean that you don’t have to play games
with sizeof to figure out how much space to allocate, but if you allocate a new
object from a class with a constructor, the constructor gets called to initialize
the object, and if you delete an object, its destructor (if it has one) is called.
There are two versions of new and delete, depending on whether you want
to allocate just one object or an array of objects, plus some special syntax for
passing constructor arguments:
• To allocate a single object, use new type.
• To allocate an array of objects, use new type[size]. As with malloc, both
operations return a pointer to type.
• If you want to pass arguments to a constructor for type, use new
type(args). This only works with the single-object version, so you can’t
do new SomeClass[12] unless SomeClass has a constructor that takes no
arguments.
• To de-allocate a single object, use delete pointer-to-object.
• To de-allocate an array, use delete [] pointer-to-base-of-array. Mixing
new with delete [] or vice versa is an error that may or may not be
detected by the compiler. Mixing either with malloc or free is a very bad
idea.
The program below gives examples of new and delete in action:
#include <iostream>
#include <cassert>
class Noisy {
int id;
public:
Noisy(int); // create a noisy object with this id
~Noisy();
};
Noisy::Noisy(int initId) {
id = initId;
cout << "Noisy object created with id " << id << '\n';
}
Noisy::~Noisy() {
cout << "Noisy object destroyed with id " << id << '\n';
}
int
main(int argc, const char **argv)
{
int *p;
int *a;
const int n = 100;
Noisy n1(1);
Noisy *n2;
p = new int;
a = new int[n];
n2 = new Noisy(2);
*p = 5;
    assert(*p == 5);

    for(int i = 0; i < n; i++) {
        a[i] = i;
    }

    for(int i = 0; i < n; i++) {
        assert(a[i] == i);
    }
delete [] a;
delete p;
delete n2;
return 0;
}
examples/c++/allocation.cpp
Stack(); /* create a new empty stack */
delete [] contents;
size = other.size;
top = other.top;
contents = new T[size];
return *this;
}
delete [] contents;
contents = newContents;
size = newSize;
}
contents[top++] = elt;
}
Here is some code demonstrating use of the stack:
#include <iostream>
#include "stack.h"
int
main(int argc, const char **argv)
{
Stack<int> s;
Stack<int> s2;
try {
s.push(1);
s.push(2);
s.push(3);
s2 = s;
try {
s2.pop();
} catch(const char *err) {
cout << "Caught expected exception " << err << '\n';
}
return 0;
}
examples/c++/stack/testStack.cpp
int
main(int argc, const char **argv)
{
stack<int> s;
stack<int> s2;
s.push(1);
s.push(2);
s.push(3);
s2 = s;
return 0;
}
examples/c++/stack/stdStack.cpp
One difference between the standard stack and our stack is that std::stack’s
pop member function doesn’t return anything. So we have to use top to get the
top element before popping it.
There is a chart of all the standard library data structures at http://www.
cplusplus.com/reference/stl/.
The main thing we’ve omitted here is any discussion of object-oriented features
of C++, particularly inheritance. These are not immediately useful for the
abstract-data-type style of programming we’ve used in CS223, but can be helpful
for building more complicated systems, where we might want to have various
specialized classes of objects that can all be approached using a common interface
represented by a class that they inherit from. If you are interested in exploring
these tools further, the CS department occasionally offers a class on object-
oriented programming; Mike Fischer’s lecture notes from the last time this
course was offered can be found at http://zoo.cs.yale.edu/classes/cs427/2011a/
lectures.html.
The idea of a unit test is to check one component of a program by itself.
Typically, this will be a single function or a group of functions that
together implement some data structure.
In C, these will often make up the contents of a single source file. Though this is
probably not the best approach if you are building a production-quality testing
framework, a simple way to include unit tests in a program is to append to each
source file a test main function that can be enabled by defining a macro (I like
TEST_MAIN). You can then build this file by itself with the macro defined to get
a stand-alone test program for just this code.
6.5.1.2 Example
Here is an example of a simple data structure with some built-in test code
conditionally compiled by defining TEST_MAIN. The data structure implements a
counter with built-in overflow protection. The counter interface does not provide
the ability to read the counter value; instead, the user can only tell if it is zero
or not.
Because the counter is implemented internally as a uint64_t, black-box testing
of what happens with too many increments would take centuries. So we include
some white-box tests that directly access the counter value to set up this (arguably
unnecessary) test case.
The code is given below. We include both the interface file and the implemen-
tation, as well as a Makefile showing how to build and run the test program.
The Makefile includes some extra arguments to gcc to turn on the TEST_MAIN
macro and supply the extra information needed to run gcov. If you type make
test, it will make and run testCounter, and then run gcov to verify that we
did in fact hit all lines of code in the program.
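A sketch of what such a Makefile might contain is shown below; the file names
and exact flags are assumptions rather than the course’s actual Makefile
(recipe lines must begin with a tab):

CC = gcc
CFLAGS = -g3 -Wall -DTEST_MAIN --coverage

testCounter: counter.o
	$(CC) $(CFLAGS) -o $@ counter.o

counter.o: counter.c counter.h
	$(CC) $(CFLAGS) -c counter.c

test: testCounter
	./testCounter
	gcov counter.c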
/*
* Abstract counter type.
*
* You can increment it, decrement it, and test for zero.
*
* Increment and decrement operations return 1 if successful,
* 0 if the operation would cause underflow or overflow.
*/
/* destroy a counter */
void counterDestroy(Counter *);
#include <stdint.h>
struct counter {
uint64_t value;
};
/* create a new counter holding zero */
Counter *
counterCreate(void)
{
    Counter *c;

    c = malloc(sizeof(Counter));
assert(c);
c->value = 0;
return c;
}
/* destroy a counter */
void
counterDestroy(Counter *c)
{
free(c);
}
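The increment and decrement operations are what enforce the overflow
protection; a minimal sketch consistent with the interface comment above
might look like this (COUNTER_MAX is assumed to be UINT64_MAX):

#include <stdint.h>

#ifndef COUNTER_MAX
#define COUNTER_MAX UINT64_MAX   /* assumed value, not the file's own definition */
#endif

/* increment, refusing to overflow */
int
counterIncrement(Counter *c)
{
    if(c->value == COUNTER_MAX) {
        return 0;
    }

    c->value++;
    return 1;
}

/* decrement, refusing to underflow */
int
counterDecrement(Counter *c)
{
    if(c->value == 0) {
        return 0;
    }

    c->value--;
    return 1;
}

/* test for zero */
int
counterIsZero(Counter *c)
{
    return c->value == 0;
}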
#ifdef TEST_MAIN
int
main(int argc, char **argv)
{
    Counter *c;

    c = counterCreate();

    assert(counterIsZero(c));
assert(counterIncrement(c) == 1); /* 1 */
assert(!counterIsZero(c));
assert(counterIncrement(c) == 1); /* 2 */
assert(!counterIsZero(c));
assert(counterDecrement(c) == 1); /* 1 */
assert(!counterIsZero(c));
assert(counterDecrement(c) == 1); /* 0 */
assert(counterIsZero(c));
assert(counterDecrement(c) == 0); /* 0 */
assert(counterIsZero(c));
assert(counterIncrement(c) == 1); /* 1 */
assert(!counterIsZero(c));
    counterDestroy(c);

    /* white-box tests: build a fresh counter and poke at its insides */
    c = counterCreate();

    assert(c->value == 0);
assert(counterIncrement(c) == 1); /* 1 */
assert(c->value == 1);
assert(counterIncrement(c) == 1); /* 2 */
assert(c->value == 2);
assert(counterDecrement(c) == 1); /* 1 */
assert(c->value == 1);
assert(counterDecrement(c) == 1); /* 0 */
assert(c->value == 0);
assert(counterDecrement(c) == 0); /* 0 */
assert(c->value == 0);
assert(counterIncrement(c) == 1); /* 1 */
assert(c->value == 1);
assert(counterDecrement(c) == 1); /* COUNTER_MAX-1 */
assert(c->value == COUNTER_MAX-1);
assert(counterIncrement(c) == 1); /* COUNTER_MAX */
assert(c->value == COUNTER_MAX);
counterDestroy(c);
return 0;
}
#endif
examples/unitTest/counter.c
CC=c99
CFLAGS=-g3 -pedantic -Wall
all: seqprinter
test: seqprinter
./seqprinter
clean:
$(RM) -f seqprinter *.o
examples/ADT/sequence/Makefile
Here are some older notes on testing using a test harness that does some basic
tricks like catching segmentation faults so that a program can keep going even if
one test fails.
6.5.2.1.1 stack.h
/*
* This is an "opaque struct"; it discourages people from looking at
* the inside of our structure. The actual definiton of struct stack
* is contained in stack.c.
*/
typedef struct stack *Stack;
6.5.2.2.1 test-stack.c
#include <stdio.h>
#include <setjmp.h>
#include <signal.h>
#include <unistd.h>
#include <stdlib.h>
#include "stack.h"
#include "tester.h"
int
main(int argc, char **argv)
{
Stack s;
int i;
tester_init();
/* 25 */ TEST_ASSERT(s != 0);
/* 32 */ TEST(stack_isempty(s), 0);
/* 33 */ TEST(stack_pop(s), 3);
/* 34 */ TEST(stack_isempty(s), 0);
/* 35 */ TEST(stack_pop(s), 2);
/* 36 */ TEST(stack_isempty(s), 0);
/* 37 */ TEST(stack_pop(s), 1);
/* 38 */ TEST(stack_isempty(s), 1);
/* 39 */ TEST(stack_pop(s), STACK_EMPTY);
/* 40 */ TEST(stack_isempty(s), 1);
/* 41 */ TEST(stack_pop(s), STACK_EMPTY);
stack_push(s, i);
}
for(i = 0; i < STRESS_TEST_ITERATIONS; i++) {
stack_push(s, 957);
if(stack_pop(s) != 957) {
/* 60 */ FAIL("wanted 957 but didn't get it");
abort();
}
}
for(i = STRESS_TEST_ITERATIONS - 1; i >= 0; i--) {
if(stack_isempty(s)) {
/* 66 */ FAIL("stack empty too early");
abort();
}
if(stack_pop(s) != i) {
/* 70 */ FAIL("got wrong value!");
abort();
}
}
} ENDTRY; /* 74 */
/* 76 */ TEST(stack_isempty(s), 1);
tester_report(stdout, argv[0]);
return tester_result();
}
There is a lot of test code here. In practice, we might write just a few tests to
start off with, and, to be honest, I didn’t write all of this at once. But you can
never have too many tests— if nothing else, they give an immediate sense of
gratification as the number of failed tests drops.
6.5.2.3 Makefile
• Finally, we’ll write a Makefile:
6.5.2.3.1 Makefile
CC=gcc
CFLAGS=-g3 -Wall -ansi -pedantic
all:
test: test-stack
./test-stack
@echo OK!
Of course, we still can’t compile anything, because we don’t have any implemen-
tation. Let’s fix that. To make it easy to write, we will try to add as little as
possible to what we already have in stack.h:
6.5.3.1 stack.c
#include <stdlib.h>
#include "stack.h"
test-stack.c:45: TEST FAILED: stack_isempty(s) -> 1 but expected 0
test-stack.c:46: TEST FAILED: stack_pop(s) -> -1 but expected 4
test-stack.c:60: wanted 957 but didn't get it
test-stack.c:74: Aborted (signal 6)
./test-stack: errors 8/17, signals 1, FAILs 1
make[1]: *** [test] Error 8
Hooray! It compiles on the first try! (Well, not really, but let’s pretend it did.)
Unfortunately, it only passes any tests at all by pure dumb luck. But now we
just need to get the code to pass a few more tests.
Here’s a first attempt at a stack that suffers from some artificial limits. We
retain the structure of the original broken implementation; we just put a few
more lines of code in and format it more expansively.
6.5.4.1 stack.c
#include <stdlib.h>
#include "stack.h"
struct stack {
int top;
int data[MAX_STACK_SIZE];
};
Stack
stack_create(void)
{
struct stack *s;
s = malloc(sizeof(*s));
s->top = 0;
return s;
}
void
stack_destroy(Stack s)
{
free(s);
}
void
stack_push(Stack s, int elem)
{
s->data[(s->top)++] = elem;
}
int
stack_pop(Stack s)
{
return s->data[--(s->top)];
}
int
stack_isempty(Stack s)
{
return s->top == 0;
}
Let’s see what happens now:
$ make test
gcc -g3 -Wall -ansi -pedantic -c -o test-stack.o test-stack.c
gcc -g3 -Wall -ansi -pedantic -c -o tester.o tester.c
gcc -g3 -Wall -ansi -pedantic -c -o stack.o stack.c
gcc -g3 -Wall -ansi -pedantic -o test-stack test-stack.o tester.o stack.o
./test-stack
test-stack.c:40: TEST FAILED: stack_isempty(s) -> 0 but expected 1
test-stack.c:41: TEST FAILED: stack_pop(s) -> 409 but expected -1
test-stack.c:47: TEST FAILED: stack_isempty(s) -> 0 but expected 1
test-stack.c:48: TEST FAILED: stack_pop(s) -> 0 but expected -1
test-stack.c:49: TEST FAILED: stack_isempty(s) -> 0 but expected 1
test-stack.c:74: Segmentation fault (signal 11)
test-stack.c:76: TEST FAILED: stack_isempty(s) -> 0 but expected 1
free(): invalid pointer 0x804b830!
./test-stack: errors 6/17, signals 1, FAILs 0
make[1]: *** [test] Error 6
There are still errors, but we get past several initial tests before things blow up.
Looking back at the line numbers in test-stack.c, we see that the first failed
test is the one that checks if the stack is empty after we pop from an empty stack.
The code for stack_isempty looks pretty clean, so what happened? Somewhere
s->top got set to a nonzero value, and the only place this can happen is inside
stack_pop. Aha! There’s no check in stack_pop for an empty stack, so it’s
decrementing s->top past 0. (Exercise: why didn’t the test of stack_pop fail?)
6.5.5 First fix
If we’re lucky, fixing this problem will make the later tests happier. Let’s try a
new version of stack_pop. We’ll leave everything else the same.
int
stack_pop(Stack s)
{
if(stack_isempty(s)) {
return STACK_EMPTY;
} else {
return s->data[--(s->top)];
}
}
And now we get:
$ make test
gcc -g3 -Wall -ansi -pedantic -c -o test-stack.o test-stack.c
gcc -g3 -Wall -ansi -pedantic -c -o tester.o tester.c
gcc -g3 -Wall -ansi -pedantic -c -o stack.o stack.c
gcc -g3 -Wall -ansi -pedantic -o test-stack test-stack.o tester.o stack.o
./test-stack
test-stack.c:74: Segmentation fault (signal 11)
test-stack.c:76: TEST FAILED: stack_isempty(s) -> 0 but expected 1
./test-stack: errors 1/17, signals 1, FAILs 0
make[1]: *** [test] Error 1
Which is much nicer. We are still failing the stress test, but that’s not terribly
surprising.
After some more tinkering, this is what I ended up with. This version uses a
malloc’d data field, and realloc’s it when the stack gets too big.
6.5.6.1 stack.c
#include <stdlib.h>
#include "stack.h"
struct stack {
int top; /* first unused slot in data */
int size; /* number of slots in data */
int *data; /* stack contents */
};
#define INITIAL_STACK_SIZE (1)
#define STACK_SIZE_MULTIPLIER (2)
Stack
stack_create(void)
{
struct stack *s;
s = malloc(sizeof(*s));
if(s == 0) return 0;
s->top = 0;
s->size = INITIAL_STACK_SIZE;
s->data = malloc(s->size * sizeof(*(s->data)));
if(s->data == 0) return 0;
/* else everything is ok */
return s;
}
void
stack_destroy(Stack s)
{
free(s->data);
free(s);
}
void
stack_push(Stack s, int elem)
{
if(s->top == s->size) {
/* need more space */
s->size *= STACK_SIZE_MULTIPLIER;
s->data = realloc(s->data, s->size * sizeof(*(s->data)));
if(s->data == 0) {
abort(); /* we have no other way to signal failure :-( */
}
}
/* now there is enough room */
s->data[s->top++] = elem;
}
int
stack_pop(Stack s)
{
if(stack_isempty(s)) {
return STACK_EMPTY;
} else {
return s->data[--(s->top)];
}
}
int
stack_isempty(Stack s)
{
return s->top == 0;
}
At last we have a version that passes all tests:
$ make test
gcc -g3 -Wall -ansi -pedantic -c -o test-stack.o test-stack.c
gcc -g3 -Wall -ansi -pedantic -c -o tester.o tester.c
gcc -g3 -Wall -ansi -pedantic -c -o stack.o stack.c
gcc -g3 -Wall -ansi -pedantic -o test-stack test-stack.o tester.o stack.o
./test-stack
OK!
6.5.7 Moral
Writing a big program all at once is hard. If you can break the problem down
into little problems, it becomes easier. “Test first” is a strategy not just for
getting a well-tested program, but for giving you something easy to do at each
step— it’s usually not too hard to write one more test, and it’s usually not too
hard to get just one test working. If you can keep taking those small, easy steps,
eventually you will run out of failed tests and have a working program.
/*
* Test macros.
*
* Usage:
*
* #include <setjmp.h>
* #include <stdio.h>
* #include <signal.h>
* #include <unistd.h>
*
* testerInit(); -- Initialize internal data structures.
* testerReport(FILE *, "name"); -- Print report.
* testerResult(); -- Returns # of failed tests.
*
* TRY { code } ENDTRY;
*
* Wraps code to catch seg faults, illegal instructions, etc. May not be
* nested.
* Prints a warning if a signal is caught.
* To enforce a maximum time, set alarm before entering.
*
* TEST(expr, expected_value);
*
* Evaluates expr (which should yield an integer value) inside a TRY.
* Prints a warning if evaluating expr causes a fault or returns a value
* not equal to expected_value.
*
* TEST_ASSERT(expr)
*
* Equivalent to TEST(!(expr), 0)
*
* You can also cause your own failures with FAIL:
*
* TRY {
* x = 1;
* if(x == 2) FAIL("why is x 2?");
* } ENDTRY;
*
* To limit the time taken by a test, call tester_set_time_limit with
* a new limit in seconds, e.g.
*
* tester_set_time_limit(1);
* TRY { while(1); } ENDTRY;
*
* There is an initial default limit of 10 seconds.
* If you don't want any limit, set the limit to 0.
*
*/
int expr_value; /* expression value */
int setjmp_return; /* return value from setjmp */
int try_failed; /* true if last try failed */
int user_fails; /* number of calls to FAIL */
int time_limit; /* time limit for TRY */
} TesterData;
/* another atrocity */
#define TEST(expr, expected_value) \
TesterData.tests++; \
TesterData.errors++; /* guilty until proven innocent */ \
TRY { TesterData.expr_value = (expr); \
if(TesterData.expr_value != expected_value) { \
fprintf(stderr, "%s:%d: TEST FAILED: %s -> %d but expected %d\n", \
__FILE__, __LINE__, __STRING(expr), \
TesterData.expr_value, expected_value); \
} else { \
TesterData.errors--; \
} \
} \
ENDTRY; \
if(TesterData.try_failed) \
fprintf(stderr, "%s:%d: TEST FAILED: %s caught signal\n", \
__FILE__, __LINE__, __STRING(expr))
#include <stdio.h>
#include <signal.h>
#include <string.h>
#include <setjmp.h>
#include "tester.h"
const char *
testerStrsignal(int sig)
{
return strsignal(sig);
}
static void
tester_sighandler(int signal)
{
if(TesterData.escape_hatch_active) {
TesterData.escape_hatch_active = 0;
longjmp(TesterData.escape_hatch, signal);
}
}
void
testerInit(void)
{
TesterData.escape_hatch_active = 0;
TesterData.tests = 0;
TesterData.errors = 0;
TesterData.signals = 0;
TesterData.user_fails = 0;
signal(SIGSEGV, tester_sighandler);
signal(SIGILL, tester_sighandler);
signal(SIGFPE, tester_sighandler);
signal(SIGALRM, tester_sighandler);
signal(SIGBUS, tester_sighandler);
signal(SIGABRT, tester_sighandler);
}
void
testerReport(FILE *f, const char *preamble)
{
if(TesterData.errors != 0 || TesterData.signals != 0) {
fprintf(f, "%s: errors %d/%d, signals %d, FAILs %d\n",
preamble,
TesterData.errors,
TesterData.tests,
TesterData.signals,
TesterData.user_fails);
}
}
int
testerResult(void)
{
return TesterData.errors;
}
void
tester_set_time_limit(int t)
{
TesterData.time_limit = t;
}
examples/testHarness/tester.c
6.6 Algorithm design techniques
The fundamental principle of algorithm design was best expressed by the math-
ematician George Polya: “If there is a problem you can’t solve, then there is
an easier problem you can solve: find it.” For computers, the situation is even
better: if there is any technique to make a problem easier even by a tiny bit, then
you can repeat the technique—possibly millions or even billions of times—until
the problem becomes trivial.
For example, suppose we want to find the maximum element of an array of n
ints, but we are as dumb as bricks, so it doesn’t occur to us to iterate through
the array keeping track of the largest value seen so far. We might instead be
able to solve the problem by observing that the maximum element is either (a)
the last element, or (b) the maximum of the first n − 1 elements, depending on
which is bigger. Figuring out (b) is an easier version of the original problem, so
we are pretty much done once we’ve realized we can split the problem in this
way. Here’s the code:
/* returns maximum of the n elements in a */
int
max_element(int a[], int n)
{
    int prefix_max;

    assert(n > 0);
if(n == 1) {
return a[0];
} else {
prefix_max = max_element(a, n-1);
if(prefix_max < a[n-1]) {
return a[n-1];
} else {
return prefix_max;
}
}
}
Note that we need a special case for a 1-element array, because the empty prefix
of such an array has no maximum element. We also assert that the array
contains at least one element, just to avoid mischief.
One problem with this algorithm (at least when coding in C) is that the recursion
may get very deep. Fortunately, there is a straightforward way to convert the
recursion to a loop. The idea is that instead of returning a value from the
recursive call, we put it in a variable that gets used in the next pass through the
loop. The result is
/* returns maximum of the n elements in a */
int
max_element(int a[], int n)
{
int i; /* this replaces n-1 from the recursive version */
    int prefix_max;

    prefix_max = a[0];

    for(i = 1; i < n; i++) {
        if(prefix_max < a[i]) {
            prefix_max = a[i];
        }
    }

    return prefix_max;
}
Greedy method Run through your problem one step at a time, keeping track
of the single best solution at each step. Hope sincerely that this will not
lead you to make a seemingly-good choice early with bad consequences
later.
Some of these approaches work better than others—it is the role of algorithm
analysis (and experiments with real computers) to figure out which are likely to
be both correct and efficient in practice. But having all of them in your toolbox
lets you try different possibilities for a given problem.
6.6.4 Example: Sorting
6.7 Bit manipulation
6.8 Persistence
When a C program exits, all of its global variables, local variables, and heap-
allocated blocks are lost. Its memory is reclaimed by the operating system,
erased, and handed out to other programs. So what happens if you want to keep
data around for later?
To make this problem concrete, let’s suppose we want to keep track of a hit
counter for web pages. From time to time, the user will run the command
count_hit number where number is an integer value in the range 0 to 99, say.
(A real application would probably be using urls, but let’s keep things as simple
as possible.) We want count_hit to print the number of times the page with the
given number has been hit, i.e. 1 the first time it is called, 2 the next time, etc.
Where can we store the counts so that they will survive to the next execution of
count_hit?
The simplest solution is probably to store the data in a text file. Here’s a
program that reads a file hit, increments the appropriate value, and then writes
out a new version. To reduce the chances that data is lost (say if count_hit
blows up halfway through writing the file), the new values are written to a new
file hit~, which is then renamed to hit, taking the place of the previous version.
#include <stdio.h>
#include <stdlib.h>

#define NUM_COUNTERS (100)
#define COUNTER_FILE "/tmp/hit"
#define NEW_COUNTER_FILE "/tmp/hit~"
int
main(int argc, char **argv)
{
int c;
int i;
int counts[NUM_COUNTERS];
FILE *f;
if(argc < 2) {
fprintf(stderr, "Usage: %s number\n", argv[0]);
exit(1);
}
/* else */
c = atoi(argv[1]);
if(c < 0 || c >= NUM_COUNTERS) {
fprintf(stderr, "Counter %d not in range 0..%d\n", c, NUM_COUNTERS - 1);
exit(2);
}
f = fopen(COUNTER_FILE, "r");
if(f == 0) {
perror(COUNTER_FILE);
exit(3);
}
/* read them in */
for(i = 0; i < NUM_COUNTERS; i++) {
fscanf(f, "%d", &counts[i]);
}
fclose(f);
printf("%d\n", ++counts[c]);
rename(NEW_COUNTER_FILE, COUNTER_FILE);
return 0;
}
examples/persistence/textFile.c
If you want to use this, you will need to create an initial file /tmp/hit with
NUM_COUNTERS zeroes in it.
Using a simple text file like this is the easiest way to keep data around, since
you can look at the file with a text editor or other tools if you want to do things
to it. But it means that the program has to parse the file every time it runs. We
can speed things up a little bit (and simplify the code) by storing the values in
binary.
Here’s a version that stores the data as a binary file of exactly sizeof(int) * NUM_COUNTERS
bytes. It uses the stdio routines fread and fwrite to read and write the file.
These are much faster than the loops in the previous program, since they can
just slap the bytes directly into counts without processing them at all.
The program also supplies an extra flag b to fopen. This is ignored on Unix-like
machines but is needed on Windows machines to tell the operating system that
the file contains binary data (such files are stored differently from text files on
Windows).
#include <stdio.h>
#include <stdlib.h>
int
main(int argc, char **argv)
{
int c;
int counts[NUM_COUNTERS];
FILE *f;
if(argc < 2) {
fprintf(stderr, "Usage: %s number\n", argv[0]);
exit(1);
}
/* else */
c = atoi(argv[1]);
if(c < 0 || c >= NUM_COUNTERS) {
fprintf(stderr, "Counter %d not in range 0..%d\n", c, NUM_COUNTERS - 1);
exit(2);
}
f = fopen(COUNTER_FILE, "rb");
if(f == 0) {
perror(COUNTER_FILE);
exit(3);
}
/* read them in */
fread(counts, sizeof(*counts), NUM_COUNTERS, f);
fclose(f);
printf("%d\n", ++counts[c]);
449
rename(NEW_COUNTER_FILE, COUNTER_FILE);
return 0;
}
examples/persistence/binaryFile.c
Again, you’ll have to initialize /tmp/hit to use this; in this case, you want it to
contain exactly 400 null characters. On a Linux machine you can do this with
the command dd if=/dev/zero of=/tmp/hit bs=400 count=1.
The advantage of using binary files is that reading and writing them is both
simpler and faster. The disadvantages are (a) you can’t look at or update the
binary data with your favorite text editor any more, and (b) the file may no
longer be portable from one machine to another, if the different machines have
different endianness or different values of sizeof(int). The second problem we
can deal with by converting the data to a standard word size and byte order
before storing it, but then we lose some advantages of speed.
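For example, one might store each counter with a fixed width and byte order
using helpers like these (a sketch; error handling is omitted):

#include <stdint.h>
#include <stdio.h>

/* write x as exactly four bytes, least-significant byte first */
static void
put32(FILE *f, uint32_t x)
{
    putc(x & 0xff, f);
    putc((x >> 8) & 0xff, f);
    putc((x >> 16) & 0xff, f);
    putc((x >> 24) & 0xff, f);
}

/* read back a value written by put32 */
static uint32_t
get32(FILE *f)
{
    uint32_t x;

    x = (uint32_t) getc(f);
    x |= (uint32_t) getc(f) << 8;
    x |= (uint32_t) getc(f) << 16;
    x |= (uint32_t) getc(f) << 24;

    return x;
}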
We still may run into speed problems if NUM_COUNTERS is huge. The next program
avoids rewriting the entire file just to update one value inside it. This program
uses the fseek function to position the cursor inside the file. It opens the file
using the "r+b" flag to fopen, which means to open an existing binary file for
reading and writing.
#include <stdio.h>
#include <stdlib.h>
int
main(int argc, char **argv)
{
int c;
int count;
FILE *f;
if(argc < 2) {
fprintf(stderr, "Usage: %s number\n", argv[0]);
exit(1);
}
/* else */
c = atoi(argv[1]);
if(c < 0 || c >= NUM_COUNTERS) {
fprintf(stderr, "Counter %d not in range 0..%d\n", c, NUM_COUNTERS - 1);
exit(2);
}
f = fopen(COUNTER_FILE, "r+b");
if(f == 0) {
perror(COUNTER_FILE);
exit(3);
}
/* read counter */
fseek(f, sizeof(int) * c, SEEK_SET);
fread(&count, sizeof(int), 1, f);
printf("%d\n", ++count);
/* write it back */
fseek(f, sizeof(int) * c, SEEK_SET);
fwrite(&count, sizeof(int), 1, f);
fclose(f);
return 0;
}
examples/persistence/binaryFileFseek.c
Note that this program is not only shorter than the last one, but it also avoids
allocating the counts array. It also is less likely to run into trouble with running
out of space during writing. If we ignore issues of concurrency, this is the best
we can probably do with just stdio.
We can do even better using the mmap routine, available in all POSIX-compliant
C libraries. POSIX, short for Portable Operating System Interface, is supported by
essentially all Unix-like operating systems and NT-based versions of Microsoft
Windows. The mmap routine tells the operating system to “map” a file in the
filesystem to a region in the process’s address space. Reading bytes from this
region will read from the file; writing bytes to this region will write to the file
(although perhaps not immediately). Even better, if more than one process
calls mmap on the same file at once, they will share the memory region, so that
updates made by one process will be seen immediately by the others (with some
caveats having to do with how concurrent access to memory actually works on
real machines).
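In outline, the mapping step looks something like the sketch below; the helper
name mapCounters is illustrative and not part of the program that follows:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* map the first n ints of an existing file for shared read-write access;
 * returns a pointer into the mapping, or 0 on failure */
static int *
mapCounters(const char *path, size_t n)
{
    int fd;
    void *p;

    fd = open(path, O_RDWR);
    if(fd < 0) {
        return 0;
    }

    p = mmap(0, sizeof(int) * n, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if(p == MAP_FAILED) {
        close(fd);
        return 0;
    }

    return p;
}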
Here is the program using mmap:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h> /* For mmap. I think mman is short for "memory management." */
int
main(int argc, char **argv)
{
int c;
int *counts;
int fd;
if(argc < 2) {
fprintf(stderr, "Usage: %s number\n", argv[0]);
exit(1);
}
/* else */
c = atoi(argv[1]);
if(c < 0 || c >= NUM_COUNTERS) {
fprintf(stderr, "Counter %d not in range 0..%d\n", c, NUM_COUNTERS - 1);
exit(2);
}
if(counts == 0) {
perror(COUNTER_FILE);
exit(4);
}
printf("%d\n", ++counts[c]);
/* unmap the region and close the file just to be safe */
munmap(counts, sizeof(*counts) * NUM_COUNTERS);
close(fd);
return 0;
}
examples/persistence/binaryFileMmap.c
Now the code for actually incrementing counts[c] and writing it to the file
is trivial. Unfortunately, we have left stdio behind, and have to deal with
low-level POSIX calls like open and close to get at the file. Still, this may be
the most efficient version we can do, and becomes even better if we plan to do
many updates to the same file, since we can just keep the file open.
All of the solutions described so far can fail if you run two copies of count_hits
simultaneously. The mmap solution is probably the least vulnerable to failures,
as the worst that can happen is that some update is lost if the same location is
updated at exactly the same time. The other solutions can fail more spectacularly;
simultaneous writes to /tmp/hit~ in the simple text file version, for example,
can produce a wide variety of forms of file corruption. For a simple web page hit
counter, this may not be a problem. If you are writing a back-end for a bank,
you probably want something less vulnerable.
Database writers aim for a property called ACIDity from the acronym ACID
= Atomicity, Consistency, Isolation, and Durability. These are defined
for a system in which the database is accessed via transactions consisting of
one or more operations. An example of a transaction might be ++counts[c],
which we can think of as consisting of two operations: reading counts[c], and
writing back counts[c]+1.
Atomicity means that either every operation in a transaction is performed or
none is. In practice, this means that if the transaction fails, any partial
progress must be undone.
Consistency means that at the end of a transaction the database is in a “consistent”
state. This may just mean that no data has been corrupted (e.g. in the text
data file we have exactly 100 lines and they’re all integer counts), or it may also
extend to integrity constraints enforced by the database (e.g. in a database of
airline flights, the fact that flight 2937 lands at HVN at 22:34 on 12/17 implies
that flight 2937 exists, has an assigned pilot, etc.).
Isolation says that two concurrent transactions can’t detect each other; the
partial progress of one transaction is not visible to others until the transaction
commits.
Durability means that the results of any committed transaction are permanent.
In practice this means that enough information has been physically written to
disk to reconstruct the transaction before the transaction is reported as finished.
How can we enforce these requirements for our hit counter? Atomicity is not
hard: if I stop a transaction after a read but before the write, no one will be the
wiser (although there is a possible problem if only half of my write succeeds).
Consistency is enforced by the fseek and mmap solutions, since they can’t change
the structure of the file. Isolation is not provided by any of our solutions,
and would require some sort of locking (e.g. using flock) to make sure that
only one program uses the file at a time. Durability is enforced by not having
count_hits return until the fclose or close operation has succeeded (although
full durability would require running fsync or msync to actually guarantee data
was written to disk).
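To make the locking and syncing concrete, here is a minimal sketch of how the fseek-based counter update could be wrapped in an exclusive flock lock (for isolation) and an fsync (for durability). This is not part of the original counter programs: the incrementCounter function, the file name, and the stripped-down error handling are all invented for the example, and a careful version would check the return values of flock, fread, fwrite, and fsync. The mmap version would use msync on the mapped region instead.

/* Sketch: isolation via flock() and durability via fsync(), wrapped around
 * the fseek-based counter update. The file name, function name, and minimal
 * error handling are invented for this example. */
#include <stdio.h>
#include <unistd.h>   /* fsync */
#include <sys/file.h> /* flock */

#define DEMO_COUNTER_FILE "/tmp/counters"   /* invented name; assumed to exist */

int
incrementCounter(const char *path, int c)
{
    FILE *f;
    int fd;
    int count = 0;

    f = fopen(path, "r+b");
    if(f == 0) {
        perror(path);
        return -1;
    }

    fd = fileno(f);      /* underlying descriptor, needed by flock and fsync */
    flock(fd, LOCK_EX);  /* isolation: exclusive lock until we are done */

    fseek(f, sizeof(int) * c, SEEK_SET);
    fread(&count, sizeof(int), 1, f);
    count++;
    fseek(f, sizeof(int) * c, SEEK_SET);
    fwrite(&count, sizeof(int), 1, f);

    fflush(f);           /* push the stdio buffer to the kernel */
    fsync(fd);           /* durability: force the data onto the disk */
    flock(fd, LOCK_UN);  /* let other processes in again */
    fclose(f);

    return count;
}

int
main(void)
{
    int result = incrementCounter(DEMO_COUNTER_FILE, 0);

    if(result >= 0) {
        printf("%d\n", result);
    }
    return 0;
}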
Though it would be possible to provide full ACIDity with enough work, this is
a situation where using an existing well-debugged tool beats writing our own.
Depending on what we are allowed to do to the machine our program is running
on, we have many options for getting much better handling of concurrency. Some
standard tools we could use are:
• gdbm. This is a minimal hash-table-on-disk library that uses simplistic
locking to get isolation. The advantage of this system is that it’s probably
already installed on any Linux machine. The disadvantage is that it doesn’t
provide much functionality beyond basic transactions.
• Berkeley DB is a fancier hash-table-on-disk library that provides full
ACIDity but not much else. There is a good chance that some version of
this is also installed by default on any Linux or BSD machine you run into.
• Various toy databases like SQLite or MySQL provide tools that look very
much like serious databases with easy installation and little overhead.
These are probably the solutions most people choose, especially since
MySQL is integrated tightly with PHP and other Web-based scripting
languages. Such a solution also allows other programs to access the table
without having to know a lot of details about how it is stored, because the
SQL query language hides the underlying storage format. (A small SQLite
sketch appears after this list.)
• Production-quality databases like PostgreSQL, SQL Server, or Oracle
provide very high levels of robustness and concurrency at the cost of
requiring non-trivial management and possibly large licensing fees. This is
what you pick if you really are running a bank.
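To give a taste of what handing the problem to an existing tool looks like, here is a rough sketch of the hit counter built on the SQLite C API. None of this comes from the course materials: the hits.db file name, the counters table, and the single hard-coded counter id are made up for the illustration, and real code would check more return values. Wrapping the statements in BEGIN/COMMIT is what buys atomicity and isolation.

/* Sketch: hit counter stored in SQLite; compile with -lsqlite3.
 * The database file name, table name, and counter id are invented. */
#include <stdio.h>
#include <sqlite3.h>

int
main(void)
{
    sqlite3 *db;
    char *errMsg = 0;

    if(sqlite3_open("hits.db", &db) != SQLITE_OK) {
        fprintf(stderr, "can't open database: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* one row per counter; BEGIN/COMMIT makes the update a transaction */
    const char *sql =
        "CREATE TABLE IF NOT EXISTS counters (id INTEGER PRIMARY KEY, count INTEGER);"
        "BEGIN;"
        "INSERT OR IGNORE INTO counters VALUES (0, 0);"
        "UPDATE counters SET count = count + 1 WHERE id = 0;"
        "COMMIT;";

    if(sqlite3_exec(db, sql, 0, 0, &errMsg) != SQLITE_OK) {
        fprintf(stderr, "update failed: %s\n", errMsg);
        sqlite3_free(errMsg);
    }

    sqlite3_close(db);
    return 0;
}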
7 What next?
Congratulations! You now know everything there is to know about programming
in C. Now what do you do?
My recommendation would be the following: learn C++, since you know 75% of
it already, and you will be able to escape from some (but not all) of the annoying
limitations of C. And learn a scripting language you can be comfortable with,
for writing programs quickly where performance isn’t the main requirement.
can get at the internal data of an object) and inheritance (allowing one
abstract data type to be defined by extending another). You can fake
most of these things in C if you try hard enough (for example, using
function pointers), but it is always possible to muck around with internal
bits of things just because of the unlimited control C gives you over
the environment. This can quickly become dangerous in large software
projects.
C provides only limited support for avoiding namespace collisions
In a large C program, it’s impossible to guarantee that my
eat_leftovers function exported from leftovers.c doesn’t con-
flict with your eat_leftovers function in cannibalism.c. A mediocre
solution is to use longer names: leftovers_eat_leftovers vs
cannibalism_eat_leftovers, and one can also play games with
function pointers and global struct variables to allow something like
leftovers.eat_leftovers vs cannibalism.eat_leftovers. Most
modern programming languages provide an explicit package or namespace
mechanism to allow the programmer to control who sees what names
where.
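Here is a minimal sketch of the function-pointer trick just described. All of these names are invented for the example; in a real project the structs would be declared in header files and the function definitions hidden behind static in their own .c files, which is exactly what makes the trick work.

/* Sketch of faking namespaces with global structs of function pointers.
 * All of these names are invented for the example. */
#include <stdio.h>

/* the actual implementations, hidden behind static */
static void leftoversEat(void)   { puts("eating last night's dinner"); }
static void cannibalismEat(void) { puts("eating the cook"); }

/* one exported struct per "namespace" */
struct leftoversNamespace   { void (*eat_leftovers)(void); };
struct cannibalismNamespace { void (*eat_leftovers)(void); };

const struct leftoversNamespace   leftovers   = { leftoversEat };
const struct cannibalismNamespace cannibalism = { cannibalismEat };

int
main(void)
{
    /* reads almost like a package-qualified call */
    leftovers.eat_leftovers();
    cannibalism.eat_leftovers();
    return 0;
}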
On the above list, C++ fixes everything except the missing garbage collector.
If you want to learn C++, you should get a copy of The C++ Programming
Language, by Bjarne Stroustrup, which is the definitive reference manual. But
you can get a taste of it from several on-line tutorials:
• C++ tutorial for C users, by Eric Brasseur. Exactly what it says. Intro-
duces C++ features not found in C in order of increasing complexity.
• Some other on-line tutorials that assume little or no prior programming
experience:
– http://www.cplusplus.com/doc/tutorial/
– http://www.cprogramming.com/tutorial.html
C syntax has become the default for new programming languages targeted at a
general audience. Some noteworthy examples of C-like languages are Java (used
in Android), Objective-C (used in OSX and iOS), and C# (used in Windows).
Each of these fixes some of the misfeatures of C (including the lack of a garbage
collector and bounds checks on arrays) while retaining much of the flavor of
C. Which to choose probably depends on what platform you are interested in
developing for.
7.4 Scripting languages
As an example of the tradeoff, here is a C program that reports, for each line of its input, whether it is a palindrome, followed by equivalent programs in two popular scripting languages.
/* Palindrome detector.
*
* For each line of the input, prints PALINDROME if it is a palindrome
* or the index of the first non-matching character otherwise.
*
* Note: does not handle lines containing nulls.
*/
int c;
size = 1;
line = malloc(size);
if(line == 0) return 0;
n = 0;
n = strlen(s);
return IS_PALINDROME;
}
int
main(int argc, char **argv)
{
char *line;
int mismatch;
while((line = getLine()) != 0) {
mismatch = testPalindrome(line);
if(mismatch == IS_PALINDROME) {
puts("PALINDROME");
} else {
printf("%d\n", mismatch);
}
free(line);
}
return 0;
}
examples/scripting/palindrome.c
This version is written in Perl (http://www.perl.org):
#!/usr/bin/perl
while(<>) {
chomp; # remove trailing newline
if($_ eq reverse $_) {
print "PALINDROME\n";
} else {
for $i (0..length($_) - 1) {
if(substr($_, $i, 1) ne substr($_, length($_) - $i - 1, 1)) {
print $i, "\n";
last;
}
}
}
}
examples/scripting/palindrome.pl
The things to notice about Perl are that the syntax is deliberately very close to
C (with some idiosyncratic extensions like putting $ on the front of all variable
names), and that common tasks like reading all input lines get hidden inside
default constructions like while(<>) and the $_ variable, which functions with no
arguments like chomp operate on by default. This can allow for very compact
but sometimes very incomprehensible code.
Here’s a version in Python (https://www.python.org/):
#!/usr/bin/python
import sys
Timing the three versions on the same input gives roughly:
C       0.107s
Perl    0.580s
Python  2.052s
Note that for Perl and Python some of the cost is the time to start the interpreter
and parse the script, but factors of 10–100 are not unusual slowdowns when
moving from C to a scripting language. The selling point of these languages
is that in many applications run time is not as critical as ease and speed of
implementation.
As an even shorter example, if you just want to print all the palindromes in a
file, you can do that from the command line in one line of Perl, e.g.:
$ perl -ne 'chomp; print $_, "\n" if($_ eq reverse $_)' < /usr/share/dict/words
8 Assignments
TODO: general instructions about assignments
Make sure that you sign up for an account on the Zoo at http://zoo.cs.yale.edu/
accounts.html. If you already have an account, you still need to check the CPSC
223 box so that you can turn in assignments. It’s best to do this as soon as
possible.
You do not need to develop your solution on the Zoo, but you will need to turn
it in there, and it will be tested using the compiler on the Zoo.
For this assignment, you are to implement an encoder for Pig Esperanto, a
simplified version of the language game Pig Elvish, which in turn is similar to
Pig Latin.
Pig Esperanto works by translating a text one word at a time. For the purposes
of this assignment, a word consists of a consecutive sequence of characters for
which isalpha, defined in the include file ctype.h, returns true. Any characters
for which isalpha returns false should be passed through unmodified.
For each input word:
1. Move the first letter to the end.
2. Add the letters “an” to the end of any word of three letters or less, and “o”
to the end of any longer word.
3. Make the new first letter of the word match the case of the old first letter
of the word. Make the letter that was moved lowercase if it is not still the
first letter. Do not change the capitalization of any other letters.
Capitalization can be tested using the isupper and islower macros, and modi-
fied using the toupper and tolower macros. Like isalpha, these are all defined
in ctype.h.
You are to write a program encode.c that takes an input from stdin, encodes
it using the above rules, and writes the result to stdout.
For example, given the input
I *REALLY* like Yale's course-selection procedures.
Your program should output
Ian *EALLYro* ikelo Aleyo'san ourseco-electionso rocedurespo.
The unsympathetic robo-grading script used to grade this assignment may or
may not use the same tests as this command, so you should make sure your
program works on other inputs as well. You may also want to look at the style
grading checklist to see that you haven’t committed any gross atrocities against
readability, in case a human being should happen to look at your code.
You can submit your assignment more than once, but any late penalties will be
assessed based on the last submission. For more details about the submit script
and its capabilities, see here.
/*
* Translate text into Pig Esperanto, a simplified version
* of Pig Elvish.
*/
#include <stdio.h>
#include <ctype.h>
int
main(int argc, char **argv)
{
int c;
int firstLetter; /* initial letter if any */
int count = 0; /* number of letters in the current word so far */
for(;;) {
c = getchar();
if(isalpha(c)) {
if(count == 0) {
/* first letter */
firstLetter = c;
} else if(count == 1) {
/* second letter, fix the case */
if(isupper(firstLetter)) {
putchar(toupper(c));
} else{
putchar(tolower(c));
}
} else {
/* just pass it through */
putchar(c);
}
/* always bump count */
count++;
} else {
if(count != 0) {
/* finish off the previous word */
if(count > 1) {
putchar(tolower(firstLetter));
} else {
putchar(firstLetter);
}
if(count <= 3) {
putchar('a');
putchar('n');
} else {
putchar('o');
}
}
/* reset count */
count = 0;
if(c == EOF) {
break;
} else {
putchar(c);
}
}
}
return 0;
}
examples/2018/hw/1/encode.c
the i-th output character, starting from 0 as usual, is set to the j-th input
character, where j = (ai + b) mod n. For appropriate choices of a and b, this will
reorder the characters in the block in a way that can be reversed by choosing a
corresponding decryption key (n, a′, b′).
For example, if n = 5, a = 3, and b = 2, the string Hello, world! would be
encrypted like this:
in:   H  e  l  l  o  ,     w  o  r  l  d  !  \0 \0
i:    0  1  2  3  4  0  1  2  3  4  0  1  2  3  4
j:    2  0  3  1  4  2  0  3  1  4  2  0  3  1  4
out:  l  H  l  e  o  w  ,  o     r  !  l  \0 d  \0
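As a sanity check on the claim that the transposition can be reversed, here is a small computation, not part of the assignment handout, that derives a decryption key for the example key (5, 3, 2). The requirement is that a′ be a multiplicative inverse of a mod n and that b′ cancel the shift; brute force is plenty fast for such a small n.

/* Sketch: derive a decryption key (n, a2, b2) for an encryption key (n, a, b),
 * assuming gcd(a, n) == 1 so that an inverse exists. */
#include <stdio.h>

int
main(void)
{
    int n = 5, a = 3, b = 2;   /* the example key used above */
    int a2, b2;

    /* brute-force search for a2 == a^-1 (mod n) */
    for(a2 = 1; (a * a2) % n != 1; a2++);

    /* pick b2 so that a2*b + b2 == 0 (mod n), which cancels the shift */
    b2 = (n - (a2 * b) % n) % n;

    printf("decryption key: (%d, %d, %d)\n", n, a2, b2);   /* prints (5, 2, 1) */
    return 0;
}

Running the same transposition program again with the key this prints, (5, 2, 1), should recover the original text, plus the trailing null padding.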
8.2.3 Sample solution
/*
* Transposition block cipher encoder/decoder.
*/
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
/*
* transpose in to out by the rule
* out[i] = in[(a*i+b)%n];
*/
void
transpose(const char *in, char *out, int n, int a, int b)
{
/* we need to do some sneakery to deal with negative remainders */
long long j;
for(int i = 0; i < n; i++) {
j = (a * i + b) % n;
if(j < 0) {
j += n;
}
out[i] = in[j];
}
}
/*
* Send a buffer to stdout.
*
* Second argument gives length.
*
* We can't just use fputs because out may contain null characters.
*/
void
ship(const char *out, int n)
{
/* could also use fwrite */
for(int i = 0; i < n; i++) {
putchar(out[i]);
}
}
/*
* Read sequence of blocks, feed each to transpose then ship.
*/
int
main(int argc, char **argv)
{
int n;
int a;
int b;
char *in;
char *out;
int c;
int i;
if(argc != 4) {
fprintf(stderr, "Usage: %s n a b\n", argv[0]);
return 1;
}
n = atoi(argv[1]);
a = atoi(argv[2]);
b = atoi(argv[3]);
if(n <= 0) {
fprintf(stderr, "%s: block size n must be positive\n", argv[0]);
return 2;
}
in = malloc(n);
assert(in);
out = malloc(n);
assert(out);
i = 0;
while((c = getchar()) != EOF) {
in[i++] = c;
if(i == n) {
/* got a full block; encode it and send it out */
transpose(in, out, n, a, b);
ship(out, n);
i = 0;
}
}
if(i > 0) {
/* pad remaining bytes with nulls and ship */
for(; i < n; i++) {
in[i] = '\0';
}
transpose(in, out, n, a, b);
ship(out, n);
}
/* clean up */
free(in);
free(out);
return 0;
}
examples/2018/hw/2/transpose.c
For this assignment, you are to implement a data type supporting addition and
multiplication of large non-negative integers.
The file num.h, shown below, defines the interface to the data type. Your job is
to provide a matching num.c file that implements these functions. You may also
implement any other functions that would be helpful, but to be safe it would be
best to declare any extra functions static.
A Num represents a possibly very large non-negative integer, and can be initialized
by supplying a null-terminated string of ASCII digits to the numCreate function.
You will need to choose an appropriate representation for Nums that allows a
reasonably efficient implementation of the remaining functions.
A test harness that you can use to try out your code can be found in testNum.c.
These files are also available in the directory /c/cs223/Hwk3 on the Zoo.
#ifndef _NUM_H
#define _NUM_H
#include <stdio.h>
/*
* High-precision arithmetic on non-negative number in base 10.
*/
#endif /* _NUM_H */
examples/2018/hw/3/num.h
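Since the interface listing above omits the function declarations, here is a minimal sketch of one possible representation, storing the digits least-significant-first in an array. This is only a suggestion, not the official sample solution, and the struct layout and the numCreate signature shown are assumptions about the interface.

/* Sketch of one possible Num representation: digits stored least-significant
 * first. The struct layout and the numCreate signature are assumptions, not
 * the official solution. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct num {
    size_t n;       /* number of digits stored */
    char *digits;   /* digits[i] is the coefficient of 10^i, each 0..9 */
} Num;

Num *
numCreate(const char *s)
{
    size_t len = strlen(s);
    Num *x = malloc(sizeof(Num));

    if(x == 0) {
        return 0;
    }

    x->n = len;
    x->digits = malloc(len ? len : 1);
    if(x->digits == 0) {
        free(x);
        return 0;
    }

    /* reverse the string so that digits[0] is the ones place */
    for(size_t i = 0; i < len; i++) {
        x->digits[i] = s[len - 1 - i] - '0';
    }

    return x;
}

int
main(void)
{
    Num *x = numCreate("2018");

    printf("%d\n", x->digits[0]);   /* prints 8, the ones digit */

    free(x->digits);
    free(x);
    return 0;
}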
8.3.2 Submitting your assignment
2018-02-12 You may assume that you will never have to deal with a number
with more than 2³¹ − 1 decimal digits. This is suggested by the use of
int for the index in numGetDigit, although in principle numPrint could
extract more digits than this. But we will just declare this officially.
For this assignment, you are to implement a data type representing a deck of
playing cards.
The file deck.h, also shown in the Interface section below, defines the interface
to the data type. Your job is to provide a matching deck.c file.
A deck consists of an ordered sequence of 0 or more cards, implemented as the
struct card type. Each card has a rank, which is a character in the string
"A23456789TJQK", and a suit, which is a character in the string "CDHS". A
card is printed by giving the rank and then the suit. For example, the Ten of
Diamonds has rank 'T' and suit 'D', and would be printed as TD.
A deck is printed by printing all the cards in the deck, separated by spaces. The
deckPrint function should do this. There should not be a space after the last
card.
The deckCreate and deckDestroy functions create and destroy decks. A new
deck always contains 52 cards, ordered by suit, then rank.
The deckGetCard function removes and returns the card at the top of a deck.
The deckPutCard function adds a new card to the bottom of a deck.
Two additional functions split and combine decks. The deckSplit function takes
a deck d, and a number n, and returns (using pointers passed in by the caller)
two decks d1 and d2, where d1 contains the top n cards in d (or all cards in d
if n is greater than or equal to the size of d), and d2 contains any cards that
are left over. As a side effect, deckSplit destroys d. The deckShuffle function
combines two decks d1 and d2 by alternately taking cards from the top of each
deck, starting with d1; if one of the decks runs out, the remaining deck supplies
the rest of the cards. Like deckSplit, deckShuffle returns a new deck and
destroys its inputs.
A test harness that you can use to try out your code can be found in testDeck.c.
This implements Deck Assembly Language, a minimalist programming language
for manipulating decks. Here is an example of running testDeck by hand. Note
that inputs and outputs are interleaved.
$ ./testDeck
# create a new deck and print it
c1 p1
AC 2C 3C 4C 5C 6C 7C 8C 9C TC JC QC KC AD 2D 3D 4D 5D 6D 7D 8D 9D TD JD QD KD AH 2H 3H 4H 5H
# remove top card from the deck
-1 p1
AC
2C 3C 4C 5C 6C 7C 8C 9C TC JC QC KC AD 2D 3D 4D 5D 6D 7D 8D 9D TD JD QD KD AH 2H 3H 4H 5H 6H
# put it back on the bottom
+1 AC p1
2C 3C 4C 5C 6C 7C 8C 9C TC JC QC KC AD 2D 3D 4D 5D 6D 7D 8D 9D TD JD QD KD AH 2H 3H 4H 5H 6H
# split into two decks
/ 1 2 17 p1 p2
2C 3C 4C 5C 6C 7C 8C 9C TC JC QC KC AD 2D 3D 4D 5D
6D 7D 8D 9D TD JD QD KD AH 2H 3H 4H 5H 6H 7H 8H 9H TH JH QH KH AS 2S 3S 4S 5S 6S 7S 8S 9S TS
# shuffle them back together
* 1 2 p1
2C 6D 3C 7D 4C 8D 5C 9D 6C TD 7C JD 8C QD 9C KD TC AH JC 2H QC 3H KC 4H AD 5H 2D 6H 3D 7H 4D
# split into a nonempty deck and an empty deck
/ 1 2 100000 e1 p1 e2 p2
1
2C 6D 3C 7D 4C 8D 5C 9D 6C TD 7C JD 8C QD 9C KD TC AH JC 2H QC 3H KC 4H AD 5H 2D 6H 3D 7H 4D
0
8.4.2 Interface
#include <stdio.h>
// A single card
// This is small enough that we usually pass it
// around by copying instead of using pointers.
typedef struct card {
char rank; /* from RANKS */
char suit; /* from SUITS */
} Card;
// A deck of cards
typedef struct deck Deck;
//
// If d1 is X X X X
// and d2 is Y Y Y Y Y Y Y,
// return value is X Y X Y X Y X Y Y Y Y.
//
// If d1 is X X X X
// and d2 is Y Y,
// return value is X Y X Y X X.
//
// Running time should be O(length of shorter deck).
// Destroys d1 and d2.
Deck *deckShuffle(Deck *d1, Deck *d2);
TBA
TBA
TBA
TBA
9 Sample assignments from Spring 2015
Make sure that you sign up for an account on the Zoo at http://zoo.cs.yale.edu/
accounts.html. If you already have an account, you still need to check the CPSC
223 box so that you can turn in assignments. It’s best to do this as soon as
possible.
You do not need to develop your solution on the Zoo, but you will need to turn
it in there, and it will be tested using the compiler on the Zoo.
9.1.3 Your task
For this assignment, you are to write a program encode.c that takes a plaintext
from stdin, encodes it using the above algorithm, and writes the result to
stdout.
For example, given the input
"Stop, thief!" cried Tom, arrestingly.
9.1.4 Hints
• You should assume that you are using the standard Latin 26-letter alphabet.
• You may assume that the characters 'A' through 'Z' and 'a' through 'z'
are represented using contiguous ranges of integers, so that the expression
c - 'A' gives the position of c in the alphabet, provided c is an uppercase
character, and counting A as 0 (see the sketch after these hints). This means
that your program will not be portable to machines that use EBCDIC or some
other exotic character representation.
• To test if a character is uppercase or lowercase, one option would be to put
#include <ctype.h> in your program and use the isupper and islower
macros. Note that these may behave oddly if you have set a locale that
uses a different alphabet. It may be safer to make your own tests.
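Here is the sketch promised in the hints: a helper that rotates one uppercase letter by offset positions using the c - 'A' trick. The rotateUpper name is made up for the example, lowercase letters work the same way with 'a', and the offset is assumed to be in the range 0 to 25.

/* Sketch: rotate one uppercase letter by offset positions (0 <= offset < 26),
 * assuming 'A'..'Z' occupy a contiguous range as they do in ASCII. */
#include <ctype.h>
#include <stdio.h>

int
rotateUpper(int c, int offset)
{
    if(!isupper(c)) {
        return c;               /* pass anything else through unchanged */
    }
    return 'A' + (c - 'A' + offset) % 26;
}

int
main(void)
{
    /* 'Z' rotated by 17 wraps around to 'Q' */
    printf("%c\n", rotateUpper('Z', 17));
    return 0;
}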
$ echo hi > file
$ cat file
hi
$ od -t x1z file
0000000 68 69 0a >hi.<
0000003
/*
* Encode text on stdin by alphabet rotation with shifting offset.
*
* Initially, each character 'A'..'Z' or 'a'..'z' is rotated 17 positions.
*
* After encoding an uppercase letter, the offset is increased by 5 (mod 26).
*
* After encoding a lowercase letter, the offset is increased by 3 (mod 26).
*
* These parameters are set using the INITIAL_OFFSET, UPPERCASE_STEP, and LOWERCASE_STEP
* constants defined below.
*
*/
#include <stdio.h>
int
main(int argc, char **argv)
{
int offset = INITIAL_OFFSET;
int c;
return 0;
}
examples/2015/hw/1/encode.c
typedef struct safe Safe; /* opaque data type for a safe */
/*
* Returns the number of tumblers on a safe.
* If this is n, the possible tumbler indices will be 0 through n-1.
* */
int numTumblers(Safe *s);
/*
* Returns the number of positions of each tumbler.
* If this is n, the possible tumbler positions will be 0 through n-1.
*/
int numPositions(Safe *s);
/*
* Try a combination.
*
* This should be an array of numTumblers(s) ints.
*
* Returns contents of safe (a non-negative int) if combination is correct
* and safe has not yet self-destructed.
*
* Returns SAFE_BAD_COMBINATION if combination is incorrect
* and safe has not yet self-destructed.
*
* Returns SAFE_SELF_DESTRUCTED if safe has self-destructed.
*
* Note: may modify combination.
*/
int tryCombination(Safe *s, int *combination);
examples/2015/hw/2/safe.h
The noteworthy function in this API is tryCombination, which takes a pointer
to a safe and an array of ints representing the combination, and returns either
the contents of the safe (an int), the special code SAFE_BAD_COMBINATION if the
combination is incorrect, or the special code SAFE_SELF_DESTRUCTED if the safe
blew up after seeing too many bad combinations. Note that tryCombination
does not declare its second argument to be const and may not leave it intact.
The additional functions allow you to obtain important information about the
safe, like how many tumblers it has and what values these tumblers can be set
to. The behavior of a safe given a combination with the wrong number of values
or values outside the permitted range is undefined.
Your task is to write a function openSafe that will open a safe, if possible, by
trying all possible combinations. Note that if the safe self-destructs before you
can try all the possibilities, this task may not in fact be possible. Your openSafe
function should return SAFE_SELF_DESTRUCTED in this case. Your function
should be defined in a file openSafe.c and should match the declaration in this
file:
/* Include safe.h before this file to get the definition of Safe. */
/*
* Open a safe and return the value returned by tryCombination,
* or SAFE_SELF_DESTRUCTED if the safe self-destructed.
*/
int openSafe(Safe *s);
examples/2015/hw/2/openSafe.h
It is recommended that you put the lines below in your openSafe.c file to ensure
consistency with these declarations:
#include "safe.h"
#include "openSafe.h"
You may put additional functions in openSafe.c if that would be helpful. You
should declare these static to avoid the possibility of namespace conflicts.
In addition to safe.h and openSafe.h, /c/cs223/Hwk2/sourceFiles also con-
tains a main.c file that can be compiled together with openSafe.c to generate a
program that can be called from the command line. This program generates a
safe with a pseudorandom combination based on parameters specified on the
command line, runs your openSafe routine on it, and prints the value that
openSafe returns. You should not rely on your function being tested with this
particular program.
This runs the test script in /c/cs223/Hwk2/test.openSafe on your submit-
ted assignment. You can also run this script by hand to test the version of
openSafe.c in your current working directory.
9.2.3 Valgrind
You may need to allocate storage using malloc to complete this assignment. If
you do so, you should make sure that you call free on any block you allocate
inside your openSafe function before the function returns. The test.openSafe
script attempts to detect storage leaks or other problems resulting from misuse of
these routines by running your program with valgrind. You can also use valgrind
yourself to track down the source of errors, particularly if you remember to
compile with debugging info turned on using the -g3 option to gcc. The script
/c/cs223/bin/vg gives a shortcut for running valgrind with some of the more
useful options.
#include <stdlib.h>
#include <assert.h>
#include "safe.h"
#include "openSafe.h"
n = numTumblers(s);
free(copy);
return result;
}
int
openSafe(Safe *s)
{
int *combination; /* counter for combinations */
int n; /* number of tumblers */
int base; /* number of positions */
int result; /* result of tryCombination */
/* allocate space */
n = numTumblers(s);
base = numPositions(s);
combination = malloc(sizeof(int) * n);
assert(combination);
for(zeroCombination(n, combination);
(result = nondestructiveTryCombination(s, combination)) == SAFE_BAD_COMBINATION;
nextCombination(n, base, combination));
free(combination);
return result;
}
examples/2015/hw/2/openSafe.c
i    c0 + c1·i + c2·i²
0    1 = 1 + 5·0 + 3·0²
1    9 = 1 + 5·1 + 3·1²
2    23 = 1 + 5·2 + 3·2²
Similarly, we can use quadratic letter sequences to reveal secret messages hidden
in the lyrics of K-pop songs:
$ ./qls hail satan < gangnam-style-excerpt.txt
470 3 5 hail
14 10 30 satan
14 56 7 satan
or even examine Act 1 of The Tempest to help resolve the Shakespeare authorship
question:27
$ ./qls "Bacon" "de Vere" "Marlowe" "Stanley" "that Stratford dude" < tempest-act-one.txt
120 387 777 Bacon
27 Stratfordians, Oxfordians, and other conspiracy theorists might object that these results
depend critically on the precise formatting of the text. We counter this objection by observing
that we used the Project Gutenberg e-text of The Tempest, which, while not necessarily the
most favored by academic Shakespeare scholars, is the easiest version to obtain on-line. We
consider it further evidence of Sir Francis Bacon’s genius that not only was he able to subtly
encode his name throughout his many brilliant plays, but he was even able to anticipate the
effects of modern spelling and punctuation on this encoding.
120 542 906 Bacon
120 851 850 Bacon
120 1592 726 Bacon
120 1607 472 Bacon
120 2461 95 Bacon
120 2729 50 Bacon
120 3225 215 Bacon
120 3420 284 Bacon
120 4223 330 Bacon
120 4534 76 Bacon
120 5803 29 Bacon
143 46 161 Bacon
143 268 727 Bacon
143 684 1434 Bacon
[... 280 more lines of Bacon omitted ...]
19959 1178 87 Bacon
5949 239 465 Marlowe
Write a program qls.c that takes a text on stdin and searches for quadratic
letter sequences that start with the strings given in argv. Your program should
output all such quadratic letter sequences that it finds, using the format
printf("%d %d %d %s\n", [...]);
where [...] should be replaced by appropriate expressions to give c0 , c1 , c2 ,
and the string found.
If a string appears more than once at the start of a quadratic letter sequence,
your program should print all occurrences. The order your output lines appear
in is not important, since the test script sorts them into a canonical order. Do
whatever is convenient.
Your program should be reasonably efficient, but you do not need to get carried
away looking for a sophisticated algorithm for this problem. Simply testing all
plausible combinations of coefficients should be enough.
Because neither K-pop songs nor Elizabethan plays use null characters, you may
assume that no null characters appear in your input.
You may also assume that any search strings will contain at least two characters,
in order to keep the number of outputs finite.
/c/cs223/bin/submit 3 qls.c
You can run some basic tests on your submitted solution with
/c/cs223/bin/testit 3 qls
The test program is also available as /c/cs223/Hwk3/test.qls. Sample inputs
and outputs can be found in /c/cs223/Hwk3/testFiles. The title of each file
contains the test strings used, separated by - characters. Before comparing the
output of your program to the output files, you may find it helpful to run it
through sort, e.g.
./qls hail satan < hail-satan.in | sort > test.out
diff test.out hail-satan.out
/*
* Search for quadratic letter sequences starting with words from argv on stdin.
*
* A quadratic letter sequence of length n in s is a sequence of characters
*
* s[c0 + c1*i + c2*i*i]
*
* where c0, c1, c2 are all >= 0, at least one of c1 and c2 is > 0,
* and i ranges over 0, 1, 2, ..., n-1.
*
* For each QLS found, prints c0, c1, c2, and the target string to stdout.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
/*
* Return true iff we get a match in s for t with coefficients c
*
* Behavior is undefined if coefficients would send us off the end of s.
*/
static int
qlsMatch(const char *s, const char *t, int c[NUM_COEFFICIENTS])
{
int i;
for(i = 0; t[i] != '\0'; i++) {
if(s[c[0] + c[1] * i + c[2] * i * i] != t[i]) {
/* no match */
return 0;
}
}
return 1;
}
/*
* Search for quadratic letter sequences in s starting with t
* and print results to stdout.
*/
static void
qlsSearch(const char *s, const char *t)
{
int c[NUM_COEFFICIENTS]; /* coefficients */
int lenS; /* length of s */
int lenT; /* length of t */
int maxI; /* maximum value for i (this is lenT-1) */
lenS = strlen(s);
lenT = strlen(t);
maxI = lenT-1;
/* try all possible c[0] that will let us finish before lenS */
for(c[0] = 0; c[0] + maxI < lenS; c[0]++) {
/* if s[c[0]] isn't right, c[1] and c[2] can't fix it */
if(s[c[0]] == t[0]) {
/* try all feasible c[1] */
for(c[1] = 0; c[0] + c[1] * maxI < lenS; c[1]++) {
/* try all feasible c[2], but start at 1 if c[1] == 0 */
for(c[2] = (c[1] == 0); c[0] + c[1] * maxI + c[2] * maxI * maxI < lenS; c[2]++) {
/* now see if we get a match */
if(qlsMatch(s, t, c)) {
printf("%d %d %d %s\n", c[0], c[1], c[2], t);
}
}
}
}
}
}
/*
* Return a single string holding all characters from stdin.
*
* This is malloc'd data that the caller should eventually free.
*/
static char *
getContents(void)
{
size_t size;
size_t len;
char *text;
int c;
size = INITIAL_BUFFER_SIZE;
len = 0;
text = malloc(size);
assert(text);
while((c = getchar()) != EOF) {
if(len >= size) {
/* grow the buffer */
size *= 2;
text = realloc(text, size);
assert(text);
}
text[len++] = c;
}
/* cleanup */
text = realloc(text, len+1);
assert(text);
text[len] = '\0';
return text;
}
int
main(int argc, char **argv)
{
int i;
char *s;
s = getContents();
/* search for QLSs starting with each command-line argument */
for(i = 1; i < argc; i++) {
qlsSearch(s, argv[i]);
}
free(s);
return 0;
}
examples/2015/hw/3/qls.c
For this assignment you are to write a program that takes from stdin a sequence
of instructions for pasting ASCII art pictures together, reads those pictures from
files, and writes the combined picture to stdout.
Each instruction is of the form row column filename, suitable for reading
with scanf("%d %d %s", &row, &col, filename);, where row and col are
declared as ints and filename is a suitably large buffer of chars. Such an
instruction means to paste the contents of file filename into the picture with
each character shifted row rows down and column columns to the right of its
position in file filename. When pasting an image, all characters other than space
(' ', or ASCII code 32) overwrite any characters from earlier files at the same
position. Spaces should be treated as transparent, having no effect on the final
image.
For example, suppose that the current directory contains these files:
# # #
\==========/
\......../
examples/2015/hw/4/ship
/\
/vv\
/vvvv\
||
examples/2015/hw/4/tree
* * *
____|_|_|_____
|_____________|
|___HAPPY_____|
|__BIRTHDAY___|
|_____________|
examples/2015/hw/4/cake
Then this is what we should get from executing the command:
$ echo "1 1 ship 3 5 ship 3 19 tree 7 2 ship 13 4 ship 4 22 tree 5 6 cake" | ./compositor
# # #
\==========/
\......#.# # /\
\==========/ /vv\/\
\....*.*.* /vvv/vv\
____|_|_|_____|/vvvv\
|_____________| ||
\===|___HAPPY_____|
\..|__BIRTHDAY___|
|_____________|
# # #
\==========/
\......../
examples/2015/hw/4/example.out
For this assignment, you may submit whatever source files you like, along with
a file Makefile that will generate the program compositor when make is called
with no arguments (see the instructions for using make.)
You can test your submitted assignment using the public test script with
/c/cs223/bin/testit 4 public
You may also test your unsubmitted assignment in the current working directory
with
/c/cs223/Hwk4/test.public
The test script is intended mostly to guard against trivial errors in output format
and is not necessarily exhaustive.
9.4.3 Notes
9.4.3.1 Input
For parsing the commands on stdin, we recommend using scanf. You can test
for end of file by checking if scanf correctly parsed all three arguments, as in
int row;
int col;
char filename[BUFFER_SIZE];
while(scanf("%d %d %s", &row, &col, filename) == 3) {
/* process one paste instruction */
}
9.4.3.2 Output
Your output should include newline and space characters to put the composited
characters in the appropriate rows and columns. It should not include any more
such characters than are absolutely necessary.
For example, there should never be a space at the end of a line (even if there
is a space at the end of a line in one of the input files). Similarly, there should
not be any blank lines at the end of your output. You may, however, find it
necessary to add a newline to the end of the last line to avoid having the output
end in the middle of a line.
9.4.3.3 General
You may assume that the final picture is not so big that you can’t store a row
or column number for one of its characters in an int.
28 Normally this is a dangerous thing to assume, but this assignment is complicated enough
already.
9.4.4 Sample solution
I wrote two versions of this. The first used a jagged array to represent an image,
but I decided I didn’t like it and did another version using a sorted linked list of
points. This second version is shown below.
/*
* Alternate version of ASCII art thing using a queue.
*/
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
/*
* Idea of this data structure is that we have a sorted array
* of pixels, where each pixel specifies a row, column, and character
* to put in that position. The sort order is row then column.
*
* This is organized as a queue in the sense that we can push
* new pixels on to the end of it, although as it happens we
* never actually dequeue anything.
*/
struct pixel {
int row;
int col;
char value;
};
struct queue {
size_t top; /* number of elements */
size_t size; /* number of allocated slots */
struct pixel *pixels; /* pixel values, sorted by row then column */
};
q = malloc(sizeof(struct queue));
assert(q);
q->top = 0;
q->size = QUEUE_INITIAL_SIZE;
return q;
}
/* clean up queue */
void
queueDestroy(struct queue *q)
{
free(q->pixels);
free(q);
}
q->pixels[q->top++] = p;
}
q = queueCreate();
f = fopen(filename, "r");
if(f == 0) {
perror(filename);
exit(1);
}
p.row = p.col = 0;
fclose(f);
return q;
}
/* end last row */
putchar('\n');
}
/*
* Merge two queues, creating a new, freshly-allocated queue.
* New queue is sorted. If there are pixels in both left
* and right with the same row and column, the one from right
* overwrites the one from left.
*/
struct queue *
queueMerge(const struct queue *left, const struct queue *right)
{
int l = 0;
int r = 0;
struct queue *q;
q = queueCreate();
queuePush(q, right->pixels[r++]);
}
return q;
}
int
main(int argc, char **argv)
{
struct queue *merged; /* holding place for result of merge */
struct queue *left; /* accumulated picture */
struct queue *right; /* new picture */
int row; /* row offset for new picture */
int col; /* column offset for new picture */
char filename[BUFFER_SIZE]; /* filename for new picture */
if(argc != 1) {
fprintf(stderr, "Usage: %s\n", argv[0]);
return 1;
}
queueDestroy(left);
queueDestroy(right);
}
queueWrite(left);
queueDestroy(left);
return 0;
}
examples/2015/hw/4/compositor.c
Here is a Makefile.
You may assume that the program in argv is complete in the sense that it
includes rules for any combination of state and symbol you will encounter while
executing it. You are not required to detect if this assumption is violated.
9.5.2 Example
The program
b+2a-0 a-1a-1
gives instructions for what to do in state 1 (b+2a-0) and state 2 (a-1a-1). In
state 1, if the controller reads an a, the triple b+2 means that it should write
b, move right (+), and switch to state 2. If instead it reads a b, the triple a-0
means that it should write a, move left (-), and halt (0). In state 2, the machine
always writes a, moves left, and switches to state 1.
Below is a depiction of this machine’s execution. It passes through 4 states
(including both the initial state and the final halting state) using a total of 3
steps. The controller and its current state is shown above its current position on
the tape at each point in time. To avoid having to put in infinitely long lines,
only the middle three tape cells are shown.
 1
aaa
  2
aba
 1
aba
0
aaa
You should submit a Makefile and whatever source files are needed to generate a
program ./turing when make is called with no arguments. The turing program
should simulate a Turing machine as described above and print the number of
steps that it takes until it halts in decimal format, followed by a newline. It
should not produce any other output. For example, using the program above,
your program should print 3:
$ ./turing b+2a-0 a-1a-1
3
For more complex programs you may get different results. Here is a 3 state, 3
symbol program that runs for a bit longer:
$ ./turing b+2a-0c-3 b-3c+2b-2 b-1a+2c-1
92649163
You may assume that tape symbols can always be represented by lowercase
letters, that states can always be represented by single digits, and that argv is
in the correct format (although it may be worth including a few sanity checks in
your program just in case).
Not all Turing machine programs will halt. Your program is not required to
detect if the Turing machine it is simulating will halt eventually or not (although
it should notice if it does halt).
Submit all files needed to build your program as usual using /c/cs223/bin/submit
5 filename.
There is a public test script in /c/cs223/Hwk5/test.public. You can run this
on your submitted files with /c/cs223/bin/testit 5 public.
/*
* Simple Turing machine simulator.
*
* Tape holds symbols 0 (default) through 2.
*
* Controller programming is specified in argv:
*
* argv[i] gives transitions for state i as six characters.
*
* Each triple of characters is <action><direction><new-state>
*
* where <action> is one of:
*
* a,b,c: write this value to tape
*
* <direction> is one of:
*
* -: go left
* +: go right
* .: stay put
*
* The three pairs give the transition for reading 0, 1, 2 from tape.
*
* State 0 is the halting state.
*
* On halting, prints number of transitions followed by contents
* of all tape cells that have ever been visited by the
* finite-state controller.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <sys/types.h>
struct configuration {
unsigned int state;/* state of head */
size_t leftmost; /* leftmost cell visited */
size_t rightmost; /* rightmost cell visited */
size_t current; /* current cell */
size_t tapeLength; /* current allocated space for tape */
char *tape; /* contents of cells */
};
newTapeLength = 4*c->tapeLength;
newTape = malloc(newTapeLength);
assert(newTape);
newTape[i + offset] = c->tape[i];
}
oldTape = c->tape;
c->tape = newTape;
c->tapeLength = newTapeLength;
c->current += offset;
c->leftmost += offset;
c->rightmost += offset;
free(oldTape);
}
struct configuration *
configurationCreate(void)
{
struct configuration *c;
size_t i;
c = malloc(sizeof(struct configuration));
assert(c);
c->state = 1;
c->tapeLength = INITIAL_TAPE_LENGTH;
c->leftmost = c->rightmost = c->current = c->tapeLength / 2;
c->tape = malloc(c->tapeLength);
assert(c->tape);
for(i = 0; i < c->tapeLength; i++) {
c->tape[i] = 0; /* default symbol */
}
return c;
}
void
configurationDestroy(struct configuration *c)
{
free(c->tape);
free(c);
}
/* used for debugging mostly */
void
configurationPrint(const struct configuration *c)
{
size_t i;
int
main(int argc, char **argv)
{
struct configuration *c;
char cellValue;
const char *transition;
size_t steps;
if(argc == 1) {
fprintf(stderr, "Usage: %s transitions\n", argv[0]);
return 1;
}
c = configurationCreate();
steps = 0;
while(c->state != 0) {
steps++;
cellValue = c->tape[c->current];
assert(0 <= cellValue);
assert(3*(cellValue+1) <= strlen(argv[c->state]));
/* find the triple for this state and symbol */
transition = argv[c->state] + 3 * cellValue;
c->tape[c->current] = transition[0] - SYMBOL_BASE;
switch(transition[1]) {
case '-':
if(c->current == 0) {
configurationExpand(c);
}
c->current--;
if(c->current < c->leftmost) {
c->leftmost = c->current;
}
break;
case '+':
if(c->current == c->tapeLength - 1) {
configurationExpand(c);
}
c->current++;
if(c->current > c->rightmost) {
c->rightmost = c->current;
}
break;
case '.':
/* do nothing */
break;
default:
fprintf(stderr, "Bad direction '%c'\n", transition[2]);
exit(2);
break;
}
/* switch to the new state */
c->state = transition[2] - '0';
#ifdef PRINT_CONFIGURATION
configurationPrint(c);
#endif
}
printf("%zu\n", steps);
configurationDestroy(c);
return 0;
}
examples/2015/hw/5/turing.c
CC=gcc
CFLAGS=-std=c99 -Wall -pedantic -g3
all: turing
turing: turing.o
$(CC) $(CFLAGS) -o $@ $^
clean:
$(RM) turing *.o
examples/2015/hw/5/Makefile
For this assignment, you are to implement a data structure for playing a game
involving ships placed in a large square grid. Each ship occupies one or more
squares in either a vertical or horizontal line, and has a name that consists of a
single char other than a period (which will be used to report the absence of a
ship). Ships have a bounded maximum length; attempts to place ships longer
than this length have no effect.
All type and constant definitions for the data type, and all function declarations,
are given in the file ships.h, which is shown below, and which you can also find
in /c/cs223/Hwk6/sourceFiles/ships.h. The playing field is represented by
a struct field (which you get to define). A new struct field is created by
fieldCreate, and when no longer needed should be destroyed by fieldDestroy.
These data types from ships.h control ship naming and placement. Note that
uint32_t is defined in stdint.h (which is also included by inttypes.h). You
will need to include one of these files before ships.h to get this definition.
typedef uint32_t coord;
struct position {
coord x;
coord y;
};
struct ship {
struct position topLeft; /* coordinates of top left corner */
int direction; /* HORIZONTAL or VERTICAL */
unsigned int length; /* length of ship */
char name; /* name of ship */
};
Actual placement is done using the fieldPlaceShip function, declared as follows:
void fieldPlaceShip(struct field *f, struct ship s);
A ship of length m placed horizontally with its top left corner at position (x, y)
will occupy positions (x, y) through (x+m−1, y). If instead it is placed vertically,
it will occupy positions (x, y) through (x, y + m − 1). If any of these coordinates
exceed the maximum coordinate COORD_MAX (defined in ships.h), the ship will
not be placed. The ship will also not be placed if its name field is equal to
NO_SHIP_NAME or if the length exceeds MAX_SHIP_LENGTH.
If the new ship would occupy any position occupied by a ship previously placed
in the field, the previous ship will be removed. It is possible for many ships to be removed
at once in this way.
The fieldAttack function can be used to remove a ship at a particular location
without placing a new ship. It returns the name of the removed ship, if any, or
NO_SHIP_NAME if there is no ship at that location.
Finally, the fieldCountShips returns the number of ships still present in the
field.
Your job is to write an implementation of these functions, which you should
probably put in a file ships.c. You must also supply a Makefile, which,
when make is called with no arguments, generates a test program testShips
from your implementation and the file testShips.c that we will provide. You
should not count on precisely this version of testShips.c being supplied; your
implementation should work with any main program that respects the interface
in ships.h.
You should write your implementation so that it will continue to work if the
typedef for coord, or the definitions of the constants COORD_MAX, NO_SHIP_NAME,
MAX_SHIP_LENGTH, HORIZONTAL, or VERTICAL change. You may, however, assume
that coord is an unsigned integer type and the COORD_MAX is the largest value
that can be represented by this type.
If it helps in crafting your implementation, you may assume that
MAX_SHIP_LENGTH will always be a reasonably small constant. You do
not need to worry about implementing a data structure that will handle huge
ships efficiently. On the other hand, COORD_MAX as defined in the default
ships.h is 2³² − 1, so you will need to be able to deal with a field with at
least 2⁶⁴ possible locations, a consideration you should take into account when
choosing a data structure to represent a field.
9.6.3 The testShips program
...c.
...c.
...c.
.....
.....
The input files used by test.public can be found in /c/cs223/Hwk6/testFiles.
Some of these were generated randomly using the script /c/cs223/Hwk6/makeRandom,
which you should feel free to use for your own nefarious purposes.
Because the interface in ships.h gives no way to find out what ships are currently
in the field, the test program will not actually produce pictures like the above.
Instead, it prints after each command a line giving the name of the ship sunk by
fieldAttack (or NO_SHIP_NAME if no ship is sunk or fieldPlaceShip is called)
and the number of ships left in the field following the attack. So the user must
imagine the carnage as the 100000 ships in randomSparseBig.in somehow leave
only 25336 survivors in randomSparseBig.out, demonstrating the importance
of strict navigational rules in real life.
9.6.4 Submitting your assignment
/*
* Type for coordinates, and their maximum possible value.
*
* Include <stdint.h> before this header file
* to get the definition of uint32_t
* and its maximum value UINT32_MAX.
*/
typedef uint32_t coord;
#define COORD_MAX (UINT32_MAX)
/*
* Non-opaque structs for passing around positions and ship placements.
*/
struct position {
coord x;
coord y;
};
struct ship {
struct position topLeft; /* coordinates of top left corner */
int direction; /* HORIZONTAL or VERTICAL */
unsigned int length; /* length of ship */
char name; /* name of ship */
};
/*
* Create a playing field for holding ships.
*/
struct field *fieldCreate(void);
/*
* Free all space associated with a field.
*/
void fieldDestroy(struct field *);
/*
* Place a ship in a field with given placement and name.
*
* If placement.length is less than one or greater than MAX_SHIP_LENGTH,
* or if some part of the ship would have a coordinate greater than COORD_MAX,
* or if the ship's name is NO_SHIP_NAME,
* the function returns without placing a ship.
*
* Placing a new ship that intersects any previously-placed ships
* sinks the previous ships, removing them from the field.
*/
void fieldPlaceShip(struct field *f, struct ship s);
/*
* Attack!
*
* Drop a shell at given position.
*
* Returns NO_SHIP_NAME if attack misses (does not intersect any ship).
*
* Otherwise returns name of ship hit.
*
* Hitting a ship sinks it, removing it from the field.
*/
char fieldAttack(struct field *f, struct position p);
/*
* Return number of ships in the field.
*/
size_t fieldCountShips(const struct field *f);
examples/2015/hw/6/ships.h
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <stdint.h>
#include <inttypes.h>
#include "ships.h"
int
main(int argc, char **argv)
{
struct field *f; /* where we keep our ships */
int command; /* command char */
struct ship s; /* ship we are placing */
struct position p; /* location to attack */
int sank; /* ship we sank */
if(argc != 1) {
fprintf(stderr, "Usage: %s\n", argv[0]);
return 1;
}
f = fieldCreate();
fieldPlaceShip(f, s);
sank = NO_SHIP_NAME;
break;
case ATTACK:
if(scanf("%" SCNu32 " %" SCNu32 " ", &p.x, &p.y) != 2) {
fprintf(stderr, "Not enough enough args to %c\n", ATTACK);
return 1;
}
/* else */
sank = fieldAttack(f, p);
break;
default:
/* bad command */
fprintf(stderr, "Bad command %c\n", command);
return 1;
break;
}
fieldDestroy(f);
return 0;
}
examples/2015/hw/6/testShips.c
#include <stdlib.h>
#include <assert.h>
#include <string.h>
#include <stdint.h>
#include "ships.h"
struct elt {
struct elt *next; /* pointer to next element in linked list */
struct ship ship; /* ship in this element */
};
static size_t
hash(struct position p)
{
return X_HASH_FACTOR * p.x + Y_HASH_FACTOR * p.y;
}
f = malloc(sizeof(struct field));
assert(f);
f->size = initialSize;
f->occupancy = 0;
return f;
}
struct field *
fieldCreate(void)
{
return fieldCreateInternal(DEFAULT_INITIAL_SIZE);
}
for(e = f->table[i]; e != 0; e = next) {
next = e->next;
free(e);
}
}
free(f->table);
}
void
fieldDestroy(struct field *f)
{
fieldDestroyContents(f);
free(f);
}
/*
* Helper for fieldPlaceShip.
*
* This skips all the sanity-checking in fieldPlaceShip,
* and just performs the hash table insertion.
*/
static void
fieldInsertShip(struct field *f, struct ship s)
{
size_t h; /* hashed coordinates */
struct elt *e; /* new element to insert */
h = hash(s.topLeft) % f->size;
e = malloc(sizeof(struct elt));
assert(e);
e->ship = s;
e->next = f->table[h];
f->table[h] = e;
f->occupancy++;
}
void
fieldPlaceShip(struct field *f, struct ship s)
{
struct field *f2;
struct elt *e;
struct position pos;
size_t i;
free(f2);
}
fieldAttack(f, pos);
}
/*
* Helper for fieldAttack.
*
* If there is a ship with topLeft at given position, return pointer
* to location in hash table that points to it (either table entry
* or next component).
*
* If not, return null.
*/
static struct elt **
fieldShipAt(struct field *f, struct position p)
{
struct elt **prev; /* previous pointer */
/*
* Attack!
*
* Drop a shell at given position.
*
* Returns 0 if attack misses (does not intersect any ship).
*
* Otherwise returns name of ship hit,
* which should be freed by caller when no longer needed.
*
* Hitting a ship sinks it, removing it from the field.
*/
char
fieldAttack(struct field *f, struct position p)
{
struct position p2;
int i;
int direction;
struct elt **prev;
struct elt *freeMe;
char name;
if(prev) {
/* if we sink anybody, it will be this ship */
/* but maybe it doesn't reach */
/* or points in the wrong direction */
if((*prev)->ship.length > i && (*prev)->ship.direction == direction) {
/* got it */
freeMe = *prev;
*prev = freeMe->next;
name = freeMe->ship.name;
free(freeMe);
f->occupancy--;
return name;
} else {
/* didn't get it */
/* maybe try again in other direction */
break;
}
}
}
}
/*
* Return number of ships in the field.
*/
size_t
fieldCountShips(const struct field *f)
{
return f->occupancy;
}
examples/2015/hw/6/ships.c
For this assignment you are to implement a strategy for playing a card game
involving moving cards (represented by uint64_ts) down through a sequence of
n piles. The interface to your strategy is given in the file strategy.h, shown
below:
/*
* Interface for card-playing strategy.
*
* The deal function supplies a new card to the strategy. Each possible card will only be dealt once.
*
* The play function should return a card that has been dealt previously but not yet played.
* If asked for a card when the hand is empty, its behavior is undefined.
*/
#include <stdint.h>
/* play a card from pile k */
Card strategyPlay(Strategy *, int k);
examples/2015/hw/7/strategy.h
Initially, the player has n piles, numbered 1 through n. The strategyDeal
function is called to indicate that a new card has been dealt to pile n. The
strategyPlay function is called to indicate that a card should be moved from
pile k to pile k-1; this function should return the card to move. Cards moved to
pile 0 leave the game and are not used again. Each card is unique: once a card is
dealt, the same card will never be dealt again during the same play of the game.
The choice of when to deal and when to play from each pile is controlled by some
external entity, which at some point will stop and compute the smallest card in
each pile. The goal of the strategy is to make these smallest cards be as large as
possible, giving priority to the highest-numbered piles: given two runs of the
game, the better-scoring one is the one that has the larger smallest card in pile
n, or, if both have the same smallest card in pile n, the one that has the larger
smallest card in pile n − 1, and so forth. A tie would require that both runs end
with the same smallest card in every pile. An empty pile counts as UINT64_MAX
for this purpose (although note that a strategy has no control over which piles
are empty).
Your job is to implement a strategy that produces the best possible result
for any sequence of calls to strategyDeal and strategyPlay. Your strategy
implementation will most likely need to keep track of which cards are available in
each pile, as this information is not provided by the caller. Your strategyPlay
function should only make legal moves: that is, it should only play cards that are
actually present in the appropriate pile. You may assume that strategyPlay is
never called on an empty pile.
Your implementation should consist of a file strategy.c and any support-
ing source and header files that you need other than strategy.h, which we
have provided for you. You should also supply a file Makefile that gener-
ates a program testStrategy when make is called with no arguments, us-
ing your implementation and the testStrategy.c file that you can find in
/c/cs223/Hwk7/sourceFiles/testStrategy.c.
The testStrategy program implements one of four rules for when you can play
from each pile. The arguments to testStrategy are a character indicating which
rule to apply, the number of cards to deal (which can be pretty big), and the
number of piles (which is much more limited, because testStrategy.c tracks
the pile each card is in using a char to save space). The actual cards dealt are
generated deterministically and will be the same in every execution with the
same arguments. The test files in /c/cs223/Hwk7/testFiles give the expected
output when testStrategy is run with the arguments specified in the filename
(after removing the - characters); this will always be the value, in hexadecimal,
of the smallest card in each pile, starting with the top pile.
For example, running the harmonic rule h with 1000 cards and 4 piles (not
counting the 0 pile) gives the output
$ ./testStrategy h 1000 4
5462035faf0d6fa1
501ebb6268d39af3
25732b5fee7c8ad7
301e0f608d124ede
This output would appear in a filename h-1000-4, if this particular combination
of parameters were one of the test cases.
For this assignment, you are to implement an ordered set data type for holding
null-terminated strings. The interface to this data type is given in the file
orderedSet.h, shown below.
/*
* Ordered set data structure.
*/
struct orderedSet *orderedSetCreate(void);
/* Destroy a set */
void orderedSetDestroy(struct orderedSet *);
The test program is a fairly thin wrapper over the implementation that allows
you to call the various functions using one-line commands on standard input.
A command is given as the first character of the line, and the rest of the line
contains the argument to the command if needed. The + and - commands add
or remove an element from the set, respectively, while the p, s, and h commands
print the contents of the set, the size of the set, and a hash of the set (these
commands ignore any argument). The f command removes all elements of the
set that do not contain a particular substring.
Here is a simple input to the program that inserts four strings, filters out the
ones that don’t contain ee, then prints various information about the results.
+feed
+the
+bees
+please
fee
s
h
p
This should produce the output
2
15082778b3db8cb3
bees
feed
There were a lot of ways to do this. For the sample solution, I decided to do
something unusual, and store the set as a hash table. This is not ordered, but
since the only operation that requires the set to be ordered is orderedSetFilter,
which will take Ω(n) time no matter how you implement it, the O(n log n) cost
to call qsort to sort the elements as needed does not add much overhead.
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include "orderedSet.h"
s = malloc(sizeof(*s));
assert(s);
s->n = 0;
s->size = size;
s->table = calloc(s->size, sizeof(char *));
return s;
}
struct orderedSet *
orderedSetCreate(void)
{
return orderedSetCreateInternal(INITIAL_SIZE);
}
/* Destroy a set */
void
orderedSetDestroy(struct orderedSet *s)
{
size_t i;
/* free every stored string, then the table itself */
for(i = 0; i < s->size; i++) {
if(s->table[i]) {
free(s->table[i]);
}
}
free(s->table);
free(s);
}
static size_t
hash(const char *s)
{
size_t h;
return h;
}
/* return a freshly malloc'd copy of s */
static char *
strMalloc(const char *s)
{
char *s2;
s2 = malloc(strlen(s)+1);
assert(s2);
strcpy(s2, s);
return s2;
}
assert(elt);
/* skip over non-empty slots with different values */
for(h = hash(elt) % s->size; s->table[h] && strcmp(s->table[h], elt); h = (h+1) % s->size);
orderedSetInsertInternal(s, strMalloc(elt));
}
/* skip over non-empty slots with different values */
for(h = hash(elt) % s->size; s->table[h] && strcmp(s->table[h], elt); h = (h+1) % s->size);
/* remove and reinsert any elements up to the next hole, in case they wanted to be earlier in the table */
for(h = (h+1) % s->size; s->table[h] ; h = (h+1) % s->size) {
later = s->table[h];
s->table[h] = 0;
s->n--;
orderedSetInsertInternal(s, later);
}
}
}
static int
compare(const void *s1, const void *s2)
{
return strcmp(*((const char **) s1), *((const char **) s2));
}
top = 0;
a[top++] = s->table[h];
}
}
s2 = orderedSetCreate();
free(a);
return s2;
}
examples/2015/hw/8/orderedSet.c
And the Makefile: examples/2015/hw/8/Makefile
For this problem, you are given a rectangular maze consisting of wall squares
(represented by 0) and path squares (represented by 1). Two path squares are
considered to be adjacent if they are at most one square away orthogonally or
diagonally; in chess terms, two path squares are adjacent if a king can move
from one to the other in one turn. The input to your program is a maze in which
the graph consisting of all path squares is connected and contains at most one
cycle, where a cycle is a sequence of distinct squares s1, s2, . . . , sk where each si
is adjacent to si+1 and sk is adjacent to s1. Your job is to write a program maze
that finds this cycle if it exists, and marks all of its squares as cycle squares
(represented by 2).
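As a quick illustration of the adjacency rule, here is a small test function; the
coordinate parameters and the function name are hypothetical, not part of the required
interface.

#include <stdlib.h>   /* for abs() */

/* return nonzero if squares (x1,y1) and (x2,y2) are king-move adjacent */
int
adjacent(int x1, int y1, int x2, int y2)
{
    return abs(x1 - x2) <= 1
        && abs(y1 - y2) <= 1
        && !(x1 == x2 && y1 == y2);
}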
For example, here is a picture of a 200-by-100 maze that contains a small cycle:
and here is the same maze with the cycle highlighted in white:
The input to your program should be taken from stdin, in a restricted version
of raw PGM format, an old image format designed to be particularly easy to
parse. The input file header will be a line that looks like it was generated by the
printf conversion string "P5 %d %d 255\n", where the first int value is the
width of the image in columns and the second is the height of the image in rows;
the same conversion string can be given to scanf to parse this line. Following
the newline will be a long sequence of bytes, each representing one pixel of the
image, with each row following immediately after the previous one. These bytes
will be either 0 or 1 depending on whether that position in the maze is a wall or
a path.
The output to your program should be in the same format, with the difference
that now some of the bytes in the image data may be 2, indicating the cycle.
If there is no cycle, the output should be identical to the input. Your program
is not required to detect or respond in any particular way to input mazes that
violate the format or do not contain a connected graph of path squares, although
you are encouraged to put in reasonable error checking for your own benefit
during testing.
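Here is a minimal sketch of reading and writing this restricted PGM format; it simply
copies the image through unchanged, leaving out the cycle-finding step, and the
variable names are illustrative rather than taken from the sample solution.

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    int width;
    int height;
    size_t count;
    unsigned char *pixels;

    /* parse the header line written by printf("P5 %d %d 255\n", ...) */
    if(scanf("P5 %d %d 255\n", &width, &height) != 2 || width <= 0 || height <= 0) {
        fprintf(stderr, "bad header\n");
        return 1;
    }

    count = (size_t) width * (size_t) height;

    pixels = malloc(count);
    if(pixels == 0) {
        return 1;
    }

    /* one byte per pixel, rows stored consecutively */
    if(fread(pixels, 1, count, stdin) != count) {
        fprintf(stderr, "short read\n");
        free(pixels);
        return 1;
    }

    /* pixels[y * width + x] is 0 for a wall square and 1 for a path square;
     * a real solution would mark the cycle squares with 2 at this point */

    /* write the result back out in the same format */
    printf("P5 %d %d 255\n", width, height);
    fwrite(pixels, 1, count, stdout);

    free(pixels);
    return 0;
}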
For example, the maze depicted above is stored in the file 200-100-4.in.pgm;
the corresponding output is stored in the file 200-100-4.out.pgm. Other sample
inputs and outputs can be found in /c/cs223/Hwk9/testFiles.
This file format is hard to read with the naked eye, even after loading into a text
editor. The script /c/cs223/Hwk9/toPng will generate a PNG file that doubles
the pixel size and rescales the 0, 1, 2 pixel values to more reasonable values for
display. This can be called as /c/cs223/Hwk9/toPng filename.pgm to produce
a new file filename.pgm.png. This works best if filename.pgm is already in a
directory you can write to. PNG files can be displayed using most web browsers
and image manipulation tools.
Submit whatever files you need to build maze (including a Makefile that
generates maze when called with no arguments) using /c/cs223/bin/submit 9.
You can apply the public test script in /c/cs223/Hwk9/test.public to your
submitted files using /c/cs223/bin/testit 9 public.
This uses breadth-first search, which makes the search a bit simpler than depth-
first search but requires some more effort to compute the cycle. The program
also includes code for generating random mazes.
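Here is a minimal sketch of the breadth-first search skeleton over an 8-connected grid,
assuming the flat-array image representation from the sketch above; the parent array
records the BFS tree that cycle reconstruction would walk, though the reconstruction
step itself is omitted. The function and variable names are invented for this example.

#include <stdlib.h>
#include <assert.h>

#define NO_PARENT (-1)

/* breadth-first search from start over all path squares (value 1) */
static void
bfs(int width, int height, const unsigned char *pixels, int start, int *parent)
{
    int *queue;
    int head = 0;
    int tail = 0;

    queue = malloc(sizeof(int) * width * height);
    assert(queue);

    for(int i = 0; i < width * height; i++) {
        parent[i] = NO_PARENT;
    }

    parent[start] = start;      /* mark the root as its own parent */
    queue[tail++] = start;

    while(head < tail) {
        int cur = queue[head++];
        int x = cur % width;
        int y = cur / width;

        /* examine all 8 king-move neighbors */
        for(int dy = -1; dy <= 1; dy++) {
            for(int dx = -1; dx <= 1; dx++) {
                int nx = x + dx;
                int ny = y + dy;

                if((dx == 0 && dy == 0)
                        || nx < 0 || nx >= width
                        || ny < 0 || ny >= height
                        || pixels[ny * width + nx] != 1) {
                    continue;
                }

                if(parent[ny * width + nx] == NO_PARENT) {
                    parent[ny * width + nx] = cur;
                    queue[tail++] = ny * width + nx;
                }
                /* else: a non-tree edge; if the neighbor is not cur's
                 * parent, this edge closes the (at most one) cycle */
            }
        }
    }

    free(queue);
}

The sample solution below keeps a parent position inside each struct square rather than
in a separate array, but the overall shape of the search is the same.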
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <math.h>
#include <limits.h>
struct direction {
signed char x;
signed char y;
};
struct position {
int x;
int y;
};
struct square {
int contents;
struct position parent; /* used by search routine */
};
struct maze {
struct position size; /* size.x = width (columns), size.y = height (rows) */
struct square *a; /* packed array of squares */
};
/* look up a position in a maze */
#define Mref(m, pos) ((m)->a[(pos).y * (m)->size.x + (pos).x])
#define Mget(m, pos) (assert((pos).x >= 0 && (pos).y >= 0 && (pos).x < (m)->size.x && (pos).y < (m)->size.y), Mref(m, pos))
return target->x >= 0 && target->y >= 0 && target->x < m->size.x && target->y < m->size.y;
}
/* free a maze */
void
destroyMaze(struct maze *m)
{
free(m->a);
free(m);
}
m = malloc(sizeof(*m));
assert(m);
return m;
}
void
saveMaze(struct maze *m, FILE *f)
{
struct position i;
return count;
}
struct position
randomPosition(const struct maze *m)
{
struct position r;
return r;
}
/* generate a random connected maze with no cycles */
struct maze *
generateMaze(struct position size)
{
struct maze *m;
struct position r;
struct position i;
size_t countdown; /* how long to run before we get tired of not making progress */
size_t maxCountdown; /* value to reset countdown to when we make progress */
m = malloc(sizeof(struct maze));
assert(m);
m->size = size;
m->a = malloc(sizeof(struct square) * m->size.x * m->size.y);
assert(m->a);
/* reset countdown */
countdown = maxCountdown;
}
}
return m;
}
/* create a cycle by adding one extra PATH square
* that connects two existing squares */
void
mazeAddCycle(struct maze *m)
{
struct position r;
do {
r = randomPosition(m);
} while(Mget(m, r).contents != WALL || countNeighbors(m, r) != 2);
head = tail = 0;
}
/* find a root */
/* we don't care what this is, but it can't be a WALL */
do {
root = randomPosition(m);
} while(Mget(m, root).contents != PATH);
/* push root */
Mref(m, root).parent = root;
queue[tail++] = root;
} while(!eqPosition(ancestor, root));
doneWithSearch:
free(queue);
}
int
main(int argc, char **argv)
{
struct maze *m;
struct position size = { 80, 60 };
int seed;
switch(argc) {
case 1:
/* sample solution for the assignment */
m = loadMaze(stdin);
mazeSearchForCycle(m);
saveMaze(m, stdout);
destroyMaze(m);
break;
case 4:
/* generate a new test image */
/* usage is ./maze width height seed */
/* if seed is negative, use absolute value and don't put in cycle */
size.x = atoi(argv[1]);
size.y = atoi(argv[2]);
seed = atoi(argv[3]);
break;
default:
fprintf(stderr, "Usage %s or %s width height seed\n", argv[0], argv[0]);
return 1;
}
return 0;
}
examples/2015/hw/9/maze.c
And the Makefile.
Hi Jim,
Several of your students for 223 were up late last night in the Zoo
working on their assignments, and they seemed to be getting hung up on
some coding issues. They were pretty frustrated with some standard
language/debugging issues, so I helped them get the type-checker and
Valgrind to stop yelling at them. I noticed some recurring problems and I
thought I'd pass them on to you. They're pretty standard mistakes, and
I've made most of them myself at some point, either in your class or in
Stan's. It occurred to me that there might be more confused people than
were around last night, and they'd probably appreciate it if someone told
them about these sort of things. I'm not trying to intrude on how you
teach the class; I just thought this feedback would be helpful and I
wasn't sure that it would find its way to you otherwise. I'm sure you've
already taught them several of these, and I understand that sometimes
students just don't pay attention. Still, these seem like good points to
hammer down:
people didn't seem to realize that a char* is 4 bytes rather than 1.)
3. I think it would be helpful if you explained why, when using
realloc(), it's a good idea to increase the allocated size
multiplicatively rather than additively. Besides, everyone loves the
"tearing down the hotel" metaphor. :)
4. If they use call-by-reference, they had better make sure that they
keep the same reference. So if they pass in a pointer as an argument to a
function, they shouldn't call malloc() or realloc() on that pointer inside the function.
(Mention the double pointer as another option.) Most people will make
this mistake eventually if no one warns them about it. When I was
learning C, I sort of viewed malloc() and realloc() as magical
memory-increasing functions; that is to say, I didn't think very hard
about the meaning of assigning a pointer to malloc()'s return value. I
suspect some of your students would benefit from having the details
spelled out. (Or spelled out again, if you've already done that.)
5. It's possible to get through a lot (but not all) of the CS major
without learning basic Unix shell syntax, but that's really just wasted
time. Pipes, backgrounding, man, scp, and grep really help even at the
intro level. I realize the purpose of the class isn't to teach Unix, but
in past years I think there was a TA help session on these things. They
don't need to know how to write their own Emacs modes, but the basics
would definitely be helpful.
6. malloc/free -- If Valgrind/gdb reports a problem inside of malloc() or
free(), chances are that the student has *not* discovered a bug in gcc.
(I just heard how one of Zhong's students proved the correctness of the
libraries for his thesis; that's pretty cool.) Explain why you can't
malloc() twice on the same pointer. Explain how with multidimensional
pointers, you must malloc/free each dimension separately. Drill down the
one-to-one correspondence between malloc'ing and free'ing.
7. Null characters: It's not obvious to newbies that some library functions
require them, particularly null-terminated strings. Tell them that
char*'s must be null terminated in order for <string.h>
functions to work.
8. Off-by-one errors: Tell people that when all else fails, take a hard
look at their comparison operators; i.e., make sure that > shouldn't
really be a >=.
9. This is probably another thing for a help session or workshop, but I
feel almost everyone could benefit from basic software engineering
methodology. Stylistic awkwardness I noticed:
--Using a mess of if-then-else's instead of nested control
structures.
--Using while-loops with iterators that get initialized right
before the beginning of the loop and get incremented with each iteration,
when they could be using for-loops.
--Doing the setup work for a loop right before the beginning of
the loop and then at the end of every iteration, instead of at the
beginning of every iteration. Conversely: doing the cleanup work at the
beginning of every iteration and then after the loop has completed.
10. Tell them to use assert(). (Frequently.) When you cover binary
search, using placement of debugging statements in code in order to pin
down an error might be an instructive example.
11. Tell them to use either printf statements or a debugger to debug. I
think they can figure out how to do this on their own, they just need to
be told it's a good idea.
Best,
Jim
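As a concrete illustration of points 3 and 4 above, here is a minimal sketch of a
function that grows a caller's array: it doubles the allocation instead of adding a
constant, and it takes a pointer to the caller's pointer so that the caller still sees
the block after realloc moves it. The names here are made up for the example.

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

/* append value to *a, growing the allocation multiplicatively as needed;
 * a, size, and n are pointers so the caller's copies get updated */
static void
push(int **a, size_t *size, size_t *n, int value)
{
    if(*n >= *size) {
        *size *= 2;                              /* multiplicative growth */
        *a = realloc(*a, sizeof(int) * *size);   /* may move the block */
        assert(*a);
    }
    (*a)[(*n)++] = value;
}

int
main(void)
{
    size_t size = 1;
    size_t n = 0;
    int *a;

    a = malloc(sizeof(int) * size);
    assert(a);

    for(int i = 0; i < 100; i++) {
        push(&a, &size, &n, i);   /* pass &a, not a: push may change it */
    }

    printf("%d\n", a[99]);

    free(a);
    return 0;
}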