
AP Exam

Preparation Book
Volume 2

Answers & Explanations

Preparation for Applied Information Technology Engineer Examination

INFORMATION-TECHNOLOGY PROMOTION AGENCY, JAPAN


● Contents

Volume 2

Answers & Explanations

● Morning Exam
Section 1 Basic Theory ................................................................... 1
Section 2 Computer System ........................................................... 30
Section 3 Technological Elements .................................................. 65
Section 4 Development Technology ............................................... 101
Section 5 Project Management....................................................... 121
Section 6 Service Management ...................................................... 129
Section 7 System Strategy ............................................................. 143
Section 8 Business Strategy ........................................................... 150
Section 9 Corporate and Legal Affairs ............................................ 160

● Afternoon Exam
Section 10 Business Strategy, Information Strategy, Strategy
Establishment and Consulting Techniques ...................... 181
Section 11 Programming (Algorithms) .............................................. 193
Section 12 Strategy Related Issues.................................................. 227
Section 13 Technological Elements .................................................. 246
Section 14 Information Systems Development ................................. 312
Section 15 Management Related Items ........................................... 323

Copyright Notice
All registered trademarks, trademarks, and product names are the property of their respective companies, even where not explicitly indicated.
Morning Exam Section 1 Basic Theory Answers and Explanations

Section 1 Basic Theory

Q1-1 c) Decimal to binary conversion

We have only to calculate 7 ÷ 32 = 0.21875 and then convert the resulting value to a binary fraction. However, as another solution, we can represent the expression as 7 ÷ 32 = 7 ÷ 2^5 = 7 × 2^−5, and thereby obtain the binary fraction by shifting “111” (the bit pattern for 7) 5 bits to the right. In other words, the implied binary point of “111” is located at the rightmost end, so the operation of shifting “111” 5 bits to the right yields the following result.

111. → (shift 5 bits right) → 0.00111

Therefore, the answer is c).


Also, as a different solution based on 7 ÷ 32 = 7 ÷ 2^5 = 7 × 2^−5, we may consider that there are seven of the same value 2^−5 = (0.00001)₂, which leads to the answer (0.00111)₂.
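As a supplementary check (this Python sketch is ours, not part of the book), the repeated-doubling conversion of a fraction to binary reproduces the same bit pattern:

```python
def frac_to_binary(value, places):
    """Convert a fraction in [0, 1) to a binary-fraction string
    by repeated doubling (multiply by 2, take the integer part)."""
    bits = []
    for _ in range(places):
        value *= 2
        bit = int(value)
        bits.append(str(bit))
        value -= bit
    return "0." + "".join(bits)

# 7 / 32 = 7 * 2^-5, i.e. the bit pattern 111 shifted 5 places right
print(frac_to_binary(7 / 32, 5))  # -> 0.00111
```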

Q1-2 d) Relationship between the numbers of digits represented in decimal and binary

The maximum value of a binary number with B digits is 2^B − 1. In order to determine how many digits are needed to represent this value in decimal, we have only to convert it to decimal and count the digits. For example, an 8-digit binary number has a maximum value of (11111111)₂, which is converted to (255)₁₀, so it is 3 digits long. However, the question uses a logarithmic function (log) instead of counting the number of digits, so we have to consider that method. As is often used for determining the number of digits in a binary number, the inequality 10^2 (= 100) < 256 < 10^3 (= 1,000) means that (255)₁₀ is 3 digits long, and log₁₀X determines what power of 10 a certain number X is.
All the options use “≈”, so we only need to determine the approximate number of digits. In other words, we can use 2^B instead of the maximum value 2^B − 1. As a result, we obtain log₁₀2^B = B log₁₀2. This is the approximate number of digits D in decimal, so d) is the correct answer.

Q1-3 d) Methods for representing negative numbers

The methods for obtaining 1’s and 2’s complements are as described below.
• 1’s complement: Subtract the original number from an n-bit binary number with all bits set to
1. The same result can also be derived by inverting all bits of the original number.
• 2’s complement: Subtract the original number from a number that is obtained by adding 1 to
an n-bit binary number with all bits set to 1. The same result can also be derived by adding 1
to a 1’s complement.
First, we consider the 1’s complement. (10)₂ is derived by inverting all bits of (01)₂ (= +1), so blank A is −1. Also, (11)₂ is derived by inverting all bits of (00)₂ (= +0), so blank C is −0.
Next, we consider the 2’s complement. −1 can be derived by inverting all bits of (01)₂ (= +1) and adding 1 as follows: 01 → (invert) → 10 → (+1) → 11. Thus, blank D is −1, and then blank B’s


bit string (10)₂ is interpreted as (11)₂ (= −1) − 1 = −2.


Therefore, d) is the correct answer.

b2 b1 | Sign and absolute value | 1’s complement | 2’s complement
 0  0 |           +0            |       +0       |        0
 0  1 |           +1            |       +1       |       +1
 1  0 |           −0            |       −1       |       −2
 1  1 |           −1            |       −0       |       −1
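As a mechanical check of the table above (a supplementary Python sketch of ours, not part of the book), an n-bit pattern can be interpreted as a 2’s-complement value:

```python
def as_twos_complement(bits, n):
    """Interpret an n-bit pattern as a 2's-complement signed integer:
    if the sign bit is set, subtract 2^n from the unsigned value."""
    return bits - (1 << n) if bits & (1 << (n - 1)) else bits

# The four 2-bit patterns from the table
for pattern in (0b00, 0b01, 0b10, 0b11):
    print(format(pattern, "02b"), "->", as_twos_complement(pattern, 2))
# 00 -> 0, 01 -> 1, 10 -> -2, 11 -> -1
```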

Q1-4 b) Using offset in floating-point representation

First, we need to convert the two hexadecimal numbers into binary numbers in order to interpret
them by using the floating-point representation specified in the question. The low-order 4 digits of
both numbers are all 0s, so only the high-order 4 digits need to be converted. The “bias”
(sometimes called offset) in the question is a method of representing numbers, in which a constant
is added to an actual value in order to represent negative numbers as positive numbers.

High-order 4 digits of 45BF0000 → 0100 0101 1011 1111
Exponent = (69)₁₀ → Actual value is 69 − 64 = 5
High-order 4 digits of 41300000 → 0100 0001 0011 0000
Exponent = (65)₁₀ → Actual value is 65 − 64 = 1

When two floating-point numbers are added or subtracted, the exponent of the smaller one must
be increased to match that of the larger. In this question, the exponent value of the smaller number
41300000 is adjusted to match that of the larger one 45BF0000; as a result, the high-order two
digits of both numbers have the same value 45 prior to the addition operation. Both are positive
numbers, and the exponent field of the number is not changed greatly after the addition, so the
options a) and d) can be disregarded. Furthermore, since the exponent field of the number
41300000 is adjusted, it is necessary to shift and adjust the fraction field of that number. This
means that since leading 0s are additionally inserted at the beginning of the fraction field
00110000, the result of the addition operation does not greatly change the higher bits of the
fraction field for 45BF0000. When looked at from this angle, the option b) is correct. In this way,
we can determine which answer is correct on multiple-choice questions without performing
detailed calculations.
For comparison and verification purposes, we can calculate more accurately in the following
way. In order to change the exponent of 41300000 to 45, the fraction is shifted 4 bits (= 69 – 65) to
the right. As a result, the fraction, which was 0011 0000 before the shift, becomes 0000 0011 (the
low-order 16 bits are all 0s, and as such omitted). This resulting value 0000 0011 is added to the
fraction 1011 1111 of the other number. The result is 1100 0010 = (C2)₁₆, and thus the answer is b).
As supplemental information, when the fraction becomes zero because of an underflow caused by shifting, the resulting error is called a loss of trailing digits.


Q1-5 b) Errors caused by calculation

When we perform a simple subtraction between two values with almost identical absolute values and the same sign, we can observe that the number of significant digits decreases. This is called cancellation of significant digits (or cancellation error). Since the expression √151 − √150 is a subtraction between two values with almost identical absolute values and the same sign, cancellation of significant digits occurs. Therefore, b) is the correct answer. Below is a detailed example of cancellation error and its preventive measures.
When eight significant digits are assumed before the subtraction, √150 = 12.247449, √151 = 12.288206, and √151 − √150 = 0.040757. Thus, the number of significant digits decreases from eight to five. In order to prevent this, we can perform the calculation by rationalizing the numerator as follows:

( 151  150 )( 151  150 )


151 – 150 =
151  150

1
=
151  150
1
=
24.535655

= 0.040757013

a) Truncation error: An error which occurs in engineering calculations when a non-terminating decimal calculation is cut off
c) Loss of trailing digits: An error in arithmetic addition or subtraction between two numbers
with a very large absolute value and a very small absolute value, which causes the smaller
value to be ignored or lost
d) Rounding error: An error caused by rounding off, rounding up, or rounding down numbers
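As a supplementary check (this Python snippet is ours, not part of the exam material), we can reproduce the loss of digits with the eight-significant-digit inputs from the example, and see that the rationalized form keeps the full precision:

```python
# sqrt(151) and sqrt(150) rounded to eight significant digits,
# as in the worked example above
a = 12.288206
b = 12.247449

direct = a - b              # 0.040757: the inputs' trailing digits cancel out
rationalized = 1 / (a + b)  # algebraically equal to sqrt(151) - sqrt(150)

print(round(direct, 9), round(rationalized, 9))
```

The direct subtraction keeps only about five significant digits of the inputs, while the rationalized form retains roughly eight.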

Q1-6 c) Evaluation of significant digits resulting from four basic arithmetic operations

In numerical operations, the significant digits of the operation results are determined by those of
the values used in each operation. In addition and subtraction operations, the digit position (radix
point location) is adjusted to perform the calculation. In other words, the significant digits of the
operation results are determined not only by the significant digits of both values but also by the
radix point locations. Specifically, calculations are performed by matching the radix point
locations and rounding off the result in accordance with the value that has fewer digits after the
radix point. Therefore, a) and b) are incorrect because both options state that the resulting number of significant digits after addition or subtraction becomes six, which is the larger number of significant digits before the operation.
On the other hand, in multiplication and division operations, rounding-off is done on the value
that has the smaller number of significant digits before the operation. For this reason, the resulting
number of significant digits after multiplication or division of two values with six and two
significant digits becomes the smaller number of the significant digits, which is two. Therefore, c)
is the correct answer.


Q1-7 c) Set operations

Fig. 1 shows A ∩ B̄ illustrated as a Venn diagram. In addition, the relative complement of B in A (or the difference of A and B, whose elements are in A but not in B) can be represented as A − B = A ∩ B̄. The relative complement of B (Fig. 3) in A (Fig. 2), A − B, can be represented as A ∩ B̄ by using A and B̄ (Fig. 4), which is the same as Fig. 1. Therefore, c) is the correct answer.
Also, the Venn diagrams for a), b), and d) are all illustrated as shown in Fig. 5.

a) (A − B) ∪ (B − A) ∪ (S − (A ∪ B)) = S − (A ∩ B) (Same as d))
b) (S − A) ∪ (S − B) = S − (A ∩ B) (Distributive law) (Same as d))

[Venn diagrams: Fig. 1 A ∩ B̄; Fig. 2 A; Fig. 3 B; Fig. 4 B̄; Fig. 5 S − (A ∩ B)]


Q1-8 b) Representing sets as Venn diagrams

The shaded area in the question can be represented by the following expression for set operations. Using the distributive law, we can transform the expression and arrive at the correct answer b).

(A ∩ B̄) ∪ (Ā ∩ B)
= ((A ∩ B̄) ∪ Ā) ∩ ((A ∩ B̄) ∪ B)
= ((A ∪ Ā) ∩ (B̄ ∪ Ā)) ∩ ((A ∪ B) ∩ (B̄ ∪ B))
= (B̄ ∪ Ā) ∩ (A ∪ B)    (From A ∪ Ā = S, B ∪ B̄ = S)
= (A ∪ B) ∩ (Ā ∪ B̄)

A B A B

A B A B

(A  B)  (A  B)

A B

a) A B

A B

c) ( A  B )  ( A  B )

This is a negation of the correct answer b). Operations that are used to obtain negations of
each other are known as complement operations.

A B

d) S  ( A  B )

A B


Q1-9 b) Sets

We can use a Venn diagram to accurately figure out the number of students in each group.

[Venn diagram: 100 students in total; Spanish: 26 students; Chinese: 38 students; French: 36 students; not studying any of the languages: 23 students; a: Spanish and Chinese only; b: Chinese and French only; c: French and Spanish only; d: all three languages]

Of the 100 students, 23 students are not studying any of the three languages, and the remaining 77 students are studying at least one language. There are also students who are studying two or three languages. On the other hand, in the total of 100 students derived by simply adding the numbers of students studying each of the three languages (26 studying Spanish + 38 studying Chinese + 36 studying French), students within sets a, b, and c in the above Venn diagram are counted twice and students in set d are counted three times. For this reason, adding the number of overlapping students (a + b + c + d × 2) to the 77 students who are studying at least one of the languages results in 100 students (equation (1)).

77 + (a + b + c + d × 2) = 100 ...... (1)

Furthermore, since the numbers of students studying two languages are 7 for Spanish and Chinese (= a + d), 10 for Chinese and French (= b + d), and 8 for French and Spanish (= c + d), the following equation (2) can be derived.

    a + d = 7
    b + d = 10
+)  c + d = 8
――――――――――――――
a + b + c + d × 3 = 25 ..... (2)

Equation (1) can be transformed as follows: a + b + c + d × 2 = 23. Then, subtracting the transformed equation from equation (2) yields d = 2. Therefore, the number of students studying all three languages is 2, and the correct answer is the option b).
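The two equations can be combined in a few lines (a supplementary Python sketch of ours):

```python
# From equation (1): a + b + c + 2d = (26 + 38 + 36) - 77 = 23,
# the overlap counted beyond the 77 students studying at least one language.
abc_plus_2d = (26 + 38 + 36) - 77

# From equation (2): a + b + c + 3d = 7 + 10 + 8 = 25,
# the sum of the pairwise totals, each of which includes d once.
abc_plus_3d = 7 + 10 + 8

# Subtracting eq. (1) from eq. (2) leaves d, the all-three-languages count.
d = abc_plus_3d - abc_plus_2d
print(d)  # -> 2
```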

Q1-10 b) Logical expressions of functions shown in truth tables

In order to quickly find an answer to this type of question, it is better to check all the options
one by one. We can write out truth tables for each option. However, we can also note the fact that
all of the options contain a logical product (True only when both are True) within a pair of
parentheses, and furthermore two pairs of parentheses are joined by using a logical sum (True
when at least one is True). And so, we can try to assign appropriate values to the variables x, y, and
z, and compare the results with those of f (x, y, z) of the truth table.


Since the options a) through c) all start with (x ∧ y), we can first try the third row of the truth table, x = T, y = F, and z = T, and obtain x ∧ y = F.

a) (y ∧ z) = F, so the entire expression is evaluated as F. This does not match the function f of the truth table.
b) (ȳ ∧ z) = T, so the entire expression is evaluated as T. For now, this matches the function f of the truth table.
c) (y ∧ z̄) = F, so the entire expression is evaluated as F. This does not match the function f of the truth table.
d) For this option only, we can try the second row, x = T, y = T, and z = F, so that the first half (x ∧ ȳ) can yield F. As a result, x ∧ ȳ = F and ȳ ∧ z = F, so the entire expression is evaluated as F. This does not match the function f of the truth table.

Hence, we can find that the answer is b). However, just to be sure, it is better to check by trying
other values and then to determine the final answer.
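The spot-check strategy above generalizes: a short script can enumerate every row of the truth table for a candidate expression and compare it with f. Here the expression for option b) is written as (x ∧ y) ∨ (¬y ∧ z) purely to illustrate the technique; the actual answer group appears only in the question booklet, so treat this expression as an assumption:

```python
from itertools import product

def truth_table(f):
    """All (x, y, z, f(x, y, z)) rows of a three-variable Boolean function."""
    return [(x, y, z, f(x, y, z)) for x, y, z in product([False, True], repeat=3)]

# Illustrative option in the style of the answer group: two logical
# products joined by a logical sum (the exact expression is assumed here).
option_b = lambda x, y, z: (x and y) or ((not y) and z)

for row in truth_table(option_b):
    print(row)
```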

Q1-11 c) Karnaugh map and equivalent logical expressions

Simplification using a Karnaugh map can be used to directly determine equivalent logical expressions. First, take note of the four cells in the center of the map containing 1s. What these cells share in common is that, regardless of the values of A or C, when the values of both B and D are 1, the value of these cells is 1. We can then determine B·D as part of the logical expression. Next, focusing attention on the two 1s that appear in the first row, regardless of the value of C, the result is 1 when the values of A, B, and D are all 0s, so by taking the negation, the part Ā·B̄·D̄ of the logical expression can be determined.

Both of these cases are parts of the Karnaugh map in the question, so the overall expression is the sum of the two logical expressions. Thus, we can determine that the equivalent logical expression is c) Ā·B̄·D̄ + B·D.

A B  D

CD
00 01 11 10
AB
00 1 1
BD
01 1 1
11 1 1
10

Q1-12 d) Calculation of probability using simple Markov process

Generally, an occurrence of an event affects the state that follows. The current state, likewise, is
dependent on previous states, so the probability of a given state occurring in the future is
dependent on the probability of a particular state happening in the past. A Markov process is one
in which we can consider a discrete number of past states when considering the probability of a
given future state occurring based on past state occurrence probabilities. A simple Markov process


is the simplest form of Markov process, in which only the immediately preceding (current) state
needs to be considered. In other words, in a simple Markov process, when considering the
probability of a certain kind of weather on a given day, only the weather of the preceding day need
be considered. There is no need to be confused by the unfamiliar expression “simple Markov
process.” Instead, the question can be considered simply.
There are three patterns in which “the weather two days after a given rainy day is sunny.” The
sum of the probabilities of each pattern is the probability asked for by the question, and thus the
answer is d) 33%.

• Rainy → Rainy → Clear ....... 0.2 × 0.3 = 0.06
• Rainy → Clear → Clear ........ 0.3 × 0.4 = 0.12
• Rainy → Cloudy → Clear ..... 0.5 × 0.3 = 0.15
0.06 + 0.12 + 0.15 = 0.33
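The path-by-path sum can be written out as a supplementary check (a Python sketch of ours, using the transition probabilities listed above):

```python
# Transition probabilities for each two-day path from a rainy day to a
# clear day, as enumerated in the explanation above.
paths = [
    (0.2, 0.3),  # Rainy -> Rainy  -> Clear
    (0.3, 0.4),  # Rainy -> Clear  -> Clear
    (0.5, 0.3),  # Rainy -> Cloudy -> Clear
]
p_clear_in_two_days = sum(p1 * p2 for p1, p2 in paths)
print(round(p_clear_in_two_days, 2))  # -> 0.33
```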

Q1-13 b) Calculation of conditional probability

The probability of events that caused a certain result can be determined based on the actual
events that have occurred. Such probability is called the probability of causation, or posterior
probability.
Generally, the probability that it was die A that was rolled, based on the fact that the resulting
number was 1, is called conditional probability. We must not determine “the probability that when
one die is taken from the bag, it is die A.”


The probability of drawing die A from the bag containing two dice is 1/2, the same probability as for drawing die B. The probability of die A being drawn and rolling the number 1 is 1/2 × 3/10 = 3/20. The probability of die B being drawn and rolling the number 1 is 1/2 × 3/5 = 3/10. Thus, the probability of the number 1 being rolled is 3/20 + 3/10 = 9/20. The probability, then, that “the die which rolled the number 1” was die A is (3/20) ÷ (9/20) = 3/20 × 20/9 = 1/3, which is b).
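The fraction arithmetic can be reproduced exactly with Python’s fractions module (supplementary to the explanation above, not part of the book):

```python
from fractions import Fraction

# P(A) = P(B) = 1/2; die A rolls a 1 with probability 3/10, die B with 3/5.
p_a_and_one = Fraction(1, 2) * Fraction(3, 10)   # 3/20
p_b_and_one = Fraction(1, 2) * Fraction(3, 5)    # 3/10
p_one = p_a_and_one + p_b_and_one                # 9/20

# Bayes' theorem: P(A | rolled 1) = P(A and 1) / P(1)
p_a_given_one = p_a_and_one / p_one
print(p_a_given_one)  # -> 1/3
```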

Q1-14 c) Product percentage rejected using normal distribution of weight

Given a normal distribution with an average weight of 5.2kg and a standard deviation of 0.1kg
for products, products which weigh less than 5.0kg are rejected. The difference between the
rejected products and an average product is 5.2 – 5.0 = 0.2kg. Since the standard deviation is 0.1kg,


the difference 0.2 = 2 × 0.1 = 2 × the standard deviation. This means rejected products are double the standard deviation away from the average. Let this distance be u. The normal distribution table indicates that when u = 2.0, P = 0.023. Therefore, 2.3% of products are rejected. Thus, c) is the correct answer. The standard normal distribution table shows what percentage of data (P) has values greater than “the average + u × the standard deviation”. Normal distributions are symmetrical on either side of the average, so the percentage of data (P) that has values smaller than “the average − u × the standard deviation” is identical. Thus, if products weighing less than 5.0 kg and products weighing more than 5.4 kg are both rejected, the percentage will be 2.3 × 2 = 4.6%.

(Reference)
Given that the probability variable x follows a normal distribution with average m and standard deviation σ, the value u in the standard normal distribution can be determined using the formula below.

u = (x − m) / σ

Substituting the product weight, average, and standard deviation into this formula produces:

u = |5.0 − 5.2| ÷ 0.1 = 0.2 ÷ 0.1 = 2.0

The normal distribution table indicates that when u = 2.0, P = 0.023, so the probability of being rejected is 2.3%.

Q1-15 a) Rewriting of calculation formulas to minimize number of multiplication operations


If X^n calculations are performed individually, the same calculations will appear repeatedly. For example, in order to calculate X^n and X^(n−1), X^(n−1) will appear in both cases. Making optimal use of this reappearance to take advantage of the preceding calculation result of the value to the (n − 1)-th power, the calculation formula can be rewritten as shown below.

Y = X^5 + aX^4 + bX^3 + cX^2 + dX + e
  = X × (X^4 + aX^3 + bX^2 + cX + d) + e
  = X × (X × (X^3 + aX^2 + bX + c) + d) + e
  = X × (X × (X × (X^2 + aX + b) + c) + d) + e
  = X × (X × (X × (X × (X + a) + b) + c) + d) + e

As a result, the number of multiplication operations after rewriting is 4, meaning that the correct
answer is a).
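The rewritten form is Horner’s method. As a supplementary sketch (Python, ours, with illustrative coefficient values), it evaluates the polynomial with one multiplication per loop step:

```python
def horner(x, coeffs):
    """Evaluate a polynomial by Horner's method.
    coeffs are listed from the highest-order term down;
    each loop iteration costs exactly one multiplication."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# Y = X^5 + aX^4 + bX^3 + cX^2 + dX + e with illustrative values
a, b, c, d, e = 1, 2, 3, 4, 5
x = 2
y = horner(x, [1, a, b, c, d, e])
print(y)  # -> 89, same as x**5 + a*x**4 + b*x**3 + c*x**2 + d*x + e
```

Because the leading coefficient is 1, the innermost multiplication can start directly from (X + a), which is why the explanation counts only four multiplications.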

Q1-16 d) Average response time in M/M/1 queueing model

For an M/M/1 queueing model, when the average arrival rate λ, the average service rate μ, and the average service time Ts = 1/μ are given, the following equations hold for these parameters.

Utilization rate ρ = λ/μ = λ × Ts



Average waiting time excluding service time Tw = ρ/(1 − ρ) × Ts

Average response time including service time Tq = Tw + Ts = 1/(1 − ρ) × Ts

There is an average of six processing requests per one minute, so:

Average arrival rate λ = 6 ÷ 60 = 0.1 (processing requests per second)

The average processing time is six seconds, so the utilization rate ρ is:

ρ = λ × Ts = 0.1 × 6 = 0.6

This question asks for the average response time, so:

Tq = 1/(1 − ρ) × Ts = 1/(1 − 0.6) × 6 = 15 (seconds)

Therefore, d) is the correct answer.
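The arithmetic can be wrapped in a small helper (a supplementary Python sketch of ours, implementing the Tq formula above):

```python
def mm1_response_time(arrival_rate, service_time):
    """Average response time Tq = Ts / (1 - rho) for an M/M/1 queue."""
    rho = arrival_rate * service_time   # utilization
    if rho >= 1:
        raise ValueError("queue is unstable (rho >= 1)")
    return service_time / (1 - rho)

# Six requests per minute, six seconds of processing per request
print(mm1_response_time(6 / 60, 6))  # about 15 seconds
```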

Q1-17 d) Formula for determining average waiting time in M/M/1 queueing model

The average waiting time (not including the average service time of a server) for an M/M/1 queueing model can be determined with the following formula.

Average waiting time = ρ/(1 − ρ) × Ts

After the two branches are merged, the resulting branch will only have one ATM, so the utilization rate of the ATM after the merger will be 2ρ, which is double of what it was before. The average waiting time after the merger can be determined by replacing ρ with 2ρ in this formula. Thus, the correct answer is d).

Q1-18 d) Shortest paths as shown in diagrams

Questions asking to determine the number of shortest paths can be regarded as questions on
permutation and combination. The shortest path between P and R consists of traversing two
horizontal segments and two vertical segments. The order that these four segments are combined
determines the path, so the number of combinations is the number of shortest paths. This simply
means which two vertical or horizontal segments (of the four segments) are chosen, so the number
of combinations can be calculated as combinations of two chosen from the four.

P→R: ₄C₂ = (4 × 3) / (2 × 1) = 6

In the same way, the number of shortest paths from R to Q is decided by what order two vertical
and three horizontal paths are traversed. This can be calculated as combinations of two chosen
from the five.

R→Q: ₅C₂ = (5 × 4) / (2 × 1) = 10

In order to go from P to Q, point R must be passed through. Therefore, the number of shortest
paths can be determined by multiplying the results for P→R and R→Q (independently


determined), resulting in 6 × 10 = 60 paths, or answer d).
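Python’s standard library can confirm the combination counts (a supplementary check, not part of the exam solution):

```python
from math import comb

# P -> R: choose which 2 of the 4 segments are vertical; R -> Q: 2 of 5.
paths_p_to_r = comb(4, 2)   # 6
paths_r_to_q = comb(5, 2)   # 10

# Every shortest path P -> Q passes through R, so the counts multiply.
print(paths_p_to_r * paths_r_to_q)  # -> 60
```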

Q1-19 a) Calculating the average number of bits using the indicated notation

The average number of bits can be determined by multiplying the number of bits for encoded
numeric characters by the probability of occurrence for each number and then calculating the sum
of each result.

1  0.4 + 2  0.19 + 4  0.1 + (5  0.05)  5 + (6  0.03)  2


= 0.4 + 0.38 + 0.4 + 1.25 + 0.36
= 2.79 ≈ 2.8 (bits)

Therefore, a) is the correct answer.

Q1-20 c) Bit length necessary for encoding

For the original message to be uniquely decodable from the encoded bit string, c) or d) must be
chosen. For example, in the case of a), 001 has two interpretations: AAB and CB. In the case of b),
0110 has two interpretations: ADA and BC.
Next, considering that the probabilities of occurrence for A, B, C, and D are 50%, 30%, 10%,
and 10%, respectively, the average bit string lengths for c) and d) can be determined as shown
below.

c): 1  0.5 + 2  0.3 + 3  0.1 + 3  0.1 = 1.7


d): 2 0.5 + 2  0.3 + 2  0.1 + 2  0.1 = 2

As this shows, c) is shorter.

Q1-21 b) State transition diagram of finite automaton

An automaton is a mathematical model of a system which is affected not only by input values
but also by the preceding state. Further, a finite automaton is one with a finite state, which uses the
current state and input to determine the output and next state and operate based on those. A finite
automaton can be represented with a state transition diagram such as the one in the question. A character string accepted by this automaton starts at the initial state shown in the diagram and, depending on each input value 1 or 0, changes states, ending at the accepting state. It is important to note that even if the automaton reaches the accepting state in the middle of a character string, this does not mean that the character string is accepted. Below is a trace of the state changes for each character string given in the answer group, with the states shown in the diagram represented symbolically (initial state S, accepting state E). The only one that ends at E is b).


a) input: 0 1 0 1 1
   states: S → a → b → E → c → c

b) input: 0 1 1 1 0
   states: S → a → b → b → b → E

c) input: 1 0 1 1 1
   states: S → S → a → b → b → b

d) input: 1 1 1 0 0
   states: S → S → S → S → a → S

Q1-22 a) BNF and bit strings

There are a few symbols available in BNF (Backus-Naur Form) notation: “::=” which means
“the left side of the equation is defined as the right side of the equation,” “|” which represents
“OR,” and “< >” in which defined elements are enclosed. These can be used for even more
complex syntax.
The defined content of the BNF notation given in the question can be considered as follows: <S> is defined as “01” or “0<S>1”, and when “0<S>1” is applied, the definition of the <S> part is recursive. Therefore, the following patterns are allowed.

01   0<01>1   00<01>11   000<01>111   …

In this iterative pattern, the left and right sides always consist of the same number of 0s and 1s, respectively. Therefore, a) is the correct answer.

Q1-23 a) RPN (Reverse Polish Notation)

Reverse Polish Notation (RPN) is the arithmetic expression representation method used inside
computers. RPN is used when compiling source programs written in Fortran, COBOL, or the like,
because the priority of operators needs to be considered when performing syntax analysis. With
this representation method, addition, subtraction, multiplication, and division operators are placed
after the variables they apply to, and parentheses are eliminated. Computers perform calculations
by starting from the leftmost part of the expression representation. When performing this
calculation, computers use stacks.
Below is the process used to convert the expression in the question into a conventional
expression. In consideration of two variables just before the first operator, we can start scanning
the expression from left to right as shown below.
(1) AB  CA  BA  

The first element is the variable A. This alone is insufficient to determine what action to
perform, so the next element is considered.
(2) AB  CA  BA  

This is also a variable. The operator that follows it acts on A and B.
(3) AB  CA  BA  

The next element is the “–” operator, so the conventional notation of its operation on the

12
Morning Exam Section 1 Basic Theory Answers and Explanations

variables A and B is “A – B ”.
(4) AB  CA  BA  
↑↑ ↑
Next come the variables C and A, followed by the “–” operator, so, again in conventional
notation, this indicates “C – A ”.
(5) AB  CA  BA  

Next comes the “  ” operator, so “A – B ” is multiplied by “C – A ”. That is, the expression
becomes “(A – B)  (C – A)”.
(6) AB  CA  BA  
↑↑ ↑
Next come the variables B and A, and operator “  ”, thus “B  A ”. Hence, the two values are
“(A – B)  (C – A)” and “B  A ”.
(7) AB  CA  BA  

Next comes the “–” operator, so “(A – B)  (C – A)” and “B  A” are operated on by “–”. Thus,
the original expression is “(A – B)  (C – A) – B  A”.

Substituting A = 3, B = 6, and C = 7, the expression yields (3 − 6) × (7 − 3) − 6 ÷ 3 = (−3) × 4 − 2 = −12 − 2 = −14. The correct answer is a).
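The stack-based evaluation that computers perform on RPN can be sketched as follows (a supplementary Python illustration, ours):

```python
def eval_rpn(tokens, env):
    """Evaluate an RPN token list with a stack, as a computer would:
    push variables, and on an operator pop two operands and push the result."""
    ops = {"-": lambda p, q: p - q,
           "*": lambda p, q: p * q,
           "/": lambda p, q: p / q,
           "+": lambda p, q: p + q}
    stack = []
    for t in tokens:
        if t in ops:
            q = stack.pop()   # right operand is on top of the stack
            p = stack.pop()
            stack.append(ops[t](p, q))
        else:
            stack.append(env[t])
    return stack.pop()

# A B - C A - * B A / -  with A = 3, B = 6, C = 7
tokens = ["A", "B", "-", "C", "A", "-", "*", "B", "A", "/", "-"]
print(eval_rpn(tokens, {"A": 3, "B": 6, "C": 7}))  # -> -14.0
```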

Q1-24 a) Correct combinations of search methods and execution time orders

Orders indicate the computational complexity of algorithms, and can be determined from the
term with the highest order in the expression used to determine the number of comparisons. The
number of comparisons for each search method can be considered as described below.

• Binary search: When the items to be searched have already been sorted by key values, this
method is used to progressively divide the search range in half in order to find the desired
data. If the item being searched for exists within a set of n items, when the number of data
items in the search scope is reduced to a single item, the data has been found. In reality,
remainders are discarded when dividing by two, so data can be considered found when the
number is one or greater, but less than two. If the number of comparisons is k, then 1 ≤ n /2k < 2.

1 ≤ n / 2k, so 2k ≤ n. Take the base 2 logarithm of both sides:


k
log22 ≤ log2n, and then transform the left side:
k
log22 = k log22 = k  1 = k, and therefore k ≤ log2n.
k k k 1
Likewise, n / 2 < 2 is n < 2  2 and then n < 2 + . Take the base 2 logarithm of both sides:
k 1
log2n < log22 + = (k + 1)  log22 = k + 1

Based on the above, the number of comparisons can be represented as k ≤ log2n < k + 1. The
average number of comparisons is an integer value which satisfies this inequality. The order of
a binary search, then, is log2n.

• Linear search: In this method, comparisons are performed in order from the edge of the search


range. When the search key exists within a set of n data items, the minimum number of
comparisons is 1, and the maximum is n – 1. Based on the average of the minimum and
maximum numbers, the average number of comparisons is n / 2 . The order, thus, is n.

• Hash search: In this method, a certain calculation (hash function) is performed on keys to
determine their values (hash values). This value is then used as the data storage location. With
this method, sometimes differing data may result in the same hash value. Considering such
cases, position information such as pointers is used to link data with the same hash values to
avoid overlapping data positions. Because of this, when collisions occur, searching takes
longer, but since the question states that “the probability of hash value collision (sharing of
identical values) is negligible,” for this question, any data can be searched for without
occurrence of data collisions. With this method, the storage position can be determined
regardless of the number of data items, so the order is 1.
Therefore, a) is the correct answer.

Q1-25 c) Audio sampling technologies

The PCM (Pulse Code Modulation) transmission system converts analog signals into pulse
signals and transmits them. The method samples analog signals at regular intervals, and transforms
those values into digital data to transmit the data. The number of samples is the number of digital
data transferred, so the volume of data transmitted per second, that is, the transfer rate (bits per
second) is equal to the length of each individual digital data element multiplied by the number of
data items sampled per second. This number of data elements is the number of samples per
second. In this question, the code length is 8 bits, so the number of samples is 64,000 bits ÷ 8 bits
= 8,000 samples. The question asks for the sampling interval, so, as determined by the formula
below, c) is the correct answer.

1 second ÷ 8,000 times = 1/8 × 1/1,000 = 125/1,000 × 1/1,000 = 125/1,000,000 second
= 125 microseconds
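The same arithmetic, using the values stated in the question (64,000 bit/s transfer rate, 8-bit code length), can be written as a small sketch:

```python
# Values taken from the question: a 64,000 bit/s transfer rate and an 8-bit code length.
transfer_rate_bps = 64_000
code_length_bits = 8

samples_per_second = transfer_rate_bps // code_length_bits  # 64,000 / 8 = 8,000 samples
sampling_interval_us = 1_000_000 / samples_per_second       # 1 s = 1,000,000 microseconds
```

This yields 8,000 samples per second and a 125-microsecond sampling interval, matching answer c).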

Q1-26 b) Appending parity bit to 7-bit data

Hexadecimal character codes 30, 3F, and 7A are each converted into binary form, and a parity
bit is appended to the start of the character code. An even parity bit is appended, so the first bit is
decided so that the number of “1s” can be even.

(30)16 = (0011 0000)2→(0011 0000)2 ...... The number of 1s is two, so the number is left as-is.
(3F)16 = (0011 1111)2→(0011 1111)2 ...... The number of 1s is six, so the number is left as-is.
(7A)16 = (0111 1010)2→(1111 1010)2 ...... The number of 1s is five, so the first bit is set to 1,
making it (FA)16.
Therefore, b) is the correct answer.
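The even-parity rule above can be sketched in Python; the 7-bit code is taken as an integer, and the parity bit is prepended as the top (8th) bit, as in the question:

```python
def with_even_parity(code7):
    """Prepend an even-parity bit to a 7-bit character code (given as an int)."""
    ones = bin(code7).count("1")     # count of 1 bits in the 7-bit code
    parity = ones % 2                # 1 only if the count of 1s is odd
    return (parity << 7) | code7     # parity bit becomes the most significant bit

codes = [with_even_parity(c) for c in (0x30, 0x3F, 0x7A)]
```

Only (7A)16, with five 1 bits, gains a leading 1 and becomes (FA)16; the other two codes are unchanged.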


Q1-27 a) Hamming code

Given Hamming code 1110011, the information bits and redundancy bits are as follows:
X1 = 1, X2 = 1, X3 = 1, X4 = 0, P1 = 1, P2 = 1, P3 = 0
This is applied to the provided expressions.

X1 ⊕ X3 ⊕ X4 ⊕ P1 = 1 ⊕ 1 ⊕ 0 ⊕ 1 = 1
X1 ⊕ X2 ⊕ X4 ⊕ P2 = 1 ⊕ 1 ⊕ 0 ⊕ 1 = 1
X1 ⊕ X2 ⊕ X3 ⊕ P3 = 1 ⊕ 1 ⊕ 1 ⊕ 0 = 1

If there are no errors, the result of each expression is 0. If there is an error bit, the result of the
expression is 1. In this case, each result indicates that there is an error bit. The only value included
in all three expressions is X1, so the error is in X1. As a result of correcting this, the correct
Hamming code is 0110011, so a) is the correct answer.
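The three check expressions and the correction step can be sketched as follows; the bit values are taken from the explanation above, and the correction rule (flip X1 when all three checks fail) is as stated:

```python
def check_hamming(bits):
    """bits = [X1, X2, X3, X4, P1, P2, P3]; returns the three check results."""
    x1, x2, x3, x4, p1, p2, p3 = bits
    c1 = x1 ^ x3 ^ x4 ^ p1   # ^ is exclusive OR in Python
    c2 = x1 ^ x2 ^ x4 ^ p2
    c3 = x1 ^ x2 ^ x3 ^ p3
    return (c1, c2, c3)

def correct_x1_if_needed(bits):
    """If all three checks are 1, the bit common to all expressions (X1) is in error."""
    bits = list(bits)
    if check_hamming(bits) == (1, 1, 1):
        bits[0] ^= 1         # flip X1
    return bits

received = [1, 1, 1, 0, 1, 1, 0]       # X1..X4, P1..P3 as listed above
corrected = correct_x1_if_needed(received)
```

All three checks evaluate to 1, so X1 is flipped from 1 to 0, giving the corrected information bits 0, 1, 1, 0.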

Q1-28 a) Feedback control characteristics

Feedback control refers to the use of a portion of system output values, or deviated values, as
system input values, using those values to control system output values. Normally, control is
performed by comparing target values and current values, and attempting to minimize the
difference between the two. Therefore, as stated in a), even if the item being controlled changes
for some reason, an independent equilibrium function ensures that accurate control is possible.

b) The effect of noise is minimized by feedback control.


c) Target values can be changed.
d) This is an explanation of feed forward control using fuzzy control.

Q1-29 b) Insertion into a queue

Queues are FIFO (First-In First-Out) data structures, with values removed from the front of the
queue, and new values inserted behind the last element in the queue.
As the figure shows, the first element in the queue is indicated by a start pointer, and the queue
can be traversed from element to element starting from this pointer. When an element is removed
from a queue implemented in this way, it is evident that the element pointed to by the start pointer
is the element to be removed. However, to insert an item to the end of the queue, we need to
traverse the queue by following the pointers from the start of the queue to reach the final element.
As the number of elements in the queue increases, the effort needed to find the end of the queue
increases accordingly.
In a) and c), in order to find the end of the queue, the queue must be traversed by following
pointers from the start of the queue. This means that the effort involved in finding the end of the
queue is identical to that in the queue in the figure. In b) and d), the end pointer indicates the final
element in the queue, so little effort is required in determining the position where the next queue
element is to be inserted. When there are many elements, with a) and c), the effort involved in
determining the point where a new queue item can be inserted increases in proportion to the
number of elements. With b) and d), the position is determined with the end pointer, regardless of
the number of elements. Thus, b) and d) require less effort to determine the queue insertion


position than a) or c).


Next, in comparison of b) and d), since the effort involved in finding the insertion position is
identical for both, the effort involved in updating pointers when inserting an element is considered.
As Fig. 1 shows, for b), after (1) the element pointed to by the end pointer is updated to point to
the new element, (2) the end pointer itself is updated to point to the inserted element. As Fig. 2
shows, for d), after (1) the element pointed to by the end pointer is updated to point to the new
element, and (2) the added element is set to point to the preceding element, (3) the end pointer
itself is updated to point to the inserted element. For these reasons, the amount of effort involved
in pointer updating is greater for d) than for b), and therefore the approach which requires the least
effort for an insertion of an element is b).

Fig. 1 Insertion into queue b): (1) the element pointed to by the end pointer is set to point
to the new element; (2) the end pointer is updated to point to the new element.

Fig. 2 Insertion into queue d): (1) the last element is set to point to the new element; (2) the
new element is set to point back to the preceding element; (3) the end pointer is updated to
point to the new element.
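A minimal sketch of approach b), a singly linked queue that keeps both a start pointer and an end pointer, shows why insertion takes constant effort regardless of queue length:

```python
class Node:
    """One queue element with a pointer to the next element."""
    def __init__(self, value):
        self.value = value
        self.next = None

class Queue:
    """Singly linked queue holding both a start pointer and an end pointer."""
    def __init__(self):
        self.start = None    # points to the first element
        self.end = None      # points to the last element

    def enqueue(self, value):
        node = Node(value)
        if self.end is None:         # empty queue: new node is also the first
            self.start = node
        else:
            self.end.next = node     # (1) current last element points to the new one
        self.end = node              # (2) end pointer moves to the new element

    def dequeue(self):
        node = self.start            # start pointer identifies the element to remove
        self.start = node.next
        if self.start is None:       # queue became empty
            self.end = None
        return node.value

q = Queue()
for v in ["a", "b", "c"]:
    q.enqueue(v)                     # O(1) each; no traversal to find the end
removed = [q.dequeue(), q.dequeue()]
```

Without the end pointer (approaches a and c), enqueue would need to follow next pointers from the start, in time proportional to the number of elements.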

Q1-30 b) Data manipulation with queues and stacks

Stacks are LIFO (Last-In First-Out) data structures, and queues are FIFO (First-In First-Out)
data structures. In consideration of this, the procedure in the question can be executed in order as
shown below.

(1) push(a): Data a is inserted into the stack.


(2) push(b): Next, data b is inserted into the stack. The content of the stack is now ab.
(3) enq(pop()): Data is removed from the stack (pop()) and inserted into the queue. The data
removed from the stack is the last data that was inserted into the stack, which is b.
Therefore, the queue contains b. The stack now contains only a.
(4) enq(c): Next, data c is inserted into the queue. The content of the queue is now bc.
(5) push(d): Data d is inserted into the stack. The content of the stack is now ad.
(6) push(deq()): Data is removed from the queue (deq()) and inserted into the stack. The data
removed from the queue is the data that was inserted first, which is b. The stack now
contains adb, and the queue contains only c.
(7) x←pop(): Data is removed from the stack and assigned to x. Here, the data that is removed
from the stack is the data which was inserted most recently, which is b.


Operation:             push(a) | push(b) | enq(pop()) | enq(c) | push(d) | push(deq()) | x ← pop()
Stack (bottom to top):   a     |  a b    |    a       |   a    |  a d    |   a d b     |   a d
Queue (front to rear):   –     |   –     |    b       |  b c   |  b c    |    c        |    c
Thus, the data assigned to the variable x is b) “b”.
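The whole trace can be reproduced with a list used as a stack and collections.deque used as a queue:

```python
from collections import deque

stack, queue = [], deque()
push, pop = stack.append, stack.pop        # LIFO operations on the stack
enq, deq = queue.append, queue.popleft     # FIFO operations on the queue

push("a")
push("b")
enq(pop())      # "b" moves from the stack to the queue
enq("c")
push("d")
push(deq())     # "b" moves from the queue back onto the stack
x = pop()       # the most recently pushed value, "b"
```

At the end, x holds "b", the stack holds a, d and the queue holds only c, matching the table above.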

Q1-31 c) List changes for linked cells

In order to change list [Tokyo, Shinagawa, Nagoya, Shin-Osaka] to [Tokyo, Shin-Yokohama,
Nagoya, Shin-Osaka], the element pointed to after Tokyo must be changed from Shinagawa to
Shin-Yokohama, and then the element pointed to after Shin-Yokohama to Nagoya. Looking at the
question, it is evident that the value of the second row of the table that represents the array
indicates the row number of the next element in the list. Elements such as these which point to the
next data location are called pointers.
The operation to make the pointer of A(1, 1) = “Tokyo”, A(1, 2), point to the 5th row,
Shin-Yokohama, is 5→A(1, 2), but if this is performed first, the value of A(1, 2) = 2, which points
to the data after “Tokyo”, will be erased. Thus, the operation to change A(5, 2), the pointer of
A(5, 1) = “Shin-Yokohama”, to point to the next element, “Nagoya,” must be performed first. To
do this, the value of the pointer for A(2, 1) = “Shinagawa”, A(2, 2), which is 3, is assigned to
A(5, 2). The operation in the answer group that is equivalent to A(2, 2)→A(5, 2) is c)
A(A(1, 2), 2)→A(5, 2), so c) is the correct answer.
It is important to note that if the same operations are performed in the order shown in a), 5 will
be assigned to A(1, 2) in the first operation, so the second operation, A(A(1, 2), 2)→A(5, 2), will
result in A(5, 2)→A(5, 2), and thus the procedure will not work.
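The relinking can be sketched with a list of [name, next-row] pairs; the initial array contents are a reconstruction from the explanation (in particular, the initial pointer of the Shin-Yokohama row is an assumption, set to 0 here since it is about to be overwritten):

```python
# A[i] = [name, next_row]; row 0 is left unused so indices match the question (1-origin).
# Hypothetical reconstruction of the array; A[5]'s initial pointer is assumed to be 0.
A = [None,
     ["Tokyo", 2],
     ["Shinagawa", 3],
     ["Nagoya", 4],
     ["Shin-Osaka", 0],
     ["Shin-Yokohama", 0]]

# Answer c): A(A(1,2), 2) -> A(5,2) must be performed first ...
A[5][1] = A[A[1][1]][1]   # Shin-Yokohama now points to Nagoya (row 3)
# ... and only then 5 -> A(1,2):
A[1][1] = 5               # Tokyo now points to Shin-Yokohama

def traverse(A, row=1):
    """Follow the next-row pointers and collect the element names in list order."""
    names = []
    while row != 0:
        names.append(A[row][0])
        row = A[row][1]
    return names

order = traverse(A)
```

Performing the two assignments in the opposite order would overwrite A(1, 2) before its old value is used, which is exactly the failure described for option a).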

Q1-32 b) Heaps constructed in arrays

Heaps are complete binary trees, in which, for any subtree, the following is true:
parent data value < child data value (or parent data value > child data value)
The element with an index of 1 in each option is the smallest value, 1, so a heap satisfying
“parent data value < child data value” is expected. The array elements map onto a complete
binary tree as follows (array indices listed level by level; the element at index i is the parent
of the elements at indices 2i and 2i + 1):
level 1: index 1 (the root); level 2: indices 2, 3; level 3: indices 4, 5, 6, 7;
level 4: indices 8, 9, 10, 11

Insert the elements from each option of the answer group into the nodes of the binary tree, and
search for the heap for which “parent data value < child data value” holds for every parent-child
pair (below, the values stored at indices 1 through 11 are listed level by level).


a) 5 > 4 for the two elements with indices 3 and 6, so this does not satisfy the conditions of a
heap.
   Values by level: 1 / 3, 5 / 12, 6, 4, 9 / 15, 14, 8, 11

b) All values satisfy the conditions of a heap, so this is the correct answer.
   Values by level: 1 / 5, 3 / 12, 6, 4, 9 / 15, 14, 8, 11

c) 8 > 6 for the two elements with indices 5 and 10, so this does not satisfy the conditions of a
heap.
   Values by level: 1 / 5, 3 / 12, 8, 4, 9 / 15, 14, 6, 11

d) 6 > 5 for the two elements with indices 2 and 5, so this does not satisfy the conditions of a
heap.
   Values by level: 1 / 6, 3 / 12, 5, 4, 9 / 15, 14, 8, 11
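The heap condition can be tested mechanically; the arrays below are as read level by level from the four option trees (a reconstruction, since the original question shows them as diagrams), with index 0 unused so that the parent of index i is index i // 2:

```python
def is_min_heap(a):
    """a[1..n] holds the elements (a[0] unused); parent value < child value everywhere."""
    n = len(a) - 1
    return all(a[i // 2] < a[i] for i in range(2, n + 1))

options = {
    "a": [None, 1, 3, 5, 12, 6, 4, 9, 15, 14, 8, 11],
    "b": [None, 1, 5, 3, 12, 6, 4, 9, 15, 14, 8, 11],
    "c": [None, 1, 5, 3, 12, 8, 4, 9, 15, 14, 6, 11],
    "d": [None, 1, 6, 3, 12, 5, 4, 9, 15, 14, 8, 11],
}
valid = [name for name, a in options.items() if is_min_heap(a)]
```

Only option b) passes the check, consistent with the case analysis above.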


Q1-33 c) Number of binary tree elements

This question asks the examinee to determine the formula which represents the relationship
between the number of nodes (n) and depth (k) of a binary tree in which all leaves have the same
depth, and all nodes other than leaves have two children. First, using the binary tree shown as an
example in the question, a depth of 2 is substituted for k in each formula given in the options, to
confirm whether the resulting value is 7.

a) 2(2 + 1) + 1 = 7
b) 2^2 + 3 = 4 + 3 = 7
c) 2^(2+1) – 1 = 8 – 1 = 7
d) (2 – 1)(2 + 1) + 4 = 3 + 4 = 7
(Example tree in the question: depth k = 2, with 7 nodes)

The number of nodes, n, is 7 for all of the options, so now substitute a depth of 3 for k and
confirm whether the resulting values of the formulas are 15 nodes.

a) 3(3 + 1) + 1 = 3 × 4 + 1 = 13
b) 2^3 + 3 = 8 + 3 = 11
c) 2^(3+1) – 1 = 2^4 – 1 = 16 – 1 = 15
d) (3 – 1)(3 + 1) + 4 = 2 × 4 + 4 = 12
(Tree of depth k = 3, with 15 nodes)
Therefore, c) is the correct answer.
The condition that “all nodes other than leaves have two children” indicates that this is a
complete binary tree, in which every node except the bottom nodes has two children. The number
of leaves in a complete binary tree is four for k = 2, and eight for k = 3; in general, the number of
leaves is 2^k. The number of nodes other than leaves, that is, all nodes down to depth k – 1, is
2^k – 1, so for depth k, the total number of nodes can be determined as follows:
2^k + (2^k – 1) = 2^k × 2 – 1 = 2^(k+1) – 1.
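The closed form 2^(k+1) – 1 can be cross-checked by summing the nodes level by level:

```python
def count_nodes(k):
    """Nodes in a complete binary tree of depth k where every leaf is at depth k."""
    return sum(2 ** level for level in range(k + 1))   # 2^0 + 2^1 + ... + 2^k

checks = [count_nodes(2), count_nodes(3)]   # depths used in the explanation above
```

The level-by-level sum agrees with 2^(k+1) – 1 for every depth, including the worked cases k = 2 (7 nodes) and k = 3 (15 nodes).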

Q1-34 d) Binary tree search methods

Based on the parent-child relationships between elements as described in the question for array
A[1], A[2], ..., A[n], a binary tree is generated as shown in Fig. 1. For example, for i = 1, a binary
tree can be generated by taking A[2] as the left child of A[1], and A[3] as the right child of A[1].

              A[1]
            /      \
        A[2]        A[3]
        /  \        /  \
    A[4]  A[5]  A[6]  A[7]

Fig. 1 Binary tree

Performing a linear search on the array will traverse the array by starting at the top of the array,
A[1], passing through A[2], A[3], ..., A[7], and searching the binary tree in order from the value 1
to the value 7 as shown in Fig. 2. This is a breadth-first search, so the correct answer is d).


              1:A[1]
            /        \
       2:A[2]        3:A[3]
       /    \        /    \
  4:A[4] 5:A[5] 6:A[6] 7:A[7]

Fig. 2 Binary tree search (numbers indicate the visiting order)

Below is a summary of the other search methods (searching in numerical order).

a) Pre-order, depth-first search: visits in order of parent, left subtree, and right subtree.
   Visiting order: A[1] 1st, A[2] 2nd, A[4] 3rd, A[5] 4th, A[3] 5th, A[6] 6th, A[7] 7th.

b) Post-order, depth-first search: visits in order of left subtree, right subtree, and parent.
   Visiting order: A[4] 1st, A[5] 2nd, A[2] 3rd, A[6] 4th, A[7] 5th, A[3] 6th, A[1] 7th.

c) In-order, depth-first search: visits in order of left subtree, parent, and right subtree.
   Visiting order: A[4] 1st, A[2] 2nd, A[5] 3rd, A[1] 4th, A[6] 5th, A[3] 6th, A[7] 7th.
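The four orders can be verified with a short sketch, assuming (as in the question) that the children of A[i] are A[2i] and A[2i + 1]:

```python
def preorder(i, n, order):
    """Visit parent, then left subtree, then right subtree."""
    if i <= n:
        order.append(i)
        preorder(2 * i, n, order)
        preorder(2 * i + 1, n, order)

def inorder(i, n, order):
    """Visit left subtree, then parent, then right subtree."""
    if i <= n:
        inorder(2 * i, n, order)
        order.append(i)
        inorder(2 * i + 1, n, order)

def postorder(i, n, order):
    """Visit left subtree, then right subtree, then parent."""
    if i <= n:
        postorder(2 * i, n, order)
        postorder(2 * i + 1, n, order)
        order.append(i)

# A linear scan of A[1..7] visits indices 1, 2, ..., 7: breadth-first order.
breadth_first = list(range(1, 8))

pre, ino, post = [], [], []
preorder(1, 7, pre)
inorder(1, 7, ino)
postorder(1, 7, post)
```

The linear scan's visiting order (1 through 7) matches none of the depth-first orders, confirming that it is the breadth-first search of answer d).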

Q1-35 d) Algorithms depicted in flowcharts

The flowchart is composed of (1) an initial setting and (2) a main processing using an iterative
structure. The content of each processing is as follows:

(1) Initial setting


c is set to 0, and d is set to a.

(2) Main processing using an iterative structure


Iterative processing stops when d is smaller than b; if not smaller, the value of d minus b is
assigned to d, and c is incremented by 1. In this iterative process, while d is greater than or
equal to b, d – b is performed, and c counts the number of subtractions. When d < b and the
iterative process ends, the value of d is the remainder after subtracting b from d multiple times,
and c is the number of times the subtraction was performed. In other words, “when a is
divided by b, the quotient is c and the remainder is d.” Therefore, d) is the correct answer.
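The flowchart's logic of division by repeated subtraction can be written out directly:

```python
def divide_by_subtraction(a, b):
    """Flowchart logic: c counts the subtractions (quotient), d holds the remainder."""
    c = 0                 # (1) initial setting
    d = a
    while d >= b:         # (2) repeat while d is not smaller than b
        d = d - b
        c = c + 1         # count one subtraction
    return c, d           # a divided by b: quotient c, remainder d
```

For example, divide_by_subtraction(17, 5) performs three subtractions and leaves 2, i.e. quotient 3 and remainder 2.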


Q1-36 d) Data sorting algorithms

In addition to the three basic sorting methods, bubble sorting, selection sorting, and insertion
sorting, there are also applied sorting methods which can reduce computational complexity, such
as quick sorting, shell sorting, and heap sorting. Many applied sorting methods use recursive
algorithms, and the general process involved in each method must be understood.

a) Quick sorting is an improved method of bubble sorting. One element of data is selected for
comparison, and the remaining data is divided into two groups: one consisting of data with
values greater than the reference data, and the other consisting of values less than the
reference data. This process is repeated in order to sort data. The description in the option
applies to shell sorting.
b) Shell sorting is an improved method of insertion sorting. Elements within a data sequence are
extracted at regular intervals (gaps), forming subsequences. The insertion method is used on
those subsequences to sort them. The gap is then made smaller, and the same process is
repeated. The description in the option applies to bubble sorting.
c) In bubble sorting, adjacent data is compared, and if their size order is reversed, the data itself
is exchanged. The description in the option applies to quick sorting.
d) This description is correct. Heap sorting is an improved method of selection sorting. In
selection sorting, the minimum (or maximum) data in a data sequence is selected and set as
the first data element, and then the process is repeated on the remaining data. This sorting
method uses tree structures called heaps.

Q1-37 c) Number of data comparisons in insertion sorting algorithm

In insertion sorting, a data element is extracted from data sequences, and inserted into the
remaining data sequence in the correct position in ascending or descending order. This operation is
repeatedly performed on all data in the data sequence to sort the data. As with bubble sorting and
selection sorting, when insertion sorting is performed, the average number of comparisons for n
data elements is proportional to n^2. When the number of data elements is doubled, that is, 2n,
the average number of comparisons becomes proportional to (2n)^2 = 4n^2, or four times that
for n data elements.
For example, as a result of comparing the number of key comparisons for the original data
elements (100 elements) with that for the double data elements (200 elements), the ratio of the two
numbers is “(the number of comparisons for 200 items) ÷ (the number of comparisons for 100
items) = 200^2 ÷ 100^2 = 4 (times).” Therefore, c) is the correct answer. The reason that c) says
“roughly quadruples” is that the average number of comparisons using the insertion algorithm is
represented as (n^2 + n – 2) / 4, which contains terms other than n^2, but they are small
enough that they can be ignored when n is sufficiently large.
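The quadrupling can be observed by counting comparisons directly. The sketch below uses reverse-sorted input (the deterministic worst case, n(n – 1)/2 comparisons) rather than a random average, but the ratio when doubling n is the same, roughly 4:

```python
def insertion_sort_comparisons(data):
    """Count key comparisons made while insertion-sorting a copy of `data`."""
    a = list(data)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one key comparison
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger elements to the right
                j -= 1
            else:
                break
        a[j + 1] = key                # insert key at its correct position
    return comparisons

# Reverse-sorted input: every element must be compared against all those before it.
c100 = insertion_sort_comparisons(range(100, 0, -1))   # 100 * 99 / 2 = 4,950
c200 = insertion_sort_comparisons(range(200, 0, -1))   # 200 * 199 / 2 = 19,900
ratio = c200 / c100                                    # roughly 4
```

Doubling the input from 100 to 200 elements multiplies the comparison count by about 4.02.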

Q1-38 d) Combinations of sorting method descriptions and names

Below is an organized list of the sorting methods given as options, and their characteristics.

• Insertion sort: A data element is selected from the data sequence to be sorted in order, starting
with the second element, and inserted into the appropriate position so that the data sequence to


its left can be in sorted order. This operation is repeated until the n-th data element. The
number of comparisons is proportional to n^2.
• Heap sort: Before sorting, a heap is constructed from the data sequence. Heaps are complete
binary trees, in which, for any subtree, parent data values are greater than child data values.
After extracting the root (largest data) from the heap, the heap is reconstructed. This procedure
is repeated until the data is sorted in descending order. The number of comparisons is
proportional to n log n.
• Merge sort: The data sequence to be sorted is progressively divided into halves, subsequences
containing two or fewer data elements are constructed, and these are sorted. The sorted
subsequences are merged, and the process is continued until the entire data sequence is sorted.
The number of comparisons is proportional to n log n.

Looking at the number of comparisons, it is evident that the description in “C” refers to
insertion sorting. The number of comparisons for both heap sorting and merge sorting is
proportional to n log n. Heap sorting can represent data sequences to be sorted as arrays and
construct a heap inside the array by moving elements, so no work area is necessary. On the other
hand, merge sorting divides the data sequence to be sorted into halves, and, as with quick sorting,
for both divided subsequences, recursively performs merge sorting, sorting each divided
subsequence. Once all subsequences with two or fewer data elements are sorted, one subsequence
is inserted into the other subsequence such that the sorted order is not disturbed, merging the two
subsequences. This subsequence merging is repeated until, eventually, the entire target data
sequence has been sorted. Performing merging requires a work area equal to half the number of
data elements used to store the subsequence being inserted. Therefore, the description in “A”
applies to merge sorting, and the description in “B ” applies to heap sorting. Therefore, d) is the
appropriate combination.

Q1-39 b) Algorithms for searching unordered data

The key point in answering this question is having an accurate understanding of the
characteristics of data search algorithms. As the question concerns “searching a large array of
unordered data,” the appropriate algorithm is the linear search algorithm. Therefore, b) is the
correct answer. In linear searching, specific data is searched for from the start of the data to the
end, regardless of how the data is ordered. For the binary search and hashing algorithms given as
other options, the data must be appropriately sorted first.

a) Before a binary search is performed, data must be sorted.


c) With hashing, data is stored in storage locations determined by using a hash function. When
searching, the storage location specified by the hash function is searched for.
d) The Monte Carlo algorithm is a numerical calculation algorithm which uses random numbers
to determine approximate solutions.


Q1-40 d) Average number of comparisons for linear searches

In this question, one condition is that “the employee number to search for appears in a random
order, and the search always starts from the beginning of the table.” In the event that the employee
number exists in the table, we must determine the average number of comparisons in which the
number will be found, from finding it in the first searched position to the final n-th position. This
can be expressed as (1 + 2 + . . . + n) / n. The numerator is n (n + 1) / 2, so the average number of
comparisons is (n + 1) / 2.
Next, in the other event that the employee number does not appear in the table, the number of
elements to be searched is always n, so the average number of comparisons is n. Here, the
probability that the number exists in the table is (1 – a), and the probability that it does not exist in
the table is a, so the overall average number of comparisons can be determined by multiplying the
probability of occurrence by the number of comparisons for each event and then calculating the
sum of both events. Therefore, d) is the correct answer.

(n + 1)(1 – a) / 2 + n × a
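As a small sketch, the formula of answer d) behaves as expected at the boundary probabilities:

```python
def average_comparisons(n, a):
    """Expected comparisons: (n + 1) / 2 with probability (1 - a), and n with probability a."""
    return (n + 1) * (1 - a) / 2 + n * a
```

For a = 0 (the key is always present) this reduces to (n + 1) / 2, and for a = 1 (never present) it reduces to n, matching the two cases analyzed above.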

Q1-41 b) Binary search algorithm (flowchart)

In binary searching, when searching for data in an array in which elements are already sorted in
ascending or descending order, the data at the center of the array is compared with the data being
searched for. Based on whether the data searched for is larger or smaller, the search range is
narrowed in half and the data search is repeated.
An overview of the data search steps using the binary search algorithm is given below (for an
array with n elements, sorted in ascending order).

(1) In order to set the entire array as the search range, the far left edge (L) is set as the start
element number (1), and the far right edge (R) as the end element number (n).
(2) The middle position of the search range (M) is determined as follows (the fractional part,
if any, is discarded): M = (L + R) ÷ 2
(3) The value of the element at the position determined in (2) (hereinafter, the center value) is
compared against the search data.
As a result of the comparison:

(i) If the search data = the center value, the search data has been found, and the search
process ends.
(ii) If the search data < the center value, the search data cannot be found in the second half
of the search range, so the right edge (R) is changed to the position immediately before
the compared center value (M – 1), the first half being set as the new search range, and
the process returns to (2).
(iii) If the search data > the center value, the search data cannot be found in the first half of
the search range, so the left edge (L) is changed to the position immediately after the
compared center value (M + 1), the second half being set as the new search range, and
the process returns to (2).
(iv) If the search data does not match the center value, and there are no more elements to


search (R < L), the search data does not exist, and the search process ends.

Note that if the elements are sorted in descending order, the inequality signs (<, >) will be
reversed.
For example array A below, the first center value is “24”. If the search data is “24”, it will match
the center value, so the search process will end. If the search data is “6”, the search data < the
center value, so the search range will be changed to the first half (section outlined with a thick
border at left), and the process will be repeated. If the search data is “39”, the search data > the
center value, so the search range will be changed to the second half (section outlined with a thick
border at right), and the process will be repeated.

Position   1   2   3   4   5   6   7   8   9  10
A          1   5   6  16  24  29  30  39  42  53
           L               M                   R

Applying the processing above to the flowchart in the question results in the following:
In the flowchart, L is substituted by x, R by y, and M by m.

Start
(1) 1 → x, n → y ...... set the whole array as the search range
Repeat the following:
  Compare x : y; if x > y, then 0 → m and End ...... (3)(iv) data not found
  (2) [ A ] → m ...... determine the middle position of the search range
  (3) Compare k : A(m)
      If k = A(m): End ...... (3)(i) data found
      If k < A(m): m – 1 → y, and return to the x : y comparison ...... (3)(ii) first half
      If k > A(m): m + 1 → x, and return to the x : y comparison ...... (3)(iii) second half

Blank A is the process for determining the element number of the middle element of the search
range (step (2)), so b) (x + y) / 2 → m is the correct answer.
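The completed flowchart, with blank A filled in as (x + y) / 2 → m (integer division assumed), can be sketched directly; the array below is the 1-origin example used above, with A[0] left unused:

```python
def binary_search(A, k):
    """Flowchart logic: A is 1-origin (A[0] unused); returns position m, or 0 if absent."""
    x, y = 1, len(A) - 1          # (1) whole array as the search range
    while x <= y:
        m = (x + y) // 2          # (2) middle position: blank A, (x + y) / 2 -> m
        if k == A[m]:             # (3)(i) found
            return m
        elif k < A[m]:
            y = m - 1             # (3)(ii) continue in the first half
        else:
            x = m + 1             # (3)(iii) continue in the second half
    return 0                      # (3)(iv) not found

A = [None, 1, 5, 6, 16, 24, 29, 30, 39, 42, 53]
```

Searching for 24 succeeds on the very first comparison (position 5), while 6 and 39 narrow into the first and second halves, respectively.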

Q1-42 d) Hash function and synonym

In hashing, when storing data, a predefined procedure (called a hash function) is used on the
key value of the data for generating a value which is used as the storage location and making it
easier to retrieve the key value later on. With this method, sometimes a different key value will
generate the same value when the hash function is calculated (this is called a synonym, with the
storage position being identical). In such case, the data is stored in a separate location. Usually, a
pointer is used to connect and manage synonymous data.


The hash value of 150 is calculated by dividing 150 by 97. 150 ÷ 97 = 1 remainder 53, so the
hash value is 53. The hash values of each key value are shown below.

a) 13 ...... 13 ÷ 97 = 0 remainder 13
b) 244 ...... 244 ÷ 97 = 2 remainder 50
c) 535 ...... 535 ÷ 97 = 5 remainder 50
d) 732 ...... 732 ÷ 97 = 7 remainder 53

Therefore, d) is the correct answer.
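The synonym check above amounts to comparing remainders modulo 97:

```python
def hash_value(key, buckets=97):
    """Hash function assumed in the question: the remainder of key divided by 97."""
    return key % buckets

# A key is a synonym of 150 if it hashes to the same value.
synonyms = [k for k in (13, 244, 535, 732) if hash_value(k) == hash_value(150)]
```

Only 732 shares the hash value 53 with 150, so it is the synonym.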

Q1-43 d) Characteristics of hash table search time

In hash table searches, data values themselves are used in calculations to determine the data
storage position (the function used is called a hash function), and when searching for data, the
same calculation method is used to determine and access the storage position from the data value.
In this question, given that synonyms (multiple data whose hash values point to the same position)
do not occur, regardless of the number of data elements in the table, calculations can be performed
on data values to uniquely determine their storage position. Therefore, the search time is fixed, and
the answer is d).

Q1-44 c) Results of recursive procedure

Noting that “p div q”, used in procedure F(x), indicates the integer part of the quotient when p is
divided by q, and “p mod q” indicates the remainder when p is divided by q, we should consider
the contents of the process described. As shown in the figure below, procedure F(x) is called
recursively, but as the definition of F(x) states that nothing will happen unless x > 0, recursive
calling will stop at x = 0, and the value can be determined by tracing the procedure back. The
figure below shows this process, with processing performed in the order of
(i)→(ii)→(iii)→(iv)→(v).

(i) F(10): x = 10; since 10 div 8 = 1, F(1) is called; after it returns, print(10 mod 8),
    that is, print(2), is executed.
(ii) F(1): x = 1; since 1 div 8 = 0, F(0) is called; after it returns, print(1 mod 8),
    that is, print(1), is executed.
(iii) F(0): x = 0, so nothing happens and the call returns.
(iv) print(1) is executed.
(v) print(2) is executed.

Fig. Process order

Printing, then, is performed in order, first “1” (iv) followed by “2” (v), so the printed result will
be 12, or answer c).
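A sketch of the procedure, assuming its body is “if x > 0 then F(x div 8); print(x mod 8)” (which is consistent with the trace above), reproduces the printed result:

```python
def F(x, output):
    """Assumed body of procedure F(x): recurse on the quotient, then print the remainder."""
    if x > 0:
        F(x // 8, output)          # recursive call with x div 8
        output.append(str(x % 8))  # printed only after the recursive call returns

out = []
F(10, out)
result = "".join(out)
```

Because the print happens after the recursive call, the digits come out innermost-first, yielding "12".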


Q1-45 a) Calculation of recursively defined function

The trace of the function call is shown below, in accordance with the definition of the function
f(n).

f(n) = 2 × f(n – 1)
     → 2 × f(n – 2)
     → 2 × f(n – 3)
     ...            (n calls)
     → 2 × f(1)
     → 2 × f(0)
            (= 1)

In f(n), n calls are performed until f(0) = 1, so assigning values in order from the end, it is
evident that 2 to the n-th power (= 2^n), answer a), is being determined.

Another approach to solving this problem would be to assign a specific value to n. For example,
if n = 3, we would find that the function determines 2 × 2 × 2 = 2^3 by tracing the calls.
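The recursive definition translates directly into code, which confirms that f(n) = 2^n:

```python
def f(n):
    """f(0) = 1; f(n) = 2 * f(n - 1) for n > 0."""
    if n == 0:
        return 1
    return 2 * f(n - 1)
```

For instance, f(3) unwinds to 2 × 2 × 2 × 1 = 8 = 2^3.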

Q1-46 b) Flowcharts including synchronization of parallel processing

The flowchart in the question shows that unless the condition that both process B and process Y
are completed is satisfied (synchronization), new parallel processing (process A, process X) cannot
be started.

a) Parallel processing synchronization occurs when B is completed. If process Y ends before
process B first ends, synchronization is possible, and process A can be started, but for the
second process A to be started, process Y must be completed.
b) Assuming a point in time where Y has been completed, synchronization is possible when B is
completed, so process X can be performed. At the same time, process A can be performed.
When process X ends, process Y can start. Therefore, this execution order is possible, and this
is the correct answer.
c) In this sample order, synchronization is possible when process B is performed and process Y is
performed. If process Y has not yet been completed, process A cannot be performed again.
d) In this sample order, synchronization is possible when the first process Y is completed.
Therefore, process B cannot be performed before process A is performed. Even considering a
state in which process X and process A have been completed, synchronization is not possible
after process Y is completed, so process X cannot be performed.

Q1-47 d) Character string represented in regular expressions

Regular expressions are used to represent sets of symbols, and can be used to search for
ambiguous file names or multiple character strings.
The regular expression “[A–Z]+[0–9]*” represents the following character string.


[A–Z]+ ...... a string of one or more alphabetic characters
[0–9]* ...... a string of zero or more numeric characters

Examining the options for strings which start with one or more alphabetic characters, followed
by zero or more numeric characters, the answer would be d) ABCDEF.

a) The character string must start with at least one alphabetic character.
b), c) The symbols “*” and “+” themselves are not defined as elements in the character string, so
they cannot be used.

Q1-48 b) Standardization of programming

The objective of programming standardization is the improvement of program quality. It is
achieved by defining rules and conventions for writing programs, and by creating programs based
on those rules. This standardization makes it easy for any development or maintenance staff
member to understand the contents of programs. By documenting how to write programs and
sharing information on error-prone programming or programming with security-related risks,
problems can be prevented in advance. Therefore, b) is the appropriate answer.

a) There is no relationship between compiling optimization and programming standardization.


c) Generally, programming standardization is dependent on individual programming languages.
d) Programming standardization can promote the creation of a program with better processing
efficiency, but it generally does not include standard program execution time clarification.

Q1-49 d) Program structure

Program A: This refers to a recursive program.


Programs B and C : Program structures can be categorized as shown below.

Reusable programs are classified into serially reusable programs (B) and reentrant
programs (C).
Therefore, d) is the correct answer.

Q1-50 b) Call by value and call by reference

The dummy argument X is called by value, and only the value is passed to the function add by
the main program. X within add is stored in a different location than X in the main program.
Therefore, even if the value of X is changed in add, the change will not be reflected in the value of
X in the main program.
The dummy argument Y is called by reference, so Y ’s address is passed to the function add. The
function add uses the memory location of the received address to perform processing. Therefore,
changing the value of Y in add will result in the value of Y in the main program being changed.


In the question, the value 2 is passed to add by the main program as both X and Y. The add
function changes the value of X to equal X + Y (X becomes 4), and changes the value of Y to equal
X + Y (Y becomes 6). However, when control is returned to the main program as a result of the
return statement, only Y in the main program is changed to 6, while X retains the value of 2 it had
before the call.
Therefore, b) is the correct answer.

[Figure: The main program sets X = 2 and Y = 2 and calls add(X, Y). (1) In add, X = X + Y changes the local copy of X from 2 to 4. (2) Y = X + Y changes Y from 2 to 6 through its address, so the change reaches the main program. (3) Control returns, with X still 2 and Y now 6 in the main program.]
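Python itself does not distinguish call by value from call by reference for scalars, but the semantics in the question can be imitated with a one-element list standing in for Y's address (a hypothetical sketch, not the language used in the exam):

```python
def add(x, y_ref):
    # x is a local copy (call by value): changes are not visible outside.
    x = x + y_ref[0]         # X = X + Y : local X becomes 4
    # y_ref acts like an address (call by reference): the change is shared.
    y_ref[0] = x + y_ref[0]  # Y = X + Y : Y becomes 6

x = 2
y = [2]         # the list element plays the role of Y's memory location
add(x, y)
print(x, y[0])  # 2 6 : X is unchanged, Y is updated
```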

Q1-51 c) Programming paradigms

The correct answer is c) Pascal and procedural language. Note that the programming language
C in a) is a procedural language, while C++ is an object-oriented language. b) and d) should be
“LISP and functional language” and “Prolog and logic language,” respectively.
Below is a brief overview of the characteristics of each programming paradigm, and
representative programming languages.
• Procedural language (C, Pascal, COBOL, etc.): Language in which the values of the variables
are sequentially changed by executing program statements
• Functional language (LISP, etc.): Language in which basic functions are combined in order to
produce output values based on input values
• Logic language (Prolog, etc.): Language that describes programs using predicate logic
• Object-oriented language (C++, Smalltalk, etc.): Language equipped with some specific
features, such as class definition, inheritance, and encapsulation, which can support the
object-oriented programming approach

Q1-52 d) Characteristics of Java

In languages such as C or C++, when a program reserves memory space, the program is
responsible for freeing up the memory space when the space is no longer necessary. However, in
Java, memory is constantly monitored, and memory spaces no longer used are automatically
recovered and organized into free space. This feature is called garbage collection. Therefore, d) is
the appropriate answer.

a) Inheritance from multiple superclasses in an object-oriented language is called multiple


inheritance. C++ supports multiple inheritance, but Java does not.
b) A class defines the characteristics of objects which share common variables and methods.
Basic data types represent attributes of a variable, and cannot be treated as classes.
c) Java does not support the variable types of pointer, structure, and union supported by C.


Q1-53 b) Characteristics of XML

XML (eXtensible Markup Language) is a markup language that was developed based on SGML
(Standard Generalized Markup language), and is notable for allowing users to define their own
custom tags. When exchanging data between networked information systems, the attributes, order,
and the like of the data being exchanged often differ between systems, so a mere exchange of data
values alone is not sufficient for correct data exchange. With XML, users can freely define tags,
and by using this feature, define data item names, attributes, and the like with tags, embedding
values in tags so that data with different formats and structures can be correctly exchanged.
Therefore, b) is the correct answer.

a) XML is intended to supplement the capabilities of representing data and document structure,
which HTML, a language for Web document display, is not suited for. Its purpose is not the
improvement of Web page display performance.
c) HTML uses a style language called CSS (Cascading Style Sheets). XML also uses XSL
(eXtensible Style Language) in addition to CSS.
d) As explained above, both XML and HTML were developed based on SGML.
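The point about user-defined tags can be seen in a small example. The `<order>`/`<item>` tag names below are hypothetical, and Python's standard `xml.etree.ElementTree` module is used only to show that such tags carry both data values and attributes that a receiving system can recover:

```python
import xml.etree.ElementTree as ET

# Hypothetical document: the tag names are defined by the user,
# not by the XML standard itself.
doc = """
<order>
  <item code="A01">
    <name>Widget</name>
    <qty>3</qty>
  </item>
</order>
"""

root = ET.fromstring(doc)
item = root.find("item")
# Both attributes and embedded values survive the exchange intact.
print(item.get("code"), item.find("name").text, int(item.find("qty").text))
```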

Q1-54 b) Characteristics of computer language

SGML (Standard Generalized Markup Language) is a standard language used for describing
structured documents. Character strings enclosed with < > in the document can be used for
representing the structure of documents containing diagrams and tables as well as decorative fonts.
It is used worldwide in publications from public agencies, and also in electronic publishing. The
description in b) is appropriate.

a) PostScript is a page description language primarily used by printers, and is independent of


operating systems or output devices. It produces optimal output results in accordance with
output device capabilities. The description shown in this option is that for HTML (HyperText
Markup Language), so this is incorrect.
c) VRML (Virtual Reality Modeling Language) is a language for describing three-dimensional
graphical data on the Web, and defines the format of the text files that describe
three-dimensional coordinate values and geometrical data. The description shown in this
option is that for Java, so this is incorrect.
d) XML (eXtensible Markup Language) is a markup language for creating Web pages, and is also
used for exchanging data between companies. It is an extension of SGML and HTML. It is
characterized by embedding DTD (Document Type Definition) which defines a document
structure, and by allowing tags which describe data attributes to be defined by users. In
HTML, only predefined tags may be used.

29
Morning Exam Section 2 Computer System Answers and Explanations

Section 2 Computer System

Q2-1 d) Characteristics of supercomputers

A supercomputer is a type of computer with high performance computing capability used to


perform design, simulations, etc. which require large amounts of calculations. Matrix operations
are heavily used in this type of computing, so parallel processing of calculations was mainly
performed with vector computers equipped with multiple processors that provide vector operation
instructions for high-speed matrix operations. In recent years, however, with the increase in
performance of PCs and the decrease in their costs, processing methods that use many
microprocessors or connect many PCs to perform parallel processing have become the mainstream.
The type of computer which uses many microprocessors to perform parallel processing is referred
to as MPP (Massively Parallel Processor). Furthermore, as methods to implement supercomputers
have diversified, systems that can perform massive calculations like traditional supercomputers
are often referred to as HPC (High Performance Computing) systems. Grid computing, which uses
many PCs connected to the Internet, is also considered a variation of HPC. Therefore, d) is the
appropriate description.

a) A supercomputer’s main target area is scientific computation and not character string
processing.
b) This description seems to relate to general-purpose computers.
c) A computer, like the one in this description, which incorporates “hardware circuits specific to
certain application areas” is called a special-purpose processor. Although it is one way to
achieve HPC, it is not a general characteristic of supercomputers, so the description is not
appropriate.

Q2-2 d) Instruction execution cycle

This question is about the instruction execution process from “instruction fetch” to “instruction
execution.” The control unit fetches, decodes, and executes instructions one by one. The correct
process is “fetch instruction → decode instruction → calculate address of operands → fetch
operands → execute instruction.” Therefore, d) is the appropriate answer. An operand is the target
that is acted upon by the instruction’s operation code. The process is normally divided into these
five stages, but the “instruction execution” stage may further be subdivided into two stages:
execute instruction → store results of operation.
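The five-stage cycle can be mimicked with a toy interpreter in Python; the instruction format and the `ADD` operation here are invented purely for illustration:

```python
def run(program, memory):
    pc = 0
    while pc < len(program):
        instr = program[pc]          # (1) fetch instruction
        op, *operand_names = instr   # (2) decode instruction
        # (3) calculate operand addresses -- here, dictionary keys
        # (4) fetch operands
        operands = [memory[name] for name in operand_names]
        if op == "ADD":              # (5) execute instruction
            memory[operand_names[0]] = sum(operands)
        pc += 1
    return memory

print(run([("ADD", "A", "B")], {"A": 5, "B": 7}))  # {'A': 12, 'B': 7}
```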

Q2-3 b) Characteristics of RISC compared with CISC

RISC (Reduced Instruction Set Computer) is an architecture in which the types of instructions
are simplified and all instructions are executed using wired logic on hardware. Since the
instruction length is fixed, execution times for most instructions are the same, which makes it
suitable for pipeline control. It has many registers so that operations can be executed on registers
to increase processing performance.
CISC (Complex Instruction Set Computer) is a legacy architecture which provides a wide
variety of instructions and implements multifunctional and complex instructions using

microprograms. Since the instruction lengths are variable and their execution times differ, it is not
suitable for pipeline control. Considering these points, b) summarizes the characteristics of RISC.

Q2-4 c) Programming method effective for pipeline processing

Pipeline processing is a method of increasing CPU performance by dividing instruction


execution cycles into several stages and overlapping each stage. For example, when the instruction
cycle is: fetch instruction (1) → decode instruction (2) → calculate address of operands (3) →
fetch operands (4) → execute instruction (5) → store result of operation (6), pipeline processing is
performed as in the following figure:

Instruction 1 (1) (2) (3) (4) (5) (6)


Instruction 2 (1) (2) (3) (4) (5) (6)
Instruction 3 (1) (2) (3) (4) (5) (6)

In this way, pipeline processing increases the performance of instruction executions by fetching
the next instruction before the execution of the previous instruction has finished, and then
executing different stages of multiple instructions at the same time. A situation which disturbs and
prevents the parallel execution of processing is called a pipeline hazard. Factors which disturb
pipeline processing include: a control hazard caused by a branch instruction which nullifies the
prefetched instruction; a data hazard in which an execution standby occurs as an instruction waits
for the results of a previous instruction; and a structural hazard caused by contention for resources
such as memory or decoder. In order to perform pipeline processing effectively, it is necessary to
ensure that these hazards do not easily occur. Among the options, programming method c) is
effective in decreasing the occurrence of control hazards by reducing the number of branch
instructions.
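In the ideal hazard-free case, the benefit of pipelining can be expressed with a simple formula: n instructions through a k-stage pipeline take k + (n − 1) cycles instead of n × k. A quick sketch, assuming one cycle per stage and no hazards:

```python
def cycles_without_pipeline(n_instructions, n_stages):
    # Each instruction runs all stages to completion before the next starts.
    return n_instructions * n_stages

def cycles_with_pipeline(n_instructions, n_stages):
    # The first instruction fills the pipeline (n_stages cycles);
    # afterwards one instruction completes per cycle.
    return n_stages + (n_instructions - 1)

print(cycles_without_pipeline(3, 6))  # 18
print(cycles_with_pipeline(3, 6))     # 8, matching the 3 instructions in the figure
```

Hazards push the real cycle count above this ideal value, which is why reducing branch instructions as in c) is effective.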

Q2-5 c) Multiprocessor system

In a tightly coupled multiprocessor system, main memory is shared, and a single OS controls
the system. On the other hand, in a loosely coupled multiprocessor system, each processor has its
own main memory and OS (can be different OS), and exchanges information by using high-speed
I/O ports.
In a tightly coupled multiprocessor system, the processors are usually considered equal, and
tasks in the system can be executed on any processor. Therefore, c) is the appropriate answer.

a) (Incorrect) main memory → (correct) high-speed I/O port: It is the tightly coupled
multiprocessor system which exchanges information by using main memory.
b) Although there is no loss that occurs in a tightly coupled multiprocessor system, such as
because of memory contention, performance does not increase in proportion to the number of
processors but increases more moderately because of the increase in overhead in transferring
control among CPUs.
d) The processors are not controlled by their individual OSs but are all controlled by a single OS.


Q2-6 a) External interrupt

There are two types of interrupt: internal interrupt that occurs as a result of executing a program
instruction, and external interrupt that occurs irrespective of the program instruction being
executed. Among the descriptions, an interrupt raised by the interval timer in a) is categorized as
an external interrupt. Interval timers are used to make processing requests at regular time intervals.

• External interrupts: I/O interrupts (completion of I/O operations or malfunctions in I/O


devices), timer interrupts (expiration of timers), and machine check interrupts (such as
malfunctions in hardware or power failure)
• Internal interrupts: supervisor call interrupts (use of OS functions through SVC instructions or
system call instructions by the program being executed), program interrupts (such as
divide-by-zero, overflow, executing invalid instructions, and page fault)

b) Interrupts that are raised as a result of an overflow or divide-by-zero error are categorized as
internal interrupts.
c) An interrupt (page fault) that is raised when a program instruction attempts to access pages not
in main memory is categorized as an internal interrupt.
d) Interrupts (SVC instructions or system call instructions) that are raised as result of executing
software interrupt instructions are categorized as internal interrupts.

Q2-7 a) Clock frequency of PCs

A clock, which is a component of a CPU, is a circuit that generates a signal (clock signal) to
synchronize the operation timing in each functional unit of a computer. Clock frequency,
represented in MHz, is the number of cycles of the clock signal that occur in a second. The clock
frequency of a CPU is several times greater than the system bus (equaling several pulses of the
clock), so a) is correct.

b) The inverse of the clock frequency is the clock pulse interval time. The number of instructions
(in millions) that can be executed in a second is represented by MIPS.
c) Higher clock frequencies generally result in higher CPU performance, although there are
limitations to this rule, such as because of malfunction in logical circuits. On the other hand,
overall system performance is not determined only by the CPU, but is influenced by storage
devices, I/O devices, and other peripheral functions comprehensively. When there is a
performance bottleneck in a device other than the CPU, overall system performance will not
improve no matter how much CPU performance improves.
d) Although the CPU architecture such as instruction sets can be considered equivalent since the
CPU types are the same, program performance is affected by differences such as in the
configuration of other devices or the application environment as described in c). Therefore,
the program execution performances of the two PCs are not necessarily equivalent.

Q2-8 c) Explanation of VLIW

VLIW (Very Long Instruction Word) is a mechanism which uses longer instructions and
specifies multiple actions in an instruction so that multiple functions can be executed at one time

(Fig. A). In order to implement VLIW, multiple instructions that can be executed at the same time
are combined into a single combined instruction at the compilation stage. Therefore, c) is the
correct answer.

[Fig. A VLIW: a single long instruction holds multiple control fields (several operation controls, network control, order control, and main memory control), each of which drives its own arithmetic unit, so several operations are carried out simultaneously.]

a) Since instructions to be executed at the same time are determined statically at the compilation
stage, the statement “dynamically determined by hardware control” is not appropriate.
b) This is an explanation for super-pipeline (Fig. B).
d) This is an explanation for superscalar (Fig. C).

IF: Instruction fetch ID: Decode


OA: Operand address calculation OF: Operand fetch
EX: Instruction execution RS: Storage of operation results

[Fig. B Super-pipeline: each of the six stages (IF through RS) is subdivided into finer sub-stages, so successive instructions (1 through 4) are started at intervals shorter than one full stage.]

[Fig. C Superscalar: multiple pipelines run in parallel, so instructions are issued two or more at a time; instructions 1 and 2 proceed through IF to RS together, followed by instructions 3 and 4.]

Q2-9 a) Characteristic of DDR-SDRAM

DDR-SDRAM (Double Data Rate Synchronous DRAM) enables high-speed memory access.
High-speed is achieved by accessing data at both the rising and falling edges (double) of the clock
signal. In other words, two data can be accessed in one clock cycle. Therefore, access speed is
doubled in comparison with standard SDRAM (which enables one data to be accessed in one
clock cycle). Since DDR-SDRAM is a type of SDRAM, it operates in synchronization with the

clock signal. Therefore, a) is the appropriate characteristic.


Data in DRAM is read in the following order: a particular row is read into the row buffer
through RAS (Row Address Strobe) and then the appropriate column (data) in that row is
read by CAS (Column Address Strobe). Thus, even when multiple data in the same row are
read in sequence, the content of the same row is read into the row buffer each time. In Fast Page
Mode (FPM) DRAM, as described in option c), this overhead is eliminated by enabling multiple
data in the same row to be read continuously by changing only the column address without
reading data into the row buffer each time. EDO (Extended Data Out) DRAM in b) achieves even
higher reading speed by preparing the reading operation for the next data while reading the current
data to shorten page access cycle time. These DRAMs are categorized as asynchronous DRAMs.
After those DRAMs, SDRAM in d) which enables one data to be read in one clock cycle in
synchronization with the memory bus clock (external clock) was developed. DDR-SDRAM,
which is discussed in this question, is a type of SDRAM which achieves even higher speed by
enabling two data to be read in one clock cycle.
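The effect of transferring on both clock edges can be quantified; the 100 MHz clock and 64-bit bus width below are illustrative values, not figures from the question:

```python
def sdram_rate_mb_per_s(bus_clock_mhz, bus_width_bits, transfers_per_cycle):
    # transfers_per_cycle: 1 for standard SDRAM (one data per clock cycle),
    # 2 for DDR-SDRAM (data on both the rising and falling edges).
    bytes_per_transfer = bus_width_bits // 8
    return (bus_clock_mhz * 1_000_000 * transfers_per_cycle
            * bytes_per_transfer // 1_000_000)

print(sdram_rate_mb_per_s(100, 64, 1))  # 800 MB/s for standard SDRAM
print(sdram_rate_mb_per_s(100, 64, 2))  # 1600 MB/s for DDR: double the rate
```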

Q2-10 d) Combination in which effective access time is the shortest

In order to compensate for the gap between CPU processing performance and main memory
access speed, cache memory that can be accessed at high-speed is placed between them. Hit ratio
is the probability that the target data exists in cache memory. If the target data is not in cache
memory, it is read from main memory. When the hit ratio is p, then the probability of reading the
data from main memory is defined as 1– p.
When the access time of cache memory is Tc and that of main memory is Tm, the effective
access time is Tc × p + Tm × (1 − p). Applying the values in each option, d) yields the shortest
effective access time as follows:

a) 10 × 0.6 + 70 × (1 − 0.6) = 6 + 28 = 34 (nanoseconds)
b) 10 × 0.7 + 70 × (1 − 0.7) = 7 + 21 = 28 (nanoseconds)
c) 20 × 0.7 + 50 × (1 − 0.7) = 14 + 15 = 29 (nanoseconds)
d) 20 × 0.8 + 50 × (1 − 0.8) = 16 + 10 = 26 (nanoseconds)
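The calculation is easy to check in Python; the function simply encodes Tc × p + Tm × (1 − p):

```python
def effective_access_time(tc, tm, p):
    # Hit in cache with probability p; otherwise access main memory.
    return tc * p + tm * (1 - p)

options = {"a": (10, 70, 0.6), "b": (10, 70, 0.7),
           "c": (20, 50, 0.7), "d": (20, 50, 0.8)}
for label, (tc, tm, p) in options.items():
    print(label, round(effective_access_time(tc, tm, p), 1))
# d) gives 26.0 nanoseconds, the shortest of the four
```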

Q2-11 d) Characteristic of write-back method

In the write-back method, when data in cache memory is changed, the data is written only to
cache memory. Then, when that block (data) is flushed out of cache memory, the changes are
written to main memory. Thus, when the same block (address) is modified multiple times, access
to main memory is not required each time, resulting in fewer accesses and high-speed writes.
Therefore, d) is the correct answer. However, a situation in which data in cache memory and main
memory do not match will occur temporarily.
The method in which data is simultaneously written to both cache memory and main memory is
called the write-through method. Options a) and b) are characteristics of this method. For c),
although there may be several interpretations, it can be a characteristic of the write-through
method if the data is written to both cache memory and main memory when the data is in the
cache.


Q2-12 b) Memory interleaving

Memory interleaving is an architecture which increases access speed by partitioning main


memory into multiple units called banks, connecting each bank with the CPU via separate busses,
and enabling parallel access. It is effective for transferring data to cache memory or for
prefetching data in pipeline control. Therefore, b) is the correct answer.

a) Operations within a computer are synchronized using the signal generated by the processor
clock. The number of signals that occur in 1 second is referred to as clock frequency and is a
CPU-specific performance. By implementing memory interleaving, access time to main
memory from the CPU can be reduced. As a result, processing capacity in a unit of time
increases, but it does not mean that clock frequency has increased.
c) Data stored in contiguous address space are often related and used at the same time. Therefore,
in memory interleaving, the contiguous address space is divided into separate banks, and
when access to a target bank is started, banks with subsequent addresses are accessed in
parallel.
d) Memory interleaving is a technique to shorten access time by prefetching the contents of
contiguous addresses. When all memory accessing is accomplished for the contiguous
addresses and the prefetched contents are not wasted, performance can be theoretically
increased n-times. In reality, however, since the memory addresses accessed are random, and
the prefetched contents are often wasted, an increase in performance by n-times is not
necessarily achieved.
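The bank assignment underlying interleaving can be sketched as follows; a common scheme maps address mod n to a bank, though actual hardware mappings vary:

```python
def bank_of(address, n_banks):
    # Consecutive addresses fall into different banks, so sequential
    # accesses can be overlapped (accessed in parallel).
    return address % n_banks

print([bank_of(a, 4) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```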

Q2-13 a) Method of writing data onto disks to increase I/O speed

“A method of increasing data I/O speed by distributing and writing data onto multiple hard
disks” is the striping method of a). This method is used in RAID which aims to achieve a disk
system with high capacity, high reliability, and high I/O performance by handling multiple hard
disks as one hard disk.

b) The term “swapping” is usually used to refer to replacing something. To replace devices such
as a failed hard disk while continuing the operation of a computer system is called hot
swapping. To save processes that were not running for a long time to disk from memory is
called swap-out, and the reverse is called swap-in.
c) This is a description of a disk cache. Based on the property that data read from a disk is often
referenced again, disk performance is improved by temporarily holding data read from the disk
in cache memory, which eliminates the need for input operations when the data is referenced again.
d) This is a description of disk duplexing. By writing data into two disks simultaneously, when
either disk fails, operations can be continued using the other disk. This increases the
availability of a computer system.

Q2-14 b) Calculating effective data capacity in a RAID5 configuration

Since one hard disk out of six is used as a spare disk, the number of hard disks that can be used
to configure a RAID5 system is five. RAID5 consists of effective data and parity data. The parity

data is distributed among multiple disks but its total capacity sums up to the equivalent of one hard
disk. Therefore, the capacity of effective data is equivalent to the capacity of four hard disks,
which is 80GB × 4 = 320GB. Therefore, b) is the correct answer.
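The capacity calculation generalizes easily; this small helper (the function name is ours, not a standard API) encodes the rule that parity consumes the equivalent of one disk and spares hold no data:

```python
def raid5_effective_capacity_gb(total_disks, disk_gb, spares=0):
    # RAID5 parity occupies the equivalent of one disk's capacity,
    # distributed across the array; spare disks hold no effective data.
    data_disks = total_disks - spares - 1
    return data_disks * disk_gb

print(raid5_effective_capacity_gb(6, 80, spares=1))  # 320 GB
```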

Q2-15 a) Optical disk using organic dye and laser light

The storage media on which organic dye is used as a recording layer and data is recorded by
creating singes with a laser is CD-R (CD-Recordable) of option a). CD-R is a type of CD in which
data can be written only once. Since singes are created, data cannot be erased once it has been
written.

b) CD-RW (CD-ReWritable) is a type of CD which enables data to be written or erased any


number of times. It records data by using a laser to heat the material in the recording layer to
change its characteristics (reflectance).
c), d) DVD is a recording media with the same diameter as that of CD (12 cm). Its recording
density is higher than that of CD, and double-sided and double-layered recordings are
possible. DVD-RAM of c) refers to rewritable DVD, and DVD-ROM of d) refers to read-only
DVD. In DVD-RAM, data is recorded in almost the same way as in CD-RW. In DVD-ROM,
similar to CD-ROM, data is recorded by creating small dimples called “pits”.

Q2-16 c) Explanation of system bus

A system bus is a shared path (bus) used by a CPU to exchange data or control signals between
the CPU and main memory or between the CPU and I/O controllers. Therefore, c) is the
appropriate description. There are several methods of categorizing buses. When they are
categorized as internal and external busses of a computer, a system bus is categorized as an
internal bus similar to a processor bus (a connection between components in a CPU) or a memory
bus (a connection between a CPU and main memory). External busses include the I/O bus
between an I/O controller and auxiliary storage devices, and between I/O controllers.

a) This is a description of RS-232C.


b) This is a description of DMA (Direct Memory Access) or I/O channel.
d) This is a description of USB. High-speed (480Mbits/sec) mode was additionally defined from
USB2.0, and super speed mode (5Gbits/sec) was further added from USB3.0.

Q2-17 b) Transfer modes in USB

In USB, four transfer modes are defined as mentioned in this question: isochronous transfer,
interrupt transfer, control transfer, and bulk transfer. Among these modes, interrupt transfer, in
which polling is periodically performed by the computer to detect the status of buttons or the like,
is mainly used for mice or joysticks. Therefore, b) is the correct answer.

a) Isochronous transfer is a mode of focusing on real-time transfer without retransmission. It is


mainly used for playing back video or audio.
c) Control transfer is used for configuring or controlling devices.
d) Bulk transfer is used for exchanging large amounts of data. It is used for devices connected via

USB such as disk devices, printers, scanners, and network adapters.

Q2-18 a) Interface specialized for transferring large amount of video data

This is a question about I/O interfaces on PCs. In 3D (three-dimensional) graphics mainly used
for CAD or gaming software, a large amount of memory exclusively for 3D is required for the Z
buffer which holds image depth information, or for storing pattern information for the surfaces of
3D objects. Usually, part of main memory is used for this purpose. Therefore, in order to perform
3D rendering, a large amount of data must be transferred at high-speed between main memory and
the graphics adapter. Intel developed AGP (Accelerated Graphics Port), a bus specification as an
interface specifically for transferring video data. Therefore, a) is the correct answer.

b) ATA (AT Attachment) is a standard defined by ANSI (American National Standards Institute)
based on the IDE (Integrated Device Electronics) specification which is an industry standard
used for interfaces to connect hard disks in IBM PC/AT-compatible PCs.
c) This is an I/O bus with a data width of 16 bits to connect I/O expansion cards to PCs. ISA
(Industrial Standard Architecture) bus is an industry standard based on the AT bus
specification disclosed by IBM as a bus specification for PC/AT.
d) PCI (Peripheral Component Interconnect) bus is an I/O bus used to connect I/O expansion
cards to PCs and is similar to an ISA bus, but with a significantly increased data transfer rate
such as by widening the data width to 32 bits. It is widely used as a standard bus which does
not depend on CPU architecture.

Q2-19 c) Explanation of DMA

DMA (Direct Memory Access), as described in c), is a method in which a dedicated control
circuit transfers data between I/O devices and main memory. The circuit dedicated for this purpose
is referred to as DMAC (DMA Controller). By using this dedicated circuit to control data transfer
(i.e., data I/O for peripheral devices) between I/O devices and main memory, the CPU can be
relieved of this control function.

a) Since the CPU directly controls data transfer, this is a description about the method called
direct control.
b) Judging from “I/O-specific address spaces in main memory,” this is a description about
memory-mapped I/O. It is mainly used in RISC machines that have huge memory space.
Incidentally, the alternate method is called the I/O-mapped I/O method. It is widely used in
mainframes or PCs which use CISC processors.
d) The method of shortening processing time by “partially overlapping the execution stages of
multiple instructions” is called pipelining. It is a method to control the execution of CPU
instructions and is not an I/O control method like the other options.

Q2-20 b) Calculation of the amount of image data

Generally, in this kind of calculation, attention should be given to the unit. As 1 inch equals
2.54cm, the image width and height are 25.4cm = 10 inches and 38.1cm = 15 inches respectively.

Resolution 600 dpi means that there are 600 dots per inch, so the number of dots in the image is
(600 × 10) × (600 × 15) = 54,000,000 (dots). Since 24 bits (= 3 bytes) of color information is
required for each dot, the amount of data is:

54,000,000 × 3 (bytes) = 162,000,000 (bytes) = 162 (Mbytes), so b) is the correct answer.
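The arithmetic can be bundled into one function; unit handling (cm to inches, bits to bytes) is where mistakes in this kind of question usually occur:

```python
def scan_size_bytes(width_cm, height_cm, dpi, bits_per_dot):
    CM_PER_INCH = 2.54
    # Convert physical size to dots at the given resolution.
    width_dots = round(width_cm / CM_PER_INCH * dpi)
    height_dots = round(height_cm / CM_PER_INCH * dpi)
    # Total bits divided by 8 gives bytes.
    return width_dots * height_dots * bits_per_dot // 8

print(scan_size_bytes(25.4, 38.1, 600, 24))  # 162000000 bytes = 162 Mbytes
```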

Q2-21 d) Display with low voltage operation, low power consumption, and no backlight

The display which emits light when voltage is applied and meets all the characteristics of “no
need of backlight,” “low voltage operation,” and “low power consumption” is the organic
light-emitting diode (OLED) display, so d) is the correct answer. The OLED display consists of an
organic compound (which emits light by applying voltage) sandwiched between glass (or plastic)
plates, and operates by applying 5 to 10V DC voltage. It is thin, some being 1.8 mm in thickness.
Unlike a liquid crystal display (LCD), it consumes little power because it does not require a
backlight.

a) CRT (Cathode Ray Tube): It is a Braun tube used for television sets. Fluorescent material
applied on the display surface illuminates when an electron beam deflected by a magnetic
field or electrode hits the display surface. Its power consumption is high in comparison with
LCDs.
b) PDP (Plasma Display Panel): It is a display which has gas such as helium or neon sealed in
between two sheets of glass and emits light by applying voltage to the gas. Although it is slim,
lightweight, and luminous, power consumption is high in comparison with LCDs.
c) TFT liquid crystal: It is a display which has material called liquid crystal sandwiched between
glass plates. Liquid crystal changes its transparency by altering its molecular structure when
voltage is applied. A type of LCD in which voltage is applied to each bit of pixel through a
thin-film transistor is called a TFT (Thin Film Transistor) system. Liquid crystals require a
backlight since they do not emit light on their own.

Q2-22 a) Vertical distribution system

Distributed processing systems can be categorized into two types: horizontal/vertical


distribution and functional/load distribution. Functional distribution is a method which assigns
different processing functions to each computer, whereas load distribution is a method which
assigns the same processing functions to all computers. In reality, three types of system
configurations are derived by combining these methods: horizontal distribution, horizontal load
distribution, and vertical distribution.
In vertical distribution systems, small high-performance computers or workstations are
integrated into the system, and each of them are assigned a certain function such as
communication control, database, and data processing. Systems with front-end/back-end
processors or client/server systems are grouped in this category. These systems are characterized
by a hierarchy or dependency between processors, so a) is appropriate.

b) This is a description of a horizontal load distribution system. It has the advantage of being able
to continue processing even if some processors fail.
c) This is a description of a horizontal distribution system or a horizontal load distribution

system, and not a vertical distribution system.


d) This describes the distribution of processing functions in a horizontal distribution system.
Since each of the functions is assigned to different processors, application programs are easier
to manage. It also has other advantages such as being able to install the most appropriate
processor for each application.

Q2-23 b) Grid computing

Grid computing is a system in which multiple computers are connected by a network to


virtually create a high-performance computer from where users can obtain as much processing or
storage capacity as required. Since computers have much idle time (i.e., a state of waiting a fair
amount of time for data to process), it is possible to utilize the resources of idle computers or lend
these resources to companies or other departments which have high-speed processing needs for
large amounts of data. There already exists a project which aggregates unused CPU power from
home computers over the Internet to perform complex processing such as decryption and medical
research. In this way, grid computing technology makes it possible to perform calculations which
are impossible for one computer to handle alone. Therefore, b) is the correct answer.

a) This is an explanation of asymmetric multiprocessing in which processors are preassigned


according to the tasks they will perform.
c) This is an explanation of symmetric multiprocessing in which multiple CPUs perform tasks as
equivalent entities.
d) This is an explanation of multithreading.

Q2-24 b) Characteristic of three-tier client/server system

A three-tier client/server system is a system in which the client/server application consists of


three logical tiers: the presentation layer, the function layer (business logic layer), and the data
layer (database layer). The presentation layer provides user interface functions, the function layer
provides business processing functions such as data processing, and the data layer provides
database access functions. By lowering the interdependency between each tier, development tasks
for each tier can be performed in parallel. For example, when the screen layout is changed, only
the screen generation program in the presentation layer has to be changed. When the data
aggregation method is changed, only the program in the function layer has to be modified. These
tasks can be performed in parallel. In addition, if there is no change in data items, the data layer
does not have to be changed. Therefore, b) is the appropriate answer.

a) This is a description of the development method for the presentation layer and is not a
characteristic specific to a three-tier client/server system.
c) One of the characteristics in a three-tier client/server system is to separate business processing
and database processing.
d) In a three-tier client/server system, the function layer which performs business logic can be
placed only on servers, only on clients, or on both clients and servers. For example, when the
function layer is placed only on servers, if a change is made to the business logic, programs
installed on clients do not have to be replaced.


Q2-25 c) Explanation of RPC in client/server system

RPC (Remote Procedure Call) in a client/server system is a kind of inter-program


communication method in which a program can invoke procedures in another computer to have
that computer handle the processing. Specifically, when computer A invokes a procedure in
computer B, computer B performs the processing and returns the results to computer A. Computer
A receives the processing results from computer B and performs subsequent processes. Therefore,
c) is the appropriate answer.

a) This is a description of remote login.


b) This is a description of stored procedures.
d) This is a description of network drives.

Q2-26 b) Adoption of thin client

Although there is no single definition of a thin client system, it generally consists of client PCs
that do not have hard disks, and servers. It has the effect of decreasing TCO (Total Cost of
Ownership) by reducing setup work for individual client PCs and consolidating software resources
on the server side, and enhancing terminal security by not holding data on terminals.
In adopting thin clients, the following are required:

(1) Software and hardware for thin clients


(2) Software and hardware for servers
(3) A network which connects thin clients and servers

Therefore, the system configuration requirements for adopting thin clients are reflected in
software, hardware, and network configuration. Thus, b) is correct.

Q2-27 b) Configuration diagram of a computer system

When considering the configuration diagram of the computer system in question, we should
focus on the relationship between CPU and memory (i.e., whether memory is shared by multiple
CPUs or not). A loosely coupled multiprocessor configuration, as shown in the configuration
diagram, is a multiprocessor system in which CPUs have their own memory instead of shared
memory, are controlled by independent OSs, and are connected via a high-speed network.
Therefore, b) is the appropriate answer.

a) Cluster configuration is a method of bringing (or clustering) multiple computers together and
using them as one system. The configuration diagram shown in this option shows a tightly
coupled multiprocessor configuration in which multiple CPUs share memory.
c) In dual configuration, two systems with the same configuration perform the same task and
compare the results to proceed with processing. So its configuration diagram is the one shown
in d). The configuration diagram shown in this option shows a VM (Virtual Machine) system
in which multiple OSs run on one CPU.
d) Duplex configuration is a method in which two systems (i.e., production and backup) are
prepared, and if the production system fails, processing can be continued by switching


operation to the backup system. When the production system is working normally, the backup
system often performs batch processing, etc. The configuration diagram in this option shows a
dual configuration.

Q2-28 d) Technology to increase the reliability of computer systems

A correct understanding of technical terms concerning some examples of technology to increase


the reliability of computer systems is required.
Fault tolerance is technology to make the system work correctly as a whole by multiplexing
important components of the system even if some components fail. Therefore, d) is the appropriate
answer. In fault tolerant systems, some ways are devised to enable failed parts to be replaced or
repaired without stopping the system. In this question, fault tolerance is treated as a technology
on a par with fail soft and fail safe, but caution should be exercised, as the term can also be
used to express tolerance to failures in a broad sense that includes fail soft.

a) Fail safe is technology to design systems so that a failure in one part of the system will have
an effect of leaning towards the safer side. It is required in systems such as those related to
human life or social infrastructure. Note that a) is an explanation of fault avoidance.
b) Fail soft is technology to design systems to allow failed equipment to be temporarily
disconnected from the system so that the system can continue operating without completely
stopping, although its performance will be lowered.
c) Fault avoidance is technology to increase the reliability of components and avoid failures as
shown in the description of a). Note that c) is an explanation of fail soft.

Q2-29 a) Advantage of installing NAS

NAS is a specialized file server machine which is used by directly connecting it to the network.
The name “Network Attached Storage” is derived from the fact that it looks as though a storage
device is directly connected to the network. NAS inherently possesses a file system and
communication functions, can be easily introduced or added to systems, and allows files to be
shared among multiple computers running different OSs. Therefore, a) is the correct answer.

b), c) These are descriptions about SAN (Storage Area Network). In SAN, a dedicated network,
which is separate from the network connecting servers and clients, is used to connect servers
and storage devices, and data is exchanged over the network in units of blocks. In NAS, data
is shared in units of files, while in SAN, data is shared in units of blocks. Since SAN uses a
dedicated network to connect servers to storage devices, its network load is lower compared with
NAS. In addition, SAN uses a data transfer method called fiber channel to achieve
long-distance high-speed data transfer.
d) From the description “a file system built on general-purpose servers can be shared,” this can
be considered as a description about DAS (Direct Attached Storage). DAS is a storage device
that is connected directly to a server on a one-to-one basis.


[Fig. A Topology of NAS: clients and servers access the NAS (storage with its own file system) directly over the LAN]

[Fig. B Topology of SAN: servers on the LAN connect to storage over a separate, dedicated fiber channel network]

[Fig. C Topology of DAS: storage with a file system built on the server is attached directly to that server]

Q2-30 c) Function of load balancer

A load balancer is a device which distributes requests from clients to multiple Web servers so
that the requests will not be concentrated at one particular server. Therefore, c) is the correct
answer. Load balancers distribute requests from clients to servers based on data such as the
number of TCP connections distributed to each server, the response time of servers, and CPU
utilization of servers.

The other descriptions are for the following equipment or functions.


a) SSL (Secure Sockets Layer) accelerators, which take over SSL encryption/decryption processing
b) Bandwidth controllers which allocate bandwidth for communication
d) Multilink (link aggregation)

Q2-31 c) Performance evaluation by benchmarking

A benchmark test is a way to compare and evaluate the performance of computers by measuring
the run-time of standard programs. Therefore, the description of c) is appropriate which describes
that performance results from several kinds of benchmark tests help to understand the
characteristics of systems and are effective for choosing machines to be installed.

The other descriptions have the following errors:


a) The results of the TPC benchmark are expressed by performance value and price/performance
ratio and include the measure of cost.
b) For example, Dhrystone is used to measure the integer arithmetic performance of a computer


and is not a benchmark test for evaluating the performance of overall computer systems.
d) A benchmark test evaluates the performance of a specific objective and cannot be considered a
widely-used evaluation model.

Q2-32 b) Performance evaluation of computer systems using simulation

When the performance of computer systems is evaluated using simulations, instead of building
and evaluating real systems, a model is created which simulates performance factors in a real
target system and the performance of the model is evaluated to aid in the forecasting of system
performance. What is important is to make a system model which can be developed more easily
than building an actual system, and in which calculation time for simulations does not become
excessive. For this purpose, we must identify the items we want to understand through
performance evaluation, distinguish essential parts which affect results from nonessential parts,
and keep the model simple. Note that an event in a simulation is a phenomenon generated for the
purpose of simulating factors that affect the status or performance of computer resources;
examples include the initiation of transactions or the completion of I/O. Therefore, b) is the appropriate
description.

a) Calculation accuracy does not necessarily improve in proportion to the number of generated
events. Increasing the number of events will not improve calculation accuracy unless they
have an essential effect on system performance.
c) The statement “only events currently identified can be implemented in the model” is not
correct. Simulations can be applied to help forecast the future by considering events that will
occur in the future.
d) The statement “random numbers are not reproducible” is wrong. In addition, simulations using
events triggered by random numbers are often performed.

Q2-33 b) Calculation of transaction processing performance

MIPS represents the number of instructions (in millions) a CPU can execute in a second. When
processor performance is 20MIPS and the utilization rate of the processor is 80%, the following
holds:
Number of instructions executed per second = 20 × 10^6 × 0.8 = 16 × 10^6 (steps/second)
One transaction consists of 800,000 steps, so the following holds:
Number of transactions per second = (16 × 10^6) ÷ 800,000 = 20 (transactions/second)
Therefore, b) is the correct answer.
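As a sanity check, the calculation above can be reproduced in a short Python sketch (the values are taken directly from the question):

```python
# Throughput calculation for Q2-33 (values from the question).
mips = 20                  # processor speed: 20 million instructions/second
utilization = 0.8          # 80% of the processor is available
steps_per_transaction = 800_000

instructions_per_second = mips * 10**6 * utilization   # 16,000,000 steps/second
transactions_per_second = instructions_per_second / steps_per_transaction
print(transactions_per_second)  # 20.0
```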

Q2-34 d) Speed-up ratio in a multiprocessor

On the condition that “the speed-up ratio of A with r = 0.9 becomes 3 times faster than that of B
with r = 0.3,” the following equation holds:

1 / ((1 − 0.9) + 0.9/n) = 3 × 1 / ((1 − 0.3) + 0.3/n)


n / (0.1n + 0.9) = 3n / (0.7n + 0.3)

1 / (0.1n + 0.9) = 3 / (0.7n + 0.3)

0.3n + 2.7 = 0.7n + 0.3

0.4n = 2.4

n = 6

Thus, performance which is three-times higher can be achieved when the number of processors
is six.
Therefore, d) is the correct answer. The equation E = 1 / ((1 − r) + r/n) used here represents the
speed-up ratio in a multiprocessor and is referred to as Amdahl’s law.
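Rather than solving the equation by hand, the value of n can also be found numerically from Amdahl's law; the Python sketch below (illustrative only) searches for the n that satisfies the condition in the question:

```python
def speedup(r, n):
    """Amdahl's law: speed-up with parallelizable ratio r on n processors."""
    return 1 / ((1 - r) + r / n)

# Find the n where system A (r = 0.9) is exactly 3 times faster than B (r = 0.3).
n = next(k for k in range(1, 100)
         if abs(speedup(0.9, k) - 3 * speedup(0.3, k)) < 1e-9)
print(n)  # 6
```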

Q2-35 c) Performance index representing contention on main memory

Contention on main memory is a situation where multiple programs access main memory at the
same time. The level of contention is determined by the number of access requests that occur at
the same time and the available capacity of main memory. In general, since access requests from
different programs are made for different pages, paging will occur as the level of contention on
main memory increases, because of lack of main memory. Therefore, as described in c), the
frequency of paging can be considered an index representing the level of contention on main
memory. Although a) execution waiting time and b) transaction response time are affected by
contention on main memory, they are also affected by contention on disk or CPU, so in
consideration of the statement “best indicates,” c) is the correct answer.
Note that although memory utilization d) might seem correct because main memory is a type of
memory, it is not an appropriate index for indicating the state of contention because when
programs access the memory frequently, memory utilization increases even if there is no
contention.

Q2-36 b) Calculation of availability in accordance with changes in usage conditions

When “mean time between failures” is represented as MTBF and “mean time to repair” is
represented as MTTR, availability can be expressed using the following formula:

Availability = MTBF / (MTBF + MTTR)

So, availability before the change is x / (x + y). According to the conditions in the question, both
MTBF and MTTR have increased 1.5 times, so these can be considered as 1.5x and 1.5y
respectively. Therefore, availability under the new conditions is expressed as follows:

Availability = 1.5x / (1.5x + 1.5y)


= 1.5x / 1.5(x + y)


= x / (x + y)

As a result, we can see that it is the same as before.


This means that when scale factors for MTBF and MTTR are the same, availability does not
change. Therefore, b) is the correct answer.
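The invariance can also be confirmed numerically; the sketch below uses illustrative values for x and y (any positive values give the same result):

```python
def availability(mtbf, mttr):
    # Availability = MTBF / (MTBF + MTTR)
    return mtbf / (mtbf + mttr)

x, y, scale = 1500.0, 500.0, 1.5   # illustrative values; scale applies to both
before = availability(x, y)
after = availability(scale * x, scale * y)
print(before, after)  # identical: the common scale factor cancels out
```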

Q2-37 a) Calculation of MTTR to increase availability 1.25 times

MTBF (Mean Time Between Failures) is the average time a system continues operation without
failure. MTTR (Mean Time To Repair) is the average time until a system recovers from a failure.
Availability is the probability that a system is operating normally and can be calculated using the
formula below:
Availability = MTBF / (MTBF + MTTR)
The availability of a system with MTBF of 1,500 hours and MTTR of 500 hours can be
calculated as follows:
1,500 / (1,500 + 500) = 1,500 / 2,000 = 3/4

The availability 1.25 times higher than that of this system is:
(3/4) × (5/4) = 15/16
Let the new MTTR be x hours and calculate the value for x which meets the following equation:
1,500 / (1,500 + x) = 15/16

From this equation, we can get x = 100, so a) is correct.
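The same answer falls out of rearranging the availability formula for MTTR; a minimal Python check (values from the question):

```python
mtbf, mttr = 1500, 500
current = mtbf / (mtbf + mttr)        # 3/4
target = current * 1.25               # 15/16

# From target = mtbf / (mtbf + x), solve for the new MTTR x:
new_mttr = mtbf * (1 - target) / target
print(new_mttr)  # 100.0
```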

Q2-38 b) Calculation of network availability

The availability “A1” of the line connecting Osaka and Tokyo through Nagoya can be calculated
by multiplying the availability of each line since the route is connected in series.
A1 = 0.9 × 0.9 = 0.81
The availability “A2” of the line connecting Osaka and Tokyo without passing through Nagoya
is 0.9.
Therefore, the availability “A3” of the overall route between Osaka and Tokyo can be calculated
by subtracting the probability of failing both lines from 1, since A1 and A2 are connected in
parallel.
A3 = 1 – (1 – A1) (1 – A2) = 1 – (1 – 0.81) (1 – 0.9) = 1 – 0.019 = 0.981
Next, as the line between Fukuoka and Osaka is connected in series with the overall route
between Osaka and Tokyo, availability is calculated as follows:
Availability = 0.9 × A3 = 0.9 × 0.981 = 0.8829
Therefore, b) is correct.
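Series and parallel availabilities combine by simple rules (multiply availabilities in series; in parallel, take one minus the product of the unavailabilities). A small Python sketch of the calculation above:

```python
def series(*avail):
    # All components must work: multiply the availabilities.
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(*avail):
    # The route fails only if every branch fails.
    fail = 1.0
    for a in avail:
        fail *= 1 - a
    return 1 - fail

a1 = series(0.9, 0.9)        # Osaka -> Nagoya -> Tokyo
a3 = parallel(a1, 0.9)       # the two Osaka-Tokyo routes in parallel
total = series(0.9, a3)      # Fukuoka-Osaka line in series with the rest
print(round(total, 4))  # 0.8829
```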


[Figure: the Fukuoka–Osaka line connects in series to the Osaka–Tokyo section, where the direct line (availability A2) and the route through Nagoya (availability A1) run in parallel; A3 denotes the combined Osaka–Tokyo availability]

Q2-39 b) Availability of overall system

In a serial connection, the overall availability decreases as the number of connected devices
increases because all the devices must be operating in order for the overall system to be in an
operating state. In a parallel connection, on the other hand, the overall availability increases as the
number of connected devices increases, because the overall system can operate when any one of
the devices is operating.
From this perspective, let us compare configurations A through C in the question. First, in
consideration of A and B, the availability of B is lower than that of A, because B is a configuration
where one device is added to configuration A in series, so A > B. Then, comparing the parallel
configurations A and C: each branch of A is a single device, whereas each branch of C is two
devices in series, and a two-device serial branch has lower availability than a single device, so A > C.
Therefore, the availability of A is higher than the availabilities of B and C. Among the options,
only a) and b) meet this condition. The difference in these configurations is the order of the
availabilities of B and C. When the availability of the device itself is α, the availability with two
devices in a serial configuration is α², while the availability with two devices in a parallel
configuration is 1 − (1 − α)² = 2α − α². Therefore, the availability of B is α × (2α − α²) = 2α² − α³, and that
of C is 2α² − α⁴, because C can be considered a parallel configuration of devices with an
availability of α². Here, the availability α of the device itself is smaller than 1 and α³ > α⁴, so
2α² − α³ < 2α² − α⁴, thus B < C. Therefore, the configurations in descending order of availability are
A, C, B, so b) is the correct answer.
Incidentally, when α = 0.9, the availability of each system is as follows:

A: 1 − (1 − 0.9)(1 − 0.9) = 0.99
B: 0.9 × 0.99 = 0.891
C: 1 − (1 − 0.9 × 0.9)(1 − 0.9 × 0.9) = 0.9639
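These three availabilities can be computed for any single-device availability α to confirm the ordering; the sketch below is illustrative (α = 0.9 as in the note above, but any 0 < α < 1 gives the same ordering):

```python
alpha = 0.9  # availability of a single device

avail_A = 1 - (1 - alpha) ** 2            # two devices in parallel
avail_B = alpha * avail_A                 # configuration A plus one device in series
avail_C = 1 - (1 - alpha * alpha) ** 2    # two serial pairs in parallel

# Descending order of availability: A, C, B
print(avail_A > avail_C > avail_B)
```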

Q2-40 c) Formula for calculating the number of system failures in a unit of time

MTBF (Mean Time Between Failures) and MTTR (Mean Time To Repair) respectively have the
following meanings:

MTBF = Total operating hours / Number of failures
MTBF is the average time between failures calculated by dividing the total normal operating
hours during a specific period in the past by the number of failures during this time.


MTTR = Total repair hours / Number of failures
MTTR is the average repair time calculated by dividing the total time the system could not be
operated normally because of failures or their repair during a specific period in the past by the
number of failures during this time.
The number of failures of a system in a unit of time (rate of failure occurrence) can be
calculated from Number of failures / Total operating hours, which is 1 / MTBF, the reciprocal of MTBF.
Therefore, c) is the correct answer.
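A quick numeric illustration of this relationship (the operating hours and failure count below are made-up figures, not from the question):

```python
total_operating_hours = 4500.0   # hypothetical observation data
number_of_failures = 3

mtbf = total_operating_hours / number_of_failures         # 1500 hours
failure_rate = number_of_failures / total_operating_hours # failures per hour

# The rate of failure occurrence equals the reciprocal of MTBF.
print(failure_rate, 1 / mtbf)
```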

Q2-41 a) Interpretation of bathtub curve

The curve which represents the failure rate of hardware is called “bathtub curve” from its shape.
This curve can be divided into the following three periods, based on the main causes of failure that
occur in these periods.

• Early failure period: It is the technical term referring to the early period after the system starts
operating. In this period, failures occur because of design errors, mistakes in the development
process, etc. These kinds of failures decrease over time.
• Random failure period: It is the technical term used to designate the period of time when the
influence of early failures has decreased, the state is stable with failure rate being constant,
and failures occur at random.
• Wear-out failure period: It is the technical term used to describe the period of time when
failures caused by wear or damage from long-term usage occur. Failure rate increases over
time.

When we look at the curve considering the above, the early failure period is about one year, the
random failure period is about 3 years, and the wear-out failure period is about 2 years. Therefore,
a) is the appropriate answer.

Q2-42 a) Characteristics of UNIX

One of the characteristics of UNIX is that in addition to files that are collections of data stored
on disks, it handles peripherals such as terminals, keyboards, and mice, as special files called
device files. This allows programs to handle peripherals in the same way as ordinary files.
Therefore, a) is the appropriate characteristic. Note that although device files contain only
information such as device number, when these files are accessed, the appropriate device drivers
are invoked and provide access to peripherals.

b) “Two-way communication between processes” is achieved by using a function called pipe.


Redirection is a function to change the standard input (usually a keyboard) and the standard
output (usually a display). Redirection can be used, for example, to write content for a display
to a file.
c) GUI is provided by CDE (Common Desktop Environment). CDE is a cross-vendor interface
which standardizes the appearance of UNIX desktop environments that were slightly different


between vendors. Note that shell is a command user interface (CUI) of UNIX.
d) There are five common file organizations: sequential organization, direct organization,
indexed sequential organization, partitioned organization, and VSAM organization files. Data
can be accessed according to the characteristics of each organization. These file organizations
are used in general purpose computers referred to as legacy systems (outdated systems). Open
systems, such as UNIX, handle files as a plain sequence of bytes called a byte stream and
have no concept of file organization.

Q2-43 a) State where processor utilization decreases

In a virtual memory system, paging may occur very frequently when the programs in virtual
memory are too big compared with the amount of main memory available in the system or the
multiplicity of programs (the number of programs running concurrently) is large. This situation is
called thrashing (option a)) and the overall efficiency of the system decreases.

b) Fragmentation: A situation where memory in a storage device is fragmented and the system
cannot allocate contiguous memory of a required size, resulting in a series of interspersed
memory areas. When write and delete operations are repeatedly performed on the storage
device, this condition gets worse and the efficiency of the system will decrease.
c) Paging: In virtual memory systems, when there is no space in real memory, the system pages
out the part of a program in real memory which is not required immediately to an auxiliary
storage device and instead pages in the part that will be required next to that space. These two
operations are collectively referred to as paging. The pages that were paged out are stored and
managed in places called slots in the auxiliary storage device.
d) Bottleneck: A bottleneck is a hindrance to progress or production in the overall resources of a
computer system, such as the heavily-loaded resource. In system development, it includes an
event which hinders development activities.


Q2-44 b) Paging methods

FIFO (First In First Out) is a method which replaces the page that has been in real memory for
the longest time. Also, LRU (Least Recently Used) replaces the page which has not been accessed
for the longest time since the last reference. An answer can be derived by tracing the access to
pages starting from the next page (e.g., page 3) in the table. The page should be replaced when the
virtual page to be accessed is not in real memory. The final condition of real memory pages after
processing using the FIFO and LRU methods is as shown below.

Virtual page     State of real memory     State of real memory
number to be     pages when FIFO          pages when LRU
accessed         is used                  is used
---------------------------------------------------------------
      1          [1]  –   –              [1]  –   –
      4           1  [4]  –               1  [4]  –
      2           1   4  [2]              1   4  [2]
      3          [3]  4   2              [3]  4   2
      4           3  (4)  2               3  (4)  2
      1           3  [1]  2               3   4  [1]
      2           3   1  (2)             [2]  4   1
      4           3   1  [4]              2  (4)  1
      3          (3)  1   4               2   4  [3]

[ ] represents the replacement (or loading) of a real memory page, and ( ) represents access
to a page already in real memory.

Therefore, b) is the correct answer.
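The table can be verified with a small simulation. The sketch below (an illustrative implementation, not from the book) replays the reference string under both policies:

```python
from collections import deque

def simulate(refs, frames, policy):
    """Return the final real-memory pages under 'FIFO' or 'LRU' replacement."""
    mem = []            # pages currently in real memory (fixed slots)
    fifo = deque()      # load order, oldest first (FIFO victim order)
    lru = []            # access order, least recently used first
    for page in refs:
        if page in mem:                       # hit: FIFO ignores it,
            if policy == 'LRU':               # LRU refreshes the access order
                lru.remove(page)
                lru.append(page)
            continue
        if len(mem) < frames:                 # free slot available
            mem.append(page)
        else:                                 # page fault: evict the policy's victim
            victim = fifo.popleft() if policy == 'FIFO' else lru.pop(0)
            mem[mem.index(victim)] = page     # replace in place, as in the table
        fifo.append(page)
        lru.append(page)
    return mem

refs = [1, 4, 2, 3, 4, 1, 2, 4, 3]
print(simulate(refs, 3, 'FIFO'))  # [3, 1, 4]
print(simulate(refs, 3, 'LRU'))   # [2, 4, 3]
```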

Q2-45 a) Explanation of job and job step

As described in a), a job is “a set of procedures performed in a computer” and consists of one or
more job steps. For example, in the case of matching two files (A and B), usually these two files
are first sorted so that the value of matching keys are in the same order (usually in ascending
order) and the sorted files are then read and matched. Since the program will become complex
when all of these steps are performed in one program, the steps are performed in separate
programs in stages for example as in (1) sort file A → (2) sort file B → (3) match files A and B. As
a result, by executing steps (1) through (3) sequentially, the user’s objective of matching is
achieved, but this is done by making the computer system execute steps (1) through (3) separately.
In this case, since it is inconvenient for users to handle these steps (1) through (3) separately and
execute them sequentially, there is a mechanism in which users enter (request) a collection of steps
into a computer system and the computer system performs the required steps sequentially in
accordance with the user’s directions. This mechanism is called a job (control). That is, the
collection of steps (1) through (3) is the job, and (1), (2), and (3) are job steps that make up the
job.
The contents of a job are described using JCL (Job Control Language) and entered into the


computer system. JCL is used to describe information about programs to be executed, required
resources (such as files), and the execution environment for each job step constituting the job. In
accordance with the contents described, the OS generates tasks or processes which are units of
CPU allocation, and have control over sequential processing. Therefore, a) is the appropriate
description.

b) Tasks or processes, which constitute job steps, are in one of three states: “running,” “ready,”
and “waiting” states, and an interrupt or other events will trigger the state transition.
c) Although a job is a concept mainly used in batch processing, it can also be used in online
processing. In addition, job steps correspond to processes or tasks. Note that in OSs running
on general purpose computers in which CPU allocation is managed in terms of tasks, tasks
corresponding to processes may be referred to as job step tasks, and tasks generated by job
step tasks may be referred to as child tasks.
d) Jobs are performed in the following stages: (1) preparation of execution, such as the
interpretation of JCL by the reader, (2) starting the execution of a job by the initiator, (3)
actual processing in accordance with the contents of job steps, (4) post-processing such as the
release of resources by the terminator, (5) output to printers by the writer. However, job
management functions provided by OS, such as reader and initiator, are not referred to as job
steps.

Q2-46 c) Explanation of job scheduling

In order to improve throughput, prioritization according to processing characteristics is


important. For example, in interactive processing, most of the time is consumed in user and screen
interaction (the period during which users are viewing the screen and entering information), and
the actual period of time when the CPU is used is very short in comparison. On the other hand,
batch processing involves no interaction with users, so the percentage of time during which the
CPU is used is high compared with interactive processing. Considering these characteristics, CPU
waiting time for interactive processing can be shortened by giving higher priority to interactive
processing. Since this processing time is short, CPU waiting time for batch processing with lower
priority is also short, so the throughput of batch processing will not be significantly affected. On
the other hand, when these priorities are reversed, batch processing will not release the CPU soon,
so CPU waiting time for interactive processing becomes long, resulting in poor throughput.
Therefore, c) is the appropriate description.

a) This is an explanation of round robin scheduling, not of FCFS.


b) Since a timer interrupt is a kind of hardware interrupt, an interrupt itself does not significantly
affect the throughput. In addition, enforced switching of CPU allocated to jobs, which is
performed by the OS using time slice (preemption), is generally used to improve throughput.
However, since preemption involves process switching (context switching) which generates
significant load, preemption beyond acceptable levels will cause throughput to decrease.
d) Basically, the same concept concerning the priority of interactive and batch processing can be
applied. Since jobs which use I/O intensively consume much time for I/O processing and the
percentage of CPU usage time is low, CPU waiting time can be shortened by giving priority
to these jobs, and as a result, the overall throughput is improved.


Q2-47 d) Calculation of turnaround time

Turnaround time is a system performance evaluation index used mainly for batch jobs.
Specifically, it is the amount of time required from the time the job is entered until the time the
processing results are actually acquired.
In the question, it is assumed that the multiplicity of jobs is one and scheduling according to
SPTF (Shortest Processing Time First) is applied. Therefore, among the jobs which have arrived
(waiting for the start of execution), the job with the shortest processing time is selected and
executed. The processing time of each job is the one when it is separately executed, as shown in
the table.
Jobs A through E arrive at times 0, 1, 2, 3, and 4 respectively, with processing
times of A: 2, B: 4, C: 3, D: 2, and E: 1. Tracing the schedule:

  Time 0–2 : A runs (the only job that has arrived)
  Time 2–5 : C runs (between B and C, C is chosen because it has a shorter
             processing time)
  Time 5–6 : E runs (among arrived jobs B, D, and E, E is chosen because it has
             the shortest processing time)
  Time 6–8 : D runs (between B and D, D is chosen because it has a shorter
             processing time)
  Time 8–12: B runs

Therefore, jobs are executed in order of A(2) → C(3) → E(1) → D(2) → B(4), and B ends at time
12 (seconds) from the arrival of A as time 0. Turnaround time for B is the time from the arrival of
B, which is 1 second after the arrival of A, so the answer is d) 11 (seconds).
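The schedule can also be checked mechanically. The Python sketch below is an illustrative, non-preemptive SPTF simulator (the function and variable names are our own; the arrival times are those given in the question):

```python
def sptf_turnaround(jobs):
    """jobs: list of (name, arrival_time, processing_time) tuples.
    Single CPU, multiplicity 1, non-preemptive Shortest Processing Time First."""
    pending = sorted(jobs, key=lambda j: j[1])   # not yet arrived, by arrival time
    ready = []                                   # arrived, waiting for the CPU
    time, turnaround = 0, {}
    while pending or ready:
        ready += [j for j in pending if j[1] <= time]
        pending = [j for j in pending if j[1] > time]
        if not ready:                            # CPU idle: jump to next arrival
            time = pending[0][1]
            continue
        ready.sort(key=lambda j: j[2])           # shortest processing time first
        name, arrival, proc = ready.pop(0)
        time += proc
        turnaround[name] = time - arrival        # completion time - arrival time
    return turnaround

jobs = [('A', 0, 2), ('B', 1, 4), ('C', 2, 3), ('D', 3, 2), ('E', 4, 1)]
print(sptf_turnaround(jobs))  # B's turnaround time is 11 seconds
```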

Q2-48 c) Task scheduling method

The method which first executes the task (job) with the shortest expected processing time is
called SJF (Shortest Job First). Although it is appropriate for performing tasks which require more
immediacy such as online real-time processing, if tasks with short expected processing times
continuously arrive at the CPU resource queue, there is a high possibility that tasks with longer
expected processing times may continue waiting for CPU resource allocation and will not be
executed for a long time. Therefore, c) is the appropriate answer.

a) A scheduling method in which multiple CPU resource queues are prepared according to task
priorities and CPU resources are allocated for tasks in queues with higher priority is called
priority scheduling. The problem with this method is that it may take some time before CPU
resources are allocated to tasks with lower priority. To address this problem, a technique
called aging is used in which the priority of a task is gradually raised according to its waiting
time.
b) The time limit during which tasks can use a CPU is referred to as time quantum or time slice.

The scheduling method in which a task is suspended and inserted to the end of the CPU
resource queue when it reaches this time limit is referred to as the round robin method. Tasks
in a CPU resource queue are executed in order, and CPU resources are allocated to the tasks
sooner or later.
d) This is an explanation of FIFO (First In First Out) in which tasks are executed in order of
arrival. Although there is the problem that later tasks have to wait, CPU resources will
eventually be allocated to those later tasks sooner or later.
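The round robin behavior described in b) can be sketched as follows (a minimal simulation; the task names, processing times, and the quantum of 2 are illustrative assumptions). It shows that even a long task eventually completes rather than starving:

```python
from collections import deque

# Round robin: each task runs for at most one time quantum; an
# unfinished task rejoins the end of the queue, so every task is
# eventually allocated CPU time and completed.

def round_robin(tasks, quantum=2):
    """tasks: dict name -> required time.  Returns completion order."""
    queue = deque(tasks.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= min(quantum, remaining)   # run one quantum
        if remaining == 0:
            finished.append(name)
        else:
            queue.append((name, remaining))    # back of the ready queue
    return finished

order = round_robin({"long": 5, "short1": 1, "short2": 1})
```

Here the long task finishes last, but it does finish, unlike under SJF with a continuous stream of short arrivals.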

Q2-49 a) State transition of tasks

In multitask control, tasks in the system are managed in three states: “running” in which tasks
are running, “ready” in which tasks can be executed at any time but are forced to wait because the
CPU is occupied, and “waiting” in which tasks are waiting for the completion of I/O operations.
When these three states are depicted as a transition diagram, it becomes the task (process) state
transition diagram. When a higher priority task becomes ready, a task in running state loses the
right to use the CPU and enters ready state, so a) is the correct answer. This type of task switching
(because of the arrival of tasks with higher priority) is called preemption.

b) Generated tasks first enter into a ready state.


c) When the processing of an I/O request has been completed, the task transits from waiting state
to ready state.
d) When a task issues an I/O request, the task transits from running state to waiting state in order
to wait for the completion of the request.

Q2-50 d) Preemptive and non-preemptive methods

Preemption in process control means that the system takes away CPU from a running process
and assigns it to another process. This preemptive method that performs preemption manages
processes in ready state by using prioritized queues. When a process with higher priority than the
running process becomes ready, the system performs preemption and takes away CPU from the
running process and assigns it to the process with the higher priority. On the other hand, the
non-preemptive method is a method in which the system does not perform preemption.
In the round robin method, the system assigns CPU to a process for a certain period of time
(called time quantum), and if the process does not finish after this period, the system adds the
process to the end of the ready queue having the same priority and then assigns CPU to the
process at the top of the queue. Preemption is performed when the process does not finish within
time quantum. Thus, this method can be classified as a preemptive method. In the first-come
first-served method, processing is performed in order of arrival (in order of entering into ready
state) and preemption does not occur until the process finishes. Thus, this method can be classified
as a non-preemptive method. Therefore, d) is the correct answer.
In the SPTF (Shortest Processing Time First) method, the system gives higher priority to
processes with the shortest processing time. In the “shortest remaining processing time” method,
the system gives higher priority to processes with the shorter remaining processing time. Both
methods execute processes with highest priority first. In reality, since the system cannot know the
processing time or remaining processing time in advance, these methods are implemented by the
feedback queueing method in which the system gives high priority to processes that newly become
ready and then lowers their priority if the processes do not finish within the time quantum. Since
preemption is required in order to implement this method, it can be classified as a preemptive
method.

Q2-51 d) Task scheduling

When tasks A through C are executed concurrently, CPU processing for task A which has
highest priority is performed first for 2 milliseconds. When CPU processing for task A finishes,
task A starts I/O processing, and CPU is allocated to task B which has medium priority. CPU
processing of task B is stopped after 2 milliseconds, CPU processing of task A is performed again
for 2 milliseconds, and then task A finishes. After this, the remaining CPU processing of task B is
performed for 1 millisecond, and when task B starts I/O processing, CPU processing of task C
with low priority starts. At this point, 7 milliseconds have passed after the start of any of three
tasks (it is actually task A which has higher priority). Therefore, d) is the correct answer.
This process can be depicted as shown below.

  Task A: CPU (0-2) → I/O (2-4) → CPU (4-6) → completed
  Task B: wait (0-2) → CPU (2-4) → wait (4-6) → CPU (6-7) → I/O → CPU → completed
  Task C: wait (0-7) → CPU (starting at 7) → I/O → wait → CPU → completed

Task C starts at time 7 (milliseconds).
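This preemptive priority scheduling can be sketched with a millisecond-by-millisecond simulation. The CPU/I-O pattern of each task up to time 7 follows the explanation; the I/O and final CPU durations after that point are illustrative assumptions, since they do not affect when task C first gets the CPU:

```python
# Preemptive priority scheduling.  Each task is a list of
# ("cpu", ms) / ("io", ms) segments; "cpu" needs the single CPU
# (highest priority wins every millisecond), "io" runs in parallel.

tasks = {  # name -> (priority, segments); lower number = higher priority
    "A": (0, [("cpu", 2), ("io", 2), ("cpu", 2)]),
    "B": (1, [("cpu", 3), ("io", 2), ("cpu", 1)]),
    "C": (2, [("cpu", 3), ("io", 2), ("cpu", 1)]),
}

def first_cpu_times(tasks, horizon=30):
    """Return the time at which each task first gets the CPU."""
    prio = {n: p for n, (p, _) in tasks.items()}
    state = {n: {"seg": list(segs), "io_end": 0, "first": None}
             for n, (_, segs) in tasks.items()}
    for t in range(horizon):
        for s in state.values():                 # progress I/O segments
            while s["seg"] and s["seg"][0][0] == "io":
                if s["io_end"] == 0:             # I/O just started
                    s["io_end"] = t + s["seg"][0][1]
                if t >= s["io_end"]:
                    s["seg"].pop(0)              # I/O finished
                    s["io_end"] = 0
                else:
                    break
        ready = [n for n, s in state.items()
                 if s["seg"] and s["seg"][0][0] == "cpu"]
        if ready:
            n = min(ready, key=prio.get)         # highest priority runs
            s = state[n]
            if s["first"] is None:
                s["first"] = t
            kind, ms = s["seg"][0]
            s["seg"][0] = (kind, ms - 1)
            if ms == 1:
                s["seg"].pop(0)
    return {n: s["first"] for n, s in state.items()}

starts = first_cpu_times(tasks)
```

The simulation gives A its first CPU time at 0, B at 2, and C at 7, matching the answer.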

Q2-52 a) Description of semaphore

In exclusive control, the P operation of a semaphore is the operation performed before entering
a critical section, and the V operation is the operation performed after exiting a critical section. In
general, with synchronization control, P operation corresponds to a wait operation and V operation
corresponds to a signal operation. Therefore, as described in a), by performing V operation, one of
the processes waiting for resources to be released will change from wait state to ready state.

b) P operation and V operation can be performed in any order. P operation decrements the
semaphore variable by one, and V operation increments it by one.
c) Multiple V operations can be performed consecutively on the same semaphore. For example,
when one resource is used in a multiplexed manner, V operations can be performed
consecutively according to the number of the multiplicity.
d) When a program uses multiple resources, there will be multiple semaphores.
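The P/V behavior described above can be sketched with a counting semaphore built on a condition variable (a minimal illustration; the 4-thread counter workload is an arbitrary example, not from the question):

```python
import threading

# Counting semaphore with the P (wait) / V (signal) operations from
# the explanation.  P blocks while the counter is 0; V increments the
# counter and moves one waiting thread from wait state to ready state.

class Semaphore:
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def P(self):                       # performed before a critical section
        with self._cond:
            while self._value == 0:
                self._cond.wait()      # wait for a V from another thread
            self._value -= 1

    def V(self):                       # performed after a critical section
        with self._cond:
            self._value += 1
            self._cond.notify()        # wake one waiter

sem = Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(1000):
        sem.P()
        counter += 1                   # critical section
        sem.V()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the semaphore protecting the critical section, all 4,000 increments are preserved.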

Q2-53 d) Description of thread and process

A process is an execution environment of a program. A thread is an executable unit in a process,


and one process contains one or more threads. Address space (memory space) is allocated to each
process, and threads within the same process share this address space. CPU resources (PSW
(program counter, status register) and registers) are allocated to each thread. While inter-thread
communication can be achieved by using (shared) memory within the same space without
switching address spaces, inter-process communication incurs significant overhead such as
address space switching. Therefore, d) is the appropriate description.

a) Context refers to the execution condition values of a process or thread and includes
information such as PSW, registers, stack pointers, and page conversion table (for processes
only). Context can be switched between threads within the same process without switching
address spaces, but it cannot be switched at high speed between threads in other address
spaces (processes) because the system must switch address spaces.
b) Threads are executed in the same address space as processes.
c) CPU resources are allocated to each thread, so stack and context are not shared between a
thread and a process.
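The point in d) can be illustrated with Python threads, which run in the same address space and therefore communicate through an ordinary shared object (the list and worker function here are illustrative):

```python
import threading

# Threads of one process share the process's address space, so they
# can exchange data through an ordinary shared object with no
# address-space switch.  (Separate processes would need IPC instead.)

shared = []                            # one object, visible to every thread
lock = threading.Lock()

def producer(tag, n):
    for _ in range(n):
        with lock:                     # exclusive access to shared memory
            shared.append(tag)

workers = [threading.Thread(target=producer, args=(t, 100))
           for t in ("t1", "t2", "t3")]
for w in workers:
    w.start()
for w in workers:
    w.join()
```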

Q2-54 b) Processes which may possibly cause deadlock

In order to avoid deadlock, a measure is taken so that the allocation order of resources to be
occupied is the same among the processes. Referring to the table, process A and process B occupy
resources in the same order, so they do not cause a deadlock. However, process C and process D
cause a deadlock in the cases shown below. Therefore, b) is the correct answer. Note that the
resource in bold waits for the release of other processes and is the direct cause of a deadlock.

  Process A and process C:   A: X → Y → Z    C: Z → X
  Process A and process D:   A: X → Y → Z    D: Z → Y
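The ordering rule can be checked mechanically (a sketch; process B's acquisition order is assumed to be the same as A's, as the explanation states, since the question's table is not reproduced here):

```python
# Deadlock-risk check based on resource-acquisition order: two
# processes can deadlock only if they acquire some pair of resources
# in opposite order.

orders = {
    "A": ["X", "Y", "Z"],
    "B": ["X", "Y", "Z"],   # same order as A (assumed from the text)
    "C": ["Z", "X"],
    "D": ["Z", "Y"],
}

def opposite_order(p, q):
    """True if p and q acquire some two resources in opposite order."""
    for i, r1 in enumerate(p):
        for r2 in p[i + 1:]:           # p acquires r1 before r2
            if r1 in q and r2 in q and q.index(r2) < q.index(r1):
                return True            # q acquires r2 before r1
    return False

risky = sorted({(p, q) for p in orders for q in orders if p < q
                and opposite_order(orders[p], orders[q])})
```

The check flags the pairs involving C and D against A and B, and flags no risk between A and B, which acquire resources in the same order.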

Q2-55 b) Explanation of API in OS

API (Application Programming Interface) is a mechanism that enables application software to


use various functions provided by an OS. It is a generic term for interfaces including functions,
commands, and utilities which facilitate the development of programs. Therefore, b) is the
appropriate description.

a) This is an explanation of mechanisms which enable applications to directly operate hardware


without using API, such as for fast screen rendering (e.g., DirectX in Windows).
c) A socket, an inter-process communication interface in TCP/IP, is one mechanism which allows
applications to communicate over a network.
d) This is an explanation concerning standardization of GUI (Graphical User Interface), for
example, Motif and OPEN LOOK in UNIX.


Q2-56 d) Allocation of file area

When file areas are allocated in accordance with rules (1) and (4) in the order specified in (2),
the result is as shown below. Thus, d) is the correct answer.

Allocation requests arrive in the order 90, 30, 40, 40, 70, 30. The cumulative amount allocated in each empty space after each request is:

                  Start   90   30   40   40   70   30   Amount of area allocated
  Empty space A     0     90   90   90   90   90   90            90
  Empty space B     0      0   30   30   70   70  100           100
  Empty space C     0      0    0   40   40  110  110           110

Q2-57 c) I/O management function

The correct combination of functions corresponding to A through C is c). The following are
supplements to the descriptions of each function.

• Device driver: It is part of an OS and performs device-dependent I/O control and is provided
for each device. In computers such as PCs, when a new peripheral device is connected, its
corresponding device driver may need to be installed.
• Spooling: It is a method of separating I/O operations that are slower in comparison with the
CPU and running those operations in parallel so that the CPU can be used efficiently. SPOOL
is an acronym for Simultaneous Peripheral Operation On-Line. Specifically, the output of a
job is temporarily stored in an intermediate auxiliary storage device, and the actual output
operation is performed from there in accordance with the speed of the output device. At this
time, the CPU is freed from this job and can perform another job.
• Buffer pool: Multiple programs perform I/O operations via buffer pools prepared in main
memory so that performance can be improved by reducing the idle time of each device and
increasing parallel operation.

Q2-58 c) Memory management at the time of executing programs

Option c) is the appropriate description of garbage collection.

a) “A technique which allows the system to execute programs larger than the amount of main
memory” is called overlay. In this technique, programs are divided into units called segments.
Segments are stored in an auxiliary storage device and loaded to main memory when they are
executed. Memory compaction is a technique to relocate programs and compress space in
order to create a single contiguous region by collecting the small free spaces in main memory
resulting from repeated allocation and release of spaces in main memory.
b) This is a description about dynamic relocation.
d) This is a description about dynamic linking.

The descriptions and names of the techniques for b) and d) are reversed.
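Memory compaction, as described in a), can be sketched as follows (the block names, addresses, and sizes are illustrative assumptions):

```python
# Memory compaction: relocate allocated blocks toward address 0 so
# that scattered free fragments merge into one contiguous free
# region at the top of memory.

def compact(blocks, total_size):
    """blocks: list of (name, start, size).  Returns (moved, free)."""
    moved, addr = [], 0
    for name, _start, size in sorted(blocks, key=lambda b: b[1]):
        moved.append((name, addr, size))       # slide the block down
        addr += size
    return moved, (addr, total_size - addr)    # single free region

# Holes at 30-50 and 70-90 before compaction:
blocks = [("p1", 0, 30), ("p2", 50, 20), ("p3", 90, 40)]
moved, free = compact(blocks, 200)
```

After compaction the two 20-byte holes become one contiguous 110-byte free region starting at address 90.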


Q2-59 d) Explanation of memory leak

Memory leaks, as shown in description d), occur when memory areas allocated to processes
remain unreleased even when they are no longer required. As this problem progresses, available
memory space decreases. As a result, it causes unstable operation, poor response, or in the worst
case, hang-up. Memory leaks are often caused by bugs in the memory management code in the OS
or applications.

a) This is a description about memory fragmentation.


b) This is a description about overlay.
c) Areas other than those allocated for a process are protected by memory protection keys and
generally are write-protected. This is not called memory leak.

Q2-60 a) Memory management function of OS

The appropriate combination of functions corresponding to A through C is a). The following are
supplements to the descriptions of each function in the question.

• Overlay: A program is divided into several units (such as modules) and stored in an auxiliary
storage device in advance. These units are loaded into main memory from the auxiliary
storage device as they become necessary in accordance with the directions of the program.
The units are loaded over program units that have become unnecessary in main memory. Thus
this method is referred to as overlay. This function was used before the virtual memory
function was implemented.
• Paging: This is a method of implementing a virtual memory system by using an auxiliary
storage device. A page is a fixed-length fragment of memory and is used as a unit of data
loaded/unloaded between main memory and an auxiliary storage device.
• Swapping: This term is a synonym of roll-in/roll-out. While it is often confused with paging,
the term is originally used to refer to unloading/reloading in units of processes or jobs.

Q2-61 b) Handling of multiple interrupts

A real-time OS is an OS which is suited for real-time processing such as controlling of


equipment. In real-time processing, processing requests are issued from moment to moment as
interrupts and each request must be handled (responded to) within an acceptable response time.
Specifically, it is required that urgent processing requests be handled as soon as possible with a
higher priority than other processes. Multiple interrupts mean that a new interrupt is generated
during the handling of another interrupt. In a real-time OS, multiple interrupts must be handled
based on the urgency (priority) of each interrupt.
Among the options, c) first handles a new interrupt that occurred during the handling of another
interrupt, and d) handles the new interrupt after finishing the handling of the current interrupt. As
described above, in real-time processing, since interrupts must be handled based on their priority
rather than their order of occurrence, these two options are incorrect.
On the other hand, a) and b) are matters related to interrupt masks. An interrupt mask is an
operation in which interrupts with priorities lower than a certain level are not accepted while an
interrupt is being handled. In the case that the order of interrupt handling is controlled based on
priority, when many interrupts with lower priority occur during the handling of an interrupt with
higher priority, the handling of lower priority interrupts is delayed based on their priority.
However, every time an interrupt occurs, processing of the current interrupt must be suspended in
order to check the priority of the new interrupt. To resolve this issue, an operation called interrupt
mask is available so that interrupts with priorities lower than a certain level cannot be accepted.
Therefore, the description “masks an interrupt with lower priority rather than the one it is currently
handling” in b) is appropriate.
Note that when interrupts with a higher priority are masked as described in a), an urgent
processing request may be delayed, which is inappropriate for real-time processing.

Q2-62 d) Description of the role of shell in OS

A shell is a program which accepts, interprets, and executes commands entered by a user. It
corresponds to a user interface program in UNIX or COMMAND.COM in MS-DOS. The program
which reads, interprets, and executes commands entered by a user is generally referred to as a
command interpreter, but it is referred to as shell especially in UNIX. Therefore, d) is the
appropriate description.

a) This is a description of a shortcut key.


b) Security management and exclusive control are functions provided by an OS, not by shell.
c) This is a description about icons (shortcuts) placed on the desktop screens of computers such
as a PC.

Q2-63 a) Explanation of path specification in file systems

In file systems that have a directory structure, files are managed in a hierarchical tree structure
by using directories that contain information about files. We will provide an explanation here
using an example of UNIX which uses such a file system. The top directory in the tree structure is
called the root directory and represented as “/”. Each directory can contain any number of files or
directories under it.
The root directory "/" contains directories d1 and d2; d1 contains files f1 and f2; d2 contains file f3 and directory d3.

Fig. 1 Example of file structure

A path specification is to specify a route (path) from a certain place to a file (or directory), and
there are two kinds of path specifications: an absolute path specification which describes the path
from the root directory to the file, and a relative path specification which describes the path from
the current working directory (current directory).


Let us check the details of each description about path specifications.

a) In file systems that have a directory structure, any directory or file can be referenced by
specifying their routes (paths). Between parent and child directories, it is possible to refer to a
parent (i.e., a directory one-level above) from a child directory, not to mention to a child from
a parent. Using the example shown in Fig. 1, Fig 2 shows referencing in both directions by
using the relative path specification where parent directory is d2 and child directory is d3.

Reference from parent to child: "d3" (with d2 as the current directory)
Reference from child to parent: ".." (with d3 as the current directory)

Fig. 2 Referencing between parent and child directories

b) When the current directory is the root directory, the relative path specification and the absolute
path specification are different as shown in Fig. 3. Note that the current directory is
represented as “.” in the relative path specification for UNIX.

  Relative path specification: .   d1   d2   d1/f1   d1/f2   d2/f3   d2/d3
  Absolute path specification: /   /d1  /d2  /d1/f1  /d1/f2  /d2/f3  /d2/d3

Fig. 3 Relative path specification and absolute path specification

c) When referencing a parent directory from a child directory, the specification is represented as
“..” using the current directory as the starting point as shown in Fig. 2. This is a relative path
specification, not an absolute path specification.
d) Specifying a path to the target file from the root directory is an absolute path specification, not
a relative path specification. Representation in an absolute path specification will be the same
regardless of the current directory.
Therefore, a) is the appropriate description.
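The relative reference in c) and the current-directory independence in d) can be checked with the standard library's posixpath module, which implements UNIX-style path rules (the tree is the one in Fig. 1):

```python
import posixpath

# UNIX-style path rules applied to the Fig. 1 tree
# (directories /d1, /d2, /d2/d3).

current = "/d2/d3"                                  # current directory

# c) ".." is a *relative* reference to the parent directory:
parent = posixpath.normpath(posixpath.join(current, ".."))

# d) an absolute path names the same file from any current directory:
target = "/d2/f3"
```

Resolving ".." against /d2/d3 yields /d2, while the absolute path /d2/f3 is valid regardless of the current directory.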


Q2-64 a) Specification of files in directories

In file systems used in workstations and PCs, files are stored in containers called directories that
are managed in a hierarchical structure. The location of a file or directory in a file system can be
expressed using an absolute path specification in which files and directories are represented as a
path from the top level, and a relative path specification in which files and directories are
represented as a path from the current position of a process (current directory). This question
concerns the mapping between these methods.

By tracing the figure as shown below, it is clear that a) is the correct answer.

(1) The initial position is directory \A directly under the root.
(2) Applying the relative path ..\B leads to \B.
(3) Applying the relative path .\A\B leads to \B\A\B.

b) In an absolute path specification, the result is \B\B\B.


c) In an absolute path specification, the result is \B.
d) In an absolute path specification, the result is \B\B.
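The same trace can be checked with posixpath if the backslash separators are rewritten as "/" (the starting directory \A is read off the figure):

```python
import posixpath

# Tracing Q2-64 with "/" in place of "\".  Starting at /A, apply the
# relative paths ../B and ./A/B in turn.

def move(current, relative):
    """Resolve a relative path against the current directory."""
    return posixpath.normpath(posixpath.join(current, relative))

step1 = move("/A", "../B")        # ..\B   ->  \B
step2 = move(step1, "./A/B")      # .\A\B  ->  \B\A\B
```

The final absolute path is /B/A/B, i.e., \B\A\B in the question's notation, confirming answer a).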

Q2-65 a) Development tools

A simulator is a tool which mimics (or simulates) the behavior of software or hardware or both.
A program which generates large artificial traffic to evaluate the performance of online processing
is a kind of simulator. Therefore, a) is the appropriate answer.

b) A tool used to check the content of an object at the time of debugging a program is referred to
as an inspector. There is also a tool called snapshot which records the values of variables. A
code auditor (code audit tool) is a tool to detect programs which violate development
standards including programming format, naming conventions, and commands that can be
used.
c) A tool that has functions including automatic replacement of string to edit program source
code is an editor. A tracer is a tool to support debugging by printing out instructions and their
results in order of program execution.
d) A tool which inserts logical conditions that should hold true between variables into
appropriate places in programs to inspect whether or not they are satisfied at run time is
referred to as an assertion checker. A coverage monitor is a tool to monitor the run time
coverage of instructions in programs.


Q2-66 c) Creation of object code executable on different platforms

A language processing program which generates object programs for different machine
platforms that have different instruction formats is referred to as a cross compiler. For example, in
the case of developing embedded systems for cell phones or household electronic appliances,
since development is not possible on such target machines, programs are developed on a PC and
the object program that can run on the target machine is generated using a cross compiler. This
kind of development method is sometimes referred to as cross building. Therefore, c) is the correct
answer.

a) An interpreter is a language processing program which executes source code while translating
it line by line into machine language. Interpreted language includes BASIC and Perl.
b) An emulator is software which enables a computer to execute programs developed for a
machine platform which has a different instruction format. For example, it provides a function
to execute programs for dedicated gaming machines on a PC.
d) A generator is a language processing program which generates programs by specifying
processing conditions as parameters. Program development support tools classified as fourth
generation languages often provide generator functions.

Q2-67 a) Functions of upper CASE tools

An upper CASE tool is a tool which supports activities in upstream processes of system
development including basic planning, external design, or internal design. Thus, a function to
create a DFD (Data Flow Diagram) required at the design stage is classified as a function provided
by an upper CASE tool. Therefore, a) is the appropriate answer.

b) A function to support the creation of test data is provided by a lower CASE tool.
c) A function to automatically generate program code is provided by a lower CASE tool.
d) Although a function to support library management may be classified as a lower process in
consideration of managing libraries such as source programs, object modules, and load
modules, it is difficult to classify this function as an upper or lower process in consideration of
including the management of documents that are deliverables in upstream processes.

Q2-68 c) Characteristics of OSS defined in OSI

OSI (Open Source Initiative) defines open source software as having distribution terms that
comply with ten criteria. Among these criteria, “1. Free Redistribution” states that the selling of
software including open source software must not be restricted, and “6. No Discrimination Against
Fields of Endeavor” states that making use of the software must not be restricted based on its
purpose of use. Therefore, c) is the correct answer.

a) In “9. License Must Not Restrict Other Software,” placing restrictions on the license of the
software distributed along with open source software is prohibited.
b) “1. Free Redistribution” assures the act of selling or giving away open source software.
d) “4. Integrity of The Author’s Source Code” assures the distribution of derived software.


Q2-69 c) Considerations in the use of OSS

Although there is no single definition of OSS (Open Source Software), OSI (Open Source
Initiative), a group promoting OSS, defines the following ten conditions.

1. It must be freely redistributable.


2. Source code must be disclosed.
3. The creation of derived software must be allowed, and the same license must be inherited.
4. The author of the software can require a person who modifies and distributes the software to
divide the distributed software into the author’s source code and difference software (integrity
of the author’s source code).
5. Persons or groups must not be discriminated against.
6. Fields of endeavor must not be restricted.
7. Additional licenses must not be requested at the time of redistributing.
8. A license must not be specific to a product.
9. A license must not restrict other software distributed on the same media.
10. A license must be technology-neutral.

Commercial software which incorporates OSS is derived software of OSS. In c), there is a
statement that says “depending on the license.” When the license of OSS complies with the above
conditions 1 through 10, the same license is applied to the commercial software (i.e., terms of
OSS) based on condition 3, and the source code must be disclosed as defined in condition 2.
Therefore, c) is the appropriate answer.

a) OSS developers are not responsible for assuring the quality of software. User
self-responsibility is one of the characteristics of OSS.
b) As defined in conditions 3 and 6, fields of endeavor must not be restricted, and source code
must be disclosed.
d) As defined in condition 3, when the original developer of OSS specifies a licensing term
stating that the source code of derived software must be disclosed, the developer of the newly
derived software cannot make its source code undisclosed at his/her own discretion.

Q2-70 c) Semiconductor memory using flip-flop

A flip-flop is a circuit that has two stable states and is used as the memory cell of SRAM,
described in c). SRAM is used as cache memory owing to its high-speed access. Since its circuit is complex
and high integration is difficult, there is no SRAM with capacity as large as DRAM. In addition,
the cost is higher than DRAM.

a) DRAM: A type of RAM which needs to be refreshed every several milliseconds because the
electrical charge weakens over time. Although access speed is lower than SRAM because of
this behavior, its circuit is simple, cost is low, and creating high-capacity memory is relatively
easy. It is used as main memory, RAM disk, etc.
b) EEPROM (Electrically Erasable Programmable ROM): Its characteristic is that the memory
content does not disappear when the power is turned off (nonvolatile). Although ROM means
read-only memory, rewritable ROM is referred to as PROM (Programmable ROM). Among
PROMs, flash EEPROM (flash memory) allows its content to be electrically erased entirely or
in units of blocks and is used as a storage medium for digital cameras or IC cards.
d) Mask ROM: A kind of ROM in which memory content is set when manufacturing as
read-only memory. Data cannot be rewritten.

Q2-71 a) System LSI

With the rapid advancement of semiconductor microfabrication technology, it has become
possible to integrate into one silicon chip (or to a similarly high level of integration) entire
systems that were previously built by combining separate LSIs, or to integrate LSIs requiring
slightly different manufacturing technologies into the same chip. An LSI chip which integrates logical
circuits including the CPU along with memory, digital circuit, and analog circuit, and provides
various functions and performance on a single chip is referred to as system LSI. Integrating
various functions onto one chip is often called SoC (System-on-a-Chip) technology.
When system LSI is developed, functional blocks developed by different developers are reused
for efficient development and shorter development time. Functional blocks whose functions and
external interfaces are standardized as common specifications and which are provided as libraries
so that they can be reused are referred to as “IP core” or “IP.” IP stands for intellectual property,
and reusable design asset is referred to as IP core.
Therefore, a) which points out the “combination of IPs” is the appropriate description.

b) Although it is possible to embed an OS onto an LSI chip, “embedding an OS” is not a


requirement of a system LSI chip, so this description is not appropriate.
c) Although it is possible to “consolidate multiple microcomputers,” it is not a requirement of a
system LSI chip, so this description is not appropriate.
d) With regard to the statement “multiple LSIs are consolidated in one board,” it is not a
description of system LSI if we regard the “board” as a printed circuit board, so it is not
appropriate. If we regard the “board” as a silicon chip, consolidating “multiple LSIs with the
same functionality” is not a requirement of a system LSI chip, so this description is not
appropriate.

Q2-72 b) Power-saving technology in microprocessors

In a CMOS circuit, current flows mainly at the time of switching, so less power is consumed
when the circuit operates at a lower clock frequency, where switching occurs less often.
Therefore, b) is the appropriate answer.

a) Power consumption of CMOS is lower than that of bipolar (transistor).


c) Clock gating is a method of reducing power consumption in an internal circuit of an integrated
circuit by stopping clock input to the part of the circuit which does not need to operate. In this
method, power consumption can be reduced even when the CPU is not in standby mode.
d) In general, power consumption in an integrated circuit is proportional to the frequency and the
number of gates and to the square of the power voltage. Therefore, as the operating voltage is
raised, power consumption of the circuit increases.


Q2-73 b) Truth table of logical circuit diagram

Based on the logical circuit, the logical expression of output Z is Z = (X̄ + Ȳ) · (X + Y), as
shown in the diagram below. Here, "+" is used for the logical sum, "·" for the logical product,
and "X̄" for the logical negation of X.
The table below is the truth table for each element of this logical expression. Therefore, b) is the
correct answer, and Z is the exclusive logical sum (XOR) of X and Y.

(Circuit: X and Y are each inverted by a NOT gate and fed to one OR gate, producing X̄ + Ȳ;
X and Y are also fed directly to a second OR gate, producing X + Y; both OR outputs are fed
to an AND gate, whose output is Z = (X̄ + Ȳ) · (X + Y).)
(X̄, Ȳ: logical negation (NOT); X̄ + Ȳ, X + Y: logical sum (OR); their combination: logical product (AND))

  X  Y  X̄  Ȳ  X̄ + Ȳ  X + Y  (X̄ + Ȳ) · (X + Y)
  0  0  1  1    1      0            0
  0  1  1  0    1      1            1
  1  0  0  1    1      1            1
  1  1  0  0    0      1            0
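The equivalence shown in the truth table can be verified exhaustively:

```python
# Exhaustive check that Z = (NOT X OR NOT Y) AND (X OR Y) equals the
# exclusive logical sum X XOR Y for all four input combinations.

def circuit(x, y):
    return int(((not x) or (not y)) and (x or y))

outputs = [circuit(x, y) for x in (0, 1) for y in (0, 1)]
is_xor = all(circuit(x, y) == (x ^ y) for x in (0, 1) for y in (0, 1))
```

The outputs 0, 1, 1, 0 for inputs (0,0), (0,1), (1,0), (1,1) are exactly the XOR column of the table.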

Q2-74 d) Operation of logical circuit

The table below shows the output values for each input of the logical circuit.

(AND: logical product, OR: logical sum, XOR: exclusive logical sum, NOT: logical negation)

  A  B  A AND B  A OR B  A XOR B  NOT A
  0  0     0        0        0      1
  0  1     0        1        1      1
  1  0     0        1        1      0
  1  1     1        1        0      0

First, we will focus on the I/O of the XOR (Exclusive OR) gates placed in the left column of
each circuit in the options. Checking the I/O of the XOR gate in this table:
  The output is 1 in two cases: when A=0 and B=1, or A=1 and B=0.
  The output is 0 in two cases: when A=0 and B=0, or A=1 and B=1.
This means that the output of an XOR gate is 1 when the number of 1s in inputs A and B is odd.
In the two XOR gates placed at the left of each circuit, when we count the number of 1s in the 4
bits of input data, the relationships shown below hold true for even and odd numbers of 1s given
as input in each XOR gate. Here, by replacing even numbers of 1s with 0 and odd numbers of 1s
with 1, these relationships can be represented by those shown to the right of the arrows.


even + even = even   →   0 + 0 = 0
odd  + even = odd    →   1 + 0 = 1
even + odd  = odd    →   0 + 1 = 1
odd  + odd  = even   →   1 + 1 = 0

These relationships are identical to the truth table of the XOR gate, so we can see that a single
XOR gate can be used to consolidate the two XOR gates placed in the left column of each circuit.
In this case, when the number of 1s in the 4 bits of input data is odd, the output of the XOR gate
at the right column of the circuit is 1. The condition given in the question is “output is 1 when the
number of 1s given as input is 0 or even,” so we can see that we can obtain the expected result by
inverting the output of the XOR gate at the right column of the circuit by using a NOT gate.
Therefore, d) is the correct answer.
In addition, in the case where the number of 1s given as input is 0, from the table for the output
value of the XOR gate, we can see that the circuit shown in d) can be used for making a correct
decision.
Note that this circuit is for generating/checking odd parity.
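As a supplementary sketch (not part of the original explanation), the circuit in d) can be modeled
in a few lines of Python: a chain of XOR gates followed by a final NOT gate.

```python
# Model of circuit d): cascaded XOR gates compute the parity of the
# input bits, and a final NOT gate inverts the result, so the output
# is 1 exactly when the number of 1s among the inputs is 0 or even.

def circuit_output(bits):
    parity = 0
    for b in bits:
        parity ^= b          # cascaded XOR gates
    return 1 - parity        # final NOT gate

for bits in [(0, 0, 0, 0), (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 1)]:
    print(bits, circuit_output(bits))
```

Running this confirms the condition in the question: inputs with zero or an even number of 1s
produce 1, and inputs with an odd number of 1s produce 0.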

64
Morning Exam Section 3 Technological Elements Answers and Explanations

Section 3 Technological Elements

Q3-1 d) Explanation of Web content usability

Usability refers not only to ease of use for the physically challenged or the elderly, but also to
general ease of use. Therefore, d) is an appropriate explanation of Web content usability.
International standard ISO 9241-11 provides, in the form of “Guidance on Usability,” guidelines
for ergonomic measurement of the ease of use of visual display terminals in offices. These
guidelines define usability as “the extent to which a product can be used by specified users to
achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of
use.”
a) This is an explanation of universal design.
b), c) These descriptions both pertain to barrier-free design.

Q3-2 d) Accessibility

Accessibility is a keyword for describing the ease with which software, information services, or
information systems, such as Web sites, can be used from a user interface perspective. Therefore,
d) is the correct answer. It can also be considered as “the degree to which universal design is
implemented within an information system,” through design choices such as screen color schemes
and font sizes, the implementation of audio speech functions, the attachment of explanatory text
for images and audio, or the like, in order that anyone, including the elderly and the physically
challenged, can sufficiently use the information system in question. It includes not only integrated
so-called “barrier-free” design and functions, but also equipment and software which can be used
to set and adjust these functions in order to suit an individual user’s physical characteristics.

a) This explanation refers to the Basic Resident Register Network System. This is a system for
the sharing and use of information on individuals that have been assigned resident registration
codes (individuals recorded in the Basic Resident Register) by networking governmental
organizations and centers nationwide with local governments (prefectural and municipal).
b) This is a description of interoperability.
c) This is a description of traceability.

Q3-3 c) Pull-down menus and pop-up menus

Pull-down menus and pop-up menus are used for input when there are a relatively small number
of predetermined choices. There are only five categories in c), so data entry is made simple by
providing means to select one of these categories. Thus, the use of pull-down menus or pop-up
menus is appropriate.

a) When a product number is assigned to each product, a new number will be assigned each time
a product is registered. When numbers are assigned automatically, the product numbers only
need to be displayed, so input is not necessary. However, if number assignment is not
performed automatically, direct input is preferable.
b) A new product name is also entered directly at the time of product registration.
d) Prices vary from product to product, so it is preferable to directly enter the initial data, as with
a) and b). However, in special cases, such as when there are many types of products but very
few product prices, the use of pull-down or pop-up menus is suitable. This option, though,
specifies “a range of 10,000 to 100,000 yen,” so this option should not be considered as one of
those special cases.

Q3-4 c) Human interface design

When looking at the answer group for functions related to operation consistency, “on every
screen” is a key word. In other words, it means to provide consistency of operations across
multiple screens. For example, by keeping the sizes and positions of the “OK,” “Cancel,” “Next,”
“Back,” and similar buttons consistent on every screen, it is possible for users to quickly learn the
operation of the screens. Therefore, c) is the correct answer.

a) The purpose of the “undo” function, which returns a system to its previous operational state, is
to prevent problems because of user operation errors.
b) The purpose of shortcut keys is to save the time and effort involved in information entry.
d) The purpose of displaying progress status is to eliminate user’s feeling of unease by providing
processing feedback to the user.

Q3-5 d) Output form design

This is a basic question about output form design, and is fairly easy.

a) This option states that items with the same form should be arranged close to each other, with
numbers close to other numbers and Kanji close to other Kanji, but this is incorrect. Items
must be designed to be placed in locations which fit the carrying out of work and work
procedures.
b) This option says that when forms are converted to digital form, a new format should be
designed without giving consideration to the currently used output format, but this is incorrect.
As a general rule, digital forms should be based on current forms, which support the carrying
out of work, and that new sections should be designed if needed.
c) This option says that it is advisable to set the same number of printed digits in output items as
the internal number of characters in the data, but this is not always necessary. Approximate
numbers are sufficient for items used for confirming overall trends, and should be decided in
conjunction with the intended purpose of output lists.
d) For uniformity across forms, it is of course preferable for paper sizes, form titles, printing
methods, and the like to be designed in accordance with company or department rules, so this
is the appropriate description.

Q3-6 c) Code systems

The code which corresponds to the question is mnemonic code. “Mnemonic” means “assisting
the memory,” and mnemonic code is code which assists people’s memory by suggesting the
content of data. Using symbols in assembler language instructions such as ADD to represent an
addition instruction is an example of a mnemonic code.

a) Sequence code: Sequence codes are codes ordered in accordance with defined rules. For
example, these can include codes in natural number order, or alphabetical order.
b) Decimal code: It is a code in which each digit of a decimal number is expressed by a string of
a specified number of bits.
[Example: Expressing each digit of a decimal number with 4 bits]

   1    5
0001 0101

d) Block code: It refers to the sets of symbolic strings created by encoding data which has been
divided into blocks, and the method for performing that encoding.
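The decimal (BCD) encoding described in b) can be sketched as follows (an illustrative snippet,
not part of the original answer; the function name is made up):

```python
# Decimal code (BCD): each decimal digit is expressed by its own
# 4-bit binary string, e.g. 15 -> "0001 0101" as in the example above.

def to_bcd(n):
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(15))   # 0001 0101
```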

Q3-7 b) Lossy compression

There are two types of image encoding methods: lossless compression method and lossy
compression method. With lossless compression, expanding the compressed data results in the
original data being perfectly restored. This is used for compressing programs and data, but the
compression ratios offered are not very high. Conversely, with lossy compression, the original data
cannot be perfectly restored by expanding the compressed data. This is used for compressing
images, audio, and the like, in which a certain degree of error is acceptable. It offers high
compression ratios.
JPEG (Joint Photographic Experts Group) is a compression method for use on full-color still
images. It is generally lossy, but there also exist lossless JPEGs. Therefore, this corresponds to “a
compression method with which the original data may not be fully recovered when compressed
data is decompressed,” and thus b) is the correct answer.

a) GIF (Graphic Interchange Format): This is a lossless compression format developed by
CompuServe for compressing 256-color still images.
c) ZIP: This is a file compression format developed by PKWARE.
d) Run length method: This is a compression method that counts how many times the same data
appears consecutively, and uses that number as data to perform compression.
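The run length method in d) can be sketched as follows (a minimal illustration only; real
run-length formats such as PackBits differ in encoding details):

```python
from itertools import groupby

# Run length encoding: record each value together with the number of
# times it appears consecutively.
def rle(data):
    return [(value, len(list(run))) for value, run in groupby(data)]

print(rle("AAAABBBCCD"))  # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```

Long runs of identical values compress well, which is why the method suits simple images
with large areas of a single color.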


Q3-8 d) Compression standards used for video delivery services

MPEG-4, given in d), is the compression standard used for transmitting video data to portable
communications devices (e.g., cell phones). Initially, MPEG-4 was developed for wireless image
transmission supporting data-encoding speeds of 64 kbps or less, but it was later expanded for
Internet usage, with its application scope extended such that it has become a multimedia encoding
standard specification. The encoding speed range has also expanded, now extending from 5 kbps to
4 Mbps.

The meanings of the other terms used are given below.


a) JPEG: This is a compression method for color still images.
b) MMR (Modified MR): This is a compression method that is primarily used by G4 FAX, and
suited to monochrome images without gray scale information.
c) MP3 (MPEG Audio Layer 3): This is a layer 3 compression method for MPEG-1 audio.
Layers 1 and 2 are used for digital broadcasting.

Q3-9 a) Character codes used in computers

ASCII code, defined by ANSI (American National Standards Institute, the U.S. counterpart of
Japan’s JIS), is a 7-bit character code set used to represent alphabetical characters, numbers,
and the like. It does not include Kanji. The description in a) is appropriate.

b) EUC (Extended UNIX Code): EUC includes Kanji. EUC is a 2-byte or greater character code
set for handling characters from every country by UNIX systems in a unified way. Kanji codes
are expressed with 2 bytes, but JIS supplemental Kanji, including control characters, are
expressed with 3 bytes.
c) Unicode: This is a character code set for handling characters from every country in a unified
way. All characters are assigned a 2-byte code or a 4-byte code. The description applies to
Shift-JIS code.
d) Shift JIS code: This is a modification of the JIS Kanji code. The first byte is shifted in order to
avoid overlapping with ASCII code, so it can be used individually or mixed together with
ASCII code. The description applies to EUC.

Q3-10 b) Explanation of virtual reality

Virtual Reality (VR) is the “computer representation of information as having real physical
form by appealing to the five human senses, primarily those of sight and sound.” As it states in b),
CG, sensors, and other technologies are used to represent computer generated worlds as if they
were real. Input and output devices, such as HMD (Head Mounted Display), which make objects
appear three-dimensional, and haptic data gloves, which produce the sensation of touch, are used.
Virtual reality is used for education, training, learning, medical, design, and other applications.


a) This is a description of interlaced GIF (Graphics Interchange Format) and progressive JPEG
(Joint Photographic Experts Group) image file formats, primarily used on the Internet. When
normal GIF formatted image data is displayed, the data is gradually displayed from top to
bottom as it is downloaded. With interlaced GIFs, first the entire image is displayed in a
low-resolution, blocky form. As the file download proceeds, the image becomes progressively
more detailed and less blocky. The word “interlacing” comes from the interlacing of display
scan lines. Progressive JPEGs use the same display approach, enabling viewers to get a grasp of
the overall image before downloading is completed.
c) This is a description of simulations, one of the applications of virtual reality. It is not a
description of virtual reality itself, and thus is not an appropriate answer.
d) This is a description of VFX (Visual Effects), in which computers are used to make composite
images from separately captured real images. SFX (Special Effects) is sometimes used
synonymously, but in actuality refers to compositing methods which do not use computers.
VFX and SFX are sometimes used interchangeably.

Q3-11 a) Computer graphics

Display screens are composed of a grid of points called pixels. By setting the color of each
screen pixel, a shape can be displayed on the screen. Since the pixels are arranged on a grid, when
there are diagonal or curved lines at the edge of a shape, those sections are displayed as a jagged
line. In order to make this jaggedness less obvious, a technique called anti-aliasing is used, where
pixels are assigned colors midway between the colors of the pixels on either side, so a) is the
correct answer.

b) Clipping refers to specifying one part of an image in order to set a limit to the processing
range.
c) Shading refers to adding shadows in order to create a three-dimensional effect.
d) Morphing refers to creating intermediate images between two images before and after a
change in order to make the image change smoother.

Q3-12 b) Database three-schema structure

The three-schema structure is composed of external schema (data item groups which describe
the logical structures needed from individual user perspectives), conceptual schema (data item
groups which describe the logical relationships inherent in the data), and internal schema (data
item groups which are described taking into consideration hardware, performance, recovery, and
security). The relationship between these is shown in the figure below.


[Figure: the three-schema structure — the internal schema, the conceptual schema built on it,
and any number of external schemas (external schema 1, 2, ..., n) defined over the conceptual
schema.]

By dividing database into conceptual schema, which describes logical data relationships, and
external schema, which indicates how users wish data to be displayed, it is possible to establish
logical data independence. Specifically, even if there are changes to the logical relationships
between data (table or data items are inserted or deleted), if they do not relate to how users wish
data to be displayed, then they have no effect on displayed data. Therefore, the description in b) is
appropriate.

a) As mentioned above, the three-schema structure is composed of the external schema, the
conceptual schema, and the internal schema.
c) (Incorrect) internal schema → (Correct) conceptual schema
d) (Incorrect) external schema → (Correct) internal schema

Q3-13 d) Identifying primary keys from functional dependencies

The primary key of a relation is the minimal attribute set which can be used to uniquely specify
a relation tuple (in SQL, a row). “Minimal” means that if a single attribute were to be removed
from that attribute set, it would no longer be able to perform the role of the primary key. In other
words, it is the minimum set of necessary attributes.

a) From the functional dependency contents of (1) to (7), it is apparent that the order number
alone is insufficient to determine product numbers, product names, and quantities, so “Order_
number” is not the primary key.
b) As with a), this set is insufficient to determine product numbers, product names, and
quantities. Functional dependency (2) is “Order_number → Customer_number,” so
“Customer_number” can be determined based on the order number. Therefore, it is
meaningless (excessive, redundant) to add “Customer_number”. Therefore, even if the set of
“Order_number and Customer_number” were sufficient to uniquely identify tuples, it would
not be minimal. Therefore, this is not the primary key.
c) With these three attributes, all attributes can be determined. However, as with b), the customer
number is included, which means that this set is not minimal, and therefore it is not the
primary key.
d) According to the functional dependencies (1) through (7), the set of “Order_number and
Product_number” is sufficient to determine all the attributes (equivalent to identifying each
tuple), and the set is minimal. Thus, this fits the definition of a primary key, and is the correct
answer. It is important to contrast this answer with c) and note that one of the requirements of
a primary key is that it is “minimal.”

Q3-14 c) Data model

The order, order details, and teaching products sections of each option are identical, so let us
focus on the other sections. Given the condition described in the question, that “each salesperson
is assigned to multiple geographic regions,” it is clear that salespersons have relations with
multiple regions. From the “multiple salespersons are assigned to the same region” section, we can
conclude that the relationship between regions and salespersons is many-to-many, and that
“managed region” is a (relational) entity which indicates this relationship. Looking at the answer
group from this perspective, option c), in which regions and salespersons have a many-to-many
relationship through managed regions, is the correct option. Looking at other parts of c), we can
see that customers and regions, customers and orders, and orders and salespersons are all
one-to-one relationships, which do not conflict with the question’s [Business rules]. Therefore, c)
is the correct answer.

a) Regions and customers have a many-to-many relationship through managed regions. Managed
regions indicate the relationships between salespersons and the regions which they manage,
not regions and customers. This option is also incorrect, because a single customer does not
belong to multiple regions.
b) This option is incorrect because salespersons and managed regions have many-to-many
relationships through regions.
d) This option is incorrect because salespersons and managed regions have many-to-many
relationships through customers.

Q3-15 c) Explanation of ACID characteristic (atomicity)

Transaction results must be meaningfully consistent for users. ACID characteristics ensure that.
ACID stands for Atomicity, Consistency, Isolation, and Durability. The description that
corresponds to atomicity is c), and it means that transaction processing must not be terminated in
an incomplete state. Therefore, c) is the appropriate description.

a) This corresponds with consistency.


b) This corresponds with durability.
d) This corresponds with isolation.


Q3-16 c) Explanation of data model

In the data model in the question, the relationship between “book titles” and “collection books”
is one-to-many. From the table definition contents, when there are multiple copies of the same
book, even if the purchase date is the same for each, each book is managed as a “collection book,”
and those copies are all related to the same “book title.”
In other words, collection books are entities for managing actual books possessed by the library,
and correspond with actual physical entities, while book titles only indicate abstract entities
concerning books, such as collection book names and authors, and do not correspond with actual
physical entities. Therefore, c) is appropriate.

a) The relationship between book titles and collection books is a one-to-many relationship, so the
current situation is correct.
b) Book titles and collection books have been separated as a result of normalizing to the third
normal form, and are not redundant.
d) There is no direct relationship between reservations and collection books, so even if all
collection titles to be loaned are known when a reservation is placed, the specific collection
book to be loaned cannot be determined.

Q3-17 d) Correctly defined third normal form table

The functional dependency diagram in the figure can be represented in terms of functional
dependencies of attribute sets as shown below.
(1) a →{b, c, d, e}
(2) b →{f, g}
(3) {b, c}→h
The figure contains the three functional dependencies (1) through (3). From these functional
dependencies, it is clear that if the value of attribute “a” is determined, all other attribute values
will also be determined. Therefore, attribute “a” is the candidate key if all the attributes are
gathered together in one table. There are no other candidate keys, and all the attributes other than
“a” are non-key attributes (attributes not belonging to any candidate key). When defining the third
normal form, we first consider the second normal form. The definition of the second normal form
is that all non-key attributes have no partial functional dependency on any candidate key. The only
candidate key is attribute “a”, and thus there is no partial functional dependency, so this is already
in the second normal form. Next, let us consider the third normal form. The definition of the third
normal form is that every non-key attribute is not transitively dependent on any candidate key.
This table, however, contains the transitive functional dependency a→{b, c}→{f, g, h}. The
redundant section of this transitive functional dependency, {b, c}→{f, g, h}, is split off into a
separate table.
(4) {a, b, c, d, e}


(5) {b, c, f, g, h}
The candidate key for table (5), created by splitting off the redundant section, is {b, c}. This table
has the functional dependency b → {f, g}, and its non-key attributes are partially functionally
dependent on the candidate key, so the redundant section b → {f, g} is split off into a separate
table.
(4) {a, b, c, d, e}
(5)-1 {b, c, h}
(5)-2 {b, f, g}
Therefore, d) is the correct answer. The explanation above looked only at whether non-key
attributes were partially functionally dependent or transitively functionally dependent on candidate
keys, but we can also first judge that the candidate key section after splitting (specifically, {b, c})
is partially functionally dependent. Therefore, the following can be derived from {a, b, c, d, e, f, g,
h}:
(6) {a, b, c, d, e, h}
(7) {b, f, g}
Table (6) contains the transitive functional dependency a → {b, c} → h, so its redundant
section, {b, c} → h, can be split off into a separate table, as shown below.
(6)-1 {a, b, c, d, e} (same as (4))
(6)-2 {b, c, h} (same as (5)-1)
(7) {b, f, g} (same as (5)-2)

[Figure: functional dependency diagrams — the original dependencies (1) a → {b, c, d, e},
(2) b → {f, g}, and (3) {b, c} → h, and the decomposed tables (4)/(6)-1 {a, b, c, d, e},
(5)-1/(6)-2 {b, c, h}, and (5)-2/(7) {b, f, g}.]
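The decomposition above can also be checked mechanically by computing attribute closures from
the three functional dependencies (a supplementary sketch; the attribute names a through h follow
the question):

```python
# Functional dependencies from the question:
# (1) a -> {b, c, d, e}   (2) b -> {f, g}   (3) {b, c} -> h
FDS = [({"a"}, {"b", "c", "d", "e"}),
       ({"b"}, {"f", "g"}),
       ({"b", "c"}, {"h"})]

def closure(attrs, fds=FDS):
    """Return all attributes determined by the given attribute set."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# {a} determines every attribute, so a is the only candidate key.
print(sorted(closure({"a"})))
# {b, c} determines {f, g, h}: the transitive part split off as table (5).
print(sorted(closure({"b", "c"})))
```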


Q3-18 d) Index of relational database

Indexing is useful in searching, but it needs to be updated together with data when data is
updated, and thus we must keep in mind the fact that it takes time.

a) Incorrect: Updating an indexed column requires both the data and the index to be updated,
taking more update processing time than not being indexed.
b) Incorrect: Updating indexes also takes time, so only columns which are frequently searched
should be indexed.
c) Incorrect: For example, if the table only had a few rows, a single I/O would be sufficient, and
indexing would not be effective.
d) Correct: A full search of all items will be faster than accessing via an index. The number of
records managed by the index becomes larger, so it takes time to search the index itself, and
there are no benefits from indexing.

Q3-19 b) Defining views

When the following contents are included in the SELECT statement of a view definition, it is
impossible to determine the original tables to be updated, and therefore impossible to update the
view.

• DISTINCT
• Set functions or calculations
• Join operations
• Subqueries
• GROUP BY and HAVING

Among the four views, only b) does not match any of these, and therefore is updateable.

a) This statement includes DISTINCT, so it is impossible to determine which row to update
when there are duplicate rows.
c) This includes a set function and GROUP BY.
d) This includes a join operation.

Q3-20 b) SQL execution results

UNION is a set operator which yields the union set of the query results of two SELECT
statements. The first SELECT statement selects subjects from the Lab table, and derives the table
below.


Subject_number
      2
      5

The second SELECT statement selects subjects with five or more units, and derives the table
below.

Subject_number
      1
      2
      3

The UNION statement yields the union set of these two tables, and removes duplicate rows,
resulting in the table shown in b). To keep duplicate rows, as in the table shown in a), the UNION
ALL statement should be used.
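The difference between UNION and UNION ALL can be confirmed with SQLite (an illustrative
sketch; the table names and rows here mirror the question but are set up by hand):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lab(subject_number INTEGER);
CREATE TABLE units(subject_number INTEGER);
INSERT INTO lab VALUES (2), (5);
INSERT INTO units VALUES (1), (2), (3);
""")

# UNION removes duplicate rows from the union set; UNION ALL keeps them.
union = conn.execute(
    "SELECT subject_number FROM lab "
    "UNION SELECT subject_number FROM units ORDER BY 1").fetchall()
union_all = conn.execute(
    "SELECT subject_number FROM lab "
    "UNION ALL SELECT subject_number FROM units").fetchall()
print(union)           # subject 2 appears only once
print(len(union_all))  # 5 rows: the duplicate 2 is retained
```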

Q3-21 c) SQL statements

Searching for people having the same first and last names means that there are people who have
identical names, so the appropriate query is one in which grouping is performed when names
match (GROUP BY Name) and the number of grouped names is greater than one (COUNT(*) > 1).
Note that the selection conditions for the grouped results use the HAVING clause. Therefore, c) is
appropriate. COUNT(*) is a set function which counts the number of rows in a table without
eliminating duplicate rows.

a) This query sorts by name, and uses DISTINCT to eliminate duplicate names. This query sorts
and displays all names in the Employee table, including those that are not identical names
shared by more than one person, so it is impossible to tell from the query’s results which are
names of people who share the same first and last names with others.
b) This query merely displays grouped names, and does not indicate whether or not they are
names shared by multiple people. However, the number of people for each name is displayed,
so it is possible to visually search for names which correspond to multiple people.
d) This query’s condition checks whether a name matches a name, but as there is one name per
table row, the name on each row will always match itself, and thus this does not search for
duplicated names. This option is incorrect.
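The GROUP BY / HAVING pattern of the correct answer can be tried with SQLite (an illustrative
sketch; the employee names inserted here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee(name TEXT);
INSERT INTO employee VALUES ('Sato'), ('Suzuki'), ('Sato'), ('Tanaka');
""")

# Group identical names and keep only groups with more than one row,
# i.e. names shared by two or more people.
rows = conn.execute("""
    SELECT name, COUNT(*) FROM employee
    GROUP BY name HAVING COUNT(*) > 1
""").fetchall()
print(rows)  # only 'Sato' is shared by more than one person
```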

Q3-22 a) SELECT statements deriving the same search results

The SELECT statement in the question performs a join operation based on SNO, the attribute
shared by tables T1 and T2, sorts the result in ascending order of attribute SNAME of table T1,
and then displays the results after eliminating duplicating SNAME by using DISTINCT.


SELECT DISTINCT T1.SNAME  ← Displays results after elimination of SNAME duplicates
FROM T1, T2               ← Specifies the two tables to join
WHERE T1.SNO = T2.SNO     ← Specifies joining using the shared attribute, SNO
ORDER BY T1.SNAME         ← Sorts in ascending order (ASC is the default and can be omitted)

Generally, the contents of most join operations can also be expressed in subquery form. Let us
look at the options of the answer group in order.

a) The subquery after the IN predicate selects SNO in table T2. The ORDER BY clause also
specifies SNAME, and DISTINCT is also specified. This subquery is the equivalent of the
join operation in the question, so this is the correct option. This statement uses SNAME in lieu
of T1.SNAME. However, as SNAME only appears in table T1, there is no need to specify the
table name.
b) This subquery specifies table T1, so all SNAME values in table T1 are displayed, excluding
repetitions. This output is unrelated to table T2, so this option is incorrect.
c) This statement uses the NOT IN predicate, so it is not equivalent to the join operation that
joins equivalent SNO.
d) There is no SNAME in table T2, so there is a syntax error in the SQL statement.
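The equivalence of the join form and the IN-subquery form in a) can be verified with SQLite
(an illustrative sketch; the rows inserted into T1 and T2 are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1(sno INTEGER, sname TEXT);
CREATE TABLE t2(sno INTEGER);
INSERT INTO t1 VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cal');
INSERT INTO t2 VALUES (1), (1), (3);
""")

# Join form from the question.
join_form = conn.execute("""
    SELECT DISTINCT t1.sname FROM t1, t2
    WHERE t1.sno = t2.sno ORDER BY t1.sname
""").fetchall()
# Equivalent IN-subquery form from option a).
subquery_form = conn.execute("""
    SELECT DISTINCT sname FROM t1
    WHERE sno IN (SELECT sno FROM t2) ORDER BY sname
""").fetchall()
print(join_form == subquery_form, join_form)
```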

Q3-23 a) Database exclusive control

Exclusive control of a database is a function which limits the use of database records by other
transactions when the records are in use by a transaction. There are two types of exclusive control
(locks): “shared locks” and “exclusive locks.” For example, when a record is being used for
reference, it is inconvenient if it is updated by another transaction while it is being referenced.
However, there is no problem with two transactions referencing it simultaneously. In such case, a
shared lock is used. In other words, multiple transactions can reference the record at the same
time, but not update it. On the other hand, if a record is being used by a transaction in order to
update it, it is also inconvenient if another transaction references the record while it is being
updated. In such case, an exclusive lock is used. With an exclusive lock, a record is made
exclusive, and cannot even be referenced. Therefore, a) is appropriate. Shared locks can co-exist,
but other combinations cannot.
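The compatibility rule can be stated in one line of code (a trivial sketch of the rule, not part of
the original explanation):

```python
# Lock compatibility: two shared locks can coexist; any combination
# involving an exclusive lock cannot.
def compatible(held, requested):
    return held == "shared" and requested == "shared"

print(compatible("shared", "shared"))      # True
print(compatible("shared", "exclusive"))   # False
print(compatible("exclusive", "shared"))   # False
```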

Q3-24 b) Deadlock occurrence timing

A deadlock occurs when multiple transactions are waiting for each other to release resources
they have exclusive access to, and as a result processing cannot proceed. The key points here are
the roles of shared locks and exclusive locks, and the timing of lock release.
The acceptance conditions for shared locks and exclusive locks on resources are as follows:


  First \ Second   | Shared lock  | Exclusive lock
  Shared lock      | Possible     | Not possible
  Exclusive lock   | Not possible | Not possible
  (First: lock applied first; Second: lock applied second)

The release of a lock placed by a transaction is done by the ROLLBACK or COMMIT
commands. Looking at the diagram in the question in order, from (1), we get the following:

(1) Immediately before READ, a shared lock is placed on data a.
    [data a: shared]
(2) Immediately before UPDATE, an exclusive lock is placed on data b.
    [data a: shared / data b: exclusive]
(3) Immediately before READ, there is an attempt to place a shared lock on data b, but as an
    exclusive lock has already been placed in (2), T1 enters waiting state.
    [data a: shared / data b: exclusive, with T1’s shared lock waiting]
(4) ROLLBACK releases the exclusive lock on data b, so the shared lock from (3) becomes
    valid, and T1 is released from waiting state.
    [data a: shared / data b: shared]
(5) Immediately before READ, a shared lock is placed on data b (in (4), a shared lock was
    placed, so it is possible to place additional shared locks).
    [data a: shared / data b: shared]
(6) Immediately before UPDATE, there is an attempt to place an exclusive lock on data a, but
    as a shared lock has already been placed in (1), the exclusive lock is rejected, and T3 enters
    waiting state.
    [data a: shared, with T3’s exclusive lock rejected / data b: shared]
(7) Immediately before UPDATE, there is an attempt to place an exclusive lock on data b, but
    as a shared lock has already been placed in (4), the exclusive lock is rejected, and T1 enters
    waiting state.
    [data a: shared / data b: shared, with T1’s exclusive lock rejected]

From the above, we can determine that T3 in (6) and T1 in (7) begin waiting for data a, b to be
released from each other, and a deadlock occurs at (7). Therefore, b) is the correct answer.
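A deadlock corresponds to a cycle in the wait-for graph, which can be detected with a short
sketch like the following (illustrative only; at step (7) of the question, T1 waits for T3 and T3
waits for T1):

```python
# Deadlock detection sketch: a deadlock exists when the wait-for graph
# (transaction -> transactions it waits for) contains a cycle.
def has_cycle(wait_for):
    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node})
                   for nxt in wait_for.get(node, []))
    return any(visit(n, set()) for n in wait_for)

print(has_cycle({"T1": ["T3"], "T3": ["T1"]}))  # True:  deadlock at (7)
print(has_cycle({"T1": ["T3"]}))                # False: T1 simply waits
```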

Q3-25 b) Database failure measures

When a physical failure occurs, a separately saved backup file is used to restore the database to
state in which it was when the backup was created, and then “redo” information of journal files is
used to further restore the database to its state immediately before the failure occurred. This is
called roll-forward processing, so b) is appropriate.

a) This description would be correct if “a snapshot is set” were changed to “a checkpoint is set.”

77
Morning Exam Section 3 Technological Elements Answers and Explanations

Snapshots consist of storing the states of all variables on main storage devices in order to be
able to investigate the running state of a program at a given point.
c) If exclusive control were canceled, it would not be possible to correctly perform processing on
resources requiring exclusive control. Instead, from the transactions causing the deadlock, the
transaction with the least negative impact in terms of processing volume is selected and
forcibly terminated, and as a result the resources are released so that other transactions can
resume operation. When this occurs, all traces of processing by the transaction that was
forcibly terminated are deleted (rolled back).
d) This description would be correct if “‘redo’ information of journal files” were changed to
“‘undo’ information of journal files.” “Redo” information is used in b) roll-forward
processing.

Q3-26 a) Method for restarting and recovering DBMS after failures

When a database is restarted after a failure has occurred, all data must be restored to a state in
which data is consistent. There are two methods for this: roll forward and roll back.
In the case of rolling forward, processing has been completed for transactions which were
committed before the failure, so “redo” information in the log file is used to update database
contents accordingly. The corresponding transactions in the diagram are T2 and T5. Conversely, in
the case of rolling back, “undo” information in the log file is used to return the database back to
the state it was in before the execution of the transactions whose processing had not been
completed when the failure occurred. The corresponding transactions in the diagram are T3, T4,
and T6. However, transactions T3 and T4 merely performed Read processing on the database. In
other words, they did not update the database, so they are not included in the roll back. Therefore,
the correct combination is a).
T1 was committed before the checkpoint, so it is not included in restoration processing.
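The restart rule can be sketched as follows. This is a hypothetical illustration: the commit times and update flags are invented to mirror the transactions described above, not taken from the question's diagram.

```python
CHECKPOINT = 10   # hypothetical time at which the checkpoint was taken

def recovery_action(commit_time, updated):
    """commit_time: None if uncommitted at failure; updated: wrote to the DB."""
    if commit_time is not None and commit_time <= CHECKPOINT:
        return "none"           # already fully reflected in the database
    if commit_time is not None:
        return "roll forward"   # committed after the checkpoint: redo
    return "roll back" if updated else "none"   # uncommitted: undo if it wrote

# (commit time, did it update?) chosen to mirror T1-T6 in the explanation:
# T1 committed before the checkpoint, T2 and T5 after it, T3 and T4 were
# read-only and uncommitted, T6 updated data and was uncommitted.
txns = {"T1": (5, True), "T2": (15, True), "T3": (None, False),
        "T4": (None, False), "T5": (18, True), "T6": (None, True)}
actions = {name: recovery_action(c, u) for name, (c, u) in txns.items()}
print(actions)
```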

Q3-27 d) Problems related to transaction parallel control

In order to prevent problems such as lost updates during concurrent transaction execution, locks
or timestamp algorithms are often used. In timestamp algorithms, when transactions read or write
data, they compare the start timestamp of the transaction with the read/write timestamp of the data
in order to determine whether to allow reading or writing. Generally speaking, if the timestamp of
the data to be read or written is later than that of the start time of the transaction, it is judged that
the data was used by another transaction, so the data is not used, and the transaction attempting to
read/write is rolled back and restarted. In the case of using the timestamp algorithm, locks do not
normally occur, so “waiting because of deadlocks or locks” does not occur; that is, a) and b) are
not appropriate. Locks are used to prevent update inconsistencies, so c) is inappropriate.
Therefore, d) is the correct answer. Deadlocks occur as a result of using locks, so this combination
is appropriate.


Explanations of the terminology used in the question are provided below.


• Lost update: This means that update contents are lost because of updating of the same data by
another transaction.
• Uncommitted dependency: This is also known as a dirty read. This means that data which has
been updated by another transaction but not yet committed is read.
• Inconsistent analysis: This is also known as a non-repeatable read. This means that
transactions read the same data several times, and because of updating by another transaction,
the value of the data initially read does not match the value of the data read later.

Q3-28 a) Explanation of data mining

Data mining refers to discovering semantic information such as patterns and interrelationships
from a large volume of historical data. Therefore, a) is the correct answer.
These discoveries are made using neural networks and advanced mathematical methods such as
statistics with an aim to make future forecasts based on past data. Software with functions for
processing high volume data at high speed and automatic rule detection algorithms have been
developed as data mining tools.

b) This is a description of data mart creation.


c) This is a description of metadata in a data warehouse.
d) This is a description of OLAP (OnLine Analytical Processing), online multidimensional data
analysis.

Q3-29 d) Explanation of metadata

Metadata means “data concerning data,” and refers to data which contains definition
information for data itself. Thus, d) is the correct answer.

a) Metadata and power sets are unrelated. Power sets are the sets of all subsets of a set. They
relate to databases in that power sets are not recognized as relations of the relational data
model. Relational databases do not recognize values as also being relations. A form in which
values are not relations—that is, in which values are singular—is called a first normal form.
b) This is an explanation of a domain.
c) Metadata is also stored in DBMS in the forms of data dictionaries or repositories.

Q3-30 b) Recall ratios and precision ratios in information retrieval systems

Recall ratio and precision ratio are expressed with the formulas below. The option that corresponds to these is b).

  Recall ratio = number of matching items searched (b) ÷ total number of matching items in the database (a)


  Precision ratio = number of matching items searched (b) ÷ total number of items searched (c)

The recall ratio’s denominator, the “total number of matching items in the database,” may be difficult to assess in the case of massive databases, so the recall ratio is generally considered to be theoretical.
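As a quick illustration (not part of the original explanation), the two ratios can be computed directly from the counts a, b, and c defined above:

```python
def recall(b, a):
    """Fraction of all matching items in the database that were retrieved."""
    return b / a

def precision(b, c):
    """Fraction of the retrieved items that actually match."""
    return b / c

# e.g. 100 matching items exist, a search returns 50 items, 40 of them matching
print(recall(40, 100))     # 0.4
print(precision(40, 50))   # 0.8
```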

Q3-31 b) Characteristic of connectionless communications

Connectionless communications are communications which are performed without first establishing a transmission route (connection) between the sending and receiving ends. With this
communications approach, each packet is handled separately and individually, so the destination
address must be attached to all transmitted data (packets). Therefore, b) is correct.

a) As explained above, no connection is established between the sending and receiving ends, and
each packet is independent, so there are no concepts of packet arrival order or flow. Therefore,
sequence error detection and flow control are not performed.
c) A PVC (Permanent Virtual Circuit) requires a transmission path to be permanently established
between the parties involved in communications. This is an example of connection-oriented
communication.
d) As explained above, each data unit (packet) is independent, so when there are multiple paths
between the sender and the receiver, individual data units may be transmitted via different
paths.

Q3-32 a) Packet switching

In packet switching, information is divided into multiple packets, and control information such
as source information and the destination address is appended as header information before
transmitting the packets. Error control is performed on the network side. Line utilization is high
for packet switched communications, so packet switching is appropriate for data communications
between computers. Therefore, a) is the appropriate answer. Packet switching technologies such as
X.25, frame relay, and ATM are being supplanted by gigabit Ethernet and the like, so, with the
exception of existing equipment and facilities, their role is ending.

b) The method, in which physical circuits are established for each call and continuous
communications is enabled, is standard “circuit switching.”
c) The method, in which data is divided into fixed length cells, destination information is
attached to each cell, and high speed switching is used to relay the cells, is “ATM”
(Asynchronous Transfer Mode).
d) The method, in which network transmission processing such as error control is simplified in
order to provide faster speeds than conventional packet switching, is called “frame relay.”


Q3-33 c) Description of ADSL

ADSL (Asymmetric Digital Subscriber Line) is a data transmission method which uses existing
metallic subscriber lines (telephone lines) and supports single to double-digit Mbps downstream
transmission speeds from exchanges to user premises, and upstream transmission speeds from
several hundred kbps to several Mbps. It uses devices called splitters in order to transmit both low
frequency telephone signals and high frequency ADSL signals over metallic circuits. Splitters are
installed in both exchanges and customer premises. Therefore, c) is the appropriate explanation.
The reason why upstream and downstream transmission speeds are different in ADSL is that in the
Internet, the volume of “upstream data,” consisting of website URL requests and the like, is far
smaller than the amount of “downstream data,” such as website contents including images and the
like. In this way, limited bandwidth is used effectively.

a) Echo cancellation is used on high speed ADSL (12 Mbps or greater), overlapping the
upstream and downstream frequency bandwidths. Normal telephone communications use
upstream and downstream transmission simultaneously, and use echo cancellation to separate
the two. ADSL echo cancellation is based on the same principles, so the same frequency
bandwidths are used for both upstream and downstream transmission. Non-high speed ADSL
uses a frequency division system that assigns different frequency bandwidths for upstream and
downstream transmissions.
b) As explained above, splitters separate ADSL and telephone signals, so both Internet services
and telephone services can be used at the same time.
d) ADSL connections are affected by noise impacted by various factors such as wire diameter,
other circuits, bridge taps (unterminated lines between telephone exchanges and user premises
used for branching), and long transmission distances. Echo cancellation is especially
susceptible to noise, so the maximum distance between telephone exchanges and users
depends on the type of ADSL being used.

Q3-34 a) VPN and security protocols

IPsec (IP Security Protocol) is used as the network layer (IP layer) security protocol when a
VPN connection over the Internet is established. Therefore, a) is correct.

b) S/MIME (Secure/Multipurpose Internet Mail Extensions): This protocol is used to encrypt and
transmit e-mail contents, attachments, and the like. It corresponds with the application layer of
the OSI basic reference model.
c) WEP (Wired Equivalent Privacy): This is an encryption method for wireless LAN specified by
IEEE802.11. It corresponds with the data link layer of the OSI basic reference model.
d) WPA (Wi-Fi Protected Access): This is a wireless LAN encryption and authentication method
defined by an industry group called the Wi-Fi Alliance in order to resolve problems such as
WEP vulnerabilities. As with WEP, this corresponds with the data link layer.


Q3-35 d) Calculation of LAN utilization rate

Let us consider this question under the condition that transmissions are performed between two
pairs of nodes. The file transfer frequency for a single pair is 60 times per second, with a file size
of 1,000 bytes per file. In addition, control information whose size is 30% the size of the files
being transmitted is appended, so we can calculate “D”, the number of bits of data transmitted per
second, as follows:

  D = 60 (times/sec) × 1,000 (bytes) × 1.3 × 8 (bits) × 2 (pairs)
    = 1.248 × 10^6 (bits)

Therefore, for a 10Mbps LAN, when the volume of data determined above is transmitted, the utilization rate ρ is as follows:

  ρ = (1.248 × 10^6) ÷ (10 × 10^6) × 100 = 12.48 ≈ 12 (%)

Therefore, d) is correct.
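The same arithmetic can be written out in Python as a cross-check (all values come from the question):

```python
transfers_per_sec = 60            # file transfers per second for one pair
file_bytes = 1_000
frame_bytes = file_bytes + file_bytes * 30 // 100   # +30% control information
pairs = 2
lan_bps = 10 * 10**6              # 10 Mbps LAN

bits_per_sec = transfers_per_sec * frame_bytes * 8 * pairs
utilization = 100 * bits_per_sec / lan_bps
print(bits_per_sec)   # 1248000
print(utilization)    # 12.48
```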

Q3-36 c) Calculation of file download time

The time taken to download a file from the Internet can be calculated using the effective speeds
of the FTTH and LAN used in the downloading. Here, control information and broadband router
delay time can be ignored, and the Internet can be considered sufficiently fast.
Transmitting a 540MB file over an FTTH connection with an effective speed of 90Mbps takes
48 seconds, as shown below.

  (540 × 10^6 × 8) ÷ (90 × 10^6) = 48 (sec)

In the same way, transmitting it over a 100Mbps LAN with a transmission efficiency of 80%
takes 54 seconds, as shown below.

  (540 × 10^6 × 8) ÷ (80 × 10^6) = 54 (sec)

The amount of time it takes to download a file can be considered the time between when the
first data arrives and when the last data is received. Since the question stipulates that the Internet
can be considered sufficiently fast, the download time is determined by whichever of the FTTH or
LAN takes the most download time. When a 540MB file is downloaded, the slower LAN will be
the bottleneck; that is, it takes 54 seconds to download the file. Therefore, c) is correct. The
transmission times for the FTTH and LAN were calculated separately, which may be misleading.
It is important to remember that while there is a small gap in FTTH and LAN downloading, they
are performed nearly in parallel.
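The two transfer times can be restated as a quick cross-check (values taken from the question); since the transfers overlap almost completely, the download takes as long as the slower link:

```python
file_bits = 540 * 10**6 * 8        # 540 MB file
ftth_bps = 90 * 10**6              # effective FTTH speed
lan_bps = 100 * 10**6 * 80 // 100  # 100 Mbps LAN at 80% efficiency

ftth_sec = file_bits / ftth_bps
lan_sec = file_bits / lan_bps
print(ftth_sec)                    # 48.0
print(lan_sec)                     # 54.0
print(max(ftth_sec, lan_sec))      # 54.0 -- the slower LAN is the bottleneck
```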


Below is a supplementary explanation of the terminology used in this question.


• FTTH (Fiber To The Home): This is used for extending fiber optic connections to users’
premises.
• ONU (Optical Network Unit): This is a device installed on users’ premises when fiber optic
lines are used.

Q3-37 b) CSMA/CD broadcasting

Broadcasting refers to transmitting data to all nodes at once, in the same manner as television or
radio broadcasting. It is used when routing control information is exchanged between routers.
Therefore, b) is the appropriate description.

a), d) In broadcasting, data is sent “at once,” not “in order,” so these descriptions are not
appropriate. Incidentally, transmitting data to a specific node by specifying a single address is
called unicasting.
c) This description applies to multicasting.

Q3-38 b) Access control of FDDI

Control of transmission rights on FDDI (Fiber Distributed Data Interface) is performed using
the token passing method. In this method, a special electronic message called a token is constantly
relayed from one node to the next, and a node gains the right to transmit data onto the line when it
detects the token passing on the line. Therefore, b) is correct.

The other descriptions correspond to the following access control methods:


a) CSMA (Carrier Sense Multiple Access)
c) This description is insufficient to identify any single access control method.
d) Polling

Q3-39 a) Uses and functions of devices which make up networks

Network interconnection devices have been categorized, using the OSI basic reference model,
as repeaters, bridges, routers, and gateways. Gateways are devices which primarily relay transport
layer or upper layer traffic, and they can be used to connect networks with different protocols.
Therefore, a) is appropriate. Note that in UNIX, Windows, and the like, routers are referred to as
gateways.

The other descriptions contain the following mistakes:


b) Bridges are devices that relay traffic at the data link layer, not the physical layer.
c) Repeaters relay traffic at the physical layer, not the network layer.
d) Routers relay traffic at the network layer, not the data link layer.


Q3-40 b) Functions of switching hub

Switching hubs have ports for connecting multiple LANs, and send received frames (packets)
only to the LAN port which includes the destination MAC (Media Access Control) address
specified in the frame. Therefore, b) is the appropriate description.
The MAC address corresponds to layer 2 (the data link layer) of the OSI basic reference model,
so switching hubs are also called layer 2 switches or simply LAN switches. Functionally,
switching hubs are the equivalents of bridges, but as they only connect ports to ports, and don’t
pass unnecessary frames to other LAN ports, they have come to be central devices in LAN
construction.

a) The protocol which performs dynamic allocation of IP addresses (layer 3) is DHCP (Dynamic
Host Configuration Protocol), and the device which performs this allocation is called a DHCP
server.
c) As explained above, switching hubs look at the MAC addresses of received packets, and then
transmit them only to necessary LAN ports. Devices that transmit to all LAN ports are hubs
(repeater hubs) without switching function.
d) Actual transmission of data by IP or other network layer protocols uses functions offered by
the lowest layer, the data link layer. When the length of a packet to be transmitted (received
from a higher layer) exceeds the maximum packet length that the data link layer protocol can
handle, the packet is divided into multiple smaller blocks in accordance with the data link
layer’s maximum packet length. This division is performed at the network layer, so this is a
router function.

Q3-41 b) ATM switches

ATM (Asynchronous Transfer Mode) switches are used in cell relay services, which divide data
into fixed length blocks called cells, attach headers containing destination information to each cell,
and then transmit the cells. The transmitted units are of fixed length, and their headers contain
destination information, which makes hardware-based high speed switching possible. Therefore,
b) is the correct answer. Cells are 53 bytes long: 5 bytes of header and 48 bytes of data.

a) This is a description of PBXs (Private Branch eXchanges).


c) This is a description of packet switches. In packet switches, switching is performed by software, so transmission rates are in the order of several dozen kbps.
d) This is a description of frame relay switches used in frame relay services. Frame relay services
presume the use of high-quality digital transmission circuits, and support greater speed by
eliminating processing such as retransmission for error recovery.


Q3-42 b) OSI basic reference model

The OSI (Open Systems Interconnection) basic reference model divides the functions needed in
network architecture into seven layers. It clarifies the roles of each layer and the interfaces
between layers. The layer which manages “transmission control between adjacent open systems”
is the data link layer, the second layer of the OSI model. Therefore, b) is correct. This layer
“divides data into transmission units” and performs “sequence control for each transmission unit,”
“error control,” and “data flow control.” HDLC (High Level Data Link Control) is a protocol
which corresponds with this layer. Adjacent open systems refer to systems directly connected by
communication lines or the like.

a) Application layer: The seventh layer, the application layer, provides protocols and service
definitions related to application services as the goal of communications. These include
FTAM (File Transfer Access and Management), VT (Virtual Terminal), and RDA (Remote
Database Access).
c) Transport layer: The fourth layer, the transport layer, supplements service quality offered by
the layer beneath it, the network layer, and offers highly reliable and economical transmission
functions. These include data “multiplexing and demultiplexing,” “splitting and merging,”
“segmentation and reassembly,” “concatenation and separation,” “error control,” “flow
control,” and the like. The TCP (Transmission Control Protocol) of the TCP/IP stack
corresponds to this layer.
d) Network layer: The third layer, the network layer, uses the data transmission functions offered
by the data link layer to perform end-to-end communications. This includes “routing,” “data
transmission and relay,” and “coordination of differences in network quality when traffic
passes through multiple networks.” The IP (Internet Protocol) of the TCP/IP stack corresponds
to this layer. In communications models such as the Internet, both ends of a transmission may
not always be in adjacent systems, so network layer functions are necessary.

Q3-43 c) Destination addresses in packet transmission

Routers which connect two LANs (segments), such as the router in the figure, contain two
MAC addresses, one for each LAN. When host A sends a packet to host B, host A knows the
destination IP address. At this point, it is still not known whether host B belongs to another LAN,
so host A uses ARP (Address Resolution Protocol) to try to determine host B’s MAC address from
its IP address. In this example, host B is on a different LAN than host A, so in response to the
broadcast ARP message specifying host B’s IP address, the router replies with its own MAC
address. The MAC address used is MAC3, the MAC address on the host A LAN. Host A,
receiving the router’s MAC address in response to its ARP request, transmits an Ethernet frame to
the router, specifying the original IP address (IP datagram address) and the router’s MAC address.
The Ethernet frame’s destination, then, is MAC3, and the IP datagram address is host B’s IP


address, IP2, so combination c) is appropriate.

Q3-44 d) Private addresses

Private addresses are IP addresses defined in RFC1918 to be used freely within organizations
(internal networks) in response to IP address depletion issues. Private address ranges are defined
as shown below.

Class A: 10.0.0.0 – 10.255.255.255


Class B: 172.16.0.0 – 172.31.255.255
Class C: 192.168.0.0 – 192.168.255.255

Therefore, d) is correct.

Q3-45 b) IP addresses

The subnet mask is 255.255.255.224, or, in binary, 11111111 11111111 11111111 11100000 (see
Note), so the first 27 bits of this IP address are the network address, and the last 5 bits are the host
address. The last 8 bits of IP address 202.16.0.180 are (180)10 = (10110100)2, so if the host part
(i.e., low order 5 bits) of the last 8 bits is set to all zeroes, the last eight bits is
(10100000)2 = (160)10. In other words, 202.16.0.160 is the network address. The host address is
indicated with the last 5 bits, so there are 2^5 = 32 possible addresses. When all bits are 0s, the
address is the network address, and when all bits are 1s, the address is the broadcast address, so
these two addresses cannot be assigned to hosts. Therefore, there are 30 possible host addresses
(00001 to 11110). This means that the IP address range that can be assigned to hosts is
202.16.0.161 to 202.16.0.190, so b) is correct.

Note: 255 ÷ 16 = 15 with a remainder of 15, and 224 ÷ 16 = 14 with no remainder. Therefore, 255 and 224 can be represented as (FF)16 and (E0)16 respectively in hexadecimal: (FF)16 = 11111111, (E0)16 = 11100000.
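As a cross-check (not part of the original explanation), Python's standard ipaddress module reproduces the same result for this address and subnet mask:

```python
import ipaddress

# strict=False lets us pass a host address (202.16.0.180) with the mask
net = ipaddress.ip_network("202.16.0.180/255.255.255.224", strict=False)
print(net.network_address)    # 202.16.0.160
print(net.broadcast_address)  # 202.16.0.191
hosts = list(net.hosts())     # assignable addresses (network/broadcast excluded)
print(len(hosts))             # 30
print(hosts[0], hosts[-1])    # 202.16.0.161 202.16.0.190
```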

Q3-46 d) Problems which occur because of subnet mask errors

Client A directly communicates with other networking devices when they are within the same
network, and transmits data to the router when the other devices are not within the same network.
In order for client A to determine whether or not the device it wishes to communicate with is in the
same network, it can compare the network part of the destination IP address with its own network.
When doing so, it uses the subnet mask. The subnet mask consists of a string of bits, with the part
marked off with “1” indicating which part of an IP address is the network address part, in order to
extract only this part from the destination IP address.

In this question, the subnet mask set for client A is:


“255.255.0.0” = “11111111.11111111.00000000.00000000”

Therefore, client A determines that the first 16 bits of the IP address are the network part.
The IP address of print server C is “10.1.2.1”. Client A decides that since the first 16 bits match
its own network part, the print server is on the same network as client A, so instead of sending data
to the router, it sends it directly to print server C. However, in actuality, print server C is not on
LAN1, so the data will not reach it, and the printer will not print anything. Therefore, d) is the
correct answer.

a) The USB connected ADSL modem is unrelated to the IP address or subnet mask.
b) Client B’s IP address is “10.1.1.11”. Client A decides that since the first 16 bits match its own
network part, client B is on the same network as client A, so it sends data directly to client B.
Client B is on LAN1, the same LAN as client A, so communications are possible between the
two clients.
c) Database server D’s IP address is “10.1.1.1”, so, in the same way as client B, client A
transmits data directly to database server D. Database server D is on LAN1, the same LAN as
client A, so communications are possible between client A and server D.
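Client A's decision can be sketched with the ipaddress module. Client A's own address 10.1.1.12 is a hypothetical value chosen for illustration (the question does not give it); only the /16 mask and the destination addresses come from the question:

```python
import ipaddress

# The (incorrect) /16 mask configured on client A, applied to its own address
client_a_net = ipaddress.ip_network("10.1.1.12/255.255.0.0", strict=False)

def send_directly(dest_ip):
    """True if client A treats dest_ip as on its own network (no router)."""
    return ipaddress.ip_address(dest_ip) in client_a_net

print(send_directly("10.1.1.11"))  # True: client B really is on LAN1, so OK
print(send_directly("10.1.1.1"))   # True: server D really is on LAN1, so OK
print(send_directly("10.1.2.1"))   # True: but print server C is not on LAN1,
                                   # so the data sent directly never arrives
```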

Q3-47 a) Protocol for automatically configuring IP addresses

The protocol which performs dynamic configuration of IP addresses is DHCP (Dynamic Host
Configuration Protocol). DHCP is a protocol for automatically configuring networking parameters
(IP addresses, subnet masks, etc.). It dynamically allocates IP addresses when clients start up or
issue requests, and, unless an update request is received, reclaims IP addresses when their lease
periods expire. Setting network parameters dynamically in this way makes it possible to move a
computer to a different subnetwork and continue using it without giving consideration to the
configuration.

b) DNS (Domain Name System) is a distributed system for managing domain names. TCP/IP
transmissions use IP addresses to identify sources and destinations. These (IPv4) IP addresses
consist of 32-bit binary strings, and are hard for people to interpret. Instead, domain names,
which consist of meaningful character strings, are used. DNS is used to resolve these domain
names into IP addresses.
c) FTP (File Transfer Protocol) is a protocol for uploading and downloading files over a TCP/IP
network.
d) PPP (Point-to-Point Protocol) is a technology for connecting two computers via a
point-to-point connection over a network such as a telephone line. It was used widely in
dial-up Internet connections before ISDN became prevalent. It was also used to identify users
and allocate IP addresses when users connected their computers to provider access points.


Q3-48 b) Protocols for receiving e-mail

POP3 (Post Office Protocol Version 3) and IMAP4 (Internet Message Access Protocol Version
4) are two protocols for receiving e-mail from e-mail servers. The protocol which has the
characteristics described in the question is b), IMAP4.

The meanings of the other terms used are given below.


a) APOP (Authenticated POP): This is a command used for encrypting and transmitting
passwords (challenge-response encryption) in POP authentication.
c) POP3: This is a protocol used for receiving e-mail from an e-mail server, but unlike IMAP4,
e-mail contained in a mailbox on the e-mail server is downloaded to the client, and managed
in a folder on the client side.
d) SMTP (Simple Mail Transfer Protocol): This protocol is used on TCP/IP networks to send
e-mail from e-mail clients to e-mail servers, and to transmit e-mail between e-mail servers.

Q3-49 b) Protocol used by ping

In a TCP/IP environment, when ping is used to confirm connectivity to a host or other devices,
echo request and echo reply messages defined by ICMP (Internet Control Message Protocol) are
used, so b) is correct. ICMP is also used to perform notification when an error occurs during data
transmission using IP packets.

The meanings of the other terms used are given below.


a) DHCP (Dynamic Host Configuration Protocol): This is a protocol for dynamically allocating
IP addresses, subnet masks, and other parameters to client PCs, etc.
c) SMTP (Simple Mail Transfer Protocol): This protocol is used on TCP/IP networks to send
e-mail from e-mail clients to e-mail servers, and to transmit e-mail between e-mail servers.
d) SNMP (Simple Network Management Protocol): This is a protocol for exchanging network
management related information such as failure information between SNMP managers and
SNMP agents on a TCP/IP network.

Q3-50 d) Network management protocols

Networking devices such as routers or switches installed in remote locations are often
monitored remotely over the network to which the devices are connected. SNMP (Simple Network
Management Protocol) is a protocol used in exchanging management information between
monitoring computers and monitored devices. SNMP uses UDP to exchange a small amount of
data. Therefore, d) is inappropriate.

a) MIB (Management Information Base) is a database which contains management information regarding monitored network devices, and is stored in those monitored network devices.
b) The basic structure of SNMP is for SNMP agents to respond to queries sent by SNMP


managers. However, in situations such as when a failure has occurred in a network device,
SNMP agents may send notifications to SNMP managers. The messages sent in such
situations are called traps.
c) Network devices monitored by SNMP are called SNMP agents. The computers used to
monitor network devices are called SNMP managers.

Q3-51 b) Effective methods for detecting abnormal ends on DBMS from monitoring servers

There are five types of SNMP PDU (Protocol Data Unit): get-request, get-next-request,
set-request, get-response, and trap. The type of PDU which originates on the agent side and is sent
to the manager to notify it of an exceptional situation is called a trap. This question asks
“which is an effective method where monitoring server X detects the abnormal termination of a
DBMS daemon on application server A,” so the answer is b) “SNMP Trap PDU from application
server A to monitoring server X.”

The meanings of the descriptions in the other options are explained below.
a) “ICMP destination unreachable messages” are error messages used to indicate that an IP
packet was unable to reach its destination IP address.
c) “Finger” is a service function over the Internet used to display user information for an account
on a TCP/IP host. For example, we can determine the names and login times of users logged
in to a certain host by executing a finger command with the host name.
d) “Ping” is a command used to confirm connectivity to an arbitrary computer on a TCP/IP
network.

Q3-52 b) Cryptography

In common key cryptography (private key encryption), both the sender (encryption) and
receiver (decryption) use the same key. In public key cryptography, encryption and decryption are
performed using different keys. The purpose of encrypted communications is to ensure that the
encrypted information is hidden from those other than the parties involved. In order to achieve
this, it must not be possible for a third party to decrypt the encrypted contents. In public key
cryptography, the keys used for encryption and decryption are different, so even if the encryption
key is made public, as long as the decryption key is kept confidential, the encryption contents can
only be decoded by the person who knows the decryption key. In the case of common key
cryptography, the same key is used for encryption and decryption, so if the encryption key is
disclosed, all those who know that key will be able to perform not only encryption but also
decryption. Therefore, different keys must be prepared for each communicating party. In other
words, common key cryptography requires that we have unique private keys for each
communicating party, so it makes key management burdensome. Therefore, b) is the appropriate
description.
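To make the key-management burden concrete: among n communicating parties, common key cryptography requires a distinct secret key for every pair, i.e. n(n-1)/2 keys in total, while public key cryptography needs only one key pair per party. A minimal sketch (the party counts are illustrative, not from the question):

```python
def symmetric_key_count(n):
    # Common key cryptography: a distinct secret key for every pair of parties.
    return n * (n - 1) // 2

def public_key_count(n):
    # Public key cryptography: one key pair (public + private) per party.
    return 2 * n

for parties in (10, 100, 1000):
    print(parties, symmetric_key_count(parties), public_key_count(parties))
```

With 1,000 parties, the common key approach already requires 499,500 distinct keys, which is why key management becomes burdensome.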

89
Morning Exam Section 3 Technological Elements Answers and Explanations

a) In common key cryptography, the same key is used by both the sending side and the receiving
side.
c) In public key cryptography, the encryption key is made public, and the decryption key is kept
secret.
d) The purpose of signatures is to serve as evidence that something could only have been done
(known) by the person in question, proving that they are who they say they are. If the key used
in signing a message were made public, anyone would be able to sign messages, and the
signature would no longer serve as proof of the identity of the signer. Instead, a private key
known only to the signer is used for signing messages, and a public key is used for decryption.

Q3-53 c) Cryptography

The two important types of cryptography are common key (also known as “private key”)
cryptography and public key cryptography. In private key cryptography, only the sender and
receiver of information know the encryption key, and the same key is used for both encryption and
decryption. The key is shared by both the sender and the receiver in advance, and kept secret. On
the other hand, in public key cryptography, the encryption key is made public, but the decryption
key is kept secret. The encryption key is different than the decryption key. A typical example of
this type of cryptography is RSA (Rivest, Shamir, Adleman).

a) DES (Data Encryption Standard) is a US federal standard encryption algorithm, and a typical example of private key cryptography.
b) As explained above, the decryption key is kept secret.
c) This description is appropriate. Digital signatures are a type of authentication technology for
message authentication (proving to communicating parties that the data was actually sent by
the party it was presented as having been sent by). The message creator uses a private key to
attach a digital signature to the message. The receiver uses the verification key, which is a
public key, to verify that the message actually came from the message creator. Specifically,
the sender generates a digital signature by encrypting the message digest (data obtained by
applying a hash function to the message body) with the sender’s own private key. The receiver
uses the sender’s public key to decrypt the encrypted digital signature. The receiver then
creates a message digest by applying the same hash function as the sender to the received
message body, and confirms that the message digest matches. This confirmation tells the
receiver two things. First, it tells the receiver that the private key used for creating the digital
signature by the sender was valid (if it were not valid, decryption would not be possible). The
other thing it tells the receiver is that the same message body was received as was sent by the
sender. As a result, the former verifies the sender, and the latter verifies that the message has
not been falsified. The verification key is made public, so third parties can verify the
interactions between the two involved parties, and thereby recognize the information as
evidence.


d) As explained above, in private key (i.e., common key) cryptography, the encryption key is the
same as the decryption key, and is kept secret by both the sender and the receiver.

Q3-54 b) Security management of private keys

From the perspective of preventing unauthorized use by security administrators, the best approach
is to keep private keys secret from anyone other than each key’s owner, with no records relating to
the key. However, the question states that one requirement of the system is that “private keys must
be able to be restored in the event of a user accident,” so some form of information is needed to be
able to restore private keys. If the encrypted private key data is divided up so that it cannot be
decrypted unless all the data is gathered together, and in addition, each of several security
administrators stores part of this data, it will be impossible for a single person to engage in
unauthorized use of the data. Thus, as long as the security administrators check one another (in other words, unless all of the security administrators conspire together), unauthorized use can be prevented. Therefore, b) is the appropriate description.
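The division scheme in b) can be sketched with a simple XOR-based secret split (one concrete splitting method, assumed here for illustration; the question does not specify the mechanism): every share is required for reconstruction, and any strict subset reveals nothing about the key.

```python
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret, n):
    # n - 1 purely random shares, plus one share chosen so that XOR-ing all
    # n shares yields the secret.  Any strict subset of the shares is
    # indistinguishable from random noise.
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares):
    return reduce(xor_bytes, shares)

key = b"example-private-key-material"
shares = split_secret(key, 3)       # one share per security administrator
assert reconstruct(shares) == key   # all three administrators together restore it
```

A single administrator holding one share cannot decrypt anything; only the full set of shares reconstructs the private key.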

a) If one person encrypts and stores private keys, there are no checks on that person, and thus the
system in no way prevents unauthorized use by the security administrator who has the keys.
c) If there is no private key information, it is impossible to restore private keys in the event of a
user accident.
d) If anyone in the security department can see private key information, anyone in the security
department can engage in unauthorized use, and thus the system is not one that prevents
unauthorized use.

Q3-55 a) Digital signatures and hash values

The following procedure is used in the procedure of a digital signature and the method of using
hash values.
(1) The sender applies a hash function to the message to generate a hash value. The generated hash
value is encrypted using the sender’s private key, and the resulting content is used as the
signature.
(2) The sender sends the message body and its signature (encrypted hash value) to the recipient.
(3) The recipient decrypts the signature by using the sender’s public key, extracts a hash value, and compares it with another hash value obtained by converting the received message body by means of the same hash function as that of the sender.
(4) If the two hash values match, it establishes both the authenticity of the sender and the integrity
of the message.
Therefore, a) corresponds with (3), and is the appropriate answer.
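Steps (1) through (4) can be sketched with a toy, textbook-sized RSA key (an assumption for readability only: the tiny primes, the fixed exponents, and the modular reduction of the hash are all illustrative, and real signatures use padded 2048-bit or larger keys handled by a vetted library):

```python
import hashlib

# Toy, textbook-sized RSA key -- for illustration only.
p, q = 61, 53
n = p * q        # modulus 3233
e = 17           # public exponent (verification key)
d = 2753         # private exponent (signing key): e * d == 1 (mod lcm(60, 52))

def hash_value(message):
    # Step (1): apply a hash function; reduced mod n so the toy key fits.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message):
    # The sender encrypts the hash value with his or her own private key.
    return pow(hash_value(message), d, n)

def verify(message, signature):
    # Step (3): decrypt the signature with the sender's public key and
    # compare with a hash recomputed from the received message body.
    return pow(signature, e, n) == hash_value(message)

msg = b"transfer 100 to account 42"
sig = sign(msg)
assert verify(msg, sig)            # sender authentic, message body intact
assert not verify(msg, sig + 1)    # a forged signature fails to verify
```

Matching hash values establish both the authenticity of the sender (only the private-key holder could have produced the signature) and the integrity of the message, exactly as in step (4).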

b) The sender encrypts a hash value with his or her own private key.
c) Hash conversion is one way, so messages cannot be restored from hash values.


d) The entire message is not encrypted, but only the message body’s hash value is encrypted. The
public key is used to decrypt it.

Q3-56 d) Security related protocols

SSL (Secure Sockets Layer) is an industry standard protocol for the safe exchange of
information using message authentication and encryption, in TCP/IP applications between Web
servers and browsers. Therefore, d) is correct.

The other descriptions correspond with the protocols below.


a) IPsec is a collective term for the technologies needed to ensure transmission security at the IP
layer. PPP authentication protocols include PAP (Password Authentication Protocol) and
CHAP (Challenge Handshake Authentication Protocol).
b) PAP performs authentication of the user name and password without using encryption. CHAP
uses encryption in user authentication.
c) PPP (Point-to-Point Protocol) is a protocol used for data transmissions between two distant
points. E-mail encryption protocols include S/MIME (Secure MIME) and PGP (Pretty Good
Privacy).

Q3-57 c) Authentication technology

Unlike fixed password authentication, in which the same password information is sent over the
network again and again, challenge-response user authentication offers enhanced security against
the risk of network wiretapping. In challenge-response authentication, hash values based on
passwords are used. Even if these hash values are captured by third parties on the network, they
cannot be used to recover or guess the initial password. On the other hand, in fixed password
authentication, eavesdropped passwords can be used in replay attacks. Therefore, c) is the
appropriate description.
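The exchange can be sketched as follows (the hash construction and message layout are assumptions for illustration; protocols such as CHAP define their own formats, and real servers store a protected form of the secret rather than the plaintext used here):

```python
import hashlib
import secrets

def response(challenge, password):
    # Only this hash crosses the network; the password itself never does.
    return hashlib.sha256(challenge + password.encode()).digest()

# Server: issue a fresh, random challenge for every authentication attempt.
challenge = secrets.token_bytes(16)

# Client: prove knowledge of the password without transmitting it.
client_response = response(challenge, "correct-password")

# Server: recompute from the registered secret and compare.
assert client_response == response(challenge, "correct-password")
assert client_response != response(challenge, "wrong-password")
```

Because the next login uses a different random challenge, a response captured by a wiretapper cannot be replayed later.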

a) This description corresponds to single sign-on systems.


b) Challenge codes do not include password information, so password decipherment is not
possible. Likewise, response codes cannot be used to decipher passwords either. However, if
challenge codes and response codes are captured, passwords can be decoded by means of
dictionary attacks or brute force attacks.
d) Biometric authentication consists of authentication using data generated from physical traits,
such as fingerprints, palm vein patterns, and iris patterns. Authentication which is based on
confidential information remembered by the user is called knowledge-based authentication.


Q3-58 c) Using public keys registered with Certification Authority

Digital signatures can be obtained by encrypting message digests (hash values) of data by using
the signer’s private key. Recipients can confirm the authenticity of the sender (signer) and detect
falsification of the signed data, by using authentic signers’ public keys registered with a CA to
decrypt signatures, and comparing them with message digests generated from the signed data. In
this way, the public key of the communicating party is used in digital signature verification, so c)
is correct.

a) Certificates are used to prove that registered public keys belong to the person who registered
them, and are issued to the registering party (signer). The communicating party’s public key is
not involved in the receiving of a certificate.
b) In public key cryptography, a message encrypted with a specific communicating party’s public
key is sent to a recipient together with that public key. The recipient uses his or her own
private key to decrypt the received encrypted message. If the public key were capable of
decrypting it, anyone with the public key could decrypt the message, so public keys are not
usable for decryption.
d) As explained above, digital signatures are created using the sender’s own private key.

Q3-59 b) Explanation of risk management

Risk management for information systems refers to the series of activities involved in
identifying potential future risks which might affect information systems, considering
countermeasures, and implementing those countermeasures. Risk items which should be identified
include destabilizing factors in the functional characteristics and latent vulnerabilities in the
system, and measures with reasonable costs are implemented to prevent or minimize losses in the
event of risk occurrence. Therefore, b) is the correct answer.

a) This is a description of “measures against risks,” one element of risk management, and not an
overall explanation of risk management.
c) This is a description of “contingency plans,” one element of risk management, and not an
overall explanation of risk management.
d) This is a description of providing information needed for risk reduction, one element of “risk
countermeasures,” which are in turn part of risk management, and not an overall explanation
of risk management.

Q3-60 d) Explanation of content implemented by risk management

Risk analysis predicts risks which would result in management resource losses if they materialized, and confirms the scope of their effects. Risks can be categorized into pure risks and
speculative risks. The former consists of risks which would only result in losses, while the latter


consists of risks which could result in either profits or losses. Speculative risks also include losses,
so they are of course one of the risk analysis targets. Therefore, d) is the correct answer.
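One common quantitative measure in risk analysis (a standard formula, not one stated in the question; all figures below are hypothetical) is the annualized loss expectancy: the loss per occurrence multiplied by the expected annual rate of occurrence, summed over the risk register.

```python
# Hypothetical risk register: (risk, loss per occurrence, occurrences/year).
risks = [
    ("fire",                 5_000_000, 0.01),  # pure risk: loss only
    ("data theft",           2_000_000, 0.05),  # pure risk
    ("new-business failure", 3_000_000, 0.20),  # speculative risk: may also yield profit
]

def annualized_loss_expectancy(loss, rate):
    # ALE = single loss expectancy x annualized rate of occurrence
    return loss * rate

total = sum(annualized_loss_expectancy(loss, rate) for _, loss, rate in risks)
print(total)  # expected annual loss across the register
```

Note that the speculative risk contributes to the loss estimate just like the pure risks, which is why it belongs among the risk analysis targets.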

a) When risk analysis is performed, in addition to estimating loss amounts, we must also estimate
the budget for risk countermeasures.
b) Speculative risks are controlled by managing bodies. Generally speaking, speculative risks consist of not only company stock investments but also risks incurred in the course of business. These do not include pure risks such as fires, theft, or fraud.
c) Risk financing is a method of allocating funds in order to recoup losses with minimal costs
after a risk has occurred. Risk analysis, risk control, and similar costs are not included.

Q3-61 b) Three elements of ISMS information security

ISMS (Information Security Management System) is defined by ISO/IEC 27002 (JIS Q 27002) (Note: ISO/IEC 27002 is a renumbering of the previous standard, ISO/IEC 17799) as a management system mechanism whose purpose is to protect information assets from a wide range of threats.
This standard requires that three information security items be maintained.
• Confidentiality: Ensuring that information is accessible only to those authorized to have access
• Integrity: Protecting the accuracy and completeness of information and the methods that are
used to process it
• Availability: Ensuring that information and related assets are accessible when needed by an
authorized entity
Therefore, b) is the correct answer.
ISO/IEC 27001 (JIS Q 27001) is an ISMS certification standard. The ISO/IEC 27000 series is a
series of international standards developed based on the British standard BS 7799.

Q3-62 a) TEMPEST technology

TEMPEST (which is not an acronym, but has been given various backronyms, including “Total
Electronic and Mechanical protection against Emission of Spurious Transmissions”, “Transient
Electromagnetic Pulse Surveillance Technology”, etc.) was a codename used by America’s NSA
(National Security Agency) to refer to electromagnetic eavesdropping technologies. These are
technologies which do not involve direct contact with electronic devices, but instead catch and
decode the contents of electromagnetic emissions from computers or peripheral devices. The only
way to counter these technologies is through electromagnetic shielding, so a) is the appropriate
countermeasure.

b) Technologies which involve intercepting packets mid-transmission and altering their contents
are not TEMPEST technologies.
c) This description applies to computer virus related technologies and countermeasures, and is


not related to TEMPEST.


d) A technology which intercepts and analyzes communication signals transmitted from wireless LANs is a way of eavesdropping on data frames, and is not a TEMPEST technology.

Q3-63 b) Content of CC (Common Criteria)

CC (Common Criteria) is a set of shared IT security evaluation criteria defined as an international standard, ISO/IEC 15408, and is short for Common Criteria for Information Technology Security Evaluation.
The CC was a result of a project to unify the existing TCSEC (Trusted Computer System
Evaluation Criteria) and ITSEC (Information Technology Security Evaluation Criteria). Its first
draft was announced in 1994, and underwent trial use from 1996. After that, it became an
international standard in 1999 in the form of ISO/IEC 15408 (Evaluation criteria for IT security).
Therefore, b) is the correct answer.

a) Encryption algorithms are standardized by ISO/IEC 18033.


c) This description corresponds with ISO/IEC 17799 (changed to ISO/IEC 27002 in 2007).
d) Security related protocols are standardized individually for each layer. For example, IPsec is
standardized by the IETF (Internet Engineering Task Force).

Q3-64 b) Methods for handling computer virus

Binary-type viruses are executable form viruses written in binary code (code composed of 1s
and 0s). It is difficult to decode program command procedures from binary code contents, but
binary code can be disassembled, and while the original complete source code cannot be obtained,
it is easier to analyze the command procedures of the resulting mnemonic code (expressed in
assembly language) than it is to analyze binary code itself. Therefore, it is an effective method for
clarifying the functions of new viruses. Therefore, b) is appropriate.

a) Pattern matching is to check bit patterns in data being scanned against bit patterns
characteristic of virus code. Encrypting the data will cause the bit pattern of the data being
scanned to change, so pattern matching cannot be applied to encrypted data.
c) The method of detecting a virus by identifying unauthorized behavior is a way to monitor
virus behavior, but since it only evaluates the likelihood that something is a virus, it is not the
most effective way to identify virus names. In order to identify virus names, other
confirmation, such as pattern matching, is needed.
d) Worms are programs which can self-replicate by, for example, creating copies of themselves.
Recently, worm propagation over networks has become common. Parasitic viruses which
infect existing files are generally categorized as narrowly-defined viruses, not worms.
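The pattern matching described in option a) can be sketched as a simple signature scan (the signatures and data below are made up for illustration); note how even trivial obfuscation of the data defeats it, which is why it cannot be applied to encrypted data:

```python
# Hypothetical virus signatures: characteristic bit patterns of known viruses.
SIGNATURES = {
    "demo-virus-A": b"\xde\xad\xbe\xef",
    "demo-virus-B": b"EVIL!CODE",
}

def scan(data):
    # Report every signature whose characteristic pattern occurs in the data.
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

infected = b"header..." + b"\xde\xad\xbe\xef" + b"...payload"
assert scan(infected) == ["demo-virus-A"]

# "Encrypting" the data (here, a trivial XOR) changes every bit pattern,
# so signature-based pattern matching no longer finds anything.
obfuscated = bytes(byte ^ 0x5A for byte in infected)
assert scan(obfuscated) == []
```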


Q3-65 d) Explanation of Web beacon

Web beacons are images embedded in Web pages or e-mails formatted using HTML in order to
collect access information regarding users, so d) is correct. Normally, extremely small images are
used, so as not to be noticed by users. When users access Web pages or view e-mails containing
Web beacons, accesses to servers will occur in order to download the corresponding image data.
The server records this access information, and secretly collects user access information.
Because of their extremely small size, Web beacons are also known as Web bugs. Recently,
there has been a great deal of criticism of Web beacons, and some e-mail programs can be
configured so as not to open image files when previewing e-mail.

Q3-66 c) Software which surreptitiously externally transmits personal data

Spyware is often hidden in file sharing software or download support software, and is
downloaded or installed by unsuspecting users who have agreed to software licenses without
reading them carefully. It then surreptitiously externally transmits data such as users’ personal
data. Therefore, c) is correct. Unlike viruses or other malicious programs, it does not illegitimately
invade computers, but receives user authorization, so provided that the transmission functions of a
given piece of spyware are clearly stated within a software license, that program may not always
be categorized as a malicious program. They are also called adware or grayware.

The meanings of the other terms used are given below.


a) Finger: Finger is an application used to discover user information for a host over the Internet. It is defined by RFC 1288. Finger can be used to discover not only the names
and login times of logged in users, but also, if a specific user name is specified, detailed
information regarding that user. As such, many hosts forbid Finger queries.
b) Whois: Whois is a database that contains information about domain names (the IP addresses of
the domain names, their administrators, contact information, etc.) for use in network
operations management.
d) Rootkit: Rootkit is the collective term for malicious tools such as backdoors and Trojan horses
used in unauthorized intrusion and attacks. Attackers often use rootkits in their initial attacks.

Q3-67 a) Computer crime techniques

The salami technique is a computer crime technique in which small amounts are taken from a
large number of resources, so a) is the appropriate description. An example of the salami technique
would be to have all fractional amounts of accrued bank interest deposited into one’s own personal
bank account.

b) Scavenging is a criminal technique which involves silently searching inside and around
computers for residual information after a program has been executed in order to steal used


information. For example, this might include recovering printed materials which have been
thrown away in a garbage can.
c) In the Trojan horse technique, programs (modules) which surreptitiously perform
unauthorized actions within software systems are placed on those systems, and perform
unauthorized processing at a later time when triggered by some sort of event.
d) Spoofing is a criminal technique in which unrelated third parties bypass authentication
procedures and pretend to be legitimate parties in order to carry out unauthorized actions.

Q3-68 a) Packet filtering firewall processing

In firewall processing, a firewall “proceeds in order from rule 1 shown in the rule list, and stops
when any rule applies, disregarding the rules which follow it.” Therefore, we should check
whether or not each rule matches the information of packet A in order from rule number 1.
Packet A has source IP address “10.1.2.3”, so rule number 1 applies (if both source IP addresses
are the same, it is irrelevant what the other information is). The action listed for rule number 1 is
“prevent passing.” In other words, the packet is prevented by rule number 1, so a) is correct.
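The first-match evaluation can be sketched as follows (the rule table here is a hypothetical stand-in, since the question’s actual table is not reproduced above):

```python
# Hypothetical rule list; "*" is a wildcard that matches any value.
RULES = [
    {"src": "10.1.2.3", "dst": "*",        "port": "*",  "action": "deny"},   # rule 1
    {"src": "*",        "dst": "10.2.3.4", "port": "80", "action": "allow"},  # rule 2
    {"src": "*",        "dst": "*",        "port": "*",  "action": "deny"},   # default
]

def matches(rule, packet):
    return all(rule[field] in ("*", packet[field]) for field in ("src", "dst", "port"))

def filter_packet(packet):
    # Proceed in order from rule 1 and stop at the first rule that applies;
    # the rules that follow are disregarded.
    for rule in RULES:
        if matches(rule, packet):
            return rule["action"]

packet_a = {"src": "10.1.2.3", "dst": "10.2.3.4", "port": "80"}
assert filter_packet(packet_a) == "deny"  # rule 1 matches; rule 2 is never consulted
```

Even though a later rule would allow the packet, evaluation stops at the first match, which is the behavior the question describes.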

Q3-69 b) S/MIME functions

S/MIME (Secure Multipurpose Internet Mail Extensions) is an encryption technology used in e-mail which uses RSA public key cryptography to encrypt digital signatures and e-mail bodies, and thereby prevents spoofing, tampering, wiretapping, and the like. Therefore, b) is the appropriate description. S/MIME is an extended specification that adds confidentiality to MIME (Multipurpose Internet Mail Extensions), an e-mail standard in which images, non-English documents, and the like are converted into ASCII code text files for transmission.

a) Using set procedures to reduce data volume without losing the meaning of the data is called
compression, but this is not an S/MIME function.
c) Message disposition notification is the sending of a reply e-mail notifying a sender that the
e-mail has been received and opened. This is performed manually or by using e-mail software
functions.
d) Resending an e-mail message is generally performed manually, or with e-mail software or
services which can resend the e-mail.

Q3-70 b) Security for access to Web server

When a Web server is accessed from a Web client via a proxy server, transmissions between the
proxy server and the Web server normally use port 80 (HTTP). However, when SSL (Secure
Sockets Layer) is used, port 443 (HTTPS) is used. Therefore, even if there are proxy servers along
the transmission route, HTTPS transmission is designed to tunnel through proxy servers (they only
relay traffic). In other words, proxy servers do not decrypt SSL encryption, so reference


information sent between users and Web servers is not disclosed to any other parties. Therefore,
b) is the appropriate description.
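The tunneling starts with the browser sending an HTTP CONNECT request to the proxy; after the proxy answers with a success status, it only relays bytes, and the SSL session runs end to end between browser and Web server. A sketch of the raw request (the host name is a placeholder):

```python
# The request a browser sends to a proxy to open an HTTPS tunnel on port 443.
host, port = "example.com", 443   # placeholder destination

connect_request = (
    f"CONNECT {host}:{port} HTTP/1.1\r\n"
    f"Host: {host}:{port}\r\n"
    "\r\n"
)
print(connect_request)
```

Because the proxy never sees anything but this CONNECT request and encrypted bytes, it cannot read or cache the content exchanged inside the tunnel.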

a) Proxy server cache contents can be seen by multiple users, and so are disclosed to users other
than the original users.
c) With basic HTTP (HyperText Transfer Protocol) authentication, when users access a Web
page which requires authentication, the Web server asks for their user ID and password. Once
these have been entered, pages beyond the login page become viewable. Therefore, if the
browser is left running, and different users use the browser, information will remain viewable
to users other than the original user.
d) A reverse proxy is useful for improving response time because it caches static content.

Q3-71 a) Threats to integrity

The concept of information security consists of three properties: integrity, confidentiality, and
availability. Maintaining these properties is what it means to maintain security. Integrity is the property of
protecting the accuracy and completeness of information assets, so attacks which carry the risk of
falsifying or deleting data are attacks which threaten integrity. Therefore, a) is correct.

b) This is an attack that threatens confidentiality. Confidentiality is the property of making information unusable or unviewable by unauthorized parties. Unauthorized access, unauthorized copying, information leakage, and wiretapping are attacks related to confidentiality.
c) This is an attack that threatens availability. Availability refers to the ability of authenticated
users to access and use systems or data when they wish to. Using a DoS attack to prevent
proper system use is an attack on availability.
d) This is an attack that threatens confidentiality.

Q3-72 d) Explanation of SAML (Security Assertion Markup Language)

SAML (Security Assertion Markup Language) is a protocol that is used for the mutual exchange
of XML formatted authentication information, attribute information, and access control
information in exchanging digital information between different systems, such as in B2B
situations, in order to reduce the overhead effort involved in performing authentication each time
communications are carried out with multiple systems. With SAML, authentication information
for one site can be used on other sites, so single sign-on functionality can be provided. In November
2002, version 1.0 was accepted as a standard by the XML interoperability standardization
organization OASIS (Organization for the Advancement of Structured Information Standards).
Web services are services offered on the Internet that are automatically coordinated so as to function as a single service as a whole. For example, if a user enters overseas business trip requirements on a portal site, the portal site can coordinate with railway company, airline


company, and travel agency sites, and thus the user can automatically and collectively perform all
necessary reservation processing. Messages transmitted between the sites use the SOAP (Simple
Object Access Protocol) as an XML data transmission protocol. Among Web service
implementation protocols, SAML is a protocol which is in charge of exchanging authentication
information and the like. Therefore, d) is the correct answer.

a) This is an explanation of UDDI (Universal Description Discovery and Integration), which uses
XML.
b) This is an explanation of e-mail security protocols such as S/MIME.
c) This is an explanation of XKMS (XML Key Management Services), used in the registration
and expiration of key information used in XML signatures, XML encryption, and the like, as
well as validity checking.

Q3-73 d) SSH

SSH (Secure Shell) is a tool and protocol used to securely execute remote commands such as
remote login or remote file copy. Conventional UNIX remote commands transmitted login
authentication information and files in plain text, and were therefore insecure. SSH performs
secure mutual authentication between servers (remote hosts) and clients (local hosts), and thereby
offers a variety of commands whose confidentiality and integrity are assured. Therefore, d) is
correct.

a) This is an explanation of S/MIME (Secure/Multipurpose Internet Mail Extensions).


b) This is an explanation of SET (Secure Electronic Transaction), 3-D Secure, and the like.
c) This is an explanation of e-mail security tools such as PGP (Pretty Good Privacy).

Q3-74 b) SQL injection attack

SQL injection attacks are attacks which can occur when applications utilize user-entered data to
dynamically generate SQL commands in order to access databases. It is called SQL injection in the
sense of “injecting” the SQL command, because when users enter SQL command strings which
are not anticipated by the application, those contents are executed. In order to prevent this type of
attack, systems must prevent SQL interpretation of user-entered values, for example by eliminating characters in user-entered values which have special meanings in database queries or operations. Therefore, b) is correct. Eliminating unauthorized character strings
in order to prevent them from affecting database processing is called escape processing.
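An alternative to hand-written escaping is the placeholder (bind variable) approach, where the database driver itself treats the input as pure data. A sketch using Python’s built-in sqlite3 module (the table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # a classic injection attempt

# Vulnerable: the input is pasted into the SQL string, so the quote in it
# terminates the literal and the OR clause matches every row.
unsafe_sql = f"SELECT * FROM users WHERE name = '{user_input}'"
assert len(conn.execute(unsafe_sql).fetchall()) == 1  # leaked the whole table

# Safe: a placeholder makes the driver treat the input as plain data.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
assert safe_rows == []  # no user is literally named "' OR '1'='1"
```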

The other options in the answer group refer to the attacks and their countermeasures as
explained below.
a) This method can be used to prevent directory traversal, which makes improper use of directory
specification methods.


c) This method can be used to prevent XSS (cross site scripting) which causes improper
operation on clients, including server output containing improper HTML commands or scripts.
Replacing character strings with other character strings is called sanitizing.
d) This method can be used to prevent buffer overflows, which are triggered by inserting data
that exceeds the maximum input buffer length, and which result in rewriting return addresses
fraudulently on the stack and executing malicious code.


Section 4 Development Technology

Q4-1 d) Activities in external design phase

In the external design phase of system development, functions of the new systems are clarified
based on the requirements specifications and a general outline of what sort of system it will be is
designed. More specifically, these include checking requirements specifications, defining and
deploying subsystems, screen and form design, code design, and logical data design. In logical
data design, data items are identified with regard to I/O data considered in the designing of the
definition and deployment of the subsystems, and then the relationships among those data items
are considered to determine data structure. Therefore, d) is the correct answer.

a) Physical data design is performed in the internal design phase.


b) Program structured design is performed in the program design phase.
c) Requirements definition is performed in the basic planning phase.

Q4-2 a) Techniques for software requirements definition, analysis, and design

A decision table is a table consisting of a conditions section that describes possible conditions
and an actions section that describes actions that take place when those conditions occur. It is
suitable for organizing complex conditions. As stated in the question, it is used for describing
requirements specifications and checking programs. Furthermore, it is also effective for
identifying test cases. Therefore, a) is the correct answer.
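A decision table can be mechanized directly: each column (a combination of condition values) maps to its actions, and because the table enumerates every combination, its entries double as a test-case checklist. A sketch with hypothetical discount rules:

```python
# Decision table with two conditions:
# (member?, purchase >= 5000 yen) -> action.  The rules are hypothetical.
DECISION_TABLE = {
    (True,  True):  "free shipping + 5% off",
    (True,  False): "5% off",
    (False, True):  "free shipping",
    (False, False): "no discount",
}

def decide(is_member, total):
    return DECISION_TABLE[(is_member, total >= 5000)]

assert decide(True, 6000) == "free shipping + 5% off"
assert decide(False, 4999) == "no discount"

# Every condition combination is enumerated, so the keys give a complete
# checklist of test cases -- 2^2 of them for two binary conditions.
assert len(DECISION_TABLE) == 2 ** 2
```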

b) This is a description of a state transition diagram. An NS (Nassi-Shneiderman) chart is a popular example of a structured chart, and it is suited to illustrating the logical structure of programs. However, its drawback is that diagrams become complicated as the logical structure becomes more complex. For this reason, PAD and YAC II charts, which fix this problem, are used.
c) This is a description of a control flow diagram. A control flow diagram is a diagram that adds
a description of the flow of control on a data flow diagram. Caution is required as this is
different from control flows and control graphs which show the logical structure of a program.
d) This is a description of a data flow diagram.

Q4-3 d) Systems suitable for using state transition diagrams

A state transition diagram is a representation method for describing the changes of a number of
states in response to certain conditions. It is suited to describing systems that behave differently
depending on its state even if the same event (condition) occurs. As an image of this
representation method, it would be helpful to imagine a state transition diagram (ready - executing
- waiting) of tasks.


For controls that adjust temperature for optimum plant growth through sensors installed in a
greenhouse, there are a number of control states (for example, strong heat, weak heat, no control,
weak cooling, strong cooling, etc.) and the state changes in response to the values detected by the
sensors. Therefore, for the system in d), a state transition diagram is the suitable representation
method.
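The state-dependent behavior that makes a state transition diagram the right tool can be sketched as a transition function; the states and temperature thresholds below are hypothetical:

```python
# Hypothetical greenhouse controller.  The same sensor reading can lead to
# different behavior depending on the current state (hysteresis) -- exactly
# the property a state transition diagram captures.
def transition(state, temperature):
    if state == "heating":
        return "idle" if temperature >= 22 else "heating"
    if state == "cooling":
        return "idle" if temperature <= 24 else "cooling"
    # state == "idle"
    if temperature < 18:
        return "heating"
    if temperature > 28:
        return "cooling"
    return "idle"

# The same 20-degree reading: keep heating if already heating, stay idle otherwise.
assert transition("heating", 20) == "heating"
assert transition("idle", 20) == "idle"
```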

a) “An inventory-taking system that counts inventory assets at the end of each month and at book
closing,” is not suitable, because quantities are aggregated uniformly and processes do not
change in response to state changes.
b) Measurement of the operational status of system resources and the resulting reports are
records and output of data and do not involve state transition.
c) “A water bill accounting system that calculates charges from the data of water meters,” is a
simple arithmetic calculation.
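The task state transition (ready - executing - waiting) mentioned above can be sketched as a transition table. The event names below are hypothetical; the point is that the next state depends on the current state as well as on the event.

```python
# A sketch of the task state transition (ready - executing - waiting)
# mentioned above, written as a transition table. The event names are
# hypothetical; the point is that the next state depends on the current
# state as well as on the event.

TRANSITIONS = {
    ("ready", "dispatch"): "executing",
    ("executing", "preempt"): "ready",
    ("executing", "wait_io"): "waiting",
    ("waiting", "io_done"): "ready",
}

def next_state(state, event):
    # Undefined (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

# The same event ("dispatch") has an effect only in the "ready" state.
state = "ready"
for event in ["dispatch", "wait_io", "io_done", "dispatch", "preempt"]:
    state = next_state(state, event)
```

The greenhouse control in option d) would follow the same pattern, with sensor readings as events and control states (strong heat, weak heat, and so on) as the states.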

Q4-4 a) Analysis at the time of event-driven program development

Event response analysis is an analysis method, as described in a), which is used to “analyze a
series of behaviors which are responses by the system over time to external events.” Events are
system states that are achieved through external actions. Responses are system behaviors in
accordance with external events. This technique is applied to the control flow model and the Petri
net model.

b) KJ method: An idea-organizing technique devised by Jiro Kawakita, in which collected pieces of information are grouped by affinity to find a structure.


c) Functional analysis: A method that analyzes the relationships between input data and output
data and defines the relationships between each other by using DFD.
d) Structural analysis: A method that regards the elements that make up a system as entities and
the relationships between the entities as relations, and represents these as an abstract model.

Q4-5 b) Top-down and bottom-up approaches

A logical data model represents the information requirements of a target world. It analyzes the
entire surrounding environment of the intended system, and thereby models the overall data
structure. A typical representation method is the E-R model. Characteristics of the logical data
model are that it is an overall description and that it is not dependent on DBMS. Note that this
logical data model is sometimes called the conceptual data model.
Regardless of whether the analysis approach is top-down or bottom-up, the final logical data
model that is created should be the same. Furthermore, it is desirable to cover all the required data
items and attributes and to be normalized so as to eliminate redundancies. Therefore, b) is the
appropriate description.

a) Even if the top-down approach is applied, we should consider not only the required data


handled in existing business operations but also the user requirements for the new system. It is
incorrect to say that the current state of business operations should not be analyzed.
c) In the top-down approach, irrespective of current screen and form formats, truly required
business operation functions are selected, and new screen and form designs are performed
based on those selections. It is a technique that differs from the bottom-up approach that
refines designs based on current screens and forms.
d) There are cases where new systems are designed using the bottom-up approach. Therefore,
this is an incorrect description.

Q4-6 d) DFD

DFD (Data Flow Diagram) is one of the tools used in the structured analysis method and refers
to a diagram that is used for modeling business systems focusing on the flow of data. Therefore, d)
is the correct answer.

a) This is an explanation of a flowchart.


b) This is an explanation of a state transition diagram.
c) This is an explanation of an E-R diagram.

Q4-7 b) Hierarchical DFD

A DFD (Data Flow Diagram) is a diagram that is used for modeling business processes with the
focus on the flow of data. The four symbols shown below are used in a DFD (Data Flow
Diagram).

• Process
• Data store (file)
• Data source or sink (destination); this symbol does not appear in the diagram shown in the question
• Data flow (the flow of data between components)

The question shows a part of a hierarchical DFD and asks for the correct detailed DFD in
which process 1 is partitioned into sub-processes. It is common to create a detailed DFD for each
process in a hierarchical DFD. The key here is that the data flows entering and leaving an upper
level process must match those data flows of its detailed DFD whose start or end point lies
outside the detailed DFD. Process 1 has two inputs and
two outputs. Note that while the sub-processes of process 1 are represented as process 1-1, process
1-2, and so on, the data flows that connect sub-processes have internal start and end points and
do not appear in the representation of process 1. Furthermore, processes in a DFD are defined as


obtaining output data flow by performing some sort of process on the input data flow. This means
that processes that have no input or processes that have no output are inappropriate.

a) The flow of data overall is one input (data enters process 1-1) and two outputs (one each out of
1-2 and 1-3). Process 1 has two inputs, and so the inputs do not match. Therefore, this is
inappropriate.
b) The flow of data overall is two inputs (one each into 1-1 and 1-2), and two outputs (one each
out of 1-1 and 1-3). This matches process 1. In addition, there are no problems in the flow of
data between sub-processes (from 1-1 to 1-3, from 1-2 to 1-3). Therefore, this is appropriate.
c) The flow of data overall is two inputs (two into 1-1), and one output (out from 1-3). Process 1
has two outputs, and so the outputs do not match. Therefore, this is inappropriate.
Furthermore, sub-process 1-2 has only inputs and no output. This type of process is an
inappropriate process for DFD.
d) The flow of data overall is two inputs (two into 1-1) and two outputs (two out from 1-3). At
first glance, the number of inputs and outputs match. However, process 1-2 has only outputs
and no input. This type of process is an inappropriate process for DFD.

Q4-8 a) Interpreting E-R diagrams

When an entity has a relationship with itself as shown in the E-R diagram in the question, the
entity is said to have a recursive relationship. Although this E-R diagram represents a parent-child
relationship of organizations, it can also be viewed as a diagram representing organizations of
different hierarchical levels, such as division, department, section, and unit, all as the same types
of entities. It is difficult to list all possible interpretations of this E-R diagram, so it is best to find
the correct answer by considering each of the options.
Since the parent organization and child organization are in a many-to-many relationship, and
there are no other restrictions concerning the number of parent organizations of a child
organization, a) is the appropriate interpretation.

The other options have the following mistakes:


b) There are no restrictions on hierarchy. As mentioned in the previous example of division,
department, section, and unit, a hierarchy with four levels is possible.
c) A many-to-many relationship also includes zero (0), so an organization with no child is
possible. Furthermore, if there must be a child organization, the organizational hierarchy will
loop, or the number of hierarchical levels will become infinite.
d) A network structure is a structure that allows a parent to have multiple children, and a child to
have multiple parents. The parent-child relationship in this E-R diagram is many-to-many and
certainly is a network structure. When a parent-child relationship is one-to-many, it is a
hierarchical structure and not a network structure.
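The recursive many-to-many relationship can be sketched as a set of (parent, child) pairs over a single Organization entity. The organization names below are hypothetical.

```python
# A sketch of the recursive many-to-many relationship in the E-R diagram:
# the relationship is held as (parent, child) pairs over a single
# Organization entity. The organization names are hypothetical.

org_relation = {
    ("Head Office", "Sales Division"),
    ("Head Office", "R&D Division"),
    ("Sales Division", "Planning Group"),
    ("R&D Division", "Planning Group"),  # one child with two parents
}

def parents_of(org):
    return {p for (p, c) in org_relation if c == org}

def children_of(org):
    # An empty set is allowed: an organization may have no child.
    return {c for (p, c) in org_relation if p == org}
```

As in option a), a child may have multiple parents, and as in option c), an organization with no child is possible.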


Q4-9 b) Diagrams used in UML

UML is a unified modeling language used in object-oriented system development, and is


standardized by the OMG (Object Management Group) in the US. In UML, a standard notation is
used from analysis through design, implementation, and testing. UML uses class diagrams, object
diagrams, use case diagrams, sequence diagrams, communication diagrams, statechart diagrams,
activity diagrams, component diagrams, and deployment diagrams. The question is an explanation
of a sequence diagram. Therefore, b) is the correct answer. Note that communication diagrams
have the same content as sequence diagrams, but they were not offered as an option.

a) Component diagrams show what interfaces components (parts) of the software have, as well as
the dependencies between components.
c) Statechart diagrams show the state transition of objects from creation to termination.
d) Use case diagrams show the relationship between system functions and external functions.

Q4-10 a) Software quality characteristics defined by ISO/IEC 9126

Among the six quality characteristics (functionality, reliability, usability, efficiency,


maintainability, and portability) defined by ISO/IEC 9126, functionality is “the ability of the
software product to provide functions that match stated or implied needs when it is used under
specified conditions.” Therefore, a) is the appropriate description.
b) is an explanation of usability, c) of efficiency, and d) of reliability. Therefore, these are all
incorrect. Furthermore, the meaning of maintainability and portability are as described below. It is
advisable to summarize the six quality characteristics based on this question.

• Maintainability: It means the ability of the software product to be easily modified.


Modification may also include corrections or improvements, as well as adapting the software to
changes in environment, changes in requirements specifications, and changes in functional
specifications.
• Portability: It means the ability of the software product to be ported from one platform to
another.

Q4-11 c) Characteristics of data-oriented design

Process-oriented design is the conventional structured design method. When using this method
of system development, in the event of a change in processes, data structures need to be rebuilt,
and all processes related to that data may need to be changed. The data-oriented design was
introduced to avoid this problem. With the data-oriented design method, data is deemed a shared
resource of the company in the same manner as people and physical objects. This method focuses
on the consistency and integrity of these resources. Furthermore, data for the entire system is
centrally managed. Therefore, c) is the correct answer. Development using data-oriented design is


generally conducted in the order shown below.


(1) Modeling business processes
(2) Standardization of data
(3) Design of data life cycle processes
(4) Encapsulation of data and programs
(5) Embedding consistency of data updates in capsules

a) Step (1) above is the modeling of business operations related to the life cycle of data (creation
→ change → disposal). First, clarify the overall picture of business operations and identify the
data to be used. Next, in step (2), standardize the names and formats of the identified data,
and then design a database by using data modeling.
b) Conforming data structure to business processes is process-oriented design.
d) Time must be taken when a system is built using steps (1) through (5) above. It is not suited to
computerization in a short period of time. Furthermore, if it is used only for specific business
operations, the point of centrally managing data is lost.

Q4-12 a) Description related to structured charts

Structured charts are charts that construct program algorithms using the three basic control
structures of sequence, selection, and iteration, without using the GOTO statement. Therefore, a) is
the most appropriate description.

b) This is an explanation of a state transition diagram.


c) This is an explanation of a DFD (Data Flow Diagram).
d) This is an explanation of HIPO (Hierarchy, plus Input, Process, Output).

Q4-13 b) Life cycle of system development projects

Blank A in the diagram is an activity (process) based on the “structured specifications,” which
are the result of a “structured analysis.” Since a “test plan” and “packaged design” are created as a
result, blank A is “structured design,” and therefore, b) is the appropriate answer. This diagram shows the new life cycle
in cases where structured analyses are implemented, as shown in “Structured Analysis and System
Specification,” written by DeMarco, which proposes the structured method.
In addition to T. DeMarco’s structured analysis, there are other structured methods including E.
Yourdon’s top-down design, G. J. Myers’ composite design, and W. P. Stevens’ structured design.
These methods use diagrams, such as data flow diagrams and E-R diagrams, as appropriate.
Along this line, Michael Jackson announced a technique suitable for the design of individual
modules, and this is known as the Jackson method.


Q4-14 c) Object-oriented software components

This description points to the c), componentware. Componentware is general purpose modular
software consisting of individual components which can be combined and used. Typical examples
are JavaBeans and ActiveX.

a) Groupware is software that improves the efficiency of group activities by implementing


communications between users and information sharing. A typical example is Lotus Notes.
b) Concurrent engineering is an activity to shorten development periods, effectively utilize
resources, and cut costs by applying a concurrent approach to activities such as design,
manufacture, and support.
d) Reverse engineering in the case of software, is the clarification of the technical details of
software through schematization and reverse assembling of the structure and specifications.
Caution is required regarding the infringement of intellectual property rights.

Q4-15 d) Combination of the basic concepts of object-orientation

An object is a component that combines procedures and data. Object-oriented
design and development applies this component-based methodology to system development.
In object-orientation, objects are the basis where data attributes and procedures (methods) are
encapsulated, and these interrelate with each other to constitute a whole. Objects of the same type
which share characteristics in common constitute classes. By abstracting a group of object classes,
a higher level object class can be defined. Classes generally have a hierarchy with upper levels
known as parent classes or superclasses, and lower levels known as child classes or subclasses.
Child classes basically inherit the characteristics of parent classes, so child classes only need to
add characteristics unique to each child. Such passing on of the characteristics of parent to child is
referred to as inheritance. Therefore, d) is the appropriate combination.
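A minimal sketch of these concepts, with hypothetical class names: the subclass inherits the encapsulated data and methods of the superclass and adds only what is unique to it.

```python
# A minimal sketch of encapsulation and inheritance with hypothetical
# class names: the subclass inherits the data attributes and methods of
# the superclass and adds only what is unique to it.

class Vehicle:                       # parent class (superclass)
    def __init__(self, speed):
        self._speed = speed          # encapsulated data attribute

    def describe(self):              # method operating on the data
        return f"moves at {self._speed} km/h"

class Bus(Vehicle):                  # child class (subclass)
    def __init__(self, speed, seats):
        super().__init__(speed)      # inherited characteristics
        self.seats = seats           # characteristic unique to the child
```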

Q4-16 b) Relationship between classes in object-orientation

In object-orientation, the relationship of dividing the upper level concept of “automobile,” into
specific items such as “bus” and “truck,” as shown in the diagram is known as a
“generalization-specialization” relationship. In a generalization-specialization relationship,
defining an upper level class by extracting the common details of a lower level class is called
“generalization.” Therefore, b) is the appropriate description.

a) The passing on of the details (data and operations) defined in the “automobile” upper level
class to lower level classes such as “bus” and “truck” is known as inheritance. An instance
refers to individual objects whose class has been set with specific attributes, etc.
c) Upper level classes are known as superclasses, and lower level classes are known as
subclasses. Therefore, in relation to the “Automobile” class, classes such as “Bus” and

“Truck” are subclasses. Objects are the actual elements that make up the details of individual
classes. A class is the concept of regarding objects as a collective entity.
d) Specialization is to define lower level classes such as “Bus” and “Truck” according to the
differences in the categorized details of the “Automobile” class. This means that the “Bus”
and “Truck” classes are not defined as a class of the “Automobile” superclass, but rather as a
subclass of the “Automobile” superclass.

Q4-17 c) Relationships between base class and derived class

As can be guessed from the general meanings of the words base and derived, a base class is a
class that constitutes a basis, and a derived class is a class that is derived based on the base class.
In this case, the expression “based on the base class” means to “inherit the characteristics
(attributes and methods) of the base class,” and is known as inheritance. Furthermore, the
relationship of classes that have an inheritance relationship is known as a
generalization-specialization relationship (an “is-a” relationship). The base class can also be
called a parent class (superclass), and the derived class can be called a child class (subclass).
The option that is in a generalization-specialization relationship is the “diagram” and “triangle.”
Therefore, c) is the correct answer. In addition to the generalization-specialization relationship
asked in this question, there is another relationship between classes known as whole-part
(aggregation-decomposition) relationship (“part-of” relationship), and the other options are of this
relationship.

Q4-18 c) Explanation of delegation in object-orientation

“Delegation” in object-orientation is a mechanism that assigns operations on one object to


another object. Therefore, c) is the correct answer. More specifically, processing for a certain
operation (method) is implemented as the sending of a message (processing request) to a different
object.
Delegation is regarded as one of the reuse technologies in object orientation. As a reuse
technology in object orientation, inheritance is a typical technology. In inheritance, all methods
defined by the superclass can be reused. In practice, however, inheritance relationships are
sometimes defined merely to reuse a part of the methods, even when not all of them are needed,
and it has been pointed out that this makes the relationships between classes confusing. For this
reason, when only some functions of another class are reused, it has been proposed to request the
processing through delegation instead of using unreasonable inheritance.

a) This refers to a complex object.


b) This refers to propagation.
d) This refers to inheritance.
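A sketch of delegation, with hypothetical class names: instead of inheriting from Logger just to reuse one method, OrderService holds a Logger object and forwards (delegates) the request to it.

```python
# A sketch of delegation with hypothetical class names: instead of
# inheriting from Logger just to reuse one method, OrderService holds a
# Logger object and forwards (delegates) the request to it.

class Logger:
    def __init__(self):
        self.lines = []

    def log(self, message):
        self.lines.append(message)

class OrderService:
    def __init__(self, logger):
        self._logger = logger        # the delegate object

    def place_order(self, item):
        # Implemented by sending a message (processing request) to the
        # delegate rather than by an inheritance relationship.
        self._logger.log(f"order placed: {item}")
        return True

logger = Logger()
service = OrderService(logger)
ok = service.place_order("book")
```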


Q4-19 a) Module strength (cohesion)

The characteristics of module strength (cohesion) are shown in the table below.

Table  Module strength (listed from weakest to strongest)

  Strength                    Characteristics
  Coincidental strength       Includes multiple functions with no relationships.
  Logical strength       c)   Includes multiple related functions and selectively
                              executes one of them according to arguments.
  Classical strength     b)   Groups functions that are executed at the same time.
  Procedural strength         Includes multiple functions to implement one
                              procedure (specification).
  Communicational        d)   Has the characteristics of procedural strength, and
  strength                    in addition each function handles the same data.
  Informational          a)   Groups multiple functions that handle a specific
  strength                    data structure, with a separate entry point (alias)
                              for each function.
  Functional strength         A module that consists of only one function.

a) “Functions that handle certain tree-structured data are grouped together with the data itself,
and the tree-structured data is made invisible from outside the module” is the concept of
encapsulation, and this is a module with informational strength.
b) “Performed all at once at a certain point” means that this is a module with classical strength.
c) “An argument is prepared for choosing which of the two functions A and B to use” means that
this is a module with logical strength.
d) “The calculation results of A are used in B” means that this is a module with communicational
strength.
When arranged according to module strength, the order is a) > d) > b) > c). Therefore, a) is the
strongest.
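The weakest and strongest options in this question can be sketched with hypothetical functions: calc() has logical strength (an argument selects which of two functions is executed), while tax() has functional strength (one module, one function).

```python
# A sketch of the weakest and strongest options in this question, with
# hypothetical functions. calc() has logical strength: an argument
# selects which of two functions is executed. tax() has functional
# strength: one module, one function.

def calc(mode, x):
    # Logical strength: 'mode' chooses function A or function B.
    if mode == "double":
        return x * 2
    elif mode == "square":
        return x * x

def tax(price):
    # Functional strength: the module performs exactly one function.
    return price * 1.1
```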

Q4-20 d) Module coupling

“The relationship between the modules that refer to global data is common coupling,” and
therefore, d) is the appropriate description. Global data are variables declared in the common area
outside of functions and are referred to as global variables. In module design, independence
between modules is greater when the module coupling is weak, which is generally desirable.
Module coupling is explained below in order of increasing degree of coupling (from weakest to strongest).

(1) Data coupling: Only data elements that are not in the common area and have no structure are
passed along.
(2) Stamp coupling: Data structure (structure or array) not in the common area is passed along
between modules.
(3) Control coupling: Data that instructs the control of the target module is passed as parameters
(arguments).


(4) External coupling: Only data elements externally declared as global data are shared between
multiple modules.
(5) Common coupling: A data structure externally declared (in the common area) as global data is
shared between multiple modules.
(6) Content coupling: Data referencing or direct instruction execution takes place in modules that
have not been externally declared.

a) This is an explanation of content coupling.


b) This is an explanation of control coupling.
c) This is an explanation of external coupling.
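The difference between common coupling and the weakest form, data coupling, can be sketched as follows; the names are illustrative.

```python
# A sketch contrasting common coupling (a global data structure is shared
# between modules) with data coupling (only simple data elements are
# passed as arguments). Names are illustrative.

shared = {"total": 0}                # global data structure (common area)

def add_common(x):
    # Common coupling: reads and writes the shared global structure.
    shared["total"] += x

def add_data(total, x):
    # Data coupling: everything needed is passed in; the result is returned.
    return total + x

add_common(5)
result = add_data(0, 5)
```

add_data() is independent of any shared state and is therefore easier to test and reuse, which is why weaker coupling is generally desirable.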

Q4-21 d) Module partitioning techniques

Modules are program units that are subject to coding, compiling, and unit testing. Module
partitioning techniques performed during the internal design phase include STS partitioning and
TR partitioning, which focus on the flow of data, and the Jackson method and the Warnier method,
which focus on data structure. In the Jackson method, the program structure for converting data is
defined by focusing on both the input data structure and the output data structure. Therefore, d) is
the appropriate answer.

a) STS (Source/Transform/Sink) partitioning is a technique that focuses on the flow of data and
partitions the flow into input data processing, conversion processing, and output data
processing.
b) TR partitioning is a technique that focuses on the flow of data and partitions the flow into the
different processing details dependent on the type of input data. In other words, it is
partitioned according to each type of transaction.
c) Common functional partitioning is a technique that focuses on the functions of a program and
extracts the common functions of multiple modules to constitute a separate common module.

Q4-22 b) Purposes of performing design reviews

A design review is a review that is performed during the software development process at the
design phase. The purpose of the review is to identify design flaws and errors in software
specifications at an early stage to improve quality. Furthermore, specific activities of review are to
evaluate the output (design documents, etc.) of each phase, promptly resolve areas that require
improvement, and prevent faults from being carried over into the next phase. By identifying and
solving errors and specification failures at the design phase, errors are not carried over into the
next phase (programming), and as a result, person-hours for rework can be reduced. Therefore, b)
is the appropriate answer.

a), d) The purpose of a design review is to improve the quality of various design documents that


are the output of designs, as described above. If the review reveals that the documents are of
poor quality and require time-consuming revisions, the development schedule will be affected
and will obviously have to be revised. However, if no review were performed, the problems
would only be discovered later, and the amount of rework would grow in proportion to that
delay. Furthermore, improving quality also improves the accuracy of scale estimates.
c) Although the quality of design documents can be improved through a design review, tests
cannot be simplified. However, an improvement in test efficiency can be expected as a result of
the reduction in the number of bugs.

Q4-23 d) Characteristics of design reviews

The method of checking software requirements definitions, design specifications, programs, etc.
through a conference method that has a defined format, and identifying flaws and improving
quality is known as a review. Three typical review techniques that are often asked in exams are
shown below.

• Inspection: This is a review led by a facilitator called a moderator. As its name indicates,
inspection (checking, auditing) is a type of review that often centers around formal details
such as whether the target of the review complies with development conventions. Typical
reviews include code inspections which target the source code.
• Walk-through: This is originally a term which describes an inspection method where
appropriate input values for a program are assumed and then simulations of the execution
process are performed on paper. Typically, this is a review that is conducted by the author of
the item under review. The author explains the details and various parties involved ask
questions.
• Round robin: This is a method where participants take turns in explaining items. This also
provides grounds for communication and education.

For A through C in the question, the review methods shown below apply for each.
• A: Since all participants take turns to become the person in charge of the review, it can be
determined that this is describing the characteristics of the round robin method.
• B: Since the author is explaining the item under review and simulations are conducted by
assuming input data values, it can be determined that this is describing the characteristics of
the walk-through method.
• C: Since the lead facilitator is fixed and the focus of review is narrowed down and then
evaluated, it can be determined that this is describing the characteristics of an inspection.
Therefore, d) is the appropriate combination.


Q4-24 d) Structured programming

In modular logic design, it is important to keep in consideration the structuring of logic in order
to create an easy-to-understand program. Programs should not be created haphazardly but logic
used in the design should be restricted. According to the structure theorem, the logic of any
proper program with “one entry and one exit” can be described using only the three basic logical
structures of sequence, selection, and iteration (DO WHILE). Restricting logic in this way may
reduce freedom, but a program that anyone can easily understand increases the productivity of
both development and maintenance. Therefore, d) is the appropriate answer.

a), b) These are common points to note about coding, and not limited to structured programming.
c) In general, the number of lines of code in one module should be between 300 and 500 lines.
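As a small illustration (not from the question), any single-entry, single-exit logic can be built from sequence, selection, and iteration alone, without GOTO; here, the even numbers in a list are summed.

```python
# A small illustration (not from the question): any single-entry,
# single-exit logic can be built from sequence, selection, and iteration
# alone, without GOTO. Here, the even numbers in a list are summed.

def sum_even(numbers):
    total = 0                  # sequence
    for n in numbers:          # iteration
        if n % 2 == 0:         # selection
            total += n
    return total               # one exit
```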

Q4-25 b) Methods of preparing test data for black box tests

A black box test is a test where a tester evaluates whether the results of the input are all correct
in accordance with the functional specifications on the assumption that the tester does not know
the internal structure of the program. This can be considered as a test from a user’s perspective.
Sufficient input conditions based on the functional specifications must be considered for test
data. Thus, equivalence partitioning (a test method that divides a range of input into multiple
equivalent classes and then takes representative values) and boundary value analysis (a method
that tests the boundary value of inputs) are used. Therefore, b) is the appropriate answer.

a), c) These options do not test all possible input conditions, and are therefore not applicable to
black box tests. These are methods that are used in operational tests.
d) This is a test that is based on the internal structure of a program and is implemented as part of a
white box test performed from the program developer’s perspective.

Q4-26 c) Test data for the equivalence partitioning method

This question is related to the test data design methods for black box tests. A black box test
checks whether the functions are executed correctly, based on external specifications without
regard to the internal specifications of programs.
Equivalence partitioning, which is one method of test data design, uses, as test data, a
representative value of the valid equivalence class, which is valid data, and a representative value of
the invalid equivalence class, which is invalid data. Here, the question asks for the least amount of
test data. Thus, out of the “−2 through 0” invalid equivalence class, the “1 through 5” valid
equivalence class, and the “6 through 8” invalid equivalence class, three representative values (one
for each class) are selected as test data.
Therefore, c) is the appropriate answer; that is, a set of “−1, 3, and 6” satisfies the requirements.
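The selection of representative values in this question can be sketched as follows, assuming a simple validity check over the “1 through 5” valid class.

```python
# A sketch of the partitioning in this question, assuming a simple
# validity check: the input range splits into the invalid class -2..0,
# the valid class 1..5, and the invalid class 6..8, and one
# representative value is taken from each class.

def is_valid(x):
    # Specification under test: valid inputs are 1 through 5.
    return 1 <= x <= 5

representatives = [-1, 3, 6]   # one value per equivalence class
results = [is_valid(x) for x in representatives]
```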


Q4-27 b) White box tests

White box testing is a test method that checks the validity of the program design and
programming by creating test cases focusing on the internal specifications of the system. For
condition coverage, test cases are created for all true and false decision condition combinations.
For instruction coverage, test cases are created where all instructions are executed at least once.
Condition coverage and instruction coverage are white box tests that create test cases based on the
internal specifications of the program. Therefore, b) is the appropriate answer.

a) Cause-effect graphs and experimental design are black box test methods which create test
cases based on input and output conditions.
c) Equivalence partitioning and boundary value analysis are black box test methods which create
test cases by focusing on groupings of input condition and the boundary values of input
conditions.
d) Module analysis and error guessing are black box test methods which create test cases where
errors are more likely to occur from an analysis of the modules and from experience.

Q4-28 d) Test data required for multiple condition coverage

Decision condition coverage, also known as branch coverage, requires tests that cover both the
true and the false branch of every decision (the diamond-shaped parts of the flowchart). In this
question, the branch into the false (No) direction as a result of the (A = 4,
B = 1) test data, and the branching into the true (Yes) direction as a result of the (A = 5, B = 0) test
data are tested. This satisfies the branch coverage criteria.
On the other hand, multiple condition coverage, as mentioned in the question, is the strictest
coverage criterion. It requires tests that cover all possible combinations of the truth values of all
conditions. In this question, there are two conditions, so there are four possible combinations, as
shown below.

  Condition    (1)     (2)     (3)     (4)
  A > 6        True    True    False   False
  B = 0        True    False   True    False

The two test data items that have already been prepared satisfy (4) and (3), but (1) and (2) cannot
be tested, so test data that corresponds to these two can simply be added. In this case, test data that
is added must be true for the “A > 6” condition. Looking at the details of the options from this
perspective, only d) satisfies these conditions. Furthermore, (A = 7, B = 0) corresponds to (1), and
(A = 8, B = 2) to (2). Therefore, d) is the appropriate answer.
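The reasoning above can be checked by enumerating which truth-value combinations of the two conditions (A > 6, B = 0) a set of test data covers; the set below is the given data plus the pair from option d).

```python
# A small check of the reasoning above: enumerate which truth-value
# combinations of the two conditions (A > 6, B = 0) a set of test data
# covers. The set below is the given data plus the pair from option d).

from itertools import product

def combination(a, b):
    return (a > 6, b == 0)

test_data = [(4, 1), (5, 0), (7, 0), (8, 2)]
covered = {combination(a, b) for a, b in test_data}
all_combinations = set(product([True, False], repeat=2))
```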


Q4-29 c) Tests during system development

In a typical waterfall model, the relationship between each design phase and the tests to check
their details, for example, “requirements definition → system test,” “basic design → integration
test,” and “detailed design → unit test,” is known as the V-shaped model because the tests
are aligned in a V shape with manufacturing (programming) at the center.
“Activity” in the question is defined as a “process component” in Common Frame 2007, which
is a further segmentation of processes that have been grouped from the perspective of roles in
system development activities. Furthermore, systems architecture design, software architecture
design, software detailed design, system integration (test), and software integration (test) in the
question are activities in the system development process in Common Frame 2007. The
relationship between activities, such as requirements analysis and design, and the tests that verify
these activities based on Common Frame 2007 is also V-shaped, as shown below. Therefore, c) is
the appropriate answer.

System requirements analysis System qualification test

Systems architecture design System integration (test)

Software requirements analysis Software qualification test

Software architecture design Software integration (test)

Software detailed design (Software unit test)

Software coding and testing
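The pairing above can be recorded as a simple lookup table. This is an illustrative sketch; the activity names follow the Common Frame 2007 terminology quoted in the explanation.

```python
# Design activity -> the test activity that verifies it (V-shaped pairing)
V_MODEL_PAIRS = {
    "System requirements analysis": "System qualification test",
    "Systems architecture design": "System integration (test)",
    "Software requirements analysis": "Software qualification test",
    "Software architecture design": "Software integration (test)",
    "Software detailed design": "Software unit test",
}

assert V_MODEL_PAIRS["Software architecture design"] == "Software integration (test)"
```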

Q4-30 d) Bottom-up testing

Bottom-up testing is one of the integration test techniques. It is a method to conduct tests
moving upwards from the lower level modules that constitute the program and integrating with
upper level modules. In this test, since the upper level modules that call the modules being tested
are not integrated, a tentative upper level module (driver) is required. Therefore, d) is the
appropriate answer.

a) In general, there are a large number of lower level modules with little interdependence
between them, which allows parallel testing. Therefore, parallel activities are possible from
the early stages of development.
b) A stub is a “tentative lower level module” in top-down testing, and in contrast to bottom-up
testing, it is used to test the integration of modules from the upper level to lower level.


c) In bottom-up testing, since tentative upper level modules (drivers) are used instead of actual
upper level modules, there is no need for upper level modules that have already been tested.

Q4-31 c) Migration into full operation after system changes

After completion of system tests by the developers, operational tests are conducted, which are
led by the operators with the participation of the developers, user departments, and vendors.
Evaluation of the operational tests is conducted by the operators who will be taking over the
system and managing the operation. Changes to the system are verified from the user’s standpoint
to make sure that the changes do not adversely affect the system in use, and the changes are
conducted according to plan and design. These verifications are important activities that determine
whether or not the operations department will accept the changed system. The reason is that if
the system is accepted despite an unfavorable evaluation, the operators will face problems and
issues while the system is in operation. In this sense, operational tests can be
regarded as acceptance tests by the operators. Therefore, c) is the most appropriate answer.

a) Even if there are no changes in operational methods, it is dangerous to put the system back
into full operation after completion of only development department tests.
b) Whether or not to put the system back into full operation should be decided by the person
responsible for the operations department.
d) Acceptance tests by the operations department are required.

Q4-32 c) Items to check upon delivery of software

When the ordered software is delivered, verification is required to make sure that the software
has the specified functions and works correctly. Therefore, c) is the correct answer.

a) Verification of whether the estimate details provided by the contractor are appropriate should
be done when the estimate is received.
b) Verification of whether work is proceeding without delay should be done during software
development and not when the software is delivered.
d) The quality management plan in the question refers to a plan to verify the quality of the
ordered software. This verification should be conducted before verification upon delivery.

Q4-33 a) Education techniques

The explanation of in-basket is correct. Therefore, a) is the correct answer. b) is an explanation
of OJT (On-the-Job Training), c) is an explanation of role playing, and d) is an explanation of
brainstorming.


Q4-34 b) Software maintenance

When an identified bug is corrected, there are cases where the identified bug is corrected but
new bugs (errors) occur as a result of the correction. Such cases are when corrections affect other
areas or cause unintended results. These errors are known as regression errors. Tests to verify
whether or not there are regression errors are known as regression tests. Therefore, b) is the correct
answer.

a) Performance test: This is a test to verify whether goals of system performance such as
response time and turnaround time are met.
c) Load test: This is a test to verify the durability of the system by applying stresses to it, such as
whether the system can handle the processing of large amounts of data and can operate for
extended periods of time.
d) Exception handling test: This is a test to verify whether the system can respond to exceptional
events by entering exceptional data or performing exceptional operations that rarely occur in
regular operations.
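The idea of a regression test can be sketched as follows. This is an illustrative example (the function and case data are hypothetical): after a bug fix, the previously passing cases are re-run together with the new case, so the fix cannot silently break behavior that already worked.

```python
# Hypothetical module with a corrected bug: a discount must never
# push the price below zero.
def apply_discount(price, discount):
    return max(price - discount, 0)

# Regression suite: old passing cases are kept and re-run alongside
# the new case added for the corrected bug.
REGRESSION_SUITE = [
    ((100, 20), 80),   # pre-existing case
    ((50, 50), 0),     # pre-existing case
    ((30, 40), 0),     # new case covering the corrected bug
]

def run_regression():
    return all(apply_discount(*args) == expected
               for args, expected in REGRESSION_SUITE)

assert run_regression()
```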

Q4-35 c) Software development process models

The development method that improves the level of completion of a system by iteratively
performing the development cycle is known as iterative system development. Iterative system
development can be further divided into incremental iteration and evolutional iteration depending
on how the level of completion is improved. Incremental iteration is a method that divides the
development target into a certain number of subsystems and improves the system’s level of
completion by iterating the development of each subsystem. A typical example of this is the spiral
model. Therefore, c) is the correct answer. A characteristic of the spiral model is that it eliminates
risks such as problems with the development techniques and design details of the previous
development before development of a new area (the next iteration), and improves the system
development process itself. It is considered a model suitable for the development of such systems
that the developer does not have development experience in.
On the other hand, evolutional iteration is a method that sequentially improves the fulfillment
levels of requirements by, for example, developing an orthodox process (prototype) and then
adding exceptional patterns and error checks.

a) RAD (Rapid Application Development) model: This is a model that improves completion
levels by using prototypes. It is categorized as evolutional iteration and the scale of iterations
are generally not large enough to be called development cycles.
b) Waterfall model: This a process model that conducts system development by proceeding from
upstream to downstream through development processes such as requirements definitions and
external design.
d) Prototyping model: This is a process model that advances development by repeating user tests


and improvements using trial models known as prototypes. This falls under the category of
system development through evolutional iteration.

Q4-36 a) CMMI

CMMI (Capability Maturity Model Integration) adds the interfacing with hardware
development to CMM (Capability Maturity Model) for software development. There seems to be no
option that explains all of the details, but the description of a) is an explanation of CMM, which
CMMI is based on, and b) through d) have no relation to CMMI. Therefore, a) is the correct
answer. CMM is a process evaluation technique established by the Software Engineering Institute
at Carnegie Mellon University in the US. As with CMM, CMMI evaluates the maturity level of
development process by using the five levels of evaluation criteria shown below.
• Level 1: An initial stage where processes have not been established
• Level 2: A level where processes are managed individually but are not organized
• Level 3: A level where individual experiences are collected and documented, and standard
processes that are consistent throughout the organization are defined
• Level 4: A level where standardized processes are quantitatively measured and analyzed
• Level 5: A level where standard processes are continuously optimized and improved in response
to changes in technology and requirements.

b) “Process model in software development” models the procedures of development activities.
Some examples are the waterfall model and the growth model.
c) “The common frame for system development and transactions centered around software”
(SLCP-JCF98) defines the common mechanisms in the software lifecycle process with the aim
of clarifying development and transactions.
d) There is no model that matches the description.

Q4-37 b) Purpose of Common Frame 2007

The common frame for system development and transactions centered around software
(SLCP-JCF2007) was created with consideration given to the characteristics of the Japanese
software industry while maintaining consistency with the SLCP of ISO/IEC 12207. The purpose of the
common frame is to establish transparency in the software market as well as to vitalize the market,
by using this common frame as a “common measuring stick” for system development and
transactions centered around software. Therefore, b) is the correct answer.

a) The common frame does not handle proposal or management responsibilities in software
transactions.
c) This is a description of software management guidelines.
d) Common frame does not exclude transactions between departments of a company.


Q4-38 a) Explanation of software reuse

When software components are developed, requirements for reliability, versatility, and
standards compliance are stricter than those for regular software, on the assumption that the
components will be reused. This results in increased development person-hours and costs.
Therefore, a) is the appropriate description.

b) Software components that are actually used often have small functional units. However, in the
case of small components, even though only person-hours related to coding and unit tests can
be reduced, as components become larger, person-hours for design and integration tests can
also be reduced. Therefore, larger components yield a greater per-unit reduction in person-hours.
Furthermore, when a large number of small components are reused, person-hours are required
for coding to connect the components, including the designing of that code, and for testing
connections between each component. Therefore, if reuse is possible, large components
increase productivity. However, actual functional units become smaller because of the fact that
the feasibility of reuse without modification becomes lower as functional units become larger.
c) If there is an atmosphere within the organization that encourages reuse activities through
incentives such as a commendation system, this has the effect of promoting reuse. In general,
during the early stages of reuse of components, the increase of person-hours for creating
components, as well as hesitation about using the components for the first time, become elements of
resistance. Therefore, commendation systems will be worthwhile. However, the reuse of
components becomes established over time, and it cannot be said that the effect of a
commendation system will increase in comparison with the early stages.
d) As described in b), the ratio of the person-hours that can be reduced through reuse is
proportional to the size of the component.

Q4-39 a) Explanation of reverse engineering

This question tests the clear understanding of the difference between “reverse engineering” and
“re-engineering.”
Reverse engineering is a process that is a reverse of the normal process of creating source code
from a specification. Therefore, a) is the correct answer.
Furthermore, the specifications obtained from reverse engineering are used for rebuilding
systems and for system maintenance. Re-engineering is the engineered rebuilding of something
that is already completed.

b) Although obtaining specifications from an existing program can be considered reverse
engineering, overall, this is program development based on specifications and is therefore an
explanation of regular engineering or re-engineering.
c) This is an explanation of the technology of creating software components.
d) This is an explanation of software reuse technology in object-oriented programming.


Q4-40 c) Supported targets of software configuration management tools

Configuration management of software generally refers to the management of software names,
version information, usage, installation information, license information, resource information,
release information, change management information, patch application information, etc.
Therefore, c) is the correct answer.
a), b), and d) have no direct relation to configuration management of software.


Morning Exam Section 5 Project Management Answers and Explanations

Section 5 Project Management

Q5-1 a) Role of project manager

In addition to managing the progress of the project, a project manager should also provide labor
management of project members and enhance communication with supervisors and related
departments. Therefore, a) is the appropriate answer.

b) Evaluation of project progress is not an activity which can be left to others.


c) Project members play the main role in project development. The motivation of each member
will determine the success of the project.
d) Communication with users is important. However, an approach of accepting whatever users
request will not have a good influence on project progress. In some cases, the project manager
may have to convince users to retract their requirements.

Q5-2 b) Key to a successful system development project


A system development project is organized according to computerization planning to implement
information strategy. Based on computerization planning, the project manager must provide a
project policy, estimate necessary resources, and prepare various plans such as scheduling, quality
planning, and cost planning to complete the project successfully. In accordance with the above
objectives, the project manager must also optimize the organization structure and personnel
assignment. The primary task of the project manager is to prepare a “project plan” showing these
details and obtain approval from senior management. Therefore, b) is the appropriate answer.

a) Determining methods to resolve business operations issues should not be left until later. The
higher the seriousness of the issue, the sooner it should be defined as a system requirement.
c) The project scope and objectives should be clarified at the time of project launch. It is too late
if this is done during the system design phase.
d) The computerization plan includes the system objectives and scope, functional overview,
project-promoting scheme, cost-effectiveness, and development master schedule. It does not
define the details of project management. As the project manager is responsible for project
management, he should prepare a project plan specifying computerization planning details and
manage the project in accordance with it.

Q5-3 c) Preferred option a project manager should choose

This is a question about undertaking a subcontracted system development project with
undetermined functional specifications. In order to select the correct answer, the difference
between an underpinning contract and a (quasi-) mandate contract must be understood. The key
point is whether an underpinning contract is appropriate for a project with uncertain elements.
Under an underpinning contract, the subcontractor is responsible for the deliverables regardless of
negligence. On the other hand, under a mandate contract, the subcontractor only provides support
and other services and is not responsible for the deliverables if there is no negligence.
It is not possible to make a correct estimation when the functions to be incorporated are still


undetermined. The safest method with minimum risk is to support the outsourcer in finalizing
functional specifications under a mandate contract, and then carry out the development tasks
based on the specifications under an underpinning contract or quasi-mandate contract. Therefore,
c) is the appropriate answer.

a) If the amount is estimated based on the functional specifications which have been determined
in the interim, the development of additional functions will be necessary, and will incur
additional costs. Therefore, this is inappropriate.
b) System integration is a type of underpinning contract. Signing a contract when functions have
not been determined is very risky, and is inappropriate.
d) If the customer accepts this contract, this is not a bad option because measures against risks
have been considered. However, it is not likely that the customer would accept a contract
estimate which inflates the number of development person-hours without reason, and a
competitor may offer a more competitive price. Therefore, this is inappropriate.

Q5-4 a) Processes in project management

A series of activities which produce a result is called a process. There are five process groups in
project management.
1) Initiating process: to start up a project
2) Planning process: to create a plan
3) Executing process: to carry out the project in accordance with the plan
4) Controlling process: to control the progress, cost, quality, and human resources through
reviews to achieve the project goals
5) Closing process: to close the project
Therefore, a) is the correct answer.

Q5-5 c) Base items in a project plan

A project plan is used as a guideline for carrying out the project smoothly and effectively.
Therefore, the project plan must describe the project policy, and plans for necessary resources such
as people, materials, and money. It generally includes: 1) project objectives, 2) scheduling, 3)
quality management plan, 4) personnel plan, 5) outsourcing plan, 6) development environment, 7)
cost plan, among other things. In particular, the cost-effectiveness is essential information in a
report for management because it is a basis for business judgments. The project plan in the
question describes the expected effects of the project in item 1 but does not explain the related
costs.
A cost plan preferably provides not only a plan describing how much, when and how money is
invested, but also a cost-effectiveness analysis to determine whether the investment is justified by
its effects.
Since the question says “the purpose is to report the profitability of the business project as part
of the mid-term plan,” c) “Cost-effectiveness analysis of system development” is the most
appropriate answer.


Q5-6 b) Scope definition in project management

Project scope management defines the deliverables and the work scope of the project. Scope
management has several processes. First, the scope is outlined in the scope planning process. In
the scope definition process, deliverables such as products and documents are defined in a specific
manner. In subsequent processes, WBS (Work Breakdown Structure) and other tools are used to
define the activities for producing the deliverables. Therefore, b) is the appropriate answer.

a) This is a description concerning schedule development in project time management, but it is
not a description of scope definition.
c) This is a description concerning the identification of stakeholders. Although clarifying
stakeholders is related to processes such as communication management, it is not defined as
part of the scope definition.
d) This is a description concerning some activities carried out in the quality planning process for
project quality management, but it is not a description of scope definition.

Q5-7 d) Diagram showing a detailed breakdown of project activities

A top-down approach to project management is one method of identifying requisite activities by
breaking down broader-level activities into specific activities. At this time, a configuration
diagram called a WBS (Work Breakdown Structure) is prepared. Therefore, d) is the correct
answer. The project manager can use a WBS to gain a comprehensive grasp of work items, clarify
person-hours, time for completion, the responsibilities and authorities of staff members, and
identify the characteristics of the project.

a) DFD (Data Flow Diagram) is a diagram used for modeling business processes with a focus on
the data flow.
b) DOA (Data Oriented Approach) is a system design technique in which an E-R diagram is
prepared by modeling a target business process with a focus on the data. Based on this
diagram, the data items are standardized and then the system functions are designed.
c) PERT (Program Evaluation and Review Technique) is a scheduling technique for developing a
work schedule. The project manager can use PERT to determine when to start and finish each
activity in order to complete the project in the shortest possible time, and which activities have
little margin in terms of the number of days required for activities (critical path).

Q5-8 b) Arrow diagram

In order to answer questions concerning process management, it is important to identify which
activity process is the critical path (the path which requires the largest number of days among the
sequence of activities). Based on the schedule in the arrow diagram, the critical path is the
sequence (1) → (2) → (4) → (7) → (9), which requires 24 days ( = 7 + 6 + 6 + 5 ). If an activity on
the critical path can be shortened by one day, the total number of days can also be shortened by
one day. Since the path (2) → (4) is on the critical path, b) is the correct answer.
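The critical-path calculation is simply the longest path through the activity network. The sketch below is illustrative: the question's full diagram is not reproduced here, so every edge except the stated critical path (1)→(2)→(4)→(7)→(9) with durations 7, 6, 6, 5 is hypothetical.

```python
# Activity-on-arrow network; durations in days. Only the critical-path
# edges are taken from the explanation, the rest are hypothetical.
EDGES = {
    (1, 2): 7, (2, 4): 6, (4, 7): 6, (7, 9): 5,
    (1, 3): 5, (3, 7): 4, (2, 5): 3, (5, 9): 8,
}

def longest_path_length(node, goal):
    """Length of the longest (critical) path from node to goal."""
    if node == goal:
        return 0
    durations = [d + longest_path_length(dst, goal)
                 for (src, dst), d in EDGES.items() if src == node]
    return max(durations) if durations else float("-inf")

assert longest_path_length(1, 9) == 24   # 7 + 6 + 6 + 5
```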


Q5-9 a) Characteristics of Gantt chart

A Gantt chart is a process management chart illustrating the implementation schedule and the
progress of each activity item with horizontal lines, with the y-axis representing activity items and
the x-axis representing the time period. It makes it possible to grasp the starting and ending points
of each activity at a single glance. Therefore, a) is the appropriate answer. The Gantt chart is
named after Henry Gantt who developed this graphical presentation method.

b), c) These are descriptions concerning an arrow diagram.


d) This is also one of the characteristics of an arrow diagram and is not specific to a Gantt chart.

Q5-10 b) Estimation of person-days based on project work distribution model

First, calculate the number of days to complete the entire project. Since processes from
requirements definition to internal design took 228 days, and the total ratio of these processes is
0.57 ( = 0.25 + 0.21 + 0.11 ), the estimated number of person-days required for the entire project is
400 days ( = 228 / 0.57 ).
Since half of the program development process has been completed and the remaining half has
not, the ratio of the period that has passed after the internal design process is 0.055 ( = 0.11 / 2 ).
When this ratio is converted to the number of person-days, the result is 22 days ( = 400 × 0.055 ).
At the present time, 250 days ( = 228 + 22 ) have passed and 150 days ( = 400 − 250 ) remain.
Therefore, b) is the correct answer.
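The arithmetic above can be re-derived in a few lines (the ratios are those quoted in the explanation):

```python
# Re-deriving the figures in the explanation (all values in person-days).
ratio_through_internal_design = 0.25 + 0.21 + 0.11   # requirements + external + internal design
ratio_program_development = 0.11

total = 228 / ratio_through_internal_design               # 400 days for the entire project
elapsed = 228 + total * ratio_program_development / 2     # half of program development is done
remaining = total - elapsed                               # 150 days remain

assert round(total) == 400 and round(remaining) == 150
```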

Q5-11 a) Number of adjusted function points in the FP method

The function point method is used to evaluate system size on the basis of system functions.
System size can be evaluated independently of system platforms such as development method,
development language, and OS. It also has other advantages: it can be applied at an
early stage of the development project, and it can be easily understood by users since it is based on
external specifications. Estimation using the function point method is dependent upon the
identification of five elements: input, output, inquiries, logical files, and external interfaces. The raw
function point is derived based on these elements. The adjusted function point is then calculated by
multiplying the raw function point by the adjustment factor derived from the system characteristics.
Typical system characteristics to calculate adjustment factors include data communication,
distributed processing, performance, highly-loaded configuration, transaction volume, online data
entry, end user efficiency, online update, complex processing, reusability, ease of installation, ease
of operations, multiple sites, and ease of change. Therefore, a) is the appropriate answer.

b) Function point calculation is not dependent on development programming language.


c) The number of program steps is not used as an input when the function point is calculated.
d) Function point calculation is not dependent on the skills of the development members. Based
on the derived function point, the number of development person-hours is estimated by
considering the skills of development members, among other factors.
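The calculation structure can be sketched as below. This is an illustrative sketch using the commonly cited IFPUG "average" complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7) and the usual value adjustment factor VAF = 0.65 + 0.01 × (sum of the 14 characteristic ratings); the example counts and ratings are hypothetical, and real counting assigns low/average/high weights per element.

```python
# Average complexity weights per element type (IFPUG convention).
WEIGHTS = {"input": 4, "output": 5, "inquiry": 4,
           "logical_file": 10, "external_interface": 7}

def adjusted_fp(counts, gsc_ratings):
    """Raw FP from the five element counts, adjusted by the VAF."""
    raw = sum(WEIGHTS[kind] * n for kind, n in counts.items())
    vaf = 0.65 + 0.01 * sum(gsc_ratings)   # 14 ratings, each 0-5
    return raw * vaf

# Hypothetical system: raw FP = 40 + 40 + 16 + 50 + 14 = 160,
# all 14 characteristics rated 3 (VAF = 1.07).
counts = {"input": 10, "output": 8, "inquiry": 4,
          "logical_file": 5, "external_interface": 2}
assert round(adjusted_fp(counts, [3] * 14), 6) == 171.2
```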


Q5-12 a) Estimation of workload in system development

COCOMO (Constructive Cost Model), proposed by Barry W. Boehm in 1981, is a method for
estimating the person-hours and time period required to complete a system development project
based on its development size (the number of source code lines) and characteristics. COCOMO
provides adjustment factors (effort adjustment factors) for each element that has an influence on
the project, such as target area, complexity, limitation of computing machinery, staffing
requirements, and tool requirements. When COCOMO is applied to an actual corporate scenario,
the adjustment factors must be selected based on the company’s past experience and performance
data. For this purpose, productivity data must be collected. Therefore, a) is the appropriate answer.

b) Since each member of the development staff possesses a different skill level, it is even more important to
accumulate data on skill level evaluation and the corresponding actual person-hours. Data
from past projects will be very helpful as reference.
c) Actual software quality has a great effect on the accuracy of estimation. If there is a large
difference between the actual number of person-hours and the estimate (planned data), it is
likely that some quality management problems were overlooked during the planning phase.
Therefore, the description “it is unrelated to quality management” is not correct.
d) In the function point method, the size is estimated based on system functions from the user’s
perspective, not on the number of program steps.
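The core COCOMO formula can be sketched as below, using the published basic-model coefficients from the 1981 model (organic 2.4·KLOC^1.05, semi-detached 3.0·KLOC^1.12, embedded 3.6·KLOC^1.20). The single `effort_multiplier` parameter is a simplification standing in for the product of the cost-driver adjustment factors discussed above.

```python
# Basic COCOMO (Boehm, 1981): effort in person-months = a * KLOC ** b.
MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic", effort_multiplier=1.0):
    """effort_multiplier stands in for the product of cost-driver factors."""
    a, b = MODES[mode]
    return a * kloc ** b * effort_multiplier

# A 32-KLOC organic-mode project needs roughly 91 person-months.
assert round(cocomo_effort(32)) == 91
```

Calibrating `effort_multiplier` to a company's own productivity data is exactly why, as the answer notes, past performance data must be collected.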

Q5-13 c) Person-days required for coding

The total number of person-days required for coding is 95
( = 20 × 1 + 10 × 3 + 5 × 9 = 20 + 30 + 45 ). Since the additional person-days required for
specification verification and testing is eight times this number, the total for the project is
95 × (1 + 8) = 855 person-days. In order to complete the entire project in 95 days, 9 staff
members are required ( = 855 ÷ 95 ).
Therefore, c) is the correct answer.
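The same arithmetic, re-derived as a sketch:

```python
# (number of programs, coding days each), from the question's figures.
programs = [(20, 1), (10, 3), (5, 9)]
coding = sum(count * days for count, days in programs)   # 95 person-days of coding
total = coding * (1 + 8)        # verification and testing add 8x the coding effort
staff_needed = total // 95      # staff required to finish everything in 95 days

assert (coding, total, staff_needed) == (95, 855, 9)
```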

Q5-14 d) Expression to represent productivity

Productivity is obtained by dividing the volume of deliverables (i.e., the number of completed
steps in this case) by the number of person-months.
Assume there is a program composed of M steps which has completed three phases including
design, production, and testing. Based on the given productivity, the number of person-months
required for each phase is as follows:
“M/X” for design, “M/Y” for production, and “M/Z” for testing.
Next, add these values to obtain the total person-months for the entire process.

  M/X + M/Y + M/Z = M × (1/X + 1/Y + 1/Z)

When the volume of deliverables “M steps” is divided by the number of person-months (the
above expression), the productivity of the entire process is obtained as follows:


  1 / (1/X + 1/Y + 1/Z)
Therefore, d) is the correct answer.
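The derivation can be verified numerically with a small sketch (the figures for M, X, Y, Z below are hypothetical):

```python
# Overall productivity combines the phase productivities X, Y, Z
# (steps per person-month) as 1 / (1/X + 1/Y + 1/Z).
def overall_productivity(x, y, z):
    return 1 / (1 / x + 1 / y + 1 / z)

# Check against the derivation: M steps take M/X + M/Y + M/Z person-months,
# so overall productivity is M divided by that total.
m, x, y, z = 1200, 60, 40, 120
person_months = m / x + m / y + m / z     # 20 + 30 + 10 = 60
assert abs(m / person_months - overall_productivity(x, y, z)) < 1e-9
```

Note that M cancels out, which is why the answer contains only X, Y, and Z.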

Q5-15 c) Progress management with EVM

EVM (Earned Value Management) is a quantitative project management method based on a
comparison of actual progress versus planned progress.
EVM uses three basic values as follows:
• Planned Value (value of work scheduled) is the budgeted cost of work scheduled during a
given period of time. This is the baseline for cost control which can be established when the
plan is developed.
• Earned value (value of work completed) is the budgeted cost of work actually performed during
a given period of time. It is possible to determine the work progress by comparing this value
against the planned value.
• Actual cost is the total cost actually incurred during a given period of time. Since this is the
cost actually spent to complete the work and is the basis for the earned value, it is possible to
determine cost trends (i.e., increase or decrease) by comparing these two values.
The question asks us to select “a project which can be expected to be completed without cost
overrun and schedule delay.”
• If the earned value is greater than the planned value, the progress is faster than scheduled.
• If the actual cost is smaller than the earned value, the cost is less than the budget.
Chart c) satisfies both of the above conditions. Therefore, the appropriate answer is c).

a) Since the actual cost is higher than the planned value or the earned value, this means that the
budget is overrun.
b) In addition to the condition described above, since the earned value is less than the planned
value, this means that the schedule is delayed.
d) Since the earned value is less than the planned value, and the actual cost is higher than the
earned value, this means the schedule is delayed and the budget is overrun.
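The two comparisons above are conventionally expressed as schedule variance (SV = EV − PV) and cost variance (CV = EV − AC); both being positive corresponds to chart c). A minimal sketch with hypothetical figures:

```python
# EVM variances: SV = EV - PV (schedule), CV = EV - AC (cost).
# Both non-negative means on/ahead of schedule and on/under budget.
def evm_status(pv, ev, ac):
    return {"schedule_variance": ev - pv, "cost_variance": ev - ac}

# Hypothetical figures matching chart c): EV above PV, AC below EV.
status = evm_status(pv=100, ev=110, ac=95)
assert status["schedule_variance"] > 0 and status["cost_variance"] > 0
```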

Q5-16 b) Quality assurance in quality management

Quality assurance refers to all planned and systematic activities that are implemented within the
quality system and verified as required in order to establish sufficient confidence that a certain
target will fulfill quality requirements. Therefore, b) is the appropriate answer.
Note that in ISO 9000 (JIS Q 9000), quality assurance is defined as “part of quality
management focused on providing confidence that quality requirements will be fulfilled,” and
quality control is defined as “part of quality management focused on fulfilling quality
requirements.”

a), c), and d) are explanations of quality control. While quality assurance refers to overall
activities to assure quality, quality control refers to daily activities associated with establishment,
measurement, and correction of quality criteria for products or services. Quality assurance is a set


of activities to establish and operate the quality management system, and monitor and improve
quality management activities.

Q5-17 a) Quality control chart of test processes

It can be seen from the quality control chart of test processes that the number of uncompleted
test items is larger than expected and the number of errors detected is also larger than expected.
It can also be seen that the number of errors detected increases faster than the number of
uncompleted test items decreases. This suggests there is a problem with quality, and the processes
before testing are likely to contain many elementary errors. Therefore, areas that contain large
numbers of errors should be identified, appropriate measures should be implemented, and previous
processes should be reviewed as required. Therefore, a) is the appropriate answer.

b) Since the number of uncompleted test items is larger than expected, the test environment may
have defects or development staff may be insufficient. However, since the number of errors
detected is significantly larger than expected, the processes before testing are likely to contain many
elementary errors. This means there is a quality problem, and b) is not the correct answer.
c) The number of uncompleted test items is larger than expected; this does not mean that the
testing speed is high.
d) When the number of errors detected is larger than expected, this implies two different
scenarios. First, the quality of the test item is high, and errors have been detected efficiently
using a small number of test items (that is, the test has been performed efficiently). Second,
software quality is lower than expected, and a large number of elementary errors have been
found. Even in the first scenario, the process must be reviewed for accuracy, and making
optimistic assessments (that is, there are no particular problems at present) must be avoided. If
the review results show that the test process has been performed successfully, the statement
“progress of error detection must be managed so that an unresolved error will not remain for a
long time” is an appropriate description.

Q5-18 a) Analysis of bug detection data

For post-test quality assurance, the conditions including “the number of test items is above the
reference standard” and “there are no unsolved bugs” are necessary but not sufficient conditions.
Therefore, program quality cannot be determined based only on the above conditions (i.e., the
number of completed test items or unsolved bugs).
Both subsystem A and B satisfy the criteria for the number of test items and have no unsolved
bugs. However, these satisfy only necessary conditions and do not satisfy sufficient conditions.
One sufficient condition is bug convergence. The charts outline the reliability growth curves that
show the relationship between test duration and cumulative number of bugs detected. The chart of
subsystem A indicates such convergence. Therefore, a) is the appropriate answer.

b) Subsystem A has achieved bug convergence. It can be considered to be stable in quality. On
the other hand, the number of bugs in subsystem B continues to increase and has not achieved
bug convergence. It cannot be considered to be stable in quality.
c) The number of bugs detected also varies depending on the quality of the software under
testing. Post-test software quality cannot be determined based solely on the number of bugs
detected.
d) Although subsystem B has no unsolved bugs, the chart does not show bug convergence.
Subsystem B requires additional testing.
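Bug convergence can be judged roughly by checking whether the cumulative bug count has flattened near the end of testing. A minimal sketch, with made-up data and an arbitrary window and threshold:

```python
# Rough convergence check on cumulative bug counts (all data is illustrative):
# if only a few new bugs appear over the last `window` test days, the
# reliability growth curve is flattening.
def has_converged(cumulative_bugs, window=3, threshold=1):
    recent_new = cumulative_bugs[-1] - cumulative_bugs[-1 - window]
    return recent_new <= threshold

subsystem_a = [5, 12, 18, 21, 22, 22, 22]   # flattens out: converging
subsystem_b = [4, 9, 15, 22, 30, 39, 49]    # still rising: not converging
print(has_converged(subsystem_a), has_converged(subsystem_b))  # True False
```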

Q5-19 d) Organization of software development team

A democratic team is composed of approximately ten people. The team leader can be changed
according to the task and consensus-driven decision making is the norm. As team members are on
an equal footing, this enables a free exchange of opinions. However, it is disadvantageous in that
the leader cannot exert strong leadership. The most appropriate answer is d).

a) A hierarchical team has a hierarchical structure containing different levels of responsibility. It
is composed of a leader who is responsible for the overall management of the entire team,
sub-leaders who support the leader, and staff members carrying out the actual development
work. This facilitates effective communication within the team.
b) A specialist team is composed of experts whose specialized skills are required for system
development. In general, the team does not have a manager who is dedicated to project
management.
c) A chief programmer team is composed of a chief programmer who is responsible for decision
making, three or four permanent programmers, and other temporary programmers. This type
of team is suitable for a project of small to medium size with a relatively low level of technical
complexity.

Q5-20 a) Risk management plan for system development project

A risk management plan is prepared according to the following procedure: risk identification,
risk quantification, countermeasure planning, and risk monitoring and control.
In risk identification, every potential risk that may exist in the project is identified, and its
corresponding impact is examined. In risk quantification, the potential loss that the risk can cause
is converted to quantitative information such as time and money. In planning the measures against
risks, countermeasures are determined by prioritizing risk factors based on criteria including
implementation cost, because it is not efficient to implement countermeasures against every risk.
In risk monitoring and control, an action plan is developed to monitor and control risks. Therefore,
a) is the appropriate answer.

b) If countermeasures are not considered until the risk becomes an actual problem, it is not
possible to take prompt response measures. If there are expected risks, the corresponding
countermeasures should be considered in advance. Therefore, b) is inappropriate.
c) Similar to b), it may be too late if a countermeasure is not considered until the risk is a real
problem. Potential risk factors and their corresponding countermeasures should be considered
in advance. In the course of the project, it is also important to implement countermeasures
proactively before project goals become hard to achieve. Therefore, c) is inappropriate.
d) In identifying risk factors, ones with major impact should be given higher priority. Identifying
risk factors with minor impact is also recommended to determine if they need to be managed.

Morning Exam Section 6 Service Management Answers and Explanations

Section 6 Service Management

Q6-1 c) Role of the operations manager

Under fault management, in the event of a failure, the initial response includes actions such as
starting to take records, determining the current situation, determining the extent of impact, and
contacting related parties. We cannot determine whether a failure has occurred just from a
notification from a user saying that “the system has suddenly stopped responding.” Therefore,
confirming the phenomenon by first looking at console messages would be valid. Therefore, c) is
the correct answer.

a) Failures cannot be identified when the notification is initially received. After identifying the
failure by checking console messages and other phenomena, a search for past failure cases
should be performed if required.
b) Related parties should be informed after confirming the phenomenon and the extent of impact.
d) Recovery should be performed after confirming the phenomenon, identifying the failure, and
contacting related parties.

Q6-2 c) Implementation procedure of IT service management

When implementing IT service management, first, “clarify the vision” (C), “understand the
current status” (B), and then “set goals” specifically (F). After this is completed, “investigate
approaches of goal achievement” (E), and then “investigate approaches to understand status of
goal achievement” (D). Furthermore, “investigate continuous improvement approaches” (A), to
constantly go through the PDCA cycle. Therefore, c) is the correct answer.
In order to answer this type of question that involves procedures, an efficient approach is to
focus on certain elements whose contexts are clear, and then select the applicable answer. For
example, if the focus is on “goal,” it is clear that setting goals comes first, investigating how to
achieve these goals, and then investigating approaches to understand status of goal achievement.
The only option where the procedure includes F → E → D is c), and so it is easy to identify the
correct answer.

Q6-3 a) Task of the system operations manager

The tasks of the system operations manager include understanding the problems occurring in
the system that is in operation, investigating the causes of the problems, and proposing
improvements to eliminate the problem.

a) When response times are deteriorating, the operations department responsible for operations
must first investigate the cause. If it is determined that the cause is due to aging of the system
and it is best to renew the system, the system operations manager has the responsibility to
summarize this opinion and submit a proposal. However, there is no guarantee that the cause
will be identified through investigations by the operations department. In a case like this,
related departments such as the system development department may be asked to cooperate.
b) Although this option contains ambiguous terms such as “within budget” and “migration of
equipment,” depending on the type of equipment that is migrated, there is a possibility that this
will cause unexpected effects on the system in use, even if it is within budget. It is not
appropriate for the operations department to solely make decisions regarding migration of
equipment. Although the stance to migrate equipment in order to reduce operating costs is a
positive and commendable one, there are cases where the effects of the target equipment on
the current system need to be considered and proposals also need to be submitted instead of
migrating based on a unilateral decision.
c) Although improving problems, starting from the easiest first, is one guideline, this is not
necessarily the only guideline. The proposed order must be decided after giving consideration
to matters such as urgency, cost-effectiveness, and ease.
d) The scope of proposal responsibilities of the system operations manager is not just for addition
of devices. The scope also includes the renewal of the entire computer system.

Although the description in a) includes some aspects that are uncertain, it is possible to submit
proposals in cases where the system operations manager decides that system renewal is required as
a result of an investigation of the causes. Since the options b) through d) contain the descriptions
that are considered as problem areas, those are all inappropriate.

Q6-4 b) Approach to performing hardware maintenance

This question asks about the approach to hardware maintenance. In the first place, maintenance
management of information equipment does not stop at hardware. It is an extremely important
operation for those who operate the equipment and the goal is to maintain normal operations of the
information equipment. In order to achieve this, continuous management is required: the
installation status, daily utilization, enhancement history, and presence or absence of failures of
the hardware and software within the department must be constantly recorded (documented), and
those records must be available for checking at any time. Periodic maintenance should be carried
out even if no hardware failures are found, and efforts must be made to identify “hidden failures”
that cannot be seen by the users. Therefore, b) is the appropriate approach.

a) As noted previously, maintenance work should not be conducted only when a failure occurs.
c) When the user senses some anomaly that differs from the sense of normal use, maintenance
work should be performed immediately without waiting for the periodic maintenance
schedule.
d) The goal of implementing hardware maintenance work is not only for printers, storage
devices, and other devices that involve mechanical operations. It also applies to semiconductor
parts, such as the CPU and memory, which do not have any mechanical parts.

Q6-5 b) SLM

JIS Q 20000-2:2007 (Service Management — Part 2: Code of practice) is the JIS adaptation of
the ISO/IEC 20000-2 international standard. Regarding service catalogs, it is written in “6.1.1
Service catalog” that a service catalog should define all services. Therefore, b) is inappropriate and
is the correct answer.


Note that while a service catalog defines the goals for services, the formal document that
includes the goal levels for major services agreed upon with the customer is an SLA (Service
Level Agreement). The recording and management of service levels by the service department in
order to achieve the goals defined in the service catalog is known as SLM (Service Level
Management).

a) It is written in “6.1.3 Service level management (SLM) process” that customer satisfaction
should be recognized as being a subjective measurement.
c) “6.1.1 Service catalog” states that the service catalog should be maintained and kept
up-to-date. The service catalog includes information such as the name of the service, targets
(e.g., time to respond), contact points, and service hours.
d) It is written in “6.1.2 Service level agreements (SLAs)” that the SLAs should include only an
appropriate subset of the targets to focus attention on the most important aspects of the
service.

Q6-6 c) Explanation of SLA

An SLA (Service Level Agreement) is an agreement between the user and the provider
regarding service quality. For example, it is common to agree upon service indicators and target
values such as “system availability shall be 99%,” “immediate answer rates for help desk service
shall be 80%,” and “notification by security monitoring services of unauthorized access detection
shall be within 5 minutes.” Therefore, c) is the correct answer.

a) This is an explanation of ITIL (IT Infrastructure Library). ITIL is a collection of best practices
for system operations collected and published by an agency of the British government, now
the OGC (Office of Government Commerce). It is currently used internationally.
b) This is an explanation of SLCP (Software Life Cycle Process). This concept is published in
Japan as a common frame known as SLCP-JCF2007.
d) This is an explanation of the ISO 9000 series. There are corporate certification systems which
are implemented based on this series.
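SLA targets of the kind quoted above can be checked mechanically against measured values. A minimal sketch (the target names and measured figures are made up):

```python
# Comparing measured service levels against agreed SLA targets
# (target names and all numbers below are illustrative).
sla_targets = {"availability_pct": 99.0, "helpdesk_immediate_answer_pct": 80.0}
measured = {"availability_pct": 99.3, "helpdesk_immediate_answer_pct": 78.5}

# A target is violated when the measured value falls below the agreed level.
violations = [name for name, target in sla_targets.items() if measured[name] < target]
print(violations)  # ['helpdesk_immediate_answer_pct']
```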

Q6-7 c) Number of staff required to operate a system

First, calculate the number of work days for one operator.

(Operator work days) = 365 − (holidays in a year) − (annual paid vacations)
                     = 365 − 120 − 20 = 225 (days)

Next, calculate the total number of work days required to operate this system. According to the
figure shown in the question, since three work schedules overlap between 15:00 and 17:00 every
day, three operators must report to work each day. Based on this, perform the calculation shown
below.

(Total work days) = 365 days × 3 = 1,095 days

Finally, divide the total number of work days by the number of operator work days and obtain
the required number of staff.

(Number of required staff) = (total number of work days) ÷ (number of operator work days)
                           = 1,095 days ÷ 225 days ≈ 4.9

In this case, the required result is the total number of staff. Rounding down the calculation
result to four people will not satisfy the total number of work days. Therefore, the result is
rounded up to five, so the correct answer is c).
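The calculation above can be reproduced directly; rounding up is what forces the answer to five:

```python
import math

# Reproducing the staffing calculation (all figures are from the question).
work_days_per_operator = 365 - 120 - 20   # 225 working days per operator per year
total_work_days = 365 * 3                 # 3 operators on duty every day of the year
staff_needed = math.ceil(total_work_days / work_days_per_operator)  # round up
print(work_days_per_operator, total_work_days, staff_needed)  # 225 1095 5
```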

Q6-8 a) Purpose of system migration tests

The migration test is a test performed after the new system has been built to make sure that the
switching (migration) from the old system to the new system is smooth. It is not a test of the new
system itself. This test is performed beforehand to check the system migration process from a
safety and efficiency perspective. Therefore, a) is the most appropriate description.

Although b), c), and d) are correct descriptions related to performance verification and tests
performed before migration tests, they are not the main purposes of performing migration tests.

Q6-9 c) SLA items recommended by ITIL

An SLA (Service Level Agreement) is an agreement between the IT operations department and
the user department with the aim of maintaining and improving service quality. Items that are
required here are service qualities, such as system availability and reliability, which affect how the
user performs business operations. Therefore, c) is the correct answer.
Incidentally, ITIL is a collection of best practices for system operations collected and published
by an agency of the British government, now the OGC (Office of Government Commerce), and
the IT service management international standard ISO/IEC 20000 (JIS Q 20000) is also
established based on this. SLM (Service Level Management) based on SLA is defined in these.

a) Portability helps the development department to increase its development efficiency, but it is
not part of service quality offered to the system user.
b) Development productivity is related to the work efficiency of the development department, but
it is not part of service quality offered to the system user.
d) Maintainability is related to the work efficiency of the operations and maintenance divisions in
the event of a failure, but it is not part of service quality offered to the system user.

Q6-10 b) Purposes of incident management

The word “incident” refers to an accident, unexpected occurrence, or event. In IT services, an
interruption of services or a degradation of quality is referred to as an incident. Specifically, it
refers to cases such as deterioration of response and stagnation of operations because of too much
load. It also includes requests from users for additions and changes to the system. In other
words, the purpose of incident management is to keep service interruptions to a minimum and
maintain service quality. Therefore, b) is the correct answer.
Incident management is one aspect of service support in ITIL (Information Technology
Infrastructure Library). ITIL summarizes a series of items relating to IT service management rules
as guidelines. It also includes processes implemented from a short-term perspective into service
support. These processes include service desk, problem management, configuration management,
change management, and release management.

a) The management of IT resource components is the configuration management aspect of
service support.
c) Implementation of software and hardware is integrating the software or hardware into the
computer so that it can be used. Implementation changes are changes to these aspects and thus
are the purposes of change management.
d) This is related to an inquiry from a user and therefore indicates a purpose of the service desk.

Q6-11 d) Main task of the service desk

The service desk is also known as the help desk. It is the first point of contact within a company
for all inquiries relating to troubles such as malfunctions with computers and peripheral
equipment. Inquiries from users may arrive by phone, e-mail, or fax. After examining the details
of an inquiry, problems that can be resolved immediately are handled on the spot. When a problem
cannot be resolved on the spot, such as one requiring repairs, it is important to contact the related
departments appropriately so that users are not left feeling stressed. Therefore, d) is the correct
answer.

a) This is capacity management in the service delivery aspect of ITIL. In order to maintain IT
service levels, system utilization must be investigated, analyzed, and evaluated.
b) This is a task performed by the department that developed the application in order to improve
quality.
c) There are many cases where interviews to determine the computerization requirements as a
project are conducted by the information systems department and the business management
office in order to plan for the next version of a system. This is not a role of the service desk.
Information collected by the service desk includes computerization requirements that can be
used in the next version of a system, so there are cases where the service desk will provide this
information as required.

Q6-12 b) Program change management

Change management for programs includes important management items for the system from
the stage where the change request is placed. Failure rates in this question are the rates of new
failures that occur when changes are made. The target value for a failure rate can be thought of as
α%, which means that “the failure rate is kept equal to or below α%.”
When a change request is made, evaluation criteria of change implementation results should
also be developed at the requirements definition stage. For example, these include the forecasted
values for the person-hours required for change, and target values for failure rates. These values
are used to verify the validity of the changes and the system at the evaluation stage after
implementing changes, and also used for data for future changes. Therefore, b) is the appropriate
description.

a) Although “deciding on a standard format for change request documentation as the basis for
change requests” is correct, specific target values for failure rates must be decided after
receiving a change request, creating a requirements definition, and clarifying change details.


c) On the occasion of implementing changes, these changes must be implemented at a time when
the changes are required and in a way that does not cause failures. Therefore, deciding on the
timing implementation as implementation criteria for change is correct. In addition, criteria for
determining whether or not changes will be implemented must also be decided along with
implementation procedure. However, the target value for a failure rate (the objective that “the
failure rate is kept equal to or below α%”) is a requirement of the system after changes are
made and is set as evaluation criteria for change implementation results at the requirements
definition stage.
d) Failure rates are not included in change category items.

Q6-13 c) Initial costs of computer systems

Costs for a computer system can largely be divided into initial costs and running costs (i.e.,
operating costs).
Initial costs are the costs incurred during installation and include equipment purchase costs
(excluding lease agreements), software purchase costs, and construction costs.
Running costs are the costs required to maintain and operate the system. This includes lease
costs, labor costs, communication costs, and facility maintenance costs.
Therefore, c) is the correct answer.

Q6-14 b) Explanation of service level management in service delivery under ITIL

ITIL (IT Infrastructure Library) is a collection of best practices of system operations collected
and published by the British government and as of 2008, a 3rd Edition has been published.
At the moment, the 2nd Edition is the edition that is commonly used and consists of the
explanations of the two major processes of service delivery and service support as well as seven
books that explain other management processes. Although service delivery and service support are
both made up of a number of other processes, service delivery contains the five processes of
service level management, financial management for IT services, availability management, IT
service continuity management, and capacity management.
Among these five processes, service level management is defined as a process that manages
(maintains and improves) an agreed-upon service level (the service level concluded in the SLA).
This service level is the result of negotiating with the customer, who uses the service, over the
definitions of the services provided. Therefore, b) is the correct answer.

a) This is a description of financial management for IT services.
c) This is a description of incident management included in service support.
d) This is a description of availability management.

Q6-15 b) ITIL

ITIL (IT Infrastructure Library) is a collection of best practices of IT system operations
collected and published by an agency of the British government, now the OGC (Office of
Government Commerce). ITIL is positioned as a framework for providing IT services that fit the
objectives of the computer system. ITIL consists of a number of books that are a collection of best
practices. The main books in ITIL are the two books of service support and service delivery.
Service support consists of a collection of best practices in six fields: service desk, incident
management, problem management, configuration management, change management, and release
management. Service delivery consists of a collection of best practices in five fields: service level
management, financial management for IT services, capacity management, IT service continuity
management, and availability management.
The process used to quickly respond to any accidents and problems that occur is incident
management. Therefore, b) is the correct answer.

a) IT service continuity management is a process that ensures the continuity of operations in the
event of a disaster.
c) In fault management, the service desk is a point of contact for customers and users, and
receives notifications of failures and responds to those notifications. The service desk monitors
and controls the entire recovery operation by cooperating with incident management when
required.
d) Problem management is a process that investigates the root causes of problems and errors and
then applies permanent measures.

Q6-16 c) Descriptions related to operator management

Having console operations performed by multiple staff members in principle is appropriate
because mutual checking and the prevention of operation mistakes can be expected. Therefore, c)
is the appropriate
answer.

a) Program design specifications contain the detailed specification of the internal processes of a
program and are not directly related to the purpose of achieving accuracy in operations.
b) If nobody is put in charge, responsibility becomes ambiguous leading to an increased risk of
operation mistakes. A person should be put in charge beforehand.
d) Only approved operators should be allowed to enter the room where files are stored and
archived. In the case that important files are stored, keeping records of entry into the room
must be considered.

Q6-17 d) Descriptions regarding system maintenance

MTBF is the mean time between failures, while MTTR is the mean time to repair. In order to
extend the MTBF, occurrence of failures should be prevented. The process performed to prevent
failures is d), preventive maintenance. Tasks such as detecting signs of failure at an early stage
through periodic maintenance as well as error log analysis, and performing maintenance work
before a failure occurs are relevant to preventive maintenance.
Note that measures such as “a) remote maintenance” and “c) distributing maintenance centers”
will reduce travel times (there is no travel time for remote maintenance), which allows repairs to
be started faster, resulting in a shortening in the MTTR.
In addition, “b) incidental maintenance” in an event of a failure is maintenance that is
performed when a failure has already occurred. Therefore, incidental maintenance does not
directly affect MTTR or MTBF.


Q6-18 d) Descriptions regarding data backup

When business operations are carried out while data backup tasks are performed, data that is
updated while data backups are taking place may be omitted from the backup, depending on the
timing of the updates. In addition, since data backup operations place a heavy burden on hard
disks, they greatly affect the performance of operational processes, and conversely, the operational
processes might affect data backup processes. For this reason, there may be cases where data
backups performed during operational processes may not complete properly, and so it is advisable
that “the backup process and business operations must be scheduled so as to not overlap with each
other,” as described in d).

a) In order to minimize the recovery time from backups, it is effective to use only full backups.
However, since backup processes will take time with just full backups, there are cases where
focus is on daily backup times rather than recovery times, and use differential backups which
back up only items that have been changed since the most recent full backup. With this
method, although daily backup process times can be reduced, recovery times are longer than
those from full backups because a recovery from the most recent full backup must be
performed before recovery from a differential backup. There are also incremental backups
which include not the differences from the most recent full backup, but changes from the most
recent backup including backups other than the full backup. When using incremental backups,
although backup times can be reduced because the target of daily backups is even smaller than
differential backups, recovery times are even longer than with differential backups because the
recovery process involves first recovering from the most recent full backup and then
recovering by using the daily incremental backups.
b) Considering the content of a backup, it is not always necessary to use media capable of
random access. In addition, magnetic tape is not a medium capable of random access.
c) When backups are made to a single medium, recovery will not be possible if that medium fails
or is damaged. Backups should always be made to a different storage medium.
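The restore chains described in a) can be sketched as follows (the strategy names and backup-set labels are illustrative):

```python
# Restore chains for the three backup strategies described in a)
# (labels are made up for illustration).
def restore_chain(strategy, days_since_full):
    """Return the ordered list of backup sets needed to restore on a given day."""
    if strategy == "full":
        return ["full"]                          # one set: fastest restore
    if strategy == "differential":
        return ["full", "differential(latest)"]  # full + most recent differential
    if strategy == "incremental":                # full + every daily increment
        return ["full"] + [f"incremental(day {d})" for d in range(1, days_since_full + 1)]
    raise ValueError(strategy)

print(restore_chain("incremental", 3))
# ['full', 'incremental(day 1)', 'incremental(day 2)', 'incremental(day 3)']
```

The longer the chain, the longer the recovery, which is the trade-off against shorter daily backup times.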

Q6-19 a) Operations management during system failures

For example, when a failure occurs in a business system (server side), the extent of clients that
will be affected can be determined if the clients are associated with the business system they use.
Therefore, a) is the correct answer.

b) In order to propose workarounds in the event of a system failure, the cause of the failure must be
determined. The causes of the failure cannot necessarily be determined just by associating
business systems with their clients.
c) In order to identify the level of deterioration of the business processes, information such as
how much the client is being affected and the severity of a system failure is required. It is
difficult to understand the severity of the failure just by associating business systems with their
clients.
d) There is no direct relation between the associating of business systems with their clients and
the accumulation of recovery procedures.
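The association described in a) can be sketched as a simple mapping (the system and client names are made up):

```python
# Determining affected clients from a business-system-to-client mapping
# (all names below are illustrative).
clients_by_system = {
    "order_entry": ["client01", "client02", "client05"],
    "payroll": ["client03"],
}

failed_system = "order_entry"
affected = sorted(clients_by_system.get(failed_system, []))
print(affected)  # ['client01', 'client02', 'client05']
```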


Q6-20 a) Countermeasures against loss of power from an outside source

An emergency power system that can be used in response to a momentary loss of power or a
temporary power outage is a UPS (Uninterruptible Power Supply). In the event of a loss of power,
the UPS provides power from the battery by converting it into an alternating current. In addition, it
is a power unit that also protects against fluctuation in voltage, stabilizes frequency, and protects
against noise from electrical power systems. However, a UPS only provides power from the
battery and does not provide power for prolonged periods of time. It is not a piece of equipment
that provides power to maintain operations, but a unit that provides power so that computers and
peripherals do not crash and can be safely shut down. Therefore, a) is the correct answer.

b) A private power generator is not a countermeasure against loss of power because it takes time
to start up.
c), d) Since there would still be only a single external source of power, duplexing distribution
boards or transformers is not a countermeasure against loss of power from an outside source.

Q6-21 a) Scope of facility management

Facility management of information systems is the appropriate building and maintenance of
facilities, including ancillary facilities such as power supply, air conditioning, and security
equipment, required to operate the information system. The point here is “ancillary facilities for
information systems.” Therefore, the correct answer is a) that describes the monitoring and
improvement of IT-related facilities, which is the same as ancillary facilities for information
systems. The other options are explanations related to information systems.

b) This is an explanation of CAM (Computer Aided Manufacturing), which uses computers to
control factory production lines.
c) This is an explanation related to CRM (Customer Relationship Management), a method where
a company uses information systems to build long-term relationships with customers.
d) This is an explanation of ERP (Enterprise Resource Planning), which is integrated software
used to increase the efficiency of management through integrated management of the entire
company from the perspective of effectively using management resources.

Q6-22 d) Characteristics of system audits

When system audits are conducted in accordance with system auditing standards, “system
management standards” are, in principle, used as a measurement standard for audits by the auditor.
Therefore, when system audits are actually carried out, it is important to check whether the
information system that is the audit target conforms to system management standards. Since it is
also important to reflect the current state of the organization, it is not meant to force compliance
with system management standards for all aspects, but it should be noted that it is “in principle.”
In other words, d) is the correct answer.

a) System auditors who perform internal audits must belong to the audit department or a
department that has internal audit functions such as the general affairs department, president’s
office, or planning department. In addition, such a department must be independent of the

department that is the audit target. However, since internal audits are part of management
activities, there is no need for the auditor to be independent of corporate management.
b) The auditor’s standpoint is to present opinions on findings, improvements, etc. The auditor is
not responsible for the planning, development, operations, or maintenance of the system.
c) System audits are sometimes performed as part of a business operations audit. It is desirable
for the auditor to belong to the audit department. Members on the audit board may also
perform audits.

Q6-23 d) Roles and authority of system auditors

The “system auditing standards” established by the Ministry of Economy, Trade and Industry
define the roles and authority of the system auditor as part of the general standards. They state that
“the system auditor can require the department being audited to submit documents.” System
verification and evaluation are conducted independently from the operations department, such as
the information system department. Therefore, d) is the correct answer.

a) The operations, monitoring, and maintenance of the system are the actual targets of the audit.
If the system auditor performs these tasks, the auditor ends up auditing his own work.
This does not achieve the objective of the audit.
b) Proposing and implementing a computerization strategy in accordance with management
principles is the role of the systems analyst.
c) The building and installation of information systems is the task of the systems integrator, and
as with a), it is also a target of an audit.

Q6-24 d) Auditability of information systems

Auditability refers to the capability of an information system to be audited. If controls are valid
and function appropriately, then it can be verified, both continuously and after the fact, that
reliability, safety, and efficiency are ensured. This is the significance of the auditability of
information systems.
Aspects that constitute the auditability include the existence of controls and audit evidence
(including audit trails). Option d) is the appropriate description because it refers to internal
controls that point to the validity of processes and the existence of controls. It also touches upon
system design and operations that allow for audit and review of these internal controls.

a) The existence of audit evidence alone is insufficient; controls are also required as an aspect
constituting auditability. In addition, the completeness of an audit report is not directly related
to the existence of controls.
b) Although the recognition by the company and the cooperation of the department being audited
are important elements for the smooth implementation of an audit, they are not part of
auditability.
c) Although this is an ability required of the system auditor, it is not part of auditability.

Q6-25 c) System audit procedure

This is a question related to system audit procedures. System audits are implemented after first
proposing an audit plan and then conducted in the following fashion based on the plan:
preliminary audit, main audit, and evaluation and conclusions. Main audits are performed to verify
whether controls are being operated appropriately by investigating and analyzing the audit target
in accordance with the audit objective. First, preparation status for controls against information
system risks brought to light in the preliminary audit is checked using various audit techniques
(check current situation). Next, audit evidence is obtained using audit procedures, and an
evaluation of whether to use that evidence as audit evidence is performed (evaluate the
admissibility of audit evidence). Therefore, c) is the appropriate description.
Although an overview of the procedure is written in “IV. Implementation Criteria” in the
“System Audit Standards” revised in 2004, for more details, refer to “New Edition System Audit
Standards Guide” (2004 Revised Version).

a) The audit is complete after implementing the preliminary audit, main audit, and finally the
evaluation and conclusions. Improvements based on the findings are not included in the audit
and are regarded as an item that is implemented by the audited department.
b) During the evaluation and conclusions, although the documents acquired from the audited
department may be used for creating audit working papers, the audited department does not
conduct any evaluations to reach a final conclusion.
d) In the preliminary audit, although various investigation techniques are used, the most
appropriate techniques in accordance with the audit target and audit objectives should be
selected to understand the current status of the audit target. There is no set method or
procedure.

Q6-26 a) System audit procedures implemented during the preliminary audit

The preliminary audit is performed to understand, as clearly as possible, the current status of the
audit target; for example, whether information system risks of the audit target are appropriately
identified, or that there are appropriate controls in place based on risk assessments. Written
inquiries, interviews, as well as the reading and collecting of materials are audit procedures
commonly used in the preliminary audit. Preliminary audits are indispensable in order to improve
audit accuracy and efficiency.
Option a) is the correct answer because it checks information relating to risk awareness
through a questionnaire, which is applicable as an audit procedure of the preliminary audit.

b) Audit evidence is obtained using audit procedures for the main audit and is not a target of the
preliminary audit. In addition, the summarizing of findings is performed at the stage when the
audit report is created.
c) This is an explanation of on-site investigations that are used during the main audit. Clear
improvement proposals are not yet summarized at the preliminary audit stage.
d) This option is an explanation of the main audit. A statement on auditing procedure is created
based on the results of the preliminary audit.

Q6-27 a) Objectives for the use of “system audit standards”

The preamble of the “System Audit Standards” states that “system audit standards are codes of
conduct for auditors for the purpose of ensuring the quality of system audits and implementing
audits in an efficient and effective manner.” Therefore, a) is the correct answer.

b) Standards that are used as a scale by the system auditor during audits are system management
standards.
c) System audit standards can be used both for audits whose objective is to provide assurance
for information systems (guarantee type audit) and for audits whose objective is to provide
advice for improvements to the information system (advice type audit).
d) System audit standards can be used in system audits where the audits are performed by an
auditor outside of the organization, and also in system audits implemented by an internal audit
department in the organization.

Q6-28 c) System audit techniques

The explanation in the question is a characteristic of the snapshot method, and therefore, c) is
the correct answer.

The characteristics of each system audit technique are as follows:


a) ITF (Integrated Test Facility) method: It is a method that checks the correctness of processes
by creating and operating a dummy account for audits inside an audit target file.
b) Code comparison method: It is a method that checks for changes to the program and
unapproved changes by comparing the audit target program with a verified program for audit
purposes on a line by line basis.
d) Tracing method: It is a method that checks the correctness of processing procedure of the audit
target program by tracing instructions of the programming language executed in a certain
transaction process and displaying the code addresses that are passed through.

Q6-29 c) Audit trail

An audit trail is a mechanism for tracing and confirming events related to information systems
from their occurrence to a final result. It is used to verify that each control ensures the safety,
reliability, and efficiency of the information system. When obtaining an audit trail, the following
points must be kept in mind: consideration of efficiency and cost efficiency, feasibility of
observation, tracking of the approval process, and the validity of storage periods and methods.
Therefore, c) is the appropriate description.

a) Access logs are used to determine whether a person has access permissions and are an example
of an audit trail related to the control of safety.
b) In order to obtain an audit trail, the mechanism must be in place from the beginning. It is not
easily derived from interrelationships.
d) This is a description of audit working papers, not of an audit trail.

Q6-30 d) Audit evidence in system audits

Audit evidence refers to the “facts required to prove audit opinions.” Therefore, performing a
system audit is the same as collecting audit evidence.

The system operation records obtained from the audited department are collected during the
implementation of the audit. They can be thought of as part of the facts required to be reflected in
audit opinions. Therefore, d) is the correct answer.
Options a), b), and c) cannot be seen as facts that support audit opinions.

Q6-31 a) Audit working papers

An audit working paper is a summary of audit procedures implemented by the system auditor
and their results, and is a supporting document for the audit results. Therefore, a) is the correct
answer.

b) A declaration by the auditor has no relationship to audit working papers.


c) Although the guidelines used in the audit can be used as reference material, they are not audit
working papers.
d) Materials used as the basis of a decision are the audit evidence. Although audit evidence is
often included as part of audit working papers, it cannot by itself be called audit working
papers because it does not include records of the results of the audit procedures.

Q6-32 d) Internal control

Internal control is a management process that sets standards and procedures to be applied within
an organization to achieve its objectives as well as to appropriately operate these procedures. As a
result, corporate management is able to obtain reasonable assurance regarding the achievement of
the organization’s objectives. Therefore, d) is the appropriate answer.

a) An internal auditor investigates and evaluates the preparation and operations of internal
control. He then provides advice and suggestions. However, he is usually not responsible for
supervision.
b) The level of a risk is evaluated based on both occurrence frequency and level of impact. The
higher the occurrence frequency and the larger the impact, the higher the evaluation level
of the risk.
c) A business operations department evaluating its own operations is referred to as daily
monitoring. Independent monitoring means that an internal auditor provides evaluations from
a perspective that is independent from daily operations.

Q6-33 b) General control in IT

IT related controls in the “Internal Control Over Financial Reporting” established by the
Financial Services Agency were published in February of 2007 by the Business Accounting
Council of the Financial Services Agency (FSA). These controls are defined in the “Standards and
Practice Standards for Management Assessment and Audit concerning Internal Control Over
Financial Reporting” (hereinafter, FSA Implementation Criteria) that has been applied since April
of 2008. In this question, IT related controls are divided into general controls for IT and business
process controls for IT. General controls for IT are control activities to guarantee a valid functional
environment for business process control and in general refers to policies and procedures related
to multiple business process controls. Business process controls for IT are internal controls for IT
that are integrated into business processes to ensure the correct processing and recording of all
approved operations in the operations management system. With this in mind, specific examples
for each of these controls are shown in the chart below.

Specific examples of IT general controls:
• Management related to system development and maintenance
• System operations and management
• Ensuring system safety through managing access from inside and outside
• Management of contracts regarding outsourcing

Specific examples of IT business process controls:
• Controls to ensure the integrity, accuracy, and validity of input information
• Correction and reprocessing of exception handling (errors)
• Maintaining and managing master data
• Authenticating system usage, managing access by restricting operational scope

Therefore, b) is the correct answer.


General controls for IT and business process controls for IT are important concepts in system
audits. At the very least, we should understand the specific examples defined in the FSA
Implementation Criteria. Regarding IT controls, the Ministry of Economy, Trade and Industry has
also released a publication based on the FSA Implementation Criteria. The “System Management
Standard - Supplementary Edition (Guidance for IT Controls over Financial Reporting)” that
organizes in detail the correspondence relationship between system management standards and
“Responses to IT” was published at the end of March, 2007.

142
Morning Exam Section 7 System Strategy Answers and Explanations

Section 7 System Strategy

Q7-1 d) Description concerning information strategy

The question relates to a general description of information strategy. In general, the information
strategy and computerization plan are developed according to the following procedures:
(1) Define the business strategy issues (business policy, business objectives, and management
issues).
(2) Develop the computerization strategy (computerization policy and issues of computerization).
(3) Develop a medium- and long-term computerization plan (target business operations,
investment effect, schedule, structure, and development environment).
(4) Prepare a development plan for each business system.
In general, the information strategy is developed in steps (1) through (3), and a development
plan for each system is prepared separately from the information strategy. In other words,
when the overall information strategy plan is prepared, the details of each system are not
examined. Therefore, d) is the appropriate answer.

a) The description says that the evaluation of the current information system is not useful.
However, in the planning phase, the investment effect analysis of each business or task is
essential.
b) The description says that the information infrastructure preparation policy should be defined
after the information strategy is defined. However, it must be defined in the information
strategy.
c) The description pertains to a). Effectiveness must be validated in both pre-evaluation during
the strategy preparation and post-evaluation after the system is in operation. Therefore, it is
not an evaluation based only on the outcome.

Q7-2 b) Considerations in developing a company-level business operations model

A company-level business model represents the basic structure of the activities of the entire
company, and also expresses the ideal way that the company should conduct its business. Since
this model is a logical model showing how business management should be carried out, it must
include work-related activities as well as management-related activities such as decision making
and business planning. Therefore, b) is the correct answer.

a) A data class is a logical collection of data required for business operations from a user
viewpoint. It is a necessary part of the business operations model, but a detailed description is
not required for a company-level business operations model. Only a general outline of
activities for each business operations model is sufficient.
c) The business operations model represents the ideal way that the company should conduct its
business, and does not reflect the current process as it is.

143
Morning Exam Section 7 System Strategy Answers and Explanations

d) Duplicate data in a data class should be eliminated as much as possible to build an ideal
information model.

Q7-3 c) Improvement index in supply chain management

Supply Chain Management (SCM) is a method in which all the links are considered in the flow
of the organization from manufacturing to distribution, and product information is shared among
organizations for overall business optimization. An effective SCM provides the right product at the
right time while unnecessary inventory is reduced. The rate of decrease in dead stock is one of the
improvement indexes or indicators of SCM. Therefore, c) is the correct answer.

Q7-4 a) Definition of a business operations model during the total optimization planning phase

“System Management Standards” is the practical guideline used by an organization to plan an
effective information system in accordance with its business strategy, and based on such a strategy,
to appropriately prepare and implement controls for effective investment in the information system
during its lifecycle (system planning, development, operations, and maintenance) as well as for
risk reduction.
One of the policies/objectives for the total optimization of System Management Standards is “to
identify an ideal information system for the entire organization.” The business operations model is
defined during total optimization planning for the purpose of organizing the relationship between
business operations and utilized information across the entire enterprise and clarifying an overall
picture of the information system. Therefore, a) is the correct answer.

b) According to System Management Standards, the scale and cost of system development are
estimated in the development planning phase.
c) According to System Management Standards, the hardware, software, and network are
selected in the procurement phase.
d) Although the expression “confirm the operational procedure” is not used in System
Management Standards, this activity corresponds to the analysis phase of planning operations.

Q7-5 c) Explanation of EA (Enterprise Architecture)

Enterprise Architecture (EA) is a comprehensive and strict approach to describe and analyze the
structure and functions of “processes, information system, and personnel and other departments”
in an organization such as an enterprise or a government agency. EA provides a guideline for total
optimization so that the organization can function according to strategic objectives. Therefore, c)
is the correct answer.

The organization structure used in EA includes four key architectures as follows:


1) Business architecture including business objectives and processes
2) Data architecture representing the information (in the system) used in each business operations
or system and the relationship between the various data
3) Application architecture representing the type of information system optimized for the
business operations
4) Technology architecture representing various technology components and security
infrastructure used for the actual system construction

a) This is a description about the system analysis and design technique based on UML (Unified
Modeling Language).
b) This is a description about data modeling based on E-R (Entity-Relationship) diagram.
d) This is a description about the design technique based on DFD (Data Flow Diagram).

Q7-6 d) KPI and KGI

When an enterprise carries out business processes to achieve business objectives and strategies,
the processes must be monitored and evaluated in an appropriate manner. KPI (Key Performance
Indicator) and KGI (Key Goal Indicator) are indicators to monitor and evaluate business
processes. KGI is an indicator that provides a specific goal to be achieved, such as a target sales
amount and gross profit ratio. The KGI value is obtained only when the business process is
completed; it therefore cannot show, while the process is still running, whether the goals are on
track to be met. To solve this problem, KPI is set up as an indicator to measure the status of the
business process in progress. Examples of KPI are the number of new customers and the number
of new membership cards. Here, let us examine the options one by one.

a) and b) The KPI and KGI selected here mix measures for existing and new customers, which
have no relationship to each other. Therefore, a) and b) are inappropriate.
c) KGI is an indicator of a specific goal to be achieved when the business process is completed,
and the number of visits to new customers is inappropriate. Examples of appropriate KGI are
sales amount and gross profit margin.
d) KPI is the number of visits to new customers which can be measured for an ongoing business
process. KGI is the sales from new customers and is appropriate as the goal to be achieved by
the end of the business process. Therefore, d) is the appropriate answer.

Q7-7 d) PDCA cycle

Repetition of the PDCA management cycle results in successful business operations. In the
PDCA cycle, four steps are repeated in the following sequence: Plan (develop business plans), Do
(carry out business plans), Check (evaluate the outcome), and Act (implement improvement
measures). Therefore, d) is the correct answer.

Q7-8 c) Explanation of contact management

SFA (Sales Force Automation) is an information system or a technique to improve the efficiency
of a corporate sales department by using information communication technology such as personal
computers and the Internet.
Contact management is a process of organizing information, such as customer needs and the
details of the negotiations with business partners, and managing them in a database. Its objective is
to store detailed information about each customer and to provide the best service possible for each
customer. Although customer information has traditionally been managed by each sales
representative, by managing and sharing this information among the entire staff in an integrated
fashion, it can be applied to post-sales support, new product promotion, and marketing analysis.
Therefore, c) is the correct answer.

Q7-9 c) IaaS

IaaS (Infrastructure as a Service) is a service which provides customers, over the Internet, with
the system infrastructure, such as computers and a network, that is necessary for the construction
and operation of information systems. Therefore, c) is the appropriate answer.

a) This is a description of ASP (Application Service Provider).


b) This is a description of PaaS (Platform as a Service).
d) This is a description of SFA (Sales Force Automation).

Q7-10 b) SOA

SOA (Service Oriented Architecture) is one of the methods used for system development. SOA
provides business processing functions as a software service over the network. Since SOA is not
constrained by specific software or operating environments, it is desirable that the service
interface be defined based on a standardized technology specification. Therefore, b) is the
appropriate answer.

a) When an SOA-based system is developed, system functions are often implemented by
combining existing services, so development can generally be completed in a short time.
c) A Web service interface is a de-facto standard used as an interface in SOA-based system
development.
d) In SOA, the system is developed by combining services such as payment processing and
inventory inquiry. However, the size of a service is not specifically defined.

Q7-11 c) BPO

BPO (Business Process Outsourcing) is a type of outsourcing in which indirect business
processes (such as general affairs and accounting) and professional services (such as call centers)
are commissioned to external vendors. One of the key criteria for selecting a vendor is low
operation cost. Therefore, c) is the appropriate answer.

a) In BPO, the company selects a vendor that can provide lower operation costs. Many projects
are outsourced to China and to Southeast Asian countries for lower cost.
b) By using a network to facilitate communication, it is not a problem if the outsourcing
company and the vendor are geographically separated.
d) The type of contract to outsource information system-related processes such as system
development and system operations is called IT outsourcing. IT outsourcing can be regarded
as a type of BPO, but is not appropriate as the definition of BPO.

Q7-12 d) Purposes of modeling business operations

The overall plan of an information system which is performed prior to individual development
planning is an information system development plan from a company-wide and long-term
viewpoint synchronized with the business strategy. It is also called a medium- and long-term
information computerization plan.
In overall planning, it is necessary to draw an overall picture of the ideal information system
based on the ideal form of the business and then develop a schedule by breaking it down into
specific items of information subsystem development. These activities are performed based on the
business operations model. A business operations model is constructed by analyzing business
activities from a company-wide viewpoint, defining business operations to be performed,
clarifying the relationships between such operations, and structuring corporate activities, the
information necessary for such activities, and how to maintain such information. It is a logical
model that represents the ideal form of corporate management. The purpose of the business
operations model is to understand the relationship between business challenges and related
business operations, and to maintain consistency between business operations and information
subsystems. In order to achieve such goals, it is important to implement a top-down approach
that is not constrained by the current business implementation. Therefore, d) is the appropriate answer.

a) A business operations model is used for structuring and organizing the ideal form of corporate
management, not for allocating responsibility. The allocation is one of the methods for
achieving the goals, which should be performed after the overall planning is complete.
b) During overall planning, the business operations model does not require a detailed analysis of
the current business status.
c) Identifying the issues of the current business status is not the purpose of developing a business
operations model.

Q7-13 a) Role of user department in system development

The user department is often involved in system development projects. The question asks what
role the user department staff should play in the project. In a computerization project, the user
department must take a primary role in defining the business requirements based on its business
experience. Therefore, a) is the appropriate answer.

b) In quality management during the design phase, system functions are checked against business
requirements. Quality management should be performed by a specialist with extensive
technical expertise. Therefore, this is not an appropriate answer.
c) In general, unit tests are performed by the programmer who created the program. It is not an
activity to be performed by the user department.
d) Internal design includes the detailed design of a system based on the physical properties of a
computer system. Defining internal design is not an activity for the user department.

Q7-14 a) Points to consider in an interview

The question is about interview techniques during system analysis. The interviewee is
considered to be a staff member involved directly with the actual business processes that are the
target of computerization.
For successful system analysis, it is important to distinguish whether the interviewee is talking
about facts or speculations. Therefore, a) is the appropriate answer. When the interviewer cannot
determine if they are facts or speculations, he can ask follow-up questions to derive a specific
quantitative value which can back up the answer.

b) The interviewee is not necessarily a staff member directly involved with the actual business
processes. When the question is outside the scope of business processes or requires higher
level knowledge, the interviewee can ask his supervisor or another department that can
provide an appropriate answer.
c) It is recommended that a list of questions be provided before the actual interview. In this way,
the interviewee can spend his time more efficiently and can be well-prepared for the interview,
which can lead to a successful analysis.
d) Besides questions which can be answered with a “yes” or “no” response, other types of
questions should be included to cover a wide range of opinions.

Q7-15 c) RFP

RFP (Request For Proposal) is a request by a company for specific proposals from candidate
vendors of IT services. Therefore, c) is the appropriate answer.

a) This is an explanation of ITIL (Information Technology Infrastructure Library).


b) This is an explanation of IT governance. In general, “governance” means implementing rules
and control. IT governance is the ability of organizations to actually utilize IT investment to
achieve business objectives.

d) This is an explanation of SLA (Service Level Agreement).

Q7-16 b) Parties involved in RFP

The RFP (Request For Proposal) is a document to request a proposal. It is prepared by the
information systems department requesting vendors to submit their proposals. The RFP contains
an outline of the system to be installed, requested proposal items, assurance requirements,
procurement conditions, and deliverables. Based on the RFP, the vendor creates a proposal
specifying the system overview, system configuration, development method, cost, and
development period. Therefore, b) is the appropriate answer.

a) The CIO (Chief Information Officer) is the executive in charge of the information systems.
One of the documents submitted by the information systems department to the CIO is a draft
proposal for approval of system development costs.
c) The documents submitted by the information systems department to the user department
include a system operation manual and a troubleshooting manual.
d) The documents submitted by the vendor to the CIO include proposals and estimates.

Morning Exam Section 8 Business Strategy Answers and Explanations

Section 8 Business Strategy

Q8-1 c) Core competence management

Core competency management is a management strategy that identifies the “corporate strength”
that is unique to each company and attempts to manage the company based on this strength. This
strength is called the “core competency.” A core competency is a collection of unique technologies
and skills within the company, which other companies are unable to imitate, and which provides
benefits to customers. Therefore, c) is the appropriate description.

a) This option is an explanation of M&A (Mergers and Acquisitions).
b) This option is an explanation of a synergy effect.
d) This option is an explanation of benchmarking as a management technique.

Q8-2 b) M&A

Mergers and Acquisitions refer to a “merger with, or acquisition of, a company.” It is a corporate
strategy of acquiring the management resources of other companies to strengthen management
foundations or to compensate for the company’s own weaknesses. As a corporate strategy, it is used
for entering new fields, or for expanding or reorganizing the business. A merger is when both
parties agree to a contract to merge, and an acquisition is when a company is bought. Therefore, b)
is the correct answer.

a) This option uses terms such as proprietary know-how and technology, and is an explanation of
management strategy related to knowledge management.
c) This option is an explanation of strategy planning based on analyzing market growth rate and
market share. This technique is known as PPM (Product Portfolio Management).
d) This option is an explanation of benchmarking as a business process improvement technique
that uses best practices.

Q8-3 d) Purposes of placing importance on stakeholders

A stakeholder is a term that is used to refer to all parties with an interest in the company,
including shareholders and employees. The idea that increasing the satisfaction of stakeholders is
essential for the continued development of the company has recently become a mainstream way of
thinking. Therefore, d) is the correct answer.

a) This option is an explanation of compliance management.


b) This option is an explanation of core competency management.
c) This option is an explanation of management governance.


Q8-4 a) Analysis techniques used for planning management strategies

SWOT is a term that takes the first letters of the words for the company’s strengths,
weaknesses, opportunities, and threats. It is a method that tries to maintain the company’s
strengths and overcome its weaknesses by analyzing these four elements. Therefore, a) is the
appropriate description.

b) Business success factor analysis is also known as CSF (Critical Success Factor) analysis, and
is a technique for clarifying the requirements for a company’s success such as its
differentiation and establishing of competitive advantage. Specifically, success factors are
identified using history analysis (a technique that clarifies the historical success factors from
the company’s founding to the present) and SWOT analysis. It is not an analysis technique
based on financial statements.
c) The first half of this option is correct in that market analysis is a technique to clarify the
positioning of a company’s product or service in the market and customers’ evaluation of the
same. However, market analysis consists of such things as demand analysis, trade area
analysis, competitive analysis, and distribution analysis. It includes neither complaint analysis
nor failure analysis.
d) Product mix analysis is used to build an optimal product line by analyzing the
competitiveness, profitability, and growth potential of a group of products or an individual
product. This analysis is not used to determine prices, but its aim is to meet market needs,
optimally use management resources, or maintain growth.

Q8-5 a) Explanation of management principles

Management principles present the fundamental concepts for company operations. Without
management principles, it becomes unclear how the company should move forward and on what
basis it should make its decisions. In addition, it may become impossible to figure out the
purposes of the company or the employees. Management principles clarify the goals, purposes,
and values of the company. Therefore, a) is the correct answer.

b) Competitive advantage is superiority over other competitors in acquiring customers in the
market. In addition, resources for a company include human resources, material resources,
financial resources, and information resources. Some companies also consider time and
branding as resources.
c) This option is an explanation of a management plan. Although there are no clear definitions
regarding time frames, typically, long-term plans are more than three to five years, mid-term
plans are less than three to five years, and short-term plans are less than one year.
d) This option is related to corporate culture. Although in schools there are school cultures that
are similar to corporate cultures, students attending the school change every three or four years
and the culture changes with time. However, in the case of a company, once employees are
hired, they can stay in the company for as long as 40 years. A corporate culture established
over many years is not likely to change in a short period of time.

Q8-6 c) Differentiation strategy

Differentiation strategy is a strategy to compete at an advantageous level by differentiating the
company’s products from those of other companies and promoting the product’s uniqueness.
Typical elements used to differentiate products from those of other companies are performance,
function, and design. There are also cases where companies can gain an advantage by
differentiating customer service or brand imaging. Therefore, c) is the appropriate answer. In a
differentiation strategy, it is important to provide the customer with the differentiated value and to
ensure that the value is not imitated by others.

a) A strategy to gain an advantage by differentiating from other companies in cost is known as a
cost leadership strategy.
b) A strategy to specialize in a specific area by differentiating the product selection from other
companies is known as a focus strategy.
d) Trying to create uniqueness by drastically improving performance or functions of a product
over other similar products results in the product or service costing more. When the cost of the
product is high, elements such as high performance and high functionality do not appear as
unique aspects to customers with low buying power. In this case, promoting the uniqueness of
the product does not necessarily result in gaining the support of many customers.

Q8-7 c) Explanation of marketing mix

A purpose of a company is to return the profit gained by providing consumers with some sort of
product or service to the company, its stakeholders, and its employees, and to continue its
business activities. The act of continuing and maintaining the corporate organization
indefinitely is known as “going concern.” However, there may be cases where activities are
expanded, reduced or even cancelled (bankruptcy or discontinuance) depending on the sales of the
product. In order to realize “going concern,” the deployment method for marketing (business
activities ranging from production to sale) is extremely important. Marketing mix is gaining
attention as a technique to fulfill the needs of the market.
The marketing mix is a keyword that collectively represents the various sales activities deployed for product
sales such as brand strategy or service, product shipping, inventory management, and advertising.
It is thought that effective marketing deployment can be realized by thoroughly considering a sales
strategy after also considering all of these points when deploying a product. The marketing mix
should be appropriately considered from various levels such as through the chart below which
shows the consumer’s perspective (4Cs), and the supplier’s perspective (4Ps). Therefore, c) is the
correct answer.


The 4Cs and 4Ps of a marketing mix:

Element        4Cs (Consumer’s perspective)                     4Ps (Supplier’s perspective)
Product        Customer solution (value to the customer)        Product (quality and appeal of the product)
Price          Customer cost (cost incurred by the customer)    Price (product price)
Distribution   Convenience (convenience of purchase method)     Place (sales method)
Promotion      Communication (information transmission)         Promotion (advertising)

a) This option is a process model for consumer behavior. It is a psychological process model
known as the AIDMA model that explains consumer psychology from first being aware of a
product to purchase in five levels: Attention, Interest, Desire, Memory, and Action.
b) Market segmentation is the dividing of the market by grouping consumers who demand
diversified products or marketing mixes. In order to do this, elements, such as geographic
variables, demographic variables, psychographic variables, and behavioral variables, are used.
This technique is utilized in target marketing, a marketing process that sets a target market and
applies the company’s marketing mix to that target.
d) This option is a product life cycle model. This model explains that life cycle of a product and
the entire flow of product sales and changes in profit, from when it first appears on the market
until it gradually declines in sales and disappears, by using four levels: introduction, growth,
maturity, and decline.

Q8-8 d) RFM analysis

RFM analysis is a traditionally used method among the various database marketing data
analysis methods. An RFM analysis evaluates customers based on three axes to obtain customer
trends and customer composition ratio by segment. The “R” stands for Recency (last purchase
date), “F” for Frequency (frequency of purchases), and “M” for Monetary (accumulated purchase
amounts). Therefore, d) is the correct answer.
By using RFM analysis, we can clearly understand “Who uses the service frequently (prime
customers)?”, “Who used to use the service frequently but hasn’t recently?” and “Who are the new
customers?”
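As an illustration, the three raw RFM axes can be computed directly from a purchase history. The customer names, dates, and amounts below are hypothetical, and a full RFM analysis would additionally bin these raw values into ranked segments:

```python
from datetime import date

# Hypothetical purchase history: customer -> list of (purchase date, amount).
history = {
    "alice": [(date(2024, 1, 5), 30), (date(2024, 3, 1), 45), (date(2024, 3, 20), 25)],
    "bob":   [(date(2023, 6, 10), 200)],
}

def rfm(purchases, today):
    """Return the raw (Recency, Frequency, Monetary) values for one customer."""
    recency = (today - max(d for d, _ in purchases)).days  # days since last purchase
    frequency = len(purchases)                             # number of purchases
    monetary = sum(a for _, a in purchases)                # accumulated purchase amount
    return recency, frequency, monetary

scores = {name: rfm(p, date(2024, 4, 1)) for name, p in history.items()}
print(scores["alice"])  # (12, 3, 100): a recent, frequent customer
print(scores["bob"])    # (296, 1, 200): used the service once, long ago
```

Segmenting on these axes makes the question’s examples concrete: low recency with high frequency marks a prime customer, while high recency marks a customer who has not used the service recently.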

Q8-9 d) CSF analysis

CSF refers to “Critical Success Factor” and includes the elements that are decisive and critical to
achieving goals. The technique for analyzing what these elements are is CSF analysis. Effective
business strategies can be planned by thoroughly analyzing CSFs before taking actual actions.
Therefore, d) is the correct answer.


a) This option is an explanation related to best practice analysis.
b) This option is an explanation related to SWOT analysis.
c) This option is an explanation related to ABC analysis.

Q8-10 b) Explanation of benchmarking

Benchmarking is the process of setting improvement goals to drive management innovation by
investigating other companies, selecting a company that has the highest success level, and using
the best practices of that company as a reference. Therefore, b) is the correct explanation. The term
“bench mark” was originally used for measuring things. Although the word “benchmark” is used
for describing performance comparison standards for computers, in management terminology, an
“ing” is added to the word benchmark to differentiate the term.

a) This option is an explanation of concentrated injection of management resources through core
competency.
c) This option is an explanation of M&A (Mergers & Acquisitions).
d) This option is an explanation of PPM (Product Portfolio Management).

Q8-11 b) Purpose of balanced score cards

A balanced score card (BSC) is a typical corporate activity analysis and strategy management
planning technique. From the perspective that there are limits to performance management
methods that emphasize financial indicators, there are four perspectives used as evaluation
criteria: financial, customer, business process, and learning and growth. It is a technique that
places emphasis on the fact that it is difficult to quantify customer satisfaction and employee
morale. Actions and goals that individuals and departments need to perform in order to achieve
strategic goals set by the company are defined from these viewpoints. Conditions are constantly
checked using the PDCA cycle, and as a result the strategy is corrected in order to achieve the set
goal. Therefore, b) is the correct answer.

a) This option is a description of an arrow diagram.
c) This option is a description related to game theory.
d) This option is a description related to LP (Linear Programming).

Q8-12 c) Explanation of ERP

ERP (Enterprise Resource Planning) is a technique or concept for planning and managing
business operations that consolidates the operations of the entire company with the aim of
optimal distribution of management resources. Therefore, c) is the correct answer.

a) This option is an explanation of SFA (Sales Force Automation). This improves sales
productivity and creates a customer oriented corporate management system by installing
information systems in the sales organization.


b) This option is an explanation of RSS (Retail Support System). It is a technique or concept
where manufacturers or wholesalers support the retail stores under them.
d) This option is an explanation of EC (Electronic Commerce). Electronic commerce is a
technique or concept of conducting commerce using electronic networks such as the Internet.

Q8-13 a) Explanation of CRM

CRM (Customer Relationship Management) is a corporate strategy that aims to improve
business performance and customer satisfaction through managing and sharing individual
customer information by using information systems. Therefore, a) is the correct answer. For
example, we can respond to customer needs in more detail by analyzing customer consumption
trends and preferences based on a customer database such as the past order history. This leads to
an increase in customer satisfaction and convenience which will turn the customer into a repeat
customer and increase the rate of return.

b) This option is an explanation of SCM (Supply Chain Management).


c) This option is an explanation of CIM (Computer Integrated Manufacturing).
d) This option is an explanation of CSF (Critical Success Factor). The analysis of CSF is one
technique for planning management strategies.

Q8-14 a) Explanation of the “Valley of Death”

The “Valley of Death” is an issue in Japanese technology management, where research and
development does not result in a final product or business because of a lack of funds. Option a)
describes a situation where value capture is not possible because no funds were invested in the
research and development that connects fundamental research to product development, and as a
result, the basic research did not produce any product. Therefore, a) is the correct answer.

b) This option is an explanation of the “Darwinian Sea” where fierce competition makes the
survival of the product difficult.
c) This option is an explanation of issues that occur in relation to the setting of a target (target
market). For example, if a depression or deflation worsens while permeating a high-value
added product to a middle-class target, customers who were oriented towards high added value
become oriented towards lower prices resulting in fewer sales.
d) This option is an explanation of issues that occur on the occasion of commoditizing a product.
Commoditization is the popularization of high-value added products. There is high
competitive superiority when a new product is developed and sold on the assumption of no
other similar products and low price as well as high performance and usability. However,
when other companies start selling similar products, what was at first a unique product
becomes just another regular product.

Q8-15 c) TLO law

The TLO (Technology Licensing Organization) law is also known formally as “the Act on the
Promotion of Technology Transfer from Universities to Private Business Operators.” The law has
been in effect since August of 1998.
A TLO is a university, a technical college, or an inter-university research institute that performs
specified university technology transfer operations as specified in the law. The primary role of the
organization is to support the creation of new business in a company by patenting the research
results, such as technologies developed by the universities and research organizations, and
licensing them to the company. Some of the profits are returned to the university as royalties and
are applied to new funding for research. In other words, a TLO becomes the middleman between
industry and academia, and helps to promote this cycle. Therefore, c) is the appropriate role.
There are two types of TLO: an approved TLO and a certified TLO. An approved TLO has the
characteristic of being able to handle patents held by individuals such as university professors.
Business plans for the TLO are approved by the Minister of Education, Culture, Sports, Science
and Technology and the Minister of Economy, Trade and Industry under the TLO law. A certified
TLO has the characteristic of being able to handle government patents held by national
universities and national testing and research facilities. These TLOs are certified by the ministry or
agency that oversees each research facility.
a), b), and d) have no direct relationship to the main purpose of the TLO law.

Q8-16 a) Control of optimum production volume

This question assumes that a networked distribution system has been built between the
manufacturer and the chain stores, and that streamlining of operations is currently taking place.
The question asks about information that is obtained in the chain stores and is important to the
manufacturer.
Planogram information is information that shows “how many of what products to put on which
shelf.” This is typically information that is created for the chain stores by the chain headquarters or
wholesaler. Storeroom information is information that can be obtained by the manufacturer as
well.
Point of sales information is data from the POS system that is used to analyze “who” bought
“how many of what, and when.”
Since the manufacturer uses this information to decide on an optimum production volume, it
needs information (point of sales information) on how well the product is selling. However, even
when the product is selling well, if there is a lot of inventory, there is a possibility that the chain
stores will not place an immediate order, and increasing production immediately runs the risk of
overproduction. Therefore, inventory information and POS information are the two items of

156
Morning Exam Section 8 Business Strategy Answers and Explanations

information that are required, and a) is the appropriate answer.
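The decision logic above can be sketched as a toy calculation. The figures are hypothetical, and the forecast is deliberately naive (next period’s demand is assumed to equal this period’s POS sales):

```python
def planned_production(pos_units_sold, chain_inventory):
    """Plan the manufacturer's production volume from chain store data.

    Expected demand comes from point-of-sales information; existing store
    inventory absorbs that demand first, so only the shortfall is produced.
    """
    expected_demand = pos_units_sold  # naive forecast based on POS data
    return max(expected_demand - chain_inventory, 0)

print(planned_production(120, 30))   # selling well, low stock: produce 90
print(planned_production(120, 200))  # selling well, ample stock: produce 0
```

The second call illustrates the risk noted in the explanation: even when a product sells well, a large inventory means increasing production immediately would cause overproduction.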

Q8-17 d) MRP calculation procedure

MRP (Material Requirements Planning) obtains gross material requirements using a parts list (D)
and calculates the net requirements by referencing the inventory status (C). Furthermore, the
order volume is decided based on the order placement policy (B), which is a condition unique to
production orders by that company. Actual ordering of materials is done after considering the
standard schedule (A), such as the lead time from production completion date to delivery.
Therefore, d) is the appropriate combination.
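The four steps (D) through (A) can be sketched as a single-item calculation. The quantities, lot size, and lead time below are hypothetical:

```python
import math
from datetime import date, timedelta

def mrp_order(product_qty, parts_per_unit, on_hand, lot_size, lead_days, need_date):
    """One-level MRP: gross -> net -> lot-sized order -> offset order date."""
    gross = product_qty * parts_per_unit                # (D) parts list
    net = max(gross - on_hand, 0)                       # (C) inventory status
    order_qty = math.ceil(net / lot_size) * lot_size    # (B) order placement policy
    order_date = need_date - timedelta(days=lead_days)  # (A) standard schedule
    return order_qty, order_date

qty, when = mrp_order(product_qty=100, parts_per_unit=4, on_hand=50,
                      lot_size=100, lead_days=14, need_date=date(2024, 7, 1))
print(qty, when)  # 400 2024-06-17
```

Here 400 gross parts minus 50 on hand gives a net requirement of 350, rounded up to the 100-unit lot size, and the order is placed 14 days of lead time before the need date.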

Q8-18 c) Taking advantage of the cell production system

The cell production system is a production method that appeared for the purpose of
compensating for the shortcomings of the traditional line production system.
The line production system is a production method for producing large quantities of a single
product in a certain period of time. The process for each person is decided beforehand and one
product is completed through the accumulation and completion of all processes of an assembly
line operation. On the other hand, in the cell production system, one worker is responsible for all
processes including the mounting of parts, assembly, processing, and inspection.
Since changes to the line (changes in production processes) in a line production system take
time, this method cannot flexibly respond to the start of production for new products. In addition,
when operations are interrupted in the middle of the line, such an interruption stops all processes
after the interruption and decreases overall productivity. In the cell production system however,
since one worker is responsible for all processes, an interruption in operations has no effect on
other workers. In addition, since the production process can be rearranged to be one that is optimal
for a product in a short period of time, it becomes possible to easily increase production variation.
For this reason, it becomes possible to adapt to production that is highly varied and flexible.
Therefore, c) is the correct answer.
a), b), and d) are advantages of the line production system.

Q8-19 c) Required number of components

In order to produce 10 of product A, 20 of component B and 10 of component C are required. In
order to produce 20 of component B, 20 of component C are required. Therefore, in order to
produce 10 of product A, the required number of component C becomes 10 + 20 = 30. However,
since there are 5 of component C in stock, the number of component C required becomes
30 − 5 = 25. Therefore, c) is the correct answer.
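The calculation can be checked with a short bill-of-materials sketch. The per-unit ratios are inferred from the answer (each unit of A uses 2 of B and 1 of C directly, and each B uses 1 of C):

```python
def required_c(product_a_qty, c_in_stock):
    """Net number of component C needed for a given quantity of product A."""
    b_needed = product_a_qty * 2   # 10 A -> 20 B
    c_direct = product_a_qty * 1   # 10 A -> 10 C used directly in A
    c_via_b = b_needed * 1         # 20 B -> 20 C used inside the B units
    return max(c_direct + c_via_b - c_in_stock, 0)

print(required_c(10, 5))  # 25, matching answer c)
```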


Q8-20 b) G to B

G to B refers to EC (Electronic Commerce) and EDI (Electronic Data Interchange) between
governments or local governments and businesses. Therefore, b) is the correct answer. The
relationships of EC and EDI among the three parties (governments, businesses, and individuals)
are referred to as shown below.

B to B   EC between companies. It is also written as B2B. The B stands for Business.
B to C   EC between companies and individuals. The C stands for Consumer.
G to G   EDI between governments or local governments. The G stands for Government.
G to B   EC and EDI between governments or local governments and businesses.
G to C   EC and EDI between governments or local governments and individuals. The C stands for Citizen.
C to C   EC and EDI between consumers. Both Cs stand for Consumer.
inG      The digitization of agencies and ministries of governments and local governments.

a) This option is inG.


c) This option is G to G.
d) This option is G to C.

Q8-21 b) Online credit card limits

A CAT (Credit Authorization Terminal) is a terminal used for CAFIS (Credit and Finance
Information Switching System) of NTT (Nippon Telegraph and Telephone Corporation).
Connection to the CAFIS center is through telephone and the validity of the user’s credit card and
the credit limit are checked online. Therefore, b) is the correct answer.

a) ACR (Automatic Carrier Routing) is a function that, for example, chooses the line for the most
inexpensive carrier when lines from multiple carriers are available between the caller and the
receiver of a long-distance telephone call. Carriers are chosen using an adapter added to the
telephone or a selection function built into the telephone.
c) GPS (Global Positioning System) is a positioning system that covers the entire planet. The
system is comprised of 24 satellites launched into orbit by the US military, ground based
control stations, and the user’s mobile station. Although the system was originally intended for
military use, it is currently widely used in the civilian market including car navigation systems
and the “Doconavi” service, a service from NTT DoCoMo that provides location based
information.
d) PDAs (Personal Digital Assistants) are small information devices that are about the size of the
palm of the hand. Currently, products that also combine cell phone or PHS (Personal
Handyphone System) functions are called PDAs as well.

Q8-22 b) RFID

RFID (Radio Frequency Identification) is a technology that enables the individual identification
or location confirmation of items that have an IC (Integrated Circuit) tag attached to them. The IC
tag and reader communicate with each other wirelessly by using radio waves. The IC tags combine
an integrated circuit and an antenna, and small tags that are only a few millimeters in area are
currently being used. Therefore, b) is the appropriate answer.

a) This option is an explanation of electronic money.


c) This option is an explanation of two-dimensional codes such as QR codes.
d) This option is an explanation of biometric authentication.

Morning Exam Section 9 Corporate and Legal Affairs Answers and Explanations

Section 9 Corporate and Legal Affairs

Q9-1 c) Corporate governance

Corporate governance refers to a corporate decision making mechanism that is governed
mainly by shareholders. It can also be said that it is a mechanism that respects the viewpoints of
shareholders, who are the strongest stakeholders, and checks the efficiency and validity of the
management. Therefore, c) is the correct answer.

a), b) Although both of these options describe contributions to society and external activities
that are outside of the company’s normal business operations, they are not related to
corporate governance. Option a) is what is known as environmental accounting, and b) is
social contribution based on CSR (Corporate Social Responsibility).
d) This option is a description of disclosure, which increases the transparency of corporate
management. It has no direct relation to corporate governance.

Q9-2 a) Development method of problem solving skills

The phrase in the question “a method in which many management issues that occur daily are
presented in order to make evaluations and judgments on them within a given period” is known
as “in-basket.”
In this method, a large amount of pending documents and memos are placed in the pending
basket in order to train and evaluate how much required paperwork can be completed in a
limited amount of time. Therefore, a) is the correct answer.

b) A case study is originally a method of management research. It collects materials
concerning a specific company or workplace, thoroughly analyzes the elements that are
affecting management phenomena, and clarifies their relationships. In short, it is a
research method that aims to discover the commonality and principles of similar cases.
c) An affinity diagram is a technique to represent chaotic factors, such as future problems and
inexperienced problems, as language data, and to organize them into a diagram in
consideration of the affinity between these factors in order to clarify problem areas.
d) Role playing is an educational training method where the trainee undergoes training by
playing various roles under set conditions. A characteristic of role playing is that there is no
script prior to playing the roles.

Q9-3 c) Explanation of project organization

“A temporary and flexible organization that is only active for a set period and for a set goal
and is composed of specialists from various departments in order to handle a specific issue” is a
project organization. Therefore, c) is the appropriate description.

a) “An organization where members belong to both their own specific functional department
and a department that performs a specific business” is a matrix organization.
b) “An organization that is composed of departments based on business properties such as
purchasing, production, sales, and finance” is an organization that is divided based on
departmental functions, so this is a functional organization.


d) “An organization that can deploy self-contained management activities” is a divisional
system organization. A business unit that directly connects strategic planning and
implementation is also known as an SBU (Strategic Business Unit).

Q9-4 a) Role of the CIO

The CIO (Chief Information Officer) is the top managing officer that is in charge of the
planning, promotion, dissemination, and management of information systems and technology.
The CIO’s responsibilities include the integration of management and information strategies,
construction planning and management of system infrastructures, and practical skills
improvement of workers using the information. Therefore, a) is the appropriate answer.

b), c) These are roles and functions of a system auditor.


d) This is a description of the general role of an administrator in charge of the information
systems department. It is not an appropriate description of the role of an officer with the
title of CIO.

Q9-5 b) Business impact analysis in BCP

A business impact analysis is an analysis technique for estimating the effect on core business
operations when they are halted by the occurrence of unforeseen events such as disasters. An
analysis is typically performed in the following order.
(1) Select and prioritize vital operations that need to be maintained and recovered.
(2) Identify required resources and confirm limiting conditions for business continuity and
recovery.
(3) Determine the maximum allowed downtime and estimate losses.
(4) Evaluate resources required for recovery and establish measures.
(5) From the results of these considerations, propose and implement measures that need to be
implemented beforehand.
Therefore, b) is the correct answer.
The maximum allowed downtime is the time allowed for a halt in operations and is set per
operation in consideration of the amount of resources within the organization and effect on
customers.
On the other hand, a recovery time objective is also commonly used as an indicator, but this
is a target time for the recovery of halted operations after an unforeseen event. It is set to
minimize the effect (losses) of the downtime.

a), c), d) All of these options do not directly relate to the effect on business in a state of
emergency.

Q9-6 c) Leadership style

This question tries to understand leadership styles by comparing strength between work and
human relationships through the maturity process of an organization. Work oriented leadership
is leadership through logical persuasion such as instructions and controls while human
relationship oriented leadership is leadership through emotional persuasion such as
encouragement and compassion. The concept that different types of leadership are effective
according to the maturity level of the members (followers) was advocated by Paul Hersey in
1977 as the Situational Leadership Model and is also known as the SL theory. In the diagram,
the maturity level of the followers increases in order of A, B, C, D. D shows a state where the
maturity level of the team members is at the highest. The organization has matured sufficiently
and is at a level where team members can operate independently to achieve results even when
both work oriented leadership and human relationship oriented leadership are weak. At this
point, what is required is leadership that does not get in the way of the players. This is called
“delegating leadership”. Therefore, c) is the correct answer.

a) From a work oriented leadership perspective, “only half of the nagging” comes after the
transition from A to B. On the other hand, from a human relationship perspective, although
in the beginning, maturity levels were low, this can be seen as the B level where maturity
levels have increased slightly because of an increased effect of human relationship oriented
leadership through the accumulation of human relations between the leader and the
members. In this case, “participating leadership” is applicable, in which the leader and the
members share ideas and make decisions together.
b) “Thorough discussion with the players” is a level where human relationship oriented
leadership is strong and work oriented leadership has weakened slightly. This is referring to
the level of C where maturity levels have greatly increased. In this case, “selling
leadership” is applicable, and the leader explains the ideas and answers questions.
d) “Strictly managing players” is a beginning level where work oriented leadership shows
strongly and human relationship oriented leadership is not a top priority. It can be said that
this is referring to level A. In this case where the maturity levels of the players are low,
“telling leadership” is applicable.

Q9-7 b) Linear programming problem

This is a linear programming problem. From the description, values regarding products A and
B, and materials P and Q can be organized into a table as shown below.

Inventory quantity
Product A (1 ton) Product B (1 ton)
of materials
Material P (tons) 4 8 40
Material Q (tons) 9 6 54
Profit (10,000 yen) 2 3

From this table, the following set of inequality expressions regarding materials is true. Here,
the production volumes of Products A and B are represented by x and y respectively.
4 x + 8 y ≤ 40 ... (1)
9 x + 6 y ≤ 54 ... (2)
x ≥0, y ≥0

The goal of “maximum profit” can be expressed as 2 x + 3 y (ten thousand yen). Therefore, b)
is the answer. As reference information, when the objective function is placed as z = 2 x + 3 y
and transformed into y = −2 / 3x + z / 3 , it is a straight line with the slope of −2/3 and the


y-intercept of z/3. When this straight line passes through the intersection ( x, y ) = ( 4,3) of (1)
and (2) that are mentioned earlier, the y-intercept is at its maximum and z = 17 (ten thousand
yen) is the maximum profit (please check this for yourselves).

Q9-8 b) Relationship between order quantity and total cost

This is a question about a graph that relates inventory quantities from the aspects of both
order quantity and total cost.
As written in the question, when the order quantity of each order is increased, the number of
orders placed decreases and as a result the annual order costs are reduced. Thus, order quantity
and annual order costs are inversely proportional.
On the other hand, when the order quantity per order is increased, management costs such as
inventory costs increase. Thus, order quantity and management costs are proportional.
The total cost is the sum of order costs and management costs, and is shown as the solid
curve at the top in b). The dotted straight line that rises to the right represents management
costs, and the dotted curve that falls toward the lower right represents order costs.
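This trade-off is the classic EOQ (economic order quantity) setting. A sketch with hypothetical parameters (annual demand, cost per order, and holding cost per unit are all assumptions, not from the question) showing that the total cost is minimized where the two cost components are equal:

```python
import math

# Hypothetical parameters: annual demand D units, cost S yen per order placed,
# holding cost H yen per unit of average inventory.
D, S, H = 10_000, 400, 2

def order_cost(q):   return D / q * S   # inversely proportional to q
def holding_cost(q): return q / 2 * H   # proportional to q
def total_cost(q):   return order_cost(q) + holding_cost(q)

# The EOQ formula gives the order quantity minimizing the total cost;
# at that quantity the two cost components are exactly equal.
eoq = math.sqrt(2 * D * S / H)
print(eoq)  # 2000.0
assert order_cost(eoq) == holding_cost(eoq)
assert all(total_cost(eoq) <= total_cost(q) for q in range(100, 10_001, 100))
```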

Q9-9 a) Products suited to fixed order quantity system

Since order dates in the periodic order system are predetermined, orders can be placed by
estimating the required quantity based on inventory quantity and demand forecasting on the
order date. On the other hand, with the fixed quantity order system, inventory quantities are
continually monitored and orders for a set quantity are placed when quantities fall below the set
limit quantity. In this case, an order point, which is the inventory quantity limit, has to be set
beforehand. Therefore, as described in a), the fixed quantity order system is the method that is
appropriate for products that can have order points set beforehand.

b) In an ABC analysis, products in group “A” account for high inventory costs even though
they include relatively few product types. However, since this classification says nothing
about inventory fluctuations, it alone cannot determine which order system is suitable.
c) As for products whose fluctuation in demand is large, it is difficult to set order points
because some periods may see sudden high volume shipments while other periods may see
little movement. This also requires the changing of order quantities, thus is not appropriate
for fixed quantity ordering.
d) Products with high unit costs and long inventory times are not applicable for periodic
ordering or fixed quantity ordering, and there is no choice but to order when necessary. It is
not particularly suitable for fixed quantity ordering.

Q9-10 c) Critical path in an arrow diagram

Among all possible paths, the critical path is the path where the total of the required number
of days for each activity is the largest. It is called the “critical path,” because when a delay
occurs in this path, it affects the entire schedule. Calculating the total of the required number of
days for each option yields the following:

a) 42, b) 40, c) 45, d) 42

163
Morning Exam Section 9 Corporate and Legal Affairs Answers and Explanations

Paths that were not offered as options include A – C – E – H – K – N (43) and A – D – F – J – M
(42). In comparison of the required number of days among all the paths, the largest number is
45. Therefore, c) is the correct answer.

Q9-11 c) Reducing the required number of days in the arrow diagram

Although not shown in the diagram in the question, an explanation is given below by using a
diagram with numbers placed in the circles.
First, keeping the number of days for activity D as 10 days, we figure out the total number of
days for the entire project. The dotted line arrow from node (5) to (6) in the diagram is a
dummy activity and signifies that activity H cannot be started unless both activities E and F are
completed.
[Arrow diagram: nodes (1)–(7) with activities A (1→2, 5 days), B (2→4, 3 days),
C (2→3, 5 days), D (3→4, 10 days), E (4→5, 5 days), F (3→6, 12 days),
G (5→7, 3 days), H (6→7, 6 days), and a dummy activity (5→6)]

• (1) → (2) → (4) → (5) → (7): 5 + 3 + 5 + 3 = 16 days
• (1) → (2) → (3) → (4) → (5) → (7): 5 + 5 + 10 + 5 + 3 = 28 days
• (1) → (2) → (3) → (4) → (5) → (6) → (7): 5 + 5 + 10 + 5 + 6 = 31 days
• (1) → (2) → (3) → (6) → (7): 5 + 5 + 12 + 6 = 28 days

Therefore, the total number of days required for the entire project is 31 days, and the critical
path is (1) → (2) → (3) → (4) → (5) → (6) → (7).
Next, consider the case where the required number of days for activity D is reduced from 10
days to 6 days.

• (1) → (2) → (4) → (5) → (7): 5 + 3 + 5 + 3 = 16 days
• (1) → (2) → (3) → (4) → (5) → (7): 5 + 5 + 6 + 5 + 3 = 24 days
• (1) → (2) → (3) → (4) → (5) → (6) → (7): 5 + 5 + 6 + 5 + 6 = 27 days
• (1) → (2) → (3) → (6) → (7): 5 + 5 + 12 + 6 = 28 days

In this case, the total number of days required for the entire project is 28 days, and the critical
path is (1) → (2) → (3) → (6) → (7).
From these results, the number of days that can be reduced is 31 − 28 = 3 days. Therefore, c)
is the correct answer.
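The earliest-time calculation above can be reproduced in a few lines. A sketch that computes the longest path through the network as read off the diagram, treating the dummy activity as a zero-duration edge:

```python
# Edges read off the arrow diagram: (from_node, to_node, days).
def project_duration(d_days):
    edges = [
        (1, 2, 5),       # A
        (2, 4, 3),       # B
        (2, 3, 5),       # C
        (3, 4, d_days),  # D
        (4, 5, 5),       # E
        (3, 6, 12),      # F
        (5, 7, 3),       # G
        (6, 7, 6),       # H
        (5, 6, 0),       # dummy activity
    ]
    earliest = {node: 0 for node in range(1, 8)}
    # Node numbers are already in topological order, so processing edges
    # sorted by their start node propagates earliest times correctly.
    for u, v, w in sorted(edges):
        earliest[v] = max(earliest[v], earliest[u] + w)
    return earliest[7]

print(project_duration(10))  # 31 (critical path 1-2-3-4-5-6-7)
print(project_duration(6))   # 28 (critical path becomes 1-2-3-6-7)
print(project_duration(10) - project_duration(6))  # 3 days can be saved
```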


Q9-12 d) Decision making for investment plans

In the maximin principle, a pessimistic value, such as the minimum profit value (or the worst
possible loss value) ensured under the worst predicted economic trend, is taken up for each of
the multiple investment plans, and the plan with the largest pessimistic profit value is selected.
It is called the maximin because it maximizes the minimum profit; in other words, it takes the
maximum of the minimums. It is a decision making principle that takes the most stable option
and minimizes the losses even in a worst case scenario.
In this question, the minimum profit (pessimistic value) for each plan is 500,000 yen for
aggressive investment, one million yen for continued investment, and two million yen for
passive investment. According to the maximin principle, the passive investment plan that
secures a profit of two million yen even in a worst case scenario is chosen. Therefore, d) is the
appropriate description.

a) A mixed strategy is used when multiple strategies are statistically chosen and implemented.
It is not appropriate in situations where one strategy is chosen as in this question.
b) A pure strategy is one where a particular strategy is chosen with certainty. When each
probability of occurrence for three predicted economic trends that are the basis for selection
is unclear, the expected profit for each plan is determined by assuming that the probability
of occurrence is the same for each plan, that is, a probability of 1/3, and then the strategy
with the largest expected profit is selected. In this question, the expected profit is 2.33
million yen for aggressive investment, two million yen for continued investment, and 2.83
million yen for passive investment. As a result, the passive investment plan is chosen.
c) In the maximax principle, an optimistic value, such as the maximum profit value ensured
under the best predicted economic trend, is taken up for each of the investment plans, and
the plan with the highest optimistic value is chosen. It is called the maximax
because it maximizes the maximum profit; in other words, it takes the maximum of the
maximums. It is a decision making principle that dreams of the maximum profit by
assuming the most optimistic option and the continual occurrence of the most advantageous
situations. The maximum profit for each plan in this question is five million yen for
aggressive investment, three million yen for continued investment, and four million yen for
passive investment. Therefore, according to the maximax principle, the aggressive
investment option, which should make a profit of five million yen if all goes well, is
chosen.
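The three decision principles can be compared side by side. A sketch using a payoff table whose minimum, maximum, and expected values match those quoted above; the middle column of each row is an assumption, since the original question's table is not reproduced here:

```python
# Payoff table (profit, in units of 10,000 yen) consistent with the minimum,
# maximum, and expected values in the explanation; middle figures are assumed.
payoffs = {
    "aggressive": [50, 150, 500],
    "continued":  [100, 200, 300],
    "passive":    [200, 250, 400],
}

maximin = max(payoffs, key=lambda p: min(payoffs[p]))      # best worst case
maximax = max(payoffs, key=lambda p: max(payoffs[p]))      # best best case
laplace = max(payoffs, key=lambda p: sum(payoffs[p]) / 3)  # equal-probability expectation

print(maximin)  # passive    (even the worst case earns 2 million yen)
print(maximax)  # aggressive (the best case earns 5 million yen)
print(laplace)  # passive    (highest expected profit, about 2.83 million yen)
```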

Q9-13 b) Explanation of the work sampling method

This is a question related to the IE (Industrial Engineering) analysis techniques used in
production management. The work sampling method analyzes workload (standard time) by
using measurement records obtained from periodically repeated observations of the status of
work, for work such as clerical work that is difficult to quantify. Therefore, b) is the correct
answer.

a) This is a description related to direct measurement. It is the most precise measurement
method of standard time.
c) This is a description related to the PTS (Predetermined Time Standards) method. It sets the
standard operation time based on individual motions.


d) This is a description of the most common method for setting standard time. Work sampling
is used to analyze the standard time for such activities where reports on actual performance
are difficult to obtain.

Q9-14 c) OC (Operating Characteristic) curve

An OC curve is a graph that plots the percent defective of a lot on the horizontal axis and,
on the vertical axis, the probability that a lot with that percent defective passes the
sampling inspection. We
can find out the properties of the sampling method from this OC curve. The graph in the
question shows that when the actual percent defective is p1%, the percentage that will pass the
sampling inspection is L1. From this, the descriptions for each of the options are as follows:

a) Since the pass rate of a lot that has a percent defective greater than p1% decreases along the
curve to a level below L1, the probability of passing is at most (i.e., less than or equal to)
L1.
b) Since the pass rate of a lot that has a percent defective less than p1% is greater than L1, the
probability of failing the inspection is at most “1.0 − L1”.
c) The concept for p2 is the same as p1. The probability that a lot with a percent defective
greater than p2% passes the inspection is at most L2. This is the correct description.
d) Since the probability that a lot with a percent defective less than p2% passes the inspection
is greater than L2, the probability of failing the inspection is at most “1.0 − L2”.
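The shape of an OC curve can be computed directly. A sketch for a hypothetical single sampling plan (sample size n = 50, acceptance number c = 2; both figures are assumptions, since the question's plan is not given), using the standard binomial model:

```python
from math import comb

# Acceptance probability L(p) of a single sampling plan: inspect n items and
# accept the lot if at most c defectives are found.
def acceptance_probability(p, n=50, c=2):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# L(p) falls as the true percent defective p rises -- the downward-sloping
# OC curve: a worse lot is less likely to pass the inspection.
probs = [acceptance_probability(p / 100) for p in range(0, 11)]
print([round(x, 3) for x in probs])
assert all(a >= b for a, b in zip(probs, probs[1:]))
```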

Q9-15 d) Monte Carlo method

The method of obtaining approximate solutions by repeating simulations using random
numbers is called the Monte Carlo method. Therefore, d) is the correct answer. This is a method
that was named after Monte Carlo, famous for its casinos. The random numbers are numbers
which are generated continuously and have no regularity. They can easily be created using
computers.
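As a concrete illustration (the example itself is not from the question), a short sketch that estimates π with random numbers:

```python
import random

# Estimate pi by sampling points uniformly in the unit square and counting
# those that fall inside the quarter circle of radius 1.
random.seed(42)  # fixed seed so the run is reproducible
n = 100_000
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4 * inside / n
print(pi_estimate)  # close to 3.1416; the approximation improves as n grows
```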

a) The cluster analysis method is a technique based on creating a number of clusters of
similar items from a target that consists of numerous types of items.
b) The exponential smoothing method is one technique for predicting the future based on past
data. It is not simply the extension of past trends, and is characterized by how it places
weight on data that is more recent and less weight on past data and therefore places
importance on data from the recent past.
c) The Delphi method is a technique for predicting the future and is a method that summarizes
the predictions of numerous specialists through repeated questionnaires. The word “Delphi”
is taken from the temple with the same name in Greek myths.

Q9-16 c) What can be learned from the results of a Pareto chart analysis

A Pareto chart is a composite graph that combines a bar graph of the occurrences of items
sorted in descending order with a line graph of the cumulative total of those occurrences. By using a
Pareto chart, it becomes easier to understand the percentage out of the whole that each data
value accounts for. It is used in areas such as ABC analysis as clues to deciding which items
should have priority management. In this question, failures that have occurred are analyzed to


improve system quality. A Pareto chart can show the cause of the failures and the percentage
that each cause accounts for at the same time, and is appropriate as an analysis result.
Therefore, c) is the correct answer.

[Example of a Pareto chart: bars in descending order of occurrence, with a cumulative
percentage line rising to 100% (axis marked 25%, 50%, 75%, 100%)]

a) This is an appropriate case for using a histogram.
b) This is an appropriate case for using a scatter diagram.
d) This is an appropriate case for using a cause and effect diagram.
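The data behind a Pareto chart is easy to build: sort the causes in descending order of occurrence and accumulate the percentages. A sketch with hypothetical failure causes and counts:

```python
# Hypothetical failure causes and occurrence counts.
failures = {"configuration error": 45, "software bug": 30,
            "operator error": 15, "hardware fault": 7, "other": 3}

total = sum(failures.values())
rows, acc = [], 0
for cause, count in sorted(failures.items(), key=lambda kv: -kv[1]):
    acc += count
    rows.append((cause, count, round(100 * acc / total, 1)))

for row in rows:
    print(row)
# The top two causes alone account for 75% of all failures -- exactly the
# kind of prioritization a Pareto chart (and ABC analysis) makes visible.
```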

Q9-17 c) Association diagram method

An association diagram method is used to clarify the cause and effect of various elements
that compose a problem and to find a solution. It connects the complex and intertwined causes
and effects by using arrows. Depending on the direction of the arrow, it can represent one of
two types: either cause-effect or purpose-measure. It is one of the “new QC seven tools.”
Therefore, c) is the appropriate explanation.

New QC seven tools: The traditional “QC seven tools” are analysis methods that use numerical
values to maintain a certain level of product quality. The new QC seven tools are likewise
analysis methods for maintaining product quality, but they are a set of tools for solving
vague problems and often use words and language instead of numerical data. In addition
to a), c), and d), there are also affinity diagrams, matrix diagrams, matrix data analysis,
and arrow diagrams.

a) This is a description of PDPC (Process Decision Program Chart). It is one of the “new QC
seven tools.” It considers measures to improve processes until objectives are achieved.
b) This is a description of the KJ method, a problem solving technique. It is a technique that is
effective for analyzing the substance of a problem by organizing data obtained through
fieldwork. In addition, by organizing into groups, it can lead to sharing common values
among group members and improving teamwork.
d) This is a description of a tree diagram. It seeks to find the optimum measure by breaking
down objectives and the means to achieve them, step by step, from the abstract to the
concrete. It is one of the “new QC seven tools.”


Q9-18 d) Control charts

A control chart is used for managing the characteristic values of products in chronological
order. It is a graph that is used to determine whether products of the exact same type and
specifications made using the same process have stable manufacturing processes. Products that
have characteristic values between the upper and lower control limits are deemed to be within
standards, and products that fall outside of this range are deemed to be nonstandard products
with quality defects. It is normal for the fluctuation of characteristic values within the
acceptable range to occur at random. Therefore, in the case where certain trends are observed;
for example, characteristic values often exceed reference values or are on the increase, it can be
used to determine a possible abnormality in the process.
For Line B, the changes in characteristic values that should be close to the center line show
an increasing trend and can be predicted to exceed the upper control limit in the future. There
may be some sort of abnormality in the manufacturing process and it becomes necessary to
identify the cause. Therefore, d) is the appropriate description.

a) The characteristic values of Line A, centered on the center line, are within the upper and
lower control limits and there is no significant fluctuation.
b) Although Lines A and B are both within the control limits, as mentioned above, there
appears to be some sort of abnormality in the trend of values in Line B and the cause should
be investigated.
c) As long as a line is within the boundaries of the upper and lower control limits, unless there
is a distinctive trend, the line is deemed normal even if it deviates from the center line.

Q9-19 d) Financial statements

Among the four financial statement options, the balance sheet of option d) shows the
financial condition of the company at the end of the accounting term, and the asset, liability,
and capital balances are listed by item. It is “a financial statement which indicates assets,
liabilities, and net assets of a company at a certain point in time and clarifies its financial
condition” as written in the question.

a) Statement of shareholders’ equity: With the implementation of the New Company Law of
Japan in 2006, the appropriation methods of profit must now be recorded not only in the net
assets section of the balance sheet but also in the statement of shareholders’ equity.
b) Cash flow statement: This statement shows the inflow and outflow of cash according to
objectives and causes such as sales cash flow, investment cash flow, and financial cash
flow. It is a statement that focuses on the flow of actual cash. For example, on an income
statement, when capital investment is made, it appears that the costs for the current period
are large and profit is low. However, on a cash flow statement, it can be seen that it is for an
aggressive growth strategy.
c) Income statement: This statement lists all the earnings for an accounting period and their
related costs, and shows the difference between these two as profits.

Q9-20 c) Financial indicators

This question refers to financial indicators or indexes used in management analysis that


determines the level of business performance and financial condition by analyzing the
company’s profitability, safety, productivity, and growth potential. The percentage of the profit
to gross capital is the total capital profit ratio, and a larger ratio means higher profitability.
Therefore, c) is the appropriate answer.

a) Fixed ratio is the percentage of fixed assets to equity capital. Typically, a percentage less
than 100% is favorable.
b) Equity to total assets is the percentage of equity capital to gross capital. Although a larger
value is better, 50% or above is typically favorable.
d) Current ratio is the percentage of current assets to current liabilities. The larger this value is,
the safer it is in the short term. Typically, a percentage higher than 200% is favorable.

Q9-21 a) Calculation of gross profit

The following items can be determined from the cost report.

Current term total manufacturing cost = Material cost + Labor cost + Expenses
= 400 + 300 + 200 = 900

Current term product manufacturing cost
= Current term total manufacturing cost
+ Initial work in process inventory
− Ending work in process inventory
= 900 + 150 − 250 = 800

Calculations are made by entering these values in the income statement.

Cost of sales = Initial product inventory
+ Current period product manufacturing cost
− Ending product inventory
= 120 + 800 − 70 = 850
Gross profit = Sales − Cost of sales = 1,000 − 850 = 150 (thousand yen)

Therefore, the answer is a).
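The same two-step calculation in code, using the question's figures (thousand yen):

```python
# Figures from the cost report and income statement (thousand yen).
material, labor, expenses = 400, 300, 200
current_total_manufacturing_cost = material + labor + expenses          # 900

initial_wip, ending_wip = 150, 250                                      # work in process
product_manufacturing_cost = (current_total_manufacturing_cost
                              + initial_wip - ending_wip)               # 800

initial_products, ending_products = 120, 70                             # finished goods
cost_of_sales = initial_products + product_manufacturing_cost - ending_products  # 850

sales = 1_000
gross_profit = sales - cost_of_sales
print(gross_profit)  # 150 (thousand yen)
```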

Q9-22 a) Explanation of Activity Based Costing

Activity Based Costing is a cost calculation methodology that aims to perform continued cost
improvements by measuring the cost and performance of each business activity. Unlike the
traditional method of calculating indirect costs in a consolidated manner and apportioning them
to the product, this methodology aims to reflect costs as directly as possible on product costs by
understanding costs and performance of each activity of even indirect departments. Therefore,
a) is the correct explanation.

b) This is an explanation of standard costing.
c) This is an explanation of an inventory management technique known as ABC analysis.
d) This is an explanation of direct costing.


Q9-23 d) Calculating the break-even point from an income statement

When this table is summarized, sales are 700 million yen, variable costs are 140 million yen,
fixed costs are 500 million yen, and profit before tax is 60 million yen.

Sales at break-even point = Fixed cost / (1 − Variable cost / Sales)
= 500 / (1 − 140 / 700) = 500 × 700 / 560 = 625 (million yen)

Also, as another solution to this question, when sales at the break-even point is X million yen
and the variable cost at the break-even point is Y million yen, the following equations hold for
X and Y and can be used for calculations.

X = Y + 500 ... (1)
Y ÷ 140 = X ÷ 700 ... (2)

Since Y = X − 500, the equation (2) can be solved using X as follows:

(X − 500) ÷ 140 = X ÷ 700
700 × (X − 500) = 140 × X
(700 − 140) × X = 700 × 500
X = 700 × 500 ÷ 560 = 625 (million yen)

Therefore, d) is the correct answer.
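The break-even formula in code, with the question's figures (million yen); the division is rearranged into an equivalent form to avoid rounding artifacts:

```python
# Break-even sales = fixed cost / (1 - variable cost ratio), computed here as
# fixed cost * sales / (sales - variable cost), which is algebraically equal.
def break_even_sales(fixed_cost, variable_cost, sales):
    return fixed_cost * sales / (sales - variable_cost)

bep = break_even_sales(fixed_cost=500, variable_cost=140, sales=700)
print(bep)  # 625.0
# Sanity check: at the break-even point, sales exactly cover variable + fixed costs.
assert abs(bep - (bep * 140 / 700 + 500)) < 1e-6
```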

Q9-24 a) Break-even point analysis

Variable cost fluctuates proportionally with the fluctuation of sales. Fixed costs do not
change even when sales fluctuate. The relationship of sales and profit of Company A and
Company B can be graphed as follows:

[Graph: sales on the horizontal axis and costs/sales on the vertical axis, both in 100 million
yen from 0 to 1,400. The sales line is drawn against each company's total cost line (fixed cost
for Company A/B plus cost of sales for Company A/B); the regions above the intersections are
labeled “Profit for Company A” and “Profit for Company B”.]


As can be seen from the graph, since Company A has a higher fixed cost percentage and a
lower variable cost ratio, its profit increases sharply as sales increase; but because its
break-even point is higher, it quickly falls into the red when sales decrease. Therefore, the
characteristic of Company A is a).

b) Marginal profit is the amount after subtracting variable costs from sales. When marginal
profit is divided by sales, this is known as the marginal profit ratio. The marginal profit
ratio is determined by subtracting the variable cost ratio from one (1). Since the variable
cost ratio of Company A is lower than Company B, the marginal profit ratio is higher.
c) Sales at the break-even point can be calculated as “fixed cost / (1 − variable cost ratio)”,
and so sales at the break-even point for Company A and Company B are as follows:
Company A: 400 / (1 − 500 / 1,000) = 800
Company B: 100 / (1 − 800 / 1,000) = 500
Therefore, the break-even point for Company A is higher.
d) Since the break-even point is higher and the percentage of fixed costs is larger for Company
A, a decrease in sales can more easily result in larger losses.
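The comparison of the two companies can be checked numerically, using the figures from the explanation above (100 million yen): sales of 1,000 each; Company A with variable cost 500 and fixed cost 400, Company B with variable cost 800 and fixed cost 100.

```python
# Marginal profit ratio and break-even sales for each company.
def analyze(sales, variable_cost, fixed_cost):
    marginal_profit_ratio = (sales - variable_cost) / sales
    break_even = fixed_cost * sales / (sales - variable_cost)
    return marginal_profit_ratio, break_even

a_ratio, a_bep = analyze(1000, 500, 400)
b_ratio, b_bep = analyze(1000, 800, 100)
print(a_ratio, a_bep)  # 0.5 800.0 -> A: higher marginal profit ratio, higher break-even
print(b_ratio, b_bep)  # 0.2 500.0 -> B: lower break-even, but profit grows more slowly
```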

Q9-25 b) ROE

The ROE (Return On Equity) is the ratio of profit to equity capital and is also known as
“return on capital equity” or “return on shareholders’ equity.” Therefore, b) is the correct
answer. More specifically, it is a financial index that shows how much profit was gained
(capital efficiency) in relation to how much was invested (funds invested by shareholders).
Since it shows profitability in relation to shareholders’ equity, it serves as a guideline for
dividend capacity and is important as information for investors when they make a decision of
whether or not to invest. In other words, ROE is a shareholder oriented management index
because it clarifies how much profit is being made from the perspective of the shareholder
(investor). Keep in mind that this is a fundamental term in modern Japanese business, where
shareholder investment has become increasingly active. ROE is typically calculated using the
following formula:

ROE = Profit (current term net profit) ÷ equity capital (gross capital × equity ratio)

a) The ratio of profit gained from main management activities to operating capital is the
operating capital profit margin. This shows how much profit was gained on the capital
(invested capital) managed by the company in normal business activities during those
activities. It is basically the same concept as ROI described in d). The operating capital
profit margin is calculated using the following formula:
Operating capital profit margin
= operating profit (sales − cost of sales − selling costs − management costs)
÷ operating capital (gross capital (= total assets) − items not used in the main business
(deferred assets, idle fixed assets, construction in progress, investment assets, etc.))
c) The ratio of profit to total assets is ROA (Return On Assets). ROA is the operating
efficiency of capital invested during business activities. It is different from ROE in that it is
an index for the efficiency of all assets held by the company from the perspective of not


only shareholders but all investors including creditors. ROA shows to what level the
company’s total assets are being utilized to gain profits and is typically calculated using the
following formula:

ROA = Profit (current term net profit) ÷ gross capital (= total assets)

d) The ratio of profit to invested capital is ROI (Return On Investment). ROI is an index used
to determine the investment efficiency of the entire company, individual investment
projects, and divisions. It shows the ratio of profit generated from invested capital in
relation to each business. ROI is typically calculated using the following formula:

ROI = Operating profit (ordinary profit + interest paid)
÷ invested capital (loans + value of corporate bonds issued + capital stock)

As related items, make sure to understand fundamental business terms such as EVA
(Economic Value Added) that shows how much profit greater than capital cost (e.g., shareholder
dividends) was generated from business activities from the perspective of shareholders, EPS
(Earnings Per Share) that is calculated by dividing the current term profits by the number of
issued shares at the end of the period, and PER (Price Earnings Ratio) that shows how many
times the EPS the current stock price is.
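A small sketch putting the ROE and ROA formulas side by side; the figures are hypothetical and serve only to illustrate the formulas quoted above:

```python
# Hypothetical figures (million yen) -- not from an exam question.
net_profit     = 120
total_assets   = 2_000   # gross capital
equity_capital = 800     # equity ratio of 40%

roe = net_profit / equity_capital   # profitability from the shareholders' perspective
roa = net_profit / total_assets     # efficiency of all assets, for all investors
print(f"ROE = {roe:.1%}, ROA = {roa:.1%}")  # ROE = 15.0%, ROA = 6.0%
```

Since equity capital is a subset of total assets, ROE is always at least as large as ROA for a profitable company; a large gap between the two indicates heavy reliance on borrowed capital.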

Q9-26 a) Amortization methods of intangible fixed assets

When a company outsources the development of the software it uses, or develops the
software itself, the material costs, labor costs, and expenses incurred in the software
development are totaled, listed as intangible fixed assets on the balance sheet, and amortized
over 5 years on a straight-line basis. The same applies to purchased software. Therefore, a)
is the appropriate answer. The original copy of software that is meant to be copied for sale,
and software that is used for research and development, is amortized over 3 years.
b), c), d) None of these options are applicable.

Q9-27 c) Calculation of the moving average method

This question relates to the calculation method of the sold unit cost. The moving average
method recalculates the average of the inventory cost and the purchase cost each time items are
purchased, and uses this average value as the sold unit cost for subsequent issues. Therefore,
c) is the correct answer. The following are typical calculation methods of sold unit costs.

• First-in first-out method: The first items purchased are the first to be sent out; the sold
unit cost is the unit cost of the earliest items purchased.
• Last-in first-out method: The most recently purchased items are the first to be sent out;
the sold unit cost is the unit cost of the most recently purchased items.
• Periodic average method: The sold unit cost is the average value obtained by dividing the
sum of the evaluated value at the beginning of the term and all purchase costs through the
entire term by the total number purchased.
• Simple average method: The sold unit cost is the average of the purchase unit costs
throughout the term.

a) Although this is the simple average method shown above, it is not actually used.
b) This is the first-in first-out method shown above.
d) This is the last-in first-out method shown above.
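A sketch of the moving average method with hypothetical transactions: each purchase re-averages the inventory unit cost, and that average then serves as the sold unit cost for subsequent issues.

```python
# Opening inventory: 100 units at 300 yen/unit (hypothetical figures).
qty, unit_cost = 100, 300.0

def purchase(p_qty, p_cost):
    """Re-average the unit cost over the combined inventory."""
    global qty, unit_cost
    unit_cost = (qty * unit_cost + p_qty * p_cost) / (qty + p_qty)
    qty += p_qty

def issue(i_qty):
    """Send out i_qty units at the current average unit cost."""
    global qty
    qty -= i_qty
    return i_qty * unit_cost  # cost of goods sold for this issue

purchase(100, 500)      # average becomes (100*300 + 100*500) / 200 = 400
cogs = issue(50)
print(unit_cost, cogs)  # 400.0 20000.0
```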

Q9-28 d) Calculating cost of sales with the last-in first-out method

The last-in first-out method is a concept where the most recent purchase is the first to be
sold. In this case, simply assume that the most recent purchase is sold every time a sale is made.
Taking note that the carryover from the previous month is 100 units, the applicable cost is as
follows:

• The cost of sales for 50 units sold on September 10
The most recent purchase cost of 50,000 yen on the 6th applies to all 50 units.
(As a result, the product inventory of 50 units purchased on the 6th is gone.)
50,000 yen/unit × 50 units = 2.5 million yen
• The cost of sales for 100 units sold on September 25
The most recent purchase cost of 40,000 yen on the 17th applies to 50 units. The cost of
30,000 yen applies to the remaining 50 units carried forward from the previous month.
40,000 yen/unit × 50 units = 2 million yen
30,000 yen/unit × 50 units = 1.5 million yen Total 3.5 million yen

Therefore, the sales cost is 2.5 million yen + 3.5 million yen = 6 million yen, or option d).
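The last-in first-out calculation maps naturally onto a stack. A sketch re-tracing the question's figures:

```python
# Inventory layers as (units, unit cost in yen); figures from the question.
stack = [(100, 30_000)]  # carryover from the previous month

def purchase(units, cost):
    stack.append((units, cost))

def sell(units):
    """Consume layers from the top of the stack (last in, first out)."""
    cost_of_sales = 0
    while units > 0:
        layer_units, layer_cost = stack.pop()
        taken = min(units, layer_units)
        cost_of_sales += taken * layer_cost
        if layer_units > taken:  # push back the unsold remainder
            stack.append((layer_units - taken, layer_cost))
        units -= taken
    return cost_of_sales

purchase(50, 50_000)  # September 6
total = sell(50)      # September 10: 50 units x 50,000 yen
purchase(50, 40_000)  # September 17
total += sell(100)    # September 25: 50 x 40,000 + 50 x 30,000
print(total)  # 6000000 -> cost of sales is 6 million yen
```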

Q9-29 d) Copyrights of web pages

The law stipulates that conventional works are subject to protection by copyright, and
databases of those which exhibit creativity are also subject to copyright protection.
A URL (Uniform Resource Locator) is an address that identifies an information resource on
the Internet. If URLs of web pages are collected and brief comments are added to each, the
collection as a whole can be considered to have some creativity. Such a link collection is
regarded as a database of information related to a specific field, and is protected under the
Copyright Act (Article 12-2). Therefore, d) is the appropriate description.

a) Since a homepage (web page) is displayed publicly on the Internet, publishing the
copyrighted works of others on it without permission is illegal even if the page is claimed
to be for personal use.
b) Although freeware can be freely distributed and copied, the copyright still belongs to the
author and is protected under copyright law.
c) Although shareware can be freely used on a trial basis and distributed, a fee must be paid if
the user wishes to continue to use it after the trial period. The copyright for the actual
software belongs to the author. Although it is illegal to continue to use the software after

173
Morning Exam Section 9 Corporate and Legal Affairs Answers and Explanations

the end of the trial period, there is no problem with data created using the software.

Q9-30 a) Vested rights of programs as work made in the course of duty

Article 15 of the Copyright Act states that “the authorship of a computer program work
which, on the initiative of a juridical person, etc., is made by an employee in the course of his
duties in connection with its business, shall be attributed to such juridical person, etc., unless
otherwise stipulated by contract, working regulations or the like at the time of the making of the
work.” Therefore a) is the correct answer.

b), c), d) None of these options are defined in the Copyright Act.

Q9-31 d) Protection of copyrighted works

Article 10-3 of the Copyright Act states that “the protection granted by this Act to works
shall not extend to any computer programming language, rule, or algorithm used for creating
such work.” Therefore, d) is the appropriate answer.

a) The copyright for programs that are jointly developed are not decided solely on the share of
development costs. Joint copyrights cannot be exercised without the agreement of all
copyright holders. (Article 65 of the Copyright Act)
b) When a database attains creativity by how its information is selected or how its structure is
organized, the database is protected as a copyrighted work. (Article 12-2 of the Copyright
Act)
c) Although a program is protected as a copyrighted work, know-how is not protected as a
copyrighted work.

Q9-32 c) Protection under the Unfair Competition Prevention Act

Trade secrets are protected under the Unfair Competition Prevention Act. Trade secrets are
“technical or business information useful for commercial activities such as manufacturing or
marketing methods that is kept secret and that is not publicly known.”
There are three conditions that must be met.

(1) Undisclosed: Not publicly known


(2) Controlled secret: Kept secret
(3) Usefulness: Information useful for commercial activities

Therefore, c) is the correct answer.

a) Patented inventions are protected under the Patent Act.


b) System development procedures manuals that are distributed are protected under the
Copyright Act.
d) Even if the specification is important, as long as it is not controlled as confidential
information, then it is not protected under the Unfair Competition Prevention Act.

Q9-33 a) Compliance with software license agreements

Software license agreements reduce costs and simplify software management when the
same software is to be installed and used continually on multiple computers: instead of
purchasing a software package for each computer, the user concludes a license for the
required number of computers. From the question, simply moving the computer from one
place to another is not a problem and remains in compliance with the license agreement.
Therefore, a) is the correct answer.

b) Installing Software X on a computer that was purchased after the conclusion of a license
agreement means that there are more computers than the license allows and an additional
license must be purchased.
c) The discontinuation of sales of Software X has no relation to being able to freely install
Software X regardless of license.
d) A software license agreement defines the number of PCs that the software can be installed
on and does not have any relation to the number of PCs that can use the software at the
same time.

Q9-34 d) Act on the Prohibition of Unauthorized Computer Access

The Act on the Prohibition of Unauthorized Computer Access prohibits unauthorized access
to corporate and government computers by using the passwords and user IDs of other users
without their permission. Although “charge of the obstruction of business by damaging a
computer” under the Penal Code could not be applied if there was no obstruction of business
even if there was unauthorized access, this law prohibits unauthorized access itself.
Article 3 (Prohibition of Unauthorized Access) of the Act on the Prohibition of Unauthorized
Computer Access specifies “specific computers with access restriction functions.” Computers
without access restriction functions are not protected under this law. Therefore, d) is the
appropriate description.

a) As explained above, under this act, unauthorized access is punishable even if there are no
damages.
b) Article 3 (Prohibition of Unauthorized Access) specifies “specific computers with access
restriction functions through communication lines…” Therefore, acts committed without
going through a network are not punishable under this law.
c) Article 4 (Prohibition of acts that facilitate unauthorized access) states that access
credentials “shall not be provided to anyone other than access administrators associated
with the access restriction function or users that have been granted access.” The act of
providing others with a password without permission is regarded as an act that facilitates
unauthorized access and is punishable under this law.

Q9-35 d) Control objectives of JIS Q 27001:2006

“JIS Q 27001:2006 (ISO/IEC 27001:2005) Information security management
systems—Requirements” is a standard for providing a model for establishing, implementing,
operating, monitoring, reviewing, maintaining, and improving an Information Security
Management System (ISMS) by organizations. It is used as a certification standard in the ISMS
conformity assessment system. Annex A is divided into 11 articles, A.5 through A.15 as shown
below, and defines 39 security categories according to control objectives and is further
comprised of 133 controls.


A.5 Security policy


A.6 Organization of information security
A.7 Asset management
A.8 Human resources security
A.9 Physical and environmental security
A.10 Communications and operations management
A.11 Access control
A.12 Information systems acquisition, development and maintenance
A.13 Information security incident management
A.14 Business continuity management
A.15 Compliance

The “providing of management and support for information security in accordance with
business requirements and relevant laws and regulations” is a control objective in the
“information security policies” category included in A.5 “Security policies.” Therefore, d) is the
correct answer.

Q9-36 c) Provisions of the Act on Electronic Signature and Certification Business

The Act on Electronic Signatures and Certification Business, in effect from April 1, 2001, lays
the legal foundation for treating digital signatures in the same manner as handwritten signatures
and impressed seals. Article 3 stipulates that when a specific digital signature is placed on
information on electromagnetic records, it is deemed as authentic and concluded. It has the
same basic validity as an impressed seal under the Code of Civil Procedure. Therefore, c) is the
appropriate answer.

a) Digital signature technology used to be mainly public key cryptography based. However,
even when new technologies are developed in the future and implemented, the law is
phrased so that it does not limit digital signatures to just public key cryptography. For
example, digital signatures using biometric technologies such as fingerprints are recognized
as digital signatures under this law.
b) Digital signatures are defined as a “measure taken with respect to information that can be
recorded in an electromagnetic record” (Article 2, Section 1).
d) Certification businesses can also be provided by private entities. In particular, a
certification business performed with respect to an electronic signature that conforms to
the criteria prescribed by ordinance of the competent minister is referred to as a Specified
Certification Business. Only entities that have been accredited by the competent minister
to perform these services can issue public key certificates.

Q9-37 d) Act for Securing the Proper Operation of Worker Dispatching Undertakings and
Improved Working Conditions for Dispatched Workers

The Act for Securing the Proper Operation of Worker Dispatching Undertakings and
Improved Working Conditions for Dispatched Workers defines worker dispatch as “means
causing a worker(s) employed by one person so as to be engaged in work for another person
under the instruction of the latter, while maintaining his/her employment relationship with the


former, but excluding cases where the former agrees with the latter that such worker(s) shall be
employed by the latter.” It clearly states that instructions are given by employees of the
company that the worker was dispatched to. Therefore, d) is the correct answer in that the
project manager at the company that the worker was dispatched to gives instructions to the
dispatched worker.

a) Liability for defect warranty refers to liabilities for defects found after delivery. Although
dispatched workers perform work under the instructions of the supervisor at the company of
dispatch, the worker is not responsible for the completed product. Therefore, a contract that
places the burden of responsibility on the dispatched worker is not appropriate.
b) The Act for Securing the Proper Operation of Worker Dispatching Undertakings and
Improved Working Conditions for Dispatched Workers states that “efforts must be made
not to perform acts that are intended to identify a specific dispatched worker,” and so
requests for a specific worker cannot be accepted.
c) Interviews are also acts that lead to identifying a specific individual and are against the Act
for Securing the Proper Operation of Worker Dispatching Undertakings and Improved
Working Conditions for Dispatched Workers.

With a revision of the Act for Securing the Proper Operation of Worker Dispatching
Undertakings and Improved Working Conditions for Dispatched Workers in March of 2004,
interviews and sending of resumes before the start of employment are allowed for “employment
placement dispatching” for the purposes of job placement, but it is not applicable for dispatched
workers in this question.

Q9-38 c) Type of contract that meets given conditions

This is a slightly difficult question that associates details (conditions) of a contract with the
type of contract.
When the conditions to be met are associated with each type of contract, we get the
following.
(1) There is no obligation to hand over completed work to the service provider.
(2) Work related instructions are given by the service provider.
(3) There are no particular constraints on workplaces.
These three conditions are simply applied to the given options.

a) (2) and (3) apply to a contract for work, but (1) does not. In “Contracts for Work” of
Article 632 of the Civil Code, it states that “A contract for work shall become effective
when one of the parties promises to complete work and the other party promises to pay
remuneration for the outcome of the work.” In “Timing of Payment of Remuneration” of
Article 633 of the Civil Code, it states that “Remuneration must be paid simultaneously
with delivery of the subject matter of work performed.”
b) Although (1) applies to a loan contract, as a general rule, (2) and (3) do not apply.
d) It is clear that (2) and (3) do not apply to a dispatch contract.

Therefore, c), quasi-mandate contract, is the correct answer. Note that in “Mandates” of
Article 643 of the Civil Code, it states that “A mandate shall become effective when one of the
parties mandates the other party to perform a juristic act, and the other party accepts the


mandate.” Furthermore in “Quasi-Mandate” of Article 656 of the Civil Code, it states that “The
provisions of this Section shall apply mutatis mutandis to mandates of business that do not
constitute juristic acts.”

Q9-39 a) Companies responsible under the Product Liability Act

The following companies can be held liable under the Product Liability Act.

(1) Companies that manufacture, process, or import the applicable product (hereinafter
“manufacturer”).
(2) A company that displays its name, business name, trademark, or other indication on the
product as the manufacturer of the product, or that displays a name on the product in a
manner that misleads others into believing it is the manufacturer.
(3) In addition to the companies listed above, a company that displays a name on the product
which, in light of the manner of the product’s manufacture, processing, import, or sale
and other circumstances, holds the company out as its substantial manufacturer.

It can be seen that Company A is liable since both (1) and (2) apply. Company B which
performed the coding may be liable because (1) applies. However, under the exemptions in
Article 4 Section 2, it states that “in case where the product is used as a component or raw
material of another product, the defect occurred primarily because of the compliance with the
instructions concerning the design given by the manufacturer of such another product, and that
the manufacturer, etc. is not negligent with respect to the occurrence of such defect.”
A company is not liable for defects that result from following the design instructions of the
company that placed the order. In this case, the design was done by Company A, and the above
exemption applies to Company B, which is thought not to be liable. Therefore, a) is the correct answer.
None of the three conditions apply to Company C, and so it is not liable.

Q9-40 c) Act on the Protection of Personal Information

Article 30 (Charges) of the Act on the Protection of Personal Information states that “when a
business operator handling personal information is requested to notify the Purpose of
Utilization or to make a disclosure, the business operator may collect charges for taking the
measure.” Therefore, c) is the appropriate answer.

a) Article 22 (Supervision of Trustees) states “When a business operator handling personal


information entrusts an individual or a business operator with the handling of personal data
in whole or in part, it shall exercise necessary and appropriate supervision over the trustee
to ensure the security control of the entrusted personal data.”
b) Article 20 (Security Control Measures) states “A business operator handling personal
information shall take necessary and proper measures for the prevention of leakage, loss, or
damage, and for other security control of the personal data.”
d) Article 15 (Specification of the Purpose of Utilization) states that “When handling personal
information, a business operator handling personal information shall specify the purpose of
utilization of personal information to the extent possible.”


Q9-41 a) Software Management Guidelines

“Software Management Guidelines” specify items that corporate entities should implement
when using software, in order to prevent the illegal copying of software.
In the guidelines, one of the basic items to be implemented by corporate entities is to make
sure that all users fully understand license agreements. Therefore, a) is the correct answer.

b) Software installation is done by the user.


c) This option explains the appointment of a software administrator and is an item that should
be implemented by the company.
d) Personal software can be used if it is approved beforehand by the software administrator
and is not prohibited.

Q9-42 b) JISC (Japanese Industrial Standards Committee)

The JISC (Japanese Industrial Standards Committee) is a deliberation council appointed by
the Ministry of Economy, Trade and Industry. It functions as a council that deliberates on
the enactment and revision of JIS (Japanese Industrial Standards) based on the Industrial
Standardization Act, and also investigates and deliberates on industrial standardization.
Therefore, b) is the correct answer.

a) This refers to the JSA (Japanese Standards Association)


c) This refers to the JEITA (Japan Electronics and Information Technology Industries
Association)
d) This refers to the JIPDEC (Japan Information Processing Development Corporation)

Q9-43 b) UCS-2

The UCS (Universal Character Set) is a 2-byte or 4-byte character set standard that can
accommodate the character sets of countries around the world. It was adopted as the ISO/IEC
10646 international standard in 1993. The foundation of the standard, which accommodates
the various languages in common use, is referred to as the BMP (Basic Multilingual Plane)
and includes most of the commonly used basic characters and symbols.
UCS-2 is a subset that represents each character in UCS with 16 bits (2 bytes) and covers
only the BMP. Therefore, b) is the correct answer.
UCS-2 corresponds to Unicode, which was established by a consortium of US vendors in the early 1990s.
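
Since UCS-2 covers exactly the BMP, i.e. code points U+0000 through U+FFFF, whether a character fits in UCS-2 can be checked from its code point. A minimal sketch (the function name is hypothetical):

```python
# UCS-2 can represent exactly the BMP: code points U+0000 through U+FFFF,
# each of which fits in a single 16-bit unit.

def fits_in_ucs2(ch: str) -> bool:
    return ord(ch) <= 0xFFFF

print(fits_in_ucs2("A"))    # True  (U+0041, in the BMP)
print(fits_in_ucs2("語"))   # True  (U+8A9E, a Kanji in the BMP)
print(fits_in_ucs2("𠮷"))   # False (U+20BB7, outside the BMP)
```

Characters outside the BMP, such as the last example, require the 4-byte UCS-4 form (or surrogate pairs in UTF-16).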

a) This refers to the EBCDIC code.


c) This refers to the JIS Kanji code.
d) This refers to the Shift-JIS Kanji code.

Q9-44 a) QR code characteristics

A QR (Quick Response) code is a 2-dimensional matrix barcode that has a pattern of black
and white squares on a 2-dimensional plane. A sensor captures the horizontal and vertical
aspects of the code to read it, and the rotation angle and read direction are identified using three
position detection symbols allowing the code to be read from any angle. Therefore, a) is the
correct answer.


[Figure: QR Code]

b) The code can handle a maximum of 4,296 alphanumeric characters or a maximum of 1,817
double byte (i.e., Kanji) characters.
c) A QR code is able to handle data such as numbers, letters, kanji, hiragana, symbols, as well
as binary and control codes.
d) QR codes are not a programming language.

180
Afternoon Exam Section 10 Business Strategy, Information Strategy, Strategy Establishment and Consulting Techniques
Answers & Explanations

Section 10 Business Strategy, Information Strategy, Strategy


Establishment and Consulting Techniques

Q10-1 Business transformation in an automated reception terminal manufacturing and selling company

[Answer]
[Subquestion 1] (1) a), e) (2) c), f)
[Subquestion 2] (1) Sales activities: The features of functions and capabilities of the new
products can be utilized in sales proposals.
<Alternative answer> The features of the new products for advanced customers
can be utilized in sales proposals.
Design and manufacturing: The optional parts for the new products can be
utilized as standard parts in regular products.
(2) 1. Delivery by the delivery date demanded by the customer can be expected.
2. Reduction of product prices can be expected.
[Subquestion 3] Because ordering costs and subcontracting costs can be reduced by
limiting procured parts to standard parts with established processing
drawings.
[Subquestion 4] (1) c) (2) g)

[Explanation]
[Subquestion 1]
(1) In [New sales strategy], for advanced customers, it has been decided that order reception
activities still will be performed as they had been performed in the past. This subquestion asks
about its order reception method.
At the beginning of [New sales strategy], it states that “Until now, the same services have been
offered to both advanced customers and other customers,” and at the end it says “for other
customers, carry out order reception activities for its regular product series.” In other words, the
order reception method until now had been the same for all customers, but because from now on,
for the other customers, orders are to be received for “regular product series,” the answer should
contrast against this order reception method.
For advanced customers, since the order reception method is the same as it had been before, the
method would be that “orders will be received for products with individual specifications that meet
customer needs.” Also, on the other hand, attention should be paid to the description in [Design
department transformation] that states that “With regard to custom-made products for advanced
customers,...design [will] instead combin[e] basic structures and new options.” Selecting answers
from the answer group which correspond with these would result in finding two, which are a)
“Reception of orders for custom-made products, which require the development of new optional
parts” and e) “Aggressive reception of orders for products which meet new needs of customers.”

(2) In [Current state of the sales department], it states that orders from advanced customers are high
volume, and additional orders can be expected. The passage after that has a description purporting


that since the other customers do not have a large number of stores, profit ratio is low despite the
high workload. In response to this, the description says that Company D “decided to carry out
order reception activities for its regular product series.” The “regular product series” are complete
ready-made products that do not entail design changes, which mean that the activities are different
from past order reception activities for products with individual specifications to meet customer
needs. Furthermore, note the description in [Design department transformation] states that “The
design method of the regular product series is to divide the product structure into basic structures
and options and allow the specifications of the product to be decided by combining basic structures
and options in accordance with customer needs.”
Selecting answers from the answer group which correspond with these descriptions would
result in finding two, which are c) “Reception of orders for products which can be manufactured by
adding standard parts to the basic structures” and f) “Reception of orders for products chosen from
regular product series.”

[Subquestion 2]
(1) This subquestion asks what advantages there are in the sales activities and design by receiving
orders of products that meet needs as before without changing the order reception method to
advanced customers.
In [Current state of the sales department], it states that “There is a tendency that when advanced
customers in their respective industries deploy new products based on new needs, other customers
will follow and deploy similar products. So, the features of new products deployed by advanced
customers are utilized by the sales department in their sales proposals to other customers.” On the
other hand, in [Design department transformation], it states that “With regard to custom-made
products for advanced customer...design [will] instead combin[e] basic structures and new options.
The parts separated as new options will be...used as standard parts in their regular product series.”
Therefore, answers should indicate that, for sales activities, “features of the new product can be
utilized in sales proposals,” and that, for design and manufacturing, “optional parts of the new
product can be utilized as standard parts for regular products.”

(2) This subquestion asks about the advantages on the part of “other customers” resulting from the
changes in the order reception method. As mentioned before, until now, orders have been received
for products which meet customer needs, but from now on, the method was changed to take orders
for regular product series. In other words, until now, the needs of each customer were addressed,
but from now on, customers will select the products to install from the regular product series
prepared by Company D. This may be perceived as a degradation of service level when seen from
the customer perspective. Furthermore, there is no direct description in the question text that states
the advantages of the change in the order reception method for customers. In such a case, it is
effective to look for the disadvantages of the past method for customers.
At the end of [Current state of the Design department], there is a description that states that “the
amount of additional design works is increasing for existing products as well, and the current
design method does not allow the company’s designing capacity to keep up with this increase. Due
to the work delays, delivery time tend to become extended, often making the company unable to
satisfy customer demands.” With the change this time in the order reception method, since


additional design will not be performed, delivery time can be shortened. However, simply having a
shorter delivery time is not sufficient as an advantage to customers. Note that this becomes an
advantage to customers only if it is what the customer desires. The description above states that
Company D is “unable to satisfy customer demands”; therefore, based on this, an answer such as
“delivery by the delivery date demanded by the customer can be expected” would be good.
Continuing on from the above-mentioned description, it states that “the increase in costs for
additional design and manufacturing is causing prices of products to become relatively expensive,”
which is another disadvantage for customers. And since there is no more additional design works
due to the change in the order reception method, and standard parts will be used, the causes for
relatively expensive product prices will be eliminated, and naturally, reduction of product prices
could be expected. Therefore, an answer as in “Reduction of product prices can be expected” is
appropriate.

[Subquestion 3]
At the end of [Design department transformation], it states that “changing to this new design
method would also have the effect of reducing procurement costs.” This subquestion asks for the
reason behind that.
First, looking for descriptions related to procurement costs in question, we find the description
“procurement costs are higher than internal processing costs,” at the end of [Current state of the
Manufacturing department] “(1) Parts processing factory.” Based on the surrounding descriptions,
it is clear that the costs related to subcontracted parts are the procurement costs. There is a
description about subcontracting costs as part of the procurement cost, which are relatively
expensive because specifications of the subcontracted parts are different each time.
On the other hand, at the beginning of [Manufacturing department transformation and
production management system construction], there is a description that states that “subcontracted
parts for use in regular product series products will be limited to standard parts with established
process drawings.” Therefore, with this transformation, because parts specifications will no
longer change with each order, it seems possible to keep subcontracting costs from becoming
relatively expensive. This appears to be the answer, but note that the subquestion does not concern
subcontracting costs, but rather, procurement costs. Procurement costs are, literally, all costs
necessary for procurement, so they include not only subcontracting costs which are direct costs for
the processing, but also costs for ordering, etc.
There are no direct descriptions of costs related to order placement, but it states that each time
processing is to be subcontracted, processing drawings must be provided to the subcontractor in
order to receive an estimate, resulting in a significant amount of order-placement desk work. Since
a significant amount of work naturally leads to significant operational costs, these ordering costs
must also be included when considering procurement costs. Thus, in this area as well, as with the
subcontracting costs, since it will no longer be necessary to supply processing drawings and
receive an estimate on each occasion of subcontracting, the amount of order-placement work, and
therefore the ordering costs, can be expected to be reduced.
To summarize these, the direct reason why procurement costs can be reduced is because
“ordering costs and subcontracting costs can be reduced.” This is possible due to the fact that
“procured parts will be limited to standard parts with established processing drawings.” Thus, an


answer such as “By limiting procured parts to standard parts with established processing drawings,
both ordering costs and subcontracting costs can be reduced” is appropriate.

[Subquestion 4]
(1) A production plan is a plan concerning what products will be manufactured, when, and in what
quantities. And, in order to assemble more efficiently, based on the types and quantities of products
to be manufactured, parts are manufactured and procured, and assembly sequences etc. are
determined.
To consider this production plan, the production management system is supposed to use
information from the design BOM and the manufacturing BOM. However, these two alone are
not sufficient, because the most important information is missing: what products are needed,
when, and in what quantities. In other words, even if the necessary parts and the assembly
procedures are known, plans cannot be established without information about what products to
produce.
From the latter half of [Current state of the Manufacturing department] (2) “Assembly factory”,
it is apparent that Company D ships products in accordance with the customer’s installation work
dates and times. And since this information is recorded to the sales database as installation work
schedule when installation work dates are decided, from this information, it is possible to know
which products are necessary, when, and in what quantities. Therefore, c) “Customer installation
work schedule” is the correct answer.

(2) This subquestion asks what information, other than standard work times and quantities for the
relevant products, is necessary in order to issue parts processing instructions. Since parts
processing must be completed in accordance with the time that unit assembly starts, unless the unit
assembly start time is known, appropriate work instructions cannot be given. Therefore, the correct
answer is g) “Unit assembly start time.”


Q10-2 Investment decisions

[Answer]

[Subquestion 1] (A) F_t / (1 + r)^t (B) M
[Subquestion 2] (C) 8.7 (D) 5
[Subquestion 3] (E) d) (F) c) (G) e) (H) i)
[Subquestion 4] (1) For Plan 1, the NPV (Net Present Value) would be important in cash flow
assessment
For Plan 2, the IRR (Internal Rate of Return) would be important in cash
flow assessment
(2) Plan which should be selected: Plan 3
Reason: They have a proven track record of the production, and can start
alternate production the fastest.
[Subquestion 5] a) T b) F c) F d) T

[Explanation]
This question concerns investment appraisal using the NPV (Net Present Value) method. In the
NPV method, for plans such as equipment investment, etc., the present value is obtained by
discounting the future cash flow of each fiscal period to their present values giving consideration to
annual interest rates. The total of the present values represents the NPV. The NPV method is used
for deciding whether or not to adopt a plan: if the total amount is positive, the plan should be
adopted, and if the total amount is negative, the plan should be rejected. Subquestion 1 asks how
to derive the formula for calculating the NPV. Since the method of calculating the NPV is described in
the question text, the answer can be determined by building the formula in accordance with the
description. Subquestion 2 asks to calculate specific values using the formula derived in Subquestion
1. Since the question asks for answers to be rounded to the first decimal place, attention needs to be
paid when answering the question. The key to Subquestion 2 is how to treat the residual values of
assets. Since Subquestion 3 is a multiple-choice question concerning terminology, it can be
answered through a careful reading of the question text.
Subquestion 4 (1) is a slightly difficult subquestion. Since the question imposes a constraint requiring
that the word “cash” be used in the answers, using the expression “cash-flow assessment” would be
sufficient. Subquestion 4 (2) must be answered under the constraint of “considering the competitive
environment of parts Y”. Since Company A’s business conditions are described at the beginning of the
question text, the description of the conditions can be referred to in answering this question. Since
Subquestions 4 (1) and 4 (2) ask for brief explanations, long answers are not appropriate. It is
important not to lose sight of the key points which must be included in the answers. Subquestion 5
concerns the new manufacturing equipment which is scheduled to be completed in three years. Since
the question begins with a description on how the situation will change when the manufacturing
equipment is completed, the responses to this subquestion must not lose sight of the key points


described in the question. This question is somewhat difficult.
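The NPV decision rule described above can be illustrated with a short Python sketch. The function and the sample figures below are our own hypothetical illustration, not the question's data; under the rule, a plan is adopted only when its NPV is positive.

```python
def npv(initial_cost, cash_flows, residual_value, rate):
    """Net Present Value: the sum of each year's cash flow discounted to
    present value, minus the initial investment cost, plus the residual
    value discounted to the final year's present value."""
    n = len(cash_flows)
    discounted = sum(f / (1 + rate) ** t for t, f in enumerate(cash_flows, start=1))
    return discounted - initial_cost + residual_value / (1 + rate) ** n

# Hypothetical plan: invest 20, receive cash flows 6.5, 8.5, 11 over three
# years, residual value 5, discount rate 8%.
value = npv(20, [6.5, 8.5, 11], 5, 0.08)
print(f"NPV = {value:.1f}, adopt plan: {value > 0}")
```

Since the computed NPV is positive here, this hypothetical plan would be adopted under the rule.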

[Subquestion 1]
• Blanks A, B: The question text describes the NPV calculation method as follows: “[T]he sum of the
cash flows for each fiscal year, discounted to their present values, is calculated. From that, initial
investment costs are subtracted, and residual value added, to calculate the NPV.” The following
formula to calculate the NPV is given:

    NPV = Σ[t=1..n] A − ( I − B / (1 + r)^n )

By rearranging this formula, we get:


    NPV = Σ[t=1..n] A − I + B / (1 + r)^n

Since “I” represents the initial investment cost, by comparing this to the method of calculating the
NPV described in the question text, “Σ[t=1..n] A” and “B / (1 + r)^n” are either the “sum of the
cash flows for each year, discounted to their present value” or the “residual value.”
The residual value when calculating the NPV must be discounted to the present value for Year n.
Referring to the formula indicated in the question text which calculates the present value of the
cash flow for Year 1 and Year 2, in order to discount a value n years in the future to its present
value, the cash flow is to be divided by (1 + discount rate)^n. Considering that Σ is the symbol for
calculating a sum, it is apparent that “Σ[t=1..n] A” represents the sum of the cash flows for each
year, discounted to their present values, and that “B / (1 + r)^n” represents the residual value.
Since the cash flow for Year t is Ft, and discounting it to its present value using discount rate r
means dividing it by (1 + r)^t, the present value of the cash flow for Year t can be calculated using
the expression Ft / (1 + r)^t. On the other hand, since the residual value is M, the residual value
discounted to the present value for Year n can be calculated with M / (1 + r)^n. Therefore,
Ft / (1 + r)^t is appropriate for blank A, and M is appropriate for blank B.


[Subquestion 2]
• Blank C: Referring to Table 1, the 6.5 and 8.5 in the given expression represent the present values of
the expected annual cash flow for Year 1 and Year 2. The appropriate value to enter in blank C is
the present value of the expected annual cash flow for Year 3. Since the expected annual cash flow

for Year 3, before discounting to present value, is 11, and the value of (1 + r)^n for Year 3, given
the discount rate of 8%, is 1.260, the present value of the expected annual cash flow for Year 3 can
be calculated as follows:

11 / 1.260 ≈ 8.730

The answer for blank C, rounded to the first decimal place, is 8.7.

• Blank D: Since the residual value after three years must be considered as increased cash flow, it is
equal to the present value when the invested equipment is sold after three years. According to
Table 1, the present value (expected disposal price) is 5. Thus, the answer for blank D is 5.
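The figure for blank C can be reproduced directly; the discount factor 1.260 is the rounded value of (1 + 0.08)^3 given in the question's Table 1:

```python
# Present value of the Year-3 expected cash flow of 11 at a discount rate
# of 8%, using the rounded Year-3 discount factor 1.260 from Table 1.
pv_year3 = 11 / 1.260
print(round(pv_year3, 1))  # rounded to the first decimal place -> 8.7
```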

[Subquestion 3]
• Blank E: In [Comparison and consideration of each of the Parts Y Alternative Production Plans], a
precondition for the trial calculations is described as “for Plan 2, transactions will be performed in
dollars.” If the yen falls against the anticipated yen/dollar exchange rate, the cost calculated in
“yen” to be paid to overseas Company E for parts Y will increase. The risk of actual costs
fluctuating due to a floating exchange rate is called exchange fluctuation risk. Therefore, the answer
for blank E is d) “exchange.”

• Blank F: In [Comparison and consideration of each of the Parts Y Alternative Production Plans], a
precondition for the trial calculations is described as “the initial investment cost for Plan 3 is the
amount for the subscription of new stock allocated to a third party by Company B, and the
residual value is the stock selling price.” If the stock price falls, the value of the subscribed stock
allocated to a third party will fall. Similarly, the residual value acquired by selling the stock will
also fall. The risk of value dropping due to fluctuations in stock price is called stock price
fluctuation risk. Therefore, the answer for blank F is c) “stock price.”

• Blanks G, H: At the beginning of the question text, there is a description stating that “Parts Y is a
technologically established product, but, at present, complete automation of its manufacturing is
difficult, and some of its processes rely on manual work, resulting in labor expenses accounting
for approximately 30% of its manufacturing costs.” Labor expenses are, essentially, personnel
costs, and they are fixed costs that occur without regard to the quantity of parts Y manufactured. In
addition to labor expenses, if Plan 1 is adopted, amortization costs of the equipment will occur,
which are also fixed costs. Costs other than labor expenses are, in Plan 2, manufacturing contract
costs to overseas manufacturer Company E, and in Plan 3, manufacturing contract costs to
domestic subcontractor Company B. In either case, Company A merely needs to pay an amount
proportional to the delivered quantity of parts Y. In other words, parts Y production costs can be
considered variable costs. Therefore, the answer for blank G is e) “fixed costs,” and


the answer for blank H is i) “variable costs.”

[Subquestion 4]
(1) When deciding which of several plans to adopt, the advantages and disadvantages of each plan
are often considered as the ultimate decision criteria. This subquestion concerns the fact that the
plan to be adopted varies depending on which factor is considered most important. If the factor
considered important corresponds with the advantages of a plan, that plan should be adopted.
Therefore, considering the advantages of Plan 1 and Plan 2, let us think about where the
importance is placed. Since the NPV (Net Present Value) method and the IRR (Internal Rate of
Return) method are used here as methods of cash flow assessment, which are often used as
investment decision criteria, these values can be referenced to make a decision.

Plan 1: Initial investment costs are high, and the establishment of an overseas subsidiary is also a
major concern. However, since the NPV value is the highest, when importance is placed on
NPV when assessing the cash flow, this is the plan which should be adopted.

Plan 2: The NPV is the lowest of the three plans, and the residual value is zero (0), but it has the
lowest initial investment cost, and the highest IRR. When importance is placed on IRR when
assessing the cash flow, this is the plan which should be adopted.

(2) The current state of the competitive environment in which Company A operates can be found in
the description at the beginning of the question text. It states, “Regarding parts Y, in the long term,
demand for the parts is foreseen to remain at almost the same level, but due to increasing price
competition, earnings are declining for all companies. If a company could win the survival race, the
company can expect the benefit of the remaining player, but missteps in price competition may
result in losing market share to competitors.” The plan that will be adopted when based on the
current competitive environment will require conditions where “the market for parts Y will not
shrink, so continued demand can be expected. Conversely, there are no elements for the market to
grow, so there is no need for the amount of production to be increased. Losing in price competition
can be expected to result in rapidly losing market share, so the plan must offer price
competitiveness.” For each plan in Table 2, since there are no conditions that affect the production
volume, for this question, production volume does not need to be considered as an indicator when
selecting a plan. However, there is a description in the question text stating that, in relation to parts
X, “Company A’s board of directors decided on the basic policy of immediately increasing the
manufacturing of parts X at the Z1 factory, and considering alternative plans for parts Y.”
Therefore, it can be inferred that production of parts Y in Z1 factory will be switched over to
production of parts X. Because parts Y is one of Company A’s main products, if there is a period of
reduced production, it will not be possible to keep up with demand from the market of parts Y. This
may result in losing market share to competitors. Since the production of parts X will be increased
immediately, it is desirable for the alternative production of parts Y to be started immediately. Since
Plan 1 consists of “establishing an overseas subsidiary and constructing a factory,” from common
sense, it will not be possible to immediately start production of parts Y. Plan 2 consists of
subcontracting production of parts Y to overseas manufacturer Company E, which has a track
record of producing parts similar to parts Y. Plan 3 consists of subcontracting production of parts Y


to domestic manufacturer Company B, which is already responsible for producing 50% of parts Y.
Comparing the plans in order to start production of parts Y immediately, it would be preferable to
agree on a manufacturing contract with Company B, which already has production experience of
parts Y itself, not parts that are similar to parts Y. Therefore, Plan 3 should be adopted, and it will
be sufficient to answer with the reason that since Company B has experience of producing parts Y,
it can start increased production immediately.

[Subquestion 5]
From the description in the question text, it is apparent that when the new manufacturing
equipment for parts Y is completed in three years, it will increase the ratio of automated
manufacturing processes, and a significant reduction of labor expenses can be expected. In other
words, it
will be possible to greatly reduce the manufacturing costs of parts Y. Since Company A’s business
environment is extremely competitive, and missteps in price competition will result in market share
being lost to competitors, by installing the new manufacturing equipment, product selling price
could be reduced, allowing the company to win against its competitors. Furthermore, if the
company settles on a strategy to keep the selling price at the current level, the profit margin will be
larger by the amount of reduction in manufacturing costs.

a) T (True): Since using the new manufacturing equipment will result in reduced labor expenses,
manufacturing costs can be lowered. If the product selling price is kept at the current level, the
reduction in manufacturing costs will result in greater profit.

b) F (False): Future demand is foreseen to stay stable in the long run, and although the new
equipment aims to reduce costs, it does not aim to increase revenue through increased
production. There is no guarantee that increased production will lead to increased selling.

c) F (False): If the selling price is raised, the company will lose market share to competitors, and
the likelihood is that profit will fall.

d) T (True): If the selling price is reduced by the amount of reduction in manufacturing costs, the
company will be more competitive from a price standpoint, leading to a larger market share
for Company A.


Q10-3 Consideration of a systematization policy in order to contribute to


business transformation for a steel sheet processing manufacturer

[Answer]
[Subquestion 1] (1) 1. Outsourcing of work related to exports
2. New development of overseas consumers
(2) Because it will enable the consecutive manufacturing of products without
switching coating materials.
[Subquestion 2] A: the product numbers and warnings to be printed on the surface of products
B: joint development of highly functional products together with construction
material manufacturers
[Subquestion 3] The functions needed are those for monitoring operating conditions at each
factory, and selecting the optimal production factory.

[Explanation]
[Subquestion 1]
(1) There is a description in [Approach of Company C to business transformation] (3) that states
“Improve profit ratio by expanding direct selling,” indicating that profit ratio is higher for direct
selling than for receiving orders through trading companies. Since Company C will strengthen its
overseas roll-out through trading companies, despite the profit ratio being lower than for direct
selling, there must be some advantages (objective) in doing so. To answer this subquestion, we
must search through the question text for these advantages. There is a description in [Business
outline of Company C] that states “business procedures related to exports are also outsourced to the
trading companies,” so the fact that Company C can outsource export-related work, whose
procedures are troublesome, is an advantage for Company C. Furthermore, there is a description in
[Results of interviews with the sales head office manager and factory managers] (1) that states that
“Company C would like to explain their products and present selling case examples to trading
companies with robust overseas reputation to newly develop consumers,” and a description in
[Company C business transformation plan] item (1) states the same. From these, since it is clear
that it is difficult for Company C to newly develop overseas consumers on its own, the answers are
“Outsourcing of work related to exports” and “New development of overseas consumers.”

(2) Searching the question text for descriptions related to coating, we find, in [Results of interviews
with the sales head office manager and factory managers] (2) a description that states “The Tokyo
factory . . . DB will be linked in order to. . . implement an efficient production system in which
products which use the same coating material are produced consecutively.” In (3), we find a
passage that states “[At t]he Osaka factory . . . because of the large amount of switchover work, the
utilization rate has not risen. Company C would like to implement a manufacturing specification
DB such as the one used in the Tokyo factory, and raise the utilization rate.” While the switchover
work at the Osaka factory may be difficult to understand, since the utilization rate of the Tokyo
factory has risen because there is less switchover work, we just need to take note of the difference
between the Osaka factory and the Tokyo factory with regard to the coating process. As stated in
(2) in the above-mentioned interviews with factory managers, at the Tokyo factory, products which


use the same coating materials are produced consecutively, resulting in greater efficiency.
Therefore, it seems that if coating materials are not the same, then switchover work is needed.
Thus, we just need to describe this situation as indicated by the sample answer.

[Subquestion 2]
From [Company C business transformation plan] (2), we understand that Company C would
like to strengthen the quality control of electric appliance products by adding the contents of blank
A to the manufacturing specification DB. The description in [Results of interviews with consumers
and trading companies] (2), regarding quality issues with electric appliance products, states that
there has been a complaint that “appliance product numbers and warnings printed on the surface
were for different models of the same product.” Since there are no other descriptions relating to
quality issues, the content which would resolve this complaint, as given in the sample answer, is
the appropriate one.
Likewise, from [Company C business transformation plan] (5), we understand that Company C
aims to expand direct selling through blank B. According to the question text, direct selling refers
to transactions for selling directly to consumers, without going through trading companies.
Consumers, here, refers to construction material manufacturers and household appliance
manufacturers. Since blank A concerns electric appliance manufacturers, by looking for a
description concerning construction material manufacturers, we find the following description in
[Results of interviews with consumers and trading companies] (1): “They are particularly interested
in joint development of highly functional products. If it is possible to make low cost products by
using general standard steel sheets, they would like to expand the amount of purchase orders.”
Since an increase in quantities ordered by construction material manufacturers would be, for
Company C, an expansion of direct selling, we just need to describe this situation as indicated by
the sample answer.

[Subquestion 3]
Searching for descriptions concerning the “Company-wide Production Plan System” in the
question text, we find in [Results of interviews with the sales head office manager and factory
managers], a description that states: “Because the number of production lines cannot be increased
(in the Tokyo factory), there is a need for a ‘Company-Wide Production Planning System’ which
can coordinate with regional factories and adjust workloads across multiple factories.” In [Business
outline of Company C], it states that “[w]hile for some factories the number of orders is falling and
the amount of work is decreasing, other factories, due to their full production schedules, find
themselves having to turn down orders or having customers delay their delivery dates.” Given these
two (2) descriptions, since it is clear that there is a need for a function which coordinates
manufacturing between factories, an answer such as the sample answer is appropriate.
While it was not addressed in Subquestion 2, according to the legend to the “Company C
business transformation map,” the arrows between Critical Success Factors (hereinafter referred to
as CSF) in the map shows the relationships between each CSF. Therefore, it is clear from the map
that the factor “Construct a ‘Company-wide Production Planning System’” can “Reduce delivery
time” and “Improve utilization rate.” With regard to delivery time reductions, it is clear from the


description in [Business outline of Company C] that some of the products that are produced at
factories with full production schedules can be produced instead at factories with less workload and
surplus production capacity. With regard to utilization rates, there is a description in the question
text that states the number of orders has fallen at the Hiroshima factory, resulting in a lower
equipment utilization rate. It is apparent that if the workload is increased, the utilization rate will
also increase. This also shows that a function to coordinate manufacturing across the factories is
necessary. Thus, taking note of the figure in the question may make it easier to narrow down the
key points. The figure in the question is called a Balanced Scorecard (hereinafter referred to as
BSC) strategy map, and it is advisable to at least study its outline.

Afternoon Exam Section 11 Programming (Algorithms) Answers and Explanations

Section 11 Programming (Algorithms)

Q11-1 Binary search trees

[Answer]

[Subquestion 1] 47, 48, 49

[Subquestion 2] (A) n ≠ x->key <Alternative answer> n is not equal to x->key

(B) k

[Subquestion 3] (1) 60

(2) (C) x->right (D) x->left

(E) e->left ≠ nil <Alternative answer> e->left is not nil

[Subquestion 4] (F) f) (G) b) (H) e)

[Explanation]
A binary search tree, as the question indicates, contains two or fewer child nodes for any given node,
and meets the following conditions:

<Conditions>
• If y is a node in the left subtree of x, “y’s key value < x’s key value.”
• If y is a node in the right subtree of x, “x’s key value < y’s key value.”

Consider the binary search tree in Fig. 1 to confirm these conditions. For example, the root node’s
key value, 50, is greater than any of the key values (15, 37, 46) in the left subtree, and smaller than any
of the key values (60, 70, 75, 77, 82) in the right subtree, thus satisfying the conditions (see Fig. A).
These conditions also apply to any other node.

                 50        (every key in the left subtree of 50 is smaller than 50;
               /    \       every key in its right subtree is larger than 50)
            37        75
           /  \      /  \
         15    46  60    82
                     \   /
                     70 77

Fig. A Relationships between node key values


Thus, when searching for a node with some key value, one can narrow down the search range based
on these conditions, by comparing the key value being searched for and a node’s key value.
(Ex.) Searching for a key value of 46.

                 50
               /    \
            37        75
           /  \      /  \
         15    46  60    82
                     \   /
                     70 77

     46 < 50, so descend left from 50; then 37 < 46, so descend right from 37;
     the node with key value 46 is found.
Fig. B Searching for a node with a key value of 46

Based on the basic knowledge regarding binary search trees presented above, the subquestions
consider the details of algorithms and the computational complexity of the searching.
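The narrowing-down search of Fig. B can be sketched in Python. The tuple-based node representation below is our own assumption, not the notation used in the question:

```python
# Each node is (key, left_subtree, right_subtree); nil is None.
# This is the tree from Fig. 1 / Fig. A.
tree = (50,
        (37, (15, None, None), (46, None, None)),
        (75, (60, None, (70, None, None)), (82, (77, None, None), None)))

def search(node, key):
    """Return True if key exists, descending left or right by comparison."""
    while node is not None:
        k, left, right = node
        if key == k:
            return True
        node = left if key < k else right
    return False

print(search(tree, 46))  # follows the path 50 -> 37 -> 46
```

Each comparison discards one subtree, which is exactly how the search range narrows in Fig. B.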

[Subquestion 1]
Referring to Fig. B, let us consider how the range gradually narrows down for the key values that
can be inserted in the right subtree of the node with the key value 46, while traversing the tree
from the root to that node.

(1) Since 46 < 50 (root node key value), the key value of the node in the target subtree < 50

                 50        (46 < 50: descend left)
               /    \
            37        75
           /  \      /  \
         15    46  60    82
                     \   /
                     70 77

     Key value of any node within the target subtree < 50


(2) Since 37 < 46, therefore 37 < key value of node within the target subtree < 50

                 50
               /    \
            37        75        (37 < 46: descend right from 37)
           /  \      /  \
         15    46  60    82
                     \   /
                     70 77

     37 < key value of any node within the target subtree < 50

Further, based on the binary search tree conditions, the key values which can be inserted in the
right subtree of 46 must be greater than 46. Therefore, the target range consists of the natural
numbers greater than 46 and smaller than 50. Enumerating the numbers yields “47, 48, 49.”
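The narrowing-down argument above can also be carried out mechanically: each left turn lowers the upper bound, and each right turn raises the lower bound. The helper below is our own illustration (not part of the question's program), using a tuple representation of the Fig. 1 tree:

```python
def insertable_range(node, path):
    """Follow 'L'/'R' moves from the root, tracking the open interval
    (low, high) that any key inserted at the final position must fall in."""
    low, high = float("-inf"), float("inf")
    for move in path:
        key, left, right = node
        if move == "L":
            high = min(high, key)   # left subtree: keys must be < key
            node = left
        else:
            low = max(low, key)     # right subtree: keys must be > key
            node = right
    return low, high

# Tree from Fig. 1; the right subtree of node 46 is reached by L, R, R.
tree = (50, (37, (15, None, None), (46, None, None)),
            (75, (60, None, (70, None, None)), (82, (77, None, None), None)))
low, high = insertable_range(tree, "LRR")
print(low, high)  # the natural numbers strictly between are 47, 48, 49
```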

[Subquestion 2]
Consider the program for the insert operation for inserting a new node into a binary search
tree. As the question states, when adding a new node to the binary search tree, the processes
described in (1) and (2) below are performed. On performing these processes, consideration must
be given to cases where a node containing the specified key value already exists in the binary
search tree and the program does nothing.

(1) Insertion point search: Searches for the appropriate location to insert a node with the specified
key value.
(2) New node insertion: Creates a new node, and performs pointer reassignment in order to insert
the node into an appropriate position in the binary search tree.

The program can be broadly divided into an initial configuration section, an iterative section
using a while statement, and a conditional decision section using an if statement. The while
statement performs the tree traversal toward a leaf, selecting the left or right subtree based on the
comparison of the key values; this part performs “(1) Insertion point search.” In the if statement, a
new node is created and attached at the end of a left or right pointer; therefore, this part performs
“(2) New node insertion processing.” The insert operation can be summarized as shown in Fig. C.
The roles of variable x (pointer for finding the appropriate insertion point), variable w (pointer for
storing x), and variable k (pointer to the newly created node) are vital for understanding the details
of the program.


/* n: key of node to be inserted, t: pointer to the root node */
function insert(n, t)
  x ← t                    {initial configuration: set x to point at the root node}

  /* (1) Insertion point search */
  while (x ≠ nil and   A  )
    w ← x                  {x is stored in w}
    if (n < x->key)        {the key value of the node to be inserted is compared with
                            the key value of the node pointed to by x}
      x ← x->left          {x is set to the left subtree pointer}
    else
      x ← x->right         {x is set to the right subtree pointer}
    endif
  endwhile

  /* (2) New node insertion */
  if (x = nil)             {determine whether insertion processing needs to be performed}
    create a new node, and assign a pointer to that node to k
    k->key ← n
    k->left ← nil
    k->right ← nil
    if (k->key < w->key)   {determine whether to insert to the left or the right}
      w->left ←   B        {pointer assignment for left-side insertion}
    else
      w->right ←   B       {pointer assignment for right-side insertion}
    endif
  endif
end

Fig. C insert operation overall structure

Consider the blanks based on the above.

• Blank A: To answer this question, one should consider when (1) Insertion point search should end. It
may first appear that it is when the insertion point is found. However, as there are two conditions,
there may be one more condition for its completion. As explained before, if a node is found with
the same key value as the specified key value, nothing occurs, therefore searching also ends there.
In other words, there are two possible ways the searching terminates: when an appropriate
insertion point is found, and when a node with the same key value as the specified key value
already exists within the binary search tree. Paying attention to the iteration condition of the
while statement which specifies that “processing is repeated while ... is true,” one can see that the
appropriate condition is “while an appropriate insertion point has not been found, and no node
with the same key value as the specified key value has been found.” The appropriate insertion
point, as given by the question, is “(where) a nil pointer is encountered,” and thus the first


condition “x ≠ nil” indicates that the insertion point has not been found. Since blank A
corresponds with the second condition, in order to express the state in which no node has been
found with the same key value as the specified key value, it should be set specifically to “the
specified key value (n) and the key value of the node which x is currently pointing at (x->key) are
not equal.” Thus, the answer for blank A is “n ≠ x->key.”

• Blank B: The if statement at the start of (2) evaluates whether insertion processing is necessary
(condition “x = nil”), and, if true, a new node is created, assigned values, and placed in the
appropriate position in the binary search tree. The section preceding the if statement containing
blank B carries out new node key value configuration and pointer initialization. Blank B performs
pointer reconfiguration in order to insert the new node at the appropriate position found by the
while statement. The if statement immediately before blank B compares the new node key
(k->key) and the insertion point node key (w->key), deciding into which subtree, the right or the
left, to insert the node. Caution must be taken here to the fact that x, the pointer used to traverse
the binary search tree, has advanced by one while the while statement searched for the insertion
position. In other words, at this point x = nil, therefore key value comparison must be performed
using the node pointed at by w, which stores the x value from one iteration back. If k->key is
smaller, the new node pointed to by k is connected to the left of the node pointed at by w. If larger,
it is connected to the right. Specifically, in the former case, the value of k is set to w->left, and in
the latter case, the value of k is set to w->right. Thus, the answer for blank B is “k.” Fig. D
shows an example of adding a node with key value 47 to the binary search tree shown in Fig. 1.
Here, 46 < 47, therefore k’s value is set to w->right.

   Before insertion (x has advanced to nil):      After w->right ← k:

         37                                             37
        /  \                                           /  \
      15    46  <- w                                 15    46  <- w
           /  \                                           /  \
         nil  nil  <- x                                 nil   47  <- k
                                                             /  \
                                                           nil  nil

Fig. D Example of adding a node with key value 47 to a binary search tree (Fig. 1)

Moreover, the insert program in Fig. 3 uses an iterative structure, but given that tree structures
are recursive data structures, the insertion program can also be expressed with recursive calls.
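As a runnable counterpart of the insert operation, the following Python sketch mirrors the search-then-attach logic of the pseudocode; the Node class and function signature are our own choices, not the question's notation:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(t, n):
    """Insert key n into the tree rooted at t; do nothing if n already
    exists. Returns the root (needed when the tree was empty)."""
    if t is None:
        return Node(n)
    x, w = t, None
    while x is not None and n != x.key:     # the condition of blank A
        w = x
        x = x.left if n < x.key else x.right
    if x is None:                           # insertion point found
        k = Node(n)
        if k.key < w.key:
            w.left = k                      # blank B: attach k on the left
        else:
            w.right = k                     # blank B: attach k on the right
    return t

root = None
for key in (50, 37, 75, 15, 46):
    root = insert(root, key)
insert(root, 47)                            # attaches 47 to the right of 46
print(root.left.right.right.key)            # -> 47, as in Fig. D
```

Note how w always points at the last node visited before x ran off the tree, which is why the attachment is done through w rather than x.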

[Subquestion 3]
Consider the program which deletes a node containing a specified key value from a binary
search tree. It should be kept in mind that a node containing the specified key value always exists.
Details of the data structure and operations on it are the same as those for the algorithm in


Subquestion 2. What is important in Subquestion 3 is that, when deleting a node, where that node is
located in the binary search tree makes a difference. For example, if the node to be deleted is a leaf
node, that is, it has no child nodes, it can be simply deleted without affecting other parts of the tree.
However, when the node has one or more child nodes, the child nodes must be placed in
appropriate positions so that the binary search tree conditions are satisfied even after deletion is
completed.
Taking into account the above observation, and referring to the comments in the program, the
overall program structure is depicted in Fig. E. This program, like the insertion program, can be
broadly divided into the while statement section and the if statement section. The comments
indicate that the while statement section searches for the node to be deleted, and the if statement
section performs node deletion.
The roles of variable x (working pointer for performing node searches), variable w (pointer to
parent of node to be deleted), variable d (pointer to the node to be deleted), and variable e (pointer
to the substitute node) are important for understanding the details of this program.


/* n: key of node to be deleted, t: pointer to the root node */

function delete(n, t)
  x ← t                          {Initial configuration: set x to point at the root node}
  while (n ≠ x->key)             /* (1) Search for the node to be deleted */
    :
  endwhile
  d ← x                          /* Start node deletion */ {Save position of the found node in d}
  /* (2) Node deletion */
  if (x->left = nil)             {Does the node to be deleted have a left subtree?}
    if (n < w->key)              {Compare key values of node to be deleted and parent node}
      w->left ← [C]
    else                         (processing when there is no left subtree)
      w->right ← x->right
    endif
  elseif (x->right = nil)        {Does the node to be deleted have a right subtree?}
    if (n < w->key)              {Compare key values of node to be deleted and parent node}
      w->left ← x->left
    else                         (processing when there is no right subtree)
      w->right ← [D]
    endif
  else
    e ← x->right                 {e now points at the right subtree of the node to be deleted}
    while ( [E] )                /* Search for the substitute node */
      x ← e
      e ← e->left
    endwhile
    d->key ← e->key              /* Set substitute node key as key of node at the deletion point */
    if (d = x)                   {When the substitute node is a direct child of the node to be deleted}
      x->right ← e->right
    else                         {When the substitute node is a grandchild of the node to be deleted or beyond}
      x->left ← e->right
    endif
  endif
end

Fig. E delete operation overall structure


(1) The root node, which has a key value of 50, has both left and right subtrees, therefore in order
to perform deletion, it must be replaced with a substitute node. According to the question, “when a
substitute node is necessary, it is found in a certain location in the right subtree of the node being
deleted,” therefore the program searches for a substitute node within the right subtree that satisfies
“key value of the node in the left subtree < key value of the substitute node < key value of the node
in the right subtree.” In other words, the key value of the substitute node will be smaller than the
key value of the node in the right subtree. The substitute node will be removed from the right
subtree, and what remains of the right subtree will be the substitute node’s right subtree. In
conclusion, the substitute node is the node with the smallest key value within the right subtree. As
for the binary search tree in Fig. 1, it will be the node with the key value “60.”

(2) In the program, the search for the node to be deleted, indicated by (1) in Fig. E, terminates
when n = x->key, therefore at this point, x will be the pointer to the node to be deleted, and w the
pointer to the parent of that node. Prior to the node deletion, indicated by (2) in the figure, the value
of x is saved in d. This is because x is a working variable, and is also used during the substitute
node search, which may change its value.

• Blanks C, D: These blanks are a part of the processing performed when the node to be deleted has no
left subtree (x->left = nil is true) and when the node to be deleted has no right subtree
(x->right = nil is true), respectively.

If the node to be deleted has no left subtree, it either has a right subtree or has no subtrees.
Therefore, by setting the pointer in the parent node pointing at the node to be deleted to point at the
child node of the node to be deleted, the right subtree of the node to be deleted can be connected to
the parent node. As explained above, since w is the pointer to the parent node of the node to be
deleted, the pointer from the parent is w->left or w->right. In either case, what needs to be
connected is the right subtree of the node to be deleted, therefore the right subtree of the node to be
deleted can be connected to the parent node by setting either w->left or w->right to x->right.
Therefore, the answer for blank C is “x->right.” At this point, one might think that while the right
subtree of the node to be deleted is connected to either the right or left of the parent node, a child
node may already exist in that position. However, since that child node is the very node to be
deleted, there is no problem. If the node to be deleted has no right subtree, x->right is nil, and
therefore w->left or w->right will be set to nil, just as they should be. Fig. F illustrates the
deletion of a node with key value 15.


[Figure: w points at node 37, whose left child is the node to be deleted, 15 (pointed at by x).
Node 15 has no left subtree (its left pointer is nil) and has right child 20. Setting w->left to
x->right (w->left ← x->right) makes node 37's left pointer point at node 20, detaching node 15.]

Fig. F Deletion of node with key value 15

As with the case of the blank C, if the node to be deleted has no right subtree, the left subtree
of the node to be deleted should be connected to the parent node. Specifically, either w->left or
w->right should be set to x->left. Therefore, the answer for the blank D is “x->left.”

• Blank E: This contains the condition for repeating (iterating) the search process for the substitute
node. A substitute node is necessary when the node to be deleted possesses both left and right
subtrees. In this question, the substitute node is chosen from the right subtree of the node to be
deleted, and it is specifically the node with the smallest key value in the right subtree. Two
examples involving a substitute node are shown below.

(Ex. 1) The child node (key value 82) of the node to be deleted (key value 75) is used as the
substitute. This child node has the smallest key value in the right subtree.

[Figure: before deletion, the root 50 has right child 75 (the node to be deleted), whose children
are 60 and 82; node 82 has right child 90 (left pointer nil), and e points at node 82. After
deletion, 82 takes the place of 75, with left child 60 and right child 90.]


(Ex. 2) The node with key value 77, the smallest key value in the right subtree of the node to be
deleted (key value 75), is used as the substitute.

[Figure: before deletion, the root 50 has right child 75 (the node to be deleted), whose children
are 60 and 82; node 82 has left child 77 and right child 90. After deletion, 77 (the smallest key
in the right subtree) takes the place of 75, with children 60 and 82; node 82 keeps right child 90
and its left pointer becomes nil.]

With these examples in mind, consider blank E.

e ← x->right {Set e to the pointer to the right subtree of the node to be deleted}
while ( E ) /* Search for the substitute node */
x ← e Search for the substitute node
e ← e->left
endwhile

The program sets variable e to the pointer to the right subtree of the node to be deleted, and uses a
while statement to search for the substitute node. Since the substitute node is the node with the
smallest key value in the right subtree, that node can be found by following the pointer to the left
side of each node to the point where there are no further left subtrees (left pointer is nil).
Therefore, the search for the substitute node repeats itself while the value of the pointer to the left
subtree is not nil. Thus, the blank E is “e->left ≠ nil.”
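As a minimal sketch (illustrative names, assuming nodes carry key/left/right fields), the descent to the substitute node looks like this in Python:

```python
class Node:
    """Binary search tree node (None stands in for nil)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def leftmost(e):
    """Follow left pointers while e->left != nil (the blank E condition);
    the node reached holds the smallest key in the subtree rooted at e."""
    while e.left is not None:
        e = e.left
    return e
```

For a right subtree rooted at 75 with children 60 and 82 (and 90 below 82), the descent stops at 60, the smallest key.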

There are no blanks past this point, but below is a brief overview of the processing which is
performed. First, after the search for the substitute node is completed, the key value of the
substitute node pointed at by variable e is set as the key value of the node to be deleted pointed at
by variable d. In other words, the overall process is not to actually delete the node to be deleted,
but to replace it with the substitute node. However, processing does not end merely with the
replacement of the key value. If the substitute node is a leaf, there is no problem, but if the
substitute node has one or more child nodes, these child nodes must also be moved to appropriate
positions. Note here that a substitute node is the node with the smallest key value in the relevant
subtree, and does not have a left child. In other words, if a substitute node has a child node, it will
always be a right child (right subtree).

In the next if statement, depending on the relationship between the values of variable d and
variable x (whether they are identical or not), different pointers are set. This is done in order to
move the substitute node’s right child to the correct position. Variable d points at the node to be


deleted, and variable x points at the parent node of the substitute node. Therefore, if the values of
these two variables are identical, the node to be deleted is the substitute node’s parent. This
corresponds to Ex. 1 above. If these two values are different, the node to be deleted is not the
parent of the substitute node. In other words, the substitute node is the grandchild or lower of the
node to be deleted, which corresponds to Ex. 2.

When the node to be deleted is the parent of the substitute node, the right child (right subtree) of
the substitute node can be kept as-is as the substitute node’s right child. However, since the
substitute node is moved to the position of the node to be deleted, in order for the right child of
the substitute node to be the right child of the node to be deleted, the right pointer (e->right) of
the substitute node should be set to the right pointer of the node to be deleted (note that since d =
x, x also points at the node to be deleted). On the other hand, if the substitute node is the
grandchild or lower of the node to be deleted, the right pointer of the substitute node should be set
as the substitute node’s parent’s left pointer (which used to point at the substitute node). A left
pointer and a right pointer are intertwined, making it somewhat confusing, but the key value of the
substitute node was smaller than the key value of the parent. Therefore it is connected as the left
child. The key value of the right child (subtree) of the substitute node is larger than the key value
of the substitute node, but smaller than the key value of the parent of the substitute node.
Therefore it is not in the parent’s right subtree, but in the left subtree rooted at the substitute node.
Thus, from the perspective of the substitute node, it is the right child, but from the perspective of
the parent, it is the left child.

In this way, deleting a node with a specified key value is not performed by actually deleting that
node; instead, the substitute node's key value is moved so that the substitute node's contents are
positioned correctly, and the substitute node's child is reconnected, thereby deleting the
substitute node from the binary search tree.
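Putting the Subquestion 3 pieces together, the deletion can be sketched as runnable Python with blanks C, D, and E filled in as derived above (None stands in for nil). Note that the exam's program leaves w undefined when the root itself lacks a subtree, so the sketch adds a guard for that case; all names are illustrative.

```python
class Node:
    """Binary search tree node (None stands in for nil)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def delete(n, t):
    """Delete the node with key n from the tree rooted at t (the key is
    assumed to exist, as in the question). Returns the root."""
    if t.key == n and (t.left is None or t.right is None):
        # Guard added for this sketch: root with a missing subtree has no
        # parent w, so just promote the remaining subtree.
        return t.right if t.left is None else t.left
    x, w = t, None
    while n != x.key:                 # (1) search for the node to be deleted
        w = x
        x = x.left if n < x.key else x.right
    d = x                             # save the node to be deleted
    if x.left is None:                # no left subtree
        if n < w.key:
            w.left = x.right          # blank C: x->right
        else:
            w.right = x.right
    elif x.right is None:             # no right subtree
        if n < w.key:
            w.left = x.left
        else:
            w.right = x.left          # blank D: x->left
    else:                             # both subtrees: find the substitute node
        e = x.right
        while e.left is not None:     # blank E: e->left != nil
            x = e
            e = e.left
        d.key = e.key                 # copy substitute key to the deletion point
        if d is x:                    # substitute is a direct child
            x.right = e.right
        else:                         # substitute is a grandchild or beyond
            x.left = e.right
    return t
```

Deleting the root 50 of a tree whose right subtree is 75 with children 60 and 82 replaces the root's key with 60, the smallest key in the right subtree.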

[Subquestion 4]
• Blank F: When searching a complete binary tree, the number of nodes with which a key is compared
is maximal when the searched data is a leaf node, or when the searched data cannot be found --
that is, when comparison is performed until the lowest level (level d) is reached. Therefore, the
answer for the blank F is choice f), “d.”

• Blank G: In a complete binary tree, any non-leaf node has two child nodes, and the number of levels
from root to any leaf is always the same. When the number of nodes is n, and the number of levels
is d, as Fig. G shows, n = 2^d – 1. Therefore, the answer for the blank G is choice b), “2^d – 1.” Given
limited test time, one can determine the relationship by drawing a relatively small complete binary
tree, for example, with 3 levels, and noting that the number of nodes is 7.
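The relationship can also be verified numerically with a throwaway check (the function name is an illustrative assumption):

```python
def complete_tree_nodes(d):
    """Total nodes in a complete binary tree with d levels:
    sum of 2**(k-1) for k = 1..d, which telescopes to 2**d - 1."""
    return sum(2 ** (k - 1) for k in range(1, d + 1))

# The closed form matches the level-by-level sum for every small d.
for d in range(1, 11):
    assert complete_tree_nodes(d) == 2 ** d - 1
```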


      Level    Number of nodes in each level         Total number of nodes, n
        1                2^0
        2                2^1                               d
        3                2^2                          n =  Σ  2^(k-1)  =  (2^d – 1)/(2 – 1)  =  2^d – 1
        :                 :                               k=1
        d                2^(d-1)

Fig. G Relationship between the number of levels and nodes in a complete binary tree

• Blank H: Consider what happens when the root node of a complete binary tree is deleted when n=15
and d=4. When evaluating the condition in the while statement, the pointers are set as shown in
Fig. H. Looking at e, the pointer to the substitute node, one sees that every time e ← e->left in the
while statement is performed, e moves down one level. In this example, after three iterations,
e->left = nil becomes true, terminating the search for the substitute node. Analogously, if the
number of levels is d, the search terminates after the condition in the while statement is evaluated
d – 1 times. Thus, the answer for the blank H is choice e), “d – 1.”
[Figure: x points at the root; e points first at the root's right child (1st evaluation), then
moves down one level on each iteration of e ← e->left (2nd evaluation, 3rd evaluation, ...),
reaching the substitute node at the lowest level.]

Fig. H Pointer settings during condition evaluation
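The d – 1 count can be confirmed by simulation. The sketch below builds a complete binary search tree of d levels from sorted keys (an assumed construction, not part of the question) and counts the evaluations of e->left ≠ nil while locating the substitute node for the root.

```python
class Node:
    """Binary search tree node (None stands in for nil)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def build_complete(keys):
    """Build a complete BST from a sorted list of length 2**d - 1 by
    recursively taking the middle element as the subtree root."""
    if not keys:
        return None
    mid = len(keys) // 2
    return Node(keys[mid], build_complete(keys[:mid]), build_complete(keys[mid + 1:]))

def substitute_search_evaluations(root):
    """Count evaluations of the condition e->left != nil while finding the
    substitute node for deleting the root (e starts at root's right child)."""
    e, count = root.right, 0
    while True:
        count += 1                  # one evaluation of e->left != nil
        if e.left is None:
            break
        e = e.left
    return count
```

For d = 4 (n = 15) the condition is evaluated 3 times, and in general d – 1 times.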


Q11-2 Character string search

[Answer]
[Subquestion 1] (1) (A) n – m + 1 (B) T[i + j – 1] (C) i ← i + 1
(2) m × (n – m + 1) times
(3) 1st character: a) 2nd character: a) 3rd character: b)
(Complete answer with all three (3) characters correct)
[Subquestion 2] (1) (D) m (E) T[i] (F) m – j
(2) d)

[Explanation]
This question is about an algorithm for searching for a specific character string within another
character string (text). The question deals with two approaches: one is to perform a search starting
from one end of the text, and moving toward the other end one character at a time, and the other is its
improved version. The question’s description refers to the specific character string being searched for
as the “pattern,” and the substring of the text compared against the pattern as the “candidate string.”

[Subquestion 1]
This Subquestion is about the string search program (simple approach) of Fig. 2. It uses the first
m characters of the text as the candidate string and compares it with the pattern consisting of m
characters. If they do not match, the program shifts the candidate string by one character at a time
to the right until a match is found. In the example of Fig. 1, the string “DEF” is initially selected as
the candidate string, and is compared against the pattern, KLM, to see whether they match. Since
they do not match, a new candidate string “EFG” that starts at one character to the right of the
original candidate string is selected and compared against the pattern. The search continues in this
manner until a match is found, or until the search reaches the end of the text. Fig. A shows an
overview of the program.


i ← 1
last ← [A]                          (Initialization)
found ← 0
while (found = 0 and i <= last)     /* Loop for shifting the starting position */
  j ← 1
  while (j <= m and [B] = P[j])     /* Loop for string comparison */
    j ← j + 1                       (Comparison of candidate string and pattern)
  endwhile
  if (j > m)
    found ← 1
  else
    [C]                             (Shift candidate string one character at a time
                                     to the right until a match is found)
  endif
endwhile
if (found = 1)                      /* Result display */
  print "Found" and value of i
else                                (Display search result)
  print "Not found"
endif

Fig. A String search program (simple approach)

(1) Fill in blanks A through C.


Blank A: This blank contains the initial value for variable last. Its name, “last”, suggests that it is
related to the termination of something. Furthermore, it is used in the iteration condition of “Loop
for shifting the starting position”; and this loop corresponds to that part of the question which
states “the string comparison is repeated until a match is found, with the candidate string shifted to
the right on each repetition.” From this, it is clear that the condition for loop iteration is “until a
match is found” -- that is, “while there is no match.” Note here that, as is clear from the program’s
requirements, this is not the only iteration condition. Though it is too obvious to be explicitly
stated in the question, the search terminates when the end of the object being searched, that is, the
text, is reached. Thus, the iteration condition, “found = 0 and i <= last”, of “Loop for shifting the
starting position” means that the loop will be repeated while a match with the pattern has not been
found, and the program has not reached the end of the text. The first condition is for judging
whether or not a match has been found, and the second condition is for deciding whether or not
the end of the text has been reached. The second condition, which uses the value of variable last,
is used to repeat the search while i is equal to or less than last, and to terminate when it exceeds
last. This condition will no longer be met when the last candidate string is examined and no match
is found.


[Figure: the text T[1..n] with the m-character pattern aligned beneath it. Variable i, the
starting position of the candidate string, ranges from 1 (first candidate string) to n – m + 1
(last candidate string).]

Fig. B Range of variable i

Since the initial value of variable i is 1, the search starts with the candidate string at the start of
the text, and continues by shifting the candidate each time until the final candidate string indicated
by shading in Fig. B is examined. When a match is found, the result display section of the
program displays a message along with the value of variable i to indicate the start of the matching
substring. Therefore the starting position of the candidate string is indicated by variable i. Variable
last, then, should be set to the starting position of the last candidate string. Since the number of
characters in the text is n, and the number of characters in the pattern is m, there are n – m
characters leading up to the final candidate string. Therefore, the starting position of the last
candidate string is n – m + 1, which is the value to be set to variable last. Thus, the appropriate
answer for blank A is “n – m + 1”. If unsure whether or not to add 1 to n – m, consider the example
in Fig. 1, where n=14 and m=3, and think of the case where non-match continues until the final
candidate string is examined. The final candidate string is “LMN”, and its starting position is the
12th character. Thus, we have n – m + 1 = 14 – 3 + 1 = 12, and adding 1 is correct.

Blank B: This is a part of the condition for the loop for string comparison. This loop compares the
candidate string against the pattern. Character comparison from the character 1 to m of the pattern
is performed using, in addition to variable i storing the starting position of the candidate string,
variable j for indicating the characters to be compared. The repetition condition of the loop for
string comparison is “j <= m and B = P[j].” It appears that this condition checks that
the number of characters is within the valid range for comparing against the pattern, and that the
characters match. Blank B must be set to a character in the candidate string being compared in the
second condition against character P[j] in the pattern. When considering the index for array T, the
fact that the starting position of the candidate string is controlled by the outer loop must be taken
into account. Since variable i represents the starting position of the candidate string, the index for


array T should start from i, and, together with the index for the pattern, be incremented one at a
time (see Fig. C). The pattern is scanned from 1 to m using variable j. P[1] is compared against
the first character T[i] of the candidate string, P[2] against T[i+1], and so on; therefore if the
index for the pattern is j, then the index for the candidate string should be represented as i + j – 1,
and therefore P[j] should be compared against T[i + j – 1]. Thus, blank B should be “T[i + j – 1].”

[Figure: the candidate string occupies text positions i, i+1, i+2, aligned against pattern
positions 1, 2, 3.]

Fig. C Range of the index for the candidate string (when m=3)

Blank C: This blank specifies what is to be done when the if-condition evaluates to “false” after the
loop for string comparison has terminated. The condition compares variable j and the number of
characters in the pattern m, and the “true” part of the if-statement is indicated and can be used as
a hint. When the condition evaluates to “true”, that is, if variable j is larger than m, the assignment
found ← 1 is performed, indicating that this part corresponds to the case where a substring
matching the pattern has been found in the text. Conversely, when the condition evaluates to
“false”, that is, if variable j is less than or equal to m, a character non-match has occurred during
the comparison between the pattern and the candidate string. When a substring that matches the
pattern is found in the text during the comparison loop, variable j is incremented after completing
the comparison up to the end of the string, resulting in j > m; and if a non-match occurs during
comparison, j ≤ m. When a non-match of the pattern and the candidate string occurs, the candidate
string must be shifted one character to the right for a next round of comparison against the pattern.
Therefore, blank C should contain the process for shifting the candidate string one character to the
right, that is, adding 1 to variable i that stores the starting position of the candidate string. Thus,
blank C should be “ i ← i+1”.
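With blanks A through C filled in as derived above, the simple approach can be sketched as runnable Python (the pseudocode's 1-based indices are translated to Python's 0-based strings; the function name is an assumption):

```python
def simple_search(text, pattern):
    """Naive string search: slide the candidate string one character at a
    time; return the 1-based start position of the first match, or None."""
    n, m = len(text), len(pattern)
    i = 1
    last = n - m + 1                   # blank A: start of the last candidate string
    found = False
    while not found and i <= last:     # loop for shifting the starting position
        j = 1
        # blank B: compare T[i + j - 1] with P[j] (0-based offsets below)
        while j <= m and text[i + j - 2] == pattern[j - 1]:
            j += 1
        if j > m:
            found = True
        else:
            i += 1                     # blank C: shift one character to the right
    return i if found else None
```

For a text starting at "DEF..." the pattern "KLM" is reported at its 1-based starting position.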

(2) The worst case is when character comparison succeeds except for the very last character,
thereby nullifying the comparison effort up to that point. For a comparison between a candidate
string and a pattern, when a non-match is found only at the last character, that would be the worst
case. If this occurs for each candidate string, it would be the worst case for string search.
First, if a non-match occurs at the last character when comparing a candidate string and a
pattern with m characters, the number of comparisons is m. Since there are n–m+1 candidate strings
in total within a text with n characters, if m comparisons are performed for each candidate string,
the worst case of performing “m × (n – m + 1)” comparisons would occur (Fig. D).


[Figure: with last = n – m + 1 candidate strings from the first to the last, and m character
comparisons performed against each (1st time, 2nd time, ..., (n–m+1)-th time), the total is
m × (n – m + 1) comparisons.]

Fig. D Number of comparisons in the worst case

(3) When a text is a sequence of twelve A’s, and the three-character-long pattern is, for example,
AAB, the worst case scenario will occur. With this type of pattern, the first and the second
characters will match, and a non-match will be found when the last character is compared. In this
example, the last character is B, but any character other than A would result in a non-match. Since
the text is a sequence of twelve A’s, and the same situation applies to each candidate string, this
pattern will lead to the worst case scenario. Therefore, the characters in the pattern for the worst
case would be: the first character is A, the second character is A, and the third character is a
character other than A. Chosen from the answer group, the answer would be a), a), and b) for the
first, the second, and the third characters, respectively.
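The worst-case count can be confirmed by instrumenting the comparison loop with a counter, an addition made here purely for illustration:

```python
def count_comparisons(text, pattern):
    """Simple-approach search that returns the number of character
    comparisons performed (stopping early only on a full match)."""
    n, m = len(text), len(pattern)
    comparisons = 0
    i = 1
    while i <= n - m + 1:              # loop over candidate strings
        j = 1
        while j <= m:
            comparisons += 1           # one character comparison
            if text[i + j - 2] != pattern[j - 1]:
                break
            j += 1
        if j > m:                      # full match found: stop searching
            break
        i += 1
    return comparisons

# Worst case from Subquestion 1 (3): a text of twelve A's and pattern "AAB".
# Each of the n - m + 1 = 10 candidate strings costs m = 3 comparisons.
```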

[Subquestion 2]
This Subquestion is about the string search program (improved approach) of Fig. 5. The
program in Fig. 2 shifts the candidate string one character at a time from the first character of the
text, but, as shown in Figs. 3 and 4, depending on the last character of the candidate string that has
resulted in a non-match, the starting position of the candidate string can be advanced more
efficiently. The program in Fig. 5 is based on this observation. Tables 1 and 2 show examples of the
shifting width for each character. These are calculated in advance, and there is no need to
understand how the values for the shift width d(x) are determined in order to answer the question
(On the contrary, trying to understand this takes time and reduces the time available for
answering the actual question. Some questions contain traps like this, so beware.).
Once getting the general idea of the process from Figs. 3 and 4 and Tables 1 and 2, it is
important to understand the key points of the improved string search approach shown in (1)
through (3) below. The two primary differences from the program shown in Fig. 2 are: “(1) The
candidate string and pattern are compared one character at a time, starting at the end and moving
towards the beginning.” and “(3) If a non-match is found during the comparison, the candidate
string is shifted by the width d(x) where x is the last character of the candidate string, using the
pre-determined function d.”
The program in Fig. 5, as with the program in Fig. 2, consists of three parts: the initialization


section, the core section of the program which is the loop for shifting the starting position with an
internal loop for string comparison, and the result display section.

(1) Fill in blanks D through F.


Blank D: This blank concerns the initialization for variable i. In the program of Fig. 2, the variable is
used as the index indicating the starting position of the candidate string, where comparison starts.
In the program of Fig. 5, variable i is again used as the index for array T, and indicates the
position of the character within the candidate string. But, since the comparison method is
different, the initial value may not be the same. Within the loop for string comparison, the position
of the first character to compare must be set as the initial value, but note that the value of variable
i is decremented by one each time. That is, as indicated in (1), comparison starts from the end of
the candidate string. Since the first candidate string, as indicated in Figs. 3 and 4, is the first m
characters of the text, the position of the last character of the candidate string, where comparison
will start, is m. Therefore, the appropriate answer for blank D is “m”.

Blank E: This is part of the repetition condition for the loop for string comparison. As with the
comparison loop in the program of Fig. 2, the character in the candidate string that will be
compared against P[j] in the pattern should be set here. Using the first candidate string as an
example, we can observe the index behavior. As was considered with blank D, variable i is
initialized to m. Variable j is set to m before entering the comparison loop. Therefore when
comparison starts, we have a situation as shown in Fig. E.
[Figure: the first candidate string (text positions 1 to m) aligned against the pattern
(positions 1 to m); i points at position m, where the comparison starts.]

Fig. E Comparison of the first candidate string and the pattern


The comparison starts at the m-th character for both the candidate string and the pattern,
decrementing both variable i and variable j by one after each repetition. Therefore character P[j] of
the pattern is compared with character T[i] of the candidate string. Thus, the appropriate answer
for blank E is “T[i]”.

Blank F: This blank contains an operation performed when a non-match has occurred between the
candidate string and the pattern in the loop for string comparison. Variable i is decremented by
one on each iteration of the comparison loop. Since it has moved toward the head of the candidate
string, it must be reset in preparation for the comparison with the next candidate string. The line
following blank F adds the shift width d(x) to i, therefore before this occurs, variable i must be
reverted to the point where comparison had started, that is, the last character of the candidate
string. Since variable i is decremented by one on each iteration of the comparison loop, it can be
restored by adding the number of iterations performed to the current value of variable i. However,
because the initial value of variable i will differ each time, it is not possible to determine the
number of loop iterations from the value of variable i. Since variable j, likewise, is decremented
by one on each loop iteration, and its initial value before starting the comparison loop is m, the
number of loop iterations can be determined by m - j. Therefore, m - j should be added to the
value of variable i, and thus the appropriate answer for blank F is “m – j” (see Fig. F).

[Figure: the candidate string within the text ends at the character x; a non-match occurred at
pattern position j. The number of characters already compared is the same in the text and in the
pattern, namely m – j, so adding m – j to i restores i to the position of the last character of the
candidate string.]

Fig. F Approach to blank F
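A runnable sketch of the improved approach is shown below. Since Tables 1 and 2 with the precomputed shift widths d(x) are not reproduced here, the sketch assumes a Horspool-style table: d(x) = m for characters absent from the pattern, otherwise the distance from the last occurrence of x among the first m – 1 pattern characters to the end of the pattern. All names are illustrative.

```python
def build_shift(pattern):
    """Assumed shift table d(x): characters not stored here shift by m."""
    m = len(pattern)
    d = {}
    for idx in range(m - 1):           # later (rightmost) occurrence wins
        d[pattern[idx]] = m - 1 - idx
    return d

def improved_search(text, pattern):
    """Right-to-left comparison with shift d(x) applied to the last character
    x of the candidate string; returns the 1-based match position, or None."""
    n, m = len(text), len(pattern)
    d = build_shift(pattern)
    i = m                              # blank D: start at the m-th character
    while i <= n:
        j = m
        while j >= 1 and text[i - 1] == pattern[j - 1]:   # blank E: T[i]
            i -= 1                     # move toward the head of the candidate
            j -= 1
        if j < 1:
            return i + 1               # i moved one past the match start
        i += m - j                     # blank F: restore i to the last character
        i += d.get(text[i - 1], m)     # shift by d(x), x = last char of candidate
    return None
```

Since every shift width is at least 1, the loop always advances and terminates.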

(2) In the string search approach in Fig. 5, if none of the characters in the pattern appears in the text,
for any string comparison (between the candidate string and the pattern), a non-match results when
the last characters of the candidate string and the pattern are compared. When this occurs, the
candidate string is moved to the right by the shift width d(x) where x is the last character of the
candidate string. It is clear, from Tables 1 and 2 that if none of the characters in the pattern appears
in the text, then the shift width d(x) is equal to the length of the pattern, m.


[Figure: when none of the characters in the pattern appears in the text, every comparison
(1st time, 2nd time, ...) fails at the last character of the candidate string, and the candidate
string is shifted m characters to the right each time, reaching the end of the text after about
n/m comparisons.]

Fig. G Shift width when none of the characters in the pattern appears in the text

The number of characters in the text is n, and the number of characters in the pattern is m.
Therefore when shifting by m characters each time, the end of the text will be reached in n/m times.
However, n may not be a multiple of m, so n/m may not always be an integer. For example, if n=14,
and m=3, then comparison will be carried out as shown in Fig. H, and there will be two characters
at the end which are not compared. The specific calculation is 14/3 = 4.666..., but the actual number
of comparisons is four, which means that the calculation result truncated at the decimal point is the
number of comparisons made. Thus, the number of comparisons will be d) “Value of n/m, truncated
at the decimal point.”
[Figure: a text of n = 14 characters consumed in shifts of 3, 3, 3, 3, with 2 characters left over
at the end.]

Fig. H Comparison when n=14 and m=3
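The truncation can be checked by simulating the position of the candidate string's last character as it advances m characters at a time (a small illustrative helper, not part of the exam program):

```python
def full_shift_count(n, m):
    """Number of candidate strings examined when every comparison fails at
    the last character and the shift width is always m."""
    i, count = m, 0
    while i <= n:                  # i is the last character of the candidate
        count += 1
        i += m
    return count

# This always equals n // m, i.e., n/m truncated at the decimal point.
```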


Q11-3 Merge Sort

[Answer]
[Subquestion 1] d)
[Subquestion 2] a) second.length
b) first[i] < second[j]
c) first[i]
d) second[j]
[Subquestion 3] e) 1
f) A.length – m
g) A[m + i]
h) merge_sort(first)
i) merge_sort(second)
[Subquestion 4] a)

[Explanation]
The merge sort is achieved by repeatedly splitting an array to be sorted into single elements and
then merging them into a sorted array. The key point in the merge sort algorithm in this question is that
the merge sort recursively calls itself until the number of elements in the array is one.
Keeping in mind that the procedure is recursively called, the merge sort algorithm described in the
question can be organized as shown in the flowchart of Fig. A. Processes (1) through (4) in Fig. A
correspond to procedures (1) to (4) of the merge sort algorithm. Note that the question assumes that
there are no duplicate elements in the array to be sorted.


Start merge sort
   |
(1) Is the number of elements > 1?
   |-- No:  end and return to the caller (when the number of elements = 1)
   |-- Yes: continue splitting while the number of elements > 1:
(2) The array is split into two halves (first half and second half)
(3) Merge sort(first half)   ...... merge sort applied recursively on the first half
(3) Merge sort(second half)  ...... merge sort applied recursively on the second half
(4) Merge                    ...... (stop splitting, and) move on to the merge process
   |
End merge sort

Fig. A Merge sort algorithm
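Procedures (1) through (4) translate directly into runnable code; below is a minimal Python sketch under the assumption that the array is split at the midpoint (function names are illustrative):

```python
def merge(first, second):
    """(4) Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(first) and j < len(second):
        if first[i] < second[j]:       # corresponds to first[i] < second[j]
            result.append(first[i])
            i += 1
        else:
            result.append(second[j])
            j += 1
    return result + first[i:] + second[j:]   # append the leftover elements

def merge_sort(a):
    """(1) Return immediately when the number of elements is 1 (or 0)."""
    if len(a) <= 1:
        return a
    m = len(a) // 2
    first, second = a[:m], a[m:]       # (2) split into two halves
    return merge(merge_sort(first),    # (3) recurse on the first half
                 merge_sort(second))   # (3) recurse on the second half
```

Applied to the array of Fig. 1, (4, 7, 2, 5, 8, 3, 6, 1), the sketch produces the sorted sequence 1 through 8.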

[Subquestion 1]
Let us consider the order of split and merge processes when the steps (1) through (4) of the
algorithm are applied to the example shown in Fig. 1. Here, take note that the merge sort algorithm
is recursively called, and understand when calling and returning occur. Using the labels [1] through
[14] attached to each of the split and merge processes in Fig. 2, the call sequence can be described
as shown in Fig. B.

Split [1]
├─ First half  → Split [2]
│   ├─ First half  → Split [4]
│   │   ├─ First half:  number of elements = 1, so return
│   │   ├─ Second half: number of elements = 1, so return
│   │   └─ Merge [8]
│   ├─ Second half → Split [5]
│   │   ├─ First half:  number of elements = 1, so return
│   │   ├─ Second half: number of elements = 1, so return
│   │   └─ Merge [9]
│   └─ Merge [12]
├─ Second half → Split [3]
│   ├─ First half  → Split [6]
│   │   ├─ First half:  number of elements = 1, so return
│   │   ├─ Second half: number of elements = 1, so return
│   │   └─ Merge [10]
│   ├─ Second half → Split [7]
│   │   ├─ First half:  number of elements = 1, so return
│   │   ├─ Second half: number of elements = 1, so return
│   │   └─ Merge [11]
│   └─ Merge [13]
└─ Merge [14]

Each "Split [n] … Merge [n]" group represents one unit of the merge sort algorithm,
and each first half / second half branch represents a recursive call to merge sort.

Fig. B Call sequence in merge sort

Let us trace the split and merge processes in accordance with the call sequence. The numbers
within parentheses "( )" below indicate the array elements shown in Figs. 1 and 2.

• When merge sort starts, split [1] is performed first on the array to be sorted (4, 7, 2, 5, 8, 3, 6,
1), dividing it into the first half (4, 7, 2, 5) and the second half (8, 3, 6, 1).

• The merge sort procedure is called on the first half (4, 7, 2, 5), and split [2] is performed on it,
dividing it into the first half (4, 7) and the second half (2, 5).

• split [4] is performed on the first half (4, 7), dividing it into the first half (4) and the second
half (7). The merge sort procedure is again called on each of the first half and the second half.
Now the argument provided is already split completely (the number of elements is 1), and the
procedure will return immediately.

• merge [8] is performed by comparing 4 and 7, resulting in array (4, 7), and then the procedure
returns to the caller.

• Split [5] is performed on the second half (2, 5), dividing it into the first half (2) and the second
half (5). The merge sort procedure is again called on each of the first half and the second half.
Now the argument provided is already split completely (the number of elements is 1), and the
procedure will return immediately.

• merge [9] is performed by comparing 2 and 5, resulting in array (2, 5), and then the procedure
returns to the caller.

• The elements in array (4, 7) and array (2, 5) are compared, and merge [12] is performed,

resulting in array (2, 4, 5, 7), and then the procedure returns to the caller.

• The same merge sort procedure is applied to the second half (8, 3, 6, 1), and split [3] → split
[6] → merge [10] → split [7] → merge [11] → merge [13] are performed. This results in
array (1, 3, 6, 8), and then the procedure returns to the caller.

• Finally, merge [14] is performed on array (2, 4, 5, 7) and array (1, 3, 6, 8), resulting in array
(1, 2, 3, 4, 5, 6, 7, 8).

The result of this trace indicates that the order of the split and merge processes is d) [1] → [2]
→ [4] → [8] → [5] → [9] → [12] → [3] → [6] → [10] → [7] → [11] → [13] → [14]. The correct
answer can be derived by correctly tracing up to [8], where elements 4 and 7 are merged. Looking
at Fig. 2, it would appear that [3] follows [2], or that [5] follows [4], but it is important to correctly
understand the call timing, as shown in Fig. B.
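The order of merges in this trace can be confirmed with a short program. The sketch below is an assumed Python implementation (not the question's pseudocode itself) that records every merged subarray in completion order; the recorded sequence corresponds to merges [8], [9], [12], [10], [11], [13], and [14] above.

```python
def merge_sort(a, merges):
    # Recursively split until single elements remain, then merge;
    # each merged result is appended to `merges` in completion order.
    if len(a) <= 1:
        return a
    m = len(a) // 2
    first = merge_sort(a[:m], merges)
    second = merge_sort(a[m:], merges)
    merged, i, j = [], 0, 0
    while i + j < len(first) + len(second):
        if j >= len(second) or (i < len(first) and first[i] < second[j]):
            merged.append(first[i]); i += 1
        else:
            merged.append(second[j]); j += 1
    merges.append(merged)
    return merged

merges = []
merge_sort([4, 7, 2, 5, 8, 3, 6, 1], merges)
# merges[0] is [4, 7]: the first merge happens deep in the recursion on the
# first half, before the second half of the original array is ever split.
```

Running this yields the merge results in the order (4, 7), (2, 5), (2, 4, 5, 7), (3, 8), (1, 6), (1, 3, 6, 8), (1, 2, ..., 8), matching the trace.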

[Subquestion 2]
Let us consider the part (4) of the merge sort algorithm, which merges the split arrays. As the
Subquestion says, “In step (4) above, the elements at the beginning of the two arrays are compared,
and the smaller of the two is retrieved one at a time into a single array thereby yielding a sorted and
merged array.” This is shown in Fig. C.

    Array first:  2 4 5 7        Array second:  1 3 6 8
                       └───────── merge ─────────┘
                                    ▼
    Array A:      1 2 3 4 5 6 7 8

Fig. C Example of creating a sorted and merged array

The following must be kept in mind when considering the algorithm.

• The array index starts at zero (0).


• The array length is denoted as “array_name.length”.
• Array A is a variable with space allocated for the merged array.
• In the logical operation “x or y,” y is not evaluated if x is true.
• In the logical operation “x and y,” y is not evaluated if x is false.

In the merge function, indices i and j are used to hold the position of the elements in arrays
first and second, respectively, and are set initially to zero before the merge process. Then, the
merge process is repeated by advancing indices i and j one by one from the start toward the end of
arrays first and second.
Consider the condition containing the blanks, and the processes that follow it (see Fig. D). When
the condition is "true," the branch stores a value in A[i + j] and then increments index i, which
suggests that an element of array first is merged into array A. Likewise, when the condition is
"false," index j is incremented and an element of array second is stored in array A.

    Condition: j ≥ A or (i < first.length and B)
        │
        ├─ Yes (true):  A[i + j] ← C    i ← i + 1
        │     Index i is advanced by 1, so the process containing blank C
        │     stores an element of array first in array A.
        │
        └─ No (false):  A[i + j] ← D    j ← j + 1
              Index j is advanced by 1, so the process containing blank D
              stores an element of array second in array A.

Fig. D Algorithm for the merge function (excerpt)

The following two situations could result in an element of array first being stored in array A.

(1) All elements of array second have already been stored in array A, and only the elements of
array first are left unprocessed.

(2) Both arrays, first and second, still contain elements not yet stored in array A, and the
element of first is smaller than the element of second (first[i] < second[j]).

Table E shows all possible conditions, including situations in which elements from array
second are stored in array A.

Table E Conditions for storing elements in array A

                                         Element of first stored    Element of second stored
                                         in A (conditional = true)  in A (conditional = false)
  Only first array elements remain       Yes                        –
  Only second array elements remain      –                          Yes
  Elements remain in both arrays,
  and first[i] < second[j]               Yes                        –
  Elements remain in both arrays,
  and first[i] > second[j]               –                          Yes
• Blanks A, B: In accordance with the logical operation rules pointed out in the Subquestion as
important points to note, the conditional "j ≥ A or (i < first.length and B)" is true
when "j ≥ A" is true (in this case, the second half of the disjunction is not evaluated), or when
the first half of the disjunction is false and "i < first.length and B" is true. These two
cases correspond to the first and third rows of Table E, respectively. Since the value of index j is
being tested in the condition containing blank A, that condition must relate to array second. The
merge loop that contains these evaluations ends only when all elements of both first and second
have been stored in A, so whenever the conditional is evaluated, at least one of the two arrays still
has unprocessed elements. Therefore, if one of the arrays has no elements remaining, the other must
have one or more elements remaining. Thus, the first condition (no elements left in second, one or
more elements left in first) is "j ≥ second.length". When checking the values of elements in
arrays, we must make sure each index stays within the bounds of its array. In the third row of
Table E, one or more elements remain in both arrays. Because the condition containing blank B is
evaluated only when the condition containing blank A is false, it is clear that array second still has
one or more elements. The algorithm therefore checks whether array first also has one or more
elements remaining, and then compares the values of the front elements of both arrays. In other
words, the condition for the third row is "i < first.length and first[i] < second[j]".
Therefore, blank A is second.length, and blank B is first[i] < second[j].

• Blanks C, D: C is where an element of first is stored in array A, and thus the answer is first[i].
D is where an element of second is stored in array A, and thus the answer is second[j]. In the
example shown in Fig. C, indices i and j are both 0 at the start. Comparing the first element 2
of first and the first element 1 of second, we have 2 > 1, which corresponds to the fourth row in
Table E. Therefore, element 1 from the second array is stored in array A. Next, index j is
advanced by 1, and the next element in array second is compared. Comparing the first element 2
of first and the second element 3 of second results in 2 < 3, which corresponds to the third row
in Table E. Therefore, element 2 from first is stored in array A. A similar process continues
thereafter.
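Putting blanks A through D together, the merge function can be sketched in Python as follows (a hedged rendering of the question's pseudocode, not the original; note that Python's `or` and `and` short-circuit exactly as the question requires).

```python
def merge(first, second):
    # Array A has space allocated for all elements of both input arrays.
    A = [None] * (len(first) + len(second))
    i = j = 0                      # indices into first and second
    while i + j < len(A):          # repeat until A is completely filled
        # blank A: second.length    blank B: first[i] < second[j]
        if j >= len(second) or (i < len(first) and first[i] < second[j]):
            A[i + j] = first[i]    # blank C: an element of first is stored
            i = i + 1
        else:
            A[i + j] = second[j]   # blank D: an element of second is stored
            j = j + 1
    return A
```

With the arrays of Fig. C, merge([2, 4, 5, 7], [1, 3, 6, 8]) yields [1, 2, 3, 4, 5, 6, 7, 8].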

[Subquestion 3]
The merge_sort function implements the merge sort algorithm defined in (1) through (4)
(shown in Fig. A) using the merge function defined in Subquestion 2. It is important to understand
that the merge_sort function is a function responsible for the whole merge sort algorithm, and that
the merge function performs one part of it, namely the merging of split and sorted arrays as defined
in (4). Therefore, do not confuse the two. Also, note that array A is used by the merge function for
storing sorted, merged arrays, and is used by the merge_sort function to store both unprocessed
array before splitting, and the sorted array after merge sort.
The procedure for allocating the space for an array of length n is denoted as “new
array_name(n)”. Look at the contents of the merge_sort function, noting that integer division
results are truncated at the decimal point. The function will be easier to understand if compared
with the flowchart in Fig. A.

• Blank E: As defined in (1), if the number of elements is one, the function will return to the caller. If
the number of elements is two or more, the processes described in (2) and onward will be
performed. Therefore the conditional here should be “A.length > 1”. Thus, the answer is 1.

• Blank F: The section from "m ← A.length ÷ 2" up to the copying of the latter half
corresponds to the splitting of the array into first and second halves defined in (2). Blank F
corresponds to setting the value to variable n, which represents the length of array second. Since
array second contains the second half of array A, this is set to array A’s length, A.length, minus
the number of elements in the first half, m. Therefore, the answer is A.length – m. Here, values for
variables m and n are set, and arrays first and second of those sizes are allocated.

• Blank G: During the copying of the first half, the first half of array A (indices 0 through m – 1) is
copied to array first, and during the copying of the latter half, the second half of array A (indices
m through m + n – 1) is copied to array second. The second half copying process is repeated from
i = 0 to i = n – 1. By setting the index of array A to m + i during the copying process, copying will
be performed correctly, as shown in Fig. F. Therefore, the answer is A[m+i].

    Array A:       [ 0 … m–1 ]   [ m   m+1  …  m+n–1 ]
                   (first half)       (second half)
                                        │  copied with A[m + i], i = 0 … n–1
                                        ▼
    Array second:  [ 0   1  …  n–1 ]

Fig. F Second half copying process

• Blanks H, I: These blanks correspond to the recursive calls to merge sort defined in (3). Merge sort is
called on the first half (first) and on the second half (second). Therefore, blank H is
merge_sort(first), and blank I is merge_sort(second).
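With blanks E through I filled in, the whole merge_sort function might look like the following in Python (a sketch, on the assumption that merge is the function completed in Subquestion 2; Python's `//` truncates like the integer division in the question).

```python
def merge(first, second):
    # Subquestion 2's merge: repeatedly take the smaller front element.
    A, i, j = [None] * (len(first) + len(second)), 0, 0
    while i + j < len(A):
        if j >= len(second) or (i < len(first) and first[i] < second[j]):
            A[i + j] = first[i]; i += 1
        else:
            A[i + j] = second[j]; j += 1
    return A

def merge_sort(A):
    if len(A) > 1:                   # blank E: 1 (return to caller at length 1)
        m = len(A) // 2              # number of elements in the first half
        n = len(A) - m               # blank F: A.length - m
        first = [A[i] for i in range(m)]        # copy of the first half
        second = [A[m + i] for i in range(n)]   # blank G: A[m + i]
        first = merge_sort(first)    # blank H: recursion on the first half
        second = merge_sort(second)  # blank I: recursion on the second half
        A = merge(first, second)     # (4) merge the two sorted halves
    return A
```

For the array of Fig. 1, merge_sort([4, 7, 2, 5, 8, 3, 6, 1]) returns [1, 2, 3, 4, 5, 6, 7, 8].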

[Subquestion 4]
In merge sort, sorting an array A with n elements (n being a positive integer) requires area for
the split arrays (in the algorithm, arrays first and second). The first half of array A is copied to
first, and the second half to second, implying that an area with twice the number of array A
elements (2n) is needed.
The computational complexity of merge sort is O(nlog2n). The number of partial arrays
doubles with each split. If k levels of splitting result in n partial arrays of size one, we have
2^k = n. Taking the logarithm of both sides gives k = log2n; at each of these levels, n elements must
be copied during merging, therefore the computational complexity is nlog2n. As the following
calculation and the graph show, this value is smaller than n^2.

    n^2 – nlog2n = n(n – log2n) > 0    (n > 0, and from the graph, n – log2n > 0)

Based on the above, a) would be the appropriate description of the merge_sort function.

[Figure omitted] Fig. G Graphs of f(n) = n and f(n) = log2n
(n is a positive integer, so compare the integer values on the graphs: for every
positive integer n, the line f(n) = n lies above the curve f(n) = log2n.)

b) Even if lists or stacks are used as data structures, the splitting and merging processes would
not change, therefore the computational complexity would be the same as with arrays.

c) Splitting is performed until the number of elements is one, regardless of how the elements are
ordered. Even if the array is already sorted in the first place, splitting and merging will be
performed, therefore the size of necessary working memory and computational complexity
will be the same.

d) In this algorithm, array splitting is repeated until the number of elements in each section is
one. Therefore, the number of splittings performed significantly impacts the computational
complexity. The current merge_sort function splits an array roughly in the middle, therefore,
as the example in Fig. 2 shows, after splitting, the number of elements in the partial arrays will
be roughly halved, and if the number of elements is the same, the number of splittings will
also be the same. However, if a hash function is used to determine where to split the array, the
number of elements in the partial arrays after splitting will vary (depending on where to split),
so even if the number of elements is the same, the number of splits will vary. Therefore, it
can be said that the current method results in less variation in computational complexity than
a method using a hash function. With the current algorithm, the number of splittings necessary
for an array with n elements is, as shown in Fig. H (1), log2n. If a hash function is used and the
array is split into partial arrays with roughly half the number of elements each, then the
number of splits would be roughly equivalent. However, if splittings occur in such a way that
one of the partial arrays contains a single element, then n – 1 splittings will be required, as
shown in Fig. H (2).
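The difference in the number of splitting levels can be illustrated with a small sketch (the helper names are hypothetical; depth_mid splits at the middle as in the question's algorithm, while depth_skew models the worst case of always splitting off a single element).

```python
def depth_mid(n):
    # Levels of splitting when the array is halved each time.
    return 0 if n <= 1 else 1 + max(depth_mid(n // 2), depth_mid(n - n // 2))

def depth_skew(n):
    # Levels of splitting when every split leaves 1 and n - 1 elements.
    return 0 if n <= 1 else 1 + depth_skew(n - 1)

# For n = 2^k elements, halving needs k = log2(n) levels,
# while the skewed split needs n - 1 levels.
```

For example, depth_mid(8) is 3 (= log2 8), while depth_skew(8) is 7 (= 8 − 1), matching Fig. H (1) and (2).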

[Figures omitted]
Fig. H (1) Splitting in the middle: with n = 2^k elements, splitting terminates
after k = log2n levels.
Fig. H (2) Continuously splitting into one and n–1 elements: splitting terminates
only after k = n – 1 levels.

Q11-4 Data exchange between computers

[Answer]
[Subquestion 1] (1) c) (2) a) (3) b) (4) b)
[Subquestion 2] (1) b) (2) c) (3) a)
[Subquestion 3] (A) Order_details
(B) <Order
Ordering_party = "Company ABC"
Order_date = "2006-10-20"
Desirable_delivery_date = "2006-10-31">
(C) <Order_details>
(D) </Order_details>

[Explanation]
This question concerns the features of XML document data, and how that data is stored.
Subquestion 1 concerns which data format, CSV or XML, is appropriate to use in order to satisfy
specific requirements, given the characteristics of CSV and XML as data formats. Subquestion 2
concerns determining whether the given XML documents are valid documents, well-formed but not
valid documents, or neither. This is simple to answer with an accurate understanding of XML syntax.
However, even without detailed knowledge on XML syntax, the question can still be answered with
understanding of basic tag notations and the fact that the markup starting with !DOCTYPE is the DTD.
Subquestion 3 concerns converting the order data shown in Fig. 1 into a valid XML document. The
answers can easily be determined by carefully following how the order data in Fig. 1 corresponds to
the blanks in the XML document in Fig. 2, using Subquestion 2’s “valid document” as reference. This
question is relatively simple.

[Subquestion 1]
CSV is a data format in which data items are separated with commas. It contains no
information other than the data itself, and the order of the data. XML is a data format which stores
data enclosed between or within tags, in which it is possible to define parent-child relationships
between individual data items, and assign names to data.

(1) Answer which data format is appropriate for “transactions with foreign as well as domestic
businesses.” The fundamental difference between CSV and XML is that, as mentioned earlier, CSV
stores only data, while XML stores, in addition to data itself, attributes related to it. Differences
between data handled by domestic businesses and data handled by foreign businesses might include
data names, character types (kanji, alphabet), and code systems. If the names of tags and types of
characters differ, there is no meaning in retaining data names using XML. CSV does not contain
code system related information, therefore when exchanging data between computers with differing
code systems, the codes must be agreed upon in advance. For both XML and CSV formatted data,
arrangements must be made in advance for data conversion, thus c) "Impossible to say" is
appropriate.

(2) Answer based on the fact that “because network bandwidth is limited, the data size should
preferably be small.” CSV contains only data itself, while XML contains markup, such as tags, in
addition to data, therefore generally XML formatted files contain more data. Here the network
bandwidth is limited, and the format with the smaller data size is better, thus a) “CSV” is
appropriate.

(3) RosettaNet is a non-profit organization which defines standard specifications for BtoB
infrastructure, composed primarily of electronic device manufacturers. In order to promote BtoB
between companies, transaction rules must be defined. RosettaNet formulates product code and
product name description methods for use by companies when exchanging data, quotation
document and purchase order document formats, product codes used in transactions, business
processes, and the like, based on the XML format. Here “the data must be conformant with the
RosettaNet standard,” thus b) “XML” is appropriate.

(4) In order to retain product-specific information, such as color or size, in a CSV file, each
position within a row of data items must be assigned to each type of information, such as color or
size. With XML, this can be accomplished by using tags to specify information such as color or
size. Here the requirement states: “Depending on product types, product-specific information, such
as color or size, may need to be added. Information to be added cannot be foreseen at present,”
therefore it would be difficult to decide in advance on row positions to represent specific data.
XML, on the other hand, can flexibly support additional data. Therefore, b) “XML” is the
appropriate answer.
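The size difference discussed in (2) is easy to see with a toy comparison (the record layout below is illustrative only, not taken from the question):

```python
import csv
import io

record = ["S001", "Mouse", "5000"]

# CSV: the data items and the separating commas, nothing else.
buf = io.StringIO()
csv.writer(buf).writerow(record)
csv_text = buf.getvalue().strip()

# XML: the same data wrapped in tag markup.
xml_text = ("<Product><Product_code>S001</Product_code>"
            "<Product_name>Mouse</Product_name>"
            "<Unit_price>5000</Unit_price></Product>")

# The XML version carries tags in addition to the data, so it is
# considerably longer than the CSV version of the same record.
```

Here the CSV text is 15 characters while the XML text is several times longer, which is why CSV is preferred when bandwidth is limited.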

[Subquestion 2]
The definitions of a well-formed document and a valid document are given in the question;
therefore this Subquestion can be answered by checking the markups of each XML document
against those definitions. XML documents are composed of three parts: the XML declaration (the part
beginning with "<?xml version= ..."), the DTD (the part beginning with "<!DOCTYPE ..."), and the
XML instance (the part containing data marked up with tags). First, an explanation of elements
and attributes is given. The entire section between a start tag and an end tag is called an element.
The name of a tag is called an element name, and the part between the tags the element’s contents.
Elements are represented as follows.

<Element_name>Element contents</Element_name>

In the case of tag pairs without element contents (<Element_name></Element_name>), the
following representation can be used, which omits the end tag.

<Element_name/>

An element may have attributes. They are represented as shown below. The “attribute_name
= "value"” section is called the attribute.

<Element_name attribute_name = "value">

A well-formed document is one which contains an XML declaration and an XML instance, and
which uses a correctly formed, element nesting structure. A valid document is a well-formed
document which has its DTD, and whose XML instance is in accordance with the DTD definitions.
The relationship between well-formed documents and valid documents is as shown in the figure
below.

    Well-formed documents
        └── Valid documents (a subset of well-formed documents)

(1) An XML declaration is given, and elements (<Product>, <Product_code>, <Product_name>,
<Product_type>, and <Unit_price>) are correctly nested, therefore the document is a
well-formed document. However, since no DTD is specified, this is not a valid document.
Moreover, even without knowledge on DTD, given the name (Document Type Definition), and the
contents of (2) and (3), one can guess that the “<!DOCTYPE ...” section corresponds to the DTD.

(2) An XML declaration is given, and a specification which appears to be a DTD exists. As the
question explains, XML is a subset of SGML, and the basics should be the same as HTML, another
subset of SGML. Looking at the XML contents from this perspective, we can see that in the XML
instance, there are no end tags for product code or product name. Because of this, we can guess that
the XML contains syntax errors. Fig. 2 of Subquestion 3 should be a “valid document,” therefore it
may be used as a reference. In Fig. 2, tags such as <Product_code> are closed with
</Product_code> end tags, therefore we can be sure that end tags are necessary in XML.

(3) An XML declaration, a DTD, and an XML instance are specified, and the XML instance
section is specified in accordance with the defined DTD (product elements contain unit price
elements; products contain product code, product name, and product type attributes (the
“<!ATTLIST ...” section)), therefore this is a valid document. As with (2), Fig. 2 may be consulted

as a reference.
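Well-formedness (though not DTD validity) can be checked mechanically. For example, Python's standard xml.etree.ElementTree parser accepts correctly nested tags but rejects the missing end tags of (2); the sample documents below are simplified stand-ins for the ones in the question.

```python
import xml.etree.ElementTree as ET

# Correctly nested start/end tags: parses as a well-formed document.
well_formed = ("<Product><Product_code>S001</Product_code>"
               "<Unit_price>5000</Unit_price></Product>")
ET.fromstring(well_formed)

# Missing end tags, as in (2): parsing fails with a syntax error.
broken = "<Product><Product_code>S001<Product_name>Mouse</Product>"
try:
    ET.fromstring(broken)
    parse_failed = False
except ET.ParseError:
    parse_failed = True
```

Note that ElementTree checks well-formedness only; confirming validity against a DTD requires a validating parser.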

[Subquestion 3]
• Blank A: Blank A is followed by “(Product_code, Product_name, Quantity)”, therefore the name
of that element (ELEMENT) which is composed of product code, product name, and quantity is the
answer for the blank. The order data in Fig. 1 shows that the product code, product name, and
quantity constitute an order detail. Since the name of the order detail is Order Details,
“Order_details” is the answer for the blank.

• Blank B: This is the start tag that encapsulates the entire XML instance. The end tag for the entire
instance is </Order>, therefore the tag name is “Order.” The DTD description specifies that the
order contains the order details element, and the ordering party, order date, and desirable delivery
date attributes. The ordering party in the order data shown in Fig. 1 is “Company ABC,” the order
date is “2006-10-20,” and the desirable delivery date is “2006-10-31.” Using the XML document
of (3) in Subquestion 2 as a reference, the attribute description can be specified as shown below.

<Order
Ordering_party = "Company ABC"
Order_date = "2006-10-20"
Desirable_delivery_date = "2006-10-31">

Noting that blank B is clearly larger than blanks A, C, or D, it is possible to guess that the tag
name, “Order,” alone, is not the answer.

• Blank C: This corresponds to the start tag for the element which makes up the order, containing the
product code, product name, and quantity elements. Thus, “<Order_details>” is the answer for
the blank.

• Blank D: Since blank D is the end tag corresponding to blank C, "</Order_details>" is the answer
for the blank.
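The document of Fig. 2 can also be produced programmatically. The sketch below builds the Order element with its attributes and one Order_details child using Python's standard library; the product values are placeholders, since Fig. 1 is not reproduced here.

```python
import xml.etree.ElementTree as ET

order = ET.Element("Order", {
    "Ordering_party": "Company ABC",
    "Order_date": "2006-10-20",
    "Desirable_delivery_date": "2006-10-31",
})
detail = ET.SubElement(order, "Order_details")
ET.SubElement(detail, "Product_code").text = "S001"   # placeholder value
ET.SubElement(detail, "Product_name").text = "Mouse"  # placeholder value
ET.SubElement(detail, "Quantity").text = "10"         # placeholder value

xml_text = ET.tostring(order, encoding="unicode")
```

The resulting string starts with the <Order ...> start tag carrying the three attributes, followed by the nested Order_details element, mirroring blanks B through D.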

[Reference]
Below is a supplementary explanation of XML syntax. Use it to gain an accurate understanding
of XML syntax.

(1) XML Document


An XML document consists of three parts: the XML declaration, the DTD, and the XML
instance. The DTD may be omitted.

(2) XML Declaration

<?xml version ="version" encoding ="character code" ?>

The XML declaration is written in this format. The XML declaration is mandatory, while
encoding can be omitted. When encoding is omitted, it is assumed that UTF-8 or UTF-16
encoding is used.

(3) DTD (Document Type Definition)


DTD specifies the data format used in that XML document. In XML documents, arbitrary tags
may be defined to represent various types of data. Therefore, unless the parties using XML
documents decide on certain rules in advance, such as “this XML document defines the XX tag
which holds data that means YY,” data cannot be exchanged between them. The DTD is used to
describe these rules agreed upon between the relevant parties. Since the DTD specifies data
formats, it is called a schema language. The DTD description in (2) and (3) of Subquestion 2 is
used below as an example to explain the DTD.

<!DOCTYPE Product [ --- (i)
<!ELEMENT Product (Product_code, Product_name, Product_type, Unit_price)> --- (ii)
<!ELEMENT Product_code (#PCDATA)> --- (iii)
<!ELEMENT Product_name (#PCDATA)> --- (iv)
<!ELEMENT Product_type (#PCDATA)> --- (v)
<!ELEMENT Unit_price (#PCDATA)> --- (vi)
]> --- (vii)

(i), (vii): Indicate that this is a declaration of a document format. Specific contents are given
between them.
(ii): Defines that the “Product” element contains, within it, the “Product_code”, “Product_name”,
“Product_type”, and “Unit_price” elements, in that order.
(iii): Defines that the “Product_code” element consists of character data (#PCDATA).
(iv): Defines that the “Product_name” element consists of character data (#PCDATA).
(v): Defines that the “Product_type” element consists of character data (#PCDATA).
(vi): Defines that the “Unit_price” element consists of character data (#PCDATA).

<!DOCTYPE Product [ --- (i)
<!ELEMENT Product (Unit_price) > --- (ii)
<!ATTLIST Product Product_code CDATA #REQUIRED --- (iii)
Product_name CDATA #REQUIRED --- (iv)
Product_type CDATA #REQUIRED> --- (v)
<!ELEMENT Unit_price (#PCDATA)> --- (vi)
]> --- (vii)

(i), (vii): Indicate that this is a declaration of a document format. Specific contents are given
between them.

(ii): Specifies that the “Product” element contains a sub-element “Unit_price.”

(iii): Defines that the first attribute of the “Product” element is “Product_code,” that it consists of
character data (CDATA), and that it is required (#REQUIRED).

(iv): Defines that the second attribute of the “Product” element is “Product_name”, that it consists
of character data (CDATA), and that it is required (#REQUIRED).

(v): Defines that the third attribute of the “Product” element is “Product_type”, that it consists of
character data (CDATA), and that it is required (#REQUIRED).

(vi): Defines that the “Unit_Price” element consists of character data (#PCDATA).

(4) XML Instance


The XML instance section contains specific data, and is composed of elements and attributes.
Elements may be nested. Attributes may be omitted. An example of elements and attributes is
shown below.

<Product
Product_code = "S001">
<Product_name>Mouse</Product_name>
<Unit_price>5000</Unit_price>
</Product>
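Reading such a document back illustrates the element/attribute distinction described above (using Python's standard xml.etree.ElementTree):

```python
import xml.etree.ElementTree as ET

doc = ('<Product Product_code="S001">'
       "<Product_name>Mouse</Product_name>"
       "<Unit_price>5000</Unit_price></Product>")
root = ET.fromstring(doc)

tag = root.tag                        # element name: 'Product'
code = root.attrib["Product_code"]    # attribute value: 'S001'
price = root.find("Unit_price").text  # element contents: '5000'
```

The attribute is read from the start tag, while the child elements' contents are read from between their start and end tags.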

Afternoon Exam Section 12 Strategy Related Items Answers & Explanations

Section 12 Strategy Related Items


Q12-1 Management analysis

[Answer]
[Subquestion 1] a, c, e, f
[Subquestion 2] A: B5 / (1 − B4)
                B: (C5 + C6) / (1 − C4)
[Subquestion 3] 72.5 billion yen

[Explanation]
[Subquestion 1]
(1) Ratio of ordinary profit to total capital = (Ordinary profit / Total capital) × 100%

To calculate ordinary profit, interest income and dividends (a) as well as selling, general and
administrative expenses (e) are required. Also, to calculate total capital, profit reserve (f) is
required.

(2) Ratio of ordinary profit to sales = (Ordinary profit / Sales) × 100%

As in (1), to calculate ordinary profit, interest income and dividends (a) as well as selling,
general and administrative expenses (e) are required.

(3) Total capital turnover = Sales / Total capital

As in (1), to calculate total capital, profit reserve (f) is required.

(4) Current ratio = (Current assets / Current liabilities) × 100%

To calculate current assets, goods (c) is required.

[Subquestion 2]
(1) Profit = Sales − Costs
           = Sales − (Variable costs + Fixed costs)
           = Sales − (Variable cost ratio × Sales + Fixed costs)

Since the break-even point is the sales at which profit is zero, it is equal to the sales when:

    0 = Sales − (Variable cost ratio × Sales + Fixed costs)

Therefore, Break-even point = Fixed costs / (1 − Variable cost ratio)
                            = B5 / (1 − B4)

(2) Since Profit = Sales − (Variable cost ratio × Sales + Fixed costs),

    C6 = Sales − (C4 × Sales + C5)

Solving this for Sales gives:

    Sales target = (C5 + C6) / (1 − C4)

[Subquestion 3]
Profit = Sales − Variable costs − Fixed costs
       = Sales − Variable cost ratio × Sales − Fixed costs
       = (1 − Variable cost ratio) × Sales − Fixed costs
       = Marginal profit ratio × Sales − Fixed costs, and therefore:

    4 = 0.4 × Sales − 25

Thus:

    Sales = (25 + 4) / 0.4 = 72.5
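The formulas for blanks A and B, and the Subquestion 3 calculation, can be written directly as a short sketch (the function names are hypothetical; B4/B5 and C4/C5/C6 refer to the spreadsheet cells in the question):

```python
def break_even_sales(fixed_costs, variable_cost_ratio):
    # Blank A: B5 / (1 - B4)
    return fixed_costs / (1 - variable_cost_ratio)

def sales_target(fixed_costs, target_profit, variable_cost_ratio):
    # Blank B: (C5 + C6) / (1 - C4)
    return (fixed_costs + target_profit) / (1 - variable_cost_ratio)

# Subquestion 3: fixed costs 25, target profit 4, marginal profit ratio 0.4,
# so the variable cost ratio is 1 - 0.4 = 0.6.
target = sales_target(25, 4, 0.6)   # 72.5 (billion yen)
```

The sales target formula is simply the break-even formula with the desired profit added to fixed costs, since both must be covered by the marginal profit.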

Q12-2 Break-even point analysis of a measuring instrument manufacturer

[Answer]
[Subquestion 1] (1) Division A: 576 (million yen)
Division B: 79.2 (million yen)
(2) Division name: Division A
Reason of decision: Because the safety ratio is extremely low compared to
Division B.
(3) Convert fixed costs to variable costs as much as possible in order to lower
the break-even point.
[Subquestion 2] Division A: 5 (units)
Division B: 9 (units)
[Subquestion 3] (1) d
(2) Because the fixed costs of the entire company are constant, the total costs
incurred by Company P do not change.

[Explanation]
This question concerns break-even point analysis. In order to perform break-even point analysis, it is
necessary to recast the profit and loss statement as a variable profit and loss statement by separating
costs into variable costs and fixed costs. A variable profit and loss statement is a profit and loss
statement in which every incurred cost is classified and stated as either a variable cost or a fixed cost,
according to whether it increases or decreases with changes in sales volume. Normal profit and loss
statements are structured as shown below.

  Sales volume
− Costs
  Gross profit
− Expenses
  (Ordinary) profit

In this case, manufacturing expenses as well as selling and administrative expenses can be
distinguished, but the origin and characteristics of the costs are unknown, which makes it difficult to
reflect them in cost management and profit plans (in this question, however, costs are treated as a
single aggregate). It is therefore effective to create a variable profit and loss statement and analyze
costs and profit with it: costs are divided into variable costs, which increase as more units are
produced and sold, and fixed costs, which are incurred in a fixed amount regardless of volume.
Variable profit and loss statements are structured as shown below.

  Sales volume
− Variable costs
  Marginal profit
− Fixed costs
  (Ordinary) profit

By analyzing these items, measures for ensuring profit can be clarified.

[Subquestion 1]
(1) The break-even point sales volume is to be calculated. The break-even point is “the state where the
marginal profit equals fixed costs,” and the sales volume at this point is calculated by division.
First, for Division A, the marginal profit per unit of product is 20 million yen. If the number of
units sold corresponding to the break-even point sales is X units, then the equation 20 × X = 480
holds, thus X = 24 units. The sales volume at this point is 24 (million yen) × 24 (units) = 576 (million yen).
Calculating likewise for Division B, selling 44 units of Product B corresponds to the
break-even point, and the sales volume is 79.2 (million yen). Therefore, the answer is “576 (million yen)”
for Division A, and “79.2 (million yen)” for Division B.
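The break-even arithmetic above can be sketched in a few lines of Python. The marginal profits per unit and fixed costs come from the explanation; Division A's unit selling price (24 million yen) is stated above, while Division B's (1.8 million yen) is not stated directly and is inferred here from 79.2 ÷ 44, so treat it as an assumption.

```python
# Break-even calculation sketch; amounts in millions of yen.
# Division B's unit price (1.8) is inferred from 79.2 / 44 (an assumption).

def break_even(unit_price, marginal_profit_per_unit, fixed_costs):
    # At the break-even point, total marginal profit equals fixed costs.
    units = fixed_costs / marginal_profit_per_unit
    return units, units * unit_price

units_a, sales_a = break_even(24, 20, 480)   # Division A: 24 units, 576
units_b, sales_b = break_even(1.8, 1, 44)    # Division B: 44 units, 79.2
```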

(2) Which of Division A or Division B has lower safety is to be answered. With regard to safety,
the question text states, “In break-even point analysis, it is necessary to
calculate the margin of safety ratio and to analyze the safety of a company or business. The margin
of safety ratio is an indicator that shows by what percentage sales volume would have to decrease
from its current value to reach the break-even point. The larger the value of this indicator, the
safer the company or business is considered to be.” Accordingly, the margin of safety ratios of the two
divisions are compared. The margin of safety ratio is obtained by calculating (sales volume −


break-even point sales) / sales volume.

Division A: (600 − 576) / 600 = 4%

Division B: (228.6 − 79.2) / 228.6 ≈ 65%

It is clear that the margin of safety ratio of Division A is far lower. In the case of Division A, a
slight decrease in sales would result in a loss, while the cost structure of Division B allows
it to remain profitable even if sales decrease by half. Therefore, the answer for the division name is
“Division A,” and the reason is “Because its margin of safety ratio is extremely low compared
with that of Division B.”
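As a quick check, the margin of safety ratios can be computed directly from the figures above (a minimal sketch; amounts in millions of yen):

```python
# Margin of safety ratio = (sales - break-even sales) / sales
def margin_of_safety(sales, break_even_sales):
    return (sales - break_even_sales) / sales

ratio_a = margin_of_safety(600, 576)      # Division A: 4%
ratio_b = margin_of_safety(228.6, 79.2)   # Division B: about 65%
```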

(3) An improvement to the cost structure of Division A is to be answered. As is clear from Table 2,
the ratio of fixed costs to total costs is extremely high for Division A. A high fixed-cost ratio
means that much cost is incurred regardless of sales quantity (sales volume), which raises the
break-even point. If no fixed costs were incurred, the concept of the break-even point
would not arise as long as some marginal profit could be secured. The concept of the
break-even point arises because fixed costs exist and must be recovered through marginal
profit. In other words, if fixed costs are small, the costs that must be recovered through
marginal profit are also small, making it possible to lower the break-even point. In this case,
the fixed-cost ratio of Division A is 83% while that of Division B is 30%. At
Division A, it is desirable to raise the margin of safety ratio by converting fixed costs into variable
costs as much as possible, for example by factoring performance-based payment into personnel
expenses and subcontracted processing expenses, or by using metered rate services instead of fixed rate
services, thereby lowering the break-even point. Therefore, an answer such as “Convert fixed costs
into variable costs as much as possible in order to lower the break-even point” is appropriate.

[Subquestion 2]
In the next fiscal year, fixed costs are expected to increase. In this case, if the sales quantity
remains the same as in the current fiscal year, profits will decrease. Therefore, the
additional sales quantity necessary to exceed the profit of the current fiscal year is to be answered.
First, since unit selling prices and unit variable costs will not change from the current fiscal year, the
marginal profit per unit, which is 20 million yen for Division A and 1 million yen for Division B,
remains unchanged for the next fiscal year. Next, since fixed costs are expected to increase by 20%
for each division, fixed costs for Division A are expected to increase to 480 (million yen) × 1.2 = 576 (million
yen), and fixed costs for Division B to 44 (million yen) × 1.2 = 52.8 (million
yen). Profits for the current fiscal year are 20 million yen for Division A and 83
million yen for Division B. Therefore, with the target number of units sold for the next year being X for Division A
and Y for Division B, X and Y are calculated.

20 × X − 576 ≥ 20
1 × Y − 52.8 ≥ 83

Solving these inequalities would result in X = 30 and Y = 136 (the minimum integer values
which satisfy each condition).
Since the question asks how many more units must be sold in addition to the sales quantity in


the current fiscal year, the answer for Division A is 30 − 25 = “5” units, and for Division B, 136 − 127 =
“9” units.
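The two inequalities can also be solved mechanically with a ceiling division. The sketch below uses the figures quoted in the explanation (current-year sales of 25 units for Division A and 127 for Division B; amounts in millions of yen):

```python
import math

def units_needed(marginal_profit_per_unit, fixed_costs, target_profit):
    # Smallest integer X with marginal_profit_per_unit * X - fixed_costs >= target_profit
    return math.ceil((target_profit + fixed_costs) / marginal_profit_per_unit)

x = units_needed(20, 480 * 1.2, 20)   # Division A target for next year
y = units_needed(1, 44 * 1.2, 83)     # Division B target for next year
additional_a = x - 25                 # units beyond this year's 25
additional_b = y - 127                # units beyond this year's 127
```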

[Subquestion 3]
This subquestion asks us to consider in which division, A or B, it would be more advantageous to
manufacture the new Product C. At first glance, it might appear that calculations must be performed in
accordance with the given conditions and the results compared, but by reading the subquestion
carefully, it becomes clear that no calculation is required.
In this case, since no new fixed costs will be incurred, the costs incurred by Company P as a
whole are constant regardless of which division Product C is manufactured in. Take care not to be
misled by the allocation rule for fixed costs. In considering which division should manufacture
Product C, while the evaluated amounts in the profit and loss calculation for each
product change, there is no difference in either sales or costs when seen from the perspective of
Company P as a whole. Therefore, regardless of which division manufactures Product C, there
is no economic difference.
Trial calculations are performed below using a model. For Division A, let the fixed costs be
KA and the variable costs be HA; for Division B, let the fixed costs be KB and the variable costs be HB; and for
Product C, let the variable costs be HC.

・ Total costs of Company P when Product C is manufactured in Division A

HA + 0.7 × KA + HC + 0.3 × KA + HB + KB
= HA + KA + HB + KB + HC

・ Total costs of Company P when Product C is manufactured in Division B

HB + 0.7 × KB + HC + 0.3 × KB + HA + KA
= HA + KA + HB + KB + HC

Thus, indeed, the costs are the same regardless of which division Product C is manufactured in.
Since sales are also expected to remain unchanged, the answer for (1) is d) “Regardless of which
division makes Product C, the profit of Company P does not change.” Furthermore, for (2), an
answer such as “Because, since the fixed costs of the entire company are constant, the costs that are
incurred do not change for Company P” is appropriate.
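The algebra can be confirmed numerically with exact rational arithmetic. Only the 0.7 / 0.3 allocation ratio comes from the question; every cost value below is purely illustrative.

```python
from fractions import Fraction

# Illustrative cost values (not from the question); r is the 70% of the
# manufacturing division's fixed costs allocated to its own product.
KA, HA = Fraction(480), Fraction(100)   # Division A fixed / variable costs
KB, HB = Fraction(44), Fraction(1456, 10)  # Division B fixed / variable costs
HC = Fraction(30)                       # Product C variable costs
r = Fraction(7, 10)

total_if_a = (HA + r * KA) + (HC + (1 - r) * KA) + (HB + KB)
total_if_b = (HB + r * KB) + (HC + (1 - r) * KB) + (HA + KA)
```

Both totals reduce to HA + KA + HB + KB + HC, mirroring the derivation above.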

Q12-3 Linear programming

[Answer]
[Subquestion 1] Production volume of Product A 1,800 (units)
Production volume of Product B 4,000 (units)
Machine operating time 21,000 (minutes)
[Subquestion 2] Production volume of Product A 3,000 (units)
Production volume of Product B 4,000 (units)
Sales profit 360,000 (yen)


[Explanation]
By asking for the optimal solution, this question tests basic knowledge of linear programming and
break-even point analysis. In Subquestion 1 in particular, it must be kept in mind that when obtaining
the machine operating time from the production volumes of Products A and B, the operating time is the
value that corresponds to the break-even point. In other words, the break-even point is the point at
which the fixed costs can be recovered from the sales profit (= selling price − variable costs) of
Products A and B, and the relevant expression must be derived from this fact.
The graph in the subquestion shows the intersection of expression (1) and expression (2), and
may mislead examinees who jump to conclusions. The given
expressions must be properly plotted as a graph, and the point at which the objective function is
maximized must be considered.

[Subquestion 1]
This subquestion asks for the production volumes and the machine operating time of Products A and B
that correspond to the break-even point at which the machine operating time is at its minimum. First,
the expression for obtaining the machine operating time is considered.
Let Z be the machine operating time, x be the production volume of Product A, and y be the
production volume of Product B. Then, the following expression holds.

Z = 5x + 3y ……(1)

Here, x and y are under the following conditions.

0 ≤ x ≤ 3,000
0 ≤ y ≤ 4,000

The production volumes of A and B corresponding to the break-even point can be obtained as the
values where the sum of the sales profits (that is, selling price minus variable costs) equals the
fixed costs. In other words, the following expression holds at the break-even point.

60x + 45y = 288,000 …… (2)

From expression (2),

x = −0.75y + 4,800 ……(3)

Substituting expression (3) into expression (1) yields:

Z = 5(−0.75y + 4,800) + 3y
= −0.75y + 24,000 ……(4)

In order to minimize the machine operating time Z in expression (4), the production volume y
of Product B, which uses less machine operating time per unit, just needs to be set to its maximum of
4,000. By letting y = 4,000 (units), the value of Z is as follows:

Z = −0.75 × 4,000 + 24,000 = 21,000 (minutes)


In this case, from expression (3), the production volume x of Product A is as follows:

x = −0.75 × 4,000 + 4,800 = 1,800 (units)
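Expressions (1) and (3) can be combined in a short sketch to confirm both results:

```python
def operating_time(y):
    # Break-even line, expression (3): x = -0.75y + 4,800
    x = -0.75 * y + 4800
    # Machine operating time, expression (1): Z = 5x + 3y (minutes)
    return 5 * x + 3 * y, x

z, x = operating_time(4000)   # y at its maximum of 4,000 units
```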

[Subquestion 2]
This subquestion asks us to use the given constraint conditions and objective function to
obtain the production volumes of Products A and B that yield the maximum sales
profit, and the amount of that profit.
First, the given constraint conditions and objective function, expression (1) through expression (5),
are plotted as a graph.

[Graph: x axis (× 1,000), y axis (× 1,000). The feasible region (shaded) is enclosed by
expression (1) through expression (4); the line of expression (2) has the same slope as the
objective function. The vertex (2,727, 4,364) lies at the intersection of expression (1) and
expression (2), and the vertex (3,000, 4,000) at the intersection of expression (2),
expression (3), and expression (4).]

The area which satisfies the constraint conditions is the shaded area in the figure, enclosed by
expression (1) through expression (4). On the other hand, looking at the objective function, since Z
= 60x + 45y, it is apparent that the slope of its graph is the same as that of expression (2), −4/3.
Therefore, the point at which the value of the objective function Z is maximized within the area
enclosed by the constraint conditions is the point where expression (2), expression (3), and expression (4)
intersect, which is (3,000, 4,000).
Hence, as the optimal solution, the production volume x of Product A is 3,000 units, and
production volume y of Product B is 4,000 units.
Thus, the maximum sales profit under the optimal solution is as shown below:

Sales profit = 60 × 3,000 + 45 × 4,000 = 360,000 (yen)

In this question, since Subquestion 2 gives the sales profit per unit, and expression (5)
defines “sales profit,” the calculations must be based on these.
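Because the objective Z = 60x + 45y has the same slope as expression (2), every feasible point on that edge yields the same profit. Evaluating the objective at the two vertices read off the graph (coordinates as labeled there) illustrates this:

```python
def sales_profit(x, y):
    # Objective function: Z = 60x + 45y (yen)
    return 60 * x + 45 * y

z_opt = sales_profit(3000, 4000)     # the optimal vertex
z_other = sales_profit(2727, 4364)   # the other vertex on expression (2)
```

Both evaluate to 360,000 yen, which is why any point of the edge along expression (2) is optimal; the explanation reports the vertex (3,000, 4,000).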


Q12-4 Processing capacity and reliability improvement of a system

[Answer]
[Subquestion 1] (a) b (b) e (c) d (d) c (e) a
[Subquestion 2] b
[Subquestion 3] f

[Explanation]
This question asks about the knowledge and calculation skills concerning performance
and operating rate, which are essential when operating a system. As in this question, calculating the
individual operating rates of components within a system and obtaining the operating rate of the system
as a whole is an important task in formulating specific measures for maintaining a target
operating rate. In this question, the operator terminal PCs, Web servers, and DB servers are made
redundant, but in environments for more critical businesses, the networks and networking
devices, and furthermore the power supplies, are also made redundant.

[Subquestion 1]
This subquestion asks about the calculation of system performance values, and the knowledge
necessary for reliability improvement.

• Blank A: This asks for the contract bandwidth required for the peak hour period of Company X's
business. Since the contract bandwidth is provided in increments of 5M bits / second,
the required bandwidth must first be calculated. The business of Company X is broadly
divided into two (2) categories: “catalog sales” and “TV sales.” Of these, “catalog sales”
has no peak time and constantly receives 100 calls per hour. On the other hand, “TV
sales” receives 1,000 calls per hour at peak time. In other words, during its business peak time,
Company X receives 1,100 calls per hour. Since the volume of data transferred per call is 3M
bytes from the data center to the call center, and 1M bytes from the call center to the
data center, one might assume a total of 4M bytes per call. However, since
the question text states that the wide area Ethernet is capable of communicating using the full
duplex method, the amount of transferred data is considered separately for each direction of
communication; thus only the larger amount, the data sent from the data center to the call center,
needs to be considered. Since, at peak time, 3M bytes of data are sent 1,100
times per hour, the following expressions are obtained.

3M (bytes / call) = 24M (bits / call)
24M (bits / call) × 1,100 (calls / hour) = 26,400M (bits / hour)
26,400M (bits / hour) ÷ 3,600 (seconds / hour) ≈ 7.3M (bits / second)

Since the contract bandwidth is provided in increments of 5M bits / second, a contract bandwidth of
10M bits / second is needed. Therefore, b) is the correct answer.
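The blank-A arithmetic, including the round-up to the 5M bits/second contract increment, can be sketched as:

```python
import math

calls_per_hour = 1000 + 100   # TV sales at peak + constant catalog sales
bits_per_call = 3 * 8         # larger direction only: 3M bytes = 24M bits
required_mbps = bits_per_call * calls_per_hour / 3600   # about 7.3
contract_mbps = math.ceil(required_mbps / 5) * 5        # 5M bit/s increments
```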

• Blank B: Since Web servers that have a DB server as a backend can delegate exclusive control of
data and storage of data to the DB server, it is common to achieve both load distribution
and failure tolerance by making them redundant, with multiple servers performing the same
processing. When doing this, a network device called a load balancer is used to distribute requests
from Web clients to the multiple Web servers. A load balancer presents its own IP address to Web
clients as if it were the IP address of a Web server, and distributes and transfers the requests sent
to that IP address among the registered Web servers, thereby distributing the load on them.
Therefore, e) is the correct answer.

• Blank C: Since making the DB server redundant requires exclusive control of data and maintenance
of its integrity, it is common to place highly reliable shared storage in the backend and to operate
the servers in a dual configuration of a primary and a secondary unit, with one (1) unit in an active state
and the other in a standby state. Also, since the question text states that “in the event of a
failure of a server, operation is switched over to the standby server,” it is clear that operation is
performed using this method. This form, in which one (1) unit is in operation and the other one (1)
is on standby in anticipation of contingencies, is referred to as the duplex method. Therefore, d) is
the correct answer.

• Blank D: While the number of calls in the system of Company X at peak time is 1,100 calls per hour,
since the percentage of concluded contracts is 90%, the number of orders per hour at peak time is
1,100 × 0.9 = 990 orders. And since the description in [Current business contents and system
configuration] in the question text states that “the order reception amount per order for Company
X is 3,800 yen on average,” the amount of loss per hour can be calculated as follows.

990 (orders / hour) × 3,800 (yen) = 3,762,000 (yen / hour)

Therefore, c) is the correct answer.

• Blank E: As can be understood from the answer to blank A, a bandwidth of approximately 7.3M bits /
second is necessary at peak time. Since the “System operation guideline” requires double this
performance, a bandwidth of approximately 14.6M bits / second is necessary, which means a 15M
bits / second contract given the 5M bits / second increments. Meanwhile, since the current contract
is for 10M bits / second, an additional 5M bits / second is required. Therefore, a) is the correct answer.

[Subquestion 2]
Since this subquestion involves terms such as MTBF and MTTR and requires calculations
involving decimals, manual calculation is burdensome. However, by noting the instruction to
“round the third decimal place . . . to the second decimal place” and focusing on the key points,
it is possible to obtain the answer relatively quickly even by manual calculation.
MTBF (Mean Time Between Failures) represents the mean interval between failures of a
device. In contrast, MTTR (Mean Time To Repair) represents the mean time required for
repair. Since the MTBF is determined by the reliability of the components that constitute the
device, operating companies and service providers who purchase devices can select a product
based on its MTBF, but it is difficult to change the MTBF of an existing product. MTTR, on the other
hand, varies based on the policies of operating companies and service providers, such as whether
they keep spare units in stock, whether they have entered into a maintenance agreement, whether


they have assigned personnel, etc.


Here, the operating rate and non-operating rate of each component must be calculated;
they can be obtained from their MTBF and MTTR as shown below.

Operating rate = MTBF / (MTBF + MTTR)
Non-operating rate = MTTR / (MTBF + MTTR)
1 = operating rate + non-operating rate

Based on the above, the operating rates for Table 2 of the question text, rounded from the third
decimal place to the second decimal place, are as shown below.
Moreover, while there are 10 types of components in Table 2, the components whose
operating rates need to be calculated fall into two categories: those with an MTBF of
5,000,000 hours and an MTTR of 5 hours, which include the layer 2 switch on the call center side, and
those with an MTBF of 50,000 hours and an MTTR of 5 hours, which include the routers on the call
center side; thus only the values for these two (2) categories need to be obtained.

Component                                               Operating rate (per unit)
Operator terminal PC                                    -
Call center side layer 2 switch                         100.00%
Call center side router                                 99.99%
Wide area Ethernet between call center and data center  99.99%
Data center side router                                 99.99%
Load balancer                                           100.00%
Data center side layer 2 switch                         100.00%
Web server                                              99.99%
DB server                                               99.99%
Shared disk drive                                       100.00%

Next, of the above components, those that are connected serially, and whose operating rates
therefore need to be multiplied together, are identified. First, based on the conditions
given in the subquestion, the operating rate of the group of operator terminals is 100.00%.
Furthermore, since the shared disk drive in the table above also has an operating rate of 100.00%
when considered to the second decimal place, it is clear that it can be excluded from
consideration.
With regard to the DB servers, since the redundant configuration consists of two (2) units, the
square of the single-unit non-operating rate is the non-operating rate of the DB server group, and
subtracting this value from 1 gives the operating rate of the group. Since the non-operating rate of a
single DB server is 0.01%, the non-operating rate of the DB server group is 0.000001%, and the
operating rate is 99.999999%, which is 100.00% when rounded; it is clear that this also can be
excluded from consideration.


With regard to the Web servers, whether processing can still be performed at peak time when one (1)
of the servers stops must be considered. From Table 1, it is understood that a single Web
server can handle 10 calls per minute, that is, 600 calls per hour. In this case, however,
if either server fails, it is not possible to perform all the processing during the peak
hour period. In other words, both Web servers must be in operation, and therefore the operating rate
is calculated by treating the two (2) units as connected serially.
Based on the above, the configuration is a serial configuration consisting of the call center side
router, the wide area Ethernet between the call center and the data center, the data center side
router, and the two (2) Web servers. Therefore, the product of the operating rates of these five
(5) components is the operating rate of the overall system. The operating rate of each, to the
second decimal place, is 99.99%, so the answer is 99.99% to the fifth power.
While manual calculation of this is burdensome, once it is understood that 99.99% to the second
power is 99.98% and to the third power is 99.97%, it quickly becomes apparent that 99.99% to the
fifth power is 99.95%. Therefore, b) is the correct answer. The actual calculation is shown below
for reference.
99.99% to the second power is as follows.

99.99% × 99.99%
= 99.99% × 100% − 99.99% × 0.01%
= 99.99% − 0.009999%
= 99.980001%

Likewise, 99.99% to the third power is as follows.

99.98% × 99.99%
= 99.98% × 100% − 99.98% × 0.01%
= 99.98% − 0.009998%
= 99.970002%
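The same result follows from computing the rates directly and rounding as the question instructs (round to the second decimal place of percent, i.e. four decimal places as a ratio):

```python
def operating_rate(mtbf, mttr):
    # Operating rate = MTBF / (MTBF + MTTR)
    return mtbf / (mtbf + mttr)

r_router = round(operating_rate(50_000, 5), 4)      # 0.9999 -> 99.99%
r_switch = round(operating_rate(5_000_000, 5), 4)   # 1.0    -> 100.00%
overall = round(r_router ** 5, 4)                   # five serial 99.99% components
```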

[Subquestion 3]
Here, measures must be considered in accordance with the “System operation guideline”
presented to Company A by Company X. While the candidate measures given in the answer group
concern DB servers and Web servers, as stated in the explanation of Subquestion 2, the operating
rate of the DB server group in a two (2) unit configuration, rounded to the second decimal place, is
100%. Since adding units will not improve this operating rate, the DB servers can be
eliminated as candidates.
Next, with regard to making the Web servers redundant, “System operation guideline” 3-1
requires a capacity that anticipates double the load at peak time, and 3-2 requires that the
operating rate of the overall system be 99.96% or more. The current Web server group is in
a two (2) unit configuration with a total processing capacity of 1,200 calls per hour. Furthermore,
since the operating rate of the overall system obtained with this two (2) unit configuration was
99.95%, both the processing capacity and the overall operating rate must be
increased by adding Web servers. With regard to processing capacity, as considered in the
explanation for Subquestion 2, since each Web server can currently process 600 calls per hour, in


order to process 2,200 calls per hour, which is twice the 1,100 calls per hour at
peak time, four (4) Web servers are necessary. However, with a four (4) unit
configuration, if one (1) unit fails, the processing capacity of the remaining three (3) servers is only
1,800 calls per hour, which does not satisfy the condition. In other words, all four (4) servers
must be in operation. From Subquestion 2, the operating rate of the current
overall system with the Web servers in a two (2) unit configuration is 99.95%; a four (4) unit
configuration is equivalent to newly adding two (2) more devices with an
operating rate of 99.99% serially to the current system, so the operating rate drops,
and the system does not satisfy the 99.96% operating rate requirement. Next, the case where
the Web server group consists of five (5) units is considered. With regard to processing capacity, since
four (4) units are sufficient to satisfy the condition, the operating rate can be considered
including the case where one (1) unit fails. Normally, the operating rate of a Web server group in
a five (5) unit configuration, that is, the probability that four (4) or more units are in operation, is the sum
of the probability that all five (5) units are in operation and the probability that exactly four (4) of the
five (5) units are in operation with one (1) failed. However, since the question text gives the
condition that the probability that n or more units out of m units will be in operation is obtained
using the expression 1 − (1 − device operating rate)^(m − n), using this expression for four (4) or
more units out of a five (5) unit Web server configuration yields
1 − (1 − 0.9999)^(5 − 4) = 1 − (0.0001)^1 = 0.9999 = 99.99%.
Furthermore, the operating rate of the overall system in this case can be considered as that of a
configuration in which a Web server group with a 99.99% operating rate is serially connected
to the call center side router, the wide area Ethernet between the call center and the data center, and
the data center side router. In other words, it can be obtained as the operating rate of a serial
configuration of four (4) devices, each with an operating rate of 99.99%; the resulting
overall operating rate is 99.96%, which satisfies the conditions. Since the question asks not for
the total number of units after the addition, but for how many units need to be added, and three (3)
units are added to the current two (2), f) is the correct answer.
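The sizing reasoning for this subquestion can be sketched as below, using the question's approximation for an n-out-of-m group; the helper names are ours, not from the question.

```python
CALLS_NEEDED = 2 * 1100   # guideline 3-1: double the 1,100-call peak load
PER_SERVER = 600          # calls per hour one Web server can process
R = 0.9999                # operating rate of one 99.99% component

def group_rate(m, n, r=R):
    # m servers of which n must be up; the question's approximation for m > n
    return r ** n if m == n else 1 - (1 - r) ** (m - n)

n_required = -(-CALLS_NEEDED // PER_SERVER)   # ceiling division -> 4 must be up
# Multiply by the three other serial components (two routers, wide area Ethernet)
rate_with_4 = round(group_rate(4, n_required) * R ** 3, 4)   # falls short of 0.9996
rate_with_5 = round(group_rate(5, n_required) * R ** 3, 4)   # meets 0.9996
servers_to_add = 5 - 2                                        # from the current 2 units
```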

Q12-5 System reliability and performance evaluation

[Answer]
[Subquestion 1] (a) d (b) c
[Subquestion 2] (1) a (2) b (3) d
[Subquestion 3] (c) 0.6 (d) 112.5 (e) 187.5 (f) 35.7
(g) 11.4 (h) 270.3
[Subquestion 4] b

[Explanation]
This question concerns the system operating rate and the queuing model. The operating rate is a topic
which is highly likely to appear in questions. It is desirable to understand the basic formulas for serial
systems and parallel systems, and to acquire the ability to answer applied questions as well.
Subquestion 1 is a question of basic knowledge. The question about the operating rate in Subquestion
2 requires the ability to apply knowledge, but it should be possible to answer it if the basics are properly
understood. Subquestion 3 can also be answered with an understanding of the M/M/1 model formulas.
Subquestion 4 requires an understanding of the overall picture of the process, including the contents of
the explanation. The overall difficulty can be considered average.

[Subquestion 1]
• Blank A: A device which performs load distribution by distributing accesses from users onto
multiple servers is d), a “load balancer.”
a: SIP server......In an IP telephone system, a server which performs translation of telephone
numbers to IP addresses as well as call processing, etc.
b: Proxy server......A server which connects to the Internet on behalf of an internal network
computer that cannot connect to the Internet directly
c: Reverse proxy......A proxy server which relays server connection requests from the Internet on
behalf of a specific server

• Blank B: From the description stating that accesses are distributed in turn to the three (3) Web servers,
the answer is c) “round robin.” Since this site does not require user session management, accesses can
be distributed in turn. A session refers to the logical start-to-end of an exchange of data between
a browser and a Web server; the exchanges related in terms of business constitute a single
session. Moreover, a load balancer generally has a function to distribute accesses within the same
session to the same Web server.
a: Event-driven......A method in which processing is performed in response to operations (events)
performed by program users, such as mouse clicks
b: First answer......A method in which traffic is distributed to the first responder
d: Least-connection......A method in which traffic is distributed to the server with the fewest
active connections

[Subquestion 2]
It is desirable to have a thorough understanding of the basic formulas concerning the operating rate.
When the operating rates of two (2) devices are A and B, respectively, the following holds.
Serial system (both units in operation): A × B
Parallel system (at least one (1) of the two (2) units in operation): 1 − (1 − A)(1 − B)

(1) For the Web server portion, since the condition is that two (2) or more of the three (3)
units must be in operation, either all three (3) units must be in operation, or exactly two (2)
of the three (3) units must be in operation.

• Probability that all three (3) units are in operation: This is W³.


• Probability that exactly two (2) of the three (3) units are in operation: The number of combinations
of selecting two (2) units out of three (3) is 3C2 = 3. Also, since the probability that two (2)
specific units are in operation and one (1) unit has failed is W²(1 − W), the probability to obtain is
3W²(1 − W) = 3W² − 3W³.

• Operating rate of the Web server portion: Since this is the sum of the two expressions above,
it is W³ + 3W² − 3W³ = 3W² − 2W³.

Therefore, a) is the correct answer.
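The 2-out-of-3 formula can be verified by enumerating all up/down states of the three servers; the probe values used below are arbitrary.

```python
from itertools import product

def two_of_three(w):
    # Sum the probability of every state in which at least 2 servers are up.
    total = 0.0
    for states in product([True, False], repeat=3):
        p = 1.0
        for up in states:
            p *= w if up else (1 - w)
        if sum(states) >= 2:
            total += p
    return total
```

For any operating rate W, the enumeration agrees with 3W² − 2W³.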

(2) The operating rate of the portion that combines the DB servers and magnetic disks is that of a
parallel system configured with two (2) serial systems, as shown in the diagram below.

DB server — Magnetic disk
DB server — Magnetic disk

• Operating rate of one (1) serial system: This is D × J = DJ.


• Operating rate of the portion that combines the DB servers and magnetic disks: This is
1 − (1 − DJ)² = 2DJ − D²J². Therefore, b) is the correct answer.

(3) The overall operating rate of the information provision site can be obtained by adding the other devices
as serial systems. First, the operating rate of the AP server portion is obtained.

• Operating rate of the AP server portion: This is 1 − (1 − A)² = 2A − A² = A(2 − A).

• Operating rate of the information provision site as a whole: Since the site is a serial system
composed of two (2) FWs, a load balancer, the Web server portion (PW), the portion combining the DB
servers and magnetic disks (PDJ), and the AP server portion, this is F² × X × PW × PDJ × A(2 − A).

Therefore, d) is the correct answer.

[Subquestion 3]
It is desirable to have a solid understanding of the M/M/1 model formula “average waiting
time = average usage time × utilization rate ÷ (1 − utilization rate).”

• Blank C: Because the 24 accesses per second are distributed using the round robin
method to the three (3) Web servers, each unit receives eight
(8) accesses per second. Since the average CPU usage time per access is 75 milliseconds, the
CPU usage time per unit per second (= 1,000 milliseconds) is 8 × 75 = 600 milliseconds, and the
utilization rate is 600 ÷ 1,000 = 0.6.

• Blank D: Using the formula “average waiting time = average usage time × utilization rate ÷ (1 − utilization
rate),” this is

75 × 0.6 ÷ (1 − 0.6) = 112.5 milliseconds.

• Blank E: Since “average response time = average waiting time + average usage time,” this is

112.5 + 75 = 187.5 milliseconds.

• Blank F: Since the 24 accesses per second are distributed using the flip-flop method and processed by
the two (2) AP servers, 12 accesses are processed by each unit. Because the average CPU
usage time per access is 25 milliseconds, the utilization rate is 12 × 25 ÷ 1,000 = 0.3. The average
waiting time is 25 × 0.3 ÷ (1 − 0.3) ≈ 10.7 milliseconds, and the average response time is
10.7 + 25 = 35.7 milliseconds.

• Blank G: Since the 24 accesses per second are distributed using the flip-flop method and processed by
the two (2) DB servers, 12 accesses are processed by each unit. Because the average CPU
usage time per access is 10 milliseconds, the utilization rate is 12 × 10 ÷ 1,000 = 0.12. The average
waiting time is 10 × 0.12 ÷ (1 − 0.12) ≈ 1.4 milliseconds, and the average response time is
1.4 + 10 = 11.4 milliseconds.

• Blank H: The average response time concerning CPU processing is the sum of the times for the Web
servers, the AP servers, and the DB servers, which is 187.5 + 35.7 + 11.4 = 234.6 milliseconds.
Furthermore, since the average response time of the magnetic disks is the same as the average
response time for the AP servers, which have the same average usage (access) time and number of
accesses processed, it is 35.7 milliseconds. Therefore, the overall average response time of the
information provision site is 234.6 + 35.7 = 270.3 milliseconds.
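The blank C through H calculations can be reproduced with a small M/M/1 helper (a sketch; the function name is ours, the figures come from the question):

```python
# Average response time of one server group under M/M/1, with round-robin
# (or alternating) distribution of accesses across identical units.
def avg_response_ms(accesses_per_sec, units, service_ms):
    per_unit = accesses_per_sec / units              # accesses per unit per second
    utilization = per_unit * service_ms / 1000.0     # busy ms per 1,000 ms
    waiting = service_ms * utilization / (1 - utilization)
    return waiting + service_ms                      # waiting time + usage time

web = avg_response_ms(24, 3, 75)   # 187.5 ms
ap = avg_response_ms(24, 2, 25)    # about 35.7 ms
db = avg_response_ms(24, 2, 10)    # about 11.4 ms
disk = ap                          # disks have the same load profile as the AP servers
total = web + ap + db + disk       # about 270.3 ms
print(web, ap, db, total)
```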

[Subquestion 4]
As methods for load balancers to distribute processing to multiple servers, there are the static
distribution method, in which distribution is performed in a pre-determined order, and the dynamic
distribution method, in which distribution is performed based on the load situation of each server.
The round robin method is a static distribution method. Furthermore, the methods of options b
through d in the answer group are dynamic distribution methods.
In order to shorten the average response time, much processing must be distributed to the Web
server with the doubled MIPS value. However, if it is too much, waiting time will grow longer and
response time will also grow long. Therefore, a method by which the response time of the three (3)
units of Web servers is equalized is the most appropriate. Therefore, the answer is b).
a: When the Web server with the double MIPS value is “A,” and the other two units of (2) Web
servers are “B” and “C,” and access is distributed in the order of A→A→B→C with A being
accessed twice consecutively, there is a higher likelihood that the processing of the second “A” will
be made to wait. Instead of distributing traffic to “A” twice consecutively, a distribution in the
order such as A→B→A→C would be an appropriate method.
c, d: With these methods, the Web server with the double MIPS value is assigned the same number of
processes, or the same amount of transmitted data, as the other servers. Therefore, the Web server
with the double MIPS value will not be effectively utilized, and thus these methods are not
appropriate for shortening the average response time of the information provision site.
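The interleaved ordering suggested for option a) can be sketched as a weighted round-robin cycle (server names as in the explanation; the slot ordering is ours):

```python
from itertools import cycle

# "A" is the double-MIPS server; giving it every other slot (A, B, A, C)
# spreads its double share instead of sending it two consecutive accesses.
order = cycle(["A", "B", "A", "C"])
first_eight = [next(order) for _ in range(8)]
print(first_eight)  # A gets half of the accesses, B and C a quarter each
```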


Q12-6 Requirements analysis

[Answer]
[Subquestion 1]
(Completed data flow diagram: the “receive destination” process outputs “destination” to the
“confirm payment” and “issue ticket” processes; the “receive money” process outputs “amount
inserted” to the “confirm payment” and “return money” processes; the “confirm payment” process
reads the “fare table” data store and passes “change” to the “return money” process; the “issue
ticket” process outputs the ticket; the “return money” process outputs the “returned money.”)
[Subquestion 2] (a) Cancel (b) Payment complete (c) Ticket issuance complete
[Subquestion 3]

                                             Process started
Action                        Receive destination | Receive money | Issue ticket | Return money
d [Return money]                       1                  0              0              1
e [Receive new destination]            1                  0              0              0

[Explanation]
Requirements analysis is extremely important work, since subsequent system development is
performed based on the requirements specifications identified here. If mistakes are made during
this work, subsequent design and development work will be based on incorrect specifications,
resulting in significant waste.
In the structured analysis method, a data flow diagram (DFD) is used as a tool for analyzing
customer requirements and creating a requirements model.
A data flow diagram can be used to express the flow of data in business processing. A circle
represents a process (processing concerning data). The name of the process is written inside the circle,
and data input to or output from the process is represented by an arrow (→). Furthermore, a pair of
parallel lines represents what is called a data store, which is equivalent to stored data, namely, a file.
A data flow diagram expresses the flow of data between processes, but does not express the order
in which the processes are performed. However, as the question text states, in a real-time system,
process timing and synchronization control are often required to be expressed. Therefore, the data flow
diagram is extended to create a control flow diagram to express execution timing, etc., of each process.
The result is the transformation schema in Fig. 1. In the transformation schema, in addition to the
normal data flow diagram, the control flows represented with dotted lines can be confirmed.


Those who are unfamiliar with this type of schema can think as follows.
The circled items in the Fig. 1 transformation schema are considered as program modules. “Ticket
issue control,” “receive destination,” etc. correspond to these. The arrows that point to and from
“ticket issue control” are considered as representing the call relation with other modules. The
relationship between “ticket issue control” and “receive destination” is that, the “ticket issue control”
calls the “receive destination” module at a certain time, and the “receive destination” module returns
to its caller on the timing the destination is detected. Furthermore, the arrows that point to and from
circles other than “ticket issue control” correspond to the arguments for those processes (modules). In
other words, in order for the “receive destination” module to carry out its function, it requires
“destination” data, and furthermore, it outputs “destination” data as a result of processing.
Also, as with “ticket issue control” in the transformation schema, it is extremely important for
performing module partitioning to clearly distinguish modules which control other modules from other
processes, and assign functions accordingly to each. While the processes on the data flow diagram are
processes which manipulate data (processing processes), the “ticket issue control” is a process which
manages call control of each process (control process). By considering these separately, the functions
which are required of the processing processes can be simplified. In other words, each processing
process can concern itself exclusively with performing the functions given in the requirements, leaving
portions other than its own functions, such as when it will be executed, or what function is called next
when the execution ends, etc. up to the control process. By doing this, since the specifications required
by each process become simplified, and their coupling relationship with other processes become
weaker, it is possible to seek the improvement of the ease of understanding the specifications, the ease
of their implementation, and eventually the improvement of their maintainability.

[Subquestion 1]
Considering the entire system as a single process, this system inputs “destination” and
“money,” and outputs “ticket” and, if necessary, “returned money.” The process of converting input
data into output data is divided and assigned to individual processes. When considering the
functions of each process, it is advisable to clarify the output of each process, and identify what
input is necessary to obtain that output. Since this question contains no details regarding the output,
some guesses will be needed to answer it.
First, the input and output of the “issue ticket” process is considered. The output of this
process is the same as the output of the system, namely, a ticket. Exactly which information is
printed on the ticket is unclear, but since Fig. 1 shows that the input data for the “issue ticket”
process is the “destination,” it is assumed that “destination” information is printed on the ticket.
Whether other information, such as the boarding station, fare, etc., is also printed remains unclear,
but this does not affect the answers.
Next, the “return money” process is considered. The output of this process is “returned money.”
Reading [Requirements for the automatic ticketing machine] in the question, the description
concerning the returning of money corresponds to the following description. “When there is
change, it is returned at the same time that the ticket is issued,” and the part of “When cancelled, all
money inserted up to that point is returned.” In other words, “returned money” carries two (2)
meanings: “change” and “money returned on cancellation.” Then, the input information necessary
for obtaining both items of information, as well as the source of the input information is


considered. Both require information concerning the amount of money to return, but the sources of
that information differ. In the case of “cancellation,” “amount inserted,” which is output
information from the “receive money” process, is the source, and in the case of “change,” it is
appropriate to calculate in the “confirm payment” process, the difference between the amount
inserted and the fare from the fare table, and output that amount.
Next, the output information of the “confirm payment” process is the amount of “change” to be
passed to the “return money” process. This process must also have a function to determine whether
the amount of money inserted is equal to or greater than the fare to the destination. Therefore, this
process requires, in addition to the fare table, information concerning the “destination” (from the
“receive destination” process) and “amount inserted” (from the “receive money” process).
Below is a summary of the above. Moreover, the table below also contains, as output, control
information that is not represented in the data flow diagram.

Process name Function outline Input Output


Receive destination Receive and output Destination Notification of
destination destination detection
information Destination
Receive money Receive money Money Amount inserted
Confirm payment * Determine whether Destination Notification of
the amount inserted Amount inserted payment complete
equals or exceeds Fare table Change
the fare for the
destination
* Calculate change
Issue ticket Issues a ticket Destination Notification of ticket
issuance complete
Ticket (destination)
Return money * When there is Change Returned money
change, return Amount inserted
change
* When ticket
purchase is
cancelled, return
amount inserted

[Subquestion 2]
The state transition diagram shows how system state can transition. In this subquestion, a
possible state is represented with a double lined rectangle, and transition from state to state is
represented with an arrow. An event is a trigger that causes a transition of a system state, and is
represented by the words and phrases written above the dotted line arrows in the transformation
schema of Fig. 1. An action corresponds to the instruction statement of the “ticket issue control”
that is caused by the trigger.

• Blank A: This is a state transition that takes place when an event occurs for returning from the
During receipt of money state to the initial state without issuing a ticket. This is the occurrence of
a cancel event. Cancels are accepted until a ticket is issued, and when the “ticket issue control”
receives a “cancel” instruction, it returns the system to its initial state, and issues a “return money
instruction” to the “return money” process.


• Blank B: After receiving money, when an amount of money equal to or greater than the fare has
been inserted, the system transitions to the “During issuance of ticket” state. This is the “payment
complete” event, notified by the “confirm payment” process. It causes the “ticket issue control” to
issue an “issue ticket instruction” to the “issue ticket” process.

• Blank C: After completion of issuing a ticket, the system returns to its initial state. This corresponds
to the part where the “ticket issue control,” upon receiving a notification of “ticket issuance
complete” from the “issue ticket” process, issues a “process start” to the “receive destination”
process.
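The three transitions in Fig. 2 can be summarized as a small table-driven state machine (a sketch; the state and action labels paraphrase the explanation and are not official terms from the question):

```python
# (state, event) -> (next state, instruction issued by "ticket issue control")
TRANSITIONS = {
    ("initial", "destination detected"):
        ("during receipt of money", "start receive money"),
    ("during receipt of money", "cancel"):
        ("initial", "return money instruction"),                    # blank A
    ("during receipt of money", "payment complete"):
        ("during issuance of ticket", "issue ticket instruction"),  # blank B
    ("during issuance of ticket", "ticket issuance complete"):
        ("initial", "process start (receive destination)"),         # blank C
}

def step(state, event):
    return TRANSITIONS[(state, event)]

print(step("during receipt of money", "cancel"))
```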

[Subquestion 3]
While this subquestion requires the processes started in accordance to the actions in Fig. 2 to be
enumerated, since considering only the actions will limit functions, it is advisable to consider in
terms of events that cause those actions without sticking to actions.

• Blank D: The [Return money] action is caused by the “cancel” event. When the “cancel” event
occurs, all inserted money is returned, and selection of a new destination is made possible. The
function which realizes this starts the “return money” process, returns money, and starts the
“receive destination” process in order to return the system to its initial state. No other processes
are started.

• Blank E: The [receive new destination] action is caused by the “ticket issuance complete” event. At
the time of “ticket issuance complete”, the system is put into the initial state and enters into a state
of awaiting the selection of a destination. The function which realizes this is the start of the
“receive destination” process. Other processes are started when other events occur.
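The answer table for blanks D and E amounts to a mapping from events to the processes the control process starts (a sketch using the process names from Fig. 1):

```python
# Event -> processes started (the "1" entries in the answer table).
STARTED_BY_EVENT = {
    "cancel": ["return money", "receive destination"],
    "ticket issuance complete": ["receive destination"],
}

for event, processes in STARTED_BY_EVENT.items():
    print(event, "->", processes)
```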

245
Afternoon Exam Section 13 Technological Elements Answers & Explanations


Section 13 Technological Elements

Q13-1 Differences between IPv4 and IPv6

[Answer]

[Subquestion 1] (a) p (b) o (c) b (d) m

(e) j (f) e (g) f (h) g

(i) c (j) d (k) n (l) q

[Subquestion 2] (1) NAPT

(2) Uniquely identifying nodes by their IP addresses

[Subquestion 3] Processing becomes simpler, allowing routing to be sped up

<Alternate answer> Processing becomes simpler, decreasing load on routers

<Alternate answer> Hardware acceleration becomes easier to implement

[Explanation]
The term IPv6 may cause confusion, but for the most part, the subquestions can be answered with
basic knowledge of IPv4. Currently, Windows Vista uses IPv6 by default. Knowledge of IPv6 will be
essential in the future, so it would be best to learn about it.

[Subquestion 1]
As stated in the question, IP addresses were originally unique identifiers for identifying
individual peers. However, the possibility of IP address exhaustion resulted in the creation of the
concept of private IP addresses, IP addresses used exclusively within an organization. In contrast to
private IP addresses, IP addresses used on the Internet are now called global IP addresses.
Private IP addresses fall into one of the three ranges listed below:

Class A 10.0.0.0 to 10.255.255.255


Class B 172.16.0.0 to 172.31.255.255
Class C 192.168.0.0 to 192.168.255.255

Users can assign these private IP addresses freely. However, addresses would overlap if they
were used across organizations, and would not be unique, so they cannot be used as-is on the
Internet. Therefore, in order to connect to the Internet when using a private IP address, address
translation is necessary.
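The three private ranges listed above can be checked with the standard `ipaddress` module (a sketch; the tested addresses are arbitrary examples):

```python
import ipaddress

# The RFC 1918 private ranges listed above, in CIDR form.
PRIVATE_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

print(is_private("172.31.255.255"))  # True: top of the class B private range
print(is_private("172.32.0.1"))      # False: just outside it
```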


• Blanks A, B: The surrounding description indicates that the addresses in question are usable only
within organizational networks, so the term for blank A is “private.” Accordingly, the term for
blank B is “global.” Thus, the answers for blanks A and B are p) and o), respectively.

• Blank C: The introduction of private IP addresses delayed IP address exhaustion, but did not
fundamentally solve it. Thus, the IPv6 standard was developed as a countermeasure against IP
address exhaustion. The length of IP addresses was expanded significantly, from IPv4’s 32 bits to
128 bits (even if the examinee does not know this, it can be determined by looking at Fig. 2). In
addition, IPv6 was designed to reduce the size of routing tables in routers on the Internet.
Therefore, the answer for blank C is b).

• Blanks D, E: In conjunction with the transition to IPv6, protocols which support IP also change.
ICMP (Internet Control Message Protocol) is, as its name suggests, a protocol for transmitting IP
control messages and error messages. It is used by ping, a network diagnostic program. Thus,
blank D refers to error messages, and blank E to network diagnostic programs such as ping.
Therefore the answer for blank D is m), and the answer for blank E is j).
With the transition to IPv6, ICMP has evolved to become ICMPv6 and has been significantly
enhanced to perform an important function.

• Blank F: IP is a layer 3 (network layer) protocol, and when actually transmitting IP packets, LAN
cable and other layer 2 (data link layer) functions are needed. Address resolution, which performs
the matching of a layer 3 IP address to a layer 2 MAC (Media Access Control) address is
essential. Moreover, MAC addresses are also known as physical addresses.
In IPv4, ARP (Address Resolution Protocol) is used for IP address and MAC address resolution.
When ARP is used, ARP query packets are broadcast (sent to all nodes within the same LAN) to
determine the MAC address of the device with the destination IP address. Therefore, the answer
for blank F is e), “ARP.”
However, when broadcasting, packets are also received by unrelated nodes, placing unnecessary
load on their CPUs. Also, when a switching hub (L2 switch) is used, broadcast LAN packets are
sent to all ports, distributing unnecessary LAN packets over the LAN. Because of such drawbacks,
IPv6 does not perform MAC address resolution by broadcasting, as ARP does.
In IPv6, ICMPv6 is used to resolve MAC addresses. Specifically, ICMPv6’s Neighbor Discovery
Protocol is used. The Neighbor Discovery Protocol consists of multiple messages. When
performing MAC address resolution, Neighbor Solicitation messages and Neighbor
Advertisement messages are used. They function in much the same way as IPv4 ARP query
packets and ARP response packets. The primary difference from ARP is that the problem-prone
broadcasting approach is not used. Instead, multicasting is used. Multicasting is a transmission
method which is used for transmissions to a specific group. Unlike broadcasting, it is a
network-friendly approach that does not transmit packets to all nodes, to avoid placing undue load


on unrelated nodes or LAN circuits.

• Blank G: IPv6 has adopted plug-and-play functionality (functions which operate automatically just
by plugging in a cable), and has functions for automatically configuring IP addresses.
IPv4 also allows automatic configuration of IP addresses using DHCP (Dynamic Host
Configuration Protocol). Therefore, the answer for blank G is f). When using DHCP, IP addresses
are leased from a DHCP server, but this also involves a significant amount of broadcasting.
In IPv6, the ICMPv6 Neighbor Discovery Protocol is used to discover routers. Information such
as the network address section of the router’s IP address is received, and the IPv6 host itself
generates its own IPv6 address. This is performed using the Router Solicitation and Router
Advertisement messages of the Neighbor Discovery Protocol.

• Blanks H through J: IPv6, like IPv4, must perform the translation to the network layer IP address
from the host names and domain names included in application layer addresses (URLs, e-mail
addresses, etc.). This is made possible by DNS (Domain Name System).
DNS servers must be configured with information called resource records in order to respond to
DNS queries. Resource records contain multiple types of data, the most important of which is IP
address information. IPv4 address information is set in A (address) records. IPv4 addresses are 32
bits long, while IPv6 addresses are four times longer, with 128 bits. As such, IPv6 address
information is set in records called AAAA (address) records.
Therefore, the answers for blanks H, I, and J are g), c), and d), respectively.
For the purpose of reference, typical resource information configured in DNS are listed below.

A record: IPv4 host address
AAAA record: IPv6 host address
CNAME record: Alias for a host
MX record: E-mail server
NS record: DNS server for the domain
PTR record: For reverse lookup (determining domain names from IP addresses)
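The 32-bit versus 128-bit distinction behind A and AAAA records can be confirmed with the standard `ipaddress` module (the sample addresses are documentation-range examples):

```python
import ipaddress

# Address lengths that A (IPv4) and AAAA (IPv6) records carry.
print(ipaddress.IPV4LENGTH)   # 32
print(ipaddress.IPV6LENGTH)   # 128

v4 = ipaddress.ip_address("192.0.2.1")     # IPv4: goes in an A record
v6 = ipaddress.ip_address("2001:db8::1")   # IPv6: goes in an AAAA record
print(v4.version, v6.version)
```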

• Blank K: This may appear difficult given the IPv4 and IPv6 header structure diagrams, but it is
actually simple. As can be seen from Fig. 1, optional fields can be used in IPv4 headers, so the
length of a header can vary. Padding refers to stuffing used to align the length of the IP header to
a multiple of 32 bits. Therefore, the appropriate answer for blank K is n), “option.”

• Blank L: Comparing Figs. 1 and 2, looking for fields which are present in IPv4 headers but absent
from IPv6 headers, there are length of the IP header, flag, fragment offset, header checksum,
option, or padding, etc. The surrounding descriptions indicate that blank L is a field which is used
to check for header errors, which corresponds with a header checksum. Therefore, the correct
answer is q).
IPv6 uses a fixed length header, so there are no option or padding fields, and the length of the IP header


is unnecessary. The IPv6 header also eliminates the identifier, flag, and fragment offset fields.
These fields are used for IP packet fragmentation, but IPv6 is designed to generally avoid
fragmentation. (Explained later)
Moreover, the IPv4 TOS (Type of Service) field is used for QoS (Quality of Service) functions. In
IPv6, QoS functionality has been enhanced, and represented in the traffic class and flow label
fields. The payload length field indicates the length of the data section of the packet, and its value
is equal to the size of the entire packet minus the length of the IPv6 base header.

[Subquestion 2]
(1) Private IP addresses cannot be used on the Internet, so they must be translated to global IP
addresses. NAT (Network Address Translation) is a one-to-one translation of private IP addresses to
global IP addresses. NAPT (Network Address Port Translation) is a many-to-one address
translation. This question concerns many-to-one address translation, so the answer is NAPT.

(2) IP addresses are unique identifiers for identifying communication peers in IP communication.
Therefore, when an IP address is rewritten by address translation, it means the identifier has
changed. When NAPT is used, in particular, multiple private IP addresses are translated into a
single global IP address, so the IP address alone cannot be used to identify source hosts. In other words,
from the Internet end, it is impossible to see who the communicating peer is on the private end.
This twists the original concept of IP, which is to use IP addresses to perform end-to-end
communications, and it results in many problems. For example, Web (http) communications cannot
use IP addresses to identify individual clients, so alternative methods, such as cookies, etc. are used
for identification.
Problems such as this occur because, though IP addresses were originally meant to uniquely
identify individual nodes (terminals, transmission equipment), they cannot be used as intended, due
to address translation such as NAT.
The question asks the examinee to identify “something which was possible with the original IP
communication approach,” so the answer is “uniquely identifying nodes by their IP addresses.”
Moreover, the first sentence also provides a hint.
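The many-to-one translation that causes this problem can be sketched as a NAPT table (hypothetical addresses and port numbers; real implementations also track the protocol and the remote peer):

```python
# Many private (ip, port) pairs share one global IP address and are told
# apart on the Internet side only by the translated port number.
GLOBAL_IP = "203.0.113.1"   # documentation-range stand-in for the global address
napt_table = {}
next_port = 50000

def translate(private_ip, private_port):
    global next_port
    key = (private_ip, private_port)
    if key not in napt_table:               # allocate a new global port
        napt_table[key] = (GLOBAL_IP, next_port)
        next_port += 1
    return napt_table[key]

a = translate("192.168.0.10", 1234)
b = translate("192.168.0.11", 1234)
print(a, b)   # same global IP, different ports
```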

[Subquestion 3]
This question asks the advantages of structuring the length of the IP headers to a fixed length.
As the IPv4 header structure diagram in Fig. 1 and the IPv6 header structure diagram in Fig. 2
show, the lengths of IPv4 headers vary, while IPv6 headers have a fixed length.
Normally, the option field is not added to an IPv4 header, so its length is usually 20 bytes.
However, in principle, it is variable, and programs must first read the IP header length field at the
start of the IPv4 header, and using loop operations, sequentially read the data for the number of
bytes indicated in the length field, and store the data into the variables for each data field. In


addition, in the case of IPv4, checksums must be calculated based on the read data, and compared
against the value of the checksum field to verify that the header data has not been damaged. As the
question states, the IPv4 TTL (Time To Live) is decremented by one each time it passes through a
router. The TTL value is changed each time it passes through a router, requiring laborious
checksum recalculation. Furthermore, in IPv4’s case, IP packet fragmentation is also frequently
used, splitting each individual IP packet into multiple IP packets. This is another factor that slows
down IPv4 processing. These are all examples of how IPv4 IP header processing is troublesome
and inefficient.
On the other hand, IPv6 headers have a fixed length of 40 bytes, making processing simple. For
example, when using the C programming language, structures containing members matching the
structure of the header can be used to read in 40 bytes of data. Also, IPv6 does not use header
checksums, so there is no need to calculate checksums, reducing CPU load.
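The fixed 40-byte layout means the base header can be parsed in a single pass; a sketch using Python's `struct` module (an all-zero placeholder header stands in for real packet data):

```python
import struct

# IPv6 base header layout: 4 bytes version/traffic class/flow label,
# 2 bytes payload length, 1 byte next header, 1 byte hop limit,
# then the two 128-bit addresses.
header = bytes(40)   # placeholder 40-byte base header
ver_tc_flow, payload_len, next_header, hop_limit = struct.unpack_from("!IHBB", header)
src = header[8:24]   # 128-bit source address
dst = header[24:40]  # 128-bit destination address
print(payload_len, len(src), len(dst))
```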
Note that IPv6 uses a fixed length header with no option fields. Instead, extension headers can
be used. The header shown in Fig. 2 is what is called an IPv6 base header. When necessary, it can
be followed by extension headers.

No extension header: the packet is [IPv6 base header | payload], and the Next header field of the
base header indicates the upper layer protocol.
With extension headers: the packet is [IPv6 base header | extension header | extension header |
payload]; each Next header field indicates the next extension header, and the Next header field of
the last extension header indicates the upper layer protocol.

Fig. IPv6 extension header

Extension headers include IPsec Authentication Headers (AH), Encapsulating Security Payload
(ESP) headers, etc. In IPv6, an extension header is also attached for fragmentation. Thus, while in IPv4
there was an upper layer protocol field, in IPv6, it is replaced with a field called the Next header
field. If an extension header follows the base header, this field contains the number that indicates
the next extension header. If there is no extension header, this field contains the number of the
upper layer protocol.
In this way, use of fixed length headers, elimination of checksums, etc. has simplified
processing in IPv6. This processing simplification also reduces the burden on router CPUs. This
makes it possible to increase the speed of IP packet routing processes. Also, the fixed length makes
it possible to perform processing with hardware, also contributing to increased speed.
Therefore, acceptable answers would be answers such as “Processing becomes simpler,


allowing routing to be sped up,” “Processing becomes simpler, decreasing load on routers,” or
“Hardware acceleration becomes easier to implement.”
Moreover, the lack of header checksums in IPv6 might cause concerns regarding reliability, but
upper layer protocols such as TCP and UDP also contain checksums. Their checksum calculation
covers a pseudo-header that includes important information from the IP header, such as the IP
addresses. In IPv4, IP header checking is performed at the IP level, and IP addresses, etc. are
checked again at the upper layer TCP and UDP level. Thus, the same information is checked
twice, which is inefficient. With IPv6, since bit errors in the payload section are checked at the
upper layers, it was decided not to perform such checking at the IP layer.

Q13-2 IP addresses and routing

[Answer]

[Subquestion 1] (a) discard (or nondelivery, or error) (b) looping

[Subquestion 2] (c) 1xx.64.10.0 (d) 1xx.64.10.127

[Subquestion 3] (e) 1xx.64.10.32/27 (f) 1xx.64.10.33

Maximum value: 29

[Subquestion 4] (g) 1xx.64.10.16/28 (h) 1xx.64.10.17

[Explanation]
This question concerns IP packets, IP addresses, and IP routing. Knowledge on IP is necessary for
understanding IP packet filtering on routers or firewalls (FW), and is an important topic that lays the
foundation of security issues.
IP packets are composed of a header section and a data section, as shown in Fig. 1. The header
section is composed of fields, such as source IP address, destination IP address, etc.


An IP packet consists of an IP header section (20 bytes) followed by a data section (payload).
Among the header fields are TTL (8 bits), the source IP address (32 bits), and the destination IP
address (32 bits). IP headers contain a variety of information, but only the fields needed to answer
the question are listed here.

Fig. 1 IPv4 packet structure

[Subquestion 1]
Answering this question requires some knowledge on IP headers.
The Time-To-Live (TTL) value in the IP header is decremented by one (or, in some situations,
more than one) each time the IP packet is routed (forwarded) by a router. This is a mechanism to
prevent IP packets from looping, where IP packets are transmitted to the wrong destination and
continually passed in an infinite loop, due to faults in routing tables or router software bugs. When
packets such as this exist on the Internet, they waste resources such as communication bandwidth,
etc. Therefore, packets are provided with a TTL (Time-To-Live), and when this value reaches zero,
the packets are discarded by routers. When a router discards a packet, it uses ICMP (Internet
Control Message Protocol) to send a Time Exceeded (packet discarded due to exceeded TTL) error
message to the source host. ICMP is responsible for the notification of IP transmission control
messages and error messages, and can be considered as a protocol that supports IP. ICMP is also
the protocol used by the ping command, which is often used for connectivity test on IP networks.
Therefore, the answer for blank A is either “discard,” “nondelivery,” or “error,” and the answer
for blank B is “looping.”
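The TTL mechanism described above can be sketched as follows (a simplified model; a real router also returns the ICMP message to the source host rather than to the caller):

```python
# Each router decrements TTL; at zero the packet is discarded and an
# ICMP Time Exceeded error is reported.
def forward(ttl):
    ttl -= 1
    if ttl <= 0:
        return None, "ICMP Time Exceeded"   # packet discarded
    return ttl, None

ttl, hops, error = 3, 0, None
while ttl is not None:
    ttl, error = forward(ttl)
    hops += 1
print(hops, error)   # discarded at the third router
```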

[Subquestion 2]
IPv4 IP addresses are 32 bits long, composed of a network address section, followed by a host
address section. The information used to identify the network address and the host address that
follows, is the subnet mask.
Originally, IP addresses were divided into conceptual units called classes. The length of the
network address section was determined by the class, such as class A, class B, or class C, so
subnet masks were not necessary.


As Fig. 2a shows, the network address section of class A addresses is composed of the first 8
bits, for class B addresses, the first 16 bits, and for class C addresses, the first 24 bits. The section
following the network address section of the IP address is the host address section, so the host
address sections of class A, class B, and class C addresses are 24 bits, 16 bits, and 8 bits long,
respectively. The bit length of the host address section determines how many devices (hosts) could
be connected to the network.
For class A addresses, the host address section is 24 bits long, so there are 2^24 = 16,777,216
possible combinations. However, the address in which all bits of the host address section are 1 is
called a broadcast address, and cannot be assigned to a host.
Broadcast refers to transmissions to all hosts within the same network (all hosts whose IP
addresses share the same network section) (this corresponds with (4) of the [IP address assignment
criteria within the subnet of Company S’s corporate LAN] section of the question, which can serve
as a hint to examinees whose knowledge regarding broadcast addresses may be vague). An IP
address in which all bits of the host address section are 0 is used to refer to the entire network
whose hosts' IP addresses share the same network address section. This address cannot be assigned
to a host, either (this corresponds with (1)).
Therefore, there are two addresses which cannot be assigned to hosts, so the maximum number
of connectible class A hosts is 16,777,216 − 2 = 16,777,214. However, it is hard to imagine any
actual IP network containing 16,777,214 hosts. Class B addresses have a 16-bit host address
section, resulting in 2^16 = 65,536 possible combinations, and thus a maximum of 65,534
connectible hosts, which is again very large. Therefore, both class A and class B addresses contain
many unassigned, unused addresses, resulting in a waste of valuable IP addresses.
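The per-class host counts above can be checked with a few lines of Python (a sketch for study purposes, not part of the exam material):

```python
# Maximum assignable hosts per class: 2**host_bits combinations, minus the
# all-zeros (network) address and the all-ones (broadcast) address.
for name, host_bits in [("Class A", 24), ("Class B", 16), ("Class C", 8)]:
    print(f"{name}: {2**host_bits - 2:,} hosts")
```

Running it prints 16,777,214 for class A, 65,534 for class B, and 254 for class C, matching the figures above.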

IPv4: 32 bits long

            Network address section | Host address section
  Class A          8 bits           |       24 bits          Number of hosts: 2^24 − 2 (maximum of 16,777,214 hosts)
  Class B         16 bits           |       16 bits          Number of hosts: 2^16 − 2 (maximum of 65,534 hosts)
  Class C         24 bits           |        8 bits          Number of hosts: 2^8 − 2 (maximum of 254 hosts)

Bit lengths of the network address section and the host address section are predefined.

Fig. 2a  Number of hosts in each IPv4 address class

Conversely, the host address section of a class C address is 8 bits long, resulting in 2^8 = 256
possible combinations, for a maximum of only 254 connectible hosts. This number may be


insufficient for the networks of mid-sized organizations. The number of assignable class C hosts
may also be too large for small-scale organizations. This method of dividing IP address space into
classes results in wasted IP addresses and a lack of usable IP addresses. To resolve these situations,
a concept emerged in which large IP networks could be broken down into multiple smaller subnets
(subnetworks), making effective IP address allocation possible. This is called subnetting.
When subnets were first envisioned, they were meant to be used to partition large IP networks
defined by classes into multiple subnets, but now the concept of classes itself is often ignored, and
classless address allocation is adopted. This is called CIDR (Classless Inter-Domain Routing) (Fig.
2b). With CIDR, a subnet mask or prefix is necessary to indicate the length of the network address
section.

A subnet mask (255.255.255.128) or prefix (/25) is used to flexibly structure network address
sections and host address sections of IP addresses without the need to adhere to classes.

Classless:
    Network address section | Host address section
    ?                       | ?
    -> The bit length of the network address section is unknown on its own.

With a subnet mask:
    Network address section   | Host address section
    1111111111111111111111111 | 0000000
    (25 bits)                 | (7 bits)
    Ex. 255.255.255.128 (can also be designated as /25)
    -> The bit length of the network address section can be deduced from the subnet mask.

Fig. 2b  The concept of classless addressing

To explain using the IP addresses in the question, Company S’s corporate LAN has been
allocated 1xx.64.10.0/25. The “/25” portion of this address is the prefix, and indicates the
length of the subnet mask. In this case, it indicates that the first 25 bits of the 32-bit IP address are
all ones. Written in binary, this subnet mask is:
1111 1111 1111 1111 1111 1111 1000 0000
Divided into groups of eight bits for better legibility, this is:
1111 1111.1111 1111.1111 1111.1000 0000
If each eight bit section is then represented in base 10, the address becomes:
255.255.255.128
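The hand conversion above can be sketched in Python. The function name prefix_to_mask is ours, introduced only for this illustration:

```python
# Expand a prefix length into a dotted-decimal subnet mask, mirroring the
# manual steps above: /25 -> 25 one-bits followed by 7 zero-bits.
def prefix_to_mask(prefix: int) -> str:
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF  # set the top `prefix` bits
    return ".".join(str((mask >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_mask(25))  # 255.255.255.128
print(prefix_to_mask(29))  # 255.255.255.248
```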


Taking the bitwise logical product (AND) of the IP address and the subnet mask yields the
network address section.

      1xx. 64. 10.  0  =  XXXX XXXX.0100 0000.0000 1010.0000 0000
AND  255.255.255.128  =  1111 1111.1111 1111.1111 1111.1000 0000
-----------------------------------------------------------------
                         XXXX XXXX.0100 0000.0000 1010.0000 0000

Converting this result into base 10 produces 1xx.64.10.0 (blank C).


The broadcast address is that which contains all ones in the host address section, so the answer
to blank D can be determined as shown below.
XXXX XXXX.0100 0000.0000 1010.0111 1111=1xx.64.10.127
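The same network and broadcast addresses can be obtained with Python's standard library. The question masks the first octet as "1xx"; 192 is used below purely as a runnable placeholder, while the last three octets match the question:

```python
import ipaddress

# Network address (cf. blank C) and broadcast address (cf. blank D)
# for the /25 block allocated to Company S's corporate LAN.
net = ipaddress.ip_network("192.64.10.0/25")
print(net.network_address)    # 192.64.10.0
print(net.broadcast_address)  # 192.64.10.127
```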

[Subquestion 3]
First, let’s look at Tables 1 through 3 in the text and consider the answers to this subquestion.
Tables 1 through 3 contain routing tables. The question text provides two hints: “The
destination host’s IP address contained in the IP packet is compared against the IP address network
address sections set in hosts on the network, and routing tables set in network routers, in order to
determine which router to transfer the packet to” and “This process is repeated until the packet
arrives at the destination host.” Routing tables are also called route control tables, and consist of
lists that indicate to which ports or routers to forward traffic destined for a given destination IP
address.
Looking at the figure, it is apparent that traffic destined for subnet A from a network other than
subnet A must pass through router A (1xx.64.10.1). Referring to Tables 1 through 3, one finds that
destination 1xx.64.10.1 corresponds to the fourth row of Table 2, and the corresponding IP address
is 1xx.64.10.32/27. This means that “to reach 1xx.64.10.32/27, transfer traffic to 1xx.64.10.1 next.”
Table 2 is router B’s routing table, so because, from router B’s perspective, traffic should be
transferred to 1xx.64.10.1 (router A), behind router A is subnet A. Therefore, one can conclude that
subnet A’s address is 1xx.64.10.32/27. The fourth row of Table 3 shows the same thing, so e) is
1xx.64.10.32/27.
Next, one finds that 1xx.64.10.32/27 corresponds to the fourth row on router A’s routing table
(Table 1), and the destination is 1xx.64.10.33. This means that from router A “to reach
1xx.64.10.32/27 (subnet A), transfer traffic to 1xx.64.10.33 next.” Therefore, router A’s subnet A
side is 1xx.64.10.33, so f) is 1xx.64.10.33.
These answers can also be determined without referring to Tables 1 through 3. Let’s consider
how, in order to gain a better understanding of IP addresses. Company S’s corporate LAN has been
allocated 1xx.64.10.0/25, but one can determine based on the network configuration shown in the
figure that it must be further subdivided as follows.


Backbone (core) network  1xx.64.10.0/29
Subnet A                 [ E ]
Subnet B                 1xx.64.10.64/26
Subnet C1                1xx.64.10.16/28
Subnet C2                1xx.64.10.8/29

1xx.64.10.0/29 has been allocated to the backbone network, and routers are connected to it.
Expressing the last eight bits in binary, this means:

Router A  1xx.64.10. 0000 0001
Router B  1xx.64.10. 0000 0010
Router C  1xx.64.10. 0000 0011
Router D  1xx.64.10. 0000 0100

The prefix is “/29,” so the length of the host address section is 32 bits – 29 bits = 3 bits. In other
words, the host address section consists of the last three bits. Writing out the entire backbone (core)
network 1xx.64.10.0/29 address space produces the following:

0000 0000  1xx.64.10.0  Subnet address (indicates the entire network)
0000 0001  1xx.64.10.1  Assigned to Router A
0000 0010  1xx.64.10.2  Assigned to Router B
0000 0011  1xx.64.10.3  Assigned to Router C
0000 0100  1xx.64.10.4  Assigned to Router D
0000 0101  1xx.64.10.5  Unused
0000 0110  1xx.64.10.6  Unused
0000 0111  1xx.64.10.7  Broadcast address

Next, let us consider subnet C2’s 1xx.64.10.8/29. Its prefix is “/29,” so the last three bits (32
bits – 29 bits = 3 bits) are the host address section.

0000 1000  1xx.64.10.8   Subnet address (indicates the entire network)
0000 1001  1xx.64.10.9   This is the lowest address, so it is assigned to router C’s subnet C2 side.
0000 1010  1xx.64.10.10  C201  The remainder is assigned to hosts in order.
0000 1011  1xx.64.10.11  C202
0000 1100  1xx.64.10.12  C203
0000 1101  1xx.64.10.13  C204
0000 1110  1xx.64.10.14  C205
0000 1111  1xx.64.10.15  Broadcast address
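The ipaddress module can enumerate subnet C2 directly (192 again stands in for the masked "1xx" octet):

```python
import ipaddress

# Subnet address, assignable host range, and broadcast address of subnet C2.
c2 = ipaddress.ip_network("192.64.10.8/29")
print(c2.network_address)     # 192.64.10.8  (subnet address)
hosts = list(c2.hosts())      # the six assignable addresses, .9 through .14
print(hosts[0], hosts[-1])    # 192.64.10.9 192.64.10.14
print(c2.broadcast_address)   # 192.64.10.15 (broadcast address)
```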

Next, let us consider subnet C1’s 1xx.64.10.16/28. Its prefix is “/28,” so the last four bits (32
bits – 28 bits = 4 bits) are the host address section.

0001 0000  1xx.64.10.16  Subnet address (indicates the entire network)
0001 0001  1xx.64.10.17  This is the lowest address, so it is assigned to router C’s subnet C1 side.
0001 0010  1xx.64.10.18  C101  The remainder is assigned to hosts in order.
0001 0011  1xx.64.10.19  C102
0001 0100  1xx.64.10.20  C103
0001 0101  1xx.64.10.21  C104
0001 0110  1xx.64.10.22  C105
0001 0111  1xx.64.10.23  C106
0001 1000  1xx.64.10.24  C107
0001 1001  1xx.64.10.25  C108
0001 1010  1xx.64.10.26  C109
0001 1011  1xx.64.10.27  C110
0001 1100  1xx.64.10.28  Unused
0001 1101  1xx.64.10.29  Unused
0001 1110  1xx.64.10.30  Unused
0001 1111  1xx.64.10.31  Broadcast address

Next, let us consider subnet B’s 1xx.64.10.64/26. Its prefix is “/26,” so the last six bits (32 bits
– 26 bits = 6 bits) are the host address section.


0100 0000  1xx.64.10.64   Subnet address (indicates the entire network)
0100 0001  1xx.64.10.65   This is the lowest address, so it is assigned to router B’s subnet B side.
0100 0010  1xx.64.10.66   B01  The remainder is assigned to hosts in order.
0100 0011  1xx.64.10.67   B02
    :           :          :
0111 0011  1xx.64.10.115  B50
0111 0100  1xx.64.10.116  Unused
    :           :          :
0111 1110  1xx.64.10.126  Unused
0111 1111  1xx.64.10.127  Broadcast address

Considering, in binary, only the last eight bits of the IP address range which can be assigned to
subnet A, the following range is available.

0010 0000 (1xx.64.10.32)  through  0011 1111 (1xx.64.10.63)

The host address section is five bits long, so the network address section is 27 bits (32
bits – 5 bits = 27 bits), and the subnet address is 1xx.64.10.32/27.
Now that the subnet address has been determined, let us consider allocation in subnet A.

0010 0000  1xx.64.10.32  Subnet address (indicates the entire network)
0010 0001  1xx.64.10.33  This is the lowest address, so it is assigned to router A’s subnet A side.
0010 0010  1xx.64.10.34  A01  The remainder is assigned to hosts in order.
0010 0011  1xx.64.10.35  A02
    :           :         :
0011 1110  1xx.64.10.62  ?? (because the number of hosts is unknown)
0011 1111  1xx.64.10.63  Broadcast address

Therefore, blank E is 1xx.64.10.32/27, and blank F is 1xx.64.10.33.


The maximum value, n, is the total number of combinations minus the subnet address and
broadcast address, thus 32 – 2 = 30, minus router A itself, and therefore 29.
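The capacity arithmetic for subnet A can be confirmed with the standard library (192 stands in for the masked "1xx" octet):

```python
import ipaddress

# A /27 block holds 32 addresses; subtracting the subnet address and the
# broadcast address leaves 30, and excluding router A's own port leaves 29.
subnet_a = ipaddress.ip_network("192.64.10.32/27")
usable = subnet_a.num_addresses - 2
print(usable)      # 30
print(usable - 1)  # 29, the maximum value of n
```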


Fig. 3 shows the network configuration with the IP addresses that have been determined.

Subnet A   1xx.64.10.32/27  Hosts A01 ... An (n hosts) on hub A; router A’s subnet A port [F] = 1xx.64.10.33;
                            assignable 1xx.64.10.33 through 1xx.64.10.62; broadcast 1xx.64.10.63
Subnet B   1xx.64.10.64/26  50 hosts B01 ... B50 on hub B; router B’s subnet B port 1xx.64.10.65;
                            assignable 1xx.64.10.65 through 1xx.64.10.126; broadcast 1xx.64.10.127
Backbone (core) network  1xx.64.10.0/29
                            Router A 1xx.64.10.1, router B 1xx.64.10.2, router C 1xx.64.10.3, router D 1xx.64.10.4;
                            assignable 1xx.64.10.1 through 1xx.64.10.6; broadcast 1xx.64.10.7
Subnet C1  1xx.64.10.16/28  10 hosts C101 ... C110 on hub C1; router C’s subnet C1 port 1xx.64.10.17;
                            assignable 1xx.64.10.17 through 1xx.64.10.30; broadcast 1xx.64.10.31
Subnet C2  1xx.64.10.8/29   Hosts C201 ... C205 on hub C2; router C’s subnet C2 port 1xx.64.10.9;
                            assignable 1xx.64.10.9 through 1xx.64.10.14; broadcast 1xx.64.10.15

Fig. 3  Network configuration

[Subquestion 4]
A router’s routing (IP relay) function extracts the destination IP address from the IP header of a
received IP packet, looks it up in its routing table to decide where to forward the packet, and
relays it there. Moreover, as the question text states, only the network address section of an IP
address is used for routing.
First, let us consider router A’s routing table (Fig. 4a). Router A connects the backbone network
(1xx.64.10.0/29) and subnet A (1xx.64.10.32/27).


IP address        Destination   Meaning of routing table entry
1xx.64.10.8/29    1xx.64.10.3   Transfer packets whose destination is 1xx.64.10.8 (subnet C2) to 1xx.64.10.3 (router C).
[ G ]             1xx.64.10.3
1xx.64.10.0/29    1xx.64.10.1   Transfer packets whose destination is 1xx.64.10.0 (backbone network) to 1xx.64.10.1 (router A’s backbone side).
1xx.64.10.32/27   1xx.64.10.33  Transfer packets whose destination is 1xx.64.10.32 (subnet A) to 1xx.64.10.33 (router A’s subnet A side).
1xx.64.10.64/26   1xx.64.10.2   Transfer packets whose destination is 1xx.64.10.64 (subnet B) to 1xx.64.10.2 (router B).
0.0.0.0/0         1xx.64.10.4   Default route: transfer all other packets to router D (external network).

Fig. 4a  Router A routing table

The IP address in the first row is 1xx.64.10.8/29, which corresponds to subnet C2. It indicates
that traffic whose destination is C2 should be sent to router C (1xx.64.10.3) from router A via the
backbone network.
The IP address in the second row is blank G. Its destination is 1xx.64.10.3, which is router C.
When going from router A through router C, one arrives at either subnet C1 or subnet C2. Subnet
C2 is listed in the first row, so the second row must be the path to subnet C1. Subnet C1 is
1xx.64.10.16/28, so blank G should be 1xx.64.10.16/28.
The IP address in the third row is 1xx.64.10.0/29 (the backbone (core) network). This indicates
that packets whose destination is the backbone network should be directed to 1xx.64.10.1 (router
A’s backbone side).
The IP address in the fourth row is 1xx.64.10.32/27 (subnet A). This indicates that packets
whose destination is subnet A should be directed to 1xx.64.10.33 (router A’s subnet A side).
The IP address in the fifth row is 1xx.64.10.64/26 (subnet B). This indicates that packets whose
destination is subnet B should be directed to 1xx.64.10.2 (router B).
The IP address in the sixth row is 0.0.0.0/0, which is a special route called the default route. It
indicates where a packet is to be sent to when its destination IP address cannot be found in the
routing table. The routing table is read from the top to the bottom, so if no rows are found to match
a packet, that packet is sent to the destination (normally the router connected to the outside – that
is, the Internet) listed as the default route in the last row. In the network described in the question,
router D (1xx.64.10.4) is connected to the external network, so the sixth row of the routing table


indicates that if destination IP addresses in packets do not match any of rows one through five, the
packets should be sent to router D (1xx.64.10.4). In Windows, the default route is called the default
gateway.
Moreover, blank G in router B’s routing table (Fig. 4b) contains the same IP address as blank G
in router A’s routing table.

IP address        Destination    Meaning of routing table entry
1xx.64.10.8/29    1xx.64.10.3    Transfer packets whose destination is 1xx.64.10.8 (subnet C2) to 1xx.64.10.3 (router C).
[ G ]             1xx.64.10.3
1xx.64.10.0/29    1xx.64.10.2    Transfer packets whose destination is 1xx.64.10.0 (backbone network) to 1xx.64.10.2 (router B’s backbone side).
1xx.64.10.32/27   1xx.64.10.1    Transfer packets whose destination is 1xx.64.10.32 (subnet A) to 1xx.64.10.1 (router A).
1xx.64.10.64/26   1xx.64.10.65   Transfer packets whose destination is 1xx.64.10.64 (subnet B) to 1xx.64.10.65 (router B’s subnet B side).
0.0.0.0/0         1xx.64.10.4    Default route: transfer all other packets to router D (external network).

Fig. 4b  Router B routing table

In the same way, let us consider router C’s routing table (Fig. 4c).
The IP address which corresponds to blank H is 1xx.64.10.16/28. This corresponds to subnet
C1, so to transmit packets from router C to subnet C1, the packets must be sent to the subnet C1
side of router C. Thus, blank H is 1xx.64.10.17.

IP address        Destination   Meaning of routing table entry
1xx.64.10.8/29    1xx.64.10.9   Transfer packets whose destination is 1xx.64.10.8 (subnet C2) to 1xx.64.10.9 (router C’s subnet C2 side).
1xx.64.10.16/28   [ H ]
1xx.64.10.0/29    1xx.64.10.3   Transfer packets whose destination is 1xx.64.10.0 (backbone network) to 1xx.64.10.3 (router C’s backbone side).
1xx.64.10.32/27   1xx.64.10.1   Transfer packets whose destination is 1xx.64.10.32 (subnet A) to 1xx.64.10.1 (router A).
1xx.64.10.64/26   1xx.64.10.2   Transfer packets whose destination is 1xx.64.10.64 (subnet B) to 1xx.64.10.2 (router B).
0.0.0.0/0         1xx.64.10.4   Default route: transfer all other packets to router D (external network).

Fig. 4c  Router C routing table


Q13-3 Communications networks

[Answer]

[Subquestion 1] Cable: (3)

Traffic volume: 2.9Mbps

[Subquestion 2] (1) Server name: Logistics management server

Device name: Switching hub

(2) b

(3) Server room

[Subquestion 3] (1) (i) IP address

(ii) Default gateway address

(2) DHCP

[Subquestion 4] IP address translation function

<Alternative answers>

NAT, NAPT, IP masquerade

[Explanation]
This question addresses basic knowledge regarding communications networks, especially a
company-wide LAN. It requires an understanding of the differences between switching hubs and
repeater hubs, and the basics of IP address allocation. TCP/IP and Internet related technologies span a
wide range, so the fundamental topics must be learned well before taking the test.

[Subquestion 1]
The volume of traffic from each department to the logistics management server is shown below.

Sales department          Upstream:   20 × 10^3 bps × 30 units = 0.6 Mbps
                          Downstream: 10 × 10^3 bps × 30 units = 0.3 Mbps
Technical department      Upstream:   20 × 10^3 bps × 15 units = 0.3 Mbps
                          Downstream: 10 × 10^3 bps × 15 units = 0.15 Mbps
Manufacturing department  Upstream:   20 × 10^3 bps × 50 units = 1 Mbps
                          Downstream: 10 × 10^3 bps × 50 units = 0.5 Mbps

In the case of TCP/IP communications, connections are established between PCs and servers


and one-to-one communications are performed. When traffic is passed through switching hubs,
data is not transmitted to places where it is not needed. Therefore, when accessing the logistics
management server, only sales department, technical department, and manufacturing department
traffic is passed to segments (1), (2), (4), and (5). Traffic through the repeater hub is half duplex, so
the total traffic volume is equal to the amount of upstream traffic plus the amount of downstream
traffic, as below.

(1) Sales department               0.6M + 0.3M = 0.9 Mbps
(2) Technical department           0.3M + 0.15M = 0.45 Mbps
(4), (5) Manufacturing department  1M + 0.5M = 1.5 Mbps

Moreover, all PCs access the logistics management server, so the amount of traffic on (3) is
the sum of all the traffic volumes, as below.

(3) = (1) + (2) + (4)  0.9M + 0.45M + 1.5M = 2.85 Mbps ≈ 2.9 Mbps

The transmission rate is 10Mbps, so the utilization rate in (3) is 2.9M / 10M × 100 = 29%, just
under the 30% threshold given in the question text as the point beyond which delay increases at an
accelerated rate. This means that a small increase in traffic may result in sudden performance
decreases.
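The whole calculation can be written out in a few lines of Python (a study sketch; the per-PC rates and PC counts are those given in the question):

```python
# Each PC sends 20 kbps upstream and receives 10 kbps downstream from the
# logistics management server. Repeater-hub segments are half duplex, so both
# directions share the same cable.
per_pc_kbps = 20 + 10
segment_kbps = {
    "(1) sales, 30 PCs":            30 * per_pc_kbps,  # 900 kbps
    "(2) technical, 15 PCs":        15 * per_pc_kbps,  # 450 kbps
    "(4)(5) manufacturing, 50 PCs": 50 * per_pc_kbps,  # 1500 kbps
}
cable3_kbps = sum(segment_kbps.values())  # all traffic converges on cable (3)
print(cable3_kbps / 1000)                 # 2.85 Mbps, i.e. roughly 2.9 Mbps
print(cable3_kbps / 10_000 * 100)         # 28.5% of the 10 Mbps link
```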

[Subquestion 2]
(1) One of the main differences between repeater hubs and switching hubs is that repeater hubs
send data received on one port out to all ports, while switching hubs do not send traffic out
onto ports which do not need it (they have a filtering function). This means that switching
hubs can make effective utilization of LAN transmission capabilities.
From Subquestion 1, the highest amount of traffic is experienced by the logistics
management server, so traffic to the logistics management server should be separated from
traffic to other servers. The switching hub still has unused ports, so the logistics management
server should be connected to an open port on the switching hub.

(2) The flow of packets is segmented at the switching hub for packets passing through cables
when PCs at the technical department access the Web, so the cables essentially consist of (2)
and (4) (more accurately, packets pass through (3) when performing domain name resolution
for URLs, but this has little impact on this question, so it does not need to be taken into
consideration).
If the new direct access cable from the logistics management server to the switching hub,
added in (1), is given cable number (6), the traffic of options a) through e) would be each
carried by the cables listed below:


a) (1), (4)
b) (1), (6)
c) (2), (3)
d) (3), (4), (5)
e) (4), (5), (6)

Therefore the only transmissions that do not pass through either cable (2) or (4) are those in b).

(3) By changing to a switching hub, throughput (amount of traffic passed per unit time) can be
improved. The reason for this, as explained in (1), is that “switching hubs do not send traffic
out onto ports which do not need it,” and that full duplex transmission is possible.
The subquestion concerns which existing repeater hub should be replaced with a switching
hub, so the most effective approach would be to replace the repeater hub in the server room,
where all traffic is concentrated, with a switching hub. In this question, traffic is
concentrated on the server group, so the volume of traffic on (3) itself cannot be decreased.
However, cable (3) can then offer 100Mbps transmission and full duplex operation, reducing
the impact of network traffic on transmission performance.

[Subquestion 3]
(1) Four TCP/IP items can be configured on PCs:

• IP address
• Subnet mask
• Default gateway address
• DNS server address

When a PC is relocated from the sales department to the manufacturing department, the PC is
moved across a router in the network in question. Crossing a router changes the network, so,
as shown in the figure in the text, the network address also changes.
An IP address is composed of a “network address” and a “host address”. If a network address
changes, the IP address will also change. The default gateway is the router used by the hosts
within a network to communicate with the external network, and different addresses are
assigned to each router port. If a PC in the sales department is relocated to the manufacturing
department, it has to send IP packets to the router port to which the PCs in the manufacturing
department are connected. Therefore, the default gateway address of the PC would also have
to be changed.
Since the other two items are not affected by this question, they do not need to be changed.
Therefore, the answers to this question are “IP address” and “default gateway address.”

(2) Manually configuring the items mentioned in (1) is time-consuming and prone to errors.
DHCP (Dynamic Host Configuration Protocol) offers a convenient way of automatically


setting items such as IP address. To use this feature, a DHCP server needs to be set up.
Moreover, in some SOHO routers, the router itself comes equipped with DHCP functionality.

[Subquestion 4]
The network addresses used by Company M are 10.0.1.0 and 10.0.2.0. These are called private
addresses, and are only valid within a closed network. They cannot be used to communicate
externally. Private address ranges are specified by RFC (Request For Comments) 1918, as shown
below.

(1) 10.0.0.0 to 10.255.255.255 (class A)
(2) 172.16.0.0 to 172.31.255.255 (class B)
(3) 192.168.0.0 to 192.168.255.255 (class C)

In order to connect to the Internet, these private addresses must be translated into global
addresses. The function which performs this is NAT (Network Address Translation). This function
is normally built into routers, etc., and translates private addresses into global addresses when
connecting to the Internet. However, NAT is only used for one-to-one IP address translation. In
order to handle one-to-many communications, functions are used which also translate TCP and
UDP port addresses. This function is called NAPT (Network Address Port Translation) or IP
masquerade, and is now built into most routers.
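Python's ipaddress module can identify the RFC 1918 ranges directly. Note that is_private also covers other special-use ranges (loopback, link-local, documentation blocks), not only RFC 1918:

```python
import ipaddress

# One address from each RFC 1918 block, plus one global address.
for addr in ["10.0.1.1", "172.20.0.1", "192.168.0.1", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three print True; 8.8.8.8 prints False.
```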


Q13-4 Sales Management System

[Answer]

[Subquestion 1] (1) (a) Sales_date (b) Customer_code (c) Sales_staff_code

(d) Sales_number (e) Unit_price

(2) (f) NOT NULL (g) FOREIGN KEY

(3) The referential constraint is defined before the referenced table is defined.

[Subquestion 2] (1) The sections (words and lines) in bold are the answer.

(2) (i) Sales_staff_code in Customer_table is overwritten in the middle of


a semiannual period

(ii) Table name: Sales_table

Attribute: Sales_staff_code (at time of a sale)

(iii) The bold dotted line is the answer.

(E-R diagram with entities Sales detail, Sales, Customer, Product, and Sales staff;
the added bold dotted line runs from Sales to Sales staff)

[Explanation]
This question concerns a sales management system. It is a basic question which involves a
bottom-up approach in which data items are picked up from a sales slip, and normalized to obtain a set
of relational tables. In Subquestion 1, blanks containing attributes in the normalized tables are to be
filled, but answers can be deduced without considering the normalization process. An error in DDL is
difficult to point out unless you notice the order of the DDL statements. Subquestion 2 is a typical
question concerning the fact that history management is not possible without possessing data
regarding sales staff at the time of sales.


[Subquestion 1]
(1) Blanks in the relational tables are to be filled. Let’s do this following the normalization
process.
First, from the “Sales Slip” layout in Fig. 1, we extract the data items, including the repeating
items (enclosed in the inner parentheses), and obtain the following:
Sales_slip(Sales_number, Sales_date, Customer_code, Sales_staff_code,

(Row_number, Product_code, Product_name, Unit_price, Quantity))

(i) First normal form (when there are repeating items)


Sales_table(Sales_number, Sales_date , Customer_code,

Sales_staff_code)

Sales_detail_table(Sales_number, Row_number, Product_code, Product_name,


Unit_price, Quantity)

In the first normal form, the key from the upper level is added to the repeating group’s key
(Sales_number + Row_number is the primary key of Sales_detail).

(ii) Second normal form (when a non-prime attribute has a partial functional dependence on a
candidate key)
There is no partial functional dependence on the composite key {Sales_number,
Row_number}, so nothing to be done here.

(iii) Third normal form (when a non-prime attribute is transitively dependent on a candidate
key)
(Third normal form of Sales_table)

Sales_table(Sales_number, Sales_date, Customer_code)
    <- Customer_code is a foreign key
Customer_table(Customer_code, Customer_name, Sales_staff_code)
    <- Customer_name is added
Sales_staff_table(Sales_staff_code, Sales_staff_name)
    <- Sales_staff_name is added

(Third normal form of Sales_detail)

Sales_detail_table(Sales_number, Row_number, Product_code, Quantity)
    <- Product_code is a foreign key
Product_table(Product_code, Product_name, Unit_price)
    <- All these attributes were in Sales_slip at the starting point

From the relational tables obtained in (i) through (iii), it is clear that blank A is Sales_date,
blank B Customer_code, blank C Sales_staff_code, blank D is Sales_number, and blank E is
Unit_price.

(2) Blanks in the DDL statements are to be filled.


Blank F is NOT NULL. The ability to answer this depends on whether or not the examinee
understands the syntax of DDL (Data Definition Language) statements. The NOT NULL
constraint is used for attributes which cannot be NULL, such as primary keys or foreign keys.
For example, it indicates that there must always be a Product_code for each line of the
Sales_detail_table.
Blank G is FOREIGN KEY. It is followed by REFERENCES, so the examinee may realize that it
refers to a foreign key, but FOREIGN KEY must be spelled correctly. This is known as a
referential constraint or a foreign key constraint.

(3) In this subquestion, the examinee must answer why the DDL statements will produce an error.
When executing DDL that includes a definition of a foreign key, the relational table referred to
by the foreign key must be defined beforehand. In this subquestion, for example,
Sales_table refers to Customer_table, so the definition of Customer_table must come
before that of Sales_table. An answer pointing out the order dependency would suffice.
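The referential constraint itself can be sketched with Python's built-in sqlite3 module. Note that SQLite, unlike the DBMS assumed in the question, does not reject out-of-order CREATE TABLE statements; it enforces the FOREIGN KEY at INSERT time instead, which is what the sketch below shows. Column types and data values are illustrative only:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
# The referenced table is defined first, as the explanation requires.
con.execute("""CREATE TABLE Customer_table (
    Customer_code TEXT PRIMARY KEY,
    Customer_name TEXT NOT NULL)""")
con.execute("""CREATE TABLE Sales_table (
    Sales_number  TEXT PRIMARY KEY,
    Sales_date    TEXT NOT NULL,
    Customer_code TEXT NOT NULL,
    FOREIGN KEY (Customer_code) REFERENCES Customer_table (Customer_code))""")
con.execute("INSERT INTO Customer_table VALUES ('C001', 'Company S')")
con.execute("INSERT INTO Sales_table VALUES ('S001', '2024-01-10', 'C001')")
try:
    # 'C999' does not exist in Customer_table: the constraint rejects the row.
    con.execute("INSERT INTO Sales_table VALUES ('S002', '2024-01-11', 'C999')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # FOREIGN KEY constraint failed
```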

[Subquestion 2]
(1) In this subquestion, an E-R diagram is to be completed from the relational tables.
The relational tables are shown in Subquestion 1, so it is sufficient here to create the E-R
diagram from them. Creating an E-R diagram from the relational tables (if tools are used) is a
type of reverse engineering. This is used to create an E-R diagram using a bottom-up approach
from an existing system. Then addition to and/or modification of that E-R diagram will yield
an E-R diagram of the new system.
If one knows that a relationship in an E-R diagram corresponds to a referential constraint
between a primary key and a foreign key, then all that is left to do is basically copying out the
names of the relational tables.

Screens / forms  --(bottom-up method)-->  E-R diagram
E-R diagram  --(forward engineering)-->  DDL statements  -->  New DB
Old DB (or its DDL statements)  --(reverse engineering)-->  E-R diagram
Addition/modification of the new sections is then made on the E-R diagram.

Fig. A  E-R diagram centered database design


(2) This subquestion deals with a problem encountered when a list of sales by each sales staff is
produced, and its solution.

(i) As the question text states, the reason for an abnormally high sales amount during a period
is that the sales amount of a sales staff member before handover is added to the new sales
staff member’s total sales. This is because Sales_staff_code in Customer_table always
reflects the current state, and Sales_staff_code from the time of the actual sales is not
retained. Therefore, an answer such as the sample answer, “Sales_staff_code in
Customer_table is overwritten in the middle of a semiannual period,” would suffice.

(ii) To retain all sales staff codes from the time when sales were actually made as history data,
it would suffice to include Sales_staff_code in Sales_table. This would make it
possible to keep track of the sales by retired sales staff, as well. Another approach might be
to retain the information in Sales_detail_table, but compared to Sales_table, the
number of rows updated would be larger, so it is not an appropriate answer.
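The effect of (ii) can be sketched with sqlite3: once Sales_staff_code is stored on each row of Sales_table, a mid-period handover no longer shifts old sales to the new staff member. The schema is pared down and the staff codes and amounts are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Sales_table (
    Sales_number     INTEGER PRIMARY KEY,
    Customer_code    TEXT,
    Sales_staff_code TEXT,    -- staff in charge at the time of the sale
    Amount           INTEGER)""")
con.executemany("INSERT INTO Sales_table VALUES (?, ?, ?, ?)",
                [(1, 'C001', 'E100', 500),    # sold before the handover
                 (2, 'C001', 'E200', 300)])   # same customer, after handover
rows = con.execute("""SELECT Sales_staff_code, SUM(Amount)
                      FROM Sales_table
                      GROUP BY Sales_staff_code
                      ORDER BY Sales_staff_code""").fetchall()
print(rows)  # [('E100', 500), ('E200', 300)] -- each staff keeps their own sales
```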

(iii) Sales_staff_code added to Sales_table is a foreign key, so all that remains to be done
is to draw a line in the E-R diagram representing this relationship. Seen from
Sales_table, Sales_staff_code is decided through Customer_table, so it may appear
at first glance as a redundant relationship. However, Sales_staff_code decided through
Customer_table is always Sales_staff_code of the sales personnel who is currently in
charge of the customer, so this relationship is necessary.

Sales ----> Customer ----> Sales staff
    (The sales staff is decided based on the customer, but this is not
     necessarily the sales staff at the time of the sale.)
Sales - - -> Sales staff   (added: the sales staff at the time of the sale)

Fig. B  Modified E-R diagram


Sales ----> Customer --×--> Sales staff
    (×: the sales staff changed in the middle of a semiannual period)

Fig. C  Relationships among Sales, Customer and Sales staff before and after a sales
staff change

Q13-5 Retail chain sales system

[Answer]

[Subquestion 1] (a) Member_number (b) Product_code

[Subquestion 2] (c) b (d) a

[Subquestion 3] (1) (e) Product_type.Store_number = :Store_number (f) LEFT

(g) Product.Product_code = Retail_price.Product_code

(2) (h) Price = Standard_price

[Subquestion 4] (1) b, c

(2) Member

(3) When points are used at multiple POS terminals, and the total amount of
points used exceeds the accrued points

[Explanation]
This is a question about database design and E-R diagrams, in the context of a retail chain’s point
service. The subquestions are all basic and not particularly difficult, but they test not only basic
knowledge but also conceptualization skills grounded in real-world experience with databases.
The question indicates the locations of original data and data replicated with the DBMS replication
function in Table 1 “Allocation of original data and replicated data,” corresponding to the explanation
of the system and its operations, and the system E-R diagram. These are followed by questions
requiring the examinee to complete the E-R diagram and data allocation table, fill in blanks in the SQL
used to create a replicated table, and identify data location improvements and a potential problem. It
requires that examinees gain a thorough understanding of the figures and the tables presented.

[Subquestion 1]
This subquestion requires the examinee to fill in the blanks in the “E-R diagram for the system”
with the appropriate attribute names. It tests knowledge of the relationship between the
one-to-many relationship of the E-R diagram and the primary and the foreign keys.
Generally, when there is a one-to-many relationship between entities, the primary key of one
entity is the foreign key of the other entity (the primary key side is “one,” and the foreign key side
is “many”). Although it does not relate to the blanks in the figure, for any one-to-many relationship,
the entities with arrows pointing to them (the “many” side) have the primary keys from which the
arrows extend included as their foreign keys.
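This general rule can be tried as a minimal, hedged sketch in SQLite (run here through Python's sqlite3 module); the tables echo this question's Member and Sale entities, but the data is invented for illustration.

```python
import sqlite3

# Sketch of the one-to-many rule: the primary key of the "one" side
# (Member) reappears as a foreign key on the "many" side (Sale), whose
# own primary key is (Terminal_number, Date, Sale_number). Data invented.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE Member (Member_number INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("""CREATE TABLE Sale (
    Terminal_number INTEGER,
    Date            TEXT,
    Sale_number     INTEGER,
    Member_number   INTEGER REFERENCES Member(Member_number),
    PRIMARY KEY (Terminal_number, Date, Sale_number))""")
conn.execute("INSERT INTO Member VALUES (1, 'Member A')")
# One member can appear in many sales rows.
conn.execute("INSERT INTO Sale VALUES (10, '2024-04-01', 1, 1)")
conn.execute("INSERT INTO Sale VALUES (10, '2024-04-01', 2, 1)")
sale_count = conn.execute(
    "SELECT COUNT(*) FROM Sale WHERE Member_number = 1").fetchone()[0]
print(sale_count)  # -> 2
```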
Based on the above observation, let’s consider the attribute names which should be entered in
the blanks in the diagram.

• Blank A: Looking at Sale entity, one sees that there is an arrow pointing to it from Member entity.
However, there is no attribute in Sale entity that matches Member_number, the primary key of
Member entity, so this blank should be Member_number. It is clear that this Member_number is a
foreign key referencing Member entity, but let’s check if it must also be included in the primary
key. For Sale entity, the primary key already contains Terminal_number, Date, and
Sale_number. These three items are sufficient to identify which sale records are from which POS
terminals, when, and in what order, so the combination of these three items is sufficient as a
primary key, and Member_number is a non-prime attribute. The attribute name, then,
should be underlined with a dashed line, indicating a foreign key, thus: Member_number. Checking
the relationship between POS_terminal entity and Sale entity, one can see that Terminal_number
in Sale entity is a part of the primary key, so it is underlined with both a solid line, indicating a
primary key, and a dashed line, indicating a foreign key. Checking the relationship between Sale
entity and Sale_detail entity, one sees that Sale_detail entity’s Terminal_number, Date, and
Sale_number are underlined with a dashed line, indicating a foreign key.

• Blank B: By the same argument as for blank A, the answer is Product_code.
Sale_detail is a detailed record for an individual sale. That is, it is a record containing what was
sold, when, and in what quantity. Product entity’s primary key, “Product_code”, is not depicted
as a foreign key, so this attribute is necessary. The combination of Terminal_number, Date,
Sale_number, and Detail_number can be used to uniquely identify individual detail records, so
Product_code does not need to be included in the primary key.


[Subquestion 2]
This subquestion requires the examinee to fill in the blanks related to the replication method
column in Table 1. The two possible replication methods are shown below. One of these must be
selected.
Method 1: Differential data is replicated continually at short time intervals, e.g., three minutes.
Method 2: All necessary data is replicated once a day, as a part of the morning batch process
performed daily.
The differences between the two methods are their frequency, and the extent of data replicated
(differential data or whole data). For either type of data, the same data as the original is replicated.
Therefore, the essential difference between the two methods is the frequency of replication, and
method selection should be made based on the immediacy of update content reflection required in
the replicated data.

• Blank C: From Table 1, the original tables involved are Product, Retail_price, and Product_type,
all stored in the head office server. The corresponding replication extracts the necessary data from
these tables, storing it in a separate table, Daily_retail_price, in POS terminals (the SQL
statements which produce this table are defined in Subquestion 3). The question states that “For
each product, its standard price, common to all stores, is set as a part of the product data. Each
store, however, can set and use its own actual retail price instead of the standard price during the
limited period specified by each store. The actual retail price must be set in advance, and it cannot
be changed in the middle of the specified period.” So while it is possible that the particular store’s
product prices change every day, those changes are set in advance, and do not change during
business hours. Therefore, “Method 2: All necessary data is replicated once a day, as a part of the
morning batch process performed daily.” is acceptable, and b) (method 2) is the correct answer.
Moreover, using Method 1 to frequently replicate data would not cause any problem with data
contents, but replicating identical contents frequently is wasteful, and therefore violates the
requirement that the method “[is chosen] based on … the network traffic.”

• Blank D: Let us now consider the replication method for Sale and Sale_detail. The question states
that “Moreover, in addition to the sales operation, the system is also capable of performing the
statistical analysis on the sales records of all stores in near real-time manner.” The sources of
replicated data, the POS terminals, are updated with new data each time a sale is made. The head
office server needs this information in near real-time, so it is clear that replication must be
performed for each sale. Therefore, “Method 1: Differential data is replicated continually at short
time intervals, e.g., three minutes.” is the appropriate replication method. a) (Method 1) is the
correct answer.


[Subquestion 3]
This question requires the blanks in Daily_retail_price replication SQL statements to be
filled in. It concerns table joining operations as well as row and data insertion.
(1) It requires blanks to be filled in SQL statements which perform row insertion, inner join, and
outer join. There are several INSERT statement patterns, and here, the following pattern is
used to insert rows extracted from Table2 into the specified column list of Table1. Here, the
column list returned by the SELECT query must match the columns in Table1’s column list.

INSERT INTO Table1 (column list)
SELECT (column list)
FROM Table2
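This INSERT INTO ... SELECT pattern can be tried as a hedged sketch in SQLite (via Python's sqlite3); the table names mirror the question, but the data is invented for illustration.

```python
import sqlite3

# Sketch of INSERT INTO ... SELECT: rows returned by the query are
# inserted into the target's column list. Data invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (Product_code TEXT, Standard_price INTEGER)")
conn.execute("CREATE TABLE Daily_retail_price (Product_code TEXT, Price INTEGER)")
conn.executemany("INSERT INTO Product VALUES (?, ?)",
                 [("P1", 100), ("P2", 200)])
# The SELECT's column list lines up with the target column list.
conn.execute("""
    INSERT INTO Daily_retail_price (Product_code, Price)
    SELECT Product_code, Standard_price
    FROM Product""")
copied = conn.execute("SELECT COUNT(*) FROM Daily_retail_price").fetchone()[0]
print(copied)  # -> 2
```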

The blanks are in the table from which the SELECT draws data. This table is not a simple one; it
is obtained by an inner join between Product and Product_type_in_stock, which is then
outer-joined with Retail_price.
The inner join follows the pattern shown below.

SELECT ...
FROM Table1 INNER JOIN Table2 ON (join condition)

It returns a set of rows joining those rows from Table1 and Table2 that satisfy the join
condition, and disregards those that do not. The result is the same as that obtained by placing the
join condition in a WHERE clause.

• Blank E: Here, the conditions for the inner join of Product and Product_type_in_stock must be
specified. The attribute shared by both Product and Product_type_in_stock is the
Product_type_code, and immediately before blank E, the SQL statement states this condition.
Considering what other requirements are necessary, one notes that, from the question text, one
needs Daily_retail_price with “Store_number” being :Store_number and Date being :Date.
Product_type_in_stock is made up of Store_number and Product_type_code, and it indicates
that the store with Store_number carries products which belong to Product_type specified by
Product_type_code (such as food products, sundries, etc.). Therefore, one needs a condition that
specifies only those items with a Product_type carried by the store with :Store_number. Given
this condition, products which are not carried by the store in question would not satisfy the join
condition, and would as a result not be selected. Based on the above, one can determine that the
blank should be “Product_type_in_stock.Store_number = :Store_number.”

• Blank F: Because this blank is immediately followed by OUTER JOIN, it is an outer join. There are
three types of outer joins: LEFT OUTER JOIN, RIGHT OUTER JOIN, and FULL OUTER JOIN, so either
LEFT, RIGHT, or FULL must be chosen.


A left outer (right outer, full outer) join takes a format as shown below.

SELECT ...
FROM Table1
LEFT (RIGHT, FULL) OUTER JOIN Table2 ON (join condition)

While inner joins only return rows which satisfy the join condition, outer joins also return rows
which do not satisfy the join condition. For example, a left outer join prioritizes the rows on the
table specified on the left (Table1), returning NULL values for the parts of Table2 which do not
satisfy the join condition. In a right outer join, Table1 and Table2 are reversed, with the rows
from the right table, Table2, being prioritized. In a full outer join, rows from both tables are
output, and the sections which do not satisfy the join condition are returned containing NULL
values. Moreover, the sample pattern above refers to Table1 and Table2, but they can be
substituted by the results of inner or outer joins. In this question, Table1 is the result of an INNER
JOIN of Product and Product_type_in_stock (the inner join including (E)). In other words, this
FROM clause specifies an outer join of Retail_price with the inner join of Product and
Product_type_in_stock.
Here, an outer join is being performed on Retail_price, but looking at its attributes, it is clear
that the data indicates what product is to be sold, at which store, from when to when; the data is
used when selling products at a price that differs from the standard price shared by all stores. An
outer join is performed on this data with the products sold at the store in question (results of an
inner join between Product and Product_type_in_stock), but it is important to note that there is
no retail price record that corresponds to products being sold at the standard price. SQL statement
(2) contains an UPDATE statement, which updates row values, and the constraint “Price IS NULL.”
Price is an attribute of Retail_price, but the temporary work table created as a result of this
join contains rows in which the value of Price is NULL. Therefore, (F) cannot be “RIGHT.” If the
join is a full outer join, Retail_price of Product which are not carried at the store would also be
selected. This would mean that unnecessary data is being selected, so FULL is not appropriate
either. Here, the inner join first selects the products handled by the store, and their standard prices,
and then, if special prices have been specified for that store, selects them as well. If no special
prices have been specified, it must set the price to NULL. Therefore, the answer is LEFT.
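The behavior of the left outer join described above can be tried as a hedged sketch in SQLite (via Python's sqlite3); product codes and prices are invented for illustration.

```python
import sqlite3

# Sketch: products carried by a store, LEFT OUTER JOINed with the
# store-specific Retail_price rows. A product sold at the standard
# price has no matching row, so its Price comes back NULL (None).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (Product_code TEXT, Standard_price INTEGER)")
conn.execute("CREATE TABLE Retail_price (Product_code TEXT, Price INTEGER)")
conn.executemany("INSERT INTO Product VALUES (?, ?)",
                 [("P1", 100), ("P2", 200)])
conn.execute("INSERT INTO Retail_price VALUES ('P1', 80)")  # P2: standard price only
rows = conn.execute("""
    SELECT Product.Product_code, Retail_price.Price
    FROM Product
    LEFT OUTER JOIN Retail_price
      ON Product.Product_code = Retail_price.Product_code
    ORDER BY Product.Product_code""").fetchall()
print(rows)  # -> [('P1', 80), ('P2', None)]
```

The NULL for P2 is exactly what the UPDATE of blank (h) later fills in with the standard price.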

• Blank G: One can see that a condition is missing that links the first half of the left outer join to
Retail_price. Therefore, the answer should be “Product.Product_code =

Retail_price.Product_code.”

(2) The question states that “Moreover, Price in Daily_retail_price is taken from
Retail_price, if there is valid data for the current day. If not, it is taken from
Standard_Price in Product.” There is an UPDATE statement here which updates the row
values. It is structured as follows:


UPDATE Table SET column name = value WHERE select conditions

When there is no valid Retail_price for the current day, the price column is left NULL and
it should be set to Standard_price for the product. Therefore, rows with NULL values for
Price must be updated such that Price is set to Standard_price. Therefore, blank H is
“Price = Standard_price.”
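As a hedged sketch of this UPDATE (SQLite via Python's sqlite3; codes and prices invented for illustration):

```python
import sqlite3

# Sketch of blank (h): rows whose Price is still NULL after the outer
# join are updated to the product's Standard_price. Data invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Daily_retail_price "
             "(Product_code TEXT, Standard_price INTEGER, Price INTEGER)")
conn.executemany("INSERT INTO Daily_retail_price VALUES (?, ?, ?)",
                 [("P1", 100, 80), ("P2", 200, None)])
conn.execute("UPDATE Daily_retail_price SET Price = Standard_price "
             "WHERE Price IS NULL")
rows = conn.execute("SELECT Product_code, Price FROM Daily_retail_price "
                    "ORDER BY Product_code").fetchall()
print(rows)  # -> [('P1', 80), ('P2', 200)]
```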

[Subquestion 4]
This question concerns where to locate data such that even if the head office server fails during
business hours, business operations can continue. This question may be easier for those with
real-world experience working with databases.

(1) In this question, one must select from the answer group which of the processes will not be
possible when using the data arrangement shown in Table 1. Let us consider whether
processes (a) through (d) can be executed.

a: Retrieving the retail price of a product: It has been completed during the morning batch
process carried out that day. So even if the head office server goes down during business
hours, the data is already retrieved, and there will be no need to do it again that day.

b: Displaying the points accrued: The total number of points accrued is an attribute of
Member entity. Member table is located at the head office server, and it cannot be accessed if
a failure occurs on the server, making it impossible to perform this process.

c: Using accrued points: When using points, the number of accrued points must be looked
up, and updated with the number of points after points are used. This requires access to
Member table, and therefore cannot be performed.

d: Awarding points: The number of points held by each member is recorded in
Total_points in Member table. The question does not state explicitly how points are
awarded (at the time of sale, or by a batch process), but the number of points awarded is
stored in the POS terminal as a part of the sales data at the time of sales. The accrued
points can be used starting from the following day. Therefore, even if the points are not
stored immediately in Member table on the server, as long as the sales data at the POS
terminal can be used by the next day to reflect the updated points, there is no problem.
Therefore, it is reasonable to conclude that this operation can be performed.

Therefore, the answer is b) and c).

(2) According to Table 1, the original of Member table is stored at the head office server, and
there are no copies on the POS terminal side. The only additional table necessary on the POS
terminal side is Member table. Therefore, Member is the correct answer.


(3) When a copy of Member table is stored on the POS terminal side and a member uses points, it
will be necessary to immediately update the original of the number of points accrued by the
member, and then to replicate this updated information from the head office server to all POS
terminals. This is because data consistency must be maintained. In this case, replication
method 1 must be used.
Given these conditions, if a failure occurs on the head office server side, and it becomes
impossible to access Member table, it is possible to switch over to the POS terminal data,
looking up the number of accrued points, displaying it, and reducing it when points are used.
However, if, during a server failure, a customer purchased a product using points at one POS
terminal, and then, while the server failure was still not fixed, purchased a product using
points at a different POS terminal, it would be impossible to accurately determine the number
of valid points. Therefore, answers such as “When points are used at multiple POS terminals,
and the total number of points exceeds the number of accrued points” or “When the same
member uses points at multiple terminals” are appropriate.
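The failure scenario can be made concrete with a small hypothetical sketch (the dict-based "replicas", member point values, and amounts are all invented; the real system replicates Member table through the DBMS).

```python
# During a head-office failure, each POS terminal can only check its
# own stale local copy of Member, so two redemptions can each pass the
# local check while their total exceeds the accrued points.
accrued_points = 150
replica_a = {"Total_points": accrued_points}  # stale copy at terminal A
replica_b = {"Total_points": accrued_points}  # stale copy at terminal B

def use_points(replica, requested):
    # Local check only; the other terminal's usage is invisible.
    if replica["Total_points"] >= requested:
        replica["Total_points"] -= requested
        return requested
    return 0

total_used = use_points(replica_a, 100) + use_points(replica_b, 100)
print(total_used, accrued_points)  # 200 points used against only 150 accrued
```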


Q13-6 Auction system

[Answer]

[Subquestion 1] (a) Seller_number

(b) Item_type_code

[Subquestion 2] (c) MAX(Bid2.Bid_price) AS Current_price

(d) Bid Bid2

(e) Bid1.Bidder_number = :Member_number

(f) Bid2.Item_number = Bid1.Item_number or
    Bid2.Item_number = Auction_item.Item_number

((e) and (f) can be in either order)

(g) GROUP BY

(h) End_datetime

[Subquestion 3] (i) unsuccessful bid (j) inserted (k) c (l) e

[Subquestion 4] (m) Auction_item (n) current price (o) g (p) j

(q) “Auction_item” table search

(r) “Auction_item” table update

[Explanation]
This question concerns table design, SQL, and transaction processing.
Subquestion 4 requires a redesign of tables in order to improve processing performance, which
may puzzle examinees without hands-on experience. This question is somewhat difficult.

[Subquestion 1]
This subquestion concerns table design. Fill-in-the-blank questions of this kind are quite
common in every test.

• Blank A: The question states, concerning attributes of “Auction_item” entity, “to put items up for
auction, members enter data such as the item category, name, description, minimum bid
increment, auction end date and time, etc.” Comparing this description to the attribute in
“Auction_item” entity in Fig. 2 “E-R diagram,” it is clear that the data indicating who placed the
item up for auction is missing. Further, in Fig. 2 “Member” entity is related to “Auction_item”
entity with a one-to-many relationship, indicating that an item related to Member is needed, and
thus that Member_number, the “Member” entity’s primary key, should be placed as a foreign key in
“Auction_item” entity. Therefore, the correct answer is “Seller_number” or “Member_number.”

• Blank B: The correct attribute name is not given directly in the question text, but looking at Fig. 2
“E-R diagram,” one can see that “Item_type” entity has a one-to-many relationship with
“Auction_item” entity. “Auction_item” entity has “Item_type_code,” which can be thought to
reference the primary key of “Item_type” entity. Moreover, in accordance with the subquestion’s
directions, the answer should be underlined, to indicate it is a primary key. Therefore, the correct
answer is “Item_type_code.”

[Subquestion 2]
This subquestion asks the examinee to determine Item_number, Item_name, member’s
Bid_price, Current_price, and End_datetime of the items for which a given member’s bids
have been accepted (the member is specified by the member number stored in host variable
“:Member_number"). Moreover, if the bidder in question has bids on multiple auction items,
information for each auction item is displayed.

(Item_number, Product_name, End_datetime)


It is sufficient to display the columns of “Auction_item” entity for each auction item which
satisfies the condition “auction items which the member has bid on.” In other words, it is sufficient
to use a WHERE clause which specifies that the Bidder_number equals the member’s number. The
member’s number is stored in the host variable “:Member_number,” so this would be
“Bid.Bidder_number = :Member_number” and “Auction_item.Item_number = Bid.Item_number.”
Thus we have the SQL statement in Fig. A.

SELECT Auction_item.Item_number, Item_name, End_datetime
FROM Auction_item, Bid
WHERE Bid.Bidder_number = :Member_number
  AND Auction_item.Item_number = Bid.Item_number

Fig. A Item_number, Item_name, End_datetime

(Bidder_bid_price)
It is conceivable that a member would repeatedly bid on an auction item they liked in order to
win the auction. The question text addresses this situation, saying “When a member has placed
several bids on a given auction item, the highest bid placed by the member is displayed...”
Therefore, one can select rows from the “Bid” entity which matches the “auction items the member
has bid on” condition, grouping the results by Item_number, and displaying the highest Bid_price.


Thus we come up with the SQL statement in Fig. B.

SELECT Auction_item.Item_number, MAX(Bid.Bid_price)
FROM Auction_item, Bid
WHERE Bid.Bidder_number = :Member_number
  AND Auction_item.Item_number = Bid.Item_number
GROUP BY Auction_item.Item_number

Fig. B Bidder_bid_price

(Current_price)
Regarding the Current_price, the question states that “if the bid price is equal to or greater
than the ‘Current_price + Minimum_bid_increment’, then the entered bid price becomes the new
current price.” In other words, when rows of the “Bid” entity are grouped by Item_number, the
highest bid price in each group is the current bid price. Thus we have an SQL statement in Fig. C.

SELECT Item_number, MAX(Bid_price)
FROM Bid
GROUP BY Item_number

Fig. C Current_price
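The MAX/GROUP BY computation of the current price in Fig. C can be tried as a hedged sketch in SQLite (via Python's sqlite3; the bid data is invented for illustration).

```python
import sqlite3

# Sketch of Fig. C: the current price of each auction item is the
# highest bid so far, computed with MAX grouped by Item_number.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Bid "
             "(Item_number INTEGER, Bidder_number INTEGER, Bid_price INTEGER)")
conn.executemany("INSERT INTO Bid VALUES (?, ?, ?)",
                 [(1, 101, 500), (1, 102, 600), (2, 101, 300)])
current_prices = conn.execute("""
    SELECT Item_number, MAX(Bid_price)
    FROM Bid
    GROUP BY Item_number
    ORDER BY Item_number""").fetchall()
print(current_prices)  # -> [(1, 600), (2, 300)]
```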

Moreover, the subquestion contains the condition of “the data to display...when a member
whose bids have been accepted...performs a bid status display operation.” When a bid is accepted,
for that auction item, at least, “Bid_price = Current_price.” One might, then, think that it is
possible to determine the Current_price without resorting to a complex SQL statement in Fig. C.
However, the clause “when...bids have been accepted” means that, at the time of the bid in
question, the member was the potential winning bidder. After this, though, another member may
have placed a successful bid, causing the first member to give up the position of the potential
winning bidder. In other words, there is a possibility that the current price has been driven up so
that “Bid_price < Current_price.” This is why the Current_price must be found with an SQL
statement as shown in Fig. C.
Let us then add a condition “auction items which the member has bid on” to the SQL statement
shown in Fig. C. Doing so results in the SQL statement in Fig. D. The FROM clause contains
correlation names “Bid Bid1” and “Bid Bid2,” to use two “Bid” tables. This is because the “Bid” in
the SQL statement in Fig. C is used to determine the Current_price, and is not related to the
search for the member’s Bid_price in Fig. B. In other words, even though it is the same “Bid”
table, they are used differently within a SQL statement. “Bid1” is used to determine the
Bidder_bid_price, and “Bid2” to determine the Current_price.

SELECT Auction_item.Item_number,                     -- from Fig. B
       MAX(Bid2.Bid_price)                           -- from Fig. C
FROM Auction_item, Bid Bid1, Bid Bid2                -- from Fig. B and Fig. C
WHERE Bid1.Bidder_number = :Member_number            -- from Fig. B
  AND Auction_item.Item_number = Bid1.Item_number    -- from Fig. B
  AND Auction_item.Item_number = Bid2.Item_number    -- auction items which the member has bid on
GROUP BY Auction_item.Item_number                    -- from Fig. B and Fig. C

Fig. D Current_price of auction items which the member has bid on

Fig. D combines Figs. B and C, and also contains a new WHERE condition,
“Auction_item.Item_number = Bid2.Item_number.” This narrows down the auction items’
current prices to be displayed to those for auction items on which the member has successfully bid.
This condition can also be written as “Bid1.Item_number = Bid2.Item_number.”
Moreover, the “Auction_item.Item_number” in the SELECT and the GROUP BY clauses may be
replaced with “Bid1.Item_number” or “Bid2.Item_number.”
(Answer SQL statement)
First, let us join the SQL statements in Figs. A and B. Both are for items which satisfy the
“member has placed a successful bid” condition, so their FROM and WHERE clauses are identical. The
difference between the SQL statements in Figs. A and B is in their GROUP BY clauses. The columns
selected by the SELECT statement in Fig. A are not aggregated, so they must be included in a GROUP
BY clause. Doing so results in the SQL statement in Fig. E.

SELECT Auction_item.Item_number, Product_name, End_datetime,
       MAX(Bid.Bid_price)
FROM Auction_item, Bid
WHERE Bid.Bidder_number = :Member_number
  AND Auction_item.Item_number = Bid.Item_number
GROUP BY Auction_item.Item_number, Product_name, End_datetime

Fig. E Item_number, Product_name, End_datetime, Bidder_bid_price

Lastly, let us merge the SQL statements in Figs. D and E. Both are for items which satisfy the
“member has placed a successful bid” condition, so Fig. D’s FROM and WHERE clauses include Fig.
E’s. In order to match Fig. D, Fig. E’s “Bid” is replaced with “Bid1.” As was explained in the Fig.
D section, “Bid1” is used for determining the Bid_price for the member, and “Bid2” is used for
determining the Current_price. The result is the SQL statement shown in Fig. F.
In order to avoid confusion, column aliases have been assigned with “AS” in the SELECT clause
in the SQL statement. The WHERE clause condition “Auction_item.Item_number =
Bid2.Item_number” can also be written as “Bid1.Item_number = Bid2.Item_number."

SELECT Auction_item.Item_number, Product_name,
       MAX(Bid1.Bid_price) AS Bidder_bid_price,
       MAX(Bid2.Bid_price) AS Current_bid_price,    -- blank (C)
       End_datetime
FROM Auction_item, Bid Bid1, Bid Bid2               -- blank (D)
WHERE Bid1.Bidder_number = :Member_number           -- blank (E)
  AND Auction_item.Item_number = Bid1.Item_number
  AND Auction_item.Item_number = Bid2.Item_number   -- blank (F)
GROUP BY Auction_item.Item_number,                  -- blank (G)
         Product_name, End_datetime                 -- blank (H)

Fig. F Completed SQL statement (blanks C to H)

Based on the above, blanks C to H can be filled in as shown in the sample answer.
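The double use of the “Bid” table under the correlation names Bid1 and Bid2, as in Fig. F, can be tried as a hedged sketch in SQLite (via Python's sqlite3); the item, member numbers, and prices are invented for illustration.

```python
import sqlite3

# Sketch: the same "Bid" table is joined twice, as Bid1 (the member's
# own bids) and Bid2 (all bids, giving the current price). Member 101
# bid 500 but was outbid by 600, so the current price exceeds his bid.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Auction_item (Item_number INTEGER, "
             "Product_name TEXT, End_datetime TEXT)")
conn.execute("CREATE TABLE Bid (Item_number INTEGER, "
             "Bidder_number INTEGER, Bid_price INTEGER)")
conn.execute("INSERT INTO Auction_item VALUES (1, 'Camera', '2024-05-01 21:00')")
conn.executemany("INSERT INTO Bid VALUES (?, ?, ?)",
                 [(1, 101, 500), (1, 102, 600)])
rows = conn.execute("""
    SELECT Auction_item.Item_number, Product_name,
           MAX(Bid1.Bid_price), MAX(Bid2.Bid_price), End_datetime
    FROM Auction_item, Bid Bid1, Bid Bid2
    WHERE Bid1.Bidder_number = ?
      AND Auction_item.Item_number = Bid1.Item_number
      AND Auction_item.Item_number = Bid2.Item_number
    GROUP BY Auction_item.Item_number, Product_name, End_datetime""",
    (101,)).fetchall()
print(rows)  # member's best bid is 500, current price is 600
```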

[Subquestion 3]
This question concerns data consistency. Processing within the computer is carried out in units
of the blocks shown in Fig. 3. When multiple bids (bid A and bid B) are processed concurrently, a
block of bid B is processed after a block of bid A is completed. Fig. 3 does not include mutual exclusion, so a
problem will occur as shown in Cases (1) and (2) below.

• Case (1): The case where there are multiple bid operations with the same price for an auction
item:
Suppose that, for a single auction item, two bids A and B, both with the same bid price, are
placed at approximately the same time, and that blocks are processed in the order (1) through
(6) shown in Fig. G. The question states that “If there are multiple bid operations with the
same price for the same auction item, no bids other than the first to arrive at the system are
accepted.” Therefore, according to this specification, when bid A is successfully placed in
block (1), bid B, which arrives late by the time difference between (1) and (2), must be
rejected. In other words, (6) must not be executed. However, at the point where bid B executes
(2), bid A’s data has not yet been inserted in the “Bid” table, so the evaluation which takes
place in (4) based on the search results from (2) will return that the bid is successful, and (6)
will be executed (moreover, the results would be same if (2) and (3) are reversed).


• Case (2): The case where there are multiple bid operations for an auction item, and the bid
placed later has a lower Bid_price:
This is more serious than Case (1). If the Bid_price of bid B is lower than that of bid A which
was placed just before bid B, bid B should be rejected. However, if processing is carried out in
the order (1) through (6) in Fig. G, as in Case (1), a problem will occur. Namely, even though
bid B’s price is lower, it would be inserted into the “Bid” table.

[Fig. G Incorrect processing order when bids are processed simultaneously: bid A and
bid B each perform “Member” table search, “Auction_item” table search, “Bid” table
search, the “Bid approved?” check, and “Bid” table insertion, but the blocks
interleave as (1) bid A’s “Bid” table search, (2) bid B’s “Bid” table search,
(3) bid A’s approval check, (4) bid B’s approval check, (5) bid A’s insertion, and
(6) bid B’s insertion.]

• When mutual exclusion is used:


For both Cases (1) and (2), bid B data, which should be rejected, is instead inserted. This
problem can be avoided if bid A’s information is reflected in the “Bid” table when bid B
performs a search on it. In other words, the problem can be avoided if processing occurs as
shown in Fig. H, in the order (1) through (5), and bid B is rejected.


[Fig. H Processing order when bids are processed concurrently: bid A performs
(1) “Bid” table search, (2) the “Bid approved?” check, and (3) “Bid” table insertion
to completion; bid B then performs (4) its “Bid” table search and (5) its approval
check, which now sees bid A’s row and is not approved.]

In order to ensure this processing order, while processes (1) through (3) are being executed for
bid A, bid B processing must be put on hold. To do this, a mutual exclusion mechanism may be
employed, as shown in Fig. I.

[Fig. I Mutual exclusion: start of mutual exclusion -> “Bid” table search ->
“Bid approved?” -> (if Yes) “Bid” table insertion -> end of mutual exclusion.]

Mutual exclusion is started immediately before block (1), that is, immediately before “Bid”
table search, and ends immediately after block (3), that is, insertion into the “Bid” table. By doing
this, no other bid processes can interrupt while blocks (1) through (3) are processed for bid A. This
results in data consistency being preserved.
Performing this mutual exclusion indicates that blocks (1) through (3) must satisfy transaction
ACID characteristics.
From an application perspective, if, as a result of search, a bid is successful, it must be inserted.
Conversely, if the bid is not successful, it must not be inserted. The processes, from search to
insertion, are indivisible. Therefore, they must be atomic.
They include both reading and writing, and throughout the processes, data consistency must be
ensured. Therefore, they must be consistent.
Even if multiple bid process requests are received by the computer at the same time, they must
be processed as if they were all individual, without being affected by each other, and data
consistency must be ensured. Therefore, they must be isolated.
These three characteristics mean, specifically, that inappropriate processing such as that of bid
B in Cases (1) and (2) illustrated in Fig. G must be prevented.
Once bids have been successfully placed, the results must not be lost, so durability is also a
requirement.
Therefore, it is clear that blocks (1) through (3) should be handled as a single transaction. This
transaction performs reading from and writing to a single table, so as shown in Fig. H, when bid A
is being processed, bid B must be put on hold. In other words, bid processing must be performed
serially. Therefore, it is appropriate to specify SERIALIZABLE as the transaction’s isolation level.
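As a hedged sketch of this serialized check-then-insert (SQLite via Python's sqlite3; SQLite serializes writers through BEGIN IMMEDIATE, standing in here for a SERIALIZABLE isolation level; the prices and minimum bid increment are invented):

```python
import sqlite3

# Sketch of the transaction range, blocks (1)-(3): the "Bid" table
# search, the approval check, and the insertion run as one atomic
# unit, so a bid that should lose is rejected instead of inserted.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE Bid "
             "(Item_number INTEGER, Bidder_number INTEGER, Bid_price INTEGER)")

def place_bid(item, bidder, price, increment=10):
    conn.execute("BEGIN IMMEDIATE")            # start of mutual exclusion
    try:
        (current,) = conn.execute(
            "SELECT COALESCE(MAX(Bid_price), 0) FROM Bid "
            "WHERE Item_number = ?", (item,)).fetchone()     # (1) search
        if current == 0 or price >= current + increment:     # (2) approved?
            conn.execute("INSERT INTO Bid VALUES (?, ?, ?)",
                         (item, bidder, price))              # (3) insertion
            conn.execute("COMMIT")             # end of mutual exclusion
            return True
        conn.execute("ROLLBACK")
        return False
    except Exception:
        conn.execute("ROLLBACK")
        raise

results = [
    place_bid(1, 101, 500),  # first bid: accepted
    place_bid(1, 102, 500),  # same price, arrives later: rejected
    place_bid(1, 102, 490),  # lower price: rejected
    place_bid(1, 102, 510),  # meets current price + increment: accepted
]
print(results)  # -> [True, False, False, True]
```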

Let us fill in blanks I through L based on these considerations.


As discussed in Cases (1) and (2), when inappropriate processing is performed, “unsuccessful
bid” (blank I) data is “inserted” (blank J) into the “Bid” table, resulting in data inconsistency.
The minimum transaction range is from c) (blank K) to e) (blank L), in accordance with Fig. I.
Moreover, the subquestion asks for the “minimum” transaction range, so the explanation above
has been tailored accordingly. If the subquestion did not make this specification, it would be
acceptable to specify a transaction range greater than blocks (1) through (3). In real-world
situations, for example, the start of the transaction could be set at a). Fig. 3 includes branch
processing, so the point at which the Yes and No paths merge again (not shown in Fig. 3) could be
set as the transaction end point.

[Subquestion 4]
One can use the text of the subquestion to find hints regarding load reduction.
The subquestion states that N will be added to the “M” table. It also says that “an additional
update process will need to be performed, but a query process can be eliminated.” Therefore, load
reduction is achieved through the following three steps:

284
Afternoon Exam Section 13 Technological Elements Answers & Explanations

(1) Add a column to the table


(2) When processing a bid, add a process to update the column added in (1)
(3) When processing a bid, replace the currently used high-load search process with a
reference to the column added in (1)
Let us look, then, at the Fig. 1 “A sample bid operation screen,” and identify an item which
cannot be obtained by a simple column reference and requires a high-load search operation. As we
discussed regarding the SQL statement in Fig. C, the Current_price is obtained using an aggregate
function and incurs a high cost.
If the Current_price is stored (updated) in the “Auction_item” table, the next time the
Current_price is referenced, an aggregate function can be omitted. This satisfies the three steps
(1) through (3) above.
Therefore, blank M is “Auction_item” and blank N is “Current_price.” Blank Q is the
Current_price search process; matching its granularity to the other blocks and writing it in
table-unit form, it becomes “‘Auction_item’ table search.” Blank R is the Current_price
update process, so it becomes “‘Auction_item’ table update.”
In order to process a bid correctly, the Current_price search and update processes added in
order to reduce processing load must be included in the transaction range. This is because bidding
is performed based on the “Current_price.” The correct sequence for concurrent processing, as
shown in Fig. J, is for bid B to be put on hold while bid A proceeds through blocks (1) through (4).

[Fig. J Correct processing order when bids are processed concurrently: Bid A processing runs
“Member” table search → “Auction_item” table search (1) → Bid approved? (2, Yes/No) →
“Bid” table insertion (3) → “Auction_item” table update (4). Bid B processing, put on hold
until (4) completes, then runs the same sequence: “Member” table search → “Auction_item”
table search (5) → Bid approved? (6, Yes/No) → “Bid” table insertion → “Auction_item”
table update.]

Therefore, the minimum transaction range is from g) (blank O) to j) (blank P), in accordance
with Fig. J.
As illustrated in this subquestion, by including calculated items such as blank N within a table,
the load resulting from calculations can be reduced, improving processing performance. Items such
as this are called “derived items.”
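The derived-item idea can be sketched with a hypothetical in-memory model in Python (the table and column names mirror the question; the data values and functions here are invented for illustration):

```python
# Hypothetical in-memory sketch of the derived-item idea.
# Without the derived column, Current_price must be computed with an
# aggregate over all bids for the item (high load as bids accumulate).
bids = [{"item_id": 1, "price": 100}, {"item_id": 1, "price": 120}]

def current_price_by_aggregate(item_id):
    """High-load approach: aggregate (MAX) over the 'Bid' rows."""
    return max(b["price"] for b in bids if b["item_id"] == item_id)

# With the derived column, the price is stored on the item row and
# updated whenever a bid is inserted, so reading it is a plain lookup.
auction_item = {1: {"current_price": 120}}

def place_bid(item_id, price):
    bids.append({"item_id": item_id, "price": price})
    auction_item[item_id]["current_price"] = price  # the added update process

place_bid(1, 150)
print(current_price_by_aggregate(1))     # 150 (aggregate each time)
print(auction_item[1]["current_price"])  # 150 (simple column reference)
```

The stored column trades one extra update per bid for the elimination of an aggregate query on every reference, which is exactly the load-reduction pattern described above.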

Q13-7 Voice recorder (embedded systems)

[Answer]

[Subquestion 1] (1) 160

(2) 34 hours, 43 minutes

(3) 8kHz sampling frequency recording mode

[Subquestion 2] (a) Low (b) Interrupt

(c) D (d) 5

[Subquestion 3] Highest priority interrupt: (1)

Reason: After the interrupt, 1kB of data must be transferred within 5


milliseconds.

[Explanation]
This question concerns basic topics related to the design of a voice recorder which can perform
both recording and playback. Topics include the volume of digitized data, operation of a key scan
circuit, and interrupt processing.

[Subquestion 1]
This subquestion concerns the requirements analysis of the voice recorder.

(1) This question requires the examinee to calculate the volume of data when analog stereo (two
channel) data is digitized at a sampling frequency of 40kHz, with a quantization bit rate of 16
bits. This type of data calculation frequently appears in the test.
The quantization bit rate of 16 means that each analog data is digitized into 16 bit digital
forms. Analog data such as audio is continuous along the time axis. When digitizing
continuous analog data, a limited amount of data per unit of time must be used to represent the
analog data. The sampling frequency indicates how many units of digital data are used to
express one second of analog data.


Therefore, the data volume per second can be calculated as shown below.

Volume of data per second
= number of channels × sampling frequency × quantization bit rate (in bytes)
= 2 × 40k × 2 = 160k (bytes)

Moreover, the subquestion states that “1 kilobyte is equivalent to 1,000 bytes.” This is
explicitly stated because the “kilo” unit, when used in reference to computer storage device
sizes, is sometimes used to mean 2^10 (= 1,024).
The answer must be in kilobytes, and rounded to the first decimal place, so the answer is
“160.”
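For reference, the calculation can be confirmed with a few lines of Python (a simple check, not part of the question):

```python
channels = 2            # stereo
sampling_hz = 40_000    # 40 kHz sampling frequency
quantization_bits = 16  # quantization bit rate

# bits per second converted to bytes per second (1 kbyte = 1,000 bytes here)
bytes_per_second = channels * sampling_hz * quantization_bits // 8
print(bytes_per_second / 1_000)  # 160.0 (kbytes)
```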

(2) This question requires the examinee to calculate the recording time when compressing
digitized data to 1/10 of its original size, and recording it in a flash memory, using a monaural,
40kHz sampling frequency recording mode.
Monaural (single channel) digital data, before compression, is 80 kbytes per second, half the
data volume calculated in (1). This data is compressed to 1/10 its original size, so the data
stored on the flash memory is 8 kbytes per second. The flash memory has a storage capacity of
1 Gbyte. This
“G (giga)” means k × k × k, so the amount of audio which can be recorded on the flash
memory, expressed in seconds, is:

Memory capacity / Data volume per second = (1 × 1,000 × 1,000 × k) / 8k = 125,000 (seconds)

The question contains the instruction to “[a]nswer in hours and minutes, rounding to the
nearest minute.” 125,000 seconds, thus converted, is:

125,000 / 3,600 ≈ 34.722… (hours)

The decimal fraction of this can be multiplied by 60 to convert it into minutes.

0.722 × 60 = 43.32 (minutes)

The answer, then, is “34 hours, 43 minutes.”
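The conversion into hours and minutes can be checked in Python (an illustrative sketch, not part of the question):

```python
bytes_per_second = 80_000 // 10   # mono 80 kB/s, compressed to 1/10
capacity = 1_000 * 1_000 * 1_000  # 1 Gbyte, with k = 1,000

seconds = capacity // bytes_per_second  # 125,000 seconds of audio
hours, rest = divmod(seconds, 3_600)
minutes = round(rest / 60)              # round to the nearest minute
print(hours, minutes)  # 34 43
```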

(3) This question asks what kind of recording mode should be offered which would result in a
recording time 10 times as long as the stereo, 40kHz sampling frequency recording mode. It
specifies, however, that the quantization bit rate and compression rate are to be left unchanged.
Recording time can be extended by increasing the compression rate, or decreasing the amount
of original data. Because the question specifies that the compression rate is not to be changed,
the amount of original data must be decreased.
First, switching from stereo to monaural doubles the recording time. The data must then be
further reduced to 1/5 of its original size. As explained in (1), the volume of data per channel
is the sampling frequency × the quantization bit rate. The quantization bit rate is not to be
changed, so the sampling frequency should be reduced to 1/5, or 8kHz. Therefore, the answer
is “8kHz sampling frequency recording mode.”
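The factor arithmetic can be verified with a short sketch (variable names are illustrative):

```python
target_factor = 10   # recording time must become 10 times as long
stereo_to_mono = 2   # halving the channels doubles the recording time

remaining = target_factor / stereo_to_mono  # data must shrink 5x more
print(40 / remaining)  # 8.0 -> an 8 kHz sampling frequency
```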

[Subquestion 2]
Subquestion 2 is a fill-in-the-blank subquestion concerning a key scan circuit.
The key scan circuit is shown in Fig. 2.
Keys are connected to horizontal signal lines connected to output ports, and vertical signal lines
connected to input ports. When a key is pressed, the horizontal and the vertical signal lines attached
to that key are connected, and the signal from the output port is fed into the input port. For
example, if the “4” key is pressed, the output value of output port PA1 will be fed into input port
PB2.
When no keys are pressed, the input ports and output ports are completely unconnected, so the
value of each bit of the input ports, pulled up to the + power supply voltage (the resistor
connected to the + power supply is called the pull-up resistor), will be at the High logic level.
Here, for example, if the 4 key is pressed, the PA1 output port output value
will be fed into input port PB2, but if PA1’s value is High, whether or not key 4 is pressed, PB2’s
value will be High, so it will be impossible to recognize if key 4 has been pressed. If the PA1
output port’s value is Low, when key 4 is pressed, the PB2 input port’s level will become Low, so it
will be possible to recognize that the key has been pressed. In order to know that the key that was
pressed was the 4 key, PA1’s level must be Low, and the other PA bits must be High. If PA2 is also
Low, it will be impossible to determine, when input port PB2’s level is Low, whether key 4 or key
7 is pressed. Therefore, in a key scan circuit, only one output port bit at a time must be set to
Low. One can determine whether the key connected to an output port line with a Low value has been
pressed by reading the input port PB value.

• Blank A: With the knowledge above, reading the question text, it is clear that “Low” goes in this
blank.

• Blank B: This question concerns how to use the INT signal. The INT signal is the NOR of the NOT
of each input port bit. When no keys are pressed, the three signals input into the NOR are all Low,
so the INT signal is High. When at least one key is pressed, at least one of the signals feeding into
the NOR will be High, so the INT signal will change from High to Low. What is generated at this
edge is what goes into blank B. Generally, when input data is generated, an interrupt is sent to the
MPU to notify it that data has been generated. Therefore, “interrupt” goes in the blank.

• Blanks C, D: This subquestion asks what value should be set for output port PA in order to evaluate
whether key 5 has been pressed, and what value will be read in from input port PB when only key
5 is pressed. In order to determine when key 5 is pressed, output port PA1 must be Low, and other
bits must be High. In other words, PA0 = 1, PA1 = 0, PA2 = 1, PA3 = 1. 0 represents “Low,” and 1
represents “High.” PA0 is the least significant bit, so in binary this is 1101, or in hexadecimal,
“D”. Therefore, the answer entered into blank C should be “D”. The read input port values will be
PB0 = 1, PB1 = 0, and PB2 = 1, and as PB0 is the least significant bit, in binary this is 101
(written with 4 places, 0101), or in hexadecimal, “5”. Therefore, the answer entered into blank D
should be “5”.
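The bit arithmetic for blanks C and D can be checked in Python (bit positions as given in the explanation):

```python
# PA0/PB0 are the least significant bits, as stated in the explanation.
# To scan for key 5: drive PA1 Low (0) and all other PA bits High (1).
pa = (1 << 0) | (0 << 1) | (1 << 2) | (1 << 3)   # binary 1101
print(format(pa, "X"))  # D

# With only key 5 pressed, PB1 reads Low while PB0 and PB2 read High.
pb = (1 << 0) | (0 << 1) | (1 << 2)              # binary 0101
print(format(pb, "X"))  # 5
```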

[Subquestion 3]
Subquestion 3 concerns interrupts.
The subquestion text mentions three interrupts: (1) Data transfer interrupts from the I/O unit,
(2) timer interrupts, and (3) key interrupts.
Interrupt (1) occurs when data is sent between the I/O unit and the MPU, with one interrupt
every 1kbyte of data transferred. Interrupt (2) occurs when triggered by a timer, and is generated
every 10 milliseconds. This interrupt routine sets the key scan output port value. Interrupt (3) is
generated when a key is pressed.
The subquestion asks which interrupt should have the highest interrupt priority, and why. When
an interrupt is generated, the MPU calls an interrupt routine to process it. When a high priority
interrupt occurs during that processing, the processing underway is stopped, and the higher priority
interrupt is processed. Therefore, the most urgent interrupt must be given the highest priority.
Processing of lower priority interrupts occurs after that of higher priority interrupts, and in the
worst case, the processing may not be performed at all.
The question, then, is which interrupt must be processed the fastest.
It states in the question, about Interrupt (1), that “[d]ata transfer must be performed within 5
milliseconds of the data transfer request issued by the I/O unit.”
Timer interrupt (2) is used to set the value of the output port for key scanning, and delaying its
processing somewhat is unlikely to have an impact.
Interrupt (3) is generated when a key is pressed, and there is no problem as long as it is
processed within several dozen ms.
Therefore, the interrupt which should be given the highest priority is (1), the data transfer
request from the I/O unit.
A reason such as “After the interrupt, 1kbyte of data must be transferred within 5 milliseconds”
would be an appropriate answer.


Q13-8 Dehumidifier (embedded systems)

[Answer]

[Subquestion 1] (1) (a) 31.7 (b) 14.4 (c) 43.7

(2) (d) Operates (e) Stopped (f) Open

[Subquestion 2] (a) 98 (b) C (c) 0 (d) float

(e) tank detection

[Subquestion 3] (a) multiplexer (b) B (c) C (d) 3 (e) A

(f) 3

[Explanation]
This question concerns a dehumidifier. The information necessary to answer the subquestions is
provided within the question text. They can be answered easily with a close reading of the question
text, and an accurate understanding of the subquestions.

[Subquestion 1]
(1) is a fill-in-the-blank question which requires the examinee to calculate the relationship
between relative humidity and condensation, the fundamental principle of a dehumidifier. As the
question states, air contains water vapor, but for any given air temperature, there is a maximum
volume of water vapor that can be suspended. This maximum value is called the saturated vapor
density. Table 1 shows saturated vapor density for several different temperatures. Air which has
reached its saturated vapor density cannot store any more water vapor in the form of a gas, so the
water vapor becomes a liquid. This is the phenomenon normally known as condensation. As one
can see from Table 1, the lower the air temperature, the lower the saturated vapor point.
Dehumidifiers work by cooling air containing water vapor, causing condensation, reducing the
amount of water vapor suspended in the air.
The humidity of air is usually represented in terms of relative humidity. The definition of
relative humidity is presented in the question, as follows:

Relative humidity (%) = (Vapor density / Saturated vapor density) × 100

With an understanding of the above, let us now look at the questions.


Blank A should contain the amount of water vapor suspended in 1 m³ of air when the
temperature is 35°C, and the relative humidity is 80%. Based on the definition of relative humidity,
the amount of water vapor in the air is calculated as shown below.

Amount of water vapor = relative humidity × saturated vapor density / 100

Looking at Table 1, the saturated vapor density of 35°C air is 39.6 g per 1 m³, so when the
relative humidity is 80%, the amount of water vapor suspended in 1 m³ of air is:

0.8 × 39.6 = 31.68 (g/m³)

The answer must be rounded to the first decimal place, so blank A is 31.7.
Blank B should contain the amount of water vapor which condenses into liquid water when this
air is cooled to 20°C. The saturated vapor density of 20°C air is 17.3 g. There is 31.7 g of water
suspended in the air, so the amount of water which will condense when the air is cooled to 20°C
is:

31.7 − 17.3 = 14.4 (g/m³)

Blank C should contain the relative humidity when the air, which has been cooled to 20°C, is
heated up again to 35°C. The amount of water vapor in the air which is reheated to 35°C is the
amount of water vapor originally suspended in the air, 31.7g, minus the amount which condensed
out, 14.4g, resulting in 17.3g which is still suspended in the air.
Therefore, using the definition of relative humidity:

17.3 / 39.6 × 100 = 43.6868… (%)

The answer must be rounded to the first decimal place, so blank C is 43.7.
The question itself is simple, but it involves three-digit multiplications and divisions, so it is
somewhat troublesome. A mistake in the initial calculations will render all further calculations
meaningless, so all calculations should be rechecked.
Question (2) concerns control during automatic dehumidifying operation.
In the automatic dehumidifying mode, the dehumidifier maintains the room humidity between
55% and 60%. Fig. 3 is a sample time chart of the automatic dehumidifying mode, and indicates
the behavior of the compressor, fan motor, and two-way valve in relation to changes in the relative
humidity. The question asks that the appropriate terms or phrases to be inserted into blanks D
through F. As Fig. 3 shows, blank D should contain the state of the compressor (in operation or
stopped), blank E should contain the state of the fan motor (in operation or stopped), and blank F
should contain the state of the two-way valve (open or closed).
In the automatic dehumidifying mode, when the room humidity drops below 55%, the
dehumidifying function stops. In other words, automatic dehumidifying operation consists of the
following two states:

(1) Dehumidifying is performed


(2) Dehumidifying is not performed


Also, when the temperature of the evaporator drops to 2°C, defrosting begins. In other words,
there is a third state.

(3) Defrosting
During defrosting, dehumidifying is not performed.

When dehumidifying is underway, coolant circulates through the compressor - condenser -


evaporator. The evaporator removes external heat, and the condenser radiates heat externally. This
coolant circulation takes place when the compressor operates and the two-way valve is closed.
When this occurs, the fan motor is in operation.
When dehumidifying is not being performed, the compressor is stopped, and the coolant does
not circulate. The two-way valve is closed, and the fan motor is in operation.
During defrosting, the compressor operates, the two-way valve is open, and compressed, high
temperature coolant flows into the evaporator, heating it. The question text states that the fan motor
is stopped while this occurs. Summarizing the above:

State            Dehumidifying performed   Dehumidifying not performed   Defrosting
Compressor       In operation              Stopped                       In operation
Fan motor        In operation              In operation                  Stopped
Two-way valve    Closed                    Closed                        Open

The vertical lines in Fig. 3 represent the times at which the dehumidifier state changes.
Consider the times which correspond to the blanks. The line representing the relative humidity
drops as time progresses, which indicates that the relative humidity is falling -- that is,
dehumidification is underway. When the relative humidity line begins rising, it indicates
that dehumidification is not underway. The dehumidifier switches from dehumidifying to not
dehumidifying when the humidity is 55%, or when defrosting begins. During the period which
corresponds with the blank, when the humidity is over 55%, the dehumidifier switches from
dehumidifying to not dehumidifying, and dehumidification does not begin again even when the
humidity rises over 60%. This indicates that during the time in question, defrosting is underway.
Therefore, blank D is “In operation,” blank E is “Stopped,” and blank F is “Open.”
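The three operating states and their component settings can be expressed as a simple lookup table in Python (a hypothetical sketch; the state names are illustrative):

```python
# Component settings per operating state, as summarized in the table above.
states = {
    "dehumidifying":     {"compressor": "In operation", "fan": "In operation", "valve": "Closed"},
    "not dehumidifying": {"compressor": "Stopped",      "fan": "In operation", "valve": "Closed"},
    "defrosting":        {"compressor": "In operation", "fan": "Stopped",      "valve": "Open"},
}

# Blanks D through F correspond to the defrosting state:
print(states["defrosting"]["compressor"])  # In operation (blank D)
print(states["defrosting"]["fan"])         # Stopped (blank E)
print(states["defrosting"]["valve"])       # Open (blank F)
```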

[Subquestion 2]
This is a fill-in-the-blank question concerning the basic usage of a PPI (Programmable
Peripheral Interface).
A PPI is often used in embedded systems. One way in which they are often used is for a control
word to be used to determine if the PPI’s ports are used as input ports or output ports.
When data is read from a port designated as an input port, the port’s state at the time of the
reading is read.
When data is written to a port designated as an output port, the written data is set for the port,
and the value of the data is retained until data is written to that port the next time.
A PPI generally has multiple ports, and each port’s I/O status is specified with a control word.
The PPI in this question has three 8-bit ports: port A, port B, and port C. Port C can use the first 4
bits and second 4 bits independently. Therefore, the PPI in the question effectively has four ports:
two 8-bit ports, port A and port B, and two 4-bit ports, port C upper and port C lower. The I/O state
of each of these four ports is specified by bit 4 (port A specification bit), bit 3 (port C upper
specification bit), bit 1 (port B specification bit), and bit 0 (port C lower specification bit) of a
control word. When the value of any of these bits is “0,” it indicates that the port is an output port.
When the value is “1,” it indicates that the port is an input port.
Blank A in the question should contain the hexadecimal value of the control word. As explained
before, the control word contains information which specifies whether each port is an input port or
an output port. Looking at the table:

Port A:                    Input  → set bit 4 of the control word to 1
Port B:                    Output → set bit 1 of the control word to 0
Port C (lower, bits 0-3):  Output → set bit 0 of the control word to 0
Port C (upper, bits 4-7):  Input  → set bit 3 of the control word to 1

Therefore, the data which should be set as the control word is, in binary:
10011000
Converting this result into hexadecimal produces “98.”
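The control word can be assembled bit by bit in Python. Note that bit 7 is taken as 1 because the binary value above is 10011000; on 8255-style PPIs this top bit is the mode-set flag, which is an assumption here, not something stated in the excerpt:

```python
# Bit values per the explanation: 1 = input port, 0 = output port.
# Bit 7 is 1 in the binary value 10011000 (assumed to be the mode-set flag).
control = (1 << 7) | (1 << 4) | (1 << 3)  # port A input, port C upper input
# bits 1 and 0 stay 0: port B and port C lower are output ports

print(format(control, "02X"))  # 98
```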
Blanks B and C concern the control methods of the compressor motor. Operating or stopping
the motor, according to Table 2, is controlled by bit 0, which corresponds to port C. In other words,
when the value of this bit is “1,” the compressor motor operates, and when the value of the bit is
“0,” the motor stops. Therefore, blank B is “C," and blank C is “0.”
Blanks D and E concern compressor and motor operating conditions. The question states that
“When there is no tank in the dehumidifier, or when the amount of water in the tank has reached
the specified amount, the dehumidifier stops operating.” Therefore, for the motor to operate, the
tank must be in the dehumidifier, and the amount of water dehumidified must not have reached the
specified level. The tank detection switch detects whether or not the tank has been inserted. This
switch is set to on when the tank is inserted. When the amount of water removed by
dehumidification reaches the specified level, the float switch is turned on. Therefore, for the
motor to operate, the tank detection switch must be on, and the float switch must be off. Therefore, the
answer entered into blank D should be “float,” and blank E should be “tank detection.”
When actually operating the motor, the port C data (address A2) is first read, and if bit 4 is
“0” and bit 5 is “1,” bit 0 of port C is set to “1.”
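This read-check-set sequence can be sketched in Python (the bit positions follow the explanation: port C bit 4 = float switch, bit 5 = tank detection switch, bit 0 = compressor motor; the function name is hypothetical):

```python
def try_start_motor(port_c):
    """Start the motor only if the tank is inserted and not yet full."""
    float_sw = (port_c >> 4) & 1  # 1 when the tank is full of water
    tank_sw = (port_c >> 5) & 1   # 1 when the tank is inserted
    if float_sw == 0 and tank_sw == 1:
        port_c |= 1               # set bit 0: the compressor motor operates
    return port_c

print(format(try_start_motor(0b0010_0000), "02X"))  # 21: motor started
print(format(try_start_motor(0b0011_0000), "02X"))  # 30: tank full, motor off
```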

[Subquestion 3]
This question concerns reading information from the sensor. When digital values, such as
whether or not a tank has been detected, or whether or not the tank is full of water, are sent to the
MPU, digital input ports such as a PPI can be used to read the data. For analog data, such as
temperatures or humidity, the analog data is first converted into digital data using an A/D converter,
and then read into the MPU from a digital input port.
When there are multiple sensors issuing analog data, the analog data is usually digitized using a
single A/D converter. At any given time, the A/D converter digitizes only one analog signal.
Therefore, one output value from the multiple sensors is selected, and that analog data is digitized
by the A/D converter. A device which selects a single signal from multiple signals is generally
called a multiplexer. As Fig. 2 shows, three sensors are connected to the left side of one
multiplexer. A control signal is sent to the multiplexer for selecting one of the multiple input
signals. In Fig. 2, this is the signal sent from the PPI to the bottom of the multiplexer. The analog
signal selected by the multiplexer is fed into the A/D converter. An input signal is issued to the A/D
converter to start A/D conversion. In Fig. 2, this is the signal sent from the PPI to the bottom of the
A/D converter. The A/D converter receives a signal to start the conversion, and converts the analog
data fed into it into digital data. This requires some time, be it 1 microsecond or 100 microseconds.
A/D converters which require little time for conversion are called high speed A/D converters, while
those that require some time for conversion are called low speed A/D converters.
A/D converters convert analog data into n bits of digital data. Obviously, the greater the value
of n, the more precise the digital conversion. The number of bits of converted digital data is called
the resolution. For example, when an analog input signal is between 0 and 10 V, with 8-bit
resolution, the smallest voltage difference that can be converted is 10 / 2^8 ≈ 0.039 (V).
With 10-bit resolution, it is 10 / 2^10 ≈ 0.0098 (V). Audio CDs have a resolution of 16 bits,
so it is 10 / 2^16 ≈ 0.00015 (V).
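The resolution figures can be reproduced with a small helper (a generic check; the function name is illustrative):

```python
def lsb_voltage(full_scale, bits):
    """Smallest representable voltage step of an n-bit A/D converter."""
    return full_scale / (2 ** bits)

print(round(lsb_voltage(10, 8), 3))    # 0.039
print(round(lsb_voltage(10, 10), 4))   # 0.0098
print(round(lsb_voltage(10, 16), 5))   # 0.00015
```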
As was mentioned earlier, A/D conversion takes some amount of time, but analog data may
change over that time. When analog data fed into an A/D converter changes during conversion, it is
impossible to accurately digitize that data. Therefore, when an A/D conversion instruction is
received, analog data is sampled, and the sampled analog data is held constant while it is digitized.
The type of circuit responsible for this is a sample hold circuit. Many recent modular A/D
converters have built-in sample hold circuits. The A/D converter shown in Fig. 2 can be interpreted
as containing a sample hold circuit.
In this question, the 8 bit digitized data resulting from A/D conversion is read by the MPU via
port A of the PPI.


There are two approaches to reading digitized data, taking into account the time needed for A/D
conversion. In one approach, after an A/D conversion instruction is sent, the MPU waits a
sufficient amount of time for the conversion to complete before reading the data. The other
approach is to send an interrupt to the MPU when the A/D converter completes its conversion. The
dehumidifier in the question uses the latter approach, using interrupts.
In Subquestion 3, it is clear that blank A should contain where the signal from the sensor is sent.
As explained above, and as Fig. 2 shows, blank A should be the “multiplexer.” Three sensor signals
are fed into the multiplexer. Blank B must contain the source of the signal used to select one of
those three sensor signals. Looking at Table 2, one can see that port B’s bits 0 and 1 are for sensor
selection, so blank B should be “B.” Blank C should contain the A/D conversion start signal port,
and blank D the bit of that port bit. Looking at Table 2, one can see that A/D conversion is started
by port C, bit 3. Therefore, blank C is “C,” and blank D is “3.” Blank E should contain the port used
for reading digitized data, which, according to Table 2, is “A.” Blank F must contain the number of
times in which data must be written to ports in order to perform A/D conversion. In order to
perform A/D conversion, first, one of the three sensors must be selected. In order to do this, port
B’s bits 0 and 1 must be set to values corresponding to the desired sensor, and written to port B.
Next, in order to start A/D conversion, port C’s bit 3 must be set to “1.” Setting the same bit to “0”
then starts A/D conversion. Therefore, ports must be written to three times, so blank F is “3.”
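The three port writes can be sketched as a simple log in Python (a hypothetical model; a real driver would write to the PPI registers, and the values of the unrelated port C bits are ignored here for illustration):

```python
writes = []

def write_port(port, value):
    """Record a port write; stands in for an actual PPI register write."""
    writes.append((port, value))

write_port("B", 0b01)    # 1: select one of the three sensors (bits 0-1)
write_port("C", 0b1000)  # 2: port C bit 3 -> "1"
write_port("C", 0b0000)  # 3: port C bit 3 -> "0"; conversion starts

print(len(writes))  # 3 -> blank F
```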

Q13-9 Systems using real-time kernels

[Answer]

[Subquestion 1] (a) 1.0 milliseconds

[Subquestion 2] (b) wai_flg (c) 1 (d) 0 (e) 1 or more

(f) ready state

[Explanation]

[Subquestion 1]
The diagram below is based on the description in the subquestion text, and shows the state
transition of each task.

[Figure: state transitions of the three tasks and the interrupt handler, ordered from high to
low priority: Task 1 (processing A), Task 2 (processing B1, B2), Task 3 (processing C1, C2),
and the interrupt handler. Each task switch takes 0.5 milliseconds; state changes (1) through
(11) are marked along a total elapsed time of 874 milliseconds.]

<Description of the figure>


(1) Cause B’s external interrupt is generated, and the interrupt handler is called.
(2) set_flg is executed, and task 2 is set to the ready state.
(3) There is no task with higher priority than task 2 and in the running state or the ready state,
so task 2 is set to the running state. Task 3 switches from the running state to the ready
state.
(4) Cause A’s external interrupt is generated, and the interrupt handler is called.
(5) set_flg is executed, and task 1 is set to the ready state.
(6) There is no task with higher priority than task 1 and in the running state or the ready state,
so task 1 is set to the running state. Task 2 switches from the running state to the ready
state.
(7) Upon completion of the processing, task 1 executes wai_flg, and switches to the waiting
state.
(8) Task 2 has the highest priority among the tasks in the ready state, and switches to the
running state.
(9) Upon completion of the processing, task 2 executes wai_flg, and switches to the waiting
state.
(10) Task 3 has the highest priority among the tasks in the ready state, and switches to the
running state.
(11) Upon completion of the processing, task 3 executes wai_flg, and switches to the waiting
state.

The execution time of each task is shown below.

A: 20 ms
B1 + B2: 50 ms
C1 + C2: 800 ms

Therefore, the total processing time of the interrupt handler is:

874 (ms) − 20 (ms) − 50 (ms) − 800 (ms) − 0.5 (ms) × 4 = 2 (ms)
(total time, minus the execution times of task 1, task 2, and task 3, minus 4 task switches)

During this processing, the interrupt handler is called two times, and, as the subquestion states
that the processing time of the handler is identical each time:
2 (ms) ÷ 2 (number of times the handler is called) = 1 (ms)
The question requires the examinee to “[r]ound the values to first decimal place,” so the answer
is 1.0.
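The subtraction can be verified in Python (a simple check with the values from the explanation):

```python
total_ms = 874.0
task_exec_ms = 20 + 50 + 800   # execution times of tasks 1, 2, and 3
switch_ms = 0.5 * 4            # four task switches at 0.5 ms each
handler_calls = 2              # the interrupt handler runs twice

per_call = (total_ms - task_exec_ms - switch_ms) / handler_calls
print(round(per_call, 1))  # 1.0 (milliseconds)
```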

[Subquestion 2]
In this subquestion, one must take into account that the order of the timing when set_flg is
called may not be determined, due to external interrupts and the timing of each task calling wai_flg.
Assuming an initial value of 1:
• When wai_flg is called first:
The task calls wai_flg, the event counter is decremented to 0 (1 − 1 → 0), and the task is set
to the waiting state.
When set_flg is called, the event counter is 0, so it is incremented to 1 (0 + 1 → 1), and the
task is set to the ready state.
• When set_flg is called first:
The event counter is still in its initial state, so it is incremented to 2 (1 + 1 → 2), and the
task state is not changed. In other words, instead of being set to the waiting state, it remains
in the running state. After that, if wai_flg is called, the event counter is decremented to 1
(2 − 1 → 1), and the task is not set to the waiting state. Thus, the objectives of the counter
are achieved.
Assuming an initial value of 0:
• When wai_flg is called first:
The task calls wai_flg, and the event counter becomes negative (0 − 1 → −1). Even if
set_flg adds 1, the result is 0, not 1.
• When set_flg is called first:
set_flg sets the value to 1 (0 + 1 → 1), and the state simply changes from the running state
to the ready state.


Given the above, the initial value must be 1.
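The argument can be illustrated with a toy Python model consistent with the behavior described above (the real kernel's wai_flg/set_flg semantics are richer; `run` and its rules here are assumptions for illustration):

```python
# Minimal model of the event counter: wai_flg decrements the counter and
# blocks the task when it reaches 0 or below; set_flg increments it and
# readies a waiting task once the counter reaches 1.
def run(initial, first):
    counter, state = initial, "running"

    def wai_flg():
        nonlocal counter, state
        counter -= 1
        if counter <= 0:
            state = "waiting"

    def set_flg():
        nonlocal counter, state
        counter += 1
        if counter >= 1 and state == "waiting":
            state = "ready"

    first_call, second_call = (wai_flg, set_flg) if first == "wai" else (set_flg, wai_flg)
    first_call()
    second_call()
    return counter, state

print(run(1, "wai"))  # (1, 'ready')   -- the task is woken as intended
print(run(1, "set"))  # (1, 'running') -- the task never blocks needlessly
print(run(0, "wai"))  # (0, 'waiting') -- with initial 0, set_flg cannot wake it
```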

Q13-10 Public key cryptography

[Answer]

[Subquestion 1] (a) c (b) g (c) e (d) f

[Subquestion 2] It should perform verification of whether there are other responses with the
same serial number.

<Alternative answer> It should perform verification of whether there are any
responses with serial numbers which have not been assigned.

[Subquestion 3] (1) (e) hash

(f) digests <Alternative answer> message digests

(g) the respondent’s private key

(h) the respondent’s public key

(2) Respondent anonymity

(3) submission file digests digitally signed using department manager private
keys

[Explanation]
This question concerns the development of a questionnaire system using public key cryptography.
Public key cryptography is used in creating and authenticating encrypted messages, but it requires
different keys for encryption and decryption, making key management difficult. This makes it a
common theme in the test, so it is to your advantage to take this question as an opportunity to gain a
correct understanding of the relationship between public key cryptography and key usage.

[Subquestion 1]
Shared key cryptography, like public key cryptography, is also frequently used. It uses the same
key (a shared key) for both encryption and decryption, but if the key is made public, it can be used by
anyone to decrypt encrypted messages, so the key must be kept confidential. With public key
cryptography, keys come in pairs. One key can be used for encryption, and the other for decryption.
Therefore, even if one key (key (1)) is made public, encrypted messages which use this key can only
be decrypted by its partner key (key (2)), so as long as key (2) is kept confidential, encrypted messages
for the owner of key (2) can be created. A message encrypted with key (2) can be decrypted with its
partner, key (1). Key (1) is generally released publicly, as the public key, so anyone can decrypt
encrypted messages. This means that the encrypted message has no value in terms of secrecy, but it
does have other useful applications. Obtaining correct information when using key (1) to decrypt a
message is only possible when the encrypted message was created via the correct encryption
procedure. This correct procedure consists of using key (2), key (1)’s partner, to encrypt the message.
Since the owner of key (2) is the only person who knows the key, this means that the message was
encrypted by the owner of key (2). In other words, this approach proves the sender’s identity. In this
way, keys can be used in two ways in public key cryptography in order to satisfy differing objectives.
Moreover, some examinees may have seen “private key” in the answer group and concluded that
shared key cryptography was being used for encryption, but as the question states that public key
infrastructure (PKI) is used, and since recent examination questions use “shared keys” instead of
“private keys” to refer to the keys used in shared key cryptography, the examinee must assume that
public key cryptography is being used for encryption.
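The two usages of a key pair described above (encryption for secrecy, and "encryption" with the private key for proving identity) can be illustrated with a toy RSA sketch. The primes 61 and 53 and the resulting modulus 3233 are the classic textbook values, far too small for real use, and are only meant to make the arithmetic visible:

```python
# Toy RSA key pair (textbook example values; not secure).
p, q = 61, 53
n = p * q                  # 3233, the public modulus
e = 17                     # public exponent: key (1), released publicly
d = 2753                   # private exponent: key (2), kept confidential
assert (e * d) % ((p - 1) * (q - 1)) == 1

m = 42                     # a message, encoded as an integer smaller than n

# Usage 1: anyone encrypts with the public key (1);
# only the owner of the private key (2) can decrypt.
c = pow(m, e, n)
assert pow(c, d, n) == m

# Usage 2: the owner "encrypts" (signs) with the private key (2);
# anyone can decrypt (verify) with the public key (1), which proves
# the message was created by the owner of key (2).
s = pow(m, d, n)
assert pow(s, e, n) == m
```

Either key of the pair can play the "public" role; what matters is that whichever key is kept confidential determines what the pair can guarantee (secrecy or sender identity).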

• Blank A: The objective of this encryption is the creation of an encrypted message. The key point in
this is that nobody other than the respondent is able to decrypt it, so it must be created such that it
can only be decrypted with the respondent’s private key. Therefore, what should be used for
encryption is “the respondent’s public key,” and thus the answer is c). Moreover, the other public
keys in the answer group can be used by anyone, so it is important to note that since the
respondent does not know the corresponding private key, decryption of a message encrypted with
one of these other public keys will not be possible.

• Blanks B through D: The objective of this is also the creation of an encrypted message. Referring to
the overview of the questionnaire submission method in Fig. 2, let us look at what is encrypted
and who decrypts the encrypted sections. Key A is used to encrypt serial numbers, signatures, and
response contents. This section is also encrypted with key B, but even if it is decrypted by the
person in charge of collection, this decryption will only restore the encrypted contents to the state
they were in before encryption with key B – that is, the decrypted contents will be the encrypted
results of key A encryption. These results are then sent as-is to the person in charge of tallying
results, to be tabulated after decryption. In other words, the encrypted message created with key A
can only be decrypted by the person in charge of tallying results, so that encryption uses “the
public key of the person in charge of tallying results.” The person in charge of collection must
decrypt the encrypted message created with key B, and remove attributes such as the name of the
respondent, etc. Therefore, the person in charge of collection must be able to decrypt the message.
However, if the message can be easily decrypted by anyone other than the person in charge of
collection as well, the encryption would not serve any purpose, so the encryption must be
performed using “the public key of the person in charge of collection” so that only the person in
charge of collection can decrypt it. The attributes that the person in charge of collection deletes,
such as names, before gathering together a submission file must be decrypted. This section is
encrypted with key B, the public key of the person in charge of collection, so decryption requires
“the private key of the person in charge of collection.” Therefore, blank B is g), blank C is e), and
blank D is f).

[Subquestion 2]
As the question states, serial number signatures are created using the questionnaire response
program’s private key, so even if signature verification is performed, the identity of the person who
created the response cannot be authenticated. The basis of identity authentication for this system is
the use of a serial number which only the respondent knows. However, if management of these
serial numbers and their corresponding respondents is performed, anonymity cannot be preserved.
Therefore, let us consider verification contents for ensuring that serial numbers cannot be falsified
by anybody else. The question does not describe how the serial numbers are created, or what type
of numbering system is used, so it is sufficient to consider a response assuming a simple situation.
For example, if someone other than the respondent uses his/her own serial number and creates a
signature, submitting a response file, the signature will be correct for that serial number. In other
words, signature verification would not be sufficient to detect if a different person had sent in a
response in this manner. Even in accidental situations, such as if a respondent sent in a previous
response by mistake, the serial number signature would be correct. In order to check this type of
response file, what is important is the uniqueness of serial numbers, including those of past
questionnaires. Therefore, serial number overlap verification is necessary. Overlap verification
would not detect if a falsified serial number was used, so there must also be verification that the
serial number is one that was actually used. Thus, an appropriate answer is one which summarizes
one of these, such as the sample answer.
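The two checks discussed above (overlap verification and unassigned-serial-number verification) can be sketched as follows. The serial-number format, the function name, and the two sets are hypothetical, since the question does not describe the actual numbering system:

```python
def verify_serial(serial, issued_serials, used_serials):
    """Return an error string, or None if the serial number is acceptable.

    issued_serials: serial numbers actually assigned to respondents.
    used_serials:   serial numbers already seen in submitted responses.
    """
    if serial not in issued_serials:
        return "serial number was never assigned (possible forgery)"
    if serial in used_serials:
        return "duplicate serial number (possible reuse by another person)"
    return None

issued = {"A001", "A002", "A003"}   # hypothetical assigned serial numbers
used = {"A001"}                      # hypothetical already-submitted serials

assert verify_serial("A002", issued, used) is None
assert verify_serial("A001", issued, used) is not None   # overlap check
assert verify_serial("Z999", issued, used) is not None   # unassigned check
```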

[Subquestion 3]
(1) An explanation is provided in Subquestion 1 regarding identity authentication using public key
cryptography. Here, tampering detection is also needed, in addition to identity authentication.
Identity authentication, as explained earlier, can be performed through encryption using a
person’s private key, which only they can do. To detect tampering – that is, changing contents
midway – one can compare received contents to sent contents. One could attach to a file to be
sent the same file, encrypted with the sender’s own private key. The recipient could then use
the sender’s public key to decrypt the file, and check that the contents matched. However, the
contents which would be attached for authentication would be the same size as the file being
sent, which would not be efficient. Instead, what is used is a (message) digest, created from
the sent contents with a function called a hash function. Moreover, digest here does not mean a
summary. It can be best thought of as a contracted form of the original contents.
A hash function performs specific calculations on input data (bit strings), using a function that
returns a bit string which is of a fixed length (hash value) regardless of the length of the input
data. The hash value cannot be used to obtain the input information, so it is uni-directional.
The calculations used by the hash function are designed to return different hash values for
different input contents. Hash values have a fixed length, so the likelihood that a different
input will result in the same hash value is not zero. However, in reality, it is impossible to
produce tampered files with meaningful contents which cannot be identified as having been
tampered with, and still result in the same hash value.
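These properties (fixed-length output regardless of input length, and a completely different value for even slightly different inputs) can be observed with a standard hash function such as SHA-256, used here purely as an example; the question does not name a specific function:

```python
import hashlib

short_digest = hashlib.sha256(b"hello").hexdigest()
long_digest = hashlib.sha256(b"hello" * 10_000).hexdigest()

# Fixed length: 256 bits = 64 hex characters, regardless of input size.
assert len(short_digest) == len(long_digest) == 64

# A one-character change in the input yields a completely different digest.
assert hashlib.sha256(b"hello").hexdigest() != hashlib.sha256(b"hellp").hexdigest()

# Uni-directional: the digest alone gives no practical way back to the input.
```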
The sender uses a hash function to create a digest from the contents that are to be sent. The
digest is then encrypted with the sender’s private key, and sent together with the sent contents
as a digital signature. The recipient uses the same hash function to create a digest from the
contents that have been received. The digital signature (the digest, encrypted with the sender’s
private key) is decrypted with the sender’s public key, and compared against the digest created
by the recipient. If the two match, that is, if the digest created by the recipient matches the
digest obtained by decrypting the digital signature, it indicates that the encryption was
properly performed using the sender’s public key. In other words, the creator of the digital
signature was the sender him/herself (identity authentication). The digest created when the
message was sent (decrypted from the digital signature) matching the digest created from the
received contents also indicates that the message contents have not been tampered with.
Conversely, if the two do not match, either the digital signature was forged, or the received
contents were tampered with. It is impossible to determine which, but as it indicates that at
least one of the two has occurred, the contents are not valid.
Filling in the blanks based on this, blank E is “hash,” blank F is “digest,” (or “message
digest”), blank G is “respondent’s private key,” and blank H is “respondent’s public key.”
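The sign-and-verify flow described above can be sketched by combining a hash function with the toy RSA idea. The modulus 3233 and exponents 17/2753 are textbook-sized illustration values only, and reducing the digest modulo n is a simplification a real scheme would not make:

```python
import hashlib

# Toy RSA pair (textbook values; not secure).
n, e, d = 3233, 17, 2753

def digest_int(data: bytes) -> int:
    # Shrink the SHA-256 digest modulo n so it fits the toy modulus.
    return int(hashlib.sha256(data).hexdigest(), 16) % n

def sign(data: bytes) -> int:
    # Sender: encrypt the digest with the private key (d) -> digital signature.
    return pow(digest_int(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    # Recipient: decrypt the signature with the public key (e) and compare
    # it against a digest recomputed from the received contents.
    return pow(signature, e, n) == digest_int(data)

message = b"questionnaire response"
sig = sign(message)

assert verify(message, sig)              # sender authenticated, no tampering
assert not verify(message, (sig + 1) % n)  # an altered/forged signature fails
```

With a real-size modulus, a tampered message would likewise (with overwhelming probability) produce a mismatching digest and fail verification.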

(2) As the question states, authentication using digital signatures made from digests created from
response files is a far more secure authentication method than the current serial number
approach. However, the Q system does not use this approach. The reason for this is that doing
so would make it impossible to implement one of the Q system’s requirements. The
authentication approach described in (1) requires decryption using the respondent’s own
public key. In other words, authentication cannot be performed without knowing who a given
response file belongs to. This is a violation of the requirement for respondent anonymity.
Therefore, the correct answer is “Respondent anonymity.”

(3) It is important to note that while respondent anonymity is required, anonymity of the person in
charge of collection is not. Therefore, a digital signature like that described in (1) can be
attached to the submission files sent by the person in charge of collection to the person in
charge of tallying results, providing identity authentication of the person in charge of
collection and making it possible to detect submission file tampering. When doing this the
digest to be used will be one created from the submission file, and the key used to create the
digital signature will be the private key of the person in charge of collection. The sample
answer shows an example of a brief description of this.


Q13-11 Protection of personal information by Web sites

[Answer]

[Subquestion 1] (a) f (d) h (e) d

[Subquestion 2] (b) escape processing or sanitizing

(c) file name or path name

[Subquestion 3] (1) SSL communication is being performed

(2) They should check the validity of the server certificate

[Subquestion 4] social engineering

[Explanation]

[Subquestion 1]
• Blank A: A countermeasure to vulnerabilities caused by improper Web server configuration is f)
(server) hardening. Hardening refers to overall strengthening of server security configuration. This
term may not be immediately familiar, but if one knows the other terms in the answer group, one
can see that none of them correspond to countermeasures against improper configurations, so
“hardening” can be arrived at by process of elimination.

• Blank D: Tricking web users and leading them to malicious websites is called h) phishing.

• Blank E: The term in the answer group which corresponds to something used in public key
cryptography to confirm senders is d) digital signatures.

[Subquestion 2]
• Blank B: SQL injection attacks are attacks which access databases without authorization using SQL.
This kind of attack targets Web applications which embed values entered into Web pages into SQL
statements and access databases. By entering malicious data on Web pages, they result in
unanticipated SQL statements being executed and unauthorized access.
One of the characteristics of these improper input values is that special symbols and codes,
which are not found in normally input values, are often used. A typical countermeasure against
SQL injection attacks is escape processing, in which these types of special symbols or codes are
replaced with other characters. This escape processing is also called sanitizing. Therefore, “escape
processing” or “sanitizing” is the correct answer.
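As a complement to escape processing, most database libraries offer placeholders (parameterized queries), which keep input values out of the SQL statement's structure entirely. A minimal sketch using Python's built-in sqlite3 module, with a hypothetical table and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# Malicious input typical of an SQL injection attempt.
name = "alice' OR '1'='1"

# Unsafe: string concatenation lets the quote characters rewrite the query,
# so the injected condition matches every row.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

# Safe: the placeholder treats the whole input as a single literal value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

assert len(unsafe) == 1   # injection succeeded: all rows returned
assert len(safe) == 0     # no user is literally named "alice' OR '1'='1"
```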


• Blank C: Directory traversal attacks use “../” or similar expressions to indicate parent directories and
directly specify file names, in an attempt to read file contents which normally cannot be accessed.
As a countermeasure, systems can be made not to recognize directly specified file names or path
names. Therefore, “file name or path name” is the correct answer.
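Beyond rejecting directly specified file names, a server-side check can normalize the requested path and confirm it stays inside the permitted directory. A minimal sketch with a hypothetical base directory (posixpath is used so the example behaves the same on any platform):

```python
import posixpath

BASE_DIR = "/var/www/docs"   # hypothetical directory open to users

def is_allowed(requested: str) -> bool:
    # Resolve "../" sequences, then confirm the result still lies
    # inside BASE_DIR.
    full = posixpath.normpath(posixpath.join(BASE_DIR, requested))
    return full == BASE_DIR or full.startswith(BASE_DIR + "/")

assert is_allowed("manual.html")
assert is_allowed("guides/setup.html")
assert not is_allowed("../../etc/passwd")   # directory traversal attempt
```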

[Subquestion 3]
(1) As stated in the section of the question which says that “the URL begins with https,” the lock
symbol displayed when accessing a Web server with a browser indicates that SSL
communication is being performed. Therefore, “SSL communication is being performed” is
the correct answer.

(2) As mentioned earlier, the lock symbol displayed in browsers only indicates that SSL
communications are being used, and is not sufficient to confirm that the Web site being
accessed is Company J’s web site. When using SSL, confirmation that the site being
connected to is the correct site is performed using server certificates. Users can confirm that
the web site is the Company J web site by checking that the certificate is valid. Therefore, an
answer such as “They should check the validity of the server certificate” is correct.
To confirm server certificates, users can double-click the lock symbol displayed in their
browser and view the certificate contents. Certificates contain information such as their issuer,
expiration date, and subject (the domain name of the party the certificate was issued to), so the
period of validity and subject contents can be checked to confirm their validity.

[Subquestion 4]
This question asks the name of an attack in which a malicious third party pretends to be a
Company J employee and directly asks for member information. This type of attack is normally
called social engineering, so this is the correct answer. Moreover, social engineering refers to
acquiring IDs, passwords, or the like by physical means in order to gain access without
authorization. In addition to the attack methods mentioned in the question, it includes looking over
users’ shoulders as they type (shoulder surfing), searching garbage for documents and materials, etc.


Q13-12 Access control using firewalls

[Answer]

[Subquestion 1] (a) DMZ (DeMilitarized Zone) (b) IP addresses

(c) port numbers (d) router D

(b and c may be reversed)

[Subquestion 2] (e) 25 (f) 220.1xx.204.2 (g) 220.1xx.204.1

(h) anywhere (i) 110 (j) 52000 (k) 192.168.10.100

[Subquestion 3] c

[Explanation]
This question concerns firewalls, a network security technology. Firewalls are logical walls erected
between networks with different security levels. Firewalls can be used to determine, based on
transmission needs, whether to permit or deny access between untrusted external networks (the
Internet, etc.) and trusted internal networks (corporate LANs), making it possible to limit unauthorized
access from external networks.

[Figure: The Internet (external network; security level: low) connects through a firewall (FW) to the
DMZ (open network; security level: medium), which connects through a second FW to the corporate
LAN (internal network; security level: high)]

Fig. A DMZ

Fig. A shows an example of a network with firewalls. Firewalls are used to separate the external
network (Internet) from the DMZ, and the DMZ from the internal network (corporate LAN).
A DMZ (DeMilitarized Zone) is a special network which is established between the external
network (the Internet) and the internal network (corporate LAN). Normally, the DMZ is used as an
open network, and contains Web servers, e-mail servers, and similar servers which are open to
networks outside of the company network.
Moreover, in Fig. A, two firewalls are used, but a DMZ can be established with a single firewall
containing three LAN ports.
Below is a simple explanation of firewall types.
Generally, firewalls are divided into packet filtering firewalls, used in the question, and application
gateway firewalls.
Understanding packet filtering firewalls requires an understanding of IP packets, so first let us go
over the basics of IP packets.

[Figure: An IP packet consists of an IP header (containing the source IP address and destination IP
address) and a data field; the data field holds a TCP packet, whose TCP header contains the source
port number and destination port number, followed by the application data]

Fig. B-1

[Figure: A packet filtering firewall allows or denies each TCP/IP packet based on the combination of
its destination IP address, source IP address, destination port number, and source port number (some
combinations are omitted), as defined in an access control list (packet filter setting table)]

Fig. B-2

IP packets are composed of an IP header and a data field, as shown in Fig. B-1. The IP header
contains the destination IP address and the source IP address. IP routers check this destination IP
address in order to determine where to send the IP packet (routing). IP packet data fields normally
contain TCP or UDP (upper level protocols) packets. TCP packets also have headers, called TCP
headers, which contain destination port numbers and source port numbers. These destination port
numbers are used by devices which receive TCP packets to identify which application should process
the packet. In the same way, UDP packets also have headers, called UDP headers, which contain
destination port numbers and source port numbers.
Packet filtering firewalls use the destination IP addresses, source IP addresses, destination port
numbers, and source port numbers contained in the IP and TCP(UDP) headers to determine whether to
permit or deny transmission, thereby controlling access (see Fig. B-2).
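A packet filter reads these fields at fixed offsets in the headers. As a rough illustration, both port numbers sit in the first four bytes of a TCP header; the port values below are arbitrary examples:

```python
import struct

# Build a minimal 20-byte TCP header in network byte order ("!"):
# source port, destination port, sequence no., acknowledgment no.,
# data offset/flags, window, checksum, urgent pointer.
tcp_header = struct.pack("!HHIIHHHH", 52000, 80, 0, 0, 5 << 12, 0, 0, 0)

# A filter only needs the first four bytes to learn both port numbers.
src_port, dst_port = struct.unpack("!HH", tcp_header[:4])
assert (src_port, dst_port) == (52000, 80)
```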
Moreover, many standard routers also have packet filtering access control functions. The question
states that “Company M uses routers with packet filtering functionality,” so routers A through D are
also capable of packet filtering access control.
Application gateway firewalls include the servers generally known as proxy servers. For example,
Web proxy servers, which are often used in companies, are located between the company’s client PCs
and Web servers on the Internet, providing security by preventing direct connections between the two.
Specifically, communications from client PCs inside the company are received by the Web proxy
server, and then the Web proxy server resends them to Web servers on the Internet.

[Subquestion 1]
• Blank A: The zone between the two firewalls is called the “DMZ” or “DeMilitarized Zone.”

• Blanks B, C: Packet filtering firewalls monitor the aforementioned “IP addresses” and “port numbers”
contained in packets. (B and C can be in either order)

• Blank D: As mentioned earlier, packet filtering functions are offered not only by firewalls, but also
by routers, so routers A through D are also capable of packet filtering access control. The router
directly connected to the partner company LAN is router D, so it is reasonable to conclude that
router D provides access control for the partner company LAN. Therefore, the answer is “router
D”.

[Subquestion 2]
This question concerns packet filtering settings for access control. Access control to prevent
unauthorized access requires configurations in line with an organization’s information security policy
(rules). This question states, in [Corporate LAN access control], that “Employee LAN1 and Employee
LAN2 can be used for Internet Web access, to send and receive e-mails, and for access to the entire
corporate LAN. Partner Company LAN can only be used for Internet Web access and the groupware
server access,” so these rules must be complied with. As Table 1 “Company M’s public servers”
shows, there are rules which apply to the public servers. Answers to this type of packet filter
configuration question must always comply with the security related rules provided in the question
text. The key points to note in this question are the last lines in the packet filter configurations in
Tables 3 through 5. They deny traffic to all (“any”) destination port numbers, and to any combination
of source IP address and destination IP address (“anywhere”). This “general prohibition rule”
essentially prohibits all transmissions. Of course, because this on its own would block all traffic,
additional rules are configured above it to allow only the necessary traffic (this is sometimes called
punching a hole in the firewall). The blanks in the tables, then, must be filled with combinations of IP
addresses and port numbers for which transmissions are allowed.
Table 2 contains port numbers used by individual protocols, such as 80 or 110, so that the question
can be answered without prior knowledge of specific port numbers. SMTP (Simple Mail Transfer
Protocol), a protocol used in e-mail transmission, uses port number 25. POP3 (Post Office Protocol
Ver.3), a protocol used to receive e-mails from an e-mail server inbox, uses port number 110. HTTP
(HyperText Transfer Protocol), a protocol used to access Web servers, uses port number 80. It would
be best to remember these port numbers, used by these commonly used protocols (called well-known
ports). Moreover, the port used by the company groupware, 52000, is the one selected specifically for
this question, and does not need to be remembered.
First, let’s look at Table 3, the packet filter settings for transmission from the Internet to the DMZ
through firewall X.

• Blank G: The port number is 80, which corresponds to HTTP. Table 3 contains configurations for
transmissions from the Internet to the DMZ, which means access to Company M’s Web server, so
the destination IP address is that of the Web server. Note that here a specific IP address must be
entered into the destination IP address column. The Web server’s IP address goes in here, so blank
G is “220.1xx.204.1.”

• Blanks E, F: The e-mail server row in Table 1 says that it “[r]eceives e-mails from outside the
company network.” Therefore, firewall X must permit e-mail transfer from external (Internet side)
e-mail servers to the e-mail server in the DMZ. The protocol used for transferring e-mails between
e-mail servers is SMTP, whose port number is 25. The IP address of the e-mail server is
220.1xx.204.2. Therefore, blank E is “25” and blank F is “220.1xx.204.2.”

Let’s look at Table 4 (firewall Y). The transmission direction is from the corporate LAN towards
the DMZ.

• Blank H: This question states, in [Corporate LAN access control], that it must be possible to access
the Internet Web sites and send and receive e-mail from Employee LAN1 and Employee LAN2.
Packets with destination port number 80 (HTTP), which has already been entered into Table 4, are
for Web access to the external network (via the DMZ to the Internet). Their destination IP
addresses, then, correspond to the entire Internet, so the range of IP addresses cannot be limited.
Therefore, the answer is “anywhere”.

• Blank I: Blanks E and F are the same as in Table 3, so this refers to e-mail transmissions using port
number 25 (SMTP) to the e-mail server (220.1xx.204.2). Here, transmissions from the company
LAN to the e-mail server use two protocols: SMTP, for sending e-mails from client PCs, and another
protocol for client PCs to retrieve (receive) e-mail from specific users’ inboxes on the e-mail server. There is
more than one protocol clients can use to receive e-mails, but in this question, one can see from
Table 2 that POP3, which uses port number 110, is used. Therefore, blank I is “110”.

Table 5 contains router D packet filter settings, and the direction is from Partner Company LAN to
the corporate LAN.

• Blank H: From the description in [Corporate LAN access control], it is clear that from the
Partner Company LAN, only access to Web sites and the groupware server is made possible. The
first row of Table 5 specifies port number 80 (HTTP), and destination IP addresses 192.168.10.1
to 192.168.10.254 – in other words, Employee LAN1. Since the setting is to prohibit traffic, this
rule prohibits Web access from Partner Company LAN to Employee LAN1. The second row,
likewise, prohibits Web access from Partner Company LAN to Employee LAN2. The third row
also specifies port 80, but this rule is set to permit traffic. Web access to the Internet is permitted
from Partner Company LAN, so the destination is set to “anywhere,” encompassing the entire
Internet.

• Blanks J, K: The fourth row allows traffic to pass, enabling access from Partner Company LAN to
the groupware server (192.168.10.100), so blank J should be the company groupware port
number, “52000,” and blank K should be “192.168.10.100.”
Moreover, the third row allows port 80 traffic to any destination IP address (“anywhere”),
which would also allow HTTP traffic to Employee LAN1 and Employee LAN2. The rules in rows
1 and 2 are there to prevent this (as explained in the question, the higher a rule is in the list, the
higher its priority).
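The first-match priority described above (rows 1 and 2 overriding the broader row 3) can be sketched as a rule list evaluated top to bottom. The Employee LAN1 range follows Table 5; the Employee LAN2 range (192.168.20.0/24) is an assumption for illustration, as the question text shown here does not give it:

```python
import ipaddress

# (action, destination network or "anywhere", destination port or "any"),
# evaluated top to bottom; the first matching rule decides.
RULES = [
    ("deny",   "192.168.10.0/24",   80),     # no Web access to Employee LAN1
    ("deny",   "192.168.20.0/24",   80),     # no Web access to Employee LAN2
    ("permit", "anywhere",          80),     # Web access to the Internet
    ("permit", "192.168.10.100/32", 52000),  # groupware server
    ("deny",   "anywhere",          "any"),  # general prohibition rule
]

def filter_packet(dst_ip: str, dst_port: int) -> str:
    for action, net, port in RULES:
        ip_ok = net == "anywhere" or (
            ipaddress.ip_address(dst_ip) in ipaddress.ip_network(net))
        port_ok = port == "any" or port == dst_port
        if ip_ok and port_ok:
            return action
    return "deny"

assert filter_packet("203.0.113.7", 80) == "permit"        # Internet Web site
assert filter_packet("192.168.10.5", 80) == "deny"         # Employee LAN1 Web
assert filter_packet("192.168.10.100", 52000) == "permit"  # groupware server
assert filter_packet("192.168.10.100", 22) == "deny"       # everything else
```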

[Subquestion 3]
This question involves two types of packet filtering: static packet filtering and dynamic packet
filtering. In static packet filtering, packet filter settings remain constant, and, as stated in the question,
response packets sent by Internet servers in response to access request packets must always be
permitted. However, this results in there being a constant hole in the firewall for response packets,
which carries the risk (vulnerability) of being taken advantage of by a malicious third party to gain
unauthorized access. Consider the situation where a user accesses a Web server on the Internet from
inside the company. When the Web server receives an access request packet (destination port 80) from
the client in the company network, it sends a response packet. The source port number and destination
port number in the response packet are the reverse of those in the access request packet, so the source
port is now 80.
When always allowing response packets in static packet filtering, if an attacker trying to gain
unauthorized access to the company network sends packets which use this port number to spoof
response packets to access request packets sent to Internet servers, these packets will not be filtered,
and will be allowed to pass through, which presents a danger. Packets sent in response to access
request packets sent to the Web server (port 80) have 80 as their source port number, and may have
any destination port number, making it impossible to identify spoofed packets via their port numbers.
This means that the packets described in c) might be allowed to pass (the correct answer is c)).
With dynamic packet filtering, response packets are not always allowed (holes are closed). Instead,
response packets which correspond to access request packets sent from inside the network are allowed
to pass on a limited basis (a hole is opened dynamically when necessary), providing a higher level of
security than static packet filtering.
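Dynamic packet filtering can be sketched as a table of outbound connections: an inbound packet is allowed only if it exactly reverses a recorded outbound request. This is a simplification (real firewalls also track TCP state and expire table entries), and all addresses are hypothetical:

```python
# Connection table: (src_ip, src_port, dst_ip, dst_port) of outbound requests.
outbound = set()

def send_request(src_ip, src_port, dst_ip, dst_port):
    # Record the outbound access request (opening a hole dynamically).
    outbound.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip, src_port, dst_ip, dst_port):
    # Permit only packets that are the exact reverse of a recorded request.
    return (dst_ip, dst_port, src_ip, src_port) in outbound

# An internal client opens a Web connection to an Internet server.
send_request("192.168.10.5", 51234, "203.0.113.7", 80)

# The genuine response passes; a spoofed "response" from source port 80
# sent by a different host does not.
assert allow_inbound("203.0.113.7", 80, "192.168.10.5", 51234)
assert not allow_inbound("198.51.100.9", 80, "192.168.10.5", 51234)
```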

Q13-13 Computer viruses

[Answer]

[Subquestion 1] (a) h (b) a (c) j

[Subquestion 2] (1) A (2) D (3) C (4) B

[Subquestion 3] Number: (2)

Reason: There is a possibility of sender spoofing

(Alternative answer: There is a possibility of increased network load)

[Explanation]
This question concerns viruses and their countermeasures, and can be answered with a basic
knowledge gained through hands-on experience. It would be fair to say that the difficulty level of this
question is fairly low.

[Subquestion 1]
This subquestion requires the terms for the underlined sections to be selected from the answer
group. For this level of terminology, it would be best to acquire a level of knowledge such that it
would be possible to answer free-form questions without an answer group.

• Underlined section (a): This section corresponds with h) “security hole.” As the question
states, applying the most current OS patches, that is, security patches, can be used to counter
these. Conversely, if security patches are not applied, and vulnerabilities (security holes) are
left unattended, the risk of virus infection via these security holes grows higher.

• Underlined section (b): This section corresponds with a) “DDoS.” Before explaining DDoS,
let’s look at a DoS attack. “DoS attack” is short for “Denial of Service attack.” A level of
traffic that exceeds the capacity of a server (or its network) is concentrated and directed at the
server (or network), making it impossible to maintain a normal level of service, ultimately
resulting in the server service stopping.
A DDoS (Distributed DoS) attack is a variation on this, in which viruses or other means are
used to place attack programs on a large number of PCs, which initiate an attack en masse at a
certain date and time.

• Underlined section (c): This section corresponds with j) “back door.” A back door is a
mechanism for logging into PCs or servers without their administrators being aware of it, or
an account created on them which can easily be accessed again at a later date.

[Subquestion 2]
First, let us look at how “OS patch application” and “current virus signature file installation of
antivirus software” relate to [Cause analysis] A through D, and the problems with each.

• A: Regular password updating is practiced in order to limit the amount of damage in the event
that a password is leaked, and to increase the likelihood that no damage will be incurred.
However, the question text states that “there are many employees who have not changed their
passwords for a long period of time,” which, while not good from a security standpoint, does
not relate to patch or virus pattern file application.

• B: Applying patches to operating systems may cause malfunctions in applications which
currently function normally (especially on critical devices such as servers), so in some cases
before patches are applied, the patches are tested to ensure that there will be no malfunctioning
as a result of applying them. The question text states that “these patches have not been
installed on some PCs,” which hints at the possibility of virus infection via security holes.

• C: The question text states that “This software has a function for searching for viruses when
files are written.” This function, called “real-time search,” continually uses OS resources, so
on devices with low processing power, such as slower CPUs or small amounts of memory, the
load placed by real-time search may be excessive. Since “some have disabled the functions,” it
is impossible for them to prevent viral infection.

• D: The question text states that “the process times out on many PCs, and the signature files fail
to be downloaded.” In other words, there were many PCs whose virus signature files had not
been updated, and different PCs had different signature file version numbers (some PCs had
older pattern files, while some had newer pattern files).
Next, let’s look at the traits of viruses Y and Z.

• Y: It does not infect computers with the most current OS patches. It can be removed by
real-time search using the newest virus signature files.

• Z: This is an old virus (and it can be handled even without the newest signature files).
Considering the above, let us look at the causes for (1) through (4) in the subquestion.

(1) This is a security problem, but is not related to patches or virus signature files, so this
corresponds to “A.”
(2) Y cannot be detected by the real-time search, but Z can. The difference between the two is the
recency of the virus pattern files. This corresponds with “D.”
(3) This states that neither virus Y nor Z will be detected, so the issue is whether or not the
antivirus software is in operation. This corresponds with “C.”
(4) Even if the antivirus software is installed, infection by virus Y is possible. In other words, the
issue is whether the newest OS patch has been applied. This corresponds with “B.”

[Subquestion 3]
Let us consider the proposed countermeasures.

(1) This countermeasure is to improve server capabilities, preventing failures when PCs are
downloading virus pattern files. This is a correct countermeasure.
(2) There is a possibility that the virus has falsified the source e-mail address. Because of this,
replying to the e-mail may not necessarily result in the reply reaching the actual sender.
(3) Avoiding opening suspicious e-mail attachments is an effective way to prevent the spreading
of viruses. This is a correct countermeasure.
(4) This countermeasure establishes a system which ensures that OS patches are applied. It is effective against
viruses which target security holes. This is a correct countermeasure.

Therefore, the problematic countermeasure is (2). There are several potential reasons for this,
depending on one’s perspective. All of the below are appropriate answers.

• There is a possibility that the virus has falsified the source e-mail address, so the warning
e-mail may be sent to an unrelated e-mail address.

• Taking advantage of the fact that automatic warning e-mails are sent when viruses are
detected, a large volume of virus e-mails may be sent in order to use up network resources. If
this occurs, the resulting flood of warning e-mails will increase the load on the network, and,
if the source e-mail address has been spoofed, it may spread the damage even further.

Afternoon Exam Section 14 Information Systems Development Answers and Explanations

Section 14 Information Systems Development

Q14-1 Online shopping system design

[Answer]
[Subquestion 1] (A) Customer rank (B) Payment method
[Subquestion 2] (C) multiplicity: 1
(D) multiplicity: 1..5
(E) multiplicity: 0..*
[Subquestion 3] (1) (F) Product management (G) Order
(2) (H) Quantity in stock (I) Expected stock date
(3) (J) Out of stock

[Explanation]
UML is a standardized modeling language widely used for object-oriented design. This question requires
examinees to understand the contents of a business system model depicted in a UML class diagram
and a sequence diagram from the question text and accompanying diagrams. Even with little
knowledge of UML, examinees can answer based on the description in the [Online shopping system
overview] section of the question text in addition to the contents of the class diagram and sequence
diagram.

[Subquestion 1]
This question requires examinees to identify the attributes of each class. The [Online
shopping system overview] section contains a description of members and orders, so by comparing
the contents of the section with the attributes already provided for each class, the attributes to be
entered in the blanks can be determined.

Blank A: In [Online shopping system overview] (4) Member management (ii), it states that
“Customer ranks are set based on the total order amount of the previous year.” All other
customer related attributes are already written in the class diagram, so the answer for blank A
is “Customer rank.”

Blank B: One of two payment methods can be chosen: direct debit (direct bank withdrawal), or
credit card payment. It says in [Online shopping system overview] (3) Payment (iii) that “The
payment method is specified at the time of each order,” so information about customer
payment method must be stored for each order. Therefore, the answer for blank B is “Payment
method.”


[Subquestion 2]
Blank C: The member management class contains the following attributes: total order amount,
annual order amount, and customer rank (blank A). The total order amount and annual order
amount are specific to each customer, so the member class and member management class can
be considered as corresponding on a one-to-one basis. Therefore, the answer for blank C is
“1”.

Blank D: This subquestion asks the multiplicity of order details per single order. There can be no
orders without order details, so the lower limit of multiplicity is “1”. According to [Online
shopping system overview] (2) Orders (v), “Up to five types of products can be purchased per
order,” so the maximum multiplicity of order details class for the order class is “5”. Therefore,
the answer for blank D is “1..5”.

Blank E: According to [Online shopping system overview] (2) Orders (iv), “Any number of
keywords can be assigned, but some products have no assigned keywords.” Since keywords
are sometimes not assigned, the minimum multiplicity is 0, and there is no upper limit.
Therefore, the answer for blank E is “0..*”.
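The three multiplicities can also be expressed as simple range checks. The following sketch is illustrative only; the dictionary keys and function names are assumptions, not part of the exam's notation.

```python
# Multiplicities for blanks C, D, E expressed as (lower, upper) bounds.
# None as the upper bound stands for "*" (unbounded).
MULTIPLICITY = {"C": (1, 1), "D": (1, 5), "E": (0, None)}

def within(blank, count):
    lower, upper = MULTIPLICITY[blank]
    return count >= lower and (upper is None or count <= upper)

# D = 1..5: an order must have between 1 and 5 order detail lines.
assert within("D", 1) and within("D", 5)
assert not within("D", 0) and not within("D", 6)
# E = 0..*: a product may have any number of keywords, including none.
assert within("E", 0) and within("E", 100)
# C = 1: member and member management correspond one-to-one.
assert within("C", 1) and not within("C", 2)
```

Each assertion corresponds to a boundary of the multiplicity ranges discussed above.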


Below is the class diagram with blanks A through E filled in.

[Class diagram (text rendering): Member is associated 1-to-1 with Member management
(blank C = 1), and 1-to-0..* with Order. Order is associated with Order details with
multiplicity 1 to 1..5 (blank D = 1..5); Order details is associated 0..*-to-1 with Product.
Product is associated 1-to-0..* with Product details, whose Keyword attribute gives
blank E = 0..*, and 1-to-1 with both Product management (Product ID, Number in stock)
and the Order class that holds stocking information (Product ID, Number of orders,
Expected stock date).]

Legend: Solid lines between classes indicate that the classes are related. Labels on solid
lines between classes indicate the multiplicity between the classes. When there is a range
of multiplicities, notation such as “r..s” is used. When there is a single multiplicity,
notation such as “r” (a single number) is used.

[Subquestion 3]
The blanks in the sequence diagram are all related to “detail display execution” operation by a
member, so the information acquired in blanks H and I is part of the product detail information.
This is described in [Online shopping system overview] (2) (iii) where it states that in addition to
the product detail information acquired from “Product,” the quantity in stock, and, if out of stock,
the expected stock date must also be acquired. Each blank in the diagram, then, must relate to
acquiring one of those pieces of data. If the sequence in which the information is acquired is
known, then the contents of the blanks can be identified. However, the question text does not
contain any statements relating to the sequence in which the data is acquired, so another hint must
be searched for.
The most prominent description in the diagram is the rectangle which contains blanks I and J.
The contents of the rectangle are executed if the optional condition of blank J is satisfied. This is
the key to filling in these blanks. Of the two pieces (quantity in stock and expected stock date) of
product detail information acquired, the expected stock date is only displayed when the product is
out of stock, so blank I must be “expected stock date.” The entity which contains this information,
“Order,” must then go in blank G. Blank J, the condition for acquiring information, is “Out of
stock.”
“Quantity in stock” goes in blank H, and blank F, which holds this information, is “Product
management.”
These answers are determined by considering the content of the blanks based on the optional
section of the sequence diagram, but the same answers can be reached by looking at what
information is necessary. The expected stock date is displayed when a product is out of stock, so to
determine whether or not an expected stock date is necessary, we must know the quantity in stock.
This also hints that the quantity in stock must be acquired first.
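The acquisition order described above can be sketched as follows; the function name and dictionary keys are assumptions for illustration, not the exam's notation.

```python
# Sketch of the detail-display flow: the quantity in stock is acquired first,
# and the expected stock date only under the opt condition [When out of stock].
def build_product_detail(detail, quantity_in_stock, expected_stock_date=None):
    info = dict(detail)
    info["quantity_in_stock"] = quantity_in_stock          # from Product management (F, H)
    if quantity_in_stock == 0:                             # opt condition (blank J)
        info["expected_stock_date"] = expected_stock_date  # from Order (G, I)
    return info

in_stock = build_product_detail({"name": "novel"}, 3)
assert "expected_stock_date" not in in_stock
out_of_stock = build_product_detail({"name": "novel"}, 0, "2024-07-01")
assert out_of_stock["expected_stock_date"] == "2024-07-01"
```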


Below is the sequence diagram with blanks F through J filled in.

[Sequence diagram (text rendering): the Member interacts with the Product search screen
and Product search processing, which acquire the product list and then the product details
from Product. After product detail acquisition, the quantity in stock (blank H) is acquired
from Product management (blank F). An opt section with the condition [When out of stock]
(blank J) then acquires the expected stock date (blank I) from Order (blank G), and finally
the product details are returned and displayed.]

*1: The section enclosed in the opt (optional) section is executed if the condition shown in the
brackets is satisfied.


Q14-2 Test case creation

[Answer]
[Subquestion 1] a) valid equivalence b) invalid equivalence
c) boundary values (values near boundaries)
[Subquestion 2] d) – N Y – (from left to right)

[Subquestion 3]
[Completed cause-effect graph: the causes ““Available for sale” lamp is lit,” “Purchase
button is pressed,” “Exact change required,” and ““Out of change” lamp is lit” are joined
through blanks (1) to (4), with blank (3) as an “∧” (and) node, to the effects “Product is
placed,” “Indicator is initialized,” “Change is returned,” and ““Available for sale” lamp
is turned off.” The cause “Coin return button is pressed” leads to the effect “Money is
returned.”]

[Subquestion 4] (1) “Sold out” lamp is lit
(2) “Out of change” lamp is lit

[Explanation]
Improving test case quality is a key point in ensuring test process quality. In actual system
development, the downstream process of testing is often placed under significant time constraints
because of delays in upstream processes. In order to perform effective and efficient testing during a
limited testing period, it is important that good test cases be designed.
Design techniques for test cases can be categorized, by what they are based on, into two major
categories: one based on internal specifications (white box testing) and the other based on external
specifications (black box testing).
Logic coverage testing is classified as white box testing. In this form of testing, test cases are
designed based on the coverage of executed instructions and conditional branches.
However, as it says in the question, this question is concerned with black box testing techniques.
This question is composed of subquestions which check examinees’ knowledge regarding the
characteristics of each technique, and subquestions which check examinees’ ability to design test cases
using cause-effect graphs, one of the typical black box testing techniques.


[Subquestion 1]
This question is concerned with the characteristics of both equivalence partitioning and
boundary value analysis.
Equivalence partitioning applies the concept of subsets to the multitude of possible input
values, and thereby divides them into subsets of valid input values (called a valid equivalence
class) and subsets of invalid input values (called an invalid equivalence class). A representative test
case is selected from each subset (called an equivalence class) in order to rationally reduce the
number of test cases. Therefore, blank A is “valid equivalence,” and blank B is “invalid
equivalence.”
Boundary value analysis is an extension of equivalence partitioning in which boundary
values and near-boundary values from equivalence classes are used as test cases. The purpose of
this technique is to design test cases that more effectively detect errors, which frequently occur
near boundary conditions, such as a condition intended to be “fail if under 30” being erroneously
implemented as “fail if 30 or under.” Therefore, blank C is “boundary values” (or “values
near boundaries”).
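As a minimal sketch, both techniques can be illustrated with the “fail if under 30” example above. The function name and the 0 to 100 score range are assumptions for illustration.

```python
# Hypothetical specification: scores 0-100 are valid; "fail if under 30".
def judge(score):
    if not 0 <= score <= 100:
        raise ValueError("invalid equivalence class")  # invalid input subset
    return "fail" if score < 30 else "pass"

# Equivalence partitioning: one representative value per equivalence class.
assert judge(15) == "fail"     # valid class 0-29
assert judge(65) == "pass"     # valid class 30-100

# Boundary value analysis: values at and near the boundary would expose an
# implementation that erroneously reads "fail if 30 or under".
assert judge(29) == "fail"
assert judge(30) == "pass"
```

Note how the mid-range values alone would not distinguish a correct `< 30` check from a buggy `<= 30` check; the boundary pair 29/30 does.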

[Subquestion 2]
This subquestion is concerned with creating decision tables from cause-effect graphs.
Cause-effect graphs are used for representing the mutual relationships between inputs or stimuli
(causes) and their associated outputs (effects), in accordance with specifications. Decision tables
are made from cause-effect graphs in order to identify test cases. Moreover, in cause-effect graphs,
nodes are connected by branches (lines). In equivalence partitioning and boundary value analysis, it
is not possible to clearly indicate combinations of multiple input values, but cause-effect graphs
make this possible. The number of combinations of input values can be rationally decreased with
improved test coverage. While cause-effect graphs offer this advantage, they also have drawbacks;
that is, it is difficult to understand the expressions that use logical operators, and as the number of
causes increases and the relationships between causes grow more complex, graphs also become
more complex. Although textbooks often describe cause-effect graphs as a test case design method,
cause-effect graphs are seldom used in actual development. This question may be the first
experience many examinees have with using cause-effect graphs to design test cases. However,
questions which deal with specific methodologies almost always clearly spell out the notation and
rules of those methodologies, so examinees should remain calm.
In this subquestion, the blanks should be filled in after confirming the notation
described in Fig. 2 “Symbols used in cause-effect graph” and the note field under Table 2
“Decision table of purchase preparation,” keeping in mind that the conditions and actions
shown in Table 2 correspond respectively to the causes and effects shown in Fig. 3 “Cause-effect
graph of ‘purchase preparation’.”


• Leftmost blank: The single cause (connected by a line) which results in Fig. 3 “Indicator
displays amount of money inserted” is “Coins are inserted (true),” and “‘Sold out’ lamp is lit”
is irrelevant. Therefore, the first blank is “–”.

• Second blank from the left: The combination of causes which results in Fig. 3 “ ‘Available for
sale’ lamp is lit” is “Coins are inserted (true)” and “Total amount of money inserted ≥ product
price (true)” and “ ‘Sold out’ lamp is lit (false).” Therefore, the second blank is “N”.
The cause “Coins are inserted (true)” alone results in the effect “Indicator displays amount of
money inserted.” This question does not address it, but note that there is an X in the action
column of the decision table for “Indicator displays amount of money inserted.”

• Third blank from the left: The cause which results in Fig. 3 “ ‘Sold out’ lamp is lit” is “ ‘Sold
out’ lamp is lit (true).” Therefore, the third blank is “Y”.

• Rightmost blank: The single cause which results in Fig. 3 “Coins are directly returned” is
“Unusable or unidentifiable coins are inserted,” and “ ‘Sold out’ lamp is lit” is irrelevant.
Therefore, the last blank is “–”.
In fact, each vertical column of the completed decision table is a test case.
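The idea that each decision-table column is a test case can be sketched in code. The condition-to-action wiring below is reconstructed from the explanation (including the filled blanks “–”, “N”, “Y”, “–”); the function and action names are paraphrases, not the exam's exact table.

```python
# Each column of the decision table becomes one test case against this sketch.
def purchase_preparation(coins_inserted, total_ge_price, sold_out, bad_coin):
    actions = set()
    if coins_inserted:                                      # "sold out" is irrelevant (-)
        actions.add("indicator displays amount inserted")
    if coins_inserted and total_ge_price and not sold_out:  # third condition must be N
        actions.add("'available for sale' lamp is lit")
    if sold_out:                                            # condition must be Y
        actions.add("'sold out' lamp is lit")
    if bad_coin:                                            # "sold out" is irrelevant (-)
        actions.add("coins are directly returned")
    return actions

assert "indicator displays amount inserted" in purchase_preparation(True, False, True, False)
assert "'available for sale' lamp is lit" in purchase_preparation(True, True, False, False)
assert "'available for sale' lamp is lit" not in purchase_preparation(True, True, True, False)
assert purchase_preparation(False, False, False, True) == {"coins are directly returned"}
```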

[Subquestion 3]
This subquestion requires examinees to complete the Fig. 4 cause-effect graph. Blank (3) connects, by
using “∧” (and), all the causes (inputs): (1) ‘ “Available for sale” lamp is lit (true)’ and ‘Purchase
button is pressed (true)’; ‘Exact change required (true)’; and ‘ “Out of change” lamp is lit (true)’.
Causes (inputs) related to this can be determined by looking at the [Purchasing and money return]
section of the system specifications shown in Fig. 1.
The descriptions that support this answer are listed below:

• Line connecting Fig. 4 (3) and effect “Product is placed”: Corresponds with Fig. 1 “(1) ‘Out of
change’ lamp is off”—“product is placed in the dispensing slot.”
• Line connecting Fig. 4 (3) and effect “Indicator is initialized”: Corresponds with Fig. 1 “(1)
‘Out of change’ lamp is off”—“indicator display is reset to 0.”
• Line connecting Fig. 4 (3) and effect “Change is returned”: Corresponds with Fig. 1 “(1) ‘Out
of change’ lamp is off”—“change is sent to the coin return slot.”
• Line connecting Fig. 4 (3) and effect “ ‘Available for sale’ lamp is turned off”: Corresponds
with Fig. 1 “(2) ‘Out of change’ lamp is on”—“all ‘available for sale’ lamps are extinguished.”
Moreover, it is important to remember to use logical operator symbols, such as “∧” (and) or
“∨” (or), when there are two or more causes.
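As a generic illustration of how such “∧”/“∨” nodes combine causes into effects (this is a teaching sketch, not the exact wiring of Fig. 4, whose node names are assumptions):

```python
# Minimal cause-effect graph evaluator: intermediate nodes combine inputs
# with "and"/"or", and each effect reads the value of one node.
def eval_graph(causes, nodes, effects):
    values = dict(causes)
    for name, (op, inputs) in nodes:          # nodes listed in dependency order
        vals = [values[i] for i in inputs]
        values[name] = all(vals) if op == "and" else any(vals)
    return {effect: values[src] for effect, src in effects.items()}

result = eval_graph(
    {"cause1": True, "cause2": True, "cause3": False},
    [("and_node", ("and", ["cause1", "cause2"])),
     ("or_node", ("or", ["and_node", "cause3"]))],
    {"effect A": "and_node", "effect B": "or_node"},
)
assert result == {"effect A": True, "effect B": True}
```

Flipping either input of the “and” node to False turns both effects off, which is exactly the behavior an AND node in a cause-effect graph expresses.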

[Subquestion 4]
This question asks what is lacking from the cause-effect graph shown in Fig. 4; in other words,
what has not been described in the appropriate place in Fig. 1 under the current specifications
of [Purchasing and money return].
It is hard to determine what is missing from where there are no descriptions, but hints that help
answer questions can almost always be found within the question descriptions themselves. This
subquestion can be answered relatively easily if we note that the contents of the Fig. 1
[Assumptions] section have not been directly related to any of the subquestions so far, and thus they
probably contain hints which can lead to the answer to this question.
The descriptions that support the answers are described below:

• Fig. 1 [Assumptions] “When a product is out of stock...the product’s ‘sold out’ lamp is lit”:
Fig. 1 [Purchasing and money return] specifications do not include lighting the “sold out”
lamp when a purchase button is pressed and the last product is placed in the dispensing slot.
Therefore, the answer should be “ ‘Sold out’ lamp is lit.”

• Fig. 1 [Assumptions] “There is an ‘out of change’ lamp...the ‘out of change’ lamp is lit”: Fig.
1 [Purchasing and money return] specifications do not include lighting the “out of change”
lamp when a purchase button is pressed, the product is placed in the dispensing slot, change is
returned, and the amount of remaining change in the machine falls below 100 yen. Therefore,
the answer should be “‘Out of change’ lamp is lit.”
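The two missing post-purchase specifications identified above can be sketched as follows; the function shape is an assumption for illustration, while the 100-yen threshold comes from the question text.

```python
# After a purchase completes: light the "sold out" lamp when the last product
# was dispensed, and the "out of change" lamp when the remaining change in
# the machine falls below 100 yen.
def lamps_after_purchase(remaining_stock, remaining_change_yen):
    lamps = []
    if remaining_stock == 0:
        lamps.append("'sold out' lamp is lit")
    if remaining_change_yen < 100:
        lamps.append("'out of change' lamp is lit")
    return lamps

assert lamps_after_purchase(0, 50) == ["'sold out' lamp is lit",
                                       "'out of change' lamp is lit"]
assert lamps_after_purchase(5, 500) == []
```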

Moreover, the current specifications do not include turning off the “sold out” lamp and the “out
of change” lamp, so some examinees may be tempted to consider them correct. However, these are
not appropriate answers for the following reasons:

• The subquestion asks only about what is lacking from the [Purchasing and money return]
specifications. Turning off the “sold out” lamp, for example, occurs when vending machines
are refilled, and, as such, is unrelated to purchasing and money return.

• There is nothing in the question that describes these points. For example, even if the machine
is out of change, as long as the purchaser doesn’t need change, products can be purchased. The
examinee may assume that the money to buy products will be added to the change pool, and
then the “out of change” lamp can be turned off. However, nowhere in the question does it
state that the vending machine is designed in such a way.

Examinees can produce any number of answers if they add their own assumptions which are
not explicitly stated. However, the exam question requires examinees to provide the answers that
the examiner is expecting. And as mentioned, hints that help in determining the correct answer can
be found in some form within the question itself.


Q14-3 Web-based order entry system

[Answer]

[Subquestion 1] (1) (A) Member authentication (B) Product selection


(C) Order condition input
(D) Order content confirmation and finalization
(E) Product detail information display
(2) (F) f) (G) g) (H) c) (I) b) (J) a)
(K) d) (L) e)
[Subquestion 2] (1) Item to be added to the member master record: Last order date
Item to be added to the product master record: Product registration date
(2) Process: Order content confirmation and finalization
Processing to be added: Update the last order date in the member master
record to reflect the current date

[Explanation]
This question is concerned with a DFD for a Web-based order entry system. It uses a relatively
simple model, composed of five processes and two files. The file layouts are also provided in the
question, and they can serve as clues to answer the question.

[Subquestion 1]
For subsection (1) of this subquestion, process descriptions must be selected from the answer
group. For subsection (2), files and transferred data must be selected. First, let us consider what
processes are required for (1). The [Procedure of product order] section of the question lists five
procedures, so it is easy to deduce that these go in the five blanks, A through E. Let us insert each
process in the appropriate blank, while at the same time considering the data flow addressed in (2).

• Blanks A, F, H, I: The processes progress in order from process (A), so the first process executed,
“member authentication,” is the answer. During member authentication, the “member” of the
data source is authenticated via the login screen. If an authentication error occurs, the error is
conveyed to the member. Therefore, the data flow makes logical sense if (F) is f)
“authentication error information” and (G) is g) “login information.” In order to perform
authentication, member information needs to be fetched from the member master file, so (H) is
c) “member master file” and (I) is b) “member information.” Moreover, the information passed
in (J) to the next process (blank B) will be considered below.

• Blanks B, K, L: The process after (A) “member authentication” is “product selection,” so this is
the answer for (B). There are only two master files, so (L) must be e) “product master file.”
The information received from the product master file is “product information,” so the answer
to (K) is d) “product information.”

• Blanks C, J: “Order condition information” flows from the “member” of the data source, so (C) is
“order condition input.” Member information is fetched from the member master file. In order
to do so, the member ID is needed before this process is started. Therefore, (J) is a) “member
ID.”

• Blank D: “Order finalization direction” flows from the “member” of the data source, so this
process is “order content confirmation and finalization.”

• Blank E: We can determine by a process of elimination that this process is “product detail
information display,” but we can also determine this answer by noting that product codes are
passed by process (B), and product information is passed from the product master file.

[Subquestion 2]
(1) There are two items to consider in the [Additional functional requirements] section. We may
assume that information which applies to both should be added, but reading the question carefully,
only (2) is a requirement for adding an item to the master record.
(1) mentions “products within the same category,” so it may appear that category information
must be added to the master record. However, the note on Fig. 2 “Layout of product master record”
states that “The first two digits of each product code indicate the product category,” so this
information is already contained, and it is not necessary to add a “category information” item.
One of the key phrases in (2) is the “last order date.” Order information is managed by a
separate system, so this information can be known only by registering it in one of the master files.
The last order date varies for each member, so the “last order date” should be added to the member
master record.
The next point to note also relates to (2). Namely, products to be displayed as recommended
products must be products registered after the “last order date” registered in the member master
record. In order to determine the date, some sort of date information must be registered in the
product master record. Considering the purpose of this date information, it is apparent that a
“product registration date” item must be added.

(2) One role of the “last order date” registered in the member master record is to be compared
with the “product registration date” in the product master record in order to determine whether to
display a product as a recommended product. Then, what happens if the same
member places an order again? In the same way, the “last order date” is used. Thus, it is clear that
the “last order date” must be updated each time an order is placed.
Orders are finalized as part of the “order content confirmation and finalization” process, so “updating
the last order date in the member master record to reflect the current date” must be done here.
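The recommended-product rule discussed above can be sketched in a few lines. The field names and the sample catalog are assumptions for illustration; the “first two digits of the product code indicate the category” rule comes from the question.

```python
# Recommend products in the ordered product's category that were registered
# after the member's last order date.
from datetime import date

def recommended(products, last_order_date, ordered_product_code):
    category = ordered_product_code[:2]     # first 2 digits = product category
    return [p for p in products
            if p["code"][:2] == category
            and p["registered"] > last_order_date]

catalog = [
    {"code": "0105", "registered": date(2024, 6, 1)},   # same category, newly registered
    {"code": "0106", "registered": date(2024, 1, 10)},  # same category, registered earlier
    {"code": "0201", "registered": date(2024, 6, 1)},   # different category
]
picks = recommended(catalog, date(2024, 3, 1), "0101")
assert [p["code"] for p in picks] == ["0105"]
```

The filter needs both new items: the “last order date” from the member master record and the “product registration date” from the product master record.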

Afternoon Exam Section 15 Management Related Items

Section 15 Management Related Items

Q15-1 Estimation of development scale

[Answer]
[Subquestion 1] (A) Medium (B) ILF
[Subquestion 2] (C) EI (D) Low (E) High (F) EO
[Subquestion 3] (G) 1 (H) 0 (I) 2 (J) 15 (K) 66
[Subquestion 4] 55

[Explanation]
Using a development plan for a library management system as a subject matter, this question
concerns the estimation of development scale using the Function Point method.
Estimations of development scale were often performed based on the number of programs or the
number of lines of code in programs. However, as represented by object oriented development, Web
based development, etc., development approaches have become more diverse, resulting in an increase
in situations in which traditional methods cannot be applied. The Function Point method is an effective
technique in such cases.
The Function Point method is a technique that focuses on system function volume (function point)
which is not dependent on the programming language used or the development environment, and can
be applied in common to various development approaches. Furthermore, the calculation procedure
is clearly defined and depends little on individuals, and thus it can be regarded as an objective
estimation technique.
Estimation of development scale is a duty of a project manager, and in Project Manager
Examinations, more difficult questions asking about the essence of the Function Point method have
appeared in the past. On the other hand, past Software Design & Development Engineer
Examinations have not directly asked questions about knowledge of the Function Point method,
but have instead built estimation using function points directly into the structure of their questions.
This question is created in such a way that if the conditions necessary for answering can be read
accurately from the question text, then by solving the subquestions one by one, the answers can be
obtained. For your reference, the correspondence between the flow of development scale
estimation using the Function Point method and the subquestions of this question is described
below:

1. The numbers of screens, files, etc. are identified as indicators of the functional volume of the
subject system. More precisely, the system's external inputs, external outputs, external
queries, internal logical files, and external interface files are identified and categorized, and
their complexity is evaluated according to the criteria. → Subquestions 1, 2

323
Afternoon Exam Section 15 Management Related Items

2. Based on the complexity of the functions, the unadjusted function point is calculated.
“Unadjusted” here means that system characteristics have not been taken into account and
represents pure function volume. → Subquestion 3

3. The degree of influence of the system is calculated based on system characteristics, and taking
this into account, the final (adjusted) function point is calculated. This function point should
be regarded as an abstracted function volume, and is an indicator that represents development
scale independent of the programming language used or development environment.
→ Subquestion 4

4. By applying the function point to a regression equation, the specific development scale (lines
of code of the program, number of development man-hours, etc.) is calculated. → This is not
included in this question.

[Subquestion 1]
Data functions are, as indicated in the title of Table 2, ILF (internal logical files) and EIF
(external interface files), and assessment of their complexity is also performed based on Table 2.

• Blank A: According to Table 5, with regard to Book information, the number of record types is 1
(one) and the number of data items is 60. This corresponds with row 1, column 3 of Table 2, thus
the complexity is “Medium.”

• Blank B: Since asset management information is created by “Asset management information
creation” included in the library management system, it is an internal logical file. Therefore, the
function type is “ILF.” Moreover, since update processing etc. of these files is not described in the
figure, the function type may appear to be an external interface file (EIF). However, note that
“Asset management information creation” is also included in the estimation scope as a file
creation function. Furthermore, when updates etc. occur, the number of function points will
increase through separate functions such as update screens.
[Subquestion 2]
Transaction functions are, as indicated in the titles of Tables 3 and 4, EI (external interfaces),
EO (external outputs), and EQ (external queries), and assessment of their complexity is also
performed based on these tables.

• Blank C: “Book information correction,” as with “Book registration” and “Book deletion,” maintains
“Book information,” so its function type is likewise “EI.”

• Blank D: With regard to “Book deletion,” the function type is “EI,” the number of related files is 1
(one) (“Book information”) and the number of data items is 6 (six). This corresponds with row 1,
column 2 of Table 3, thus the complexity is “Low.”


• Blank E: With regard to “Book information query,” the function type is “EQ,” the number of
related files is 3 (three) (“Book information,” “Publisher information,” “Author information”),
and the number of data items is 20. This corresponds with row 2, column 3 of Table 4, thus the
complexity is “High.”

• Blank F: Note (2) of the figure indicates that the “Book list” is not created through simple data
extraction alone, but via processing logic such as aggregation. This matches the
explanation in Table 1 that “Information is provided to users using processing logic other than
data extraction,” thus the function type is “EO.” Moreover, “processing logic other than data
extraction” corresponds specifically to the “calculations or derivations of new data elements”
mentioned in the explanation of EQ.

[Subquestion 3]
The numbers already shown in Table 7 are function points which have been set with regard to
function type and degree of complexity (Low, Medium, High). By multiplying those numbers by
the number of functions, the overall function point of the system can be calculated. The number of
functions is counted by function type and complexity (Low, Medium, High), with regard to the
function list organized in Tables 5 and 6.
Note that the number of record types, the related files, the number of data items, etc. shown in
Tables 5 and 6 are used in assessing the complexity of the functions, and are not relevant here.
The completed Table 7 is shown below.

Function type \ Complexity     Low           Medium          High        Total
External input                1 × 3    +    0 × 4    +     2 × 6    =     15
External output               1 × 4    +    0 × 5    +     1 × 7    =     11
External query                0 × 3    +    0 × 4    +     1 × 6    =      6
Internal logic file           2 × 7    +    2 × 10   +     0 × 15   =     34
External interface file       0 × 5    +    0 × 7    +     0 × 10   =      0
Unadjusted function point                                                 66
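
As a cross-check of the completed table, the unadjusted function point can be recomputed from the counts and weights. This is an illustrative sketch (the variable names are ours, not from the question):

```python
# Weights per (Low, Medium, High) complexity as given in Table 7, and the
# function counts tallied from Tables 5 and 6.
weights = {"EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
           "ILF": (7, 10, 15), "EIF": (5, 7, 10)}
counts = {"EI": (1, 0, 2), "EO": (1, 0, 1), "EQ": (0, 0, 1),
          "ILF": (2, 2, 0), "EIF": (0, 0, 0)}

# Row totals (one per function type) and the unadjusted function point.
row_totals = {t: sum(c * w for c, w in zip(counts[t], weights[t]))
              for t in weights}
ufp = sum(row_totals.values())
print(row_totals)  # {'EI': 15, 'EO': 11, 'EQ': 6, 'ILF': 34, 'EIF': 0}
print(ufp)         # 66 -> unadjusted function point
```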

[Subquestion 4]
The final function point count of this system is calculated by adjusting the unadjusted function
point count in consideration of the system's total degree of influence. It is sufficient to apply the
unadjusted function point calculated in Subquestion 3 (blank K) and the total degree of influence
indicated in the subquestion to the calculation method described in Table 8.

Adjustment factor = 0.01 × total degree of influence + 0.65
                  = 0.01 × 18 + 0.65
                  = 0.83

Function point = unadjusted function point (blank K) × adjustment factor
               = 66 × 0.83
               = 54.78
               ≈ 55 (rounding off at the first decimal place)

For your reference, the total degree of influence is, as the note in Table 8 mentions, the total of
the evaluations of 14 general system characteristics, each rated from 0 to 5. The minimum value is
14 × 0 = 0, the maximum value is 14 × 5 = 70, and the average is 35. By applying this to the
formula that calculates the adjustment factor, one finds that the minimum value of the adjustment
factor is 0.65, the maximum is 1.35, and the average is 1.0. In other words, in the formula to
calculate the function point, the unadjusted function point is adjusted within a range of ±35%,
taking system characteristics into account.
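
The adjustment calculation from Table 8 can be reproduced as follows. This is a sketch; the function name `adjusted_function_point` is ours, not from the question:

```python
def adjusted_function_point(ufp, total_degree_of_influence):
    # Table 8: adjustment factor = 0.01 x total degree of influence + 0.65
    factor = 0.01 * total_degree_of_influence + 0.65
    return ufp * factor, factor

fp, factor = adjusted_function_point(66, 18)
print(round(factor, 2))  # 0.83
print(round(fp))         # 55 (66 x 0.83 = 54.78, rounded)

# The 14 characteristics rated 0..5 bound the factor between 0.65 and 1.35,
# i.e. an adjustment range of +/-35% around the unadjusted value.
```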

Q15-2 Quality management

[Answer]
[Subquestion 1] (A) e) (B) b)
[Subquestion 2] (C) subsystem B (D) subsystem D (E) subsystem C
[Subquestion 3] Because the number of non-test item errors was high while the number of test
item errors was low
[Subquestion 4] c), e)

[Explanation]
This question concerns evaluating the progress and quality management situation of each
subsystem's development based on a table of quality management data. Examinees with no business
experience may feel perplexed at first, but the quality data in the table is distinctive, and the
question can be answered relatively easily if approached calmly.
Knowledge of the representative review methods and the points to consider in reviews, which is
asked about in Subquestion 4, is also important.

[Subquestion 1]
With regard to the blanks for the number of test items in the table, this subquestion asks for the
values to be determined using the other data and the question text as hints.

• Blank A: The hint is the part of the question text that states “(1) the number of errors found regarding
subsystem A is within an appropriate range, and therefore it appears that program quality is being
maintained.” From the statement “program quality is being maintained,” it may seem that option
f) (6,466), which has the highest number of test items, is the correct answer. However, if f) were
selected, the current percentage of errors found vs. the total number of test items would be 14%
(910 ÷ 6,466), which would contradict the description in the latter part of (3). If option d) (3,642)
were selected, the current percentage of errors found vs. the total number of test items would be
approximately 25% (910 ÷ 3,642), which barely meets the target value of 25%. Since the tests of
subsystem A have not yet been completed, and new errors will be found through the remaining
tests, this can be considered as effectively exceeding the target value, and thus inappropriate
(options a) to c), of course, exceed 25%).
Therefore, option e) (4,150) is the correct candidate. In this case, the percentage of errors
concerning the test items is approximately 22% (910 ÷ 4,150), and the percentage of errors for
non-test items is approximately 1.7% (69 ÷ 4,150), both of which are below the target values, so
this option can be confirmed as appropriate. Note that two target values are established: the
percentage of errors concerning test items (25%) and the percentage of errors for non-test items
(2%). Naturally, subsystem A must have percentages below both target values. However, since the
description in (3) is being used as the hint, we tentatively narrow down the answer based on the
percentage of errors concerning the test items.
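
The elimination of options d) and f) and the selection of e) can be verified numerically. This is an illustrative sketch using only the option values quoted in the explanation:

```python
# Error rates for subsystem A under each remaining candidate for blank A.
# Targets: under 25% for test-item errors, under 2% for non-test-item errors.
results = {}
for option, n_items in [("d", 3642), ("e", 4150), ("f", 6466)]:
    pct_test = 100 * 910 / n_items   # 910 errors found concerning test items
    pct_non = 100 * 69 / n_items     # 69 errors found for non-test items
    results[option] = (pct_test, pct_non)
    print(f"{option}) {pct_test:.1f}% / {pct_non:.1f}%")
# d) 25.0% / 1.9%  -> right at the 25% target with tests still unfinished
# e) 21.9% / 1.7%  -> below both targets, with headroom for remaining tests
# f) 14.1% / 1.1%  -> contradicts the description in (3)
```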

• Blank B: The hint is the section of the description that states “the percentage of errors found at
present vs. the total number of test items is low at approximately 15%, which is below the target
value.” As mentioned earlier, two target values are established, but since the passage states
“approximately 15%, which is below the target value,” this number can be taken as the
percentage of errors concerning the test items (the target value for non-test items is 2%).
Subsystem A has already been considered, and since its ratio was approximately 22%, it is
ineligible, so the remaining subsystems are considered. First, for subsystems B and C, specific
values of approximately 28% (758 ÷ 2,730) and approximately 12% (398 ÷ 3,420), respectively,
can be calculated. Subsystem B far exceeds the stated percentage of approximately 15%, and thus
does not match the description. On the other hand, subsystem C’s percentage is approximately
12%, which is even lower than the approximately 15% given in (3). However, note that a
conclusion cannot be drawn from this value alone. The values shown in the table are values at a
certain point in time during development, and are related to the progress of the tests. In other
words, a subsystem whose tests have progressed further will have more errors found, and a higher
percentage. Generally speaking, the number of errors found can be considered roughly
proportional to the degree of progress of the tests. Checking the progress of subsystem C from this
perspective, the degree of progress is approximately 50% in terms of both lines of code as well as
number of programs, so its tests have not progressed much. As the tests progress, the number of
errors found can be expected to be double (or more) the number currently found.
On the other hand, for the remaining subsystem D, approximately 80% of the tests (71 ÷ 88,
182 ÷ 230) have been completed, which means its tests have progressed further than subsystem C’s.
If the result is approximately 15%, then this is closer to the content described in (3).
Note that the number of test items for subsystem D is blank B, and the description in (3) can be
considered the basis for determining its value. Thus, based on the 473 errors found concerning the
test items for subsystem D, the number of test items that would result in a percentage of
approximately 15% can be selected from the answer group.
Option b), 3,160 (473 ÷ 3,160 ≈ 0.1497), is the correct answer. Also, the number of
errors concerning non-test items for subsystem D is 129, and if the number of test items were
3,160, the percentage would be approximately 4%, which is double the target value. This can be
considered as evidence that there is a problem with the setup of the test items.
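
The per-subsystem comparison above boils down to three divisions, which can be checked as follows (a sketch using the figures quoted in the explanation; the 3,160 for subsystem D assumes option b) has been chosen for blank B):

```python
# Percentage of errors found vs. total test items for subsystems B, C, D.
data = {"B": (758, 2730), "C": (398, 3420), "D": (473, 3160)}
rates = {s: 100 * errors / items for s, (errors, items) in data.items()}
for s, r in rates.items():
    print(f"subsystem {s}: {r:.1f}%")
# subsystem B: 27.8%  -> far above ~15%, does not match description (3)
# subsystem C: 11.6%  -> low, but only ~50% of its tests are finished
# subsystem D: 15.0%  -> matches the "approximately 15%" in (3)
```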

[Subquestion 2]
This subquestion asks about the characteristics of the progress and quality management of
each subsystem’s development, based on analysis of the table containing quality data by subsystem.

• Blank C: Based on the values in the table, subsystem B can be regarded as having quality issues with
the programs. While only approximately 80% of its tests have been completed, the number of
errors found is 758, already exceeding the target value (2,730 × 0.25 = 682.5 errors). The
number of errors concerning non-test items has also already exceeded the target value.

• Blank D: As discussed in the explanation of Subquestion 1, the remaining subsystem, subsystem D,
is applicable.

• Blank E: The subsystem whose unit test progress is slowest is subsystem C. Only about 50% is
finished. Incidentally, approximately 90% is finished in subsystem A, and approximately 80% in
subsystems B and D.

[Subquestion 3]
As the description states, the characteristic of subsystem D is that the percentage of errors
found at present vs. the total number of test items is low at approximately 15%. However, the
number of errors found for non-test items is exceptionally high at 129 (approximately 4% = 129 ÷
3,160), which suggests that there is a problem with the setup of the test items themselves.

[Subquestion 4]
This subquestion asks about knowledge concerning review methods in quality management.

a) It is said that reviews should be kept within around 2 hours per session in order that
participants can maintain their concentration. Therefore, this option is appropriate.
b) Walkthroughs and inspections are both known as typical review methods. The difference
between the two is that while walkthroughs consist of mutual verification by related parties,
inspections are led by designated facilitators called moderators. Both methods, if
appropriately carried out, can produce the desired review results. Therefore, this option is
appropriate.
c) When managers participate in reviews, it may lead to evaluation of the participants, which will
prevent a free and vigorous exchange of opinions. Therefore, it is a general principle that
managers do not participate in reviews, and thus this option is not appropriate.
d) For both walkthroughs and inspections, review responsibility is the collective responsibility
of all participants. Therefore, this option is appropriate.
e) Concentrating a review into a single day may, at first, appear reasonable, but performing
reviews for long hours may result in decreased concentration by participants, leading to
critical errors being overlooked or considerations being insufficient. Therefore, this option is
not appropriate.

Q15-3 Project plan of software development

[Answer]

[Subquestion 1] (1) d)

(2) The development team and integration test team are separate

[Subquestion 2] (1) Participants can focus on identifying essential defects.

(2) b), c)

[Subquestion 3] (1) The function B development team

(2) The number of identified defects is roughly half the standard value.

<Alternative answer>

The number of identified defects is far below the standard value.

[Explanation]

[Subquestion 1]
This subquestion must be answered by reading [Framework of Company E] in the question text
carefully, and thoroughly understand the characteristics and the expected effects of the framework
in which the development and integration test teams are separated and operated in parallel.
(1) Underline (1) of [Framework of Company E] states that “after establishing the integration test
team, for a while, the efficiency of development teams will drop temporarily,” and this subquestion
asks for the reason.
[Framework of Company E] states that “As the leader of the integration test team, Chief G, who
has sufficient experience concerning tests, is appointed, not the leaders of each development team.
Chief G joins the project five (5) weeks before integration test starts. The chief reads the design
document, interviews each development team, understands the specifications and design, and
proceeds with preparations for integration test, such as creating test plan document, test
specifications, etc.”
Within this description, when considering what will affect the work of each development team,
one notices the new task of cooperating with the interviews held by Chief G. Furthermore, from
“Fig. 2 Development schedule,” it can be understood that the integration test team's preparation
work for integration test is performed in parallel with the production and unit test work of the
development teams, so the development teams can be considered to be affected by having to
cooperate with the interviews held by Chief G, who is preparing for the integration test.
Thus, “d) Because time will be used to respond to interviews held by the team leader of the
integration test team, Chief G” is the correct answer.
Moreover, the question text does not refer to development experience of members as in options
a) and c). Also note that with regard to b) documentation work of the integration test specifications
and e) shortage of development team members, the transfer of the development team members to
the integration test team is after the start of integration test, and preparation work before the start is
performed by Chief G.

(2) Underline (2) of [Framework of Company E] states that “it has the effect of reducing the risk
that the start of integration test will be delayed.” Looking at “Fig. 2 Development schedule,” it can
be noticed that preparation work of integration test by the integration test team is performed in
parallel with the internal design and production and unit test done by the development team.
However, this is already described in the subquestion, and this subquestion asks for another reason.
In the explanation of the development project plan, in addition to bringing in Chief G, there is a
description that states “At the start of integration test, Manager F plans to transfer several members
from each development team to the integration test team, to have them be involved in the execution
of the integration test.” Since this is not a particularly special framework, not much consideration
may have been given. However, the description later states that “often times, . . . performed
integration test while keeping the team structure used during the development phase.”
The planned framework for performing the integration test may not be special in general, but it
appears to be different from the normal development framework at Company E. In other words,
for Company E, this framework is a characteristic of this development project plan.
And, if the development and integration test is done with the same team structure as usual, delay in
the development would keep the members busy, which may directly lead to delay in starting
integration test. On the other hand, since by transferring members and separating the integration
test team from the development teams, integration test can be started without being affected by
delay in the development, an effect as stated in underlined section (2) can be expected. Therefore,
an answer such as “the development team and integration test team are separate” is appropriate.

[Subquestion 2]
This subquestion concerns the [Quality Management Plan] in the question text: the meaning of
the special notes on the quality management indicators, and the rules for performing reviews
appropriately.

(1) This asks what kind of effect the rule “Mistakes, omissions and violation of formatting rule are
not counted as defects,” in the Special notes of “Table 1 Standard values of quality management
indicators concerning internal design in Company E,” would bring to actual reviews.
The description following Table 1 in the question text states the reason for establishing rules to
place special emphasis on activities during the review preparation phase is to “make it possible to
focus on identifying essential defects.” In other words, “Mistakes, omissions and violation of
formatting rule” are indeed defects, but they cannot be considered as essential defects within the
internal design process. If these matters are counted as the same as essential defects, focus will
have to be placed on identifying them as well, resulting in a possibility of reducing the
effectiveness of reviews. In other words, in order to focus only on essential defects, this can be
considered as a rule for not counting “Mistakes, omissions and violation of formatting rule” as
defects, even though they may be pointed out tentatively. Since the subquestion asks for a brief
description, an answer such as “Participants can focus on identifying essential defects” is
appropriate.

(2) Two specific examples of what is meant by the underlined section (4) of [Quality Management
Plan], “rules have been established not to waste review time on trying to understand the design
document,” are to be selected from the answer group. Furthermore, since the description
immediately preceding the relevant section states that “Special emphasis is placed on the activity
during the preparatory stage,” it can be considered that matters to be done before performing
reviews were established as rules.
It is clear that in order to avoid wasting review time on understanding the design document,
reviewers should look over the design document before the review starts and understand its
contents. In order to do so, the review host needs to distribute the design document to reviewers
before the review. Therefore, b) and c) are the correct answers.
Moreover, since design document review is necessary regardless of the degree of complexity, a)
is incorrect. Also, with regard to d), since the amount of time required to understand the design
document varies from person to person, and is not dependent on the number of reviewers, d) is also
incorrect. With regard to e), this is a general rule for reviews, but it is not appropriate as a
specific example of a rule for preventing review time from being wasted on understanding the
design document.

[Subquestion 3]
This subquestion asks about the evaluation of quality management indicators in the [Situation
of internal design] of the question text, and the actions taken in response to the evaluation.

(1) From “Table 2 Actual values of quality management indicators concerning internal design for
each development team (at a point after three (3) weeks),” the tolerance ranges of the quality
management indicators for the function A and function B development teams are calculated, and
the team which deviates from its range is selected as the answer.
As quality management indicators, there are review time and the number of identified defects,
as shown in “Table 1 Standard values of quality management indicators concerning internal
design in Company E.” With regard to review time, the function A development team has a
standard value of 18 hours and the function B development team a standard value of 21 hours, and
since the actual review times exceed these standard values, they are within the tolerance range.
With regard to the number of identified defects, the function A development team has a
standard value of 24, and the function B development team a standard value of 28. From the
number of defects actually identified, the function A development team has identified 26 defects,
which falls within the tolerance range (24 ± 7.2 / KLOC), but the function B development team
has identified 14 defects, which falls outside the tolerance range (outside the range of 28 ± 8.4 /
KLOC). Therefore, the answer is “the function B development team.”

(2) From the result of (1), quality management indicator of the function B development team which
falls outside the tolerance range is the number of identified defects, which is 14, half of the
standard value, 28. Therefore, answers such as “The number of identified defects is roughly half
the standard value” or “The number of identified defects is far below the standard value” are
appropriate.
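
The tolerance judgment can be expressed as a short check. The ±30% width used below is inferred from the figures in the explanation (7.2 = 24 × 0.3 and 8.4 = 28 × 0.3); treat it as an assumption about the tolerance rule, sketched in Python:

```python
def within_tolerance(actual, standard, ratio=0.3):
    # Tolerance width assumed as +/-30% of the standard value,
    # inferred from 7.2 = 24 * 0.3 and 8.4 = 28 * 0.3.
    return abs(actual - standard) <= standard * ratio

print(within_tolerance(26, 24))  # True:  function A team, 26 within 24 +/- 7.2
print(within_tolerance(14, 28))  # False: function B team, 14 outside 28 +/- 8.4
```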

Q15-4 Evaluation of batch job processing time

[Answer]
[Subquestion 1] (A) Job-A → Job-D → Job-H
(B) 180 (minutes)
[Subquestion 2] (C) 60 (minutes) (D) 8 (jobs) (E) 9 (jobs) (F) 210 (minutes)
(D and E are not in order.)

[Explanation]
This question concerns the calculation of processing time for a batch job group composed of
multiple jobs. Processing time must be calculated giving consideration to the constraint of the job
execution order. The processing time for each job is made clear, and there are no constraints other than
the order of execution. In other words, constraints such as wait time between jobs, job switchover
time, system resources, etc. do not need to be considered. In order to calculate overall processing time
for a batch job group with constraints on execution order, a PERT (Program Evaluation and Review
Technique) diagram can be used. Job processing time based on the number of data items processed can
be calculated using simple multiplication. By rewriting a given diagram of a batch job group as a
PERT diagram, the question can be resolved as a question concerning PERT diagrams. The difficulty
of this question is medium.

[Subquestion 1]
First, in order to calculate the critical path and the overall batch processing time of the batch job
group, draw a PERT diagram for the job chart of Z1 shown in Fig. 1. In PERT diagrams, activities
(in the case of this question, jobs) are represented by arrows, and the connection point between
one activity and the activity which follows it is called a node, represented by a circle. The start
and finish of activities are also represented by circles as nodes. Alongside the arrow which
indicates an activity, the required time (the number of hours or days) for the activity is also
listed.

When drawing a PERT diagram, rules such as the following apply.


• Each activity has a start and finish node.
• Nodes other than those at the start point and the finish point have one or more preceding and
following activities.
• No activity must form part of a loop.
• Parallel activities are represented using dummy activities.

Parallel activities are activities as shown in Fig. A, whose nodes are represented by stars. If
these parallel activities are converted directly into a PERT diagram, the diagram would be as shown
in Fig. B, but this representation is incorrect.

[Fig. A Representation in a job group / Fig. B Representation in a PERT diagram (incorrect
example): Job-X and Job-Y drawn in parallel between the same two nodes]

333
Afternoon Exam Section 15 Management Related Items

When representing parallel activities in a PERT diagram, a dummy activity, as shown in Fig. C,
is used. A dummy activity is represented as a dashed arrow, and the required number of hours and
the required number of days for the dummy activity is zero. Note that the direction of the dummy
activity must be decided so that it does not conflict with the dependency relationship with the
subsequent activities.

[Fig. C Correct PERT diagram representation using a dummy activity: Job-X and Job-Y in
parallel, with a dummy activity (dashed arrow) joining their end nodes]

Given these considerations, the job chart of job group Z1 in Fig. 1, represented as a PERT
diagram, is shown in Fig. D. Bold numbers indicate the required time in minutes for each job.

[Fig. D PERT diagram of the job chart of Z1: nodes 1 to 8, with required times Job-A 30,
Job-B 35, Job-C 25, Job-D 85, Job-E 30, Job-F 75, Job-G 60, Job-H 65, Job-I 50 (bold
numbers in the original figure); dummy activities are drawn as dashed arrows]

Let us now solve what to enter in blanks A and B.


Consider the case when batch job group Z1 is finished in the least amount of time. In order for
that to occur, when to start each job, and by when each job must be finished, must be considered.
The combination of jobs which cannot be delayed in order that the job group be finished within the
conceivable minimum time is called a critical path (blank A). The minimum finish time of a job
group (blank B) and the combination of jobs which make up the critical path can be obtained using
a PERT diagram.
A point that needs attention when calculating the required time is that if multiple activities
arrive at a single node, the activity leaving that node cannot start until all activities arriving at
the node have finished (see Fig. E). This requirement is the same even if the arriving activity is a
dummy activity.

[Fig. E Arrival of multiple activities: Job-X and Job-Y both arrive at a node; Job-Z, which
leaves that node, cannot be started unless both Job-X and Job-Y are finished]

Let us specifically find the critical path and minimum required time of batch job group Z1.
(2) to (8) below correspond to the numbers in parentheses in Fig. F.

(1) A three-layer blank is created for each node.

(2) The earliest time that a job leaving a node can be started is called the earliest node time.
Earliest node times are entered into the top layer of the blanks created in (1). Using 0 minutes as
the start time of the first job, times are entered in the job chart, in the order of the job chart, based
on the required time for each job. Consider Job-D as an example. Since Job-D can begin at 30
minutes after the start time when the processing of Job-A has finished, 30 is entered as the earliest
node time for node 2. In the same way, 25 is the earliest node time for node 4.

(3) For node 3 where multiple jobs (including a dummy activity) arrive, since Job-A, Job-B, and
Job-C must all be finished, the earliest node time is set to 35 minutes to wait for the finish of Job-B
which has the longest processing time. Therefore, 35 is entered as the earliest node time for node 3.

(4) In the same way as described in (2) and (3), all earliest node times are entered. For node 7, of
the time of 115 arriving from node 5 via the dummy activity, and the time of 125 (65 + 60)
arriving via Job-G, the greater of the two, 125, is used. Entering all earliest node times results as
shown in the top layers in Fig. F.
The earliest node time entered in the final node, node 8, is 180. This is the shortest finish time
for this job group. Thus, blank B is “180”.

(5) Next, the latest time by which each job arriving at a node can begin and still finish the job
group in the least amount of time is determined. This time is called the latest node time. The latest
node time is calculated by pursuing the PERT diagram in reverse, from the job group finish point
(the end of the PERT diagram) towards the job group start point. Calculated latest node times are
entered into the middle layer of the blanks of each node. For the job group finish point (node 8),
since the latest node time is the job group’s earliest node time, for batch job group Z1, 180 is
entered into the middle layer.

(6) For node 7, since node 8 must be arrived at by 180 minutes at the latest, 50, Job-I’s required
time, is subtracted from 180 resulting in 130, which is entered in the middle layer of the blank as
the latest node time. In the same way, latest node time for node 6 is 70, and latest node time for
node 3 is 40.

(7) Special care must be taken with nodes from which multiple jobs expand. For example, from
node 5, Job-H and a dummy activity expand. When looking at the dummy activity, the latest node
time of node 7 is 130, and since the processing time of the dummy activity is 0, the latest node time
of node 5 would be 130. However, looking at Job-H, since Job-H requires 65 minutes, the latest
node time of node 5 is 115 (180 - 65). In order to satisfy both of these, the value obtained by
subtracting the greater of the required time for Job-H and Job-I from the latest node time of node 8,
180, just needs to be made the latest node time of node 5. Therefore, the latest node time for node 5
is 115. Furthermore, for node 4, it is necessary to note that due to its relationship with activity 3
which is connected via a dummy activity, the latest node time is 40. The result of calculating the
latest node times likewise up to the start point, node 1, is as shown in the middle layers in Fig. F.

(8) Finally, entering the value obtained by subtracting the earliest node time from the latest node
time into the bottom layer of the blanks completes the figure. The values in the bottom layers of
the blanks indicate the float time. For example, for node 6, the float time is 5. This indicates that
even when the subsequent Job-G, for some reason, is delayed 5 minutes, the overall job group will
still be able to finish with the minimum time of 180 minutes without causing a delay.

[Fig. F Float time: the PERT diagram of Fig. D with a three-layer box at each node showing,
from top to bottom, the earliest node time, the latest node time, and the float time.
Node 1: 0/0/0, node 2: 30/30/0, node 3: 35/40/5, node 4: 25/40/15, node 5: 115/115/0,
node 6: 65/70/5, node 7: 125/130/5, node 8: 180/180/0. Large circles represent nodes;
the small numbers in parentheses in the original figure correspond to the numbered steps
(2) to (8) in the explanatory text.]

The combination of jobs on the route that connects nodes with a float time of 0 (zero) is the
critical path. Thus, the answer to blank A is “Job-A → Job-D → Job-H.”
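
The forward and backward passes described in steps (2) to (8) can be sketched in Python. The edge list below is our reconstruction of the Fig. D network from the durations given in the question (dummy activities carry the ordering constraints at nodes 3 and 7):

```python
# PERT analysis of job group Z1: (from_node, to_node, job, minutes).
edges = [
    (1, 2, "Job-A", 30), (1, 3, "Job-B", 35), (1, 4, "Job-C", 25),
    (2, 3, "dummy", 0),  (4, 3, "dummy", 0),  (2, 5, "Job-D", 85),
    (3, 6, "Job-E", 30), (4, 7, "Job-F", 75), (6, 7, "Job-G", 60),
    (5, 7, "dummy", 0),  (5, 8, "Job-H", 65), (7, 8, "Job-I", 50),
]
nodes = sorted({e[0] for e in edges} | {e[1] for e in edges})
finish = max(nodes)

# Forward pass: earliest node times (repeated relaxation converges on a DAG).
earliest = {n: 0 for n in nodes}
for _ in nodes:
    for u, v, name, d in edges:
        earliest[v] = max(earliest[v], earliest[u] + d)

# Backward pass: latest node times, working back from the finish node.
latest = {n: earliest[finish] for n in nodes}
for _ in nodes:
    for u, v, name, d in edges:
        latest[u] = min(latest[u], latest[v] - d)

print(earliest[finish])  # 180 minutes -> blank B
critical = [name for u, v, name, d in edges
            if name != "dummy" and latest[u] == earliest[u]
            and latest[v] == earliest[v] and earliest[u] + d == earliest[v]]
print(" -> ".join(critical))  # Job-A -> Job-D -> Job-H -> blank A
```

The float of each node falls out as `latest[n] - earliest[n]`, matching the bottom layer of Fig. F (e.g. 5 minutes at node 6).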

[Subquestion 2]
• Blank C: In Job-J, the processing time per data item is 0.1 seconds. Since the number of data items
processed by Job-J on a given day within the month is 36,000, the processing time of Job-J is as
follows:
36,000 × 0.1 (sec each) = 3,600 (sec) = 60 (min)

• Blanks D, E, F: According to the job chart of job group Z2, Job-J (Job-JJ) can only start after Job-D
and Job-G are finished. According to the PERT diagram created for Subquestion 1, the earliest
time that Job-D will finish is 115 minutes after the processing of the job group starts. Since the job
group starts at 2:00 am, the earliest time Job-D will finish is 3:55 am. Likewise, the earliest time
that Job-G will finish is 125 minutes after the processing of the job group starts, thus the earliest
time Job-G will finish is 4:05 am. Therefore, the earliest time Job-J (Job-JJ) will start is 4:05 am,
and in order for the entire job group to finish by 5:30 am, Job-J (Job-JJ) must finish within 1 hour
and 25 minutes, that is, 85 minutes. Meanwhile, if the main processing part of Job-JJ is partitioned
into n jobs, the number of data to be processed by each job (Job-JBk) of the main processing part
of Job-JJ is 216,000/n. Since the processing time per item of data is 0.1 seconds, the processing
time for each of Job-JA, Job-JBk, and Job-JC is as shown in the table below.

Job                Processing time                          Remarks
Job-JA + Job-JC    5n (min)
Job-JBk            216,000 / n × 0.1 (sec each) ÷ 60       Division by 60 converts
                   = 360 / n (min)                          seconds to minutes
Job-JJ overall     5n + 360 / n (min)

Therefore, the following inequality is obtained:

5n + 360 / n ≤ 85

Multiplying both sides by n (n > 0) and rearranging:

5n² − 85n + 360 ≤ 0
n² − 17n + 72 ≤ 0
(n − 8)(n − 9) ≤ 0

Therefore, the value of n which satisfies the above falls within the following range:

8 ≤ n ≤ 9

Since n is an integer, the number of job partitions is either 8 or 9. The processing time for
Job-JJ is the same regardless of whether the job is divided into 8 partitions or 9 partitions:

5 × 8 + 360 / 8 = 5 × 9 + 360 / 9 = 85 (min)

The required time before adding Job-JJ (job group Z1) was 180 minutes. Since the start time of
Job-JJ is 125 minutes after the start of job group Z2, the finish time of Job-JJ is 210 minutes (125 +
85) after the start of job group Z2. However, since the finish time of Job-H is 180 minutes after
processing starts, Job-I is 175 minutes after, and Job-F is 100 minutes after, which are all earlier
than the finish time of Job-JJ, the overall finish time of job group Z2 depends on the finish of
Job-JJ, meaning that the processing time of job group Z2 is 210 minutes. Therefore, (D) is 8, (E) is
9, and (F) is 210. (D) and (E) can be reversed. In that case, the critical path is Job-B → Job-E →
Job-G → Job-J.
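
The same conclusion can be reached by brute force instead of factoring the quadratic. This sketch uses only the figures from the explanation (125-minute earliest start for Job-JJ, 85-minute budget until 5:30 am):

```python
# Find partition counts n for which Job-JJ (5n + 360/n minutes) fits into the
# 85 minutes between its earliest start (125 min after 2:00 am, i.e. 4:05 am)
# and the 5:30 am deadline.
feasible = [n for n in range(1, 50) if 5 * n + 360 / n <= 85]
print(feasible)               # [8, 9] -> blanks D and E

# Overall processing time of job group Z2 (blank F):
job_jj_time = 5 * 8 + 360 / 8      # same as 5 * 9 + 360 / 9 = 85 minutes
print(int(125 + job_jj_time))      # 210
```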

Q15-5 Capacity management

[Answer]
[Subquestion 1] (A) d) (B) a) (C) 4
[Subquestion 2] (D) b) (E) e) (F) 12 (G) c) (H) 21

[Explanation]
In Subquestion 1, the number of CPUs is calculated to satisfy capacity planning procedures and
given performance requirements. It is considered that this subquestion can be answered sufficiently
with the basic level of knowledge for answering morning questions. In Subquestion 2, a server
enhancement plan is established so that the increase in the number of transactions can be handled.
Although Subquestion 2 has only five (5) answers, it is difficult to read and properly grasp the
meaning of the large amount of information provided in the question text, and the calculations
involved are somewhat complex. Answering this subquestion is expected to take a considerable
amount of time. The level of difficulty of this question is slightly high.
Capacity planning evaluates server processing capacity and considers enhancement of the server
in response to a forecasted increase in the number of transactions. It can be briefly described as
follows:
“Estimation of necessary system configurations (hardware scale and performance, software
capability, etc.), based on performance requirements required of the system and demand forecast of
the system, in order for the system to stably provide services to the users of the system.”
For example, in Web server capacity planning, items such as the following are considered:

• Server performance needed to provide service


• Storage capacity for storing Web contents

• Number of users
• Number of transactions

Demand forecasting requires attention not only to the change in the number of requests to the
system after a certain amount of time has passed, but also to changes in the rate of increase over
time. Generally speaking, in capacity planning, a system configuration with an allowance in terms
of performance is considered. In capacity planning, it is also important to understand the upper
limits of performance (for example, the number of transactions that can be processed per second),
including the allowance portion.

The general procedure of capacity planning is as shown below.

(1) Collection of load information and determination of performance requirements

(2) Sizing based on performance requirements

(3) Evaluation of sizing results and tuning

Each activity is explained below:


(1) Collection of load information and determination of performance requirements: Based on the
interview results etc. with personnel of the system department and user department, the load
(types and volume of business to be performed) and system performance requirements
(processing time, etc.) required of the system for which capacity planning is being performed
are to be identified. For example, information necessary for “Collection of load information
and determination of performance requirements” may include, for online processing, the
number of processing at peak times and normal times, and average message length, or, for
batch processing, the number of data items, the amount of time available for completing
processing, etc.

(2) Sizing based on performance requirements: Specifications of the required resource (server
performance, storage capacity, etc.) is estimated based on performance requirements, and the
basic system configuration is decided. In order to decide on a system configuration, past cases
of matching business types and scales, industry standard benchmark values such as SPEC,
TPC, etc. can be used as references.

(3) Evaluation of sizing results and tuning: Whether the basic system configuration decided as a
result of sizing can provide sufficient performance cannot be determined unless it is operated
in a live environment. Simulations, prototyping, etc. are utilized to evaluate sizing results in
situations similar to the live environment. Based on these evaluation results, the system
configuration is corrected, and the precision of the system configuration is improved.

Since the question is subject to the condition, stated at the end of the question text, that “other
system resources such as the Web servers, the DB server, the firewall, etc., have sufficient margin,
and there is no need to enhance them,” only enhancement of the AP server needs to be considered.

[Subquestion 1]
• Blank A: As explained in the procedure of capacity planning, the test suites published by SPEC,
TPC, etc. are “benchmarks.” Thus, d) is inserted.

• Blank B: As explained in the procedure of capacity planning, the “sizing” results are evaluated,
the optimal values are determined, and the system configuration is fixed.
Thus, a) is inserted.

• Blank C: A summary of the description of the AP server and the [Prerequisites and performance
requirements of transaction processing on the AP Server] is as follows:

• The CPU utilization is not to exceed 60%.


• CPU processing time necessary to process one (1) transaction: 2.4 milliseconds
• Capable of handling 1,000 transactions / second.

Based on these, the total CPU time needed to process 1,000 transactions is as follows:

2.4 (milliseconds) × 1,000 = 2,400 (milliseconds) = 2.4 (seconds)

The number of transactions, 1,000, specified in the performance requirements is the number of
transactions that the system must process in one (1) second. The total CPU time required for this
processing is 2.4 seconds, which cannot be provided by a single CPU. In other words,
the performance requirement is not satisfied. In the case of this question, as is understood from
the contents of the question (the number of CPUs), when performance requirement is not
satisfied with a single CPU, multiple CPUs are implemented and transactions are distributed to
those CPUs to process them in parallel in order to meet the performance requirement. However,
it is important to note that there is another condition that the CPU utilization must not exceed
60%. It is not sufficient just to install 3 CPUs (equivalent to three (3) seconds of processing
capability) in order to achieve 2.4 seconds worth of CPU performance in one (1) second. In
order to satisfy the conditions, it must become possible to perform processing of 2.4 seconds or
more worth of total CPU time with 60% of the CPU performance that can be achieved with the
implemented CPUs. Since the CPU time of a single CPU for one (1) second is one (1) second,
when the number of CPUs to be implemented is x, x seconds (x CPUs × 1 second) worth of
CPU processing is possible. Since processing worth a total of 2.4 seconds or more of CPU
time must be performed with 60% of this performance, x × 0.6 ≥ 2.4 needs to be satisfied.
Solving this inequality yields x ≥ 4 (= 2.4 ÷ 0.6). Since the minimum necessary number of
CPUs is asked, the answer is 4, and “4” is inserted.
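As a sketch, the blank C sizing can be reproduced as follows (Python for illustration only; the 2.4 ms per transaction, 1,000 transactions/second, and the 60% utilization cap are the figures given above):

```python
import math

ms_per_tx = 2.4     # CPU time per transaction (ms)
tps = 1000          # required transactions per second
max_util = 0.6      # CPU utilization must not exceed 60%

# Total CPU seconds consumed per wall-clock second.
cpu_seconds = ms_per_tx * tps / 1000          # 2.4 s

# Smallest integer x satisfying x * 0.6 >= 2.4 (rounded first to avoid
# floating-point noise before taking the ceiling).
cpus = math.ceil(round(cpu_seconds / max_util, 9))
print(cpus)   # -> 4
```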

[Subquestion 2]
(1) With the initial configuration, processing of up to 1,000 transactions per second was possible
under the condition of CPU utilization rate of 60%. This means it is possible to process
transactions up to the number forecast for 2 months later. However, the number of transactions
forecast for 3 months later is 1,440 (transactions / second), and in order to process this number
of transactions, the following CPU performance is necessary.

1,440 / 1,000 = 1.44 (times)

(1) Measure by Plan 1


As in Subquestion 1, the number of CPUs necessary to process 1,440 transactions per second
is calculated.

2.4 (ms) × 1,440 = 3,456 (ms) = 3.456 (s)


x × 0.6 ≥ 3.456
x ≥ 3.456 ÷ 0.6
x ≥ 5.76

Rounding up the fractional part, the number of CPUs needed is six (6). Since there were
initially four (4) CPUs, adding two (2) CPUs makes it possible to process 1,440 transactions
per second at a CPU utilization of 60% or less. The current performance ratio achieved by
adding 2 more CPUs is as follows, using the formula from Table 2:

(4 + 2) / 4 = 1.5

The cost required for the measure by Plan 1 is calculated as shown below.

2 (million yen) × 2 = 4 (million yen)

Since it is indicated that the required time for Plan 1 is two (2) weeks, this measure is
sufficiently viable.
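The Plan 1 arithmetic above can be reproduced with the same sizing logic (Python for illustration; the 4 existing CPUs and the price of 2 million yen per additional CPU are as stated above):

```python
import math

# CPU seconds needed per second at 1,440 transactions/s, 2.4 ms each.
cpu_seconds = 2.4 * 1440 / 1000                      # 3.456 s
cpus_needed = math.ceil(round(cpu_seconds / 0.6, 9))
print(cpus_needed)                                    # -> 6

added = cpus_needed - 4          # 4 CPUs are already installed
ratio = (4 + added) / 4          # current performance ratio (Table 2 formula)
cost = 2 * added                 # 2 million yen per additional CPU
print(added, ratio, cost)        # -> 2 1.5 4
```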

(2) Measure by Plan 2


In Plan 2, the current performance ratio is given. Since Plan 2 in Table 2 states that the current
performance ratio is 2.7, even if not applied in combination with Plan 1, Plan 2 alone makes it
possible to increase processing by 1.44 times.
Since the required time for Plan 2 is indicated as 1.5 months, this measure is viable.

(3) Measure by Plan 3


For Plan 3 also, the current performance ratio is given. Since Plan 3 in Table 2 states that the
current performance ratio is 3.5, even if not applied in combination with Plan 1, Plan 3 alone
makes it possible to increase processing by 1.44 times.
However, the required time for the measure by Plan 3 is three (3) months. While Table 1
shows forecasts of the number of transactions, it is unrealistic to assume that the number of
transactions will suddenly jump from 1,000 per second to 1,440 per second exactly at the
three (3) month point. It is normal to consider that the number of transactions will increase
gradually between the two (2) month point and the three (3) month point. While the required
time to implement Plan 3 is three (3) months, during this period the number of transactions
increases gradually and, while not reaching 1,440 transactions per second, will likely exceed
1,000 transactions per second. In other words, between the two (2) month point and the three
(3) month point, CPU utilization will exceed 60%, and thus Plan 3 alone is insufficient to
resolve the bottleneck. Given this perspective, when adopting Plan 3, combining it with Plan 1,
which completes the enhancement within two (2) weeks, is necessary. Since the measure by
Plan 3 is completed three (3) months later, Plan 1 alone must handle the increase in
transactions until then. The cost of Plan 1 to handle the number of transactions up to the 3rd
month is, as has already been calculated, 4 million yen. And since the cost for adopting Plan 3
is 17 million yen, the cost for employing both Plan 1 and Plan 3 is as follows:

4 (million yen) + 17 (million yen) = 21 (million yen)

Meanwhile, the current performance ratio when combining Plan 1 and Plan 3 is as shown
below, from the fact that the current performance ratio of Plan 1 where two (2) CPUs are
added is 1.5, and from the description of the current performance ratio for Plan 3 in Table 2.

1.5 + 2.5 = 4.0

Since the cost as well as the current performance ratio for Plan 1 to Plan 3, and for the
combination of Plan 1 and Plan 3, have become clear, the cost performance “cost / (current
performance ratio –1)” in the question text can be calculated. The results are as shown in the
table below.

                      Plan 1          Plan 2          Plan 3                Combination of
                                                                            Plan 1 and Plan 3
Cost (million yen)    4               12              Required period       21
Current performance   1.5             2.7             is 3 months,          4.0
ratio                                                 therefore cannot be
Cost performance      4 / (1.5 – 1)   12 / (2.7 – 1)  implemented alone     21 / (4.0 – 1)
                      = 8             ≈ 7.059                               = 7
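The cost-performance column can be verified with a few lines (Python for illustration; the formula cost / (current performance ratio − 1) is taken from the question text, and the cost and ratio figures are those derived above):

```python
plans = {
    "Plan 1": (4, 1.5),            # (cost in million yen, current performance ratio)
    "Plan 2": (12, 2.7),
    "Plan 1 + Plan 3": (21, 4.0),  # Plan 3 alone cannot meet the 3-month deadline
}

# Lower cost / (ratio - 1) means better cost performance.
for name, (cost, ratio) in plans.items():
    print(name, round(cost / (ratio - 1), 3))
# Plan 1 8.0
# Plan 2 7.059
# Plan 1 + Plan 3 7.0
```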

The measure with the lowest cost is Plan 1, while the measure that places importance on cost
performance (lowest number) is the combination of Plan 1 and Plan 3. However, since the
options for Subquestion 2 (1) do not include the combination of Plan 1 and Plan 3, it cannot
be selected as the answer. Comparing the cost performance of Plan 1 and Plan 2, the plan with
the better cost performance is Plan 2 (approximately 7.059). The cost in this case is 12 million
yen. Thus, the answer for blank D is b), the answer for blank E is e), and the answer for blank
F is “12”.
Moreover, the reason that the combination of Plan 1 and Plan 3 is not listed as an option is
probably that its advantage only appears when the situation beyond the three (3) month point
is considered. Certainly, while the cost performance of the combination of Plan 1 and Plan 3
is good, for handling only the situation at the three (3) month point, the cost of 21 million yen
is too high.

(2) Since Subquestion 2 (2) has combinations of plans in the answer group, the combination of
Plan 1 and Plan 2 and the combination of Plan 1 and Plan 3 are considered as well.
With the initial configuration, processing of up to 1,000 transactions per second is possible
under the condition of CPU utilization of 60%. The number of transactions forecast for 18
months later is 2,600 (transactions / second). In order to process the transactions 18 months
later, the following performance is necessary.

2,600 / 1,000 = 2.6 (times)

(1) Measure by Plan 1


The number of transactions forecast for 18 months later is 2,600 (transactions / second). As in
Subquestion 1, the number of CPUs necessary to process 2,600 transactions per second is
calculated.

2.4 (ms) × 2,600 = 6,240 (ms) = 6.24 (s)


x × 0.6 ≥ 6.24
x ≥ 10.4    Rounding up the fractional part yields 11

The “Contents” in Table 2 for Plan 1 states, “* Implementation of up to 8 CPUs is possible,
including current CPUs.” Since 11 CPUs exceed this limit, application of Plan 1 alone is not
possible.

(2) Measure by Plan 2


Since Plan 2 in Table 2 states that the current performance ratio is 2.7, Plan 2 alone can
handle the required 2.6-fold increase in processing without combining Plan 1. In other words,
Plan 2 alone can handle the number of transactions 18 months later.
As considered in Subquestion 2 (1), the case of Plan 2 alone is as shown below.
Cost: 12 (million yen)
Current performance ratio: 2.7
Cost performance: ≈ 7.059

(3) Measure combining Plan 1 and Plan 2

With regard to Plan 2, combining Plan 1 will not make a difference in the current performance
ratio (see Table 2 Note (3)), and since the cost for additional CPUs would be added, the cost
performance will be lower. Therefore, it is possible to disregard the measure of combining
Plan 1 and Plan 2.

(4) Measure by Plan 3


As considered in Subquestion 2 (1), the measure by Plan 3 alone cannot handle the
transactions.

(5) Measure combining Plan 1 and Plan 3


As considered in Subquestion 2 (1), the case of the combination of Plan 1, which adds two (2)
CPUs, with Plan 3 is as shown below.
Cost: 21 (million yen)
Current performance ratio: 4.0
Cost performance: 7
The current performance ratio is 4.0, which makes it possible to handle the number of
transactions 18 months later.
A summary of these results is as shown in the table below.

              Plan 1            Plan 2          Combination of      Plan 3            Combination of
                                                Plan 1 and Plan 2                     Plan 1 and Plan 3
Cost          Since this does   12              Since the current   Since the         21
(million yen) not provide the                   performance ratio   required time
Current       required          2.7             is the same as      is 3 months,      4.0
performance   performance,                      Plan 2 alone, but   this plan alone
ratio         this plan alone                   cost is higher,     cannot be
Cost          cannot be         12 / (2.7 – 1)  this is             adopted           21 / (4.0 – 1)
performance   adopted           ≈ 7.059         disregarded                           = 7

As a result of these considerations, the measure that places importance on cost performance is
the combination of Plan 1 and Plan 3. Since the cost of the combination of Plan 1 and Plan 3 is
21 million yen, blank G is (c), and blank H is “21”.
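The 18-month reasoning above can be sketched in a few lines (Python for illustration; the 8-CPU ceiling and the plan figures come from Table 2 as quoted above):

```python
import math

# Plan 1 alone: CPUs needed for 2,600 transactions/s at 2.4 ms each
# under a 60% utilization cap, versus the 8-CPU hardware ceiling.
cpus_needed = math.ceil(round(2.4 * 2600 / 1000 / 0.6, 9))
print(cpus_needed, cpus_needed <= 8)     # -> 11 False  (Plan 1 alone infeasible)

# Remaining feasible candidates: (cost in million yen, performance ratio).
candidates = {"Plan 2": (12, 2.7), "Plan 1 + Plan 3": (21, 4.0)}

# Both exceed the required ratio of 2.6; pick the lowest cost / (ratio - 1).
best = min(candidates, key=lambda k: candidates[k][0] / (candidates[k][1] - 1))
print(best)                               # -> Plan 1 + Plan 3
```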

Q15-6 Release management

[Answer]
[Subquestion 1] (A) c) (B) d) (C) b) (D) a)
[Subquestion 2] (1) (E) server processing start time (F) server processing end time
(E and F are not in order.)
(G) waiting time
(2) The point in time at which event information is sent from the event
information provision server
[Subquestion 3] (1) (H) the event information provision server (I) the ticket selling server
(J) ticketing terminals (I and J are not in order.)
(2) The provision of services using ticketing terminals during business hours
should be given higher priority than investigation of cause
(3) 6:00 am

[Explanation]

[Subquestion 1]
JIS Q 20000 is the Japanese version of ISO/IEC 20000, an international standard for IT service
management. It stipulates the items necessary for IT service providers to construct a mechanism
for stably providing services of the level demanded by customers, and to maintain and improve
that mechanism. A certification system based on this standard exists, and the standard clarifies
guidelines for implementation, including scope, levels, etc., with regard to ITIL, which is a
collection of best practices for IT service management. It is based on BS 15000, a British
standard established for the purpose of assessing and certifying that IT service providers have
built individual processes based on this guideline, and it is in a complementary position with
regard to ITIL.
With regard to blank A, since it is stated in JIS Q 20000 about release management that “it is
desirable for the release management process to be integrated with the configuration management
process and change management process,” the correct answer is “(c) change management.”
Likewise, with regard to blanks B and C, since it is stated that “Plans shall record the release
dates and submitted documents and refer to requests for changes, known errors and problems,” the
correct answer for blank B is “d) requests for changes”, and for blank C is “b) known.”
Furthermore, for blank D, since it is stated that “Assessment of requests for changes for their
impact on the release plans shall be performed,” the correct answer is “a) assessment.”
Moreover, even if these descriptions are not known precisely, the candidates for the correct
answers can be narrowed down to some extent by considering the descriptions immediately
following the blanks as hints, so there is no need to give up on answering. For blank A, since it is
immediately followed by “processes,” the candidates of correct answers can be narrowed down to
“c) change management,” and “f) problem management,” which are both included in the five (5)
processes of ITIL service support. With regard to these two (2) processes, while the purposes of
problem management are the prevention of incidents and elimination of root causes, the purposes
of change management are to make changes resulting from handling in problem management
efficiently and promptly. And applying the results of the changes into a live environment is release
management. Therefore, it is possible to judge that “c) change management” is more appropriate.
Also, with regard to blanks B and C, since blank C is immediately followed by “errors,” the
candidates for the correct answers can be narrowed down to “b) known” or “e) unknown.” As it
is impossible to make changes to “unknown errors” and then release those changes, b) can be
judged to be the appropriate option. Once that is known, the description here can be guessed to
concern the cause of the changes to be released, so blank B can be guessed to be “d) requests
for changes.” Also, with regard to blank D, forecasting and evaluating impact is referred to as
“assessment,” and since there are no other appropriate options, a) is the correct answer.

[Subquestion 2]
(1) With regard to blanks E and F: in the plan thought up by Mr. A as a method for measuring the
response time of the new ticket selling server, measurement is performed based on the difference
between the contents of these two (2) items recorded for each transaction. Also note that Mr. B has
pointed out that consideration of blank G of the ticket selling server has been omitted.
First of all, since it is something that is recorded for each transaction, it could be the processing
results of transactions, log information, etc., but since the description in [Outline of the ticket
selling system] (4) states that the ticket selling terminal, for each transaction, records the ticketing
terminal number, server processing start time, server processing end time, and processing result as
log information, the answer can be assumed to be one of these. And since response performance is
measured by their difference, “server processing start time” and “server processing end time” is
applicable. Since the difference between the processing start time and end time is processing time,
it might seem to be inappropriate as the response time, and it appears that this is what Mr. B
pointed out.
Generally speaking, response time includes the time involved in transferring data, as well as
waiting time, in addition to this processing time. However, the response time being measured here
is for the measurement of the response performance of the new ticket selling server, and for overall
response performance of the ticket selling system, actual measurements are to be taken using the
ticketing terminals. Therefore, while the overall response performance of the ticket selling system
includes processing time of the ticketing terminal, transfer time of the LAN and Internet, as well as
processing time of the ticket selling server, etc., the response performance (time) of the new ticket
server includes only the time between when the new ticket selling server receives a transaction and
when it sends a response, thus it seems transfer times do not need to be considered. In other words,
the waiting time from when the server receives a transaction and up to when the server actually
begins processing, plus the processing time, is the response time in this case.
Therefore, blanks E and F are, as explained earlier, “server processing start time” and “server
processing end time,” and what Mr. B pointed out for blank G, with regard to the response time,
is “waiting time.” Moreover, “overhead” can also be considered as a possible answer for blank
G, but since the majority of overhead is included in the processing time, it is not appropriate.
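The difference between what the log difference measures and the full response time can be illustrated with hypothetical numbers (Python; all timestamp values below are invented for illustration and do not appear in the question):

```python
# Hypothetical timestamps for one transaction, in milliseconds.
received_ms = 0      # request arrives at the new ticket selling server
start_ms    = 120    # server processing start time (recorded in the log)
end_ms      = 380    # server processing end time (recorded in the log)

processing = end_ms - start_ms       # 260 ms: what the log difference measures
waiting    = start_ms - received_ms  # 120 ms: the part Mr. B pointed out is missing
response   = waiting + processing    # 380 ms: the server's response time
print(processing, waiting, response) # -> 260 120 380
```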

(2) This subquestion asks at which point in time the actual measurement of the response time
of ticketing terminals should be performed in order to measure the response performance of the
overall ticket selling system. Because the question asks about a point in time, it can be inferred
that the response time of the ticketing terminals degrades at some specific point in time.
As was mentioned earlier, while response time includes not only processing time of the server,
but also waiting time and transfer time, etc., the network is also used to transfer event information
containing images, etc. And since this event information is sent in a 30-minute cycle during
business hours, if a ticketing terminal is operated while event information is being transferred, the
waiting time and transfer time along with the response to the request may be impacted. Therefore,
unless actual measurement is performed when event information is being transferred, response
delays during event information transfers may become a problem in the live environment after the
release.
Based on these, the answer will be “The point in time at which event information is sent from
the event information provision server” or the like. Moreover, since response performance during
the transfer of event information has also been pointed out in Subquestion 3 (1), [Conducting a
rehearsal], etc., this should have served as a clue.

[Subquestion 3]
(1) Equipment names from the figure are inserted in blanks H to J.
First, while [Review based on results of the rehearsal] (1) includes the phrase “by utilizing IP
network functions . . . for the communications between H and the information
terminals,” what communicates with the information terminal via IP communication is the event
information provision server which sends event information to information terminals. Thus, “the
event information provision server” is inserted in blank H.
Furthermore, since the other devices performing IP communication are the ticket selling
server and the ticketing terminals, blanks I and J are “the ticket selling server” and “ticketing
terminals” (not in order). Moreover, if the priority control class for the communication between
the event information provision server and the information terminals is set lower than the
priority class for the communication between the ticket selling server and the ticketing
terminals, then, since the sending of ticket selling requests and responses is given higher
priority than event information, this matches the description that the response performance of
ticketing terminals would be maintained.

(2) Since the viewpoint is that of the SLA exchanged with the user divisions, looking for a
relevant passage, a description at the beginning of the question text can be found that states “In
order to make it possible to provide services using ticketing terminals during business hours, the
system management division has established an SLA with the user divisions.” In other words,
the SLA viewpoint refers to the ability to issue tickets during business hours. Since the
subquestion also includes the phrase “the actions of the person in charge of the rehearsal,”
looking for a passage in the question text where the behavior of the person in charge of the
rehearsal contradicts the SLA, a description in [Conducting a rehearsal] can be found that states
“The time used for the work was as planned, but since investigating the cause took time, the
planned finish time of the back-out to the current servers was significantly exceeded.” From
these two (2) descriptions, it is evident that, from an SLA viewpoint, priority should be given to
services during business hours that use the ticketing terminals, and that delay in the start of
service due to investigation of causes is a problem. Briefly summarized, the answer would be
“The provision of services using ticketing terminals during business hours should be given
higher priority than investigation of cause” or the like.

(3) As considered in (2), in order to comply with the SLA, higher priority must be given to
providing services via ticketing terminals during business hours than to releases. With regard to
business hours, since it is stated in the first paragraph of the question text that it is from 9:00 to
21:00, regardless of whether the release was successful or not, the ticket selling system must be
functioning normally at 9:00. On the other hand, from the description in [Creation of a release plan]
(6) that states “when backing out, it takes two (2) hours for the back-out work, and one (1) hour for
the post-back-out verification test,” the total required time for backing out is three (3) hours. The
subquestion asks what the latest time is for this judgment. Therefore, 6:00, which is three (3) hours
before 9:00, is the latest time at which the back-out decision can be made.
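The deadline arithmetic can be confirmed with datetime (Python for illustration; the date is arbitrary, while the 9:00 business start and the 2 + 1 hour back-out figures come from the question as quoted above):

```python
from datetime import datetime, timedelta

business_start = datetime(2024, 4, 1, 9, 0)   # services must be available from 9:00
back_out_work  = timedelta(hours=2)
verification   = timedelta(hours=1)

# Latest moment at which the back-out decision can still be made.
latest = business_start - (back_out_work + verification)
print(latest.strftime("%H:%M"))               # -> 06:00
```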

Q15-7 Sales management system

[Answer]
[Subquestion 1] (A) effectively (B) efficiently (C) audit procedures
(D) preliminary investigations
[Subquestion 2] a), e)
[Subquestion 3] • By using the field investigation method, understand the current business, and
confirm the validity of the “current workflow and work explanation.”
• By using the matching and checking method, match the source data against the
“current expense list” in order to confirm its accuracy.
[Subquestion 4] Review customer loyalty analysis function details, and confirm that the
quantification of customer trust and attachment is valid.

[Explanation]
Within a drastically changing business environment, in order for companies to secure
competitive advantage compared to their competitors, it has become essential to utilize IT
effectively and achieve business objectives that are based on business strategy. In system renewals,
the system after the renewal must contribute to the increase in sales and promoting business
efficiency, and contribute to the achievement of business objectives of the company. System
auditors are required to have the audit capabilities to guide companies so that they can make
effective IT investments.
This question tests the understanding of audit items, audit points, and audit procedures for
auditing at the requirements definition stage of a system revision with regard to whether the system
objectives (identification of customer needs, improvement of operational efficiency, and reduction
of operating and maintenance costs) can be achieved.

[Subquestion 1]
The answers are as stated in “IV. Practice standard 1. Making an audit plan” and “2. Audit
Procedures” of the “System Audit Standards.” There is no need to memorize the entire “System
Audit Standards” or “System Management Standards,” but key points must be understood through
daily studies.

[Subquestion 2]
The new sales management system has three (3) objectives, which are “understanding customer
needs,” “improving business efficiency,” and “reducing operating and maintenance costs.” With
regard to Subquestion 2, as is written in the question text, attention must be paid to “understanding
customer needs.” Also, the fact that Mr. C has listed the following three (3) items as audit items
provides a clue for answering:

(1) Requirements definition document


(2) Customer needs list
(3) Customer loyalty analysis function details

Option a), “To confirm the accuracy of identifying best-selling products and shelf warmer
products,” and option e), “To evaluate the timeliness of monthly sales reports by product and
region,” both refer to identifying customer purchasing trends and are directly related to
“understanding customer needs.” Thus, the correct answers are a) and e). The other options are
considered below.

b) To evaluate the profitability of operating and maintenance automation, is, just as indicated,
related to “reducing operating and maintenance costs.”

c) “Complaint handling” alone can be considered as relating to “understanding customer needs,”
but since the option here is “To confirm the validity of complaint handling,” and indicates that
the complaint handling analysis function details included in the deliverables of the
requirements definition stage are the target of the audit, this option can be judged to be related
to “improving business efficiency.” In order to lead to understanding customer needs during
the requirements definition phase, what is needed is not a method for complaint handling, but
analysis, etc. of past complaint handling information.

d) Confirming the validity of purchase cost reduction does not directly relate to customer needs.
Considering that the deliverables of the requirements definition phase include the current
workflow and corresponding work explanations as well as the new workflow and
corresponding work explanations, it is reasonable to think that this relates to “improving
business efficiency” based on the review of business processes.

[Subquestion 3]
As the question text states, audit techniques include the checklist method, the document review
method, the matching and checking method, the field investigation method, the interview method,
the computer-assisted audit method, etc. A system auditor must consider audit objectives and audit
targets, and select the audit techniques to apply.
With regard to “current workflow and work explanation,” its validity must be confirmed. In
order to understand the current business, the field investigation method, in which the auditor can
confirm the current state of audit targets in person, is optimal. The interview method is also
possible, but it is generally used in confirming management policies with the management.
With regard to “current expense list,” its accuracy must be confirmed. Since the matching and
checking method is a method in which various related records, including source data such as slips,
etc. are matched against each other in order to confirm their accuracy, it is an appropriate audit
technique in this case.

[Subquestion 4]
A company’s customer loyalty strategy is a method of securing customers by providing products
and services which match customer needs, in order to improve overall corporate earnings.
Customer loyalty analysis functions are therefore vital for corporate growth, and are thus a key
point.
In “(2) Audit points” of [Setting audit items and audit points], it states that “‘loyalty’ is what
indicates the customer’s level of confidence in, and the level of attachment to, the company’s
products and services. Repeat order rate, etc. are examples.” Therefore, in order to evaluate
customer loyalty, elements appropriate for use in this evaluation must be determined and
quantified. As part of the audit procedures, the “customer loyalty analysis function details” must
be reviewed, and the system auditor must confirm, from a third-party perspective, the validity of
the approach used to quantify them.
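As a rough illustration of such quantification (the metric choice, data, and function below are hypothetical, not taken from the question), the repeat order rate mentioned in the audit points could be computed as:

```python
# Hypothetical sketch of quantifying one loyalty element, the repeat order
# rate given as an example in the audit points. Data are illustrative only.

def repeat_order_rate(orders_per_customer):
    """Fraction of customers who placed two or more orders."""
    counts = list(orders_per_customer.values())
    if not counts:
        return 0.0
    repeaters = sum(1 for n in counts if n >= 2)
    return repeaters / len(counts)

rate = repeat_order_rate({"C1": 5, "C2": 1, "C3": 2, "C4": 1})
# 2 of the 4 customers ordered more than once -> 0.5
```

The auditor’s concern is not the computation itself but whether metrics like this are valid indicators of loyalty.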

Q15-8 Internal control audit of a customer information system

[Answer]
[Subquestion 1] (a) Appropriate rules are developed, in documented form, with regard to the
management of user IDs and passwords
(b) Entry and exit access control is actually being performed, and control
records are being kept
[Subquestion 2] It is acceptable if any of the following is written.
• Have the outsourced company establish an information management framework.
• Have it report regularly on the state of information management.
• Include a clause on the non-disclosure obligation for confidential
information in the contract.
• Have it undergo scheduled audits (either internal or external).
[Subquestion 3] Risks include the leakage of confidential information, and the theft of or
tampering with information, through unauthorized access.
[Subquestion 4] (A) spoofing (B) direct intrusion

[Explanation]
This question concerns an audit of a system which handles customer information. Since the Act
on the Protection of Personal Information came into force in April 2005, many corporations have
already implemented measures. In the case of this bank, there are three (3) systems which handle
customer information: the accounting system, the CRM system, and the net banking system.
Subquestion 1 is considered in light of whether or not the internal control which is the objective of
this system audit is being achieved. While there are three (3) control objectives listed concerning
the net banking system, sufficient measures should be implemented as fundamental security
measures with regard to the management of IDs and passwords in the access management of the
electronic-banking (hereinafter referred to as e-banking) division listed in (a) of Table 2, and the
management of physical access to terminals in the systems division listed in (b) of Table 2.
For Subquestion 2 as well, in order to ensure personal information protection, an important point
is how controls over externally outsourced companies are established. It is recommended to
understand this as knowledge of outsourcing management, which frequently appears in audit
questions.
Subquestion 3 concerns risk management. Questions such as this one, in which risks must be
considered, have also been increasing recently.
Subquestion 4 tests general legal knowledge. Compliance with laws such as the Act on the
Protection of Personal Information, the Act on the Prohibition of Unauthorized Computer Access,
criminal law (computer fraud, etc.), and the Unfair Competition Prevention Act is an important
item in audits, and it is necessary to understand these as preliminary knowledge.

[Subquestion 1]
In light of the objectives of this system audit, the audit items concerning (a) require
confirmation of the development and operation of rules. The existence of rules must be confirmed
not only through hearings but also by checking their contents against actual documents. The
answer is to confirm, by checking documents, that the management rules for user IDs and
passwords have been developed.
Also, with regard to (b), confirmation is needed that appropriate operation is being carried out.
Confirmation of design documents alone is insufficient; it is extremely important to confirm that
identification is being thoroughly performed, and that records of entry to and exit from the
room are being kept.
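As a purely illustrative sketch of what checking such entry/exit records might involve (the log format and user IDs below are hypothetical), an auditor could verify that every recorded entry to the room is paired with a recorded exit:

```python
# Hypothetical sketch: check that server-room entry/exit records are
# complete, i.e. every entry event has a matching later exit event.
# The (person, action) log format is illustrative only.

def find_unpaired_entries(records):
    """Return person IDs still 'inside' (an entry with no matching exit)."""
    inside = set()
    for person, action in records:
        if action == "entry":
            inside.add(person)
        elif action == "exit":
            inside.discard(person)
    return inside

log = [("U01", "entry"), ("U02", "entry"), ("U01", "exit")]
still_inside = find_unpaired_entries(log)
# U02 entered but no exit was recorded -- a gap the auditor should question.
```

A gap like this does not prove a control failure by itself, but it is the kind of inconsistency in the records that the audit should follow up on.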

[Subquestion 2]
When auditing outsourcing management concerning externally outsourced companies, controls
that are different from those for internal frameworks are required. Since the outsourced companies
are separate companies, work is not ordered as in usual internal work orders but is consigned in
the form of contracts, and the following measures can be implemented as control items.

(1) Have the outsourced company establish an information management framework.
(2) Have it report regularly on the state of information management.
(3) Include a clause on the non-disclosure obligation for confidential information in the
contract.
(4) Have it undergo scheduled audits (either internal or external).


Any answer that, like these, gives typical controls related to personal information protection in
outsourcing management is correct.

[Subquestion 3]
If [Audit Results] (2) is left as it is, the potential risks are the “leakage of confidential
information, and theft of or tampering with information, through unauthorized access.”
Note that “risk” is generally used to mean “the potential for losses to occur,” and thus refers to
system stoppages, occurrence of disasters, leakage or theft of information, etc., which lead to
actual losses. Therefore, in this case, “spoofing” and “peeking” are incorrect answers.

[Subquestion 4]
The contents of the passages concerning activities which are prohibited by the Act on the
Prohibition of Unauthorized Computer Access are as shown below.

• Blank A: Intrusion and use of an access-restricted system using another person’s ID and password.
This is referred to as “spoofing.”

• Blank B: Intrusion into an access-restricted system by means of inappropriate information or
modified programs. This is referred to as “direct intrusion.” This term is paired with the term
“indirect intrusion” in (3), and can easily be inferred.

AP Exam Preparation Book Volume 2

First Edition: July 2011
Revised: August 2011

Every precaution has been taken in the preparation of this book. However, the information contained in this book is provided
without any express, statutory, or implied warranties. Neither the author, translators, nor publishers will be held liable for any
damages caused or alleged to be caused either directly or indirectly by this book.

The company names and product names appearing in this book are trademarks or registered trademarks of the respective
companies. Note that the ® and ™ symbols are not used within.

Original Japanese edition published by ITEC Inc. (ISBN: 978-4-87268-805-4)
Copyright © 2010 by ITEC Inc.
Translation rights arranged with ITEC Inc.
Translation copyright © 2011 by Information-technology Promotion Agency, JAPAN

Information-technology Promotion Agency, JAPAN
Center Office 16F, Bunkyo Green Court, 2-28-8, Hon-Komagome, Bunkyo-ku, Tokyo,
113-6591 JAPAN
