Project Management Tools
1 Delivery
a. The network diagram has more importance than the Gantt chart because it more adequately represents the relations of
the tasks and deliverables.
b. Microsoft Project does a poor job of supporting the network diagram.
i. Fix this situation with PERT Chart Pro from CriticalTools.com, a Project® plug-in.
4. The project manager should create plans at as fine a granularity as possible so that the completion of tasks becomes a
binary choice and the percentage completion indicator of the Microsoft Project software actually means something.
6. Hard-schedule all gates when the team agrees to take on the business.
7. All of the consensus management areas are the responsibility of the project manager; they should not simply be laid
out arbitrarily in the schedule.
9. Build slack into the time line from the start of the program and manage it with great care.
1. The supplier presents a service or part concept and the customer selects it.
2. Internal engineering or process design generates the concept and out-sources to the supplier with appropriate
supporting information.
The next section illustrates various methods of evaluating a supplier. We use a combination of in-house methods and
standards. However, while the acquisition function selects the supplier, the project manager should know and assess the
risk involved with each part or service and with each supplier.
In the case of services, presenting requirements in the form of mechanical drawings obviously makes no sense. A
service company may need to create a specification or a statement of work to provide enough information for an
outsourced service provider to prepare a quote.
The evaluation grades the supplier's capabilities. For each category, there may be multiple choices to quantify the
supplier's capability with respect to project requirements. The evaluation team will associate a score with each of these
possibilities, particularly in the case of government contracts. The sum of these scores represents the supplier's
capabilities.
The supplier evaluation does not select the supplier; rather, the scores developed during the acquisition process provide
an ordinal list of supplier capabilities. In the automotive industry, a group consisting of representatives from the Supplier
Quality Assurance (SQA) function, technical expertise from the design staff, and the acquisition function (for example, the
purchasing department) performs the supplier evaluation. Team members should participate in this evaluation in order to
provide the project manager with a preliminary understanding of the strengths and weaknesses of the supplier.
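The scoring scheme described above can be sketched in a few lines. The category names, 0-5 scales, and supplier names below are illustrative assumptions, not values from any standard or from the text:

```python
# Hypothetical category scores agreed on by the evaluation team; the sum
# of the scores represents each supplier's capabilities.
evaluations = {
    "Supplier A": {"quality_system": 4, "on_time_delivery": 3, "management": 5},
    "Supplier B": {"quality_system": 5, "on_time_delivery": 4, "management": 2},
}

# Produce the ordinal list of supplier capabilities (highest total first);
# the scores inform, but do not make, the selection decision.
ranked = sorted(evaluations.items(),
                key=lambda item: sum(item[1].values()),
                reverse=True)

for name, scores in ranked:
    print(name, sum(scores.values()))
```

Note that the output is an ordered list, consistent with the point that the evaluation ranks capabilities rather than selecting the supplier.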
In the case of software acquisition, internal methods of selecting suppliers often do not evaluate the supplier in key
software practices. Instead, the review or critique relates more to the supplier's financial and production constraints. The
choice of software supplier based solely on financial data can be myopic and reflective of insufficient technical
involvement in the acquisition activity.
The following list provides some of the factors for consideration in the supplier evaluation:
Company ownership
Affiliated organizations or parent organizations
Sales turnover
Net income
Management expertise
o Customer satisfaction
o Risk philosophy
Production material
o Material management
o Logistical systems
Organizational structure
Organizational awards
Quality system
o Quality philosophy
o Quality planning
o On-time delivery
o Cost of project
Tools
o Verification tools
EDI capabilities
Supplier reliability
Product development
o Prototype availability
Project management
o Organization
o Project processes
o Human resources
The work breakdown structure (WBS) takes the top-level deliverables of the project and functionally decomposes these
items into a hierarchical representation of the final product. In U.S. Department of Defense (DoD) vernacular, the WBS
provides cost centers for cost and schedule tracking of the project. The team should refer to the lowest element in the
WBS as a work package. The decomposition of tasks needed to produce the project objectives allows for detailed
estimations of project costs.
Additionally, the team can match the work packages against available resources to provide a more complete assessment
of the feasibility of the project. Decomposing cost centers to some atomic level, for example, to work packages with
estimates between eight and eighty hours, usually improves the accuracy of the forecast. What follows is a benefits list for any
WBS when allied with a bill of resources:
3. Aids development of resource assignments and responsibilities (identifies skills and skill acquisition needs)
6. Identifies tasks for support plans such as configuration management, quality assurance, and verification and
validation plans.
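The decomposition-and-estimation idea can be sketched with a nested structure whose leaves are work packages carrying hour estimates; the branch names and figures below are illustrative, not taken from the example WBS in the figure:

```python
# Minimal WBS sketch: dictionaries are decomposition levels, numeric
# leaves are work-package effort estimates in hours.
wbs = {
    "1 Product": {
        "1.1 Hardware": {"1.1.1 Board layout": 40, "1.1.2 Enclosure": 24},
        "1.2 Software": {"1.2.1 Driver": 80, "1.2.2 Application": 60},
    }
}

def rollup(node):
    """Sum leaf work-package hours up through the hierarchy."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # 204 hours for the whole (hypothetical) project
```

Because each leaf stays in the eight-to-eighty-hour band, the rollup gives the detailed cost estimate the text describes, and each branch doubles as a cost center.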
In order to perform effective estimation of the duration of a task, the project manager needs an in-depth understanding of
both the requirements and the required actions. Therefore, the estimates should flow up from those resources that
execute these tasks; that is, the team members and their managers provide their own estimates. The estimates may be
measured by:
Note that we posit work package estimation as a dynamic process designed to produce meaningful results. Having the
project manager dictate the desired schedules to the team while ignoring contributions from team members demotivates
the project team. We see an example WBS in Figure 1.
It may be naive to believe that people assigned to the project work solely on their project tasks. Personal efficiency and
normal interactions consume part of each working day, implying that full utilization is impossible even under ideal
circumstances. If a person works half-time on a deliverable, one can assume it will take at least twice as long to complete
that task. In this case, the team assumes little or no disruption in the transition from the other tasks, a possibly unrealistic
option.
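The half-time arithmetic above is easy to make explicit; the 40-hour task and 8-hour day below are illustrative assumptions:

```python
def calendar_duration(effort_hours, utilization, hours_per_day=8.0):
    """Calendar days needed given the fraction of each day truly available."""
    return effort_hours / (hours_per_day * utilization)

print(calendar_duration(40, 1.0))  # 5.0 days under (unrealistic) full utilization
print(calendar_duration(40, 0.5))  # 10.0 days for a half-time assignment
```

Documenting the utilization value used in each such calculation is exactly the assumption-recording practice recommended below.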
Figure 2 Example of resource breakdown structure.
The project manager would be wise to document utilization assumptions. These assumptions allow for more accurate
predictions and also give visibility to the actual workloads. Keep in mind that the cost and schedule assumptions represent
a model of what the project manager desires.
The project manager should be wary of cases where an individual with a penchant for overwork takes on all tasks and
fails; this is the principal defect of infinite-loading models. Our approach to the management of human resource constraints
appears in Figure 3.
Figure 3 Example of accumulated Human Resource (HR) expense.
We describe project management as a process of progressive elaboration. In the early phases of a project, the entire
team moves into the unknown. They may have nebulous scoping details. As events consume the forecast, the project
manager replaces vague estimation with real data and the remaining forecast improves in quality.
If upstream management interferes with the project by dictating a compressed schedule or a reduced budget, the
likelihood of a successful project diminishes. Unrealistic due dates degrade the quality of the schedule and unrealistic
budgets degrade the value of project costing. Higher-level interference can destroy the sense of ownership in a team by
shrinking the perception of participation and demeaning the contribution of team members.
Additionally, crashing (or reducing) the schedule generally fails to account for the effect of random variation on the project
plan. In retaliation or expectation, some project managers react by padding their estimate; that is, inserting safety lead
time to increase the likelihood of task completion.
Unfortunately, padding produces a distortion in the estimates of both time and cost. An even worse situation occurs when
the upstream managers begin to assume the project managers padded the budgets and routinely call for schedule and
budget attenuation.
Asserting the duration of a non-recurrent task as a single value implies extensive foreknowledge. Describing the task
duration as a range of possibilities reflects the uncertainty of project execution. The program evaluation and review
technique (PERT) uses a network analysis based on events defined within the project and addresses one-off durations; it
allows the project team to express durations as a span of likelihoods. The U.S. DoD classifies estimates as pessimistic,
optimistic, and probable. The team weighs its classifications with the heaviest weight going to the most probable scenario:

Expected duration = (Optimistic + 4 × Most Probable + Pessimistic) / 6

Note that the formula hints at a potentially unjustified normal distribution around the most probable scenario.
The PERT technique provides a framework for simulation. A software tool (@RISK®) exists that provides simulation
capability to Microsoft Project. The PERT estimation technique also provides the project manager with a glimpse of the
uncertainty of the estimates. However, the range of values (Pessimistic - Optimistic) provides a strong indicator of the
certainty of the estimator. The project manager can convert this range into the task variance using the equation below;
the larger the task variance, the more uncertain the estimate:

Variance = ((Pessimistic - Optimistic) / 6)²
Variations in the three PERT estimates imply uncertainty. However, if the project manager assumes the estimate of time
follows a normal distribution, then he can refine or broaden the estimates. Taking the individual estimates to the one, two,
three, or six standard deviations (sigma or σ) spreads the available time and improves the probability that the estimate lies
within the range of dates. See the table below:
1-sigma 68.26%
2-sigma 95.46%
3-sigma 99.73%
6-sigma 99.99+ %
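The three-point arithmetic can be sketched directly. The 4/8/16-hour inputs below are illustrative; the weighting and sigma formulas are the conventional PERT forms:

```python
def pert_estimate(optimistic, probable, pessimistic):
    """Conventional PERT weighted mean and sigma for a single task."""
    expected = (optimistic + 4 * probable + pessimistic) / 6
    sigma = (pessimistic - optimistic) / 6  # wide ranges mean uncertain estimates
    return expected, sigma

# Illustrative task: 4 hours optimistic, 8 most probable, 16 pessimistic.
expected, sigma = pert_estimate(4, 8, 16)
print(round(expected, 2), sigma, sigma ** 2)  # 8.67 2.0 4.0
```

The squared sigma is the task variance; comparing variances across tasks shows at a glance which estimates deserve scrutiny or risk mitigation.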
The following Figure 4 illustrates the effect of variation. For a confidence interval of 99.73 percent, the range of
possibilities varies from 3 hours to 19.7 hours. Estimates with substantial variation should be removed from the critical
path or receive risk mitigation. Critical path dates with high variation represent risky goals. PERT models become
complicated because the software must iterate through permutations of the three levels; the more tasks/deliverables, the
longer it takes for the model to converge.
Figure 4 Duration estimation technique.
The critical path approach suggests that management of slack becomes crucial to the success of a project. The
measurement of slack provides us with a risk indicator. As slack dwindles, the project moves toward collapse.
The critical path approach may focus too much on problems as they arise, and less on preventing potential problems.
Modern project management software can calculate the critical path quickly and represent it graphically. Software that
calculates multiple critical paths treats the project as a metaproject composed of other projects.
The failure to properly connect the network diagram is probably the single most common scheduling failure by project
managers. We started this chapter with some axioms specific to this problem. If the program manager does not connect
the tasks based on dependencies (A must complete before B can start), then the software will inaccurately represent the
critical path (see Figure 5). Alternatively, an independent task has no dependencies and the team can execute it
immediately. If such is not the case, the task is not independent.
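Once the dependencies are connected correctly, finding the project duration is a simple forward pass over the network. The four-task network below is a hypothetical example, not the project in the figures:

```python
from functools import lru_cache

# Hypothetical network: task -> (duration in days, predecessor tasks).
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest finish = own duration plus the latest predecessor finish."""
    duration, predecessors = tasks[name]
    return duration + max((earliest_finish(p) for p in predecessors), default=0)

# Project duration equals the earliest finish of the terminal task;
# here the critical path is A -> B -> D.
print(earliest_finish("D"))  # 12
```

Task C carries two days of slack (it could finish on day 5 but D cannot start before day 8); tasks on the critical path carry none, which is why their variation matters most.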
Figure 6 shows the network diagram for the same pseudoproject we used to show the WBS.
Figure 6 Network diagram.
Dr. Barry Boehm and colleagues have created mathematical models for just this sort of estimation methodology on a
grand scale with a process known as the Constructive Cost Model (COCOMO), and its successor COCOMO II.1 This model is very complex
and cannot be adequately handled within a section of a project management book. However, we provide the list below
(not exhaustive) to get a perspective on the number of variables that impact the estimation process. Each variable has a
number of possibilities or grades. It is no wonder software schedule estimates have accuracy issues.
Product attributes
o Required software reliability
Hardware attributes
o Performance demands
o Memory demands
Personnel attributes
o Level of teamwork
Organization attributes
o Communications
o Process maturity
Project attributes
a. Requirements
i. Stability
ii. Completeness
iii. Clarity
iv. Validity
v. Feasibility
b. Design
i. Functionality
ii. Degree of difficulty
iii. Interfaces to other subsystems
iv. Performance
v. Testability
vi. Hardware constraints
vii. Software
c. Coding and testing
i. Feasibility
ii. Coding
iii. Testing efficiency
iv. Implementation
d. Integration testing
i. Test environment (availability)
ii. Product
iii. System
e. Other Disciplines
i. Maintainability
ii. Reliability
iii. Producibility
iv. Safety
2. Development
a. Development process
i. Formality
ii. Suitability
iii. Process control
iv. Familiarity
v. Product control
b. Development system
i. Capacity
ii. Suitability
iii. Usability
iv. Familiarity
v. Reliability
vi. System support
vii. Deliverability
c. Management process
i. Planning
ii. Project organization
iii. Management experience
iv. Program interfaces
d. Management methods
i. Monitoring
ii. Personnel management
iii. Quality assurance
iv. Configuration management
e. Work environment
i. Quality attitude
ii. Cooperation
iii. Communication
iv. Morale
3. Program constraints
a. Resources
i. Schedule
ii. Human resource
iii. Budget
iv. Facilities
v. Equipment
b. Contract
i. Type of contract (fixed, etc.)
ii. Restrictions
iii. Dependencies
c. Program interfaces
i. Customer
ii. Contractors and subcontractors
iii. Corporate management
iv. Vendors
v. Politics
Risk mitigation is the art of reducing potential effects on the project. Below we show four ways to cope with risk:
Usually, the estimate of the event occurrence has coarse granularity. However, this kind of preliminary quantification
provides managers with enough information to make a decision.
The project manager can estimate multiple risks by multiplying estimates if he assumes independent events. Consider an
example of how this might work. Let's say it becomes necessary to write the specification for the product before a
review with key personnel. To achieve the delivery date, he must have the specification written in a specific period (Risk1)
and have the review (Risk2) within a certain period as well.
In this example, the probability of achieving the objective of having the specification completed and reviewed amounts to
81 percent.
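The 81 percent figure follows if each activity independently has a 90 percent chance of finishing in its period; the 0.9 values below are assumptions consistent with the stated result, not probabilities given in the text:

```python
p_spec = 0.9    # Risk1: probability the specification is written in time (assumed)
p_review = 0.9  # Risk2: probability the review happens in time (assumed)

# Independent events multiply: probability that both finish on schedule.
p_both = p_spec * p_review
print(round(p_both, 2))  # 0.81
```

Note how quickly chained risks erode confidence: each step is individually likely, yet the joint probability is noticeably lower.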
The project manager can use probabilistic tools such as @RISK and Crystal Ball® to model the project/program using a
spreadsheet such as Microsoft Excel® or a project management tool like Microsoft Project. These tools allow the user to
run Monte Carlo simulations of the sequences of events and earned value. If the enterprise has a policy of retaining
historical data of various projects, the project manager can choose the appropriate distributions to represent various
activities in the project (note: not everything follows the so-called "normal distribution"). If he does not know the
distributions or knows them poorly, the project manager can estimate some worst-case scenarios and apply a random
walk approach to the Monte Carlo simulations by modeling with uniform distributions.
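A minimal sketch of that uniform-distribution approach, using only the standard library rather than @RISK or Crystal Ball; the three task bounds are assumed values, not data from any project:

```python
import random

# Three sequential tasks, each with assumed (optimistic, worst-case) day bounds.
tasks = [(5, 12), (3, 9), (8, 20)]

def simulate_mean(trials=10_000, seed=1):
    """Draw each duration uniformly between its bounds and average the totals."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        total += sum(random.uniform(lo, hi) for lo, hi in tasks)
    return total / trials

# The mean total should approach the sum of the interval midpoints,
# 8.5 + 6 + 14 = 28.5 days; a real model would also report percentiles.
print(round(simulate_mean(), 1))
```

When historical data later suggests better-shaped distributions for particular activities, only the sampling line changes; the simulation framework stays the same.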
Figure 8 Simulation.
When there are many variations of the system under design, or when the system under design has to interface or is part
of a system with many variations, simulation can reduce the logistics around obtaining each of these variations for
verification.
There are three types of simulations:2
1. Virtual simulations represent systems both physically and electronically.
2. Constructive simulations represent a system and its employment.
3. Live simulations represent operations with real operators and real equipment.
Virtual simulation is used to develop requirements by getting feedback on the proposed design solution. Virtual
simulations put the human-in-the-loop. The operator's physical interface with the system is duplicated, and the simulated
system is made to perform as if it were the real system. The operator is subjected to an environment that looks, feels, and
behaves like the real thing.2
Constructive simulation is just that, simulating the construction of the proposed solutions. This approach allows quick
design changes to be reviewed for impact. Performance information can be distributed to the rest of the team.
Live simulations require the hardware and software to be present. In these simulations, the situation or ambient
environment is simulated, allowing the system to be checked out under various operational situations. The intent is to put
the system, including its operators, through an operational scenario, where some conditions and environments are
mimicked to provide a realistic operating situation.2
Simulation pitfalls
Simulation and modeling are only as good as the input data. Models must represent the key variables that produce the
appropriate system performance. Additionally, modeling and simulation are specialty knowledge areas, so the skill set is
not always readily available and can be very industry specific. Still, when started early, simulation clarifies concepts and
requirements, making it a valuable tool for producing the product in a timely fashion and at the desired quality.
2.2.2 Verification
Any verification of the product, process, or service will provide some data about these products. The project manager
must understand that the product, process, or service is a prototype that may not represent the final result. However, the
purpose of material and process prototypes lies in the reduction of risk to the production of the product or service.
2.2.3 Validation
Validation further reduces risk by examining the product or service under more realistic conditions and at a further stage
of development. If the embedded team has the software product built, it can model the defect arrival rate with a Rayleigh
model and provide the program manager with a statistical basis for final release.
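The defect-arrival idea can be sketched with the common Putnam-style cumulative Rayleigh form; K (total expected defects) and td (time of peak defect discovery) below are assumed values that would normally be fit to the team's own defect data:

```python
import math

def cumulative_defects(t, K=500, td=8.0):
    """Cumulative defects found by week t under a Rayleigh arrival model:
    D(t) = K * (1 - exp(-t^2 / (2 * td^2)))."""
    return K * (1 - math.exp(-t ** 2 / (2 * td ** 2)))

# By roughly three times the peak week, nearly all expected defects have
# arrived, giving the program manager a statistical basis for release.
for week in (4, 8, 16, 24):
    print(week, round(cumulative_defects(week)))
```

Comparing the curve against actual weekly arrivals shows whether defect discovery is tapering off as the model predicts or whether release is premature.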
3 Cost
To be able to use earned value management (EVM) techniques, the processes and systems in place must at a minimum
have the following characteristics:
3. Quick response from the hour-billing system (low latency between when time is entered and when it becomes visible)
4. Definition of task progress, for example, using 0 percent (not started), 50 percent (started), and 100 percent
(completed) to quantify task disposition
EVM arose from U.S. DoD cost accounting and is not unique to automotive development. Project managers use the
technique to assess the current cost/schedule status of the project. The tool evaluates the project schedule and cost
expenditures against the planned time and cost to determine the status of the project. The system requires detailed
preparatory work, most important of which is the WBS.
Let's assume that the project team has identified the scope, tasks, and estimates for the project. The most common name
for these variables is planned value since it shows expected expenditures for any given time. Other documents refer to
planned value as budgeted cost of work scheduled (BCWS). Once we have the planned value, we can compare it to the
actual cost. Other resources may refer to actual cost as actual cost of work performed (ACWP). The time reporting
systems have rigid constraints. The project manager must ensure that the people doing the work record their time
accurately.
The earned value is the budget at completion (BAC) multiplied by the percentage of completion of the project:
EV = BAC × %Complete
CPI = EV/AC
Example: We plan four weeks to execute a given set of tasks and constrain planned cost to $16,000. After two weeks of
work, we accomplish 25 percent or $4,000 of the task
SPI = EV/PV
SPI = $4,000/$8,000
SPI = 0.5
An SPI below 1.0 indicates that the project is running behind schedule.
CV = EV - AC
Example: A certain set of tasks was budgeted to cost $4,000. When the tasks were accomplished, the money spent was
$6,000.
CV = EV - AC
CV = $4,000 - $6,000
CV = -$2,000
This means that the secured budget for this project is in trouble. There is a shortfall for this set of tasks that may perturb
the remainder of the project.
Schedule variance is much like cost variance in concept; however, in this case the dollar amount represents the specific
amount spent in relation to the project schedule.
SV = EV - PV
Example: A project is budgeted to cost $200,000. It is now at the 20 percent completion mark and has spent $60,000.
The earned value is therefore EV = $200,000 × 0.20 = $40,000, so CPI = $40,000/$60,000 ≈ 0.67 and
EAC = BAC/CPI = $200,000/0.67 ≈ $300,000.
This simple equation provides a back-of-the-envelope check to see if the program is on, over, or under budget. Clearly, the
project is in trouble.
$ETC = EAC - AC
$ETC = $300,000 - $60,000
$ETC = $240,000
Now, the example project will require an additional $240,000 to complete, putting its estimate at completion $100,000
over the original $200,000 budget, if nothing else changes (for example, scope or feature reduction).
5 War Story
1. The program manager needed a contingency plan to handle lost member situations.
2. The subsequent counterproductive arguing added no value to the product or project and had a negative impact
upon team morale.
3. The launch or change process had no control point in this portion of the process (control points generally involve
inspection of work), so the error propagated through the system.
This redundant and costly situation was avoidable by using a design review at the system or metalevel. Metalevel reviews
provide an opportunity for developers of embedded software, services, or manufacturables to reassess their work in light
of a higher-order systems approach. The review team assesses components for cooperative behavior. The cost of such a
review would have been far less than the cost of developing two different components to meet the same functional
requirements, and the saving would have been available earlier.
Footnotes
1
Barry Boehm, Bradford Clark, Ellis Horowitz, Ray Madachy, Richard Selby, and Chris Westland. "An Overview of the
COCOMO 2.0 Software Cost Model." Software Technology Conference, April 1995.
http://sunset.usc.edu/research/COCOMOII/ (accessed February 16, 2008).
2
Defense Acquisition University Press, Systems Engineering Fundamentals (Fort Belvoir, VA: DAU Press, 2001), p. 118.