MSPB Report: Determining An Acceptable Level of Competence For Step Increases
Introduction
The General Schedule (GS) pay system provides for fixed salary increases at regular intervals for
an employee who performs at “an acceptable level of competence.” 1 These are known as within-
grade increases (WGIs). 2 (Appendix A provides an overview of WGIs.) The law and regulations
state that when performance is not at an acceptable level, the agency should stop the WGI from
taking effect—an action commonly called a WGI denial.
WGI denials are rarer than some might expect. But that is one reason to study them. Many of
Congress’s discussions at the time of the Civil Service Reform Act (CSRA) of 1978—the
foundation of current civil service employment laws—centered on employee performance and
incentives. 3 Survey results indicate that more than one in four supervisors believes they have at
least one employee who is not at an acceptable level of competence. 4 If each supervisor had 10
employees, that would mean at least 1 employee in 40 falls below that level, suggesting a WGI
denial rate of at least 1 in 40. 5 Personnel action data reflects a much lower actual rate, just over
1 in 1,000. 6
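A back-of-the-envelope check of these figures, under the illustrative assumption stated above that each supervisor oversees 10 employees and using the denial and decision counts from footnote 6:

\[
\text{implied lower bound: } 0.27 \times \frac{1}{10} = 2.7\% \approx \frac{1}{37},
\qquad
\text{observed rate: } \frac{6{,}000}{4{,}100{,}000} \approx 0.146\%
\]

Even as a lower bound, the implied rate exceeds the observed rate by more than an order of magnitude.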
We do not say that WGIs should or should not be denied more often. It is for agencies to
determine the criteria for what constitutes acceptable competence and how to measure it. The
goal of this research brief is to help Federal agencies evaluate their WGI practices and learn from
the approaches and experiences of other agencies.
MSPB research shows that several factors play a role in WGI practices and decisions. These
include: (1) the agency appraisal system, (2) the nature of the work, and (3) the agency’s
performance management culture and guidance. 7 This research brief will discuss why these
factors are important to the WGI process, the role of the WGI in addressing under-performance,
and some lessons learned for agencies to consider.
1
5 U.S.C. § 5335; 5 C.F.R. § 531.409. There are also within grade increases in the Federal Wage System (5 C.F.R. § 532.417), but the
research for this brief focused on the General Schedule system. Step increases date back to the Classification Acts of 1923 and 1949. The 1923 Act
states that individuals are to begin at the lowest step and the step increases are to “be allowed upon the attainment and maintenance of appropriate”
performance ratings. The 1949 Act set a schedule for such steps, including allowing “longevity” step increases as “a reward for long and faithful
service[.]” See Classification Act of 1949, 63 Stat. 968 (Oct. 28, 1949); Classification Act of 1923, 42 Stat. 1488 (Mar. 4, 1923). As noted, step
increases are awarded faster when an employee is comparatively new to a position and developing proficiency at a relatively high rate. Later, at
higher steps, step increases are three years apart, which is consistent with the 1949 Act’s rate of “longevity” step increases.
2
There are separate provisions for removing employees who fail in a critical performance element.
3
“In appropriate instances, pay increases should be based on quality of performance rather than length of service.” Civil Service Reform
Act of 1978, Findings and Statement of Purpose, P.L. 95-454. (capitalization and punctuation modified).
4
Our 2016 Merit Principles Survey asked supervisors: Do you currently have any employees whose performance you feel is not at an acceptable
level of competence? (Please define “an acceptable level of competence” yourself rather than using anyone else’s definition.) Twenty-seven percent
answered “Yes.”
5
As of June 2020, the ratio of supervisors to non-supervisors (including team leaders) was 1:7.7; with fewer than 10 employees per supervisor, the
implied rate of WGI denials would be even higher. On the other hand, employees at step 10 can neither receive nor be denied a WGI. Also, our analysis and discussion of WGI data is limited to
GS employees, while the survey included employees in the Federal Wage System and other pay systems.
6
Office of Personnel Management, Enterprise Human Resources Integration–Statistical Data Mart. We calculate the WGI denial rate by
dividing the number of WGI denials by the number of WGI decisions (grants and denials combined). For fiscal years 2012 through 2018, the figures
for General Schedule employees were approximately 6,000 and 4.1 million, respectively.
7
While we can discuss the connection between these factors, correlation does not prove causation.
Determining an Acceptable Level of Competence for Step Increases
All agencies and organizations that responded to our questionnaire indicated that they use “fully
successful” as the criterion to determine if a WGI should be granted. That is consistent with
OPM’s interpretation of its regulation that: (1) an employee who has a rating of record below
Level 3 is not at an acceptable level of competence; and (2) an employee who is at Level 3, 4,
or 5 (in an agency with a five-level performance appraisal system) is at an acceptable level of
competence. 9 Many organizations find it difficult, in principle and in practice, to define or use
Level 2 (a summary rating level at which performance is less than fully successful yet not so poor
as to warrant demotion or removal). However, as discussed in this brief, some organizations do
make active use of Level 2 ratings and other organizations may find their practices instructive.
Twenty-nine percent of supervisors stated on the 2016 MPS that they currently supervised at least
one employee who was less than fully successful without outright failing at the job. As
supervisors should know how their subordinates perform, this implies that the Governmentwide
rate of WGI denials should be higher than it actually is.
In some organizations, a low WGI denial rate may mean that some individuals received a WGI
that should have been denied. In others, a low WGI denial rate may simply mean nearly every
employee is performing at an acceptable level of competence. Nevertheless, the Governmentwide
rate of WGI denials is inconsistent with Governmentwide survey results on the incidence of
under-performance.
8
5 C.F.R. § 531.404(a).
9
OPM, Merit System Accountability and Compliance, response to MSPB inquiry about which summary rating levels constitute an acceptable
level of competence, transmitted by email Jan. 12, 2021.
10
U.S. Merit Systems Protection Board, Managing Public Employees in the Public Interest: Employee Perspectives on Merit Principles in
Federal Workplaces (2013), at 13 (reporting 24 percent agreement that the organization addresses poor performers effectively). Our 2016 MPS also
had 24 percent agreement with this item and 32 percent agreement that steps are taken to deal with a poor performer who cannot or will not improve.
The Federal Employee Viewpoint Survey (FEVS) administered by OPM shows similar results for a similar question, with an average of 30 percent of
respondents stating that steps are taken to deal with a poor performer (FEVS 2006-2017).
11
We note that survey data on the nature and incidence of poor performance should be interpreted with particular caution. First, the definition of
poor performance is context-sensitive. Second, survey respondents can and do have different understandings of poor performance. For example, in
the context of an adverse action under 5 C.F.R. part 432, the regulation defines poor performance as failure (unacceptable performance) in a critical
element. In contrast, a supervisor might define it as “performance that needs substantial improvement.” Both are reasonable on their own terms, but
they are distinct.
Appraisal System Rating Level for Employees Less than Successful but Better than
Unacceptable
One factor that may affect WGI denial rates is the appraisal system’s rating pattern. Employees in
a rating system that allows a Level 2 rating of record (less than successful, but better than
unacceptable) are four times more likely to have a WGI denied than those whose system does not
include this level.
Performance appraisal is the process by which agencies evaluate the performance of employees,
with a numeric rating assigned to that assessment. When done properly, it begins with a
supervisor explaining performance expectations and how the supervisor will assess performance
against those expectations. Over time, conversations occur discussing the employee’s
performance in light of those standards. At the end of a set period of time (typically one year), the
performance on individual elements is assessed and a summary rating level is assigned to each
employee. In the Federal Government, this summary rating is recorded as a number, often 1
(lowest) through 5 (highest). 12
Regulations from OPM allow agencies to use one of eight different rating patterns, which are
reproduced in Table 1 below. 13
12
5 C.F.R. § 430.203 (defining appraisal as “the process under which performance is reviewed and evaluated”). See Angelo S. DeNisi and
Kevin R. Murphy, “Performance Appraisal and Performance Management: 100 Years of Progress?” Journal of Applied Psychology (2017), Vol. 102,
No. 3, at 421 (“Performance appraisal refers to a formal process, which occurs infrequently, by which employees are evaluated by some judge
(typically a supervisor) who assesses the employee’s performance along a given set of dimensions, assigns a score to that assessment, and then
usually informs the employee of his or her formal rating. Organizations typically base a variety of decisions concerning the employee partially on
this rating”).
13
5 C.F.R. § 430.208(d).
Table 1: Summary Rating Level Patterns (5 C.F.R. § 430.208(d))
Summary levels range from Level 1 (Unacceptable) through Level 3 (Fully Successful) to Level 5 (Outstanding).
Pattern A: Levels 1, 3
Pattern B: Levels 1, 3, 5
Pattern C: Levels 1, 2, 3
Pattern D: Levels 1, 3, 4
Pattern E: Levels 1, 2, 3, 4
Pattern F: Levels 1, 2, 3, 5
Pattern G: Levels 1, 3, 4, 5
Pattern H: Levels 1, 2, 3, 4, 5
The regulations do not offer a label for Level 2 or Level 4. However, Level 2 falls between
Levels 1 and 3, and agencies often assign it a label such as “needs improvement” or “minimally
successful.” An employee who is at Level 1 and does not improve may be removed. 14 An
employee at Level 3 is successful. Thus, the level most analogous to the employee falling short of
“at an acceptable level of competence” while remaining in the position at full salary (making the
WGI relevant) is Level 2.
This may be why WGI denials are less common when the employing agency’s appraisal system
does not include a summary rating of Level 2. If the employee is at Level 3, that implies they are
at an acceptable level of competence. If the employee is at Level 1, continued employment is
uncertain. 15 Without Level 2, there is no level at which an employee should be denied a WGI, yet
should not be removed.
This is not to imply that ratings at Level 2—where it is available—are common. Only 0.2 percent
of employees under systems with a Level 2 are rated at that level. 16 What is noteworthy is the
relationship between having a Level 2 option in the appraisal system and denying a WGI. As
shown in Chart 1, an employee in a rating appraisal system that includes Level 2 is four times as
likely to be denied a WGI as an employee in a system that does not. This may be a consequence
of the appraisal system allowing an employee to be rated between acceptable and completely
unacceptable. It may also be that agencies choosing to measure performance more precisely
(using more levels for distinguishing performance) have a different cultural attitude towards
performance deficiencies.
14
5 U.S.C. § 4303. For more on adverse actions involving performance, see U.S. Merit Systems Protection Board, Addressing Poor
Performance and the Law (2009) and Remedying Unacceptable Employee Performance in the Federal Civil Service (2019), available at
www.mspb.gov/studies.
15
In fiscal years (FYs) 2012 to 2018, an average of 0.065 percent of employees was rated at Level 1 each year. OPM, EHRI–SDM, GS only
(FYs 2012-2018). Our 2016 MPS asked supervisors who had an employee who failed in one or more critical elements what had happened to the
most recent of those employees. A majority said the individual was no longer with the organization.
16
In FYs 2012 to 2018, an average of 0.132 percent of employees was rated at Level 2 each year. OPM, EHRI–SDM, GS only (FYs
2012-2018).
Chart 1: Percent of WGIs Denied and Inclusion of Level 2 in the Rating System
Level 2 included: 0.207% of WGIs denied. Level 2 excluded: 0.049% of WGIs denied.
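The ratio of the two rates shown in Chart 1 is what underlies the fourfold comparison:

\[
\frac{0.207\%}{0.049\%} \approx 4.2
\]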
However, the use of Level 2 may present other challenges in managing performance, even if it
facilitates the denial of a WGI. First, it is a difficult level of performance to define, both
technically and intuitively. Second, it makes possible a situation that managers, employees, and
members of the public may find intolerable: an employee who is not performing the job
satisfactorily, yet cannot be removed for performance and who remains in the position at a
full salary. In the words of one organization, “It can be a state of limbo. The employee may not
improve significantly [enough] to be rated at level 3 but not decrease in the level of performance
to where formal performance action may be taken.”
The nature of the work also appears to be related to rates of WGI denials. In short, the
occupations that are easier to measure for timeliness and quantity tend to have higher rates of
WGI denials. Chart 2 shows the occupations with WGI denial rates of 0.45 percent (0.5 percent
when rounded) or higher. 17
In many of these occupations, positions are typically centered on case work. That might make it
easier to measure the work being performed. The Department of State made a similar observation,
which may help explain why WGIs were denied to employees in the “Passport and Visa Examining
Series” at approximately ten times the rate for the rest of the Department of State. 18
17
OPM, EHRI–SDM, GS only (FY 2012-2018), for occupations with at least 1,000 WGI decisions only.
18
Employees in the Passport and Visa Examining occupation had an average WGI denial rate of 0.968%. Employees in all other
occupations combined in the State Department had a WGI denial rate of 0.094%. OPM, EHRI–SDM, GS only (FYs 2012-2018).
Chart 2: Occupations with the highest WGI denial rates (partial data): Auditing 0.80%; Hydrology 0.53%.
While case work can influence WGI denial rates, it does not stand alone. As noted in the prior
section, the rating pattern used by an agency matters, too. One agency (“Agency W”)
demonstrates how these two things can interact. This agency has a workforce primarily engaged
in case work—which typically results in higher WGI denial rates. But its rating pattern has no
Level 2—and the absence of a Level 2 is associated with lower WGI denial rates.
The agency stated that “employees are not evaluated under standards that include the option for
management to rate the person between meeting and failing the individual performance element.”
Despite its case work, Agency W denies only 0.114 percent of WGIs. While this is a much higher
denial rate than other agencies using the same rating pattern, it is slightly less than the
Governmentwide average of 0.146 percent. This agency is an example of why neither the type of
work nor rating pattern alone can explain WGI denial rates. Several factors play interacting roles.
One reason that case work often coincides with higher rates of WGI denials may be that case
work lends itself to easier measurement of performance. But there can be a difference between
having the option to measure performance and a culture that takes advantage of that opportunity.
As described below, a performance management culture and supporting guidance may also affect
WGI denial rates.
One Cabinet department component had a WGI denial rate of nearly 3 percent. 19 It has employees
in a case work occupation and uses rating Pattern H, which allows for five different rating levels.
In addition to these factors, the component’s collective bargaining agreements (CBAs) contain
detailed structures for recognizing performance. For example, in one CBA, the percent of salary
granted as a performance award is based upon the precise extent to which one performance goal
is exceeded, with six different award ranges available. There is also a five-page memorandum on
WGIs that, among other things, instructs supervisors on the exact period of performance to be
considered when assessing accomplishments for the purposes of granting or denying WGIs. This
offers consistency across the component and supervisors are not left to make such decisions
subjectively.
The combination of case work, a five-level rating pattern, and a performance measurement culture
with established metrics may be why this component has the highest rate of WGI denials within
the department. Their combination provides a support structure that supervisors can draw upon
when needed.
A different department (“Department A”) with a comparatively high rate of WGI denials also
indicated that the performance measurement support system may play a large role in how often
the WGI denial authority is used. One component in Department A issued a detailed guide for
managers addressing what a performance deficiency means. The component’s representative said
that the “union liked this transparency.” Another component in Department A conveyed their
managers’ need for a process that is clearly defined, repeatable, and used across the organization.
The view expressed was, “if you have to do something on the fly then you will get problems.”
Several components in Department A stressed the importance of documentation.
Waiting until the WGI is due to assess the employee’s performance may leave a supervisor little
or no time to think about all the considerations, such as: the consistency of the quality, quantity,
and timeliness of the employee’s performance; whether that performance is acceptable; whether
the employee is being judged fairly compared to other employees; whether there is evidence to
support those conclusions; and whether there will be institutional support to deny the WGI.
19
This was the highest rate of denials of any organization with more than 10,000 WGI decisions.
The table below compares the responses of agencies (via questionnaires) and supervisors (via the
2016 MPS). Its purpose is not to present precise, directly comparable measures. 20 Instead, its
purpose is to illustrate how similarly agencies and supervisors assessed factors that may
contribute to deficient performance, regardless of whether that deficiency was failure in a critical
element (supervisor view) or performance that is not at an acceptable level of competence
(agency view).
The table shows the average value our respondents assigned to each item. A value of “3”
indicates the item was perceived as being a factor to a great extent, while a value of “0” indicates
it was believed that the item was not a factor to any extent. 21 That is, the higher the number, the
more the factor was seen as contributing to deficient performance.
Table 2: Extent to which various considerations are seen as a factor in deficient performance
Consideration | Agency View | Supervisor View
Those employees are not suited for their particular type of job | 2.04 | 1.69
Those employees are not interested in doing the necessary work to succeed | 1.81 | 2.09
Those employees have difficulty keeping up with changes in technology or how the work is performed | 1.73 | Not Asked
Those employees are distracted by matters in their personal lives | 1.67 | 1.66
Those employees do not understand how to do the work | 1.65 | 0.90
For those employees, the causes are rooted in misconduct (e.g., Absent without Leave, abusive treatment of customers or coworkers) | 1.56 | 1.21
Those employees do not understand what is expected from them | 1.56 | 0.54
Those employees are given more work than they could handle | 1.10 | 0.33
Those employees are a target of an interpersonal work conflict | 0.85 | 0.20
Those employees lack needed resources or tools | 0.85 | 0.27
As illustrated, supervisors and agency representatives held similar views of the relative
importance of the considerations. Both groups felt that employees not suited for their jobs and
employees not willing to do the work to succeed were the most common causes of deficient
performance. For agencies, the third most common cause was “difficulty keeping up with
changes in technology or how the work is performed.”
It is worth noting that both groups gave similar ratings to the item, “Those employees are distracted by
matters in their personal lives.” Supervisors were asked this question in 2016. Agencies were
asked this item in May-July 2020, in the midst of the coronavirus pandemic—a very different
time with respect to the challenges employees might face in their personal lives. Yet, this issue
appeared to be a constant—indicating that it will remain an important consideration even once the
pandemic is over.
20
First, the questions were not identical: agencies were asked about failure to meet an “acceptable level of competence” and supervisors were
asked about failure to meet a critical element. Second, responses to agency questionnaires were not from a representative sample and were simply
averaged. In contrast, the MPS was administered to a constructed sample and responses were weighted to present a measure of Governmentwide
employee opinion.
21
The response scale for both the survey and the agency questionnaires was: a great extent, some extent, a little extent, and no extent. As noted,
agency questionnaire responses were not weighted on number of employees or any other basis. Also, some agencies submitted a single
response, while others provided multiple component responses.
The similarities between the causes of failure to meet a critical performance element and of
falling short of an acceptable level of competence suggest that methods of addressing the former
could also be applied to the latter. Our research brief, Remedying Unacceptable Employee
Performance in the Federal Civil Service, discusses approaches agencies can use to address
deficient performance.
One finding from that brief was that communication is among the most effective methods of
addressing performance issues. Forty-two percent of supervisors stated that discussing with an
employee the possible negative consequences of continued inadequate performance was effective
to at least some extent.
In response to our questionnaire for our current brief, many organizations reported that there was
a return on their investment in managing the WGI denial process. 22 In the words of
one organization:
For employees who are not at the level expected[,] supervisors have the ability to
discuss areas for improvement, set goals, and [communicate] clear expectations
to improve performance. This consequence and tool used by supervisors for
underperformers often gets the attention of staff and they make an extra effort to
improve performance and get back on track to work toward the WGI. 23
Warning employees about the possible denial of a WGI—or effectuating the denial if necessary—
may help communicate that those consequences are real. But, to do that, supervisors need to keep
track of the WGI schedule and have the conversations in time to make a different WGI decision if
warranted. The WGI process typically includes steps such as:
• Monitoring when a WGI is due so that employees can be given positive reinforcement, or, if
necessary, a warning. 24
• Human resources (HR) providing the supervisor with notification that the WGI is nearly due
and asking the supervisor to certify whether the WGI should be granted or denied.
• A supervisor signing the certification and returning it to HR for processing.
• Communication from the supervisor to the employee to either congratulate the employee on
earning the WGI or explain the reasons for denying the WGI.
• A personnel action issued by HR that either grants or denies the WGI.
• If there is a denial, written notification to the employee of his or her appeal rights.
We asked agencies what could be done to make it more likely that supervisors would use the
WGI denial authority effectively. One department said, “Simplification and automation of
22
This opinion was not universal. As an example of the counter-view, one small agency said: “The resources needed to manage the process (HR
notifying managers of upcoming [WGIs]), denying the [WGIs], and the subsequent work by the managers and employees is time and resource
consuming for the less than 1% of our employee population that results in a [WGI] denial.”
23
Emphasis added. This component (the largest in its department) had a WGI denial rate more than three times the average of the rest of the
department. Denial rates were 0.263 percent compared to 0.080 percent, with a total of over 200,000 WGI decisions for the department in the studied period.
24
For example, “Your performance is on track for you to get your WGI on [date].” or “If you do not improve, you will not receive a WGI
on [date].”
the denial process would lessen paperwork and likely produce an impact in real-time.” Another
organization noted that the process required them “to print out the 3-part forms and internally
mail [them] to offices only to have them mailed/scanned in return.” However, a different agency
told us that their HR and finance services have made the system paperless.
Conclusion
MSPB’s research identified several factors that can influence how frequently an agency will deny
WGIs. For instance, certain rating patterns make it easier to identify an employee whose
performance does not justify a WGI yet is not so poor as to warrant removal. Work that is
output-oriented or otherwise quantifiable is easier to
measure, enabling the agency to more easily draw the fine distinction between less than fully
successful and better than unacceptable performance. Some organizations have a performance
measurement culture with more structured processes, making it easier not only to measure when a
WGI denial is appropriate, but to communicate to the employee why the outcome is proper.
As stated in the introduction, our purpose is not to say how often an agency or a particular
supervisor should grant or deny a WGI. However, it appears to be common practice in many
agencies that WGIs are nearly automatic, with just over 1 in 1,000 being denied, despite survey
data from supervisors and employees indicating that performance concerns are fairly widespread.
Examples of what other agencies and departments have done—and the results of those efforts—
may be helpful to organizations that wish to improve how they make WGI decisions.
Appendix A: Overview of Within-Grade Increases
The period between WGIs (waiting period) varies by step. To advance to steps 2,
3 or 4, the employee waits 52 weeks at the next lower step. To advance to steps 5,
6 or 7, the employee waits 104 weeks. To advance to steps 8, 9 or 10, the
employee waits 156 weeks. 28 The figure at left illustrates this process over time. If
there are no promotions or quality step increases (QSIs), it takes 18 years to
progress through all the steps. (A QSI is awarded for particularly excellent
performance and essentially allows the employee to move to the next step without
having to wait the required number of weeks. See 5 C.F.R. § 531.503.)
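The 18-year figure follows directly from the waiting periods, with three advancements at each interval:

\[
3 \times 52 + 3 \times 104 + 3 \times 156 = 936 \text{ weeks} = 18 \text{ years.}
\]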
The personnel system’s procedures for processing WGIs are automated to some
extent, although the extent of automation varies by agency. Typically, the
personnel system generates a report indicating that the waiting period is nearing
completion. At that point, the supervisor is notified of the pending WGI and
asked if the employee is at an acceptable level of competence, and by extension,
whether the WGI should be granted or denied.
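As a purely illustrative sketch of the due-date logic described above (not any agency's actual system; the function names and the 90-day notice window are assumptions made for this example), the waiting-period rules could be expressed as follows:

```python
from datetime import date, timedelta

def waiting_period_weeks(next_step: int) -> int:
    """Waiting period before advancing to `next_step` under 5 C.F.R. 531.405:
    52 weeks to reach steps 2-4, 104 weeks for steps 5-7, 156 weeks for steps 8-10."""
    if 2 <= next_step <= 4:
        return 52
    if 5 <= next_step <= 7:
        return 104
    if 8 <= next_step <= 10:
        return 156
    raise ValueError("next_step must be between 2 and 10")

def wgi_due_date(last_increase: date, current_step: int) -> date:
    """Date the next WGI would be due, ignoring the creditable-service and
    'equivalent increase' rules that a real personnel system must also apply."""
    return last_increase + timedelta(weeks=waiting_period_weeks(current_step + 1))

def needs_supervisor_notice(last_increase: date, current_step: int,
                            today: date, notice_days: int = 90) -> bool:
    """True when the WGI is near enough that the supervisor should be asked to
    certify whether the employee is at an acceptable level of competence.
    The 90-day notice window is an assumption for illustration."""
    return today >= wgi_due_date(last_increase, current_step) - timedelta(days=notice_days)

# Example: an employee at step 4 whose last increase took effect Jan. 2, 2024
# must wait 104 weeks to advance to step 5.
print(wgi_due_date(date(2024, 1, 2), 4))                                 # 2025-12-30
print(needs_supervisor_notice(date(2024, 1, 2), 4, date(2025, 11, 1)))   # True
```

A production system would also need to verify creditable service and the absence of an "equivalent increase" during the waiting period (5 C.F.R. § 531.405; see footnote 28), and would exclude employees already at step 10, who receive no further WGIs.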
In contrast with granting a WGI, the denial is less automated because it comes
with certain statutory obligations for the agency. By law, if a WGI is denied, “the
employee is entitled to prompt written notice of that determination and an
opportunity for reconsideration of the determination within his agency…. If the
determination is affirmed on reconsideration, the employee is entitled to appeal to
the Merit Systems Protection Board.” 29
25
While this research brief discusses step increases in the GS, the Federal Wage System (FWS)—a pay system covering blue-collar
occupations—has a similar process with pay increases on a different schedule. Such increases are granted to a non-supervisory employee, team
leader, or supervisor in the FWS “provided his or her performance in his or her position is satisfactory.” OPM, Appropriated Fund Operating
Manual, Subchapter S8-5, available at https://www.opm.gov/policy-data-oversight/pay-leave/pay-systems/federal-wage-system/appropriated-fund-
operating-manual/subchapter8.pdf.
26
Pay is adjusted for the locality. However, statutory limits on GS pay rates have resulted in pay compression at grade GS-15 in some localities.
This means that the pay for several steps is actually the same amount. For example, in San Francisco, the pay rates for GS-15 steps 5 through 10 are
identical.
27
OPM, General Schedule Classification and Pay, available at https://www.opm.gov/policy-data-oversight/pay-leave/pay-systems/general-
schedule/.
28
There are additional requirements, such as the time waiting must be spent earning “creditable service” and there cannot have been an
“equivalent increase” during the waiting period. See 5 C.F.R. § 531.405.
29
5 U.S.C. § 5335(c).
Appendix B: Methodology
This research brief is based upon three primary sources of information: (1) workforce data from
the U.S. Office of Personnel Management’s Enterprise Human Resources Integration/Statistical
Data Mart (EHRI–SDM); (2) responses to questionnaires sent to select Cabinet departments and
independent agencies; and (3) data from a Merit Principles Survey (MPS) we conducted in 2016.
The reported EHRI–SDM data is primarily for fiscal years (FYs) 2012 through 2018. It includes
only individuals paid under the General Schedule (GS). In total, there were 4,109,095 WGI
decisions available for analysis. This large volume of decisions means that figures and differences
that are small in absolute or practical terms may be statistically significant and therefore useful
for identifying patterns or practices of interest. The presentation of such figures is not to discuss
practical effects or outcomes, such as a large number of WGIs being denied. Rather, it is to
illustrate repeated patterns in policies or practices that tend to coincide with WGI denials and
inform readers how the WGI denial authority might be used more effectively where appropriate.
Regarding the agency questionnaires, some entities elected to submit a single reply covering all
of their components, while others sent separate replies for different components. In total, we
received 50 replies from 17 departments and independent agencies. Collectively, those
departments and agencies employ 79 percent of the civilian Federal workforce. These responses
have not been weighted. We also spoke with some agency representatives, and one bargaining
unit, to ask additional questions or seek clarification of their responses.
The 2016 MPS data comes from two different “paths” of that survey. One was administered to a
stratified random sample of all employees from most large agencies and departments, while the
other was administered to a stratified random sample of supervisors, managers, and executives
from those employers. Questions covered a variety of topics, including performance management
and under-performing employees. More information on the 2016 MPS is available at
https://www.mspb.gov/FOIA/Data/MSPB_MPS2016_MethodologyMaterials.pdf.