ADC Assessment Process
Version 1.2 (2021 update). Nicole Groombridge, Narelle Mills. August 2021.
Contents
1. Introduction
2. ADC assessment process
3. Assessment design theory
3.1. Competency
4. ADC assessment design
5. Written examination
5.1. Written examination format
6. Practical examination
6.1. Practical examination format
6.1.1. Choice of assessment methods
6.1.2. Practical examination structure
6.1.3. Technical skills day
6.1.4. Clinical skills day
7. Assessment of tasks
7.1. Rating
7.2. Scoring
7.3. Final result grade derivation
7.3.1. Example 1 – passing candidate
7.3.2. Example 2 – failing candidate
8. References
Appendix 1
Appendix 2
1. Introduction
Health professions in Australia maintain integrity and public safety through the regulation of health practitioners. This regulation is guided by the National Registration and Accreditation Scheme (NRAS)1 and the Health Practitioner Regulation National Law Act 2009 (National Law). The regulation of health practitioners includes standards that restrict registration to practitioners who are competent to practise. Australian training programs that lead to qualification as a dental practitioner are accredited to ensure Australian-qualified dental practitioners meet these standards.
Under Section 53 of the National Law, an overseas qualified dental practitioner seeking
eligibility to register in Australia is qualified to apply for general registration if:
a. the individual holds an approved qualification for the health profession; or
b. the individual holds a qualification the National Board established for the health
profession considers to be substantially equivalent, or based on similar competencies,
to an approved qualification; or
c. the individual holds a qualification, not referred to in paragraph (a) or (b), relevant to
the health profession and has successfully completed an examination or other
assessment required by the National Board for the purpose of general registration in the
health profession.
The Australian Dental Council (ADC) is the independent accreditation authority for the dental
professions in Australia. A not-for-profit company, the ADC is appointed by the Dental Board
of Australia (DBA) under the NRAS to conduct assessments and examinations of overseas
qualified dental professionals who are seeking eligibility to apply for registration with the DBA.
2. ADC assessment process
The ADC assessment process for overseas qualified dental practitioners (including dentists, dental hygienists, dental therapists, oral health therapists, and dental prosthetists) aims to protect the public by ensuring only dental practitioners who are suitably trained and qualified to practise in a competent and ethical manner are deemed eligible to apply to the DBA for registration. It is not used to limit or control the number of overseas qualified dental practitioners registering to practise in Australia.
The ADC assessment process is a three-stage process comprising an initial assessment of qualifications, a written examination and a practical examination.
3. Assessment design theory
A multi-dimensional assessment framework is used to assist in the design of a robust, high-stakes credentialing assessment process. The framework takes into account the
competencies that need to be assessed together with the level of assessment required for
each of those competencies.
3.1. Competency
The ADC defines competency as a concept that:
includes knowledge, experience, critical thinking and problem-solving skills,
professionalism, ethical values, diagnostic and technical and procedural skills.
These components become an integrated whole during the delivery of patient
care by the competent practitioner. Competency assumes that all behaviours
are performed with a degree of quality consistent with patient well-being and
that the practitioner self-evaluates treatment effectiveness. The term covers the
complex combination of knowledge and understanding, skills and attitudes
needed by the graduate.
The minimum standard of all ADC assessments is set at the level expected of a new graduate
from the relevant accredited dental program in Australia.
Applying Miller’s concepts to the assessment of competence, both knowledge and
performance-based assessments can be used to identify the proficiency of an applicant in
each of the entry-level competencies for a profession. Whilst the knowledge layers of
competence do not directly translate to competence themselves, their measurability and
role as a foundation to competence allows for a staged assessment approach to occur.
The knowledge layers of competence can be assessed at the “knows” and/or
“knows how” levels. Knowledge is most commonly assessed using written examinations. If a
practitioner is not able to demonstrate adequate knowledge in a competency, they cannot
be considered competent and there is no need to undertake further, more complex
assessments of the performance requirements of that profession.
The performance of a competence may be assessed at the “shows how” or “does” levels.
Although the “does” level of performance represents the highest level in Miller’s pyramid,
assessments at this level would require assessing a practitioner performing clinical duties on
live patients. Such assessments are difficult to standardise and pose a potential risk to the
participating patients. Therefore, most high stakes examinations for entry into a health
profession assess at the “shows how” level using simulated environments.
4. ADC assessment design
The ADC assessment for overseas qualified dental practitioners is based on the expected
competencies of a recently qualified Australian dental practitioner at the point of graduation
from an ADC accredited dental program. To achieve this, ADC assessments are "blueprinted"
against the ADC entry-level competencies for the relevant dental profession.
Blueprinting is a form of ‘assessment mapping’ that ensures an assessment:
• tests the required attributes and competencies,
• uses assessment methods that are appropriate for the competencies being assessed,
• provides coverage of appropriate depth and breadth,
• is neither too predictable nor too unpredictable, and
• is feasible.
Commencing in 2011, the ADC undertook detailed blueprinting exercises against the
competency statements current at that time. Blueprint workshop participants reviewed the
competencies, identified and prioritised competencies for assessment in the ADC process,
assessed the feasibility of alternative assessment strategies (e.g. MCQ, simulated patient, OSCE) and determined the preferred method of assessment for each of the competencies to
be assessed.
In 2017 the ADC commissioned an external review of its assessment processes to ensure that
examinations continue to conform to contemporary best practice. One outcome was that the ADC has since revisited the overall assessment blueprints for each profession to ensure
that they are based on the most recent competency statements.
In line with the revised overarching blueprint and external environmental changes (including
the construction of an ADC-owned and managed examination centre), the blueprint for the
general dentistry practical examination was extensively revised in 2018, reducing the focus on
restorative skills in the general dentistry examination, ensuring a wider sampling of
competencies and introducing a formal objective structured clinical examination (OSCE)
component to complement the technical (restorative) skills component of the examination.
Additionally, the general dentistry written examination blueprint was subsequently revised in
2021.
5. Written examination
Following on from the overarching assessment blueprints, the ADC has developed individual
written examination blueprints for each of the dental professions based on the competency
document relevant to that profession.
All written examination blueprints are domain and discipline based. Domains reflect the broad categories of professional activity and concerns that occur in the practice of dentistry. Disciplines represent a specific area of dental practice (e.g. oral surgery).
The blueprint for the written examination for general dentistry is available in Appendix 1; it
includes five domains, grouped into four clusters, and 13 disciplines.
The structure of the examination, including the number of examination papers and questions
varies by profession. Specific detail about the structure of the examination for each
profession can be found in the applicable written examination handbooks available on the
ADC website.
Figure 3 – written examination format
6. Practical examination
Based on the overarching assessment blueprints, the ADC has developed individual practical
examination blueprints for each of the dental professions based on the competency
document relevant to that profession.
All practical examination blueprints are domain and discipline based. Domains reflect the
broad categories of professional activity and concerns that occur in the practice of dentistry.
Disciplines represent a specific area of dental practice (e.g. oral surgery). Practical
examination blueprints also use groupings which allow assessment of global competencies
across multiple tasks.
The General Dentistry practical examination focusses on the competencies listed in Domain 6 (Patient Care) of the competency statement and its subdomains: clinical information gathering (6.1), diagnosis and management planning (6.2), and clinical treatment and evaluation (6.3).
To align with the preferred method of assessment and to allow for a wide sampling of
different disciplines (clinical areas), the examination specifications for individual examinations
require that tasks are selected from items based on specific disciplines.
Each assessment task is scored using task-specific checklists of up to 15 criteria. These criteria
are assigned to up to three different groupings per task. The criteria are grouped based on
global competencies which are themselves derived from the competencies for the relevant
professional group. They include:
• effective communication
• clinical reasoning and judgement
• underlying knowledge base
• professionalism and ethics
• infection control
The use of these groupings (sometimes called sub-domains) allows generic, global
competencies to be assessed across multiple tasks.
An example blueprint for the general dentistry practical examination is given below:
[Example blueprint table: practical examination tasks are drawn from specific disciplines (e.g. diagnosis and management planning, 2 tasks) and classified as communication-based or technical skill-based.]
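As an illustration of how these groupings can work in practice, the sketch below is hypothetical Python, not the ADC's marking software; the Criterion structure and all field names are assumptions. It shows checklist criteria tagged with a grouping being pooled across tasks so that a global competency such as effective communication is assessed over multiple stations.

```python
# A minimal sketch, assuming a simple tagged-criterion data structure; not the
# ADC's actual marking software.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Criterion:
    task: str         # task or station identifier
    description: str  # what the criterion assesses
    grouping: str     # global competency grouping, e.g. "effective communication"
    score: int        # numerical checklist score awarded by the examiner (see section 7.1)

def pool_by_grouping(criteria: list[Criterion]) -> dict[str, list[int]]:
    """Collect criterion scores by grouping, across all tasks."""
    pooled: defaultdict[str, list[int]] = defaultdict(list)
    for c in criteria:
        pooled[c.grouping].append(c.score)
    return dict(pooled)

# Communication criteria from two different stations are pooled into a single
# "effective communication" grouping for later cluster-level analysis.
criteria = [
    Criterion("station 1", "takes a structured history", "clinical reasoning and judgement", 3),
    Criterion("station 1", "explains findings in plain language", "effective communication", 2),
    Criterion("station 4", "obtains valid consent", "effective communication", 1),
]
print(pool_by_grouping(criteria))
```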
6.1. Practical examination format
6.1.2. Practical examination structure
The General Dentistry practical examination is a two-day examination consisting of a clinical skills day and a technical skills day. The format differs between the two days: the clinical skills day uses an OSCE format, while the technical skills day uses simulated technical tasks performed on typodonts in dental manikins. The structure of each examination day is described in more detail in the following sections and outlined in Figure 4.
6.1.3. Technical skills day
Content
The technical skills day focuses on the demonstration of the technical skills described under domain 6.3 (Patient Care – Clinical Treatment and Evaluation) of the competencies. This covers the provision of evidence-based, patient-centred care and may include tooth preparation and/or restoration related to:
• conservation
• endodontics
• fixed prosthodontics.
Process
During the technical skills day, candidates are required to complete six tasks on pre-
prepared, standardised typodont models in manikin heads mounted on clinically realistic
simulation units.
Half of the technical tasks are restorative-based (i.e. placing a restoration); the other half are preparation-based (i.e. preparing a tooth to receive a restoration or other procedure).
All tasks are relevant to contemporary practice in Australia and are designed to reflect the
skills needed to manage common or important clinical situations. Example technical skills day
examination tasks include:
• the preparation of a carious tooth/teeth
• the restoration of a prepared tooth/teeth with resin composite
• the restoration of a prepared tooth/teeth with amalgam
• the preparation and/or temporisation of a tooth/teeth to receive an indirect
restoration(s)
• an endodontic procedure.
The ADC continually develops technical tasks for use in these assessments and currently has
an "item bank" of tasks that have been shown to have high validity and reliability.
7. Assessment of tasks
All observed clinical skills day tasks are marked by an examiner at the time of the task.
Unobserved clinical skills day tasks and all technical skills day tasks are marked by two
independent examiners after the examination. Examination results are generally released
within six weeks of an examination.
Examination areas are fitted with CCTV. Recording of examinations can be used for examiner
training purposes. All examiners are trained and calibrated.
7.1. Rating
Individual candidate performance in each clinical skills day OSCE station task and technical
skills day task is assessed using both global rating scales and checklists.
A global rating scale gives a rating of a candidate’s overall performance in a task. Global
rating scales are appropriate when evaluating multifaceted domains such as clinical
information gathering.
A candidate can receive one of five global rating grades for their overall task performance:
outstanding, pass, borderline, fail or bad fail.
Examiners also assess candidate performance in a task using a checklist. Individual
assessment criteria (or items) are presented to the examiner in the form of a checklist and are
used by examiners to assess performance in a standardised and reliable manner. Examiners
rate candidates across a range of criteria for each task. The criteria have been developed to
identify the attributes of the task which are assessed and to define what a competent
candidate should be able to achieve.
A candidate can receive one of four possible grades for each checklist criterion: very good,
satisfactory, borderline or unsatisfactory. Each grade relates to a numerical score of 3, 2, 1 or
0 respectively.
The grade description for each criterion may vary by task; however, in broad terms, the grade descriptors are outlined below.
Very good identifies a competent performance, above that expected, which is thorough, complete and well executed.
Satisfactory identifies minor deviations from a very good performance.
7.2. Scoring
When scoring a candidate’s performance, the unit of analysis is the station, task, or cluster and not the checklist criterion, as checklist items are mutually dependent (e.g. a correct diagnosis depends on the candidate taking an appropriate history). Candidates receive an overall score for each station or technical task. The score is calculated by adding together the checklist scores given to each of the criteria assessed in that task.
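As a minimal sketch of this calculation (hypothetical code, not the ADC's scoring software), the grade-to-score mapping and the summation might look like:

```python
# A minimal sketch of the scoring step described above: each checklist grade
# maps to a numerical score (very good = 3 ... unsatisfactory = 0), and the
# candidate's overall station score is the sum over all criteria for that task.
GRADE_SCORES = {"very good": 3, "satisfactory": 2, "borderline": 1, "unsatisfactory": 0}

def station_score(grades: list[str]) -> int:
    """Sum the numerical scores for the checklist grades awarded in one station."""
    return sum(GRADE_SCORES[g] for g in grades)

# A station marked on 15 criteria yields a score between 0 and 45.
grades = ["very good", "satisfactory", "satisfactory", "borderline"] + ["satisfactory"] * 11
print(station_score(grades))  # 3 + 2 + 2 + 1 + 11*2 = 30
```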
The passing score for each station/technical task is established using borderline regression – a
criterion-referenced standard setting method. Borderline regression is an objective,
reproducible method for calculating the checklist score at the boundary between a
satisfactory and an unsatisfactory performance. The borderline regression method uses the
expertise of the panel of trained and calibrated examiners to assign appropriate "global
scores" and objectively establishes the pass standard in a way that has been shown to
provide a more credible and reliable standard than the more traditional standard-setting
methods such as the Angoff method 12.
Borderline regression uses all the data of a group of candidates. A linear regression model is
used to determine the relationship between global rating scores and checklist scores for all
candidates at a station or task to obtain a station pass mark. This can be used to calculate
an overall pass mark.
A worked example of borderline regression is provided at Appendix 2.
Figure 6 – Practical examination format
7.3. Final result grade derivation
A candidate’s final result for the practical examination is calculated using a partial
compensatory test scoring model. A test scoring model refers to the way station/task scores
are combined to arrive at an overall result for the examination as a whole. A partial compensatory scoring model is used to calculate the final pass/fail decision for each individual examination day.
In a partial compensatory scoring model each station/task is assigned to a “cluster” of other
like tasks/stations. A pass/fail decision is reached for each domain cluster by performing a
borderline regression of all global scores against all criteria scores within that cluster.
An expert reference panel is used to assign a competency-specific rating for the criteria
assigned to the communication and infection control subdomain clusters. This competency
rating is used in conjunction with the criteria scores for borderline regression.
The use of borderline regression standard setting for setting the passing standard for each
station, combined with a partial compensatory method for determining the final pass/fail
decisions for an examination, has been shown to be a credible method for minimising the
number of incorrect decisions made about passing and failing a candidate 12.
To obtain an overall “pass” in the practical examination candidates must pass both days of
the examination at a single attempt. The clinical skills day and technical skills day are
assessing fundamentally different competencies and a strong performance on one
examination day cannot compensate for a substandard performance on the other
examination day.
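The pass/fail logic described above can be summarised in a short sketch (hypothetical Python, not the ADC's scoring software; cluster-level results are assumed to have already been decided by borderline regression):

```python
# A minimal sketch of the partial compensatory model: a day is passed only if
# every cluster within it is passed, and the practical examination is passed
# only if both days are passed at the same attempt.
def day_passed(cluster_results: dict[str, bool]) -> bool:
    """A day is passed only when all of its clusters are passed."""
    return all(cluster_results.values())

def practical_result(clinical: dict[str, bool], technical: dict[str, bool]) -> str:
    return "PASS" if day_passed(clinical) and day_passed(technical) else "FAIL"

# Mirrors Example 2 below: failed clusters on the clinical skills day fail that
# day, and therefore the whole examination, despite a passed technical skills day.
clinical = {
    "clinical information gathering": True,
    "diagnosis and management planning": True,
    "clinical treatment and evaluation": False,
    "effective communication": True,
    "infection control": False,
}
technical = {"restorative-based tasks": True, "preparation-based tasks": True}
print(practical_result(clinical, technical))  # FAIL
```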
A worked example of a final grade derivation is provided below.
These are indicative examples of how final grades are derived and do not represent the
outcome of actual examinations.
7.3.1. Example 1 – passing candidate
Clinical skills day
Station 1 2 3 4 5 6 7 8 9 10
Station result Fail Pass Pass Pass Pass Pass Fail Pass Pass Pass
Cluster 1 (clinical information gathering): this cluster combines the scores from both clinical information gathering stations. A high score in station 2 compensated for a poor score in station 1. The candidate achieved an overall pass score for this cluster.
Cluster 2 (diagnosis and management planning): this cluster combines the scores from both diagnosis and management planning stations. The candidate achieved an overall pass score for this cluster.
Cluster 3 (clinical treatment and evaluation): this cluster combines the scores from all clinical treatment and evaluation stations. Combined scores in stations 5, 6, 8, 9 and 10 were sufficient to compensate for a poor score in station 7. The candidate achieved an overall pass score for this cluster.
Cluster 4 (effective communication): this cluster combines the scores given for effective communication across multiple stations. This candidate achieved an overall pass for this cluster.
Cluster 5 (infection control): this cluster combines the scores given for infection control across multiple stations. This candidate achieved an overall pass score for this cluster.
A clinical skills day pass requires a pass in all 5 clusters. This candidate passed all 5 clusters and therefore passes the clinical skills day.
Technical skills day
Cluster 1 (restorative-based tasks): this cluster combines the scores from all 3 restorative-based tasks. High scores in tasks 2 and 3 compensated for a poor score in task 1.
Cluster 2 (preparation-based tasks): this cluster combines the scores from all 3 preparation-based tasks. High scores in tasks 4 and 6 compensated for a poor score in task 5.
A technical skills day pass requires a pass in both clusters. This candidate passed clusters 1 and 2 and therefore passes the technical skills day.
This candidate would PASS the practical examination as a whole as they passed both days of
the examination.
7.3.2. Example 2 – failing candidate
Clinical skills day
Station 1 2 3 4 5 6 7 8 9 10
Station result Pass Pass Pass Fail Pass Pass Fail Pass Pass Fail
Cluster 1 (clinical information gathering): this cluster combines the scores from both clinical information gathering stations. The candidate achieved an overall pass score.
Cluster 2 (diagnosis and management planning): this cluster combines the scores from both diagnosis and management planning stations. A high score in station 3 compensated for a poor score in station 4. The candidate achieved an overall pass score for this cluster.
Cluster 3 (clinical treatment and evaluation): this cluster combines the scores from all clinical treatment and evaluation stations. Combined scores in stations 5, 6, 8 and 9 were not sufficient to compensate for poor scores in stations 7 and 10. The candidate achieved an overall fail for this cluster.
Cluster 4 (effective communication): this cluster combines the scores given for effective communication across multiple stations. This candidate achieved an overall pass for this cluster.
Cluster 5 (infection control): this cluster combines the scores given for infection control across multiple stations. This candidate achieved an overall fail score for this cluster.
A clinical skills day pass requires a pass in all 5 clusters. This candidate failed clusters 3 and 5 and therefore fails the clinical skills day.
Technical skills day
Cluster 1 (restorative-based tasks): this cluster combines the scores from all 3 restorative-based tasks. High scores in tasks 2 and 3 compensated for a poor score in task 1.
Cluster 2 (preparation-based tasks): this cluster combines the scores from all 3 preparation-based tasks. The candidate achieved an overall pass score for this cluster.
A technical skills day pass requires a pass in both clusters. This candidate passed clusters 1 and 2 and therefore passes the technical skills day.
This candidate would FAIL the practical examination as a whole as they did not pass both
days of the examination.
8. References
1. Guide to the National Registration and Accreditation Scheme (NRAS) for health
professions. 1502.
2. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63-7.
3. Gerhard-Szep S, Guentsch A, Pospiech P, et al. Assessment formats in dental
medicine: An overview. GMS J Med Educ. 2016;33(4):1-43. doi:10.3205/zma001064
4. Surry LT, Torre D, Durning SJ. Exploring examinee behaviours as validity evidence for
multiple-choice question examinations. Med Educ. 2017;51(10):1075-1085.
doi:10.1111/medu.13367
5. Hawkins RE, Swanson DB. Using written examinations to assess medical knowledge and its application. In: Holmboe ES, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. Mosby Elsevier; 2008:42-59.
6. Scalese RJ. Simulation-based assessment. In: Holmboe ES, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. 2017:215-248.
7. See K, Chui K, Chan W, et al. Evidence for endovascular simulation training: a
systematic review. Eur J Vasc Endovasc Surg. 2016;51(3):441-451.
8. Sawyer T, Gray MM. Procedural training and assessment of competency utilizing
simulation. Semin Perinatol. 2016. doi:10.1053/j.semperi.2016.08.004
9. Khan KZ, Gaunt K, Ramachandran S, Pushkar P. The Objective Structured Clinical
Examination (OSCE): AMEE Guide No. 81. Part II: Organisation & Administration. Med
Teach. 2013. doi:10.3109/0142159X.2013.818635
10. Dong T, Swygert KA, Durning SJ, et al. Validity Evidence for Medical School OSCEs:
Associations With USMLE Step Assessments. Teach Learn Med. 2014;26(4):379-386.
doi:10.1080/10401334.2014.960294
11. Shumway JM, Harden RM. AMEE guide no. 25: The assessment of learning outcomes
for the competent and reflective physician. Med Teach. 2003;25(6):569-584.
doi:10.1080/0142159032000151907
12. Schoonheim-Klein ME. The use of the objective structured clinical examination (OSCE) in dental education [dissertation]. University of Amsterdam, UvA-DARE (Digital Academic Repository).
Appendix 1
Example written examination blueprint for general dentistry.
The blueprint covers five domains, grouped into four clusters, with a target percentage of questions for each domain:
• Professionalism (1): 8%
• Health promotion (4): 12%
• Clinical information gathering (6.1): 30%
• Diagnosis and management planning (6.2): 30%
• Clinical treatment and evaluation (6.3): 20%
Disciplines, with the target percentage of questions, include:
• Dental emergencies: 7%
• Endodontics: 8%
• General medicine (inc. medical emergencies and special needs dentistry): 9%
• Implants: 4%
• Radiography: 5%
• Restorative dentistry (inc. fixed prosthodontics): 12%
Appendix 2
A worked example using borderline regression
For each OSCE station, a candidate is scored across 15 different criteria. A candidate can
receive one of four possible grades for each checklist criterion: very good, satisfactory,
borderline or unsatisfactory. Each grade relates to a numerical score of 3, 2, 1 or 0
respectively. A candidate can therefore receive a minimum score of 0 and a maximum score
of 45 for an individual OSCE station.
The examiner also gives each candidate an overall score for that task, called a global rating.
A candidate can receive one of five global rating grades for their overall task performance:
outstanding, pass, borderline, fail or bad fail. (Each global rating grade relates to a numerical
score of 4, 3, 2, 1 or 0 respectively).
Data for an OSCE station were collected over three different examination sessions, giving the 37 individual sets of scores shown in Figure 1.
During borderline regression, candidate checklist scores are plotted against global ratings and a regression line is fitted. The predicted checklist score on the regression line at a global rating of borderline (2) gives the passing score for that station (see Figure 2); in this case, the passing score is 23 out of 45.
Exam date | Student ID | OSCE station 1 score (out of 45) | Global rating | Global score
09/07/2018 41 22 Borderline 2
09/07/2018 42 29 Borderline 2
09/07/2018 43 13 Borderline 2
09/07/2018 44 38 Outstanding 4
09/07/2018 45 19 Borderline 2
09/07/2018 46 24 Pass 3
09/07/2018 47 25 Pass 3
09/07/2018 48 24 Borderline 2
09/07/2018 49 26 Pass 3
09/07/2018 50 29 Pass 3
09/07/2018 51 39 Outstanding 4
09/07/2018 52 41 Outstanding 4
16/07/2018 53 8 Bad fail 0
16/07/2018 54 22 Fail 1
16/07/2018 55 25 Borderline 2
16/07/2018 56 34 Outstanding 4
16/07/2018 57 31 Pass 3
16/07/2018 58 30 Pass 3
16/07/2018 59 23 Pass 3
16/07/2018 60 28 Borderline 2
16/07/2018 61 29 Pass 3
16/07/2018 62 16 Fail 1
16/07/2018 63 17 Borderline 2
16/07/2018 64 40 Outstanding 4
23/07/2018 65 9 Bad fail 0
23/07/2018 66 10 Bad fail 0
23/07/2018 67 19 Bad fail 0
23/07/2018 68 26 Borderline 2
23/07/2018 69 27 Pass 3
23/07/2018 70 20 Fail 1
23/07/2018 71 28 Pass 3
23/07/2018 72 30 Outstanding 4
23/07/2018 73 14 Bad fail 0
23/07/2018 74 17 Fail 1
23/07/2018 75 20 Fail 1
23/07/2018 76 19 Borderline 2
23/07/2018 77 36 Pass 3
Figure 1 – Candidate scores
Figure 2 – Regression of OSCE station scores (out of 45) against global ratings (0–4, with 2 = borderline), showing observed scores (Y) and the fitted regression line (Predicted Y).
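The pass mark derivation in this appendix can be reproduced with a short script. The sketch below (not the ADC's software) uses numpy's least-squares line fit on the 37 score/rating pairs transcribed from Figure 1:

```python
# A minimal sketch of the borderline regression calculation: checklist scores
# (y) are regressed on global ratings (x), and the fitted score at the
# "borderline" rating (x = 2) is the station pass mark.
import numpy as np

# (global rating, station score) pairs transcribed from Figure 1.
global_ratings = np.array([2,2,2,4,2,3,3,2,3,3,4,4,0,1,2,4,3,3,3,2,3,1,2,4,
                           0,0,0,2,3,1,3,4,0,1,1,2,3])
scores = np.array([22,29,13,38,19,24,25,24,26,29,39,41,8,22,25,34,31,30,23,28,
                   29,16,17,40,9,10,19,26,27,20,28,30,14,17,20,19,36])

slope, intercept = np.polyfit(global_ratings, scores, 1)  # least-squares line
pass_mark = intercept + slope * 2  # predicted score at a "borderline" rating
print(f"pass mark = {pass_mark:.1f} / 45")  # ~23, matching the worked example
```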