Important Aspects of OT Assessment
3 authors:
Ted Brown
Monash University (Australia)
All content following this page was uploaded by Tore Bonsaksen on 04 June 2014.
Ergoterapeuten 01.09 – vitenskap (science)
Figure 1: Steps in the occupational therapy assessment & intervention process (OP = occupational performance)
• Initial receipt of referral & preliminary information gathering
• Identification of OP issues by client & family/caregivers
• Assessment & evaluation of specific OP areas (as required)
• Planning & selection of OP assessment tools
• Administration & scoring of OP assessment tools
• Summarise & interpret OP assessment results
• Generate OP hypotheses, goals, & priorities
• Collaborative OP intervention planning with client & family
• Implement OP intervention(s)
• Re-assess & re-evaluate OP intervention program
• Discharge planning & further OP follow-up as required

Rogers and Holm (1989) classified the purposes of assessment as: predictive, discriminative, descriptive, and evaluative. Predictive assessment provides some indication of expected skill levels with regard to future occupational performance. For example, a child exhibiting poor visual motor integration skills in kindergarten might be a reliable predictor of that child having difficulties with handwriting skills by Year Three at primary school. Discriminative assessment involves using norms to measure and compare performances for the purpose of diagnosis, placement, or establishing the level of function in comparison to the normative group. A child who exhibits poor gross motor skills can, for example, be assessed using a standardized test where his/her gross motor skill performance is compared to a matched age-level norm to determine if performance is above or below average. Descriptive assessment is simply determining a profile of a client's occupational performance skills, interests, roles, values, habits, and routines. Evaluative assessment involves testing methods that are sensitive enough to detect clinical change when used sequentially. An example would be a client's self-care skills being evaluated before an eight-week intervention program is started, then at four weeks into the program, and then after the program is finished at the end of the eight weeks.
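The discriminative comparison described above (a raw score judged against a matched age-level norm) reduces to simple arithmetic on the normative mean and standard deviation. The sketch below illustrates it in Python; the child's score and the normative values are invented for illustration and are not taken from any published test manual.

```python
# Minimal sketch of norm-referenced score interpretation.
# All normative values below are hypothetical, for illustration only.
from statistics import NormalDist

def standard_scores(raw_score: float, norm_mean: float, norm_sd: float):
    """Convert a raw score into a z-score, t-score, and percentile rank
    relative to a matched age-level normative sample."""
    z = (raw_score - norm_mean) / norm_sd
    t = 50 + 10 * z                        # t-scores: mean 50, SD 10
    percentile = NormalDist().cdf(z) * 100  # assumes a normal distribution
    return z, t, percentile

# Hypothetical child: raw gross motor score of 38 against an
# (invented) age-norm mean of 45 and standard deviation of 5.
z, t, pct = standard_scores(38, 45, 5)
print(f"z = {z:.1f}, t = {t:.0f}, percentile = {pct:.0f}")
# prints: z = -1.4, t = 36, percentile = 8  (below the age-matched average)
```

In practice a clinician reads the same numbers straight from a manual's conversion tables; the point here is only that z-scores, t-scores, and percentile ranks are all re-expressions of the distance between a raw score and the normative mean.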
…documenting subjective clinical observations. Fourth, assessment results are sometimes used as a goalpost or marker for funding thresholds for publicly funded services (such as the eligibility for funding for an educational assistant within a classroom environment or attendant care within a client's home context). Fifth, assessments are completed as a means to assist with client-centered or family-centered program planning for clients, since this fosters collaborative goal setting and intervention planning with clients and caregivers. Sixth, assessment is an important component in providing or generating evidence (e.g., evidence-based practice) about the efficacy of occupational therapy services, thus contributing to the knowledge base of occupational therapy practice and theory. Finally, assessments can be used as outcome measures. Potential uses of outcome measures include:
i) to demonstrate that a clinical intervention is effective,
ii) to facilitate change that leads to improvement in client satisfaction,
iii) to demonstrate to a funding agency that a contracted service has been provided to a client,
iv) to show areas where service development might be required or more resources need to be made available, and
v) to provide evidence of the effective use of health care resources and funding.

In summary, assessment may be a management, clinical, client-perspective, or professional tool. Therapists have a professional and ethical responsibility to assess the need for service, design interventions based on information gathered from assessment, and evaluate the results of the intervention based on re-assessment results. Administrators and managers may use assessment information to make informed decisions about the continuation of program funding or the need to establish new clinical programs. Different types of assessment are completed at different stages of the assessment process. These stages will now be described.

Stages of the assessment process
The steps of the assessment process of client service provision fall into a continuum of steps that are outlined in Figure 1.

Types of assessment methods
Measurement is the process that entails the assessment, calculation, or judgment of the magnitude, quantity, or quality of a trait or characteristic. Assessment refers to a set of procedures used to find out information, whereas evaluation refers to specific procedures used in the assessment process. Assessment involves the collection, appraisal, and classification of facts gathered in an organized manner (Law & Baum, 2005). Specific assessment tools and assessment approaches (e.g., quantitative and qualitative) are developed and evaluated to ensure that they are reliable, valid, consistent, responsive, and useful. There are several types of assessment tools which occupational therapists use in their clinical practice. There are both standardized and non-standardized assessments as well as formal and informal instruments. There are also norm-referenced and criterion-referenced tests. The final types of tests are self-report versus performance-based.

Standardized assessments have pre-established protocols for the administration and scoring of scale items. Clients' performance scores are recorded and then compared to the normative data. After administering and scoring the test items, the clinician looks up the client's raw score to obtain a standard score. The standard score provides a comparison of the client's performance to other individuals who are the same age and/or may be healthy or have the same diagnosis. A number of scores are associated with standardized assessment, including scaled scores, t-scores, percentile ranks, stanines, and age-equivalents (Kielhofner, 2006). Standardized tests are more rigid in their administration protocols and usually have well-established reliability, validity, and responsiveness data published in their test manuals (Asher, 2007; Benson & Schnell, 1997).

Formal tests have a manual that documents their development, theoretical rationale, potential uses, standardization, reliability, validity, and responsiveness to change (Asher, 2007). They also have a test manual that includes a set of instructions to follow and scoring criteria. Informal non-standardized tests are frequently designed and used by occupational therapists. These are often home-made checklists or assessment task kits created by clinicians for their own specific local use. They lack norms, a test manual, and evidence of established reliability and validity. Often informal scales augment the clinical observations made by therapists during the assessment process.

Norm-referenced tests (similar to standardized tests) have scale items that are scored, and the scores are then compared to a large sample of participants' scores in order to determine how a client's test score compares to the normative sample scores (Anastasi & Urbina, 1997). Norm-referenced tests are often used as scholastic achievement tests. Criterion-referenced tests are those that have scale items based on published empirical research findings instead of the average performance scores of a norm group at different age levels.

The final type of test is based on its format. There are a number of ways tests can be completed. For example, some tests have scale items where clients have to complete a task and their task performance is then scored based on a number of specific criteria. Some tests are self-report/parent-report inventories or scales. Tests can take several other formats as well, including interview schedules, surveys with open-ended questions, rating scales, true/false scales, and Likert scales (Law, Baum, & Dunn, 2005; McDowell, 2006).

Characteristics of assessment tools
Assessment tools need to meet a specific number of criteria in order for their test scores to be considered useable and practical. The criteria include reliability, validity, responsiveness to change, and clinical utility/practicality (Asher, 2007). These are summarized in Table 1. However, validity as we traditionally know it has been reconceptualized, and therefore it is important that therapists are familiar with this contemporary definition of validity in the context of occupational performance assessment.

A new definition of test validity
To ensure that tests are accurately assessing what they purport to measure, they must demonstrate evidence of reliability and validity. Traditionally, validity was viewed as a three-part concept made up of content, criterion-related, and construct validity (Cook & Beckman, 2006; Downing, 2003) (see Figure 3). In the most recent edition of the Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999), the conceptualization of validity markedly changed. The view of validity theory prevailing today is largely based on the seminal work of Messick (1989, 1994, 1995). The current emphasis states that all validity is subsumed under construct validity and is concerned with «an overall evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of interpretations and actions on the basis of test scores or other models of assessment» (Messick, 1989, p. 741). Although there are numerous methods available to determine validity, validity is now viewed as a unitary/single concept. The various approaches to it are related components that can be combined to evaluate what inferences can be made from test scores (Smith, 2001).

In the 1999 Standards, validity was defined as «the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests» (AERA, APA & NCME, 1999, p. 9). The most important issue in the development and evaluation of measures is the process of validation, which involves the accumulation of evidence to provide a sound empirical foundation for proposed interpretations of test scores. Downing (2003) eloquently states this as «validity requires an evidentiary chain which clearly links the interpretation of the assessment scores or data to a network of theory, hypotheses
Table 1: Characteristics of assessment tools
1. Reliability: the ability of the items of a test or scale to measure a construct, attribute, or trait on a consistent basis. Specific types of reliability include:
• Internal consistency: the degree to which the items of a test are correlated with one another; the degree of homogeneity between test items. It is a measure of the items testing the same construct and should be greater than 0.80.
• Alternate-form/equivalent-forms reliability: the use of alternate or equivalent test forms to obtain correlations between parallel forms of the same test; the scores from two versions of the same test are compared for consistency; each test is expected to have item equality, thus making the tests equal at a given point in time; correlations should be greater than 0.80.
• Split-half reliability: the items of a test or scale are split into two groups, then combined into two forms; the scores from the two halves are then correlated; the results from performance on the first half of the test are correlated with the second half, or the scores on all even-numbered items are compared to the odd-numbered ones.
• Covariance procedures: the average of all split-half tests, expressed as KR20 or KR21; the consistency of responses between all items on a test.
• Test-retest reliability: the ability of a test to exhibit some degree of score stability between two administrations of the same test (usually one or two weeks between the first and second administrations);
• Intra-rater reliability: the ability of the same person to score test items consistently between two administrations; intended to ensure that individuals rate the same construct or trait in the same way; and
• Inter-rater/inter-observer reliability: the ability of two different people to score test items consistently; correlations of 0.85 or higher are typically expected to ensure consistency between two raters of the same test.
2. Content validity: the representativeness or sampling adequacy of the items of an instrument; how well a test measures the scope of the attribute or trait that it purports to evaluate.
3. Criterion-related validity: an outside criterion is compared to the test to determine its accuracy in measuring a phenomenon; refers to how well the test scores compare to what is being measured. Two types of criterion-related validity include:
• Concurrent/congruent validity: the relationship between the instrument in question and other already validated instruments that measure the same phenomenon; the extent of agreement between two simultaneous measures of the same skill or aptitude; and
• Predictive validity: the ability of an instrument to forecast future behaviour, abilities, or performance of participants who complete a test; the extent of agreement between the current test results and a future assessment. It is used to make predictions about future behaviour or skill aptitude.
4. Construct validity: how well a test measures the theoretical facets of a construct it purports to measure. Specific types of construct validity include:
• Convergent validity: the extent to which a construct is correlated with constructs believed to be similar; the scale results of a test should correlate highly with another scale that measures the same variable or construct. It infers a degree of agreement when measuring the same trait with two different tests of that trait or construct.
• Divergent validity: the extent to which a construct is dissimilar from other constructs believed to be unrelated or different;
• Discriminant validity: the ability of a test to differentiate between two groups of participants with known differences (e.g., a group with a clinical diagnosis compared with a group that is clinically normal); the test scores should not correlate with another scale of a different variable or construct; and
• Factor analysis: items of a test group together to measure the construct they were intended to measure. It also includes the identification of interrelated behaviours, abilities, or functions in an individual that contribute to the collective abilities or functions. It involves the correlation of the test with other groups to define the common traits a test measures.
5. Face validity: the items of a scale appear to address the purpose of the test and the variables that it purports to measure. It is subjective and is based on the local judgement of the author or experts on the topic.
6. Clinical validity: how well the scores of a test can be used to predict future performance.
7. Responsiveness to change: how sensitive a test or instrument is to change in the clinical status of a person.
8. Clinical utility: the usefulness of a tool in terms of length, time to complete (also known as respondent burden), scoring format, and complexity of items.
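Two of the internal-consistency indices defined in Table 1 — split-half reliability (stepped up with the Spearman-Brown formula) and KR-20 as a covariance procedure for dichotomous items — can be sketched briefly. The response matrix below is invented for illustration; a real analysis would use scored item data from an actual test administration.

```python
# Illustrative sketch only: made-up dichotomous (0/1) item responses.
from statistics import mean, pvariance

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half(responses):
    """Correlate odd- vs even-numbered item totals, then apply the
    Spearman-Brown step-up to estimate full-test reliability."""
    odd = [sum(row[0::2]) for row in responses]
    even = [sum(row[1::2]) for row in responses]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

def kr20(responses):
    """Kuder-Richardson 20: internal consistency for dichotomous items."""
    k = len(responses[0])
    p = [mean(col) for col in zip(*responses)]          # item difficulties
    pq = sum(pi * (1 - pi) for pi in p)
    var_total = pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - pq / var_total)

# rows = examinees, columns = items (hypothetical data)
data = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
]
print(f"split-half = {split_half(data):.2f}, KR-20 = {kr20(data):.2f}")
# prints: split-half = 0.93, KR-20 = 0.73
```

Against the 0.80 threshold listed in Table 1, the invented data's KR-20 of roughly 0.73 would be marginal while the split-half estimate of roughly 0.93 would be acceptable — a reminder that different reliability indices can disagree on the same data.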
and logic which are presented to support or refute the reasonableness of the desired interpretations» (p. 831). In this contemporary context, validity refers to evidence generated to support or refute the meaning or interpretation assigned to test results. «Validity is never assumed and is an ongoing process of hypothesis generation, data collection and testing, critical evaluation and logical inference» (Downing, 2003, p. 831). This conceptualization of validity, known as «construct validity,» integrates the traditional components of content, criterion-related, and construct validity. In the 1999 Standards (AERA, APA & NCME, 1999), five subcomponents of construct validity evidence are included as a means of addressing the central issues implicit in the notion of validity as a unified concept. These subcomponents are: (1) content, (2) substantive/content, (3) structural/response processes, (4) generalizability/internal structure/relations to other variables/external, and (5) consequential aspects of construct validity (see Figure 4).
[…] «would clarify the purpose of occupational therapy for the client. Those roles that are important to the person, especially the ones that he or she engaged in prior to the illness or trauma, become the focus of inquiry. If a discrepancy among the past, present, and future role performances is detected during the assessment, the person would see the need for treatment. A top-down assessment further determines which particular tasks define each of the roles for that person, whether he or she can now do those tasks, and probable reasons for an inability to do so» (Trombly, 1993, p. 253).

As well, in the top-down approach, «the foundational factors» (performance skills, performance patterns, context, activity demands, and client factors) are considered later (Weinstock-Zlotnick & Hinojosa, 2004, p. 594). Limitations of the top-down approach include the limited number of available assessment tools for practitioners to use, difficulties in the assessment and implementation of some practice models associated with this approach, and that some models include basic science which is not readily applicable to clinical use (Law, 1998; Weinstock-Zlotnick & Hinojosa, 2004). Positive features synonymous with the top-down approach include the fact that it is related to the basic tenets of the occupational therapy profession, provides clinicians with knowledge of occupations, focuses the clinician on a holistic viewpoint of the client, facilitates theoretical autonomy, and identifies clients with occupational dysfunction (Christiansen & Baum, 1997; Weinstock-Zlotnick & Hinojosa, 2004).

Practice models and theories that incorporate a top-down approach include the Canadian Model of Occupational Performance and Engagement (Townsend & Polatajko, 2007), Model of Human Occupation (Kielhofner, 2002; Kramer & Bowyer, 2007), Occupational Adaptation Model (Schkade & McClung, 2001; DeGrace, 2007), Person-Environment-Occupation Model (Law & Dunbar, 2007), Ecology of Human Performance Model (Dunn, 2007), and the Person-Environment-Occupational Performance Model (Christiansen & Baum, 1997). Examples of top-down assessments used with children include the Canadian Occupational Performance Measure (COPM; Law, Baptiste, Carswell, McColl, Polatajko & Pollock, 1998), School Function Assessment (SFA; Coster, Deeney, Haltiwanger, & Haley, 1998), Pediatric Evaluation of Disability Inventory (Haley, Coster, Ludlow, Haltiwanger, & Andrellos, 1992), Pediatric Interest Profiles (PIP; Henry, 2000), Children Helping Out: Responsibilities, Expectations and Supports (CHORES; Dunn, 2000), and the School Assessment of Motor and Processing Skills (SAMPS; Fisher, Bryze & Hume, 2002).

Figure 4: Sources of construct validity evidence

Conclusion
An outline of occupational therapy assessment was provided. The stages of the assessment process, types of assessments, and characteristics of assessment tools were presented. The contemporary conceptualization of construct validity was also described. Finally, the topic of top-down and bottom-up assessments was discussed. Using occupational performance assessments with strong measurement properties (such as validity and reliability) is vital for therapists working with clients and their caregivers/families.

Reference List
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association.
American Occupational Therapy Association. (2002). Occupational therapy practice framework: Domain and process. American Journal of Occupational Therapy, 56, 609-639.
Anastasi, A. & Urbina, S. (1997). Psychological testing. Upper Saddle River, NJ: Prentice Hall International.
Asher, I. E. (2007). Occupational therapy assessment: An annotated index. Bethesda, MD: American Occupational Therapy Association, Inc.
Ayres, A. J. (1980). Sensory integration and the child. Los Angeles, CA: Western Psychological Services.
Benson, J. & Schnell, B. A. (1997). Measurement theory: Application to occupational and physical therapy. In J. Van Deusen & D. Brunt (Eds.), Assessment in occupational therapy and physical therapy (pp. 3-24). Philadelphia, PA: W. B. Saunders Company.
Bobath, B. (1990). Adult hemiplegia: Evaluation and treatment. London, UK: Heinemann.
Bruininks, R. H., & Bruininks, B. D. (2005). Bruininks-Oseretsky Test of Motor Proficiency (2nd ed.). Minneapolis, MN: AGS Publishing/Pearson Education.
Canadian Association of Occupational Therapists (2002). Profile of occupational therapy in Canada (2nd ed.). Ottawa, ON: CAOT Publications.
Carr, J. H., & Shepherd, R. B. (2003). Stroke rehabilitation: Guidelines for exercise and training to optimise motor skill. New York, NY: Butterworth-Heinemann.
Case-Smith, J. (2005). Occupational therapy for children. St. Louis, MO: Elsevier Mosby.
Christiansen, C. H. & Baum, C. M. (1997). Occupational therapy: Enabling function and well-being. Thorofare, NJ: Slack.
Colarusso, R. P. (2003). Motor-Free Visual Perceptual Test (3rd ed.). Novato, CA: Academic Therapy Publications.
Cook, D. A. & Beckman, T. J. (2006). Current concepts in validity and reliability for psychometric instruments: Theory and application. American Journal of Medicine, 119, 166.e7-166.e16.
Coster, W., Deeney, T., Haltiwanger, J., & Haley, S. (1998). School Function Assessment. San Antonio, TX: Psychological Corporation.
Crepeau, E. B., Cohn, E. S. & Schell, B. A. B. (2003). Willard & Spackman's occupational therapy. Philadelphia, PA: Lippincott, Williams & Wilkins.
DeGrace, B. W. (2007). The Occupational Adaptation Model: Application to child and family interventions. In S. B. Dunbar (Ed.), Occupational therapy models for intervention with children and families (pp. 97-126). Thorofare, NJ: Slack Incorporated.
Downing, S. M. (2003). Validity: On the meaningful interpretation of assessment data. Medical Education, 37, 830-837.
Dunn, W. (2000). The screening, referral, and pre-assessment processes. In W. Dunn (Ed.), Best practice occupational therapy: In community service with children and families (pp. 55-77). Thorofare, NJ: Slack Incorporated.
Dunn, W. (2007). Ecology of Human Performance Model. In S. B. Dunbar (Ed.), Occupational therapy models for intervention with children and families (pp. 127-155). Thorofare, NJ: Slack Incorporated.
Edwards, M. A., Millard, P., Praskac, L. A., & Wisniewski, P. A. (2003). Occupational therapy and early intervention: A family-centred approach. Occupational Therapy International, 10, 239-252.
Fisher, A. G., Bryze, K. & Hume, V. (2002). School AMPS: School version of the Assessment of Motor and Process Skills. Fort Collins, CO: Three Star Press, Inc.
Folio, M. R. & Fewell, R. R. (2000). Peabody Developmental Motor Scales (2nd ed.). Austin, TX: Pro-Ed.
Hagedorn, R. (2000). Tools for practice in occupational therapy: A structured approach to core skills and processes. Philadelphia, PA: Churchill Livingstone.
Haley, S. M., Coster, W. J., Ludlow, L. H., Haltiwanger, J. T., & Andrellos, P. J. (1992). Administration manual for the Pediatric Evaluation of Disability Inventory. San Antonio, TX: Psychological Corporation.
Hammill, D. D., Pearson, N. A., & Voress, J. K. (1993). Developmental Test of Visual Perception (2nd ed.). Austin, TX: Pro-Ed.
Henry, A. D. (2000). Pediatric Interest Profiles. San Antonio, TX: Harcourt Assessment.
Kielhofner, G. (2002). Model of Human Occupation: Theory and practice. Baltimore, MD: Lippincott Williams & Wilkins.
Kielhofner, G. (2006). Developing and evaluating quantitative data collection instruments. In G. Kielhofner (Ed.), Research in occupational therapy: Methods of inquiry for enhancing practice (pp. 155-176). Philadelphia, PA: F. A. Davis Company.
Kramer, J. M. & Bowyer, P. (2007). Application of the Model of Human Occupation to children and family interventions. In S. B. Dunbar (Ed.), Occupational therapy models for intervention with children and families (pp. 51-96). Thorofare, NJ: Slack Incorporated.
Law, M. (1998). Client-centred occupational therapy. Thorofare, NJ: Slack.
Law, M., Baptiste, S., Carswell, A., McColl, M., Polatajko, H., & Pollock, N. (1998). Canadian Occupational Performance Measure manual. Ottawa, ON: CAOT Publications ACE.
Law, M. & Baum, C. (2005). Measurement in occupational therapy. In M. Law, C. Baum, & W. Dunn (Eds.), Measuring occupational performance: Supporting best practice in occupational therapy (pp. 3-20). Thorofare, NJ: Slack Incorporated.
Law, M. & Dunbar, S. B. (2007). Person-Environment-Occupation Model. In S. B. Dunbar (Ed.), Occupational therapy models for intervention with children and families (pp. 28-50). Thorofare, NJ: Slack Incorporated.
McCormick, G. L. (1996). The Rood approach to treatment of neuromuscular dysfunction. In L. W. Pedretti (Ed.), Occupational therapy: Practice skills for physical dysfunction (pp. 377-400). St. Louis, MO: Mosby.
McDowell, I. (2006). Measuring health: A guide to rating scales and questionnaires. New York, NY: Oxford University Press.
Messick, S. (1989). Validity. In R. Linn (Ed.), Educational measurement (pp. 13-104). New York, NY: American Council on Education & Macmillan Publishing Company.
Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23, 13-23.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749.
Miller, L. (1988). Miller Assessment for Preschoolers: Manual. Sidcup: The Psychological Corporation.
Reed, K. L. & Sanderson, S. N. (1999). Concepts of occupational therapy. Philadelphia, PA: Lippincott Williams & Wilkins.
Rogers, J. C. & Holm, M. B. (1989). The therapist's thinking behind functional assessment. In C. B. Royeen (Ed.), American Occupational Therapy Association (AOTA) self-study series – Assessing function. Rockville, MD: AOTA.
Schemm, R. (2003). Occupation-based and family-centred care: A challenge for current practice. American Journal of Occupational Therapy, 57, 347-350.
Schkade, J. & McClung, M. (2001). Occupational adaptation in practice: Concepts and cases. Thorofare, NJ: Slack Incorporated.
Smith, E. V. (2001). Evidence for the reliability of measures and validity of measure interpretation: A Rasch measurement perspective. Journal of Applied Measurement, 2, 281-311.
Townsend, E. A. & Polatajko, H. J. (2007). Enabling occupation II: Advancing an occupational therapy vision for health, well-being, & justice through occupation. Ottawa, ON: CAOT Publications.
Trombly, C. (1993). The issue is: Anticipating the future: Assessment of occupational function. American Journal of Occupational Therapy, 47, 253-257.
Voss, D. E., Ionta, M. K., & Myers, B. J. (1985). Proprioceptive neuromuscular facilitation: Patterns & techniques. New York, NY: Harper & Row.
Weinstock-Zlotnick, G., & Hinojosa, J. (2004). The issue is: Bottom-up or top-down evaluation: Is one better than the other? American Journal of Occupational Therapy, 58(5), 594-598.