Lecture 19

SMART Objectives in Health Programs


• S – Specific: Clear and focused on a single, well-defined outcome.
• M – Measurable: Progress can be tracked with data.
• A – Achievable: Realistic and feasible given resources.
• R – Relevant: Aligned with priorities and needs.
• T – Time-bound: Has a deadline or timeframe.
Health Promotion Program Planning
• Assessment of needs comes first.
• Goals and SMART objectives are defined.
• Strategies and interventions are developed.
• Implementation involves applying the planned actions.
• Evaluation is carried out to assess the program at three levels: process, impact, and
outcome evaluation.

Program Cycle Steps


1. Problem identification
2. Needs assessment
3. Priority setting
4. Goal and objective formulation
5. Planning interventions
6. Implementation
7. Monitoring and evaluation

Evaluation Types
• Process Evaluation: Checks if the program is being implemented as planned.
• Impact Evaluation: Measures short-term effects.
• Outcome Evaluation: Measures long-term results on health status.

Important Notes
• Health promotion is not only education; it also includes creating supportive
environments, policies, and community action.
• Programs must be culturally appropriate and evidence-based.

● Validity: The degree to which a diagnostic method correctly identifies the condition it is intended to detect.


● Reliability and consistency: The degree to which observations and measurements agree when
they are repeated, both between different observers and within the same observer.
● Validity: The test used should be able to distinguish who is sick from who is healthy.
● In other words, it reflects how accurately the method measures the true value of the variable
of interest.
Validity has two components: sensitivity and specificity.
SENSITIVITY: The proportion of people who are actually sick that the diagnostic/measurement
method correctly detects as sick. Example: the ratio of those classified as sick by the new
diagnostic test to the total number classified as sick by the reference diagnostic test gives
the sensitivity of the new diagnostic test.
SPECIFICITY: The proportion of people who are actually healthy that the new diagnostic/measurement
method correctly identifies as healthy. Example: the ratio of those found healthy by the new
diagnostic test to those known to be healthy according to the reference diagnostic test gives
the specificity of the new test.
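
As a concrete (hypothetical) illustration of these two ratios, the short Python sketch below compares a new test against a reference test using invented counts; the names TP, FN, TN, FP and all the numbers are assumptions made up for this example, not data from the lecture.

```python
# Hypothetical 2x2 comparison of a new test against a reference test.
# TP = sick by reference and positive on the new test
# FN = sick by reference but negative on the new test
# TN = healthy by reference and negative on the new test
# FP = healthy by reference but positive on the new test
TP, FN = 90, 10   # 100 people sick according to the reference test
TN, FP = 160, 40  # 200 people healthy according to the reference test

sensitivity = TP / (TP + FN)  # proportion of the truly sick detected as sick
specificity = TN / (TN + FP)  # proportion of the truly healthy found healthy

print(f"Sensitivity = {sensitivity:.2f}")  # 0.90
print(f"Specificity = {specificity:.2f}")  # 0.80
```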

Positive predictive value: The proportion of cases with a positive test result who are actually
sick according to the reference test.
Negative predictive value: The proportion of cases with a negative test result who are actually
healthy according to the reference test.
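
Continuing the same invented counts, the predictive values read the 2x2 table by test result rather than by true status; again, the numbers below are assumptions used only to show the arithmetic.

```python
# Same hypothetical counts as in the sensitivity/specificity sketch above.
TP, FN = 90, 10
TN, FP = 160, 40

ppv = TP / (TP + FP)  # of all positive results, the share that are truly sick
npv = TN / (TN + FN)  # of all negative results, the share that are truly healthy

print(f"PPV = {ppv:.2f}")  # 90 / 130 ≈ 0.69
print(f"NPV = {npv:.2f}")  # 160 / 170 ≈ 0.94
```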
There are four general classifications of reliability estimates.
1. Intra-observer or inter-observer reliability.
2. Test-retest reliability.
3. Parallel forms reliability.
4. Internal consistency reliability.
RELIABILITY-CONSISTENCY: The extent to which the same results are obtained when measurements,
observations, or examinations in a study are repeated on the same people, under the same
conditions, by the same observers.
-If there is no observer variation, measurements made with the same method on two serum samples
taken from the same person at the same time are theoretically expected to be identical or very
close to each other.
-This degree of similarity is called the reliability/consistency of the observations.
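
One simple way to quantify this kind of consistency for a continuous measurement (such as duplicate serum values from the same people) is a correlation and a mean difference between the two sets of repeated results. The Python sketch below uses invented values purely to show the arithmetic; it is an illustration, not data or a method from the lecture.

```python
# Hypothetical duplicate serum measurements (same people, same method, same time).
first  = [5.1, 6.3, 4.8, 7.0, 5.9]
second = [5.0, 6.4, 4.9, 6.8, 6.0]

n = len(first)
mean1, mean2 = sum(first) / n, sum(second) / n

# Pearson correlation between the two sets of repeated measurements:
# values close to 1 indicate high consistency (reliability) of the method.
cov = sum((x - mean1) * (y - mean2) for x, y in zip(first, second)) / n
sd1 = (sum((x - mean1) ** 2 for x in first) / n) ** 0.5
sd2 = (sum((y - mean2) ** 2 for y in second) / n) ** 0.5
r = cov / (sd1 * sd2)

# Mean difference shows any systematic shift between the two measurement rounds.
mean_diff = sum(x - y for x, y in zip(first, second)) / n
print(f"r = {r:.3f}, mean difference = {mean_diff:.3f}")
```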
SOURCES OF OBSERVER ERROR:
1) Inter-observer consistency: The extent of agreement between results when observations and
measurements of the same variables are made on the same people, under the same conditions, by
different observers.
2) Intra-observer consistency: The agreement between results when observations and measurements
are repeated on the same people, under the same conditions, by the same observer.
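
For categorical ratings, inter-observer (or intra-observer) consistency is often summarized as percent agreement together with Cohen's kappa, which corrects agreement for chance. The sketch below implements both for two hypothetical observers classifying the same subjects as "sick" or "healthy"; the ratings are invented for illustration only.

```python
# Hypothetical ratings of the same 10 subjects by two observers.
obs1 = ["sick", "sick", "healthy", "healthy", "sick",
        "healthy", "healthy", "sick", "healthy", "healthy"]
obs2 = ["sick", "healthy", "healthy", "healthy", "sick",
        "healthy", "sick", "sick", "healthy", "healthy"]

n = len(obs1)
categories = set(obs1) | set(obs2)

# Observed agreement: proportion of subjects given the same rating by both observers.
p_observed = sum(a == b for a, b in zip(obs1, obs2)) / n

# Expected chance agreement, from each observer's marginal proportions.
p_expected = sum((obs1.count(c) / n) * (obs2.count(c) / n) for c in categories)

# Cohen's kappa: agreement beyond chance, scaled to the maximum possible.
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"Percent agreement = {p_observed:.2f}")  # 0.80
print(f"Cohen's kappa     = {kappa:.2f}")       # ≈ 0.58
```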

MEASURES TO REDUCE INTER-OBSERVER OR INTRA-OBSERVER VARIATION:


1- The tools and equipment used should not be defective.
2- The examination and measurement methods should be standardized, and the same techniques
should be used throughout.
3- The tools and equipment to be used should be pre-tested.
4- Observers should be trained and pre-tested before the study.
5- Observers should be supervised during implementation.
