Social Experiments in Public Policy Evaluation

The document discusses the history and evolution of social experiments in evaluating public policy. It notes that the first large-scale social experiments were proposed in the 1960s during the War on Poverty, and the first experiment conducted was the New Jersey Negative Income Tax Experiment in the 1970s. Over time, experiments shifted from attempting to simulate behavioral responses to hypothetical policies to simply reporting the mean impacts between treatment and control groups for the specific policy tested (the "black-box" approach). For experiments to have internal validity, random assignment between groups is needed to ensure equivalence on measured and unmeasured characteristics so that any differences in outcomes can be attributed to the treatment.

Social Experiments and Public Policy

Caroline Danielson

INTRODUCTION

Social experiments - randomly assign people either to a group that is subject to one or more policy
“treatments” or to one that continues to be subject to the prevailing policy norm (“controls”).

(Social experiments randomly assign people either to the treatment group, which receives the
proposed new policy, or to the control group, which continues under the existing policy.)

- alone can assure that any differences that emerge over time in employment, earnings, or other
relevant outcomes can reliably be attributed to the program.
(Random assignment gives assurance that any improvements over time in employment, earnings, or
other relevant outcomes can be attributed to the program rather than to outside factors.)

- Social experiments also hold the respect of those crafting social policy.
(Social experiments generate this respect because they appear to offer a readily accessible and rigorous way to judge among policy choices.)

This chapter will examine the key factors that make social experiments attractive to both
researchers and policy makers.

Because there appears to be a consensus among researchers and policy makers that experiments
constitute a gold standard in policy evaluation, these special features of social experiments are worth
exploring.

Social experimentation promises a rigorous, straightforward arbiter among political choices—a
method well-suited to the division of labor that leaves the choice of ends to policy makers and the
evaluation of means to technical experts.

EVOLUTION OF SOCIAL EXPERIMENTATION

2003 - Greenberg, Linksz, and Mandell review the history of social experiments, and The Digest of
Social Experiments describes all social experiments conducted to date in the United States.
(The authors analyze the most prominent social experiments ever conducted, to explore their
origins, the use of the resulting findings, and factors that influenced the use of the findings. The
result is a comprehensive examination of the effectiveness of social experimentation and
important insights into how this powerful tool can be used to improve public policy.)

1960s - Large-scale social experiments were first proposed.


(They were a departure from the normal practice of policy research, marking the start of a new and
more rigorous way of evaluating policy.)

Haveman (1987) - notes that poverty research, and the organizations capable of training researchers
and carrying out the research, were fundamentally shaped by the War on Poverty, which provided the
funding and the federal agency loci to stimulate policy research in this field.
(The Digest of Social Experiments reveals that the majority of social experiments have been
conducted with poor populations as subjects in program areas that include health, employment, and
education and training. Most of this research was ignited by the War on Poverty, which pressed
policy makers to find answers that would end poverty.)

The first social experiment, the New Jersey Negative Income Tax Experiment (NIT), was conducted by
Mathematica under contract from the University of Wisconsin’s Institute for Research on Poverty.
(This experiment had several treatment groups, each of which was subject to a different combination
of a minimum guaranteed income and a tax rate on income earned above the guarantee. The core
aim was to test whether adults would reduce their hours of work if they knew they were guaranteed
a minimum income. According to an observer, it was not obvious that experimentally altering
individuals' incomes was ethical. Conducting the New Jersey NIT was justified on the grounds that
there was no other way to obtain answers to the question of individuals' responses to a guaranteed
income. In short, the experiment altered the treatment groups' incomes to see how adults would
react to a guaranteed minimum income, and thus whether changing tax policy could help put an end
to poverty.)

Haveman (1987) - writing after the first wave of social experimentation in the 1970s had ended, a
goal of all of these experiments was to estimate structural parameters like the behavioral response to
manipulations of income by tax policy.
(Social experiments create controlled scenarios and observe how they affect the treatment group.
This is done to learn what should be adjusted in policy making.)

Heckman in Manski and Garfinkel 1992 - According to economist James Heckman, however, as the
NIT experiment in particular progressed, its aims grew more constrained: its goal came to be
computing the mean impacts of the program.
(Instead of one or several experiments providing the raw material that would enable researchers to
simulate behavioral responses to a range of hypothetical policies, an experiment would supply simply
the difference in outcomes between the treatment and the control group for the policy or policies
under study. Heckman observed, in other words, that experiments shifted from simulating responses
to hypothetical policies to merely comparing the outcomes of the treatment and control groups.)

Black-box experiment

- researchers make no strong claims about the underlying causes of the outcomes; their focus is on
reporting the results of a particular policy treatment.

- Black-box experiments report the outcome, attribute it to the treatment, and stop there.
(As the NIT failed to give clean structural results, researchers and policy makers turned to the
black-box approach.)

Late 1980s - The sense that black-box experiments are the gold standard of evaluation research had
developed.
(Black-box experiments have become the dominant way of conducting social experiments, as the
field has decidedly shifted to estimating mean impacts of the treatment. The approach has won the
approbation of policy makers because it backs confident statements.)

NUTS AND BOLTS OF EXPERIMENTS

To conduct an experiment, researchers randomly assign some members of a target group to the
program under study and some to the current program.
(The program's impact is measured as the mean difference between the treatment and control
groups on relevant measures; that is, how much more or less income, for example, one group earned
than the other.)
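The mean-impact calculation described above can be sketched with simulated data; all earnings figures below are invented for illustration and are not from any actual experiment:

```python
import random
import statistics

random.seed(0)

# Hypothetical data: simulated annual earnings for 500 people assigned to a
# new program (treatment) and 500 left in the existing program (control).
# Both the dollar figures and the effect size are invented assumptions.
treatment = [random.gauss(16000, 3000) for _ in range(500)]
control = [random.gauss(15000, 3000) for _ in range(500)]

# The experimental "mean impact" is simply the difference in average
# outcomes between the two randomly assigned groups.
mean_impact = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated mean impact: {mean_impact:.0f}")
```

Because the groups were formed at random, this single subtraction is the whole estimator; no modeling of behavior is required, which is exactly what the black-box approach relies on.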

Internal validity is the core methodological strength of experiments. Assigning members of a target
group at random to treatment and control comparison groups ensures that they are statistically
equivalent on both measured and unmeasured characteristics.
(Other research methods can adjust for differences on measured characteristics, but they cannot
methodologically rule out systematic differences between nonexperimental comparison groups on
unmeasured characteristics. Random assignment makes it possible for experiments to claim that
there are no systematic differences between comparison groups on unmeasured characteristics.
Any differences between groups measured subsequently can therefore be confidently attributed to
the treatment, within the bounds of certainty provided by statistics.)
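A small simulation can illustrate why random assignment balances both measured and unmeasured characteristics; the population and both traits below ("age" as a measured covariate, "motivation" as a stand-in for an unmeasured one) are hypothetical assumptions:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: one measured characteristic (age) and one
# "unmeasured" characteristic (motivation). Both are invented for illustration.
population = [
    {"age": random.gauss(35, 10), "motivation": random.gauss(50, 12)}
    for _ in range(2000)
]

# Random assignment: shuffle the pool, then split it in half.
random.shuffle(population)
treatment, control = population[:1000], population[1000:]

# Because assignment ignored both traits, the groups come out statistically
# equivalent on each one, including the trait no researcher ever measured.
for trait in ("age", "motivation"):
    t_mean = statistics.mean(p[trait] for p in treatment)
    c_mean = statistics.mean(p[trait] for p in control)
    print(f"{trait}: treatment {t_mean:.1f} vs control {c_mean:.1f}")
```

The point of the sketch is that balance on "motivation" is achieved without anyone observing it, which is the property nonexperimental methods cannot guarantee.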
To ensure the internal validity of experiments, researchers must successfully randomize participants
between the test and the standard programs. This involves developing a protocol for initial
randomization that is straightforward and not susceptible to manipulation by those implementing the
protocol. It further involves ensuring that members of the control group do not cross over and obtain
the program reserved for the treatment group.
(Randomization protocols are designed so that the experiment produces proper results, that is,
results that have not been manipulated or tampered with by those implementing the protocol.)
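A straightforward, audit-friendly randomization protocol can be sketched as follows; the participant IDs and the fixed seed are assumptions for illustration, not part of the chapter:

```python
import random

def randomize(participant_ids, seed=12345):
    """Assign each participant to 'treatment' or 'control' by a
    reproducible coin flip. A fixed seed plus a sorted ID list makes
    the protocol auditable: anyone can re-run it and verify that no
    individual assignment was manipulated after the fact."""
    rng = random.Random(seed)
    assignment = {}
    for pid in sorted(participant_ids):
        assignment[pid] = "treatment" if rng.random() < 0.5 else "control"
    return assignment

# Hypothetical participant IDs.
groups = randomize(["P001", "P002", "P003", "P004", "P005", "P006"])
print(groups)
```

Reproducibility is the design choice here: because the assignment can be regenerated exactly, those implementing the protocol cannot quietly move a participant between groups, which addresses the manipulation concern raised above.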

It is important to be clear about what the outcomes that can be measured experimentally are. The
experimental outcomes that can be measured depend on the point at which randomization occurred.
(Outcomes can be measured only from the point at which randomization occurred. Measurable
outcomes may depend, for example, on when participants entered the study or on how many times
they received the experimental service.)

A related point is that experimental outcomes are measured as differences between those assigned
to the program and those not assigned to it. The effect of the new program is often not identical to
the impact measured by the experiment because not everyone gets the program.
(Because not everyone assigned to the program actually takes it up, the effect of the program on
actual participants often differs from the average impact the experiment measures across everyone
assigned.)

CONCEPTION OF CAUSALITY

Here is a stripped-down version of the core question to which policy makers seek an answer when
they commission a policy evaluation: If we implement X program, will Y outcome result? Policy
evaluation is fundamentally a testing of means. Simplifying the real complexities of the process of
policy making, one can say that policy makers seek to achieve an end. The ideal evaluation of a policy
would answer the question, does one particular means as compared to another advance us toward
that end?
(Policy makers question new programs to see whether they are fit for purpose and adaptable to
different situations. Outcomes are what matter; if a program cannot be shown to produce the desired
outcome, policy makers will have to decline it.)

Here is the question that social experiments address: On average, there was (or was not) a statistically
significant difference (at conventional levels) between the outcomes of treatment group T and control
group C on measure M (of outcome Y) in an experiment in which X program was tested. For example,
if policy makers want to know whether a welfare-to-work program that emphasizes quick immersion
into a process of searching for a job (X) improves child well-being (Y), researchers would design an
experiment that randomly assigned some (T) to participate in a sequence of job search activities and
others not (C). Child well-being might be measured, among other things, by surveying parents about
problem behaviors their children might be exhibiting (M).
(Questions are framed to measure the outcomes of new programs. This is essential so that policy
makers can decide whether to accept or reject a program.)
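The welfare-to-work example can be sketched as a simple significance test on measure M; the group sizes, effect size, and behavior counts below are simulated assumptions, not real experimental results:

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical measure M: the number of problem behaviors a parent reports
# for their child (lower is better). All values are simulated for illustration.
T = [max(0, round(random.gauss(2.0, 1.5))) for _ in range(500)]  # job-search group
C = [max(0, round(random.gauss(2.6, 1.5))) for _ in range(500)]  # control group

# Mean impact on measure M: the difference in group averages.
diff = statistics.mean(T) - statistics.mean(C)

# Standard error of the difference in means (unequal-variance form),
# giving a z statistic appropriate for large samples.
se = math.sqrt(statistics.variance(T) / len(T) + statistics.variance(C) / len(C))
z = diff / se

# "Conventional levels" here means a 5 percent two-sided test: |z| > 1.96
# counts as a statistically significant difference on measure M.
significant = abs(z) > 1.96
print(f"difference = {diff:.2f}, z = {z:.2f}, significant: {significant}")
```

This is exactly the form of answer the chapter says experiments supply: a treatment-control difference on one measure, judged at conventional significance levels, with no claim about the mechanism behind it.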

Is the question that policy makers implicitly pose identical to the question that researchers address?
The central difference between the two questions posed above is generalizability.
- It seems clear from the way that the first query above is framed that policy makers are interested in
a general result, or something resembling law-like behavior.

Experiments are a powerful means of attributing the impact of the intervention, and not other
factors, to the outcomes observed by researchers.
(The questions above show that policy makers want results that benefit many people, generalizing
beyond the experimental sample to the wider community.)

Given policy makers’ interest in the more general question, the natural inclination is to generalize.
Thus a natural slippage occurs: researchers and policy makers treat the experiment as predictive of
outcomes in other times and places that are “similar enough.” But what counts as similar enough?
What would the outlines of an argument that generalized from one particular experiment look like?
There are two key elements:
- identify the most important behavioral mechanisms that produced the result.
- identify key features of situations that make them enough like the experimental situation so that
individuals placed in those like situations will interact with the context in the same manner that the
experimental subjects did.
(So they identify the tests and developments that produce a given result. For example, if a social
experiment yields a result that answers their queries, they identify what produced that result.)
(These two key elements are set so that outcomes will be similar enough to answer general
questions; they serve as a guide for what the experiment should be about or what query it should
answer.)

Because experiments take a black-box approach, they do not address the behavioral responses that
the program may have induced.
(Researchers conducting laboratory experiments work under controlled settings. Those conducting
social experiments, however, cannot control the conditions under which the treatment and control
group programs unfold; in this sense, they cannot rigorously specify the context.)

There is at least one strong reason to believe that experimental situations are exceptional: those who
are "treated" are not blind to their situation, and those who administer the treatment often know the
circumstances of the experiment. This is a crucial difference between social experiments and
double-blind medical trials, in which both treatment and control groups receive an intervention and
neither researcher nor subject knows who received the treatment.
(As noted, experimental situations are exceptional: those under the experiment know their situation,
while those who conduct it often know the expected outcome.)

METHODOLOGICAL TRANSPARENCY

Perhaps experiments must be interpreted with caution because they do not unpack causal
mechanisms and because conclusions drawn from them do not extend in a straightforward fashion to
programs put in place in other contexts. But as freestanding exercises, experiments have the virtue of
employing a methodology that is more readily grasped than other evaluation methodologies.

Experiments are attractive because they promise to sidestep the debates that heighten the
politicization of policy making.
(Social experiments must be interpreted with caution because they do not unpack causal
mechanisms and their results do not transfer predictably. However, their methodology is more
readily grasped than other evaluation methodologies, which is why many use social experiments.)

The sense that social experiments offer a more immediate, incontrovertible truth than other research
methodologies appears to rest on two factors.
- grasping the essentials of social experiments seems to require no arcane technical training
inaccessible to policy makers and their advisors.
- the outcomes of experiments are not murky: experiments reliably allow observers to sharply
distinguish between programs that worked and those that had no effect on outcomes of interest.
(The methods of social experiments yield clear outcomes and do not require much technical training,
which is why many policy makers use this method.)
One might complain that a key test of a social scientific methodology in the policy arena should not be
its (apparent) lack of technical complexity, but this complaint would be misplaced.
(Garfinkel, Manski, and Michalopoulos claim that social experiments receive funding preference over
basic research in the social sciences because policy makers are unable to interpret the disputes that
social scientists enter into over the results of quasi-experimental research. Social scientists in general
face precisely the problem that they cannot produce tangible proof that social programs are working
without simultaneously justifying and explaining the methodology by which they arrived at their
conclusions. Social experiments, by contrast, appear not to require such technical justification of
their conclusions.)

It might be fair to say that the way social experiments have been carried out has been influenced by
policy makers’ need for simplicity and clarity. But it would be misleading to state that experiments are
simply methodologically transparent.

ETHICAL JUSTIFICATION

It appears that social experiments are presumed to be ethical because of aspects of the methodology
employed. Researchers point out that if resources are limited so that only a subset of applicants can
be accommodated in the program, then random assignment—the core distinctiveness of social
experiments—is a fair way of distributing the opportunity to participate, and is more fair than the
most likely other means of allocating it (e.g., first come, first served). This argument can be challenged.
(They are presumed to be ethical because the methodology supports the values required for
collaborative work, such as mutual respect and fairness. By analogy, the ethical framework that
physicians use ultimately determines which individuals receive priority for treatment and which
patients have access to the limited medical supplies available. Even so, the argument can be
challenged in some extreme situations.)

In cases where programs are mandatory for all applicants, or are open to all who request it, random
assignment can be thought of as a fair way of assigning recipients to programs of unknown efficacy.
That is, if it is unknown whether the old or the new program produces better outcomes, a social
experiment can determine whether the new program should continue.
(Random assignment refers to the use of chance procedures in experiments to ensure that each
participant has the same opportunity to be assigned to any given group, whether the new program
or the old. When it is unknown which program works better, a social experiment has the power to
determine whether the new program should continue.)

Just as the government can grant or withhold tax relief at will, denying an individual access to a
program to which she does not have a strong claim, even if comparable others do have access to the
program, poses a weak ethical dilemma. This ethical dilemma is further weakened by an individualist
ethic.
(Being assigned to the control group is rarely outright denial, since those in the control group often
obtain similar services elsewhere. In these ways it has become easy to justify random experiments as
an evaluation tool for any program that Congress or a state legislature might decide to authorize.
Randomization to be eligible for income supplements, to receive time-limited welfare benefits, and
to receive job search assistance have all recently passed muster.)

O’Connor (2001) claims that poverty research focuses narrowly on individual circumstances and
behaviors instead of on the social and economic opportunities that these individuals face.
(O’Connor claims that such research rests only on individual circumstances rather than the social and
economic conditions these individuals go through.)

In the case of social experiments, it appears that placing faith in their methodological virtues has
allowed policy makers to largely finesse ethical questions.

CONCLUSIONS

To recap, social experiments appear to offer three core strengths: fairness, simplicity, and rigor.
Random assignment offers a fair way (perhaps the most fair way) of allocating scarce resources, no
special training is required to grasp the essentials of the method, and experiments reliably isolate “the
program” from other factors influencing subjects’ outcomes thus informing policy makers how to
further social ends. Policy makers would do well to keep these complexities and limitations in view,
even as they point to the strengths of experimentation. Finally, the fact that experiments are
methodologically attractive should not be a reason to sideline ethical questions.
(Fairness means allocating opportunities without favoritism or discrimination; simplicity means the
method is easy to understand; rigor means the method reliably isolates the program's effect from
other factors. Random assignment is what makes the results credible, and grasping the method
requires no special training. Social experiments show how people respond to particular policies or
programs. Even so, policy makers should keep the method's complexities, limitations, and ethical
questions in view.)
