
MCE 208: ENGINEER IN SOCIETY

Course Synopsis:
Philosophy of science; history of engineering and technology – who is an
engineer, basic skills and requirements in engineering, career development in
engineering, the needs of society, and the engineering design process.
Developmental needs of third-world countries; safety in engineering and
introduction to risk analysis; the role of engineers in nation building – the
engineer's role in the Nigerian local content initiative, the development of the
different branches of engineering and their specializations, and the engineer's
role in the Sustainable Development Goals.
Engineering ethics and conduct, public interest and professional conflicts, the
engineer's code of practice, and design specifications and standards. Lectures
from invited professionals.

PHILOSOPHY OF SCIENCE AND TECHNOLOGY


The philosophy of science is a branch of philosophy concerned with the
assumptions, foundations, methods and implications of science, and with the
use and merit of science. The discipline sometimes overlaps with metaphysics,
ontology and epistemology, for example when it explores whether scientific
results comprise a study of truth.
The philosophy of science is a field that deals specifically with what science
is, how it works, and the logic through which we build scientific knowledge.
Despite its straightforward name, the field is complex and remains an area of
current inquiry.
Philosophy of science has historically been met with mixed responses from the
scientific community. There is no firm agreement among philosophers on many
of the central problems of the philosophy of science, including whether science
can reveal the truth about unobservable things and whether scientific reasoning
can be justified at all. Some prominent scientists have felt that its practical
effect on their work is limited.
Philosophical thought pertaining to science dates back to the time of Aristotle,
but the philosophy of science emerged as a distinct discipline in the mid-
twentieth century. In this context, we shall be surveying the principal
mainstream philosophies of science from the early modern period to the mid-
twentieth century, and then see how they impact our understanding of
technology.
Knowledge: information, facts, truths and principles learned or gathered over
time.
Science: from the Latin word scientia, meaning knowledge acquired through
study or practice. Science is thus a system of ACQUIRING KNOWLEDGE
that involves the use of OBSERVATION and EXPERIMENTATION to
describe and explain NATURAL PHENOMENA.
Science is a methodology by which we find answers to questions; it asks
questions such as "Is it possible?" It is about acquiring knowledge of natural
phenomena along with the reasons for those phenomena: Why is the sky blue?
Why are leaves green? Why does rainfall occur? What are the colours of the
rainbow? How do plants make their food? What makes river water flow?
When this knowledge is put into practice to solve human problems or needs, it
is termed technology. Hence, science deals with theories, principles and laws,
whereas technology is about products, processes and designs. The
interrelationship between science, technology and engineering will be
discussed in detail later.
Scientific Demarcation
The demarcation problem refers to the distinction between science and non-
science (including pseudoscience); Karl Popper called it the central question
in the philosophy of science. However, no unified account of the problem has
won acceptance among philosophers, and some regard the problem as
unsolvable or uninteresting.
Early attempts by the logical positivists grounded science in observation,
while non-science was non-observational and hence meaningless. Popper later
proposed falsifiability as the criterion: all scientific claims must be capable of
being proven false, at least in principle, and if no such proof can be found
despite sufficient effort, the claim is provisionally accepted.
Scientific Realism and Instrumentalism
Two central questions about science are (1) what is the aim of science and (2)
how should one interpret the results of science? Scientific realists claim that
science aims at truth and that one ought to regard scientific theories as true,
approximately true, or likely true. Conversely, a scientific antirealist or
instrumentalist argues that science does not aim (or at least does not succeed)
at truth, and that it is a mistake to regard scientific theories as even
potentially true. Some antirealists claim that scientific theories aim at being
instrumentally useful and should only be regarded as useful, but not true,
descriptions of the world.
Realists often point to the success of recent scientific theories as evidence
for the truth (or near truth) of our current theories. Antirealists point to
the history of science, epistemic morals, the success of false modeling
assumptions, or postmodern criticisms of objectivity as evidence against
scientific realism. Some antirealists attempt to explain the success of
scientific theories without reference to truth.

Scientific Explanation
Scientific theories do not only provide predictions about the future but also
offer explanations of events that occur regularly or have occurred in the
past. Philosophers have investigated the criteria by which a scientific theory
can be said to have successfully explained a phenomenon, as well as what it
means to say a scientific theory has explanatory power.
1. Carl G. Hempel and Paul Oppenheim (1948) offered an influential theory
of scientific explanation known as the Deductive-Nomological (D-N)
model. The model says that a scientific explanation succeeds by
subsuming a phenomenon under a general law; an explanation is defined
as a valid deductive argument whose premises include that law. The
theory was initially ignored but was later subjected to substantial
criticism, resulting in several counterexamples. It is quite challenging
to characterize what is meant by an explanation when the thing to be
explained cannot be deduced from any law.
Hempel and Oppenheim also put forward statistical models of explanation
intended to account for the statistical sciences. These theories were criticized
as well.
2. Wesley Salmon developed an alternative statistical model to account
for some of the problems with Hempel and Oppenheim's model. His
model states that a good scientific explanation must be statistically
relevant to the outcome to be explained.
3. Others have suggested that a good explanation primarily unifies
disparate phenomena or provides a causal mechanism.

Analysis and Reductionism


Analysis is the activity of breaking an observation or theory down into
simpler concepts in order to understand it. Analysis is as essential to science
as it is to all rational activities. For example, the task of describing
mathematically the motion of a projectile is made easier by separating out the
force of gravity, angle of projection and initial velocity. After such analysis it
is possible to formulate a suitable theory of motion.
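The projectile example above can be sketched in code. This is a toy illustration under idealized assumptions (level ground, no air resistance, g = 9.81 m/s²; the numbers are invented, not part of the course material), showing how separating out gravity, angle of projection and initial velocity makes the theory easy to state:

```python
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Range of an ideal projectile launched over level ground."""
    theta = math.radians(angle_deg)
    vx = v0 * math.cos(theta)  # horizontal component: constant speed
    vy = v0 * math.sin(theta)  # vertical component: slowed by gravity
    t_flight = 2 * vy / g      # time to rise and fall back to launch height
    return vx * t_flight       # horizontal distance covered in that time

# Maximum range for a given speed occurs at a 45-degree launch angle.
print(round(projectile_range(20.0, 45.0), 2))
```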
Reductionism can refer to one of several philosophical positions related
to this approach. One type of reductionism is the belief that all fields of study
are ultimately amenable to scientific explanation: a historical event might be
explained in sociological and psychological terms, which in turn might be
described in terms of human physiology, which in turn might be described in
terms of chemistry and physics.
Daniel Dennett invented the term greedy reductionism to describe the
assumption that such reductionism was possible. He claims that it is just ‘bad
science’, seeking to find explanations which are appealing or eloquent, rather
than those that are of use in predicting natural phenomena. He also says that:
There is no such thing as philosophy-free science; there is only science
whose philosophical baggage is taken on board without examination
−Daniel Dennett, Darwin’s Dangerous Idea, 1995.

Grounds of Validity of Scientific Reasoning


Empirical verification
Science relies on evidence to validate its theories and models, and the
predictions implied by those theories and models should be in agreement with
observation. Ultimately, observations reduce to those made by the unaided
human senses: sight, hearing, etc. To be accepted by most scientists, several
impartial, competent observers should agree on what is observed.
Observations should be repeatable, e.g., experiments that generate relevant
observations can be (and, if important, usually will be) done again.
Furthermore, predictions should be specific; one should be able to describe a
possible observation that would falsify the theory or a model that implies the
prediction.
Nevertheless, while the basic concept of empirical verification is simple, in
practice, there are difficulties as described in the following sections.
Inductivism
How can scientists state, for example, that Newton’s Third Law is universally
true? After all, it is not possible to have tested every incidence of an action,
and found a reaction. There have, of course, been many, many tests, and in
each one a corresponding reaction has been found. But can one ever be sure
that future tests will continue to support this conclusion?
One solution to this problem is to rely on the notion of induction. Inductive
reasoning maintains that if a situation holds in all observed cases, then the
situation holds in all cases. So, after completing a series of experiments that
support the Third Law, and in the absence of any evidence to the contrary, one
is justified in maintaining that the Law holds in all cases.
Although induction commonly works (e.g., almost no technology would be
possible if induction were not regularly correct), explaining why this is so has
been somewhat problematic. One cannot use deduction, the usual process of
moving logically from premise to conclusion, because there is no syllogism
that allows this. Indeed, induction is sometimes mistaken; 17th-century
biologists observed many white swans and none of the other colors, but not all
swans are white. Similarly, it is at least conceivable that an observation will be
made tomorrow that shows an occasion in which an action is not accompanied
by a reaction; the same is true of any scientific statement. One answer has been
to conceive of a different form of rational argument, one that does not rely on
deduction. Deduction allows one to formulate a specific truth from a general
truth: all crows are black; this is a crow; therefore, this is black. Induction
somehow allows one to formulate a general truth from series of specific
observations; this is a crow and it is black; that is a crow and it is black; no
crow has been seen that is not black; therefore, all crows are black.
The problem of induction is one of considerable debate and importance in the
philosophy of science: is induction indeed justified, and if so, how?
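The contrast between the two modes of reasoning can be made concrete with a toy sketch (a deliberately simplified illustration, not a formal logic system; the observations are invented):

```python
def deduce(is_crow):
    """Deduction: all crows are black; this is a crow; therefore it is black."""
    return "black" if is_crow else "unknown"

def induce(observed_swans):
    """Induction: every swan seen so far is white, so conjecture all swans are."""
    return all(color == "white" for color in observed_swans)

print(deduce(True))                         # guaranteed by the premises
print(induce(["white", "white", "white"]))  # holds in every observed case so far
print(induce(["white", "white", "black"]))  # a single black swan refutes it
```

The deductive conclusion cannot fail if the premises are true; the inductive conjecture, however well supported, is overturned by one counterexample, which is exactly the problem of induction described above.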
Duhem-Quine Thesis
According to the Duhem–Quine thesis, named after Pierre Duhem and W. V.
Quine, it is impossible to test a theory in isolation. One must always add auxiliary
hypotheses in order to make testable predictions. For example, to test
Newton’s Law of Gravitation in our solar system, one needs information about
the masses and positions of the sun and all the planets. Famously, the failure
to predict the orbit of Uranus in the 19th century led not to the rejection of
Newton’s Law but rather to the rejection of the hypothesis that there are only
seven planets in our solar system. The investigations that followed led to the
discovery of an eighth planet, Neptune. If a test fails, something is wrong. But
there is a problem in figuring out what that something is: a missing planet,
badly calibrated test equipment, an unsuspected curvature of space, etc.
One consequence of the Duhem-Quine thesis is that any theory can be made
compatible with any empirical observation by the addition of a sufficient
number of suitable ad hoc hypotheses. This is why science uses Occam’s
Razor; hypotheses without sufficient justification are eliminated. This thesis
was accepted by Karl Popper, leading him to reject naïve falsification in favor
of ‘survival of the fittest’, or most falsifiable, of scientific theories. In Popper’s
view, any hypothesis that does not make testable predictions is simply not
science. Such a hypothesis may be useful or valuable, but it cannot be said to
be science. Confirmation holism, developed by W.V. Quine, states that
empirical data are not sufficient to make a judgement between theories. In this
view, a theory can always be made to fit with the available empirical data.
However, the fact that empirical evidence alone does not decide between
alternative theories does not necessarily imply that all theories are of equal
value, as scientists often use guiding principles such as Occam's Razor.
One result of this view is that specialists in the philosophy of science stress the
requirement that observations made for the purposes of science be restricted to
intersubjective objects. That is, science is restricted to those areas where there
is a general agreement on the nature of the observations involved. It is
comparatively easy to agree on observations of physical phenomena, harder to
agree on observations of social or mental phenomena, and difficult in the
extreme to reach agreement on matters of theology or ethics (and thus the latter
remain outside the normal purview of science).
Theory-Dependence of Observations
When making observations, scientists look through telescopes, study images
on electronic screens, record meter readings, and so on. Generally, on a basic
level, they can agree on what they see, e.g., the thermometer shows 37.9 °C.
But, if these scientists have different ideas about the theories that have been
developed to explain these basic observations, they can interpret them in
different ways. Ancient observers interpreted the rising of the Sun in the
morning as evidence that the Sun moved; later scientists deduced instead that
the Earth is rotating. Likewise, where some scientists conclude that certain
observations confirm a specific hypothesis, skeptical colleagues may suspect
that something is wrong with the test equipment. Observations interpreted
through a scientist's theories are said to be theory-laden.
Whitehead wrote, “All science must start with some assumptions as to the
ultimate analysis of the facts with which it deals. These assumptions are
justified partly by their adherence to the types of occurrence of which we are
directly conscious, and partly by their success in representing the observed
facts with a certain generality, devoid of ad hoc suppositions.”
Observation involves both perception as well as cognition. That is, one does
not make an observation passively, but is also actively engaged in
distinguishing the phenomenon being observed from surrounding sensory data.
Therefore, observations are affected by our underlying understanding of the
way in which the world functions and that understanding may influence what
is perceived, noticed or deemed worthy of consideration. More importantly,
most scientific observation must be done within a theoretical context in order
for it to be useful. For example, when one observes a measured increase in
temperature with a thermometer, that observation is based on assumptions
about the nature of temperature and its measurement, as well as assumptions
about how the thermometer functions. Such assumptions are necessary in order
to obtain scientifically useful observations (such as, “the temperature increased
by two degrees”).
Empirical observation is used to determine the acceptability of a hypothesis
within a theory, and the justification of a hypothesis often includes reference
to the theory – its operational definitions and hypotheses – in which the
observation is embedded. That is, the observation is framed in terms of the
theory that also contains the hypothesis it is meant to verify or falsify (though
of course the observation should not be based on an assumption of the truth or
falsity of the hypothesis being tested). This means that the observation cannot
serve as an entirely neutral arbiter between competing hypotheses; it can only
arbitrate between hypotheses within the context of the underlying theory that
explains the observation.
Thomas Kuhn denied that it is ever possible to isolate the hypothesis being
tested from the influence of the theory in which the observations are grounded.
He argued that the observations always rely on a specific paradigm, and that it
is not possible to evaluate competing paradigms independently. By “paradigm”
he meant, essentially, a logically consistent “portrait” of the world, one that
involves no logical contradictions and that is consistent with observations that
are made from the point of view of this paradigm. More than one such logically
consistent construct can paint a usable likeness of the world, but there is no
common ground from which to pit two against each other, theory against
theory. Neither is a standard by which the other can be judged. Instead, the
question is which “portrait” is judged by some set of people to promise the
most in terms of scientific “puzzle solving”.
For Kuhn, the choice of paradigm was sustained by, but not ultimately
determined by, logical processes. The individual’s choice between paradigms
involves setting two or more “portraits” against the world and deciding which
likeness is most promising. In the case of a general acceptance of one paradigm
or another, Kuhn believed that it represented the consensus of the community
of scientists. Acceptance or rejection of some paradigm is, he argued, a social
process as much as a logical process. Kuhn’s position, however, is not one of
relativism. According to Kuhn, a paradigm shift will occur when a significant
number of observational anomalies in the old paradigm have made the new
paradigm more useful. That is, the choice of a new paradigm is based on the
observations, even though those observations are made against the background
of the old paradigm. A new paradigm is chosen because it does a
better job of solving scientific problems than the old one. The fact that
observation is embedded in theory does not mean observations are irrelevant
to science. Scientific understanding derives from observation, but the
acceptance of scientific statements is dependent on the related theoretical
background or paradigm as well as on observation. Coherentism, skepticism,
and foundationalism are alternatives for dealing with the difficulty of
grounding scientific theories in something more than the observations. And, of
course, further, redesigned testing may resolve differences of opinion.
Coherentism
Induction must avoid the problem of the criterion, in which any justification
must in turn be justified, resulting in an infinite regress. The regress argument
has been used to justify one way out of the infinite regress, foundationalism.
Foundationalism claims that there are some basic statements that do not require
justification. Both induction and falsification are forms of foundationalism in
that they rely on basic statements that derive directly from immediate sensory
experience.
The way in which basic statements are derived from observation complicates
the problem. Observation is a cognitive act; that is, it relies on our existing
understanding, our set of beliefs. An observation of a transit of Venus requires a
huge range of auxiliary beliefs, such as those that describe the optics of
telescopes, the mechanics of the telescope mount, and an understanding of
celestial mechanics, all of which must be justified separately. At first sight, the
observation does not appear to be ‘basic’.
Coherentism offers an alternative by claiming that statements can be justified
by their being part of a coherent system. In the case of science, the system is
usually taken to be the complete set of beliefs of an individual scientist or,
more broadly, of the community of scientists. W. V. Quine argued for a
coherentist approach to science, as do E. O. Wilson and Kenneth Craik, though
neither uses the term “Coherentism” to describe their views. An observation of
a transit of Venus is justified by its being coherent with our beliefs about
celestial mechanics and earlier observations. Where this observation is at odds
with any auxiliary belief, an adjustment in the system will be required to
remove the contradiction.
Ockham’s razor
The practice of scientific inquiry typically involves a number of heuristic
principles, such as the principles of conceptual economy or theoretical
parsimony. These are customarily placed under the rubric of Ockham’s razor,
named after the 14th century Franciscan friar William of Ockham, who is
credited with many different expressions of the maxim, not all of which have
yet been found among his extant works.
“William of Ockham (c. 1295-1349)… is remembered as an influential
nominalist, but his popular fame as a great logician rests chiefly on the maxim
known as Ockham’s razor: Entia non sunt multiplicanda praeter necessitatem
[“Entities must not be multiplied beyond necessity”]. No doubt this represents
correctly the general tendency of his philosophy, but it has not so far been
found in any of his writings. His nearest pronouncement seems to be Numquam
ponenda est pluralitas sine necessitate [“Plurality must never be posited
without necessity”], which occurs in his theological work on the Sentences of
Peter Lombard (Super Quattuor Libros Sententiarum; ed. Lugd., 1495). In his
Summa Totius Logicae, Ockham cites the principle of economy: Frustra fit per
plura quod potest fieri per pauciora [“It is futile to do with more things that
which can be done with fewer”]. (Kneale and Kneale, 1962, p. 243)” That is,
explanations that posit fewer entities, or fewer kinds of entities, are to be
preferred to explanations that posit more.
As interpreted in contemporary scientific practice, “entities should not be
multiplied beyond necessity” advises opting for the simplest theory among a
set of competing theories that have a comparable explanatory power,
discarding assumptions that do not improve the explanation. Among the many
difficulties that arise in trying to apply Ockham’s razor is the problem of
formalizing and quantifying the “measure of simplicity” that is implied by the
task of deciding which of the several theories is the simplest. Although various
measures of simplicity have been brought forward as potential candidates, it is
generally recognized that there is no such thing as a theory-independent
measure of simplicity. In other words, there appear to be as many different
measures of simplicity as there are theories themselves, and the task of
choosing between measures of simplicity appears to be every bit as
problematic as the job of choosing between theories. Moreover, it is extremely
difficult to identify the hypotheses or theories that have “comparable
explanatory power”, though it may be readily possible to rule out some of the
extremes. Ockham’s razor also does not say that the simplest account is to be
preferred regardless of its capacity to explain outliers, exceptions, or other
phenomena in question. The principle of falsifiability requires that any
exception that can be reliably reproduced should invalidate the simplest theory,
and that the next-simplest account which can actually incorporate the exception
as part of the theory should then be preferred to the first. As Albert Einstein
put it, “The supreme goal of all theory is to make the irreducible basic
elements as simple and as few as possible without having to surrender the
adequate representation of a single datum of experience.”
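As a toy illustration of choosing among theories with comparable explanatory power (the data and the residual-sum criterion are assumptions made for this sketch, not a formalization of simplicity), one can compare a one-parameter constant model with a two-parameter line and keep the extra parameter only if it buys a clearly better fit:

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return my - b * mx, b

def sse(xs, ys, model):
    """Sum of squared residuals: what the model fails to explain."""
    return sum((y - model(x)) ** 2 for x, y in zip(xs, ys))

xs = [0, 1, 2, 3, 4]
ys = [1.00, 1.08, 0.95, 1.02, 1.06]   # invented, roughly constant data

mean = sum(ys) / len(ys)              # one-parameter model: y = mean
a, b = fit_line(xs, ys)               # two-parameter model: y = a + b*x

improvement = sse(xs, ys, lambda x: mean) - sse(xs, ys, lambda x: a + b * x)
# The line always fits at least as well as the constant, but when the
# improvement is negligible, parsimony favors the simpler constant model.
print(f"extra explanatory power bought by the second parameter: {improvement:.5f}")
```

The sketch also shows why the razor is hard to formalize: "negligible improvement" itself needs a threshold, and that choice is as contestable as the choice between theories.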
Objectivity of observations in science
It is vitally important for science that information about the surrounding
world and the objects of study be as accurate and as reliable as possible. For
this reason, measurements, which are the source of this information, must
be as objective as possible. Before the invention of measuring tools (like
weights, meter sticks, clocks, etc.) the only sources of information available to
humans were their senses (vision, hearing, taste, touch, sense of heat, sense of
gravity, etc.). Because human senses differ from person to person (due to wide
variations in personal chemistry, deficiencies, inherited flaws, etc.) there were
no objective measurements before the invention of these tools, and
consequently no rigorous measurement-based science.
With the advent of the exchange of goods, trade, and agriculture there arose a
need for such measurements, and science (arithmetic, geometry, mechanics,
etc.) based on standardized units of measurement (stadia, pounds, seconds,
etc.) was born. To further abstract from unreliable human senses and make
measurements more objective, science uses measuring devices (like
spectrometers, voltmeters, interferometers, thermocouples, counters, etc.) and
lately – computers. In most cases, the less human involvement in the measuring
process, the more accurate and reliable scientific data are. Currently most
measurements are done by a variety of mechanical and electronic sensors
directly linked to computers – which further reduces the chances of human
error/contamination of information.
The current relative accuracy of measurement of mass is about 10⁻¹⁰, of
angles about 10⁻⁹, and of time and length intervals in many cases reaches the
order of 10⁻¹³–10⁻¹⁵. This has made it possible to measure, say, the distance
to the Moon with sub-centimeter accuracy (see the Lunar Laser Ranging
experiment), to measure the slight movement of tectonic plates using GPS
with sub-millimeter accuracy, and even to measure variations as small as
10⁻¹⁸ m in the distance between two mirrors separated by several kilometers –
three orders of magnitude less than the size of a single atomic nucleus.
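The orders of magnitude quoted above can be checked with a quick back-of-envelope calculation (the mean Earth–Moon distance of roughly 3.84 × 10⁸ m and the 10⁻¹⁵ m nuclear size are assumed figures for the sketch):

```python
import math

moon_distance_m = 3.84e8        # assumed mean Earth-Moon distance
ranging_error_m = 0.01          # "sub-centimeter accuracy"
relative_precision = ranging_error_m / moon_distance_m
print(f"lunar ranging relative precision ~ 10^{math.floor(math.log10(relative_precision))}")

mirror_variation_m = 1e-18      # smallest measurable mirror displacement
nucleus_size_m = 1e-15          # typical size of an atomic nucleus
orders = math.log10(nucleus_size_m / mirror_variation_m)
print(f"{orders:.0f} orders of magnitude below nuclear size")
```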
Another question about the objectivity of observations relates to the so-called
“experimenter’s regress”, as well as to other problems identified in the
sociology of scientific knowledge: as with all forms of human reasoning, the
people who interpret observations or experiments always have cognitive
and social biases that lead them, often unconsciously, to introduce their own
interpretations into their description of what they are ‘seeing’. Some of these
arguments can be shown to be of limited scope when analyzed from a
game-theoretic point of view.
Conclusion
Inductivism supported the view that science grows directly out of perceptual
observations unbiased by theory. The logical positivist or logical empiricist
philosophy of science has often been used to reinforce the notion of science as
neutral and technology as applied science. Popper’s falsificationism or critical
approach, belatedly appreciated, allowed for the role of theory as prior to
observation and the role of philosophical theories as background frameworks
for scientific theories. Kuhn and post-positivist, historicist philosophy of
science opened the door to considering the role of philosophical, religious, and
political influences on the creation and acceptance of scientific theories.
Feminist, ecological, and multiculturalist critics use Kuhn’s notion of a
paradigm to expose what they claim to be pervasive bias in the methods and
results of mainstream Western science and technology. Sociologists of
scientific knowledge emphasize that the logic of evidence and refutation of
theories does not determine the course of theory change. Instead, the prestige
of established scientists, recruitment of allies, and negotiation between
competing teams leads to a closure of scientific disputes that is later attributed
to the facts of nature. Instrumental realists emphasize that, in contemporary
science, observation itself is mediated through the technology of scientific
instrumentation. Rather than technology being applied science, technology is
prior to scientific observation.
Study Questions
1. Do you think that inductivism is adequate as a theory of scientific
method? If it is not, why do so many working scientists hold to it?
2. Are scientific theories decisively refuted by counter-evidence or do they
“roll with the punches,” so to speak, being readjusted to fit what was
counter-evidence for the old version? Give an example not in the chapter
if possible.
3. Are scientific theories direct outgrowths of observation or are they
influenced as well by assumptions and worldviews of their creators?
4. Does the instrumental realist approach do away with the problems raised
by earlier accounts of science, such as the inductive and Popperian
falsificationist approaches?
