LEARNING, MEDIA AND TECHNOLOGY
2020, VOL. 45, NO. 3, 223–235
https://doi.org/10.1080/17439884.2020.1798995

EDITORIAL

Historical threads, missing links, and future directions in AI in education
Ben Williamson (University of Edinburgh) and Rebecca Eynon (University of Oxford)
ARTICLE HISTORY Received 14 July 2020; Accepted 14 July 2020

Artificial intelligence has become a routine presence in everyday life. Accessing information over the
Web, consuming news and entertainment, the performance of financial markets, the ways surveil-
lance systems identify individuals, how drivers and pedestrians navigate, and how citizens receive
welfare payments are among myriad examples of how AI has penetrated into human lives, social
institutions, cultural practices, and political and economic processes. The effects of the algorithmic
techniques employed to enable AI are far-reaching and have inspired considerable epochal hype and
hope, as well as dystopian dread, although they remain largely opaque and weakly understood out-
side of the social networks of technical experts (Rieder 2020). The profound social and ethical impli-
cations of AI, however, are becoming increasingly apparent and the objects of significant critical
attention. AI is at the centre of controversies concerning, for example, automation in workplaces
and public services; algorithmic forms of bias and discrimination; automated reproduction of
inequalities and disadvantage; regimes of data-centred surveillance and algorithmic profiling; disre-
gard of data protections and privacy; political and commercial micro targeting; and the power of
technology corporations to control and shape all sectors and spaces they penetrate, from whole cities
and citizen populations to specific collectives, individuals or even human bodies (Whittaker et al.
2018). Numerous ethical frameworks and professional codes of conduct have been developed to
attempt to mitigate the potential dangers and risks of AI in society, though important debates persist
about their concrete effects on companies or the way such frameworks and codes may serve to pro-
tect commercial interests (Greene, Hoffman, and Stark 2019).
The current instantiation of AI on the Web, on smartphones, in social media, and in spaces via
interconnected objects and sensor networks has a much longer history than some recent epochal
claims would suggest. Histories of AI stretch back at least as far as the birth of computer science
and cybernetics in the 1940s. The term ‘artificial intelligence’ itself was coined as part of a project
and workshop at Dartmouth College in the mid-1950s. From the 1960s to the 90s, punctuated by
periods of ‘AI winter’, AI research and development focused first on encoding principles of
human reasoning to simulate human intelligence, and then on ‘expert systems’ that emulated the
procedural decision-making processes of experts based on defined knowledge bases. After 2010,
AI gradually returned under a new paradigm, not as simulated human intelligences or programma-
ble expert systems but as data-processing systems that can learn and make predictions from classify-
ing and correlating huge quantities of ‘big data’. Computational processes including data analytics,
machine learning, neural networks, deep learning and reinforcement learning underpin most con-
temporary forms of AI. AI is, perhaps, just a new catch-all name for a range of statistical, mathemat-
ical, computational and data scientific practices and developments that each have their own complex
and intertwined genealogies, but it also signifies a particular unique nexus of these historical strands
(Schmidhuber 2019, 2020). Modern AI is not focused on creating computational ‘superintelligences’
(‘strong AI’) but ideally on developing machines that can learn from their own experience, adapt to

their contexts and uses, improve their own functioning, craft their own rules, construct new algor-
ithms, make predictions, and carry out automated tasks without requiring control or oversight by
human operatives (Alpaydin 2016; Mackenzie 2017).
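To make this paradigm shift concrete, the brief sketch below contrasts a hand-coded rule of the 'expert system' kind with a model that induces a similar rule from examples. It is purely illustrative: the scenario, data and threshold are invented, and it stands in for no actual AIed system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1950s-90s paradigm: a human expert encodes the rule explicitly.
def expert_system_rule(hours_studied):
    return 1 if hours_studied >= 10 else 0  # pass/fail threshold set by a person

# Post-2010 paradigm: a comparable rule is induced from (synthetic) examples.
rng = np.random.default_rng(0)
hours = rng.uniform(0, 20, size=200).reshape(-1, 1)                # feature: hours studied
passed = (hours.ravel() + rng.normal(0, 3, 200) > 10).astype(int)  # noisy outcome labels

model = LogisticRegression().fit(hours, passed)  # the threshold is learned, not written
print(expert_system_rule(16.0), model.predict([[4.0], [16.0]]))
```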
Interests in the application of AI to education (AIed) have a long history too – with a range of
social and ethical implications that this special issue is intended to identify and examine. Our aim
in this editorial introduction to the special issue ‘Critical perspectives on AI in education’ is to pro-
vide some historical perspective to the collection of papers and to current hyperbole about AIed and
associated ideas about adaptive systems, pedagogic agents, personalized learning, intelligent tutors,
and automated governance. As with the AI field more generally, we do not see contemporary AIed as
the result of a simple linear history; instead, we understand the contemporary AIed moment as the
result of a set of convergences that, among many genealogical threads, include (1) several decades of
AIed research and development in academic centres and labs, (2) the growth of the commercial edu-
cation technology (edtech) industry, (3) the influence of global technology corporations on edu-
cation, and (4) the emergence of data-driven policy and governance. In this editorial, then, we
want to historicize AI in education not by reproducing a linear timeline but by tracing out some
of the genealogical threads and convergences that have led to the contemporary fascination with
applying AI to education. As a method,
Genealogical analysis traces how contemporary practices and institutions emerged out of specific struggles,
conflicts, alliances, and exercises of power … . [I]ts intent is to problematize the present by revealing the
power relations upon which it depends and the contingent processes that have brought it into being. (Garland
2014, 372)

Genealogies seek to trace out the historical processes, conditions, conflicts, bifurcations, conjunctures and
disjunctures out of which contemporary practices emerged. Applied to a field such as AIed, these
contingent genealogical threads and twists include disciplinary conflicts and encounters, technologi-
cal developments, funding schemes, methodological advances and sectoral encounters between aca-
demic research and commercial imperatives. We cannot in this brief editorial produce a full
genealogy of AIed – though we think that is a necessary future task – but instead highlight some
particular historical trajectories that help situate the present collection. Following that, we also ident-
ify some key missing elements in existing research on AIed, and some future directions for further
work.

AIed research
‘AI in education’ has existed as a coherent academic research field since at least the 1980s, as marked
by the first publication of the International Journal of Artificial Intelligence in Education in 1989 and
the formation of the International AI in Education Society (IAIED) in 1993, although it was preceded
by the development of Intelligent Tutoring Systems and Computer Assisted Instruction systems in the 1960s
and 70s (Alkhatlan and Kalita 2018; Selwyn 2019). As a field AIed has developed along two comp-
lementary strands of activity: the development of AI-based tools for classrooms, and the use of AI to
understand, measure and improve learning (Holmes, Bialik, and Fadel 2019). Particularly as a field of
research on learning, AIed is closely related to the learning sciences and cognitive science – the ‘cog-
nition, technology and education nexus’ as Pea (2016, 51) has termed this disciplinary confluence –
as well as to developments in learning analytics and educational data mining that have unfolded over
several decades (Buckingham Shum and Luckin 2019).
In the second issue of the International Journal of Artificial Intelligence in Education, published in
1989, Schank and Edelson (1989, 3) claimed that AI is ‘intimately bound up with education’:
AI concerns itself with getting machines to read, to reason, to express themselves, to make generalizations, and
to learn. These issues are the stuff of education after all. … AI people are in a unique position to improve edu-
cation … . [W]e can couple our expertise in computer technology with our theories of learning and understand-
ing to build computer-based instructional systems that will have a positive impact on education. (Schank and
Edelson 1989, 3–4)

For Schank and Edelson, AI – and the ‘AI people’ who create it – would not only have a positive
impact on education as it was formally organized in terms of curriculum and instructional design
but would demand far-reaching changes to the ways that teaching and learning were understood
and practised. In fact, the authors were writing from the newly established Institute for Learning
Sciences at Northwestern University, a centre set up outside of the discipline of educational research
with funding from a Chicago-based corporate consulting firm and the Advanced Research Projects
Agency (ARPA) of the US Department of Defense.
A few years later, in a review of applications of AI in education completed for RAND, McArthur,
Lewis, and Bishay (1995, 42) argued that AI applications such as Intelligent Tutoring Systems (ITS)
‘can significantly improve the speed and quality of students’ learning’. Like Schank and Edelson
before them, however, they also saw such successes merely as indicators of potentially profound
transformations:
The technologies that make it possible to automate traditional methods of teaching and learning are also help-
ing to create new methods … and to redefine valued educational goals and learning outcomes. (McArthur,
Lewis, and Bishay 1995, 42)

Realizing this transformative ambition, they suggested, would require bringing ‘high-tech companies
into better cooperation with educational technology research and classroom practice’ (43). The
future prospects for AI in education, they further argued, would be to ‘challenge and even threaten’
existing teaching and learning practices, replacing them with methods such as computer-based ‘indi-
vidualized tutoring’ and ‘inquiry- or project-based learning’ in ways that might ‘transform schools
and classrooms, not improve them in any simple sense’, and ‘offer new goals and practices for teach-
ing and learning’ (72, original italic).
Twenty-five years later, many of the animating aims and transformative commitments of the
‘AI people’ working in the field of education remain distinctly similar, even if the underlying
computational logics of AI have shifted from programmable expert systems to big data analytics,
machine learning, neural networks and deep learning (Knox, Williamson, and Bayne 2020). Intelligent
tutoring systems have continued to evolve in AIed research labs and commercial settings for 40 years
(du Boulay 2019). From around 2005, new research fields of educational data mining and learning
analytics began to emerge, focused on the analysis of ‘big data’ in education, and on the development
of new professional positions for ‘education data scientists’ and other analytics experts (Fischer
et al. 2020). Although learning analytics and mining big educational data – whether at the institutional
or individual level – are not entirely synonymous with AI, they are increasingly genealogically
intertwined and related, as signified by the publication of a double special issue of the British Journal
of Educational Technology dedicated to learning analytics and AIed as a single research topic in
2019. The field is also mutating and evolving, with new positions opening up for so-called ‘learning
engineers’ who possess hybrid forms of expertise crisscrossing the computing, data and learning
sciences (Williamson 2020).
As Luckin et al. (2016, 12) envisaged in an influential report on the prospects for AIed, a ‘market-
place’ of thousands of AI components will eventually combine to ‘enable system-level data collation
and analysis that help us learn much more about learning itself and how to improve it’. With this
marketplace of AI components in place, Luckin and coauthors envisioned both new AIed tools
for classrooms and enhanced analytical capacity for the measurement of learning and teaching at
multiple levels:
Once we put the tools of AIEd in place … we will have new and powerful ways to measure system level achieve-
ment. … AIEd will be able to provide analysis about teaching and learning at every level, whether that is a par-
ticular subject, class, college, district, or country. This will mean that evidence about country performance will
be available from AIEd analysis, calling into question the need for international testing. (Luckin et al. 2016, 48)

As the president of the International AI in Education Society, Luckin has rapidly translated AIed
R&D and advocacy into mainstream, media-friendly predictions which posit the transformative effects of AI not only on education but also on human intelligence and learning, with a focus on
augmentation:
The real power that AI brings to education is connecting our learning intelligently to make us smarter in the
way we understand ourselves, the world and how we teach and learn. For the first time we will be able to extend,
develop and measure the complexity of human intelligence – an intellect that is more sophisticated than any AI.
This will revolutionise the way we think about human intelligence. (Luckin 2020)

From this brief overview, we see the contested nature of AIed, with its complex history of methodologies and disciplines, and the varied claims of how AI will enable learning scientists to develop
‘transformative’ or ‘revolutionary’ understandings of human cognition, learning and intelligence.
In this issue, Perrotta and Selwyn (2020) highlight how learning sciences research performed with
machine learning, an applied AI method, superimposes algorithmic complexity on reductionist and
contested understandings of human learning. They note that applied AI techniques of ‘automated
knowledge discovery’ such as pattern recognition and correlational analysis are based on a mechan-
ical, inductivist epistemology that assumes all patterns are interpretable in the same standardized
ways across all cultures and contexts. They highlight instead how AI-generated patterns reflect
the specific situations from which they are gathered and are imprinted with professional, disciplinary
and economic contingencies, as they exemplify in their case study of a controversy over the use of
predictive machine learning in personalized learning and adaptive learning development. The article
is a salient reminder that, despite its veneer of objectivity, AIed is infused with politics, embodies
particular sets of values and entails new distributions of power in educational research to data science
experts with particular ontological and epistemological commitments. AIed is also specifically
located in a longer history of knowledge production, theory generation, and the development of epis-
temic expertise in the learning sciences and educational data science that is related to, but not redu-
cible to, commercial and economic interests.
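To illustrate what this inductivist mode of 'automated knowledge discovery' looks like in practice, the minimal sketch below mines correlations from synthetic clickstream data. All variables are invented; the point is that the 'discovered' pattern is entirely contingent on what was, and was not, logged.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
logs = pd.DataFrame({
    "video_views": rng.poisson(8, n),
    "forum_posts": rng.poisson(3, n),
})
# Synthetic 'attainment', driven partly by logged behaviour and partly by
# unlogged context (home support, language, access) invisible to the system.
context = rng.normal(0, 1, n)
logs["score"] = 2 * logs["video_views"] + 5 * context + rng.normal(0, 4, n)

# The 'discovered' pattern: video viewing correlates with attainment ...
print(logs.corr()["score"])
# ... but the correlation carries the imprint of what was (and was not) logged;
# the same pipeline run in another context would induce different 'knowledge'.
```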

AIed as commercial edtech


AI in education is not just a pursuit of educational data scientists and learning science specialists. It is
also a major commercial concern of educational technology (edtech) companies, which have sought
to bring multiple forms of AI-based products to market, and of powerful philanthropic and invest-
ment actors that support AIed startups as part of the development of adaptive personalized learning
software enacted by machine learning (Selwyn 2019). Rising interests in ‘big data’ in education, edu-
cational data mining and learning analytics from the mid-2000s were accompanied by powerful
framing discourses of adaptive systems and personalized learning (Williamson 2017). These same
discourses of personalized learning are now used to justify, promote and market AI-based solutions
in education (Bulger 2016; Boninger, Molnar, and Saldaña 2020).
The global education business Pearson, in particular, has supported AI in education enthusiasti-
cally for more than a decade, first through support for big data and learning analytics, and later expli-
citly by advocating and producing AI-based intelligent tutors and other ‘adaptive and personalized
learning’ applications. Pearson’s proposed vision of AIEd includes the development of Intelligent
Tutoring Systems (ITS) which ‘use AI techniques to simulate one-to-one human tutoring, delivering
learning activities best matched to a learner’s cognitive needs and providing targeted and timely feed-
back, all without an individual teacher having to be present.’ It also promises intelligent support for
collaborative working – such as AI agents that can integrate into teamwork – and intelligent virtual
reality environments that simulate authentic contexts for learning tasks. Its vision is of teachers sup-
ported by their own AIEd teaching assistants and AIEd-led professional development. As its Web-
page on ‘The future of education’ states:
By combining AI with the learning sciences – psychology, neuroscience, linguistics, sociology and anthropology
– we gain an understanding of what and how people learn. With AI, how people learn will start to become very
different. … AI can adapt to a person’s learning patterns. This intelligent and personalized experience can
actually help people become better at learning, the most important skill for the new economy. (https://www.pearson.com/news-and-research/the-future-of-education/artificial-intelligence.html)

As the realization of its approach to AIed, Pearson created and launched AIDA, a smartphone-based
AIed app, an adaptive tutor that offers personalized responses to students (https://www.pearson.com/en-us/learner/products-and-services/learning-apps-development/aida.html). Pear-
son’s launch of AIDA is significant for several reasons. First, it signifies a certain mainstreaming
of AIed, translating the long history of research and development in intelligent tutoring systems
and pedagogic agents into a marketable product. Second, AIDA is targeted primarily at students
as consumers who will pay a subscription for automated support for their studies, part of an emer-
ging trend in ‘consumer edtech’. Third and relatedly, it represents the emergence of a new ‘shadow
education market’ of private supplementary tutoring in which the private tutor is an automated
smartphone app or a personal robot assistant. Whether AIDA will prove to be a long-term success
for Pearson remains to be seen, but we can see it as a material instantiation of commercial aspirations
to embed automation in education as a potentially profitable market niche.
The example of Pearson exemplifies how commercial edu-businesses and edtech companies have
been pursuing AIed products for many years, and seeking to grow market interest in their products.
However, the expansion of AIed has not taken linear pathways. In this issue, Knox (2020) highlights
the significant growth of commercial AI in education products in the specific context of China. Cat-
alysed by massive state investment in AI, new Chinese AIed companies are attracting enormous ven-
ture capital investments too. In spring 2020, as the spread of Covid-19 led to school closures, the
Beijing-based Yuanfudao edtech company received the largest venture capital investment ever
recorded for a startup in the edtech sector for its AI-based homework and tutoring platform, with
its US$1bn investment taking its total value to an estimated $7.8bn (Dai 2020). China’s adoption
of AI in education is the result of large-scale venture capital investment and increasing parental will-
ingness among more wealthy families to pay for private tutoring and supplemental education ser-
vices and products but is also driven by public–private partnership arrangements and the strong
support of the state for private sector technologies (Knox 2020). It demonstrates the inseparability
of the state, the private sector, and consumer markets in AIed development, roll-out and uptake.
However, the huge expansion of AIed in China has also raised significant controversy and
concern:
If schools are capable of tracking every keystroke, knowledge point and facial twitch, they are effectively fur-
nishing either a technology company or the Chinese state with an eternal ledger of every step of a child’s devel-
opment. This is potentially problematic because, whereas the human teacher assumes change, AI assumes
continuation. … [A]n intelligent tutoring system could not only store that information and tailor a personalised
pathway for the student in the first grade, it may extrapolate that information many years later, when the stu-
dent is in high school. (Liu 2020, n.p.)

Although the adoption of AIed in China has generated significant interest and concern, its
approach to investment in AI in education is also mirrored by the ambitions of the edtech sector
and by venture philanthropic supporters in other countries, especially in the US and India (Cha-
muah and Ghildiyal 2020). In the US context, major supporters and funders of AI-based educational
technologies include the Bill and Melinda Gates Foundation, Schmidt Futures, and the Chan Zuck-
erberg Initiative, the funding and investment vehicles, respectively, of Microsoft founder Bill Gates,
ex-Google chair Eric Schmidt, and Facebook founder Mark Zuckerberg. These organizations make
significant claims for the effectiveness of AI and personalized learning in raising student achieve-
ment, enabling students to develop ‘mastery’ over knowledge domains, and in reducing inequalities
by distributing support to underserved students. They inject millions of dollars into selected organ-
izations as a way of driving up adoption of personalized learning platforms, although their claims
that AI improves outcomes or reduces inequalities remain highly contested (Boninger, Molnar,
and Saldaña 2020).

Dixon-Román, Nichols and Nyame-Mensah (2020), in this issue, draw critical attention to the
‘racializing forces’ of commercial AI in education, through a detailed case study of one commercial
AIed application. They argue that AI applications in education may ‘inherit sociopolitical relations’
and reproduce racializing processes and educational inequities through the ‘cloak of algorithmic
objectivity’. Their argument reflects the ways that AI and data systems are implicated in race-
based profiling and ‘discriminatory designs’ that reinforce and normalize racial hierarchies and
may enforce ‘racial correction’ (Benjamin 2016, 148). Critical edtech research, for example, has high-
lighted the role of ‘digital redlining’ in excluding certain groups from access to knowledge and infor-
mation based on gender, class and race:
Digital redlining arises out of policies that regulate and track students’ engagement with information technol-
ogy. … Digital redlining is not a renaming of the digital divide. It is a different thing, a set of education policies,
investment decisions, and IT practices that actively create and maintain class boundaries through strictures that
discriminate against specific groups. … Digital redlining takes place when policymakers and designers make
conscious choices about what kinds of tech and education are ‘good enough’ for certain populations. (Gilliard
and Culik 2016, n.p.)

Such research draws urgent attention to the ways that the architectures of technologies are involved
in new forms of exclusion and discrimination. Data-intensive technologies such as AI in education
may be additionally discriminatory based on ‘machine bias’ being encoded in the datasets used to
train algorithms. These brief examples highlight how AIed needs to be understood in the historical
context of the growth of commercial edtech and the support it has gathered from actors as diverse as
state ministries, high-tech venture philanthropies and investment vehicles. AIed also needs to be
understood in the history of discriminatory practices, such as digital redlining or the reproduction
of racial hierarchies, that may be reinforced rather than ameliorated by the design of commercial
edtech.
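A deliberately simple synthetic sketch can show the mechanism of such 'machine bias': a classifier trained on historical labels that under-recorded one group's success reproduces that disadvantage in its own predictions. The groups, variables and rates below are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, n)   # 0 = majority, 1 = minoritized group (invented)
ability = rng.normal(0, 1, n)   # 'true' aptitude, identical across groups
# Historical labels: the minoritized group's success was under-recorded 40% of the time.
label = ((ability > 0) & ~((group == 1) & (rng.random(n) < 0.4))).astype(int)

X = np.column_stack([ability + rng.normal(0, 0.5, n), group])  # model sees group membership
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    mask = (group == g) & (ability > 0)  # students who 'should' be predicted to succeed
    print(f"group {g}: predicted-success rate {pred[mask].mean():.2f}")
# The gap between the two rates is the historical bias, now encoded and automated.
```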

AI infrastructures
The third historical thread in the development of AIed concerns the ways that transnational tech-
nology corporations have sought to embed AI technologies in education. AI has become a major
industry concern, with many of the major global tech businesses offering ‘AI-as-a-service’, such as
Google’s TensorFlow, Microsoft Azure, IBM Watson, and Amazon’s machine learning on AWS.
All of these developments have their own long histories of product development which are, in
turn, embedded in organizational cultures, hiring practices, and the cultivation of in-house expertise.
They reflect the belief, common in the technology and data analytics industries, that big data and AI
can solve problems as diverse as business efficiency, scientific discovery, urban management, and
educational improvement. As Beer (2019, 30) notes, ‘out-of-the-box Artificial Intelligence tools’
are presented as ‘ready-made and thinking technologies that carry the burden of technique,
know-how and method,’ and ‘locate value in a way that the human alone is unable to do’. Data ana-
lytics and AI are understood by this industry as providing the only sensible way of making decisions
or solving problems. Part of the history of ‘AI-as-a-service’ is related to the emergence of ‘platform
capitalism’ as a dominant mode of value creation and capital accumulation in the digital economy
(Srnicek 2017). Out-of-the-box AI positions machine intelligence as a platform that can be deployed
across a range of sectors, enabling other organizations to deploy ready-made thinking technologies
to aid decision-making and improve their service to customers or users. But such arrangements also
mean that such organizations are dependent upon global commercial AI infrastructures and on the
particular forms of automated intelligence they enable.
Within education, AI-as-a-service applications can be detected in the many ways that global tech-
nology firms have begun to provide back-end infrastructure services for other educational insti-
tutions or edtech companies. Pearson, for example, previously partnered with IBM to embed
Watson APIs in courseware products as part of its early ambitions to create AIed applications. Goo-
gle’s G Suite is familiar as a front-end set of apps for use in classrooms, but it also depends on
Google’s data-extractive infrastructure and APIs for integrating third-party products. Microsoft has
also recently begun to promote its Power Platform for education, an integrated platform that enables organizations to combine various applications and data sources into a single data model for analysis, and that provides common models and templates for creating their own services and applications, all supported by Microsoft's Azure cloud computing service.
It includes Power Apps enabling educators to build their own low-code apps, and Power Virtual
Agents to ‘enable institutions to easily create and maintain intelligent chatbots’, such as ‘an intelli-
gent Question Bot that gets smarter and is capable of supplying answers on its own to students,
which allows for greater student independence and supports personalized learning’ (Microsoft Edu-
cation Team 2020). Power Platform is in this sense an infrastructural technology to undergird and
support educators’ and institutions’ implementation of AI. Likewise, Amazon has begun promoting
a variety of educational services and technologies related to its cloud infrastructure Amazon Web
Services (AWS). For example, it claims, ‘Using the AWS Cloud, schools and districts can get a com-
prehensive picture of student performance by connecting products and services so they seamlessly
share data across platforms’ (AWS 2020a). It also strongly promotes its ‘Machine Learning for Edu-
cation’ services to ‘identify at-risk students and target interventions’, ‘improve teacher efficiency and
impact with personalised content and AI-enabled teaching assistants and tutors’, and ‘improve
efficiency of assessments and grading’ (AWS 2020b). These examples indicate how huge global tech-
nology companies are in competition for structural dominance over the digital infrastructures of
education, providing cloud systems that are capable of integrating various platforms and interoper-
ability programs for enabling seamless and frictionless data flow, aggregation and analysis.
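The 'single data model' at the heart of these infrastructural offers can be sketched schematically: records held in separate platforms are joined on a shared student identifier so that they can be analysed together. The platform names and fields below are hypothetical.

```python
import pandas as pd

# Records from three hypothetical platforms, keyed by a shared student ID.
lms = pd.DataFrame({"student_id": [1, 2, 3], "logins": [14, 3, 9]})
assessment = pd.DataFrame({"student_id": [1, 2, 3], "grade": [72, 48, 65]})
sis = pd.DataFrame({"student_id": [1, 2, 3], "attendance": [0.95, 0.60, 0.88]})

# Interoperability in miniature: one merged 'data model' of each student,
# ready to feed dashboards, predictive models or accountability reporting.
unified = lms.merge(assessment, on="student_id").merge(sis, on="student_id")
print(unified)
```

It is this ease of aggregation, rather than any single application, that concentrates infrastructural power in the cloud provider hosting the merged model.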
In these ways, some of the world’s most powerful technology companies are seeking to insert
themselves into education as back-end infrastructure providers, as well as front-end suppliers or ven-
dors of specific educational services such as Google G Suite or Microsoft education products. McStay
(2020) in this issue, for example, highlights the role of the ‘affective computing’ company Affectiva as
a supplier of ‘emotional AI’ software that can be embedded in other educational technologies. Affec-
tiva originally emerged from the MIT Media Lab’s affective computing centre, and from collabora-
tive research on ‘affect-sensitive autotutor’ technologies conducted within the emerging field of
emotional learning analytics in the early 2000s (D’Mello, Picard, and Graesser 2007). Affectiva is
now one of the world’s leading emotional AI companies, having amassed a databank of billions of
images of human faces that can then be used as the basis for real-time emotion expression analysis.
But as McStay (2020) shows, rather than taking centre stage in edtech, Affectiva currently prefers a back-stage role, enabling it to scale out its software and its mass collection of facial
images through other products and brands. Affectiva inserts itself into education as an emotional AI
infrastructure.
These brief examples highlight the importance of approaching AI in education as at least partly
the genealogical result of historically situated technology sector efforts to extend AI infrastructures
into a wide variety of other industries and practices. Importantly, they demonstrate how education is
increasingly fused to transnational private technology companies and the business logics of platform
capitalism. They show how AI enters education through mundane back-end AI-as-a-service plug-
ins, rather than in the more spectacular guise of automated pedagogic agents or tutoring systems.
This genealogical development of AI in education is quite distinct from the efforts of either academic
AIed specialists or the dedicated edtech sector. Its significance as yet remains unclear, but it seems
likely that schools and universities will increasingly rely on these infrastructural arrangements to
handle the flows and analysis of data required for institutional performance management, or that
are demanded for processes of governance and accountability.

AI policy and governance


AI has also become an emerging concern in education policy and governance, as part of a long his-
tory of governance by numbers (Piattoeva and Boden 2020). In particular, from the 1990s onwards,
concerns with educational accountability and evidence-based policy drove the implementation of
data systems that could be used to record progress towards performance targets and improvement
goals (Lingard, Martino, and Rezai-Rashti 2013). The use of accountability data as a key source of
education policy and governance was enabled by large-scale information infrastructures for collect-
ing and processing the data (Anagnostopoulos, Rutledge, and Jacobsen 2013), with interest growing
over the subsequent decades in analytics packages and data dashboards for analysing, interpreting
and displaying up-to-date data:
shrinking fiscal resources and the expansion of a global competitive education market have fueled this increas-
ing pressure for educational accountability. The offshoot of these economic drivers has been the development in
the education sector of standardized scalable, real-time indicators of teaching and learning outcomes. (Lockyer,
Heathcote, and Dawson 2013, 1439)

The nascent fields of Educational Data Mining and Learning Analytics, from around 2005, opened
up new opportunities for systematic quantitative analysis in the field of education (Agasisti and
Bowers 2017). Edu-businesses such as Pearson began promoting the idea that these forms of data
mining and analytics could be used to transform policy processes, by focusing on the real-time per-
formance of institutions and even individuals rather than relying on temporally periodic assessment
events as insights for policymaking (Hill and Barber 2014). In this context, processes of educational
governance have become increasingly reliant upon data stored in student information systems and
learning management systems, and by the proliferation of analytics packages for generating predic-
tive insights into institutional and individual-level performance. Emerging AI-enabled data infra-
structures that can perform intelligent analytics of vast quantities of educational information
represent the next instantiation of governance by numbers.
In situated practice, new forms of digital, data-led governance have emerged in governmental
departments of education and associated agencies. For example, the UK Behavioural Insights
Team (‘Nudge Unit’) collaborated with the schools inspectorate Ofsted (Office for Standards in Edu-
cation) to develop a ‘school inspection algorithm’ that could automatically evaluate a school’s data
records and then highlight areas of concern for the embodied human inspector to observe and report
on. Moreover, policy-influencing organizations including the OECD have begun to firmly advocate
for the use of AI to measure and improve learning. An OECD report, for example, advocates ‘the use
of Big Data, Artificial Intelligence algorithms, education data mining and learning analytics … to
improve learning and education’ through transmitting scientific evidence of learning into ‘real-
world education practice and policy’ (Kuhl et al. 2019, 13–14). As these examples indicate, govern-
ance by numbers in education has mutated into digital, data-led governance through algorithms, jus-
tified increasingly explicitly through discourses of scientific, AI-based policymaking.
As Webb, Sellar, and Gulson (2020) argue in this issue, data-led forms of governance have now
begun mutating into ‘anticipatory’ forms of AI-enhanced governance. AI and learning analytics plat-
forms, they suggest, create new conceptions of temporality in education, where past actions captured
in huge comparative datasets can be used to make predictions at the level of the individual student in
‘real time’. Such platforms are designed to identify students ‘at-risk’ and intervene computationally
in ways that ideally ameliorate deleterious learning outcomes. This involves the modelling of prob-
able futures from continuously generated and analysed datastreams, providing synchronous diag-
noses and interventions which model multiple temporal trajectories in order to anticipate student
futures.
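The temporal logic of this anticipatory analytics can be sketched in a few lines: a model fitted to past cohorts scores a live stream of activity data and flags 'at-risk' students before any outcome has occurred. The features, model and risk threshold below are hypothetical, not drawn from any platform discussed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Past cohort: weekly logins and submissions, with an eventual outcome
# (1 = completed). The model learns from these historical traces.
past_X = rng.poisson([5, 2], size=(1000, 2)).astype(float)
past_y = (past_X.sum(axis=1) + rng.normal(0, 2, 1000) > 6).astype(int)
model = LogisticRegression().fit(past_X, past_y)

def flag_at_risk(weekly_events, threshold=0.5):
    """Score each incoming (logins, submissions) pair as it 'streams' in."""
    for logins, submissions in weekly_events:
        p = model.predict_proba([[logins, submissions]])[0, 1]
        if p < threshold:  # low predicted completion probability
            yield (logins, submissions, round(p, 2))

# This week's live activity for three students; low-activity students are
# likely to be flagged before any outcome exists: the anticipatory move.
print(list(flag_at_risk([(6, 3), (1, 0), (0, 1)])))
```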
Digital regulation is an essential consideration when exploring the complex array of factors that
are shaping these data-led forms of governance. In their paper for this issue, Berendt, Littlejohn and
Blakemore (2020) examine the benefits and risks of AI in education in relation to fundamental
human rights. They demonstrate that current laws, such as the European Union's General Data
Protection Regulation (GDPR), which came into force in 2018 and established legally enforceable
standards for privacy and data protection, are insufficient. The authors argue that human rights need
to be protected at a transnational level, as national approaches are unlikely to work, particularly given
the global reach of many AI systems. Human rights, and the rights of the child in particular, are also a
central theme in McStay's (2020) contribution, which questions the extent to which emotional
AI and EdTech are serving the public good. Such arguments chime with discussions outside of
education about the need for children's rights to be firmly integrated into the agendas of global
debates about ethics and data science (Livingstone, Carr, and Byrne 2015; Berman and Albright
2017; Lupton and Williamson 2017). Yet despite this interest and investment, the necessary legal
frameworks are not fit for purpose.

Missing links
Our intention with the genealogical notes in the previous sections was to illustrate how we under-
stand AIed as a complex set of relations and confluences among many factors and historical path-
ways. In these final parts of this introduction, we would like to turn our attention to the potential for
future work in this emerging area, focusing first on the missing links. We suggest three areas of focus:
ethnographies of AI in use, links with the philosophy of education, and more meaningful inter-
actions with other academic communities working in AIed.
As can be seen from the discussion above, this special issue covers an array of rich perspectives to
promote the critical study of AIed. The lenses of political economy (Knox 2020), science and technol-
ogy studies (Perrotta and Selwyn 2020), socio-legal studies (McStay 2020; Berendt, Littlejohn, Bla-
kemore 2020), education governance and time (Webb, Sellar and Gulson 2020), and new
materialist and Black feminist thought (Dixon-Román, Nichols and Nyame-Mensah 2020) all fea-
ture. Many of the papers also effectively move across micro and macro levels of analysis and across
disciplines to bring together their argument. Such a multi-disciplinary mix of critical perspectives is
central to ensuring the development of this area of study.
Like the papers in this special issue, much of the critical work on AIed to date is based on
rich analyses of AI products: mapping power structures, raising questions of ethics, and making
visible infrastructural systems, the role of the commercial sector and new forms of digital
governance. All of these are crucial, yet there is far less evidence, from a critical perspective, of
what happens when these systems are used on a daily basis in varied educational contexts.
We know very little, for example, about how learners and teachers really use AI systems,
and how AI is embedding (or not) into the everyday workings of schools, colleges and other
sites of education and learning. As many of the authors in this special issue have highlighted,
there are likely to be significant risks with such systems. However, we need to better understand how
these play out in practice. We now have a growing and theoretically rich account of the risks and pos-
sibilities of AIed but fewer accounts (beyond more instrumental studies of effectiveness) of
what really happens in the classroom.
More ethnographic case-study work of AI in different educational contexts would add an
additional perspective to enrich the important themes emerging from work in this area. Such an
approach fits well with what makes for the critical study of education and technology, where studies
of the ‘state of the actual’ and investigations into how the use of AIed plays out in varied contexts are
central (Selwyn 2010).
A common call across the papers in this special issue is that education is a special part of society
that requires specific understanding and attention. Given the uniqueness of education, it is clear that
while authors exploring AIed can certainly learn from and inform work in other spheres of social life
(e.g., policing, health, employment) it is also important to work with a consideration of the purposes
and values of education and make this central to the debate. One example of this, raised by many of
the contributions to this special issue, centres on questions of power: the power of systems to
reinforce or exacerbate bias and discrimination, the powers of the commercial sector, and the
absence of power for students and teachers. These power dynamics combine to lead to a context
where students (and indeed teachers) are not able to give or withhold meaningful consent to use
AI products as part of their education. This is not the same as in other areas of social life, where
individuals have some (albeit often limited) power over whether or not, and how, to use a certain
system (Berendt, Littlejohn, and Blakemore 2020; Zeide 2017).
Developing stronger connections with arguments around the philosophy of education could be a
useful way to articulate and promote the uniqueness of education as a site for critical study. Such an
approach would bring attention back to the purposes and values we have and aspire to for our edu-
cation systems, something that often gets lost in more common instrumental discussions of learning
effectiveness and efficiency that are so characteristic of debates about AIed (Biesta 2015, 2019).
As noted in the early parts of this introduction, there are varied communities and stakeholders
engaged in this domain, each with their own ways of conceptualising AIed. Within this complex
field it is important to make a distinction (yet also note the complex relationship between) academics
from the learning and cognitive sciences and those who work in the commercial sector. Perspectives
from learning and cognitive scientists are retold by the authors in this special issue – but no one
from this community represents their own perspective here. Although highly challenging, it would be inter-
esting to promote more dialogue across the different academic communities focused on AIed. Else-
where there have been some moves in this direction (Buckingham Shum and Luckin 2019; Selwyn
2019) and it would be valuable for these to continue.
More meaningful debate across these communities may open up interesting ways forward for
future work, and is important in developing an understanding of the motives, aims and objectives
of researchers working in related fields of education. It may also provide stronger arguments and
ways to address the substantial commercial presence in AIed in all its forms.

Future directions
The papers in this special issue have highlighted how the current and future imaginaries of AIed are
problematic in multiple ways. Changing this requires engagement and interaction across multiple
relevant social groups (Bijker 2010). No one trajectory is inevitable or fixed, and the emerging domi-
nant ways of thinking about AIed can be reconfigured to support the more democratic vision of the
future of education envisioned by many papers in this special issue (Eynon and Young 2020).
We would suggest that one important way forward could be the use of more participative
approaches, enabling the development of insight and changes in practice simultaneously. Long-standing
and well-established in critical education, human–computer interaction, and related fields, such
approaches are attracting growing attention from critical scholars of AI. For example, D'Ignazio
and Klein draw on feminist theory to argue for the importance of multiple actors, particularly
those who are most marginalised, to be involved in data work to facilitate positive change. They
argue that enabling everyone to be part of data projects, particularly those who are likely to be
most impacted by such systems, could encourage an array of responses from using data to make
inequalities visible, to developing a consensus around local projects, to supporting community story-
telling. As the authors note, data scientists alone are unlikely to have the same impact (D’Ignazio and
Klein 2019). Costanza-Chock calls for Design Justice, where marginalised communities lead the pro-
cess of designing new technologies to facilitate positive social change (Costanza-Chock 2020). In a
similar vein, scholars of learning analytics and educational data mining have called for an ‘ethics by
design’ approach in developing and implementing data-centric policies and practices in education
(Harel Ben Shahar 2017; Gray and Boling 2016).
Developing a broader academic community who could both use and critique the development of
AI is also an important aspect for future work. Many of the authors in this special issue have made
visible how AI systems work and, in doing so, the choices, assumptions and flaws that underpin these
systems. Using AI as a research tool could add to that understanding, and also provide new ways to
engage a range of stakeholders in debates about the future of AIed. AI could be used to assist in map-
ping the hidden networks of actors promoting AIed over time; it could aid in the analysis of the dis-
courses around AIed, and make visible structural inequalities in education systems. Such approaches
may add to understanding, offer new ways to demonstrate the issues, and support informed critique
of what such methods can, and cannot, offer the critical edtech community (Williamson, Potter, and
Eynon 2019).
Expanding methodologies and approaches to research in these ways may be an important strategy
to protect against the current pattern of events where, despite the long and complex history of AIed
we have traced above, the field remains prone to the challenges of hype cycles, short-termism and
forgotten histories. These ahistorical approaches are a continued problem in many studies of edtech,
and enable those with limited expertise but significant (often commercial) power to take centre stage
in shaping and investing in educational futures. This special issue showcases a collection of work that
takes an important step towards resisting such trends through the use and development of theory
and an awareness of history. An important additional trajectory of research is for academics to
develop ways to more directly intervene in shaping the future imaginaries of AIed.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributors
Dr Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures
Institute at the University of Edinburgh.
Dr Rebecca Eynon is an Associate Professor at the University of Oxford where she holds a joint appointment between
the Department of Education and the Oxford Internet Institute.

ORCID
Ben Williamson http://orcid.org/0000-0001-9356-3213
Rebecca Eynon http://orcid.org/0000-0002-2074-5486

References
Agasisti, T., and A. J. Bowers. 2017. “Data Analytics and Decision Making in Education: Towards the Educational Data
Scientist as a Key Actor in Schools and Higher Education Institutions.” In Handbook of Contemporary Education
Economics, edited by G. Johnes, J. Johnes, T. Agasisti, and L. López-Torres, 184–210. Cheltenham: Edward Elgar
Publishing.
Alkhatlan, A., and J. Kalita. 2018. "Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent
Developments.” arXiv: http://arxiv.org/abs/1812.09628.
Alpaydin, E. 2016. Machine Learning: The New AI. Cambridge, MA: MIT Press.
Anagnostopoulos, D., S. Rutledge, and R. Jacobsen. 2013. The Infrastructure of Accountability: Data Use and the
Transformation of American Education. Cambridge, MA: Harvard Education Press.
AWS (Amazon Web Services). 2020a. K12 and Primary Education. Amazon. https://aws.amazon.com/education/K12-primary-ed/.
AWS (Amazon Web Services). 2020b. Machine Learning in Education. Amazon. https://aws.amazon.com/education/ml-in-education/.
Beer, D. 2019. The Data Gaze: Capitalism, Power and Perception. London: Sage.
Benjamin, R. 2016. “Catching Our Breath: Critical Race STS and the Carceral Imagination.” Engaging Science,
Technology, and Society 2: 145–156.
Berendt, B., A. Littlejohn, and M. Blakemore. 2020. “AI in Education: Learner Choice and Fundamental Rights.”
Learning, Media and Technology. doi:10.1080/17439884.2020.1786399.
Berman, G., and K. Albright. 2017. “Children and the Data Cycle: Rights and Ethics in a Big Data World,” Innocenti
Working Papers No. 2017-05. Florence: UNICEF Office of Research – Innocenti.
Biesta, G. 2015. Good Education in an Age of Measurement: Ethics, Politics, Democracy. London: Routledge.
Biesta, G. 2019. “What Kind of Society Does the School Need? Redefining the Democratic Work of Education in
Impatient Times.” Studies in the Philosophy of Education 38 (6): 657–668.
Bijker, W. E. 2010. “How is Technology Made? – That is the Question!” Cambridge Journal of Economics 34 (1): 63–76.
Boninger, F., A. Molnar, and C. Saldaña. 2020. Big Claims, Little Evidence, Lots of Money: The Reality Behind the
Summit Learning Program and the Push to Adopt Digital Personalized Learning Platforms. Boulder, CO:
National Education Policy Center. https://nepc.colorado.edu/publication/summit-2020.
Buckingham Shum, S., and R. & Luckin. 2019. “Learning Analytics and AI: Politics, Pedagogy and Practices.” British
Journal of Educational Technology 50 (6): 2785–2793.
Bulger, M. 2016. “Personalized Learning: The Conversations We’re Not Having.” Data and Society 22: 1.
Chamuah, A., and H. Ghildiyal. 2020. “AI and Education in India.” Tandem Research, April 16. https://tandemresearch.org/publications/ai-and-education-in-india.
Costanza-Chock, S. 2020. Design Justice: Community-led Practices to Build the Worlds We Need. Cambridge: MIT
Press.
Dai, S. 2020. “Online Education Start-up Yuanfudao Raises US$1bn as Sector Heats Up on Coronavirus Impact.” South China Morning Post, April 1. https://www.scmp.com/tech/start-ups/article/3077915/online-education-startyuanfudao-raises-us1bn-sector-hots.
D’Ignazio, C., and L. Klein. 2019. Data Feminism. Cambridge: MIT Press.
Dixon-Román, E., P. T. Nichols, and A. Nyame-Mensah. 2020. “The Racializing Forces of/in AI Educational
Technologies.” Learning, Media and Technology. doi:10.1080/17439884.2020.1667825.
D’Mello, S., R. Picard, and A. Graesser. 2007. “Toward an Affect-Sensitive AutoTutor.” IEEE Intelligent Systems 22 (4):
53–61.
du Boulay, B. 2019. “Escape From the Skinner Box: The Case for Contemporary Intelligent Learning Environments.”
British Journal of Educational Technology 50 (6): 2902–2919.
Eynon, R., and E. Young. 2020. “Methodology, Legend, and Rhetoric: The Constructions of AI by Academia, Industry,
and Policy Groups for Lifelong Learning.” Science, Technology, & Human Values. doi:10.1177/0162243920906475.
Fischer, C., et al. 2020. “Mining Big Data in Education: Affordances and Challenges.” Review of Research in Education
44 (1): 130–160.
Garland, D. 2014. “What is a “History of the Present”? On Foucault’s Genealogies and Their Critical Preconditions.”
Punishment & Society 16 (4): 365–384.
Gilliard, C., and H. Culik. 2016. “Filtering Content is Often Done with Good Intent, But Filtering Can Also Create
Equity and Privacy Issues.” Common Sense Education, May 24. https://www.commonsense.org/education/articles/digital-redlining-access-and-privacy.
Gray, C. M., and E. Boling. 2016. “Inscribing Ethics and Values in Designs for Learning: A Problematic.” Educational Technology Research and Development 64: 969–1001.
Greene, D., A. L. Hoffman, and L. Stark. 2019. “Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement
for Ethical Artificial Intelligence and Machine Learning.” Proceedings of the 52nd Hawaii International Conference
on System Sciences. https://scholarspace.manoa.hawaii.edu/bitstream/10125/59651/0211.pdf.
Harel Ben Shahar, T. 2017. “Educational Justice and Big Data.” Theory & Research in Education 15 (3): 306–320.
Hill, P. W., and M. Barber. 2014. Preparing for a Renaissance in Assessment. London: Pearson.
Holmes, W., M. Bialik, and C. Fadel. 2019. Artificial Intelligence in Education: Promises and Implications for Teaching
and Learning. Boston: Centre for Curriculum Redesign.
Knox, J. 2020. “Artificial Intelligence and Education in China.” Learning, Media and Technology. doi:10.1080/17439884.2020.1754236.
Knox, J., B. Williamson, and S. Bayne. 2020. “Machine Behaviourism: Future Visions of ‘Learnification’ and ‘Datafication’ Across Humans and Digital Technologies.” Learning, Media and Technology 45 (1): 31–45.
Kuhl, P. K., S.-S. Lim, S. Guerriero, and D. van Damme. 2019. Developing Minds in the Digital Age: Towards a Science of Learning for 21st Century Education. Educational Research and Innovation. Paris: OECD Publishing.
Lingard, B., W. Martino, and G. Rezai-Rashti. 2013. “Testing Regimes, Accountabilities and Education Policy:
Commensurate Global and National Developments.” Journal of Education Policy 28 (5): 539–556.
Liu, Y. 2020. “The Future of the Classroom? China’s Experience of AI in Education.” Nesta, May 18. https://www.nesta.org.uk/report/the-future-of-the-classroom/#content.
Livingstone, S., J. Carr, and J. Byrne. 2015. One in Three: Internet Governance and Children’s Rights. Global
Commission on Internet Governance Paper Series. October, No. 22.
Lockyer, L., E. Heathcote, and S. Dawson. 2013. “Informing Pedagogical Action: Aligning Learning Analytics with
Learning Design.” American Behavioral Scientist 57 (10): 1439–1459.
Luckin, R. 2020. “AI in Education Will Help us Understand How We Think.” The Financial Times, March 10. https://www.ft.com/content/4f24adca-5186-11ea-8841-482eed0038b1.
Luckin, R., W. Holmes, M. Griffiths, and L. B. Forcier. 2016. Intelligence Unleashed: An Argument for AI in Education.
London: Pearson Education.
Lupton, D., and B. Williamson. 2017. “The Datafied Child: The Dataveillance of Children and Implications for Their
Rights.” New Media & Society 19 (5): 780–794.
Mackenzie, A. 2017. Machine Learners: Archaeology of a Data Practice. Cambridge, MA: MIT Press.
McArthur, D., M. Lewis, and M. Bishay. 1995. “The Roles of Artificial Intelligence in Education: Current Progress and
Future Prospects.” i-manager’s Journal of Educational Technology 1 (4): 42–80.
McStay, A. 2020. “Emotional AI and EdTech: Serving the Public Good?” Learning, Media and Technology. doi:10.1080/
17439884.2020.1686016.
Microsoft Education Team. 2020. “New IDC Report Shows Big Opportunities to Transform Higher Education
Through AI.” Microsoft Education Blog, March 3. https://educationblog.microsoft.com/en-us/2020/03/new-report-shows-big-opportunities-to-transform-higher-education-through-ai/.
Pea, R. D. 2016. “The Prehistory of the Learning Sciences.” In Reflections on the Learning Sciences, edited by M. A.
Evans, M. J. Packer, and R. K. Sawyer, 32–58. Cambridge: Cambridge University Press.
Perrotta, C., and N. Selwyn. 2020. “Deep Learning Goes to School: Toward a Relational Understanding of AI in
Education.” Learning, Media and Technology. doi:10.1080/17439884.2020.1686017.
Piattoeva, N., and R. Boden. 2020. “Escaping Numbers? The Ambiguities of the Governance of Education Through
Data.” International Studies in Sociology of Education 29 (1–2): 1–18.
Rieder, B. 2020. Engines of Order: A Mechanology of Algorithmic Techniques. Amsterdam: Amsterdam University
Press.
Schank, R. C., and D. J. Edelson. 1989. “A Role for AI in Education: Using Technology to Reshape Education.”
International Journal of Artificial Intelligence in Education 1 (2): 3–20.
Schmidhuber, J. 2019. “Deep Learning: Our Miraculous Year 1990–1991.” arXiv: https://arxiv.org/abs/2005.05744.
Schmidhuber, J. 2020. The 2010s: Our Decade of Deep Learning / Outlook on the 2020s. February 20. http://people.idsia.ch/~juergen/2010s-our-decade-of-deep-learning.html.
Selwyn, N. 2010. “Looking Beyond Learning: Notes Towards the Critical Study of Educational Technology.” Journal of
Computer Assisted Learning 26 (1): 65–73.
Selwyn, N. 2019. “What’s the Problem with Learning Analytics?” Journal of Learning Analytics 6 (3): 11–19.
Srnicek, N. 2017. Platform Capitalism. Cambridge: Polity.
Webb, P. T., S. Sellar, and K. N. Gulson. 2020. “Anticipating Education: Governing Habits, Memories and Policy-
Futures.” Learning, Media and Technology. doi:10.1080/17439884.2020.1686015.
Whittaker, M., K. Crawford, R. Dobbe, G. Fried, E. Kaziunas, V. Mathur, S. M. West, R. Richardson, J. Schultz, and O.
Schwartz. 2018. AI Now Report 2018. New York: AI Now Institute/New York University. https://ainowinstitute.org/AI_Now_2018_Report.pdf.
Williamson, B. 2017. Big Data in Education: The Digital Future of Learning, Policy and Practice. London: Sage.
Williamson, B. 2020. “New Digital Laboratories of Experimental Knowledge Production: Artificial Intelligence and
Education Research.” London Review of Education 18 (2): 209–221.
Williamson, B., J. Potter, and R. Eynon. 2019. “New Research Problems and Agendas in Learning, Media and
Technology: the Editors’ Wishlist.” Learning, Media and Technology 44 (2): 87–91.
Zeide, E. 2017. “The Structural Consequences of Big Data-Driven Education.” Big Data 5 (2): 164–172.
