
Code Acts in Education: The Social Life of Artificial Intelligence in Education


Artificial intelligence is becoming a major feature of educational practice and policymaking, but researchers are beginning to raise critical questions about its ethics and effects. Photo by insung yoon on Unsplash


Artificial Intelligence (AI) has become the subject of both hype and horror in education. During the 2020 Covid-19 pandemic, AI in education (AIed) attracted serious investor interest, market speculation, and enthusiastic technofuturist predictions. At the same time, algorithms and statistical models were implicated in several major controversies over predictive grading based on historical performance data, raising serious questions about privileging data-driven assessment over teacher judgment.

In the new special issue, AI in education: Critical perspectives and alternative futures, published in Learning, Media and Technology, Rebecca Eynon and I pulled together a collection of cutting-edge social scientific analyses of AIed. The purpose was to add alternative analytical perspectives to studies of the benefits of AIed, and to challenge commercial assertions that AIed will solve complex educational problems while accruing profitable advantage for companies and investors.

Like AI in general, AIed is social and political. It has its own long history and a complex present ‘social life’, and it is being developed in the pursuit of future visions of education. AIed has emerged in its current form from decades of prior research and development, from technological innovation, from funding practices, and from policy preoccupations with using educational data for various forms of performance measurement and prediction. Far from being merely a future vision, AIed is already actively intervening in education systems — in schools, universities, policy spaces and home learning settings — with effects that are only now coming into view. 

Yet the growth in critical studies of AI in other sectors (such as labour automation, healthcare and the law, the privatization of public infrastructure, surveillance, border control, welfare distribution, and visa application sorting, along with emerging legal challenges to AI data giants) has not been matched by joined-up critical analyses of AIed. Building upon the critical agenda for research on big data in education that Rebecca Eynon called for in a 2013 issue of Learning, Media and Technology, we hope the new special issue goes some way to addressing that absence. This seems all the more important amidst the surge of recent public outrage about predictive grading. While the current predictive grading controversies may not be directly related to AI, the widespread presentation of ‘the algorithm’ as determining students’ futures does raise questions about how AI-based forms of education built on machine learning and predictive analytics may be received or resisted in coming years.

Our editorial article provides historical perspective on the recent development of AIed, identifying a range of genealogical threads and connections that have given rise to current practices. The following papers cover such important critical issues as automated discrimination, educational performance prediction, the political economy and geopolitics of AIed, the penetration of emotional AI into edtech, anticipatory policy and governance, and the need for regulation of AI in education. The special issue, we believe, contributes important new critical insights into AIed: its past life, its present social life, and its possible future life as a key source of power and influence in learning, teaching and education. The issue opens up a number of outstanding features of the social life of AIed requiring further analysis. This post highlights a few possibilities for future studies of AIed.

AIed R&D

One of the key aspects of the social life of AIed is academic research and development conducted in specialized labs, centres and alliances. AIed is a serious research enterprise, with a past life stretching back through the establishment of the International AI in Education Society (IAIED) in 1993 to the development of intelligent tutoring systems in the 1960s. It also encompasses cognate fields of learning analytics, educational data science, and learning engineering developed over the last 10-15 years. These fields have built up large archives of publications, international associations and professional communities, funding portfolios, commercial partnerships, and media engagement. AIed does not just consist of automated pedagogic assistants and personalized learning platforms, but is full of these AI people too.

Recently, a new open access journal, Computers and Education: Artificial Intelligence, was launched as a ‘world-wide platform for researchers, developers, and educators to present their research studies, exchange new ideas, and demonstrate novel systems and pedagogical innovations on the research topics in relation to applications of artificial intelligence (AI) in education and AI education’. The journal will help establish AIed as a distinctive field of pedagogical innovation and elevate evidence on its benefits. ‘AI people’ have been working in education for decades, bringing particular forms of learning science, learning analytics and education data science expertise to bear on education and learning; the open access journal will enable them to extend their findings and arguments to new audiences.

The experts of AIed are also now gaining influence and access to established media channels to circulate their claims that AIed is a positive and transformative force. The AI people are, in other words, establishing a vision for the future of education, building innovative technologies to realize it, constructing an evidence base grounded in learning and data science methodologies, and assembling coalitions of support to pursue the imaginary of AIed-enhanced education. Further historical studies should engage with the long development of these forms of expertise and the field-building activities involved in diffusing and realizing their imaginaries of the future of education.

Edtech expansion

The social life of AIed is also characterized by significant efforts by the commercial edtech industry. The global education business Pearson, for example, envisages a future of education driven by AI innovations. ‘With AI, how people learn will start to become very different,’ the company states. ‘AI can adapt to a person’s learning patterns. This intelligent and personalized experience can actually help people become better at learning, the most important skill for the new economy’. Pearson launched AIDA, a smartphone-based adaptive AI learning assistant, to accomplish this vision. Pearson’s efforts to promote, create and profit from AI in education are part of a much wider interest in AI in the edtech sector, assisted by investor funds, philanthropic backing, and powerful framing discourses of personalized learning.

Another way AI and the edtech sector are expanding is through investor funds and market forecasts. HolonIQ, an influential education market intelligence consultancy, produces extensive insights for investors and companies on market trends in education and edtech. Its recent Global Learning Landscape identifies many promising applications to support market growth and investor decisions in the multibillion-dollar edtech market, while an accompanying set of scenarios for education in 2030 establishes particular edtech imaginaries for investors to pursue. HolonIQ also uses AI to analyze edtech market data. It has assembled global datasets and machine learning algorithms in order to ‘generate insights that help educators, entrepreneurs, enterprises and investors make data-driven strategic decisions’. In this way, HolonIQ is mobilizing AI itself to support edtech market growth and the expansion of AIed into further settings and practices of education.

These examples indicate the role of global edu-businesses, market organizations and investor strategies in the expansion of for-profit AIed. Investment in AIed in particular is a subject that has as yet received very little detailed attention, despite its catalytic role in funding technical development and supporting the objectives of for-profit edtech businesses. Venture capital, private equity and philanthropic investors are, to a significant extent, financing the AI future of education into existence.

Private infrastructures

Global technology companies have begun inserting AI infrastructure into educational institutions and practices too. This aspect of the social life of AI in education means companies including Amazon, Google, Microsoft and IBM are increasingly present in education through the back-end ‘AI-as-a-service’ systems that educational institutions require to collect and analyse data.

Amazon, for example, claims that by ‘Using the AWS Cloud, schools and districts can get a comprehensive picture of student performance by connecting products and services so they seamlessly share data across platforms’. It also strongly promotes its Machine Learning for Education services to ‘identify at-risk students and target interventions’, ‘improve teacher efficiency and impact with personalised content and AI-enabled teaching assistants and tutors’, and ‘improve efficiency of assessments and grading’.

These ‘AI-out-of-the-box’ interventions by global technology companies make public education institutions dependent upon private infrastructures for key functions of data analysis and reporting. They are also part of the history of how Amazon, Google, Microsoft and IBM have sought and competed for structural dominance over the infrastructure services used across myriad sectors and services. Further studies should examine the ways these global tech companies are expanding into education through the provision of infrastructure and platform services, exploring the long-term dependencies and lock-ins they engender.

AI policy

Policy is also mixed up in the social life of AIed, as part of a much longer history of the use of numbers in educational governance. Over the last few decades, large-scale data infrastructures for collecting, processing and disseminating educational data have become key to enacting policies concerned with performance measurement and accountability. AI technologies can extend the capacity of these data systems to become cognitive infrastructures capable of performing predictive analytics and automated decision-making. During the Covid-19 pandemic in 2020, the OECD strongly promoted AI as a solution to school closures and examination cancellations. AI-enabled learning and the preparation of AI workforces have also become new parts of different nations’ educational policies and long-term geopolitical strategies.

In India, for example, the new National Education Policy 2020 framework states that ‘New technologies involving artificial intelligence, machine learning, block chains, smart boards, handheld computing devices, adaptive computer testing for student development, and other forms of educational software and hardware will not just change what students learn in the classroom but how they learn’. It also highlights the need for AI education to enable India to become a digital superpower. Likewise, the European Parliament has begun considering a resolution on AI in education. It highlights how ‘AI is transforming learning, teaching, and education radically’, most notably through the potential of ‘personalised learning experience’ made possible by the collection, analysis and use of ‘large amounts of personal data’. Both the NEP and the European Parliament documents call for the rapid upskilling of teachers to take advantage of AI. 

The NEP 2020 and the proposed EU resolution on AI in education exemplify the emergence of AIed as an object of global education policy and geopolitical significance. Policy studies should engage with the interweaving of AI and education policy much more closely, teasing out the ways that various powerful organizations are involved in promoting AI in education or education for AI development and productivity enhancement. Such studies should also situate AI-focused policies in national and comparative contexts and in relation to geopolitical competition in the so-called ‘AI arms race’, and further concentrate empirical attention on the ways cognitive infrastructures affect policymaking itself.

Ethics centres

Another significant aspect of the social life of AIed concerns the definition and enforcement of AI and data ethics. In the wider context, numerous ethical frameworks and professional codes of conduct have been developed to attempt to mitigate the potential dangers and risks of AI in society, though important debates persist about the ways such frameworks and codes may serve to protect commercial interests or obscure the political decision-making that underpins algorithm design.

Currently, in the UK, the Institute for Ethical AI in Education is leading the development of ethical principles for AI in education. Based at the University of Buckingham, a private university, it’s led by the institution’s Vice Chancellor, alongside the president of the International AI in Education Society, and the CEO of AIed company Century Tech. As with the development of all AI ethics centres and institutes, the constitution of this organization embeds it in particular assumptions — notably the assumption that AI has ‘powerful benefits’ that can be realized as long as responsible practices are followed — which may not necessarily reflect those of other stakeholders. Separately, UNESCO is preparing a global standard-setting recommendation on the ethics of AI, part of which is dedicated to a participatory, consultative exercise to define ethical standards for AI in education.

As this indicates, AIed ethics frameworks and standards are now being pursued by a variety of national and international organizations. These organizations have the power and influence to define how, and whether, AIed applications are implemented in defined ethical ways. The social and political work involved in setting such standards is itself a significant factor in enabling or constraining the expansion of AIed. This work remains as yet under-studied and under-reported, despite the powerful role it will play in setting the accepted and definitive standards for AI in education in years to come.

Controversies

A significant aspect of the social life of AIed is the controversies emerging over automated decision-making and judgment by opaque systems. In summer 2020 this became especially apparent in relation to predictive grading. The first case was the predictive grading system used by the International Baccalaureate Organization to replace exams during Covid-19 school closures. Rather than basing grades on exam scores, the IBO employed an algorithmic grading and awarding model based on student coursework, teacher-delivered predicted grades and historical prediction data. The system, many have argued, is unfair and potentially discriminatory, with more than 20,000 students signing a petition protesting the algorithm. The Norwegian Data Protection Authority has since ordered the IBO to provide further detail on the model as part of an investigation into whether it violated the European General Data Protection Regulation.
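To make the mechanism concrete, here is a deliberately simplified sketch of how an awarding model of this general kind can fold historical, school-level results into an individual student’s grade. Everything in it, from the weights to the field names and the adjustment table, is invented for illustration; it is not the IBO’s actual model, only a minimal illustration of the structural concern.

```python
# Hypothetical sketch of a predictive grading model of the kind described
# above. All weights, names and numbers are invented for illustration;
# this is NOT the IBO's actual awarding model.

from dataclasses import dataclass


@dataclass
class Student:
    coursework_grade: float         # e.g. on the IB's 1-7 scale
    teacher_predicted_grade: float
    school_id: str


# How each school's past predicted grades compared with its students'
# final exam grades in previous cohorts (invented numbers).
HISTORICAL_SCHOOL_ADJUSTMENT = {
    "school_A": +0.3,   # past students tended to outperform predictions
    "school_B": -0.8,   # past students tended to underperform predictions
}


def award_grade(student: Student) -> float:
    """Blend coursework and teacher prediction, then apply a school-level
    correction derived from historical cohorts."""
    blended = 0.4 * student.coursework_grade + 0.6 * student.teacher_predicted_grade
    adjustment = HISTORICAL_SCHOOL_ADJUSTMENT.get(student.school_id, 0.0)
    # The contested step: the grade is shifted by how *other* students
    # at the same school performed in the past.
    return max(1.0, min(7.0, round(blended + adjustment, 1)))


# Two students with identical individual records receive different grades
# purely because of their schools' historical performance.
alice = Student(coursework_grade=6.0, teacher_predicted_grade=6.0, school_id="school_A")
bob = Student(coursework_grade=6.0, teacher_predicted_grade=6.0, school_id="school_B")
print(award_grade(alice))  # 6.3
print(award_grade(bob))    # 5.2
```

The point of the sketch is structural rather than factual: any awarding model with a term like HISTORICAL_SCHOOL_ADJUSTMENT transmits past cohort inequalities into present individual outcomes, which is precisely the discrimination concern raised in the student petition and the regulator’s investigation.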

The use of statistical modelling and historical performance data to predict and award grades in the four UK nations, and the inequalities of outcomes that resulted from this standardization process, fuelled further public, legal and media backlash over the use of predictive algorithms in education. At one protest in London, affected students began chanting ‘fuck the algorithm’, a phrase quickly taken up on social media. The backlash resulted in eventual political capitulation, the abandonment of algorithmic awarding models, and the reinstatement of teacher-assessed grades. One UK Conservative politician later lamented that the scandal was the result of ‘technocratic governance and government by computer’ that failed to recognize ‘that the decisions that are made affect the lives of thousands of people individually’. Although exam grade prediction, moderation and standardization is certainly not unique to these events, the widespread outrage at the outcomes in this case meant that governmental trust in numbers in the four education systems of the UK could not withstand public calls to restore trust in teachers and legal demands for fair outcomes for students.

These examples highlight how the outcomes of highly technical and statistical procedures are not just the result of objective data scientific analysis performed with software, but of difficult choices, forms of methodological expertise, the practical work of civil servants and statisticians, and political interference. They also show how statistical procedures can easily run into public resistance, especially when they produce discriminatory outcomes that are widely understood to be driven both by political bias and by automated algorithms. Although the grading systems were static algorithms rather than ‘learning’ in the AI sense, their outraged reception raises questions about how future iterations of AIed might be received or rejected, and about AIed’s potentially tense position in longstanding debates about the role of teacher professional judgment in education systems characterized by datafied performance measurement and accountability.

Such controversies signal the need for cautious and critical studies that cut through optimistic and futurist claims, rooted in an algorithmic worldview, about the powerful benefits of data-led decision-making. Critical studies should attend to the very powerful ways AIed and related techniques are involved in algorithmic profiling, automated digital redlining and discrimination, student modelling, and forms of prediction and classification that can exert harmful effects or lead to deleterious outcomes for students. AIed, and the AI people promoting it, did not create these problems, but potentially reflect and reproduce historic practices of social sorting, classification, ranking, rating and exclusion, as seen in the statistical modelling and prediction of exam grades. Such practices and controversies are now interweaving into the genealogical threads of contemporary AI in education, with potentially significant effects on its future prospects and development, and require much further unpacking and documenting. These controversies themselves reveal unfolding contests between forms of technical expertise and the public.

Critical perspectives on AIed

Recently, a body of critical social scientific and philosophical research has begun to examine the social, economic and political life of AIed, much of it animated by concern over the kinds of opaque automated decision-making and potentially discriminatory outcomes that the predictive grading controversies have recently exemplified. This critical research is showcased in two recent special issues, one in Learning, Media and Technology and the other in the London Review of Education. The papers across these collections raise a range of concerns for further examination: the politics of AIed; the influence of commercial, futurist, investment and philanthropic actors on AIed; the political economy of AI; the imaginaries and limits of AIed discourses; the problems inherent in algorithmic decision-making; the role of AIed in producing discriminatory outcomes; and the challenges it poses to democratic control over public education, privacy and students’ rights.

This emerging body of research is opening up the social life of AIed to inquire into the various paths taken, whether governmental, commercial, philanthropic, academic, financial, or futurist, to arrive at the contemporary juncture. Taking a critical perspective doesn’t necessarily mean criticizing AIed or taking up an activist position. It means unpacking the various genealogical threads, assumptions and practices involved in its creation and enactment, carefully documenting its effects in the present, and considering its possible implications for the future of education. One important absence in this work is ethnographic study of AIed in action. How AIed is used in teacher practice, policymaking centres or home learning settings is an important aspect of its social life. Understanding how it then affects teachers, policymakers and students would help cut through its powerful framing imaginaries to reveal its actual effects and consequences. Complex statistical models, as the predictive grading controversies show, can produce socially, legally, ethically and politically problematic outcomes.

Studying the social lives of AIed also helps us to see beyond the current fascination with technologies such as algorithms, machine learning and neural networks to the historically embedded processes and problems in education that AIed has been put to the task of addressing. Issues such as inequality of outcomes, claims of the benefits of personalized learning over standardized education, private influence over public education, performance measurement through numbers, and the geopolitics of education policy are, of course, not unique to AIed. The application of intelligent software to old problems does not, however, inevitably or unproblematically solve them. AIed becomes entangled in such problems, for example by exacerbating inequalities through inferring probable outcomes from historical performance datasets, or by delegating human judgment to opaque and unexplainable algorithms.

Ideally, critical social scientific and philosophical research should not only examine the past and present social lives of AIed but become involved in shaping its future life too. It should actively intervene alongside system designers and learning scientists to help shape better outcomes, ethical responses, meaningful regulation, socially just designs, and alternative future imaginaries of AIed. We hope the papers in the special issue AI in education: Critical perspectives and alternative futures support initial steps in that direction.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.

The views expressed by the blogger are not necessarily those of NEPC.

Ben Williamson

Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh.