Code Acts in Education: PISA for Machine Learners
The Organisation for Economic Co-operation and Development (OECD) has played a considerable role in advancing ideas about education in the Fourth Industrial Revolution, particularly through its long-term Future of Education and Skills 2030 program launched in 2016. A background report on the 2030 project showed how education systems were not responding to the ‘digital revolution’ and new Industry 4.0 demands, and presented the OECD’s case for the development of new skills, competencies and knowledge through ‘transformative change’ in education.
The work of defining the future skills required by the digital revolution is now being undertaken by the OECD’s Artificial Intelligence and the Future of Skills work program, a six-year project commenced in 2019 by its Centre for Educational Research and Innovation (OECD-CERI). As described upon its approval:
The motivation for the Future of Skills project comes from a conviction that policymakers need to understand what AI and robotics can do—with respect to the skills people use at work and develop during education—as one key part of understanding how they are likely to affect work and how education should change in anticipation.
Its ‘goal is to provide a way of comparing AI and robotics capabilities to human capabilities,’ and therefore to provide an evidence base for defining—and assessing—the human skills that should be taught in future education systems. In this sense, the project has the potential to play a significant part in establishing the role of AI in relation to education, not least by encouraging policymakers to pursue educational reforms in anticipation of technological developments. This post offers an initial summary of the project and some of its implications.
‘PISA for AI’
The first AI and the Future of Skills report was published in November 2021. Over more than 300 pages, it outlines the methodological challenges of assessing AI and robotics capabilities. The point of the report is to specify what AI can and cannot do, and therefore to more precisely identify its impact on work, as a way of then defining the kinds of human skills that would be required for future social and economic progress.
The project, Andreas Schleicher, the OECD’s Director for Education and Skills, explained, ‘is taking the first steps towards building a “PISA for AI” that will help policy makers understand how AI connects to work and education.’
The idea of a ‘PISA for AI’ is an intriguing one. The implication is that the OECD might do more than test human learners’ cognitive skills and capabilities, as its existing PISA assessments do, or their skills for work, as its PIAAC tests do. It could also test the skills and capabilities of machine learners in order to then redefine the kinds of human skills that need to be taught, all with the aim of creating ‘complementary’ skills combinations. Ongoing assessments might then be administered to ensure human-machine skills complementarities for long-term economic and social benefit.
Computing Cognition
So how does the OECD plan to develop such assessments? One part of the report, authored by academic psychologists, details the ways cognitive psychology and industrial-organisational psychology have underpinned the development of taxonomies and assessments of human skills, including cognitive abilities, social-emotional skills, collective intelligence, and skills for industry. The various chapters consider the feasibility of extending such taxonomies and tests to machine intelligence. Another section of the report then looks at the ways the capabilities of AI can be evaluated from the perspective of academic computer science.
Given the long historical interconnections of cognitive science and AI—which go all the way back to cybernetics—these chapters represent compelling evidence of how the OECD’s central priorities in education have developed through the combination of psychological and computer sciences as well as economic and government rationales. In recent years the OECD has shifted its attention to insights from the learning sciences resulting from advances in big data analytics and AI. Similar combinations of psychological, economic, computational and government expertise were involved in the formation of the OECD’s assessment of social and emotional skills.
In the final summarizing chapter of the report, for example, the author noted that ‘the computer science community acknowledges the intellectual foundation and extensive materials provided by psychology,’ although, because ‘the cognitive capacities of humans and AI are different,’ further work would require ‘bringing together different types of approaches to provide a more complete assessment of AI.’
The next stage of the AIFS project will involve piloting the types of assessments described in this volume to identify how well they provide a basis for understanding current AI capabilities. This work will begin with intense feedback from small groups of computer and cognitive scientists who attempt to describe current AI capabilities with respect to the different types of assessment tasks.
The project is ambitiously bringing together expertise in theories, models, taxonomies and methodologies from the computer and psychological sciences, in order ‘to understand how humans will begin to work with AI systems that have new capabilities and how human occupations will evolve, along with the educational preparation they require.’
Additionally, the project will result in some familiar OECD instruments: international comparative assessments and indicators. It will involve the ‘creation of a set of indicators across different capabilities and different work activities to communicate the substantive implications of AI capabilities,’ and ‘add a crucial component to the OECD’s set of international comparative measures that help policy makers understand human skills.’ In many respects, the OECD appears to be pursuing the development of a novel model of human-nonhuman skills development, and building the measurement infrastructure to ensure education systems are adequately aligning both the human and machine components of the model.
The idea of a ‘PISA for AI’ is clearly a hugely demanding challenge—one the OECD doesn’t foresee delivering until 2024. Despite being some years from enactment, however, PISA for AI already raises some key implications for the future of education systems and education policy.
Human-Computer Interaction
The OECD-CERI AI and the Future of Skills project is establishing artificial intelligence as a core priority for education policymakers. Although AI is already part of education policy discourse, the OECD is seeking to make it central to policy calculations about the kinds of workforce skills that education systems should focus on. The project may also help strengthen the OECD’s authority in education at a time of rapid digitalization, reflecting the historical ways it has sought to adapt and maintain its position as a ‘global governing complex.’
The first implication of the project, then, is its emphasis on workplace-relevant ‘skills’ as a core concern of education systems. The OECD has played a longstanding role in the translation of education into measurable skills that can be captured and quantified through testing instruments, as a means to perform comparative assessments of education systems and policy effectiveness. The project is establishing the OECD’s authoritative position to define the relevant skills that future education systems will need to inculcate in young people. It is drawing on cognitive psychology and computer science, as well as analysis of changing labour markets, to define these skills, potentially displacing other accounts of the purposes and priorities of education as a social institution.
A second implication stems from its assumption that the future of work will be transformed by AI in the context of a Fourth Industrial Revolution. The project seems to uncritically accept a techno-optimistic imaginary of AI as an enabler of capitalist progress, despite the documented risks and dangers of algorithmic work management, automated labour, and discriminatory outcomes of AI in workplaces, and despite a raft of regulatory proposals related to AI. Cognitive and computer science expertise are clearly important sources for developing assessment methodologies. The risk, however, is the production of a PISA for AI that doesn’t ask AI to account for its decisions when they potentially lead to deleterious outcomes. Moreover, matching human skills to AI capabilities as a fresh source of productivity is unlikely to address persistent power asymmetries in workplaces, especially prevalent in the tech industry itself, or to counter the use of automation as a route to efficiency savings.
Third, the project appears to assume a future in which skilled human labour and AI perform together in productive syntheses of human and machine intelligence. While the role of AI and robotics as augmentations to professional roles may have merits, it is certainly not unproblematic. Social research, philosophy and theory, as well as science fiction, have grappled with the implications of human-machine hybridity for decades, through concepts such as the ‘cyborg,’ ‘cognitive assemblages,’ ‘posthumanism,’ ‘biodigital’ hybrids, ‘thinking infrastructures,’ and ‘distributed’ or ‘extended cognition.’ The notion that skilled human labour and AI might complement each other, as long as they’re appropriately assessed and attuned to one another’s capabilities, may be appealing, but it is probably not as straightforward as the OECD makes out. Absent, too, are considerations of the power relations between AI producers, such as the global tech firms that produce many AI-enabled applications, and the individual workers expected to complement them.
The fourth implication is that upskilling students for a future of working with AI is likely to require extensive studying alongside AI in schools, colleges and universities too. Earlier in 2021, the OECD published a huge report promoting the transformative benefits of AI and robotics in education. While AI in education itself may hold benefits, the idea of implanting AI in classrooms, curricula, and courses is already deeply contentious. It is part of long-running trends towards increased automation, datafication, platformization, and the embedding of educational institutions and systems in vast digital data infrastructures, often involving commercial businesses from edtech startups to global cloud operators. As such, an emphasis on future skills to work with AI is likely to result in highly contested technological transformations to sites and practices of education.
Finally, there is a key implication in how the project positions students as the beneficiaries of future skills. As an organization dedicated to economic development, the OECD has long focused on education as an enabler of ‘human capital.’ It has even framed so-called ‘pandemic learning loss’ in terms of measurable human capital deficits as defined by economists. In this framing, educated or skilled learners represent future value to the economies where they will work; they are assets that governments invest in through education systems, and the OECD measures the effectiveness of those investments through its large-scale assessments.
The AI and future skills program doesn’t just focus on ‘human capital,’ however. It focuses on human-computer interaction as the basis for economic and social development. By seeking to complement human and AI capabilities, the OECD is establishing a new kind of ‘human-computer interaction capital’ as the aim of education systems. Its plan to inform policymakers about how to optimize education systems to produce skilled workers to complement AI capabilities appears to make the pursuit of HCI capital a central priority for government policy, and it potentially stands to make HCI capital into a core purpose of education. Students may be positioned as human components in these new HCI capital calculations, with their value worked out in terms of their measurable complementarity with machine learners.