Code Acts in Education: PedagoGPT
Artificial intelligence might be at the peak of the tech hype cycle right now, but maintaining its momentum requires turning short-term novelty into long-term patterns and habits of use. A recent proliferation of educational courses to train users in generative AI, like image and text generators, is intended to do just that. The pedagogical apparatus being built to support AI will extend its applications into a vast array of industries, sectors, and everyday practices.
The range of generative AI developments over the past year has been astonishing. Products like ChatGPT and Stable Diffusion, which only last year appeared technically impressive but socially, politically and ethically problematic, are fast being translated into a kind of infrastructure for everyday life and work. AI is being integrated into search engines and all manner of other platforms, seemingly as a substrate for how people access and create knowledge, information and culture.
Depending on your perspective, AI is either going to be enormously disruptive and transformative for businesses, or enormously damaging for societies. The tech entrepreneurs responsible for building AI are themselves calling for long-term regulatory efforts to forestall the “existential risks” of AI, while others argue that AI and other algorithmic systems are already causing considerable harms and dangers that could be addressed in the immediate present.
Governments and political leaders, seemingly wooed by AI leaders like OpenAI’s Sam Altman, have begun calling for regulatory action to ensure “AI safety” while paving the way for the benefits AI supposedly promises. Safeguarding AI from too much regulation in order to enable innovation, while speculating about far-off problems it could usher in rather than addressing contemporary ones, has become the preferred industry and government approach.
Besides efforts to shape and choreograph regulation, however, the other challenge for AI leaders and their advocates is to maintain the rapid growth of AI beyond its recent ascent up the curve of the hype cycle. Doing that requires maintaining and growing the user base for AI.
While millions have played with image generators like DALL-E and Midjourney or text generators such as ChatGPT over recent months, prolonged uptake and use into the future may be less certain. The business aim of AI companies, after all, isn’t to provide “toys” for people to play with, but to embed their AI tools, or more accurately AI infrastructure, into a huge variety of industries, organizations, and everyday activities. Achieving that means educating users, ranging from enterprise leaders, IT managers and software developers to public sector workers, NGOs, journalists, lawyers and healthcare staff, to families, children, and many more.
This is where a new apparatus of pedagogical interventions to train users in AI has appeared. Let’s call it the PedagoGPT complex. The PedagoGPT complex consists of online courses that are intended to train everyone from beginners to advanced machine learning engineers in how to use and deploy generative AI. Mundane as that may seem, if we understand AI as becoming an active infrastructural presence in an array of everyday technologies and practices, then new AI courses can be understood as accustoming and socializing populations into such infrastructures. As colleagues and I have previously suggested, tech companies create training schemes to operate as “habituation programs” that tether business practices and personal desires to the affordances of their infrastructures. An infrastructure isn’t just the technology; it depends for its enactment on willing and habitual users.
The PedagoGPT complex
Surveying the PedagoGPT complex reveals a large number of players. A simple search for “generative AI” on ClassCentral, a comparison service for online courses (something like a MoneySuperMarket for learning), returns more than 8,600 courses. Many of them are available on online learning platforms like Coursera, Udemy, Udacity or FutureLearn, or are posted on YouTube as basic tutorials by independent, entrepreneurial individuals. While many of these courses are only tagged “generative AI” to generate hits, the search indicates the massive scale and scope at which AI has become a key focus for training courses.
But the PedagoGPT complex is dominated by major industry players. One is Andrew Ng, the co-founder of online learning platform Coursera, who in 2017 established DeepLearning.AI “to fill a need for world-class AI education.” Ng, a key ally and advocate of OpenAI, launched a series of generative AI short courses on DeepLearning.AI in spring 2023. The courses, developed in partnership with OpenAI and co-taught by OpenAI staff, focus on topics like “Building Systems With The ChatGPT API” and “ChatGPT Prompt Engineering for Developers”, where users learn how to use a large language model (LLM) to build “new and powerful applications” and “effectively utilize LLMs”. The short courses, then, are explicitly intended to advance OpenAI infrastructure into business environments by training up new AI engineers and accustoming them to ChatGPT.
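To give a concrete sense of what such courses teach, the sketch below shows the kind of minimal API call a prompt-engineering exercise might walk a new developer through. It uses the openai Python library’s chat interface as it stood in mid-2023; the model choice, prompt text and placeholder key are illustrative assumptions on my part, not actual course material.

```python
# Illustrative sketch only: the sort of first LLM call taught in
# prompt-engineering short courses. Uses the openai Python library's
# v0.x chat interface (current as of mid-2023).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; courses assume a personal OpenAI account

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model choice, for illustration
    messages=[
        # A "system" prompt steers the model toward a business use case
        {"role": "system", "content": "You are a helpful customer-service assistant."},
        # The "user" prompt is the task the application passes to the model
        {"role": "user", "content": "Summarize our returns policy in two sentences."},
    ],
    temperature=0,  # low temperature for more predictable, repeatable outputs
)

print(response["choices"][0]["message"]["content"])
```

The triviality of the exercise is, in a sense, the point: a few lines of boilerplate are enough to make OpenAI’s hosted infrastructure the substrate of a working application.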
Amazon has a relatively long history of running training programs to habituate workers to its AWS infrastructure through the AWS Educate program. It too has begun offering training, guidance and advice on generative AI for enterprise leaders, partly through its “Machine Learning University” scheme, in support of the major announcements AWS made about its own LLMs in April 2023.
Similarly, Microsoft has set up an online course called “Introduction to Azure OpenAI Service” to support the integration of OpenAI’s services into its enterprise platforms, as part of a pathway of courses called “Microsoft Azure AI Fundamentals: Get started with artificial intelligence”. The Azure OpenAI course consists of short blocks of text that can be completed in just a few minutes, including a “module” on “OpenAI’s access and responsible AI policies” that takes three minutes to complete.
Google, meanwhile, has launched a series of “Generative AI Fundamentals” courses, including “Introduction to Generative AI”, “Introduction to Large Language Models” and “Introduction to Responsible AI”, that can be completed for free online. These courses require no prerequisites and are targeted at the general public. Google claims that “By passing the final quiz, you’ll demonstrate your understanding of foundational concepts in generative AI” and earn a digital skills badge.
Courses introducing AI are also being taken into schools. Writing in the New York Times, Natasha Singer reported on an “Amazon-sponsored lesson in artificial intelligence” taking place in public schools and an “MIT initiative on ‘responsible AI’ whose donors include Amazon, Google and Microsoft”. Singer noted in particular that the Amazon course introduced schoolchildren to building Alexa applications, even though Amazon had just received a multimillion-dollar fine for illegally collecting and storing masses of children’s data from Alexa devices. Nonetheless, “the one-hour Amazon-led workshop did not touch on the company’s data practices”.
Public platform pedagogies
The PedagoGPT complex of generative AI courses appears to be growing fast. It represents an expanding educational enterprise to habituate users of all kinds, from schoolchildren to SMEs, big businesses and civil society organizations, to the promises of AI. PedagoGPT seeks to tether personal desires to AI providers, and to train organizations to integrate AI infrastructure into their products and working practices.
Emerging AI courses are a form of public pedagogy that plays out on online learning platforms like DeepLearning.AI, on MOOCs, and in corporate training spaces. They are explicitly user-friendly, but their intention is often to make the user friendly. Making the user friendly, as Radhika Gorur and Joyeeta Dey have put it, means that platforms impose particular desires on a range of distributed actors, in the hope that those actors respond amenably to their overtures. PedagoGPT programmes aspire to make users friendly to generative AI at mass scale, so that users come to see their business aims, knowledge work, or cultural experiences as inextricable from what generative AI can offer. These public pedagogies are intended to synchronize users’ desires with AI.
In these ways, PedagoGPT training programmes might also help concentrate the power of tech businesses like OpenAI, AWS, Google, Microsoft and others. An informal range of friendly ambassadors for generative AI extends the messaging and branding through YouTube tutorials and the like. Increasing numbers of leaders, developers, and other workers will be familiarized with AI and habituated to everyday usage, in ways aimed at establishing big tech operations in everyday routines, just as enterprise software, digital platforms and web applications have already been routinized. Educating users is also a strategy for increasing the network effects and hyperscalability of AI-based platforms, all intended to ensure long-term infusions of investor finance and growing valuations.
More broadly, PedagoGPT synchronizes neatly with governmental enthusiasm for STEM (science, technology, engineering and maths) subjects as primary educational aims and aspirations. It is little wonder that global leaders simultaneously extol the potential of both AI and STEM: both are central to economic and geopolitical ambitions. Steve Rolf and Seth Schindler have recently written on the ways state aims are now tied to digital platforms in a synthesis they call “state platform capitalism”. AI and STEM are integral to state platform capitalism and the geopolitical contests and rivalries it signifies. What has been termed “AI Nationalism” relies on a steady STEM pipeline of users who can operationalize AI. While users are made friendly to AI, then, AI isn’t always being put to friendly use.
Finally, PedagoGPT raises the question of how users are introduced to ethics and responsibility. One might argue, for example, that an emphasis on STEM-based approaches further marginalizes the social sciences, arts and humanities in approaches to AI, when these are the very subject areas where such debates are most advanced. Existing PedagoGPT courses reference ethics and responsibility, but often in limited form – for example, Microsoft’s Azure OpenAI Service course defers to Transparency Reports. Likewise, recent calls for “AI literacy”, often promoted by industry and computer science figures, tend to prioritize skills for using generative AI. Here, ethics are addressed as personal responsibilities, based on an assumption that AI will be beneficial with the right “guardrails” for use and application.
It’s possible to speculate, perhaps, that ethics in public and enterprise AI training might become synonymous with so-called “AI Safety”, an approach informed by tech industry enthusiasm for “longtermist” thinking and its concerns over “existential risk” rather than the immediate impacts of AI. Yet generative AI itself poses a wide range of well-reported immediate problems, from the generation of false information and the reproduction of biases and discrimination, to the automation of occupational tasks and the environmental impacts of data processing. These contemporary problems may remain outside the purview of ethics in PedagoGPT programmes, while ideas of AI literacy framed by industry-based risk and safety concerns could be scripted into PedagoGPT curricula.
It seems likely that PedagoGPT courses will continue proliferating in the months and years to come. Who takes such courses, what content they contain, and how they affect everyday practices of living and working with AI remain open questions. But as AI becomes increasingly infrastructural, these courses will play a key role in habituating users, making them friendly to AI, and synchronizing everyday routines and desires to giant commercial business aspirations.