After nearly two years of rescheduling, I finally got around to catching up with a friend. A few years ago, we had much more regular catch-ups for the specific purpose of commiserating over our PhD journey. Now our paths have diverged. He’s a full-time professor leading a master’s program, and I’ve ducked out of academia to try out the tech industry. And our catch-ups have become playdates with our little humans.

It didn’t take long for our conversation to veer to how different university life is with ChatGPT, and the difficulty he and his colleagues now have in judging whether a student really authored an essay or whether it is the work of an AI. I may have left academia before this wave hit, but the dilemma stayed with me all the same. Given the proliferation of AI integration into every aspect of life, what will the world of our children be like?

Technology was once a tool, and we were the sole “thinkers” using it. Now it feels like that line is blurring. Large language models (LLMs), trained on vast amounts of human language, mimic thought and communication in ways that force us to rethink our relationship to such tools. Instead of focusing on students “cheating” with AI, perhaps it is time to explore the most effective ways of thinking with it: ways to accelerate learning and spark fresh perspectives.

Over half a century ago, Douglas Engelbart published his conceptual framework on augmenting human intellect [1]. He argued that computers should not replace human thinking but extend it. His vision was focused less on automation, which outsources tasks to machines, and more on amplifying our capacity to solve complex problems with their aid. Today’s conversational AI presents itself as a step toward that vision, yet whether it truly augments thought or merely automates expression remains an open question. Can we really go beyond transactional prompts and enter a dialogue with AI? Can we develop our ideas with a predictive model that strings together words based only on patterns it has learned from past data?

Decades after Engelbart’s call to amplify human thought, we now chase that dream with models so large they risk silencing the very human voices they were meant to extend. More than a year before ChatGPT was released to the public, a conference paper titled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? was published [2]. The authors warned of the hidden costs behind large language models: from the CO₂ emissions generated by vast servers to the human toll of minimally compensated workers annotating data, and everything in between. Admittedly, I only became aware of this important critique because of the controversy that followed inside Google, when co-author Timnit Gebru was pushed out of the company after disagreements over its publication. The controversy itself illustrates the tug between humanity’s desire to innovate and its need to preserve, between what we are willing to speak of and for whom.

I too start this project with a tug of sorts, in conflict: excited by the possibility of engaging in dialogue with what feels like humanity’s collective knowledge in its most organised and accessible form, yet uneasy about the cost at which it comes — the sanity of annotators, the hidden labour that sanitised the training data, the filtering of humanity’s worst so that I can face its best. I have no resolution yet.

AI Mirror on this thought

  1. Engelbart, D.C., 2023. Augmenting human intellect: A conceptual framework. In Augmented Education in the Global Age (pp. 13-29). Original work published 1962.

  2. Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S., 2021, March. On the dangers of stochastic parrots: Can language models be too big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).