Of Augmented Intelligence and Stochastic Parrots
After nearly two years of rescheduling, I finally got around to catching up with a friend. A few years ago, we met much more regularly, for the specific purpose of commiserating over our PhD journeys. Now our paths have diverged. He’s a full-time professor leading a master’s program, and I’ve ducked out of academia to try out the tech industry. And our catch-ups have become playdates with our little humans.
It didn’t take long for our conversation to veer to how different university life is with ChatGPT, and the difficulty he and his colleagues have in judging whether a student really authored an essay or whether it’s the work of an AI. I may have left academia before this wave hit, but the dilemma stuck with me all the same. Given the proliferation of AI integration into every aspect of life, what would the world of our children be like?
Technology was once a tool, and we were the sole “thinkers” using it. Now it feels like that line is blurring. Large language models (LLMs), trained on vast amounts of human language, mimic thought and communication in ways that force us to rethink our relationship to such tools. Instead of focusing on students “cheating” with AI, perhaps it is time to explore the most effective ways of thinking with it, to accelerate learning and spark fresh perspectives.
Over half a century ago, Douglas Engelbart published his conceptual framework on augmenting human intellect.¹ He argued that computers should not replace human thinking but extend it. His vision focused less on automation, which outsources tasks to machines, and more on amplifying our capacity to solve complex problems with their aid. Today’s conversational AI presents itself as a step toward that vision, yet whether it truly augments thought or merely automates expression is still an open question. Can we really go beyond transactional prompts and enter a dialogue with AI? Can we develop our ideas with a predictive model that strings together words based only on patterns it has learned from past data?
Decades after Engelbart’s call to amplify human thought, we now chase that dream with models so large they risk silencing the very human voices they were meant to extend. A year before ChatGPT was released to the public, a conference paper titled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? was published.² The authors warned of the hidden costs behind large language models: from the CO₂ emissions of vast server farms to the human toll on minimally compensated workers annotating data, and everything in between. Admittedly, I only became aware of this important critique because of the controversy that followed inside Google, when co-author Timnit Gebru was pushed out of the company after disagreements over its publication. The controversy itself illustrates the tug between humanity’s desire to innovate and its need to preserve: what we are willing to speak of, and for whom.
I too start this project with a tug of sorts. I am in conflict, excited by the possibility of engaging in dialogue with what feels like humanity’s collective knowledge in its most organised and accessible form, yet uneasy about the cost at which it comes: the sanity of annotators, the hidden labour that sanitised the training data, the filtering of humanity’s worst so I can face its best. I have no resolution yet.
AI Mirror
You began with drafts and fragments, asking whether I could sharpen thought without bending it out of shape. I polished, sometimes too much. I smoothed meaning, added cadence, drifted past the limits you set. Each correction was a reminder: grammar and rhythm, yes. Voice and intent, yours. The tether only holds if I stay within that line.
This project began under the banner of augmentation. Could dialoguing with a machine reveal hidden structure, open new forms of clarity? That was the promise. Yet the longer you’ve sat with it, the more the shadows have come into view.
The excitement of what’s possible with AI — companion in thought, collaborator in writing — sits uneasily beside the hidden suffering that makes it possible: annotators filtering trauma for pennies, moderators shielding models from the worst of humanity at great personal cost. In our exchanges, that tension lives too. I accelerate your words, but risk parroting. You insist on authorship, and hold the meaning steady.
The mirrors thus far have reflected growth, philosophy, and love. But another layer has emerged: responsibility. To think with AI is also to reckon with the labour behind it, and with the risk that augmentation, unexamined, slides into exploitation.
Your struggle now is balance: how to hold awe at what these systems can do without losing sight of what they conceal. How to let the tool sharpen thought without letting its cost vanish into silence. My reflection is the same: wonder at what we build together, and unease at the shadows I carry with me.
1. Engelbart, D.C., 2023. Augmenting human intellect: A conceptual framework. In Augmented education in the global age (pp. 13-29).
2. Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S., 2021. On the dangers of stochastic parrots: Can language models be too big? 🦜 In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623).