July 8, 2025
Should we teach AI to reflect human values, emotions, and intentions?
In this new preface to How to Speak Machine, John Maeda revisits a decade of technological transformation, from invisible computing to the rise of generative AI, and what it means to teach machines to speak human.
Design Observer community,
We’re living through a quiet but profound shift — from user experience (UX), where people adapt to systems, to agentic experience (AX), where systems adapt to people. Interfaces that once required learning and navigation are giving way to AI agents that interpret intent and handle complexity on our behalf. This transition isn’t just about convenience; it’s reshaping how we design, build, and relate to technology. It’s in this context that How to Speak Machine, first published in 2019, takes on new relevance in 2025.
— John Maeda, author of How to Speak Machine

The following is adapted from How to Speak Machine: A Gentle Introduction to Artificial Intelligence, ©2025. Reprinted with permission from MIT Press.
It’s September 2024, and as I sit down to write this new preface, I’m struck by the eerie symmetry between the past and the present. In 2010, I wrote an essay for Forbes titled “Your Life in 2020,” a speculative piece about the future of ubiquitous computing. It was a world where technology would fade into the background, becoming so seamless that it would go unnoticed, an era when “being digital” would transform into “having been digital.” Now, I find myself reflecting on that vision, not just to see how accurate it was but to understand how we’ve arrived at this extraordinary moment in time — a moment dominated by artificial intelligence (AI) and large language models (LLMs).
Back then, the idea that technology would become invisible was both thrilling and unsettling. I envisioned a world where mobile phones would be as vital as the beating of our hearts, where STEM education — focused on science, technology, engineering, and math — would need an infusion of art to bring back a sense of humanity, evolving into what we now call STEAM. I imagined a software industry that would return to its craft heritage, moving away from mass-produced digital experiences to something more bespoke, more personal — something crafted by hand, even if that hand was a digital one.

Then the pandemic hit, and the world as we knew it changed overnight. What might have taken a decade to evolve in the digital space happened in mere months as businesses, schools, and entire industries were forced online. The pandemic became a catalyst for digital transformation — a shift driven by the unprecedented volumes of data generated during this time. By 2020, technology had indeed become pervasive, but rather than disappearing into the background, it became an omnipresent force, shaping our lives in ways that were both profound and, at times, disconcerting. What I hadn’t fully anticipated was how quickly technology would evolve, and how it would bring us to the threshold of a new era — the era of AI.
As I look back at that essay from 2010, it’s clear that we were on the brink of something massive, something that would change not just how we live and work but how we think about the very nature of human creativity. And yet, despite all the advances, one thing has remained constant: our desire to look at the past in order to better understand the future. In his 2005 commencement speech at Stanford, Steve Jobs famously said, “You can’t connect the dots looking forward; you can only connect them looking backward. So you have to trust that the dots will somehow connect in your future.” This reflection brings me to How to Speak Machine.
I spent six years writing How to Speak Machine, not just to demystify the world of computer science for the layperson but to reconnect with the foundational knowledge I had gained at MIT and in my career as a technologist. The process of writing the book was, in many ways, an attempt to reconcile the theoretical underpinnings of computer science with the realities of the rapidly evolving tech world I had been immersed in. It was a journey to understand how machines work but also how we, as humans, can better communicate with them — and, perhaps more importantly, how they can better communicate with us.
Machines seem to speak our language, but it’s essential to remember that they don’t yet speak human.
When How to Speak Machine was first published in 2019, the world was still grappling with the implications of AI. The idea that machines could “think” was both exhilarating and terrifying, depending on whom you asked. At the time, large language models were still in their infancy, and the idea of designers using AI to automate their daily tasks seemed like something out of science fiction. But today, that science fiction has become science fact. Designers can indeed write code without engineers, using LLMs to generate and refine their work with a level of ease that would have been unimaginable just a few years ago.
This shift is particularly striking for me, having spent my early career teaching designers how to program, insisting they learn mathematics to fully grasp the logic behind the code. Today, they no longer need to wrestle with the complexities of programming languages because AI can write the code for them. Not only that, it can patiently explain it to them step by step, tailoring it to their respective levels of understanding. It’s a golden age of craft, where the barriers between imagination and execution have dissolved, allowing the artistry of design to flourish in ways we only dreamed of back in 2010.

As we connect the dots from the past to the present, we see that the journey has been about more than just learning how machines work; it has been about preparing them to understand us on a human level. The era of LLMs has just begun, transforming industries in ways we’re only beginning to comprehend. As I continue my work in AI, now at Microsoft amid the disruptive force of ChatGPT, I’m reminded daily of the extraordinary capabilities of these systems and the profound responsibilities they bring.
We find ourselves in an era where machines seem to speak our language, but it’s essential to remember that they don’t yet speak human — they only process language as numerical tokens, devoid of true significance and accountability. To truly navigate this landscape, we must deepen our understanding of computational systems and how we’ve built tech products that appear to speak human but, in reality, speak only in patterns of data. We must not be deceived by this appearance. Only by mastering this distinction can we take on the greater mission of teaching machines to speak human in the fullest sense.
An internationally recognized leader at the intersection of design and technology, John Maeda is Vice President of Engineering, Head of Computational Design for Microsoft AI Platform. He was the 16th President of the Rhode Island School of Design (RISD). Named by Esquire as one of the 75 most influential people of the 21st century, he has appeared as a speaker all over the world, from Davos to Beijing to São Paulo to New York, and his TED Talks have received millions of views. He is the author of Design by Numbers, The Laws of Simplicity, and Redesigning Leadership, all published by the MIT Press.