The Daily Heller: Is Artificial Intelligence Truly Artificial?

What do we know about real or artificial intelligence? We know that, like every newly hyped invention, fear follows form follows function. But for a more nuanced, intellectually acute reality check, I spoke with Blaise Agüera y Arcas, AI researcher and CTO of technology and society at Google, and James Goggin, a U.K.-based book designer and founder/partner of Practice, about Agüera y Arcas’ enigmatic new volume, What Is Intelligence?, designed by Goggin.

As Agüera y Arcas states in his introduction, the team at Google Research began working on the machine-learning models powering next-word prediction for Gboard, the smart keyboard for Android phones. “We created these models to speed up text entry on the phone (which remains, at least for those of us over 40, far less efficient than typing on a real keyboard) by suggesting likely next words in a little strip above the virtual keys. Today, similar models power autocomplete features in many apps and editors. … The process can be iterated, so that predicting the word ‘the’ is equivalent to successively predicting the characters ‘t,’ ‘h,’ and ‘e,’ followed by a space. Predicting one word after another could produce a whole paragraph. When iterated, each prediction builds on—or, in more technical language, is conditional on—previous predictions.”
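For readers who want to see the iteration itself, here is a minimal Python sketch of the loop Agüera y Arcas describes. The tiny lookup-table “model” is purely illustrative, a stand-in for Gboard’s actual machine-learning models; the point is only that each new prediction is conditioned on what has already been predicted.

```python
# A toy stand-in for a next-word predictor: a made-up probability table,
# not Gboard's real model.
toy_model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "sat": {"down": 1.0},
}

def predict_next(words):
    """Return the most likely next word. This toy table only looks at the
    last word; real models condition on the whole context so far."""
    options = toy_model.get(words[-1], {})
    return max(options, key=options.get) if options else None

def generate(prompt, max_words=10):
    """Iterate the prediction: every new word builds on prior output."""
    words = list(prompt)
    for _ in range(max_words):
        next_word = predict_next(words)
        if next_word is None:
            break
        words.append(next_word)
    return " ".join(words)

print(generate(["the"]))  # "the cat sat down"
```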

Machine learning in the 2010s was sometimes called “artificial intelligence,” but most researchers working in the field didn’t believe they were really working on AI. “Perhaps they had entered the field full of starry-eyed optimism …”

Agüera y Arcas notes that “neuroscientists had concluded that the brain worked on computational principles, and computer scientists therefore believed that now that we could carry out any computation by machine, we would soon figure out the trick, and have real thinking machines. HAL 9000 and Star Trek’s shipboard computer weren’t idle fantasies or plot devices, but genuine expectations.”

The question Agüera y Arcas’ book poses will not be easily understood by everyone—including me—but it is nonetheless provocative and perhaps offers a key to understanding what the future has in store. Agüera y Arcas is fluent in a language that requires interpretation; Goggin exemplifies the designer who must understand the new idioms. Together yet separately they provide insights into what may become a seminal book for “the New Intelligence.”

You describe AI as “real” intelligence. Isn’t that a contradiction?
Blaise Agüera y Arcas: I don’t think it necessarily is. An artificial sweetener, for instance, may be made out of something other than sugar, but for us, “sweet” is about taste, not chemical composition. So the sweetness of artificial sweetener is real.

Or, to look at it a little differently, my colleague Benjamin Bratton and I have wished AI had instead been called “Synthetic Intelligence,” by analogy with synthetic diamonds. Natural diamonds were formed out of carbon under tremendous pressure in the Earth’s crust a long time ago. Synthetic diamonds are instead made in the lab—but they’re perfectly real diamonds. They’re not cubic zirconias or some other bogus article.

So if intelligence is about what an entity can do, as opposed to how it’s made or where it came from, then AI, too, is perfectly real. And I think it’s reasonable to define intelligence functionally. We don’t test a person for intelligence by opening up their head and seeing what’s inside, or by asking about their ancestry. Every intelligence test I’ve ever heard of is functional or behavioral.

You state that “the concept of function or ‘purpose,’ which is central to life, is inherently computational.” Can you expand upon this? Does it mean that we are all computational beings?
Agüera y Arcas: Yes! It’s actually an old idea.

In 1951, a couple of years before Watson and Crick published their famous paper about the structure and function of DNA, computing pioneer John von Neumann had a big insight about living organisms. He realized that every living thing capable of evolving in an open-ended way needs to have an “instruction tape” specifying how to build itself out of simple and available chemical parts, along with a machine he called a “universal constructor” that could follow the instructions on the tape and do the constructing. The instructions for building the universal constructor would themselves need to be included on that tape. This is what it takes for an organism to reproduce (and also, as we now understand, to maintain itself, or grow, or heal).

Von Neumann was right. That “tape” is DNA, and ribosomes are the universal constructors that build our bodies. They also build more of themselves; the instructions for making a ribosome are encoded in our DNA, too. The wonderful thing is: von Neumann showed that a universal constructor is precisely a “Universal Turing Machine,” that is, a general-purpose computer. So life is literally computational. It must be, in order to reproduce and evolve.
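A loose computational echo of that insight, offered here as an aside rather than anything from the book, is the classic quine: a program whose “tape” (the string it carries) contains the description needed to reproduce the entire program, that description included.

```python
# A classic Python quine: the string s is the "instruction tape," and it
# includes the instructions for printing the tape itself, so the program's
# output is its own complete source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it and it prints its own source, much as the instructions for building the universal constructor sit on the very tape the constructor reads.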

“Function” and “purpose” turn out to be computational terms. “Function” is the more obvious one. When Alan Turing (another key figure in the history of computing) defined computation, it was literally in terms of mathematical functions. A computation is simply the evaluation, or carrying out, of a function.

When we think about this in von Neumann’s more embodied “universal constructor” terms, the inputs and outputs of such a function are not abstract entities like numbers, but actual stuff. So, for instance, the function—or, we could say, purpose—of a kidney is to filter the urea out of blood. As with intelligence, it doesn’t matter how it carries out this function. As long as the result is filtered blood, the kidney is said to be functional.

There are multiple steps involved in performing the filtration, which we can think of as the steps of a program. But because it’s the overall function that matters, different programs or recipes might work, and if so, they would be interchangeable. And indeed, because function or purpose is so central to life, many living systems have evolved multiple redundant methods or “programs” for performing critical functions—such as aerobic and anaerobic respiration, or different metabolic pathways. On a keto diet? That’s just such an alternative metabolic pathway. A different subroutine.
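To make that interchangeability concrete, here is a small sketch of my own (not an example from the book): two different “programs” that compute the same function, in the spirit of the kidney whose only job is to turn blood into filtered blood, however it manages that.

```python
def filter_waste_v1(blood):
    """One 'program': strip the waste molecules with a list comprehension."""
    return [molecule for molecule in blood if molecule != "urea"]

def filter_waste_v2(blood):
    """A different 'program': build the cleaned list step by step in a loop."""
    clean = []
    for molecule in blood:
        if molecule != "urea":
            clean.append(molecule)
    return clean

sample = ["water", "urea", "glucose", "urea", "salt"]
# Both programs realize the same input-output function, so either one can
# play the functional role; the steps inside don't matter to the function.
assert filter_waste_v1(sample) == filter_waste_v2(sample)
```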

Functional relationships characterize complex systems at every scale. Think about pollination (it could be done by birds or by bees), food production (hunting, gathering, farming), or package delivery (by plane or by truck). In the end, cells, organisms, cities and ecosystems are also networks of interdependent functions, often with multiple redundant pathways or programs for each function.

Tell us more about computational neuroscience.
Agüera y Arcas: Back in the middle of the 20th century, computer science and neuroscience—the study of brains and how they work—were actually the same discipline. Computers were literally an attempt to make artificial (or synthetic) brains. Computers proved very useful for military, industrial and eventually personal applications, but early attempts at AI failed miserably, so computer science ended up going its own way.

Meanwhile, neuroscience took multiple paths. Some neuroscientists focused on medicine, psychology, linguistics or even philosophy, but one contingent continued to focus on the brain’s functional—that is, mathematical or computational—properties. These researchers often had backgrounds in math or physics. They built quantitative models, probed those models with experiments (often involving electrical brain recordings), and iterated. They were the computational neuroscientists.

In the early days, many computational neuroscientists referred to their field as “cybernetics,” though the term began falling out of fashion in the 1960s. Those who tried to simulate the computational properties of networks of neurons were sometimes referred to as “connectionists.”

Our best current understanding of the brain, and our best (and only) successful approach to AI—neural nets—comes from computational neuroscience. These days, the more applied end of the field is sometimes called “NeuroAI.”

As a designer, I was struck by your section “Grandmother Cell.” What is this concept?
Agüera y Arcas: The original, tongue-in-cheek idea was that your brain has a neuron to represent every concept you can think of—and assuming you can think of your grandmother, that would imply you have a single neuron that fires only and precisely when you’re thinking of your grandmother!

It’s a problematic idea for a bunch of reasons, including lack of robustness (no single neuron is all that reliable).

Nonetheless, back in 2005, researchers recording from single neurons in human patients found cells responsive to the oddest stimuli, including Jennifer Aniston, Jennifer Aniston and Brad Pitt together, Pamela Anderson, and so on. It turns out that many of our neurons are very specific, but whole overlapping sets of them light up in response to any given stimulus or idea. This kind of coding is vastly more robust and efficient than a single “grandmother cell.”
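A toy comparison, again my own sketch rather than anything from the 2005 study or the book, shows why an overlapping code is more robust: lose the one dedicated neuron and the concept vanishes, but lose one neuron out of an overlapping set and the concept still decodes.

```python
# Grandmother-cell scheme: exactly one dedicated neuron per concept.
grandmother_code = {"grandmother": {0}, "Jennifer Aniston": {1}, "Brad Pitt": {2}}

# Population scheme: each concept lights up an overlapping set of neurons.
population_code = {
    "grandmother":      {0, 3, 4, 5, 6},
    "Jennifer Aniston": {1, 4, 6, 7, 8},
    "Brad Pitt":        {2, 5, 6, 8, 9},
}

def decode(active_neurons, code):
    """Return the concept whose neuron set best overlaps the active neurons."""
    best = max(code, key=lambda concept: len(code[concept] & active_neurons))
    return best if code[best] & active_neurons else None

dead_neuron = 0  # suppose this one unreliable neuron fails to fire

for label, code in [("grandmother-cell", grandmother_code),
                    ("population", population_code)]:
    active = code["grandmother"] - {dead_neuron}  # stimulus: thinking of grandmother
    print(label, "->", decode(active, code))
# grandmother-cell -> None        (the concept is lost with its single neuron)
# population -> grandmother       (the overlapping code survives the failure)
```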

I was also intrigued by your statement, “Vision is not the raw stream of sensory input from our eyes. Rather, that input stream acts like an error-correction signal. Vision—what we actually see—is a reconstruction of the world around us, an actively maintained and constrained hallucination.” What is a constrained hallucination?
Agüera y Arcas: A constrained hallucination is a story your brain tells itself about the surrounding world, which is constantly tweaked and corrected by sensory evidence. You tend to gather that sensory evidence (with eye movements, for instance) in the ways that will most helpfully constrain the “hallucination,” keeping it consistent with what’s out there.

The creative part of hallucination is essential, because your raw sensory inputs simply aren’t rich or wide-angle enough to take in much of the world at a time. Neuroscientist Anil Seth and I agree that hallucinogens may work by changing the balance of the “creative” versus “feedback” (or, one could say, “critique”) parts of this process. That can allow the hallucination to start to float free from its moorings. If you’ve never tried a hallucinogen, try Googling the video “DeepDream grocery trip” to get a sense of what happens.
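For readers who want the idea in runnable form, the sketch below is my loose analogy, not the book’s model: the “hallucination” is a running internal estimate, and sensory input enters only as an error-correction signal whose strength is set by a made-up feedback_gain parameter.

```python
import random

random.seed(1)
true_brightness = 10.0   # the actual state of the world
internal_model = 0.0     # the brain's running "hallucination"
feedback_gain = 0.3      # how strongly sensory error corrects the model

for step in range(30):
    sensed = true_brightness + random.gauss(0, 1.0)  # noisy sensory sample
    error = sensed - internal_model                  # prediction error
    internal_model += feedback_gain * error          # constrain the hallucination

print(round(internal_model, 2))  # settles near 10: the story tracks the world
```

Shrink feedback_gain toward zero and the internal estimate drifts away from the evidence, a rough analogue of the hallucination floating free from its moorings.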

How do you want this book to be used? Is it intended to allay the fears arising from the quick dominance of AI?
Agüera y Arcas: I didn’t write this book to be a comforting fairy tale. It is my best understanding, at this point, of the underlying nature of life and intelligence. I feel that over the past few years many questions have suddenly become a lot clearer to me, due to an unusual vantage point at the intersection of several disciplines and communities (AI and computer science, physics, neuroscience, and philosophy). It seemed like a timely and important topic for anyone who thinks about these things, or is curious about them!

It also felt important and timely because the insights cut against several prevailing narratives, including those of many “AI doomers.” We’re facing plenty of serious societal risks, including AI risks, and I do spend time on those in the book. But I don’t think they’re best conceived of (or addressed in terms of) our intuitions about “dominance,” wherein today humans dominate AI, and tomorrow AI may dominate humanity. Evolution, life and intelligence don’t generally work that way. Networks of interdependence are the norm.

James, for me, the book’s content is both fascinating and confounding. How much of the book did you comprehend?
James Goggin: I appreciate this question because it assumes that I’ve actually read the book that I’m designing. (Which, of course, I have!) My partner Shan and I have a principle that we read everything we design. But clients often express surprise when they realize that we’ve read the content supplied to us (and are sometimes offering editorial opinions on the text, to boot). Apparently graphic designers have a reputation for not reading what they design! In my typography classes at RISD, I had a lot of suggestions and recommendations, but pretty much only one rule: Read the text.

The great thing about Blaise’s writing is that it is accessible and generous without being patronizing. He doesn’t necessarily assume that the reader will immediately understand everything, but he assumes that you are willing to understand it. I personally fit into this category of reader. I’m not a philosopher, a biologist or a computer scientist, but you don’t need to be in order to read What Is Intelligence?

It helps that Blaise’s references are truly wide-ranging: not just from machine learning, biology, physics and neuroscience, but from areas where I’m personally more comfortable, like the broader cultural, social and historical references that he naturally weaves through the narrative.

I love reading as an engaged, multitasking kind of activity: I’m constantly reading with my phone next to me (if not actually reading on my phone), finding mentioned places on Google Maps, looking up subjects on Wikipedia, and checking the dictionary for words I’m not familiar with. It’s a great book for that kind of active reading: each time you open it, you come away having learned many new things and gone on explorations.

What level of comprehension did you need to create your design?
Goggin: In spite of what I said above, I don’t actually think total comprehension was necessary. But this was partly due to circumstances: I started the design process for the book as Blaise was still writing it. We were designing for both web (with my digital collaborators Minkyoung Kim and Marie Otsuka) and print at the same time, too. And for two different versions of the book.

The first, a kind of rogue digitally printed edition of the book’s first chapter, given the title What Is Life?, was designed in dialogue with Blaise in a sprint before an initial presentation and launch at MIT Media Lab in October 2024. So I had advance access to chapter one, and as it happens, this provided a good conceptual foundation both for that small edition and the full 10+ chapter final book. (After some initial hesitation at the decidedly non-academic publishing speed with which Blaise and I wanted to get What Is Life? out, MIT Press ended up loving it and we had it reprinted offset for an official release by the press in March 2025.)

Major inspiration came from two particular illustrations found in that first chapter, both of which summed up ideas of life and computation in different ways. The first was a reaction-diffusion diagram by the famed English mathematician, computer scientist, logician, cryptanalyst, philosopher and theoretical biologist Alan Turing. You can see this on the cover jacket for What Is Life? The second was a computational intensity scatter plot that shows the emergence of life. It looks like white noise, like the static on an old cathode ray tube TV. Which is to say, even to the uneducated, it looks alive in some way.

These were exactly the kinds of provocative visuals I was looking for: ostensibly abstract forms and patterns that, upon reading about them in context, immediately expand in meaning, collapsing time and space, spelling out the book’s intentions. On What Is Life?’s cover, the Turing pattern looks contemporary. Graphic design-wise, it’s possibly even trendy, I hesitate to admit. But on checking the inside cover caption, you quickly realize that it’s from 1952. The noisy scatter plot literally pinpoints the moment that life occurs.

Were you influenced at all by Quentin Fiore’s design for McLuhan’s The Medium is the Massage?
Goggin: Not consciously, but I’m certain there’s an inevitable subconscious presence of Fiore’s work. His designs for mass-market paperbacks communicating complex ideas in accessible graphic form have long been an inspiration for me as a designer. The Medium is the Massage, Buckminster Fuller’s I Seem to Be a Verb, and other similar non-Fiore–designed paperbacks from the 1970s that popularized specialist topics for a mainstream audience remain an example of accessibility without dumbing down that I aim for in my work (and I know are references for Blaise, too—we’ve discussed them a lot as we work on our books together).

If anything, this current project (like many of my projects, I’d say) owes a debt to Richard Hollis’ work, in particular his design for Ways of Seeing. An earlier project I worked on with the musician and writer Damon Krukowski was perhaps a more explicit example of this influence, to the point of homage (in both Damon’s writing and my design): the book of his podcast Ways of Hearing (also published by the MIT Press). In fact, it was this book that first drew Blaise’s attention to our work, and led to our first collaboration on his COVID-times novella Ubi Sunt.

What were you trying to achieve with your rendition of Cooper Black as the main titling face? Is there a relationship between the typeface and the content?
Goggin: The type does bear a resemblance to Oswald Cooper’s design, doesn’t it? I’m pleased to hear that association, as someone who loves the Cooper type family. But it’s actually Federico Parra Barrios’ typeface Exposure, released by French foundry 205TF in 2022. Like all of our projects, there’s definitely a conceptual and functional relationship between type and content. I wanted something comfortable to read, given the length of Blaise’s writing. I also wanted something that had the certain legitimacy of a textbook, but with a subtle slant to it.

Just as we played with time spanning millennia on the covers of What Is Life?, from 1950s patterns to contemporary computation to Aboriginal art tradition going back at least 20,000 years, Exposure deftly conflates various developments in design and technology. Instead of traditional type weights, Exposure uses a new technology (variable type) to simulate an obsolete one, producing the eroded or blown-out effects of under- or over-exposure in phototypesetting.

The variability has enabled me to animate the type for various web-based and motion graphic applications (for online promotion and conference lecture video loops) while also allowing me to dial in very specific type weights for our print and digital editions, for white and black page backgrounds in the design, etc. Typesetting the body text throughout, I nudged the exposure axis just enough for it to feel like perhaps you’re reading a photocopy of the text, as if someone already felt that Blaise’s important writing was worth copying and sharing.
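For anyone curious how a specific axis value actually gets frozen for print, the fontTools library can instantiate a variable font at a fixed position. The sketch below is hypothetical: the axis tag “EXPO” and the file names are my guesses, not details from Goggin or the 205TF release.

```python
# Hypothetical sketch: pin a variable font at one point on its exposure-like
# axis, the way a designer might dial in an exact setting for a print edition.
from fontTools import ttLib
from fontTools.varLib import instancer

font = ttLib.TTFont("Exposure-Variable.ttf")                    # hypothetical file
pinned = instancer.instantiateVariableFont(font, {"EXPO": 12})  # guessed axis tag
pinned.save("Exposure-PrintEdition.ttf")                        # hypothetical output
```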

Was an AI aid used in the design of this volume?
Goggin: Not explicitly, in terms of generating images or design elements. But AI inevitably played a role in basic tasks involved with the copious amount of work needed for image processing and editing. Subtly extending backgrounds here and there for bleed or other reasons, using Photoshop’s content-aware fill, for example. That could well be the only AI aid used, if I really think about it, but given how embedded AI has already become in design software, I’m certain there have been other processes involved.

How do you feel about AI as a design tool or something else?
Goggin: That’s generally exactly how I think about AI: as a design tool, a production tool. I took a great interest in text-to-image generation early on, experimenting with DALL·E 2 and Midjourney a few years ago, for example, to understand what was going on and how they worked. But as with all technology, AI or not, I retain a healthy skepticism too: a critical awareness of the very important concerns around how AI models are trained on work without our permission, the perhaps unsurprising problems of inherent bias, and the very real threat to certain fields like editorial illustration, among many other issues.
