artificial intelligence!

AI book: You Look Like a Thing and I Love You from the Author of AI Weirdness Blog

A funny, enlightening and non-technical primer on how AI language systems work, their strengths and weaknesses. If you like the author's blog AI Weirdness, this book is for you!

A discursive review

Today we’re surveying Janelle Shane’s witty primer on the strengths and weaknesses of machine learning: You Look Like a Thing and I Love You.   If you are an AI weirdness blog fan or just someone who is curious about AI, you might want to grab your own drink because this is going to be a meaty discussion.

We start with a look at some of the interesting gaps between the AI of our collective imaginations and the AI of today’s technological reality, and then highlight a few related concepts from the book. 

3.5/5
Is it interesting? 70%
Is it useful? 40%
Is it well written? 70%
Is it worth buying? 80%

Note: when you buy something using links in our articles and projects, we may earn a commission, and you help support new content development! As an Amazon Associate, playhacker earns from qualifying purchases. Learn more in our policies section. Want articles and projects in your inbox, step by step tutorials and a builder community? Join playhacker!

Summary

Before we dive into the book itself, we’ll lay some groundwork with a look at the way human’s fears of the unknown have manifested since time immemorial in myth and, as our technological capabilities advance, have begun to focus on artificial intelligence, in both speculative fiction and futurist scenarioing.  

Then we’ll recap some recent examples of real-world issues we are seeing in AI applications today.  Together, these two threads will help frame the book’s funny and insightful dive into the limitations, challenges and tripwires of the current state of learning systems.

More than just an AI book for beginners, this is simply an engaging read.  Shane helps us understand some complex technical concepts, and effectively educates us about the gap between our cultural expectation of the capabilities of current AI, and the strange, beautiful and sometimes dangerous limitations of real machine learning systems. And the examples are very funny!

Contents

Backstory: fear of the unknown

AI brings with it a lot of excitement, and no small degree of FUD. There’s always been a tension between advancement, and trepidation about the future that new changes might bring about. Fears of the unknown have manifested since time immemorial in myth. And as our technological capabilities advance, these primeval triggers are refocusing on an emerging unknown: artificial intelligence.

The Uncanny Valley: humanlike, but not quite

There is a phenomenon of human consciousness where, when simulacra (dolls, or puppets, or robots which look or act human-like) get more and more realistic, they stop looking to us like a harmless thing, and start to look strange or alien. This is a point before it has been improved so much that we fully trust it.  

This in-between place triggers discomfort, a withdrawal of trust, even fear and revulsion. This phenomenon has existed in humans for aeons, but today in the age of emerging AI and robotics, we have named it: The Uncanny Valley.

uncanny-valle-of-you-look-like-a-thing-and-i-love-you/
The Uncanny Valley, courtesy Wikimedia Commons

The Golem: forces of creation, barely controlled

This fear of “close to human, but apparently not human” seems buried deep in the ancient roots of perception. I suspect it is part of survival mechanisms which keep us alert for things which are sufficiently different from us, things which are likely to be an enemy or a predator.  

An early example of this is the Golem, an ancient Hebrew mystical concept of a human-like creature made of clay, often imagined as a mute slave created by a gifted spiritualist, doing the bidding of its creator until the creator loses control, when the golem becomes a threat, runs amok, etc.

The-Golem
A plate from "The Golem" by Hugo Steiner-Prag 1915 courtesy Joods Historisch Museum

The interesting universals about the golem, which we see again and again across history, are 1. the arcane mysteries required to bring it to life, 2. its alienness, and 3. the possibility of its maker losing control of their creation. Do these three tropes remind you of anything? How about the world’s most famous monster?

Frankenstein: the modern Promethius

In the early 1800s the industrial revolution, the harnessing of electricity, and rapid mechanization was disrupting age-old societal structures. Our ancient fear of the ‘other,’ and a growing fear of the power of technology, sparked a reimagining of the golem by Mary Shelley in her book Frankenstein or The Modern Promethius (1831), this time animated not by mysticism but by galvanism.

Even today, this is a remarkably good read if you like gothic horror. And it fits so well with the drumbeat of loss of control and fear of the unknown that sounds throughout history whenever the new threatens the old order. Frankenstein is considered by many to be the first sci-fi novel.

Frankenstein-mary-shelly
Frontispiece to the 1831 edition of Frankenstein

Spoiler alert: things did not end well for the creature’s creator. The book serves up a quasi-religious moral dishing out, for the sins of hubris and tampering with the natural order. And the creature deeply resented both humanity’s hatred and being in the thrall of its creator; and threw off its shackles as soon as it could.

R.U.R: the first robots run amok

Which brings us neatly to R.U.R., or Rossum’s Universal Robots, a play written by Karel Capek in 1920.  Here again, people create androids, or carbon-based robot slaves. And once again, the interests of people and their creations diverge, and, well, I’m afraid it doesn’t end well for humans. Watch for this theme throughout this article.

Interesting fact: even though these robots are made of artificial flesh, not metal and wires, this play introduced the word “robot” to the lexicon. As Wikipedia explains, “In Czech, robota means forced labour of the kind that serfs had to perform on their masters’ lands and is derived from rab, meaning ‘slave’.”

RUR
The Uncanny Valley, courtesy Wikimedia Commons

Asimov's three laws of robotics: the first attempt at control

As the technological era really started to pick up steam, people started thinking about the potential perils of artificial intelligence in earnest; many decades before the concept really existed except in imaginations.

In a 1942 issue of Astounding Science Fiction magazine, Issac Asimov introduced the Three Laws of Robotics, a thought experiment for the consequences of human dependence on AI, based on the machine-age idea that we could simply top-down engineer-in safeguards. His three interlocking rules were:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws
AI-Law-of-robotics
Asimov first laid out the three Laws of Robotics in Astounding in 1942

While modern AI has shown us it won’t be so simple, in Asimov’s defense he wrote a lot of stories exploring the ways his rule set could fail. In 1950 he collected a bunch of his musings about human-robot interactions in the classic short story collection I, Robot. Classic sci-fi dorky, and thoughtful. Give it a read.

Colossus: an early vision of AI takeover

A decade later, Hollywood expressed these early rumblings of “what if uh-oh” pretty well in Colossus: the Forbin Project, a 1970 Universal Pictures film about two supercomputers, which run the USA and the USSR, respectively. 

One day they get to talking, and quickly come to the logical conclusion that the real problem with peace and order is all those illogical little meat sacks everywhere. They decide that, if they pool their resources, they could get a handle on the problem in no time.

Play Video
Click poster to watch the original trailer (Universal Pictures, 1970)
Movie poster from Wikipedia

The movie is based on a dystopian trilogy of sci-fi novels by D. F. Jones, which are another recommended classic of the “bots run amok” genre. But they’re long out of print. Your best be for reading is Alibris, which usually can sell you an old paperback cheap (I picked up the first in the series, Colossus, there for two bucks).

The singularity: intelligence dials itself to eleven

Nick Bostrum’s Superintelligence is a practical-philosophy deep dive into what he feels is the huge risk, or even inevitability, of thinking machines saying “sorry, piss off!” to their makers. Basically a what-if thought experiment on Colossus-like development in our future AI landscape, Superintelligence focuses on how quickly machines may exceed our own intelligence (AKA The Singularity), when “general artificial intelligence” or human-level AI, could go from unfeasible to unstoppable in an eyeblink, pondering the many ways in which that could go very badly for us indeed.

Superintelligence
A dystopian planning manual that assumes humans are much smarter than I think they are

AI today: the unknown, manifest

Many researchers are skeptical, even derisive, of Bostrum-style scaremongering.  The principle argument being that we are obviously centuries away from AI being smart enough to compete with humans. “We don’t need to worry about it yet.” But even today, in the nascence of applied AI, we already have plenty of examples of real harm; less from learning systems’ overwhelming superiority than from our naive trust in their efficacy.

Chatbot or internet troll? Tay the learning twitter bot

Most of us learn how to think about AI first from Sci Fi books and films, and later from breathless news articles about the latest breakthrough. What we don’t hear as much about is what AI can’t do, and where it fails to meet our expectations.

Occasionally, some high profile blunder pierces the veil of “OMG we so live in the future.” Like the wonderful Tay fiasco from Microsoft Research, where they turned a learning chatbot loose on twitter, where the internet trolls taught her how to talk. And everyone was caught off guard by how quickly it became a racist ass!

AI-bot
Tay was only live for 24 hours or so. Too bad.they didn't
leave the tweets up, it was both entertaining and instructive

Human or gorilla? Racist search AI

A remarkably similar problem surfaced not long after, in the Google image-recognition AI system. Turns out, if a bunch of white people design a deep learning model, and select the images to train it against, they tend to select images of white people. And surprise: that creates a machine that isn’t very well-trained with non-white faces.  

image-recognition-AI-system
This wake-up tweet has long since been removed, but as of
2020, it seems Google has not been able to fix the problem

And it may be a harder problem than it initially seemed: because according to Wired, more than two years later they apparently still couldn’t fix the problem, because their initial workaround was still in place (simply refuse to search for any animal it might confuse with a human).

Deer or airplane? One-pixel attacks on image recognition

Amazing but true: AI recognition systems can be tricked into identifying completely different things, by changing just a single pixel, according to research by Jiawei Su et al. (if you don’t like reading papers, Towards Data Science has a nice overview).

AI-recognition-systems
Each of these images had only one pixel changed,
altering target ID (black) to bad ID (blue) arXiv.org

Baseball or espresso? Pattern attacks on object recognition

Perhaps even more interesting is follow-up research by Anish Athalye et al, who took those learnings in images, and applied them to three dimensional objects. The interesting thing here is that the tweaked objects look, to humans, pretty much exactly like the target subject. But by image recognition AI, can be consistently identified as something radically different.  

In two funny examples, a 3D printed turtle is consistently identified as a rifle, and an old baseball is confidently judged a cup of espresso. The researchers have a nice summary on their blog, with some sweet videos too.

3d-objects-corrupts-recognition
Synthesizing Robust Adversarial Examples on arXiv.org

Clear horizon or white truck? Self-driving discrimination problems

One of the most infamous examples of unintended consequences, of the kinds of simplifications AI systems make without our awareness, is the “Tesla vs white truck” crash of 2016. Essentially a white semi truck crossing the road ahead can resemble closely enough a light sky above a clear road, so a car guided by the Autopilot system may slam into said truck at a high speed, without braking. Tesla’s blog post about the accident seemed to imply that a human would not have seen it any differently, but Tesla drivers have noticed the Autopilot system not recognizing very obvious white trucks crossing in front.

AI-autopilot
Darker ground + lighter sky + white trailer = "what truck?"
(I added the semi to a landscape photo by Matt Hardy)

So: what do all these things have to do with each other? We’ve looked at being afraid of what might go wrong; and at being taken by surprise when things go unexpectedly wrong. This is useful context for appreciating You Look Like a Thing And I Love You, which aims to help us helps us understand why AI is cool, how it can sometimes be funny, what’s happening under the hood, some ways things can go wrong, and how to make better decisions around applied machine intelligence.

AI Weirdness blog: You Look Like a Thing

This book is basically ideas from Janelle Shane’s popular Ai  blog about her experiments with machine learning, AI Weirdness, stitched together by some interesting and useful “how does this stuff work under the hood?” context, and wrapped in an overarching theme of being mindful creators, and consumers, of current AI techniques.

A sequence of hypothetical goals for AI applications thinking through the application of AI to the problem, and exploring the funny outcomes; peeks under the hood of how various systems approaches work; and disarming illustrations from the AI’s perspective; all work together to help us better understand, when AI does something unexpected, funny or alarming, why that might have occurred. The result is a quick and entertaining AI primer. Shane’s goals here are pretty much to remind us that:

  1. The state of the art in machine learning today is a lot dumber than people think it is;
  2. The mistakes that AI systems can make are sometimes pretty funny;
  3. We should not forget that AI mistakes have the potential to cause humanity a lot of problems, when we fail to understand the limitations of the technology.

She explains all these things from a high level, simply and clearly, with engaging examples. She helps us understand how machine learning patterns are developed, how these patterns diverge from human problem solving strategies, and how this difference can cause problems if we don’t understand it and mis-apply AI patterns

Few rampaging robots, yet: AI is everywhere

I led up to this peek inside the book with a lot of my favorite ideas about fear of robots. But in overall tone You Look Like A Thing is decidedly not in the AI scaremongering genre. While Shane repeatedly points out why bad AI outcomes can happen, she also reminds us that both the pleasing thing about tinkering with AI, and the core sticking point with applied AI today, is that these systems are much stupider than we think they are.

She takes obvious delight in creating systems that make charming mistakes, and shares some hilarious examples. She also patiently reminds us throughout the book that one of the main problems in AI today is the ” nut loose behind the wheel” problem.

In other words, AI is dumb, and that can both be really funny when tinkering with it creatively, and can have bad outcomes when misapplied in the real world. And that the latter occurs because we don’t think hard enough about the challenges of fully constraining a very limited system, before turning it loose on real problems.

She also is careful to point out that while “AI run amok” is not what we are likely to see anytime soon, the more insidious artifacts of the misapplication of AI, the “garbage in, garbage out” effects of accidental bias, are all around us if we look closely.

Because today, AI is everywhere, and yet we still don’t as a society have the expertise to apply it effectively. Not only in design and implementation, but also in application and regulation. And that these challenges are significantly harder when an invisible algorithm is surreptitiously and accidentally making bad hiring decisions, rather than a killer robot running amok with a gun.

Be specific: kill all humans

In the chapter “How Does It Actually Learn?” Shane explains the basics of evolutionary algorithms, systems which iterate designs thousands of times, testing each against rules of acceptability called fitness functions. A successful outcome depends on careful design of these fitness functions.

She gives a funny example of a hallway that splits left and right, and the goal of using AI to design a robot that gets all the people go to the right, and none go to the left. After explaining that the system may try thousands of really bad robot designs before even getting one that can walk, she also points out that unless we explicitly constrained the system, one of the things it might do was just kill all the humans. Because that neatly solves the “insure no humans take the left hallway” requirement! 

But if we constrained that possibility and got the system building a robot that would guide but not kill the humans, we might find that after thousands of iterations on strategy, instead of the friendly guide we imagined, it might come up with a robot big enough and rectangular enough to completely block the entrance to the left hallway. In other words, a super complex, massively time consuming, extraordinarily expensive door! 

Illustrations courtesy Janelle Shane/Little, Brown Voracious

That example illustrates a second common motif sprinkled throughout the book: just because AI is having a moment right now, does not mean that it is the right solution for a particular problem. We should use AI to solve problems for which it is best suited, limitations and costs carefully considered.  

Be more specific: literal-minded toddlers

The chapter “What are you really asking for?” adds more meat on the bones of the same idea: you must be VERY SPECIFIC about everything. The section on game design has several hilarious and illuminating quotes from programmers trying to engineer a particular AI behavior or outcome, and being caught off guard by the systems doing something completely unexpected, and from the human perspective, totally wrong. Because they didn’t anticipate needing to specifically say not to.

In fact, "pause the game so a bad thing won't happen," "stay at the very beginning of the level, where it's safe," or even "die at the end of level 1, so level 2 doesn't kill you" are all strategies that machine learning algorithms will use if you let them. It's as if the games were being played by very literal-minded toddlers.

Chapter 3: How Does It Actually Learn?

Be extremely specific: AIs love to fall over

To me, so much of this seems rooted in the profound gap between the human domain and the AI domain.  Humans are so used to working inside a vast set of fundamental assumptions about every tiny detail of existence (see Kuhn’s definition of paradigm) that they aren’t even aware of the “water they swim in” until some problem (like an AI) teaches them the hard way. 

It’s so easy to miss, when designing a system, that for all intents and purposes the AI is living, and problem-solving, in a clean-slate world: where the only limitations which exist are ones the programmer can imagine and provides. In other words, it goes without saying that there is no such thing as “it goes without saying.” 

Shane provides a great example of this in the section “Why walk when you can fall?” She describes a scenario in which a designer might give a learning system a robot head, body, two arms and two legs, and tells it to put the pieces together, then go over to another place. The designer imagines, of course, this: 

Book-on-AI
The outcome, in the mind of the designer, of "put the robot together then walk it over there"
Illustration courtesy Janelle Shane/Little, Brown Voracious

But this is not the optimal, obvious, or even conceivable solution for a learning system. How long would it take a system to learn to put all those bits together into a human-like form with bilateral symmetry, and learn to stand upright, and then to move the legs in a controlled fall gait, such that the robot moves to point B similar to the way a human would?

If, before you tell it to go to point B, you don’t first teach it all the stuff that millions of years of evolution created; the strange arrangement of parts that humans use to move around in our particular complicated and inefficient way; then it may very well conclude that it is simpler and quicker to just connect all the bits up in a stack, and then tip it over, like this:

AIs love to fall over
The simplest, fastest solve for the task "asemble robot then move it to point B"
Illustration courtesy Janelle Shane/Little, Brown Voracious

Final analysis

Is it interesting? Quite: interleaved between her charming and effective thought-experiment tutorials on machine learning principles, are multiple hilarious examples of her own creative work with AI. Experiments that readers of her blog AI Weirdness will already be familiar with. Things like training a learning system to create names for metal bands, then repurposing its training set for a machine that generates names for ice cream flavors, and getting results like “Swirl of Hell,” “Beast Cream,” and “The Butterfire.”

Is it useful? Cocktail-party level: it’s not going to make us an AI expert, or even able to do the most basic things in the field. But we come away with a high level picture of some of the common systems and methods used in machine learning today. We have a better understanding of the current scope of AI. We see a lot of examples of the way the limitations of deep learning can be repurposed as fodder for creativity.  As an AI intro for beginners or casual readers, it is a very effective survey of key issues in the field. 

Is it well written? Surely: You Look Like A Thing was a surprisingly quick read for such a complex subject. Throughout, Shane patiently explains esoteric processes and complex technical concepts, anchored with funny, accessible examples, that make learning about challenging topics so much easier, and more fun.

Is it worth buying? Indubitably: the book is pretty inspiring, and makes me feel like I have a bit better grasp of this daunting topic area. And most importantly, makes me eager to peek under the hood a bit myself. Which, from the playhacker perspective, is the whole big-thing point.

3.5/5
Is it interesting? 70%
Is it useful? 40%
Is it well written? 70%
Is it worth buying? 80%

Like our illustration for this article?

Brain in a box, pretty cool.  It makes a great t-shirt!  Fitted or regular fit.  Check em out:

Responses

  1. Knowing very little about the topic, I’m struck by how consistent the programming process is—it seems the process of making a robot a racist jerk is not that different from making a person a racist jerk. In some sense that demystifies AI for me—anything a computer could do on its own is probably something a person wanted it to do, having given the computer the capacity to do it, which seems indistinguishable morally from responsibility. In other words, I guess I tend to see AI as the ability to leverage up all the best and worst parts of humanity. That’s scary, but so are nuclear warheads.

    1. Steve wrote “anything a computer could do on its own is probably something a person wanted it to do, having given the computer the capacity to do it, which seems indistinguishable morally from responsibility”

      I think that’s not the way AI works today, I think learning systems have made it impossible for us to program outcomes. So we instead have to make some guesses, write some rules, start some training, get a network back, test it, and if it looks pretty good, turn it loose on the world and hope for the best.

      If you look at the Google image search kerfuffle mentioned in the article, I think we can agree that that outcome was not something a person wanted the computer to have/do…

X