Dumbing Down

Some books I read slowly and some I devour.

Humans are natural classifiers—we love pigeon-holing. He’s an idiot, she’s beautiful, a naturally happy baby, that dog was born angry… it’s how we roll.

Some people never read books—take the orangutan—in fact, when you look at that pile of classified papers strewn over the carpet, you wonder how many lifetimes it would take to read those materials. They’re pretty much his equivalent of a presidential library.

Others read books occasionally; some feel they should read regularly, so there’s always a book on the go—what book are you reading at the moment?

Into the last pigeonhole go people like me who read various books concurrently—some apace. Ray Kurzweil’s book ‘The Age of Spiritual Machines’ is one of my slow books. Anne Applebaum’s ‘Red Famine’ is another, and Alvy Ray Smith’s ‘A Biography of the Pixel’ is yet another.

For different reasons.

Applebaum because the horrors comrades Lenin and Stalin inflicted on Ukraine in the first half of the XXth century are worse than what the current dictator ending in ‘in’ is doing in the first half of the XXIst—I just can’t read it at one sitting—it’s too brutal.

Alvy Ray Smith because the parable of the pixel has a lot of math in it, and although I read a lot professionally, this kind of reading (and writing) should be both hobby and relaxation. The Pixel is a brilliant book, and the history of images, video, movies, and Pixar is compelling, but it is a journey.

Kurzweil is a futurist, inventor, and deep thinker. One of his big ideas is the singularity—a point when machines surpass humans in intelligence, which opens up the wriggly, elusive, and stinky can of worms called Artificial Intelligence.

AI is a recurring topic of mine and an integral part of my new book, The Hourglass—yes, I’ve finished it, after six years’ work—well, there’s an epilogue left to write, and that will happen later today.

I have very mixed feelings about AI—it’s the classic case of the sorcerer’s apprentice. We don’t know where we’re going, but we’re pushing on. It’s kind of weird—when humans emerged from prehistory, other animals must have thought, ‘These dudes don’t stand a chance.’

Elephants, lions, gorillas, wolves, and eagles did a two-minute threat assessment and concluded, ‘Look at these little rodents scurrying around. They can’t run, jump, trample, fight, or fly. I wonder if they even taste good.’

Ever since that trivial underestimation by the entire animal kingdom, courtesy of a bizarrely brilliant brain, the opposable thumb, and tool development, we have engaged in controlling every other life form on the planet through domestication, mastication, and extermination.

In the case of AI, we seem inordinately keen to develop our new masters, and we are well on the way to doing so. This is Kurzweil’s singularity—he predicts it will occur by 2045—a mere quarter-century from now, about one human generation.

In practice, this means any child born today will be subjugated by machines by the time they reach adulthood.

We see AI at work every minute of the day, for both good and ill—it helps simplify tedious tasks, improves medicine, grants access to knowledge… and replaces jobs humans perform perfectly well with impersonal, remote interactions.

I have speculated that humans will never be dominated because we are just too evil—we’ll never manage to make machines that nasty.

But there’s another side to AI that doesn’t work at all—it relates to ambiguity and interpretation, and of course that dovetails with humor.

Fallacious argument—not to be confused with fellatious argument—is one example.

The duchess has a beautiful ship but she has barnacles on her bottom.

This classic fallacy works only because ships are female in English, and it is quoted in guides to better writing. Humans can of course tell the difference. AI could analyze the statement and conclude that a barnacle is a marine crustacean—it would attribute a low probability to the assumption that the duchess regularly parked her ass in seawater long enough for free-floating barnacle larvae to settle, review the anti-fouling literature in a nautical context, and draw the correct conclusion. A human would smile at the ludicrous statement and move on in a millisecond.

About ten years ago, researchers pointed out that simple questions whose answers are evident to humans give AI a run for its money.

Do alligators sew?

How long does it take a wolf to bake a cake?

Do newts play piano?

Can a ridgeback strum chords?

The above are my versions—Google made a pig’s ear of all the replies, and the images it returned for that last question were dumb.

The most interesting features of this Google search are (i) that the global search showed no relevant hits and produced only a half-page of images; and (ii) that there is no connection between dog and guitar. I called the file ridgeback rock to throw AI off the scent. Proper AI would suggest I’m taking the piss.

And yet, my last question is a refinement of ‘can dogs play guitar?’, a question any playful four-year-old might pose. And if you said yes—I would, explaining that dogs do it by squatting, extending their (fretboard) tail across their body, and strumming with their right paw (unless they’re left-handed)—the child would giggle and tell you you’re teasing. Duh.

Oh, and FYI dogs never use thumbpicks.

But AI could explore the fact that ridgebacks are dogs and that a chord is played on a stringed instrument such as a ukulele, mandolin, or guitar. The lack of association between dogs and musical instruments might give the computer a hint that I was taking the piss.

Incidentally, if you ask Google: Can cats take the piss?

It comes back with piffle such as ‘Is my cat urinating inappropriately?’

My deepest sympathy to folks who wander through life asking those sorts of questions.

Researchers into the dumb side of AI formulated ambiguous questions such as:

Joan made sure to thank Susan for all the help she had received. Who had received the help?

a) Joan
b) Susan

or

Sam tried to paint a picture of shepherds with sheep, but they ended up looking more like golfers. What looked like golfers?

a) The shepherds
b) The sheep

It tickles me particularly to imagine sheep looking like golfers—maybe they stole the crook.

Such questions, which are classified linguistically as anaphora—and known in AI circles as Winograd schemas—are AI kryptonite.
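To see why, here is a toy sketch in Python—my own illustration, not anything from the research—of the kind of surface heuristic a naive program might use: resolve a pronoun to the nearest preceding candidate noun. It answers the Joan and Susan question confidently, and wrongly, because the right answer needs world knowledge (you thank the person who helped you), not word order.

```python
# Naive coreference heuristic: resolve a pronoun to the nearest
# preceding candidate noun. On Winograd-style schemas this surface
# rule fails, because the correct antecedent depends on world
# knowledge rather than syntax. (The heuristic is a deliberately
# simplistic illustration, not a real NLP system.)

def nearest_antecedent(sentence: str, pronoun: str, candidates: list[str]) -> str:
    """Return the candidate that appears last before the pronoun."""
    words = sentence.split()
    position = words.index(pronoun)
    best = None
    for word in words[:position]:
        if word.strip(",.") in candidates:
            best = word.strip(",.")  # keep overwriting: last one wins
    return best

schema = "Joan made sure to thank Susan for all the help she had received."
print(nearest_antecedent(schema, "she", ["Joan", "Susan"]))
# Picks 'Susan' — but Joan is the one who received the help.
```

A human resolves ‘she’ to Joan instantly; the nearest-noun rule, like much pattern-matching AI, is fooled by proximity.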

One of the foremost proponents of AI is IBM—forever embarrassed when its poster child Watson told Jeopardy! that Toronto was a US city.

Perhaps they should have called it Sherlock.

Watson, I mean, not Toronto.

The India Road, Atmos Fear, Clear Eyes, and Folk Tales For Future Dreamers. QR links for smartphones and tablets.
