This week, I got sucked into a YouTube video showing the process of ‘deep reinforcement learning’ on an AI called Albert, who was tasked with learning how to climb stairs. It looked a long way from the uprising of the machines that many people fear, but by the end of the clip, I despaired for humanity anyway.
It began with a blocky, computer-animated figure being dropped at the top of a white staircase. He is bright orange but humanoid in shape, with googly eyes, resembling a child’s robot costume made of cardboard boxes. He is, I think, supposed to be funny. He writhes on his back for a few moments, before teetering to his feet, taking a step forward and immediately falling down the stairs. ‘You suck, Albert,’ says the caption.
The stairs are part of an enormous obstacle course: flights up and down, and patches of uneven ground. Albert doesn’t know how to tackle any of these at the beginning of the video, but he keeps on making attempts. His efforts are timed. If he doesn’t complete the course in the allotted time, he is whisked back to the beginning again. He could, I suspect, just dematerialise and appear at the start, but instead he is animated to be flung backwards, his limbs flailing, until he hits the farthest wall. The clock is reset, and he starts again.
This kind of learning is cumulative, and it’s predicated on failure. For each iteration of the task, Albert makes a new attempt, and gains a little knowledge. It’s trial and error, an important method of human knowledge acquisition, but one that we often seek to avoid if we can. Trial and error generally entails a long and dispiriting process in the initial stages before any enlightenment takes place. Brave hearts who are happy to fail over and over can reap great rewards from this method, but many people just give up.
Albert the AI does not give up, although he hasn’t got much of a choice. He has sensors in his feet to detect stairs, but at first he doesn’t know what the signals mean. He trips and falls on his face over and over again. It’s like watching a toddler acquiring gross motor skills, except that no-one supports him as he finds his balance, and no-one scoops him up when he’s on the ground. We watch him stranded on his belly, unable to get up, waiting for the timer to tick down. The caption tells us that Albert is punished for falling on stairs, ‘So Albert’s in a lot of pain.’ Pain? Punishment? Am I the only one who finds this all a bit unsettling?
I have pretty much avoided AI so far. I have not played with ChatGPT or DALL-E; I have not taken up the offer on various sites to generate captions and thumbnails. It is, I suppose, a slightly principled stance - I have little doubt that publishing, film and music executives would merrily choose cheap AI-generated content over expensive human-made art - but it’s also a lack of interest on my part. It’s just not a space I want to play in at the moment. Maybe one day, I’ll feel differently.
There are clearly a vast number of future benefits to the use of AI, which has the capacity to revolutionise medicine and solve some of the major scientific questions of the day. Yet the flip side is that it has the capacity to leave vast swathes of humanity unemployed, their emotional needs increasingly served by machines. Personally, I don’t think this sounds like much of a utopia. It’s another step in the erosion of our organic relationship with the world, yet another diminution of our complex sensory environment. All the evidence so far shows that we are not thriving under these conditions.
Those problems seem relatively far away, although they will be with us soon enough if we don’t take careful steps to stay in control of this mercurial technology. For now, AI is nothing more than a bright and accurate mirror on our own attitudes, our own desires.
Subscribe to The Clearing by Katherine May to keep reading this post and get 7 days of free access to the full post archives.