I’ve just read this essay by the famous mathematician Terence Tao, written together with an artist, about the use of AI in mathematics. It’s well worth a read and very well-written1. It focuses mostly on the familiar territory of problems caused by currently existing AIs. Scientifically we have the human-centipedification of knowledge: students using AI to do their homework, professors using AI to grade the students’ homework, researchers using AI to write papers, referees using AI to write reports on those papers, and so on. Socially we have the massive loss of comfortable, safe jobs like call centre operator, software developer, illustrator, musician, and writer, further weakening labour and inevitably increasing capital’s share of income. Environmentally we have the massive construction of data centres running on the dirtiest possible energy, just when climatologists have concluded that the Gulf Stream will most likely collapse after all2.
But everybody knows this. What I want to talk about is the future problem they barely mention: what happens when AI can do scientific research better than us? Note that I’m not talking about “if” but “when”. The idea that there’s something magical about human cognition that can’t be replicated in silicon is so plainly ridiculous that I only mention it here because it’s so widespread. That doesn’t mean current LLMs can do it, though. They have fundamental limitations that prevent them from ever being capable of scientific research: they cannot learn, they don’t have a sense of time, they lack consciousness and a will. Another revolution in AI will be necessary for it to happen. But it will happen.
Here Tao believes that we will be able to live with better-than-human AI the way chess players live with chess engines they can never hope to beat. I think that’s a desperately naïve point of view. The vast majority of chess players play for fun, not as a job, and chess engines can’t do their actual jobs. Scientists, on the other hand, do research almost exclusively professionally3. Why would society pay us to do research when it can be done better and cheaper by machines? It obviously won’t.
What will we do then? I can think of three possibilities:
- Do a communist revolution and spend our days masturbating, smoking weed, and playing video games.
- Do the jobs that AI/robots still can’t handle.
- Die.
I don’t think the first alternative is likely; by that point the billionaires will control all the killbots, making revolution impossible. And even if some miracle happened, it’s not a very appealing future: masturbating, smoking weed, and playing video games are fun but get old really fast. We need more than that in life.
If the first alternative is excluded, the second is almost tautological: we are definitely not going to do the jobs that AI/robots can handle. What will we be doing, then? If robotics continues to lag behind AI, it looks like menial jobs, in a rather ironic reversal of the first industrial revolution. And when the robots do catch up, I guess there will always exist billionaires with a fetish for human servants?
Other than that we can of course die, perhaps at the hands of the aforementioned killbots, in order to make space for a continent-wide golf course.
To conclude, I think the future is going to be an unmitigated disaster, and the best we can do is delay the inevitable. But even that seems too much to ask; in reality we have lunatics like Fields medallist Tim Gowers enthusiastically teaching AIs to be better mathematicians. Come on, that’s like the Aztecs trying to make Columbus arrive sooner.
