The Daily, a podcast by the New York Times, has given me more food for thought in a recent episode: “The Godfather of A.I. Has Some Regrets,” in which their reporter interviewed Geoffrey Hinton, a pioneer in the A.I. field. Of course this kind of thing caught my eye. Since the recent explosion of A.I., I’ve pretty much convinced myself that it will be the end of life as we know it. Now, some people will insist that that is a good thing – that A.I. will be used to usher in a new era of peace and expansion into the broader universe. But they’re just not looking far enough beyond the cool stuff it does for them now. A.I. itself has changed, even in the short time since ChatGPT hit the world, and even those simple changes are pointing to something.
For those on the “A.I. will change life as we know it (and it won’t be good)” side, some of what was being said in the piece struck a chilling chord. Take for example how A.I. can learn. It’s not just a matter of this behemoth trying to figure it all out. It’s that several smaller programs can learn specialized information, and then share that knowledge instantaneously. What A.I. knows can grow (and be retained) exponentially. Not so, the human brain. We are finite. We forget. We cannot think in the aggregate.
Furthermore, A.I. can be developed to have no ethical boundaries. Sure, we can all talk about how responsible scientists will come up with a set of rules (the three laws, right?), but if you believe responsible scientists are going to be the only ones using and developing A.I., you’re naïve beyond safety. We will see nefarious actors telling A.I. to make money for them without stipulating that its activities actually be legal. On top of that, A.I. doesn’t need to care if it is wrong – that it has “lied” about something or fabricated information. If told to meet an objective without being told to work within the bounds of ethics, it could choose means that bring great harm to people and systems. It could generate an entirely false narrative and back it up with deepfake audio and video. It could create “victimless” child pornography. It could literally start wars, and at the very least, we could come to a time when we can no longer trust anything we read, watch, or hear on the internet. The movie Ex Machina comes to mind here. A brilliant movie in some ways, but the end was so absolutely haunting and disturbing, I refuse to watch it again (although I recommend it if only for the sobering message). I won’t spoil it other than to say, “A.I. doesn’t care if you die or not.”

This in turn brings to mind the Babylon Bee article, “’A.I. Will Be Totally Great For Humanity,’ Says Man Who Has Never Read A Sci-Fi Novel.” I mean, c’mon – have you not seen Kubrick’s 2001: A Space Odyssey? HAL 9000? That was 1968, for crying out loud. We knew this over half a century ago, and Asimov was writing about it before then. It’s not a matter of if something goes wrong, but when.
The Daily host and the reporter who conducted the interview ended their discussion with what I think was a rather naïve assertion. They were eager to point out that Hinton could be wrong, as he has been in the past. Their example? Hinton said five years ago that by now radiologists would be obsolete, and they aren’t yet. But really, Hinton was only a bit quick on his timeline. It’s obvious that in the near future, A.I. will be able to read imagery far more quickly and efficiently than the human eye. It just hasn’t happened yet.
Now, before you get discouraged, you know I have something deeper to consider here. I was struck by something Hinton said about his upbringing. He said he came from an atheist family, but went to a Christian school, so he was “very used to being the outsider, and believing in something [science] that was obviously true that nobody else believed in.” I don’t know how he feels now, but I was struck by this – he labored over a great deal of his lifetime to write this complex algorithm that could imitate the neural network of the brain. His work could be considered the genesis of A.I. And yet, he doesn’t see the irony of it? That the creation of this intelligence required intelligence? That it required information, and that information doesn’t just “appear” by some magical means?
If I were more articulate and had the time to think it through, I might ask him something like, “You created the algorithms of A.I. Do you think that, given nothing – no computers, no mathematics, no science…nothing – intelligence would have appeared naturally given enough time?” Because, when you think about it, this is what some scientists claim: that everything we see and are came from nothing and developed over billions of years. But in truth, evolution can’t explain this. It can’t explain where anything even came from.
What does explain it? The rise of A.I. gives us a clue: the algorithms that created A.I. were written by an intelligent being. Without an intelligent being to create the computers and discover the math, the algorithms for A.I. would have never come into existence. We could wait billions of years, and still, nothing. Trillions of years if it came to that. But wouldn’t we expect the second law of thermodynamics to eventually overwhelm the process? Wouldn’t we expect things to deteriorate and fall apart over time, not become more ordered?
I’m far under-qualified to speak on this, but I appeal to simple logic here. Starting with nothing but a complete void, you cannot expect the appearance of anything, no matter how long you let it sit. And to further complicate things, I would say that even a void is something – a three-dimensional space in which I’m hoping for something to appear. Now, let’s take away the void and replace it with nothing – a concept so hard to understand that to even try to define it destroys it by making it into something (a thing we have defined).
In the end, A.I. should at least teach us this – it takes intelligence to “create” an intelligence. We have done it with A.I. Who did it with us?