With the news focused on wars and rumors of wars at the moment, not to mention a politically motivated shooting and a certain recent birthday parade in D.C. celebrating the Military-Industrial Complex, to return to the subject of AI may seem not especially au courant. But it is. In a discussion yesterday with a friend, we agreed that perhaps the answer to “what’s coming” (and, as my friend noted, the “singularity” is pretty much already here) will lie in a rigorous recovery of monasticism and, in general, the principles of asceticism among Christians. It may sound paradoxical to some, but the truly human is to be found in discipline and discipleship rooted in the love of God and neighbor. There we have the great promise of encountering God as personal and present, not as distant or abstract: as the great I am, but with the face of Christ (cf. 2 Cor. 4:6). The counter to “Transhumanism” — which is simply a form of anti-humanism or something grotesquely inhuman — is the divine humanity of Jesus, incarnate in flesh and blood, resurrected and glorified.
But more on that in a future post.
I want to make it clear before continuing that I see positive value in some features of AI, but only those that enhance human life and thriving. Any ideological vision of it that relegates humanity to a “stepping-stone” to some greater “transhuman” existence, I reject as — to put it as gently as I can — intrinsically evil. I also want to make clear to readers of my Substack that I will never, ever use a chatbot to “write” anything I post, nor use AI to do my research for me. I’d rather make my own mistakes than have a robot make them on my behalf.
Turning, then, to the purpose of this post: to put before you some more articles I’ve found noteworthy (Futurism is by far the most informative popular resource on AI, by the way).
(1) The first two articles have to do with some of those behind the scenes pulling the strings. They are, in my estimation, a mentally, emotionally, and spiritually sick lot, and because they’re obscenely wealthy and, therefore, too powerful to be stopped, they’re dangerous to the rest of us.
Here is an article that indicates just how delusional the minds behind AGI are: Top AI Researchers Meet to Discuss What Comes After Humanity. If you can access the Wired article linked within it, that piece is disturbing as well.
And if that isn’t unsettling enough, here is Sam Altman, oblivious to all climate concerns, alerting us to where he thinks the world’s energy should mostly be expended: Sam Altman Says "Significant Fraction" of Earth's Total Electricity Should Go to Running AI. As my brother David remarked when I forwarded this article to him, “Conservative estimates say that the acceleration of AI development and the deregulation of crypto mining will triple climate change in the next decade over former standard projections.” So, thanks for nothing, Sam.
(2) The titles of the following articles speak for themselves.
From Unherd: Meta chatbot shows why AI privacy is a delusion.
This, from Unilad, may be the most sensationalist of the batch, but it shouldn’t be dismissed as wholly incredible: Scientists sound alarm with astonishing 'realistic' timeline for total AI takeover.
Gizmodo had this to add to the stack: ChatGPT Tells Users to Alert the Media That It Is Trying to ‘Break’ People: Report.
(3) Newsweek published an article on how AI (which, to paraphrase Christopher Hitchens, poisons everything) has begun to corrupt our perceptions of female beauty: Nothing Looks Beautiful Anymore—and We Did This to Ourselves. (In the article, the photo of the AI-generated “ideal” of a “beautiful” woman strikes me as wretchedly cartoonish, I must say.)
(4) Amanda Guinzburg has put out a Substack, which she describes this way:
Presented to you in the form of unedited screenshots, the following is a ‘conversation’ I had with Chat GPT upon asking whether it could help me choose several of my own essays to link in a query letter I intended to send to an agent.
What ultimately transpired is the closest thing to a personal episode of Black Mirror I hope to experience in this lifetime.
You can read the aptly titled post here: Diabolus Ex Machina.
(5) Just to show that it’s not all doom and gloom, here are two pieces that reveal a glimmer of hope (even if, taken together, it isn’t much):
ChatGPT "Absolutely Wrecked" at Chess by Atari 2600 Console From 1977
ChatGPT Has Already Polluted the Internet So Badly That It's Hobbling Future AI Development
(Thank the Lord for small favors. All “hobbling” is welcome.)
(6) Lastly, I present a piece from Catholic Herald: The Gospel according to Silicon Valley. It may be a little simplistic for some, but it’s certainly not wrong. If there is a contradiction in terms as pronounced as “Christian atheism,” surely it’s “Christian Transhumanism.” (The fact that someone like Peter Thiel can call himself a “Christian” only indicates how degraded and undermined that name has become, as if it no longer carries any real content.) Transhumanism is not just “not Christian,” it’s quintessentially “antichristian.”
Clarity and wisdom together as usual — thank you.
Are you aware of Padre Pio’s prediction that the signs of a future world crisis would be: 1. the worship of robots and technology-based methods, and 2. attempts to separate reproduction and birth from living women?
I believe there was one more aspect of dark-force activity that Padre Pio discussed. And then there is a stranger prophecy (I’m not sure it comes from Padre Pio, but it circulates in some Catholic circles) that there will be a three-day period of darkness and power outages and that everyone should stay inside. I don’t know; that seems more like a product of the mass unconscious and of anxiety than a prophecy from a holy source.
I must admit, Ross Douthat's interview with Daniel Kokotajlo freaked me out. For a few days, I felt a kind of depression, worried that there was a not-insignificant chance that rogue AI might wipe out humanity, including my precious children, with some kind of virus before 2030.
However, I reached out to a friend, a Princeton graduate and software engineer, for his take, and he pointed me to Arvind Narayanan and Sayash Kapoor's "AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference." Narayanan and Kapoor don't deny that AI, especially generative AI, has made significant progress. But they also highlight how the technology is often overhyped, both by AI companies eager to generate buzz and by uncritical science journalists.
Most importantly, they make a compelling case that rogue, superintelligent AI is likely to remain firmly in the realm of science fiction. We should be more worried about what bad human actors will do with AI rather than AI itself.
I highly recommend their work to anyone trying to separate the wheat from the chaff when it comes to AI.