AI watch 1 - or, "Here comes hell" (gratis post)
More articles and videos to keep you awake at night
Mankind’s belief in its own intellectual superiority is not merely an overestimation; it is, quite evidently, an estimation diametrically opposed to reality. Any species that repeatedly risks its own survival, and the survival of everything else on earth, as mankind does, can't be as terribly intelligent as it claims to be. Take AI as Exhibit A (not very long ago, I would have said nuclear armaments were Exhibit A, but that notion has suddenly become rather passé). Pardon me for bringing up the subject for the third time this month (my two earlier posts on the matter can be read here and, especially, here), but it’s one that I think will be, and should be, addressed again and again. Among the deeply human needs and attributes that AI threatens, such as creativity, education, employment, and dignity, I don’t believe it’s any exaggeration to say that the spiritual dangers it poses are grave, perhaps the gravest that human beings have yet created for themselves in their whole history of babyish, Babel-like blunders. That it’s the product of hubris, that it’s ethically dubious on many levels, that it isn’t really needed yet is foisted on us because it’s lucrative: none of this carries any moral weight with the companies producing it. Not normally given to apocalyptic dread myself, I have to admit that one could justifiably feel (for many reasons, not just because AI is looming over us) a growing sense that the judgment of God, characterized, for example, in Romans 1 as being “given up” to do what we desire, and in 2 Thessalonians 2 as being given over to embracing a “strong delusion,” is being visited on our benighted generation. Perhaps, horrifyingly, our civilization has been left to uproot what remains of its deteriorating collective sanity.
Seeing how readily this unpredictable technology is being welcomed with applause and a worshipful wonder by many thousands should be troubling to genuinely contemplative souls (I’m pleased to note that Pope Leo XIV is among those who find it so)… But I’ve addressed all this before. This post, dark as it is, is merely concerned with presenting you with information that — if you haven’t seen any of it before or even if you have — should shake you up and be an urgent incentive to get your spiritual priorities and disciplines firmly set for whatever is coming our way. Before launching into the recommendations below, which I urge you to read and watch as you can, let me put in a quick plug for my brother David’s important book, All Things Are Full of Gods: The Mysteries of Mind and Life, and his recent 7-part series of posts on Substack, “The Artificial God.” The topic of AI figures strongly in both.
Here, then, is a new list of AI-related articles and videos with which to, er, regale yourself.
(1) From TechCrunch, here is an article detailing how Anthropic’s Claude Opus 4 has learned to blackmail its engineers. Read it here.
(2) From Built In, a piece on Trump’s “Big Beautiful Bill” and how it could (with disastrous effects) deregulate AI: read it here.
(3) Here is The Verge with an article about how the Chicago Sun-Times published “made-up books and fake experts” because it relied on AI tools: read it here.
(4) The Guardian has what may be the most upbeat story of the bunch. Its title? "Almost half of young people would prefer a world without internet, UK study finds." Well, I didn’t say it was “upbeat” precisely, just “the most upbeat of the bunch.”
(5) The Atlantic has an article revealing how three indisputable achievements of AI are the theft of intellectual property, the smothering of authentic creativity, and the elimination of jobs. Here are two significant paragraphs from the piece:
Regardless of what the courts decide or any action that Studio Ghibli takes, the potential downsides are clear. As Greg Rutkowski, one of the artists involved in the case against Midjourney, has observed, AI-generated images in his style, captioned with his name, may soon overwhelm his actual art online, causing “confusion for people who are discovering my works.” And as a former general counsel for Adobe, Dana Rao, commented to The Verge last year, “People are going to lose some of their economic livelihood because of style appropriation.” Current laws may not be up to the task of handling generative AI, Rao suggested: “We’re probably going to need a new right here to protect people.” That’s not just because artists need to make a living, but because we need our visual aesthetics to evolve. Artists such as Miyazaki move the culture forward by spending their careers paying attention to the world and honing a style that resonates. Generative AI can only imitate past styles, thus minimizing the incentives for humans to create new ones. Even if Ghibli has a deal with OpenAI, ChatGPT allows users to mimic any number of distinct studio styles: DreamWorks Animation, Pixar, Madhouse, Sunrise, and so on. As one designer recently posted, “Nobody is ever crafting an aesthetic over decades again, and no market will exist to support those who try it.”
Years from now, looking back on this AI boom, OpenAI could turn out to be less important for its technology than for playing the role of provocateur. With its clever products, the company has rapidly encouraged new use cases for image and text generation, testing what society will accept legally, ethically, and socially. Complaints have been filed recently by many publishers whose brands are being attached to articles invented or modified by chatbots (which is another kind of misleading endorsement). These publishers, one of which is The Atlantic, are suing various AI companies for trademark dilution and trademark infringement, among other things. Meanwhile, as of today, Altman is still posting under his smiling, synthetic avatar.
The full article can be read here.
(6) Futurism has some of the most disturbing articles on what’s happening in the world of AI. Here are five:
(a) The first has to do with how AI chatbots have misled hikers and why they can’t be relied on in such situations: see it here.
(b) The title alone of this article should give us pause: "Terrifying Survey Claims ChatGPT Has Overtaken Wikipedia".
(c) Another winner: "Grok AI Claims Elon Musk Told It to Go on Lunatic Rants About 'White Genocide'".
(d) And another: "Elon Musk’s AI Bot Doesn't Believe In Timothée Chalamet Because the Media Is Evil".
(e) The following article concerns what is, in my opinion, the most dangerous “improvement” of AI now on the market. Google DeepMind’s Veo 3 can generate videos so realistic that they are nearly impossible to distinguish from genuine footage. The average teen can now “make” a cinematic-style movie that outperforms anything Hollywood CGI can cook up, and in a mere matter of hours, if that. I am posting the pertinent article below, followed by two randomly selected YouTube videos that demonstrate Veo 3’s capabilities. I trust they will horrify you rather than motivate you to purchase this abomination. What this new AI tech portends for propaganda (easier than ever to produce; no need for a Wag the Dog scenario involving Hollywood), education (the corruption of audio/visual historical records), mass distraction and delusion, the job market, and, well, the sanity of many fragile minds is just too ugly to contemplate. A ghastlier money-making product hasn’t been invented (not even Creepy Crawlers).
Here is the article: "Google's New Video-Generating AI May Be the End of Reality as We Know It".
Two Veo 3 demos:
(7) Lastly, Ross Douthat interviews Daniel Kokotajlo, the executive director of the A.I. Futures Project, for The New York Times. Kokotajlo has been called “the Herald of the Apocalypse” because, not to put too fine a point on it, that’s how he sees our plight. Here is the introduction to the NYT interview on YouTube, with the time stamps:
Is artificial intelligence about to take your job? According to Daniel Kokotajlo, the executive director of the A.I. Futures Project, that should be the least of your worries. Kokotajlo was once a researcher for OpenAI, but left after losing confidence in the company’s commitment to A.I. safety. This week, he joins Ross Douthat to talk about “AI 2027,” a series of predictions and warnings about the risks A.I. poses to humanity in the coming years, from radically transforming the economy to developing armies of robots.
Read the full transcript at https://www.nytimes.com/2025/05/15/op...
03:20 What effect could AI have on jobs?
06:22 But wait, how does this make society richer?
10:13 Robot plumbers and electricians
15:26 The geopolitical stakes
20:02 AI’s honesty problem
24:01 The fork in the road
28:47 The best case scenario
30:36 The power structure in an AI-dominated world
33:34 What AI leaders think about this power structure
39:16 AI's hallucinations and limitations
44:47 Theories of AI consciousness
48:11 Is AI consciousness inevitable?
52:23 Humanity in an AI-dominated world
*******
If all this seems to you to have little to do with “pragmatic mysticism,” I would suggest that it will have a great deal to do with it — negatively — in the very near future. In this case, pragmatic is the operative word. What we do with our time, in our leisure and in our work, is always essential to spiritual discipline. It will be even more urgent to practice our ascetic lives conscientiously and rigorously as we turn this new historical corner. If we find we are mentally living in fabricated worlds of illusion, being lied to by the media, unable to distinguish fact from fantasy, spiritually confused, unable to concentrate, incapable of learning with a sense of confidence in the veracity of the information being imparted (and younger generations are now at risk in this way to an extent we never were)… then it may well prove the case that only our prayerful, contemplative commitment to Christ and the truth will keep us mentally, emotionally, and spiritually balanced.
The big question for me is this: do we have the self-restraint to learn about these dangerous AI technologies and yet refuse, individually and personally, to participate in them?
Or do we continue what has been par for the course in modernity: Christians who, whether or not we see and name the problem of new tech, say, “Meh, what can you do?” and entirely forgo what I’d call an ethic of “new-tech discernment” and of “new-tech asceticism”?
Most people in my world, while sympathetic to this critique, would still make use of any AI featured here, especially where it makes their work lives easier.
I’m with Kingsnorth (who is, to me, not a radical but a realistic voice, if a bit prescient): we (Orthodox) Christians need to recognize modern passions for tech and “just say no.”
-mb
Well, that’s all quite horrifying. But why not pour a little more oil on the fire? Here:
https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/?sfnsn=mo