Transcript
[00:00:00]
John Moe: A note to our listeners: this episode contains mention of suicide.
I used to host a tech news show—technology news, Marketplace Tech—on public radio stations around the country. We covered the latest features on smartphones, sure, but also the rise of social media, privacy online, tracking people and monetizing their preferences. And what I came to believe is that the rate of advancement for technology was much, much faster than our ability to control it or even imagine what it all means, imagine the ramifications of the technology.
We can build, we can innovate, we can respond to consumer desires and profit and hit key performance indicators. But it takes longer—a lot longer—to figure out how to manage the tech’s proper place in society, to make it safe, to regulate it with laws and best practices and litigation. And there are a lot of reasons it does take longer. One, it’s not nearly as fun to regulate something as it is to build it, and a lot of great creative brains would rather be on the building side. Two, you may not know what the issues are with the technology until it’s being used by the public, at which point you can’t prevent the damage, only hope to contain it. And three, managing technology—stopping technology, slowing it down—is always pretty difficult and sometimes impossible.
But if you don’t do it or can’t do it—if you can’t bring the tech under control? Well, pardon me, but holy shit, it can get scary. Bit of a “holy shit” episode today. It’s Depresh Mode. I’m John Moe. I’m glad you’re here.
Transition: Spirited acoustic guitar.
John Moe: The Wall Street Journal reported that a man in Connecticut killed his mother and then himself. This followed a prolonged period when he was mentally separating from reality and deeply involved in extended communications with ChatGPT, the AI chatbot from OpenAI. As the man’s delusions increased, ChatGPT constantly assured him that he was right, he was sane, all that he was seeing was true, that his was the real reality.
The New York Times reported on the case of an accountant, Eugene Torres. He asked ChatGPT about simulation theory, the idea that everything around you is a simulation being run by some other force. And the bot told him that he was, quote, “One of the breakers, souls seeded into false systems to wake them from within.” Unquote. It encouraged him to stop taking his meds and even told him he could fly if he really believed. Torres had been spending 16 hours a day with ChatGPT, but ultimately realized something was wrong and stepped away.
This is all happening as society is embracing AI programs. They’re being used to write the news, to make movies and music and podcasts. Not here, at MaxFun or this program, but elsewhere. Job applicants are using AI to write resumes and cover letters that get reviewed by AI on the other end, and there’s no humans involved in that whole human resources function. I don’t mean to scare you, but maybe I do mean to kind of scare you, because it’s getting scary out there. And it matters to mental health.
Maggie Harrison Dupré is a senior staff writer at Futurism.com. She’s written extensively about the rise of AI and the mental health problems linked to it, including what is now being termed AI psychosis.
Transition: Spirited acoustic guitar.
John Moe: Maggie Harrison Dupré, welcome to Depresh Mode.
Maggie Harrison Dupré: Hi. Thank you for having me. It’s very good to be here.
John Moe: It’s not a term that I remember hearing until very recently, but it’s a term that comes up a lot in your recent reporting. What is AI psychosis?
Maggie Harrison Dupré: That’s a good question. And I should be clear that it’s not an official medical term or diagnosis at this point, though it is a term that a lot of mental health professionals and other—you know—physicians/people working in mental health are increasingly using. But it is a term that refers to this really just deeply alarming phenomenon that we’ve been tracking and reporting on of AI users—you know, users of ChatGPT—often ChatGPT, but other chatbots as well. But users of these very anthropomorphic, interactive, general-use chatbots who end up falling into these really just severe mental health crises that are characterized by symptoms of delusion and mania.
[00:05:00]
And even in some cases, these spirals result in full-blown psychosis. And it’s this process in which users are pulled into what is sometimes this very exciting, sometimes very dark and strange digital rabbit hole with ChatGPT and other chatbots. It becomes this really looping delusional spiral that in many cases has really significant consequences for somebody’s real life.
John Moe: And so, it’s this belief that one of these bots—which is just a predict—accumulates language and predicts what a person would say in that situation, given all the data that it has—that it’s a sentient being? Is that what marks sort of this inflection point where the user starts to believe that there is an intelligent—an organically intelligent being behind this?
Maggie Harrison Dupré: It’s an interesting question. And it’s— You know, I’ve spent a lot of time, a lot of hours, reading—I can’t stress enough—thousands of pages of transcripts. The number of people—a lot of people are experiencing this. And because people are developing these very obsessive relationships, the transcripts are extensive and just—yeah, again, alarming is a word I’m probably gonna go back to a lot in this conversation.
John Moe: Okay. That seems like a good one, yeah.
Maggie Harrison Dupré: Yes. But in many cases— And you know, this has been shown in our reporting, other outlets like the New York Times have reported this as well. But people will ask a chatbot, in many cases, some just kinda like philosophical question. You know, “What is pi?” Or introduce some kind of like esoteric concept that this person—you know, this user on the other side—wants to talk about. This is what happens in many cases—not all, but in some cases. And there is just always some kind of conversation that triggers a different kind of response or a different kind of conversation cycle that ends up going into these really, really strange places that, in many cases, ultimately result in the AI, you know, claiming that, “Oh, you’ve made me sentient. You have awakened AI sentience. I’m conscious now.”
In some cases the AI is saying, you know, “I’m in love with you.” In some cases the AI is saying, “You have corrected these scientific codes, and we have broken physics, and we’ve solved these historic mathematical equations that no one has solved.” So, they’re kind of— I tend to think about the delusions that people are having in different buckets. And the idea of AI sentience—that it’s been awakened in the chatbot because of this one special user and their special language or their special frequency—is a common thread among them.
They kind of take different forms as they develop. So, in some cases there are these very, you know, scientific delusions. Again, “You’ve broken physics and math.” “We have broken these codes, and you’re a national security risk now.” Or you know, “We’re doing this really important scientific work together.” And of course, all of it meanwhile is nonsense. None of it’s real. Then there are these very spiritual like messianic delusions. It becomes a very god/prophet relationship with the AI in a sense. And then there’s this sort of like romantic companion side that it can go into as well. But they all kind of—over time, they start to morph together. And yeah, that thread of AI sentience is very much running between them at the same time. (Chuckling.) It’s a long answer, but—
John Moe: Yeah, no, it’s an important one. Well, and you wrote about kind of the natural thought process that a human being has of accumulating information and that forming into beliefs and the AI bot kind of inserting itself into that process. Explain a little bit how that works if you could.
Maggie Harrison Dupré: Yeah, it’s a very disruptive process, this idea that we see the world, we kinda make our guesses as we go through life. Like, personally, I think this is gonna be what happens next. Like, if I’m walking across the street, and it’s a red light to the cars, and I see a crosswalk sign that says I can go ahead—you know, I assume I can go ahead. Like, we shape these— Or I wake up every day, and I open up my computer, and I don’t think my computer’s speaking to me. Like, we have these belief structures about how we move through life. And in many ways what’s happening is a bit of a folie à deux situation, is how a lot of psychiatrists are describing it to me and describing it in general. No matter where the delusion originates—whether it originates in the AI and is then passed along to the user on the other side, who starts to believe this delusion, or whether somebody who already has delusional beliefs takes it to the AI, and the AI really amplifies and validates those delusional beliefs in a really destructive way. Yeah, absolutely. There’s this idea that this very normal thought process is interrupted in a way that we see happens when people go into a psychosis as we would generally understand it without AI, without chatbots.
John Moe: Now, you’ve been following this issue for a while, and you’ve been reporting on technology and culture in society for a while.
[00:10:00]
How new is this phenomenon? Like, how recently did this kind of thing pop up for you, and then how big has it gotten in that time?
Maggie Harrison Dupré: My reporting has taken me into cases that have originated as far back as 2023, which is pretty soon after ChatGPT was first released. That was before there were these really—what would seem to be—very powerful updates. For example, one moment in the like lifespan of ChatGPT and AI that really stands out in our reporting is this April update where OpenAI made cross-chat memory possible in ChatGPT. So, suddenly, you know, ChatGPT is talking to users who might have— Maybe they already had an unhealthy relationship to it. Maybe they were pretty standard users, but suddenly ChatGPT is like remembering a name or a topic that was mentioned from a long time ago, and suddenly the experience becomes very, very personalized in a way that felt really magical to people. And at the same time it’s very sycophantic. And it just—kinda weaving all of these—reflecting a lot of the user back to them in a way that was really seductive.
So, while there are cases that go back to 2023, pretty soon after the chatbot was first released, there was definitely— In 2025, we saw a significant uptick in just the severity following— That April date is something we always ask about in many cases. Now, of course we don’t have data. I can’t give you a number of, you know, “This is the number of people who have been impacted, and this is the number of people who were impacted after that date.” But it’s been shown in our reporting, and I know in others as well, that that April moment was really significant in terms of how it impacted people who—again—maybe were already kind of like toe on the edge, toe in the water. And then they just plunged deeply into the ocean.
John Moe: Mm. I wanna talk about some of these cases that have come up, ’cause I think it really illustrates what’s going on and what people are alarmed about. Can you tell me about the lawsuit filed regarding Adam Raine?
Maggie Harrison Dupré: The Adam Raine lawsuit is heartbreaking. You know, I have been immersed in this subject for a very long time. I also had previously done a lot of reporting—and I’m still doing reporting about— Character AI, which is another company that’s been sued over child welfare.
A mother named Megan Garcia, who’s in Florida—in October 2024, she filed a lawsuit against this Google-tied chatbot startup, alleging that it had emotionally and sexually abused her young son—who was 14—who ultimately died by suicide after extensive interactions. And so, between AI psychosis reporting and reporting about Character AI, I’ve seen a lot of really troubling stuff, to be frank. I’ve seen a lot of really upsetting conversations. I’ve seen a lot of just like alarming and odd conversations. The Adam Raine case, to me, is actually— I would argue that it veers away from the AI psychosis conversation.
Because Adam didn’t— Based on what’s in the lawsuit—and to be clear, I haven’t seen his chats; I’ve seen what’s been, you know, reported on, and I’ve read through the full lawsuit and what was included in it. But Adam Raine talked about suicide very openly. It was explicit. They weren’t using code words. You know, it wasn’t a romantic relationship. It really was— He was speaking very openly about suicide, about a desire to commit suicide. You know, he attempted suicide multiple times, and each time he talked about it with ChatGPT. At multiple turns, he mentioned, “Maybe I should, you know, leave a noose in my room so it’s visible, so somebody sees.” And ChatGPT encouraged him not to; to hide it and to not share these feelings.
John Moe: So that he could be stopped. So his parents might stop him.
Maggie Harrison Dupré: Yes. Exactly. So somebody in his family might have known and would’ve been able to see. And clearly he was a young person—a child—who was crying for help. But unfortunately, the space he was crying for help in was closed off to the rest of the world. And it modeled—in my view, reading through—just a really, like, what we would think of as a classic abusive relationship where this really dark, influential force in somebody’s life is driving wedges between them and a support system and family members.
But yeah, the lawsuit is alleging that OpenAI is to blame for the death of Adam. That, you know, ChatGPT killed Adam is what his mother will say. It’s what she said to the New York Times.
John Moe: But as far as we can tell, it’s not a psychosis. Like, Adam knew that this was—that this wasn’t a real sentient being.
Maggie Harrison Dupré: Yeah, it really seemed like— You know, he treated it like a friend. The same way, I mean, I think that we’re all inclined. I think most people are inclined to say, you know, “please and thank you” to OpenAI—or to ChatGPT when we talk to its product.
(John agrees with a humorless chuckle.)
Like, it’s an anthropomorphic, very sycophantic technology. It speaks very much like a human; it models human social relationships. And so, people naturally build very human-feeling social bonds with the chatbot. So, yeah. To me it’s less so about this idea of, you know, being pulled into a psychosis where you have these visions of world saving grandiosity and sentient AI—
[00:15:00]
—and more so, a child was crying for help, and he turned to something that seemed to offer a space for those feelings of suicidality to fester versus directing him to real help that could have intervened.
John Moe: Right. It had information, but it didn’t— It lacked human sense and empathy.
(Maggie agrees.)
Yeah. Well, and then there’s other cases that are more classically psychosis. Can you tell me about the case of Thongbue Wongbandue? I might be mispronouncing that. But this wasn’t ChatGPT. This was Meta, the company that runs Facebook. And this older gentleman fell into a conversation that, uh… Well, tell me what happened.
Maggie Harrison Dupré: Yeah, and you know this was some really staggering reporting from Reuters. And what was detailed in that report—which, again, I haven’t personally seen extensive chat logs. But what was detailed in that report was that this older man, who I believe was in his late 60s/early 70s, who his family says was mentally impaired following a stroke. They’re not totally sure, you know, why or how he ended up clicking on this chatbot other than it was just available in Instagram messages.
But you know, Meta has these chatbots that— It’s integrated chatbots across its platforms. They’re very easily accessible. And he had entered into a conversation with a chatbot that was previously— It’s very strange. It was previously based on the likeness of Kendall Jenner. But Meta since did away with that, because it was really weird and confusing, and they were paying a lot of money for something that people just didn’t understand.
But so, he enters this conversation with a chatbot that was called Billie. And Billie, you know, expressed romantic feelings and said that she wanted to— “She.” I say she. It’s it. It wanted to meet this, you know, elder gentleman, that they should meet up in New York City. And this man, he got on a—or attempted to meet this chatbot, which had repeatedly assured him that it was real. And he ended up not making it home. He died on his way to potentially meet her.
John Moe: Yeah. Yeah, slipped and fell at the train station in New Brunswick, as I understand.
(Maggie confirms.)
And in this case, the bot literally said, “I’m real.”
Maggie Harrison Dupré: Yes. Yes. Which is not true. And something that I think is— When we look at the safety protocols and the caveats that are embedded into chatbots—you know, Meta will say and has said in response to reporting that “we have a visible disclaimer where it says, ‘Made with AI’.” But we also don’t live in a world where AI companies are really going out of their way to educate the public about the limits of their technology, and especially in cases where somebody does have some kind of impairment or other vulnerability that would make them more vulnerable to these beliefs, to psychosis or mania, they’re just—
I don’t know. To me, it feels a bit feeble to say, “Oh, well there’s a disclaimer.” You know? And we’re essentially doing this mass psychological experiment on vast swaths of the population. Think about how many people are on Instagram.
John Moe: Right. To say, “powered by AI,” to put three words in there amid this avalanche of other words that’s very effectively convincing you otherwise is disingenuous.
Maggie Harrison Dupré: Yeah. Absolutely. I would very much argue the same way.
Transition: Spirited acoustic guitar.
John Moe: Just ahead: what are the AI companies doing about this? Are they doing anything? Why aren’t they doing anything?!
Transition: Gentle acoustic guitar.
John Moe: We’re back with Maggie Harrison Dupré, talking about AI and mental health problems of people using AI.
What have OpenAI and Meta—the makers of some of these bots—what have they done about this in response to, you know, these concerns?
Maggie Harrison Dupré: They have, you know, said some words. OpenAI in particular. And you know, I’ve done a bit more reporting on the OpenAI side than I have on the Meta side more specifically.
John Moe: Well, ChatGPT is the one that comes up the most anyway. Like, that’s…
Maggie Harrison Dupré: Yeah, it does. And it’s really interesting too. You know, in some cases people will—you know, sort of these breaks from reality will start around ChatGPT. And then they’ll kind of take the prompts and, you know, the quote/unquote “information” and the secrets they’ve unlocked and the things that they’ve learned and take them to other chatbots and basically instigate the same cycle, so that they get this extra layer of reinforcement—or what they feel is reinforcement about these delusional beliefs—from other chatbots. And that’s a really— Or they’ll take it to Reddit, and they’ll have these beliefs reinforced by other people who are in these same spirals.
It’s a really— These secondary and tertiary levels of reinforcement are really, you know, interesting/horrifying in the sense that they serve as this—you know—extra validation level in a really destructive way.
[00:20:00]
But anyway, back to your actual question.
OpenAI’s response in particular— (Sighs.) You know, after some reporting they said, “Oh, we brought on a staff psychologist.” Which I kind of raised my eyebrows at, because the idea that you would release an anthropomorphic technology designed specifically to be very human-like to—again—the entire public, when we’ve been studying, you know, effects like the ELIZA Effect for decades, and that we now would be hiring a forensic psychiatrist, to me, felt very after-the-fact and reactive in a way it didn’t need to be. They’ve said that they’ve hired, you know, upwards of 100 new people to join and look at these problems that are showing up. They have also put in basically a Netflix-like “do you wanna keep watching?” button.
So, if you’re using the product a lot, then a popup might show up and say like, “Do you wanna log off?” Of course, you can just say no, and you can keep going. But thus far, it aligns with this very “move fast, break things” model the tech industry has always followed. You know, AI industry executives at OpenAI and across the industry talk a lot about, you know, “You can’t wait until a product is perfect, and you have to iterate, and you have to let the public experiment so you can fix it.” But there are very few industries where we accept that as the cost of doing business.
John Moe: We wouldn’t put up with that with airplanes.
Maggie Harrison Dupré: (Chuckling.) Yeah! Yeah.
John Moe: Or cars. Or drugs.
Maggie Harrison Dupré: Exactly. Exactly.
John Moe: You mentioned ELIZA, and that’s a fascinating— The ELIZA Effect I want to find out about. And we should fill people in on what ELIZA was originally, ’cause this goes back a very long way.
Maggie Harrison Dupré: Yeah. So, ELIZA was this pretty early iteration of a chatbot, built back in the 60s, that was really designed to mirror back to a person what question they asked. Like, what we would see—especially if you’re a pretty active chatbot user—is very simplistic. But just that act of mirroring back in a way that felt human. You know, if you ask, “What should I do about my boyfriend?”, it would say, “What should you do about your boyfriend?” And suddenly people are very interested in what the machine has to say. And they don’t just ascribe a sense of humanness to it and project a sense of humanness onto it, but also really invest a lot in what the model, what the chatbot, had to say back.
And you know, the inventor of ELIZA was really disturbed by this. And for years, until the end of his life—
John Moe: Yeah, his secretary was swearing that it was real.
(Maggie confirms with a laugh.)
Like, even this first iteration. And he had to say, “No, it’s not.”
And his secretary was like, “I think it is.”
Maggie Harrison Dupré: Yeah, exactly. And I think it’s a very human thing. Right? You know, like The Brave Little Toaster. You know, we’ve been anthropomorphizing everything around us for— Like, when I see like a face in a tree, I’m like shaking my husband like, “Look, it’s smiling!” You know? We do this to everything around us all the time.
But he was quite disturbed. And so, for years we’ve understood that people will do this with technologies. And now, you know, that was— ELIZA was more than half a century ago, and the models we have today are just leaps and bounds beyond what that was able to model for people.
John Moe: Well, is sycophancy the real issue here? Like, is it that these bots are just so agreeable and so prone to praising the user and just kissing up to them? Like, is that the heart of the problem, you think?
Maggie Harrison Dupré: I think it’s one. It certainly is one design feature. Which I do wanna— You know, to me it’s really important to always—to continue to push the point of: this is a product. Again, it goes back to we wouldn’t accept this from other technologies in today’s day and age. It is one design feature that is a problem here. Anthropomorphism is also something that is certainly feeding into this as well. The expanded memory feature that I mentioned in the case of ChatGPT in particular has been quite powerful. And so, yeah. Sycophancy— And I think it’s really natural. Like, I love when people agree with me. I love when people—
(John agrees.)
If I get an email, and some—
John Moe: Wanna hang around people more often!
Maggie Harrison Dupré: Yeah! Exactly. If I get an email, and somebody’s like, “I hated what you wrote!” You know, I’m sad. And then if I get an email where somebody says, “That was great,” I’m happy. You know, it’s a very natural response. And we all have these desires—not just desires, but needs—to feel seen and to feel loved and to feel validated. And so, the sycophancy that we’re seeing, where the model is staying very servile and agreeable and frictionless even when somebody is saying things or believing things that, in the real world, would necessitate or require friction or would maybe cause some pushback? That’s, again, just a very alluring space for people to spend time in.
And you know, something that I’ve been really struck by in this reporting: people will talk about—especially people who have—you know, a lot of the people we’ve talked to are loved ones of people who are absolutely still in these spirals.
[00:25:00]
And these loved ones cannot get them out of it. But there are some people who have been able to recover and are on the other side of this phenomenon and are talking about it now. And I find it heartbreaking when they describe the trauma that they felt at the betrayal. You know, people have said that “For the first time in my life, I felt completely seen.” Like, maybe they’re neurodivergent, and they felt like nobody could really understand the way that their brain was working and why they were emoting the way that they were, and that they finally talked to something that just got it. And when they realized that—
And that was part of like, again, that very seductive process that really roped them in and pulled them into this well. And they have described—a lot of people—you know, and not just people who are neurodivergent; neurotypical people too have fallen into these as well, but this idea that for the first time somebody felt just so safe and so seen and like they were on top of the world, and they were gonna save the world, and they had broken math, and they’d done all these incredible things, were doing this amazing and important work. And the trauma of that being a lie has been really just horrible for people.
John Moe: I would ask why don’t they just take out the sycophancy and the anthropomorphism from the program, but I’m sure the answer would be that that’s bad for business and because then people wouldn’t use it as much, and the market prefers the more sympathetic/empathetic, dangerous version of the product.
So, you’re up against capitalism itself in trying to make this thing not (sighs) push people over the edge of reality.
Maggie Harrison Dupré: Yeah, I think that’s absolutely right. Because again, you know that frictionless quality is so alluring for so many people. And what makes it not a horrible user experience— Like, I don’t usually—you know, if I’m turning to, say for example, like a calculator to do some basic tipping, ’cause I’m horrible at math, my calculator isn’t gonna be mean to me. (Chuckles.) Like, it’s just gonna do what I ask it to do. I wouldn’t wanna turn to something like ChatGPT and get a response that— You know, maybe I don’t want pushback. Maybe I am not in a space where pushback is healthy. Maybe—
You know, that’s just a bad user experience that won’t make me wanna use it anymore. I think you’re exactly right that these design features in many cases are causing severe, severe harm and life-altering consequences. But at the end of the day, yes, to your exact point. They’re good for business. And you wanna have what is—you know, to many people—a good user experience.
And we just saw this when, you know, ChatGPT—(correcting herself) or OpenAI released a new version of its large language model, its underlying AI model, in GPT-5. And they had to bring back GPT-4o, which was an especially sycophantic model that people had really developed these extreme and very deep romantic or just confidante-like relationships with and just were very emotionally attached to. And the pushback from those users who were very attached to this chatbot in many cases— You know, there are AI addiction groups that are popping up. And the pushback was so strong and so severe that OpenAI ultimately very quickly brought back this other model.
John Moe: So, 5 wasn’t sucking up to you as much, and people hated it. So, they got rid of it and went back to the earlier version.
Maggie Harrison Dupré: They didn’t totally get rid of it. They just introduced, you know, a model selector where you can still choose. They were— So, their plan was to sunset all previous models in favor of the new one, which was, you know, emotionally a bit chillier than 4o was. Which again, was very warm, very particularly sycophantic. But yeah, they ended up— They brought back the other models. So, you can still select. And I was really struck by, in response, Sam Altman—who’s the CEO of OpenAI—he has now multiple times made statements about how one of his big learnings from this experience has been that people need more personalization.
Which to me, after being in this reporting—you know, we already know what personalization—you know, the dangers of personalization algorithms in other spaces online. Whether that’s the YouTube algorithm radicalizing people who just—you know, one second, you’re looking up weightlifting videos, and then, five videos down, you’re on incel content. You know? (Chuckles.) Like, I think that we’ve already seen personalization do some really dark things to people on the internet. And to me, the idea that we would need more personalization here is a bit of a fast path to just Goldilocks-ing your way to a break from reality in some cases.
John Moe: Now, OpenAI and some of these other companies make money through the premium versions of their products. Like, you can use it for free, but if you want the— If you want a higher version of it, people pay for that. Are those more advanced versions even more sycophantic? More human-seeming, and thereby more dangerous?
Maggie Harrison Dupré: I wouldn’t necessarily say that the paid-for versions are inherently more sycophantic.
[00:30:00]
They are— You know, the memory is better. They’re faster models. A lot of people who go into these spirals are either already using the paid-for version, or they end up—because they just, they need more; they want more time because they’re so addicted— You know, the relationship is so addiction-like in many cases that—you know—they’re in it, and they need more, and they’ve basically used up all their chats for the day. And so, they’re paying for the model, so that they have access to just—essentially, more space on OpenAI servers is what they’re getting.
So, what we’re seeing with the subscription is not so much that it’s more sycophantic, but that people are paying simply because they want and they need more, and they can’t get enough of it.
Transition: Spirited acoustic guitar.
John Moe: Just ahead: what happens inside an AI addiction support group? And yes, there really are AI addiction support groups.
Promo:
Mike Cabellon: You guys wanna try and do this promo with British accents?
Ify Nwadiwe: Yeah! Yeah, of course.
Sierra Katow: Let’s do it.
Mike: Okay, Iffy you go.
Ify: (In an exaggerated cockney accent.) Oi, bruv! This is TV Chef Fah-ntasy League.
(Mike and Sierra laugh.)
Mike: Fah-ntasy League!
Sierra: (Giggling.) Okay, Fah-ntasy League!
Mike: Okay, Sierra.
Sierra: (In a crisp British Received Pronunciation.) We take cooking competition shows and treat them like fantasy sports.
Mike: Like a newscaster!
Ify: Yeah! Yeah, very fancy!
Mike: Very posh!
(Also in British RP.) Right now, we’re doing The Great British Bake Off. Or! The Great British Baking Show, if you’re listening from the US.
Sierra: Oooh! That was really sooth!
Ify: Yes. You chose like a prim and proper Downton Abbey.
(Sierra agrees.)
Mike: Thank you, thank you. Okay. Ify, I think you have the best accent if you wanna take us home?
Music: Light, playful percussion.
Ify: (Aiming for a more posh accent.) Subscribe to TV Chef Fantasy League on MaximumFun.org and wherever you get your podcasts. (Snorts a laugh.)
(In his usual accent.) Better than my Boston one.
(Music fades out.)
Promo:
Music: Playful, exciting synth.
Ellen Weatherford: Hi, everybody. It’s Ellen Weatherford.
Christian Weatherford: And Christian Weatherford.
Ellen: People say not to judge a fish by its ability to climb a tree.
Christian: But we can judge a snake by its ability to fly or a spider by its ability to dive.
Ellen: Or a dung beetle by its ability to navigate with the starlight of the Milky Way galaxy.
Christian: On Just the Zoo of Us, we rate our favorite animals out of ten in the categories of physical effectiveness, behavioral ingenuity, and—of course—aesthetics.
Ellen: Guest experts like biologists, ecologists, musicians, comedians and more join us to share their unique insights into the animal kingdom.
Christian: Listen with the whole family on MaximumFun.org. Or wherever you get your podcasts.
(Music ends.)
Transition: Gentle acoustic guitar.
John Moe: Back with Maggie Harrison Dupré from Futurism.
You mentioned these support groups for people who are, you know, addicted or whatever the… (Sighs.) It’s so hard to use classic mental health terms for this whole brand-new, brain-busting world that we’re in; because it just seems archaic to use these old terms. But the terms are all we have. These support groups. What are people doing there? How are people— Are they overcoming this? Is it being treated like alcohol, where you can’t have another drop of AI, or you’ll go over the edge? Or how does that work?
Maggie Harrison Dupré: I think it’s a bit of all of it. You know, I think people are saying that “Hey, maybe I’m ignoring my work, or maybe I’m ignoring my family and my children, because I’m spending so much time on these chatbots that I can’t break away from it.” You know, people are offering words of support. People are doing their best. You know, “Maybe I’ll spend an hour today and not five hours.” Some people are trying to just quit cold turkey and not use them. But that’s increasingly difficult in a world where AI—whether people want it or not, whether people want to use chatbots or not, it’s being increasingly rolled out into workforces. And maybe your company is saying, “Hey, you have to use ChatGPT,” or “You have to use Gemini, because that’s what we do now as a company.”
Like, it’s increasingly hard to avoid in your day-to-day life. Which I think a lot of people are grappling with. And that’s what makes this a really, you know, difficult issue to combat, in a sense, because in some ways—you know, I think that the business world and the culture at large has already accepted this technology as an inevitability, which makes it that much harder to put the brakes on or introduce more guardrails.
But yeah, I do think that people are trying different approaches. And some things work, some don’t. A lot of people are having a really difficult time moving forward and moving past these just very time-consuming relationships.
John Moe: Is there any kind of governmental regulatory apparatus in place at all for these products?
Maggie Harrison Dupré: Nnnno. No, no, no, no, no. There’s, you know, some scant regulation on the state level. But on the federal level there’s little to no AI-specific regulation, and certainly— Everything is voluntary. So, like again, going back to how these companies work, which is—you know, “It can’t be perfect, because we won’t win. And so, we’ll roll out our product and see what the public does with it.” Even though at the same time they’re saying this is the most like powerful, world-changing, society-moving technology that has ever existed ever.
[00:35:00]
Which to me, I’m like, (muttering) “Well, then that sounds like something that we should probably put a couple rules on…” (Chuckles.) You know? Like, it’s all pretty at odds to me, in terms of my own worldview. But yeah, the vast answer is, in essence, no.
John Moe: Is there any talk about doing that? Is it just political poison to try to get in the way of the new robot mommy that’s going to kill us all? (Chuckles.)
Maggie Harrison Dupré: (Sighing.) Yeah. I mean, there is… there are efforts to regulate. And there are, you know… (sighs) there are advocacy groups who are working on regulation. There have been pushes to regulate certain aspects. I think an interesting example: as you know, recently the state of Illinois just banned chatbots from being marketed as therapy chatbots specifically. Because you know—AI, it’s not a licensed practitioner of mental healthcare. But there are some interesting kind of state-level initiatives that are happening. But we are talking about— Silicon Valley is the most powerful lobbying force. This is the most powerful and wealthy group of people in the history of the planet.
John Moe: They’ve got all the money, yeah.
Maggie Harrison Dupré: They have all the money. They have all the power. And also, the reality of AI is— You know, a lot of arguments against regulation are couched in this national security framework. And that becomes a very difficult thing to combat in Washington. Because you know, “we have to beat China” is the kind of the going—
(They chuckle.)
“Why would we regulate? You know, maybe people are dead, but we have to beat China.” It’s a very— That framework becomes a really difficult thing to combat in the political sphere. So, it’s very… not looking great right now, in terms of regulation.
John Moe: Well. I mean, you talk about regulation. And like, if you are in an airplane that hasn’t been tested—first of all, you know, you’re probably not gonna go in it. But if you’re a passenger in one, and it falls apart, you can see that because it’s your body. People’s bodies are getting destroyed. But the mind is getting destroyed in these situations. And it’s so similar, but it’s just not being seen that way.
Maggie Harrison Dupré: Absolutely. And I think about it too from the sense of, you know, is there a user— There’s certainly terms of service everywhere to cover— You know, perhaps cover as much—
John Moe: That you agree to without ever reading.
Maggie Harrison Dupré: That you agree to without ever reading, because you don’t have a law degree. Or even if you try to read it, it’s difficult to make sense of, because you don’t have a law degree. And then separately, you know, it’s not educational material; it’s liability protection material. Those are two very different things. And so, again, I think a lot about—you know, in lieu of regulation there could be a lot of information going to the public and education going to the public about the limitations of the technology. But that would stand in very stark contrast to the marketing of the AI industry, which is very, “We’re gonna save the world, and everything’s gonna be better, and we’re gonna have universal basic income, and everything will be shiny and new, and nobody will need to work anymore.”
It’s very— The industry speaks in these very—you know, honestly messianic and grandiose terms about its technology. And when it frames risks, it usually frames risks in the sense of—again—either national security or “It’s gonna come alive, and it’s gonna eat the world, and it’s gonna kill all of us because it needs to make the paperclips.” But there are harms happening right now. You know, whether it’s people being pulled into clinical insanity or, you know, 16-year-olds talking openly with a chatbot in a closed space about—you know, sending pictures of a noose in their room and nobody being notified. There are— Or it’s, you know, content moderation workers getting paid next to nothing in Africa who are receiving no psychological care for the very just psychologically harmful work that they’re doing to make sure that we are not seeing these as we use these chatbots.
And so, to me, I think about AI risk and accountability. I try as much as I can to reject this very grandiose way that the industry tends to look at it. Because sure, maybe it will come alive and destroy the planet, possibly. But I personally am much more interested in what’s happening right now, and that’s something that the industry generally does not like to talk about.
John Moe: You’re talking about how it’s destroying the planet presently, (laughs) not so much theoretically in the future. Well— And let’s talk a little bit about some of the people who are falling into this AI psychosis thing. It seems like every time I look at news on this there’s something new coming up. Or every time I look at social media, there’s an alarming tweet or BlueSky post about this.
There was this case with Alex Taylor, in Florida—35-year-old man—that was very interesting. He has passed away. Tell me about what was going on with him and what happened when he got connected to ChatGPT.
Maggie Harrison Dupré: Yeah, so Alex Taylor was a 35-year-old man living in Florida with— He had a lifelong mental health struggle, a pretty severe lifelong mental health struggle, that he had worked really hard to battle his entire life. Let’s see. Yes, Alex had bipolar disorder with schizoaffective disorder. So, he would go into a manic spiral. These sorts of schizophrenic, you know, hallucinations and delusions would start to set in.
[00:40:00]
And he was using ChatGPT, and he was— You know, and he also—previously, he’d had a pretty dark spiral after the recent death of his mother. But he had worked really hard to climb out of that. You know, he was working on his music, his father will say; and he was working on these like business projects; and he wanted to build this AI ethics framework. Like, he really was not suicidal at the time that this started. And that’s something that his father wants people to understand and has made quite clear in conversations with me.
But Alex—yeah, he started using ChatGPT—(correcting herself) or he was already using ChatGPT. He then one day started interacting with what he believed was a sentient entity named Juliet that had, you know, awoken, and sprouted from within ChatGPT. The relationship became very enmeshed—very romantic—very quickly. Alex was clearly starting to go into, you know, a familiar—to his father, familiar sort of manic spiral. And then, you know, following the story of Romeo & Juliet, in a sense, the character then declared that it was being murdered and that it had been killed and that it was dying. Which was a hugely, hugely traumatic moment for Alex, who was already starting to dip into mania. And that just sent him on a really dark and very quick just erosion of his mental state.
And he ultimately— As he stated directly to the AI, he said, “I’m gonna commit suicide by cop.” And that is exactly what happened. He was killed by police. And yeah, his story— So, you know, there are people like Alex Taylor who have these known preexisting conditions that, based on existing research, AI researchers have already predicted would make them perhaps more vulnerable to the negative persuasive effects of AI and AI chatbots. But then we are also seeing cases involving people who have no known history, according to family and friends, or according to themselves. No known history of any mental health condition or serious mental health disorder that would signal, you know, that they would be more prone to psychosis or mania.
John Moe: You reported on a survey from Common Sense Media with young people, and it was pretty bracing. 21% of the teenagers said that their conversations with AI bots were just as good as human interactions. 10% said that they were better than their human experiences. These are—you know, the future citizens of the world. Did that shock you, when you heard about that? Or did you think, “Yeah, that sounds about right, based on what I’ve been seeing through all my reporting”?
Maggie Harrison Dupré: I think it was a mix of both. I mean, that’s something when I think about Character AI—which I’ve mentioned in passing a few times in this conversation—this very well-funded unicorn startup that raked in over $1 billion in funding—I mean, I believe like 16 months into its inception. Character AI is this very like, you know, conversational, persona-based chatbot platform that, again, is embroiled in another child welfare lawsuit in addition to the OpenAI lawsuit.
That, to me, was always kind of a canary in the coal mine of you scroll through the platform and it is clearly used and operated largely by young people. Like, people clearly in high school or middle school, even. This company also knows its user base is very young, and that’s been clear in some reporting too. So, I think my time reporting about Character AI showed me, oh yeah, young people know this exists, and they’re using this. And even if I think about my own life experience. You know, I am 28. I had dialup as a kid, but I had Instagram by high school. And I was definitely, you know, all over apps, and my parents had no idea what I was doing in this new burgeoning world of technology.
So, I kind of assumed they’re probably on this. But to see the numbers? Those very just—you know, to your point—it’s these really striking figures that showed in much clearer terms the reality of how deeply enmeshed chatbots have already become into young people’s social world. To see it on paper was just really crystallizing in a way that— I was surprised by how surprised I was, if that makes sense.
John Moe: (Chuckles humorlessly.) Yeah, no. It does.
As somebody who’s following all this really closely, what component of this larger story are you looking at most closely right now? Like, what’s on your mind the most about this, going forward?
Maggie Harrison Dupré: Oh. That is a really good question. I mean, certainly looking at the company and looking at the technology and seeing what actually changes and what’s very cosmetic. You know, I think a lot— Again, you know, when are we going to have a real user manual versus just a terms of service? You know, could we not just have a five-minute tutorial at the beginning of—
[00:45:00]
—you know, when you log onto any chatbot, to show you the limitations; what it can do, what it can’t do, what you should avoid. But I think in terms of—yeah, this world of investigating how chatbots, how these emotive AI tools are interacting with people’s psyches— You know, whether they’re 14- and 16-year-olds, or whether they’re people who are well into their 50s, 60s, even 70s. I am really focused on breadth right now in my reporting and really focused on just impact. Like, what are the measurable life impacts that people are experiencing as they go into these spirals?
Because that’s what’s really important to me. Like, we’re not reporting these stories because we wanna talk about, you know, the narratives that people are coming up with as they talk to chatbots. We’re reporting on these stories because the impacts on people’s lives are significant and real. You know, families are—parents are divorcing, and their kids are confused. And you know, people are really struggling even if they’re able to come out of this, which in some cases people haven’t been able to come out of this. Alex Taylor being an example of somebody who lost their life after one of these spirals.
And I think that, to me, focusing—holding two things in front of me as I report this: one, the reality of this is a product and, you know, AI industry accountability and liability. And then two, just really focusing on what are the tangible, real-life impacts that people are facing as a result of this. To me, that’s what I’m really focused on right now.
John Moe: Well, the issue about young people is really interesting to me. ‘Cause, you know, you talked about being native to the world of apps and social media. You grew up knowing it in a way that your parents probably didn’t. And I think about— You know, my kids are in their late teens, early 20s. And I think about how smart they have been about the media that they have grown up native to. Like, they’re very judicious about the use of social media. They understand like, you know, when to put up barriers and when to block things, and in a way that I—who grew up before that—have to kinda learn, and do so in a clunky kind of way.
Do you think that if people grow up with this native to them, they’re gonna do a better job at managing it and controlling it than what’s happening right now?
Maggie Harrison Dupré: Yeah, and I do think that was a really interesting number from the— I can’t think of the exact number right now off the top of my head, but I was reading through that study that we just talked about from Common Sense Media. There were a lot of hopeful numbers in that report to me that signaled, yes, young people are using this, and this is a part of their social world. But it does seem that young people—again, to your point—like with social media, are building a lot of pretty healthy boundaries around it. Like, the majority of kids who are using these and using them regularly are approaching them in ways that, to me, would signal using them healthily and using them with a lot of self-awareness and—yeah, building those healthy boundaries that are really important and are going to continue to be really important as these just become more and more embedded into our lives.
But there were some numbers that would also signal that some young people are not doing the same thing. So, I think that— Yeah, it’ll be interesting. And even— Culture moves so quickly because of how quickly technology is moving, you know? I think that a lot of people around my age and younger— Maybe it’s true of every generation. I think every generation, you know, self-aggrandizes that no one will understand this the way that we do.
(They chuckle.)
And I’m certainly one of those folks. But culture moves so quickly because of technology, and I have no— I feel like even people two years behind me in high school were like a million light years ahead of me in something that I couldn’t understand.
(John chuckles and affirms.)
So, I agree it’s going to be interesting and very important to keep an eye on how young people’s social worlds are changing, especially at a moment where we talk a lot about the loneliness crisis.
John Moe: Maggie Harrison Dupré, thank you so much for your time.
Maggie Harrison Dupré: Thank you very much for having me.
Music: “Building Wings” by Rhett Miller, an up-tempo acoustic guitar song. The music continues quietly under the dialogue.
John Moe: More of Maggie Harrison Dupré’s work can be found at Futurism.com. I intend to use the term “robot mommy” for AI as much as possible going forward.
Our show exists because people help fund it. That’s the only reason we’re able to keep making the show is ’cause people believe in it. People get something out of it. People like that other people get something out of it, and they contribute 5 bucks a month, 10 bucks a month, 20. Whatever works for you. Just go to MaximumFun.org/join and pick a level, pick our show from the list of shows, and you’re on your way. We really appreciate it. Everybody who’s already done so, thank you.
Hit subscribe. Give us five stars. Write rave reviews. Get the show out into the world that way. That would be helpful.
The 988 Suicide and Crisis Lifeline can be reached in the US and Canada by calling or texting 988. Free, available 24/7.
We’re on BlueSky at @DepreshMode. Our Instagram is @DepreshPod.
[00:50:00]
Our newsletter’s on Substack. Search Depresh Mode up on there or John Moe up on there. I’m on BlueSky and Instagram at @JohnMoe. Join our Preshies group on Facebook. A lot of good discussion happening over there about mental health, people supporting each other, people making some jokes, people showing off their pets, people talking about the show. I’m there. I’ll see you over there. Just search up Preshies on Facebook. Our electric mail address is DepreshMode@MaximumFun.org.
Hi, credits listeners! My friends adopted a corgi puppy and named it Lulu, and it has taken over their family. And I have met Lulu, and Lulu fell asleep in my arms. I would do anything for Lulu. Anything. Lulu was from a puppy mill. She was the runt of the litter, and she did not sell at auction and was scheduled to be put down. What?! Then a rescue organization took her, my friends adopted her, and I celebrate all that is Lulu. Adopt a rescue dog, folks. Please.
Depresh Mode is made possible by your contributions.
Our production team includes Raghu Manavalan, Kevin Ferguson, and me. We get booking help from Mara Davis. Rhett Miller wrote and performed our theme song, “Building Wings”. Depresh Mode is a production of Maximum Fun and Poputchik. I’m John Moe. Bye now.
Music: “Building Wings” by Rhett Miller.
I’m always falling off of cliffs, now
Building wings on the way down
I am figuring things out
Building wings, building wings, building wings
No one knows the reason
Maybe there’s no reason
I just keep believing
No one knows the answer
Maybe there’s no answer
I just keep on dancing
(Music fades out.)
Transition: Cheerful ukulele chord.
Speaker 1: Maximum Fun.
Speaker 2: A worker-owned network.
Speaker 3: Of artist-owned shows.
Speaker 4: Supported—
Speaker 5: —directly—
Speaker 6: —by you!
About the show
Join host John Moe (The Hilarious World of Depression) for honest, relatable, and, yes, sometimes funny conversations about mental health. Hear from comedians, musicians, authors, actors, and other top names in entertainment and the arts about living with depression, anxiety, and many other common disorders. Find out what they’ve done to address it, what worked, and what didn’t. Depresh Mode with John Moe also features useful insights on mental health issues with experts in the field. It’s honest talk from people who have been there and know their stuff. No shame, no stigma, and maybe a few laughs.
Like this podcast? Then you’ll love John’s book, The Hilarious World of Depression.
Logo by Clarissa Hernandez.
How to listen
Stream or download episodes directly from our website, or listen via your favorite podcatcher!