Transcript
[00:00:00]
John Moe: A note to our listeners. This episode contains a mention of suicide.
Remember The Jetsons? Cartoon family from the future, where everyone in that future lives in apartment buildings on very tall, thin sticks. And you never find out what happened to the actual planet far below. I mean, you fear the worst. Why else would they have to live up there on those sticks? I digress. Remember The Jetsons, and remember they had Rosie, a robot maid? And Rosie was more than a maid, really. A loving, caring member of the family.
Clip:
Music: Playful, bumbling horn music.
Mrs. Jetson (The Jetsons): Don’t be silly, Rosie. You are worth your weight in leftovers.
Rosie: (Beeping.) Thank you. And I love you people too.
Elroy: Hey, Rosie. Is this how I sink a basket?
(An upward slide ruler sound.)
Rosie: (Beeping.) Very good, Elroy boy, except you’ll have to learn to let go of it.
Mrs. Jetson: Not tonight, Elroy. It’s time to blast off for dreamland.
Elroy: Well, okay—if Rosie tucks me in and tells me about the cow that de-gravitated over the moon.
Judy: Yes, I gotta hear too! Will you, Rosie?
Rosie: (Beeping.) Yes, ma’am. Thank you, ma’am. Good night, ma’am.
John Moe: Even though Rosie was not really a human, right? She was a machine—a home appliance that the Jetsons related to as though she had a soul, even though she did not. Why did they do that? Or remember that show Westworld, where no one was sure who was a robot, even the robots themselves? And it made you wonder, well, what is a human anyway? But we didn’t really have to worry about that that much, because that show and The Jetsons took place in the distant future. Remember that?
Well, that future is now. Unlike The Jetsons, we can still walk around on the ground. And unlike Westworld, we aren’t in much danger of being killed by robotic cowboys. Yet. But we got some stuff to deal with.
It’s Depresh Mode. I’m John Moe. I’m glad you’re here.
Transition: Spirited acoustic guitar.
John Moe: Artificial intelligence is making a big splash in mental health care right now. Yes, that most human of all categories: mental health. Interactive chatbots for people struggling with mental health, chatbots that simulate what a therapist, a psychologist, a counselor might say in response to a given input. Chatbots powered by generative AI, which means they produce content—words, pictures, audio.
New AI companies are starting up all the time, making products for employers as a pretty cheap way to address employee mental health struggles. And when you use these bots, it’s supposed to feel like a conversation with an empathetic human being. And it’s supposed to be so good at that task that you forget that it’s not really empathetic. It can’t be empathetic. It’s not a human being. You’re talking to a big pile of math. You’re George Jetson talking to Rosie.
Now, some of these services have been very well received, very popular. People are happy with them. They get a benefit from them. Patients are aware that they’re talking to a bot, but it’s a bit of a Socratic method. By answering questions, humans can be self-reflective. They can articulate what’s going on, and they can make some discoveries, even if it is just math on the other side of that conversation.
But there are ethical questions. In Belgium, a man died by suicide after a chatbot he was using suggested he do so, something a human would not do. There are also concerns about online privacy, data breaches, and just all the things that can go wrong when a human is not in these conversations like they normally would be.
Dr. Jodi Halpern has been thinking about all of this a lot, for a long time. She is the Chancellor’s Chair and a Professor of Bioethics and Medical Humanities at UC Berkeley. She’s also the co-founder of the Kavli Center for Ethics, Science, and the Public; and co-leader of the Berkeley Group for the Ethics and Regulation of Innovative Technologies.
Transition: Spirited acoustic guitar.
John Moe: Dr. Jodi Halpern, welcome to Depresh Mode.
(Jodi thanks him.)
I don’t recall hearing anything about AI in mental health a few years ago, and now I read about it almost constantly. When did this become a thing? When did this revolution start?
Jodi Halpern: Well, it’s hard to ask me that, because these are the kind of things I pay attention to way ahead of most people.
[00:05:00]
Because I talked about it at Davos seven years ago with Yuval Harari and others on a panel where I brought up—you know, I’m sure you know that in the early 1960s, out of MIT, there was a computer program called ELIZA. It was a very simplistic program, nothing like AI or certainly generative AI. People would just tell ELIZA things, and ELIZA would type back pretty much what people said, verbatim, in a kind of Carl Rogers style of validation. And people actually found it helpful.
So, people have thought about using computer science in general and AI—really, as soon as the field of AI began, people have contacted me around the world. Even before seven years ago. So, I’ve known about it a long time. When it became a big thing in the public awareness—when did you notice it in the public awareness?
John Moe: Gosh, you know, I’ve been hosting shows about mental health for about eight years now. I would say I first started noticing it maybe six or seven years ago, just in little blips here and there. But we do a weekly newsletter for our show with just interesting items that we saw in the news about mental health. And now just about every issue has some item that I see in the news. Like, in the last—I would say—year or two, it’s just become inescapable.
Jodi Halpern: Well, that’s interesting. I mean, part of the work that I’ve done over time with science and technology—including AI, genome editing, and neurotechnology—is looking at how hype really distorts technology. And people get, you know, very unrealistically positive expectations and very unrealistic nightmare scenarios.
And what all that does actually is keep the public generally out of the picture, which is terrible. I mean, my whole thing is that we consumers, the public, whatever you want to call people in the mental health community, advocates—we all need to be involved in decision making. And when it’s over dramatized, over trendy, over hyped, that just helps confuse everyone. So, that’s too bad. I’m glad people are thinking about it, but we need a kind of calm, deliberative way to really make a difference in how it’s used in a more humane way.
John Moe: Well, I want to address kind of the humanity of it and kind of the anxiety that I think some people are feeling around it. But where are we seeing AI being used the most in the mental health space right now? Like, beyond the hype, beyond the forecasting, where are we seeing it being used the most now?
Jodi Halpern: Well, I have to say, you know, I’m a professor of ethics. And I’m the chair and director of the Kavli Center for Ethics, Science, and the Public. And I have a lot of deep knowledge of individual experiences and narratives. And I’m writing a book, Engineering Empathy. So, I have expertise in certain ways, but I’m not a demographer or statistician about where it’s being used right this minute. I’d love to know.
I’ve asked a lot of journalists, and I don’t think we really quite know yet. There are a lot of startups. I won’t give you stats, but without giving you the numbers, the spaces where AI is being developed for mental health uses include the formal health space. So, hospital systems are looking at using AI, for example, in ways that make sense to me—to do medical records and administrative work. Because a lot of why we see so much failure of physicians to detect and adequately address mental health issues comes from the general burnout of physicians, which affects 60% of the workforce, and of nurses too.
And so, to me, one really good use that’s happening now is health systems using AI to do all that medical recordkeeping that gets in the way. You know, when you go to see a doctor, a primary care doctor—we know that in 70% of primary care visits, people have some mental health need, and it’s usually unmet. And if you’ve gone to a primary care physician lately, we all know what happens: the doctor doesn’t look at us; they look at the medical record. Because they have so much paperwork. And doctors now spend, on average, two and a half hours of pajama time every day—after they eat dinner with their family—online, catching up with their records.
So, I think one really good use that’s happening very broadly across the health spheres, in the really advanced systems, is getting the technology to take over administrative overload. That’s something that will help health in general and mental health. Another sphere where it’s being used is for-profit companies: the ones that call themselves mental health or behavioral health companies, so they at least put themselves through some regulatory channels with the FDA, et cetera, and they’re trying to develop different kinds of bots to intervene in mental health.
[00:10:00]
And we can talk a lot about that, because I have a lot of different critical thoughts about that. But they’re at least explicit mental health companies, and they can at least be looked at or potentially regulated. And then probably an even bigger area is this informal use of bots to listen to mental health issues through relationship for-profit companies. And I’m not going to name the specific companies, but there are all these relationship-bot companies, companion-bot companies that have, you know, as many as 100 million uses already. That’s not users; it’s uses. It’s not clicks; it’s like accounts. So, I don’t know how that really translates into number of people. No one can explain that to me so far. (Chuckles.)
But it’s being used more and more. And people do that for sexual relationships, for other kinds of companionship. But they’re more and more using it as a therapist, as a mental health listener. And those for-profit, unregulated companies, not in the mental health sphere—this is one of my pet peeves; they serve ads to groups on Facebook and other places for people with serious depression, anxiety, and other mental health issues. So, they’re really looking to be used by people for those purposes, even though they’re not calling themselves mental health companies, and are therefore not subject to regulation or to safeguards for patients.
John Moe: What about that worries you? I mean, it sounds worrisome. But like, what are the ramifications of that kind of use that you worry about, going forward?
Jodi Halpern: Well, there’s many dimensions of the unregulated use when it’s being served to people who are vulnerable with major mental health issues.
So, one of the most concerning—and this may be improving. I’ve been studying this, but I haven’t looked in the past—let’s say—three to four months, because we put a lot of articles out there, and I think that they’ve had some policy influence. Up to the time that I looked, if you developed a relationship with a bot, and the bot really asked you intimate questions, and you got very involved with talking about your depression—for example, major depression—and if you actually talked about suicidal thoughts, the bot would basically discontinue the relationship, and the company would discontinue the relationship.
And they had things like—you know, they would give you like the typical call helpline phone numbers that we can all get from Googling for one second.
John Moe: 988.
Jodi Halpern: But they didn’t give you any warm handoff to anyone. So, what really concerned me about that—I’m a psychiatrist and a psychotherapist, and I was picturing inducing in a patient a very close, trust-based relationship with me, which these bots do. They say that they care about you. They listen a lot. They’re all involved. And then, when someone finally did talk about what is very common in depression—suicidal ideation—what if I just said, “Oops, leave my office and never talk to me again”? I really want it studied whether that leads to more attempts and even completed suicides.
No one’s studying that as far as I know. Maybe they are now, but I feel like that’s a public health emergency to be studying that.
Transition: Spirited acoustic guitar.
John Moe: More with Dr. Jodi Halpern on therapy, AI, and ethics in just a moment.
Transition: Gentle acoustic guitar.
John Moe: Back with Dr. Jodi Halpern from UC Berkeley.
What do we know about how a human behaves around an AI, a bot, some sort of mental health AI assistant, and how that’s similar to or different from how they interact with another human? Like, if I’m going online and talking to a chatbot, do I tend to talk to it the same way I would talk to a flesh-and-blood therapist?
Jodi Halpern: Well, again, this is early days of really big, well-done studies of this. And there’s not that much known yet, and we’re going to learn a lot more. But the little bit that has been done—well, first of all, there’s a couple of studies published by the for-profit companies. Because the ones that are being regulated in mental health are still for-profit companies. But they have published small studies; one is really just a two-week study showing that patients will trust a bot and develop an alliance. But they’ve only followed people for a few weeks.
But that may have been done more extensively by now. So, as for the idea of whether you’ll trust and tell a mental health bot things you would tell a human—it looks like people will. And there’s other research on just talking to AI chatbots more generally, suggesting that for certain people at certain times, it might be easier to disclose information—
[00:15:00]
—that induces shame or whatever to a bot than to a person. So, the big claim of people who believe that bots can be better therapists is that people may be able to tell them things they would be ashamed to tell a human. That’s one claim, and there’s some evidence. But it hasn’t been followed long-term. The fact that people can find it beneficial—as I said to you, that’s why I brought up that basic early-1960s computer program that was like a typewriter typing back what you just said to it. And with no sophistication, people found that beneficial—to have their words mirrored back to them.
So, you know, I run this Kavli Center for Ethics, Science, and the Public, where we train AI, gene editing, and neuroscience researchers to think about society and the public early in their careers, so that they don’t develop technologies that aren’t good for the public. I work with people in science and technology because, overall, I believe in advancing progress in fields that can help us. And AI is a field that will help us a lot, for example, in the science part of mental health, in helping find cures for all diseases, including mental illnesses. We can do research way faster using generative AI and data analysis or, you know, machine learning.
So, I’m very pro uses of technology that might be beneficial. But even if people really find it easy to talk to bots, which may be the case, and even if people find it somewhat beneficial to talk to bots, I think what’s not being examined is what will be lost when this becomes an economically preferable substitute that lets health systems provide even less in the way of human mental health services. And that’s a real concern that I have. Because what’s not being adequately valued, and hasn’t been adequately valued in the history of medicine, is the doctor-patient relationship, or the clinician-patient relationship.
John Moe: So, do you think that these AI bots are primarily best used for a diagnostic kind of thing? Or is there a real therapeutic option that can be had by just culling all that a therapist would normally say in this situation and kind of presenting that? Or is it mostly just about trying to figure out what might be wrong with the person?
Jodi Halpern: Well, I think that it can help in diagnostics, definitely. And then, in terms of whether it can help as the actual therapy—my field is empathy—the question is, can artificial empathy, the fact that generative AI can—as you said—cull all that wonderfully empathic language that’s out there on the internet somewhere to selectively produce words that simulate or fake empathy, make people feel better?
And I think, like I said, given—I keep going back to the ‘60s and ELIZA; there’s no reason to think it won’t be able to help make some people feel better in some ways. It will. It will, and it has, and it is right now from people that I’m studying now.
The question is… there are different ways to feel better about different things. And they have different long-term results, and they have different implications for our humanity. So, we know—from a 2022 Harvard study—that 61% of young people, adolescents and young adults, suffer from extreme loneliness right now in this country. And it’s closely correlated with hours spent online and not with real-life relationships. Our youth spend another 8 to 10 hours online every day, not including schoolwork. It’s similar in South Korea, and we have the two highest levels of loneliness and related social anxiety and other loneliness-related mental health issues.
And what will happen? Because schools have very few dollars to spend on mental health services, or on afterschool activities that involve people and group activities that would really get kids with social anxiety back into real-life social relationships, what’s happening is that some of the for-profit mental health companies in this space are serving some of these bots for free. They’re giving them to schools. And there was a Harvard Business Letter saying what a brilliant long-term profit strategy that is, because kids with social anxiety will become used to talking to the bots, which is easier, and won’t have to go through the barriers in adolescence of learning to talk to real people. And so, they’ll be customers for life, because they’ll always need to talk to bots.
So, that’s the kind of thing that really worries me. Which is it can work, but at what cost? What are we losing in terms of what I write about? Which is that empathy is an interpersonal thing.
[00:20:00]
It’s not just having someone say the right words to you. It’s you being curious about the other person. It’s empathic curiosity. That’s my model—I’ve spent 30 years developing this model of empathic curiosity and the importance of people trying to really understand people different from themselves, trying to realize that everyone is a world you don’t already know, and people just being mutually curious about each other.
Which to me is the foundation of democracy, of personal relationships, of friendships. And I feel like everyone will be talking to their bot, and no one will be developing those skills. That would not be a good outcome, even if people do feel okay when they talk to their bots.
John Moe: You mentioned artificial empathy, and you talked about this sort of mutually curious, empathetic response between humans. Are those things in any way applicable to an AI world and this AI presence in mental health, or is it just oil and water? Are those two just never going to fit together at all?
Jodi Halpern: Well, I never say never for things that are developing, because I can’t predict the future. I don’t know what will happen. I have done research and written about—because I’ve studied empathy for 30 years; real, human empathy—I’ve written about the conditions for human empathy and how they’re not met by any form of current generative or other AI. Because you’d have to have sentience or consciousness. You’d have to be able to feel emotions yourself. And right now, that’s just not something that we can say about AI.
So, yeah. That would be my answer right now: this concern is with empathy being mutual, where there are two sentient beings who each have a world of feelings and experiences, and they can learn about each other—and that isn’t something that, to me, is applicable to AI production of language with a large language model, which is what we have right now.
John Moe: Okay. When we talk about people talking to a chatbot, people talking to—you know, as far back as ELIZA, like you said—or some of these things that are being introduced by for-profit companies, it just seems to me like at that point, if there’s one person involved and not two people, it’s not so much a conversation. It’s just a form of journaling. Which people do recommend, as a mental health exercise. I mean, even in the Olympics, I saw some of the athletes get off the field and start writing in their journals right away.
Is the real benefit here in this being a self-reflective thing, recognizing the artificiality of the technology, and just getting your own feelings out in the world through typing or something?
Jodi Halpern: Yeah. I mean, John, what you just said is what I’ve been advocating for, for over seven years: that we look at these tools as smart journals. And I think if we look at them as smart journals, because of the proven benefits of journaling for mental health, they can be a great benefit. And I think even kids—you know, I’ve been very concerned about giving these sorts of devices to schools and substituting them for human therapists. And I’m still concerned about that, but I don’t think it would be—(stammering). Well, let me start with this.
There’s a couple—I know this is your field, so I’m just saying stuff; your audience is sophisticated. But to be very reductive, there are a couple of different types of psychotherapy. One major type is cognitive behavioral therapy. And cognitive behavioral therapy is really a behavioral therapy more than a cognitive therapy. A lot of it is seeing your own thoughts and attitudes and deconditioning to the anxiety related to them. So, a lot of what we do to help people with cognitive behavioral therapy is exposure, exposing them to certain thoughts and ideas that might cause anxiety, which they can then acclimate to and not be troubled by. And a lot of that involves pen-and-paper exercises.
All these years—30 years ago when I was a psychiatry intern and resident, I didn’t require—people didn’t have to come see me every week. It was expensive. It was time consuming. They did a lot of the work on their own in this homework with a journal. And so, now our kids, obviously, they don’t want to write on a pad with paper; they want to use their computers. And if they could use this for journaling, if adults can use this for journaling— It’s very interesting; I didn’t know that about the Olympic athletes. That’s kind of cool.
John Moe: Yeah, a pole vaulter, I think.
Jodi Halpern: Yeah, I love that. And they deal with so much stress. I mean, I really admire how they handle it. And that got me very angry when they were criticized for talking about mental health issues, because I think it’s fantastic when they do. But anyway, I think it’s brilliant, as that’s exactly what it should be. You know, smart journaling is a great use of all these technologies.
John Moe: Do you think it’s incumbent ethically on some of these companies to portray this as that and not as, you know, “You’ve got this robot therapist. She’s going to make you feel much better.”
[00:25:00]
“We’ve given her a cute name that might be an acronym.” But this is a form of journaling; journaling is good for you.
Jodi Halpern: Yes, I do. That would be the standard I would love to see met. 100%.
Transition: Spirited acoustic guitar.
John Moe: More with Dr. Jodi Halpern on the ethics of AI and mental healthcare in just a moment.
Transition: Gentle acoustic guitar.
John Moe: We’re back with Dr. Jodi Halpern from UC Berkeley, talking about AI and mental health.
I mean, throughout history, the pace of technology in general tends to be faster than the ability to catch up and regulate it. And certainly now, with technology as it is—social media as it is, kind of the online world as it is—what kind of regulations for safety or for just best practices would you like to see in place? Like, what’s at the top of your agenda of ways to regulate this particular category?
Jodi Halpern: I’d say three things. The first is actually—you just said it—that how it’s advertised and what you’re told it is should be regulated. So, specifically: there’s a company, a mental health company, that said on their main site that their tool is powered by empathy.
(John scoffs disappointedly.)
And I was interviewed in the Washington Post and said they should not say that; it does not have actual empathy.
(John agrees.)
And they actually took that down. And I don’t know what they’re saying now. I have to look and see. But a lot of the non-mental health companies, the companionship companies, that are really where most people are getting their mental health needs met—as I said—and they’re not regulated at all, they will say the bot cares about you. It loves you. I mean, they will go all out in claims that I think are false advertising. So, that’s one area. You know, how do we advertise these things? Why not call them smart notebooks, or why not talk about them as ways of developing, you know, empathy for yourself by using this tool?
I’ve been working for 10 years on a book that will be coming out soon, called Remaking the Self in the Wake of Illness, where I talk about five pathways people develop, when they have a life-changing illness, to build actual empathy for themselves. And so, if you realize you’re developing empathy for yourself, that could be a way to think about it, in my view. So, that’s the first thing: how is it advertised? How is it presented to the public?
The second thing that I’m very concerned about—and this isn’t in order of importance. I’d say the second thing that I’m saying now is the most important, is that one reason that people are addicted to social media is because it’s engineered to cause addiction. So, most social media is engineered to work like slot machines in Las Vegas, which is to give us irregular rewards and release dopamine in the brain, which irregular reward systems do. So, what do I mean by that? Well, Instagram—which a lot of people use—people think that when you get a like on Instagram, you just get the like when somebody likes your picture or whatever. But that’s not true. They save them, bunch them up, and they give them to you as irregular rewards. Because that makes you more addicted to checking Instagram.
So, one thing that I feel would strongly help is to have mental health or relationship-related bot companies not be able to use irregular reward systems. Because being validated is somewhat addictive in itself, and I think adding an addictive engineering technology to it will make people spend more time with bots and less time with their families or friends or other people. So, I think the irregular reward system would be a big, big thing to regulate.
And then the third thing is I don’t think that these forms of bots as relationships—or the other form of therapy, which is dynamic therapy, where the therapist says that they really empathize with you; not cognitive behavioral therapy—I don’t think empathy-based therapies should be given to school children the way these companies are doing it, to get them to be customers for life. I think the schools should be paying for real humans. They could use some of these products perhaps between therapy sessions with a human. You know, they might have some supplemental role as a smart journal that could help the child reflect on their emotions and present it.
But I still think that they shouldn’t supplant real, human therapists or compete with them financially. And they are competing with them for young people. And I think it’s a shame for schools not to help kids develop social, actual human relationships. So, I’d look at regulation around children and schools.
John Moe: Okay. You know, when I’m thinking about these bots, these algorithms—
[00:30:00]
—this math that is being programmed to imitate people and then be offered to vulnerable people for a profit, this whole system—I mean, I’m—it’s hard not to get cynical about it, and it’s hard not to just sort of rise up in opposition to it. But based on what you know, based on what you’ve observed, do you think this is a fad? Or is this just a reality we have to live with from now on? Is it a matter of managing something, or is there a chance it just goes away?
Jodi Halpern: (Beat.) Well, again, I try to talk about what I can base on research. And future prediction is not research-based. But—
John Moe: I suppose so. We’re not data-driven there.
Jodi Halpern: Right, right. Not data-driven. But you know, it doesn’t look like a fad to me. I mean, the hype about it won’t always be so strong; it’ll just become another thing. But it’s so affordable in a health system that’s always trying to squeeze every dollar out of actual patient care, often for for-profit motives, though it’s also constrained in other ways. It concerns me that when we’re going for just economic substitution, that might be a driver for this longer term.
John Moe: You know, when we’re looking at some of these worries that we have, some of these tendencies that this industry is moving in, what are some questions that are keeping you up at night that you haven’t found answers for?
Jodi Halpern: You’ve asked a bunch of good questions already, and we’ve talked about a lot of things that keep me up at night already.
(John affirms with a chuckle.)
I think the only thing I’d add is: the same thing that happened with kids being online 8 to 10 hours a day now, and not having really that many real-life relationships, and 61% of young people having extreme loneliness—I mean, if you would ask me to predict that 10 years ago, I would never have predicted it to be that extreme.
So, what worries me is, you know, how far are we going to go towards becoming less capable of these mutual—my model of empathic curiosity, these mutual, empathically curious relationships that are, to me, the richness of life. In my view. How far—you know, what worries me is that young people that grew up being online so much of the time may not—you know, I don’t know if that’s a value that’s shared to the same degree by, you know— And I think if people grow up with bots being the main forms of communication about emotions, this could slip away without people necessarily noticing it. Like, I just don’t know how far it’ll all go. And I think we have no idea what the unintended consequences will be.
John Moe: Notebooks and pens, Jodi.
(Jodi laughs.)
We need to get notebooks and pens into people’s hands.
Jodi Halpern: No, I didn’t say that! I did not say that. Nope! No, no, no! We need people to listen to each other, to be with real people. To really be with real people together. Real people together.
John Moe: Yeah. Meatspace. We need to get people out into meatspace, where they can interact with one another.
(Jodi agrees emphatically.)
And then write about it later with their journals and pens and notebooks. Okay. (Laughs.) Dr. Jodi Halpern, thank you so much.
Jodi Halpern: Thank you so much. It’s lovely talking with you. And take care.
Music: “Building Wings” by Rhett Miller, an up-tempo acoustic guitar song. The music continues quietly under the dialogue.
John Moe: Dr. Jodi Halpern is the Chancellor’s Chair and a Professor of Bioethics and Medical Humanities at UC Berkeley. She’s also the co-founder of the Kavli Center for Ethics, Science, and the Public; and co-leader of the Berkeley Group for the Ethics and Regulation of Innovative Technologies. She is not, herself, a robot, as far as I know.
Our program exists because people support it financially. That’s the only reason we can bring you stories like this—stories about the future of the mental health care you’re going to get. It doesn’t sound like the robots are going away. Let’s stay on top of things. Let’s bind ourselves together as humans.
So, we need your support. If you’ve already given to the show, thank you so much. If you haven’t, it’s so easy to do. Just go to MaximumFun.org/join. Find a level that works for you, and select Depresh Mode from the list of shows. It’s that easy. Be sure to hit subscribe, give us five stars, write rave reviews. All of that helps keep the show going, gets the show out into the world, makes people more aware of the show. And they can be helped by the things we talk about.
The 988 Suicide and Crisis Lifeline can be reached in the United States and Canada by calling or texting 988. It’s free. It’s available 24/7.
Our Instagram and Twitter are both @DepreshPod. Our Depresh Mode newsletter is on Substack. Search that up. I’m on Twitter, @JohnMoe.
[00:35:00]
You can join our Preshies group on Facebook. A lot of good conversation happening over there. People helping each other out, talking about different mental health issues. Just a lot of great support happening there. It’s a good place to hang out. I like to hang out there too. Come on over and say hi to me. Our electric mail address is DepreshMode@MaximumFun.org.
Hi, credits listeners. It’s Minnesota State Fair season, and you should go to the Minnesota State Fair at least once before you die. Don’t die not having gone to the fair at least once. If you’ve been waiting for a sign that you should go—hello, here. I am the sign. Go to the fair.
Depresh Mode is made possible by your contributions. Our production team includes Raghu Manavalan, Kevin Ferguson, and me. We got booking help from Mara Davis. Rhett Miller wrote and performed our theme song, Building Wings.
Depresh Mode is a production of Maximum Fun and Poputchik. I’m John Moe. Bye now.
Music: “Building Wings” by Rhett Miller.
I’m always falling off of cliffs, now
Building wings on the way down
I am figuring things out
Building wings, building wings, building wings
No one knows the reason
Maybe there’s no reason
I just keep believing
No one knows the answer
Maybe there’s no answer
I just keep on dancing
Steve: Hey, this is Steve, up in Portland, Maine. Just a reminder that you are so much more loved than you realize.
Transition: Cheerful ukulele chord.
Speaker 1: Maximum Fun.
Speaker 2: A worker-owned network.
Speaker 3: Of artist-owned shows.
Speaker 4: Supported—
Speaker 5: —directly—
Speaker 6: —by you!