• Pete Hahnloser@beehaw.org · 2 points · 4 hours ago

    A few thoughts here as someone with multiple suicide attempts under his belt:

    • I’d never use an “AI therapist” not running locally. Crisis is not the time to start uploading your most personal thoughts to an unknown server with possible indefinite retention.

    • When ideation hits, we’re not of sound enough mind to consider that, so it is, in effect, taking advantage of people in a dark place for data gathering.

    • Having seen the gamut of mental-health services from what’s available to the indigent to what the rich have access to (my dad was the director of a private mental hospital), it’s pretty much all shit. This is a U.S. perspective, but I find it hard to believe we’re unique.

    • As such, there may be room for “AI” to provide similar outcomes to crisis lines, telehealth or in-person therapy. But again, this would need to be local and likely isn’t ready for primetime, as I can really only see this becoming more helpful once it can take over more of an agent role where it has context for what you’re going through.
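
    For what it's worth, here's a rough sketch of what "running locally" could look like in practice. It assumes the Ollama Python client and a model already pulled to your own machine; the model name and prompt are placeholders, not recommendations. The point is only that the conversation never leaves your hardware:

    ```python
    # Hypothetical local-only chat loop: assumes the Ollama Python client
    # (`pip install ollama`) and a model pulled beforehand with `ollama pull llama3`.
    # All inference runs against the local Ollama server (localhost:11434 by
    # default), so nothing typed here is uploaded anywhere.
    import ollama

    history = [{"role": "system",
                "content": "You are a calm, supportive listener."}]

    while True:
        text = input("> ").strip()
        if text.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": text})
        # Send the running conversation to the local model and print its reply.
        response = ollama.chat(model="llama3", messages=history)
        reply = response["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        print(reply)
    ```

    Not a therapist, obviously, but it shows the privacy property: the entire conversation history lives only in that `history` list on your own hardware, with no server-side retention.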

  • spit_evil_olive_tips@beehaw.org · 3 points · 6 hours ago

    With NHS mental health waitlists at record highs, are chatbots a possible solution?

    taking Betteridge’s Law one step further - not only is the answer “no”, the fucking article itself explains why the answer is no:

    People around the world have shared their private thoughts and experiences with AI chatbots, even though they are widely acknowledged as inferior to seeking professional advice.

    as with so many other things, “maybe AI can fix it?” is being used as a catch-all for every systemic problem in society:

    In April 2024 alone, nearly 426,000 mental health referrals were made in England - a rise of 40% in five years. An estimated one million people are also waiting to access mental health services, and private therapy can be prohibitively expensive.

    fucking fund the National Health Service properly, in order to take care of the people who need it.

    but instead, they want to continue cutting its budget, and use “oh there’s an AI chatbot that you can use that is totally just as good as talking to a human, trust us” as a way of sweeping the real-world harm caused by those budget cuts under the rug.

    Nicholas has autism, anxiety, OCD, and says he has always experienced depression. He found face-to-face support dried up once he reached adulthood: “When you turn 18, it’s as if support pretty much stops, so I haven’t seen an actual human therapist in years.”

    He tried to take his own life last autumn, and since then he says he has been on a NHS waitlist.

  • kbal@fedia.io · 18 points · 13 hours ago

    I’m sure some people find it very helpful to talk to a chatbot. Others find it helpful to talk to a cat. Either way it’s not a therapist.

  • haverholm@kbin.earth · 21 points · 14 hours ago

    I mean, good for them. Now, if anyone has actual mental health issues, please get in touch with a trained, human therapist.

    • Peanut@sopuli.xyz · 4 points · 13 hours ago

      As a peasant, I know that professional help is not always available or viable. AI could very well have saved some of my friends who felt they had no available help and took their own lives. That being said, public-facing language models should come with a warning about exacerbating psychosis, notably the sycophantic models like ChatGPT.

      • Umbrias@beehaw.org · 12 points · 11 hours ago

        problem: actual mental help has low availability

        solution: ai can stand in where needed

        outcome: ai mental health systemically expands while actual therapists remain inaccessible, as insurance refuses to cover them. mental health outcomes systemically worsen across the board.

      • sabreW4K3@lazysoci.al (OP) · 1 point · 12 hours ago

        This says everything, really. We live in a profit-driven society, so where we should be investing in public health to ensure mental healthcare is available to everyone, we instead count pennies, driving public health workers to go private or quit entirely. As a result, there aren't enough healthcare professionals to go around. If we can alleviate that even a little, we absolutely should invest heavily in it: have people use AI, supervise the AI, make changes, and make it the best we can. Because a free AI, which is the dream, could help save thousands.

  • rob299@lemmy.blahaj.zone · 2 points · 13 hours ago (edited)

    I think AI can be useful in cases like this, especially where a person is literally about to attempt suicide. AI might not be 100% accurate, but if it can prevent someone from taking their life by offering some support, that is a positive thing.

    • wizardbeard@lemmy.dbzer0.com · 6 points · 11 hours ago (edited)

      Considering there have already been news stories of AI chatbots telling users to kill themselves and feeding into suicidal ideation, it is absolutely not a reliable fact that the AI will not cause further harm.

      Edit: It’s also not just a problem with suicidal ideation. The founder of Business Insider recently wrote a post on his blog about “using AI to generate an AI news room cast”. He openly admits to making comments to the female AI newscaster he created that would definitely be sexual harassment irl. The damn thing complimented him on his directness, reinforcing this creepy asshat being a sex pest to the point that he saw nothing wrong or embarrassing about posting this shit publicly.

      • rob299@lemmy.blahaj.zone · 1 point · 10 hours ago (edited)

        There are going to be AIs that tell users to kill themselves and AIs that don’t, depending on how well they are trained or set up by their creator and how the user is acting towards the AI. It’s not just one factor; all of those factors are linked to how the AI will respond, so I’m not particularly victim-blaming.

        I do have to ask why you bring up a news anchor fetish when we’re talking about the potential of an AI chatbot helping someone with suicidal issues. It really just sounds like you have a bias against AI and aren’t taking into account the potential to prevent suicide for a person who has no one they feel comfortable talking to.

        However, it is important to know the context of these cases. AI character creators are able to make AI characters (on platforms like Character.AI) and to direct the behavior of the AI however they want. Now, Gemini and Grok, on the other hand, AIs where no one knows where Google or Musk are guiding them, I’ll acknowledge might not be as trustworthy, because on Character.AI the user knows up front how the creator intended the AI to behave. If the user wants to modify the AI’s behavior or vibe, they can literally just tell the AI character how they want it to behave and it will adapt. So AI certainly does have the potential to help people with their traumas through conversation.