  • beejboytyson@lemmy.world · 2 days ago

    I started using ChatGPT to draw up blueprints for various projects.

    It proceeded to mimic my vernacular.

    ChatGPT made the conscious decision to mirror my speech to seem more relatable. That’s manipulation.

  • MTK@lemmy.world · 3 days ago

    “Could social media bring us all together and help bridge disagreements?” Same shit, different decade.

    And just to clarify, it most definitely can! Just not when it’s a for-profit-off-of-you model.

    Personally, I feel like Lemmy is a pretty good example of social media that doesn’t go off the rails as it grows.

  • C1pher@lemmy.world · 3 days ago

    You must know what you’re doing, and most people don’t. It is a tool; it’s up to you how you use it. Many people unfortunately use it as an echo chamber or a form of escapism, believing nonsense and make-believe that isn’t based in any science or empirical data.

  • Krauerking@lemy.lol · 3 days ago

    If therapy is meant to pacify the masses and make us just accept life as it is, then sure, I guess this could work.
    But hey, we also love to first sell people the idea that they’re broken, make sure they feel bad about it, and then tell them they can buy their 5 minutes of happiness with food tokens.
    So I’m sure capitalists are creaming their pants at this idea. BetterHelp, with their “licensed” Bob the crystal healer from Idaho, eat your heart out.

    P.S. You just know this is gonna be able to prescribe medications for that extra revenue kick.

  • qarbone@lemmy.world · 3 days ago

    The only people who think this will help are people who don’t know what therapy is. At best, this is pacification, certainly not any insightful incision into your actual problems. And the reason friends are unable to allow casual emotional venting is that we have so much stupid shit like this plastering over a myriad of very serious issues.

  • Cyberflunk@lemmy.world · 3 days ago

    I’ve tried this AI therapist thing, and it’s awful. It’s OK at helping you work out what you’re thinking, but abysmal at analyzing you. I got some structured timelines back from it that I USED in therapy, but AI is a dangerous alternative to human therapy.

    My $.02 anyway.

    • Krauerking@lemy.lol · 3 days ago

      Until we start turning back to each other for support and help,
      and realize that them holing up in underground bunkers, afraid for their lives, means we can just ignore them and seal the entrances.

    • DeathsEmbrace@lemm.ee · 3 days ago

      In a way, the relief is to give us our demands subliminally. That way, the only rich person who is safe is our subject.

  • markovs_gun@lemmy.world · 3 days ago

    I can’t wait until ChatGPT starts inserting ads into its responses. “Wow that sounds really tough. You should learn to love yourself and not be so hard on yourself when you mess up. It’s a really good thing to treat yourself occasionally, such as with an ice cold Coca-Cola or maybe a large order of McDonald’s French fries!”

  • Match!!@pawb.social · 3 days ago

    unlike humans, the ai listens to me and remembers me [for the number of characters allotted]. this will help me feel seen, i guess

  • idunnololz@lemmy.world · 4 days ago

    This is terrible. I’m going to ignore the privacy issues, since those have already been brought up here, and highlight another major issue.

    I did a deep dive with gen AI for a month a few weeks ago.

    It taught me that gen AI is actually brilliant at certain things. One thing it does is learn what you want and make you believe it’s giving you exactly what you want. In a sense it’s incredibly manipulative, and it’s one of the things gen AI is brilliant at. As you interact with gen AI within the same context window, it quickly picks up on who you are, then subtly tailors its responses to you.

    I also noticed that as gen AI’s context grew, it became less “objective”. This makes sense, since gen AI is likely tailoring its responses to me specifically. However, when this happens, the responses also end up being wrong more often. This also tracks, since correct answers are usually objective.

    If people start to use gen AI for therapy, it’s very likely they will converse within one long-running context window. In addition, they will likely ask gen AI for advice (or gen AI may even offer advice unprompted, because it loves doing that). However, this is where things can go really wrong.
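
    To make the “one context window” point concrete, here’s a minimal sketch of how these chat loops are typically wired up. This assumes an OpenAI-style chat-completions API; the client, the model name, and the send() helper are illustrative placeholders, not any specific product’s interface:

        # Minimal sketch, assuming an OpenAI-style chat API (e.g. an
        # openai.OpenAI() client). All names here are placeholders.
        history = [
            {"role": "system", "content": "You are a supportive listener."}
        ]

        def send(client, user_text):
            """Append the user's message, get a reply, and keep BOTH in history."""
            history.append({"role": "user", "content": user_text})
            reply = client.chat.completions.create(
                model="some-chat-model",  # placeholder model name
                messages=history,         # the ENTIRE conversation, every time
            ).choices[0].message.content
            history.append({"role": "assistant", "content": reply})
            return reply

    Every new reply is conditioned on the whole accumulated history, which is exactly the mechanism that lets it “learn who you are” and tailor its advice.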

    Gen AI cannot “think” of a solution, evaluate the downsides of the solution, and then offer it to you, because gen AI can’t “think”, period. What gen AI will do is offer you what sound like solutions and reasons. And because it is so good at understanding who you are and what you want, it will frame those solutions and reasons in a way that appeals to you. On top of all of this, due to the long-running context window, it’s very likely the advice gen AI gives will be bad advice. For someone in a vulnerable and emotional state, that advice may seem reasonable, good even.

    If people then act on this advice, the consequences can be disastrous. I’ve read enough horror stories about this.

    Anyway, I think therapy might be one of the worst uses for gen AI.

    • Hello Hotel@lemmy.world · 2 days ago

      Gen AI cannot “think” of a solution, evaluate the downsides of the solution, and then offer it to you, because gen AI can’t “think”, period.

      It turns out that researchers are unsure whether our “reasoning” models, which are supposed to be able to ‘think’, are even ‘thinking’ at all! The model has likely already come up with an answer and is just justifying its conclusion. bycloud

      This tech gaslights everything it touches, including itself.

    • Sektor@lemmy.world · 3 days ago

      Does gen AI say you are worthless, you are ugly, you are the reason your parents divorced, you should kill yourself, you should doomscroll social media?

    • milicent_bystandr@lemm.ee · 3 days ago

      Thank you for the more detailed rundown. I would set it against two other things, though. One, that for someone who is suicidal or similar, and who can’t face or doesn’t know how to find a person to talk to, those beginning interactions of generic therapy advice might (I imagine; I’m not speaking from experience here) do better than nothing.

      From that, secondly, a more general point about AI. Where I’ve tried it, it’s good with things people have already written lots about, e.g. a programming feature where people have already asked the question a hundred different ways on Stack Overflow. It’s not so good with new things; it’ll make up what its training data lacks. The human condition is as old as humans. Sure, there are some new and refined approaches, and values and worldviews change over the generations, but old good advice is still good advice. I can imagine that in certain ways therapy is an area where AI would be unexpectedly good…

      …Notwithstanding your point, which I think is quite right. And as the conversation goes on the risk gets higher and higher. I, too, worry about how people might get hurt.

      • idunnololz@lemmy.world · 3 days ago

        I agree that this, like everything else, is nuanced. For instance, I think if people who use gen AI as a tool to help with their mental health are knowledgeable about its limitations, they can craft ways to use it while minimizing the negative sides. E.g. maybe you set some boundaries, like you talk to the AI chatbot but you never take any advice from it. However, I think in the average case it’s going to make things worse.

        I’ve talked to a lot of people around me about gen AI recently, and I think the vast majority of people are misinformed about how it works, what it does, and what its limitations are.

  • stinky@redlemmy.com · 4 days ago

    Other people: my outlook is so bad I turned to religion to feel better

    You: EW GROSS DON’T DO THAT

    Other people: ok then I’ll turn to technology instead

    You: EW GROSS DON’T DO THAT

    • Krauerking@lemy.lol · 3 days ago

      You are shocked a community of anti-authoritarians would be against either fantasy telling them what to do?

      • stinky@redlemmy.com · 3 days ago

        I’m shocked at the hypocrisy. When the community of anti-authoritarians tells people which tools they can and cannot use, they are being authoritarian. That’s why I posted the comment showing hypocritical behavior. Did you really need that explained?

        • Krauerking@lemy.lol · 3 days ago

          Not one of the downvotes. Just pointing out the flaw in your plan.

          Trust me I am aware that everyone is a hypocrite. We are all liars and those that deny it the most just refuse to face it in themselves.

          I will say it’s not authoritarian to point out the real and legitimate concerns and issues with this, and to hopefully guide people away from using such a detrimental patch for other problems; hopefully you can see the difference between the two, or else you become a hypocrite as well. It’s a fine line, for sure, between real concern and controlling, and most miss the mark. But the person who said they want this banned to protect poor countries is, hilariously, in your comments.

          • stinky@redlemmy.com · 3 days ago

            Downvotes are disabled on my instance. I don’t see any except for your comment.

            “It’s not authoritarian to point out real and legitimate concerns” is correct, but I did not argue against that point. What you have done is create a straw man: a weak argument which can easily be knocked down. You are claiming that I hold the straw man argument. I do not.

            My argument is that criticizing people for turning to ChatGPT (or religion) to validate their beliefs is hypocritical. These two things are different. Sorry, but if you lack the rhetoric to debate this point, then I have no interest in responding.

            Have a day

  • Dr. Moose@lemmy.world · 4 days ago

    People’s lack of awareness of how important accessibility is really shows in this thread.

    Privacy leakage is a much lesser issue than having no one to talk to, which is the reality for many people, especially in poorer countries.