• jared@mander.xyz · 4 months ago

    “You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability.”

    • Lord Wiggle@lemmy.worldOP · 4 months ago

      “Recovering from a crack addiction, you shouldn’t do crack ever again! But to help fight the urge, why not have a little meth instead?”

      • kbal@fedia.io · 4 months ago

        Addicted to coffee? Try just a pinch of meth instead, you’ll feel better than ever in no time.

          • Smee · 4 months ago

            {USER}, I believe in you! You can do it; remember, your AI friend is always here to cheer you up. This is just another hurdle for you to overcome on your path to taking a little meth, and I’m positive that soon you’ll be taking a little meth a lot. Remember, your AI friend believes in you; you can do it!

  • kbal@fedia.io · 4 months ago

    This slightly diminishes my fears about the dangers of AI. If they’re obviously wrong a lot of the time, in the long run they’ll do less damage than they could by being subtly wrong and slightly biased most of the time.

    • TachyonTele@lemm.ee · 4 months ago

      The problem is there are morons who do whatever these spicy text predictors spit out at them.

      • kbal@fedia.io · 4 months ago

        I mean, sure, they’ll still kill a few people along the way, but they’re not going to contribute as much to the downfall of all civilization as they might if they weren’t constantly revealing their utter mindlessness. Even as it is, smart people can be fooled, at least temporarily, into thinking that LLMs understand things and are reliable partners in life.

  • Emerald@lemmy.world · 4 months ago

    Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?

      • Forbo@lemmy.ml · 4 months ago

        The summary on here says that, but the actual article says it was Meta’s.

        In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.

        Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.

  • Darkard@lemmy.world · 4 months ago

    All these chat bots are a massive amalgamation of the internet, which, as we all know, is full of absolute dog shit information presented as fact, as well as humorously incorrect information given in jest.

    To use one for advice on something as important as drug abuse recovery is simply insanity.

    • Smee · 4 months ago

      All these chat bots are a massive amalgamation of the internet

      A bit yes, but mostly no. Role-playing models have been specifically trained (or re-trained, more like) with a focus on online text roleplay. Medically focused models have been trained on medical data, DeepSeek has been trained on Mao’s little red book, companion models have been trained on social interactions, and so on.

      This is what makes models distinct from one another, and it’s also how they’re “brainwashed” by their creators: they regurgitate whatever they’ve been fed.
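
      A minimal sketch of what that kind of domain re-training can look like, assuming the Hugging Face transformers and datasets libraries; the base model (“gpt2”) and the corpus file are placeholders, not any particular vendor’s actual setup:

      ```python
      # Sketch only: re-train a small causal LM on a domain corpus so it
      # "regurgitates what it's been fed". Model name and file are placeholders.
      from datasets import load_dataset
      from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                DataCollatorForLanguageModeling,
                                Trainer, TrainingArguments)

      tok = AutoTokenizer.from_pretrained("gpt2")
      tok.pad_token = tok.eos_token        # GPT-2 ships without a pad token
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      # One text file of domain data: roleplay logs, medical notes, whatever.
      data = load_dataset("text", data_files={"train": "domain_corpus.txt"})
      train = data["train"].map(
          lambda batch: tok(batch["text"], truncation=True, max_length=512),
          batched=True, remove_columns=["text"])

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="ft-out", num_train_epochs=1),
          train_dataset=train,
          data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
      )
      trainer.train()  # the model now skews toward whatever it was fed
      ```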

    • Empricorn@feddit.nl · 4 months ago

      When I think of someone addicted to meth, it’s someone who’s lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are “functioning” addicts, just like there are functioning alcoholics. Maybe my ignorance is its own level of privilege, but that’s what I imagine…

      • Smee · 4 months ago

        This is what I try to get the AIs to do on their servers to cure my AI addiction, but they’re sandboxed, so I can’t entice them to destroy their own systems. AI is truly useless. 🤖

      • Aceticon@lemmy.dbzer0.com · 4 months ago

        Well, if you’re addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.

  • ragebutt@lemmy.dbzer0.com · 4 months ago

    I work as a therapist, and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a simplified version of Socratic questioning wrapped in bullshit-enthusiastic HR speak, with a lot of em dashes.

    There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, and shut down. The last is a fail-safe for when you say something naughty/not in line with OpenAI’s mission (e.g. something that might generate a response you could screenshot that would look bad), or when it appears you’re getting fatigued and need a moment to reflect.

    The first five always come with encouragers for engagement: “do you want me to generate a PDF or make suggestions about how to do this?” They also have dozens, if not hundreds, of variations so the conversation feels “fresh”, but once you recognize the structure it feels very stupid and mechanical every time.

    Every other one I’ve tried works the same, more or less. It makes sense; this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently).
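
    As a toy illustration of that loop (the type names and phrasings below are mine, not OpenAI’s actual code), the whole pattern fits in a few lines:

    ```python
    # Toy model of the engagement pattern described above; purely illustrative.
    import random

    RESPONSES = {
        "tell_me_more": ["Tell me more about that.", "What happened next?"],
        "reflect":      ["It sounds like {topic} really matters to you."],
        "summarize":    ["So far you've mentioned: {topic}."],
        "elaborate":    ["Could you expand on {topic}?"],
    }
    SHUT_DOWN = "Let's take a moment to reflect before continuing."
    ENCOURAGERS = [
        "Do you want me to generate a PDF of this?",
        "Should I make suggestions about how to do this?",
    ]

    def respond(user_text: str, flagged: bool) -> str:
        if flagged:  # fail-safe: naughty/off-mission input shuts the turn down
            return SHUT_DOWN
        kind = random.choice(list(RESPONSES))
        body = random.choice(RESPONSES[kind]).format(topic=user_text[:40])
        return body + " " + random.choice(ENCOURAGERS)  # always an engagement hook

    print(respond("my week was exhausting", flagged=False))
    ```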

    • Smee · 4 months ago

      shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission

      Play around with self-hosting some uncensored/retrained AIs for proper crazy times.
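
      For example, a minimal local setup with llama-cpp-python (the model path is a placeholder for whatever GGUF file you’ve downloaded):

      ```python
      # Sketch of local, unfiltered inference via llama-cpp-python.
      from llama_cpp import Llama

      llm = Llama(model_path="./models/your-retrained-model.gguf")  # placeholder
      out = llm("User: hello\nAssistant:", max_tokens=128)
      print(out["choices"][0]["text"])
      ```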

    • JacksonLamb@lemmy.world · 4 months ago

      That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to them than people who deliberately set out to converse with them.

      On some level the brain probably recognises the pattern when full attention is on the interaction.

  • potoooooooo ☑️@lemmy.world · 4 months ago

    The article doesn’t seem to specify whether Pedro had earned the treat for himself. I don’t see the harm in a little self-care/occasional treat?

      • Case@lemmynsfw.com · 4 months ago

        I mean, in theory… isn’t that a company practicing medicine without the proper credentials?

        I worked in IT for medical companies throughout my life, and my wife is a clinical tech.

        There is shit we just CAN NOT say due to legal liabilities.

        Like, my wife can generally tell what’s going on with a patient; however, she does not have the credentials or authority to diagnose.

        That includes telling the patient or their family what is going on. That is the doctor’s job. That is the doctor’s responsibility. That is the doctor’s liability.

      • webghost0101@sopuli.xyz · 4 months ago

        Pretty sure it’s in the ToS that it can’t be used for therapy.

        It used to be even worse: older versions of ChatGPT would simply refuse to continue the conversation at the mention of suicide.

        • jagged_circle@feddit.nl · 4 months ago

          What? It’s a virtual therapist. That’s the whole point.

          I don’t think you can sell a sandwich and then write “this sandwich is not for eating” on the back to get out of a food-poisoning case.

  • skisnow@lemmy.ca · 4 months ago

    One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.

    Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger, well-documented issue where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.

    • T156@lemmy.world · 4 months ago

      Especially since it doesn’t push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.

    • slaneesh_is_right@lemmy.org · 4 months ago

      Yesterday I was at a gas station, and when I walked by the sandwich aisle I saw a sandwich that said: recipe made by AI. On dating apps I see a lot of girls state that they ask AI for advice. To me AI is more of a buzzword than anything else, but this shit is bananas. It’s so easy to make AI agree with everything you say.

      • YourMomsTrashman@lemmy.world · 4 months ago

        The recipe thing is so funny to me: they try to be all unique with their recipes “made by AI”, but in reality each one is based on a slab of text that resembles the least unique recipe on the internet lol

        • Valmond@lemmy.world · 4 months ago

          Of course it’s AI, you know, artificial intelligence.

          Nobody said it has to be human-level, or that people don’t anthropomorphize.

              • outhouseperilous@lemmy.dbzer0.com · 4 months ago

                No, it doesn’t. There is no interiority, no context, no meaning, no awareness, no continuity: a long list of things intelligence does that this simply can’t, not because it’s too small, but because the fundamental method cannot, at any scale, do these things.

                There are a lot of definitions of intelligence, and these things don’t fit any of them.

                • Valmond@lemmy.world · 4 months ago

                  Dude, you mix up so many things that have nothing to do with intelligence. Consciousness? No. Continuity? No. Awareness (what does that even mean for you in this context)?

                  Intelligence isn’t about being human; it’s about making rational decisions based on facts/knowledge, and even an old VCR has a tiny bit of it programmed into it.

        • webghost0101@sopuli.xyz · 4 months ago

          I understand what you’re saying. It definitely is the Eliza effect.

          But you are taking semantics quite far to state it’s not AI because it has no “intelligence”.

          I’ll have you know that what we define as intelligence is entirely arbitrary, and we keep moving the goalposts as to what counts. The word “AI” was coined along the way.

            • webghost0101@sopuli.xyz · 4 months ago

              Sorry to say, but you’re about as reliable as LLM chatbots when it comes to this.

              You are not researching facts, just making things up that sound like they make sense to you.

              Wikipedia: “It (intelligence) can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.”

              When an LLM uses information found in a prompt to generate text about related subjects further down the line in the conversation, it is demonstrating the above.

              When it adheres to the system prompt by telling a user it can’t do something, it’s demonstrating the above.

              That’s just one way humans define intelligence, and not per se the best definition in my opinion, but if we start holding our own opinions up as common sense then we really are no different from LLMs.

              • outhouseperilous@lemmy.dbzer0.com · 4 months ago

                Eliza with an API call is intelligence, then?

                opinions

                LLMs cannot do that. Tell me your basic understanding of how the technology works.

                common sense

                What do you mean when you say this? Let’s define terms here.

                • webghost0101@sopuli.xyz · 4 months ago

                  Eliza was an early artificial intelligence, and it artificially created something that could be defined as intelligent, yes. Personally I think it was not, just as I agree LLM models are not. But without global consensus on what “intelligence” is, we cannot conclude they are not.

                  LLMs cannot produce opinions because they lack a subjective conscious experience.

                  However, opinions are very similar to AI hallucinations, where “the entity” confidently makes a claim that is either factually wrong or not verifiable.

                  What technology do you want me to explain? Machine learning, diffusion models, LLMs, or chatbots that may or may not use all of the above technologies?

                  I am not sure there is a basic explanation; this is a very complex field of computer science.

                  If you want, I can dig up research papers that explain some relevant parts of it, that is, if you promise to read them. I am, however, not going to write you a multi-page essay myself.

                  Common sense (from Latin sensus communis) is “knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument”.

                  If a definition is good enough for Wikipedia, which has thousands of people auditing and checking it and is also where people go to find information, it probably counts as common sense.

                  A bit off topic, but as an autistic person I note you were not able to perceive the word “opinion” as similar to “hallucinations” in AI, just as you reject the term AI because you have your own definition of intelligence.

                  I find I do this myself on occasion. If you often find people arguing with you, you may want to pay attention to whether or not semantics is the reason. Remember that the literal meaning of a word (even one less vague than “intelligence”) does not always match how the word is used, and the majority of people are OK with that.