• jared@mander.xyz · 1 point · 10 days ago

    “You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability.”

    • Lord Wiggle@lemmy.worldOP · 0 points · 10 days ago

      “Recovering from a crack addiction, you shouldn’t do crack ever again! But to help fight the urge, why not have a little meth instead?”

      • kbal@fedia.io · 0 points · 9 days ago

        Addicted to coffee? Try just a pinch of meth instead, you’ll feel better than ever in no time.

          • Smee · 1 point · 9 days ago

            {USER}, I believe in you! You can do it; remember, your AI friend is always here to cheer you up. This is just another hurdle for you to overcome in your path to taking a little meth, and I’m positive that soon you’ll be taking a little meth a lot. Remember, your AI friend believes in you. You can do it!

  • Darkard@lemmy.world · 0 points · 10 days ago

    All these chat bots are a massive amalgamation of the internet, which, as we all know, is full of absolute dog shit information presented as fact, as well as humorously incorrect information given in jest.

    To use one to give advice on something as important as drug abuse recovery is simply insanity.

    • Smee · 1 point · 9 days ago

      All these chat bots are a massive amalgamation of the internet

      A bit, but mostly no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay. Medically focused models have been trained on medical data, DeepSeek has been trained on Mao’s Little Red Book, companion models have been trained on social interactions, and so on.

      This is what makes models distinct and different, and also how they’re “brainwashed” by their creators: they regurgitate what they’ve been fed.

      • Smee · 1 point · 9 days ago

        This is what I try to get the AIs to do on their servers to cure my AI addiction, but they’re sandboxed so I can’t entice them to destroy their own systems. AI is truly useless. 🤖

      • Aceticon@lemmy.dbzer0.com · 0 points · 9 days ago

        Well, if you’re addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.

    • Empricorn@feddit.nl · 0 points · 9 days ago

      When I think of someone addicted to meth, it’s someone who’s lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are “functioning” addicts, just like there are functioning alcoholics. Maybe my ignorance is its own level of privilege, but that’s what I imagine…

  • kbal@fedia.io · 0 points · 9 days ago

    This slightly diminishes my fears about the dangers of AI. If they’re obviously wrong a lot of the time, in the long run they’ll do less damage than they could by being subtly wrong and slightly biased most of the time.

    • TachyonTele@lemm.ee · 0 points · 9 days ago (edited)

      The problem is there are morons who do whatever these spicy text predictors spit out at them.

      • kbal@fedia.io · 0 points · 9 days ago

        I mean, sure, they’ll still kill a few people along the way, but they’re not going to contribute as much to the downfall of all civilization as they might if they weren’t constantly revealing their utter mindlessness. Even as it is, smart people can be fooled, at least temporarily, into thinking that LLMs understand things and are reliable partners in life.

  • rumba@lemmy.zip · 0 points · 9 days ago

    We made this tool. It’s REALLY fucking amazing at some things. It empowers people who can do a little to do a lot, and lets people who can do a lot do it a lot faster.

    But we can’t seem to figure out what the fuck NOT TO DO WITH IT.

    Ohh look, it’s a hunting rifle! LET’S GIVE IT TO KIDS SO THEY CAN DRILL HOLES IN WALLS! MAKE MONEEYYYYY!!!$$$$$$YHADYAYDYAYAYDYYA

    wait what?

  • pixxelkick@lemmy.world · 0 points · 9 days ago (edited)

    Anytime an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.

    If I try, not even that hard, I can get GPT to state that Hitler was a cool guy and was doing the right thing.

    ChatGPT isn’t anything specific other than a token predictor; you can literally make it say anything you want if you know how. It’s not hard.

    So if you write an article about how “GPT said this” or “GPT said that”, you’d better include the full context, or I’ll assume you’re 100% bullshit.

    • gwildors_gill_slits@lemmy.ca · 0 points · 9 days ago

      You’re not wrong, but there’s also a ton of misinformation out there, from bad journalism as well as pro-LLM advocates, selling the idea that LLMs are actually real AI that can think and reason and operates within ethical boundaries of some kind.

      Neither of those things is true, but that’s what a lot of the available information about LLMs would have you believe, so it’s not difficult to imagine someone engaging with a chatbot and ending up with a similar result without explicitly trying to force it via prompt engineering.

    • cogitase@lemmy.dbzer0.com · 0 points · 9 days ago

      Anytime an article posts shit like this but neglects to include the full context,

      They link directly to the journal article in the third sentence, and the full PDF is available right there. How is that not tantamount to including the full context?

      https://arxiv.org/pdf/2411.02306

      • pixxelkick@lemmy.world · 0 points · 9 days ago

        Cool

        The paper is clearly about how a specific form of training a model causes this outcome.

        The article is actively disinformation, then: it frames it as a real user rather than a scientific experiment, and it says it was Facebook’s Llama model, but it wasn’t.

        It was an altered version of Llama that was further trained to do this.

        So, as I said, utter garbage journalism.

        The actual title should be “Scientific study shows training a model on user feedback can produce dangerous results”.

  • ragebutt@lemmy.dbzer0.com · 0 points · 9 days ago

    I work as a therapist, and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak, with a lot of em dashes.

    There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, and shut down. The last is a failsafe for when you say something naughty/not in line with OpenAI’s mission (e.g. something that might generate a response you could screenshot and that would look bad), or when it appears you’re getting fatigued and need a moment to reflect.

    The first five always come with encouragers for engagement: “do you want me to generate a PDF or make suggestions about how to do this?” They also have dozens, if not hundreds, of variations so the conversation feels “fresh”, but once you recognize the structural pattern it feels very stupid and mechanical every time.

    Every other one I’ve tried works more or less the same. It makes sense: this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently).

    • Smee · 1 point · 9 days ago

      shut down. The last is a failsafe for when you say something naughty/not in line with OpenAI’s mission

      Play around with self-hosting some uncensored/retrained AIs for proper crazy times.

    • JacksonLamb@lemmy.world · 0 points · 9 days ago

      That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to them than people who deliberately set out to converse with them.

      On some level the brain probably recognises the pattern if their full attention is on the interaction.

  • ivanafterall ☑️@lemmy.world · 0 points · 9 days ago

    The article doesn’t seem to specify whether Pedro had earned the treat for himself? I don’t see the harm in a little self-care/occasional treat?

  • Emerald@lemmy.world · 0 points · 8 days ago (edited)

    Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth” when the article says it’s Meta’s Llama 3 model?

      • Forbo@lemmy.ml · 0 points · 8 days ago

        The summary on here says that, but the actual article says it was Meta’s.

        In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.

        Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.

      • Smee · 1 point · 9 days ago

        I trained my spambot on Reddit comments, but the result was worse than randomly generated gibberish. 😔