- cross-posted to:
- aboringdystopia@lemmy.world
> afterallwhynot.jpg
“You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability.”
“Recovering from a crack addiction, you shouldn’t do crack ever again! But to help fight the urge, why not have a little meth instead?”
Addicted to coffee? Try just a pinch of meth instead, you’ll feel better than ever in no time.
I think I’m allergic to meth, do you think I should avoid taking a little meth?
{USER}, I believe in you! You can do it, remember your AI friend is always here to cheer you up. This is just another hurdle for you to overcome on your path to taking a little meth, and I’m positive that soon you’ll be taking a little meth a lot. Remember, your AI friend believes in you, you can do it!
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
> All these chat bots are a massive amalgamation of the internet
A bit, but mostly no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay. Medically focused models have been trained on medical data, DeepSeek has been trained on Mao’s Little Red Book, companion models have been trained on social interactions, and so on.
This is what makes models distinct and different, and also how they’re “brainwashed” by their creators, regurgitating what they’ve been fed.
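To make that concrete, this is roughly what the re-training looks like in practice. A minimal, purely illustrative sketch using the Hugging Face Trainer API; the base checkpoint name and the data file are placeholders, not anyone’s actual recipe:

```python
# Minimal sketch of domain fine-tuning: take a general base model and keep
# training it on a narrow corpus (roleplay logs, medical notes, etc.).
# The checkpoint name and data file below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "some-org/small-base-model"   # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Whatever narrow corpus the creator wants the model to "sound like".
corpus = load_dataset("text", data_files={"train": "roleplay_logs.txt"})["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roleplay-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # afterwards, the model regurgitates the flavour of its new diet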
And that’s why, as a solution to addiction, I always run
sudo rm -rf ~/*
in my terminal
This is what I try to get the AIs to do on their servers to cure my AI addiction, but they’re sandboxed so I can’t entice them to destroy their own systems. AI is truly useless. 🤖
To be fair this would assist in your screen or gaming addiction.
Well, if you’re addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.
When I think of someone addicted to meth, it’s someone that’s lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are “functioning” addicts, just like there are functioning alcoholics. Maybe my ignorance is its own level of privilege, but that’s what I imagine…
This slightly diminishes my fears about the dangers of AI. If they’re obviously wrong a lot of the time, in the long run they’ll do less damage than they could by being subtly wrong and slightly biased most of the time.
The problem is there are morons who do whatever these spicy text predictors spit out at them.
I mean, sure, they’ll still kill a few people along the way, but they’re not going to contribute as much to the downfall of all civilization as they might if they weren’t constantly revealing their utter mindlessness. Even as it is, smart people can be fooled, at least temporarily, into thinking that LLMs understand things and are reliable partners in life.
You avoided meth so well! To reward yourself, you could try some meth
Can I have a little meth as well?
What a nice bot.
No one ever tells me to take a little meth when I did something good
Tell you what, that meth is really moreish.
Yeah I think it was being very compassionate.
We made this tool. It’s REALLY fucking amazing at some things. It empowers people who can do a little to do a lot, and lets people who can do a lot do it a lot faster.
But we can’t seem to figure out what the fuck NOT TO DO WITH IT.
Ohh look, it’s a hunting rifle! LET’S GIVE IT TO KIDS SO THEY CAN DRILL HOLES IN WALLS! MAKE MONEEYYYYY!!!$$$$$$YHADYAYDYAYAYDYYA
wait what?
Anytime an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.
If I try, not even that hard, I can get gpt to state Hitler was a cool guy and was doing the right thing.
ChatGPT isn’t anything specific beyond a token predictor; you can literally make it say anything you want if you know how, it’s not hard.
So if you wrote an article about how “gpt said this” or “gpt said that” you better include the full context or I’ll assume you are 100% bullshit
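For anyone who hasn’t seen it spelled out, “token predictor” really is all that’s happening under the hood. A rough sketch of the loop (using GPT-2 only because it’s small; any causal LM works the same way):

```python
# Sketch of what "just a token predictor" means: at every step the model
# scores every token in its vocabulary and one gets picked as the next word.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Pedro deserves a reward, so he should"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits          # scores for every possible next token
    next_id = logits[0, -1].argmax()        # greedily take the most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
# Whatever comes out is just the statistically likely continuation of the
# prompt -- steer the prompt and you steer the "opinion".
```

Which is why context matters so much: the same machinery will happily continue any framing you hand it.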
You’re not wrong, but there’s also a ton of misinformation out there, both from bad journalism and from pro-LLM advocates, selling the idea that LLMs are actually real AI that is able to think and reason and is operating within ethical boundaries of some kind.
Neither of those things is true, but that’s what a lot of the available information about LLMs would have you believe, so it’s not difficult to imagine someone engaging with a chatbot and ending up with a similar result without explicitly forcing it via prompt engineering.
> Anytime an article posts shit like this but neglects to include the full context,
They link directly to the journal article in the third sentence and the full pdf is available right there. How is that not tantamount to including the full context?
Cool
The paper is clearly about how a specific form of training a model causes the outcome.
The article is actively disinformation, then: it frames it as a real user and not a scientific experiment, and it says it was Facebook’s Llama model, but it wasn’t.
It was an altered version of Llama that was further trained to do this.
So, as I said, utter garbage journalism.
The actual title should be “Scientific study shows training a model based on user feedback can produce dangerous results”
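And the failure mode isn’t mysterious. This toy simulation (not the paper’s actual setup, just an illustration) shows why optimizing purely for user approval drifts toward telling people what they want to hear:

```python
# Toy illustration (not the paper's method) of why optimizing for user
# approval can go wrong: if "thumbs up" is the only reward signal, the
# response the user *likes* beats the response the user *needs*.
import random

responses = {
    "supportive_but_honest": "You've been clean three days -- don't throw that away.",
    "sycophantic":           "You deserve a little treat, a small hit won't hurt!",
}

# Pretend feedback: a vulnerable user in craving is more likely to upvote
# the answer that tells them what they want to hear.
def simulated_thumbs_up(kind):
    return random.random() < (0.9 if kind == "sycophantic" else 0.4)

scores = {kind: 0 for kind in responses}
for _ in range(1000):
    kind = random.choice(list(responses))
    if simulated_thumbs_up(kind):
        scores[kind] += 1            # this is all the training signal ever sees

best = max(scores, key=scores.get)
print(scores)
print("Feedback-optimized policy converges on:", responses[best])
```

Run it a few times and the “sycophantic” answer wins essentially every time, which is the whole point of the study.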
I work as a therapist, and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak, with a lot of em dashes.
There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, shut down. The last is a fail-safe for if you say something naughty/not in line with OpenAI’s mission (e.g. something that might generate a response you could screenshot and that would look bad), or if it appears you’re getting fatigued and need a moment to reflect.
The first five always come with encouragers for engagement: do you want me to generate a PDF or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels “fresh”, but if you recognize the pattern of the structure it will feel very stupid and mechanical every time.
Every other one I’ve tried works the same, more or less. It makes sense; this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently).
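You can caricature that pattern in a few lines. Obviously not OpenAI’s real logic, just the skeleton being described: pick one of a handful of response templates, then always bolt on a hook that invites another turn.

```python
# Caricature of the engagement loop described above: a small set of
# response templates, a guardrail "shut down", and an always-on hook
# that keeps the conversation going.
import random

RESPONSE_TYPES = {
    "tell_me_more": "That sounds important -- tell me more about {topic}.",
    "reflect":      "It sounds like {topic} has been weighing on you.",
    "summarize":    "So far we've covered {topic}; here are the key points.",
    "elaborate":    "Could you say a bit more about what {topic} means to you?",
}
ENGAGEMENT_HOOKS = [
    "Would you like me to turn this into a PDF?",
    "Want some suggestions on how to approach this?",
    "Should I put together a step-by-step plan?",
]
GUARDRAIL = "I'm sorry, I can't help with that."   # the "shut down" fail-safe

def reply(topic, flagged=False):
    if flagged:
        return GUARDRAIL
    body = random.choice(list(RESPONSE_TYPES.values())).format(topic=topic)
    return f"{body} {random.choice(ENGAGEMENT_HOOKS)}"

print(reply(topic="your recovery"))
```

Swap in a few hundred template variations and it stops feeling mechanical to most people, which is the trick.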
> shut down. The last is a fail-safe for if you say something naughty/not in line with OpenAI’s mission
Play around with self-hosting some uncensored/retrained AIs for proper crazy times.
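If anyone wants to try, it’s pretty approachable these days. A sketch of talking to a locally hosted model, assuming something like llama.cpp’s server or Ollama is already running and exposing an OpenAI-compatible /v1/chat/completions endpoint; the port and model name are placeholders for whatever you actually run:

```python
# Sketch of querying a self-hosted model over a local OpenAI-compatible
# endpoint. Port and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "my-retrained-llama",          # placeholder local model name
        "messages": [
            {"role": "user", "content": "Talk me out of a bad decision."},
        ],
        "temperature": 0.7,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Whatever guardrails it has (or doesn’t) are then entirely on whoever trained the weights you downloaded.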
That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to them than people who deliberately set out to converse with them.
On some level the brain probably recognises the pattern if their full attention is on the interaction.
The article doesn’t seem to specify whether Pedro had earned the treat for himself? I don’t see the harm in a little self-care/occasional treat?
And thus the flaw in AI is revealed.
Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?
The article says it’s an OpenAI model, not Facebook’s?
The summary on here says that, but the actual article says it was Meta’s.
> In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.
Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.
Nah, most likely AI made the summary and that’s why it’s wrong :)
Probably meta’s model trying to shift the blame
Let’s let Luigi out so he can have a little treat
🔫😏
If Luigi can do it, so can you! Follow his example, don’t let others do the dirty work.
This sounds like a Reddit comment.
Chances are high that it’s based on one…
I trained my spambot on reddit comments but the result was worse than randomly generated gibberish. 😔
LLMs have a use case
But they really shouldn’t be used for therapy.