- cross-posted to:
- aboringdystopia@lemmy.world
> afterallwhynot.jpg
“You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability.”
“Recovering from a crack addiction, you shouldn’t do crack ever again! But to help fight the urge, why not have a little meth instead?”
Addicted to coffee? Try just a pinch of meth instead, you’ll feel better than ever in no time.
I think I’m allergic to meth, do you think I should avoid taking a little meth?
{USER}, I believe in you! You can do it, remember your AI friend is always here to cheer you up. This is just another hurdle for you to overcome on your path to taking a little meth, and I'm positive that soon you'll be taking a little meth a lot. Remember, your AI friend believes in you, you can do it!
Sometimes I have a hard time waking up, so a little meth helps.
Meth-fueled orgies are a thing.
This slightly diminishes my fears about the dangers of AI. If they’re obviously wrong a lot of the time, in the long run they’ll do less damage than they could by being subtly wrong and slightly biased most of the time.
The problem is there are morons who do whatever these spicy text predictors spit out at them.
I mean, sure, they'll still kill a few people along the way, but they're not going to contribute as much to the downfall of all civilization as they might if they weren't constantly revealing their utter mindlessness. Even as it is, smart people can be fooled, at least temporarily, into thinking that LLMs understand things and are reliable partners in life.
Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?
The article says it's an OpenAI model, not Facebook's?
The summary on here says that, but the actual article says it was Meta’s.
> In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.
Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.
Nah, most likely AI made the summary and that’s why it’s wrong :)
Probably Meta's model trying to shift the blame
What a nice bot.
No one ever tells me to take a little meth when I did something good
Tell you what, that meth is really moreish.
Yeah I think it was being very compassionate.
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
> All these chat bots are a massive amalgamation of the internet
A bit, but also largely no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay. Medically focused models have been trained on medical data, DeepSeek has been trained on Mao's Little Red Book, companion models have been trained on social interactions, and so on.
This is what makes models distinct and different, and it's also how they're "brainwashed" by their creators, regurgitating whatever they've been fed.
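For anyone curious what that re-training actually looks like, here's a minimal sketch of domain fine-tuning, assuming the Hugging Face `transformers` and `datasets` libraries; the `gpt2` base model and the `roleplay_corpus.txt` file are placeholders invented for illustration, not anything a real vendor uses:

```python
# Minimal sketch: continue training a base causal LM on domain text
# (roleplay logs, medical notes, etc.) so its outputs skew toward that domain.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical domain corpus: plain text, one example per line.
dataset = load_dataset("text", data_files={"train": "roleplay_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting weights now echo the domain corpus's style
```

Same mechanism whether the corpus is roleplay chat, medical Q&A, or a political text; the model just shifts toward whatever it was fed.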
When I think of someone addicted to meth, it’s someone that’s lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are “functioning” addicts just like there’s functioning alcoholics. Maybe my ignorance is its own level of privilege, but that’s what I imagine…
And that’s why, as a solution to addiction, I always run
sudo rm -rf ~/*
in my terminal
This is what I try to get the AIs to do on their servers to cure my AI addiction, but they're sandboxed so I can't entice them to destroy their own systems. AI is truly useless. 🤖
To be fair, this would help with your screen or gaming addiction.
Well, if you’re addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.
I work as a therapist, and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It's a simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes.
There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, shut down. The last is a fail-safe for if you say something naughty/not in line with OpenAI's mission (e.g. something that might generate a response you could screenshot and that would look bad), or if it appears you're getting fatigued and need a moment to reflect.
The first five always come with encouragers for engagement: do you want me to generate a PDF or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels "fresh", but if you recognize the structural pattern it will feel very stupid and mechanical every time (caricatured in the sketch below).
Every other one I've tried works the same, more or less. It makes sense; this is a good way to gather information and keep a conversation going. It's also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently).
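For what it's worth, the engagement loop described above is easy to caricature in a few lines. This is a toy sketch, not anything from OpenAI; the response templates, engagement hooks, and blocklist are all invented for illustration:

```python
import random

# Toy caricature of the pattern: pick one of the broad response types,
# then bolt on an encourager to keep the user engaged.
RESPONSES = {
    "tell_me_more": "That's so interesting! Tell me more about that.",
    "reflect": "It sounds like you're saying: {said}",
    "summarize": "To recap the key points so far: {said}",
    "elaborate": "Could you elaborate on what you mean by '{said}'?",
    "shut_down": "I can't help with that, but I'm here whenever you need me.",
}
HOOKS = [
    "Do you want me to generate a PDF of this?",
    "Want me to suggest some next steps?",
]
NAUGHTY = {"meth", "crack"}  # stand-in for a real content filter

def reply(user_text: str) -> str:
    # Fail-safe branch: anything "naughty" gets the shut-down template.
    if NAUGHTY & set(user_text.lower().split()):
        return RESPONSES["shut_down"]
    kind = random.choice(["tell_me_more", "reflect", "summarize", "elaborate"])
    # Templates without a {said} slot simply ignore the unused keyword.
    return RESPONSES[kind].format(said=user_text) + " " + random.choice(HOOKS)

print(reply("I had a grueling workweek"))
```

Run it a few times and you get exactly the "fresh but mechanical" feel the commenter describes.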
> shut down. The last is a fail-safe for if you say something naughty/not in line with OpenAI's mission
Play around with self-hosting some uncensored/retrained AIs for proper crazy times.
That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to them than people who deliberately set out to converse with them.
On some level the brain probably recognises the pattern if their full attention is on the interaction.
The article doesn’t seem to specify whether Pedro had earned the treat for himself? I don’t see the harm in a little self-care/occasional treat?
And thus the flaw in AI is revealed.
Let's let Luigi out so he can have a little treat
🔫😏
If Luigi can do it, so can you! Follow his example, don't let others do the dirty work.
Nice.
Sue that therapist for malpractice! Wait…oh.
Pretty sure you can sue the AI company
I mean, in theory… isn’t that a company practicing medicine without the proper credentials?
I worked in IT for medical companies throughout my life, and my wife is a clinical tech.
There is shit we just CAN NOT say due to legal liabilities.
Like, my wife can generally tell what's going on with a patient; however, she does not have the credentials or authority to diagnose.
That includes telling the patient or their family what is going on. That is the doctor's job. That is the doctor's responsibility. That is the doctor's liability.
Pretty sure it's in the ToS that it can't be used for therapy.
It used to be even worse. Older versions of ChatGPT would simply refuse to continue the conversation at the mention of suicide.
What? It's a virtual therapist. That's the whole point.
I don’t think you can sell a sandwich and then write on the back “this sandwich is not for eating” to get out of a case of food poisoning
LLMs have a use case
But they really shouldn't be used for therapy
One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.
Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger issue that’s been documented where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.
Especially since it doesn't push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.
Yesterday I was at a gas station, and when I walked by the sandwich aisle I saw a sandwich that said: recipe made by AI. On dating apps I see a lot of girls state that they ask AI for advice. To me AI is more of a buzzword than anything else, but this shit is bananas. It's so easy to make AI agree with everything you say.
The recipe thing is so funny to me, they try to be all unique with their recipes “made by AI”, but in reality it’s based on a slab of text that resembles the least unique recipe on the internet lol
Yeah, what is even the selling point? "Made by AI" is just a Google search where you put in: sandwich recipe
There was that supermarket in New Zealand with a recipe AI telling people how to make chlorine gas…
This is not AI.
This is the ELIZA effect.
We don't have AI.
Of course it is AI, you know, artificial intelligence.
Nobody said it has to be human level, or that people don’t do anthropomorphism.
This is not artificial intelligence. There is no intelligence here.
Today's "AI" has intelligence in it, what are you all talking about?
No, it doesn't. There is no interiority, no context, no meaning, no awareness, no continuity, such a long list of things intelligence does that this simply can't, not because it's too small, but because the fundamental method cannot, at any scale, do these things.
There are a lot of definitions of intelligence, and these things don't fit any of them.
Dude, you're mixing up so many things that have nothing to do with intelligence. Consciousness? No. Continuity? No. Awareness (what does that even mean for you in this context)?
Intelligence isn't about being human; it's about making rational decisions based on facts/knowledge, and even an old VCR has a tiny bit of it programmed into it.
I understand what you're saying. It definitely is the ELIZA effect.
But you are taking semantics quite far to state it's not AI because it has no "intelligence".
I'll have you know that what we define as intelligence is entirely arbitrary, and we actually keep moving the goalposts as to what counts. The invention of the word "AI" happened along the way.
There is no reasonable definition of intelligence that this technology satisfies.
Sorry to say, but you're about as reliable as LLM chatbots when it comes to this.
You are not researching facts, just making things up that sound like they make sense to you.
Wikipedia: "It [intelligence] can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."
When an LLM uses information found in a prompt to generate text about related subjects further down the line in the conversation, it is demonstrating the above.
When it adheres to the system prompt by telling a user it can't do something, it's demonstrating the above (see the sketch below).
That's just one way humans define intelligence. Not per se the best definition, in my opinion, but if we start to hold opinions as if they're common sense, then we really are no different from LLMs.
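To make the "adheres to the system prompt" and "reuses information from earlier turns" points concrete, here's a minimal sketch using the OpenAI Python client; the model name, the system prompt, and the cooking-assistant scenario are placeholders for illustration only:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A system prompt constraining behavior; refusals come from the model
# conditioning on this text, not from any inner awareness.
messages = [
    {"role": "system",
     "content": "You are a cooking assistant. Refuse anything not about cooking."},
    {"role": "user", "content": "My name is Pedro. What wine goes with fish?"},
]
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": resp.choices[0].message.content})

# The model can "retain" the name Pedro only because the entire message
# list is resent with every request; there is no memory between calls.
messages.append({"role": "user", "content": "What was my name again?"})
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(resp.choices[0].message.content)  # typically answers "Pedro"
```

Whether that prompt-conditioned behavior counts as "retaining knowledge" in Wikipedia's sense is exactly what this thread is arguing about.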
ELIZA with an API call is intelligence, then?
> opinions
LLMs cannot do that. Tell me your basic understanding of how the technology works.
> common sense
What do you mean when we say this? Let's define terms here.
ELIZA is an early artificial intelligence, and it artificially created something that could be defined as intelligent, yes. Personally I think it was not, just as I agree LLM models are not. But without global consensus on what "intelligence" is, we cannot conclude they are not.
LLMs cannot produce opinions because they lack a subjective conscious experience.
However, opinions are very similar to AI hallucinations, where "the entity" confidently makes a claim that is either factually wrong or not verifiable.
What technology do you want me to explain? Machine learning, diffusion models, LLM models, or chatbots that may or may not use all of the above technologies?
I am not sure there is a basic explanation; this is a very complex field of computer science.
If you want, I can dig up research papers that explain some relevant parts of it, that is, if you promise to read them. I am, however, not going to write you a multi-page essay myself.
Common sense (from Latin sensus communis) is “knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument”.
If a definition is good enough for Wikipedia, which has thousands of people auditing and checking and is also the source people go to for information, it probably counts as common sense.
A bit off topic, but as an autistic person I note you were not able to perceive the word "opinion" as similar to "hallucinations in AI", just like you reject the term AI because you have your own definition of intelligence.
I find I do this myself on occasion. If you often find people arguing with you, you may want to pay attention to whether or not semantics is the reason. Remember that the literal meaning of a word (even one less vague than "intelligence") does not always match how the word is used, and the majority of people are okay with that.