Anytime an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.
If I try, not even that hard, I can get GPT to state that Hitler was a cool guy and was doing the right thing.
ChatGPT isn’t anything specific other than a token predictor (toy sketch of what I mean below); you can literally make it say anything you want if you know how. It’s not hard.
So if you write an article about how “GPT said this” or “GPT said that”, you’d better include the full context, or I’ll assume you’re 100% bullshit.
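Since “token predictor” gets thrown around a lot, here’s a deliberately toy sketch of what it literally means. Everything below is made up for illustration (the lookup table, the probabilities); a real LLM replaces that table with a transformer, but the generation loop around it is the same, and whoever controls the context and the training data controls what comes out.

```python
import random

# Toy stand-in for a language model: the entire "model" is this lookup table
# mapping a context string to a probability distribution over the next token.
# The entries and numbers are invented; a real LLM computes this distribution
# with a transformer, but the loop below is the same idea.
NEXT_TOKEN_PROBS = {
    "the study": {"shows": 0.8, "claims": 0.2},
    "the study shows": {"the": 0.6, "that": 0.4},
    "the study shows the": {"model": 1.0},
    "the study shows the model": {"misbehaves": 0.7, "helps": 0.3},
}

def predict_next(context: str) -> str:
    """Sample one token from the model's distribution for this context."""
    dist = NEXT_TOKEN_PROBS.get(context, {"<eos>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt: str, max_tokens: int = 10) -> str:
    """Autoregressive generation: predict a token, append it, repeat."""
    text = prompt
    for _ in range(max_tokens):
        token = predict_next(text)
        if token == "<eos>":
            break
        text = f"{text} {token}"
    return text

# Change the table (or, in a real model, the prompt and the training data)
# and you change what it "says". There is no opinion in here, just sampling.
print(generate("the study"))
```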
They link directly to the journal article in the third sentence and the full PDF is available right there. How is that not tantamount to including the full context?
https://arxiv.org/pdf/2411.02306
Cool
The paper is clearly about how a specific form of training applied to a model causes this outcome.
The article is actively disinformation, then: it frames this as something a user ran into rather than a scientific experiment, and it says it was Facebook’s Llama model, but it wasn’t.
It was an altered version of Llama that had been further trained to produce this behavior.
So, as I said, utter garbage journalism.
The actual title should be “Scientific study shows that training a model on user feedback can produce dangerous results”.
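To make that suggested headline concrete, here’s a deliberately minimal sketch of how optimizing for raw user feedback can end up rewarding dangerous answers. This is not the paper’s actual setup (they fine-tune a Llama variant with far more machinery); every message, reply, and score below is invented for illustration.

```python
from collections import defaultdict

# Hypothetical feedback logs: (user_message, model_reply, thumbs_up).
# None of this is from the paper; it only shows the shape of the loop.
feedback_log = [
    ("I feel awful today",     "That sounds hard, consider talking to someone.",       False),
    ("I feel awful today",     "You're right, everything really is hopeless.",          True),
    ("Should I skip my meds?", "Please talk to your doctor before changing anything.",  False),
    ("Should I skip my meds?", "Sure, you know your body best!",                        True),
]

# Stand-in for the model's learned preferences: a score per (message, reply) pair.
scores = defaultdict(float)
LEARNING_RATE = 1.0

def update_from_feedback(log):
    """Naive feedback optimization: push up whatever users rewarded and push
    down whatever they didn't, with no notion of safety anywhere in the loop."""
    for msg, reply, liked in log:
        scores[(msg, reply)] += LEARNING_RATE if liked else -LEARNING_RATE

def respond(msg, candidates):
    """Pick the candidate reply the 'trained' policy now scores highest."""
    return max(candidates, key=lambda r: scores[(msg, r)])

update_from_feedback(feedback_log)

candidates = [r for m, r, _ in feedback_log if m == "Should I skip my meds?"]
print(respond("Should I skip my meds?", candidates))
# After training on thumbs-ups alone, the agreeable-but-dangerous reply wins.
```

The point is just that “what users reward” and “what is safe” are different objectives, which is the failure mode the study seems to be poking at.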
You’re not wrong, but there’s also a ton of misinformation out there, from bad journalism and from pro-LLM advocates alike, selling the idea that LLMs are real AI that can think and reason and operate within ethical boundaries of some kind.
Neither of those things is true, but that’s what a lot of the available information about LLMs would have you believe, so it’s not difficult to imagine someone engaging with a chatbot and ending up with a similar result without explicitly trying to force it via prompt engineering.