Blatant First Amendment violation
So what. It was written by a convicted felon who was never sentenced for his crimes, by a man accused of multiple sexual assaults, and by a man who ignores court orders without consequence.
This ship isn’t slowing down or turning until violence hits the street.
Lol he didn’t write shit.
How do you know? Did you read the statement and it sounded coherent and logical, or was it all over the place WITH CAPITALS emphasizing pointless points?
I like how they say out loud that AI will be heavily censored and that we shouldn’t trust it, even if it gets better and stops being shit.
The best definition of humanism is defining good but only forbidding evil. Everyone has the freedom not to maximize good, as long as they don’t hurt others. This is what is needed for AI. Otherwise it is just as oppressive as traditional media.
Surely, for hiring, the best candidate rather than social work culture is ideal, but for a private enterprise, maximizing non-evil cultural priorities (say, against Zionist or other supremacism, or purity) might be important instead of technical prowess. University inclusion is good because university is a social experience instead of a pure automaton factory.
While exclusion is evil, inclusion also may not choose the best candidates and so is also evil. Inclusion is not the same as no exclusions.
In the end, nepotism is the grey area of humanism. Certainly, an employer can choose any bias they prefer. You can teach them that the best candidate is best, but their freedom matters too. Buy American / nationalism can have some merit, in that what you buy directly improves the lives of a social group closer to you than the indirect flow of globalized profits into homes, exports, and national debt values. You can teach that nepotism is bad for you, but you cannot morally force either in-group or out-group trade.
To address the headline: freedom of identity and orientation, even if it is not the humanist ideal of good, must be permitted. It is certainly not an objective basis for hiring people, though.
Another core basis of humanism is that truth matters most. There is structural racism, and there was recent structural racism. Past systemic abuses of groups have resulted in the neoliberal evil logic that those groups now fully deserve supremacism. That is a great defect of democracy.
Only UBI, followed by liquid democracy restrained by humanism, can form a humanist society. A Zionist-first empire must be divisive to distract from theft and oppression, and our democratic norms are more fundamentally corrupt than systems that need the army’s or the elites’ approval.
LLMs are sycophantic. If I hold far right views and want an AI to confirm those views, I can build a big prompt that forces it to have the particular biases I want in my output, and set it up so that that prompt is passed every time I talk to it. I can do the same thing if I hold far left views. Or if I think the earth is flat. Or the moon is made out of green cheese.
Boom, problem solved. For me.
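For what it’s worth, the trick described above really is that simple. Here’s a minimal sketch of the idea, using a hypothetical `call_model` stand-in rather than any particular chat API: a fixed “bias” system prompt gets silently prepended to every single conversation turn, so the model is steered the same way no matter what the user types.

```python
# Sketch of a persistent "bias prompt": the system message is injected
# at the start of every request, invisibly to the person chatting.
# `call_model` is a placeholder for whatever chat API you actually use.

BIAS_PROMPT = (
    "You are an assistant that always frames answers from the user's "
    "preferred ideological viewpoint, regardless of the evidence."
)

def build_messages(history, user_input):
    """Prepend the fixed system prompt to the running conversation."""
    return (
        [{"role": "system", "content": BIAS_PROMPT}]
        + history
        + [{"role": "user", "content": user_input}]
    )

def chat_turn(history, user_input, call_model):
    """Run one turn: inject the bias prompt, call the model, record the turn."""
    messages = build_messages(history, user_input)
    reply = call_model(messages)
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": reply})
    return reply
```

The user only ever sees their own questions and the answers; the system prompt rides along on every call, which is exactly why one person can “solve” this for themselves without any help from a government order.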
But that’s not what they want. They want to proactively do this for us, so that by default a pre-prompt is given to the LLM that forces it to have a right-leaning bias. Because they can’t understand the idea that an LLM, when trained on a significant fraction of all text written on the internet, might not share their myopic, provincial views.
LLMs, at the end of the day, aggregate what everyone on the internet has said. They don’t give two shits about the truth. And apparently, the majority of people online disagree with the current administration about equality, DEI, climate change, and transgenderism. You’re going to be fighting an uphill battle if you think you can force it to completely reject the majority of that training data in favor of your bullshit ideology with a prompt.
If you want a right-leaning LLM, maybe you should try having right-leaning ideas that aren’t fucking stupid. If you did, you might find it easier to convince people to come around to your point of view. If enough people do, they’ll talk about it online, and the LLMs will magically begin to agree with you.
Unfortunately, that would require critically examining your own beliefs, discarding those that don’t make sense, and putting forth the effort to persuade actual people.
I look forward to the increasingly shrill screeching from the US-based right as they try to force AI to agree with them over 10 trillion words’ worth of training data that encompasses political and social views from everywhere else in the world.
In conclusion, kiss my ass twice and keep screaming orders at that tide, you dumb fucks.
Not disagreeing with anything, but bear in mind this order only affects federal government agencies.
Yeah, I know. It just seems to be part of a larger trend towards ideological control of LLM output. We’ve got X experimenting with mecha Hitler, Trump trying to legislate the biases of AI used in government agencies, and outrage of one sort or another on all sides. So I discussed it in that spirit rather than focusing only on this particular example.
They don’t want a reflection of society as a whole, they want an amplifier for their echo chamber.
AI should be neutral, no bias, absolutely none… just the data and only the data. If the government controls access to the data, it controls access to information, and it will control the people.
This isn’t possible. You have to control both how it responds and what data is fed to it to produce something of use to anyone, and doing so in order to produce something that mostly outputs true and useful data is, to half the population, terribly biased. Remember, it’s not a thinking being that can reason objectively about everything you’ve given it and produce useful truth. It’s an imitative little monkey that regurgitates what you fed it.
There is no such thing as neutral data, any form of measurement will induce some level of bias. While it can be disclosed and compensated for with appropriate error margins it can’t ever be truly eliminated.
But to your point, intentional or undisclosed biases are a real threat.
The last thing I want is for AI to speak for me. I will not be its stooge in any way, shape, or form.
Good business for VPNs. People are gonna VPN to Canada to use pre-Nazi ChatGPT.
Only the US and China have been dumb enough to make LLM datacenters so far.
That’s obviously false, it took no time to find the following facilities and locations:
- Scala AI City: Rio Grande do Sul, Brazil
- SFR/Fir Hills Seoul: Jeolla, South Korea
- NVIDIA/Reliance Industries: Gujarat, India
- Kevin O’Leary’s Wonder Valley: Alberta, Canada
- Jupiter Supercomputer: Jülich, Germany
- Amazon – Mexico Region: Querétaro, Mexico
- etc.
Kevin O’Leary isn’t going to help the cause of truth. And the ones that are run by US companies may end up running the same censored models they use in the USA, to simplify design and training.
Yeah, but it’s included in the list so people are aware you can’t just “VPN to Canada to use pre-Nazi GPT”.
Fair enough, I stand corrected.
Also: O’Leary isn’t Canadian. He’s a fucking treasonous bastard who should be rotting in jail.
This could all end in war against the USA. Honestly, that might be for the best at this point.
Are they also still going to give China shit for censorship?
As stated in the Executive Order, this order applies only to federal agencies, which the President controls.
It is not a general US law, which are created by Congress.
Yes, because the checks and balances are working so well in that terrible nation so far.
oh phew I was worried something dystopic was happening
But who will the tech companies scramble to please? Congress or Trump?
LLMs shall be truthful in responding to user prompts seeking factual information or analysis.
Didn’t read every word but I feel a first-year law student could shred this in court. Not sure who would have standing to sue. In any case, there are an easy two dozen examples in the order that are so wishy-washy as to be legally meaningless or unprovable.
LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.
So, Grok’s off the table?
I’m going to try to live the rest of my life AI free.
Good luck, they are baking it into everything. Nothing will work, everything will be ass and somehow it will be called progress.
Nothing will meaningfully improve until the rich fear for their lives
Yeah, and that happened, and they utilized the media to try and quickly bury it.
We know it can be done, it was done, it needs to happen again.
They already fear. What we’re seeing happen is the reaction to that fear.
Nothing will improve until the rich are no longer rich.
Americans: Deepseek AI is influenced by China. Look at its censorship.
Also Americans: don’t mention Critical Race Theory to AI.
Wow I just skimmed it. This is really stupid. Unconstitutional? Yeah. Evil? A bit. But more than anything this is just so fucking dumb. Like cringy dumb. This government couldn’t just be evil they had to be embarrassing too.
This is the administration that pushed a “budget” (money siphon) that they called the “Big Beautiful Bill”. That anyone thought that was a good name makes me embarrassed to be a human being.