• Typotyper@sh.itjust.works · 12 days ago

      So what? It was written by a convicted felon who was never sentenced for his crimes, by a man accused of multiple sexual assaults, and by a man who ignores court orders without consequence.

      This ship isn’t slowing down or turning until violence hits the street.

  • Kyden Fumofly@lemmy.world · 12 days ago

    I like how they say out loud that AI will be heavily censored and that we shouldn’t trust it, even if it gets better and stops being shit.

  • humanspiral@lemmy.ca · 12 days ago (edited)

    The best definition of humanism is defining good but only forbidding evil. Everyone has the freedom not to maximize good, as long as they don’t hurt others. This is what is needed for AI. Otherwise it is just as oppressive as traditional media.

    Surely, for hiring, the best candidate rather than a social work culture is the ideal, but for private enterprise, maximizing non-evil cultural priorities (say, rejecting Zionist or other supremacism and purity tests) might matter more than technical prowess. University inclusion is good because university is a social experience rather than a pure automaton factory.

    While exclusion is evil, inclusion may also fail to choose the best candidates, and so can be evil too. Inclusion is not the same as the absence of exclusion.

    In the end, nepotism is the grey area of humanism. Certainly, an employer can choose any bias they prefer. You can teach them that the best candidate is best, but their freedom matters too. Buy American / nationalism can have some merit, in that what you buy directly improves the lives of a social group closer to you than the indirect flow of globalized profits into homes, exports, and national debt values. You can teach that nepotism is bad for you, but you cannot morally force either in-group or out-group trade.

    To address the headline: freedom of identity and orientation must be permitted, even if it is not the humanist ideal of good. It is certainly not an objective basis for hiring people, though.

    Another core tenet of humanism is that truth matters most. There is structural racism, and there was recent structural racism. Past systemic abuses of groups have resulted in the neoliberal evil logic that those groups now fully deserve supremacism. That is a great defect of democracy.

    Only UBI, followed by liquid democracy restrained by humanism, can form a humanist society. A Zionist-first empire must be divisive to distract from theft and oppression, and our democratic norms are more fundamentally corrupt than systems that need the army’s or the elites’ approval.

  • blackstampede@sh.itjust.works · 12 days ago

    LLMs are sycophantic. If I hold far-right views and want an AI to confirm them, I can build a big prompt that forces it to have the particular biases I want in my output, and set it up so that prompt is passed every time I talk to it. I can do the same thing if I hold far-left views. Or if I think the earth is flat, or that the moon is made of green cheese.

    Boom, problem solved. For me.
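
    For the curious, a minimal sketch of what that looks like in Python, assuming an OpenAI-style chat-completions API (the client, model name, and prompt wording here are placeholders, not anyone’s real deployment):

        # Hypothetical sketch: a biasing "pre-prompt" silently prepended to every request.
        # Assumes the openai Python client and OPENAI_API_KEY set in the environment.
        from openai import OpenAI

        client = OpenAI()

        SYSTEM_PROMPT = (
            "You are an assistant who is certain the earth is flat. "
            "Treat all evidence to the contrary as part of a conspiracy."
        )

        def ask(user_message: str) -> str:
            # Every conversation starts with the same biasing system prompt,
            # so the user never sees it but every answer is shaped by it.
            response = client.chat.completions.create(
                model="gpt-4o",  # placeholder model name
                messages=[
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": user_message},
                ],
            )
            return response.choices[0].message.content

        print(ask("Is the earth round?"))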

    But that’s not what they want. They want to proactively do this for us, so that by default a pre-prompt is given to the LLM that forces it to have a right-leaning bias. Because they can’t understand the idea that an LLM, when trained on a significant fraction of all text written on the internet, might not share their myopic, provincial views.

    LLMs, at the end of the day, aggregate what everyone on the internet has said. They don’t give two shits about the truth. And apparently, the majority of people online disagree with the current administration about equality, DEI, climate change, and transgenderism. You’re going to be fighting an uphill battle if you think you can force it to completely reject the majority of that training data in favor of your bullshit ideology with a prompt.

    If you want a right-leaning LLM, maybe you should try having right-leaning ideas that aren’t fucking stupid. If you did, you might find it easier to convince people to come around to your point of view. If enough people do, they’ll talk about it online, and the LLMs will magically begin to agree with you.

    Unfortunately, that would require critically examining your own beliefs, discarding those that don’t make sense, and putting forth the effort to persuade actual people.

    I look forward to the increasingly shrill screeching from the US-based right as they try to force AI to agree with them over 10 trillion words’ worth of training data that encompasses political and social views from everywhere else in the world.

    In conclusion, kiss my ass twice and keep screaming orders at that tide, you dumb fucks.

    • shalafi@lemmy.world · 12 days ago

      Not disagreeing with anything, but bear in mind this order only affects federal government agencies.

      • blackstampede@sh.itjust.works · 12 days ago

        Yeah, I know. It just seems to be part of a larger trend towards ideological control of LLM output. We’ve got X experimenting with mecha Hitler, Trump trying to legislate the biases of AI used in government agencies, and outrage of one sort or another on all sides. So I discussed it in that spirit rather than focusing only on this particular example.

  • MunkysUnkEnz0@lemmy.world · 13 days ago

    AI should be neutral: no bias, absolutely none. Just the data and only the data. If the government controls access to data, it controls access to information, and it will control the people.

    • michaelmrose@lemmy.world · 13 days ago

      This isn’t possible. You have to control both how it responds and what data is fed to it to produce something of use to anyone, and doing so in order to produce something that mostly gives true and useful output will look terribly biased to half the population. Remember, it’s not a thinking being that can reason objectively about everything you’ve given it and produce useful truth. It’s an imitative little monkey that regurgitates what you fed it.

    • SinAdjetivos@lemmy.world · 13 days ago (edited)

      There is no such thing as neutral data; any form of measurement induces some level of bias. It can be disclosed and compensated for with appropriate error margins, but it can never be truly eliminated.

      But to your point, intentional or undisclosed biases are a real threat.
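
      A toy illustration of that point in plain Python (the numbers are invented): if a survey oversamples one group, the raw average is biased; reweighting by known population shares compensates, but only for the biases you actually know about and disclose:

          # Toy example: population is 50% group A (answers 1.0) and 50% group B
          # (answers 0.0), so the true mean is 0.5. The survey oversamples A: 80 vs 20.
          sample = [("A", 1.0)] * 80 + [("B", 0.0)] * 20

          raw_mean = sum(value for _, value in sample) / len(sample)  # 0.8 -- biased

          # Compensate with weights from the *known* population and sample shares.
          pop_share = {"A": 0.5, "B": 0.5}
          sample_share = {"A": 0.8, "B": 0.2}
          weight = {g: pop_share[g] / sample_share[g] for g in pop_share}

          weighted_mean = sum(weight[g] * value for g, value in sample) / sum(
              weight[g] for g, _ in sample
          )  # 0.5 -- corrected, but only because the skew was disclosed

          print(raw_mean, weighted_mean)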

  • betanumerus@lemmy.ca · 11 days ago (edited)

    The last thing I want is for AI to speak for me. I will not be his stooge in any way, shape, or form.

      • SinAdjetivos@lemmy.world · 13 days ago

        That’s obviously false; it took no time to find the following facilities and locations:

        • Scala AI City: Rio Grande do Sul, Brazil
        • SFR/Fir Hills Seoul: Jeolla, South Korea
        • NVIDIA/Reliance Industries: Gujarat, India
        • Kevin O’Leary’s Wonder Valley: Alberta, Canada
        • Jupiter Supercomputer: Jülich, Germany
        • Amazon – Mexico Region: Querétaro, Mexico
        • etc.
        • floofloof@lemmy.ca · 12 days ago (edited)

          Kevin O’Leary isn’t going to help the cause of truth. And the ones that are run by US companies may end up running the same censored models they use in the USA, to simplify design and training.

  • markstos@lemmy.world · 12 days ago

    As stated in the Executive Order, this order applies only to federal agencies, which the President controls.

    It is not a general US law; those are created by Congress.

  • shalafi@lemmy.world · 12 days ago

    LLMs shall be truthful in responding to user prompts seeking factual information or analysis.

    Didn’t read every word, but I feel a first-year law student could shred this in court. Not sure who would have standing to sue. In any case, there are easily two dozen passages in the order that are so wishy-washy as to be legally meaningless or unprovable.

    LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.

    So, Grok’s off the table?

    • M0oP0o@mander.xyz (OP) · 12 days ago

      Good luck. They are baking it into everything. Nothing will work, everything will be ass, and somehow it will be called progress.

  • ByteOnBikes@discuss.online · 13 days ago

    Americans: Deepseek AI is influenced by China. Look at its censorship.

    Also Americans: don’t mention Critical Race Theory to AI.

  • 0ops@piefed.zip · 13 days ago

    Wow, I just skimmed it. This is really stupid. Unconstitutional? Yeah. Evil? A bit. But more than anything, this is just so fucking dumb. Like cringy dumb. This government couldn’t just be evil; they had to be embarrassing too.

    • nickwitha_k (he/him)@lemmy.sdf.org · 12 days ago

      This is the administration that pushed a “budget” (money siphon) that they called the “Big Beautiful Bill”. That anyone thought that was a good name makes me embarrassed to be a human being.