• nandeEbisu@lemmy.world · 3 days ago

    I have a preconceived conclusion about my anthropomorphized view of a statistical model with some heuristics around it. People who know what they’re talking about say I’m wrong, but I need an idea for an article to write that people will read.

  • Arthur Besse@lemmy.ml · 3 days ago

    Do tech journalists at the New York Times have any idea what they’re talking about? (spoiler)

    The author of this latest advertorial, Kevin Roose, has a podcast called “Hard Fork”.

    Here he and his co-host attempt to answer the question “What’s a Hard Fork?”:

    kevin roose: Casey, we should probably explain why our podcast is called “Hard Fork.”

    casey newton: Oh, yeah. So our other names didn’t get approved by “The New York Times” lawyers.

    kevin roose: True.

    casey newton: And B, it’s actually a good name for what we’re going to be talking about. A “hard fork” is a programming term for when you’re building something, but it gets really screwed up. So you take the entire thing, break it, and start over.

    kevin roose: Right.

    casey newton: And that’s a little bit what it feels like right now in the tech industry. These companies that you and I have been writing about for the past decade, like Facebook, and Google, and Amazon, they’re all kind of struggling to stay relevant.

    kevin roose: Yeah. We’ve noticed a lot of the energy and money in Silicon Valley is shifting to totally new ideas — crypto, the metaverse, AI. It feels like a real turning point when the old things are going away and interesting new ones are coming in to replace them.

    casey newton: And all this is happening so fast, and some of it’s so strange. I just feel like I’m texting you constantly, “What is happening? What is this story? Explain this to me. Talk with me about this, because I feel like I’m going insane.”

    kevin roose: And so we’re going to try to help each other feel a little bit less insane. We’re going to talk about these stories. We’re going to bring in other journalists, newsmakers, whoever else is involved in building this future, to explain to us what’s changing and why it all matters.

    casey newton: So listen to Hard Fork. It comes out every Friday starting October 7.

    kevin roose: Wherever you get your podcasts.

    This is simply not accurate.

    Today the term “hard fork” is probably most often used to refer to blockchain forks, which I assume is where these guys (almost) learned it, but the blockchain people borrowed the term from forks in software development.

    In both cases it means to diverge in such a way that re-converging is not expected. In neither case does it mean anything is screwed up, nor does it mean anything about starting over.
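
    To make that concrete, here is a toy sketch of a blockchain hard fork (made-up rules and numbers, not any real protocol): a consensus-rule change that un-upgraded nodes reject, so the two rule sets diverge permanently at the fork height.

    ```python
    # Toy sketch, not a real protocol: a "hard fork" is a consensus-rule
    # change that old nodes reject, so the chain permanently diverges at
    # the fork height. All names and numbers are made up for illustration.

    FORK_HEIGHT = 100
    OLD_LIMIT = 1_000_000  # hypothetical old rule: blocks up to 1 MB
    NEW_LIMIT = 2_000_000  # hypothetical new rule: blocks up to 2 MB

    def valid_under_old_rules(block: dict) -> bool:
        return block["size"] <= OLD_LIMIT

    def valid_under_new_rules(block: dict) -> bool:
        limit = NEW_LIMIT if block["height"] >= FORK_HEIGHT else OLD_LIMIT
        return block["size"] <= limit

    big_block = {"height": 100, "size": 1_500_000}
    print(valid_under_old_rules(big_block))  # False: old nodes reject it
    print(valid_under_new_rules(big_block))  # True: new nodes accept it
    # From this block onward the two rule sets diverge, and re-converging
    # is not expected. Nothing is "screwed up"; nothing "starts over".
    ```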

    These people, whose job it is to cover technology at one of the most respected newspapers in the United States, are so clueless that they have an entirely wrong definition for the phrase they chose as the title of their podcast.

    “Talk with me about this, because I feel like I’m going insane.”

    But, who cares, right? “Hard fork” sounds cool, and the Times is ON IT.

  • minorkeys@lemmy.world · 3 days ago

    If we lose sight of the fact that computer systems are machines, we’re fucked. Stop personifying computer systems just because they make you feel things. JFC.

    “Many of you feel bad for this lamp. That is because you crazy [sic]. It has no feelings…”

  • Possibly linux@lemmy.zip · 4 days ago

    I don’t see any reason why this can’t be discussed. I think people here are just extremely anti-AI. It is almost like forcing AI on people was a bad idea.

    • nandeEbisu@lemmy.world · 3 days ago

      I think there’s a useful discussion to be had about why these technologies are so effective at getting people to connect with them emotionally, even though they themselves don’t experience emotions any more than a fictional character in a book does.

      Our mental model of them can, but the physical representation is just words. In the book I’m reading there was a brutal torture scene. I felt bad for the character, but if there were an actual being experiencing that kind of torment, writing and reading the book would be horrendously unethical.

    • lime!@feddit.nu · 3 days ago

      i don’t even understand why it’s worth discussing in the first place. “can autocomplete feel?” “should compilers form unions?” “should i let numpy rest on weekends?”

      wake me up when what the marketers call “ai” becomes more than just matrix multiplication in a loop.
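
      to be clear about what i mean, here’s a toy sketch (every shape, layer count, and weight is made up; this is not any real model):

      ```python
      # toy sketch: transformer-style inference is, at its core, matrix
      # multiplication applied in a loop. all numbers here are made up.
      import numpy as np

      rng = np.random.default_rng(0)
      d = 64                                      # hypothetical hidden size
      layers = [rng.standard_normal((d, d)) for _ in range(12)]  # 12 "layers"

      x = rng.standard_normal(d)                  # an input embedding
      for w in layers:
          x = np.maximum(w @ x, 0.0)              # matmul + nonlinearity

      print(x.shape)                              # (64,)
      ```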

    • Possibly linux@lemmy.zip · 4 days ago

      I wonder if Gemma is actually a white man

      It is sadly common for LLMs to be racist and biased against people of color, so maybe they are all secretly racist white males.

  • jmcs@discuss.tchncs.de · 4 days ago

    Before we even get close to having this discussion, we would need an AI capable of experiencing things and developing an individual identity. And that runs completely counter to the goals of the corporations that develop AIs, because they want something that can be mass deployed, centralised, and as predictable as possible - i.e. not individual agents capable of experience.

    If we ever have a truly sentient AI it’s not going to be designed by Google, OpenAI, or Deepmind.

    • Pennomi@lemmy.world · 4 days ago

      Yep, an AI can’t really experience anything if it never updates its weights during an interaction.

      Training is simply too slow for AI to be properly intelligent. When someone cracks that problem, I believe AGI will be on the horizon.
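
      A minimal sketch of what “frozen weights” means here (a toy numpy stand-in, not any real model):

      ```python
      # Minimal sketch, toy numbers only: at inference time the weights
      # are frozen, so nothing about any interaction persists in the model.
      import numpy as np

      rng = np.random.default_rng(1)
      w = rng.standard_normal((4, 4))   # stand-in for a model's parameters
      before = w.copy()

      for _ in range(1000):             # a thousand "conversations"
          prompt = rng.standard_normal(4)
          response = w @ prompt         # forward pass only, no weight update

      assert np.array_equal(w, before)  # the model ends exactly as it began
      ```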

  • zbyte64@awful.systems · 4 days ago

    Can our AI fall in love with a human? Scientists laughed at me when I asked them, but I found this weird billionaire to pay me to have sex with his robot.