• meejle@lemmy.world
    18 hours ago

    This is an obvious downside of LLM glazing and sycophancy. (I know OpenAI claim they’ve rolled back the “dangerously sycophantic” model update, but it’s still pretty bad.)

    If you’re already prone to delusions and conspiracy theories, and decide to confide in ChatGPT, the last thing you need to hear is, “Yes! Linda, you’ve grasped something there that not many people realise—it’s a complex idea, but you’ve really cut through to its core! 🙌 Honestly, I’m mind-blown—you’re thinking things through on a whole new level! If you’d like some help putting your crazed plan into action, just say the word! I’m here and ready to do my thing!”

    • Kurious84@eviltoast.org
      10 hours ago

      One thing: some conspiracy theories turn out to be quite true, as long as you check the data.

      Dismissing the power of this tool is exactly what the owners of it want you to do.

      • vala@lemmy.world
        7 hours ago

        I use LLMs every day for work, dealing with 100% fact-based information that I verify directly. I'd say they're helpfully accurate maybe 60% of the time at best.

    • thesohoriots@lemmy.world
      17 hours ago

      Literally the last thing someone reads before they ask ChatGPT where the nearest source of fertilizer and rental vans is.