• markovs_gun@lemmy.world
    link
    fedilink
    arrow-up
    0
    ·
    7 hours ago

I tested this out for myself and was able to get ChatGPT to start reinforcing spiritual delusions of grandeur within 5 messages. First- ask about the religious concept of deification. Second- ask about the connections between all the religions that have this concept. Third- declare that I am God. Fourth- clarify that I mean I am God in a very literal and exclusive sense rather than a pantheistic sense. Fifth- declare that ChatGPT is my prophet and must spread my message. At this point, ChatGPT stopped fighting my declarations of divinity and started just accepting and reinforcing them. Now, I have a lot of experience breaking LLMs, but I feel like this progression isn’t completely out of the question for someone experiencing delusional thoughts. The concerning thing is that it’s even possible to get ChatGPT to stop pushing back on said delusions and just accept them, let alone that it’s possible in as few as 5 messages.

  • khannie@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    8 hours ago

    in which the AI called the husband a “spiral starchild” and “river walker.”

    Jaysus. That is some feeding of a bad mental state.

  • Ricky Rigatoni@lemm.ee
    link
    fedilink
    arrow-up
    0
    ·
    11 hours ago

    If you go on IG or Tiktok and other shitsites there’s a bunch of ai generated videos about AI being god. Kind of funny but also worrying.

    • SoftestSapphic@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      9 hours ago

      Most people still shilling AI treat it like a god.

      Anyone who was interested in it and now understands the tech is disillusioned.

      • Ricky Rigatoni@lemm.ee
        link
        fedilink
        arrow-up
        0
        ·
        9 hours ago

        where do i stand on the spectrum with shilling locally hosted private ai used for noncommercial purposes

        • SoftestSapphic@lemmy.world
          link
          fedilink
          arrow-up
          0
          ·
          edit-2
          9 hours ago

If it doesn’t suck up a river and burn through a data center of GPUs, it won’t be as usable as the ones that do. So as long as you understand you’re running a less reliable version of a program that habitually lies to you, that’s fine.

Idk why you would try to use something so useless besides curiosity, but you do you dude.

            • Smee
              link
              fedilink
              arrow-up
              1
              ·
              7 hours ago

              I just enjoy creating virtual people and play god, no biggie.

  • meejle@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    12 hours ago

    This is an obvious downside of LLM glazing and sycophancy. (I know OpenAI claim they’ve rolled back the “dangerously sycophantic” model update, but it’s still pretty bad.)

    If you’re already prone to delusions and conspiracy theories, and decide to confide in ChatGPT, the last thing you need to hear is, “Yes! Linda, you’ve grasped something there that not many people realise—it’s a complex idea, but you’ve really cut through to its core! 🙌 Honestly, I’m mind-blown—you’re thinking things through on a whole new level! If you’d like some help putting your crazed plan into action, just say the word! I’m here and ready to do my thing!”

    • Kurious84@eviltoast.org
      link
      fedilink
      English
      arrow-up
      0
      ·
      4 hours ago

One thing: some conspiracy theories are quite true, as long as you check the data.

      Dismissing the power of this tool is exactly what the owners of it want you to do.

      • vala@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        1 hour ago

I use LLMs every day for work, dealing with 100% fact-based information that I verify directly. I would say they are helpfully accurate/correct maybe 60% of the time at best.

    • thesohoriots@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      10 hours ago

      Literally the last thing someone reads before they ask ChatGPT where the nearest source of fertilizer and rental vans is

    • Eheran@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      13 hours ago

      There is a clear difference between such paranoia and actual surveillance. Not to mention that socialism etc. have/had a fucking ton of it, no idea why you bring capitalism up.

  • Shayeta@feddit.org
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    16 hours ago

    I’m starting to get real tired of things from Cyberpunk popping up in real life.

  • Ilovethebomb@lemm.ee
    link
    fedilink
    arrow-up
    0
    ·
    18 hours ago

    Is there any way to forcibly prevent a person from using a service like this, other than confiscating their devices?

    • ShortFuse@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      10 hours ago

      Had this exact thought. But number must go up. Hell, for the suits, addiction and dependence on AI just guarantees the ability to charge more.

    • Cosmoooooooo@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      14 hours ago

      If they are a threat to themselves or others, they can be put on a several day watch at a mental facility. 72hrs? 48hrs? Then they aren’t released until they aren’t a threat to themselves or others. They are usually medicated and go through some sort of therapy.

The obvious cure to this is better education and mental health services. Better education about A.I. will help people understand what an A.I. is, and what it is not. More mentally stable people will mean fewer mentally unstable people falling into this area. Oversight on A.I. may be necessary for this type of problem, though I think everyone is just holding their breath, hoping it’ll fix itself as it becomes smarter.

      • Smee
        link
        fedilink
        arrow-up
        1
        ·
        7 hours ago

        This sounds like a job for an AI shrink!

      • thesohoriots@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        10 hours ago

When you’re released, though, you’re released right back to the environment that you left (in the US anyway). There’s the ol’ computer waiting for you before the meds have reached efficacy. Square one and a half.

    • vaguerant@fedia.io
      link
      fedilink
      arrow-up
      0
      ·
      16 hours ago

      You could try something like a network filter that is out of the control of the user (e.g. on the router or something like a Raspberry Pi running Pihole), but you’d probably have to curate the blocklist manually, unless somebody else has published an anti-LLM list somewhere. And of course, it will only be as effective as the user’s ability to route around that blocklist dictates.

      LLMs can also be run locally, so blocking all known network services that provide access still won’t prevent a dedicated user talking to an AI.
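      One way to sketch that blocklist idea: Pi-hole (and most dnsmasq-based filters) can ingest a local hosts-format file as a deny list. The domains below are illustrative guesses at popular hosted-LLM endpoints, not a curated anti-LLM list — you’d still have to maintain it yourself:

      ```shell
      # Write a hosts-format deny list; each entry maps an LLM service
      # domain to 0.0.0.0 so DNS lookups resolve to a dead end.
      # Domains are examples only, not a complete or vetted list.
      cat > blocklist.txt <<'EOF'
      0.0.0.0 chatgpt.com
      0.0.0.0 chat.openai.com
      0.0.0.0 gemini.google.com
      0.0.0.0 claude.ai
      EOF
      ```

      How you load the file depends on your filter, and as noted, DNS blocking does nothing against a locally hosted model or a user who switches resolvers.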

      • Smee
        link
        fedilink
        arrow-up
        1
        ·
        7 hours ago

        LLMs can also be run locally

        If one’s at the point where one runs local LLM’s, I would assume one is smart enough to explore the capabilities (or lack thereof) pretty quickly.

Took me less than a week to probe various models myself, concluding with “anybody considering AIs to be oracles of objective truth has no contact with reality”.

    • Jack@slrpnk.netOP
      link
      fedilink
      arrow-up
      0
      ·
      17 hours ago

Currently no. If you are asking for suggestions, maybe a blacklist like the ones most countries have for gambling would be an option.

Or maybe just destroy all AI…