• foggy@lemmy.world · 7 days ago

    A lot of Lemmy is very anti-AI. As an artist I’m very anti-AI. As a veteran developer I’m very pro-AI (with important caveats). I see its value; I see its threat.

    I know I’m not in good company when I talk about its value on Lemmy.

    • Natanox@discuss.tchncs.de (OP) · 7 days ago

      Completely with you on this one. It’s awful when used to generate “art”, but once you’ve learned its shortcomings and never blindly trust it, it’s a phenomenal help for learning, for assisting with code, or for finding something you’re struggling to put the right words to. And aside from generative use cases, neural networks are also phenomenally useful for assisting tasks in science, medicine and so on.

      It’s just unfortunate we’re still in the “find out” phase of the information age. It’s like industrialization ~200 years ago, just with data… and unfortunately the lessons seem to be equally rough. All the generative tech will deal painful blows to our culture.

      • JayDee@lemmy.sdf.org · 7 days ago

        That’s a view from the perspective of utility, yeah. The downvotes here are likely also from an ethics standpoint, since most LLMs are currently trained on other people’s work without permission, all while using large amounts of water for cooling and energy from our mostly coal-powered grid. That’s not even mentioning the physical and emotional labor that many untrained workers are required to do when sifting through the datasets of these LLMs, removing unsavory data for extremely low wages.

        A smaller, more specialized LLM could likely perform the same function with much less training, on a more exclusive data set (probably only a couple of terabytes at its largest, I’d wager), and would likely be small enough to run on most users’ computers after training. That’d be the more ethical version of this use case.
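
        Roughly what I mean, as an untested sketch: instead of pretraining something huge, you’d fine-tune a small open model on a domain-specific corpus with LoRA adapters. The model id, file names and hyperparameters below are placeholders, not a recipe.

        ```python
        # Untested sketch: LoRA fine-tuning of a small open model on a
        # domain-specific corpus. Model id, paths and hyperparameters
        # are placeholders.
        from datasets import load_dataset
        from peft import LoraConfig, get_peft_model
        from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                  DataCollatorForLanguageModeling, Trainer,
                                  TrainingArguments)

        base = "some-small-open-model"  # placeholder: any small causal LM
        tokenizer = AutoTokenizer.from_pretrained(base)
        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
        model = AutoModelForCausalLM.from_pretrained(base)

        # Train only small low-rank adapter matrices instead of all weights,
        # which keeps memory and compute needs modest.
        model = get_peft_model(model, LoraConfig(
            r=16, lora_alpha=32, task_type="CAUSAL_LM",
            target_modules=["q_proj", "v_proj"]))  # adjust to the model's layer names

        ds = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
        ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

        Trainer(
            model=model,
            args=TrainingArguments("out", per_device_train_batch_size=2,
                                   num_train_epochs=1, learning_rate=2e-4),
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        ).train()
        model.save_pretrained("out/adapter")  # only the adapter weights are saved
        ```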

        • Natanox@discuss.tchncs.de (OP) · 7 days ago

          True, those are awful problems. The whole internet is suffering because of this; I constantly read about fedi instances being literally DDoSed by crawlers that ignore robots.txt and circumvent IP blocks. Unfortunately there’s no way to prevent any of this right now with our current set of technologies… the best thing we could do is make it a state- or even UN-level affair, reducing the amount of simultaneous training and focusing on cooperation instead of competition while upholding strong workers’ rights. However, that would also be very anti-capitalistic and supposedly “stifle innovation” (as if that’s important compared to, idk, our world burning?), so it won’t happen either. Banning it completely is of course also impossible; humanity is still way too divided and its perceived benefits for “defense” (against ourselves) too high.

          In regards to running models locally, we’re currently seeing a new wave of chip designs that will enable this on increasingly reasonable devices. I’m getting a new laptop with an XDNA 2 chip and 32 GB of RAM today, on which I want to try running Codestral under Linux (the driver landed in mainline kernel 6.14). Technically it should be possible to run any ~30B model on those newer chips (potentially slightly quantized); I’ll definitely try it out a bit and probably write a thread about it in the openSUSE forums if you’re interested.
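
          Just as a rough, untested sketch of how simple the software side can be, something like this with llama-cpp-python, loading a quantized GGUF build. The model file name is a placeholder, and whether CPU, GPU or NPU actually does the work depends on how llama.cpp was built:

          ```python
          # Rough, untested sketch: local inference with llama-cpp-python.
          # The GGUF file name is a placeholder.
          from llama_cpp import Llama

          llm = Llama(
              model_path="codestral-22b-q4_k_m.gguf",  # placeholder quantized build
              n_ctx=8192,        # context window
              n_gpu_layers=-1,   # offload as many layers as the backend allows
          )

          out = llm.create_chat_completion(messages=[
              {"role": "user", "content": "Write a Python function that reverses a linked list."}
          ])
          print(out["choices"][0]["message"]["content"])
          ```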

    • JayDee@lemmy.sdf.org · 7 days ago

      I think it’s important to also use the more specific term here: LLM. We’ve been creating AI automation for ourselves for years; the difference now is that software vendors are adding LLMs to the mix.

      I’ve heard this argument before in other instances. Ghidra, for example, just had an LLM pipeline rigged up by LaurieWired to take care of the more tedious process of renaming functions during reverse engineering. It doesn’t replace the analysis work itself, it just takes out a large amount of busywork. I don’t know about the use case you described, but it sounds similar. It also seems feasible that you could train an AI system on your own machine (given you have enough reverse-engineered programs) and then run it locally to do this kind of work, which is a far cry from the disturbingly large LLMs that are guzzling massive amounts of data and energy to learn and run.
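
      As a very rough illustration of the idea (this is not LaurieWired’s actual pipeline), a Ghidra Python script could decompile each auto-named function, ask a locally hosted model for a better name, and rename it. The endpoint, model name, prompt and suggest_name helper below are made up, assuming an Ollama-style local API:

      ```python
      # Rough sketch only -- not LaurieWired's pipeline. Assumes Ghidra's
      # bundled Python (Jython 2.7) and a local model behind an
      # Ollama-style HTTP endpoint; endpoint, model and prompt are placeholders.
      import json
      import urllib2

      from ghidra.app.decompiler import DecompInterface
      from ghidra.program.model.symbol import SourceType

      def suggest_name(c_code):
          # Hypothetical helper: ask the local model for a snake_case name.
          req = urllib2.Request(
              "http://localhost:11434/api/generate",
              json.dumps({"model": "codestral",
                          "prompt": "Suggest one snake_case function name for:\n" + c_code,
                          "stream": False}),
              {"Content-Type": "application/json"})
          return json.loads(urllib2.urlopen(req).read())["response"].strip()

      decomp = DecompInterface()
      decomp.openProgram(currentProgram)  # currentProgram/monitor come from Ghidra's script environment

      for func in currentProgram.getFunctionManager().getFunctions(True):
          if not func.getName().startswith("FUN_"):   # only touch auto-named functions
              continue
          res = decomp.decompileFunction(func, 60, monitor)
          if res.decompileCompleted():
              name = suggest_name(res.getDecompiledFunction().getC())
              if name:
                  # real code would sanitize the name and handle duplicates
                  func.setName(name, SourceType.ANALYSIS)
      ```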

      EDIT: To be clear, because LaurieWired’s pipeline still relies on ordinary LLMs, which were trained unethically, using it is also unethical. It has the potential to be ethical, but currently isn’t.