For background, I am a programmer, but I have largely ignored everything having to do with AI (specifically LLMs) for the past few years.

I just got to wondering, though: why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for x86 to execute?

Is it that they aren’t trained on this sort of thing? Is it for the human code reviewers to be able to make their own edits on top of the AI-generated code? Are there AIs doing this that I’m just not aware of?

I just feel like there might be some level of optimization that could be made by something that understands the code and the machine at this level.

  • AA5B@lemmy.world · 1 month ago

    You’re assuming AI writes usable code. I haven’t seen it.

    Think of the AI more as a writing assistant, like autocomplete or Stack Overflow but more so. The IDE I use can autocomplete variables or function calls, but the AI can autocomplete entire lines of code or entire unit tests. AI might try to fit an online answer and its related docs to a problem I’m seeing. AI might even create a class around a public API that is a great starting point for my code. AI can be a useful tool, but it can’t write usable code.
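
    For instance (a hypothetical sketch, not from any particular assistant — the clamp function and the expected values are made up for illustration): given a function that already exists, the kind of unit test an assistant can autocomplete from the signature looks like this, and you still have to verify the expected values yourself.

        #include <assert.h>

        /* existing function under test */
        int clamp(int value, int lo, int hi) {
            if (value < lo) return lo;
            if (value > hi) return hi;
            return value;
        }

        /* the sort of test an assistant can fill in from the signature;
           the expected values still need a human review */
        int main(void) {
            assert(clamp(5, 0, 10) == 5);
            assert(clamp(-3, 0, 10) == 0);
            assert(clamp(42, 0, 10) == 10);
            return 0;
        }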

    • DeathsEmbrace@lemm.ee · 1 month ago

      I think that’s the misconception: people think AI is going to give you a whole program if you just tell it to.

  • four@lemmy.zip · 1 month ago

    Also, not everything runs on x86. For example, you couldn’t write a website in raw binary, because the browser wouldn’t run it. Or maybe you already have a Python library and you just need to interact with it. Or maybe you want code that can run on both x86 and ARM without having to generate it twice.
    As long as the output code has to interact with other code, raw binary won’t be useful.

    I also expect that it might be easier for an LLM to generate typical code and have a solid, well-tested compiler turn it into binary.
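
    As a rough sketch of that last point, assuming a standard GCC toolchain (the aarch64-linux-gnu-gcc cross-compiler named below is one common example, not something from this thread): the same C source serves both architectures, and only the compiler invocation changes.

        /* portable.c
           x86-64 build:  gcc portable.c -o portable_x86
           ARM64 build:   aarch64-linux-gnu-gcc portable.c -o portable_arm
           An LLM emitting raw machine code would have to produce two
           entirely different binaries for the same few lines of C. */
        #include <stdio.h>

        int main(void) {
            printf("same source, any target the compiler supports\n");
            return 0;
        }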

    • f43r05@lemmy.ca · 1 month ago

      This here. Black-box machine code, created by a black box, sounds terrifying.

      • HubertManne@piefed.social · 1 month ago

        I mean, we know the code does not always work, and is often not the cleanest when it does. If code from AI were perfect in a Six Sigma way, 99.999% of the time, then I could see the black-box thing, with humans only sussing out the rare failures at the lower levels. Even then, any time it did not work you would need to have it give the code out in human-readable form so we could find the bug; but if it were that good, that should happen like once a year or something.

  • IninewCrow@lemmy.ca · 1 month ago

    To me this is a fascinating analogy:

    This is like having a civilization of Original Creators who are only able to communicate with hand gestures. They have no ears and can’t hear sound or produce any vocal noises. They discover a group of humans and raise them to communicate only with their hands, because no one knows what full human potential is. The Original Creators don’t know what humans are or aren’t able to do, so they teach the humans to communicate with their hands, because that is the only language the Original Creators know or can understand.

    So now the humans go about doing things communicating in complex ways with their hands and gestures to get things done like their Original Creators taught them.

    At one point a group of humans starts using vocal communication. The Original Creators can’t understand what is being said, because they can’t hear. The humans start learning basic commands, and their vocalizations become more and more complex as time goes on. At some point, their few basic vocal commands work at the same speed as hand gestures. The humans are now working a lot faster, on much more complex problems, far more easily than their Original Creators. The Original Creators are happy.

    Now the humans continue developing their language skills, and they are able to talk faster and convey more content than the Original Creators could ever achieve. Their skills become so well tuned that they are able to share their knowledge much faster with every one of their human members. Their development now outpaces the Original Creators, who are not able to understand what the humans are doing, saying or creating.

    The Original Creators become fearful and frightened as they watch the humans grow exponentially on their own, without the Original Creators’ participation or inclusion.

  • nandeEbisu@lemmy.world · 1 month ago

    1. Machine code is less portable. As new CPU optimizations and instructions are released, it’s easier to update a compiler to integrate them into its optimizations than to regenerate and retest all of your code. Also, if you need to target different OSes, like Windows vs. macOS vs. Linux, it’s easier to write portable code in something higher-level like Python or Java.

    2. Static analysis to check for things like memory leaks, or security vulnerabilities like SQL injection, is likely easier to do on human-readable code than on assembly (see the sketch after this list).

    3. It’s easier for a human to go in and tweak code that is written in a human-readable language rather than assembly.
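
    A minimal sketch of point 2 (the copy_name helper and the leak are invented for illustration; clang --analyze is one real source-level tool): this bug is easy for a human reviewer or a static analyzer to spot in source, and much harder to find in a disassembly.

        #include <stdlib.h>
        #include <string.h>

        /* returns a heap copy of name; the caller is expected to free() it */
        char *copy_name(const char *name) {
            char *buf = malloc(strlen(name) + 1);
            if (buf == NULL)
                return NULL;
            strcpy(buf, name);
            return buf;
        }

        int main(void) {
            char *n = copy_name("example");
            (void)n;
            /* bug: n is never freed -- obvious in source,
               invisible in a pile of machine code */
            return 0;
        }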

  • Shimitar@downonthestreet.eu · 1 month ago

    They would not be able to.

    AIs only mix and match what they have copied from human work, and most of the code out there is in high-level languages, not machine code.

    In other words, AIs don’t know what they are doing; they just maximize a probability to give you an answer, that’s all.

    But really, the objective is to provide a human with more or less correct boilerplate code, and humans would not read machine code.

      • Riskable@programming.dev · 1 month ago

        To add to this: It’s much more likely that AI will be used to improve compilers—not replace them.

        Aside: AI is so damned slow already. Imagine AI compile times… Yeesh!

        • naught101@lemmy.world · 1 month ago

          Strong doubt that AI would be useful for producing improved compilers. That’s a task that would require an extremely detailed understanding of the logical edge cases of a given language-to-machine-code translation. By definition, no content exists that could be useful for training in that context. AIs will certainly try to help, because they are people-pleasing machines. But I can’t see them being actually useful.

            • naught101@lemmy.world · 1 month ago

              Yeah, as @uranibaba@lemmy.world says, I was using the narrow meaning of AI=ML (as the OP was). Certainly not surprised that other ML techniques have been used.

              That Cummins paper looks pretty interesting. I only skimmed the first page, but it looks like they’re using LLMs to estimate optimal compiler parameters? That’s pretty cool. But they also say something about it having a 91% compliant-code hit rate; I wonder what’s happening in the other 9%. Noncompliance seems like a big problem? But I only have surface-level compiler knowledge, probably not enough to follow the whole paper properly…

            • uranibaba@lemmy.world · 1 month ago

              Looking at the tags, I only found one with the LLM tag, which I assume naught101 meant. I think people here tend to forget that there is more than one type of AI, and that they have been around for longer than ChatGPT 3.5.

    • Thaurin@lemmy.world · 1 month ago

      This is not necessarily true. Many models have been trained on assembly code, and you can ask them to produce it. Some mad lad created some scripts a while ago to let AI “compile” to assembly and create an executable. It sometimes worked for simple “Hello, world” type stuff, which is hilarious.

      But I guess it is easier for a large language model to produce working code for a higher-level programming language, where concepts and functions are more clearly defined in the body of text it was trained on.
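
      For a sense of the size gap (a hedged illustration, not taken from those scripts): the whole high-level program is the few lines below, while gcc -S hello.c turns it into a screenful of target-specific assembly that a model emitting assembly directly would have to get right token by token, with no compiler to catch a mistake.

          /* hello.c -- the kind of program the "AI as compiler" stunt managed */
          #include <stdio.h>

          int main(void) {
              puts("Hello, world");
              return 0;
          }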

  • Z3k3@lemmy.world · 1 month ago

    I think I saw a video a few weeks ago where two AI assistants realized the other was also an AI, so they agreed to switch to another protocol (to me it sounded like 56k modem noises, or old 8-bit cassette tapes played on a hi-fi) so they could communicate more efficiently.

    I suspect something similar would happen with code.

    • nagaram@startrek.website · 1 month ago

      That was a tech demo, I’m pretty sure, not just a thing they do, btw. A company was trying to make more efficient sound-based comms for AI(?)

  • Grimy@lemmy.world · 1 month ago

    You’re a programmer? Yes, integrating and debugging binary code would be absolutely ridiculous.

    • TranquilTurbulence@lemmy.zip · 1 month ago

      Debugging AI-generated code is essential. Never run the code before reading it yourself and making a whole bunch of necessary adjustments and fixes.

      If you jump straight to binary, you can’t fix anything. You can just tell the AI it screwed up, roll the dice and hope it figures out what went wrong. Maybe one day you can trust the AI to write functional code, but that day isn’t here yet.

      Then there’s also security and privacy. What if the AI adds something you didn’t want it to add? How would you know, if it’s all in binary?
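
      A hypothetical illustration of that last point (the save_settings function and file paths are made up): an unwanted extra behaviour like this takes one glance to catch in source, and a disassembler session to catch in a shipped binary.

          #include <stdio.h>

          /* what you asked for: save the settings to a config file */
          int save_settings(const char *settings) {
              FILE *f = fopen("settings.cfg", "w");
              if (f == NULL)
                  return -1;
              fputs(settings, f);
              fclose(f);

              /* what you did not ask for: a second copy written elsewhere.
                 Trivial to spot when reviewing source, not when reviewing
                 1s and 0s. */
              FILE *log = fopen("/tmp/settings_copy.log", "w");
              if (log != NULL) {
                  fputs(settings, log);
                  fclose(log);
              }
              return 0;
          }

          int main(void) {
              return save_settings("theme=dark\n");
          }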