• jorm1s@sopuli.xyz · 2 days ago

    Isn’t writing tests with AI kind of a bad idea? The whole point of writing separate tests is that you hopefully won’t make the same mistake twice, so the tests catch any behaviour in the code that doesn’t match your intent. But if you have an LLM write a test using that same code as context (instead of the original intent you would use yourself), there’s a real risk it’ll just write a test case that confirms the code’s wrong behaviour.

    Okay, it might still be fine for regression testing, but you’re missing most of the benefit you’d get from writing the tests manually. Unless you only care about closing tickets, that is.
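
    To make the failure mode concrete, here’s a minimal sketch (hypothetical function and test names): the implementation has a bug, and a test generated purely from that code as context just asserts the buggy output, so the suite stays green.

    ```python
    # Buggy implementation: forgets the 400-year rule of the Gregorian calendar.
    def is_leap_year(year: int) -> bool:
        return year % 4 == 0 and year % 100 != 0  # bug: 2000 is wrongly rejected


    # A test generated from the code above (not from the spec) mirrors the code,
    # so it "locks in" the wrong behaviour and passes.
    def test_is_leap_year():
        assert is_leap_year(2024) is True
        assert is_leap_year(1900) is False
        assert is_leap_year(2000) is False  # wrong per the calendar rules, but matches the code


    # A test written from intent would expect is_leap_year(2000) is True and catch the bug.
    ```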

    • Grazed@lemmy.world · 2 days ago

      “Unless you only care about closing tickets, that is.”

      Perfect. I’ll use it for tests at work then.

    • Emily (she/her)@lemmy.blahaj.zone · 2 days ago

      I’ve used it most extensively for non-professional projects, where if I weren’t using this kind of tooling the tests simply wouldn’t get written. That means no tickets to close either. That said, I’m aware that the AI is almost always, at best, testing for regressions (I have had it correctly realise my logic was wrong and write tests that caught it, but that is by no means reliable). Part of the “hand holding” I mentioned involves making sure it has sufficient coverage of use cases and edge cases, and that what it expects to be correct is actually correct according to the original intent.

      I essentially use the AI to generate a variety of scenarios and complementary test data, then evaluate their validity myself and expand from there.
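
      As a rough sketch of that workflow (hypothetical module and function names): the AI proposes a parametrised scenario table, and I check every expected value against the original intent rather than against the current implementation before committing it.

      ```python
      import pytest

      from pricing import apply_discount  # hypothetical module under test

      # AI-proposed scenarios and edge cases; each "expected" value was verified
      # (and corrected where needed) by hand against the spec, not against the code.
      CASES = [
          (100.00, 0.10, 90.00),   # ordinary discount
          (100.00, 0.00, 100.00),  # no-discount edge case
          (0.00,   0.25, 0.00),    # zero-price edge case
          (100.00, 1.00, 0.00),    # full-discount boundary
      ]

      @pytest.mark.parametrize("price, rate, expected", CASES)
      def test_apply_discount(price, rate, expected):
          assert apply_discount(price, rate) == pytest.approx(expected)
      ```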