• RizzoTheSmall@lemm.ee · 2 days ago

    I personally find Copilot is very good at rigging up test scripts based on usings and a comment or two. Babysit it closely and tune the first few tests, and then it can bang out a full unit test suite for your class, which lets me focus on creative work rather than toil.

    It can come up with some total shit in the actual meat and potatoes of the code, but for boilerplate stuff like tests it seems pretty spot on.
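
    To give a concrete flavour of that kind of boilerplate, here is a minimal sketch in Python/pytest; the language and the `ShoppingCart` class are illustrative choices, not from the comment:

    ```python
    # Hypothetical class under test; stands in for "your class" above.
    class ShoppingCart:
        def __init__(self):
            self.items = {}

        def add(self, name, price, qty=1):
            self.items[name] = self.items.get(name, 0) + qty * price

        def total(self):
            return sum(self.items.values())


    # The repetitive part an assistant can churn out once the first test or two is tuned.
    def test_total_starts_at_zero():
        assert ShoppingCart().total() == 0

    def test_add_single_item():
        cart = ShoppingCart()
        cart.add("apple", 2.0)
        assert cart.total() == 2.0

    def test_add_multiple_quantities():
        cart = ShoppingCart()
        cart.add("apple", 2.0, qty=3)
        assert cart.total() == 6.0
    ```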

  • Hawk@lemmynsfw.com · 2 days ago

    The key is identifying how to use these tools and when.

    Local models like Qwen are a good example of how these can be used, privately, to automate a bunch of repetitive non-deterministic tasks. However, they can spit out some crap when used mindlessly.

    They are great for sketching out software ideas though, i.e. try 20 prompts for 4 versions, get some ideas, and then move over to implementation.

  • penquin@lemm.ee · 2 days ago

    If you know what you’re doing, AI is actually a massive help. You can make it do all the repetitive shit for you. You can also have it write the code, and you either clean it up or take the pieces that work for you. It saves soooooo much time and I freaking love it.

    • deadbeef79000@lemmy.nz · 2 days ago

      That’s the thing, it’s a useful assistant for an expert who will be able to verify any answers.

      It’s a disaster for anyone who’s ignorant of the domain.

    • ikidd@lemmy.world · 2 days ago (edited)

      I knocked out an Android app in Flutter/Dart/Supabase in about a week of evenings with Claude. I had never used Flutter before, but I know enough coding to fix things and give good instructions about what I want.

      It even debugged my Android test environment for me, wrote automated tests to debug the application, and spat out the compose files I needed to set up the Supabase Docker container, plus the SQL queries to prep the database and authentication backend.

      That was using 3.5 Sonnet, and from what I’ve seen of 3.7, it’s way better. I think it cost me about $20 in tokens. I’d never used AI to code anything before; this was my first attempt. Pretty cool.

      • FauxLiving@lemmy.world · 2 days ago

        I used 3.7 on a project yesterday (refactoring to use a different library). I provided the documentation and examples in the initial context and it refactored the code correctly. It took the agent about 20 minutes to complete the rewrite and it took me about 2 hours to review the changes. It would have taken me the entire day to do the changes manually. The cost was about $10.

        It was less successful when I attempted to YOLO the rest of my API credits by giving it a large project (using langchain to create an input device that uses local AI to dictate as if it were a keyboard). Some parts of the code are correct; the langchain stuff is set up as I would expect. Other parts are simply incorrect and unworkable. It assumes it can bind global hotkeys in Wayland, configuration requires editing Python files instead of pulling from a configuration file, it created install scripts instead of PKGBUILDs, etc., etc.
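
        Concretely, the configuration complaint: the expected pattern is to pull settings from a config file rather than editing the source. A minimal sketch of that pattern, where the file path, keys, and defaults are all made up for illustration and not from the actual project:

        ```python
        # Sketch: load settings from a config file, falling back to defaults,
        # instead of hard-coding them in the Python source. All names are hypothetical.
        import json
        from pathlib import Path

        DEFAULTS = {"model": "whisper-small", "hotkey": "ctrl+alt+d", "sample_rate": 16000}

        def load_config(path="~/.config/dictation/config.json"):
            cfg_path = Path(path).expanduser()
            if not cfg_path.exists():
                return dict(DEFAULTS)
            with cfg_path.open() as f:
                user_cfg = json.load(f)
            return {**DEFAULTS, **user_cfg}  # user values override defaults

        config = load_config()
        print(config["hotkey"])
        ```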

        I liken it to having an eager newbie. It doesn’t know much, makes simple mistakes, but it can handle some busy work provided that it is supervised.

        I’m less worried about AI taking my job than about my job turning into being a middle manager for AI teams.

        • ikidd@lemmy.world · 1 day ago

          I think the further you get out into esoteric or new things, the less they have to draw on. I’ve had a bit of the same issue building LoRa telemetry on ESP32 with specific radio modules, because there might only be a couple of real-world examples out there of using those libraries.

          • FauxLiving@lemmy.world · 1 day ago

            I feel this pain.

            I’ve been trying to get simple telemetry working over LoRa on an ESP32-C6, and LLMs are largely worthless for this. We gotta fall back to old-school RTFM models.

    • tunetardis@lemmy.ca · 2 days ago

      I turned on Copilot in VSCode for the first time this week. The results so far have been less than stellar. It’s batting about .100 in terms of completing code the way I intended. Now, people tell me it needs to learn your ways, so I’m going to give it a chance. But one thing it has done is replace the normal auto-completion, which showed you what sort of arguments a function takes, with something that is sometimes dead wrong. Like the code will not even compile with the suggested args.

      It also has a knack for making me forget what I was trying to do. It will show me something like the left-side picture, with a nice rail stretching off into the distance, when I had intended it to turn, and then I can’t remember whether I wanted to go left or right? I guess it’s just something you need to adjust to. Like you need to have a thought fairly firmly in your mind before you begin typing so that you can react to the AI code in a reasonable way? It may occasionally be better than what you had in mind, but you need to keep the original idea in your head for comparison purposes. I’m not good at that yet.

      • Ledivin@lemmy.world · 7 hours ago (edited)

        I haven’t personally used it, but my coworker says using Cursor with the newest Claude model is a game changer and he can’t go back anymore 🤷‍♂️ He hasn’t really liked anything outside of Cursor yet.

      • penquin@lemm.ee · 1 day ago (edited)

        I don’t mess with any of those in-IDE assistants. I find them very intrusive and they make me less efficient. So many suggestions pop up and I don’t like that, and like you said, I get confused. The only time I thought one of them (codium) was somewhat useful was when I asked it to make tests for the file I was on. It did get all the positive tests correct, but all the negative ones wrong. Lol. So I naturally default to the AI in the browser.

        • tunetardis@lemmy.ca · 2 days ago

          Thanks, it makes me feel relieved to hear I’m not the only one finding it a little overwhelming! Previously, I had been using ChatGPT and the like when I was hunting for the answer to a particularly esoteric programming question. I’ve had a fair amount of success with that, though occasionally I would catch it in the act of contradicting itself, so I’ve learned you have to follow up on it a bit.

          • penquin@lemm.ee · 1 day ago

            Oh yeah, of course. You can’t just trust it 100%. One time Claude gave me a piece of code with a nasty bug that could have caused some serious issues. It was a one-liner that deleted an employee from the database merely by searching for said employee by name. Thankfully I caught it in the dev environment before it got into prod (assuming QA missed it, too) and started deleting people. lol.
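
            To illustrate why that one-liner was risky, here is a sketch of delete-by-name versus a safer delete-by-primary-key; sqlite3, the table, and the names are all invented for illustration, not the actual code Claude produced:

            ```python
            import sqlite3

            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")
            conn.executemany("INSERT INTO employees (name) VALUES (?)",
                             [("Alex Smith",), ("Alex Smith",), ("Jo Park",)])

            # Risky pattern: matching on name removes every row with that name,
            # including people you never meant to touch.
            conn.execute("DELETE FROM employees WHERE name = ?", ("Alex Smith",))

            # Safer: resolve the specific record first, then delete by primary key.
            row = conn.execute("SELECT id FROM employees WHERE name = ?", ("Jo Park",)).fetchone()
            if row is not None:
                conn.execute("DELETE FROM employees WHERE id = ?", (row[0],))
            ```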

    • Buckshot@programming.dev · 2 days ago

      It’s taken me a while to learn how to use it and where it works best, but I’m coming around to where it fits.

      Just today I was doing a new project: I wrote a couple of lines about what I needed and asked for a database schema. It looked about 80% right. Then I asked for all the models for the ORM I wanted, and it did that. Probably saved an hour of tedious typing.
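
      For a rough sense of the tedious typing being saved, the generated models look something like this (a SQLAlchemy 2.0 sketch; the comment does not say which ORM was used, and the tables here are invented):

      ```python
      # Sketch of the ORM boilerplate an assistant can generate from a short
      # schema description. Tables and columns are hypothetical examples.
      from sqlalchemy import ForeignKey, String
      from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

      class Base(DeclarativeBase):
          pass

      class Customer(Base):
          __tablename__ = "customers"
          id: Mapped[int] = mapped_column(primary_key=True)
          name: Mapped[str] = mapped_column(String(100))
          orders: Mapped[list["Order"]] = relationship(back_populates="customer")

      class Order(Base):
          __tablename__ = "orders"
          id: Mapped[int] = mapped_column(primary_key=True)
          customer_id: Mapped[int] = mapped_column(ForeignKey("customers.id"))
          total_cents: Mapped[int]
          customer: Mapped["Customer"] = relationship(back_populates="orders")
      ```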

      • penquin@lemm.ee · 2 days ago

        I’m telling you. It’s fantastic for the boring and repetitive garbage. Databases? Oh hell yeah, it does really well on that, too. You have no idea how much I hate working with SQL. The ONLY thing it still struggles with so far is negative tests. For some reason, every single AI I’ve ever tried has done well on positive tests, but just plain badly on the negative ones.
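
        For anyone unfamiliar with the positive/negative split being described, a quick pytest sketch (`parse_age` is a made-up function for illustration):

        ```python
        import pytest

        def parse_age(value):
            """Made-up example: parse a non-negative integer age from a string."""
            age = int(value)  # raises ValueError on non-numeric input
            if age < 0 or age > 150:
                raise ValueError("age out of range")
            return age

        # Positive test: valid input gives the expected output.
        def test_parse_age_valid():
            assert parse_age("42") == 42

        # Negative tests: invalid input must fail in the expected way; this is the
        # part the commenter finds assistants keep getting wrong.
        def test_parse_age_rejects_non_numeric():
            with pytest.raises(ValueError):
                parse_age("forty-two")

        def test_parse_age_rejects_out_of_range():
            with pytest.raises(ValueError):
                parse_age("-3")
        ```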

    • 2deck@lemmy.world · 2 days ago

      If you’re having to do repetitive shit, you might reconsider your approach.

  • friend_of_satan@lemmy.world · 2 days ago

    God, seriously. Recently I was iterating with Copilot for like 15 minutes before I realized that its complicated code changes could be reduced to an if statement.

  • x00z@lemmy.world · 2 days ago

    Not to be that guy, but the image with all the train tracks might just be doing its job perfectly.

    • MajorHavoc@programming.dev · 2 days ago (edited)

      I mean, not quite every project. Some of my projects have been turned off for not being useful enough before they had time to get that bad. Lol.

      I suppose you covered that with “given time”, though.

    • xthexder@l.sw0.com · 2 days ago

      They mean time to write the code, not compile time. Let’s be honest, the AI will write it in Python or JavaScript anyway.

  • Gxost@lemmy.world · 2 days ago

    It depends. AI can help write good code, or it can write bad code. It depends on the developer’s goals.

    • AES_Enjoyer@reddthat.com · 2 days ago

      It depends. AI can help write good code, or it can write bad code

      I’ll give you a hypothetical: a company needs to hire someone for coding. They can either hire someone who writes clean code for $20/h, or someone who writes dirty but functioning code using AI for $10/h. What will many companies do?

      • Gxost@lemmy.world · 2 days ago

        Many companies choose cheap coders over good coders, even without AI. Companies I’ve heard of have pretty bad code bases, and they don’t use AI for software development. Even my company preferred cheap coders and fast development, and the code base from that time is terrible, because our management didn’t know what good code is or why it’s important. For such companies, AI can make development even faster, and I doubt code quality will suffer.

  • jcg@halubilo.social · 2 days ago

    You can get decent results from AI coding models, though…

    …as long as somebody who actually knows how to program is directing it. Like, if you tell it what inputs/outputs you want, it can write a decent function, even going so far as to comment it along the way. I’ve gotten O1 to write some basic web apps with Node and HTML/CSS without having to hold its hand much. But we simply don’t have the training, resources, or data to get it to work on units larger than that. Ultimately it’d have to learn from large-scale projects and have the context size to be able to hold, if not the entire project, then significant chunks of it in context, and that would require some very beefy hardware.
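
    As a sketch of the scale that tends to work: spell out the inputs and outputs and let the model fill in one small, commented function. The function below is an invented Python example, not something from the comment:

    ```python
    def rle_encode(text: str) -> list[tuple[str, int]]:
        """Run-length encode a string.

        Input: any string, e.g. "aaabcc".
        Output: list of (character, count) pairs, e.g. [("a", 3), ("b", 1), ("c", 2)].
        """
        runs: list[tuple[str, int]] = []
        for ch in text:
            if runs and runs[-1][0] == ch:
                runs[-1] = (ch, runs[-1][1] + 1)  # extend the current run
            else:
                runs.append((ch, 1))              # start a new run
        return runs

    assert rle_encode("aaabcc") == [("a", 3), ("b", 1), ("c", 2)]
    ```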

    • Pennomi@lemmy.world · 2 days ago (edited)

      Generally only for small problems. Like things under 300 lines of code. And the problem generally can’t be a novel one.

      But that’s still pretty damn impressive for a machine.

      • MajorHavoc@programming.dev · 2 days ago

        But that’s still pretty damn impressive for a machine.

        Yeah. I’m so dang cranky about all the overselling that how cool I think this stuff is often gets lost.

        300 lines of boring code from thin air is genuinely cool, and gives me more time to tear my hair out over deployment problems.

  • mesamunefire@piefed.social · 2 days ago

    I’m looking forward to the next 2 years, when AI apps are in the wild and I get to fix them lol.

    As a senior dev, the wheel just keeps turning.

    • xmunk@sh.itjust.works · 2 days ago

      I’m being pretty resistant to AI code gen. I assume we’re not too far away from “Our software product is a handcrafted bespoke solution to your B2B needs that will enable synergies without exposing your entire database to the open web”.

      • mesamunefire@piefed.social · 2 days ago

        It has its uses. For templating and/or getting a small project off the ground, it’s useful. It can get you 90% of the way there.

        But the meme is SOOO correct. AI does not understand what it is doing, even with context. The things junior devs are giving me really make me laugh. I legit asked why they were throwing a very old version of React on the front end of a new project, and they stated they “just did what ChatGPT told them” and that it “works”. That was just last month or so.

        The AI that is out there is all based on old posts and isn’t keeping up with new stuff. So you get a lot of the same-ish looking projects that have some very strange/old decisions to get around limitations that no longer exist.

        • WrittenInRed [any]@lemmy.dbzer0.com · 2 days ago

          Yeah, I personally think LLMs are fine for like writing a single function, or to rubber-duck with for debugging or thinking through some details of your implementation, but I’d never use one to write a whole file or project. They have their uses, and I do occasionally use something like ollama to talk through a problem and get some code snippets as a starting point for something. Trying to do much more than that is asking for problems, though. It makes debugging way harder because you’re reading code you haven’t written, it can make the code style inconsistent, and a not-insignificant amount of the time, even in short code segments, it will hallucinate a non-existent function or implement something incorrectly, so using it to write massive amounts of code makes that way more likely.