Please remove this if it's not allowed

I see a lot of people in here who get mad at AI-generated code, and I'm wondering why. I wrote a couple of bash scripts with the help of ChatGPT, and if anything, I think it's great.

Now, I obviously didn't tell it to write the entire script by itself. That would be a horrible idea. Instead, I asked it questions along the way and tested its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about bash's syntax. Now, I could have used any other language I knew, but I chose bash because it made the most sense: bash ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler to run my scripts. I don't like bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. I also had not written anything of this complexity in bash before, just a bunch of commands on separate lines so I wouldn't have to type them one after another. But this script required many fairly advanced features. I was not motivated to learn bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not find how to easily pass values into a function and return them, remove a trailing slash from a directory path, loop over an array, catch errors from a previous command, separate the letters and numbers in a string, and so on.
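For anyone stuck on the same searches, here is a minimal sketch of those patterns, roughly what I ended up with (the helper name strip_slash is just something I made up for illustration):

    #!/usr/bin/env bash
    set -euo pipefail             # stop on errors and unset variables

    # Pass values in as positional parameters; "return" a value by
    # echoing it and capturing the output with $(...).
    strip_slash() {
        local dir="$1"
        echo "${dir%/}"           # %/ strips one trailing slash, if any
    }

    # Loop over an array.
    paths=("/tmp/a/" "/var/log" "relative/dir/")
    for p in "${paths[@]}"; do
        echo "$(strip_slash "$p")"
    done

    # Catch an error from a command via its exit status.
    if ! cp missing-file /tmp/ 2>/dev/null; then
        echo "copy failed" >&2
    fi

    # Separate the letters and the numbers in a string like "sda3".
    s="sda3"
    letters="${s//[^[:alpha:]]/}"   # drop everything that is not a letter
    digits="${s//[^[:digit:]]/}"    # drop everything that is not a digit
    echo "$letters $digits"         # prints: sda 3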

That is where ChatGPT helped greatly. I would ask it to write these pieces of code as I encountered them, then test its output with various inputs to see if it worked as expected. If not, I would tell it which case failed, and it would revise the code before I put it in my scripts.
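In practice that loop was just me throwing edge cases at a function by hand before pasting anything in. For example, with the hypothetical strip_slash from the sketch above:

    strip_slash "/tmp/a/"   # expect: /tmp/a
    strip_slash "/tmp/a"    # expect: /tmp/a (unchanged)
    strip_slash "/"         # expect: empty -- exactly the kind of case I'd report back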

Thanks to ChatGPT, someone with zero knowledge of bash can quickly write fairly advanced bash. I don't think I could have written what I wrote anywhere near as fast the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Thanks to ChatGPT, I could just write all of this quickly and move on. If I ever want to learn bash and am motivated, I will certainly take the time to learn it properly.

What do you think? What negative experiences with AI chatbots made you hate them?

  • corroded@lemmy.world · 3 months ago

    When it comes to writing code, there is a huge difference between code that works and code that works *well*. Let's say you're tasked with writing a function that takes an array of RGB values and converts them to grayscale. ChatGPT will probably give you two nested loops that iterate over the X and Y values, applying a grayscale transformation to each pixel. This gets the job done, but it's slow, inefficient, and generally not well suited for production code. An experienced programmer will take into account possible edge cases (what if a color is out of the 0-255 bounds?), apply SIMD functions and parallel algorithms, factor in memory management (do we need a new array, or can we write back to the input array?), etc.
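    Since this thread is about bash, here's a rough bash analogue of that works-versus-works-well gap (counting lines instead of converting pixels, so no SIMD, just the same idea): both snippets are correct, but one does the work in a slow shell loop while the other hands it to a single purpose-built tool.

        # Works: count lines in a shell loop. Correct, but bash touches
        # every line one iteration at a time, so it crawls on big files.
        count=0
        while IFS= read -r line; do
            count=$((count + 1))
        done < big.log
        echo "$count"

        # Works well: one pass through optimized C.
        wc -l < big.log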

    ChatGPT is great for experienced programmers to get new ideas; I use it as a modern version of “rubber ducky” debugging. The problem is that corporations think that LLMs can replace experienced programmers, and that’s just not true. Sure, ChatGPT can produce code that “works,” but it will fail at edge cases and will generally be inefficient and slow.

    • sugar_in_your_tea@sh.itjust.works · 3 months ago

      Exactly. LLMs may replace interns and junior devs, but they won't replace senior devs. And if we replace all of the interns and junior devs, where will the next senior devs come from?

      As a senior dev, a lot of my time is spent reviewing others’ code, doing pair-programming, etc. Maybe in 5-10 years, I could replace a lot of what they do with an LLM, but then where would my replacement come from? That’s not a great long-term direction, and it’s part of how we ended up with COBOL devs making tons of money because financial institutions are too scared to port it to something more marketable.

      When I use LLMs, it's like you said: to get hints about what options I have. I know it's sampling from a bunch of existing codebases, so having the LLM figure out what's similar can help. But if I ask the LLM to actually generate code, the result is almost always complete garbage unless it's really basic structure or something (e.g. generate a basic web server using <framework>), and even in those cases, I'd probably just copy/paste from the examples in the relevant project's docs.

      That said, if I had to use an LLM to generate code for me, I'd draw the line at tests. I think unit tests should be hand-written so we at least know the behavior is correct for given inputs. I see people talking about automating unit tests, and I think that's extremely dangerous, akin to "snapshot" tests, which I find almost entirely useless outside of ensuring that schemas for externally-facing APIs stay consistent.
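      To make that concrete in this thread's language, a minimal hand-written check might look like the sketch below (strip_slash and assert_eq are hypothetical names, not from any framework). The point is that a human wrote down the expected outputs, so the test actually encodes intent:

          strip_slash() { echo "${1%/}"; }   # function under test

          assert_eq() {                      # hand-rolled assertion helper
              if [[ "$1" != "$2" ]]; then
                  echo "FAIL: expected '$2', got '$1'" >&2
                  exit 1
              fi
          }

          assert_eq "$(strip_slash '/tmp/a/')" "/tmp/a"
          assert_eq "$(strip_slash '/')" ""
          echo "all tests passed"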