My opinion on AI code

March 17, 2025 | Aurora

I feel like this needs to be said. AI code is not a great thing, but I can't say it's a horrible thing either. I'm probably going to get asked this question a lot anyway, so I might as well put it in a blog post. I was watching 🔗 a video by Brodie Robertson that talks about exactly this, whether or not AI contributions are a good thing, and his conclusion is that they're not. Spoiler alert: I agree.

But what you've got to realize is that in 2025, people are going to start learning how to program, and they also know that they can ask any large language model for assistance. This is fine. AI-assisted code is fine; it always has been, and it always will be. I do not disagree with asking for help. However, the minute you let the AI write all your code for you and you only tweak minor things, that's when you're doing it wrong.

The fact still remains: an AI only knows what it has read on the internet, and what's on the internet is likely copyrighted or protected by some sort of license. Using said code would be a violation of said license or copyright. And that's not even getting into the fact that this isn't how you're supposed to write code in the first place. If you don't understand the language you're about to use, don't use the language.

A good example of this is this very website. I know very basic JavaScript, so I asked an AI to help me fix the rest of my JavaScript so that I could make a script that reads Markdown files and turns them into HTML. That script is responsible for generating the post you are reading right now. Outside of that, I tend to avoid JavaScript because I don't know enough about it yet to feel comfortable using it. The code in use here is AI-assisted. Keyword: assisted. And this, this is fine.
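For context, a bare-bones version of that kind of Markdown-to-HTML script might look something like this. To be clear, this is a simplified sketch, not the actual code running on this site, and it only handles a tiny subset of Markdown (headings, bold text, and links):

```javascript
// Simplified sketch of a Markdown-to-HTML converter.
// Handles only a tiny subset of Markdown: headings, bold text, and links.

// Inline formatting: bold (**text**) and links ([text](url)).
function inline(text) {
  return text
    .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>")
    .replace(/\[(.+?)\]\((.+?)\)/g, '<a href="$2">$1</a>');
}

// Split the document into blocks on blank lines, then turn each block
// into either a heading or a paragraph.
function mdToHtml(markdown) {
  return markdown
    .trim()
    .split(/\n{2,}/)
    .map((block) => {
      const heading = block.match(/^(#{1,6})\s+(.*)$/);
      if (heading) {
        const level = heading[1].length; // number of leading # signs
        return `<h${level}>${inline(heading[2])}</h${level}>`;
      }
      return `<p>${inline(block.trim())}</p>`;
    })
    .join("\n");
}
```

From there, reading a `.md` file with Node's `fs` module and writing the result out as an `.html` file is only a couple more lines.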

Completely AI-generated code is not fine, because as I said above, it is more than likely a license violation or copyright infringement. Either that, or it's pulled straight from some kind of documentation that doesn't even apply to the current situation and just won't work.

"But Aurora, what if I trained my own LLM on my own works?"

Well, then there's the following issue. If you have in fact done this, then how is the AI supposed to know what's right and what's wrong, what works and what doesn't? Because, as we all know, an AI is not able to test its own code.

TL;DR then: Do I agree with AI being used in code? Yes. Do I think it's good? Not in its current state. Do I want people to depend on it? No. As long as you have the AI assist you, I am fine with it. Just don't make contributions that are entirely AI, because they have more than likely used code from some other repo under some other license that you cannot use in your project due to a licensing conflict. In 2025, people will unfortunately grow up learning to ask an AI instead of the internet, and to be frank, that's fine. Things change.