GitHub and OpenAI recently announced Copilot, a new AI tool aimed at helping developers. Whether it's actually helpful remains to be seen.
At the end of June, GitHub CEO Nat Friedman announced a new developer tool in a blog post. In collaboration with OpenAI, GitHub built Copilot, which uses artificial intelligence to suggest functions or code snippets to devs as they write. To make these suggestions, Copilot draws context from the code being worked on and adapts as you type. While this sounds like it could be useful, it may actually be a bad idea.
There are a few reasons Copilot may not be as useful as GitHub and OpenAI hope. For one thing, devs already copy and paste functions and snippets from Stack Overflow. Much of the time, devs know what the next function needs to be and can grab it quickly. Not always: sometimes you have to hunt for what you need or collaborate with another dev to solve a problem. But there are already platforms for that, and the AI isn't going to problem-solve for you.
And while Stack Overflow copy-paste is commonplace, it already dilutes code quality. Inexperienced devs often don't customize the snippets they grab, and often don't fully understand the code they're using. An AI that makes this process even less taxing will only increase the frequency of sloppy code.
Another reason it may not be as useful as intended is that it could actually cause problems, and more work, down the line. What happens when a dev plugs in a Copilot-suggested function that behaves in an unintended way? Or contains broken code that isn't caught? Or breaks the whole project? These are all real problems, with varying degrees of severity.
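As a hypothetical illustration (this is not an actual Copilot output, just a sketch of the failure mode), here is the kind of plausible-looking snippet an AI assistant might suggest, one that passes a quick glance and a happy-path test but mishandles an edge case:

```python
# Hypothetical AI-suggested snippet: compute the median of a list.
# It looks reasonable and works for odd-length lists, but for
# even-length lists it returns the upper-middle element instead of
# averaging the two middle values -- exactly the kind of subtle bug
# that slips through when code is accepted without scrutiny.
def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

print(median([1, 3, 5]))     # 3 -- correct
print(median([1, 3, 5, 7]))  # 5 -- but the true median is 4.0
```

A dev who accepts this suggestion and moves on has shipped a bug that no syntax check will catch; only review or a deliberate edge-case test would.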
Then there’s code quality. AI is supposed to help us complete tasks, but humans are very good at being lazy. Coders, especially the good ones, are lazy in a productive way: they don’t want to do things more than once, so they try to get it right the first time and never look at it again. But with this tool, coders may become lazy in the wrong way, relying on AI to skirt the critical thinking their job requires. That erodes coding knowledge, which in turn produces unstable, insecure, fragile code.
When code quality goes down and fragility goes up, you’re asking your devs to go figure out what’s causing the problem: more work for an already overworked department. Worse, because they’ve copied, pasted, and accepted suggestions throughout the project, they may not even know where to start looking, which means it will take that much longer to find and fix.
With the current shortage of devs and coders, it’s understandable that someone would want to come up with a way to help these teams. But this may not be the best way to go about it. The aim seems altruistic: both companies say helping devs is part of their mission. But good motives don’t guarantee good outcomes, and even the best intentions can fall short or have unexpected consequences. Hopefully this tool proves useful and doesn’t cause problems down the road, but only time will tell. In the meantime, devs and business leaders should ensure that all code written with this tool is peer-reviewed, at a minimum, before deployment. Regular code-review cycles are also recommended for every business to ensure stability and security.