OK, let me give a little bit of context. I'll turn 40 in a couple of months, and I've been a C++ software developer for more than 18 years. I enjoy coding, and I enjoy writing "good", readable code.
However, for a few months now, I've become really afraid for the future of the job I like, given the progress of artificial intelligence. Very often I can't sleep at night because of this.
I fear that my job, while not completely disappearing, will become a very boring one consisting of debugging automatically generated code, or that the job will disappear entirely.
For now, I'm not using AI. A few colleagues do, but I don't want to, because one, it removes a part of the coding I like, and two, I have the feeling that using it is sawing off the branch I'm sitting on, if you see what I mean. I fear that in the near future, people not using it will be fired because management sees them as less productive…
Am I the only one feeling this way? I have the feeling all tech people are enthusiastic about AI.
So far it is mainly an advanced search engine: someone still needs to know what to ask it, interpret the results, and correct them. Then there's the task of fitting the output into an existing solution/landscape.
Then there's the 50% of non-coding tasks you have to perform once you're no longer a junior. I think it'll mainly be useful for getting less experienced developers productive faster, but that will require more oversight from experienced devs.
At least for the way things are developing at the moment.
If your job truly is in danger, then not touching AI tools isn't going to change that. The best you can do for yourself is to explore what these tools can do for you and figure out whether they can help you become more productive, so that you're not first on the chopping block. Maybe in doing so, you'll find other aspects of programming that you enjoy just as much and that don't yet get automated away by these tools. Or maybe you'll find that they're not all they're hyped up to be, and that will ease your worry.
There’s a massive amount of hype right now, much like everything was blockchains for a while.
AI/ML is not able to replace a programmer, especially not a senior engineer. Right now I'd advise you to do your job well and hang tight for a couple of years to see how things shake out.
(me = ~50 years old DevOps person)
I'm only in my very first year of DevOps, and already I have five years' worth of AI giving me hilarious, sad, and ruinous answers about the field.
I needed proper knowledge of Ansible ONCE so far, and it managed to lie about Ansible to me TWICE. AI is many things, but an expert system it is not.
Well, technically, an "expert system" is a type of AI from a couple of decades ago that was based on rules.
Great advice. I would just add: learn to leverage those tools effectively. They are a great productivity boost. Another side effect, once they become popular, is that some skills we already have will become harder to learn, so they might be in higher demand.
Anyway, make sure you put aside enough money to not have to worry about such things 😃
Nobody knows if and when programming will be automated in a meaningful way. But once we have the tech to do it, we can automate pretty much all work. So I think this will not be a problem for programmers until it’s a problem for everyone.
Uncle Bob's response when asked if AI will take over software engineering jobs (1m)
To answer your question directly: the debate has been going on in the broader public since ChatGPT dropped.
To answer how you’re feeling: that’s valid, because a lot of big pockets seem to not care at all about the ethical considerations.
I disagree with the other posts here that you're overreacting. I think that AI will replace most jobs (maybe as many as 85% at some point). Consider becoming a plumber or an electrician. Until robots become commonplace, maybe 20 years from now, you'll have a job that AI won't be able to touch much. And people won't run out of asses or gaming. So those will be stable professions for quite a while. You can still code in your free time, as a hobby. And don't cry over the lost revenue of being a programmer, because that will happen to everyone affected by AI. You'll just have another job while the others won't. That's the upside.
I understand that this comment is not what people want to hear with their wishful thinking, so they’ll downvote it. But I gotta say it how I see it. AI is the biggest revolution since the industrial revolution.
“AI” is a bubble. A lot of these concerns will go away this year once the bean-counters do the math and realize that the benefits of running generative neural networks aren’t worth the costs.
A single ChatGPT query costs about 50–500 times as much energy as a pre-Bard Google search, to say nothing of the engineering time needed to build the models. And since LLM outputs can't be trusted, end users will still need writers and developers to go over everything and check for hallucinations.
The trajectory here closely mimics “Web3”, when people thought that massively redundant distributed ledgers were going to be the next big thing, despite the fact that traditional electronic ledgers beat the blockchain in literally every aspect of performance, efficiency, and security.
Soon, "AI" will be just as synonymous with "plagiarism" as "cryptocurrency" is with "scam".
With the difference that the industrial revolution created a lot of new jobs with better pay, while AI doesn't. I see people suggesting that this has happened before and that it will soon turn the economic situation into something much better, but I don't see that at all. Just because it's also a huge revolution doesn't mean it will have the same effects.
As you wrote, people will have to switch to manual jobs like laying bricks and wiping butts. The pay in those jobs won't increase just because more people have to work them.
If you are truly feeling super anxious, feel free to DM me. I've released generative AI tech, though admittedly I've only been in that space for about a year and a half, and… you're good. Happy to get in depth about it, but genuinely, you are good, for so many reasons that I'd be happy to expand upon.
The main point for programmers, though, is that it's expensive as fuck to get any sort of process going that will produce complex systems of code. And frankly, I'm being a bit idealistic there; that's without even considering the amount of time. I love AI, but the hype massively misrepresents the reality of the tech.
I’m less worried and disturbed by the current thing people are calling AI than I am of the fact that every company seems to be jumping on the bandwagon and have zero idea how it can and should be applied to their business.
Companies are going to waste money on it, drive up costs, and make the consumer pay for it, causing even more unnecessary inflation.
As for your points on job security: your trepidation is valid, but premature by numerous decades, in my opinion. The moment companies start relying on these LLMs to do their programming for them is the moment they inevitably end up with countless bugs and no one smart enough to fix them, including the so-called AI. LLMs seem interesting and useful on the surface, and a person can show many examples of this, but at the end of the day, they're regurgitating fed content based on rules and measures with knob-tuning. I do not yet see strong objective evidence that they can effectively replace a senior developer.
The “AI” bubble will burst this year, I’d put money on it if I had any.
The last time we saw a bubble like this was “Web3” and we all know how that turned out.
Have you seen the shit code it confidently spews out?
I wouldn’t be too worried.
Well, I've seen it. I even code-reviewed it without knowing; when I asked my colleague what had happened, he said, "I used ChatGPT. I'm not sure I understand what this does exactly, but it works." I must confess that after the code review comments, not much was left of the original stuff.
If I'm going to poke small holes in that argument: the exact same thing happens every day when coders google a problem, find a solution on Stack Exchange or the like, and copy/paste it into their code without understanding what it does. Yes, it was written initially by someone who understood it, but the end result is exactly the same: code that was implemented without understanding its inner workings.
The difference being that googling the problem and visiting a page on stackoverflow costs 50-500 times less energy than using ChatGPT.
Really? I haven't done the ChatGPT thing, but I know I've spent days searching for solutions to some of the more esoteric problems I run into. I can't imagine that asking an AI and then debugging its output would be any more intensive, as long as the AI solution functioned well enough to be a starting point.
That’s the thing, how do you determine whether or not the “AI solution functions enough” without having a human review it?
The economics aren’t there because LLM outputs aren’t trustworthy, and the kind of expertise you’d need to validate them is functionally equivalent to that which could be employed to write the code in the first place.
"Generative AI" is an inefficient solution to a problem that's already been solved by the existence of coding support forums like StackOverflow. Sure, it can be neat to ask it for example code or a bedtime story, but once the novelty wears off, all you're left with is an expensive plagiarism machine that won't even notice when it confidently lies to you.
I have a strong opinion that the problem is more one of people attempting to solve every problem with their shiny new hammer. AI, in the current incarnations, is very good at many things. When implemented properly, LLMs are great at filtering huge amounts of text data or performing semantic analysis. SD does produce images and can be directed.
LLMs are not a replacement for thought. SD is not a replacement for an artist. They are all tools for helping people do things.
I am designing a hypothetical LLM architecture for analyzing the relational structure of a story and mapping it out. I am hoping that it will be capable of generating a meaningful relationship network at the end. It is a very specific goal and a very specific structure. It won’t write a story; it won’t produce dialog; it won’t build a plot. What it will do is build a network of places and characters that can be used to make decisions for all of those things. I want something that helps with internal consistency of models doing other things. So if a GPT model were to write something, it could be fact-checked against the world network to see if what it is saying is reasonable.
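The "world network" idea above could be sketched as a simple graph of entities and labeled relations. This is purely illustrative: the struct, method names, and the entity/relation strings are my assumptions, not the commenter's actual design.

```cpp
#include <map>
#include <set>
#include <string>

// Hypothetical sketch of a story "world network": entities (characters,
// places) connected by labeled relations. All names here are made up.
struct WorldNetwork {
    // edges[from][relation] = set of targets, e.g. edges["Alice"]["knows"]
    std::map<std::string, std::map<std::string, std::set<std::string>>> edges;

    // Record a relation such as ("Alice", "lives_in", "Paris").
    void relate(const std::string& from, const std::string& rel,
                const std::string& to) {
        edges[from][rel].insert(to);
    }

    // Consistency check: does the network already assert this relation?
    // A generated sentence could be fact-checked against this.
    bool holds(const std::string& from, const std::string& rel,
               const std::string& to) const {
        auto e = edges.find(from);
        if (e == edges.end()) return false;
        auto r = e->second.find(rel);
        if (r == e->second.end()) return false;
        return r->second.count(to) > 0;
    }
};
```

A generator's claim like "Alice lives in Paris" would then reduce to a `holds("Alice", "lives_in", "Paris")` lookup against the network.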
Yeah, not using it isn’t going to help you when the bottom line is all people care about.
It might take junior dev roles, and turn senior dev into QA, but that skillset will be key moving forward if that happens. You’re only shooting yourself in the foot by refusing to integrate it into your work flows, even if it’s just as an assistant/troubleshooting aid.
It's not going to take junior dev roles; it's going to transform the whole workflow and make dev jobs more like QA than actual dev jobs, since the difference between junior, middle, and senior is often only the scope of their responsibility. (I've seen companies that make juniors do a full-stack senior's job while on paper they were still juniors, with a paycheck somewhere between junior and middle; such companies are the majority in rural areas.)
Give Copilot or something similar a try. AI is pretty garbage at the more complex aspects of programming, but it's great at simple boilerplate code. At least for me, that doesn't seem like much of a loss.
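For the sake of illustration, "simple boilerplate" means the kind of mechanical code an assistant tends to autocomplete well: comparison operators, printing helpers, and the like. The struct below is an invented example, not output from any particular tool.

```cpp
#include <sstream>
#include <string>

// Illustrative boilerplate: a small value type with the repetitive
// operators and formatting code that assistants are good at filling in.
struct Point {
    int x = 0;
    int y = 0;

    bool operator==(const Point& other) const {
        return x == other.x && y == other.y;
    }
    bool operator!=(const Point& other) const { return !(*this == other); }

    std::string to_string() const {
        std::ostringstream out;
        out << "Point(" << x << ", " << y << ")";
        return out.str();
    }
};
```

None of this is intellectually interesting to write by hand, which is why delegating it doesn't feel like much of a loss.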
I'd like to thank you all for your interesting comments and opinions.
I see a general trend of not being too worried, because of how the technology works.
The worrisome part is what capitalism and management might think, but that's just an update of the old joke: "A product manager is a guy who thinks 9 women can make a baby in 1 month." And anyway, if not this, there would be something else; that's how our society is.
Now I feel better, and I understand that my initial reaction of fear toward this technology, and my rejection of it, is perhaps a very bad idea. I really need to start using it a bit in order to get to know it. I've already found some useful use cases that can help me (getting inspiration while naming things, generating some repetitive unit test cases, using it to help figure out well-known APIs, …).
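The "repetitive unit test cases" use case mentioned above is typically table-driven tests: a human writes the function and the test loop once, and an assistant can be asked to fill in more rows. A minimal sketch, where `clamp_int` is an invented stand-in function:

```cpp
#include <cassert>
#include <vector>

// Stand-in function under test: clamp a value into [lo, hi].
int clamp_int(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

struct Case { int value, lo, hi, expected; };

// Table-driven tests: each row is one case. The rows are the repetitive
// part one might ask an assistant to generate; the loop stays hand-written.
void run_clamp_tests() {
    const std::vector<Case> cases = {
        {5, 0, 10, 5},    // in range: unchanged
        {-3, 0, 10, 0},   // below range: clamped to lo
        {42, 0, 10, 10},  // above range: clamped to hi
        {0, 0, 10, 0},    // boundary: lo itself
        {10, 0, 10, 10},  // boundary: hi itself
    };
    for (const auto& c : cases)
        assert(clamp_int(c.value, c.lo, c.hi) == c.expected);
}
```

Since each row is verified by the hand-written loop, a wrong generated case fails loudly instead of slipping through, which keeps the human firmly in the review seat.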
Many have already touched on this, but you hit the nail on the head with the third paragraph. It's always smart to prepare, but any attempt to use this to reduce workers will go horribly. Saving isn't crazy in this regard, but I wouldn't plan on it long term until LLMs become less expensive, gain better reasoning, and, most importantly, handle longer context windows without degrading. These aren't easy solves; they brush up against fundamental limits of the tech.
I thought about this some more, so I figured I'd add a second take to address your concerns more directly.
As someone in the film industry, I am no stranger to technological change. Editing in particular has radically changed over the last 10 to 20 years. There are a lot of things I used to do manually that are now automated. Mostly what it’s done is lower the barrier to entry and speed up my job after a bit of pain learning new systems.
We’ve had auto-coloring tools since before I began and colorists are still some of the highest paid folks around. That being said, expectations have also risen. Good and bad on that one.
Point is, a lot of the time these things tend to simplify/streamline lower-level technical/tedious tasks and enable you to do more interesting things.