
5 New Software Engineering Jobs That AI Will Create (That Aren't for AI Specialists)

This is the Internet of Bugs. We're gonna talk about five ways that AI will create new software engineering jobs, or at least new opportunities. My name's Carl. I've been a software professional for 35 years now. I have a child who's a college freshman majoring in computer science, and because of that I've been getting a lot of questions from college students asking what life is like as a software engineer.

So I started this YouTube channel to answer those questions and others that come up. This was not the video I planned to do next. Based on the comments and the reaction to my last video, I bummed a whole bunch of people out. Sorry about that. But I'm not gonna sugarcoat stuff, and I'm gonna tell you when I see things that I think are wrong. I'll try to give you mitigation steps to the extent that I can. I'm trying to help, not trying to crush anybody's soul. So I figured for this one, I'd move some stuff around in my schedule and bump up the video that has some good news.

So contrary to popular belief, or what seems to be popular belief, I'm not an AI hater. I'm a hype hater. I'm actually very pro-AI. I just think a lot of this is hype, and I think that getting to much better AI is a lot more complicated than this. I might put that in another video at some point. Quick caveat: it's always possible I'm wrong on parts of any of these, but this is my best educated professional guess. Don't bet your house on what I'm about to say, but I'm relatively confident. Feel free to argue with me in the comments, or with each other, as long as you stick to facts and evidence. No insults, please. And no garbage like "the word 'never' is a long time, do you know what it means?", because that's not going to be useful. Yes, we know what 'never' means.

If I said 'never', I meant 'never.' Also, I'm going to try to say LLM, for large language model, instead of AI. Forgive me for putting AI in the title of this video and the other videos. I hate it when words get co-opted by marketing; AI used to really mean something, and now it's come to mean something else. I know I'm contributing to that, and I'm sorry. But if I don't say AI, the algorithm will never show my stuff to the people who are buying the AI hype, and I think somebody needs to be the voice of reason.

So forgive me the commercialism there. Okay, number one: new startups. Thanks to a commenter for helping me clarify my thoughts on this one. Quick history recap. Sorry, I'm old, but I think my perspective is kind of why you're all here, so let me tell you how something like this played out in the past. Back when the web started, it took an expert to manually set up a website. I used to do that. Setting up a website for someone meant getting their domain name, getting a server, getting an ISP, getting a line pulled from the ISP to them, then getting a computer and putting the web server on it, and all that kind of stuff. It's all a few clicks now, but back then it took somebody like me, an expert, to manually set up a website. What that meant was that it was really expensive, so only rich companies had websites. If you couldn't justify the cost of bringing in an expert, or hiring one, to put your website up, you just didn't have one. Then came things like Blogger and WordPress and Squarespace, which let companies get websites for a whole lot cheaper. That killed the market for experts getting paid to put websites up and running. After a while, I didn't get any more jobs hooking a server up in somebody's office. But that's okay, because it meant a ton of companies, startup ideas, and new ventures that couldn't have afforded to hire an expert at the beginning could now get connected to the internet. The startup cost of putting your website on the web became really small, especially compared to what it used to be. A bunch of those companies that put up their websites for cheap turned into big businesses, and those businesses turned around and hired a bunch of software engineers, a bunch of IT people, and a bunch of other roles. I can't prove it; I can't run a controlled experiment.
But I'm convinced that far more of us got jobs from those companies growing and needing to hire people like us than would have if it still required us to come and build the website manually. So I think that was a net win for us, and I'm sure it was a net win for the rest of society. I'm convinced the same thing is gonna happen with LLMs. I expect there's a ton of startups right now that would get created, except the person who wants to start them isn't technical, can't find a technical co-founder willing to work for equity, and doesn't have enough money to hire somebody technical. I've had lots of offers from people: "Hey, come be our CTO and work for equity." Well, if I was gonna do that, I'd do it on my own idea. So, no.

But now at least some percentage of those businesses, ideas, and entrepreneurs ought to be able to use an LLM to get off the ground with a very quick-and-dirty product. The thing about startups is you need to find product-market fit and grow your customer base. That's a lot of iterating, a lot of tweaking, a lot of figuring out what users like and what they don't. Most of the people who try that fail, and that's okay, but some of them stick: they find a niche, which may or may not be the one they started with, and then they can grow.

And once they grow, I'm guessing most of them, or many of them at least, will find that the code the LLM generated for them has limitations. It's hard to work on, it's got bugs, and it's hard to scale. At that point they're gonna say: well, we've got enough revenue, and this product the LLM built for us is not gonna be able to meet our needs going forward. So now it's time to hire some engineers, some technical people, to help us through the next phase of our growth. And I'm convinced, like last time, that there will end up being more jobs for us under that scenario than there would be if no one could start an online business without a technical officer. Okay, number two: automation that creates more products. There were other people in the comments on one of my previous videos who helped me clarify my thinking on this, so thanks to you two. Imagine five people work on an assembly line, and then a robot comes out that can do the work of five people with one operator. Generally one of two things happens at this point: either you end up with one machine, one operator, four layoffs, and the same amount of output, or you buy five of these robots, keep all five of your employees, and get five times the output (the work of 25 people). Or some mix of the two, but let's just talk about the extremes. Usually what happens is you keep the one person, the one robot, and the same amount of output, and you let the other four people go.

Now, why is that? Well, because robots are expensive, probably a lot more expensive than that one operator's salary. And five times the raw materials is expensive too, whatever it is you're building, and there's no guarantee you'll be able to sell five times the product. You don't wanna spend a bunch of money on raw materials to make five times as much of the thing and then have it sit in a warehouse, because you don't know there are enough people who want it, and you don't wanna drive the price down. So imagine the same scenario but with software: five programmers, and a new LLM tool that makes each one five times faster. What are the options? Well, in this case the machine isn't expensive. The license for the LLM tool is either O(1), a constant, because you bought it for your whole company, or O(log n), where there's a tiered scale: adding another employee or another seat adds a little bit of cost, but nothing that grows like the number of employees. And the total you pay for that tool for your whole company is probably cheaper than the salary of a single one of your engineers.

There are no raw-material costs with software, right? You don't need to buy a bunch of pig iron; you don't need to buy a bunch of chips to increase the amount of stuff you build. And if you end up having to throw more load at the servers, well, it's the cloud, you turn on more instances. No big deal, right? So there's a lot more incentive to make more products, or more features, faster; or to A/B test things, do it one way, then another, swap back and forth, and see which is better. There are a bunch of options there that aren't any more expensive than what you were doing before, so you can be more productive instead of just cutting your staff down to one person. Now, some people are always going to cut their staff down to one person; some companies are just like that. But in this scenario, I think you'll more often get a mix: maybe we won't do five times the output, but there will still be opportunities for more products and more stuff. As tools give us the opportunity to build more, and companies build more, the more they need us. Is that going to offset all of the people who do get laid off? I don't know; we can never know for sure. But I'm guessing it's going to be close. I'm on the optimistic side about that. A few years from now we might look back and it turns out I was wrong, but I'm guessing the layoffs from getting the tools are going to be smaller than the new opportunities. We'll see.
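The licensing math above can be sketched as a toy comparison. Every number here is hypothetical; the point is only the shape of the curves: hardware scales linearly with headcount, while a software tool is roughly flat (O(1)) or grows slowly (O(log n)).

```python
# Toy comparison of how costs scale with headcount n.
# All dollar figures are made up for illustration.
import math

ROBOT_COST = 500_000        # hypothetical cost per robot (one per operator)
FLAT_LICENSE = 20_000       # hypothetical O(1) company-wide tool license
SEAT_TIER_PRICE = 20_000    # hypothetical price per tier of an O(log n) plan

def robot_cost(n):
    """Hardware scales linearly: one robot per operator."""
    return n * ROBOT_COST

def flat_tool_cost(n):
    """O(1): one company-wide license regardless of headcount."""
    return FLAT_LICENSE

def tiered_tool_cost(n):
    """O(log n): each doubling of seats bumps you up one pricing tier."""
    return SEAT_TIER_PRICE * (1 + math.floor(math.log2(n)))

for n in (5, 50, 500):
    print(n, robot_cost(n), flat_tool_cost(n), tiered_tool_cost(n))
```

Even at 500 seats, the tiered tool in this sketch costs less than a single robot, which is the whole incentive difference between the factory scenario and the software scenario.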

Number three: customizing LLM models. So let's talk briefly about how LLMs work. Caveat: I'm not an AI expert, a machine-learning expert, or a training expert, so I'm going to keep this quick. You've got basically this giant thing in the cloud with lots and lots of simulated neurons, billions and billions of them. You show it a bunch of training data, it learns from that data, then you show it prompts and it spits stuff out. Greatly oversimplifying, I know, but the training of these giant models is really expensive. I've heard estimates in the billions of dollars. I don't know if that's true or not; it doesn't really matter. Just trust me, they're really expensive. What that means is it's too expensive to train a new multi-billion-parameter model for every single company that wants an LLM to do something specific. The only ones who are going to have those bazillion-node models are the companies with a lot of money to spend on training.

So what happens is people have created various ways of taking a big generic model and making it more specific to your particular use case. My guess is there's going to be a lot of work for engineers, who don't necessarily have to be machine-learning experts (because I've done it), customizing output to match requirements. I'm going to use Stable Diffusion as an example here. It's an image-generation model, and there's an interface to it called ComfyUI, which is basically one of those node-graph things where you've got boxes and draw lines between them. I think it illustrates the ideas pretty well. If you only take one thing away from this video, if you walk away from here and you're a software engineer and there's only one thing you remember, make it this: learning how the output of LLM models gets customized to a specific business, to a specific use case, is probably the most useful thing you can learn. Not just for next year, but for going forward. I think it's how a lot of software work is going to be done in the future.

So this is ComfyUI, a user interface for Stable Diffusion. I'm making a simple workflow that just generates a picture of a shark. Then I'm going to turn on the LoRA, which is a low-rank adaptation that makes things kind of cartoony, and it's going to turn the shark into a cartoon shark. Then I'm going to turn that back off and turn on the ControlNet at the top, which is basically going to force the shark into the shape of the bump map in that image. This is how these models can get modified after the fact to change their output to look more like what you want. And this isn't just for graphics; you can do it for other things too. I'm going quick; if you want more info, tell me in the comments. So now, there we go: I've got the first shark we had, only it's in the shape of the bump map I told it to use. And now I'm going to turn the cartoony LoRA back on to combine the two, and I get a cartoony shark, in that same pose the bump map insists it lands in.
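The core trick behind LoRA can be sketched in plain Python. Instead of retraining a full d-by-d weight matrix W, you train two skinny matrices B (d by r) and A (r by d) with a tiny rank r, and at inference apply W' = W + (alpha / r) * B * A. All shapes and numbers below are made up for illustration; this is the math idea, not any real library's API.

```python
# Toy illustration of low-rank adaptation (LoRA), in pure Python.
# The base weight W stays frozen; only the small adapter matrices
# B (d x r) and A (r x d) are "trained", and the adapted weight is
# W' = W + (alpha / r) * B @ A.  Sizes here are made up.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    inner, cols = len(Y), len(Y[0])
    return [[sum(row[k] * Y[k][j] for k in range(inner))
             for j in range(cols)] for row in X]

def apply_lora(W, B, A, alpha, r):
    """Return W + (alpha / r) * B @ A, leaving W untouched."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

# A 4x4 "frozen" base weight, adapted with a rank-1 update:
W = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
B = [[1.0], [0.0], [0.0], [0.0]]   # d x r, with r = 1
A = [[0.0, 2.0, 0.0, 0.0]]         # r x d
W_adapted = apply_lora(W, B, A, alpha=1.0, r=1)
# Only 8 adapter numbers exist here versus 16 in W; at real model
# scale that gap is millions of parameters versus billions.
```

Turning a LoRA "off", like toggling the node in the ComfyUI demo, is just setting alpha to zero, which leaves W unchanged.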

So do some research on that. There are papers in the description. It's a really good thing to learn if you're a software engineer. Okay, number four: debugging LLM-created code. I take it as a given that all non-trivial pieces of software have bugs in them. In 35 years, I've never seen a non-trivial piece of software that didn't eventually turn out to have at least one bug. And if you believe that LLMs can exist without bugs, and that the code they generate won't have any bugs in it, I don't know what color the sky is in your world. I hope it's pretty where you are. Thanks for watching this far; I don't know if I can help you. The next question is: can't an LLM just do all of its own debugging itself? I would say no, and there are people smarter than me who would say no. Here's a quote from Brian Kernighan, one of the people behind the C language and Unix.

Quote: "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" I don't know if an LLM is gonna be able to do that. A question about this is: does this really count as an extra or additional job? I think so. I think that debugging LLM-written code is gonna be fundamentally harder than debugging human-written code, for three reasons. One, you can't understand what the LLM was thinking at the time, because it doesn't really think. LLMs hallucinate. They were trained on code found on the internet, and code found on the internet has bugs in it; if you've ever looked at Stack Overflow, there's not just perfect code on Stack Overflow, trust me. And LLMs are software themselves, and software inherently has bugs, so by extension LLMs will have bugs and their models will have bugs. Now, some of the bugs generated by the LLM are gonna get caught before you ship, but just like with human programmers, a lot of them won't. And I believe that debugging those will be a lot harder than debugging human-written code. I've debugged a lot of code in my career. I've spent a lot of time taking over projects that were written by the lowest bidder; they hit a wall, and I came in and tried to bail them out.

And I can tell you a few things about taking over somebody else's code base, or trying to debug or read somebody else's code. It's much harder to debug when the variable names or the function names or the class names are just wrong. Sometimes you create a class, it's called something, and over time it comes to actually do something else, and you never go back and change the name. Then somebody coming into the code looks at it, thinks that class does something, and it doesn't do that thing. The LLM put things together that it found close together in its data set; you're not guaranteed that the variables you're debugging actually mean what they would have meant if they'd been named by a person whose thought process you understood. Second, I can tell you it's much harder to debug a code base when the person who wrote it speaks a different language than you, and none of the names make any sense when they get jumbled in your head. You can try Google Translate on them, but half of them don't translate, because they're terms of art, or abbreviations, or things only the person who wrote it knew. You end up having to reverse-engineer what all this stuff does just by looking at what the thing actually does and trying to figure out what it's supposed to do. It makes it so much harder. And the LLM speaks a foreign language in a way that no one from Eastern Europe or India or China or wherever possibly could.

And then I can tell you it's much harder to debug when the person who wrote the code brought in a design pattern from a different language or a different industry into a realm it doesn't normally fit. I see this a lot when I'm looking at iPhone code written by an Android developer who never really learned iPhone development. They do these Android-specific things, and you look at it going: that's a weird pattern, why would you do that? It causes weird problems in this particular realm, because the way queues and asynchronous stuff work on iPhone is different than on Android, and the same goes the other way too.

A lot of times people grab a function that's named similarly to a function in the environment they came from, assume it's gonna work the same way, and use it the same way. And it turns out there are subtle differences. That's a thing I'm guessing the LLM is gonna have a hard time with, and it makes code really hard to debug; you often end up having to rewrite a bunch of it. I'm expecting that to happen a lot with code that LLMs write. That's why I believe debugging LLM code will be harder, why I think we're gonna need more people to do it than we need to debug code now, and why I think there are gonna be additional jobs. Okay, second thing. This is item four, subsection two: people are really bad at regression bugs, and LLMs will be worse. A regression bug is when you fix a thing, you create a feature, you fix a bug, and then later you fix another bug or add another feature, and that accidentally affects the functionality of something you'd done previously. The new thing you built works fine, but something you built a long time ago that used to work fine now breaks. And you can get into this whack-a-mole situation where every time you fix a thing, you break something somewhere else.
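Here's a made-up minimal example of that whack-a-mole pattern: a "fix" for one input silently breaks an input that used to work, and only re-running the tests for the old behavior catches it. The function names and URL format are invented for illustration.

```python
# A made-up example of a regression bug: a "fix" for one case
# silently breaks a case that used to work.  Re-running tests for
# the OLD behavior (regression tests) is what catches it.

def parse_port_v1(url):
    """Original: return the port from 'host:port', defaulting to 80."""
    if ":" in url:
        return int(url.split(":")[1])
    return 80

def parse_port_v2(url):
    """'Fixed' to handle 'http://host:port' URLs -- but the new split
    breaks the plain 'host:port' input that v1 handled, because
    'example.com:8080'.split('//') has no second element."""
    return int(url.split("//")[1].split(":")[1])

def parse_port_v3(url):
    """A fix that keeps BOTH behaviors: strip any scheme prefix,
    then fall back to the original logic."""
    hostpart = url.split("//")[-1]
    if ":" in hostpart:
        return int(hostpart.split(":")[1])
    return 80

# The new feature works in v2 ...
assert parse_port_v2("http://example.com:8080") == 8080
# ... but the old behavior regressed: parse_port_v2("example.com:8080")
# raises IndexError, which re-running the v1 tests would have caught.
```

The fix for one input changed behavior on an input nobody was looking at, and nothing flagged it until the old tests ran again. That's exactly the context problem the next paragraph is about.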

It's a pet peeve; I'll be talking about it a lot in future videos, so subscribe if you wanna see those. But my belief is that LLMs are gonna be a lot worse at that, because it takes a lot of context to understand everything the software used to do, and to be sure you're not touching anything that's gonna break something else that used to work. It's really hard for people to do, and people don't have token limits the way prompts do. I can't imagine it's not gonna be way worse when using an LLM for that. Now, I've never seen anybody use an LLM on the size of code base I'm talking about. I think it's gonna be awful, but we'll have to see what happens. I just know really competent developers who are really bad at that, and I can't imagine an LLM is gonna be better at it. And then third, there will just be bugs in the LLMs themselves. We talked earlier about customizing the output, and my guess is there's gonna be some amount of things like LoRAs and ControlNets for this too. It's like: okay, every time we try to do this, the LLM spits out a Python pattern, except we're trying to write in golang, and we can't let it do that because golang doesn't work the same way as Python. I'm just making up the languages, but you get the idea. What we need to do is basically create a LoRA or a ControlNet or something that will tweak the output as it comes out, so that it generates code that makes sense for the language we're trying to get it to use. Does that make sense? So I'm guessing there are going to be opportunities and jobs for people to basically take the code that comes out of an LLM, change the LLM so the code that comes out is more like what you want, and work around the bugs you find that the LLM insists on producing. We've come to the last one.
Now, you can argue that this one is just a different case of the debugging item before it, but it merits its own bullet point as far as I'm concerned. And just for the record, the thing I'm about to talk about terrifies me far more than the idea of Skynet killing us all, the science-fiction war with the machines where the machines become sentient and take over the planet. I'm not worried about that.

What I'm worried about is a thing called adversarial attacks. This is where people take software and intentionally try to figure out how to make it behave in bad ways so they can take advantage of it and exploit it. Now, we have a pretty good handle on what a buffer-overrun bug is; it's been happening since the Morris worm in the '80s, I think. And we know how you're supposed to write code to keep buffer overruns from happening, right? It's known at this point what you're supposed to do. There's a list of the most common bad software practices that allow security exploits, and when you get a giant list of the most exploited bugs from the last year, almost every one of them is tied to one of those bad practices. So we know all the bad practices, but our track record on shipping exploitable code that people take advantage of is just abysmal. We as an industry do awfully at that, and it seems to be getting worse. I've got a video on the bug count going up and software getting worse, so you can go look at that if you want. But even for the things that we know create problems, we as an industry are crap at writing our code the way we're supposed to so that we don't have those problems.

By contrast, we have no idea what the most common exploit vectors for LLMs are gonna be. No idea. And there are gonna be a ton of people beating on LLMs; they're gonna be writing LLMs to beat on LLMs, and writing adversarial machine-learning tools to figure out how to make LLMs go wrong. Some of those people are gonna find an exploit no one's found yet, and there's gonna be a period of time where they can do whatever the heck they want, until somebody figures out what they're doing and how to get around it. This might be a nightmare. I'm terrified of this. I have no idea how it's gonna go. It might not be bad, but then you would have thought that about buffer overflows, right? I've read an interview with the guy who decided that a null character would terminate a string; he had no idea what that was gonna become. I think I read an article calling it the billion-dollar mistake, which is underselling it, if anything. He had no idea of all the different ways it was going to be used to steal money from people, to crash servers, and to take over servers. And we have no idea, no idea, of the possibilities for how people are gonna be able to hijack LLMs to do what they want, things the people who trained the LLM never anticipated. Some examples: here's an article where a piece of tape made a Tesla speed up, and one where it made a Tesla go into the wrong lane. I'm not just picking on Tesla; here's an adversarial machine-learning example from OpenAI themselves. All these articles will be in the description. Here's a thing that makes chatbots spit out their training data. And here's another one. It's just crazy. So, on that happy note, I hope I didn't bum everyone out with that last item. Hey, it's job security, right? I hope not everybody's walking away from this depressed, like with my last video. But that's what I've got for you today.
