I've been building software projects for over 20 years, and like all things, the devil is in the details. You start with simple ideas that you realize as small pieces of software. Those small pieces start to collect into components that interact, and pretty soon, you are spending time managing those components alongside the actual work of fixing bugs and adding new features. As your business grows, you also have to start thinking about scalability and how you can keep ahead of your users so that they always have a good experience.
Unfortunately, most of us don't start with this scaling in mind, and we end up having to do a few refactors along the way. We might even start in one language and transition to another. We might start on Windows and transition to Linux. We might start on simple commodity servers and then move to larger multicore machines that pack a lot more computing power into the same space. Some of us even build our own hardware because our performance requirements are so high that standard commodity hardware just doesn't fit the model we need.
Things are changing, though. We are starting to move to a world where it is someone else's problem to keep us scaling, and we can focus more on our business. This is the real value of the cloud - giving you computing resources that scale while abstracting the complex details away so that you don't have to spend time on them. Part of what is making this possible is a revolution in software design. You can call it microservices if you want, but it is really more than that. It is a lot like the service-oriented architecture we all talked about 10 years ago, but instead of lumping all of our services together onto one large service bus, we treat each service as its own entity.
Running things in containers really demonstrates the value of this. The functionality that you build in each container is intentionally small in scope because you know that you are going to be using a lot of them to scale up to whatever number of users you will have. By keeping the scope of functionality for each component small, you reduce the overall complexity, and complexity is the thing that will bring down any good system. I have another post about why big systems fail, and it's really kind of crazy how many projects have failed simply because the complexity eventually became too much to deal with.
When you think about how operating systems work, you realize that we've been doing this kind of thing for a really long time. Everything you do in whatever operating system you use - mousing around, typing text, etc. - is driven by small pieces of software. No one wrote one large program to handle all of those things. That would have been a nightmare to develop, test, and upgrade (imagine if Windows were all executed out of one file - every time there was a security patch, you would have to completely stop the OS, patch that file, and then start again). Each little piece of functionality was developed separately. Most of the time, these pieces of functionality live in libraries that are loaded by other pieces of software, i.e., the applications you run. It is only recently that we've started applying the principles of building small to our applications as well, but it is hardly anything new.
There is incredible power in reusability, and by breaking functionality down into its fundamental pieces, you will find there is a lot of redundancy. There is also a lot of stuff that you really don't need to write yourself. Resources like Docker Hub have enabled developers to jumpstart projects incredibly fast because they can start with baselines that already have many of the components they need in place and configured, and developers just talk to simple APIs. You will still have the work of building business logic, but you hand off the work of storing data or queuing messages to other components that just work right out of the box. What if a component is missing some functionality you need? You just extend it and add the missing functionality, and Docker makes this trivial.
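As a sketch of how trivial that extension can be: a Dockerfile that starts from a stock image and layers on only what's missing is often just a few lines. The base image, file names, and config here are illustrative assumptions, not from any particular project:

```dockerfile
# Start from a stock Redis image pulled from Docker Hub (illustrative choice).
FROM redis:7

# Add only the functionality the base image is missing - here, a
# hypothetical custom config file that the base image doesn't ship with.
COPY redis.conf /usr/local/etc/redis/redis.conf

# Run the server with our configuration instead of the defaults.
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
```

Build it with `docker build -t my-redis .` and run it with `docker run -d my-redis`; everything else about the component still works out of the box.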
You still have that maintenance issue, though. You are going to need people around who watch your applications to make sure that things are running smoothly. Fortunately, there are frameworks evolving now that make this easier as well. Kubernetes has garnered a lot of attention of late, and for good reason. Kubernetes and similar frameworks make it so that computing resources are just that - resources that you can apply to tasks. Instead of building your infrastructure specifically for the types of things you want to run, you make it more generic and let Kubernetes figure out how best to run the jobs that you have. Losing servers becomes less of a hassle because Kubernetes can simply direct the work to other available resources. You can still have fine-grained control if you need it, but with components becoming simpler, chances are that you won't need that control as much.
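A minimal sketch of what "describe the work, let Kubernetes place it" looks like, assuming a hypothetical container image named `my-service:1.0`: you declare the desired state in a Deployment, and the scheduler decides which nodes actually run the replicas, rescheduling them if a node dies.

```yaml
# Hypothetical Deployment: "keep three copies of this container running
# somewhere." Kubernetes picks the nodes; if one fails, the pods are
# rescheduled onto other available resources automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: my-service:1.0   # illustrative image name
        resources:
          requests:             # generic resource needs, not a specific machine
            cpu: "250m"
            memory: "128Mi"
```

Note that nothing here names a server; the infrastructure stays generic, and the placement decision belongs to the framework.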
Now the only thing I need to worry about is whether AI eventually becomes advanced enough that no one needs humans to write code anymore. Thankfully, I don't think that's going to happen any time soon.