Well, this is for the future me.
Just last week, Amazon announced a major layoff, and most news outlets are claiming it was driven by AI.
This is probably not the case, and let me document why I think so at this moment.
Chips Chips Chips
You might think that Amazon, as a Big Tech company, would have everything fully automated end to end, so that it could now just get rid of the humans and let the robots get to work.
Let me tell you, that's not the case.
Like most companies, Amazon still relies on manual processes that simply can't be replaced easily with AI.
This is of course not a bad thing per se, but it goes to show that at this point the company is not ready to swap AI in for humans and keep the business running.
Does that mean that kind of work cannot be automated? I think it can, but LLM-based agents at this point are just not reliable enough to handle it end to end.
But that might change.
Andy Jassy also said that this layoff is not AI-driven.
But a good argument I heard, one that really made sense to me, is that this move is meant to ensure Amazon has more cash on hand to buy more NVIDIA chips, so it doesn't keep falling behind in the cloud AI race.
This makes a lot of sense, since Wall Street is currently unhappy with AWS revenue growth: it isn't posting the same numbers as Google Cloud and Azure.
Is this the case or not? I'm not sure, but it definitely makes more sense than saying this is purely driven by AI productivity gains.
What about Software Engineering work?
Right now, I still can't really delegate most of my workload to any of the existing AI agents for software engineering.
Contrary to the widespread belief that AI is great at greenfield work and bad at brownfield work, most of my productivity gains at the moment come from understanding the change I want and asking the agent to implement it for me.
It has been quite a while since I had to manually write a Java unit test, especially when I can tell it to just follow whatever patterns are already in the codebase.
This, of course, makes me much happier as a developer, since I never enjoyed the boilerplate-heavy nature of some code; now I can just write the parts I care about and delegate the rest.
So, are we cooked?
Although some CEOs sitting in their ivory towers think you can already replace most humans with LLMs, my personal opinion is that we are far from it.
Yes, I do think AI will change how we do most things, but at this point in time it is still too unreliable to handle a lot of tasks, and it might stay that way for a long time.
The much-desired productivity gains are still pretty hard to quantify, in large part because measuring productivity for software developers was never easy in the first place, contrary to what McKinsey thinks.
Some recent research even suggests that AI actually hinders developer productivity, despite what the developers themselves perceived.
Like most people, I'm not sure how this will pan out in our industry. My hope is that we can convince leaders that software development is still multifaceted work that requires real thinking, not just a stochastic parrot, especially at Big Tech companies with so much internal clutter and bloatware.
Bubble?
Another topic that has been floating around quite a lot these days is the idea that we're in a bubble.
Some people say we are and some say we aren't, and I tend to agree with what Jeff Bezos said:
This is a kind of industrial bubble [...] The good ideas and the bad ideas. And investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. And that's also probably happening today
I think that is really true at the moment; the excitement is sky-high, while actual productivity and financial gains from using AI are still not common, as a recent MIT study showed.
Personally, I still think the good stuff will survive in the end, but the process will definitely claim some victims, just like the dot-com bubble did.
Closing thoughts
Well, for now I still think software developers are not fully cooked.
We do have to adapt and learn how to use LLMs to improve our effectiveness and not be left behind.
Still, AGI might arrive and make sure we are fully cooked.
But at this point, the only thing I can do is watch the future unfold and try to make the best of it.