Well, this is for future me.
Just last week, Amazon had a major layoff, and most news outlets are claiming it was driven by AI.
I don't think that's the case, so let me document why I think so at this point.
Chips Chips Chips
You might think that Amazon, as a Big Tech company, would have everything fully automated end to end, and that we could now just get rid of the humans and let the robots get to work.
Let me tell you, it's not.
A LOT of internal work is still done manually in ways you'd think are just not allowed at such a big company.
This, of course, is not bad per se, but it goes to show that at this point the company is not ready to just drop AI in place of humans and keep the business running.
Does that mean that kind of work cannot be automated? I think it can, but LLM-based agents at this point are just not reliable enough to fully do it.
But that might change.
Andy Jassy, if he can be trusted, also says that this is not AI-driven.
But a good argument I heard, one that really made sense to me, is that this is actually a move to ensure Amazon has more cash on hand to buy more and more NVIDIA chips, so it doesn't keep falling behind in the cloud AI race.
This makes a lot of sense, since Wall Street is currently unhappy with AWS revenue growth, which is not keeping pace with Google Cloud and Azure.
Is this the case or not? I'm not sure, but it definitely makes more sense than saying this is purely driven by AI productivity gains.
What about Software Engineering work?
Right now, I still can't really delegate most of my work to any of the existing AI agents for software engineering.
Contrary to the widespread understanding that AI is great at greenfield and bad at brownfield, most of my productivity gains at the moment come from understanding the change I want in an existing codebase and asking the agent to code-monkey it for me.
It has been quite a while since I had to write a Java unit test by hand, especially when I can tell the agent to just follow whatever patterns are already in the codebase.
This, of course, makes me much happier as a developer: I never enjoyed the boilerplate-heavy nature of some code, and now I can write the parts I care about and delegate the rest.
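To make that concrete, here is a minimal sketch of the kind of boilerplate test I mean; PriceCalculator and its applyDiscount method are hypothetical stand-ins, not anything from a real codebase:

```java
// Hypothetical example: a boilerplate JUnit 5 test of the sort an agent
// can generate by mimicking the patterns already in the codebase.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Imaginary class under test, included so the example is self-contained.
class PriceCalculator {
    double applyDiscount(double base, double discount) {
        if (discount < 0 || discount > 1) {
            throw new IllegalArgumentException("discount must be in [0, 1]");
        }
        return base * (1 - discount);
    }
}

class PriceCalculatorTest {
    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    void appliesDiscountToBasePrice() {
        // 10% off a base price of 100.0 should yield 90.0
        assertEquals(90.0, calculator.applyDiscount(100.0, 0.10), 1e-9);
    }

    @Test
    void rejectsOutOfRangeDiscount() {
        assertThrows(IllegalArgumentException.class,
                () -> calculator.applyDiscount(100.0, -0.10));
    }
}
```

None of this is hard to write, which is exactly the point: it's the repetitive part of the job I'm glad to hand off.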
So, are we cooked?
Although CEOs sitting in their ivory towers think you can already replace most humans with LLMs, my personal opinion is that we are far from it.
Yes, AI will change much of how we do everything, but it is not clear to me that it will completely replace people, and I hope it stays that way.
Internal leadership is increasingly obsessed with productivity gains, and we now have to report how we are using AI to make sure we need less "head count".
The thing is, at this point it is still incredibly difficult to quantify the gains we are seeing, because measuring the productivity of software developers was never easy in the first place, contrary to what McKinsey thinks.
So I'm still not sure how this will pan out internally. My hope is that we can convince leadership that software development is still multifaceted work that needs a thinking human behind it, not a stochastic parrot, especially at places like Big Tech, which have so much internal clutter and bloatware.
Bubble?
Another topic that has been floating around quite a lot these days is the idea that we're in a bubble.
Some people say we are and some say we aren't, and I tend to agree with what Jeff Bezos said:
"This is a kind of industrial bubble [...] The good ideas and the bad ideas. And investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. And that's also probably happening today."
I think that is really true at this moment: the excitement is sky high, while actual productivity and financial gains from using AI are still uncommon, as a recent MIT study shed light on.
Personally, I still think the good stuff will survive in the end, but the process will definitely claim some victims, just like the dot-com bubble did.
Closing thoughts
Well, for now I still think software developers are not fully cooked.
We do have to adapt and learn how to use LLMs to improve our effectiveness and not be left behind.
Still, AGI might come along and make sure we are fully cooked.
But at this point, the only thing I can do is watch the future unfold and try to make the best of it.