Improved Means for Achieving Deteriorated Ends
On LLMs
2025-02-05
I caught myself responding to a former coworker's post about LLMs. I took the bait like a fool. But my response summarized my views well enough to be worth preserving. So here we are.
Excerpted from what they wrote (which is performative LinkedIn nonsense IMO):
There seems to be an anti-AI movement happening among software engineers and even engineering leaders...
If the only thing we can do is create more code, then we're going to be quickly replaced. However, if we take the time to use the tools and really integrate them into our day-to-day workflow, productivity can only increase.
My response:
I don't see any backing you've provided for this claim. I agree, however, that the point of an engineer is not just to produce code. My objections to LLMs aren't about the quality of their output but about their inability to understand or possess intent.
Intention is the single most important thing we can bring to our work, and I think AI is primarily disruptive to it. It is delegation in a way that shirks the responsibility of intent.
2025-03-17
Well, I got caught again. This time in an ex-Calendly Slack. A coworker I have a healthy amount of respect for started a thread asking whether, once the tools (i.e. LLMs) are good enough, there's any reason to do work that the tool could be guided to do for you.
Excerpted from what they wrote:
I think the big limit at the moment is the fact that for most products the scope of the full application (all the source code, configuration, and product requirements) is bigger than the maximum buffer of existing LLM models, so you need a human that has the full context of the product guiding the agent and deciding where to apply it. But when we get to a place where the agent can handle all of that context, the game becomes very different because at that point the agent can pretty much do it all. At that point, I'm not certain a human ever needs to know how to code again...
But I do think that before we get there, we will rapidly veer in the direction of the primary value of a human being how well they can guide and apply an AI agent, not how well they can do a job that the agent can do instead...
My response:
I find this to be such a disappointing take. I feel like I'm seeing analogues of it in many places. I think I understand what inspires it. The capabilities of current LLMs are impressive in many ways and still evolving. Avoiding busywork to focus on more important things using new tools is good. Alfred North Whitehead said, "Civilization advances by extending the number of important operations which we can perform without thinking about them." Hmm.
I don't know if "prompt engineering" will usurp traditional programming in 1 year or 10 years. Maybe it already has and I missed it. Yet when I think about the problems that afflict product development at companies I have worked for, the speed of shipping code was not the determining factor in success or failure. Velocity was not the thing that interested me most about my engineers. If I could wave a magic wand and fix 5 things about the org, speed of "requirements -> PR or completed epic" would not be in the top 5. It's just a thing we can measure.
At this point in my career, I would much rather the people subjected to the technology be the ones advocating for it. If my engineers feel the best way to do their work is with LLMs, I don't want to stand in their way. From what I have observed, though, it feels like LLMs are being pushed on engineers (and other roles) with promises to management of increased output, far more than I see engineers asking for access to make their own lives easier.
I find it hard not to be angry about the discourse around it, which is my own failing, but I hope that instead of focusing on how LLMs enable us to be faster, I can encourage conversations about what values they free us up to invest in more fervently.
I want to hear people tell me how they use that extra bandwidth aside from shipping a few extra stories a sprint. I want to hear about better conversations between product, design, and engineering. I want to hear about increased operational focus and making it easier for other teams to find the docs they need. I want to hear about better customer discovery calls.
If all we're saying is that we should let autocomplete do the "boring" parts of the job, I might not agree, but I can respect that. But that isn't what I'm hearing. One thing is for sure: Building software ain't what it used to be.