In most cases, LLMs will not replace humans or reduce labor costs as companies hope. They will •increase• labor costs, in the form of tedious clean-up and rebuilding customer trust.
After a brief sugar high in which LLMs rapidly and easily create messes that look like successes, a whole lot of orgs are going to find themselves climbing out of deep holes of their own digging.
Example from @Joshsharp:
https://aus.social/@Joshsharp/112646263257692603
Those who’ve worked in software will immediately recognize the phenomenon of “messes that look like successes.”
One of my old Paulisms is that the real purpose of a whole lot of software processes is to make large-scale failure look like a string of small successes.
The crisp “even an executive can understand it” version of the OP is:
AI increases labor costs
(“Why?” “Because it’s labor-intensive to clean up its messes.”)
I said “the purpose of a whole lot of software processes is to make large-scale failure look like a string of small successes.”
Huh? What does that look like??
It looks like this:
Meetings held
Plan signed off
Tests passed
Iterations iterated
Velocity increased
Thing implemented
Checkpoints checked
Thing released
Blinkenlights blink
Line goes up
Thing updated
The software never •really• solves the problem it was supposed to solve in the first place, and creates more problems along the way
@inthehands You get what you measure.
But you get •only• what you measure, because now that you’re measuring, nothing else matters.
@inthehands "a costly myth"