hachyderm.io is one of the many independent Mastodon servers you can use to participate in the fediverse.

All the above is also true (though perhaps in different proportions) of humans writing code! But here’s the big difference:

When humans write the code, those humans are •thinking• about the problem the whole time: understanding where those flaws might be hiding, playing out the implications of business assumptions, studying the problem up close.

When AI writes the code, none of that happens. It’s a tradeoff: faster code generation at the cost of reduced understanding.

2/

The effect of AI is to reduce the cost of •generating code• by a factor of X at the cost of increasing the cost of •thinking about the problem• by a factor of Y.

And yes, Y>1. A thing non-developers do not understand about code is that coding a solution is a deep way of understanding a problem — and conversely, using code that’s dropped in your lap greatly increases the amount of problem that must be understood.

3/

Reduce the cost of generating code by a factor of X; increase the cost of understanding by a factor of Y. How much bigger must X be than Y for that to pay off?

Check that OP again: if software engineers spend on average 1 hr/day writing code, and assuming (optimistically!) that they work only 8-hour days, then a napkin sketch of your AI-assisted cost of a workday (against a baseline of 8 hours) is:

1 / X + 7 * Y

That means even if X = ∞ (and it doesn’t, but even if!!), then Y cannot exceed 8/7 ≈ 1.14.
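To make that arithmetic concrete, here’s the same napkin math as a few lines of Python. The 1-hour/7-hour split and the symbols X and Y are the post’s own; the function name is just for illustration, and this is a sketch of the toy model, not a serious predictor:

```python
# Napkin model from the thread: a baseline 8-hour day split as 1 hour
# generating code and 7 hours thinking about the problem (the OP's
# estimate). AI divides generation cost by X and multiplies thinking
# cost by Y.

def ai_assisted_day(X, Y, coding_hours=1.0, thinking_hours=7.0):
    """Hours the day costs once AI speeds coding by X and slows thinking by Y."""
    return coding_hours / X + thinking_hours * Y

baseline = 8.0

# Even with infinitely fast generation (1/X -> 0.0), breaking even
# requires 7*Y < 8, i.e. Y < 8/7, about 1.14:
print(ai_assisted_day(float("inf"), 8 / 7))  # ~8.0, exactly break-even

# Variant from later in the thread: if 3 of those 8 hours were wasted
# anyway, the productive baseline is 5 hours (1 coding + 4 thinking),
# so the bound relaxes only to Y < 5/4 = 1.25:
print(1 / float("inf") + 4 * (5 / 4))  # -> 5.0, break-even against 5 hours
```

Even in the most generous version, the thinking-cost multiplier Y has very little room to grow before AI assistance is a net loss.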

Hey CXO, you want that bet?

4/

This is a silly thumbnail-sized model, and it’s full of all kinds of holes.

Maybe devs straight-up waste 3 hours a day, so the payoff threshold is Y < 1.25 instead! Maybe the effects are complex and nonlinear! Maybe this whole quantification effort is doomed!

Don’t take my math too seriously. I’m not actually setting up a useful predictive model here; I’m making a point.

5/

Though my model is quantitatively silly, it does get at the heart of something all too real:

If you see the OP and think it means software development is on the cusp of being automatable because devs only spend ≤1/8 of their time actually typing code, you’d damn well better understand how they spend the other ≥7/8 of their time — and how your executive decisions, necessarily made from a position of ignorance* if you are an executive, impact that 7/8.

/end (with footnote)

Jeff Miller (orange hatband)

@inthehands I appreciate how your line of argument chimes with Fred Brooks’ “No Silver Bullet”: the essential complexity of the problem and of the solution matching up — except that the comparison here is between the understanding (hopefully) embedded in code written specifically for the problem, versus the unchecked assumptions embedded in generated code.

Novel software libraries have a similar problem: easy to adopt, not necessarily easy to evaluate.