This essay from @jenniferplusplus is very good, and very important.

It’s good enough and important enough that I’m just going to QFT the heck out of it here on Mastodon until I annoy you into reading the whole thing.

jenniferplusplus.com/losing-th

This essay isn’t the last word on AI in software — but what it says is the ground level for having any sort of coherent discussion about the topic that isn’t all hype and panic.

1/

Jennifer++ · Losing the imitation game: AI cannot develop software for you, but that's not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.

“Artificial Intelligence is an unhelpful term. It serves as a vehicle for people's invalid assumptions. It hand-waves an enormous amount of complexity regarding what ‘intelligence’ even is or means.”

“Our understanding of intelligence is a moving target. We only have one meaningful fixed point to work from. We assert that humans are intelligent. Whether anything else is, is not certain. What intelligence itself is, is not certain.”

2/

“While the capabilities are fantasy, the dangers are real. These tools have denied people jobs, housing, and welfare. All erroneously. They have denied people bail and parole, in such a racist way it would be comical if it wasn't real.

👇👇👇
“And the actual function of AI in all of these situations is to obscure liability for the harm these decisions cause.”

3/

“What [LLM] parameters don't represent is anything like knowledge or understanding. That's just not what LLMs do. The model doesn't know what those tokens mean. I want to say it only knows how they're used, but even that is overstating the case, because it doesn't •know• things. It •models• how those tokens are used.

“…The model doesn't know, or understand, or comprehend anything about that data any more than a spreadsheet containing the same information would understand it.”
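(A toy sketch of my own, not from the essay: even a few lines of Python that only count which token follows which can emit plausible-looking text. Scale that idea up enormously and you get a far better model of token usage, but nowhere in it does “knowing” appear.)

```python
# Toy bigram model: it records which token follows which, and
# nothing else. There is no meaning anywhere in it -- only
# observed co-occurrence, i.e. "how the tokens are used."
import random
from collections import defaultdict

corpus = "the model does not know what the tokens mean".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

token = "the"
output = [token]
for _ in range(6):
    if token not in follows:
        break
    token = random.choice(follows[token])
    output.append(token)

print(" ".join(output))  # plausible word order, zero understanding
```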

4/

Here it is: the One Weird Thing that people who aren’t programmers (or are bad programmers) just don’t understand about writing software. This is it. If you miss this, you’ll miss what LLMs can and can’t do for software development. You’ll be prey to the hype, a mark for the con.

6/

“They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably. Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it.”

7/

“No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do.

“Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn't even a decision that was made. It was a probability that was realized.”

8/

You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

Alas, that does not remotely resemble how people are pitching this technology.

9/

I love, for example, this student’s reaction to having ChatGPT try to write some of her paper:
hachyderm.io/@inthehands/10949

Indignant outrage is a powerful thought-sharpening tool!

Alas, AI vendors are not pitching LLMs as indignant outrage generators.

10/

I’ve heard from several students that LLMs have been really useful to them in that “where the !^%8 do I even start?!” phase of learning a new language, framework, or tool. Documentation frequently fails to share common idioms; discovering the right idiom in the current context is often difficult. And “What’s a pattern that fits here, never mind the correctness of the details?” is a great question for an LLM.
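For instance (my illustration, and the file name is made up): ask for the idiomatic way to read a file in Python, and the pattern an LLM will reliably surface is a with statement, because that idiom saturates its training data. It’s the right starting point even though you still have to check the details:

```python
from pathlib import Path

# Demo setup -- "config.json" is a hypothetical file for this sketch.
Path("config.json").write_text('{"debug": true}', encoding="utf-8")

# The idiom itself: a `with` block guarantees the file is closed
# even if reading raises an exception.
with open("config.json", encoding="utf-8") as f:
    data = f.read()

print(data)
```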

Alas, the AI hype is around LLMs •replacing• thought, not •prompting• it.

11/

The hard part of programming is •thinking about what you’re doing•, because the computer that runs your code isn’t going to do that.

And as Jennifer points out in the essay, we do that by thinking about code. Not just about our abstract mental models, not just about our natural language descriptions of the code, but about the code itself. Where human understanding meets machine interpretation, •that’s• where the real work is, •that’s• what makes software hard:

12/

Code is cost. It costs merely by •existing• in any context where it might run. Code is a burden we bear because (we hope) the cost is worth it.

What happens if we write code with a tool that (1) decreases the cost per line of •generating• code while (2) vastly increasing the cost per line of •maintaining• that code? How do we use such a tool wisely? Can we?
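A back-of-the-envelope sketch, with numbers I invented purely to show the shape of the problem: if maintenance already dominates a line’s lifetime cost, then cheaper generation plus costlier maintenance makes every line a worse deal, even while each line feels cheaper to produce:

```python
# All numbers invented for illustration -- the shape is the point.
write_cost = 1.0      # cost to produce one line of code
maintain_cost = 4.0   # lifetime cost to understand, review, change it

baseline = write_cost + maintain_cost      # 5.0 per line

# LLM scenario: generation gets 10x cheaper, but nobody understands
# the line, so suppose maintenance doubles.
llm = write_cost / 10 + maintain_cost * 2  # 8.1 per line

print(baseline, llm)  # the "cheap" line costs more over its lifetime
```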

Useful conversation about that starts on this ground floor:

jenniferplusplus.com/losing-th

/end

@inthehands One of the obscure methods I have been using recently when I'm writing (not coding, actual 'write a story' type thing) is when I blank page for a while, I kick one of the LLM bots and send it a semi-related prompt.

Then I get so irked by how it completely missed the point, that it gets my competitive backbrain going and saying 'damnit, I know how to say it better than THAT'. And once I start, that solves the 'getting started' portion.

Does fuckall for plot, characters, etc., but...

@inthehands I found the blog post a mixed bag. The specific points about software development are very good.

But the general critique of LLMs is lacking. Two examples:

1. She claims AI/LLMs are not intelligent, but does not specify what that means. Maybe AI then is intelligent by *some* measure?
2. She claims the model parameters don't represent knowledge or understanding. Same problem here: what *is* knowledge?

She falls into the same fallacy as the AI hype train: vague terms.

@inthehands The big value I see in this blog post is the concreteness of the risk descriptions and the factual references to actual AI project failures.

@elhult @inthehands

If these are all the objections you can raise, consider the blog post a success – and be mindful to not be part of the problem.

@inthehands

If you are interested, this is a meditation on this fact that I wrote a long time ago now.

mutable-states.com/who-are-the

The hard part of programming is *also* thinking about what future you is going to be doing.

🙂

mutable-states.com · Mutable States - Who are the Programmers

@psu_13 @inthehands Yeah, or to use an example of a joke about code quality:

"The hard part of programming is ensuring that the next person to read the code who happens to be an axe murderer who knows where you live doesn't have cause to use that knowledge.".

@inthehands This last bit is the only place I've seen value in LLMs. When you're introduced to a new thing, with little to no context, you can often get more information associated with the thing by using an LLM, which will then help you figure out where to go next (or at least give you more to search on).

I almost want to say it's like a context generator, but that's too generous - word association seems more appropriate. Still surprisingly useful in certain situations, like (as you noted) when you have no idea where to even start.