@ct_bergstrom’s observation about police writing reports with LLMs [shudder] generalizes well:
“Yes, [human processes] aren't always the most accurate, but introducing an additional layer of non-accountability is bad.”
Communication has consequences. Who is responsible for those consequences?
My wise English prof mother always says: “Good writing is good thinking.” When we automate the writing, who is doing the thinking?
https://fediscience.org/@ct_bergstrom/113028760435643985
Conversely, if some writing process •is• so full of pro forma boilerplate that it can be automated by LLMs — and I have serious doubts about police reports fitting this criterion, but if so — what is wrong with that process? Why are we making people jump through purposeless hoops, add filler material, check non-information-bearing boxes? In communication, automatability correlates with bullshit.
Having a fluff-filled communication process and then using an LLM to make it efficient is like putting the flush handle three blocks away from the toilet and then buying a Humvee to go flush it.