TL;DR: induced demand isn’t just for highways
(but if you DR, well, it’s a good read)
(short follows | UPDATE: not short after all)
From @davidzipper:
https://mastodon.social/@davidzipper/113068475472028605
My own position on self-driving cars has gradually shifted over the last decade.
I used to think that they were promising not because they’re good, but because human drivers are so incredibly bad. A consistent machine with consistent attention, while still fallible and still dangerous, at least isn’t drunk or texting while driving.
Two things have shifted my feelings on the matter:
The first is what the article in the OP lays out: induced demand. If you make something easier, people will generally do more of it.
It’s dangerous to have people not paying the true cost of their own decisions. That’s already the case with driving, to an extreme (carbon tax now!!), but at least driving is really, really annoying. Annoyance is a poor proxy for driving’s true cost, but it’s •something•. Removing that backstop is dangerous.
The second is the difference between what I expected from self-driving vs how it’s actually evolving.
I imagined that self-driving cars would involve a narrowing of the parameter space: more consistent driving behavior, maybe some fixed and standardized cues/signals on roads, the expectation of human takeover for novel situations like construction zones, maybe even a shift toward a sort of herd efficiency in traffic.
Consistency. Automation that works better because it tries to •do less•. But…
…instead, AI is off on this ridiculous investor-driven “rEpLAcE hOoMAnS GAI BABEEEE” hype bender, in which self-driving cars have to be like human drivers except more magicaler.
What that means in practice is that consistency — the •one thing• that machines really have on humans in this space! — goes out the window. Self-driving cars are full of weird failure modes and bizarre quirks. They’re drunk and texting all the time, except they behave in ways that are even less predictable than humans.
One of the good-and-bad things that happens when we move human activity into software is a •narrowing of the problem space•.
Humans are full of ad hoc decisions. We fudge. We finagle. We mess up, but we also fix up. Humans are the adaptable part of complex systems. Humans are both producers of and defenders against failure. (https://how.complexsystems.fail/)
When we move a task into software, one of the central questions is “What happens to that human flexibility?”
Usually, at least if we’re doing a good job, the answer is “we split it”:
One part of the problem becomes simpler, less flexible, more consistent. We make up rules: “every item has exactly one price,” or “every item has one price per discount-item combination,” or “every item has N SKUs, each of which has one price per….” The rules evolve, they adapt, they grow — but they remain consistent until we update them.
The beauty and the peril of software is consistency: •it follows those rules we invent•.
Beauty? Because consistency can really pay off.
Peril? Because sometimes we need exceptions.
I said we “split” the problem. Software takes one part of the job, a version of the problem that is simplified so that machine consistency is •possible•. The other part of the job: human intervention. We build software to loop in humans to say, “eh, damaged item, I’m giving you a discount” or whatever. •If• we’re doing it right.
Consistency with a dash of human intervention.
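To make that concrete, here’s a toy sketch of the split. Every name in it (Item, price_for, HumanReviewNeeded) is made up for illustration: software applies the consistent rule, and anything the rules don’t cover gets kicked to a person instead of guessed at.

```python
from dataclasses import dataclass

@dataclass
class Item:
    sku: str
    base_price: float  # the rule: every SKU has exactly one price
    damaged: bool = False

class HumanReviewNeeded(Exception):
    """The rules don't cover this case; loop in a human."""

def price_for(item: Item) -> float:
    if item.damaged:
        # Don't guess what a damaged item is worth;
        # that judgment call stays with a person.
        raise HumanReviewNeeded(f"{item.sku}: needs a discount decision")
    return item.base_price  # consistent: same SKU, same price, every time
```

The consistent path is boring and reliable; the weird cases raise instead of improvising.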
One classic way this goes wrong is when we forget the “human intervention” part.
You end up with these Kafkaesque nightmares where somebody is stuck in an infinite product return loop or their insurance claim is denied or the state thinks they’re dead or they get a zillion parking tickets because their custom license plate spells “NULL” (https://arstechnica.com/cars/2019/08/wiseguy-changes-license-plate-to-null-gets-12k-in-parking-tickets/)…and a human is stuck in process hell because •the software just does that• and software is hard to change.
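The NULL plate story is exactly this trap. Here’s a hypothetical sketch of the shape of that bug (the real system’s code isn’t public, so this is a guess at the mechanism):

```python
# If unreadable plates are recorded as the literal string "NULL",
# the one driver whose vanity plate actually IS "NULL" matches all of them.
tickets = [
    {"plate": "NULL", "fine": 35},    # camera couldn't read the plate
    {"plate": "NULL", "fine": 60},    # same
    {"plate": "ABC123", "fine": 25},
]

def tickets_for(plate: str) -> list[dict]:
    return [t for t in tickets if t["plate"] == plate]

print(tickets_for("NULL"))  # one guy gets every unreadable-plate ticket
```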
I thought •that• was where self-driving cars were going to land: narrowed problem space, sometimes they fail, but at least they’re really consistent. Not great, but again, arguably an improvement over human drivers.
But nooooo. Now, thanks to the Glorious Dawn of AI Megahype, we have companies falling over themselves to replace all those annoying expensive humans…with •randomness•.
This is just bonkers to me.
I mean, software is…kind of terrible. It’s expensive to build and maintain. It constantly throws our bad assumptions back in our faces. It removes the human flexibility that keeps systems afloat, unless we work hard to prevent that.
But at least it’s consistent.
Whatever it does, it •keeps doing that thing• with a high degree of reliability. It doesn’t forget to write things down, or lose that scrap of paper, or show up to work high. When it fails, 99.9% of the time it’s because humans told it to.
@inthehands one of the big assumptions behind AI hype -- the unspoken presupposition -- is that the 99.9% reliability of traditional software will be complemented by the apparent capacities of generative systems and all the exponential possibilities entailed therein
in practice, because the generative systems are making stuff up, they're going to pollute traditional software into uselessness with absolute garbage inputs.
they're fundamentally two different things, and they cannot interface
@thedansimonson @inthehands But the problem there is conflating "generative AI" with all of machine learning, no? It is quite possible to build reliable (safety critical) software systems that solve hard problems using machine learning AND do not "hallucinate" anything. But there is no known way to do it cheaply.
@fgcallari @thedansimonson
There is indeed tremendous unexplored potential in that space. Classifier systems (ML or not) can outperform humans for some problems, and can give an expedited first step for others. When the model turns to human augmentation instead of human replacement, things get a lot more sensible. Maybe we’ll get there on the other side of this hype cycle.
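One common shape for that augmentation is a confidence gate: the model keeps the cases it’s sure about and routes the rest to a person. A minimal sketch, with classify() as a made-up stand-in for real model inference:

```python
def classify(text: str) -> tuple[str, float]:
    # Stand-in for a trained model; a real system would run inference here.
    if "win a free prize" in text.lower():
        return ("spam", 0.98)
    return ("ok", 0.62)

def triage(text: str, threshold: float = 0.95) -> str:
    label, confidence = classify(text)
    if confidence >= threshold:
        return label                 # machine keeps the easy, high-volume cases
    return "needs_human_review"      # the judgment calls stay with humans

print(triage("win a free prize today"))  # -> "spam"
print(triage("hey, lunch tomorrow?"))    # -> "needs_human_review"
```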
@inthehands @thedansimonson both human augmentation and replacement. There are plenty of economically important problems whose current human-driven solutions require rare/costly skills, and some are simply not solvable, even if plentiful skilled humans were magically available, if we require that humans always be in control. Example: air traffic.