TL;DR: induced demand isn’t just for highways
(but if you DR, well, it’s a good read)
(short follows | UPDATE: not short after all)
From @davidzipper:
https://mastodon.social/@davidzipper/113068475472028605
My own position on self-driving cars has gradually shifted over the last decade.
I used to think that they were promising not because they’re good, but because human drivers are so incredibly bad. A consistent machine with consistent attention, while still fallible and still dangerous, at least isn’t drunk or texting while driving.
Two things have shifted my feelings on the matter:
The first is what the article in the OP lays out: induced demand. If you make something easier, people will generally do more of it.
It’s dangerous to have people not paying the true cost of their own decisions. That’s already the case with driving, to an extreme (carbon tax now!!), but at least driving is really, really annoying. Annoyance is a poor proxy for driving’s true cost, but it’s •something•. Removing that backstop is dangerous.
The second is the difference between what I expected from self-driving vs how it’s actually evolving.
I imagined that self-driving cars would involve a narrowing of the parameter space: more consistent driving behavior, maybe some fixed and standardized cues/signals on roads, the expectation of human takeover for novel situations like construction zones, maybe even a shift toward a sort of herd efficiency in traffic.
Consistency. Automation that works better because it tries to •do less•. But…
…instead, AI is off on this ridiculous investor-driven “rEpLAcE hOoMAnS GAI BABEEEE” hype bender, in which self-driving cars have to be like human drivers except more magicaler.
What that means in practice is that consistency — the •one thing• that machines really have on humans in this space! — goes out the window. Self-driving cars are full of weird failure modes and bizarre quirks. They’re drunk and texting all the time, except they behave in ways that are even less predictable than humans.
One of the good-and-bad things that happens when we move human activity into software is a •narrowing of the problem space•.
Humans are full of ad hoc decisions. We fudge. We finagle. We mess up, but we also fix up. Humans are the adaptable part of complex systems. Humans are both producers of and defenders against failure. (https://how.complexsystems.fail/)
When you move a task into software, one of the central questions is “What happens to that human flexibility?”
Usually, at least if we’re doing a good job, the answer is “we split it:”
One part of the problem becomes simpler, less flexible, more consistent. We make up rules: “every item has exactly one price,” or “every item has one price per discount-item combination,” or “every item has N SKUs, each of which has one price per….” The rules evolve, they adapt, they grow — but they remain consistent until we update them.
The beauty and the peril of software is consistency: •it follows those rules we invent•.
Beauty? Because consistency can really pay off.
Peril? Because sometimes we need exceptions.
I said we “split” the problem. Software takes one part of the job, a version of the problem that is simplified so that machine consistency is •possible•. The other part of the job: human intervention. We build software to loop in humans to say, “eh, damaged item, I’m giving you a discount” or whatever. •If• we’re doing it right.
Consistency with a dash of human intervention.
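To make that concrete, here’s a minimal sketch of the “split” (hypothetical names and a made-up pricing rule, not anyone’s real system): the software enforces the consistent rule it knows, and anything outside that rule gets escalated to a person instead of being guessed at.

```python
# Hypothetical sketch of the "split": consistent rules plus an explicit
# escape hatch to a human, rather than the software quietly guessing.
from dataclasses import dataclass

class NeedsHumanReview(Exception):
    """Raised when a case falls outside the rules the software knows."""

@dataclass
class Item:
    sku: str
    price: float          # the rule: exactly one price per item
    damaged: bool = False

def checkout_price(item: Item) -> float:
    if item.damaged:
        # The flexible part stays with people: "eh, damaged item, I'm giving you a discount."
        raise NeedsHumanReview(f"Item {item.sku} is damaged; a person decides the discount.")
    return item.price
```

Drop that escape hatch (or route it to a queue nobody reads) and you get exactly the failure mode below.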
One classic way this goes wrong is when we forget the “human intervention” part.
You end up with these Kafkaesque nightmares where somebody is stuck in an infinite product return loop or their insurance claim is denied or the state thinks they’re dead or they get a zillion parking tickets because their custom license plate spells “NULL” (https://arstechnica.com/cars/2019/08/wiseguy-changes-license-plate-to-null-gets-12k-in-parking-tickets/)…and a human is stuck in process hell because •the software just does that• and software is hard to change.
I thought •that• was where self-driving cars were going to land: narrowed problem space, sometimes they fail, but at least they’re really consistent. Not great, but again, arguably an improvement over human drivers.
But nooooo. Now, thanks to the Glorious Dawn of AI Megahype, we have companies falling over themselves to replace all those annoying expensive humans…with •randomness•.
This is just bonkers to me.
I mean, software is…kind of terrible. It’s expensive to build and maintain. It constantly throws our bad assumptions back in our faces. It removes the human flexibility that keeps systems afloat, unless we work hard to prevent that.
But at least it’s consistent.
Whatever it does, it •keeps doing that thing• with a high degree of reliability. It doesn’t forget to write things down, or lose that scrap of paper, or show up to work high. When it fails, 99.9% of the time it’s because humans told it to.
That consistency is the whole appeal of computers. Without that, why would any organization ever want to delegate anything to software?!
And now we have executives falling over themselves to replace it with a “random human-imitating chaos machine”?
Really?
Really?!?
I just…Do you even…What do you think…
[the remainder of this thread is incoherent muttering]
Replies from @stfp and @donw highlight the issue of liability and accountability, which is spot on; it’s a central question here:
https://h4.io/@stfp/113068916131522497
https://mastodon.coffee/@donw/113068874134659387
Re this from @thedansimonson, the phrase “information pollution” has been rattling around in my head a lot lately:
https://lingo.lol/@thedansimonson/113068984297050648
AI-generated nonsense. Google results filling with content-farmed garbage (written by humans and by AI). Steve Bannon’s “flooding the zone with shit.” GIGO.
→ all “information pollution”
Re this from @thatandromeda, I also think that there’s •still• immense promise in automated driver assistance for accident prevention. For example, I’ve driven a couple of cars with radar cruise control that prevents rear-ending people at speed, and found it more helpful than not.
But that sort of thing doesn’t seem to be where the money is flowing.
Good point from @mkj here:
https://social.mkj.earth/@mkj/113069128757909759
99 Percent Invisible did a good episode about this:
https://99percentinvisible.org/episode/children-of-the-magenta-automation-paradox-pt-1/
(As always, web text is a summary; full story is in the audio)
@inthehands I think you’re right on all of this. I do wonder if perhaps it would still represent an improvement. After all, one of the biggest mistakes people make (IMNSHO) is that they compare a flawed outcome to a utopian possibility, not to the way things will actually happen otherwise.
I’m less worried right now about the flawed self-driving solutions than I am about the flawed legal structures around them. This situation in Cali where nobody can currently be fined for their misbehavior is untenable.
@donw
Making •drivers• liable for accidents they cause, regardless of whether they chose to delegate their driving to a machine, would whip this whole thing into shape real damn fast.
@inthehands Making drivers liable for normal driving accidents would be a big improvement :/ but I do wonder about the Waymo situation. I don’t know if I think it’s just to hold passengers of robotaxis liable. So who has criminal liability? There may be civil liability, but at some point a fine is just a price. And the amount courts have often assigned as the value of a human life is a rounding error for some of these VC-funded ops.
@donw @inthehands I don’t think it’s just to hold the passengers of these taxis accountable. They have been sold a bill of goods about how the taxis are supposed to operate, how reliable they are, etc. The way the cars actually work is opaque to them.
Also, if a human taxi driver screws up and causes an accident, the passenger isn’t responsible for that unless they were distracting or interfering with the driver, right?
@inthehands I wish corporations and bureaucracies weren‘t as obsessed with trashing these human interventions and making everything rigorously compliant.
@inthehands @thatandromeda Dad hired a car last week. He *hated* it because it fought back if he tried to change lanes without signalling. I quite liked that particular feature for forcing him out of bad habits.
Sadly the dashboard readout that frequently told him he needed to change up two gears didn't help with his lazy gear changing. Never buy an ex-rental!
@JetlagJen @inthehands Yes, I am with you, playing a tiny violin for your dad ;)
@inthehands The whole LLM situation makes me flash back on the daily; I remember clearly what it looked and felt like sitting in that software class in the early 90s, talking about expert systems vs neural networks.
The neural networks part shared the story of the tank-spotting system that they trained to perfection until it consistently found the tanks. Then they “tried it in prod” and it turned out the training-set tank photos were all taken on a cloudy day.
That data set was at least consistent.
@donw
That story is such a classic example!
@inthehands well, as a mid-50s programmer I myself am a classic :)
@inthehands What I thought some time ago is that automakers would ultimately come to the conclusion that driving is too hard. That to automate it they’d basically need an AGI, and it would have to be real-time. And that they would give up on that idea. Instead they would form an alliance, come up with some sort of standard to mark up roads for cars (e.g. lane beacons, traffic rules (i.e. lights/signs) broadcast over radio) and car mesh nets for coordination, and leave visual autopilots for autobahns and interstate roads that have nothing interesting happening on them. Implementation of the standard could be split between them (sponsoring it partially to promote sales) and local governments (from road taxes or whatever).
In this scenario a manually driven car can signal to others "watch out and give me space". It doesn't need any complex computing and probably doesn't even need any new hardware on new cars; it could be a simple beacon that can be retrofitted onto "vintage autos".
From the trajectory carmakers are on at the moment, it doesn't seem like this has even crossed their minds. They keep trying to achieve a major shift in operation without any change to infrastructure.
I mean, a real-time near-AGI is surely cool, but is it easier/cheaper/faster than infrastructure adaptation?
@inthehands unaccountability.
The magical "it's not me, it's the software, and we don't even put the algo in it ourself, can't be our fault".
Even if it's obviously deeply broken, it can superficially seem to be true and serve as a get-out-of-jail-free card.
@inthehands The religious adherents pushing generative AI as a solution sincerely believe that the human element is worthless and we're better off without it.
That's the whole bit.
@inthehands one of the big assumptions behind AI hype -- the unspoken presupposition -- is that the 99.9% reliability of traditional software will be complemented by the apparent capacities of generative systems and all the exponential possibilities entailed therein
in practice, because the generative systems are making stuff up, they're going to pollute traditional software into uselessness with absolute garbage inputs.
they're fundamentally two different things, and they cannot interface
@thedansimonson @inthehands But the problem there is conflating "generative AI" with all of machine learning, no? It is quite possible to build reliable (safety critical) software systems that solve hard problems using machine learning AND do not "hallucinate" anything. But there is no known way to do it cheaply.
@fgcallari @thedansimonson
There is indeed tremendous unexplored potential in that space. Classifier systems (ML or not) can outperform humans for some problems, and can give an expedited first step for others. When the model turns to human augmentation instead of human replacement, things get a lot more sensible. Maybe we’ll get there on the other side of this hype cycle.
@inthehands @thedansimonson both human augmentation and replacement. There are plenty of economically important problems with currently human-driven solutions that require rare/costly skills, and some are just not solvable even if plentiful skilled humans were magically available, if we require that humans be always in control. Example: air traffic.
@fgcallari @inthehands yes. from a cost perspective, a lot of applications of older techniques are simply ignored. the problem space wasn't exhausted, but few were willing to invest in fully exploring it from a commercial perspective.
@thedansimonson @inthehands sadly safety is not a "feature" amenable to scaling in a short VC-funded development cycle. Fundamentally, safety must be baked into the entire development culture, in an org willing to experiment (and lose money) until your safety-critical widgets are really ready. Where "readiness" is decided by a customer with the financial clout to shut you down if you get it wrong, or a government that'll jail you if you lie about performance.
@fgcallari @thedansimonson
I never thought I’d miss the consumer-driven version of capitalism, but the investor-driven version sure makes it look good.
@inthehands @thedansimonson or government-driven. Example: the FAA and the NRC are very technically adept and successful government regulators. While the case for the NRC was obvious to all from the start, the FAA only became important because a passenger plane falling from the sky every 1000 flight hours (or, in VC-speak, "99.9% safe! Buy it now!") was not only not good enough for insurers, but also not good enough for passengers once the majority of passengers were wealthy and powerful.
@inthehands @thedansimonson RE: "obvious to all from the start", with the obvious caveat of pluto-libertarians who are funding a government takeover while building their personal fuckyouall nuke shelter in New Zealand. God help all those living within 200 miles downwind of a nuclear power station if Project 2025 graduates take over the NRC.
@inthehands @trochee Maybe *your* software doesn’t show up to work high. :-)
@inthehands It's not the point of your thread (which is great, btw) but why can't we have both? Why can't we have human-operated vehicles that include the full complement of software and sensors, but where the automation exists only to ensure that the operator obeys all traffic laws? Trying to roll through a stop sign? The vehicle stops. Trying to speed? The vehicle slows so it doesn’t exceed the speed limit. IF (and it's a big IF) autonomous vehicles are safer, it’s only because they follow the rules.
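(A toy sketch of that enforcement-only idea, with made-up names and numbers rather than anything a carmaker ships: the human keeps driving, and the software only clamps inputs that would break a rule.)

```python
# Hypothetical "enforcement-only" automation: the driver stays in control,
# the software merely refuses to let the car break a traffic rule.
def governed_speed(requested_speed: float, speed_limit: float) -> float:
    """Allow any speed the driver wants, up to (never above) the posted limit."""
    return min(requested_speed, speed_limit)

def must_brake_for_stop(distance_to_stop_line_m: float, speed_mps: float) -> bool:
    """Force a full stop at a stop sign: brake once the car can only just stop in time."""
    deceleration = 3.0  # m/s^2, an assumed comfortable braking rate; purely illustrative
    stopping_distance_m = (speed_mps ** 2) / (2 * deceleration)
    return stopping_distance_m >= distance_to_stop_line_m
```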
@inthehands and let’s not forget unaccountability. Will we be able to fine or otherwise punish the orgs providing self-driving capabilities for errors made in the field, to the point that they will quickly adapt vehicle behavior?