My own position on self-driving cars has gradually shifted over the last decade.

I used to think that they were promising not because they’re good, but because human drivers are so incredibly bad. A consistent machine with consistent attention, while still fallible and still dangerous, at least isn’t drunk or texting while driving.

Two things have shifted my feelings on the matter:

The first is what the article in the OP lays out: induced demand. If you make something easier, people will generally do more of it.

It’s dangerous to have people not paying the true cost of their own decisions. That’s already the case with driving, to an extreme (carbon tax now!!), but at least driving is really, really annoying. Annoyance is a poor proxy for driving’s true cost, but it’s •something•. Removing that backstop is dangerous.

The second is the difference between what I expected from self-driving vs how it’s actually evolving.

I imagined that self-driving cars would involve a narrowing of the parameter space: more consistent driving behavior, maybe some fixed and standardized cues/signals on roads, the expectation of human takeover for novel situations like construction zones, maybe even a shift toward a sort of herd efficiency in traffic.

Consistency. Automation that works better because it tries to •do less•. But…

…instead, AI is off on this ridiculous investor-driven “rEpLAcE hOoMAnS GAI BABEEEE” hype bender, in which self-driving cars have to be like human drivers except more magicaler.

What that means in practice is that consistency — the •one thing• that machines really have on humans in this space! — goes out the window. Self-driving cars are full of weird failure modes and bizarre quirks. They’re drunk and texting all the time, except they behave in ways that are even less predictable than humans.

One of the good-and-bad things that happens when we move human activity into software is a •narrowing of the problem space•.

Humans are full of ad hoc decisions. We fudge. We finagle. We mess up, but we also fix up. Humans are the adaptable part of complex systems. Humans are both producers of and defenders against failure. (how.complexsystems.fail/)

When you move a task into software, one of the central questions is “What happens to that human flexibility?”


Usually, at least if we’re doing a good job, the answer is “we split it:”

One part of the problem becomes simpler, less flexible, more consistent. We make up rules: “every item has exactly one price,” or “every item has one price per discount-item combination,” or “every item has N SKUs, each of which has one price per….” The rules evolve, they adapt, they grow — but they remain consistent until we update them.
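(To make that concrete, a toy sketch — made-up names and numbers, not any real system: the rule gets more elaborate over time, but at any given moment the software gives exactly one answer.)

```python
# Hypothetical sketch: the rules evolve, but each version stays consistent.

# v1: "every item has exactly one price"
prices_v1 = {"widget": 10.00}

# v2: "every item has one price per discount-item combination"
prices_v2 = {("widget", "none"): 10.00, ("widget", "clearance"): 7.50}

# v3: "every item has N SKUs, each of which has one price per discount"
prices_v3 = {
    "widget": {
        "SKU-1": {"none": 10.00, "clearance": 7.50},
        "SKU-2": {"none": 12.00},
    }
}

def price(item, sku, discount="none"):
    # One deterministic answer per lookup: no fudging, no finagling.
    return prices_v3[item][sku][discount]

print(price("widget", "SKU-1", "clearance"))  # 7.5
```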

The beauty and the peril of software is consistency: •it follows those rules we invent•.

Beauty? Because consistency can really pay off.

Peril? Because sometimes we need exceptions.

I said we “split” the problem. Software takes one part of the job, a version of the problem that is simplified so that machine consistency is •possible•. The other part of the job: human intervention. We build software to loop in humans to say, “eh, damaged item, I’m giving you a discount” or whatever. •If• we’re doing it right.

Consistency with a dash of human intervention.
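(Again as a toy sketch, with made-up names: the software applies its one consistent rule, and anything outside the rule gets flagged for a person instead of the machine improvising.)

```python
from dataclasses import dataclass

# Hypothetical sketch: software handles the consistent part,
# and escalates exceptions to a human instead of guessing.

PRICES = {"SKU-1": 19.99, "SKU-2": 4.50}  # the rule: one price per SKU

@dataclass
class Sale:
    sku: str
    price: float
    needs_human: bool = False
    note: str = ""

def ring_up(sku: str, condition: str = "ok") -> Sale:
    price = PRICES[sku]  # consistent: the software never improvises a price
    if condition != "ok":
        # The "dash of human intervention": flag it for a person
        # ("eh, damaged item, I'm giving you a discount") rather than
        # letting the machine make something up.
        return Sale(sku, price, needs_human=True, note=f"condition: {condition}")
    return Sale(sku, price)

print(ring_up("SKU-1"))
print(ring_up("SKU-2", condition="damaged"))
```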

One classic way this goes wrong is when we forget the “human intervention” part.

You end up with these Kafkaesque nightmares where somebody is stuck in an infinite product return loop or their insurance claim is denied or the state thinks they’re dead or they get a zillion parking tickets because their custom license plate spells “NULL” (arstechnica.com/cars/2019/08/w)…and a human is stuck in process hell because •the software just does that• and software is hard to change.


I thought •that• was where self-driving cars were going to land: narrowed problem space, sometimes they fail, but at least they’re really consistent. Not great, but again, arguably an improvement over human drivers.

But nooooo. Now, thanks to the Glorious Dawn of AI Megahype, we have companies falling over themselves to replace all those annoying expensive humans…with •randomness•.

This is just bonkers to me.

I mean, software is…kind of terrible. It’s expensive to build and maintain. It constantly throws our bad assumptions back in our faces. It removes the human flexibility that keeps systems afloat, unless we work hard to prevent that.

But at least it’s consistent.

Whatever it does, it •keeps doing that thing• with a high degree of reliability. It doesn’t forget to write things down, or lose that scrap of paper, or show up to work high. When it fails, 99.9% of the time it’s because humans told it to.

That consistency is the whole appeal of computers. Without that, why would any organization ever want to delegate anything to software?!

And now we have executives falling over themselves to replace it with a “random human-imitating chaos machine”?

Really?

Really?!?

I just…Do you even…What do you think…

[the remainder of this thread is incoherent muttering]

@inthehands @thatandromeda Dad hired a car last week. He *hated* it because it fought back if he tried to change lanes without signalling. I quite liked that particular feature for forcing him out of bad habits.

Sadly the dashboard readout that frequently told him he needed to change up two gears didn't help with his lazy gear changing. Never buy an ex-rental!

@JetlagJen @inthehands Yes, I am with you, playing a tiny violin for your dad ;)