hachyderm.io is one of the many independent Mastodon servers you can use to participate in the fediverse.

I once worked at a company that sold industry-specific core-business software to deep-pocketed corps who couldn’t / wouldn’t / shouldn’t roll their own. I got into a discussion with my manager about whether our products were essentially — my words — a hoax.

Me: “Look, our products are riddled with bugs and holes. They’re nearly impossible to deploy, manage, and maintain. They frequently don’t even work •at all• on the putative release date, and we sell the mop-up as expensive ‘consulting.’”

1/

“How can it not be a hoax?!”

He said something that completely changed how I look at the workings of business:

“Paul, you are making the mistake of comparing our software to your ideal of what it •should• be. That’s not what these companies are doing. They’re comparing it to what they already have now. And what they have now is •terrible•.”

2/

He continued: “They’re doing business with Excel spreadsheets, or ancient mainframes, or in many cases still using pen and paper processes [this was the early 00s], and those processes are just wildly labor-intensive and error-ridden. They lose unimaginable amounts of money to this. For them to pay us a measly few million to get software that takes 18 months to get deployed and just barely working? That is a •huge• improvement for them.”

In short: our product sucked, but it wasn’t a hoax.

3/

There’s a weird disconnect about gen AI between the MBA crowd and the tech crowd: either it’s the magical make-money sauce CEOs can just pour on everything, or it’s fake and it’s all a hoax.

A lot of that is just gullibility and hype at play, huge amounts of investor money and wishful thinking desperately hoping to find huge payoffs in whiz-bang tech.

But: companies do actually deploy gen AI, and it sucks, and they •don’t stop•. Why?!

4/

I suspect that conversation long ago might shed some light on how companies are actually viewing gen AI right now. Behind all the flashy “iT cOuLD bE sKYnEt” nonsense, there’s something much more disappointingly cynical but rational: Gen AI sucks. They know it sucks. But in some cases, in some situations, viewed through certain bottom-line lenses, it sucks slightly less.

5/

So Megacorp’s new AI customer support tool describes features that don’t exist, or tells people to eat nails and glue, or is just •wrong•.

Guess what? Their hapless, undertrained, poverty-wage, treated-like-dirt humans who used to handle all the support didn’t actually help people either. Megacorp demanded throughput so high and incentivized ticket closure so much that their support staff were already leading people on wild goose chases, cussing them out, and/or quitting on the spot.

6/

Gen AI doesn’t cuss people out, doesn’t quit on the spot, and has extremely high throughput. It leads people on wild goose chases •far• more efficiently than the humans. And hell, sometimes, just by dumb luck, it’s actually right! Like…maybe more than half the time!

When your previous baseline is the self-made nightmare of late stage capitalism tech support, that is •amazing•.

7/

And you can control it (sort of)! And it protects you from liability (maybe)! And all it takes is money and environmental disaster!

Run that thought process across other activities where corps are deploying gen AI.

I suspect a lot of us, despite living in this modern corporate hellscape, still fail to understand just how profoundly •broken• the operations of big businesses truly are, how much they function on fakery and deception and nonsense.

So gen AI is fake? So what. So is business.

8/

I am hamming this up for cynical dramatic effect, but I do think there’s a serious thought here: so much activity within business delivers so little of actual value to the world that replacing slow human nonsense crap with fast automated nonsense crap seems like a win.

Try to imagine the world with MBA goggles on, and it seems perfectly rational.

When people consider gen AI, I ask them to ask themselves: “Does it matter if it’s wrong?” Often, the answer is “no.”

9/

If you’ll indulge another industry story — sorry, this thread is going to get absurdly long — let me tell you about one of the worst clients I ever had:

Group of brothers. They’d made fuck-you money in marketing or something. They founded a startup with a human benefit angle, do some good for the world, yada yada.

Common now, but new-ish idea at the time: gamified online health & well-being platform that a company (or maybe insurer, whatever) offers to its employees.

10/

The big brilliant idea at the heart of the product they were building? The Life Score: a number that quantifies your overall well-being, a number that you can try to raise by doing healthy activities.

How exactly was this number to be calculated? Eh, details.

11/

They had this elaborate business plan: the market opportunity, the connections, the moving parts — and in the middle of this giant world-domination scheme, a giant hole. Just a black box (currently empty) labeled “magic number that makes people get healthier.”

The core feature of their product, the lynchpin that would make the entire thing actually useful, was just a big-ass TBD.

12/

I was hired to implement, but quickly realized they had no idea what they wanted me to build. Worse: they hadn't hired any of the people (like, say, a health actuary or a behavioral psychologist) who would be remotely qualified to help them figure it out. The architect of their giant system was a chemical engineer of some kind who was trying to get into tech. Lots of big ideas about what it would •look like•, but nobody in sight had a clue how this thing would actually •work•. Zero R&D.

13/

Paul Cantrell

No worries. Designers were cranking out UI! Marketers were…marketing! Turning the Life Score from vague founder notion to working system was a troublesome afterthought.

So…like a fool, I tried to help them suss it out. It turned out they •did• sort of have a notion:

1. Intake questionnaire about your lifestyle
2. Assign points to responses
3. System suggests healthy activities
4. Each activity adds points to your score if you do it

14/

And then, like a •damn• fool, I pointed out to them the gaping chasm between (2) and (4). Think about it: at the start, the score measures (however dubiously) the state of your health. But after you do some activities, the score measures how many activities you did.

The score •changes meaning• after intake. And it's designed to go up over time. Even if your health is getting worse.

And like an •utter• damn fool, I thought this was a flaw.

15/

It was only after the whole contract crashed and burned (they were, it turns out, truly awful people) that I realized that my earnest data-conscious questions were threatening their whole model.

Their product was there to make the “healthy” line go up. Not to actually make people healthy, no! Just to make the line go up.

It was an offer of plausible deniability: for users, for their employers, for everyone. We can all •pretend• we’re getting healthier! Folks will pay good money for that.

16/

Of •course• their whole business plan had a gaping hole at the center. That was the point! If that Life Score is •accurate•, if it actually describes the real-world state of a person’s health in any kind of meaningful way, that wrecks the whole thing.

Now, of course, there would be no Paul to ask them annoying questions about the integrity of their metrics. They’d just build it with gen AI.

17/

Would gen AI actually be a good way to help people get healthy with this product? No. But that was never the goal.

Would gen AI have been a good option for these rich people trying to get richer by building a giant hoax box that lets a bunch of parties plausibly claim improved employee health regardless of reality? Hell yes.

18/

Again, my gen AI question: Does it matter if it’s wrong?

I mean, in some situations, yes…right? Like, say, vehicles? that can kill people?

Tesla’s out there selling these self-crashing cars that are •clearly• not ready for prime time, and trap people inside with their unopenable-after-accident doors and burn them alive. And they’re •still• selling crap-tons of those things.

If it doesn’t matter to •them•, how many biz situations are there where “fake and dangerous” is 100% acceptable?

19/

Does it matter if it’s wrong?

In the nihilism of this current stage of capitalism, “no” sure looks like a winning bet.

/end

Because I’ve apparently driven some people to despair with this thread, some rays of hope:

First, note that the point of my very first example is that the product was •not• a hoax. It really did make things better for real people. Sometimes it can be hard for perfectionists like myself to accept that better is •better•. Sometimes we do actually build things that matter, even if they kind of stink relative to some nonexistent Platonic ideal. Take the win, Paul!

Ah, but the rest of the thread…

A/1

As many point out, there are systemic forces at play that create these situations where “fake and dangerous” is acceptable to a business, even if it harms actual humans. It’s not just that there are awful people; it’s that our system creates the awful people it needs.

Leaving aside grandiose utopian theories, a few concrete things that could help:

1. Giving antitrust real teeth
2. Repeat 1 for emphasis
3. Making industry less investor-driven
4. …which means reducing wealth inequality

A/2

Expanding on 3 & 4 a bit: Right now, there’s a huge mass of concentrated wealth whose owners aren’t going to spend it, so they want to invest it. And they want returns. They want more returns than actually are out there to be found. So industry needs to •invent• those returns.

That means a company that will never and can never ever deliver anything of true value can still make it if it convinces investors it might grow.

“Does it matter if it’s wrong?” Not if it's convincing to investors!

A/3

Much the same thing created the 2008 mortgage crisis. Wealth needs a place to roost → mortgages look safe → more demand for mortgage investments than there are mortgages → people find ways to invent mortgages that shouldn’t exist, to attract investment.

Then: mortgages. Now: AI.

It’s not just a hype bubble; it’s an “oversupply of investment dollars” bubble. Is there a word for that? I am not an economist.

@Npars01 replied with more on this here: mstdn.social/@Npars01/11360447

A/4


Weirdly, the old consumer-driven version of capitalism, where profit came from sales, now looks lovely compared to the current investor-driven version where profit comes from wooing the ultra-wealthy.

Sorry, Marxists in the replies, I am not optimistic about the inevitable market swing bringing about revolution and glorious utopia. It didn’t in 2008, and it won’t now either. But I do believe the pendulum will swing back, at least.

And we do have greater than zero power to help it swing.

A/5

OK, I promised you optimism:

The world, including the business world, is •full• of good people doing good things. Yes, there are actually businesses that do manage to do good sometimes…but that's not where I’m going with this.

Even in the most useless, soul-sucking, everything-is-fake business environment, there are people solving fun problems, forming collegial relationships, being kind to each other, finding joy in the mundane. And •that• is life’s real work.

A/6

Yes, a whole lot of business activity is fake. It’s playing with sand castles and pretending that princesses are going to live in them. But hell, at least it’s people playing!

If, like me, you find it hard to keep the MBA goggles on, and you keep asking “What about real good for real people?!,” well, all the sand castles come and go, profits come and go, and the lives lived along the way matter more than any of it.

A/7

I don’t want to downplay the harm our systems do. The world is •full• of preventable harm. It’s enraging, and it’s heartbreaking.

All I’m saying is this: don’t neglect the importance of the humans living their lives in the middle of all the corporate nonsense. I’m concerned about the quality of the product, but in the end, I’m a lot more concerned about the lives of the people building and using it. And the product isn’t the most important thing in any of their lives.

A/8

Even in the fakest and most ridiculous places I’ve worked, including the ones upthread, I’ve seen people being beautiful, exemplary frigging human beings, caring for each other, living good lives that are bigger and better and so much more important than the boulder we’re rolling up the hill just to see it roll down.

•That• is the most important thing in the world. I’d like less waste along the way. I’d sure like less harm. But I recognize that we are very, very far from hopeless.

A/9

@inthehands

Extend this thought experiment to political campaign funding and tech billionaires.

Silicon Valley bought an election win for a set of GOP crooks because "innovation" has been redefined as:
1. Successful scams & frauds
2. Tax evasion
3. Corporate welfare & subsidies
4. Monopolies
5. Regulatory capture
6. Pollution & climate denial
7. Deregulation

Silicon Valley does not want saleable products that generate revenue.

They want Saudi cash. They want Russian oligarchs...

1/2

@Npars01 We need to destroy business school as it currently exists. It is a training academy for sociopaths.

@Npars01 Well where do evil managers and CEOs come from?

People go to business school and they are taught about Jack Welch and Jeff Bezos and Steve Jobs. They are taught that these people are heroes because they made the stock go up. They are taught that exploiting cheap labor and making disposable products is good because it generates more "value" for shareholders. They are taught to be parasites and feel good about it.

That mass production of parasites needs to stop.

@Npars01
Yep, this is all at the heart of it. It’s the same thing that brought us the 2008 crash: too much investor money looking for returns that don’t exist. Then it was just investors wanting to invest in more mortgages than there were mortgages to reasonably offer. But this time, it’s broader, and I fear much worse.

@Npars01 @inthehands Money from Silicon Valley may have bought an election, but that came from the billionaire class. Similarly, technology isn't bad by nature, it's how that technology is used by people that's good or bad.

Let's call out the small number of disgustingly rich people who did this rather than attribute it to an entire region.

@earth2marsh @inthehands

That's the thing. It isn't just a small number of people in one region.

Silicon Valley relies on oil money.

The fossil fuel industry is supported by everyone who gases up their cars or heats their home.

A third of the top 100 Twitter investors are from Fidelity retirement funds. Another set are from Russia & Saudi Arabia.

The fossil fuel industry has targeted Silicon Valley with a flood of investment dollars for AI, cryptocurrency, & social media in recent years

@Npars01 @inthehands I live in Silicon Valley along with millions of others. Many of us strongly disapprove of the behavior of our mega-rich neighbors. SV has been targeted by fossil fuels because of growth opportunities, and unfettered capitalism has a bottomless appetite for growth.

When place names like Russia/Saudi Arabia/Silicon Valley are used to refer to a relatively small set of oligarchs, it makes it harder for those who identify with those places to hear your point.

@Npars01 @inthehands I am not saying you're wrong. I actually appreciate the point you're trying to make! I am saying that othering is dangerous. Divisions make it harder to resist.

@Npars01 @earth2marsh @inthehands

👏👏👏

The link between Silicon Valley techbros and dirty petrostate money is horribly underreported and poorly understood.

Thank you for mentioning it. One important example is that Saudi Arabia's the second largest investor in Twitter. Now, why would that be?

@TCatInReality @earth2marsh @inthehands

For manipulating public opinion.

Despotic petro-states, like Russia, Qatar, & Saudi Arabia, are waging undeclared wars on every democracy that attempts climate action.

They've gained allies with conservative bigots, white supremacists, & #KochNetwork to thwart a fossil fuel phase out.

As climate change accelerates, that phase out becomes increasingly imperative.

Keeping captive consumers captive
americanprogress.org/article/t

democracynow.org/2024/11/21/co

Center for American Progress · “These Fossil Fuel Industry Tactics Are Fueling Democratic Backsliding” by Steve Bonitatibus

@inthehands spot on for so much of what seems to be going on right now - every last health or weight loss type app

@inthehands off topic to this thread, but damn Paul, you've been posting some amazing and on point thoughts and stories the past few days. Thanks for sharing!

@rrdot
Cheers! Screaming into the void may be futile, but I guess it’s nice when somebody enjoys the concert?!

@inthehands or as I usually put it, the bullshit machine looks awful nice to the folks that have made it with bullshit.

@inthehands

Wow ... Yeah ... I mean .... That's just too close to the mark (and also reflects SO DAMN MUCH of why I fled IT a few years back).

It really all makes sense.

@fedithom The problem, of course, is that you are utterly right about everything you wrote.

And it's depressing af.

@blindcoder
I didn't really write much of anything. All the smart stuff here was written by @inthehands :)

@fedithom @inthehands Sorry, that's what I get for trying to respond to the correct post :D

@inthehands I'm just off to build a browser tweak to replace "ai" with "giant hoax box"

@jfrench

"Giant hoax box" also shortens to GHB, which is (in French at least) the drug bad people put in other people's drinks to abuse them, and while the image is disgusting, it does fit the Gen AI business model.

@inthehands

@inthehands, thanks for the interesting read! Made perfect sense to me, and I have indeed struggled until now to understand how so many allegedly business-savvy people could see the extent of GenAI hallucinations and still think "this is exactly what we need!"

@inthehands that was worth the long thread !

I'd add that for AI to be a valid business option it doesn't even need to be better than the existing solution, it just needs to be more profitable than doing nothing.

It's easier for a company to bet on a new project that mostly costs electricity and hardware and that can be stopped at any moment than on one that requires hiring people, and will need even more people if it succeeds.

@inthehands No, it was a useful thread, just, you know, a bit depressing. 😂

@inthehands you are absolutely right, this happens in tech, and all business. It just might not be what happens with genAI though. GenAI might actually be the hoax us cynics think it is and be a total wipeout of hundreds of billions of dollars (and emissions and wrecked jobs.) The church of Altman looks a lot more like a hoax than not. Part of it "sucking less" is like a psychic.

@paulmwatson
I mean, yes, and to hell with the church of Altman. I do think this is a bubble and there will be a crash.

@inthehands @paulmwatson
A bubble and popping in the not too distant future it is. Two quick thoughts:

Building on the point about MBAs, using LLMs is a "plausible optimization measure" they can present to shareholders. That alone can be worth it, similar to using consultants to avoid direct responsibility. :(

On the other hand, research groups and start-ups with genuinely good projects were suddenly able to get funding (just switch ML to AI...). Wasteful? Very, but a small silver lining. :)

@caranea @inthehands we couldn't leverage our ML work into AI funding, the investors wanted the generative AI story. Oh well.

@inthehands that resonates. A factor that deserves some more attention is that inside those large corps, most employees are completely cynical. They don't care about doing a good job anymore at all, they just care about not getting burnt and taking home the money. Most, not all, but that hardly matters.

This leads me to the conclusion that chatgpt would do better than most employees in large orgs at writing internal emails, for example. Because _nothing creative of use_ happens in there anyway.

@inthehands in a sane world, after you realized the whole health score was a hoax, you'd be able (or even required) to report them to some kind of institution that would send inspections and lawsuits their way.

We don't live in a sane world.

@inthehands Also, regarding the customer support example:
Yes, it saves time for the company, and it was valuable human time that was being spent on pointless things.

But it makes the *customer* waste *more time* doing pointless things

Moreover, it creates an asymmetry, where the company can spend relatively little time to make the customer waste a lot of time.

So AI is a weapon.

And as such, it should be regulated.

@wolf480pl
Someone once remarked that patients are the free labor of the US medical system, and that’s really stuck with me. And it’s not just medical where that pattern occurs.

@inthehands
way way *way* back when

I had that oft-considered-a-superpower skillset that allowed me to be able to connect computers to 'the internet'

wasn't a thing for me, a near-hobby, and I was always learning.

got pitched on 'consulting' by >1 outfit

"uh, no. I can't see billing a client just to sit there and read manuals"

Them: "Yer never gonna make it in the real world with that attitude"

@inthehands
yrs later
kinda at wits-end for having this strange multi-discipline skill-set that spanned frame carpentry to network design & implementation

was in a chat with a former boss and mentioned: "maybe I should just go into consulting"

they literally lol'd
"for all your practical skills, you utterly lack the most critical"

me: (offended) "which is?"

them: "You lack the ability to slither under a closed door"

@inthehands This thread started out as incredibly deflating and ended up flatly horrifying.

@inthehands I once half in jest claimed that the real reason proprietary software companies don't disclose their source code is not because it is a 'valuable business secret' but because they are ashamed of it. The code quality, I mean. Those rare opportunities I have had to look at source have not changed my mind...

@martinvermeer
Yes, and not just ashamed of the quality; in many cases it would expose the product as being an empty promise, maybe even expose outright negligence or fraud.

@inthehands In the end I feel like our economic world is a house of cards that holds up so many people doing silly things.

But those things let them have full and interesting lives. It is an inefficient make work program.

We should lift more people up with it so they can be better enabled and supported in their full and interesting lives.

And we should also push the pendulum towards the work itself being a little more meaningful and beneficial.