I’ve long respected Bruce Schneier as an expert on computer security (and his book, Data and Goliath, was well worth a read).

This recent essay “Can We Build Trustworthy AI?”, co-authored with Nathan Sanders, was outstanding.

“The problem isn’t the technology…it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit. Sometimes we are permitted to interact with the chatbots, but they’re never truly ours. That’s a conflict of interest, and one that destroys trust.

The transition from awe and eager adoption, to suspicion, to disillusionment is a well-worn one in the technology sector.

Realistically, we should all be preparing for a world where AI is not trustworthy. … This will better prepare you to take advantage of AI tools, rather than be taken advantage by them.

Just like with any human, building trust with an AI will be hard won through interaction over time. We will need to test these systems in different contexts, observe their behavior, and build a mental model for how they will respond to our actions. Building trust in that way is only possible if these systems are transparent about their capabilities, what inputs they use and when they will share them, and whose interests they are evolving to represent.”

I, for one, absolutely appreciate the cool attitude and Buddhist-style nibbidā (disenchantment) in this article. The likes of Bruce Schneier have been around the track enough times in the IT world to see the recurring deeper sociological patterns which the cheerleaders of the latest big IT trend - AI - quickly overlook, what with the big dollar signs in their eyes. Each shiny new big thing attracts a wave of money that gets used in ways that grow slimier and slimier over time. When the toll is eventually taken of how beneficial the trend was overall, there’s a big, big hangover after the party is over - one that, as a society, we somehow never quite recover from.

The world hasn’t even come to terms with the social-level wreckage of too much screen time in general, the macro-level effects of social media, or the macro-level economic impacts of cryptocurrency. We’re all generally still trying to find our sea legs, as it were, with these earlier big game changers; then along comes AI to churn up the sea a whole bunch more - as though we were in need of any more of that.

I made a post about this on Mastodon, if you’d like to comment.