Terms and Conditions May Apply.

There’s something almost cute about the way tech giants talk about ethics. When Google first unveiled its AI Principles in 2018, it wasn’t so much a revolution as it was a rebrand. A Silicon Valley promise ring. A pinky swear to humanity that the machines they build wouldn’t become too evil, too biased, too interested in targeting civilians with drone surveillance. These were not hard laws, mind you. Not regulations. Just some thoughtfully curated bullet points delivered in the soothing cadence of a keynote speaker who once read Kant for Beginners on a flight to Davos.

Here’s the thing about principles: they’re like the rules taped up in a high school classroom. Grandly stated (“Respect Others!” “No Phones!” “No Cheating!”) and universally ignored. No teeth. No enforcement. And if the teacher, or the tech CEO, doesn’t feel like punishing the golden child when they cheat off your test or, say, violate user privacy or develop biased facial recognition software? Well, guess what: that’s just discretion.

Google’s AI Principles, for all their performative humility, read less like a binding code of conduct and more like the manifesto of a man at Burning Man who just discovered ethics on a podcast. “Be socially beneficial,” they say. “Avoid creating or reinforcing unfair bias.” “Be accountable to people.” Sure, Jan. These are not goals with metrics. These are aspirations dressed in a Patagonia fleece vest.

Now contrast that with UNESCO’s Recommendation on the Ethics of Artificial Intelligence, which at least tries to define measurable commitments: robust human oversight, risk-assessment mechanisms, periodic auditing, impact studies. Boring? Yes. Bureaucratic? Absolutely. But also, you know, useful. Like a filing cabinet full of receipts, instead of a hand-drawn dreamcatcher that says “Fairness!” in Comic Sans.

The real question is: how do we know if Google is living up to its commitments? And the answer is, we don’t. Because Google, like most tech giants, has no real incentive to tell us when it screws up. There’s no governing body with subpoena power demanding transparency. No global AI ethics sheriff riding into town to inspect the codebase. Instead, we rely on the occasional whistleblower, a few good journalists, and the rare brave soul willing to read the Terms of Service all the way through without succumbing to madness.

And let’s not forget that these same principles were penned shortly after Project Maven, a U.S. military drone program powered by Google’s AI that went public and caused an employee revolt. The backlash forced Google to walk away from the contract. But here’s the part that doesn’t fit into a press release: they didn’t stop because it was wrong. They stopped because it was loud.

So no, we don’t need more ethics principles. We need mechanisms: audits, fines, labor unions, regulatory agencies with actual teeth. Until then, Google’s AI Ethics might as well be etched on a dorm room wall in glow-in-the-dark ink. Reassuring in the dark, invisible the moment someone turns on the lights.

And if all else fails? Just wait. Because these things always cycle. The same companies promising AI for Good™ today will pivot to “AI for Market Dominance” by Q3. And we’ll blink, and it’ll be 2040, and we’ll be the ones muttering as we try to explain to our AI grandkids why we once believed machines could be taught to behave better than the humans who built them.
