Externalising Responsibility with AI
Companies play a tug of war with governments over costs. If a company can shift a cost from itself onto the government or the public, it improves its bottom line. These costs become externalities: essentially, someone else's problem.
The post-2008 government bailout of the banks is the most famous example in recent memory. The banks made bank (lol) on high-risk packaged debt until the market collapsed, and governments (and by extension taxpayers) picked up the bill.
You see this tug of war play out when a business pitches setting up an HQ or data centre in a city. It's essentially kicking off a negotiation aimed at extracting as much as it can (tax breaks, land deals, energy deals, access to water and other infrastructure) in exchange for bringing jobs to the area and (hopefully) boosting the local economy.
Corporate Responsibility as an Externality
One slightly more abstract front in this tug of war is corporate responsibility. Who is really responsible if a multinational-owned chemical plant sprays toxins into the air and poisons half a million people? Do you arrest the CEO? The local plant manager? The people who hid the failed safety tests?
In the case of the Bhopal disaster, the answer was to pin the blame as far down the ladder as possible.
Corporate responsibility is a cost; it carries legal implications that could tank most companies. If money is made on risk and you can externalise the cost of that risk, you have rigged the game in your favour. The more you rig the game, the longer you survive; the riskier the games you play, the closer you get to disaster.
Management structures commonly misunderstand one principle of delegation: if you assign a task to someone, responsibility for its delivery still rests with you. If you delegate a task to the wrong person and they screw it up, that's on you; you made the wrong choice.
AI: The Ultimate Scapegoat
Now to the point of this whole thing: what happens when you delegate that task to an AI agent and disaster strikes?
Was your prompt to blame?
Is the model vendor to blame for throttling resources at peak times?
Did the guardrails change, so your agents no longer behave the way they used to?
Companies are trying to externalise responsibility, and AI is a tool for doing exactly that.