Incentive gradients and The Midas Project’s theory of change

Why start an industry watchdog organization calling out irresponsible AI developers?

Companies move along incentive gradients. Imagine this as a 3D landscape with peaks and valleys, downward slopes and upward climbs. 

Companies traverse this landscape along the path of least resistance, constantly moving in the easiest, cheapest direction, just as a ball rolls downhill.

When it comes to growing quickly and making shareholders a boatload of money, this is great. But when it comes to making responsible and pro-social decisions, things get more complicated.

See, taking corporate social responsibility seriously is often costly and difficult, akin to pushing the company up a steep hill. At any moment, the company may be tempted to simply give up, save itself the time, energy, and money, and let gravity pull it downward into an endless race to the bottom. The financial incentives are simply too strong to expect multi-billion-dollar corporations to always do the right thing, first try, by default.

One role of government and civil society is to shift this incentive gradient. 

For example, the threat of sanctions from regulators raises the cost of behaving irresponsibly, whether that means polluting waterways or acting monopolistically, and helps ensure that markets serve the interests of everyone, not just corporate shareholders. In essence, regulation levels the terrain of the incentive gradient, raising the valleys and lowering the peaks.

But regulations can’t address everything. They move slowly. They’re often reactive rather than proactive. And ask anyone who has worked in politics: achieving bipartisan consensus on even bare-minimum regulations can be a challenge.

That’s why nonprofit industry watchdog groups, like The Midas Project, have a critical role to play. 

It’s only in the darkness that corporations feel comfortable cutting corners, taking risks, and neglecting social responsibility. If they know a public interest group is watching them, capable of bringing to light all the ways they take unnecessary risks, externalize costs, and harm the world, the incentive gradient shifts. Acting irresponsibly becomes more costly. Acting ethically may even become the path of least resistance.

We fear that the current incentive gradient for AI developers is leading us toward an AI catastrophe, a valley we find ourselves rolling into faster and faster every year. In the race to build smarter-than-human artificial intelligence, AI companies are already cutting corners: rushing through safety testing, stealing copyrighted material, lobbying against regulation, and silencing whistleblowers.

The Midas Project has one goal: hold those companies accountable, and make it so that an AI catastrophe is no longer the path of least resistance — no longer the default outcome of a profit-driven race to the bottom.

How are we doing this? By shining a light on corporate misbehavior.