Lighthouse illustration with light beams, moon, and clouds – Watchtower section of The Midas Project website.

Holding The Hyperscalers Accountable

We monitor, investigate, and report on the practices of leading AI companies to ensure transparency, privacy, and ethical standards are maintained.


The Midas Project is a watchdog nonprofit working to ensure that AI technology benefits everybody, not just the companies developing it.

We lead strategic initiatives to monitor tech companies, counter corporate propaganda, raise awareness about corner-cutting, and advocate for the responsible development of emerging technology.

Stay Informed

The latest investigations, reports, and updates on how AI companies are—or aren’t—meeting their responsibilities.

Our Initiatives for Accountable AI

We lead targeted investigations and advocacy efforts to promote transparency, expose risks, and ensure that emerging AI technologies are developed responsibly and in the public interest.

An Open Letter to OpenAI

10,000+ people signed our joint open letter calling on OpenAI to answer seven key questions about its upcoming restructuring. Signatories, including Nobel laureates, civil society organizations, and former OpenAI employees, joined the call for greater transparency.

The OpenAI Files

The OpenAI Files is the most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI. It compiles credible reports and insider accounts that reveal how the company has drifted from its original mission.

Seoul Tracker

At the 2024 AI Safety Summit in Seoul, South Korea, sixteen leading tech organizations pledged to implement "red line" risk evaluation policies for frontier AI models. The deadline has now arrived, but not everyone has lived up to their commitment. This tracker assesses progress across the five key components.

No Deepfakes for Democracy

As Big Tech companies rapidly develop and deploy new AI technologies, deepfakes (hyper-realistic AI-generated video, images, and audio) are quickly blurring the lines between truth and fiction.

Join our Movement

Help us push tech companies to prioritize safety, transparency, and public interest in AI development.
Your voice matters—and together, we can demand real change.

We believe AI should serve people, not just profits.

Tech companies should prioritize safe AI development over their bottom line.

Sign now if you agree.

Sign up to stay involved

Frequently Asked Questions

What people often ask about our mission, work, and how to get involved.

What does The Midas Project do?

We engage in a combination of research, outreach, and public advocacy to ensure that AI companies are meeting public expectations, and living up to their past promises, when it comes to responsible AI development and deployment.

The most important component of our work is helping to identify and disseminate industry best practices for AI development. We review technical literature, regulatory guidance, and case studies to distill concrete measures, such as frontier-model risk assessments, red-teaming requirements, audit regimes, and whistleblower protections, and advocate for the most important voluntary steps that companies can take today to act responsibly.

We also monitor whether companies follow their stated policies and industry norms. When evidence shows backtracking or inadequate controls, we document these gaps and publicly press for corrective action, mobilizing employees, customers, and civil-society allies until the company adopts the necessary safeguards.

Finally, we publicize our research. We release concise scorecards, incident analyses, and memos so that regulators, investors, and the wider public can see how individual AI developers stack up on safety and responsibility.
Why is it called The Midas Project?

Various AI experts, including Nick Bostrom and Stuart Russell, have compared the development of advanced AI to the myth of King Midas. According to the legend, King Midas once asked a powerful satyr to make it so that whatever he touched instantly turned to gold. At first, he was thrilled with his new powers. But the King soon discovered that he couldn’t touch food, water, or even his family without instantly turning them to metal. In other words, the sudden attainment of incredible power with insufficiently well-specified goals and safeguards led to a terrible tragedy.

Much like King Midas, tech companies are now eagerly pursuing incredible wealth and power by developing artificial intelligence, a technology that will change our world forever. But how will we know that it is designed in alignment with our collective human values? If we misspecify even a single goal or safeguard for these systems, how will we prevent them from causing an incredible catastrophe? In the words of Stuart Russell: “If you continue on the current path, the better AI gets, the worse things get for us. For any given incorrectly stated objective, the better a system achieves that objective, the worse it is.”
Who is behind The Midas Project?

The Midas Project is a nonprofit initiative founded in early 2024. To this day, the majority of our campaign participants are unpaid volunteers who contribute in their free time. We are a tax-exempt 501(c)(3) organization that relies on donations from the public to continue our work.
Are you against AI?

No. One of our central values is a pro-technology attitude. Progress in technology has improved lives for millions of people around the globe (after all, without it, we wouldn’t have penicillin, air conditioning, or the internet). Artificial intelligence is already being used by millions to help improve medicine, education, and overall living standards. We believe this progress should continue, and we hope AI will be a positive force in the world.

However, we are also realists, and skeptical realists at that. We believe advanced AI systems may be a “dual-use” technology that can be used for harm as well. To avert social inequality, concentration of power, or AI-driven catastrophes, everybody needs a seat at the table when decisions about development and deployment are being made. Currently, the vast majority of these decisions about the future of AI are made in shadowy corporate boardrooms with little oversight or accountability. That’s why The Midas Project is committed to raising awareness about the risks of AI and to ensuring that global citizens are given a chance to make their voices heard.
How can I get involved?

If you’d like to get involved, consider signing up for our newsletter, joining as an official volunteer, or making a charitable donation today.
How can I contact you?

You can email us at info@themidasproject.com, or reach out via the form on our contact page.

We can’t do it alone.

Support our Work

We’re a volunteer-led nonprofit fighting for responsible AI.

Your support helps us research, advocate, and push companies to do better.