

Holding The Hyperscalers Accountable
We monitor, investigate, and report on the practices of leading AI companies to ensure transparency, privacy, and ethical standards are maintained.
The Midas Project is a watchdog nonprofit working to ensure that AI technology benefits everybody, not just the companies developing it.
We lead strategic initiatives to monitor tech companies, counter corporate propaganda, raise awareness about corner cutting, and advocate for the responsible development of emerging technology.
Stay Informed
The latest investigations, reports, and updates on how AI companies are—or aren’t—meeting their responsibilities.
Aug 14, 2025
The Midas Project joins letter calling for investigation into xAI's tolerance for nonconsensual deepfakes
Today, The Midas Project joined a group of 15 organizations calling on state and federal regulators to investigate and enforce laws against xAI.
Read More >
Aug 4, 2025
Joint letter requests transparency from OpenAI about its restructuring
A coalition of over 100 Nobel Prize winners, professors, whistleblowers, and public figures has released an open letter calling on OpenAI to answer seven questions.
Read More >
Jul 10, 2025
The Midas Project lodges complaint with IRS over possible OpenAI tax law violations
Our findings reveal a pattern of concerning decisions that may threaten OpenAI's nonprofit status and charitable mission.
Read More >
Jul 8, 2025
How Anthropic’s AI Safety Framework Misses the Mark
Anthropic’s RSP suffers from a lack of credibility caused by last-minute changes, as well as unclear language that makes the policy difficult to understand.
Read More >
Jun 18, 2025
"The OpenAI Files" documents a turbulent decade at Sam Altman's nonprofit-turned-tech-giant
We believe OpenAI still has a narrow window to reclaim its mission. Here's how.
Read More >
May 13, 2025
xAI misses a second, self-imposed deadline to implement a Frontier Safety Policy
Elon Musk's company has said nothing about whether or when it plans to implement a full frontier safety policy meeting the standards of the Seoul commitment.
Read More >
Feb 28, 2025
Announcing the Seoul Commitment Tracker
Our report reveals that most AI companies are breaking the safety promises they made to the world last year.
Read More >
Dec 14, 2024
Scaling, Reasoning, and Unknown Unknowns
Upcoming model releases could rapidly render the field — and, thus, our world — unfamiliar again.
Read More >
Oct 25, 2024
AI Developers Are Bungling Their Dress Rehearsal
Here’s a sobering fact: artificial intelligence today is the worst it will ever be. From here on out, AI will only become more and more capable — and dangerous.
Read More >
Oct 4, 2024
How deepfakes threaten democracy — and what you can do to help
The Midas Project is proud to join a coalition of partner organizations calling on tech companies to protect their users from deceptive or misleading synthetic media.
Read More >
Our Initiatives for Accountable AI
We lead targeted investigations and advocacy efforts to promote transparency, expose risks, and ensure that emerging AI technologies are developed responsibly and in the public interest.
An Open Letter to OpenAI
10,000+ people signed our joint open letter calling on OpenAI to answer seven key questions about its upcoming restructuring. Signatories include Nobel laureates, civil society organizations, and former OpenAI employees, all demanding greater transparency.
The OpenAI Files
The OpenAI Files is the most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI. It compiles credible reports and insider accounts that reveal how the company has drifted from its original mission.
Seoul Tracker
At the 2024 AI Safety Summit in Seoul, South Korea, sixteen leading tech organizations pledged to implement "red line" risk evaluation policies for frontier AI models. The deadline has now arrived, but not everyone has lived up to their commitment. This tracker assesses progress across the five key components.
No Deepfakes for Democracy
As Big Tech companies rapidly develop and deploy new AI technologies, deepfakes (hyper-realistic AI-generated video, images, and audio) are quickly blurring the line between truth and fiction.
Join our Movement
Help us push tech companies to prioritize safety, transparency, and public interest in AI development.
Your voice matters—and together, we can demand real change.
We believe AI should serve people, not just profits.
Sign now if you agree.
Tech companies should prioritize safe AI development over their bottom line.
Sign now if you agree.
Sign up to stay involved
Frequently Asked Questions
What people often ask about our mission, work, and how to get involved.