Category: Articles

  • Campaign Progress Update: Cognition AI releases new “Acceptable Usage Policy”

    Over 500 people have signed our petition against Cognition since we launched it this summer. In that time, we’ve been calling the company out for having never once publicly discussed safety and responsible usage. But yesterday, that changed. Cognition released an acceptable usage policy that details what obligations users have to them in using their…

  • Following the trendlines: The pace of AI progress

    If there’s one thing to know about the current state of AI development, it’s this: Things are moving faster than anyone anticipated. For a long time, there was uncertainty about whether the set of methods known as machine learning would ever be able to achieve human-level general intelligence, let alone surpass it. In the late…

  • Incentive gradients and The Midas Project’s theory of change

    Why start an industry watchdog organization calling out irresponsible AI developers? Companies move along incentive gradients. Imagine this as a 3D landscape with peaks and valleys, downward slopes and upward climbs. Companies traverse this landscape following the path of least resistance, constantly moving in the easiest, cheapest direction, just as…
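
    The landscape metaphor here mirrors gradient descent from optimization. As a rough illustration (a toy sketch with a made-up cost surface, not anything from the article itself), the loop below repeatedly takes the locally steepest downhill step, much like a company taking the easiest, cheapest next move:

        # Toy illustration of the incentive-gradient metaphor: an agent on a
        # cost landscape repeatedly takes the locally easiest (steepest
        # downhill) step, settling into the nearest valley.

        def cost(x, y):
            # Hypothetical landscape: height = cost of occupying a position.
            return (x - 2) ** 2 + (y + 1) ** 2

        def gradient(x, y):
            # Local slope of the landscape (partial derivatives of cost).
            return 2 * (x - 2), 2 * (y + 1)

        x, y = 0.0, 0.0   # starting position
        step = 0.1        # how far to move per iteration

        for _ in range(50):
            dx, dy = gradient(x, y)
            x, y = x - step * dx, y - step * dy  # move downhill

        print(f"Settled near ({x:.2f}, {y:.2f})")  # the valley at (2, -1)

    On this toy surface the walk settles into the valley at (2, -1); the point of the metaphor is that changing the landscape (the incentives) changes where companies end up.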

  • Which tech companies are taking AI risk seriously?

    Tech companies are locked in an all-out race to develop and deploy advanced AI systems. There’s a lot of money to be made, and indeed, plenty of opportunities to improve the world. But there are also serious risks — and racing to move as quickly as possible can make detecting and averting these risks a…

  • Magic.dev has finally released a risk evaluation policy. How does it measure up?

    Big news: the AI coding startup Magic.dev has released a new risk evaluation policy this week. Referred to as their “AGI Readiness Policy” and developed in collaboration with the nonprofit METR, this announcement follows in the footsteps of Responsible Scaling Policies (RSPs) released by companies like Anthropic, OpenAI, and Google DeepMind. So how does it…

  • Why has Cognition fallen behind the industry standard for AI Safety?

    Amid fierce debates surrounding AI safety, it’s easy to forget that most disagreements concern what particular risks we face and how to address them. Very few people will try to argue that there are no risks or that serious caution isn’t warranted. In light of this, there is an emerging consensus among policy experts (and…

  • Why are AI employees demanding a “right to warn” the public?

    This week, another warning flag was raised concerning the rapid progress of advanced artificial intelligence technology. This time, it took the form of an open letter authored by current and former employees at some of the world’s top AI labs — and cosigned by leading experts including two of the three “godfathers of AI.” This…

  • How financial interests took over OpenAI

    How did an idealistic nonprofit, hoping to ensure advanced AI “benefits humanity as a whole,” turn into an $82 billion mega-corporation cutting corners and rushing to scale commercial products? In 2015, OpenAI announced its existence to the world through a post on its blog. In it, the company writes: “OpenAI is a non-profit artificial intelligence research…

  • How AI Chatbots Work: What We Know, and What We Don’t

    AI chatbots (such as Anthropic’s Claude and OpenAI’s ChatGPT) have already transformed our world. On the surface, they appear remarkably capable of friendly, natural conversations. But below the surface lies sophisticated artificial intelligence driving their abilities, along with a great deal of uncertainty about exactly how they work and what they are capable of. In this…

  • Are AI companies using copyrighted data?

    The current era of training large AI models has three fundamental requirements: advanced algorithms, advanced computer chips, and a lot of data. This last component has become a sticking point for AI companies in recent years, including OpenAI, Anthropic, Meta, and Google. These companies have essentially hoovered up the entire internet in their fight to…