-
How deepfakes threaten democracy — and what you can do to help
On July 26, Elon Musk uploaded a video to the social media platform X (formerly known as Twitter), narrated by what appeared to be Vice President Kamala Harris. Soon, however, it became clear that it wasn’t her. In the video, her sound-alike claims that she doesn’t “know the first thing about running the country” and…
-
What we learned about OpenAI during this week’s Senate hearing
The US government is watching AI companies closely. Every month, more DC insiders are waking up to how much AI progress may await us in the coming years, and to how serious the implications will be. On Wednesday, September 18, the US Senate Subcommittee on Privacy, Technology, and the…
-
Magic.dev
Released a statement on AI safety priorities, and announced an upcoming v2 of their responsible scaling policy.
-
OpenAI
Released a preparedness scorecard for their newest model, o1.
-
Campaign Progress Update: Cognition AI releases new “Acceptable Usage Policy”
Over 500 people have signed our petition against Cognition since we launched it this summer. In that time, we’ve been calling the company out for having never once publicly discussed safety and responsible usage. But yesterday, that changed. Cognition released an acceptable usage policy that details what obligations users have to them in using their…
-
Cognition
Released an acceptable usage policy, along with a reporting email for security vulnerabilities.
-
OpenAI
Removed an author from the GPT-4o system card.
-
Following the trendlines: The pace of AI progress
If there’s one thing to know about the current state of AI development, it’s this: Things are moving faster than anyone anticipated. For a long time, there was uncertainty about whether the set of methods known as machine learning would ever be able to achieve human-level general intelligence, let alone surpass it. In the late…
-
OpenAI
Adjusted authorship for a two-year-old article on their approach to alignment (with no substantive changes to the content).
-
Join “Red Teaming In Public”
“Red Teaming in Public” is a project originally started by Nathan Labenz and Pablo Eder in June 2024. The goal is to catalyze a shift toward higher standards for AI developers. Labenz shared the following details in the project’s announcement on X: For context, we are pro-technology “AI Scouts” who believe in the immense potential…