LongtermWiki is an encyclopedic resource for AI safety. It helps funders, researchers, and policymakers understand the landscape of AI risks and interventions.
The site contains 400+ pages covering risks, technical approaches, governance proposals, key debates, and the organizations and researchers working on these problems.
AI Transition Model
A causal framework mapping how AI development could lead to different outcomes.
- Key Factors: what shapes AI trajectories
- Scenarios: possible futures
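The transition model is documented in full on its own pages; purely as an illustration of what a causal framework over key factors and scenarios can look like, here is a minimal sketch in Python. All factor names, scenario names, weights, and the linear scoring rule are hypothetical and are not taken from the wiki's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    """A key factor shaping AI trajectories (names here are hypothetical)."""
    name: str
    level: float  # toy 0.0-1.0 stand-in for the factor's current state

@dataclass
class Scenario:
    """A possible future outcome, driven by a weighted combination of factors."""
    name: str
    drivers: dict[str, float] = field(default_factory=dict)  # factor name -> weight

    def score(self, factors: dict[str, Factor]) -> float:
        # Toy linear score: weighted sum of driver levels (not the wiki's method).
        return sum(w * factors[name].level
                   for name, w in self.drivers.items() if name in factors)

# Hypothetical inputs, for illustration only.
factors = {f.name: f for f in [
    Factor("alignment progress", 0.4),
    Factor("racing pressure", 0.7),
]}
scenarios = [
    Scenario("smooth transition", {"alignment progress": 1.0, "racing pressure": -0.5}),
    Scenario("loss of control", {"alignment progress": -0.8, "racing pressure": 1.0}),
]

for s in scenarios:
    print(f"{s.name}: {s.score(factors):+.2f}")
```

The real model distinguishes many more factors and does not reduce scenarios to a single weighted sum; the sketch only conveys the factor-to-scenario shape of the framework.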
Knowledge Base
Key Debates
Structured arguments on contested questions.
Field Reference
Who’s working on what.
High-quality pages worth reading:
| Topic | Page | Description |
|---|---|---|
| Risk | Deceptive Alignment | AI systems that appear aligned during training but pursue different goals when deployed |
| Risk | Racing Dynamics | How competition between labs may compromise safety |
| Response | AI Control | Using untrusted AI safely through monitoring and restrictions |
| Response | Export Controls | Restricting AI chip exports as a governance lever |
| Capability | Language Models | Current capabilities and safety implications of LLMs |
This resource reflects an AI safety community perspective. It takes seriously the possibility of existential risk from AI and maps the arguments, organizations, and research from that viewpoint.
What it does well:
What it does less well:
Alternative viewpoints: Gary Marcus’s Substack (AI skepticism) • Yann LeCun’s posts (AGI skepticism) • Timnit Gebru et al. (AI ethics)