
🤖 We hope you apply to the AISST Policy Fellowship for fall 2024! Below is a draft of the syllabus we’ll use throughout the fellowship. It contains all the papers we’ll read together, as well as several others, marked “additional readings,” which we’ve provided for participants to read independently if they’re especially interested in a particular topic. Note that this is a draft and subject to change.

Week 1: Introduction to machine learning and AI

  1. But what is a neural network? (3Blue1Brown, 2017). Watch first 12.5 min.
  2. The AI Triad and what it means for national security strategy [Exec Summary Only] (Buchanan, 2020). 5 min.
  3. 4 charts that show why AI progress is unlikely to slow down (Henshall, 2023). 8 min.
  4. Can AI Scaling Continue Through 2030? [Introduction; Chip Manufacturing Constraints (only the summary at the top, up to “Current Production and Projections”, then Figure 4); Data Scarcity; What Constraint is Most Limiting?; Will Labs Attempt to Scale These to New Heights?] (Sevilla et al., 2024). 20 min.

Additional readings:

Week 2: Overview of risks from advanced AI systems

  1. International Scientific Report on the Safety of Advanced AI (Bengio et al., 2024)
    1. Executive Summary (pp. 9-14)
    2. § 4.1 Malicious use risks (pp. 41-47)
    3. § 4.2.3 Loss of control (pp. 51-53)
    4. § 4.3.1-4.3.3 Labour market risks, Global AI Divide, & Market concentration risks and single points of failure (pp. 54-59)
    5. § 4.4.2 Cross-cutting societal risk factors (pp. 66-67)
  2. Harms from increasingly agentic algorithmic systems (Chan et al., 2023). Read pgs. 2-6 (§ 1 & 2). 10 min.
  3. An overview of catastrophic AI risks (Hendrycks et al., 2023). Read pgs. 38-40 (§ 5.3, half of 5.4). 10 min.
  4. Why AI alignment could be hard with modern deep learning (Cotra, 2021). 20 min.

Additional readings:

Week 3: Safety standards and regulations

  1. From Principles to Rules: A Regulatory Approach for Frontier AI (Schuett et al., 2024) [Parts I, IIB, IIIA, IVB]
  2. Managing extreme AI risks amid rapid progress (Bengio et al., 2024)
  3. Breaking Down the Biden AI EO: Ensuring Safe and Secure AI (Baker, 2023)
  4. Existing authorities for oversight of frontier AI models (Bullock et al., 2024). Read pgs. 1-16.

Additional readings: