OpenAI Cofounder: The 27 Papers to Read to Know 90% About AI

These are the papers Ilya Sutskever gave John Carmack


Whenever I have wanted to become super smart on a topic, I always start the same way: I find every single piece of content written on the topic, print it all out, and read every single one.

Every time I do this I feel like there is a high chance that I am the only person in the world who has done so for that particular set of articles.

Now, the only issue with this plan (although it does work, try it) is that a lot of the articles you read end up being low quality. Not everyone is good at writing an article that contains new, useful information.

The best plan is to somehow get a list of only the really good articles, the ones with unique insights.

I recently came across a collection of exactly these kinds of articles on the topic of AI, and I recommend all of you read them. I myself am printing them out tomorrow and reading every one. I challenge you to do the same.

These are the 27 articles that Ilya Sutskever, cofounder of OpenAI, told John Carmack (the creator of Doom and a programming legend) to read if he wanted to quickly become super smart on AI and how it is being developed right now.

  1. The Annotated Transformer

  2. The First Law of Complexodynamics

  3. The Unreasonable Effectiveness of Recurrent Neural Networks

  4. Understanding LSTM Networks

  5. Recurrent Neural Network Regularization

  6. Keeping Neural Networks Simple by Minimizing the Description Length of the Weights

  7. Pointer Networks

  8. ImageNet Classification with Deep Convolutional Neural Networks

  9. Order Matters: Sequence to Sequence for Sets

  10. GPipe: Easy Scaling with Micro-Batch Pipeline Parallelism

  11. Deep Residual Learning for Image Recognition

  12. Multi-Scale Context Aggregation by Dilated Convolutions

  13. Neural Message Passing for Quantum Chemistry

  14. Attention Is All You Need

  15. Neural Machine Translation By Jointly Learning To Align And Translate

  16. Identity Mappings in Deep Residual Networks

  17. A simple neural network module for relational reasoning

  18. Variational Lossy Autoencoder

  19. Relational recurrent neural networks

  20. Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton

  21. Neural Turing Machines

  22. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin

  23. Scaling Laws for Neural Language Models

  24. A Tutorial Introduction to the Minimum Description Length Principle

  25. Machine Super Intelligence

  26. Kolmogorov Complexity and Algorithmic Randomness

  27. CS231n Convolutional Neural Networks for Visual Recognition

    Course Website

Let me know if you print them :)

Thank you to our sponsors:

The secret to growing on LinkedIn in 2024


There's rising demand for high-quality video content on LinkedIn, and that presents a major opportunity for brands (and thought leaders) ready to step in.

Here’s how you can capitalize:

  1. Grab your company's existing video assets, like interviews or webinars

  2. Generate dozens of clips using OpusClip

  3. Schedule your whole month’s worth of video clips on LI using our new Calendar feature

  4. Reply to comments on your videos to foster your new connections

Want the full breakdown, with examples from companies like SaaStr and Chili Piper? Read it for free here