

Bottleneck: Some next words are very hard to predict
Chain-of-Thought (CoT) prompting: a work-around (Wei et al., 2022); see the prompt sketch below
Train LLMs to do CoT explicitly (OpenAI blog post, Nathan Lambert)
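
A minimal sketch of few-shot CoT prompting in the style of Wei et al. (2022): the prompt contains a worked exemplar whose answer spells out intermediate reasoning, so the model imitates that format before giving its final answer. The exemplar is modeled on the paper's running arithmetic example; the code itself is illustrative, not from the source.

# Few-shot chain-of-thought prompt construction (illustrative sketch).
# The exemplar answer includes intermediate reasoning steps, which is what
# distinguishes CoT prompting from a plain question-answer prompt.

exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

question = (
    "Q: A cafeteria had 23 apples. They used 20 to make lunch and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)

# Feed this string to the LLM of your choice; the model tends to produce
# step-by-step reasoning before its final answer.
cot_prompt = exemplar + question
print(cot_prompt)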

A new scaling law: performance improves with more test-time compute (Noam Brown, Jason Wei)
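
Illustrative sketch only: one simple way to spend more test-time compute is to sample several CoT solutions and take a majority vote over their final answers (self-consistency, Wang et al., 2022). The sample_answer function below is a stand-in for an LLM call and is hypothetical, not from the source.

import random
from collections import Counter

def sample_answer(prompt: str) -> str:
    # Stand-in for sampling one CoT completion from an LLM and extracting
    # its final answer; here we just simulate a noisy solver.
    return random.choice(["11", "11", "11", "9", "12"])

def majority_vote(prompt: str, n_samples: int) -> str:
    # More samples means more inference compute; the voted answer tends to
    # get more reliable as n_samples grows.
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote("Q: ...\nA:", n_samples=16))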

Benchmarks getting saturated more quickly (David Rein)
