If you’ve been following conferences like AAAI, ICLR, or NeurIPS, you may have noticed something worrying over the last couple of years: the main tracks are increasingly dominated by theory-heavy papers. Dense proofs, toy models, and formal guarantees now fill much of the program, while work grounded in real-world AI experiments feels sidelined.
Several factors feed this trend:
This isn’t just a matter of research fashion; it has real consequences:
The result is clear: applied AI research is increasingly crowded out. The main conference tracks, once a space for practical innovation, now seem to reward formalism over real-world impact.
This trend isn’t merely frustrating; it could be dangerous for the field. If applied research continues to be sidelined, the gap between AI theory and real-world deployment will only grow. Students and labs may start prioritizing what gets accepted over what actually matters. Incentives are shifting, and not for the better.
We don’t need to pretend that the path forward is easy — it isn’t. But it’s important to recognize the structural pressures shaping AI research today. Reviewers, conference organizers, and research labs all play a role in what gets visibility and credit.
Applied researchers, students, and practitioners deserve a voice in these conversations. Even small actions — questioning incentives, advocating for venues that value experiments, or highlighting real-world impact — can help prevent the field from drifting too far into theory while leaving practical AI behind.