Human-level performance via ML was *not* proven impossible with complexity theory [D]
Rebuttal published in Computational Brain & Behavior counters the 'Ingenia Theorem' claim that AGI via ML was mathematically impossible, arguing the original proof fails because 'human-level classifier' was never formally defined.
Excerpt
Van Rooij, Guest, Adolfi, Kolokolova, and Rich [claimed to have proven that AGI via ML is impossible](https://link.springer.com/article/10.1007/s42113-024-00217-5) in *Computational Brain & Behavior* in 2024. The basic idea was to reduce a known NP-hard problem to the problem of learning a human-level classifier from data. The purported result, called the "Ingenia Theorem" by the authors, made some noise on the internet, including here.
My paper showing that the proof is irreparably broken is now [also out in CBB](https://link.springer.com/article/10.1007/s42113-026-00284-w) (ungated preprint [here](https://arxiv.org/abs/2411.06498)).
The basic issue is that "human-level classifier" is not mathematically defined, which the authors solve by ... never defining it. They have a construct that corresponds to "distribution of human situation-behaviour tuples" when they introduce the problem, but that construct gets swapped out for "for all polytime-sampleable distributions" when it comes time to do the formal proof. This means that if you find-and-replace human situation-behaviour tuples with ImageNet inputs/labels, the paper also proves that learning to classify ImageNet is intractable.
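The quantifier swap can be sketched schematically (my notation, not either paper's; a sketch of the gap as described above, not a reconstruction of the formal statements):

```latex
% Informal claim: one particular, fixed distribution is hard to learn.
%   Learn(D_human) is intractable,
%   where D_human = distribution of human situation-behaviour tuples.
%
% What the formal proof establishes: worst-case hardness over a class.
%   \forall \text{poly-time } A \;\; \exists D \in \mathcal{D}_{\mathrm{polysamp}}
%     \;:\; A \text{ fails to learn } D.
%
% Since D_human is never pinned down beyond membership in the class,
% nothing distinguishes it from D_ImageNet: the identical argument
% would "prove" that ImageNet classification is intractable, which
% practice shows it is not. Worst-case hardness over a class does not
% transfer to any particular member of that class.
```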
Blogpost discussing similar attempts, from Penrose to Chomsky, [here](https://mikeguerzhoy.substack.com/p/barriers-to-complexity-theoretic).
Read at source: https://www.reddit.com/r/MachineLearning/comments/1tc1xr3/humanlevel_performance_via_ml_was_not_proven/