Core Concepts
Post-Selection in Deep Learning is statistically invalid and traditional cross-validation does not rescue it.
Statistics
"The first peer-reviewed papers on Deep Learning misconduct are [32], [37], [36]."
"NNWT and PGNN guarantee to reach a zero validation error due to Post-Selection step during training."
"NNWT and PGNN should not generalize well, as they simply find the luckiest fit in the absence of a test."
Quotes
"Post-Selection is invalid statistically even in the presence of nest cross-validation."
"NNWT and PGNN with input-output cross-validation can give a zero validation error."