Why does CTC result in peaky behavior?
Publication: 6368974
arXiv: 2105.14849
MaRDI QID: Q6368974
Albert Zeyer, Hermann Ney, Ralf Schlüter
Publication date: 31 May 2021
Abstract: The peaky behavior of CTC models is well known experimentally. However, an understanding of why peaky behavior occurs, and whether it is a desirable property, is missing. We provide a formal analysis of the peaky behavior and of the gradient descent convergence properties of the CTC loss and related training criteria. Our analysis provides a deep understanding of why peaky behavior occurs and when it is suboptimal. On a simple example that should be trivial for any model to learn, we prove that a feed-forward neural network trained with CTC from uniform initialization converges towards peaky behavior with a 100% error rate. Our analysis further explains why CTC only works well together with the blank label. We also demonstrate that peaky behavior does not occur with other related losses, including a label prior model, and that this improves convergence.
Has companion code repository: https://github.com/rwth-i6/returnn-experiments/tree/master/2021-formal-peaky-behavior-ctc