A model for learning to segment temporal sequences, utilizing a mixture of RNN experts together with adaptive variance
DOI: 10.1016/J.NEUNET.2008.09.005 · zbMath: 1254.68215 · DBLP: journals/nn/NamikawaT08 · arXiv: 0706.1317 · OpenAlex: W2020621358 · Wikidata: Q48872888 · Scholia: Q48872888 · MaRDI QID: Q1932118
Publication date: 17 January 2013
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/0706.1317
Keywords: maximum likelihood estimation; recurrent neural network; mixture of experts; self-organization; segmentation of temporal sequences
Learning and adaptive systems in artificial intelligence (68T05); Numerical analysis or methods applied to Markov chains (65C40)
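The title and keywords describe a gated mixture of recurrent-network experts whose output variances are adapted under maximum likelihood, so that different experts come to account for different segments of a temporal sequence. The sketch below is only an illustration of that general technique, not the paper's implementation; the class and parameter names (MixtureOfRNNExperts, n_experts, log_var) and the PyTorch framing are assumptions.

```python
# Minimal sketch (assumed, not the authors' code): a mixture of RNN experts
# with per-expert adaptive variance, trained by maximizing the Gaussian
# mixture log-likelihood of next-step predictions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfRNNExperts(nn.Module):
    def __init__(self, input_dim=1, hidden_dim=16, n_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.RNN(input_dim, hidden_dim, batch_first=True) for _ in range(n_experts)]
        )
        self.means = nn.ModuleList(
            [nn.Linear(hidden_dim, input_dim) for _ in range(n_experts)]
        )
        # One learnable log-variance per expert: the "adaptive variance".
        self.log_var = nn.Parameter(torch.zeros(n_experts))
        # Gating network that assigns soft responsibilities over experts per step.
        self.gate_rnn = nn.RNN(input_dim, hidden_dim, batch_first=True)
        self.gate_out = nn.Linear(hidden_dim, n_experts)

    def forward(self, x):
        # x: (batch, seq, input_dim); predict x[:, 1:] from x[:, :-1].
        inp, target = x[:, :-1], x[:, 1:]
        gate_h, _ = self.gate_rnn(inp)
        log_gates = F.log_softmax(self.gate_out(gate_h), dim=-1)  # (B, T, K)
        log_probs = []
        for k, (rnn, head) in enumerate(zip(self.experts, self.means)):
            h, _ = rnn(inp)
            mu = head(h)
            var = self.log_var[k].exp()
            # Gaussian log-likelihood of the target under expert k.
            lp = -0.5 * (((target - mu) ** 2) / var
                         + self.log_var[k] + math.log(2 * math.pi))
            log_probs.append(lp.sum(dim=-1))  # sum over output dims -> (B, T)
        log_probs = torch.stack(log_probs, dim=-1)            # (B, T, K)
        # Mixture log-likelihood: log sum_k gate_k * N(target | mu_k, var_k).
        ll = torch.logsumexp(log_gates + log_probs, dim=-1)   # (B, T)
        return -ll.mean()  # negative log-likelihood to minimize

# Toy usage: a sequence that alternates between two sine regimes, which the
# gating network can learn to segment by switching experts.
if __name__ == "__main__":
    t = torch.linspace(0, 20, 400)
    seq = torch.where((t % 10) < 5, torch.sin(2 * t), torch.sin(5 * t))
    x = seq.view(1, -1, 1)
    model = MixtureOfRNNExperts()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        opt.zero_grad()
        loss = model(x)
        loss.backward()
        opt.step()
```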
Related Items (1)
Uses Software
Cites Work
- Dynamical networks in function dynamics
- Functional dynamics. I: Articulation process
- Learning to imitate stochastic time series in a compositional way by chaos
- Learning Nonregular Languages: A Comparison of Simple Recurrent Networks and LSTM
- Recurrent Neural Networks with Small Weights Implement Definite Memory Machines
- Dynamics and computation in functional shifts
- Chaotic itinerancy
- Functional dynamics. II: Syntactic structure