The Power of Vacillation in Language Learning
From MaRDI portal
Publication: 4268851
DOI: 10.1137/S0097539793249694
zbMath: 0939.68099
OpenAlex: W2002485318
MaRDI QID: Q4268851
Publication date: 28 October 1999
Published in: SIAM Journal on Computing
Full work available at URL: https://doi.org/10.1137/s0097539793249694
Mathematics Subject Classification:
Learning and adaptive systems in artificial intelligence (68T05)
Formal languages and automata (68Q45)
Related Items (42)
Intrinsic complexity of learning geometrical concepts from positive data
Effectivity Questions for Kleene's Recursion Theorem
Learnability: Admissible, co-finite, and hypersimple languages
Learning in the presence of inaccurate information
Effectivity questions for Kleene's recursion theorem
Results on memory-limited U-shaped learning
Strongly non-U-shaped language learning results by general techniques
Infinitary self-reference in learning theory
On aggregating teams of learning machines
Prudence in vacillatory language identification
Learning theory in the arithmetic hierarchy. II.
Computability-theoretic learning complexity
Machine induction without revolutionary paradigm shifts
Gold-Style Learning Theory
Uncountable automatic classes and learning
Prescribed Learning of R.E. Classes
Learning in Friedberg Numberings
Numberings Optimal for Learning
When unlearning helps
Learning by switching type of information
Non-U-shaped vacillatory and team learning
Learnability and positive equivalence relations
Learning in Friedberg numberings
On the non-existence of maximal inference degrees for language identification
Learnability of automatic classes
Numberings optimal for learning
Hypothesis spaces for learning
Synthesizing learners tolerating computable noisy data
Vacillatory and BC learning on noisy data
Variations on U-shaped learning
Learnability and positive equivalence relations
Topological separations in inductive inference
Resource restricted computability theoretic learning: Illustrative topics and problems
Hypothesis Spaces for Learning
U-shaped, iterative, and iterative-with-counter learning
Learning correction grammars
Prescribed learning of r.e. classes
Uncountable Automatic Classes and Learning
Maximal machine learnable classes
The synthesis of language learners
Incremental concept learning for bounded data mining
On the role of update constraints and text-types in iterative learning