On choosing between experimenting and thinking when learning (Q1308981)
From MaRDI portal
scientific article; zbMATH DE number 465562
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | On choosing between experimenting and thinking when learning | scientific article; zbMATH DE number 465562 | |
Statements
On choosing between experimenting and thinking when learning (English)
2 June 1994
The paper describes a model of inductive inference that extends the Bayesian approach by explicitly accounting for the computational cost of formulating predictions to be tested. Given prior probabilities for each possible theory explaining the current set of observations (initially empty), the system, called ALICE, updates these probabilities in a Bayesian manner as evidence is gathered. Evidence is gathered either by running an experiment or by predicting the result of a particular experiment. After a finite number of steps, ALICE can eliminate a finite number of theories, thereby reducing the number of candidates; at no point, however, can she be certain of having discovered the truth. That is, for each initial probability assignment, ALICE is expected to perform an infinite amount of work to discover the truth, and it is shown how this amount of work depends on the number of incorrect theories. Several strategies for choosing the next action are discussed: (i) the refute-most-weight strategy, which maximizes the expected total probability of the theories eliminated by the chosen action, and thus eliminates wrong theories as quickly as possible; (ii) the minimize-entropy strategy, which minimizes the entropy of the posterior probability distribution; (iii) the maximize-leader strategy, which maximizes the highest probability assigned to any theory; and (iv) the minimize-error strategy, which minimizes the expected total probability assigned to incorrect theories. All of these strategies are shown to perform within a constant factor of the optimum at eliminating wrong theories, in terms of the expected cost required to eliminate the first \(r\) theories.
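The Bayesian updating and entropy-based action selection sketched in the abstract can be illustrated with a small toy model. This is not the paper's ALICE system; the function names, the dictionary encoding of experiments, and the two-theory example are all hypothetical, chosen only to show how a minimize-entropy-style strategy would score a candidate experiment under stated assumptions.

```python
import math

def bayes_update(priors, likelihoods):
    """Posterior over theories after one observation.
    priors: {theory: P(theory)}; likelihoods: {theory: P(evidence | theory)}."""
    unnorm = {t: priors[t] * likelihoods[t] for t in priors}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_entropy(priors, experiment):
    """Expected posterior entropy after running an experiment.
    experiment: {outcome: {theory: P(outcome | theory)}} (hypothetical encoding).
    A minimize-entropy strategy would pick the action minimizing this value."""
    total = 0.0
    for outcome_lik in experiment.values():
        p_outcome = sum(priors[t] * outcome_lik[t] for t in priors)
        if p_outcome > 0:
            total += p_outcome * entropy(bayes_update(priors, outcome_lik))
    return total

# Toy example: two theories and one binary experiment that theory A predicts
# always succeeds, while theory B predicts success only half the time.
priors = {"A": 0.5, "B": 0.5}
exp = {
    "success": {"A": 1.0, "B": 0.5},
    "failure": {"A": 0.0, "B": 0.5},
}
print(expected_entropy(priors, exp))  # below the prior entropy of 1 bit
```

Note that a "failure" outcome refutes theory A outright (its posterior drops to zero), which is the sense in which gathering evidence lets the learner eliminate wrong theories.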
learning
probabilistic reasoning
adaptive systems
inductive inference