Computational Learning Theory presents the theoretical issues in machine learning and computational models of learning. This book covers a wide range of problems in concept learning, inductive inference, and pattern recognition. Organized into three parts encompassing 32 chapters, the book begins with an overview of an inductive principle based on weak convergence of probability measures. The text then examines a framework for constructing learning algorithms. Other chapters consider the formal theory of learning — learning in the sense of improving computational efficiency, as opposed to concept learning. The book also discusses informed parsimonious (IP) inference, which generalizes the compatibility and weighted-parsimony techniques most commonly applied in biology. The final chapter deals with the construction of prediction algorithms for a setting in which a learner faces a sequence of trials, with a prediction to be given in each, and the goal of the learner is to make few mistakes. This book is a valuable resource for students and teachers.
Foreword

Invited Lecture
Inductive Principles of the Search for Empirical Dependences (Methods Based on Weak Convergence of Probability Measures)

Technical Papers
Polynomial Learnability of Semilinear Sets
Learning Nested Differences of Intersection-Closed Concept Classes
A Polynomial-Time Algorithm for Learning k-Variable Pattern Languages from Examples
On Learning from Exercises
On Approximate Truth
Informed Parsimonious Inference of Prototypical Genetic Sequences
Complexity Issues in Learning by Neural Nets
Equivalence Queries and Approximate Fingerprints
Learning Read-Once Formulas Using Membership Queries
Learning Simple Deterministic Languages
Learning in the Presence of Inaccurate Information
Convergence to Nearly Minimal Size Grammars by Vacillating Learning Machines
Inductive Inference with Bounded Number of Mind Changes
Learning Via Queries to an Oracle
Learning Structure from Data: A Survey
A Statistical Approach to Learning and Generalization in Layered Neural Networks
The Light Bulb Problem
From On-Line to Batch Learning
A Parametrization Scheme for Classifying Models of Learnability
On the Role of Search for Learning
Elementary Formal System as a Unifying Framework for Language Learning
Identification of Unions of Languages Drawn from an Identifiable Class
Induction from the General to the More General
Space-Bounded Learning and the Vapnik-Chervonenkis Dimension
Reliable and Useful Learning

Short Abstracts
The Strength of Weak Learnability
On the Complexity of Learning from Counterexamples
Generalizing the PAC Model: Sample Size Bounds From Metric Dimension-Based Uniform Convergence Results
A Theory of Learning Simple Concepts Under Simple Distributions
Learning Binary Relations and Total Orders
The Weighted Majority Algorithm

Author Index