Intermediate Robot Building
David Cook (Author)
During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees, and boosting (the first comprehensive treatment of this topic in any book).

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie wrote much of the statistical modeling software in S-PLUS and invented principal curves and surfaces. Tibshirani proposed the Lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools, including CART, MARS, and projection pursuit.

FROM THE REVIEWS: TECHNOMETRICS "[This] is a vast and complex book. Generally, it concentrates on explaining why and how the methods work, rather than how to use them. Examples and especially the visualizations are principal features... As a source for the methods of statistical learning... it will probably be a long time before there is a competitor to this book."
This book investigates propositional intuitionistic and modal logics from an entirely new point of view, covering quite recent and in some cases as yet unpublished results. It deals mainly with the structure of the category of finitely presented Heyting and modal algebras, relating it to both proof-theoretic and model-theoretic facts: existence of model completions, amalgamability, Beth definability, interpretability of second-order quantifiers, uniform interpolation, definability of dual connectives such as difference, and projectivity are among the numerous topics covered. Dualities and sheaf representations are the main techniques in the book, together with Ehrenfeucht-Fraïssé games and bounded bisimulations. The categorical instruments employed are rich, but a dedicated extended Appendix explains to the reader all the concepts used in the text, from the most basic definitions up to what is needed from topos theory.

Audience: The book is addressed to a broad spectrum of professional logicians, from areas as different as modal logic, categorical and algebraic logic, model theory, and universal algebra.
When it was first published in 1972, Hubert Dreyfus's manifesto on the inherent inability of disembodied machines to mimic higher mental functions caused an uproar in the artificial intelligence community. The world has changed since then. Today it is clear that "good old-fashioned AI," based on the idea of using symbolic representations to produce general intelligence, is in decline (although several believers still pursue its pot of gold), and the focus of the AI community has shifted to more complex models of the mind. It has also become more common for AI researchers to seek out and study philosophy. For this edition of his now classic book, Dreyfus has added a lengthy new introduction outlining these changes and assessing the paradigms of connectionism and neural networks that have transformed the field.

At a time when researchers were proposing grand plans for general problem solvers and automatic translation machines, Dreyfus predicted that they would fail because their conception of mental functioning was naive, and he suggested that they would do well to acquaint themselves with modern philosophical approaches to human beings. What Computers Can't Do was widely attacked but quietly studied. Dreyfus's arguments are still provocative and focus our attention once again on what it is that makes human beings unique.

Hubert L. Dreyfus, Professor of Philosophy at the University of California, Berkeley, is also the author of Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I.