Computer Vision

Computer Vision
by Richard Szeliski

Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem, and what is the current state of the art?

Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos.

More than just a source of “recipes,” this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques.
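
To make concrete what “formulating physical models of the imaging process” involves, a standard example of such a forward model is the pinhole (perspective) projection used throughout geometric computer vision, written here in conventional notation rather than quoted from the text:

    \lambda \, (u, v, 1)^{\top} = K \, [\, R \mid t \,] \, (X, Y, Z, 1)^{\top}

Here (X, Y, Z) is a 3D scene point, (u, v) its pixel coordinates, K the matrix of camera intrinsics, [R | t] the camera pose, and \lambda a depth-dependent scale factor. “Inverting” such a model means recovering scene structure and camera parameters from the observed image measurements.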

Topics and features:

  • Structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses
  • Presents exercises at the end of each chapter with a heavy emphasis on testing algorithms, along with numerous suggestions for small mid-term projects
  • Provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory
  • Suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book
  • Supplies supplementary course material for students at the associated website, http://szeliski.org/Book/

Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.


Computer Vision
by Anup Basu, Xiaobo Li

This book contains a selection of papers which were presented at the Vision Interface ’92 Conference. It also includes several invited articles from prominent researchers in the field, suggesting future directions in Computer Vision.

Deep Learning for Computer Vision
by Rajalingappaa Shanmugamani

Learn how to model and train advanced neural networks to implement a variety of Computer Vision tasks

Key Features

  • Train different kinds of deep learning models from scratch to solve specific problems in Computer Vision
  • Combine the power of Python, Keras, and TensorFlow to build deep learning models for object detection, image classification, similarity learning, image captioning, and more
  • Includes tips on optimizing and improving the performance of your models under various constraints

Book Description

Deep learning has shown its power in several application areas of Artificial Intelligence, especially in Computer Vision. Computer Vision is the science of understanding and manipulating images, and finds enormous applications in the areas of robotics, automation, and so on. This book will show you, with practical examples, how to develop Computer Vision applications by leveraging the power of deep learning.

In this book, you will learn different techniques related to object classification, object detection, image segmentation, captioning, image generation, face analysis, and more. You will also explore their applications using popular Python libraries such as TensorFlow and Keras. This book will help you master state-of-the-art deep learning algorithms and their implementation.
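
As a taste of the kind of workflow this leads to, here is a minimal transfer-learning sketch with TensorFlow/Keras. It is illustrative only, not code from the book; the class count and dataset path are hypothetical placeholders:

    # Minimal image-classification sketch: reuse a pre-trained CNN as a frozen feature extractor.
    import tensorflow as tf

    NUM_CLASSES = 10  # hypothetical number of categories in your own dataset

    backbone = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
    backbone.trainable = False  # keep the pre-trained convolutional features fixed

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 expects inputs in [-1, 1]
        backbone,
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # train_ds would be a tf.data.Dataset of (image, label) batches, for example built with
    # tf.keras.utils.image_dataset_from_directory("data/train", image_size=(224, 224)).
    # model.fit(train_ds, epochs=5)

The same pattern of pooled features from a pre-trained backbone is also the usual starting point for the image-retrieval task listed below.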

What you will learn

  • Set up an environment for deep learning with Python, TensorFlow, and Keras
  • Define and train a model for image and video classification
  • Use features from a pre-trained Convolutional Neural Network model for image retrieval
  • Understand and implement object detection using the real-world Pedestrian Detection scenario
  • Learn about various problems in image captioning and how to overcome them by training images and text together
  • Implement similarity matching and train a model for face recognition
  • Understand the concept of generative models and use them for image generation
  • Deploy your deep learning models and optimize them for high performance

Who this book is for

This book is targeted at data scientists and Computer Vision practitioners who wish to apply the concepts of Deep Learning to overcome any problem related to Computer Vision. A basic knowledge of programming in Python—and some understanding of machine learning concepts—is required to get the best out of this book.


A Guide to Convolutional Neural Networks for Computer Vision
by Salman Khan, Hossein Rahmani, Syed Afaq Ali Shah, Mohammed Bennamoun

Computer vision has become increasingly important and effective in recent years due to its wide-ranging applications in areas as diverse as smart surveillance and monitoring, health and medicine, sports and recreation, robotics, drones, and self-driving cars. Visual recognition tasks, such as image classification, localization, and detection, are the core building blocks of many of these applications, and recent developments in Convolutional Neural Networks (CNNs) have led to outstanding performance in these state-of-the-art visual recognition tasks and systems. As a result, CNNs now form the crux of deep learning algorithms in computer vision.

This self-contained guide will benefit those who seek both to understand the theory behind CNNs and to gain hands-on experience applying CNNs in computer vision. It provides a comprehensive introduction to CNNs, starting with the essential concepts behind neural networks: training, regularization, and optimization of CNNs. The book also discusses a wide range of loss functions, network layers, and popular CNN architectures, reviews the different techniques for the evaluation of CNNs, and presents some popular CNN tools and libraries that are commonly used in computer vision. Further, this text describes and discusses case studies related to the application of CNNs in computer vision, including image classification, object detection, semantic segmentation, scene understanding, and image generation.
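
To make the vocabulary of layers, losses, and regularization concrete, a toy CNN classifier of the kind such a guide analyzes might be sketched in Keras as follows. This is an illustrative sketch under assumed 32x32 RGB inputs and ten classes, not code from the book:

    # A small convolutional classifier: stacked conv/pool blocks followed by a dense head.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 3)),                                 # e.g. CIFAR-10-sized images
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),                                      # dropout: one common regularizer
        tf.keras.layers.Dense(10, activation="softmax"),                   # ten output classes assumed
    ])

    # Cross-entropy is the canonical classification loss; Adam or SGD are the usual optimizers.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()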

This book is ideal for undergraduate and graduate students, since no prior background knowledge in the field is required to follow the material, as well as for new researchers, developers, engineers, and practitioners who are interested in gaining a quick understanding of CNN models.


Computer Vision, Models, and Inspection
by A. Dave Marshall, Ralph R. Martin

The main focus of this book is on the uses of computer vision for inspection and model-based matching. It also provides a short, self-contained introductory course on computer vision. The authors describe various state-of-the-art approaches to problems and then set forth their proposed approach to matching and inspection. They deal primarily with 3-D vision but also discuss 2-D vision strategies when relevant. The book is suitable for researchers, final-year undergraduates and graduate students. Useful review questions at the end of each chapter allow this book to be used for self-study.

Handbook of Machine and Computer Vision
by Alexander Hornberg

The second edition of this accepted reference work has been updated to reflect the rapid developments in the field and now covers both 2D and 3D imaging.
Written by expert practitioners from leading companies operating in machine vision, this one-stop handbook guides readers through all aspects of image acquisition and image processing, including optics, electronics and software. The authors approach the subject in terms of industrial applications, elucidating such topics as illumination and camera calibration. Initial chapters concentrate on the latest hardware aspects, ranging from lenses and camera systems to camera-computer interfaces, with the software necessary discussed to an equal depth in later sections. These include digital image basics as well as image analysis and image processing. The book concludes with extended coverage of industrial applications in optics and electronics, backed by case studies and design strategies for the conception of complete machine vision systems. As a result, readers are not only able to understand the latest systems, but also to plan and evaluate this technology.
More than 500 images and tables illustrate the relevant principles and steps.

Computer Vision
by Pedram Azad, Tilo Gockel, Rüdiger Dillmann

Computer vision is probably the most exciting branch of image processing, and the number of applications in robotics, automation technology and quality control is constantly increasing. Unfortunately, entering this research area is, as yet, not simple. Those who are interested must first go through a lot of books, publications and software libraries. With this book, however, the first step is easy. The theoretically founded content is understandable and is supplemented by many practical examples. Source code is provided with the specially developed platform-independent open source library IVT in the programming language C/C++. The use of the IVT is not necessary, but it does make for a much easier entry and allows first developments to be quickly produced.

Advances in Computer Vision and Information Technology
by K. V. Kale

The latest trends in information technology represent a new intellectual paradigm for scientific exploration and the visualization of scientific phenomena. This title covers the emerging technologies in the field. Academics, engineers, industrialists, scientists and researchers engaged in teaching, and research and development of computer science and information technology will find the book useful for their academic and research work. This text includes 225 articles covering the following topics:

  • Advance Networking and Security/Wireless Networking/Cyber Laws.
  • Advance Software Computing.
  • Artificial Intelligence/Natural Language Processing/Neural Networks.
  • Bioinformatics/Biometrics.
  • Data Mining/E-Commerce/E-Learning.
  • Image Processing, Content Based Image Retrieval, Medical and Bio-Medical Imaging, Wavelets.
  • Information Processing/Audio and Text Processing/Cryptology, Steganography and Digital Watermarking.
  • Pattern Recognition/Machine Vision/Image Motion, Video Processing.
  • Signal Processing and Communication/Remote Sensing.
  • Speech Processing & Recognition, Human Computer Interaction.
  • Information and Communication Technology.

Readings in Computer Vision
by Martin A. Fischler, Oscar Firschein

The field of computer vision combines techniques from physics, mathematics, psychology, artificial intelligence, and computer science to examine how machines might construct meaningful descriptions of their surrounding environment. The editors of this volume, prominent researchers and leaders of the SRI International AI Center Perception Group, have selected sixty papers, most published since 1980, with the viewpoint that computer vision is concerned with solving seven basic problems:

  • Reconstructing 3D scenes from 2D images
  • Decomposing images into their component parts
  • Recognizing and assigning labels to scene objects
  • Deducing and describing relations among scene objects
  • Determining the nature of computer architectures that can support the visual function
  • Representing abstractions in the world of computer memory
  • Matching stored descriptions to image representation

Each chapter of this volume addresses one of these problems through an introductory discussion, which identifies major ideas and summarizes approaches, and through reprints of key research papers. Two appendices on crucial assumptions in image interpretation and on parallel architectures for vision applications, a glossary of technical terms, and a comprehensive bibliography and index complete the volume.


Soft Computing & Intelligent Systems

Analysis and Design of Intelligent Systems Using Soft Computing Techniques
by Patricia Melin, Oscar Castillo, Eduardo G. Ramírez, Witold Pedrycz

This book comprises a selection of papers from IFSA 2007 on new methods for analysis and design of hybrid intelligent systems using soft computing techniques. Soft Computing (SC) consists of several computing paradigms, including fuzzy logic, neural networks, and genetic algorithms, which can be used to produce powerful hybrid intelligent systems for solving problems in pattern recognition, time series prediction, intelligent control, robotics and automation. Hybrid intelligent systems that combine several SC techniques are needed due to the complexity and high dimensionality of real-world problems. Hybrid intelligent systems can have different architectures, which have an impact on the efficiency and accuracy of these systems; for this reason it is very important to optimize architecture design. The architectures can combine, in different ways, neural networks, fuzzy logic and genetic algorithms, to achieve the ultimate goal of pattern recognition, time series prediction, intelligent control, or other application areas. This book is intended to be a major reference for scientists and engineers interested in applying new computational and mathematical tools to design hybrid intelligent systems. It can also be used as a reference for graduate courses such as soft computing, intelligent pattern recognition, computer vision, applied artificial intelligence, and similar ones. The book is divided into twelve main parts. Each part contains a set of papers on a common subject, so that the reader can find similar papers grouped together.

Soft Computing and Intelligent Systems Design
by Fakhreddine O. Karray, Clarence W. De Silva

Traditional artificial intelligence (AI) techniques are based around mathematical techniques of symbolic logic, with programming in languages such as LISP (dating from the late 1950s) and Prolog (from the early 1970s). These are referred to as “crisp” techniques by the soft computing community. The new wave of AI methods seeks inspiration from the world of biology, and is being used to create numerous real-world intelligent systems with the aid of soft computing tools. These new methods are being increasingly taught at the upper end of the curriculum, sometimes as an adjunct to traditional AI courses, and sometimes as a replacement for them. Where a more radical approach is taken and the course is taught at an introductory level, we have recently published Negnevitsky’s book. Karray and De Silva will be suitable for the majority of courses, which will be found at an advanced level. Karray and De Silva cover the problem of control and intelligent systems design using soft-computing techniques in an integrated manner. They present both theory and applications, including industrial applications, and the book contains numerous worked examples, problems and case studies. Covering the state of the art in soft-computing techniques, the book gives the reader sufficient knowledge to tackle a wide range of complex systems for which traditional techniques are inadequate.


Intelligent Systems and Soft Computing
by Behnam Azvine, Nader Azarmi, Detlef D. Nauck

Artificial intelligence has traditionally focused on solving human-centered problems like natural language processing or common-sense reasoning. On the other hand, for a while now soft computing has been applied successfully in areas like pattern recognition, clustering, or automatic control. The papers in this book explore the possibility of bringing these two areas together.
This book is unique in the way it concentrates on building intelligent software systems by combining methods from diverse disciplines, such as fuzzy set theory, neuroscience, agent technology, knowledge discovery, and symbolic artificial intelligence. The first part of the book focuses on foundational aspects and future directions; the second part provides the reader with an overview of recently developed software tools for building flexible intelligent systems; the final section presents applications developed in various fields.

Soft Computing and Intelligent Systems
by Madan M. Gupta

The field of soft computing is emerging from the cutting-edge research of the last ten years devoted to fuzzy engineering and genetic algorithms. The subject is being called soft computing and computational intelligence. With acceptance of the research fundamentals in these important areas, the field is expanding into direct applications through engineering and systems science.

This book covers the fundamentals of this emerging field, as well as direct applications and case studies. There is a need for practicing engineers, computer scientists, and system scientists to directly apply “fuzzy” engineering into a wide array of devices and systems.


Soft Computing and Intelligent Systems
by Madan M. Gupta

Contents include:

  • Outline of a computational theory of perceptions based on computing with words / L.A. Zadeh
  • Introduction to soft computing and intelligent control systems / N.K. Sinha and M.M. Gupta
  • Computational issues in intelligent control / X.D. Koutsoukos and P.J. Antsaklis
  • Neural networks — a guided tour / S. Haykin
  • On generating variable structure organization using a genetic algorithm / A.K. Zaidi and A.H. Levis
  • Evolutionary algorithms and neural networks / R.G.S. Asthana
  • Neural networks and fuzzy systems / P. Musilek and M.M. Gupta
  • Fuzzy neural networks / P. Musilek and M.M. Gupta
  • A cursory look at parallel and distributed architectures and biologically inspired computing / S.K. Basu
  • Developments in learning control systems / J.X. Xu … [et al.]
  • Techniques for genetic adaptive control / W.K. Lennon and K.M. Passino
  • Cooperative behavior of intelligent agents: theory and practice / L. Vlacic, A. Engwirda, and M. Kajitani
  • Expert systems in process diagnosis …

Hybrid Intelligent Systems for Pattern Recognition Using Soft Computing
by Patricia Melin, Oscar Castillo

This monograph describes new methods for intelligent pattern recognition using soft computing techniques including neural networks, fuzzy logic, and genetic algorithms. Hybrid intelligent systems that combine several soft computing techniques are needed due to the complexity of pattern recognition problems. Hybrid intelligent systems can have different architectures, which affect how efficiently and accurately they achieve the ultimate goal of pattern recognition. This book also shows results of the application of hybrid intelligent systems to real-world problems of face, fingerprint, and voice recognition. This monograph is intended to be a major reference for scientists and engineers applying new computational and mathematical tools to intelligent pattern recognition and can also be used as a textbook for graduate courses in soft computing, intelligent pattern recognition, computer vision, or applied artificial intelligence.


Handbook of Research on Novel Soft Computing Intelligent Algorithms
by Pandian Vasant

As technologies grow more complex, modeling and simulation of new intelligent systems becomes increasingly challenging and nuanced; specifically in diverse fields such as medicine, engineering, and computer science. Handbook of Research on Novel Soft Computing Intelligent Algorithms: Theory and Practical Applications explores emerging technologies and best practices to effectively address concerns inherent in properly optimizing advanced systems. With applications in areas such as bio-engineering, space exploration, industrial informatics, information security, and nuclear and renewable energies, this exceptional reference will serve as an important tool for decision makers, managers, researchers, economists, and industrialists across a wide range of scientific fields.

International Proceedings on Advances in Soft Computing, Intelligent Systems and Applications
by M. Sreenivasa Reddy, K. Viswanath, Shiva Prasad K.M.

The book focuses on the state-of-the-art technologies pertaining to advances in soft computing, intelligent systems and applications. The Proceedings of ASISA 2016 present novel and original work in soft computing, intelligent systems and applications by experts and budding researchers. These are cutting-edge technologies that have immense application in various fields. The papers discuss many real-world complex problems that cannot be easily handled with traditional mathematical methods and for which soft computing techniques can provide effective solutions. Soft computing represents a collection of computational techniques inheriting inspiration from evolutionary algorithms, nature-inspired algorithms, bio-inspired algorithms, neural networks and fuzzy logic.


Soft Computing Based Modeling in Intelligent Systems
by Valentina Emilia Balas, János Fodor, Annamária R. Várkonyi-Kóczy

The book “Soft Computing Based Modeling in Intelligent Systems” contains the extended works originally presented at the IEEE International Workshop SOFA 2005 and additional papers. SOFA, an acronym for SOFt computing and Applications, is an international workshop intended to advance the theory and applications of intelligent systems and soft computing. Lotfi Zadeh, the inventor of fuzzy logic, has suggested the term “Soft Computing.” He created the Berkeley Initiative of Soft Computing (BISC) to connect researchers working in these new areas of AI. Professor Zadeh participated actively in our workshop. Soft Computing techniques are tolerant to imprecision, uncertainty and partial truth. Due to the large variety and complexity of the domain, the constituting methods of Soft Computing are not competing for a comprehensive ultimate solution. Instead they are complementing each other, for dedicated solutions adapted to each specific problem. Hundreds of concrete applications are already available in many domains. Model based approaches offer a very challenging way to integrate a priori knowledge into procedures. Due to their flexibility, robustness, and easy interpretability, the soft computing applications will continue to have an exceptional role in our technologies. The applications of Soft Computing techniques in emerging research areas show its maturity and usefulness. The IEEE International Workshop SOFA 2005, held in Szeged, Hungary and Arad, Romania in 2005, has led to the publication of these two edited volumes. This volume contains Soft Computing methods and applications in modeling, optimisation and prediction.

Nature Of Statistical Learning Theory

The Nature of Statistical Learning Theory
by Vladimir N. Vapnik

The aim of this book is to discuss the fundamental ideas which lie behind the statistical theory of learning and generalization. It considers learning from the general point of view of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics. These include:

  • the general setting of learning problems and the general model of minimizing the risk functional from empirical data
  • a comprehensive analysis of the empirical risk minimization principle, showing how it allows the construction of necessary and sufficient conditions for consistency
  • non-asymptotic bounds for the risk achieved using the empirical risk minimization principle
  • principles for controlling the generalization ability of learning machines using small sample sizes
  • a new type of universal learning machine that controls the generalization ability.
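
For readers new to the terminology, the risk functional and its empirical counterpart referred to above can be written as follows (standard notation, not quoted from the book):

    R(\alpha) = \int L\bigl(y, f(x, \alpha)\bigr)\, dP(x, y)
    R_{\mathrm{emp}}(\alpha) = \frac{1}{n} \sum_{i=1}^{n} L\bigl(y_i, f(x_i, \alpha)\bigr)

where L is a loss function, f(x, \alpha) a candidate function indexed by parameters \alpha, and (x_i, y_i) the n training samples drawn from the unknown distribution P. The empirical risk minimization principle selects the \alpha that minimizes the empirical risk; the book’s central question is when, and how tightly, doing so also controls the true risk R(\alpha).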

The Nature of Statistical Learning Theory
by Vladimir Vapnik

The aim of this book is to discuss the fundamental ideas which lie behind the statistical theory of learning and generalization. It considers learning as a general problem of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics. These include:

  • the setting of learning problems based on the model of minimizing the risk functional from empirical data
  • a comprehensive analysis of the empirical risk minimization principle, including necessary and sufficient conditions for its consistency
  • non-asymptotic bounds for the risk achieved using the empirical risk minimization principle
  • principles for controlling the generalization ability of learning machines using small sample sizes, based on these bounds
  • the Support Vector methods that control the generalization ability when estimating functions from small sample sizes.

The second edition of the book contains three new chapters devoted to further development of the learning theory and SVM techniques. These include:

  • the theory of direct methods of learning based on solving multidimensional integral equations for density, conditional probability, and conditional density estimation
  • a new inductive principle of learning.

Written in a readable and concise style, the book is intended for statisticians, mathematicians, physicists, and computer scientists. Vladimir N. Vapnik is Technology Leader at AT&T Labs-Research and Professor of London University. He is one of the founders of statistical learning theory, and the author of seven books published in English, Russian, German, and Chinese.
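
As a concrete point of reference for the Support Vector methods mentioned above, the standard soft-margin formulation for binary classification can be written as (conventional notation, not quoted from the book):

    \min_{w, b, \xi} \; \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} \xi_i
    \text{subject to} \quad y_i \bigl( \langle w, x_i \rangle + b \bigr) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, n

Minimizing \|w\|^2 maximizes the margin, which is the capacity-control mechanism through which these methods manage generalization ability from small samples.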

An Introduction to Statistical Learning
by Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani

An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform.

Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.
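
The book’s labs are written in R, but the ideas translate directly to other environments; for instance, the shrinkage topic listed above corresponds to penalized estimators such as the lasso, sketched here with scikit-learn on synthetic data (an illustrative sketch, not material from the book):

    # Lasso regression with the shrinkage penalty chosen by cross-validation (a resampling method).
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import train_test_split

    # Synthetic data: 200 samples, 20 predictors, only 5 of which are truly informative.
    X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                           noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # LassoCV selects the regularization strength alpha by 5-fold cross-validation.
    model = LassoCV(cv=5).fit(X_train, y_train)
    print("chosen alpha:", model.alpha_)
    print("test R^2:", model.score(X_test, y_test))
    print("coefficients shrunk exactly to zero:", int(np.sum(model.coef_ == 0)))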


Statistical Learning Theory and Stochastic Optimization
by Olivier Catoni

Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea to use, as is often done in practice, a notoriously “wrong” (i.e. over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of stochastic optimization algorithms of common use in computing estimators. The author focuses on non-asymptotic bounds of the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.
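
For orientation, the second of these two objects can be made concrete: given a prior \pi over parameters and an empirical risk r(\theta), the Gibbs (exponentially tilted) posterior at inverse temperature \beta is, in standard PAC-Bayesian notation not taken from the book,

    d\rho_{\beta}(\theta) = \frac{\exp\{-\beta\, r(\theta)\}\, d\pi(\theta)}{\int \exp\{-\beta\, r(\theta')\}\, d\pi(\theta')}

Non-asymptotic PAC-Bayesian bounds of the kind developed in this line of work control the risk of such randomized estimators, with the relative entropy (Kullback-Leibler divergence) between a posterior and the prior appearing as the complexity term.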


Estimation of Dependences Based on Empirical Data
by V. Vapnik

Twenty-five years have passed since the publication of the Russian version of the book Estimation of Dependencies Based on Empirical Data (EDBED for short). Twenty-five years is a long period of time. During these years many things have happened. Looking back, one can see how rapidly life and technology have changed, and how slow and difficult it is to change the theoretical foundation of the technology and its philosophy. I pursued two goals writing this Afterword: to update the technical results presented in EDBED (the easy goal) and to describe a general picture of how the new ideas developed over these years (a much more difficult goal). The picture which I would like to present is a very personal (and therefore very biased) account of the development of one particular branch of science, Empirical Inference Science. Such accounts usually are not included in the content of technical publications. I have followed this rule in all of my previous books. But this time I would like to violate it for the following reasons. First of all, for me EDBED is the important milestone in the development of empirical inference theory and I would like to explain why. Second, during these years, there were a lot of discussions between supporters of the new paradigm (now it is called the VC theory) and the old one (classical statistics).

Measures of Complexity
by Vladimir Vovk, Harris Papadopoulos, Alexander Gammerman

This book brings together historical notes, reviews of research developments, fresh ideas on how to make VC (Vapnik–Chervonenkis) guarantees tighter, and new technical contributions in the areas of machine learning, statistical inference, classification, algorithmic statistics, and pattern recognition.

The contributors are leading scientists in domains such as statistics, mathematics, and theoretical computer science, and the book will be of interest to researchers and graduate students in these domains.
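
For context, one commonly quoted form of the classical VC guarantee for a class of indicator functions of VC dimension h, evaluated on n samples, states that with probability at least 1 - \eta (conventional notation, not quoted from the book):

    R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h\bigl(\ln\frac{2n}{h} + 1\bigr) + \ln\frac{4}{\eta}}{n}}

Making guarantees of this general shape tighter is precisely the theme the contributions described above revolve around.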


Statistical Learning and Data Sciences
by Alexander Gammerman, Vladimir Vovk, Harris Papadopoulos

This book constitutes the refereed proceedings of the Third International Symposium on Statistical Learning and Data Sciences, SLDS 2015, held in Egham, Surrey, UK, in April 2015.
The 36 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 59 submissions. The papers are organized in topical sections on statistical learning and its applications, conformal prediction and its applications, new frontiers in data analysis for nuclear fusion, and geometric data analysis.

The Elements of Statistical Learning
by Trevor Hastie, Robert Tibshirani, Jerome Friedman

During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book’s coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting—the first comprehensive treatment of this topic in any book.

This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression & path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for “wide” data (p bigger than n), including multiple testing and false discovery rates.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.