68:40

Lecture 1 | Machine Learning (Stanford)
Lecture by Professor Andrew Ng for Machine Learning (CS 229) in the Stanford Computer Science department. Professor Ng provides an overview of the course in this introductory meeting. This course provides a broad introduction to machine learning and statistical pattern recognition. Topics include supervised learning, unsupervised learning, learning theory, reinforcement learning and adaptive control. Recent applications of machine learning, such as to robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing are also discussed. Complete Playlist for the Course: www.youtube.com CS 229 Course Website: www.stanford.edu Stanford University: www.stanford.edu Stanford University Channel on YouTube: www.youtube.com
1:59

Machine Learning: About the class
Stanford University will be offering a free, online machine learning class in Fall 2011, taught by Prof. Andrew Ng. Sign up at ml-class.org
76:16

Lecture 2 | Machine Learning (Stanford)
Lecture by Professor Andrew Ng for Machine Learning (CS 229) in the Stanford Computer Science department. Professor Ng lectures on linear regression, gradient descent, and normal equations and discusses how they relate to machine learning. This course provides a broad introduction to machine learning and statistical pattern recognition. Topics include supervised learning, unsupervised learning, learning theory, reinforcement learning and adaptive control. Recent applications of machine learning, such as to robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing are also discussed. Complete Playlist for the Course: www.youtube.com CS 229 Course Website: www.stanford.edu Stanford University: www.stanford.edu Stanford University Channel on YouTube: www.youtube.com
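
As a companion to the lecture topics, here is a minimal NumPy sketch of batch gradient descent and the normal equations for linear regression. It is an illustration only, not Professor Ng's course code; the toy data, learning rate, and iteration count are all arbitrary choices.

```python
import numpy as np

# Toy data: X is an m x 2 design matrix (column of ones for the intercept).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])
y = 3.0 + 2.0 * X[:, 1] + rng.normal(0, 0.5, 50)
m = len(y)

# Batch gradient descent on J(theta) = (1/2m) * ||X @ theta - y||^2.
theta = np.zeros(2)
alpha, iters = 0.01, 5000  # arbitrary learning rate and iteration count
for _ in range(iters):
    theta -= (alpha / m) * X.T @ (X @ theta - y)

# Normal equations: the same minimizer in closed form, theta = (X'X)^-1 X'y.
theta_closed = np.linalg.solve(X.T @ X, X.T @ y)
print(theta, theta_closed)  # both should be close to [3, 2]
```
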
16:27

The Future of Robotics and Artificial Intelligence (Andrew Ng, Stanford University, STAN 2011)
(May 21, 2011) Andrew Ng (Stanford University) is building robots to improve the lives of millions. From autonomous helicopters to robotic perception, Ng's research in machine learning and artificial intelligence could one day result in a robot that can clean your house. STAN: Society, Technology, Art and Nature, was Stanford University's prototype conference for TEDxStanford, and showcased some of the university's top faculty, students, alumni and performers in an intense four-hour event laced with surprising appearances and memorable experiences. STAN, modeled after TED, explored big questions about society, technology, art and nature in a format that invites feedback and engagement. Stanford University: www.stanford.edu STAN 2011: stan2011.stanford.edu Andrew Ng ai.stanford.edu Stanford University Channel on YouTube: www.youtube.com
40:48

NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Machine Learning...
Big Learning Workshop: Algorithms, Systems, and Tools for Learning at Scale at NIPS 2011 Invited Talk: Machine Learning and Hadoop by Josh Wills Abstract: We'll review common use cases for machine learning and advanced analytics found in our customer base at Cloudera and ways in which Apache Hadoop supports these use cases. We'll then discuss upcoming developments for Apache Hadoop that will enable new classes of applications to be supported by the system.
73:14

Lecture 3 | Machine Learning (Stanford)
Lecture by Professor Andrew Ng for Machine Learning (CS 229) in the Stanford Computer Science department. Professor Ng delves into locally weighted regression, probabilistic interpretation, and logistic regression, and how they relate to machine learning. This course provides a broad introduction to machine learning and statistical pattern recognition. Topics include supervised learning, unsupervised learning, learning theory, reinforcement learning and adaptive control. Recent applications of machine learning, such as to robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing are also discussed. Complete Playlist for the Course: www.youtube.com CS 229 Course Website: www.stanford.edu Stanford University: www.stanford.edu Stanford University Channel on YouTube: www.youtube.com
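
For readers who want a concrete handle on locally weighted regression before watching, the sketch below implements the standard formulation (a Gaussian weighting of training points around each query point) in NumPy. It is a minimal illustration with invented toy data, not material from the lecture itself.

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.5):
    """Locally weighted linear regression at one query point.

    Training example i gets weight w_i = exp(-(x_i - x_query)^2 / (2 tau^2)),
    then a weighted least-squares fit is solved in closed form.
    """
    w = np.exp(-((X[:, 1] - x_query) ** 2) / (2 * tau ** 2))
    W = np.diag(w)
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return np.array([1.0, x_query]) @ theta

# Toy 1-D data with an intercept column; tau controls the neighborhood width.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), np.linspace(0, 10, 100)])
y = np.sin(X[:, 1]) + rng.normal(0, 0.1, 100)
print(lwr_predict(5.0, X, y))  # should land near sin(5) ~ -0.96
```
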
3:36

Machines Can Learn
ai-one's Topic-Mapper API enables programmers to build machine learning into applications. ai-one has discovered a new form of artificial intelligence that detects the inherent structure of data at the byte-level.
55:40

Machine Learning in Science and Engineering [21C3]
Machine Learning in Science and Engineering: A Brief Introduction to Machine Learning with a Few Application Examples. A broad overview of the current state of research in machine learning, starting with the general motivation and the setup of learning problems, and a discussion of state-of-the-art learning algorithms for novelty detection, classification, and regression. Additionally, machine learning methods used for spam detection, intrusion detection, brain-computer interfaces, and biological sequence analysis are outlined. The talk has three parts: (a) What is machine learning about? This includes a general motivation and the setup of learning problems (supervised vs. unsupervised; batch vs. online). I'll mention typical examples (e.g., OCR, text classification, medical diagnosis, biological sequence analysis, time series prediction) and use them as motivation. (b) What are state-of-the-art learning techniques? With a minimal amount of theory, I'll describe some methods, including a currently very successful and easily applicable method called Support Vector Machines. I'll provide references to standard literature and implementations of these algorithms. (c) I'll discuss a few applications in greater detail to show how machine learning can be successfully applied in practice. These include: 1) spam detection 2) face detection and reconstruction 3) intelligent hard disk spin (online learning) 4) biological sequence analysis & drug discovery 5) network intrusion ...
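
To make the spam-detection application concrete: a minimal SVM text classifier might look like the following. This sketch uses scikit-learn, which the talk does not mention, and the tiny corpus is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented corpus: 1 = spam, 0 = ham. Real systems train on thousands
# of labeled messages and far richer features.
texts = ["win cash now", "cheap pills offer", "meeting at noon", "lunch tomorrow?"]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["free cash offer", "see you at the meeting"]))  # likely [1 0]
```
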
8:31

Machine Learning (Introduction + Data Mining VS ML)
8:15

ai-one SDK for Machine Learning Applications
Overview of ai-one Topic-Mapper SDK for building machine learning applications with lightweight ontologies.
31:57

Scala and Machine Learning with Andrew McCallum
In this video from the Northeast Scala Symposium, Andrew McCallum, Professor of Computer Science at the University of Massachusetts Amherst, discusses trends in machine learning using Scala. Martin Odersky didn't initially expect Scala to find a following in the field of machine learning because of machine learning's large appetite for memory and numeric computation. But the field is expanding in new ways, with interest in parallel and distributed computation, dynamically changing model structures, and the desire to put easy-to-use DSLs into the hands of non-experts. This talk will describe these trends and discuss several machine learning projects that use Scala, including FACTORIE, a 30k-line DSL for graphical models whose development is being sponsored by Google and the NSF.
6:00

LSE Research: The Mathematics of Machine Learning
Computers struggle with tasks we find simple. But try to describe explicitly the difference between the handwritten numerals 1 and 7, and you begin to appreciate the problem. Professor Martin Anthony explains what role mathematicians play in making computers less stupid. Diagnosing tumours, playing video games, detecting credit card fraud, recognising faces, reading handwriting: they don't seem like similar tasks, but they are all cases where 'machine learning' is employed to enable computers to make intelligent decisions. And although the various tasks look very different, the mathematics behind them is remarkably similar, as Professor Martin Anthony explains in this short film. When computers fail to do something we find easy, such as reading handwriting or recognising faces, it's tempting to think of them as stupid machines. But it's often the case that tasks we find relatively easy to perform evade explicit codification. How, for example, would you specify rules which correctly identified cats and only cats (including three-legged cats) but excluded dogs? Employing ideas from probability theory, statistics, linear algebra, geometry and discrete mathematics, machine learning aims to generate systems of instructions (algorithms) that allow computers to perform cognitive-style tasks. In abstract terms, machine learning involves detecting patterns in very large datasets, clustering together similar objects and distinguishing dissimilar ones. This could help with the detection of anomalies ...
10:42

Weka Machine Learning Tutorial 02: Explorer-Preprocess
59:48

Biologically Inspired Machine Learning
(March 31, 2010) Venkat Rangan, a hardware engineer at Qualcomm Incorporated, discusses hardware, software, and networking challenges that humans will face in creating a neuromorphic computer. Stanford University: www.stanford.edu Stanford School of Engineering: soe.stanford.edu Stanford Engineering Everywhere: see.stanford.edu Stanford University Channel on YouTube: www.youtube.com
41:57

[PURDUE MLSS] Introduction to Machine Learning by Dale Schuurmans Part 1/6
Lecture slides: learning.stat.purdue.edu Abstract of the talk: This course will provide a simple unified introduction to batch training algorithms for supervised, unsupervised and partially-supervised learning. The concepts introduced will provide a basis for the more advanced topics in other lectures. The first part of the course will cover supervised training algorithms, establishing a general foundation through a series of extensions to linear prediction, including: nonlinear input transformations (features), L2 regularization (kernels), prediction uncertainty (Gaussian processes), L1 regularization (sparsity), nonlinear output transformations (matching losses), surrogate losses (classification), multivariate prediction, and structured prediction. Relevant optimization concepts will be acquired along the way. The second part of the course will then demonstrate how unsupervised and semi-supervised formulations follow from a relationship between forward and reverse prediction problems. This connection allows dimensionality reduction and sparse coding to be unified with regression, and clustering and vector quantization to be unified with classification, even in the context of other extensions. Current convex relaxations of such training problems will be discussed. The last part of the course covers partially-supervised learning: the problem of learning an input representation concurrently with a predictor. A brief overview of current research will be presented, including ...
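
As one small example of the "extensions to linear prediction" theme, L2 regularization turns ordinary least squares into ridge regression, which still has a closed form. The sketch below is an illustrative NumPy version with invented data, not the lecturer's material.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: theta = (X'X + lam*I)^-1 X'y.
    lam = 0 recovers ordinary least squares."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + rng.normal(0, 0.1, 30)
print(ridge_fit(X, y, lam=0.1))  # roughly recovers the true coefficients
```
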
0:35

IBM Watson: Computer Understands Natural Language
IBM's Watson is a real time, natural language processing computer that employs deep analytics and machine learning capabilities to answer questions. Watson's ability will be tested on the game show Jeopardy! Visit ibmwatson.com for more information.
49:46

Machine Learning in Support of Family Coordination
Google Tech Talk (more info below) June 1, 2011 Presented by Scott Davidoff, Ph.D. ABSTRACT This talk describes how my work with busy families (1) identifies how their coordination breaks down (from 3 years of fieldwork and experience prototyping); (2) identifies how we can apply unsupervised machine learning to this problem context (using mobile phone GPS); and (3) creates a new way to visualize the calendar that combines manually input and learned information.
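
As a rough illustration of step (2): density-based clustering is one plausible way to find frequently visited places in phone GPS traces. The talk does not specify an algorithm; the DBSCAN sketch below, with invented coordinates, is purely an assumption for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Invented (latitude, longitude) samples from one person's phone: dense blobs
# correspond to frequently visited places; scattered points are transit noise.
rng = np.random.default_rng(4)
home = rng.normal([47.60, -122.33], 0.0005, size=(50, 2))
school = rng.normal([47.65, -122.30], 0.0005, size=(30, 2))
transit = rng.uniform([47.55, -122.40], [47.70, -122.25], size=(10, 2))
points = np.vstack([home, school, transit])

# eps is in raw degrees here; a production system would use haversine distance.
labels = DBSCAN(eps=0.002, min_samples=5).fit_predict(points)
print(set(labels))  # expected: two place clusters plus -1 for noise points
```
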
2:18

Machine Learning: What you will learn
What you will learn in the Stanford free online machine learning class in Fall 2011. Sign up at ml-class.org
28:24

Recommendation Engines using Machine Learning, and JRuby by Matt Kirk
Ever wonder how Netflix can predict what rating you would give to a movie? How do recommendation engines get built? Well, it's possible with JRuby, and it's fairly straightforward. Many engines are built purely on support vector machine regressions, which map arrays of data onto a label, like a star rating. In this talk I'll explain how support vector machines are built, and how to make a simple movie prediction model, all in JRuby.
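
The talk is in JRuby, but the core idea (fit a support vector regression from feature vectors to star ratings, then predict ratings for unseen pairs) can be sketched in a few lines of Python with scikit-learn. The features and ratings below are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR

# Invented feature vectors for (user, movie) pairs and the star ratings given.
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8], [0.5, 0.5]])
y = np.array([5.0, 4.5, 2.0, 1.5, 3.0])

# Support vector regression maps the features onto a continuous rating.
model = SVR(kernel="rbf").fit(X, y)
print(model.predict([[0.7, 0.2]]))  # predicted rating for an unseen pair
```
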
12:16

Machine Learning- Al Barrentine, Jumo (Part 1)
Part 1 of 4 - Al Barrentine from Jumo gives a 30-minute talk on Machine Learning at GameChanger in NYC for the September Python Meetup Group. "I'd be happy to give a talk on machine learning in Python, particularly some of the nuts and bolts of integrating techniques like clustering and classification into a webapp. I think we all read about the various data mining libraries (pattern, scikit-learn, nltk, datasciencetoolkit, pyml, pybrain, etc.) on Hacker News and get excited about them, but I don't see a lot of these techniques used in industry, especially in the startup world. It would be titled something like 'Machine Learning for Web Developers.'"
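
As a taste of the "nuts and bolts" the talk promises: clustering short user-generated texts with scikit-learn (one of the libraries the speaker lists) takes only a few lines. The documents, cluster count, and parameters below are invented for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented user-generated snippets a webapp might want to group by topic.
docs = [
    "great pasta recipe", "easy pizza dough", "homemade bread tips",
    "fix python import error", "debugging a flask app",
]

X = TfidfVectorizer().fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cooking and programming posts should tend to separate
```
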
60:11

[PURDUE MLSS] Introduction to Machine Learning by Dale Schuurmans Part 3/6
Lecture slides: learning.stat.purdue.edu Abstract of the talk: This course will provide a simple unified introduction to batch training algorithms for supervised, unsupervised and partially-supervised learning. The concepts introduced will provide a basis for the more advanced topics in other lectures. The first part of the course will cover supervised training algorithms, establishing a general foundation through a series of extensions to linear prediction, including: nonlinear input transformations (features), L2 regularization (kernels), prediction uncertainty (Gaussian processes), L1 regularization (sparsity), nonlinear output transformations (matching losses), surrogate losses (classification), multivariate prediction, and structured prediction. Relevant optimization concepts will be acquired along the way. The second part of the course will then demonstrate how unsupervised and semi-supervised formulations follow from a relationship between forward and reverse prediction problems. This connection allows dimensionality reduction and sparse coding to be unified with regression, and clustering and vector quantization to be unified with classification, even in the context of other extensions. Current convex relaxations of such training problems will be discussed. The last part of the course covers partially-supervised learning: the problem of learning an input representation concurrently with a predictor. A brief overview of current research will be presented, including ...
61:36

[PURDUE MLSS] Large-scale Machine Learning and Stochastic Algorithms by Leon Bottou (Part 1/6)
Lecture notes: learning.stat.purdue.edu Large-scale Machine Learning and Stochastic Algorithms. During the last decade, data sizes have outgrown processor speed. We are now frequently facing statistical machine learning problems for which datasets are virtually infinite. Computing time is then the bottleneck. The first part of the lecture centers on the qualitative difference between small-scale and large-scale learning problems. Whereas small-scale learning problems are subject to the usual approximation-estimation tradeoff, large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in non-trivial ways. Unlikely optimization algorithms such as stochastic gradient descent show amazing performance for large-scale machine learning problems. The second part gives a detailed overview of stochastic learning algorithms applied to both linear and nonlinear models. In particular I would like to spend time on the use of stochastic gradient for structured learning problems and on the subtle connection between nonconvex stochastic gradient descent and active learning. See other lectures at the Purdue MLSS Playlist: www.youtube.com
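
A minimal sketch of the lecture's central point: stochastic gradient descent updates on one example at a time, so each step costs O(d) regardless of dataset size. The data, step size, and iteration count below are arbitrary illustrative choices, not Bottou's material.

```python
import numpy as np

# Stochastic gradient descent for least squares: each update touches a single
# random example, so per-step cost is independent of the dataset size.
rng = np.random.default_rng(3)
n, d = 100_000, 10
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + rng.normal(0, 0.1, n)

theta = np.zeros(d)
eta = 0.01  # constant step size; decaying schedules like eta_0/t are common
for _ in range(200_000):
    i = rng.integers(n)
    theta -= eta * (X[i] @ theta - y[i]) * X[i]

print(np.linalg.norm(theta - theta_true))  # should be small
```
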