
Brandon John Grenier

Artificial Intelligence • Machine Learning • Big Data


Informally, machine learning is a field of study that gives computers the ability to learn without being explicitly programmed. Put simply, if a program can improve its performance on a task by using previous experience, you can say that the program has learned from that experience. A more formal definition of machine learning was produced by Tom Mitchell in 1998:

A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

The ability of software to learn from experience is significantly different from how software is written today. Typically, if a developer wanted to improve the performance of a task (to make it run faster, more efficiently, or cater for new conditions), the developer would first have to think about how to achieve the performance gain, and then explicitly program the software to make the improvement.

Machine Learning Predictions: Classification and Regression

A machine learning algorithm can make two types of predictions. You will need to choose the appropriate prediction type based on the problem you’re trying to solve.

Classification Problems

These are problems where we want a machine learning algorithm to predict a known, discrete result from an existing set of data. This is like selecting a value from a drop-down menu.

Facial recognition, handwriting recognition and recommendations are all examples of classification problems. For facial recognition, we want the algorithm to predict a discrete value (a specific person) from a known list (a database of people). Handwriting recognition is similar - we want to predict discrete values (specific words) from an existing set of data (the English dictionary). Recommendations are also classification problems - when Netflix recommends movies and Amazon recommends products you may like, they are both producing lists of discrete values that represent a subset of their entire catalogue of products.
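
To make this concrete, here is a minimal classification sketch in Python. The choice of scikit-learn and the iris dataset are my own assumptions for illustration; the point is simply that the model can only ever answer with one of a fixed set of labels.

# Minimal classification sketch (assumes Python with scikit-learn installed).
# The classifier can only predict one of three known iris species.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)        # learn from labelled examples
print(model.predict(X_test[:5]))   # discrete class indices drawn from {0, 1, 2}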

Regression Problems

These are problems where we want a machine learning algorithm to produce a continuous, unconstrained value.

Stock market prediction is an example of a regression problem. In this case, we want the machine learning algorithm to produce a continuous value - the price of a stock. Predicting housing prices is another example of a regression problem.
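
By contrast, a regression sketch along the same lines (again assuming scikit-learn, with made-up housing data purely for illustration) produces a number rather than a label.

# Minimal regression sketch (assumes Python with scikit-learn and NumPy).
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: house size in square metres -> sale price.
sizes = np.array([[50], [80], [120], [160], [200]])
prices = np.array([250000, 380000, 520000, 690000, 840000])

model = LinearRegression()
model.fit(sizes, prices)
print(model.predict([[100]]))   # a continuous value (roughly 450,000), not a label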

Supervised, Unsupervised and Reinforcement Learning

In order for a machine learning algorithm to learn from experience, it needs to be taught how to learn in the first place. There are three broad techniques that can be used to train machine learning algorithms:

Supervised Learning

Supervised learning is a training method where you already know the correct answers (you know the desired output). You train the algorithm by providing it with data as well as the answers. After training, the algorithm should be able to make predictions on new data that it hasn’t seen before.

Spam filtering is a classic example of supervised learning. You train the algorithm by providing it with examples of spam, and tell the algorithm it’s spam. You provide the algorithm with examples of legitimate email, and tell the algorithm it’s legitimate email. With enough data, the algorithm should be able to classify email as legitimate or spam for emails it hasn’t seen before.
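
Here is a hedged sketch of what this looks like in code, assuming Python with scikit-learn and a toy set of four hand-labelled emails (both the library and the data are illustrative assumptions, not a prescribed implementation).

# Supervised learning: every training email comes with the correct answer.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",             # spam
    "cheap pills limited offer",        # spam
    "meeting moved to 3pm",             # legitimate
    "here are the quarterly figures",   # legitimate
]
labels = ["spam", "spam", "legitimate", "legitimate"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)    # turn each email into word counts

classifier = MultinomialNB()
classifier.fit(X, labels)               # train on the data *and* the answers

new_email = vectorizer.transform(["free offer claim your prize"])
print(classifier.predict(new_email))    # expected: ['spam'] for an unseen email

With only four examples the prediction is fragile; in practice you would train on many thousands of labelled emails.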

Unsupervised Learning

Unsupervised learning is a training method where you don’t already know the correct answer (i.e. you don’t know the desired output). Unsupervised learning typically supports the grouping or clustering of data.

If we used unsupervised learning for image recognition, the algorithm wouldn’t be able to identify a particular type of thing - unsupervised learning would not come back with predictions saying things like “this is a photo of a cat”, “this is definitely a dog”, “this is a person”.

Instead, an unsupervised learning algorithm would recognise that all photos of cats look similar, and group them together. It would create similar clusters for humans and dogs, and would recognise that the “pattern” for a cat is different from the “pattern” for a dog or human. The end result would be three unique clusters, and the algorithm would continue to cluster photos into one of its known groups, even when it’s exposed to new photos it hasn’t seen before.

After the unsupervised learning algorithm has clustered the data, it would be simple enough to “tag” the clusters. For example, once our machine learning algorithm has created the cat, dog and human clusters, it would be straightforward to tell the algorithm “hey, these are cats, these are dogs and these are people”. Then, when the algorithm processes any new photos, it would be able to tell exactly which type of thing it’s looking at.
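
A sketch of that workflow, assuming scikit-learn’s KMeans and a few made-up numeric features standing in for real photos (the features, the tags and the library are all illustrative assumptions):

# Unsupervised learning: KMeans is given no labels, only the data.
import numpy as np
from sklearn.cluster import KMeans

features = np.array([
    [1.0, 0.9], [1.1, 1.0], [0.9, 1.1],   # photos that happen to be cats
    [4.0, 4.2], [4.1, 3.9], [3.9, 4.1],   # photos that happen to be dogs
    [8.0, 7.9], [8.2, 8.1], [7.9, 8.0],   # photos that happen to be people
])

model = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = model.fit_predict(features)   # three groups, with arbitrary ids

# "Tagging" the clusters after the fact, as described above.
tags = {cluster_ids[0]: "cat", cluster_ids[3]: "dog", cluster_ids[6]: "person"}

new_photo = [[4.05, 4.0]]
print(tags[model.predict(new_photo)[0]])    # expected: dog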