Eli5 What is machine learning?

Like. What is it?

Anonymous 0 Comments

It’s the approach of letting a computer learn how to solve a problem on its own instead of telling it how to do it.

You do this by providing a lot of input data along with the correct output data. For instance, you give it 1,000 pictures and tell it “this is a cat, this is a dog, another dog, another cat,” and so on. Eventually, the computer will find patterns in the data, for instance that pictures with whiskers tend to be cats.
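The cat/dog idea can be sketched in a few lines of Python. (The “whiskers” feature here is hypothetical; a real system would have to find such patterns in raw pixels on its own.)

```python
from collections import Counter

# Labeled examples: each "picture" is reduced to one hypothetical feature.
examples = [
    ({"whiskers": True}, "cat"),
    ({"whiskers": False}, "dog"),
    ({"whiskers": True}, "cat"),
    ({"whiskers": False}, "dog"),
]

# "Learning": count which label goes with each feature value.
counts = {}
for features, label in examples:
    counts.setdefault(features["whiskers"], Counter())[label] += 1

def classify(features):
    # Predict the most common label seen for this feature value.
    return counts[features["whiskers"]].most_common(1)[0][0]

print(classify({"whiskers": True}))   # cat
print(classify({"whiskers": False}))  # dog
```

Nobody wrote a rule saying “whiskers means cat”; the program extracted it from the examples.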

Anonymous 0 Comments

There are two ways to get machines to do useful stuff:

1. Programming: we tell the machine exactly what to do. For example, we instruct it to take numbers, multiply them together, and do other kinds of math on them until they are transformed into some kind of useful result.

2. Machine learning: we give the machine examples of inputs, together with the correct results we expect for those inputs, and it “learns” how the inputs relate to the outputs. We call this **“training”**.

The training happens by iteratively adjusting some internal values according to a set of rules that we call a machine learning **“model”** (there are many kinds of model).

The machine keeps track of these internal values, which is how it remembers what it has learned; we call them the **“parameters”** of the model.

The model (usually) adjusts its parameters based on the difference between its guess about each input and the expected result we give it. The guess is really just what it calculated by applying the model's rules, which do math combining the inputs with the parameters; we call this a **“prediction”**. If the prediction is far from the expected output, the model adjusts its parameters more than if the prediction was close.

Ideally, the model is shown examples until it stops adjusting its parameters (because it predicts every example perfectly), at which point we call the training complete. In real life, though, the model never predicts the outputs perfectly and never fully stops adjusting, so a lot of research goes into deciding when to stop training. Worse, if the model keeps adjusting its parameters for too long, it can become worse at predicting outputs for inputs it was never trained on. It's like practicing tennis against a wall a little too well and then struggling to adjust to playing against people.
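A minimal training loop, for an assumed one-parameter model `y = w * x`, might look like this. The parameter is nudged in proportion to how wrong each prediction was, exactly as described above:

```python
# Training examples: inputs with their expected outputs (the true rule is y = 2x).
examples = [(1, 2), (2, 4), (3, 6)]

w = 0.0              # the model's single parameter, starting as a bad guess
learning_rate = 0.05

for epoch in range(200):            # present the examples repeatedly
    for x, expected in examples:
        prediction = w * x          # apply the model's rule
        error = expected - prediction
        # Bigger error -> bigger adjustment; small error -> small adjustment.
        w += learning_rate * error * x

print(round(w, 3))  # close to 2.0: the parameter has "learned" the rule
```

The model was never told the rule is “multiply by 2”; the repeated small corrections pushed the parameter there on their own.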

Once the model is trained, we can use it to make predictions about inputs whose actual results we don't know; we call this **“inference”**, and it is usually the useful part of machine learning. Inference is pretty straightforward: we just apply the model's rules, which combine our input values with the learned parameters and output a result.
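Continuing the one-parameter sketch (assuming the parameter `w = 2.0` was learned during training), inference is nothing more than applying the fixed rule:

```python
w = 2.0  # hypothetical parameter learned earlier (model rule: y = w * x)

def predict(x):
    # At inference time, no adjustment happens: just apply the learned rule.
    return w * x

print(predict(10))  # 20.0
```

All the hard work happened during training; inference just reuses the stored parameters.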