ELI5: What is machine learning?


Hi, I want to learn about what machine learning is.
1. What are its inputs (what do we give it)?
2. What are its outputs (what do we get from it)?
3. What can we do with it in the field of cows and cow farming?
Thanks!


3 Answers

Anonymous

Machine Learning isn't just one thing. It's a whole field of study, like math or physics, that deals with creating computer programs that can "learn" from past data in order to better predict future data.

A basic example would be something like a (binary) classifier: a program that answers a yes-or-no question about a thing based on information about it. For example, a classifier might be trained on crime data to say whether someone is or isn't worth the police looking into further, or trained on photos to decide whether a part in a factory is defective. In the first case, the training data would be records of lots of people along with whether they turned out to be criminals, and that past experience is what "teaches" the classifier how to decide about people it hasn't seen before.
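As a rough sketch of what such a yes/no classifier looks like in practice (assuming Python with scikit-learn, and using simple made-up measurements instead of photos, since an image model would need far more machinery):

```python
# A minimal sketch of a binary classifier, assuming scikit-learn.
# All numbers below are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Past data: each row describes one factory part (weight in grams, length in mm),
# and each label records whether that part turned out to be defective.
measurements = [
    [105.0, 50.2],
    [ 99.8, 49.9],
    [120.5, 55.1],
    [100.3, 50.0],
    [118.9, 54.7],
    [101.1, 50.1],
]
is_defective = [1, 0, 1, 0, 1, 0]  # 1 = defective, 0 = fine

# "Training" is the classifier finding a pattern in that past data.
model = LogisticRegression()
model.fit(measurements, is_defective)

# Now it can answer the yes-or-no question for a part it has never seen.
print(model.predict([[119.0, 55.0]]))  # likely prints [1], i.e. "defective"
```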

The inputs and outputs of a machine learning model can be different depending on the type of model. A computer vision model, for example, might have a photo as an input and a list of things that appear in that photo as an output. A classifier might have a bunch of different types of data as inputs and a yes or no as an output. A neural net that has been trained to play Starcraft might have the current game state as an input and where it should click or what key it should press next as an output.
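To make the "different inputs, different outputs" point concrete, here is a toy sketch in Python of what the raw data going in and coming out might look like for two of those model types (the shapes and labels are purely illustrative, not from any real model):

```python
import numpy as np

# Computer vision model: the input is a photo (a grid of pixels),
# the output is a list of things the model believes appear in it.
photo = np.zeros((224, 224, 3))        # height x width x RGB channels
things_in_photo = ["cow", "fence", "grass"]

# Tabular classifier: the input is a row of measurements about one item,
# the output is a single yes-or-no decision.
item_measurements = [119.0, 55.0]      # e.g. weight and length
decision = True                        # "yes" (e.g. flag the item for review)
```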

ML has applications anywhere you have a bunch of data and want to build a machine that automatically makes decisions from it. For cattle farming, for example, it might do things like detect potentially diseased cows, decide how much feed to give each cow, or decide the best time to slaughter one for meat.

Anonymous

The current wave of machine learning can be boiled down to pattern matching. You have a set of starting inputs and a bunch of outcomes, and you use ML algorithms to map those starting states onto potential outcomes. The result is basically a % chance that your outcome is going to be a particular thing.

One example might be working out when cows produce the best milk. Maybe you're looking for a certain mix of nutrients in the milk, or maybe just volume. You start gathering data about the cows: when they eat, what they eat, which other cows they interact with, the color of the cow, the spots on the cow, and so on. You can pick pretty much anything you want to feed into the algorithm.

The more data you have, the better. Maybe it turns out cows produce the most milk when they're fed after 3pm. Maybe, for some reason, all cows with 3 spots produce the best milk. You don't really know why; all you know is that these things line up with your outcome. So going forward, maybe you adjust the feeding schedule or only keep cows with 3 spots.
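A toy sketch of that milk example (assuming Python with scikit-learn; the cow features, numbers, and "best milk" labels are entirely made up to show the shape of the idea):

```python
# Minimal sketch: predict whether a cow will be a "best milk" producer
# from a few made-up features. All numbers are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is one cow: [hour of main feeding, number of spots, weight in kg]
cows = [
    [15, 3, 610],
    [ 8, 1, 590],
    [16, 3, 640],
    [ 9, 2, 600],
    [15, 4, 620],
    [ 7, 1, 580],
]
best_milk = [1, 0, 1, 0, 1, 0]  # 1 = produced the best milk, 0 = didn't

model = LogisticRegression()
model.fit(cows, best_milk)

# The answer comes back as a probability (a "% chance"), not a hard yes/no.
new_cow = [[15, 3, 615]]  # fed at 3pm, 3 spots, 615 kg
print(model.predict_proba(new_cow))  # e.g. something like [[0.1, 0.9]]
```

The model only learns that these features tend to go together with good milk in the data you gave it, which is exactly the correlation-versus-causation caveat below.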

Word of caution: this is where "correlation does not mean causation" comes into play. ML basically just looks for correlation, not causation. That's not to say it's worthless. The idea is that the more data and the more data points you have, the less likely your algorithm is to be biased (even then, there can be hidden biases) and the less the correlation/causation distinction matters. And in some cases, that's good enough.

Anonymous

In the simplest way I can put it: have you ever played [Wordle](https://www.nytimes.com/games/wordle/index.html)? It's like machine learning, but backwards: you are the machine that's doing the learning.

Your boundaries are:

* English words
* Five letters
* You will be told when you are incorrect
* You will be told which letters are correct
* You will be told which letters are present but in an incorrect location
* You will be told when you are finished

So in this analogy, the “inputs” are the boundaries above, and the “outputs” are the guesses you give. With each output/guess you produce, you are given feedback/input on how correct you are, and from there you give another guess/output.

Real machine learning uses billions of guesses instead of six, with a feedback machine that replies to each of those guesses; the programmer is really only steering that feedback machine in a general direction.
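A tiny sketch of that guess-and-feedback loop in plain Python (the "feedback machine" here is deliberately trivial: it just reports how far a guess is from a hidden number):

```python
# Minimal sketch of the guess/feedback loop described above.
# The "feedback machine" only says how wrong a guess is; the guesser
# never sees the hidden target directly, just like a learning model.
import random

secret = 42.0  # the hidden thing the guesser is trying to discover

def feedback(guess):
    """Scores a guess: smaller means less wrong."""
    return abs(guess - secret)

guess = random.uniform(0, 100)  # start with a random guess
step = 1.0

for _ in range(10_000):                 # real ML makes billions of guesses
    error = feedback(guess)
    # Try nudging the guess in both directions and keep whichever is better.
    if feedback(guess + step) < error:
        guess += step
    elif feedback(guess - step) < error:
        guess -= step
    else:
        step /= 2                       # shrink the step when neither helps

print(round(guess, 3))  # should end up very close to 42.0
```

Real systems run the same kind of loop, except the "guess" is millions of model parameters and the feedback is how wrong the model's predictions were on the training data.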

RE: Cows, imagine you want the cows to do something, but you don't get to talk to the cows, and you don't get to talk to the ranchers either. Instead, you are a member of congress who makes laws, and you try to get the ranchers to do what you want so that the cows end up doing the thing you want. That sounds annoying if you had only one cow or one rancher, but if you wanted to scale it up to a billion cows and a million ranchers, then it wouldn't take any more effort, and that's part of the power of machine learning.