What is Machine Learning?


Wasn’t sure if the mathematics flair was more appropriate or not. Can you explain what it is and what its purpose is as opposed to statistics?

In: Engineering

6 Answers

Anonymous 0 Comments

Basically it’s when you take some kind of data set, say a bunch of pictures of faces and the emotion that the person in the picture is conveying, and feed that into a computer. The computer creates a math equation that takes different parts of the picture as variables, and tries to solve for constants that make that equation give the result it already knows it’s supposed to get.

It’s kind of like when you give Excel some points on a graph and ask it to come up with a curve to fit, except with a lot more points and in a lot more dimensions. There are a lot of different ways to do the exact math behind that, though the common way for machine learning is to make a big array of linear equations.
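The Excel-trendline analogy above can be sketched in a few lines of plain Python. The data points here are made up purely for illustration; the "learning" is just the classic least-squares formula finding the constants of a straight line:

```python
# A few (x, y) points, like the ones you'd hand to Excel's trendline.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares formulas for the best-fit line y = m*x + b:
# m and b are chosen so the squared distance from the line
# to the points is as small as possible.
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - m * mean_x

# The "learned" constants m and b now predict y for an x we never saw.
print(round(m, 2), round(b, 2), round(m * 5 + b, 2))
```

A real ML model does the same thing, just with thousands of constants instead of two, and inputs like pixels instead of a single x.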

Anonymous 0 Comments

First of all, there are some similarities between Machine Learning (ML) and stats; there is even some definite overlap, e.g. linear regression. Hence, the memes:

https://miro.medium.com/max/700/1*x7P7gqjo8k2_bj2rTQWAfg.jpeg

https://miro.medium.com/max/2560/1*mXeEWBymq-UXPXF2Oai5mg.jpeg

https://i.redd.it/4f71u8ti5hg31.jpg

What makes things more complicated is that ML is an extremely rapidly growing field; what it IS changes from month to month (usually by growing). So it is kind of hard to tell the difference.

So here is my extremely simplified version:

* The purpose of statistics is for HUMANS to understand the data.

* The purpose of ML is to make PREDICTIONS.

For example, say you are a CEO, and sales are down this month. If you want to know why, so you can do something about it, you get a statistician. If you want to know what sales will be next month, so you know how much stock to order, you get an ML expert.

But here is a good example of where things get complicated. Explainable AI and causal inference are definitely under ML, although they are supposed to answer the “why” question on top of making predictions. There are also things like clustering, which also attempt to give humans insight into the data.

A better explanation is that some tools, like neural networks, were born out of the ML field, and regardless of what they are used for, they will always be considered ML, not stats.

******************

So here’s my summary. In the past, everything was stats. Then some people became interested in making predictions using the tools of stats, so ML was simply a subfield of stats. Then a number of ML-specific tools (like neural networks) were made. So now, anything that uses those tools is considered ML, regardless of purpose (e.g. XAI and causal inference). Worse, people are enhancing those ML tools with classical stats tools (e.g. Bayesian + neural network). In those cases, more often than not, the ML label sticks.

(I know this is bad history, because ML was born out of CS and AI, not stats. But ELI5.)

So, the distinction between ML and stats is more like the distinction between a main course and a dessert. It is hard to make a hard-and-fast rule, but you can feel it.

Anonymous 0 Comments

I am neither a mathematician nor do I know anything professionally about this. Most of my knowledge stems from Code Bullet videos. The numbers in this text are just examples, not necessarily close to reality, since I am no pro.

As far as I understand it, it is like training a pet, just with a computer programme. You begin with a programme that’s capable of doing, say, 5 different things, but it doesn’t know when to do them, in which order, or when to stop. Then you create a testing environment (like a laboratory, in my understanding) where you give the programme the option to do anything at random. You observe and judge how well the programme did, then repeat this step. (Or you give 500 programmes the opportunity to do their things in a random way simultaneously and judge them all.) Then you pick the one that did the best, allow it to change its internal behaviour algorithm a little bit (like the ways it decides when, and in which order, to do one or more of these 5 things), and let it change its behaviour in 500 different ways. Then you judge those new 500 programmes, pick the one that did the best, and repeat.
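The pick-the-best-and-mutate loop described above is roughly a genetic algorithm. Here's a minimal Python sketch of that idea; the toy task (learning a target sequence of 5 actions) and the scoring are invented just to give the loop something to judge:

```python
import random

random.seed(0)

TARGET = [1, 0, 2, 3, 4]  # made-up "right order of actions" to be learned

def score(candidate):
    # Judge: how many actions match the desired behaviour?
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Let a programme change its behaviour a little bit:
    # one random action is swapped for a random new one.
    child = candidate[:]
    child[random.randrange(len(child))] = random.randrange(5)
    return child

# Start with 500 programmes doing the 5 actions completely at random.
population = [[random.randrange(5) for _ in range(5)] for _ in range(500)]

for generation in range(50):
    best = max(population, key=score)
    if score(best) == len(TARGET):
        break  # a programme has learned the task
    # Next generation: 500 slightly-changed copies of the best one.
    population = [mutate(best) for _ in range(500)]

print(best, score(best))
```

Real systems (like the ones in Code Bullet's videos) judge something much richer, like how far a character walked before falling, but the select-mutate-repeat skeleton is the same.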

To me it sounds like the concept of trial and error, just not once but many, many times, to finally arrive at something that works well.

Anonymous 0 Comments

It’s a general term for making a system which takes feedback from its actions to modify future actions. Things like artificial intelligence, neural networks, and evolutionary algorithms all fall under machine learning.

I’m not entirely sure what you mean by “as opposed to statistics”, as they aren’t directly related. Perhaps the idea is that you can configure a thing to behave based on statistical analysis, but in a way that can’t adapt should that analysis differ from actual observed results, whereas a machine learning system could theoretically take those observed results and dynamically change itself to better align with how it should perform in the future. But most machine learning involves some level of statistical analysis to train it in the first place, so they aren’t really mutually exclusive.

Anonymous 0 Comments

So basically, you write a program that can change itself. Then, you give it a task. As it tries to complete the task, it makes changes to the code. Any changes that make it better at that task stay, and changes that make it worse get tossed. Repeat this thousands of times, and you get something that can complete the task pretty well.
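A bare-bones Python sketch of that keep-good-changes, toss-bad-changes loop. The target string and the error measure here are invented just so the loop has a task; the point is the shape of the loop:

```python
import random

random.seed(1)

target = "hello"  # the "task": produce this string
letters = "abcdefghijklmnopqrstuvwxyz"

def error(s):
    # How far the current attempt is from completing the task.
    return sum(a != b for a, b in zip(s, target))

# Start from random nonsense.
current = [random.choice(letters) for _ in range(len(target))]

for step in range(20000):
    if error(current) == 0:
        break  # task completed
    # Make one random change...
    trial = current[:]
    trial[random.randrange(len(trial))] = random.choice(letters)
    # ...keep it if it's no worse, toss it otherwise.
    if error(trial) <= error(current):
        current = trial

print("".join(current))
```

Thousands of tiny accepted changes add up to a program that does the task, even though no single change "understood" anything.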

Anonymous 0 Comments

A computer has a load of data and a “known” end result. The computer performs calculations on the data and sees how close it gets to the known end result. It then does some different calculations and sees if it gets closer to the result or further away. It then uses this knowledge to inform future changes. Eventually it can get to (or very close to) the known answer by performing just the right processing. Do this a few times with a few different data sets and known results – this is known as “training” – and then you can unleash it on a data set where you *don’t* already know the answer and be fairly confident that it’ll get the right end result.
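In the simplest numeric case, that guess-check-adjust loop is gradient descent. A minimal Python sketch, with made-up training data where the "known end result" is y = 3·x (the computer doesn't know the 3 and has to find it):

```python
# Training data: inputs and their known end results (here, y = 3*x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0    # initial guess for the calculation y = w*x
lr = 0.01  # how big an adjustment to make each round

for step in range(1000):
    # See how far the current calculation is from the known results;
    # the gradient of the squared error says which way to nudge w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

# w has converged to roughly 3.0, so the trained "model" can now be
# unleashed on an x whose answer we don't already know.
print(round(w, 3))
```

Each round uses how wrong the last guess was to inform the next change, exactly as described above, just with one number instead of millions.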

As a very useful visual example, here’s a video of a machine learning attempt to get a bipedal rig to walk without falling over: