I genuinely don’t get why changepoint detection is hard. If you graph the data, you can see where it changes. Why do people use several algorithms to figure this out? Is it because you’ll have data that follows weird time intervals and can’t be graphed first, or for some other reason? I just don’t understand why you need an algorithm to tell you where your data changes over time when you can typically see it for yourself. I recognize this probably sounds dumb to people with experience in R, data science, etc., I just can’t wrap my head around the concept.
Thank you to anyone who answers!
I mean, I am sure you are looking at the easiest, simplest graphs imaginable: perfectly smooth, with a single clear inflection point. None of that is hard.
The issue is real data, which is messy and where the changes are small. It’s easy to see a trend when the series is going up 100 points a day and then in an instant switches to going down 100 points a day. It’s much harder to see when the series has a 0.68 upward trend per day buried in noise that varies by 0.80 per day, and then that trend quietly ends.
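To make that concrete, here’s a minimal Python sketch (numpy only). The 0.68/day trend and 0.80/day noise come straight from the numbers above; the segment lengths, the random seed, and the simple CUSUM-style estimator are my own illustrative choices, not any particular library’s method. If you print or plot the raw series, the change is nearly invisible to the eye, but the cumulative-sum statistic locates it anyway:

```python
import numpy as np

rng = np.random.default_rng(42)

# Segment 1: 200 days drifting up ~0.68/day with sd-0.80 noise (figures from
# the answer above). Segment 2: 200 days where the trend ends and the level
# just hovers. Lengths and seed are arbitrary choices for the demo.
n1, n2 = 200, 200
slope, noise_sd = 0.68, 0.80
true_change = n1

seg1 = slope * np.arange(n1) + rng.normal(0, noise_sd, n1)
seg2 = seg1[-1] + rng.normal(0, noise_sd, n2)  # trend gone, noisy flat level
y = np.concatenate([seg1, seg2])

# A slope change in y is a mean shift in the day-to-day differences:
# before the change, diff(y) averages ~0.68; after it, ~0.
d = np.diff(y)

# Classic single-changepoint CUSUM estimator: S_t = sum_{i<=t} (d_i - mean(d));
# the changepoint estimate is where |S_t| peaks.
S = np.cumsum(d - d.mean())
est_change = int(np.argmax(np.abs(S))) + 1  # +1: diffs lag the days by one

print(f"true changepoint: day {true_change}, estimated: day ~{est_change}")
```

The same statistic works at scale (thousands of series, no human looking at each graph), which is the other half of why people reach for algorithms instead of eyeballs.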