I genuinely don’t get why changepoint detection is hard. If you graph data, you can see where the changes in it are. Why do people use several algorithms to figure this out? Is it because you’ll have data that follows weird time intervals and you can’t graph it first, or for some other reason? I guess I just don’t understand why you need an algorithm to tell you where your data changes over time when you can typically see it for yourself. I recognize that this sounds dumb to people with experience in R, data science, etc., I just can’t wrap my head around this concept.
thank you to anyone who answers!
> you can see where changes in it are
How do you distinguish expected normal variance from a shift in the underlying statistical properties of the data? How do you tell an anomaly from noise? How can you spot a level of change that would break a model’s ability to forecast?
What qualifies as a significant change depends on the context, so we need tools to help us find them. And what looks important or significant to a human might not actually be significant mathematically, or vice versa 🙂
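To make that concrete, here’s a toy Python sketch (my own illustration, not tied to any particular R package or real dataset): a series where the mean shifts by a fraction of the noise level, which is very hard to spot in a plot, and a brute-force single-changepoint search that still locates it.

```python
import numpy as np

# Hypothetical data: 400 noisy points whose mean shifts by 0.6 at index 200.
# With noise of standard deviation 1.0, the shift is hard to see by eye.
rng = np.random.default_rng(0)
signal = np.concatenate([
    rng.normal(loc=0.0, scale=1.0, size=200),   # before the change
    rng.normal(loc=0.6, scale=1.0, size=200),   # after the change
])

def best_split(x):
    """Find the index that best splits x into two segments with different means,
    by minimizing the total within-segment squared error (single changepoint)."""
    n = len(x)
    best_k, best_cost = None, np.inf
    for k in range(10, n - 10):                 # keep both segments non-trivial
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

print("estimated changepoint near index:", best_split(signal))  # close to 200
```

The point of the toy example is exactly the answer above: when the shift is small relative to the normal variance, your eye can’t reliably separate it from noise, but a statistical criterion (here, comparing segment means) can, and it also tells you how confident to be rather than relying on gut feel.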