Homoscedasticity is not a 5-year-old comprehension word. But when you're analyzing groups of things and want to compare the averages (means) of those groups, it's a way to think about how different the individuals are within each group. You generally assume that the differences amongst people within a group are about the same from group to group, but that the averages might differ. E.g. maybe one class of students has a 92/100 average on a test and another class has a 56/100 average, but the spread of scores around those averages is approximately the same (homoscedasticity), so there is a “smart” class and a “less smart” class on the whole. When the spread differs between the groups, that violates the “assumption of homoscedasticity.”

The reason this is a problem is that you use the spread of scores to figure out how different the averages would be expected to be just by chance. If one group has a different spread of scores than another, it distorts that estimate, making it hard to tell whether the difference between the two classes (e.g. 92 vs. 56) is a “real” difference or just something that could have happened by chance.

There’s no “fix” for a dataset that violates homoscedasticity, but there are a number of alternative approaches to take if the violation is detected. By far the best way to deal with it, though, is to identify the reason why one (or more) of the groups has a wildly different spread of scores than the other(s).
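If you want to see this in action, here’s a minimal Python sketch (the class sizes, averages, and spreads are made up for illustration) that uses Levene’s test to flag unequal spreads and then compares the standard t-test, which assumes equal spreads, with Welch’s t-test, one of the common alternatives that doesn’t.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two hypothetical classes, loosely mirroring the 92 vs. 56 example,
# but with very different spreads so the assumption is violated.
class_a = rng.normal(loc=92, scale=4, size=30)   # tight spread of scores
class_b = rng.normal(loc=56, scale=20, size=30)  # wide spread of scores

# Levene's test: a small p-value suggests the spreads (variances) differ,
# i.e. the homoscedasticity assumption is violated.
stat, p = stats.levene(class_a, class_b)
print(f"Levene's test p-value: {p:.4f}")

# Standard (Student's) t-test assumes the two groups have equal spreads...
t_pooled, p_pooled = stats.ttest_ind(class_a, class_b, equal_var=True)

# ...while Welch's t-test does not, so it's a common fallback
# when the spreads clearly differ.
t_welch, p_welch = stats.ttest_ind(class_a, class_b, equal_var=False)

print(f"Student's t-test p-value: {p_pooled:.2e}")
print(f"Welch's t-test p-value:   {p_welch:.2e}")
```

In this made-up case both tests will still find a difference because 92 vs. 56 is huge, but with closer averages and unequal spreads, the two tests can disagree, which is exactly why the assumption matters.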
Source: taught college statistics for over a decade.