In general, you can make data say whatever you like, especially when it comes to making projections.
One of the most common methods would be called ‘p-hacking’ if it were done by a single researcher, but is generally called ‘cherry-picking’ when done by outside agencies. Let’s say 100 different researchers study whether Coca-Cola consumption aids athletic performance.
Now, we’re pretty sure it doesn’t. Coca-Cola is just fizzy sugar water, after all. There should be no correlation between athletic performance and consuming it.
But if you have 100 different studies about it, *some* of them are going to show a correlation through pure random chance. At the standard 5% significance threshold, you’d expect roughly 5 of those 100 studies to find a ‘significant’ effect even though there’s nothing there. Just take those ones and ignore the rest.
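To make that concrete, here’s a minimal simulation sketch (Python, assuming numpy and scipy are available; the sample sizes and numbers are purely illustrative, not from any real study). It runs 100 fake “studies” where Coke has zero real effect and counts how many come out “significant” anyway:

```python
# Simulate 100 "studies" of an effect that does not exist and count how many
# look "statistically significant" purely by chance. All numbers here
# (sample size, means, threshold) are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=42)
n_studies = 100
n_per_group = 30          # athletes per group in each hypothetical study
alpha = 0.05              # the conventional "significant" cutoff

false_positives = 0
for _ in range(n_studies):
    # Both groups drawn from the SAME distribution: Coke has zero real effect.
    coke_group = rng.normal(loc=50.0, scale=10.0, size=n_per_group)
    control_group = rng.normal(loc=50.0, scale=10.0, size=n_per_group)
    _, p_value = ttest_ind(coke_group, control_group)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_studies} studies found a 'significant' effect "
      "even though none exists.")
```

On a typical run you get somewhere around 5 false positives. Report only those and bury the other ~95, and you’ve ‘proved’ Coke boosts performance.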
You can also manipulate studies by selecting models that implicitly favor your point of view. If you want to prove the police are racist, you define ‘racist’ as ‘disparate outcomes’ and make no attempt to explain those disparate outcomes any other way. If you want to make solar panels look good, you emphasize the low ongoing costs in the early days of adoption and ignore the more expensive elements of the life cycle cost. And so forth.
Essentially, there is so much subjectivity to the process that you can ‘prove’ anything you want.
Nor does peer review particularly help. Peer review is primarily a review of methods, not a critique of content. As long as your methodology is correct and your conclusions aren’t *too* ridiculous, it’ll pass peer review.
As a result, most ‘studies’ you read in the press really need to be treated with skepticism. They’re inevitably going to match the bias of either the researchers or the funders, especially when neither of those parties has any vested interest in being right.
The studies that tend to be very accurate are ones where the researchers/funders suffer serious consequences for being wrong. If an oil company funds a study to determine where to drill for oil, it’s a good bet that study is going to be fairly accurate because they lose a lot of money if it’s wrong.