Let’s start with the fact that in many parts of the world there is still a huge stigma around, and discrimination against, a whole host of personal traits (obvious ones being gender and sexuality). A running log of your Internet activity could easily expose that information to other people and compromise your safety, wellbeing, and job and life prospects.
That aside, even in a “safe” country there is still potential for harm. A lot of this won’t be direct discrimination by a human: algorithms can very effectively amplify existing inequalities. The problem is that we don’t really know where this data goes or how it gets used, and the algorithms are very difficult to understand.
When you apply for a mortgage online, does your bank use your browsing data to build a risk profile? Has the algorithm inadvertently created different risk models for men/women/white/black/homosexual people? Here’s an example of how this can happen:
Let’s say (and these are fabricated numbers) that men are 95% likely to pay off a loan, and women are 96% likely. That’s basically equal, but there is a small difference. Now suppose 100 people (50 men, 50 women) ask a bank for a mortgage, and the bank only has the money to give a mortgage to 20 of them. Statistically, the best thing for the bank to do is to offer all 20 loans to women, since they are the most likely to pay the loan off. So through a very tiny statistical difference, the bank has massively discriminated.

This outcome then loops back into the bank’s risk algorithm: since no loans were extended to men, the updated algorithm doesn’t even consider offering loans to men.

Fortunately, the bank isn’t actually allowed to use your gender to decide these things. However, it’s almost impossible to be certain that an algorithm hasn’t inadvertently made an indirect link to gender. For example, the algorithm might notice that people who buy dresses, or makeup, or sanitary products, or any product whose market is female-dominated, are more likely to pay off their loan, and thereby bias on gender anyway.
The algorithm has no concept of morals or of right and wrong; it’s just looking at statistics. If the algorithm sees that people who view certain websites are statistically riskier than people who don’t, it’s going to discriminate. Whenever something is in limited supply (loans, housing, jobs), every business wants to target the “best” candidate, and that can quickly create discrimination.
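The indirect-link problem can be simulated too. In this hypothetical sketch the model is never shown gender, only a made-up “buys makeup” flag, yet because that flag is correlated with gender it picks up the gender gap anyway. All numbers here are invented, and the repayment gap is deliberately exaggerated so the effect is visible in a small simulation:

```python
import random

random.seed(42)

# Hypothetical population: gender drives repayment (rates exaggerated
# for visibility), and "buys makeup" is merely correlated with gender.
# The lender never sees the gender field.
people = []
for _ in range(10_000):
    gender = random.choice(["man", "woman"])
    buys_makeup = random.random() < (0.80 if gender == "woman" else 0.05)
    repaid = random.random() < (0.95 if gender == "woman" else 0.80)
    people.append({"buys_makeup": buys_makeup, "repaid": repaid})

def repay_rate(group):
    return sum(p["repaid"] for p in group) / len(group)

# "Train" the simplest possible model: repayment rate per proxy value.
makeup = [p for p in people if p["buys_makeup"]]
no_makeup = [p for p in people if not p["buys_makeup"]]

# The proxy group is mostly women, so its measured repayment rate comes
# out higher: the model now effectively scores by gender without ever
# having seen gender.
print(round(repay_rate(makeup), 3), round(repay_rate(no_makeup), 3))
```

The same mechanism works for any feature correlated with a protected trait, which is why simply deleting the gender column from a dataset doesn’t make a model gender-blind.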