How can advertisers knowing my information and tracking me to better serve me ads cause me personal harm or injury?


Say you look for diving gear online. This data is collected “to better serve you ads” … and yes, you suddenly get ads for overpriced trips to scuba-diving destinations or gear you might want to buy. Nothing wrong with that.

But if your insurance company buys that data and then decides “scuba diving is a dangerous activity, we’re going to raise your premium/cancel your policy”, that sucks.

Looking for information on a rare disease and then buying medications can lead to your children not getting insurance because of you.

There are tons of other situations where letting companies track you can be detrimental. There was the famous story of a teenage girl in the US: Target analyzed her spending patterns, determined she was most likely pregnant, and sent her corresponding offers. Unfortunately she was still living with her parents, who were shocked to find out she was pregnant this way.

Generally the issue isn’t that the data is used for ads (though even that isn’t always harmless: airlines often raise the price of a flight if they can see you have checked it out before; delete your cookies and you might get a totally different rate), but how it can be used by other parties further down the line.

Because this data is collected and then shared or sold, and companies have gotten really good at extrapolating things from these patterns.

Personalized ads seem like a Good Thing™, right? I mean, you’d rather see an ad that’s relevant to you than one for something you don’t care about.

But it all comes down to trust. Do you trust that the advertising network you’re giving your data to will (a) protect that information and (b) use it responsibly?

If they fail to protect your information (allow their systems to be hacked, etc.), the information about you could fall into the hands of people who want to do something more malicious than sell you fancy shoes or overpriced jeans.

Or they might use that information irresponsibly to make more money for themselves. A social network might feed you content that is misleading or factually wrong because it knows that content will keep you engaged with the platform. (Remember, social networks are advertising platforms; they sell *you* to the advertisers. The content is just there to make sure you stick around to see the ads.)

Let’s start with the fact that in many parts of the world there is still a huge stigma around, and discrimination against, a whole host of personal traits (obvious ones are gender and sexuality). This running log of your Internet activity could easily expose that information to other people and compromise your safety, wellbeing, and job/life prospects.

That aside, even in a “safe” country there is still potential for harm. A lot of it won’t be direct discrimination by a human, but algorithms can very effectively amplify existing inequalities. The problem is we don’t really know where this data goes or how it gets used, and the algorithms are very difficult to understand.

When you apply for a mortgage online, does your bank use your browsing data to build a risk profile? Has the algorithm inadvertently created different risk models for men/women/white/black/gay people? I’ll give an example of how this can happen:

Let’s say (and these are fabricated numbers) that men are 95% likely to pay off a loan and women are 96% likely. That’s basically equal, but there is a small difference. Now suppose 100 people (50 men, 50 women) ask a bank for a mortgage, and the bank only has the money to give a mortgage to 20 of them. Statistically the best thing for the bank to do is offer those loans only to women, since they are the most likely to pay them off. So through a very tiny statistical difference, you have massively discriminated.

Now this data loops back into the risk algorithm, and since no loans were extended to men, the updated algorithm doesn’t even consider offering loans to men. Fortunately the bank isn’t actually allowed to use gender to decide these things. However, it’s almost impossible to be certain that an algorithm hasn’t inadvertently made an indirect link to gender. For example, it might notice that people who buy dresses, or makeup, or sanitary products, or any product whose market is female-dominated, are more likely to pay off their loan, and therefore it biases by gender anyway.
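A toy simulation makes the feedback loop above concrete. It uses the fabricated 95%/96% numbers from the example; the group labels and the `pick_borrowers` helper are purely illustrative, not how any real bank scores applicants:

```python
# Fabricated repayment rates from the example above (95% vs 96%)
REPAY_RATE = {"men": 0.95, "women": 0.96}

def pick_borrowers(applicants, budget, scores):
    """Rank applicants by the model's estimated repayment probability
    and fund only the top `budget` of them."""
    ranked = sorted(applicants, key=lambda group: scores[group], reverse=True)
    return ranked[:budget]

applicants = ["men"] * 50 + ["women"] * 50
scores = dict(REPAY_RATE)  # round 1: the model sees only a tiny 1-point gap

round1 = pick_borrowers(applicants, budget=20, scores=scores)
print(round1.count("women"), round1.count("men"))  # 20 women, 0 men

# Feedback loop: "retrain" only on groups that actually received loans.
# Men got zero loans, so there is no repayment data for them at all,
# and a naive model treats "no data" as "no evidence of repayment".
observed = {g: REPAY_RATE[g] for g in set(round1)}
scores = {g: observed.get(g, 0.0) for g in REPAY_RATE}

round2 = pick_borrowers(applicants, budget=20, scores=scores)
print(round2.count("women"), round2.count("men"))  # still 20 women, 0 men
```

A 1-percentage-point gap produces a 100%/0% split in who gets a loan, and after one round of retraining the exclusion is self-sustaining, which is the discrimination amplification the paragraph describes.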

The algorithm has no concept of morals or of what is right or wrong; it’s just looking at statistics. If it sees that people who visit certain websites are statistically riskier than people who don’t, it’s going to discriminate. Whenever something is limited in supply (loans, housing, jobs), every business wants to target the “best” candidate, and that can quickly create discrimination.

If you think that advertising networks simply show certain ads to anyone who has visited web pages containing the name of the product, you are sorely mistaken. It was discovered many years ago that you can find behavioral patterns among people if you have enough data about them. You can essentially group people together by their current state of mind. More importantly, you can study how people change their minds and look at what they were doing when they changed them. It can be simple words you read, or more complex imagery, that predictably takes you from one thought to the next.

This can be done not only through advertisements but also through dynamic content on news sites and social media sites, which makes it harder to detect. And although it can be hard to prove that this is being done, it should also be possible to use these techniques to automatically generate a script that will reliably change the minds of a target audience into whatever you want. It would likely be an awkward script, though, since most of these triggers are simple words or phrases without much relevance to each other, so you would still need a good writer to string them together sensibly, unless you want someone to just stand on a stage repeating seemingly random, rambling words over and over with a few mumbled, half-hearted attempts at bridging them together.

So the advertisers are no longer just trying to offer you products; they are actively manipulating your mind in a very clinical and determined way. Sure, they can manipulate you into buying their products. However, it can be much more intrusive than you might think. It could be possible, for example, to manipulate people into having kids so they end up consuming more products. Or to make people vote for candidates who are against their own interests. Or to get you to take up activities that will require you to buy medication later in life.