Machine learning has a privacy problem

Despite all of its benefits, artificial intelligence is introducing some controversial issues, and it’s not all about stealing our jobs. Thanks to machine learning algorithms, assets that were previously in our sole possession are no longer private, and can be accessed and used by anyone who has the nerve to do a little searching and install the right applications.

Given the recent developments in machine learning technology, the ubiquity of the web, and the fact that your data lives forever, it is fair to say that privacy as we once knew it will soon no longer exist.

Here are just a few examples of how machine learning is invading user privacy.

The facial recognition nightmare

The recent rollout of the facial recognition app FindFace is a reminder of just how dangerous a seemingly harmless technology can become when it is offered to the public with no safeguards in place. Facial recognition technology is nothing new; companies such as Facebook and Google already use it widely in the services they offer their customers.

However, what sets the Russian FindFace service apart is its open access to the extensive image database of VK.com, the Russian equivalent of Facebook. Anyone can submit a picture of a stranger to the service and obtain the person’s identity and social media profile within seconds. The system has proven very effective, even beating Google’s face recognition algorithm in the MegaFace facial recognition challenge.
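
Under the hood, services like this typically convert every face into a fixed-length numerical embedding and then search a database for the closest match. The sketch below illustrates only that matching step; the embeddings, profile names, and threshold are all placeholders, and a real system would produce the vectors with a trained neural network rather than random numbers.

```python
import numpy as np

# Hypothetical database: one embedding per known profile. A real
# service precomputes these from profile photos with a trained
# face-recognition network; random vectors stand in for them here.
rng = np.random.default_rng(0)
profile_names = ["profile_a", "profile_b", "profile_c"]
profile_embeddings = rng.normal(size=(3, 128))  # 128-d face vectors

def identify(query_embedding, threshold=0.6):
    """Return the closest profile by cosine similarity, or None
    if nothing in the database is similar enough."""
    db = profile_embeddings / np.linalg.norm(
        profile_embeddings, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    similarities = db @ q                 # cosine similarity per profile
    best = int(np.argmax(similarities))
    if similarities[best] < threshold:
        return None                       # no confident match
    return profile_names[best], float(similarities[best])

# A noisy photo of "profile_b" should still match its stored embedding.
query = profile_embeddings[1] + rng.normal(scale=0.05, size=128)
print(identify(query))
```

The unsettling part is the scale: once the embeddings are precomputed, matching one snapshot against millions of profiles is a fast nearest-neighbor lookup, which is why results come back in seconds.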

While the availability of such technology can be beneficial in numerous use cases, such as crime investigation, it can easily be employed nefariously. For instance, authoritarian regimes can take high-definition images of demonstrations and protests, and use the technology to identify and crack down on participants. Other disturbing uses include harassing and stalking innocent people by identifying them on the internet.

The app was used by a St. Petersburg photographer to show how easy it is to identify people you don’t know. So if a total stranger walks up to you, calls you by your name, and reveals some of your most intimate information, which you thought was private to you and your tight circle of confidants, don’t be surprised. This is just the beginning of the end of privacy, made possible by the technology that we love and hate.

All is not lost, though. If you want to counter the effects of controversial apps such as FindFace, the Kaspersky Blog offers a few tricks for taking pictures that can’t be analyzed by face recognition algorithms. For the most part, however, the angles and poses it suggests will make you look ridiculous, and they won’t help with pictures you’ve already posted on the internet.

Unearthing your most intimate secrets

Machine learning is used extensively in online advertising to understand user preferences, create more targeted campaigns, and improve click-through rates. The effect is sometimes interesting, but it tends to get creepy and annoying. I visit a lot of company websites for the research I do for my articles, but that doesn’t mean I’m necessarily interested in buying the products and solutions those companies offer. Apparently, Google and the other platforms that spy on you are not smart enough to understand that.

In many cases, as soon as I leave a company’s website, I start seeing ads for that company pop up on literally every website that features ads, which I find both dumb and invasive. That is something I can cope with and dismiss as a minor nuisance.
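
Mechanically, this kind of retargeting is simple: a tracking script on the company’s site tags your browser with an identifier, and ad networks later check whether that identifier appears on the advertiser’s audience list. A minimal sketch of the logic, with made-up identifiers and site names:

```python
# Toy retargeting logic. A tracking pixel on the advertiser's site
# adds the browser's cookie ID to an audience list; the ad server
# checks that list when filling a slot. All names are illustrative.
retargeting_lists: dict[str, set[str]] = {}

def record_visit(advertiser: str, cookie_id: str) -> None:
    """Called by the tracking script embedded on the advertiser's site."""
    retargeting_lists.setdefault(advertiser, set()).add(cookie_id)

def pick_ad(cookie_id: str, candidates: list[str]) -> str | None:
    """Prefer advertisers whose site this browser has already visited."""
    for advertiser in candidates:
        if cookie_id in retargeting_lists.get(advertiser, set()):
            return f"ad for {advertiser}"  # follows you around the web
    return None

record_visit("acme-widgets.example", cookie_id="cookie-123")
print(pick_ad("cookie-123", ["acme-widgets.example", "other.example"]))
# -> "ad for acme-widgets.example", on every ad-supported site you visit
```

Note that nothing in this loop asks why you visited the site, which is exactly why a researcher browsing for work gets chased around the web like a hot sales lead.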

Others have had a more painful experience.

In 2014, Princeton sociology professor Janet Vertesi went to great lengths to hide her pregnancy from online marketing companies and their data-hungry detection algorithms. She wanted to see whether it was possible to avoid a repeat of the infamous episode in which a teenage girl’s shopping patterns betrayed her pregnancy to her local Target store before her parents even knew about it.
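
That kind of inference is not magic: it is ordinary supervised learning, in which a classifier trained on purchase histories learns which combinations of products correlate with pregnancy. Here is a toy sketch of the approach, with invented products, data, and labels standing in for a real retailer’s records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented purchase-indicator features and labels; a real retailer
# would train on large volumes of historical purchase data.
products = ["unscented lotion", "prenatal vitamins", "cotton balls", "coffee"]
X = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = pregnant (toy labels)

model = LogisticRegression().fit(X, y)

# Which products carry predictive weight in this toy model?
for name, weight in zip(products, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")

# Score a new shopper's basket: lotion and vitamins, no coffee.
basket = np.array([[1, 1, 0, 0]])
print(f"estimated probability: {model.predict_proba(basket)[0, 1]:.2f}")
```

Vertesi’s countermeasures were, in effect, an attempt to keep her purchases out of exactly this kind of feature matrix.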

Vertesi did everything she could: using the Tor browser, avoiding social media, telling friends and family not to mention the pregnancy online, paying with gift cards bought in cash, using unrelated email addresses, and having goods delivered to addresses that weren’t directly linked to her.

But the experiment made her look like a rude friend and family member, and even like a potentially criminal citizen, since paying for large purchases with cash-bought gift cards is the kind of behavior that flags illicit activity. And in the end, her secret was unearthed seven months into her pregnancy.

And Vertesi is well versed in the laws and principles of online privacy. Others think that preserving online privacy is as easy as ticking a checkbox and opting out of data collection programs. They don’t know what they’re in for.

Even your handwriting can be forged

Another one of your unique assets has just been compromised, this time by researchers at University College London. A recent machine learning system developed at UCL, dubbed “My Text in Your Handwriting,” examines samples of a person’s handwriting, as little as a paragraph, and generates text that looks like it was written by that person.
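
While the researchers’ full pipeline is considerably more sophisticated, the core idea can be sketched as glyph resampling: segment a writing sample into individual character images, then render new text by picking a stored variant of each character and adding slight jitter. The sketch below assumes such a glyph bank has already been extracted; building it from a scanned paragraph is the hard part, and everything here is illustrative rather than the authors’ actual code.

```python
import random
from PIL import Image

# Assumed glyph bank: each character maps to one or more grayscale
# images of that character, cropped from the target's writing sample.
GlyphBank = dict[str, list[Image.Image]]

def render_line(text: str, glyphs: GlyphBank) -> Image.Image:
    """Compose a line of 'handwriting' by sampling a stored glyph
    variant per character and pasting it with small jitter."""
    canvas = Image.new("L", (32 * len(text) + 20, 80), color=255)
    x = 10
    for ch in text.lower():
        if ch == " ":
            x += 16                               # word gap
            continue
        variants = glyphs.get(ch, [])
        if not variants:
            continue                              # character not sampled
        glyph = random.choice(variants)           # natural variation
        y = 15 + random.randint(-3, 3)            # baseline jitter
        canvas.paste(glyph, (x, y))
        x += glyph.width + random.randint(0, 3)   # spacing jitter
    return canvas
```

The jitter matters: it is the small, character-to-character variation that makes the output look written rather than typeset in a handwriting font.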

The technology certainly has some very novel uses, such as helping people disabled by strokes continue to produce handwritten text, or translating comic books while preserving the original author’s handwriting.

But it can also be put to some very evil uses, such as forging a letter in a person’s name and with their signature in legal cases, or creating counterfeit goods. Signatures of famous figures sell for very high prices, and if they can be replicated convincingly, forgery could become a lucrative source of income for fraudsters.

In creepier scenarios, the technology might even be used to forge letters and documents in the name of historical figures in order to alter the course of history. But document forgery requires more than handwriting replication, and it may still be a while before such endeavors can be carried out successfully.

The scientists claim that if “My Text in Your Handwriting” is used for forgery, the same technology can be used to detect the forgery. But that would require a human to become suspicious in the first place. In tests, participants who had been told about the technology in advance were only able to detect forged handwriting 60 percent of the time; in other words, even primed observers mistook forgeries for the genuine article in 40 percent of cases. The technology is far more likely to slip past an oblivious reader who doesn’t even know it exists.

Are we ready for the future of machine learning?

Don’t get me wrong. I love machine learning, and I think it will become a prominent part of everything we do in the near future. It is one of the main keywords I search for on Google when looking for news, and whatever story I’m writing, I usually try to spice it up with a machine learning angle.

But nonetheless, I believe we should raise awareness and cover all sides of a story, and if a technology has ill uses, we should all know about them. So while we enjoy the benefits machine learning brings to our lives, we should keep our eyes open to its trade-offs, such as how it affects our privacy.

As a general final tip, I would advise you to think twice (or maybe thrice) before sharing pictures, posting comments, or even revealing your handwriting. There’s always a hungry machine ready to ingest everything you put on the internet.

What other creepy uses of machine learning have you come across? Share with us in the comments section.
