So says Michal Kosinski, a professor at the Stanford Graduate School of Business, a frequent writer and speaker on data mining, and perhaps the most stoic person on Earth in the face of the caches of data that social media has spawned about everyone who has ever left a smudge of a digital footprint.
Those growing footprints emerge from our use of Twitter, Instagram, and online retailers, and from our Facebook activity—the likes, the comments, the quizzes.
In 2015, hundreds of thousands of people took a simple personality quiz on Facebook called This Is Your Digital Life. Through that quiz, the British data firm Cambridge Analytica compromised the data of as many as 87 million Facebook users in a now infamous breach used to gather voter data ahead of the 2016 presidential election. As it had done for the Brexit campaign, the firm planned to use that data to help Donald Trump run a successful campaign.
With the harvested data in hand, the company analyzed profiles, grouping users according to political affiliations, “liked” pages, and other similarities.
Fake Facebook and Twitter accounts—at the far ends of the political spectrum—were created, then linked to fringe news sites designed to stoke fear and outrage. Brittany Kaiser, a former Cambridge Analytica business development director who later became a whistleblower, would state that the company had pinpointed a large group of individuals as “persuadables.”
As Carole Cadwalladr, an investigative journalist for The Guardian who first reported on the company’s scandal, wrote, “They started using informational warfare.”
In the aftermath of the scandal, many users who learned their data had been compromised had never taken the quiz themselves. But at least one of their friends had, and that’s all it took.
By April 2018, when Facebook informed users whether their data was involved, millions of eyes were opened to the mass of data they had casually left on social media.
Kosinski is in the minority when it comes to the sharing of data; he likens it to taxes, a necessity to keep society running, and points to potential breakthroughs in medicine as one possible payoff. For most others, the long-term reaction is alarm.
David Carroll is best known for legally challenging Cambridge Analytica to recapture and send him his own breached data, using European data protection law. His story was featured in the 2019 film The Great Hack, which lays out the scandal in detail, from multiple perspectives.
“. . . I knew that the data from our online activity wasn’t just evaporating,” Carroll says near the start of the film. “As I dug deeper, I realized these digital traces of ourselves are being mined into a trillion-dollar-a-year industry. We are now the commodity. But we were so in love with the gift of this connected world that no one bothered to read the terms and conditions.”
Fear was the prevailing emotion, he says. He wondered who had been feeding it to everyone.
Calming the fear: What businesses can do
Privacy concerns and related fears can affect the way a business operates and, more importantly, how clients and customers relate to it.
Pedro Uria-Recio, an expert in AI, data analytics, and digital marketing and the head of Axiata Analytics, believes businesses need to take the initiative in calming this pervasive culture of fear.
“We can now do things that were impossible a few years ago, and existing ethical and legal frameworks cannot prescribe what we can do,” Uria-Recio wrote in an editorial about big data ethics for Towards Data Science. “While there is still no black or white, experts agree on a few principles.”
Those include keeping private data and identities private, treating shared private information as confidential material, and giving customers a transparent view of how their data is being used.
On a more existential note, he warns that businesses should not use big data to interfere with human will, stressing that big data analytics “can moderate and even determine who we are before we make up our own minds. Companies need to begin to think about the kind of predictions and inferences that should be allowed and the ones that should not.”
Another huge problem area is the potential to institutionalize biases, including racism and sexism. “Machine learning algorithms can absorb unconscious biases in a population and amplify them via training samples,” he wrote. Nor should those decisions remain solely in the C-suite. “. . . anyone involved in handling big data should have a voice in the ethical discussion about the way data is used,” Uria-Recio wrote. “Companies should openly discuss these dilemmas in formal and informal forums. When people do not see ethics playing in their organization, people in the long run go away.”
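To see how training samples can institutionalize bias in the way Uria-Recio describes, consider a toy sketch (synthetic data and a stock scikit-learn model, not any company’s real system): a classifier trained on historically biased hiring decisions learns to penalize group membership itself, even when the only legitimate signal is skill.

```python
# Toy illustration: a model trained on biased historical decisions
# learns to reproduce that bias. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(0, 1, n)      # the only legitimate signal
group = rng.integers(0, 2, n)    # protected attribute (0 or 1)
X = np.column_stack([skill, group])

# Historical decisions: skill mattered, but group 1 was also penalized
# by past (biased) decision-makers.
y = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(X, y)

# Two applicants with identical skill but different group membership:
print(model.predict_proba(np.array([[1.0, 0.0], [1.0, 1.0]]))[:, 1])
# The group-1 applicant gets a markedly lower predicted probability,
# even though skill is identical -- the old bias is now automated.
```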
Measuring the effects of big data
Longtime investor and venture capitalist Roger McNamee serves as something of a harbinger in this age of free-flowing data. Studies have shown, he says, that the first 10 to 15 minutes people spend on social media are usually filled with pleasant activities, such as checking in on family and friends.
“But if you stay on it long enough, they keep throwing things at you that make you either afraid or angry,” he says.
McNamee, who chronicled his experience in the 2019 book “Zucked: Waking Up to the Facebook Catastrophe,” has teamed up with former Google design ethicist Tristan Harris and other activists to expose the dark side of social media and start a national conversation about its usage and its effects on personal data and mental health, among other factors.
Harris says his years at Google spurred his activism.
“The internet is not evolving at random,” Harris said in a recent TED Talk. “The reason it feels like it’s sucking us in the way it is, is because of this race for attention . . . Technology is not neutral, and it becomes this race to the bottom of the brain stem of who can go lower to get it.”
Massive scale, incredible speed
How did we get here? McNamee says that after the tech bubble burst in 2000, a group known as “the PayPal mafia”—made up of former PayPal employees, including Elon Musk, Peter Thiel, and others who would go on to make billions in the tech business—began building Web 2.0, which centered not on pages but on people. It was the basis for social media as we know it.
“The PayPal mafia realized that Moore’s law and Metcalfe’s law—the two laws that talk about processing power and networks—were about to hit crossing points, where there would be enough resources to do whatever you wanted to do,” McNamee says.
“They subscribed to the notion that you could disrupt, you could change things and not be responsible for the consequences of your actions, which was incredibly convenient if you’re about to go out and create giant global enterprises.”
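To make those “crossing points” concrete: Moore’s law compounds computing power exponentially, a doubling roughly every two years, while Metcalfe’s law pegs a network’s value to the square of its user count. A back-of-the-envelope sketch, with purely illustrative numbers rather than historical data:

```python
# Rough sketch of the two growth curves McNamee cites.
# All numbers are illustrative, not historical measurements.

def moores_law(years: float) -> float:
    """Relative processing power, doubling roughly every two years."""
    return 2 ** (years / 2)

def metcalfes_law(users: int) -> int:
    """Network value taken as proportional to the square of its users."""
    return users ** 2

# Exponential compute and quadratic network value compound quickly:
# within a decade or two, "enough resources to do whatever you wanted to do."
for year in range(0, 21, 5):
    users = 1_000_000 * (year + 1)   # illustrative adoption curve
    print(f"year {year:2d}: compute x{moores_law(year):,.0f}, "
          f"network value {metcalfes_law(users):,}")
```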
They used blitzscaling, a set of techniques that allow companies to achieve massive scale at an incredible speed. They eliminated friction. And within a decade, these future moguls achieved global dominance.
The idea was to make social media free, which meant it would be ad-dependent. But how would that work? It’s a now familiar scenario: rewards such as likes and notifications would keep users coming back to the site, creating habits.
But, as Harris notes, the implications weren’t that innocuous. The filter bubbles appealed to people’s baser instincts, what McNamee calls the “lizard brain,” to create tribes based on fear and outrage.
Which, as it turns out, is just what is needed to persuade people to vote the way you want them to.
Data using consumers
McNamee says our interactions with big data are far larger in scope than can easily be imagined. In a March talk with the science and technology group Big Think, he discussed the growing problem areas of social media.
“When you watch this business model go to its final point, they’re tracking everything,” he said, adding that social media companies buy users’ information on numerous fronts: credit cards, location, cellular carriers, health apps. “They buy data wherever they can get it. They create this high-resolution picture of you.”
He also hints that Google is using its CAPTCHA verification service to fuel its own artificial intelligence efforts.
In January, the British publication TechRadar expanded on the method: “Behind the scenes of one of the most popular CAPTCHA systems—Google’s reCAPTCHA—your humanoid clicks have been helping figure out things that traditional computing just can’t manage, and in the process you’ve been helping to train Google’s AI to be even smarter.
“You know when reCAPTCHA asks you to identify street signs? Essentially, you’re playing a very small role in piloting a driverless car somewhere, at some point in the future. So it is hugely convenient, then, that Google has at its disposal hundreds of millions of internet users to work for it: by using reCAPTCHA to tackle these problems, Google can use our need to prove we’re human to force us to use our very human intuitions to build its database.”
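Mechanically, what TechRadar describes is a human-in-the-loop labeling pipeline: every verification click doubles as a vote on an image’s label, and enough agreeing votes turn the image into training data. A minimal sketch of that idea, with hypothetical names and flow rather than Google’s actual system:

```python
# Minimal sketch of a CAPTCHA-style human-labeling pipeline.
# Names and flow are hypothetical, not Google's actual implementation.
from dataclasses import dataclass

@dataclass
class ImageChallenge:
    """One CAPTCHA image: a bot check that doubles as a labeling task."""
    image_id: str
    question: str            # e.g. "Select all images with street signs"
    votes_yes: int = 0
    votes_no: int = 0

    def record_click(self, user_said_yes: bool) -> None:
        # Every "prove you're human" click is also a label vote.
        if user_said_yes:
            self.votes_yes += 1
        else:
            self.votes_no += 1

    def consensus_label(self, min_votes: int = 5):
        # Once enough humans agree, the image becomes training data.
        if self.votes_yes + self.votes_no < min_votes:
            return None      # not enough human signal yet
        return self.votes_yes > self.votes_no

# Simulated users verifying they're human -- and labeling data for free.
challenge = ImageChallenge("img_042", "Select all images with street signs")
for answer in (True, True, True, False, True):
    challenge.record_click(answer)

print(challenge.consensus_label())   # True -> labeled "contains street sign"
```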
If it sounds like the stuff of science fiction, consider the popular TV anthology series “Black Mirror,” which creates nightmarish tales that are fueled by somewhat fathomable technology.
In “The Entire History of You,” one of the series’ most acclaimed episodes, people wear eye implants that record every aspect of their lives. The idea isn’t pure fantasy: Samsung was awarded a patent in 2016 for contact lenses that take photos whenever the wearer blinks, with the footage viewable via a smartphone livestream.
The frighteningly prescient “Nosedive” explores a world in which people can rate their interactions with others via a real-time app, just by pressing a button on a phone. Dip below a three, and you’re in trouble. The accumulated social rating, or score, becomes the basis for everything from popularity to mortgage eligibility to romantic pairings.
You can imagine where this one is going.