Wednesday, March 21, 2018

Did Cambridge Analytica’s Facebook Harvesting Decide the US Elections?

Have the Russians and Cambridge Analytica been abusing Facebook to influence the US and British elections by spreading fake news to receptive users?
Absolutely Yes!

Has this abuse had an impact on the US elections and Brexit?
Most likely not!

It is becoming increasingly clear that people like Donald Trump (through his former confidant Steve Bannon) and Vladimir Putin have been (illegally) harvesting user profiles on Facebook and setting up Twitter bots to spread fake news on Facebook and Twitter.
Facebook itself has admitted that fake news produced by Russian and US alt-right actors may have been seen by millions of users. What has not been shown, however, is what effect this has had on actual voting behavior.

I argue that this effect has been minimal, most likely smaller than the voting margin for Trump in the US elections, and for Brexit in the UK.

While a group of researchers has done a thorough analysis of the dynamics of the spread of fake news on Twitter, they did not measure the influence this has had on actual voting behavior. While they found that fake news spreads faster than true news, they ignore three key issues:
  1. Echo chambers: the political spectrum has become Balkanized, with people only consuming news they believe in. This effect is reinforced through fake news Twitter bots flooding the feeds of alt-right believers with fake news.
  2. The personality of Twitter users: it has been shown that Twitter users are more extroverted and neurotic than the general population, which also happen to be personality characteristics of alt-right followers.
  3. Twitter bots: today's bots have become so sophisticated that it is hard to distinguish them from real people. Fake news spreaders have become adept at using them to game Twitter's metrics so that their tweets score highly.
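To make the third point concrete, here is a minimal, purely illustrative heuristic for flagging bot-like accounts. The features and thresholds are my own invented assumptions, not a validated detector; real bot-detection systems combine many more signals.

```python
# Toy bot-likeness score; the features and cutoffs below are assumptions
# chosen for illustration, not validated detection thresholds.
def bot_score(tweets_per_day, followers, following, account_age_days):
    """Return a rough 0..1 score; higher means more bot-like."""
    score = 0.0
    if tweets_per_day > 50:                            # extreme posting frequency
        score += 0.4
    if following > 0 and followers / following < 0.1:  # follows far more than followed
        score += 0.3
    if account_age_days < 30:                          # very young account
        score += 0.3
    return min(score, 1.0)

# A hyperactive two-week-old account that follows 2000 users scores as very bot-like.
print(bot_score(tweets_per_day=120, followers=10, following=2000, account_age_days=14))
```

Even such a crude score hints at why metric-gaming works: accounts engineered to retweet at high volume look statistically different from typical human users, yet platforms count their retweets all the same.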

I argue that the impact of this rumormongering is vastly overrated. We should credit the average Twitter and Facebook user with more sense than to unquestioningly pass on whatever rumor is served up. In my view, these messages full of half-truths and fake news supportive of Brexit and Donald Trump were mostly passed on by followers and believers of Brexit and Trump who had made up their minds well before reading these fake news items, and did not need any influencing.

A second argument concerns the harvesting of user profiles on Facebook, which enables fake news spreaders to infer users' personality characteristics. This approach, which our team pioneered a long time ago, has also been used by other researchers, and was implemented on a large scale by another team that passed it on to Cambridge Analytica. I don't think that knowing the personality characteristics of Facebook users makes them much more receptive to fake news – again, people are smarter than that: either they have made up their minds before, or they are not likely to change their opinion just because they are more Open, Conscientious, Extraverted, Agreeable, or Neurotic (these are the dimensions of the OCEAN personality model researchers commonly use for this type of analysis).
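For readers curious how such profiling works mechanically: the published approach reduces a user-by-page "likes" matrix to a few latent dimensions and regresses trait scores onto them. The sketch below uses synthetic data and plain NumPy; it is a caricature of the idea, not the actual pipeline used by any of the teams mentioned above.

```python
import numpy as np

# Toy sketch: predict an OCEAN trait score from a binary user-x-page "likes"
# matrix via truncated SVD plus linear regression. All data is synthetic.
rng = np.random.default_rng(0)
n_users, n_pages, n_dims = 200, 50, 10

likes = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)
# Pretend the trait (say, Openness) is a noisy linear function of the likes.
true_w = rng.normal(size=n_pages)
openness = likes @ true_w + rng.normal(scale=0.1, size=n_users)

# Reduce the likes matrix with a truncated SVD, then fit least squares.
U, s, Vt = np.linalg.svd(likes, full_matrices=False)
X = U[:, :n_dims] * s[:n_dims]                 # low-dimensional user features
w, *_ = np.linalg.lstsq(X, openness, rcond=None)
pred = X @ w

corr = np.corrcoef(pred, openness)[0, 1]
print(f"correlation between predicted and 'true' trait: {corr:.2f}")
```

Note that even in this idealized toy setting, with the trait constructed to depend linearly on the likes, the low-dimensional prediction is only moderately correlated with the target, which is consistent with my point that such profiles are a blunt instrument.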

The best approach for fact-checking is a critical, well-informed mind. Our transparency engine offers a powerful tool to support the critical mind.

Saturday, February 10, 2018

Human-computer symbiosis, or computers taking over?

In the early days of computer science, in the 1990s, when I was a post-doc at MIT, there was a huge debate among AI researchers: people like Marvin Minsky on one side said that computers would eventually become smarter than humans, while the majority agreed that this would never be possible. Fast-forward 25 years, and there is no question that Marvin Minsky was right.

Researchers today envision one of three possible scenarios: in the first and least fearsome one, the human tells the machine what to do; in the second, we have true human-machine symbiosis, with computer and human as equal partners on a task; in the third, the machine tells the human what to do. Looking at the current stock market fluctuations, it unfortunately seems we have already firmly reached the third scenario, with computers taking over. After multiyear growth and a record high of the Dow on January 26, 2018, the Dow lost 1,175 points in a single day just a few days later, on February 5 – its biggest point loss ever – all thanks to automated computer trading. A large part of these fluctuations originated in algorithms tied to exchange-traded index funds (ETFs), which are responsible for up to 38% of all trading. While the impact of these fluctuations can be enormous for individual investors and companies tied to these trades, there are no worries for the firms trading in these ETFs, since their robots do all the trading automatically. According to the operator of one such investment firm, "if the market goes down 4 percent, I don't even care. That is the beauty of systems. Once you build it, there is literally nothing left to do."
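The cascade dynamic behind such a one-day plunge can be caricatured in a few lines of code. Assume many firms run the same automated stop-loss rule; once an external shock pushes the price past their common trigger, each robot's sale nudges the price further down. The trigger and market-impact numbers are invented for illustration, not calibrated to the actual February 2018 event.

```python
# Toy illustration (not a real trading model): many identical automated
# stop-loss rules selling into a falling market amplify the initial drop.
def simulate(price, n_robots, trigger_drop=0.04, impact_per_sale=0.001):
    start = price
    price *= 1 - 0.05                      # exogenous 5% shock starts the slide
    sales = 0
    for _ in range(n_robots):
        if (start - price) / start >= trigger_drop:
            sales += 1
            price *= 1 - impact_per_sale   # each forced sale pushes the price lower
    return price, sales

final, sales = simulate(100.0, n_robots=200)
print(f"price after cascade: {final:.2f} ({sales} robot sales)")
```

In this toy run, a 5% shock ends up as a drop of more than 20%, purely because every robot reacts to the selling of the others. No single rule is irrational; the instability comes from their interaction.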

Humans might be tearing their hair out – the VIX fear index rose to multiyear highs at the beginning of February 2018 – while the robots calmly keep doing their work. This means it is no longer the shrewd investor who is beating the market, but computers equipped with the latest artificial intelligence algorithms.

In earlier research studying crowd behavior on Twitter, we observed a similar pattern, with Twitter bots and humans interacting in a tightly woven dialogue of tweets and retweets in which it becomes nearly impossible to distinguish humans from robots. More research on untangling the madness of the crowd from the wisdom of the swarm is sorely needed, knowing that both the wisdom and the madness today come not only from humans, but increasingly from the interaction between real human brains and AI algorithms that simulate and beat the human brain.
I therefore propose the following (open-ended) research agenda:
  1.  Study the automated robot-controlled feedback loop of the predictive capabilities of online social media on stock prices. This will help develop an algorithmic framework capable of dealing with the impact of artificial intelligence on collective (human) intelligence.
  2. Study the evolution of digital tribes: automatic filtering done by social media companies to preselect the content users get to see on their platforms uses similar techniques as (1). As this preselection is done by automated algorithms in combination with human preferences, it is often argued that it creates information bubbles in which individuals only choose to see content that confirms their world view and no content that opposes it. This leads to the formation of tribes. For instance, we might track the fragmentation of humans co-located in nation states into different digital tribes, such as patriots (fatherlanders), technocrats (nerds), environmentalists (treehuggers), and spiritualists.
  3. Study the dissemination and evolution of fake news, which reflect the alternative realities of different information bubbles, viewed through the lens of the different tribes.
  4. Study human emotions by sensing "honest signals" with smart sensors. To enable computers to collaborate with humans on a human-computer symbiotic level, computers will need to get an understanding of the feedback loop between human emotions and computer algorithms. The goal is to develop alternative means for measuring individual emotions directly from the source – humans – using body sensing systems such as smartwatches and smartphones to track, for instance, individual happiness, fear, and stress to get a baseline system for calibration with online social media.
This list is just a start; your ideas are most welcome!
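As one concrete starting point for item 4 (a sketch under my own assumptions, not a method from the agenda above): a widely used stress proxy computable from smartwatch heart data is RMSSD, the root mean square of successive differences between inter-beat (RR) intervals. Lower heart-rate variability, i.e. lower RMSSD, is commonly associated with stress; the interval values below are made up for illustration.

```python
import math

# RMSSD: root mean square of successive differences between RR intervals (ms),
# a standard short-term heart-rate-variability measure used as a stress proxy.
def rmssd(rr_intervals_ms):
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

calm     = [820, 815, 830, 840, 810, 835]   # varied intervals  -> high RMSSD
stressed = [650, 652, 649, 651, 650, 652]   # rigid intervals   -> low RMSSD

print(f"calm RMSSD: {rmssd(calm):.1f} ms, stressed RMSSD: {rmssd(stressed):.1f} ms")
```

Streaming such a measure from a smartwatch would give exactly the kind of per-individual emotional baseline item 4 calls for, against which signals from online social media could then be calibrated.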