Wednesday, November 28, 2018

Highly Intelligent Is Not Highly Creative – How to Be Truly Innovative

In research as well as in entrepreneurship, many characteristics distinguish an exceptional researcher or entrepreneur from a merely good one. In particular, there is a key difference between highly intelligent and highly creative people, as these characteristics are not necessarily correlated. Intelligence is commonly defined as the ability to acquire and apply existing knowledge and skills, while creativity is the ability to create something novel and valuable. New research and new enterprises can therefore be grouped into four categories:

Highly intelligent, incrementally creative researchers or entrepreneurs add an incremental twist to existing methods and knowledge. For instance, a researcher might add mathematical bells and whistles to widely accepted ideas and concepts – in Renaissance Europe they might have added yet another proof that the Sun was indeed circling around the Earth. Most of the papers in Nature and Science fall into this category, cementing established ideas by repeating them with more complex algorithms and procedures, or by writing the first overview paper categorizing trends in an emerging field (which has the nice side effect of boosting the author’s h-index, as it will be widely cited). Among entrepreneurs, these are the people reaping the biggest rewards. Bill Gates falls into this category, as he is a master at recognizing emerging trends, which he then integrates into his products.

Highly creative, incrementally intelligent researchers and entrepreneurs, on the other hand, challenge existing wisdom, frequently upending established beliefs. However, as the execution of their ideas is frequently improvised and not presented in a polished way, they have a hard time gaining acceptance – when Galileo Galilei presented convincing proof that the Earth was circling around the Sun, and not the other way around, the Roman Inquisition in 1615 found this “foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture”. Highly creative researchers in more modern times still have a hard time challenging established wisdom, and more often than not fail to make their ideas stick. In particular, it is near impossible to get these ideas into Nature and Science, as today’s reviewers, just like the Roman Inquisition 400 years earlier, deem them “foolish and absurd”.

Highly creative, highly intelligent researchers and entrepreneurs are the small group of people who succeed in making their disruptive ideas stick. Unlike Galileo Galilei, today’s Nobel Prize winners might get recognition for their disruptive ideas while still alive, although sometimes they have to wait 40 years for their ideas to be recognized by the scientific establishment, as happened to Barbara McClintock, who discovered genetic regulation mechanisms in maize in 1944 and got the Nobel Prize for this insight only in 1983. In the meantime she had stopped publishing her research after being ridiculed by her peers. I would also put entrepreneurs like Steve Jobs and Elon Musk into this category, as they take huge personal risks promoting and commercializing novel and untested technologies.

To convince one’s peers and society about novel thinking and products, researchers and entrepreneurs need other essential personality characteristics besides intelligence and creativity. Only by being highly persistent and highly empathic can one really succeed in getting others to accept new ideas.

In an excellent article, Albert-Laszlo Barabasi tells the story of Douglas Prasher, who pioneered a process leading to a Nobel Prize in Chemistry but was unable to find funding for his research and became a courtesy driver for a Toyota dealership, while Martin Chalfie and Roger Tsien, two of his more persistent collaborators, shared the Nobel Prize. These two collaborators showed their empathy, however, by recognizing Prasher’s contribution in their Nobel acceptance speeches and by helping him subsequently return to science.

Sunday, November 25, 2018

Principles of Profiling Users with AI

Two days ago I read a Washington Post article about three Californian AI startups that profile users based on opaque AI algorithms. Their products calculate a quality score for people in different domains without explaining how it is calculated:

  • Predictim calculates a “risk rating” of babysitters based on their social media activities.
  • HireVue analyzes tone, word choice, and facial movement of job candidates to predict their skill on the job.
  • Fama screens employees’ social media and internal HR data to prevent what they call “brand risk”, such as sexual harassment, bullying, or insider threats.

The main problem with these and similar systems is that they use machine learning, in particular deep learning, as a black box. Their algorithms return a score claiming high numerical accuracy without explaining how it was calculated.

Our own Condor software does similar things, showing a bird’s-eye view of the communication patterns of organizations based on their e-mail or social media archives. There is one key difference though: we apply the “Google Maps Privacy Principle” – aggregated information is shown to all users, while individual information is only shown to the affected individual. The principle is derived from Google Maps, which becomes truly useful by aggregating the location information of Android users with location tracking turned on and of iPhone users with Google Maps turned on, dynamically tracking their smartphone location. But the only individual who knows her/his own personal location is the owner of the phone. Google Maps thus aggregates global information and returns individually useful information to each user.

This approach is what we are trying to pursue in our own work:
(1) Show aggregated information to the public, and individual information only to the affected individual.
There are, however, applications where the individual user has to be identified to others. These applications can be split into two categories.
(2) The application needs to identify the user, and the user gets a benefit from being identified, for example as a “rock star” employee, most collaborative employee, etc.
(3) The application needs to identify the user, and the user has a disadvantage from being identified, for example as a potential security risk, low performer, etc.
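Principle (1) above can be sketched in a few lines of code. This is a minimal illustration only – the names and scores below are invented, not taken from Condor:

```python
# Toy illustration of the "Google Maps Privacy Principle":
# aggregated statistics are visible to everyone, while an
# individual record is shown only to the individual concerned.
# All names and numbers are made up for illustration.

from statistics import mean

# hypothetical per-person communication scores (e.g. e-mails sent per day)
scores = {"alice": 42, "bob": 17, "carol": 29}

def view(requester, subject=None):
    """Return what `requester` is allowed to see about `subject`."""
    if subject is not None and requester == subject:
        # individual information goes only to the affected individual
        return {"who": subject, "score": scores[subject]}
    # everybody else sees only the aggregate
    return {"group_mean": mean(scores.values()), "group_size": len(scores)}

print(view("alice", "alice"))  # alice sees her own score
print(view("bob", "alice"))    # bob sees only the group aggregate
```

The design choice is that the access check happens inside the data layer itself, so no caller can obtain another person’s individual score by construction.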

The applications from Predictim, HireVue, and Fama are clearly in category (3). Users are convicted by a machine learning algorithm without knowing why; the algorithm operates as a black box. While arguments can be made for category (2) applications to run in such a mode – the user gets a pleasant surprise, even if s/he does not know why – this is clearly not acceptable for category (3) systems. At the very least, the user needs to know why s/he has been convicted.
As I assume that more and more category (3) systems will be built, for instance by law enforcement, I envision the need for an impartial authority, public or private, to check and certify the accuracy of these AI-based prediction systems.
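The minimum requirement – telling the user why s/he was scored a certain way – can be met even with simple models. A hypothetical sketch (feature names and weights are invented for illustration, and real systems would be far more complex) of a linear scoring model that returns per-feature contributions alongside the score:

```python
# Hypothetical transparent risk score: a linear model that reports
# how much each feature contributed, instead of a bare black-box number.
# Feature names and weights are invented for illustration.

WEIGHTS = {"aggressive_posts": 0.5, "night_activity": 0.2, "profanity": 0.3}

def explained_score(features):
    """Return (score, per-feature contributions) for a feature dict."""
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0)
                     for name in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = explained_score({"aggressive_posts": 2, "profanity": 1})
print(score)  # the overall score
print(why)    # each feature's share of the score, i.e. the "why"
```

With such a decomposition, a user convicted by the system could at least see which inputs drove the verdict and contest them.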

A second point which sets Condor apart from other AI-based prediction systems is the transparency of its algorithms. The scoring algorithms, applying the “seven honest signals of collaboration” and a list of predefined “virtual tribes”, are documented in great detail in over 150 academic papers and five books. This is quite different from e.g. Predictim’s case studies, which predict the past without disclosing how it is done. There is no guarantee that training data from the past will still be valid for detecting future criminals.

It is still early days, so please tell me what you think – I would love to hear your opinion!

Thursday, November 15, 2018

Why Donald Trump is No Leader

There are many ways to identify exemplary leaders. One of my preferred categorizations comes from here. It lists humility, curiosity, and empathy as the main criteria of successful leadership. I am afraid that Donald Trump scores zero on all three criteria:

Humility means that exemplary leaders treat everybody with respect, are not afraid of criticism, and are willing to admit their own mistakes. They are willing to put their own ego into the background for the sake of others. Donald Trump stands for the opposite: an egomaniac obsessed with his own power and glory.

Curiosity means that a leader constantly looks to further his own understanding, is willing to defer to the knowledge of others, and is thirsty for new information. One of Donald Trump’s first actions was to cut funding for research, denying scientific facts about vaccination and climate change and showing zero scientific curiosity.

Empathy means that we treat others with compassion, try to understand what is going on inside their minds, try not to hurt their feelings, and try to be not just nice, but kind. Trump’s casual hiring and firing of allies, and his disdain for “losers” like John McCain, who was tortured as a prisoner of war, sadly demonstrate an utter lack of empathy.

So, sorry, Mr Trump, you are not a leader. But perhaps I am just living in an alternative reality and everything in reality (TV) is totally different!

Wednesday, October 17, 2018

What Emails Reveal About Your Performance At Work

Recently HR analyst Josh Bersin interviewed Praful Tickoo, head of HR analytics, and Piyush Mehta, CHRO at Genpact, about the work we have been doing together for the last 5 years to identify both rock stars and flight risks among employees. His excellent blog post is here.

Wednesday, March 21, 2018

Did Cambridge Analytica’s Facebook Harvesting Decide the US Elections?

Have the Russians and Cambridge Analytica been abusing Facebook to influence the US and British elections by spreading fake news to receptive users?
Absolutely Yes!

Has this abuse had an impact on the US elections and Brexit?
Most likely not!

It is becoming increasingly clear that people like Donald Trump (through his former confidant Steve Bannon) and Vladimir Putin have been (illegally) harvesting user profiles on Facebook and setting up Twitter bots to spread fake news on Facebook and Twitter.
Facebook has itself admitted that fake news produced by the Russians and US alt-right proponents might have been seen by millions of users. What has not been shown, however, is what effect this has had on actual voting behavior.

I argue that this effect has been minimal, most likely smaller than the voting margin for Trump in the US elections, and for Brexit in the UK.

While a group of researchers has done a thorough analysis of the dynamics of the spread of fake news on Twitter, they do not measure what influence this has had on actual voting behavior. While they found that fake news spreads faster than true news, they ignore three key issues:
  1. Echo chambers: the political spectrum has become Balkanized, with people only consuming news they believe in. This effect is reinforced through fake news Twitter bots flooding the feeds of alt-right believers with fake news.
  2. The personality of Twitter users: it has been shown that Twitter users are more extroverted and neurotic than the average population – which also happen to be personality characteristics of alt-right folks.
  3. Twitter bots: today bots have become so sophisticated that it is hard to distinguish them from real people. Fake news spreaders have become adept at using them to game Twitter’s metrics so that their tweets rank high.

I argue that the impact of this rumormongering is vastly overrated. We should credit the average reader on Twitter and Facebook with being smarter than to unquestioningly pass on whatever rumor is served up. In my view, these messages full of half-truths and fake news supportive of Brexit and Donald Trump were mostly passed on by followers and believers of Brexit and Trump who had made up their minds well before reading these fake news items and did not need any influencing.

A second argument concerns the harvesting of user profiles on Facebook, which enables fake news spreaders to identify users’ personality characteristics based on an approach our team pioneered a long time ago, which was also used by a group of other researchers, implemented on a large scale by another team, and passed on to Cambridge Analytica. I don’t think that knowing the personality characteristics of a Facebook user will make her/him much more receptive to fake news – again, people are smarter than that: either they have made up their minds before, or they are unlikely to change their opinion because they are more open, conscientious, extroverted, agreeable, or neurotic (these are the characteristics measured by the OCEAN personality test researchers commonly use for this type of analysis).

The best approach for fact-checking is a critical, well-informed mind. Our transparency engine offers a powerful tool to support the critical mind.

Saturday, February 10, 2018

Human-computer symbiosis, or computers taking over?

In the earlier days of computer science (the 1990s), when I was a post-doc at MIT, there was a huge discussion among AI researchers, with people like Marvin Minsky on one side saying that computers would eventually become smarter than humans, and the majority on the other side insisting that this would never be possible. Fast-forward 25 years, and there is no question that Marvin Minsky was right.

Researchers today envision one of three possible scenarios: in the first, least fearsome one, the human tells the machine what to do; in the second, we have true human-machine symbiosis, with computer and human being equal partners on a task; in the third, the machine tells the human what to do. Looking at the current stock market fluctuations, it unfortunately seems we have already firmly reached the third scenario, with computers taking over. When, after multiyear growth and a record high of the Dow on January 26, 2018, the Dow lost 1,175 points in a single day a few days later on February 5, it was the biggest point loss in its history, all thanks to automated computer trading. A large part of these fluctuations originated in algorithms tied to exchange-traded index funds (ETFs), which are responsible for up to 38% of all trading. While the impact of these fluctuations can be enormous for individual investors and for companies tied to these trades, there are no worries for the firms trading in these ETFs, as their robots do all the trading automatically. According to one operator of such an investment firm, “if the market goes down 4 percent, I don’t even care. That is the beauty of systems. Once you build it, there is literally nothing left to do.”

Humans might be tearing their hair out – the psychological fear index VIX rose to multiyear highs at the beginning of February 2018 – while robots calmly keep doing their work. This means it is no longer the shrewd investor who beats the market, but computers equipped with the latest artificial intelligence algorithms.

In earlier research studying crowd behavior on Twitter, we observed a similar pattern, with Twitter bots and humans interacting in a tightly woven dialogue of tweets and retweets, where it becomes nearly impossible to distinguish between humans and robots. More research on untangling the madness of the crowd from the wisdom of the swarm is sorely needed, knowing that both the wisdom and the madness today come not only from humans, but increasingly from the interaction between real human brains and AI algorithms simulating and beating the human brain.
I therefore propose the following (open-ended) research agenda:
  1.  Study the automated robot-controlled feedback loop of the predictive capabilities of online social media on stock prices. This will help develop an algorithmic framework capable of dealing with the impact of artificial intelligence on collective (human) intelligence.
  2. Study the evolution of digital tribes: the automatic filtering done by social media companies to preselect the content users get to see on their platforms uses techniques similar to those in (1). As this preselection is done by automated algorithms in combination with human preferences, it is often argued that it creates information bubbles in which individuals only choose to see content that confirms their world view and no content that opposes it. This leads to the formation of tribes. For instance, we might track the fragmentation of humans co-located in nation states into different digital tribes, such as patriots (fatherlanders), technocrats (nerds), environmentalists (treehuggers), and spiritualists.
  3. Study the dissemination and evolution of fake news, which reflect the alternative realities of different information bubbles, viewed through the lens of the different tribes.
  4. Study human emotions through sensing of "honest signals" through smart sensors. To enable computers to collaborate with humans on a human-computer symbiotic level, computers will need to get an understanding of the feedback loop between human emotions and computer algorithms.  The goal is to develop alternative means for measuring individual emotions directly from the source – humans – using body sensing systems such as smartwatches and smartphones to track for instance individual happiness, fear and stress to get a baseline system for calibration with online social media.
This list is just a start – your ideas are most welcome!