Monday, October 07, 2013

Coolhunting the German Elections

In preparation for the German parliament (“Bundestag”) elections, I analyzed the social media landscape using Condor and tried to make educated predictions on the election results. In this blog post I will outline the methods and concepts used during Coolhunting.

Political Landscape in Germany

In total, 34 political parties ran for parliament. In my opinion, seven of them had a fair chance of clearing the 5% threshold, which keeps parties with a low number of votes from being represented in the German parliament.


Elections offer a wide range of interesting Coolhunting topics. First and foremost: prediction. Looking at the political landscape and the different social media policies of each party, it is quite obvious that one cannot predict the detailed outcome of an election from social media alone. To give a telling example, the German Pirates – a rather small party consisting mainly of tech-savvy people who are young relative to members of other parties – accounted for more than 40% of all hashtag mentions during the previous election in 2009, whereas larger parties such as the SPD only reached around 18% (1).
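For illustration, a hashtag share like the 40% figure above can be computed by counting hashtag occurrences across tweet texts and normalizing. The snippet below is a minimal sketch; the sample tweets are invented, not data from the actual fetch.

```python
import re
from collections import Counter

def hashtag_counts(texts):
    """Count hashtag occurrences (case-insensitive) across tweet texts."""
    counts = Counter()
    for text in texts:
        counts.update(tag.lower() for tag in re.findall(r"#(\w+)", text))
    return counts

# Invented sample tweets for illustration only.
tweets = [
    "#Piraten fordern Transparenz",
    "Debatte bei der #SPD zur #Piraten-Frage",
    "#CDU Wahlkampf",
]
counts = hashtag_counts(tweets)
share = 100 * counts["piraten"] / sum(counts.values())
print(f"Piraten share: {share:.0f}%")  # 2 of 4 hashtag mentions -> 50%
```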


When comparing the hashtag mentions of the different parties, it becomes obvious that predicting the whole election using only social media data is not feasible. I therefore decided to focus on the proportion of votes between two parties that are comparable in social media policy and profile. For this reason I took a closer look at the SPD and CDU/CSU, which have both been continuously represented in parliament since the establishment of the Federal Republic of Germany in 1949 and have comparable social media policies.


For Coolhunting I used an Amazon EC2 instance running two Condor 2.6 installations with distinct Twitter accounts: one fetching tweets for “SPD” and one fetching tweets for “CDU”. Fetching started 9/19 around 10pm and ended 9/22 12pm EDT, catching around 5,000 tweets split equally between the two parties. The algorithm fetched only German-language tweets that were geocoded to Germany and had been retweeted, which kept the number of collected tweets low but the quality high.
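The filter described above can be sketched as a simple predicate over collected tweets. The flat field names used here ("lang", "country", "retweet_count") are assumptions for illustration only and are not Condor's actual data model.

```python
def keep_tweet(tweet):
    """Keep only German-language tweets geocoded to Germany
    that have been retweeted at least once."""
    return (tweet.get("lang") == "de"
            and tweet.get("country") == "DE"
            and tweet.get("retweet_count", 0) > 0)

# Invented sample records, one per filter criterion.
sample = [
    {"lang": "de", "country": "DE", "retweet_count": 3},  # kept
    {"lang": "en", "country": "DE", "retweet_count": 5},  # dropped: not German
    {"lang": "de", "country": "DE", "retweet_count": 0},  # dropped: no retweets
    {"lang": "de", "country": "AT", "retweet_count": 2},  # dropped: not in Germany
]
filtered = [t for t in sample if keep_tweet(t)]
print(len(filtered))  # 1
```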

The figure above shows the fetched Twitter network and the betweenness centrality for both search terms. Based on betweenness centrality, the ratio of potential votes between SPD : CDU is 34% : 66%.
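A ratio of this kind can in principle be derived by summing each network's betweenness centrality scores and normalizing the two totals. The sketch below uses Brandes' algorithm as a stand-in for Condor's internal computation; the two toy graphs and their node names are invented for illustration, not the fetched Twitter data.

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted, undirected
    graph given as {node: set_of_neighbors}."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = dict.fromkeys(adj, 0); sigma[s] = 1
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS from source s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]  # count shortest paths through v
                    pred[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}  # undirected: halve

# Invented toy retweet networks, one per search term.
spd_net = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
cdu_net = {"u": {"v", "w"}, "v": {"u", "w"}, "w": {"u", "v", "x"}, "x": {"w"}}

spd_total = sum(betweenness(spd_net).values())
cdu_total = sum(betweenness(cdu_net).values())
share = 100 * spd_total / (spd_total + cdu_total)
print(f"SPD : CDU = {share:.0f}% : {100 - share:.0f}%")
```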


Wikipedia fetching was done right after the election had finished and reflects a potential vote ratio for SPD : CDU of 39.24% : 60.76%.


For fetching web content I used Google CSE right after the election had finished, leading to a predicted vote ratio for SPD : CDU of 41.06% : 58.94%.


When comparing the election results to other predictions by German newspapers, a representative poll done by the newspaper “Zeit” one week before the election turned out to be the most accurate. A simple hashtag count for #SPD and #CDU done by the “Bild” newspaper, by contrast, failed completely.

The best Coolhunting accuracy was achieved using Wikipedia, suggesting that the swarm is more representative than the crowd in this case. However, Twitter proved to be the best indicator of current mood swings, as the system is far more reactive than Wikipedia or a Google search.
