Many news organizations report on the volume of social media activity surrounding specific brands and individuals. Early indications are we’ll be hearing a lot about this during Canada’s federal election.

Mention counts are important because they indicate how much interest there is in the election and in specific players in it. However, they are only one of many measures that reveal the election narrative. Political campaign teams, parties, analysts and pollsters are much more interested in sentiment and affinity. While sentiment and affinity can be measured, it's hard to do well, particularly in politics.

Some companies have created automated sentiment analysis software that analyzes the text of social media updates and assigns a sentiment of positive, neutral or negative. I've tested some of these tools as they apply to politics and found them to be about 50-60% accurate at best, and around 80% only under very controlled circumstances.
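To make the approach concrete, here's a minimal sketch of how such tools typically work: count positive and negative words and pick the majority label. The word lists and examples below are my own illustrations, not any vendor's actual lexicon or method.

```python
# Toy lexicon-based sentiment scorer: count positive and negative
# words and pick the majority label. Real products are more
# sophisticated, but a word-counting core like this is common.

POSITIVE_WORDS = {"help", "growth", "balanced", "respect", "strong"}
NEGATIVE_WORDS = {"failing", "negligent", "poverty", "poor", "weak"}

def lexicon_sentiment(text: str) -> str:
    # Lowercase, split on whitespace, strip surrounding punctuation.
    words = [w.strip(".,!?'\"") for w in text.lower().split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(lexicon_sentiment("A strong budget with balanced growth"))  # positive
print(lexicon_sentiment("The budget announcement is at 3 pm"))    # neutral
```

A scorer like this has no notion of irony, stance or speaker, which goes a long way toward explaining the accuracy numbers above.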

By the way, neutral content is content that simply states fact rather than expresses a sentiment or stance. It’s important to include neutral in the analysis since simply reporting on positive and negative suggests that content can only reveal that a person likes or dislikes a politician. Leaving it out skews the results.

The problem is that software is not good at understanding nuances of language such as irony and sarcasm. More importantly, software doesn't understand context or compound context (when several politicians and/or issues are mentioned in the same message), which is a particular problem in politics. From whose point of view are you assessing the content?
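The compound-context failure can be sketched in a few lines. This is a simplified illustration with an invented lexicon and invented handles, not how any specific product works: the pipeline scores the tweet once, then charges that single sentiment to every account mentioned.

```python
import re

# Sketch of the "compound context" failure mode: score the tweet
# once, then charge that single label to every @handle it contains.
# The lexicon and handles are invented for illustration.

NEGATIVE_WORDS = {"failing", "negligent", "weak"}

def naive_tally(tweet: str) -> dict[str, str]:
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    label = "negative" if words & NEGATIVE_WORDS else "neutral"
    # Every mentioned account receives the SAME label, regardless
    # of who the criticism is actually aimed at.
    return {handle: label for handle in re.findall(r"@\w+", tweet)}

print(naive_tally("Failing leadership on this file by @LeaderA @LeaderB @LeaderC"))
# {'@LeaderA': 'negative', '@LeaderB': 'negative', '@LeaderC': 'negative'}
```

All three hypothetical leaders take the same negative hit, even if the author meant to criticize only one and merely copied the others.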

The sentiment challenge

Let’s take a few sentiment challenges. The goal is to determine if the content of a tweet is negative, neutral or positive towards the mentioned politician.

By the way, I’ve picked examples from Twitter for this analysis since Facebook’s newsfeed and search algorithms make it difficult to track down relevant public content. While I do use Facebook a lot, Twitter is a much more open communications platform.

For each of the following five tweets, read the content and assign a single sentiment: positive, neutral or negative. Then, read the analysis, below.

  1. “Middle class Canadians need help and they need growth, but @pmharper’s budget offers neither.”
  2. “Strange that @ThomasMulcair is attending Flora MacDonald’s funeral today. Did he know her? Is @pmharper attending?”
  3. “Concern CBC leadership failing toALERT Cdns #ecocide law & negligent values by @pmharper @ThomasMulcair  @JustinTrudeau”
  4. “@JustinTrudeau asks why didn’t @pmharper  have a ‘more balanced plan’ when oil prices dropped?”
  5. “We can’t eliminate child poverty if the parents are poor.” – @ElizabethMay

Challenge #1

“Middle class Canadians need help and they need growth, but @pmharper’s budget offers neither.”

This tweet could go one of two ways. It could be read as a statement of fact (neutral) or, just as easily, as criticism of the Prime Minister (negative). The software went the easy route and called it neutral. If I told you this tweet was issued by Liberal candidate @RichardsonLipon, we’d know for sure that it’s negative.

Here’s the thing, though. That was made easy by three elements:

  • the message was clear (no irony or sarcasm)
  • the subject was singular (only one leader mentioned)
  • we dug deeper than the tweet to determine context (Liberal candidate)

Challenge #2

“Strange that @ThomasMulcair is attending Flora MacDonald’s funeral today. Did he know her? Is @pmharper attending?”

This is more complicated. We’re talking about two leaders and there’s no clear criticism, just questions. Are we criticising Thomas Mulcair for political opportunism? Or, praising him for respect? Are we trying to call out the Prime Minister? Or, asking a respectful question about him? Can it be both neutral and negative at the same time?

The software says it’s neutral, which means both leaders will be given a neutral mention for this tweet in the tally. That’s probably a fair assessment, since a quick glance at author @RealPolitikWest’s Twitter stream indicates fairly balanced content.

Challenge #3

“Concern CBC leadership failing toALERT Cdns #ecocide law & negligent values by @pmharper @ThomasMulcair  @JustinTrudeau”

The language is clearly negative. We know that much. But about whom? Three party leaders are mentioned in this tweet. Are they all part of the failing leadership?

The software thinks so, and the negative rating it applied to the tweet counts against all three leaders. It’s likely the criticism is directed at the Prime Minister, and the other leaders are simply being “copied” on the tweet.

Challenge #4

“@JustinTrudeau asks why didn’t @pmharper  have a ‘more balanced plan’ when oil prices dropped?”

Negative towards the Prime Minister? Team Trudeau and critics of the PM would certainly agree.

However, this tweet comes from CBC’s Hannah Thibedeau. With that context, we know this is straightforward reporting of fact.

Challenge #5

“We can’t eliminate child poverty if the parents are poor.” – @ElizabethMay

Any Elizabeth May supporter will tell you this is a positive tweet. A journalist would tell you it’s a neutral quoting of a statement Ms. May made. The software will tell you phrases like “child poverty” and “parents are poor” are negative. And that’s the rating Ms. May gets for this tweet by apparent Green Party supporter @kylejaymes.

Conclusion

Software is a long way from being able to analyze political content and apply sentiment ratings properly. Getting accurate, near-real-time sentiment data, even from a random sample, requires a lot of people working to code that data by hand, particularly during an election that is producing tens of thousands of tweets each day.
