
Entries in human analysis (11)

Friday
Nov 15, 2013

The Value of the Measurement Enforcer

By John Scappini, Media Analyst

Hockey is a game deep-rooted in tradition, and as such, “advanced” statistics are still a ways away from reaching a tipping point with fans. There was no Moneyball moment for hockey, just a slow and steady drumbeat by passionate fans who espoused foreign-sounding things like Corsi and Fenwick over traditional attributes like grit, toughness, and heart. This season, the prime target of the advanced hockey stats crowd, for lack of a better term, is the Toronto Maple Leafs, who have stacked their roster with traditional, old-school-type players who are just as likely to drop the mitts and get into a fight as they are to score a goal. As you might imagine, the advanced statistics folks weren’t too high on the Leafs at the beginning of the season, predicting a rather dire year for them.

Photo by the Connecticut State Library via Flickr

And yet here we are, with a quarter of the season played, and the Leafs are in third place in their conference, with the sixth-most wins in the league, and 23 total points. That’s better than the Pittsburgh Penguins and level with both the Boston Bruins and the Los Angeles Kings—advanced stats darlings. So, what gives?

In reading the debate over the Maple Leafs, I couldn’t help but think of media measurement done through automated, computerized aggregates. Sure, you can collect the data—the wins, the losses, the hits, the saves—but without someone there to explain the story or the narrative to you, you’re getting an incomplete picture. A deeper look at the Maple Leafs reveals that they have atrocious puck possession numbers, their shooting percentage is down from last year, and their lofty early-season penalty killing statistics are already regressing toward league averages. In other words, they have been incredibly lucky so far this year and that luck is due to run out soon!

Recently, a CARMA client experienced a huge surge in social media coverage that barely moved the needle in terms of favorability. The incomplete, automated/aggregated picture would tell the client only that volume surged with little change in favorability: there was a lot of moderately favorable coverage this month. So, what gives? A deeper look showed the client that the coverage was, in fact, as favorable as it could be: the surge resulted from a large number of retweets about the client’s participation in a charity event, and the moderately favorable coverage had been constrained simply by Twitter’s 140-character limit.
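
To make the aggregation point concrete, here is a minimal sketch with entirely hypothetical data and an illustrative -5 to +5 favorability scale (not CARMA’s methodology): the monthly average barely moves, but grouping the new items by type shows the surge is almost all retweets of one favorable item.

```python
from collections import Counter
from statistics import mean

# Hypothetical coverage items on an illustrative -5..+5 favorability scale.
last_month = [
    {"kind": "news", "fav": 2},
    {"kind": "tweet", "fav": 1},
    {"kind": "blog", "fav": 2},
    {"kind": "tweet", "fav": 1},
]
# This month adds a burst of retweets about the charity event, each capped
# at "moderately favorable" by the 140-character format.
this_month = last_month + [{"kind": "retweet", "fav": 2}] * 40

print(mean(item["fav"] for item in last_month))      # 1.5
print(mean(item["fav"] for item in this_month))      # ~1.95 -- "barely moved"
print(Counter(item["kind"] for item in this_month))  # retweets dominate the surge
```

The averages alone suggest nothing changed; only the breakdown, plus a human reading of what was actually retweeted, reveals that the surge was as favorable as the format allowed.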

Of course, the bottom line is important. You have to win enough games to make the playoffs, no matter how you win them. But in our measurement world, you also have to dig deeper than the standings or a box score and provide clients with data-driven insights that go beyond wins and losses. Those details matter, too.

Wednesday
Nov 14, 2012

Being Smart About Intelligence

By Adam Gallagher, Media Analyst

The intelligence community hopes to start using today’s tweets to predict tomorrow’s threats. A recent NPR story drew attention to the attempts of private companies and government agencies to analyze social media content, and what happens when the analysis relies too heavily on automation.

The intelligence field is adapting to a world changed by social media, a world in which Big Data is even bigger thanks to the user-generated content social media enables. In this ever-growing mountain of information, intelligence leaders see an opportunity to read deeper into patterns that could reveal life-saving information. By analyzing the trends in the data, some believe events like the Arab Spring can be anticipated. Indeed, the NPR story reported the 2011 Yemeni Revolution was foreseen through social media analysis.

However, the analysis remains imperfect. One of the problems encountered by the intelligence community is one common to all media analysts: the algorithms designed to sift through the research lack the sophistication to determine the context, nuance and depth of the data. Human researchers, like the ones we use here at CARMA, can solve this problem, using their cognitive abilities to discern exactly what a tweet, post or article is conveying. Methods that exclude a human element risk losing the accurate insights social media analysis can bring.

Photo by Michael Baird on Flickr

For instance, the NPR article described how one company predicted that the attack on the U.S. consulate in Libya would be traced back to a group in Yemen. We now know the company was wrong, although its data was sound. The problem was that the Yemeni group shares its name with a group in Libya, confusing the algorithm, which lumped the two groups together.
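
As a rough illustration of that failure mode, with placeholder names rather than the actual groups, consider what happens when mentions are aggregated by group name alone versus by name and country:

```python
from collections import Counter

# Hypothetical mentions; "Example Brigade" is a placeholder, not the real group.
mentions = [
    {"group": "Example Brigade", "country": "Yemen"},
    {"group": "Example Brigade", "country": "Libya"},
    {"group": "Example Brigade", "country": "Yemen"},
]

by_name = Counter(m["group"] for m in mentions)
by_name_and_country = Counter((m["group"], m["country"]) for m in mentions)

print(by_name)              # one merged "entity" credited with all three mentions
print(by_name_and_country)  # two distinct groups, as a human reader would see them
```

A human analyst reading the surrounding text keys on the country and other context; a name-matching algorithm, left to itself, merges the two.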

This instance is only the most obvious example of the limits of automated research. Other limits include the inability to recognize tone, sarcasm or metaphor. And as anyone who has ever read a heated exchange on social media knows, sarcasm is prevalent, tone is nearly omnipresent, and a cap of 140 characters leaves a lot more said than written. Analyzing the content of tweets strictly by what is written, rather than by what is meant, can lead to highly inaccurate results.

These limitations are the reason we believe so strongly in human-based research. A human researcher is able to distinguish the two groups based on the context of the tweet; the president of the company in NPR’s article admitted as much and assured the reporter that a human element would be involved in the process. By using human researchers to evaluate articles and sophisticated technology to aid in efficiency and quality, CARMA is able to provide accurate insights into important data. It’s a new and exciting time for media analysts, and human-based evaluation proves itself, time and time again, to be a valuable means of ensuring high-quality analysis.

Wednesday
Oct 31, 2012

The Perfect Marriage of Media Monitoring, Measurement and Analysis

By Joan Kilanowski, Consultant to CARMA

In an ideal world, media monitoring and measurement with an automated dashboard would be in a perfect marriage with human analysis. It isn’t always easy to find that perfect match, and it may take meeting with many prospective partners. But once the match is found, the partnership will assist you and your team in countless ways. And, much like a real-life marriage, it will require constant nurturing and care to keep it strong and relevant.

Automated dashboards and their features can be misunderstood, misused or not used to their full capacity. When they are not used properly, dashboards can be underwhelming in their performance or overestimated in their abilities. User and supplier misunderstandings and misinformation often undermine desired results. But if the data is robust and updated frequently, the results are strong and trusted, and if the metrics are accurate and trustworthy, the dashboard can help you see the impact of the stories. Automated dashboards can give you a big-picture view and a day-to-day pulse of how your company’s or a client’s reputation is perceived, a gauge on important issues, and visibility into competitors’ initiatives. Good media intelligence gives you great power and insight, so an accurate, frequently updated dashboard will let you respond to the information it provides with fast, targeted action.

On the other side of this measurement partnership, human analysts excel at collecting truly pertinent data and offering keener insights and a deeper understanding of how the coverage relates to the performance indicators that matter to you. These trained professionals help you grasp the importance of the dashboard outputs and see what’s under the surface. How does the coverage unearthed by the dashboard correlate with your sales? How is your brand perceived by your target audience? Which messages are not being received clearly? What negative issues are brewing and need to be watched? Who are the influencers talking about your messages and brand? Human analysts can take the appropriate data and pare it down to what is most important to your initiatives.
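
For example, the first question above, how coverage correlates with sales, can be answered at its simplest with a correlation over monthly figures. The numbers below are invented purely for illustration, and a human analyst still has to judge whether the relationship is meaningful rather than coincidental.

```python
import numpy as np

# Hypothetical monthly series: favorable stories captured by the dashboard
# and units sold in the same months.
favorable_items = np.array([12, 18, 25, 22, 30, 41])
unit_sales = np.array([900, 960, 1100, 1050, 1210, 1400])

r = np.corrcoef(favorable_items, unit_sales)[0, 1]
print(f"Pearson r = {r:.2f}")  # high here by construction; real data is far noisier
```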

An automated dashboard that performs the functions that are needed for your daily activities, plus an analysis company that gives deep and meaningful insights about your media coverage, will create a long-lasting and perfect marriage. 

For further information on media monitoring and dashboards, check out a few of these resources, including these buyers’ guides to PR monitoring, this extensive guide to social media monitoring tools, and this guide to using an analytics dashboard.


Wednesday
Apr 11, 2012

Is A Picture Worth A Thousand Words?

By Elizabeth Ballard, Director

Image Credit: DooFi

An image can convey a lot of information in an instant. Visual elements in media coverage are increasingly important as sharing, re-purposing, and re-posting images becomes easier. Tracking and quantifying the value of brand visibility through images appearing in media coverage is an important part of evaluating how a product or brand stands up to its competitors.

A number of factors must be examined to understand and measure the impact of images and graphics appearing in media coverage. Among the things to consider are the following (a rough scoring sketch follows the list):

1) Is the image prominently displayed? Is it above the fold? On the landing page? How many clicks does it take to get to the image?

2) Can you see a logo? A partial logo?

3) How big is the image relative to the article? How big is it relative to other images appearing in that publication?

4) Is the image a joke? Is it sarcastic?  Is it an ad campaign turned meme?

5) Is there a caption?  How does it affect the meaning of the visual?

6) Is the visual negative?

7) Is your logo/product/brand displayed alone or is it grouped with competitors?
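
A human analyst weighs these questions together. If you wanted to record those judgments consistently, a simple rubric along the following lines could help; the fields, weights, and scale here are illustrative assumptions, not a CARMA methodology, and the judgment calls (is it satirical? is the caption negative?) still require a person.

```python
from dataclasses import dataclass

@dataclass
class ImageAppearance:
    above_the_fold: bool          # 1) prominence
    clicks_to_reach: int          # 1) how buried the image is
    logo_visible: bool            # 2) full or partial logo shown
    relative_size: float          # 3) image area vs. the article, 0..1
    satirical: bool               # 4) joke, sarcasm, meme
    caption_negative: bool        # 5)/6) caption or visual casts the brand negatively
    shown_with_competitors: bool  # 7) grouped with rival brands

def visibility_score(img: ImageAppearance) -> float:
    """Toy brand-visibility score; the weights are arbitrary, for illustration."""
    score = 2.0 if img.above_the_fold else 0.0
    score -= 0.5 * img.clicks_to_reach
    score += 1.5 if img.logo_visible else 0.0
    score += 2.0 * img.relative_size
    if img.satirical or img.caption_negative:
        score -= 3.0  # a human still has to make this call in the first place
    if img.shown_with_competitors:
        score -= 1.0
    return score

print(visibility_score(ImageAppearance(True, 0, True, 0.4, False, False, False)))  # 4.3
```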

An automated media analysis tool probably could tell you whether an image appeared, where it appeared, and maybe even whether a logo is displayed. But the value of that image will be lost on a machine, as most, if not all, automated media analysis tools are able to examine only the text of articles or the transcripts of broadcast pieces when assessing sentiment.

In contrast, human analysts are able to review the whole article or watch the actual broadcast piece, and can evaluate the significance and impact of the visual elements when assessing sentiment. The ability to take a deeper look at images, place them in the intended context, and assess their significance by using some of the criteria above is vital and requires a level of intelligence that machines do not have. This represents just one more reason that investing in human-based research is worthwhile.

Tuesday
Mar 20, 2012

The Fundamental Theorem of Favorability Analysis

By Chris Scully, VP of Research at CARMA International

In 1987, professional poker player David Sklansky published The Theory of Poker outlining his thoughts on the underlying theories and concepts for winning at all the variations of the card game. In this book, he unveiled the Fundamental Theorem of Poker.

Photo credit: Viri G

Simply put, the theorem states that anytime you're playing poker and your opponents do something (such as bet, call, raise, or fold) that they wouldn't do if they knew all your cards, then you win money. Also, anytime you do something (again, such as bet, call, raise, or fold) that you wouldn't do if you knew all your opponents' cards, then you lose money.

Using this as an inspiration, I'd like to offer what I call the Fundamental Theorem of Favorability Analysis: Anytime a story says something about a company or organization that the entity would want the story to say, then that discussion is favorable. Anytime a story says something about a company or organization that the entity would not want the story to say, then that discussion is unfavorable.

For this theorem, I define "about the company or organization" broadly, such that it includes discussion of that entity's products and services, organizational mission or goals, management, financial performance, standing as an employer or corporate citizen, etc. I also use the word "story" broadly to incorporate news reports, opinion pieces, and all types of social media hits (blogs, tweets, Facebook status updates, etc.). Lastly, I define "that discussion" as the part of the story saying that certain something the entity would or would not like the story to say. This discussion could be as brief as a word or two or as expansive as several paragraphs or more.

Incorporating this theorem into a favorability assessment methodology is relatively easy to do when using human analysts. I think many people – even those outside the PR industry – already understand this concept intuitively, and it's easy to formalize it by establishing guidelines that enable coders to recognize instances when the theorem should be applied. 

In contrast, I think it's a pretty difficult task for automated offerings to incorporate the Fundamental Theorem of Favorability Analysis into their sentiment algorithms. Foremost, software doesn't have any intuition, which means that every specific sentiment rule a software offering follows must be programmed. Also, the ways in which a story can convey information that invokes the Fundamental Theorem of Favorability Analysis are limitless, and thus, no programmer could ever devise software that accounts for all possible favorable and unfavorable discussions.

I believe the practical impossibility of incorporating the Fundamental Theorem of Favorability Analysis into media analysis software is a main cause of The Neutral Problem that is so prevalent in automated offerings.  However, since many programmers of media analysis software don't come from a PR background, it's possible that some don't quite grasp how truly vital this theorem is to assessing media coverage accurately, and thus, they don't devote enough of their efforts to accounting for the theorem in their programming.
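
To illustrate what The Neutral Problem looks like in practice, here is a deliberately naive lexicon-based scorer with a toy word list (not any vendor's actual algorithm). It finds no opinion words in a statement that, under the theorem, is plainly unfavorable, since no company would want the story to say it, and so it defaults to neutral:

```python
# Toy sentiment lexicon, for illustration only.
POSITIVE = {"great", "award", "record", "praised", "innovative"}
NEGATIVE = {"terrible", "scandal", "failure", "criticized", "lawsuit"}

def lexicon_sentiment(text: str) -> str:
    """Score a snippet purely by counting lexicon words in its text."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "favorable"
    if score < 0:
        return "unfavorable"
    return "neutral"

print(lexicon_sentiment("The company is recalling two million vehicles"))
# -> "neutral", even though the theorem classifies it as unfavorable:
#    no automaker would want the story to say that.
```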

Regardless of the causes, I think it's clear that until automated offerings can incorporate the Fundamental Theorem of Favorability Analysis into their algorithms, human-based media analysis is always going to produce more accurate favorability assessments.