Note: This article is the last in a 3-part series that outlines technology solutions to the big challenges that traditional media companies (particularly those in Asia) currently face. If you have not read the first two parts yet, do take a look here and here.
May 8, 2016
In the last two posts, I talked about how media houses can drive internal cultural and technological changes to adapt to the web. This post is about how companies can and should collect data on how readers use their site and social media, and how they can use that information to drive content strategy and personalize offerings for users.
Google Analytics (GA) is used by practically all media houses around the world, both for editorial and advertising purposes. While GA is a great tool for getting a high-level overview of how your content is performing, it lacks features that are essential for media companies (and tech companies in general), such as access to granular, event-level data and engagement metrics like scroll depth.
GA’s shortcomings for editorial have created an opportunity for other players, and Parse.ly and Chartbeat have filled the void effectively. These companies give users a comprehensive, visual overview of content performance on their website. Parse.ly also lets you access some granular data, although it does not measure scroll depth. It is an effective, albeit somewhat expensive, option if you lack in-house data engineering and infrastructure capabilities.
(Note: an earlier version of this post claimed that Parse.ly does not allow access to raw data. This has since been rectified.)
Media companies are not traditionally known for high-end tech, but companies like BuzzFeed and the New York Times are increasingly changing that perception. Both companies store and analyze their own data as well as third-party data, and use it to understand their audience and improve their content strategy.
BuzzFeed’s POUND (Process for Optimizing and Understanding Network Diffusion), for instance, is the kind of system a company can build only if it owns its data collection and analysis pipeline.
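The core idea behind POUND is that each share of a URL gets its own unique identifier, so the path content takes from person to person can be reconstructed as a tree rather than a flat share count. A minimal sketch of that idea (the function and class names here are illustrative, not BuzzFeed's actual implementation):

```python
import uuid
from collections import defaultdict

def tag_share(url, parent_token=None):
    """Attach a fresh token to a shared URL so every share event is distinguishable."""
    token = uuid.uuid4().hex[:8]
    return f"{url}?share={token}", token

class ShareTree:
    """Record parent -> child share events to reconstruct a diffusion tree."""
    def __init__(self):
        self.children = defaultdict(list)

    def record(self, parent_token, child_token):
        # child_token was generated when someone reshared a URL
        # that carried parent_token.
        self.children[parent_token].append(child_token)

    def subtree_size(self, token):
        """Total downstream shares reachable from this share event."""
        return sum(1 + self.subtree_size(c) for c in self.children[token])
```

With this structure, a single share that leads to two reshares, one of which is reshared again, has a subtree of size three, which is the kind of cascade-level insight aggregate analytics tools cannot provide.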
With this in mind, we have built our own data collection and social listening capabilities at The Broadline. This has helped us understand how our users behave on the site, and what they talk about online.
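In-house data collection of this kind boils down to recording raw behaviour events and aggregating them per article. A simplified, in-memory sketch of such a collector (the class and field names are hypothetical, not The Broadline's actual system):

```python
from collections import defaultdict

class EventCollector:
    """Collect raw reader-behaviour events and summarize engagement per article."""
    def __init__(self):
        self.events = []

    def track(self, user_id, article_id, event_type, value=None):
        # event_type might be "pageview" or "scroll";
        # for scroll events, value is the depth reached (0.0 to 1.0).
        self.events.append({"user": user_id, "article": article_id,
                            "type": event_type, "value": value})

    def engagement_summary(self):
        """Per-article pageview counts and the deepest scroll observed."""
        summary = defaultdict(lambda: {"pageviews": 0, "max_scroll": 0.0})
        for e in self.events:
            s = summary[e["article"]]
            if e["type"] == "pageview":
                s["pageviews"] += 1
            elif e["type"] == "scroll":
                s["max_scroll"] = max(s["max_scroll"], e["value"])
        return dict(summary)
```

A production pipeline would write events to durable storage and aggregate them in batch or streaming jobs, but the shape of the data is the same: raw events in, per-article engagement out.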
We scrape Facebook and Google Trends (along with articles on popular websites) every 30 minutes to find out what people are sharing and searching for. This helps content creators decide what to publish in the short term, and reveals what kinds of content tend to persist over time.
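The distinction between short-term spikes and persistent topics falls out naturally from repeated polling: a topic that appears in many consecutive snapshots persists, while one that appears once is a spike. A sketch of that logic, with the source fetchers stubbed out since the real APIs require credentials (all names here are illustrative):

```python
from collections import Counter

def poll_sources(fetchers):
    """One polling pass: gather trending topics from every source
    (e.g. Facebook shares, Google Trends queries, popular-site headlines)."""
    topics = set()
    for fetch in fetchers:
        topics.update(fetch())
    return topics

def persistence(history):
    """Given one topic set per 30-minute poll, count how many polls
    each topic appeared in. High counts mean the topic persists;
    a count of 1 means a short-lived spike."""
    counts = Counter()
    for snapshot in history:
        counts.update(snapshot)
    return counts
```

In practice each fetcher would call a real API or scraper on a schedule, and the persistence counts would feed directly into the short-term versus evergreen content decisions described above.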