Big Data

New social media tool strives to balance algorithms with human editors

Interview with Visual Revenue's Dennis Mortensen

Any community editor knows that there’s an art to the perfect tweet: one composed so intriguingly and pushed out at just the right time that your followers go wild with clicks, retweets and comments. To complement this art, several new platforms have sprung up to provide hard data on what’s most and least effective.

One of the newest on the market is from Visual Revenue, a company that already provides an analytics solution that helps publishers determine which stories to feature on their websites’ homepages, where on the page (eg, center of the page with a graphic, below the “scroll” as a bullet point) and for how long.

I spoke with Dennis Mortensen, chief executive of Visual Revenue (and Lean Back 2.0 digital disruptor), about how its “social editorial suite” balances algorithms and human editors.

Why did you launch the new social tool?

We set out with this bold idea of becoming the Bloomberg terminal of the newsroom. In that mission, we don’t think of the homepage or social media or email as being necessarily more important than the others. If you’re The Economist or the Atlantic or the Boston Globe, the most important channel today is the homepage, because you have a brand so strong that readers will just type your name directly into the browser.

However, if you want to grow your audience, you need to expose your content to new readers, and often. A lot of publishers will go to Facebook or other social media channels to achieve that. But it’s very difficult to measure. You can count tweets and favorites and likes and other vanity metrics. And that’s nifty. But if I’m the social media editor of the Atlantic and I put out a piece of content at 1pm, what are the expectations for that content at that specific time, given what everyone else is talking about? We can tell the editor that his content has been successful, and we can also tell him that for some reason it didn’t resonate, so he might want to try a new headline, push it out at a different time or add an image to it.

Can you talk a little more about the kind of feedback the social tool gives editors? How does it help them?

You and I have personal social accounts. If you post something to Twitter right now, you’ll see a number of people retweet it; some will favorite it; some will even comment on it. That’s all good and fine. But how do you know if it’s really been successful? What is the threshold? Apart from the extremes, when absolutely nothing happens or when you witness success you’ve never seen before, anything in between those two scenarios you just have to operate on a feeling. We want to go above and beyond that. So if the Atlantic sends out a piece of content, we’ll expect at least 450 people to click on it over the next 60 minutes. If that’s not the case, it wasn’t successful. We want to be that specific.

And we don’t just try to maximize clicks at all costs. We make recommendations, but they need to fit within an editorial framework. Editors might tell us how many stories from one category should be pushed out, what stories shouldn’t be pushed out, how many photo galleries can be sent in a given day, etc. It’s the same editorial structure we’ve put in place for our homepage editor, and we’ve adapted it for the social channel.


How do you maintain that balance between humans and algorithms when it comes to social media?

Some people to this day have this idea that if we just become good enough, if we get better science or improve the tech, then we could someday replace the human editor. But everything we build is built with the idea that we’re here to empower the editor, not take his job.

We include tools and features that let editors put their opinions in. We list our recommendations in such a way that an editor can go “no thank you” to one recommendation, then “yes,” “yes,” “yes,” “no” to the others. Just by clicking yes or no, we immediately recalibrate to new settings. We’ll take those four stories on the banking crisis in Spain that you didn’t want to push out, remove them and come back with other suggestions. The editors ultimately decide what gets pushed out.

It’s naive to believe that you can fully automate the Facebook or Twitter stream of publications like The Economist, the Atlantic or the New Yorker. The whole reason you read these publications is that they’re not automated. That doesn’t mean publications can’t be data-driven. They should be. But they should use that data to their advantage.

Beyond retweets and likes, how does the new social tool measure success?

We don’t want to fall into the trap of creating yet another analytics application. Editors are going to throw up if they see another pie chart. We’ve come up with a single metric by which we measure success: the ability of that social push on Twitter or Facebook to drive traffic to the story itself. Retweets, likes, favorites, comments and mentions are all fantastic proxies for whether anybody is likely to click through to the story. But success is measured on the other side of the click. It’s one metric.

Just to clarify, your only metric of success is if readers actually go on to the publisher’s website?

Yes. Or it could be defined as wherever the publisher wants readers to go. All other metrics for measuring audience interaction become secondary to that one for the editor.

How does the new social tool integrate with your front page management program?

It fully integrates. In social, success is sometimes defined by the headline you choose to use for that push. It’s the only thing people see. If you scan 100 tweets over your lunch break, you inevitably have to skip some. But you might fall in love with one because of its headline. We already offer instant headline testing, so we pull those tested headlines into the social suite. We give editors the opportunity to use data they already have in a new way.

And the headline testing is always surprisingly effective. People assume that you’ll squeeze an additional 5-10 percent of click-throughs out of a new headline for the same story. But there’s actually a dramatic difference: usually a 40 to 45 percent increase. We would like to think that people should be able to see the story for what it is at the very beginning. But the fact of the matter is that there’s so much information out there, you need to select what you’re going to read, and one way is through the headline.