Futurebook 2012: For the Love of Data
Last week was the Futurebook Conference 2012, run by The Bookseller – a must-attend for much of the publishing industry, and one that goes well beyond the book to examine how our rapidly changing industry has evolved over the past year and what our major concerns are right now. The programme was impressive, covering agents, international retailers, self-publishing and consumer insight, but for me the most telling session of the day was the first I attended: a panel on pricing strategies with Paul Rhodes from Orb Entertainment, Michael Tamblyn from Kobo, Eloy Sasot from HarperCollins, Rachel Willmer, founder of Luzme.com, and Orna Ross, self-published author and founder of The Alliance of Independent Authors.
I didn’t find the session telling because the assumptions we make about pricing are damaging to our business, or because the panel reached an ultimate conclusion about the ‘pricing sweet spot’ (that argument is such a behemoth). I found it telling because, in a panel that should have been built wholly on hypotheses, experimentation and conclusions backed up by observation, the panellists made no mention of case studies at all.
Rather than reopening the pricing debate, I’ll simply list three statements I heard on the day with which I vehemently disagree:
- A 99p (or 20p) ebook does more to devalue books than a free ebook
- Trade publishers do not understand the concept and value of dynamic pricing or free ebooks, and do not (and will not) employ them
- Giving away a book for free is a good way to make people buy that book later
All of these I would have been willing to concede had any of the panellists produced a consumer study, sales statistics for a specific book, or indeed any kind of concrete example to support them. Instead, the underlying assumption seemed to be: the way I buy and discover ebooks is probably the same as the way everyone else buys and discovers ebooks.
I genuinely don’t believe the statistics went unmentioned because people didn’t have them. Tamblyn proved later in the day that Kobo is examining readers’ behaviour in microscopic detail, and I have no doubt that Sasot’s brain is composed largely of numbers. Publishers, however, rarely share these figures with one another – perhaps we’re worried about others getting ahead of the game by using our failings as their pole vault, so we generalise at conferences and hope nobody looks too closely.
But a continuing theme of the day was that publishers should own their data sets and spend significant time analysing them. In the session on building a direct-to-consumer business, Louise Vinter from Random House spoke about social media channels and analytics, and the importance of recognising different target audiences within the user base of each platform.
Maybe this is the true advantage of the larger publishing houses, the Penguins and Hachettes of the industry: consumer data (captured through behaviour rather than surveys) that allows incredibly detailed targeting of direct-to-consumer messages, and learnings from price experimentation across varied retailers, both shared at group level to give every division the best possible chance of making informed choices.
My hope, I suppose, is that the silence at conferences when it comes to raw data is not indicative of a lack within the industry. Because, as was pointed out on the day, yes, our industry is driven by editorial instinct and creativity, but our business decisions need to be as informed as technically possible – you can be sure our competitors’ are. We will die at the hands of our assumptions.