As noted in the first chapter of our previous Trends report, artificial intelligence (AI) is now concentrated in the hands of a small group of technology players who control the user discovery path from beginning to end. This level of control by a handful of major corporations is a cause for concern. Fortunately, filter bubbles and advances in recommendation and predictive technologies can be used to your advantage if you know how.
“Technology giants, not the government, are building the artificially intelligent future. And unless the government vastly increases how much it spends on research into such technologies, it is the corporations that will decide how to deploy them.”
– Farhad Manjoo, technology columnist, The New York Times, 2017
As things currently stand, artificial intelligence is more a promise for the future than a fine-tuned cluster of technologies. That’s true for the self-driving car and for the virtual assistant that will someday buy your airplane tickets. True, too, for the recommendation engine with its can’t-miss Saturday night movie list. These technologies are still far from perfect, and the transitional zone we’re in now is a great opportunity for learning.
So how can we better understand the algorithms that track our daily digital doings, analyze our tastes and preferences, predict our needs, and offer us the products and content most likely to grab our attention?
In this first chapter of our Trends Report: 2017 Mid-Year Update, we shed light on the limits as well as the opportunities of resorting to smart algorithms today. We also shed light on certain AI advancements that impact how media content is marketed and promoted.
Filter bubbles (as we defined them in our July 2015 Trends Report) allow you to reach audiences that are already interested in the types of content you produce, at a lower cost. A fruit tree is a good analogy here: with a filter bubble, it costs you virtually nothing to reach the lowest-hanging fruit. For example, selling a horror film to fans of the horror genre can be very cost-effective through social networks such as Facebook.
Filter bubbles also make it easy to reach your first audience, one that will build a solid foundation for your content, simplifying the acquisition of new fans and even new investments. As Stitch Media producer Evan Jones puts it: “If you can develop that filtered audience, the latter audience that comes afterward sees the number behind the product and the social proof carries it forward.”
In other words, the filter bubble can be extremely effective in narrowcasting to reach niche audiences, build a fan culture, and develop communities. It’s a different story, however, when your objective is to be more proactive with the goal of reaching more people, a wider audience, or simply new audiences.
It’s definitely more expensive to promote content to an audience that has not already indicated an interest in the category your content belongs in. For example, promoting a documentary series or documentary film about climate change to those skeptical about the concept will cost much more than preaching to the converted, whose faith in the causes of climate change is rock solid.
You only have to go to the pages explaining how to win an ad bid on Facebook to see that the more relevant the content is to the user – that is, the more it matches their filter bubble – the greater your chances of winning the bid to post your message to that audience. Reaching a new audience of the uninitiated, on the other hand, will require a higher monetary bid to have any chance of winning.
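The interplay between bid and relevance can be sketched as a toy auction. This is a simplified, hypothetical model – real platforms such as Facebook combine the monetary bid with proprietary estimates of relevance and engagement – but it shows why a cheaper, well-matched ad can beat a more expensive, poorly-matched one:

```python
# Simplified, hypothetical model of a relevance-weighted ad auction.
# The scoring formula and the sample ads are illustrative assumptions,
# not the actual (proprietary) mechanics of any platform.

def total_value(bid_dollars, relevance_score):
    """Rank an ad by its monetary bid weighted by relevance (0.0 to 1.0)."""
    return bid_dollars * relevance_score

def run_auction(ads):
    """Return the ad with the highest combined value."""
    return max(ads, key=lambda ad: total_value(ad["bid"], ad["relevance"]))

ads = [
    # A horror film promoted to horror fans: low bid, high relevance.
    {"name": "horror_to_fans", "bid": 2.00, "relevance": 0.9},
    # A climate documentary promoted to skeptics: higher bid, low relevance.
    {"name": "doc_to_skeptics", "bid": 5.00, "relevance": 0.2},
]

winner = run_auction(ads)
print(winner["name"])  # prints "horror_to_fans": 2.00 * 0.9 beats 5.00 * 0.2
```

Under this model, the advertiser targeting outside the filter bubble would need to more than double their bid just to break even – which is exactly the cost dynamic described above.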
Seeing as it is now necessary to reach multiple audiences in their respective ‘bubbles’ before being able to attract them to your project, you should expect much greater complexity in your marketing activities, including the need to develop a multitude of different messages that will resonate with each targeted filter bubble.
The progress achieved in fields such as voice and image recognition could soon widen the possibilities for search engines. Google’s voice recognition system has hit a word accuracy rate of 95%, which confirms what many people already take as a given: in the short term, voice recognition technologies are the most promising segment of AI research.
According to Eric Scherer of Méta-Média, these technologies “can help us better index contents to make them easier to find and distribute them in a more relevant manner or even conduct searches within these text, photo or video objects. Following Microsoft and Google, IBM Watson is now proposing to the media a new service that analyzes video metadata in order to more efficiently target excerpts and clips.”
Existing recommendation engines face a problem similar to the filter bubble. Algorithms built on limited data – recent behaviour or past purchases, for example – create recommendation loops that can make users feel like their world is closing in on them. As more and more recommendation engines come to rely on it, AI can now analyze a wide range of online activities and behaviours to produce more comprehensive, holistic user profiles. These new possibilities are based on so-called predictive technologies, capable of providing users with relevant content at just the right moment and on the most appropriate screen.
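The recommendation loop described above can be demonstrated with a minimal sketch. The catalog and seed behaviour here are hypothetical, and the recommender is deliberately naive – it draws only on past clicks – so each recommendation reinforces the history it came from, and categories the user has never clicked can never surface:

```python
import random
from collections import Counter

# Minimal sketch of a recommendation feedback loop. A naive recommender
# that samples only from past clicks (limited data) compounds the initial
# preference: categories with no history get zero weight, forever.
random.seed(1)

catalog = ["horror", "comedy", "drama", "documentary"]
history = ["horror", "horror", "comedy"]  # hypothetical seed behaviour

for _ in range(50):
    counts = Counter(history)
    # Recommend in proportion to past clicks only.
    weights = [counts.get(category, 0) for category in catalog]
    pick = random.choices(catalog, weights=weights)[0]
    history.append(pick)  # the user clicks what is shown, closing the loop

print(Counter(history).most_common())  # drama and documentary never appear
```

A profile built from a wider range of signals – the more holistic approach described above – would assign every category at least some weight, giving unfamiliar content a chance to break the loop.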
Toward ‘affective’ computing
In the case of Facebook, the algorithmic strategies based on relevance or hypertargeting mentioned above (also used by Google AdWords and Snap Ads) seem at first sight profitable, judging by the conversion rates the platform is known for. According to Mary Meeker’s Internet Trends 2017 report, 26% of US consumers who click on a Facebook ad actually make a purchase.
At the same time, the use of ad-blocking software continues to grow. Canadians are the world’s second-heaviest users of ad-blockers on computers. The message to advertisers is clear: users do not appreciate the huge amounts of data being collected on them, especially for the sole purpose of sending more hypertargeted ads their way. The whole process makes them feel like their every move is being watched.
All the algorithms and automation promised by programmatic advertising are still very much under development. The good news is that growing user awareness, coupled with the need for trust and accountability, is beginning to exert overt pressure on the giants of the web. Recent examples include Facebook, which has made its moderation rules public, and Apple, which will add to its Safari browser a new Intelligent Tracking Prevention system that uses machine learning to keep websites from collecting browsing data, and will also block video autoplay.
Without going so far as to expect these giants to become fully transparent, there is nevertheless an opportunity for producers and advertisers to get closer to what users really want, rather than blindly following so-called intelligent technologies. To do this, we must learn to use these technologies wisely and creatively, and always with a healthy dose of human intelligence:
“In the era of hyper-personalization, the challenge for marketers in 2017 and beyond will be how to provide consumers with what they don’t know—with moments that break from the expected and familiar, that enable discovery, and that fuel imaginative thinking. The capability to facilitate these moments will be increasingly critical for brands to differentiate themselves in a world of algorithmically-driven sameness. Serendipity will be essential to creativity and discovery, but also—crucially for marketers—to build trust and authenticity.”
– John Watton (Adobe)
In the advertising sector, the use of AI and algorithms basically translates to an increased use of ‘programmatic’ advertising. But advertiser confidence in giants Google and Facebook, which dominate the digital advertising market, has been shaken in recent months in what is now known as the Adpocalypse.
In February, in response to advertiser concerns, both Google and Facebook agreed to audits of their marketing metrics, mere weeks after Procter & Gamble, the largest US advertiser, slammed digital platforms for their lack of transparency. In March, industry heavyweights globally (AT&T, Johnson & Johnson, PepsiCo, Wal-Mart, L’Oréal, Toyota, …) suspended digital advertising on Google’s YouTube over concerns that programmatic ads were appearing on channels that broadcast offensive videos. Analysts predict the boycott will cost Google $750 million US.
The Canadian impact doesn’t appear to be as far-reaching, but Canada’s ad industry stakeholders remain cautious about programmatic advertising overall: “Many still worry about fraud and brand safety when algorithms make media decisions,” writes Jeromy Lloyd of Brunico, which conducted a survey of Canadian marketers and agencies earlier this year.
Ad bidding: An online advertising strategy that consists of automating the allotment of advertising space to advertisers based on different criteria, such as advertisers’ monetary bids and advertising targets as well as, in the case of Facebook, users’ browsing histories and preferences.