This is the third in a series of three posts that further examine subjects covered in Discoverability: Toward a Common Frame of Reference. This third post deals with the latest developments in recommendation algorithms.
In the fall of 2016, TiVo, the company that came out with one of the first personal video recorders (PVRs), announced the upcoming launch of a new-generation user interface that has been reconfigured to integrate and standardize content originating from pay television, on-demand television and online video service outlets.
In its announcement, TiVo highlights a new feature that should greatly improve content discoverability: Predictions, a recommendation engine that operates on detailed analyses of viewing habits. The analysis covers not only the content the user watches but also the periods of the day when it is watched and how frequently.
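The kind of habit analysis described above can be sketched very simply: bucket past viewing sessions by time of day, then rank genres by how often they appear in the current slot. This is only an illustrative toy, not TiVo's actual method; the data, field names, and time buckets are all invented.

```python
from collections import Counter

def time_slot(hour):
    """Bucket an hour of the day into a coarse viewing period."""
    if 6 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "afternoon"
    return "evening"

def predict_genres(history, hour):
    """Rank genres by viewing frequency in the current time slot.

    history: list of (hour_watched, genre) pairs from past sessions.
    """
    slot = time_slot(hour)
    counts = Counter(genre for h, genre in history if time_slot(h) == slot)
    return [genre for genre, _ in counts.most_common()]

# Invented viewing history: news in the morning, drama in the evening.
history = [(8, "news"), (8, "news"), (9, "cartoons"),
           (20, "drama"), (21, "drama"), (22, "sci-fi")]

print(predict_genres(history, 20))  # → ['drama', 'sci-fi']
```

A production system would of course weight recency, blend many more signals, and learn the buckets rather than hard-code them, but the core idea of conditioning recommendations on when the user watches is the same.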
By launching this new user interface, the company, founded in 1999, which has had its ups and downs in recent years and was acquired by Rovi (which adopted TiVo as its corporate name), is seeking to transform television viewing into a user-friendly and personalized experience.
Other companies are active in the personalized television sector, aiming to know what users feel like watching better than the users know themselves. At the International Broadcasting Conference (IBC) held in Amsterdam in September 2016, roundtable participants who gathered to discuss the future of personalized television agreed on one point: recommender systems will become increasingly 'intelligent' and even better at predicting consumers' preferences.
According to the head of Android TV, who took part in the discussion, the ideal would be a content assistant powered by machine learning that would take much of the navigation work off the viewer. However, for that to become a reality, participants agreed that the industry would first need to focus on facilitating content discoverability before inundating the market with new content.
An international research community behind recommendation
At about the same time, the recommender system community gathered in Boston for the tenth annual ACM Conference on Recommender Systems, organized by the Association for Computing Machinery (ACM).
Recommender systems form a very broad field that extends from online dating services (Match.com in particular) to e-commerce (Amazon being the best-known platform). In 2007, the first conference lasted two days and received 35 presentation proposals. Nine years later, the conference ran for five days and received 294 proposals. It was sponsored by Google, Spotify, Pandora, the Alibaba Group, Netflix, Amazon and Microsoft, to name but the best known.
Recommendation is a particular form of information filtering: based on our past behaviours, it uses similarities among users and items to produce a ranked list tailored to a given user's preferences. In the near future, recommender systems may no longer rely solely on our interactions with machines to come up with personalized lists; they may also exploit measurable aspects of our personality and emotions.
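The similarity-based filtering described above is, in its simplest form, user-based collaborative filtering: find users whose past ratings resemble mine, and weight their opinions of items I have not yet seen by that resemblance. A minimal sketch, with invented users, titles, and ratings:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two rating vectors (0 = unrated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, ratings, titles):
    """Rank items the target user has not seen, scoring each by
    neighbours' ratings weighted by similarity to the target."""
    scores = [0.0] * len(titles)
    for other, their_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], their_ratings)
        for i, r in enumerate(their_ratings):
            if ratings[target][i] == 0:  # only score unseen items
                scores[i] += sim * r
    unseen = [i for i in range(len(titles)) if ratings[target][i] == 0]
    return [titles[i] for i in sorted(unseen, key=lambda i: scores[i],
                                      reverse=True)]

titles = ["drama", "sci-fi", "documentary", "reality"]
ratings = {                      # 0 = not yet watched
    "alice": [5, 4, 0, 0],
    "bob":   [4, 5, 2, 0],
    "carol": [1, 0, 5, 4],
}

# Alice resembles Bob far more than Carol, so Bob's ratings dominate.
print(recommend("alice", ratings, titles))  # → ['documentary', 'reality']
```

Real systems operate on sparse matrices with millions of users and use matrix factorization or learned models rather than raw cosine similarity, but the underlying intuition, similar pasts imply similar preferences, is the one described in the paragraph above.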
One of the workshops during the Boston conference dealt specifically with the issue of ‘personality in personalized systems.’ Our personality and our emotions shape our daily lives, as explained in the workshop’s presentation, and have a strong influence on our preferences, decisions and behaviour in general.
In recent years, users' emotions and personality have begun to play an important role in recommender systems, a role that consists of analyzing contextual data such as a user's content choices or the time needed to discover the content. Recent studies conducted in this field have shown that individual personalities could be associated with individual musical preferences. (For instance, one study posits that extroverted personalities prefer pop music whereas people who are rather closed to new experiences prefer religious music.)
During the workshop, the suggestion was made to take into consideration certain technology breakthroughs (the subtle recording of emotions by video or physiological sensors, for example) to generate voluminous datasets and improve recommender systems.
At the IBC Conference, an Android TV representative evoked a scenario in which people wore sensors to enable their television set to predict what they wanted to watch based on their state of mind. The journalist who covered this roundtable discussion judged the scenario rather far-fetched, but it is nevertheless perfectly conceivable in a world where it is possible to program behaviour prediction algorithms based on interactions between humans, as recently demonstrated by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
A developing market
The report titled "Discoverability: Toward a Common Frame of Reference" evoked the emergence of an algorithm market:
Proprietary algorithms—the complex, secret formulas behind the success of Google and Facebook—are well on their way to becoming ultimate strategic assets in the digital economy. They are the key to a future economy driven by connected devices and artificial intelligence, according to Peter Sondergaard, vice-president of research for Gartner, an advanced technology research and advisory firm. He foresees an “algorithm economy” in which new markets will open where the most sophisticated algorithms will be game-changing assets traded at high prices.
Recently, investors Elon Musk and Sam Altman financed OpenAI, a start-up that is seeking to establish an open algorithm market with the avowed goal of preventing Google and Facebook from dominating the artificial intelligence sector. In this market, anyone can download an algorithm and anyone can pay to use it.
Algorithms may not be infallible magic formulas, but they have become a fundamental asset for anyone who wants to succeed in the fourth industrial revolution, the technological revolution that, according to the World Economic Forum, has been under way for a few decades.