This article is presented as part of an editorial partnership between CMF Trends and Méta-Media. © All rights reserved.
Last week, we introduced six major trends in artificial intelligence applied to media: extended reality, news tools, the fight against fake news, smart speakers and more. Let’s see how the uses of AI in our industries can be pushed even further.
7. Artificial intelligence for indexing, archiving and optimizing searches
Previously, search engines worked only with text. With the advent of AI, searches can now be run on images, video and sound. Thanks to a combination of image recognition, machine learning, speech-to-text, NLP, and face, object and place recognition, AI can automate metadata creation to improve archiving and, above all, make content more discoverable. Structuring the data, for example in the EBUCore format, is the key step in automating content exploitation. Converting data formats, transcoding, extracting audio and subtitles, and transfers/copies/purges (FTP, HTTP) are all content-management tasks that can be automated, enabling close to real-time cataloguing. Automated indexing also speeds up the work of reporters and facilitates fact-checking.
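As a sketch of the idea, here is how the outputs of several recognizers can be merged into one indexable record. The recognizer functions below are hypothetical stubs standing in for real cloud services, and the field names are illustrative, not actual EBUCore fields.

```python
def recognize_faces(frame):
    # Stand-in for a face-recognition service call (hypothetical stub).
    return ["Jane Doe"]

def recognize_objects(frame):
    # Stand-in for an object-recognition service call (hypothetical stub).
    return ["podium", "microphone"]

def speech_to_text(audio):
    # Stand-in for a speech-to-text service call (hypothetical stub).
    return "good evening and welcome to the press conference"

def build_metadata(frame, audio):
    """Merge recognizer outputs into a single indexable metadata record."""
    transcript = speech_to_text(audio)
    stopwords = {"and", "the", "to"}
    return {
        "people": recognize_faces(frame),
        "objects": recognize_objects(frame),
        "transcript": transcript,
        "keywords": sorted(set(transcript.split()) - stopwords),
    }

record = build_metadata(frame=None, audio=None)
print(record["people"])   # who appears in the clip becomes searchable
```

Once such records exist for every clip, "find all rushes where Jane Doe appears near a podium" becomes an ordinary database query rather than a manual archive search.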
A given piece of content has a very short lifespan. Without appropriate metadata, it is impossible to locate a specific subject among everything that has been produced, which is why optimizing metadata creation matters. AI makes metadata creation faster, cheaper and more precise, provided it is fed enough data.
It is nearly impossible for a media outlet to develop proprietary solutions that it controls 100%. Many turnkey solutions are available, most of them relying on cloud systems from Microsoft, Google, Amazon, IBM, OpenText, Oracle and others.
Newsbridge, which has a strong presence in the media industry, offers an automated, real-time indexing solution that applies image recognition to rushes. This simultaneously streamlines the production of a story and extends content lifespan by making later reuse easier. A live translation function is also offered for interviews.
The NYT has been using Editor, an AI-based tool, since 2015 to streamline fact-checking and information formatting. When reporters write an article, they use tags to flag key elements. The machine learns to locate these elements, understand the article’s topic, and run a real-time search to extract information on the subject. BBC News Labs launched a similar tagging technology, called Juicer; another tool, SUMMA, uses language recognition to index content better. LEANKR enables fine-grained video indexing with automatic tagging, smart thumbnail creation, and an in-video search engine, thanks to natural language processing, speech-to-text and OCR.
AI thus helps optimize the accuracy of search results. Computer vision also makes it possible to handle image content better and speed up the production process. Today, machines can easily identify individuals or situations in pictures to generate captions or feed more comprehensive databases.
8. Artificial intelligence for targeting and customization
Recommendation algorithms are not new. The pioneer, Tapestry, celebrated its 25th anniversary in 2017. Through recommendation algorithms, AI is the perfect tool for tailoring the content distribution strategy in real time: analyzing trends on social networks to target the best time to broadcast, analyzing audiences, automatically generating titles/summaries/illustrations with key words and hashtags that guarantee greater content visibility, personalized newsletters, customized playlists …
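The underlying principle of such systems is collaborative filtering: score what a user has not yet consumed by its similarity to what they have. A minimal item-based sketch on invented users, titles and ratings:

```python
from math import sqrt

# Toy data: implicit ratings per user for four programmes (all invented).
ratings = {
    "alice": {"news": 5, "drama": 1, "sport": 4, "doc": 1},
    "bob":   {"news": 4, "drama": 2, "sport": 5, "doc": 1},
    "carol": {"news": 1, "drama": 5},
}

def cosine(a, b):
    """Cosine similarity between two items over users who rated both."""
    users = [u for u in ratings if a in ratings[u] and b in ratings[u]]
    dot = sum(ratings[u][a] * ratings[u][b] for u in users)
    na = sqrt(sum(ratings[u][a] ** 2 for u in users))
    nb = sqrt(sum(ratings[u][b] ** 2 for u in users))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user):
    """Pick the unseen item whose similarity to the user's history, weighted
    by their ratings, is highest."""
    seen = ratings[user]
    unseen = {i for r in ratings.values() for i in r} - set(seen)
    return max(unseen, key=lambda i: sum(cosine(i, s) * r
                                         for s, r in seen.items()))

print(recommend("carol"))  # "sport" scores highest among carol's unseen items
```

Production systems replace the toy similarity with learned embeddings and blend in the contextual signals (time, place, device) the article mentions, but the scoring-and-ranking skeleton is the same.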
Content is tailored to each user’s profile and journey, factoring in contextual data (place, time, weather and so on). Focus groups have been replaced by the real behaviour of real users.
Amazon, Facebook and Netflix are the textbook cases of customization. Netflix tailors its entire home page. Its Meson system, coupled with machine learning that collects data to keep improving, even selects from up to nine versions the visual a user is most likely to click on, depending on their journey and context. The goal: find the combination of series best suited to each segment of users, rather than the content that matches the greatest number of users. Algorithms thus underpin creativity and diversity rather than standardization.
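Mechanically, choosing the best-performing of several visuals is a multi-armed bandit problem. The sketch below is a generic epsilon-greedy bandit over nine simulated variants, an illustration of the principle only, not Netflix’s actual Meson pipeline; the click rates are invented.

```python
import random

random.seed(42)

N_VARIANTS = 9                 # nine candidate visuals, as in the article
clicks = [0] * N_VARIANTS      # observed clicks per variant
shows = [0] * N_VARIANTS       # impressions per variant
true_ctr = [0.02] * N_VARIANTS
true_ctr[3] = 0.10             # variant 3 secretly performs best (invented)

def choose(eps=0.1):
    """Explore a random variant with probability eps, otherwise exploit the
    variant with the best observed click rate."""
    if random.random() < eps or not any(shows):
        return random.randrange(N_VARIANTS)
    return max(range(N_VARIANTS), key=lambda v: clicks[v] / max(shows[v], 1))

for _ in range(20000):         # simulated impressions with simulated clicks
    v = choose()
    shows[v] += 1
    clicks[v] += random.random() < true_ctr[v]

best = max(range(N_VARIANTS), key=lambda v: clicks[v] / max(shows[v], 1))
print(best)                    # the bandit converges on the strongest visual
```

Running one bandit per audience segment is what lets the same title show different artwork to different viewers, which is the per-segment optimization the paragraph describes.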
AI can automate content curation, regularly update theme-based playlists, and profile users to make customized recommendations. According to a Reuters study, 59% of media outlets use artificial intelligence to recommend articles, or plan to do so. Your Weekly Edition is the NYT’s personalized newsletter. Launched in June 2018, it sends a personalized selection of content (using algorithmic and human curation) with one goal: only show users content they haven’t seen yet. Amazon Personalize lets developers with no machine-learning experience easily build customization features. Freshr, a Messenger bot aimed at 20- to 35-year-olds, delivers a five-minute summary of the biggest news of the moment each morning, based on the user’s tastes.
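The newsletter’s core rule, only showing content a user has not yet seen, reduces to a simple filter over a ranked candidate list. A minimal sketch with invented article IDs:

```python
def pick_newsletter(ranked_candidates, seen_ids, k=3):
    """Return the top-k ranked items the user has not already read."""
    return [c for c in ranked_candidates if c not in seen_ids][:k]

# Invented IDs: a ranked candidate list and the set the user has already read.
ranked = ["a12", "b07", "c33", "d41", "e05"]
seen = {"a12", "c33"}
print(pick_newsletter(ranked, seen))  # ['b07', 'd41', 'e05']
```

The hard part in practice is upstream, producing the ranked list and tracking what counts as "seen" across devices; the final selection step really is this simple.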
Recommendation algorithms are far from perfect. Economist Matthew Gentzkow refers to a “personalization paradox” to describe their disappointing side. How many times have we been shown content we’ve already purchased, or just content our friends post on Facebook? There, too, advances in AI could help find the right balance between customization and smart content promotion. Traditional methods could sometimes be just as effective: RAD, CBC’s journalism lab, uses online audience surveys to offer content that is suited to their expectations.
9. Artificial intelligence and accessibility
Automated transcription technology makes part of the reporter’s life easier, optimizing work time and even making content accessible to people with disabilities: automated subtitles (speech-to-text), text read aloud (text-to-speech), image-context recognition for audio description, and even real-time translation.
AI Media TV offers subtitling and transcription for events, live and on replay, and has just launched Scribblr.ai. Funded by Google DNI, Trint is a transcription tool that automatically transcribes audio and video streams; it is used by AP and integrated into Adobe Premiere. Mediawen generates real-time video translation using IBM Watson and text-to-speech, with voice synthesis or subtitling. AFP has developed Transcriber, which lets its reporters automate interview transcription.
10. Artificial intelligence for video production and creation
Media outlets have a growing need for short-format content tailored to social networks, and many start-ups have emerged to offer turnkey solutions. AI can now automatically generate text from graphic documents, or video from text. It also assists with the various technical steps of capture and broadcast, and is involved in image post-production and special effects. The number of solutions embedding AI building blocks for video editing and media management has grown exponentially in the last few years.
Thanks to image recognition, AI can analyze video rushes and produce a coherent edit. Most major editing-software publishers, such as Adobe, Avid and Elemental (an Amazon subsidiary), have already added automatic video-processing functions to save editors time. Adobe and Stanford have developed an AI tool that automates part of the video-editing work while leaving the creative part to the human being; it can, for example, propose several different cuts of a dialogue scene. With its simplified editing tools, Gingalab creates automated, personalized videos and automatically generates “best of” videos following a pre-defined editorial line (humour, tension, focus on a protagonist, etc.), which are then automatically posted to social networks, with analytics aggregated as well.
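Rush analysis of this kind usually begins with shot-boundary detection. A crude sketch on toy “frames” of pixel brightness values; real systems compare learned visual features rather than raw pixel differences:

```python
def detect_cuts(frames, threshold=0.5):
    """Flag indices where consecutive frames differ sharply (a likely cut)."""
    cuts = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        # Mean absolute per-pixel difference between consecutive frames.
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            cuts.append(i)
    return cuts

# Toy frames: brightness per pixel; a hard cut between index 2 and index 3.
frames = [
    [0.10, 0.10, 0.10], [0.12, 0.10, 0.11], [0.11, 0.13, 0.10],  # shot 1
    [0.90, 0.95, 0.90], [0.88, 0.90, 0.92],                      # shot 2
]
print(detect_cuts(frames))  # [3]
```

Once the rushes are segmented into shots, the recognition and ranking steps described above can operate shot by shot to assemble a candidate edit.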
In September 2018, the BBC broadcast a programme created entirely by a machine. “Made By Machine: When AI Met The Archive” assembled excerpts from the BBC’s vast archives into a one-hour show. It wasn’t always coherent, an accusation also leveled at the AI “writers” of Sunspring, It’s No Game and Zone Out.
While GAN (generative adversarial network) technology helps machines imitate creative works ever more closely, AI is clearly not ready to replace artists: it rests solely on probabilistic and combinatorial systems with no symbolic intelligence or emotional capacity.
11. Artificial intelligence for monetizing and predicting success
From advanced audience analytics to detecting the right target, machine-learning algorithms help marketing teams separate conjecture from evidence. By segmenting behavioural data, analyzing audiences, and detecting trends, AI can predict the potential commercial success of content before it is broadcast. Advanced analytics can thus uncover new patterns, correlations and trends to improve decision-making. AI intervenes across the marketing chain: customer acquisition (audience analysis and segmentation, scoring and targeting, visual-context identification), conversion (customization and recommendation, content creation, site and media optimization, automated campaign management), and loyalty building (conversational agents, customer-program automation, behaviour analysis, attribution calculation and predictions).
AI can now even collect “emotional data” to analyze our behaviour using our emotions in addition to our clicks. This is the nth degree of customization: media that proposes content suited to how we feel at the time. Datakalab’s Frank Tapiro describes the change as follows: “I’ve created emotion for thirty years. Now, I’m using neuroscience and data to measure emotion.” Amazon is even working on a wristband to track our emotions.
Prevision.io is an online platform (SaaS) that automatically builds predictive models from datasets (internal or external, structured or unstructured) and displays the results on dashboards. The machine-learning platform identifies predictive scenarios to anticipate audience losses and unsubscribes, and to manage pricing for advertising slots; it promotes its solution’s transparency by explaining each result and offering recommendations for actions and/or impact assessments. The Le Parisien-Les Echos group recently received Google DNI financing for an anti-churn (unsubscribe) program called High Fidelity, which will pool data from call centres, newsletters, mailings, and interactions on apps and websites to predict cascades of unsubscribes and prevent massive reader loss. With Project Feels, the NYT sells premium advertising space based on reader sentiment. Vionlabs, a Swedish company, indexes content through automatic emotion recognition: it analyzes content and builds graphs representing its emotional moments, data that then feeds an emotion-based recommendation engine.
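The churn-prediction idea behind these projects can be illustrated with a toy logistic regression trained by gradient descent on invented engagement features (visits per week, articles read). Real platforms use far richer data and models; this only shows the principle of scoring subscribers by their risk of leaving.

```python
import math

# Invented training data: (features, label), label 1 = unsubscribed (churned).
data = [([8.0, 12.0], 0), ([7.0, 9.0], 0), ([1.0, 1.0], 1),
        ([0.5, 0.0], 1), ([6.0, 10.0], 0), ([0.0, 1.0], 1)]

w, b = [0.0, 0.0], 0.0  # model parameters, learned below

def predict(x):
    """Churn probability via the logistic function."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):                  # plain stochastic gradient descent
    for x, y in data:
        err = predict(x) - y           # gradient of log-loss w.r.t. z
        for i in range(len(w)):
            w[i] -= 0.1 * err * x[i]
        b -= 0.1 * err

at_risk = predict([0.5, 1.0])          # a disengaged reader scores high
loyal = predict([7.5, 11.0])           # a heavy reader scores low
print(round(at_risk, 2), round(loyal, 2))
```

A newsroom would act on the high-risk scores, for example with a retention offer or a re-engagement newsletter, before the unsubscribe happens; that is the "anti-churn" loop the paragraph describes.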
AI is used to learn as much as possible about users so as to target the best time, and the best way, to suggest a paid subscription. AI is becoming a decision-making aid and an anti-churn tool.
12. Artificial intelligence and media ethics
Amid a crisis of confidence, using AI and opaque recommendation algorithms built on behavioural analysis may not be an obvious choice for the media. Using AI means that clear rules have to be established and transparent documentation provided for the audience. The big data that powers AI rests on massive data collection, including personal data. Owning that data and staying independent of third parties is critical to developing an independent ecosystem, and could be critical to long-term business survival, particularly for media companies.
However, many of the datasets and algorithms available in the GAFA clouds are biased, sometimes in racist ways.
How can public service values like information and education be integrated into a recommendation algorithm?
How can we federate around an issue to enliven public debate?
How can we continue to play a recommending role in social cohesion?
What level of recommendation do we want?
Where is the right balance between personalization and content discovery?
The British government has launched an observatory on the use of AI in public services. The BBC applies its own ethics rules in its “Responsible Machine Learning in the Public Interest” programme, joined by the EBU, whose Big Data working group is examining the ethical use of algorithms in public media: avoiding bias and addressing issues the tools cannot yet grapple with, such as inequality around artificial intelligence, neurohacking, tech sovereignty and, above all, the need for complementarity between the human brain and artificial intelligence.
AI’s interpretability and explainability are the biggest challenge. The intelligibility of algorithms in general, and AI algorithms in particular, has become a dominant criterion, raised in France’s Villani Report and spotlighted by Europe’s GDPR. The first step toward transparency is to state clearly when content or a recommendation is being proposed, in whole or in part, by an algorithm.
On the other hand, AI can also help reach niche audiences for which a media outlet lacks the means to produce content: algorithms make it possible to create fully personalized playlists around very narrow subjects. The media could also deliberately leave some things to chance. Jonnie Penn, a guest speaker at the EBU’s November 2018 workshop “The Impact of AI on Media”, argues for “data deserts,” areas “protected from data,” to leave room for “healthy differences of opinions.”
The buzz around AI risks inflating expectations: AI is not a miracle solution and, in most of the cases described above, it must be paired with a human being, especially for creating content. It is, however, already operational on the demand side, in broadcasting, content access and monetization. It holds great potential for the social good: helping audiences navigate masses of content through optimized search and personalized recommendations, and helping prevent manipulation.