2017-01-16

Contextual targeting has been around for some time now, but the technology used to enhance the process is getting smarter all the time. Here Nick Welch, VP of Business Development at ADmantX, a contextual targeting company, explains how his company’s technology works.  The Tech Sessions is a series of articles aimed at explaining the technology behind the video advertising industry.

Outstream and in-read video advertising grew a massive 440 percent in the first half of 2016 and now accounts for 40 percent of total video ad spend, presenting a huge opportunity for publishers to monetise content effectively through video advertising. But to make the most of outstream video, publishers must ensure video ads are appropriate for on-page content and relevant to their audiences. This can be achieved via contextual video placements that use an advanced page-level approach to content classification – rather than relying on broad categories or channels – to build a precise and comprehensive understanding of the page's text, which in turn increases the customer's propensity to interact with the brand's message or offer.

How can I make sure video ads are being served to the most appropriate audiences?

Simple: you need a deep understanding of online readers' interests and real-time statistical and machine-learning technology to make it happen. In this way, your contextual video placements are enriched with profiles of the users most likely to interact with the message. Building these advanced user profiles starts with an understanding of readers' online interests, which strengthens the effectiveness of your video ad – an approach known as advanced contextual video placement.

So what are the key steps publishers should take in creating contextual video placements?

Replace keywords with semantic technology

Buyers have historically thought in terms of keywords when discussing context, and keywords are commonly used to classify web pages, but keywords alone are not a robust or precise way of understanding the real context of online content. Keyword-based classification relies solely on probability and statistics – on the mere presence of certain words in a piece of content – rather than on meaning.

Instead of looking for keywords, semantic technology takes into account each and every word in a sentence, just as humans do naturally when they are reading, writing, or listening. Contextual targeting companies approach this in different ways, but ADmantX's core technology – which leverages Natural Language Processing and cognitive technology with the broadest semantic neural network at its heart – uses four key methods to establish comprehension. These are morphological analysis to understand word forms, grammatical analysis to comprehend parts of speech, logical analysis to identify how words relate to one another, and semantic analysis or disambiguation to determine the true context of words or phrases.
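
To make those four layers more concrete, here is a minimal sketch using the open-source spaCy library. It is purely illustrative – it is not ADmantX's proprietary engine – but it shows the kind of output morphological (lemma), grammatical (part of speech) and logical (dependency) analysis produce for a sentence.

```python
# Minimal illustration of morphological, grammatical and logical analysis
# with the open-source spaCy library. This is NOT ADmantX's engine, just a
# sketch of the layers described above.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I've been flying my bird of prey.")

for token in doc:
    print(
        f"{token.text:12}"
        f"lemma={token.lemma_:10}"    # morphological analysis: word forms
        f"pos={token.pos_:8}"         # grammatical analysis: part of speech
        f"head={token.head.text:10}"  # logical analysis: what the word attaches to
        f"dep={token.dep_}"           # ...and how it relates to it
    )

# Full semantic disambiguation (e.g. 'bird of prey' vs 'bird as prey') is the
# harder step, and it is where commercial engines differentiate themselves.
```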

Some examples of how semantic analysis ensures a more in-depth understanding of content include:

Prepositions: Prepositions are disregarded in keyword analysis but can significantly impact the meaning of a phrase. In the two sentences ‘I’ve been flying my bird of prey’ and ‘I’ve been flying my bird as prey’, the change in preposition alters the meaning dramatically and this difference can only be identified through semantic technology.

Verbs: Semantic analysis can recognise how a word or phrase changes its meaning depending on the verb placed in front of it. For example, the phrase ‘big apple’ has a very different meaning in the sentence ‘we drove to the Big Apple’ than in the sentence ‘I ate a big apple.’

Homonyms: Multiple words that have the same spelling but different meanings can be distinguished through semantic analysis by looking at the context in which they appear. For instance the technology understands the difference between ‘stock’ in the three distinct contexts of the stock market, inventory, and cooking.
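
As a simple illustration of the homonym problem, the sketch below applies the classic Lesk algorithm from the open-source NLTK library to pick a WordNet sense of 'stock' from its surrounding words. It is a crude dictionary-overlap heuristic rather than a commercial semantic engine, and it will not always choose correctly, but it shows how surrounding context drives disambiguation.

```python
# Toy word-sense disambiguation with NLTK's Lesk algorithm: the surrounding
# words decide which WordNet sense of 'stock' is meant in each sentence.
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.wsd import lesk

sentences = [
    "The stock market fell sharply after the announcement",
    "The warehouse keeps extra stock for the holiday season",
    "Simmer the bones for hours to make a rich stock",
]

for sentence in sentences:
    sense = lesk(sentence.split(), "stock")
    if sense is not None:
        print(f"{sentence!r:60} -> {sense.name()}: {sense.definition()}")
```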

Semantic technology produces a conceptual map of the content on a page, including the main topics covered, the specific emotions the content evokes in the reader, the general sentiment of the text (positive, negative, or neutral), the entities (people, places, organisations) and the main lemmas (products or common nouns/verbs deemed relevant by our analyser) present.
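
As a rough, hypothetical illustration of what such a conceptual map might look like as a data structure – the field names are assumptions for the sketch, not ADmantX's actual output format:

```python
# Hypothetical page-level conceptual map as a simple data structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PageAnalysis:
    url: str
    topics: List[str] = field(default_factory=list)    # main subjects covered
    emotions: List[str] = field(default_factory=list)  # emotions evoked in the reader
    sentiment: str = "neutral"                          # positive / negative / neutral
    entities: List[str] = field(default_factory=list)  # people, places, organisations
    lemmas: List[str] = field(default_factory=list)    # relevant products, nouns, verbs

example = PageAnalysis(
    url="https://example.com/article",
    topics=["travel", "aviation"],
    emotions=["excitement"],
    sentiment="positive",
    entities=["Heathrow", "British Airways"],
    lemmas=["flight", "holiday", "book"],
)
```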

Extracting this type of highly accurate data makes publisher inventory more valuable to advertisers, because they can truly understand its context. Ad placements and context can be perfectly matched, and advertisers can be sure their video ads are served in a relevant, brand-safe environment. This data also allows publishers to move on to the next stage of contextual video placement – building customer profiles grounded in a deep understanding of readers' online interests.

Build user profiles to meet advertiser goals

To make their inventory more attractive to advertisers, publishers need to consider the goals advertisers are trying to achieve and create user profiles to meet those goals. For video, performance is likely to mean completed views – or possibly sales, if an ecommerce mechanism is linked within the video – so advertisers need to understand which users are most likely to complete a view or make a purchase in order to know who to target.

In conjunction with a data management platform (DMP), publishers can use their own data and predictive modelling to develop profiles of users who have a propensity to watch a video to completion, or to buy a product they see advertised.

The process for creating advanced behaviour-profiling models involves a number of steps and results in a full artificial intelligence system for managing customers' propensity profiles. First, data flows are analysed, defined, and evaluated to form a baseline. Then, features that enrich the description of behaviour are constructed to produce the reference data set; these features may be inferred from the data flows or integrated from external sources. Next, multidimensional data analysis techniques and inference models are used to explore and identify the impact distinct feature patterns have on behaviour.
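
A minimal sketch of that feature-construction step is below, assuming a simple log of page-view events already tagged with the semantic topics described earlier. The column names and aggregation choices are illustrative assumptions only.

```python
# Build a per-user reference data set from an event log: interest mix
# (share of views per topic) plus basic video-completion behaviour.
import pandas as pd

events = pd.DataFrame({
    "user_id":        [1, 1, 2, 2, 2],
    "topic":          ["travel", "travel", "finance", "travel", "finance"],
    "video_start":    [1, 0, 1, 1, 0],
    "video_complete": [1, 0, 0, 1, 0],
})

# Interest mix: what share of each user's page views falls in each topic.
interest_mix = pd.crosstab(events["user_id"], events["topic"], normalize="index")

# Behavioural counts: video starts and completions per user.
behaviour = events.groupby("user_id").agg(
    starts=("video_start", "sum"),
    completions=("video_complete", "sum"),
)

reference_set = interest_mix.join(behaviour)
print(reference_set)
```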

The online interests extracted through semantic analysis are a fundamental input to these propensity profiles.

This data is then fed into the statistical model to create the profiling and prediction mechanism. The performance of this model is continually improved by an advanced machine learning solution that learns from each propensity event (video completions, purchases, and so on). Finally, automated monitoring systems continuously assess the model to determine when it has reached maximum efficiency, when new approaches are required, and when other models should be incorporated into the process.
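
The sketch below shows, in highly simplified form, how such a profiling and prediction mechanism could be wired up: a logistic-style propensity model updated incrementally as new propensity events arrive, with a basic AUC check standing in for the automated monitoring. The feature shapes and data are assumptions for illustration, not ADmantX's actual system.

```python
# Incremental propensity model with a simple monitoring step, using
# synthetic data in place of the real reference set built above.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Behavioural features per user and a label: 1 = completed a video / purchased.
X_history = rng.random((500, 4))
y_history = (rng.random(500) < 0.3).astype(int)

# A logistic-style propensity model that supports incremental updates.
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_history, y_history, classes=[0, 1])

# A new batch of propensity events (video completions, purchases) arrives.
X_new = rng.random((50, 4))
y_new = (rng.random(50) < 0.3).astype(int)

# Monitoring: check how well the current model ranks the new events...
auc = roc_auc_score(y_new, model.decision_function(X_new))
print(f"AUC on latest propensity events: {auc:.2f}")

# ...then let the model learn from them.
model.partial_fit(X_new, y_new)

# Users with the highest predicted propensity form the audience segment.
scores = model.decision_function(X_new)
top_users = np.argsort(scores)[-10:]
print("Highest-propensity users in this batch:", top_users)
```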

The explosion of outstream and in-read video advertising provides a lucrative revenue stream for publishers to effectively monetise content. But to truly make the most of this trend, they must ensure they have a deep understanding of both page-level content and their audiences – an understanding that can be used to offer contextual video ad placements and make their inventory more attractive to advertisers.
