2016-10-26

About

It takes less than four minutes for a piece of news to spread across online, TV and radio. In the office or on the road, Streem connects you with news monitoring from each source, mention alerts, keyword and industry tracking, and real-time audience analytics, delivered live to your desktop or mobile device within a minute of publication or broadcast.

Track competitors, produce reports, take news intelligence wherever you go with Australia’s fast, flexible and trusted news intelligence platform.



Background

Media monitoring and aggregation apps are changing the way news is discovered and consumed. Competition in this space is intensifying as each new app and service promises to deliver a more personalized and streamlined experience than those that came before. Ultimately, the winners in this battle for market share will be those who best understand the content they are sharing and use that knowledge to provide cutting-edge personalization, in-depth analytics and reader satisfaction.

The Challenges

Frustrated with both the accuracy and the ROI they were seeing from an incumbent solution, Streem decided to evaluate their options and source an alternative provider. They had three key considerations when evaluating and benchmarking solutions: performance, cost and setup investment.

Streem’s customers require targeted, informed and flexible news alerts based on their individual interests. Therefore, what the team at Streem required was a fast, API-based service that allowed them to analyze large streams of content in as close to real-time as possible.

Dealing with vast amounts of content, Streem needed the ability to intelligently identify mentions of people, organizations, keywords and locations while categorizing content into standardized buckets. An automated workflow would allow them to scale their monitoring beyond human capabilities and deliver targeted news alerts as close to publication time as possible.

The Solution

Using the AYLIEN Text Analysis API, Streem have built an automated content analysis workflow that sources, tags and categorizes content, extracting what matters with the API’s entity/concept extraction and classification capabilities.

Key points of information are extracted from each individual piece of content and then analyzed using Natural Language Processing (NLP) and Machine Learning techniques, providing Streem with a more accurate solution, faster time to value and an overall greater return on investment.
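
As an illustration of what such a workflow can look like, here is a minimal sketch in Python that tags a single article by calling the API’s entities and classify endpoints over REST. The endpoint paths and X-AYLIEN-TextAPI-* header names are assumptions based on the Text API documentation, and the credentials, article URL and record structure are placeholders rather than Streem’s actual implementation.

```python
# A sketch of the kind of tagging workflow described above, assuming the
# Text Analysis API's REST endpoints and auth headers; credentials, the
# article URL and record shape are placeholders, not Streem's implementation.
import requests

API_BASE = "https://api.aylien.com/api/v1"
HEADERS = {
    "X-AYLIEN-TextAPI-Application-ID": "YOUR_APP_ID",    # placeholder credential
    "X-AYLIEN-TextAPI-Application-Key": "YOUR_APP_KEY",  # placeholder credential
}


def analyze(endpoint, article_url):
    """Call one Text API endpoint for a given article URL and return parsed JSON."""
    response = requests.get(
        API_BASE + "/" + endpoint,
        headers=HEADERS,
        params={"url": article_url},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


def tag_article(article_url):
    """Build a tagged record: extracted entities plus suggested categories."""
    entities = analyze("entities", article_url).get("entities", {})
    categories = analyze("classify", article_url).get("categories", [])
    return {
        "url": article_url,
        "entities": entities,      # e.g. {"person": [...], "organization": [...]}
        "categories": categories,  # e.g. [{"label": ..., "code": ..., "confidence": ...}]
    }


if __name__ == "__main__":
    print(tag_article("http://www.example.com/news/some-article"))  # hypothetical URL
```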

“The accuracy of Aylien was higher than competing providers, and the integration process was much simpler.” -Elgar Welch, Streem

Endpoints used

Streem are using our Entity and Concept Extraction endpoints to identify keywords and mentions of people, places and organizations, along with key information like monetary or percentage values, in news articles and blogs, and our Classification endpoint to categorize that content into predefined buckets that suit their users’ tastes.

Let’s take a closer look at each endpoint and how Streem use them within their processes.

Entity Extraction

The Entity Extraction endpoint is used to extract named entities (people, organizations, products and locations) and values (URLs, emails, telephone numbers, currency amounts and percentages) mentioned in a body of text or web pages.

Here’s an example from our live demo: we entered the URL of a Business Insider article and the endpoint returned the entities mentioned in it.



Mentioned entities are extracted and compiled by type. By extracting entities, Streem can easily understand which people, places, organizations, products, etc. are mentioned in the content they analyze, making it easy to provide relevant, targeted results to their users.
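
To make that concrete, here is a minimal sketch of querying the Entity Extraction endpoint from Python. The endpoint path, auth header names and the entities-grouped-by-type response shape are assumptions based on the Text API documentation; the credentials and article URL are placeholders.

```python
# Minimal sketch: extract entities from an article URL via the /entities endpoint.
# Endpoint path, header names and response shape are assumed from the Text API
# documentation; credentials and the article URL are placeholders.
import requests

HEADERS = {
    "X-AYLIEN-TextAPI-Application-ID": "YOUR_APP_ID",    # placeholder credential
    "X-AYLIEN-TextAPI-Application-Key": "YOUR_APP_KEY",  # placeholder credential
}

response = requests.get(
    "https://api.aylien.com/api/v1/entities",
    headers=HEADERS,
    params={"url": "http://www.example.com/news/some-article"},  # hypothetical URL
    timeout=10,
)
response.raise_for_status()

# Entities come back grouped by type, e.g. person, organization, location,
# keyword, money, percentage; each type maps to a list of mentions.
for entity_type, mentions in response.json().get("entities", {}).items():
    print(entity_type + ": " + ", ".join(str(m) for m in mentions))
```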

Concept Extraction

The Concept Extraction endpoint extracts named entities mentioned in a document, disambiguates and cross-links them to DBpedia and Linked Data entities, along with their semantic types (including DBpedia and schema.org types).
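
The same pattern applies to the Concept Extraction endpoint, sketched below. The concepts-keyed-by-DBpedia-URI response shape with surfaceForms and types fields is an assumption based on the Text API documentation, and the credentials and article URL are again placeholders.

```python
# Minimal sketch: link mentions to DBpedia concepts via the /concepts endpoint.
# Endpoint path, header names and response fields are assumed from the Text API
# documentation; credentials and the article URL are placeholders.
import requests

HEADERS = {
    "X-AYLIEN-TextAPI-Application-ID": "YOUR_APP_ID",    # placeholder credential
    "X-AYLIEN-TextAPI-Application-Key": "YOUR_APP_KEY",  # placeholder credential
}

response = requests.get(
    "https://api.aylien.com/api/v1/concepts",
    headers=HEADERS,
    params={"url": "http://www.example.com/news/some-article"},  # hypothetical URL
    timeout=10,
)
response.raise_for_status()

# Each concept is keyed by its DBpedia URI and carries the surface forms that
# matched in the text plus its semantic types.
for uri, concept in response.json().get("concepts", {}).items():
    surface_forms = [form.get("string") for form in concept.get("surfaceForms", [])]
    print(uri, surface_forms, concept.get("types", []))
```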

Classification

The Classification endpoint classifies, or categorizes, a piece of text according to your choice of taxonomy, either IPTC Subject Codes or IAB QAG.

We took this TechCrunch article on Tesla Motors, analyzed the URL and received a set of classification results.

Each suggested category is returned with a Score and a Confident? flag. By providing confidence scores, users can define their own parameters for which confidence levels to accept, decline or review.
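
As a sketch of how those scores can drive a workflow, the example below calls the Classification endpoint and applies a simple threshold to each returned category’s confidence. The endpoint path, header names and category fields (label, code, confidence) are assumptions based on the Text API documentation; the credentials, article URL and threshold are placeholders.

```python
# Minimal sketch: categorize an article and act on the returned confidence.
# Paths, headers and field names are assumed from the Text API documentation;
# credentials, the article URL and the threshold are placeholders.
import requests

HEADERS = {
    "X-AYLIEN-TextAPI-Application-ID": "YOUR_APP_ID",    # placeholder credential
    "X-AYLIEN-TextAPI-Application-Key": "YOUR_APP_KEY",  # placeholder credential
}
ARTICLE_URL = "http://www.example.com/news/tesla-article"  # hypothetical URL
MIN_CONFIDENCE = 0.7  # example threshold: accept above, otherwise send for review

response = requests.get(
    "https://api.aylien.com/api/v1/classify",
    headers=HEADERS,
    params={"url": ARTICLE_URL},
    timeout=10,
)
response.raise_for_status()

for category in response.json().get("categories", []):
    decision = "accept" if category.get("confidence", 0) >= MIN_CONFIDENCE else "review"
    print(category.get("label"), category.get("code"), category.get("confidence"), decision)
```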

The outcome

Streem now ingest and analyze tens of thousands of pieces of content every day in near real time. Their backend process, powered by the AYLIEN Text Analysis API, extracts the key pieces of information on which their users build tailored, flexible searches, alerts and monitoring around the news events that matter to them.

Using AYLIEN’s state-of-the-art solutions, the team at Streem now have more time to invest in their own product offering, delivering the best news aggregation service possible to their users.
