2016-06-23

I know what you’re thinking. Isn’t SpringOne Platform for Java developers? What does that have to do with me? I don’t write apps, I deal with data.

If this were 2013, you’d be right. But in 2016, application development and data represent two sides of the same coin. In order for applications to deliver exceptional user experiences, they must be infused with insights gleaned from Big Data analytics and data science. And in order for Big Data analytics and data science to be relevant to the business, insights must be surfaced in applications. That’s where SpringOne Platform 2016 comes in.

Modern applications require new architectural models that support distributed systems, data microservices and dynamic data pipelines to get the right insights to the right applications at the right time. There are a number of sessions at this year’s SpringOne Platform that are specifically designed to provide data professionals with the knowledge they need to collaborate with developers to create compelling, data-driven applications.

So if you’re a data professional and think SpringOne Platform is just for developers, think again. Take a look below at just some of the data-themed sessions at the show and register here to join your fellow data pros at SpringOne Platform this August in Las Vegas.

Basics of Data with Spring and Cloud Foundry

Introduction to Spring Data

It’s 2016. Are you still writing data queries by hand? Feeling locked into your relational database due to having written gobs of SQL operations? In this live coding session, Pivotal Spring team member Greg Turnquist introduces Spring Data, which allows developers to overcome both challenges.
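The “no hand-written queries” pitch comes down to repository interfaces whose method names Spring Data translates into queries for you. A minimal sketch, assuming a hypothetical `Customer` JPA entity (the entity and method names are invented for illustration):

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical repository: no SQL anywhere. Spring Data derives a query
// equivalent to "SELECT c FROM Customer c WHERE c.lastName = ?1"
// from the method name alone, at startup.
public interface CustomerRepository extends JpaRepository<Customer, Long> {
    List<Customer> findByLastName(String lastName);
}
```

Because the same method-name grammar works against `MongoRepository`, `GemfireRepository` and other store-specific base interfaces, this also loosens the relational lock-in the session description mentions.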

Recommended reading: Add Custom Functionality to a Spring Data Repository

Consuming Data Services with Spring Apps on Cloud Foundry

Applications running on Cloud Foundry often need to connect to data services such as relational databases, document and data structure stores, and messaging services. Spring Boot and the Cloud Foundry Java Buildpack provide auto-configuration capabilities that make it possible to connect to data services with no application code changes for simple use cases, yet back away gracefully when custom configuration of connections is required. In this talk, Pivotal Senior Software Engineer Scott Frederick covers how these auto-configuration mechanisms work, their limitations, and how to explicitly configure connections when the need arises.
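When auto-configuration needs to “back away,” the explicit route in 2016-era Spring is the Spring Cloud Connectors API. A hedged sketch, assuming a bound Cloud Foundry service instance named `my-db` (the service name is hypothetical):

```java
import javax.sql.DataSource;
import org.springframework.cloud.config.java.AbstractCloudConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataServiceConfig extends AbstractCloudConfig {

    // Explicitly bind to the service instance named "my-db" instead of
    // relying on buildpack auto-reconfiguration; this is the point where
    // custom pool sizes, connection properties, etc. can be applied.
    @Bean
    public DataSource dataSource() {
        return connectionFactory().dataSource("my-db");
    }
}
```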

Recommended reading: Getting Started Deploying Spring Apps on Cloud Foundry

Spring Data and In-Memory Data Management in Action

Cloud Foundry plus Apache Geode (incubating) is a powerful combination for supporting real-time, data-driven applications. In this session, Pivotal Spring team member John Blum live codes a Spring Boot-based application powered by Apache Geode (which powers Pivotal GemFire) running on Cloud Foundry. You’ll learn in-memory computing and data management concepts, including: data access and querying using Spring Data Repositories and GemFire OQL; complex/real-time event processing with GemFire continuous queries (CQs); data affinity using GemFire Functions, conveniently implemented and executed with Spring Data GemFire Function annotation support; and, finally, effective strategies and techniques for testing highly concurrent, distributed applications using Spring’s test framework along with JUnit, Mockito and MultithreadedTC.
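To make the repository/OQL pairing concrete, here is a hedged Spring Data GemFire sketch. The `Flight` domain class, region name and field names are invented; the second method shows hand-written OQL for cases the method-name grammar can’t express:

```java
import java.util.List;
import org.springframework.data.gemfire.repository.GemfireRepository;
import org.springframework.data.gemfire.repository.Query;

// Hypothetical repository over a GemFire/Geode region of Flight objects.
public interface FlightRepository extends GemfireRepository<Flight, String> {

    // Derived query: roughly SELECT * FROM /Flights f WHERE f.origin = $1
    List<Flight> findByOrigin(String origin);

    // Explicit GemFire OQL, using positional $1 parameter binding.
    @Query("SELECT * FROM /Flights f WHERE f.delayMinutes > $1")
    List<Flight> findDelayedLongerThan(int minutes);
}
```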

Recommended watching: Building Highly Scalable Spring Applications with In-memory Distributed Data Grids

Spring and Big Data

Hadoop is a powerful platform for Big Data analytics, but getting data into Hadoop and extracting analytics results from it can be challenging. In this talk, Thomas Risberg, a Software Engineer at Pivotal, discusses how to develop Big Data pipelines using Spring technologies that can run both locally and in the cloud. You’ll learn how to stream data into HDFS, run a Spark or a Hive job, and extract the results from HDFS or Cassandra for presentation. After attending this talk you will have an understanding of how a combination of Spring projects can help build Big Data solutions that incorporate and orchestrate many diverse technologies.

Recommended reading: Introduction to Apache Spark for the Spring Developer

Spring with Apache NiFi

Spring is popular with developers for its emphasis on simplicity, modularity and productivity when it comes to workflow orchestration and complex event processing. Apache NiFi, a data-flow orchestration tool developed at the National Security Agency, is a newer addition to the already rich Big Data technology stack. Can the two complement one another? This hands-on talk by Hortonworks Principal Architect Oleg Zhurakousky provides an introduction to Apache NiFi and demonstrates its core features, concentrating on the why, where and how of integrating the two technologies.

Recommended reading: Apache Nifi … What Is It and Why Does It Matter?

Data Architectures and Microservices

Data Microservices in the Cloud

Spring Cloud Data Flow enables you to create data pipelines for many common use cases such as data ingestion, real-time analytics and data import/export – critical capabilities for supporting data-driven smart applications. In this session, Mark Pollack, Spring Cloud Data Flow lead at Pivotal, introduces Spring Cloud Data Flow’s architecture and demonstrates the orchestration capabilities of long-running and short-lived data-centric applications on multiple runtime platforms such as Cloud Foundry, Kubernetes, Apache Mesos and Apache YARN.
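The pipelines in question are composed in Data Flow’s pipe-and-filter DSL. A minimal, hypothetical example from the Data Flow shell, wiring three stock starter apps (the stream name and filter expression are invented):

```
stream create --name clicks --definition "http --server.port=9000 | filter --expression=payload.length()>0 | log" --deploy
```

Each segment between pipes is an independently deployable Spring Boot app; Data Flow’s job is to deploy and wire them on whichever runtime platform you target.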

Recommended listening: A Quick Look at Spring Cloud Data Flow

Architecting for Cloud Native Data: Data Microservices Done Right Using Spring Cloud

How do you get data-driven insights from your analytics environment to operational applications to intelligently automate business processes? In this session, Pivotal Technical Director Fred Melo introduces Spring Cloud Stream, a framework for building data-driven microservices supporting smart applications. Fred will explore its architecture model, walk attendees through how to orchestrate data microservices into an advanced data pipelining solution, and illustrate with a live demo.

Recommended reading: Spring Cloud Stream: The New Event-driven Microservice Framework

Building Resilient and Evolutionary Data Microservices

Data usually outlives application code, and you have to be prepared to deploy streams that can cope with the evolution of data in motion. Building off of the previous session, Pivotal’s Vinicius Carvalho discusses how to build resiliency into data microservices with Spring Cloud Stream and Spring Cloud Data Flow. Carvalho, an Advisory Platform Architect at Pivotal, explores the role of a centralized schema repository and how to work with different data models and protocols to achieve schema evolution.

Recommended reading: Introduction to Spring Cloud Data Flow

Where Does Geode Fit in Modern System Architectures?

Today’s apps are no longer straightforward, database-backed web applications. Applications have become more sophisticated as they’ve had to scale, be reliable and fault-tolerant, and integrate with other systems. In this talk, Pivotal Software Engineer Eitan Suez explores one particular fit for Geode in the context of a CQRS architecture, and welcomes you to attend the session and contribute by sharing how you’ve put Geode to use in your organization.
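For readers new to CQRS: commands mutate a write-side model, while queries read only a separately maintained projection, and a data grid like Geode typically serves that read-side view. A toy, framework-free sketch of the shape (the class, flight numbers and method names are all invented; a Geode-backed system would keep the projection in a replicated region rather than a local map):

```java
import java.util.HashMap;
import java.util.Map;

// Toy CQRS shape: commands go through a handler that updates a read-side
// projection; queries only ever touch the projection, never the command path.
public class CqrsSketch {

    // Read-side projection: flight number -> destination.
    private static final Map<String, String> destinationByFlight = new HashMap<>();

    // Command side: in a real system this would validate the command,
    // persist an event, and update the projection asynchronously.
    public static void handleScheduleFlight(String flightNumber, String destination) {
        destinationByFlight.put(flightNumber, destination);
    }

    // Query side: a pure read against the projection.
    public static String findDestination(String flightNumber) {
        return destinationByFlight.get(flightNumber);
    }

    public static void main(String[] args) {
        handleScheduleFlight("WN100", "LAS");
        System.out.println(findDestination("WN100")); // prints LAS
    }
}
```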

Recommended reading: Introduction to Apache Geode

Data Science and Cloud Native Applications

Data Science-Powered Apps for Internet of Things

The Internet of Things (IoT) is the source of massive volumes of data. But exploiting that data, rather than just storing it, requires data-driven applications. In this session, Pivotal Data Scientist Chris Rawles describes approaches to developing IoT apps, including an interactive demo centered around classification of human activities. See the guts of IoT apps and learn about the tools required to develop these applications yourself.

Recommended watching: Scoring-as-a-Service to Operationalize Algorithms for Real-Time

Operationalizing Data Science Using Cloud Foundry

Unfortunately, PowerPoint is where many data science algorithms and models go to die. Not anymore! In this session, Alpine Data Labs Vice President of Engineering Lawrence Spracklen will demo how the joint solution between Alpine’s Chorus Platform and Cloud Foundry addresses this problem and closes the gap between data science insights and business value. He will demo creating a machine learning model leveraging data within MPP databases such as Apache HAWQ (incubating) or Greenplum Database integrated with the Chorus Platform, and then deploying it as a microservice within Cloud Foundry as a scoring engine. This turnkey solution shows attendees how easy it is to plug analytic insights into end-user applications that scale, without lengthy development cycles.

Recommended reading: Simplifying Data Science Workflows with Pivotal Cloud Foundry

Customer Use Cases and Stories

Design Tradeoffs in Distributed Systems—How Southwest Airlines Uses Geode

Southwest.com is the world’s largest airline website by number of visitors. Every day, Apache Geode (incubating) improves how Southwest Airlines schedules nearly 4,000 flights and serves over 500,000 passengers. It’s an essential component of Southwest’s ability to reduce flight delays and support future growth. In this talk, Southwest Airlines’ Brian Dunlap discusses how he and his team use Geode, a scale-out in-memory data grid, including how to approach design tradeoffs required when working with distributed systems and fast data.

Recommended watching: Southwest Airlines Takes Geode to Scale

Panel Discussion: Delivering Information in Context with Smart Applications

Great software companies aren’t just adept at storing, managing and analyzing data. They are also expert at leveraging data to fundamentally change the user experience, improve profitability and efficiency, and even develop entirely new business models. This requires developing smart applications that deliver information in context. In this panel discussion with Southwest Airlines, Humana and CIBC, we’ll explore how great software companies are upending traditional industries through the use of data and smart applications, discuss the implications for enterprises in more traditional industries, and provide a blueprint for developing smart applications to deliver information in context.

Recommended reading: Business Intelligence is Dead, Long Live Business Intelligence

Wall St. Derivative Risk Solutions Using Geode

In this talk, CIBC’s Andre Langevin and Pivotal’s Mike Stolz discuss how Apache Geode (incubating) forms the core of many Wall Street derivative risk solutions. By externalizing risk from trading systems, Geode-based solutions provide cross-product risk management at speeds suitable for automated hedging, while simultaneously eliminating the back office costs associated with traditional trading system based solutions.

Recommended reading: Open GemFire Takes On In-Memory Upstarts

With all this great content, consider joining us at SpringOne Platform. The conference takes place at the Aria in Las Vegas August 1-4, 2016. Register today and use my pivotal-kelly-300 discount code to save $300 off registration. See you in Vegas!