Five Sessions to Attend at Kafka Summit San Francisco

3 min read

Aug 25, 2017

Kafka Summit San Francisco brings together thousands of companies from across the globe that build their businesses on top of Apache Kafka, giving the developers responsible for this revolution a place to share their experiences on the journey.

This year, Kafka Summit San Francisco will offer even more breakout sessions led by Kafka subject matter experts and top Kafka customers. This mix of lectures, demonstrations, and guest speakers keeps attendees up to date on technical content, customer stories, and new launch announcements.

SingleStore is exhibiting in the Sponsor Expo, so stop by kiosk #109 to see a demo and speak with our subject matter experts.

Here are our top recommended sessions for you to attend at the event.

Efficient Schemas in Motion with Kafka and Schema Registry

10:30 am – 11:10 am, Pipelines Track
Pat Patterson, Community Champion, StreamSets Inc.

Apache Avro allows data to be self-describing, but carries an overhead when used with message queues, such as Apache Kafka. Confluent’s open source Schema Registry integrates with Kafka to allow Avro schemas to be passed ‘by reference’, minimizing overhead, and can be used with any application that uses Avro. Learn about Schema Registry, using it with Kafka, and leveraging it in your application.
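To make the "by reference" idea concrete, here is a minimal producer sketch using Confluent's Avro serializer. It assumes a broker on localhost:9092 and a Schema Registry on localhost:8081; the topic name, schema, and record are illustrative. The serializer registers the schema with the registry once and then sends only a compact schema ID with each message, instead of the full schema.

```java
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        // Confluent's Avro serializer registers each schema once, then
        // ships only a small schema ID per message instead of the schema.
        props.put("value.serializer",
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // A toy schema; real applications would load theirs from .avsc files.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"PageView\"," +
            "\"fields\":[{\"name\":\"page\",\"type\":\"string\"}]}");

        GenericRecord pageView = new GenericData.Record(schema);
        pageView.put("page", "/home");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("page-views", "user-42", pageView));
        }
    }
}
```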

Kafka Stream Processing for Everyone

12:10 pm – 12:50 pm, Streams Track
Nick Dearden, Director of Engineering, Confluent

The rapidly expanding world of stream processing can be confusing and daunting, with new concepts to learn (various types of time semantics, windowed aggregate changelogs, and so on) but also new frameworks and programming models. Multiply this by the operational complexities of multiple distributed systems and the learning curve is steep indeed. Come hear how to simplify your streaming life.
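As a taste of the concepts the talk covers, here is a hedged Kafka Streams sketch of one of them: a windowed aggregate that counts events per key in five-minute tumbling windows. The topic names are hypothetical, and the `ofSizeWithNoGrace` call assumes a recent Kafka Streams release.

```java
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowedCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-count-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Count events per key in five-minute tumbling windows; the result
        // is a continuously updated (windowed) changelog of counts.
        builder.<String, String>stream("page-views")
               .groupByKey()
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
               .count()
               // Flatten the windowed key into a plain string for output.
               .toStream((windowedKey, count) ->
                       windowedKey.key() + "@" + windowedKey.window().start())
               .to("page-view-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```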

From Scaling Nightmare to Stream Dream: Real-time Stream Processing at Scale

1:50 pm – 2:30 pm, Pipelines Track
Amy Boyle, Software Engineer, New Relic

On the events pipeline team at New Relic, Kafka is the thread that stitches our microservice architecture together. We receive billions of monitoring events an hour, which customers rely on us to alert on in real time. Facing more than tenfold growth in the system, learn how we avoided a costly scaling nightmare by switching to a streaming system based on Kafka. We follow a DevOps philosophy at New Relic, so I have a personal stake in how well our systems perform: if evaluation deadlines are missed, I lose sleep and customers lose trust. Without necessarily setting out to do so from the start, we’ve gone all in, using Kafka as the backbone of an event-driven pipeline, as a datastore, and for streaming updates to the system. Hear about what worked for us, what challenges we faced, and how we continue to scale our applications.

Kafka Connect Best Practices – Advice from the Field

2:40 pm – 3:20 pm, Pipelines Track
Randall Hauch, Engineer, Confluent

This talk will review the Kafka Connect Framework and discuss building data pipelines using the library of available Connectors. We’ll deploy several data integration pipelines and demonstrate:

  • best practices for configuring, managing, and tuning the connectors (see the sketch below)
  • tools to monitor data flow through the pipeline
  • using Kafka Streams applications to transform or enhance the data in flight
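As a rough idea of what configuring a connector involves, here is a sketch that registers a hypothetical file-source connector through Kafka Connect's REST API (port 8083 by default). The connector name, file path, and topic are illustrative; a real pipeline would pick a connector from the library the talk discusses.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnectorSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical file-source connector config; swap in the connector
        // class and settings for your own pipeline.
        String config = "{"
            + "\"name\": \"demo-file-source\","
            + "\"config\": {"
            +   "\"connector.class\": \"org.apache.kafka.connect.file.FileStreamSourceConnector\","
            +   "\"tasks.max\": \"1\","
            +   "\"file\": \"/tmp/demo.txt\","
            +   "\"topic\": \"demo-topic\""
            + "}}";

        // Kafka Connect exposes a REST API for creating and managing
        // connectors; POSTing to /connectors creates a new one.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```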

“One Could, But Should One?”: Streaming Data Applications on Docker

5:20 pm – 6:00 pm, Use Case Track
Nikki Thean, Staff Engineer, Etsy

Should you containerize your Kafka Streams or Kafka Connect apps? I’ll answer this popular question by describing the evolution of streaming platforms at Etsy, which we’ve run on both Docker and bare metal, and what we learned along the way. Attendees will learn about the benefits and drawbacks of each approach, plus some tips and best practices for running Kafka apps in production.

