Use Case Track  

How John Deere uses Flink to process millions of sensor measurements per second

 

The John Deere data platform receives and processes millions of sensor measurements per second from machines around the world. In this talk, we'll discuss the importance of stream data processing and how we are using it to improve machine automation and to help our customers improve the efficiency of their farm operations. We'll review the evolution of our streaming data platform, discuss lessons learned along the way, and share why we chose Flink to solve some of our most difficult problems. Finally, we'll walk through one of our Flink applications in detail and share techniques that we use to process data at massive scale.
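The abstract doesn't include code, but as a rough idea of the kind of keyed, windowed pipeline a sensor-measurement platform like this might run, here is a minimal Flink DataStream sketch in Java. The Measurement class, field names, window size, and peak-value aggregation are illustrative assumptions only, not the actual John Deere application; a real job would read from a source such as Kafka or Kinesis rather than fromElements.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

import java.time.Duration;

public class SensorAggregationSketch {

    // Hypothetical sensor reading; field names are assumptions for illustration.
    public static class Measurement {
        public String machineId;
        public double value;
        public long timestampMillis;

        public Measurement() {}

        public Measurement(String machineId, double value, long timestampMillis) {
            this.machineId = machineId;
            this.value = value;
            this.timestampMillis = timestampMillis;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // fromElements keeps the sketch self-contained; a production job would use
        // a streaming connector (Kafka, Kinesis, etc.) as its source.
        env.fromElements(new Measurement("machine-1", 42.0, System.currentTimeMillis()))
           .assignTimestampsAndWatermarks(
               WatermarkStrategy.<Measurement>forBoundedOutOfOrderness(Duration.ofSeconds(30))
                   .withTimestampAssigner((m, ts) -> m.timestampMillis))
           // Partition the stream so each machine's measurements are processed together.
           .keyBy(m -> m.machineId)
           // Aggregate per machine over one-minute event-time windows.
           .window(TumblingEventTimeWindows.of(Time.minutes(1)))
           .reduce((a, b) -> {
               // Keep the peak reading seen in the window (illustrative aggregation).
               Measurement out = new Measurement();
               out.machineId = a.machineId;
               out.value = Math.max(a.value, b.value);
               out.timestampMillis = Math.max(a.timestampMillis, b.timestampMillis);
               return out;
           })
           .print();

        env.execute("Sensor measurement aggregation (sketch)");
    }
}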

Authors

Greg Finch
John Deere

Greg Finch is a Senior Product Manager in the John Deere Intelligent Solutions Group. Working in the data platform team, he helps to define, design, and build systems that process sensor data from machines around the world. He enjoys using advanced technologies to help make powerful machines smarter, more reliable, and more efficient.

Adam Butler
John Deere

I have worked at John Deere for nearly 15 years and have over 20 years of experience developing software across a range of platforms and technologies for a variety of consumers. Currently, as a Product Lead in Deere's Data Lake initiatives, I work with streaming and big data processing architectures that allow our customers to view, analyze, and manipulate their data to optimize precision agriculture. Using Apache Flink, along with AWS technologies, our team currently processes over 4 billion complex geospatial records a day.