Chennai, Tamil Nadu, India
Information Technology
Full-Time
Uber
Overview
About The Role
Uber's business relies on insights derived from real-time data through streaming analytics. Our team has built two streaming platforms, AthenaX and Flink as a Service (FaaS), on top of Apache Flink. They power many core trip-flow applications with four-nines (99.99%) availability and sub-second latency, such as surge pricing for the marketplace, push notifications to the apps, and ETA calculations for maps.
Our team has grown a lot in the last few years (currently 2K+ YARN nodes, 2,800+ pipelines). As part of the Athena team, you will design, implement, optimize, and manage large-scale streaming computing infrastructure. You will work on problems such as stream/batch unification, a common DSL for streaming analytics, streaming ingestion for the data lake, and minimum-downtime support, all of which impact multiple business use cases at Uber scale. You will also have the opportunity to collaborate with the open-source communities for Flink, Presto, Pinot, and Kafka.
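For illustration only, here is a minimal sketch (not Uber's code) of the kind of low-latency Flink DataStream job such a platform runs: it reads trip events from a hypothetical Kafka topic and emits per-city counts over short windows. The topic name, brokers, and record layout are assumptions.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class TripEventCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical topic and brokers; only the connector API itself is real.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("trip-events")
                .setGroupId("trip-event-count")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "trip-events")
                // Assume each record is "cityId,eventType"; emit (cityId, 1).
                .map(line -> Tuple2.of(line.split(",")[0], 1L))
                .returns(Types.TUPLE(Types.STRING, Types.LONG))
                .keyBy(t -> t.f0)
                // Count events per city over 10-second processing-time windows.
                .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
                .sum(1)
                .print();

        env.execute("trip-event-count");
    }
}

Production pipelines at this scale would typically add event-time watermarks, checkpointing, and exactly-once sinks; the sketch omits them to stay short.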
What the Candidate Will Do
Learn the internals of big data infrastructure at Uber scale.
Deep-dive into the internals of Apache Flink and improve the platform's usability and efficiency by building Presto SQL on top of Flink, optimizing the runtime, ensuring data delivery completeness, and unifying streaming and batch processing on top of Flink (see the SQL sketch after this list).
Design and implement distributed algorithms for streaming engine reliability to achieve zero downtime for critical use cases.
Work with multiple partner teams within and outside of Uber and build cross-functional solutions in a collaborative work environment.
Be actively involved in the Flink open source community by making code contributions, giving talks, and participating in community activities.
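To make the SQL-on-Flink bullet above concrete, here is a minimal sketch of a streaming SQL query using Flink's Table API. The trip_events table, its schema, and the connector options are illustrative assumptions, not Uber's actual definitions.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SurgeSignalSql {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical Kafka-backed source table; topic, brokers, and schema are illustrative.
        tEnv.executeSql(
                "CREATE TABLE trip_events (" +
                "  city_id STRING," +
                "  event_type STRING," +
                "  event_time TIMESTAMP(3)," +
                "  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'trip-events'," +
                "  'properties.bootstrap.servers' = 'kafka:9092'," +
                "  'format' = 'json'" +
                ")");

        // Per-city demand counts over 1-minute event-time tumbling windows.
        tEnv.executeSql(
                "SELECT city_id, window_start, COUNT(*) AS demand " +
                "FROM TABLE(TUMBLE(TABLE trip_events, DESCRIPTOR(event_time), INTERVAL '1' MINUTE)) " +
                "GROUP BY city_id, window_start, window_end")
            .print();
    }
}

Running the same query text in batch mode (EnvironmentSettings.inBatchMode()) over a bounded table is the simplest expression of the stream/batch unification goal described above.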
Basic Qualifications
5+ years of total experience.
Solid understanding of Java for backend / systems software development.
Preferred Qualifications
2+ years of experience building large-scale distributed software systems.
Experience managing stream processing systems with a strong availability SLA.
Experience working with Apache Flink, Apache Samza/Storm, Apache Calcite, Apache Spark, or similar analytics technologies.
Experience working with SQL compilers and SQL plan / runtime optimization.
Experience working with large-scale distributed systems such as HDFS, YARN, and Kubernetes.