Overview
This role is for one of Weekday's clients.
We are looking for a Senior Data Engineer to design, build, and scale our data platform, enabling efficient data processing and analytics across the organization. In this role, you will develop and optimize batch and real-time data pipelines, ensuring high-performance data infrastructure that drives business insights. This is an excellent opportunity for someone passionate about modern data technologies, cloud-native environments, and scalable data systems.
Key Responsibilities:
Data Platform Development:
- Design, develop, and maintain a scalable and reliable data platform that supports batch and real-time data processing.
- Optimize the platform for cost efficiency, performance, and scalability to meet growing business needs.
- Build and manage ETL/ELT pipelines for data ingestion, transformation, and distribution across teams.
- Ensure pipelines are resilient, high-performing, and scalable.
- Implement low-latency, high-throughput streaming solutions using technologies like Apache Kafka and Apache Flink.
- Design and optimize data lakes and data warehouses for efficient data retrieval and cost management in cloud environments.
- Work with modern table formats like Hudi, Iceberg, or Delta Lake.
- Implement monitoring solutions and data quality frameworks to ensure data accuracy, reliability, and security.
- Identify and resolve performance bottlenecks in data pipelines.
- Work closely with data scientists, analysts, and product teams to develop scalable data solutions that meet evolving business needs.
- Implement best practices for data security, compliance, and privacy to ensure data integrity across the platform.
- Provide mentorship and technical guidance to junior engineers.
- Drive best practices in system design, coding, and deployment.
- Automate data infrastructure management using Terraform or Ansible for consistency and efficiency.
Requirements:
- 6+ years of experience in data engineering with a focus on scalable data platforms.
- Strong experience designing and implementing batch and real-time data pipelines in cloud environments (AWS preferred).
- Expertise in distributed data processing frameworks (Apache Spark, Presto) and streaming technologies (Kafka, Flink).
- Hands-on experience with data lake architectures and modern table formats (Hudi, Iceberg, or Delta Lake).
- Strong programming skills in Python and/or Java, with experience building scalable and maintainable data solutions.
- Proficiency in SQL and/or NoSQL databases.
- Experience managing cloud infrastructure using Infrastructure as Code (Terraform, Ansible).
- Performance tuning and optimization of large-scale data systems.
- Strong problem-solving skills with the ability to prioritize tasks and adapt to evolving project needs.
- Excellent communication and leadership skills, with a track record of mentoring junior engineers.
- Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.
- Experience with DataOps and CI/CD pipelines for data engineering.
- Hands-on experience building data platforms from scratch.
- Familiarity with Kubernetes is a plus.
- Startup experience, with the ability to innovate and adapt in a fast-paced environment.