Hyderabad, Telangana, India
Information Technology
Full-Time
UST
Overview
Role Description
UST is looking for a Senior Data Engineer with expertise in Apache Spark, PySpark, and Airflow to design, develop, and optimize scalable data pipelines. The ideal candidate should have strong SQL skills, experience working with NoSQL databases, and a deep understanding of data workflows and orchestration. This role requires hands-on development, collaboration with data teams, and monitoring/troubleshooting of data pipelines to ensure data quality and performance.
Skills
Spark, PySpark, Airflow, SQL
Responsibilities
- Design, develop, and maintain scalable data pipelines using Apache Spark and PySpark for processing large datasets.
- Implement data workflows and orchestration using Apache Airflow to ensure reliable and efficient pipeline execution.
- Optimize data storage and retrieval processes by leveraging strong SQL skills and understanding NoSQL databases (e.g., MongoDB, Cassandra).
- Collaborate with data scientists and analysts to understand data requirements and implement scalable solutions.
- Monitor and troubleshoot data pipelines to maintain data quality, reliability, and performance.
- Design and implement data models and schemas to support business requirements and ensure data consistency.
- Evaluate and recommend new technologies and tools to enhance data processing and analytics.
- Document technical designs, procedures, and operational processes related to data engineering.
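As a rough illustration of the orchestration work described above, the sketch below shows a minimal Airflow DAG (Airflow 2.x API assumed) that chains PySpark stages submitted via spark-submit. The DAG id, script paths, and schedule are hypothetical placeholders, not part of this role's actual codebase:

```python
# Hypothetical sketch of a daily pipeline DAG; assumes Airflow 2.x.
# Script paths (jobs/*.py) and the dag_id are illustrative placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,                        # retry transient failures
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_sales_pipeline",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # 'schedule_interval' on Airflow < 2.4
    catchup=False,
    default_args=default_args,
) as dag:
    # Each stage submits a PySpark job; {{ ds }} is Airflow's execution date.
    extract = BashOperator(
        task_id="extract",
        bash_command="spark-submit jobs/extract.py --date {{ ds }}",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit jobs/transform.py --date {{ ds }}",
    )
    load = BashOperator(
        task_id="load",
        bash_command="spark-submit jobs/load.py --date {{ ds }}",
    )

    extract >> transform >> load         # linear dependency chain
```

The `>>` operator declares task ordering, so Airflow retries and backfills each stage independently while preserving the extract → transform → load dependency.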
Requirements
- 5+ years of experience in Data Engineering.
- Strong hands-on experience with Apache Spark and PySpark for large-scale data processing.
- Expertise in Apache Airflow for workflow automation and orchestration.
- Strong SQL skills for optimizing queries and managing structured data.
- Experience with NoSQL databases such as MongoDB, Cassandra, or similar.
- Proven ability to troubleshoot and optimize data pipelines for performance and reliability.
- Experience with cloud data platforms (AWS, Azure, GCP).
- Knowledge of Big Data technologies (Hadoop, Hive).
- Understanding of CI/CD pipelines for data engineering.
- Familiarity with containerization (Docker, Kubernetes).
- Experience with real-time data processing frameworks like Kafka, Flink.
Experience Range: 5 to 12 years
Locations: Trivandrum, Chennai, Kochi, Bangalore, Pune, Noida, Hyderabad