Gurugram, Haryana, India
Information Technology
Other
Weekday

Overview
This role is for one of Weekday's clients.
We are looking for a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will work with both structured and unstructured data, ensuring seamless data ingestion, transformation, and storage to support analytics, machine learning, and business intelligence initiatives. You will collaborate closely with data scientists, analysts, and software engineers to develop high-performance data solutions.
Key Responsibilities:
- Design, develop, and manage ETL/ELT pipelines for processing large-scale datasets efficiently.
- Work with SQL and NoSQL databases to ensure optimized data storage and retrieval.
- Develop and maintain data lakes and data warehouses using cloud-based solutions such as AWS, GCP, or Azure.
- Implement best practices for data quality, integrity, and governance, including validation and monitoring.
- Optimize data workflows and tune performance to improve query speed and overall system efficiency.
- Collaborate with cross-functional teams to integrate data solutions into various applications and services.
- Implement real-time and batch data processing using tools like Apache Spark, Kafka, or Flink.
- Work with cloud-based data services (BigQuery, Redshift, Snowflake) to build scalable and cost-effective solutions.
- Automate data pipeline deployment using CI/CD and infrastructure-as-code tools.
- Monitor and troubleshoot data pipeline issues to minimize downtime and ensure reliability.
Requirements:
- 3+ years of experience in data engineering, data architecture, or related fields.
- Strong proficiency in Python, SQL, and scripting for data processing.
- Hands-on experience with big data frameworks such as Apache Spark, Hadoop, or Flink.
- Experience with ETL and workflow orchestration tools such as Apache Airflow, dbt, or Talend.
- Knowledge of cloud platforms (AWS, GCP, Azure) and their data services (Redshift, BigQuery, Snowflake, etc.).
- Familiarity with data modeling, indexing, and query optimization techniques.
- Experience with real-time data streaming using Kafka, Kinesis, or Pub/Sub.
- Proficiency in Docker and Kubernetes for deploying data pipelines.
- Strong problem-solving and analytical skills, with a focus on performance optimization.
- Understanding of data security, governance, and compliance best practices.
- Experience integrating machine learning workflows into data engineering pipelines.
- Knowledge of Infrastructure-as-Code (IaC) tools like Terraform or CloudFormation.
- Familiarity with graph databases and time-series databases.