Overview
We are looking for an experienced Data Engineer to design, build, and maintain scalable data pipelines and architectures. The ideal candidate will work closely with data analytics consultants to ensure efficient data processing and management. If you have a passion for big data, automation, and cloud technologies, we’d love to hear from you.
Roles and Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes to support data ingestion, transformation, and storage.
- Build and optimize data warehouses, lakes, and real-time streaming solutions.
- Ensure data integrity, quality, efficiency, consistency, and security across platforms.
- Collaborate with cross-functional teams including data scientists, analysts, and software engineers to understand data requirements and deliver high-quality data solutions.
- Work with cloud platforms (AWS, Azure, GCP) to optimize data storage and processing.
- Implement data governance and security best practices to ensure data privacy and compliance with regulatory requirements.
- Develop and maintain APIs for data access and integration with external systems.
- Automate data workflows and implement best practices for DevOps and CI/CD.
- Architect and optimize data models and schemas to support analytical and operational use cases.
- Monitor and troubleshoot data pipelines to ensure reliability, performance, and scalability.
- Stay abreast of emerging technologies and best practices in data engineering and contribute to continuous improvement initiatives.
- Provide mentoring and technical assistance to junior Data Engineers and other team members, ensuring knowledge sharing and skill development.
- Directly engage with potential clients to understand their data challenges, propose scalable data solutions, and support the pre-sales process by showcasing the business value of data-driven insights.
Required Skills & Qualifications:
- B.Tech/M.Tech or any relevant educational background
- 2+ years of experience as a Data Engineer
- Proficiency in SQL, Python, Scala, or Java for data processing.
- Experience with ETL/ELT tools.
- Hands-on experience with cloud data platforms (Amazon Redshift, Google BigQuery, Azure Synapse, etc.).
- Knowledge of distributed systems and big data technologies.
- Strong understanding of data modeling, database design, and performance optimization.
- Familiarity with CI/CD pipelines and infrastructure as code (Terraform, Docker, Kubernetes) is a plus.
- Above 60% aggregate marks or CGPA of 6.5 and above
- No active backlogs. Previous backlogs must have been cleared within the normal course completion period.
Job Type: Full-time
Pay: ₹700,000.00 - ₹800,000.00 per year
Benefits:
- Food provided
- Health insurance
- Leave encashment
- Paid time off
- Provident Fund
- Work from home
Supplemental Pay:
- Performance bonus
Experience:
- Data Engineer: 2 years (Required)
Work Location: In person
Application Deadline: 16/03/2025