
Overview
Role Description
We are seeking a Senior Data Engineer.
This role involves designing, building, and optimizing scalable data platforms using AWS and Databricks. The engineer will collaborate with data scientists, analysts, and business stakeholders to support analytics, machine learning, and regulatory reporting. Expertise in Python, Spark, SQL, and cloud-based data engineering best practices is essential. Strong experience with ETL, data lakes, and security compliance in a highly regulated environment, such as the pharmaceutical industry, is highly desirable.
Years of Experience
5-7 years of experience
Skills Required
- Cloud Data Engineering – Strong experience with AWS services (S3, Lambda, Glue, Redshift, EMR) and Databricks for data processing, transformation, and analytics.
- Big Data Processing – Expertise in Apache Spark and SQL for handling large-scale datasets and optimizing data pipelines.
- ETL & Data Integration – Designing and implementing ETL/ELT workflows to integrate structured and unstructured data from diverse sources.
- Programming & Automation – Proficiency in Python (or Scala) for data engineering, automation, and orchestration.
- Data Architecture & Governance – Experience with data modeling, data lakes, warehouse design, and security best practices in a regulated industry.
- Performance Optimization – Tuning query performance, managing compute resources, and optimizing Databricks clusters for cost and efficiency.
- Regulatory Compliance – Understanding of GxP, HIPAA, or other biopharmaceutical regulatory requirements for data security and privacy.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹2,000,000.00 per year
Benefits:
- Health insurance
- Work from home
Schedule:
- Day shift
Experience:
- Data Engineer: 4 years (Required)
- AWS: 2 years (Required)
- Big Data: 2 years (Required)
- Databricks: 2 years (Preferred)
- ETL: 2 years (Required)
- Python (or Scala): 1 year (Required)
Work Location: Remote
Expected Start Date: 01/04/2025