Hyderabad, Telangana, India
Social Good & Community Development
Full-Time
Merck Group
Overview
Work Your Magic with us!
Ready to explore, break barriers, and discover more? We know you’ve got big plans – so do we! Our colleagues across the globe love innovating with science and technology to enrich people’s lives with our solutions in Healthcare, Life Science, and Electronics. Together, we dream big and are passionate about caring for our rich mix of people, customers, patients, and planet. That’s why we are always looking for curious minds that see themselves imagining the unimaginable with us.
Your Role
Our Practices unite individuals with similar skills and capabilities to forge a powerful expert community dedicated to setting the bar in our Data & AI expertise.
The Data Engineering Practice is committed to transforming raw data into actionable insights by seamlessly integrating information from diverse source systems and building robust data pipelines that ensure accuracy, accessibility, and timeliness across the organization.
In your role as a Senior Data Engineer in the Data Engineering Practice, you will design, develop, and maintain data pipelines using PySpark and Fivetran to automate data extraction, loading, and transformation processes. You will collaborate within a Product team, engaging with the Product Owner, Data Engineers, Business Analysts, and stakeholders to identify data requirements and user needs, and develop solutions that effectively address our business objectives.
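For illustration only, the sketch below shows the general shape of such a pipeline: reading a table already landed by a connector such as Fivetran, applying transformations in PySpark, and writing a curated output. The table names, columns, and schema are hypothetical placeholders, not part of the role.

```python
# Minimal illustrative sketch; "raw_sales" and "curated_sales" are assumed
# placeholder tables, not actual project assets.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate_sales").getOrCreate()

# Extract: read data already landed by an ingestion connector (e.g. Fivetran).
raw = spark.table("raw_sales")

# Transform: basic cleansing and typing so downstream consumers get
# accurate, consistent records.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
)

# Load: publish the curated output for analytics consumers.
curated.write.mode("overwrite").saveAsTable("curated_sales")
```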
As a Data Integration Engineer, you will be responsible for troubleshooting and resolving issues related to data extraction, loading, and transformation, including error logging and root cause analysis. You will work closely with Data Stewards, the Sector Data Office, Product Managers, and other stakeholders to understand data requirements and ensure pipelines meet business needs. You will document data pipeline processes, configurations, and best practices to ensure maintainability and knowledge sharing within the team, and monitor and optimize the performance of data pipelines to ensure efficient and reliable data transfer.
Ensure data integration processes comply with organizational data security and compliance policies.
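As one possible shape for the error-logging and monitoring work described above, here is a minimal sketch; the run_step helper and step names are illustrative assumptions, and real pipelines would route these logs to the team's monitoring stack.

```python
# Illustrative error-logging wrapper for pipeline steps; run_step is a
# hypothetical helper, not an existing internal utility.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline")

def run_step(name, func, *args, **kwargs):
    """Run one pipeline step, logging completion and failures for root cause analysis."""
    try:
        result = func(*args, **kwargs)
        logger.info("step %s completed", name)
        return result
    except Exception:
        # Capture the full traceback so failures can be traced to their cause.
        logger.exception("step %s failed", name)
        raise
```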
Who you are:
- Around 5-7 years of experience in Data Engineering, governance, and security.
- Proficiency in programming languages such as Python and SQL.
- Proficiency in PySpark and Git version control is a must.
- Experience with any data and analytics platforms like Palantir Foundry, Databricks, and others.
- Good knowledge of any one of the cloud services like AWS, Azure, or GCP.
- Good experience with tools for data modeling, warehousing, and databases (e.g., Python, Hadoop, SAS, Business Objects, Oracle, and Hive).
- Ability to write clean, efficient, and maintainable code.
- Experience in designing and developing products in an agile product development environment through proper modularization and packaging.
- Experience working with unstructured, semi-structured, and structured data.
Apply now and become a part of our diverse team!