Pune, Maharashtra, India
Finance & Banking
Full-Time
UST
Overview
Role Description
Hiring Location: Chennai
Experience Range: 5+ years in Data Engineering or related fields
Job Summary
As a Senior Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on the AWS cloud platform to support our data-driven initiatives. Your expertise with ETL and data ingestion frameworks/tools will be critical to efficient data processing and integration.
Key Responsibilities
- Data Pipeline Management: Create and maintain data ingestion pipelines, models, and architectures required to support the growing Data Marketing business.
- Collaboration with Cross-functional Teams: Work with Product Management, business partners, and Data Science team members to understand and create solutions that meet their needs.
- Quality Assurance: Collaborate with Quality Engineers to ensure solutions meet business requirements and deliver high-quality results.
- Automation Implementation: Implement automation processes where possible to enhance efficiency and streamline workflows.
Required Skills
- Cloud Data Pipeline Frameworks: Strong experience with data pipeline management frameworks on major cloud platforms, such as AWS (preferred), Azure, or Google Cloud.
- ETL Tools Expertise: In-depth knowledge of ETL (Extract, Transform, Load) frameworks/tools such as Azure Data Factory, Google Cloud Data Fusion, and SSIS to facilitate data integration and ensure data quality.
- Proficiency in Python: Hands-on experience using Python for data processing scripts, data manipulation and transformation tasks, and broader data engineering solutions.
- Source Control & Agile Methodologies: Familiarity with source control practices (e.g., Bitbucket) and Scrum/Agile software development methodologies to collaborate effectively with cross-functional teams.
- AWS Ecosystem Knowledge: Strong understanding of AWS services, including SageMaker training and processing jobs, to leverage cloud services efficiently for data engineering workflows.
- Large-Scale Data Solutions: Experience building large-scale data solutions and working with high-volume datasets.
- Machine Learning Frameworks: Familiarity with scikit-learn, PyTorch, and Hugging Face for building and deploying transformer models, including sentence transformers, for data-driven applications.
- Communication Skills: Excellent verbal, written, and interpersonal communication skills to collaborate with teams and stakeholders effectively.
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
- At least 5 years of experience in Data Engineering, with a strong focus on ETL processes, Python development, and cloud-based data solutions.
Skills: PySpark, AWS Lambda/Glue, Python