Hyderabad, Telangana, India
Information Technology
Full-Time
Virtusa
Responsibilities
Design, develop, and maintain efficient ETL/ELT data pipelines using Python and relevant data processing frameworks.
Optimize and tune PostgreSQL queries for maximum performance, applying indexing, partitioning, and query-plan analysis.
Build and manage data pipelines in AWS, ensuring scalability and reliability.
Interpret large datasets and analyze results to provide actionable insights.
Collaborate with cross-functional teams to understand data needs and implement solutions accordingly.
Ensure data integrity, security, and compliance throughout all stages of the ETL process.
Troubleshoot and resolve pipeline issues, ensuring high availability and performance.
Document all data pipeline workflows, processes, and maintenance procedures.
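The ETL/ELT responsibilities above can be illustrated with a minimal sketch. This is a hypothetical example, not part of the role description: the sample data, table, and function names are invented, and sqlite3 stands in for PostgreSQL so the snippet stays self-contained (in practice a driver such as psycopg2 or SQLAlchemy would be used).

```python
# Minimal extract -> transform -> load sketch (illustrative only).
# sqlite3 stands in for PostgreSQL to keep the example self-contained.
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount
1,10.50
2,not_a_number
3,7.25
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: keep only rows with a valid numeric amount."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["order_id"]), float(row["amount"])))
        except ValueError:
            # In a real pipeline, bad rows would go to a dead-letter store.
            continue
    return clean

def load(rows, conn):
    """Load: idempotent upsert into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER PRIMARY KEY, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
```

The upsert (`INSERT OR REPLACE`) keeps the load step idempotent, so re-running the pipeline after a partial failure does not duplicate rows — one common way to meet the reliability and data-integrity goals listed above.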
Requirements
Strong proficiency in PostgreSQL with a deep understanding of query optimization, indexing, and partitioning.
Solid experience in developing ETL/ELT pipelines using Python and data processing frameworks.
Extensive experience with AWS services for building and maintaining data pipelines.
Strong analytical skills with the ability to interpret complex datasets and generate insights quickly.
Familiarity with data integration tools, cloud technologies, and automation frameworks.
Excellent communication skills, with the ability to collaborate with both technical and non-technical teams.