Ensocode
Bangalore, Karnataka, India
Information Technology
₹600,000 – ₹1,800,000 per year
Overview
We are seeking a skilled and motivated AWS Data Engineer with 4+ years of experience to join our dynamic team. In this role, you will be responsible for designing, implementing, and maintaining data pipelines on the AWS platform. You will work with large data sets, leverage your expertise in Python, PySpark, and SQL, and collaborate with data scientists and business analysts to build scalable data solutions.
Key Responsibilities:
- Design, develop, and maintain data pipelines on AWS services (such as AWS Glue, S3, Lambda, Redshift, Athena, and RDS).
- Write and optimize complex SQL queries to process large data sets and improve data retrieval performance.
- Work with Python and PySpark for data transformation and ETL (Extract, Transform, Load) processes.
- Collaborate with data architects, data scientists, and analysts to understand business requirements and translate them into technical solutions.
- Ensure data quality, accuracy, and integrity by implementing automated testing, monitoring, and validation processes.
- Perform data profiling and data analysis to identify trends, anomalies, and performance bottlenecks.
- Implement AWS security best practices to protect sensitive data and ensure compliance.
- Create and maintain documentation for data pipelines, workflows, and processes.
- Optimize data storage and retrieval using various AWS services like S3, DynamoDB, and Redshift to ensure cost-effective and scalable data solutions.
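For illustration, the extract–transform–load pattern central to the responsibilities above can be sketched in plain Python. This is a framework-free, stdlib-only sketch with hypothetical order records; in practice this logic would typically run as a PySpark job on AWS Glue, reading from and writing to S3 or Redshift rather than in-memory data.

```python
# Minimal ETL sketch (hypothetical data; stdlib only).
import csv
import io

RAW_CSV = """order_id,amount,currency
1001,250.00,INR
1002,,INR
1003,99.50,INR
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts (stand-in for an S3 read)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: enforce a data-quality rule and cast types."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # drop incomplete records
        clean.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "currency": row["currency"],
        })
    return clean

def load(rows):
    """Load: return the cleaned rows (stand-in for a Redshift/S3 write)."""
    return rows

result = load(transform(extract(RAW_CSV)))
```

The same three-stage shape carries over to PySpark, where `extract` becomes a `spark.read` call, `transform` a chain of DataFrame operations, and `load` a write to the target store.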
Required Skills & Qualifications:
- 4+ years of experience in data engineering or a similar role.
- Strong proficiency in Python for data engineering tasks.
- Hands-on experience with PySpark for distributed data processing and transformation.
- Expertise in SQL for data querying, optimization, and troubleshooting.
- Strong experience with AWS data services, including but not limited to AWS Glue, S3, Redshift, Lambda, Athena, and RDS.
- Knowledge of ETL processes, data pipelines, and data integration techniques.
- Familiarity with cloud architecture, security, and cost management on AWS.
- Excellent problem-solving skills and ability to troubleshoot data pipeline issues.
- Strong communication skills to work with both technical and non-technical stakeholders.
Preferred Skills:
- Experience with Apache Kafka for stream processing.
- Familiarity with DevOps practices and tools (e.g., Terraform, Jenkins).
- Experience in data visualization or reporting tools such as Tableau, Power BI, or similar.
- Knowledge of big data technologies like Hadoop or Spark.
- Exposure to machine learning workflows or integration with ML models.
Job Type: Full-time
Pay: ₹600,000.00 - ₹1,800,000.00 per year
Benefits:
- Provident Fund
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
Application Question(s):
- What is your notice period?
Experience:
- Total work experience: 4 years (preferred)
- Data engineering: 4 years (preferred)
Work Location: In person