Overview
We are seeking a highly motivated and skilled Data Engineer with 5-7 years of experience in Databricks, PySpark, and Tableau, along with strong expertise in data pipeline development and reporting. The ideal candidate is passionate about building scalable data solutions and delivering high-quality work. If you are a results-driven individual looking to make an impact, we encourage you to apply.
---
Key Responsibilities:
· Design, develop, and maintain scalable ETL/ELT data pipelines using Databricks and PySpark.
· Optimize and improve data processing workflows for performance and reliability.
· Work with structured and unstructured data to build efficient data models.
· Implement data governance, security controls, and engineering best practices.
· Collaborate with data analysts and business stakeholders to develop insightful Tableau dashboards and reports.
· Ensure data integrity, consistency, and accuracy across multiple systems.
· Troubleshoot and resolve issues related to data ingestion, transformation, and reporting.
· Work in an agile environment and actively contribute to sprint planning and reviews.
---
Required Skills & Experience:
· 5-7 years of experience in Data Engineering.
· Strong hands-on expertise with Databricks, PySpark, and Spark.
· Proficiency in writing complex SQL queries for data extraction and transformation.
· Experience with ETL/ELT pipelines, data warehousing, and big data technologies.
· Hands-on experience with Tableau (or other reporting tools) for dashboard creation and data visualization.
· Familiarity with cloud platforms (AWS, Azure, or GCP) and storage solutions.
· Experience with version control systems like Git.
· Strong problem-solving skills and the ability to work independently.
· Excellent communication and teamwork skills.
Job Type: Full-time
Pay: ₹1,400,000.00 - ₹1,500,000.00 per year
Schedule:
- Day shift
Experience:
- Data Engineering: 5 years (Required)
Location:
- Pune, Maharashtra (Preferred)
Work Location: In person