
Overview
Job Title: Consultant
Experience Required: 3–5 Years
Location: Bangalore, Hyderabad, Pune
Mode of work: Hybrid (3 days from office)
Job Summary:
We are seeking a skilled and motivated Data Engineer with 3 to 5 years of experience to
join our growing team. The ideal candidate will have hands-on expertise in building robust,
scalable data pipelines, working with modern data platforms, and enabling data-driven
decision-making across the organization. You’ll work closely with data scientists, analysts,
and engineering teams to build and maintain efficient data infrastructure and tooling.
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support analytics and product use cases.
- Collaborate with data analysts, scientists, and business stakeholders to gather requirements and translate them into data solutions.
- Manage data integrations from various internal and external data sources.
- Optimize data workflows for performance, cost-efficiency, and reliability.
- Build and maintain data models and data warehouses using industry best practices.
- Monitor, troubleshoot, and improve existing data pipelines.
- Implement data quality frameworks and ensure data governance standards are followed.
- Contribute to documentation, code reviews, and knowledge sharing within the team.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- 3–5 years of experience as a Data Engineer or in a similar data-focused role.
- Strong command of SQL and proficiency in Python.
- Sound software engineering practices.
- Experience with data pipeline orchestration tools such as Apache Airflow or equivalent.
- Hands-on experience with cloud data platforms (AWS/GCP/Azure) and services such as S3, Redshift, BigQuery, or Azure Data Lake.
- Experience with data warehousing concepts and tools such as Snowflake, Redshift, or Databricks.
- Familiarity with version control tools such as Git.
- Strong analytical and communication skills.
Preferred Qualifications:
- Exposure to big data tools and frameworks such as Spark, Hadoop, or Kafka.
- Experience with containerization (Docker/Kubernetes).
- Familiarity with CI/CD pipelines and automation in data engineering.
- Awareness of data security, privacy, and compliance principles.
Job Types: Full-time, Permanent
Pay: Up to ₹1,700,000 per year
Benefits:
- Health insurance
- Provident Fund
- Work from home
Schedule:
- Day shift
- Monday to Friday
Supplemental Pay:
- Commission pay
- Joining bonus
Work Location: In person