
Overview
Job Title: Data Engineer
Location: Ahmedabad
Mode: Onsite, 5 days/week
Client: Recruitment Smart
About Client:
Recruitment Smart is a Generative AI tech startup on a global mission to bring the AI revolution to Talent Acquisition, with operations in 70+ countries. Fusing the expertise of industry veterans from recruitment and tech, our solutions are powered by cutting-edge Generative AI technology. We are committed to addressing diverse customer needs and craft products finely attuned to the demands of enterprise, mid-size, and startup businesses. Our talent intelligence solutions integrate seamlessly across platforms at organisational scale.
Key Responsibilities:
● Design, develop, and maintain ETL pipelines: Efficiently extract, transform, and load (ETL) large datasets from various data sources into the data warehouse or data lake.
● Build and optimize data transformation workflows: Work closely with data scientists, analysts, and software engineers to design data pipelines that handle large volumes of structured and unstructured data.
● Data representation and UI integration: Implement data access and visualization solutions using technologies such as GraphQL to power a drag-and-drop interface, enabling non-technical users to access and manipulate data.
● Collaborate with cross-functional teams: Work with front-end engineers, backend developers, and product managers to ensure smooth integration of data systems into customer-facing applications.
● Monitor and maintain data pipeline performance: Continuously improve data flow, ensuring high availability, consistency, and integrity.
● Ensure data security and compliance: Implement security best practices and ensure compliance with data privacy regulations.
Key Qualifications:
● Bachelor’s degree in Computer Science, Information Technology, or a related field.
● 5+ years of experience working with ETL pipelines, data transformation, and processing large datasets.
● Proficiency in ETL tools (e.g., Apache Airflow, Talend, Informatica) and strong experience with SQL and NoSQL databases.
● Experience with cloud platforms (AWS, GCP, Azure) for building and scaling data pipelines.
● Experience with GraphQL or similar API-based query languages.
● Proficiency in data visualization tools and libraries.
● Familiarity with drag-and-drop UI/UX principles and ability to integrate data-driven interfaces for users.
● Strong programming skills in Python, Java, Scala, or similar.
● Solid understanding of data warehousing and big data technologies.
● Good problem-solving skills and the ability to think critically about data flow and architecture.
● Ability to work independently and as part of a team in a fast-paced environment.
Preferred Qualifications:
● Experience with machine learning or data science workflows.
● Familiarity with real-time data processing (e.g., Apache Kafka, Flink).
● Knowledge of CI/CD practices and DevOps.
Job Types: Full-time, Permanent
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Benefits:
- Health insurance
- Provident Fund
Schedule:
- Day shift
Supplemental Pay:
- Yearly bonus
Application Question(s):
- Please share your Current Notice period, Current CTC, Expected CTC
Experience:
- ETL: 5 years (Required)
- Data Engineering: 5 years (Required)
Work Location: In person