Overview
Data Engineer with Strong Migration Experience
100% Remote
6+ Month Contract
Required Exp: 5+ years
Salary Range: 90 KPM
Job Description: We are seeking a skilled and motivated Data Engineer to join our team. The ideal candidate will have more than 5 years of experience overall, including more than 2 years working specifically with Databricks. Databricks certification is preferred. dbt Cloud development experience and exposure to Data Vault modelling techniques are nice to have.
The client is migrating from Teradata / Informatica to Databricks / dbt. Any experience working on migrations to Databricks is preferred.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and ETL processes using Databricks.
• Quickly learn dbt Cloud, onboarding through training courses and support from team members.
• Utilize Data Vault methodology to design and build data warehouses.
• Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
• Optimize and tune data processing workflows for performance and scalability.
• Ensure data quality, integrity, and security across all data platforms.
• Own and document data pipelines and data lineage.
• Monitor and troubleshoot data pipeline issues to ensure timely and accurate data delivery.
Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Proven experience as a Data Engineer, with a strong focus on Databricks.
• Strong programming skills in SQL, Python, or other relevant languages.
• Experience with cloud platforms such as AWS, Azure, or GCP.
• Ability to work across multiple areas, such as ETL data pipelines, data modelling and design, and writing complex SQL queries.
• Excellent problem-solving skills and attention to detail.
• Effective communication and collaboration skills.
Preferred Qualifications:
• Hands-on experience with dbt Cloud for data modelling and transformations.
• Exposure to Data Vault methodology and data warehousing concepts.
• Experience with other data engineering tools and technologies (e.g., Apache Spark, Airflow).
• Knowledge of data governance and data security best practices.
• Familiarity with CI/CD pipelines and DevOps practices.