Bangalore, Karnataka, India
Manufacturing & Industrial
Full-Time
Gainwell Technologies
Summary
We’re looking for a dynamic Data Engineer with Apache Spark and AWS experience to join the data analytics team at Gainwell Technologies. You will have the opportunity to work as part of a cross-functional team to define, design, and deploy frameworks for data collection, normalization, transformation, storage, and reporting on AWS to support the analytic missions of Gainwell and its clients.
Your role in our mission
- Design, develop, and deploy data pipelines, including ETL processes for ingesting, processing, and delivering data using the Apache Spark framework (a minimal illustrative sketch follows this list).
- Monitor, manage, validate and test data extraction, movement, transformation, loading, normalization, cleansing and updating processes. Build complex databases that are useful, accessible, safe and secure.
- Coordinate with users to understand their data needs and deliver data with a focus on data quality, data reuse, consistency, security, and regulatory compliance.
- Collaborate with team members on data models and schemas in our data warehouse.
- Collaborate with team members on documenting source-to-target mappings.
- Conceptualize and visualize data frameworks.
- Communicate effectively with various internal and external stakeholders.
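To give a concrete sense of the pipeline work described above, here is a minimal PySpark ETL sketch. The bucket paths, column names, and cleansing rules are hypothetical placeholders for illustration, not Gainwell specifics.

```python
# Minimal ETL sketch: extract raw CSV from S3, normalize and cleanse,
# then load as partitioned Parquet. All paths and columns are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw source data (hypothetical bucket/prefix).
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/claims/")

# Transform: normalize types, de-duplicate, and drop unusable rows.
cleaned = (
    raw.dropDuplicates(["claim_id"])                       # de-duplicate on key
       .withColumn("claim_date", F.to_date("claim_date"))  # normalize types
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())                # cleanse bad records
)

# Load: write partitioned Parquet to the curated zone.
(cleaned.write
        .mode("overwrite")
        .partitionBy("claim_date")
        .parquet("s3://example-curated-bucket/claims/"))
```

Partitioning the output by a date column, as here, is a common choice because downstream reporting queries typically filter by date and can prune partitions.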
What we're looking for
- Bachelor's degree in computer science or a related field
- 3 years of experience working with big data technologies on AWS/Azure/GCP
- 2 years of experience with the Apache Spark/Databricks framework (Python/Scala)
- Experience working with different database structures (e.g., transaction-based vs. data warehouse; see the sketch after this list)
- Databricks and AWS developer/architect certifications are a big plus
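To make the transaction-based vs. data warehouse distinction concrete, here is a hedged sketch that reshapes normalized transactional (OLTP) tables into a single denormalized fact table suited to analytics. All table and column names are hypothetical illustrations.

```python
# Sketch: flattening normalized transactional tables into a
# denormalized warehouse fact table. Names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("oltp-to-dwh").getOrCreate()

# Transaction-based (OLTP) side: narrow, normalized tables.
orders    = spark.table("oltp.orders")       # order_id, customer_id, order_ts
customers = spark.table("oltp.customers")    # customer_id, region
items     = spark.table("oltp.order_items")  # order_id, qty, unit_price

# Warehouse side: one wide fact table, pre-joined and aggregated so
# analytical queries avoid repeating the same joins.
fact_orders = (
    items.groupBy("order_id")
         .agg(F.sum(F.col("qty") * F.col("unit_price")).alias("order_total"))
         .join(orders, "order_id")
         .join(customers, "customer_id")
         .select("order_id", "customer_id", "region", "order_ts", "order_total")
)

fact_orders.write.mode("overwrite").saveAsTable("dwh.fact_orders")
```

The OLTP schema optimizes for small, consistent writes; the warehouse fact table trades storage for fast, join-free analytical reads.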