Overview
KEY RESPONSIBILITIES:
· Understand the factories, manufacturing processes, data availability, and avenues for improvement
· Brainstorm, together with engineering, manufacturing, and quality teams, problems that can be solved using the data acquired in the data lake platform
· Define the data required to create a solution and work with connectivity engineers and users to collect it
· Create and maintain optimal data pipeline architecture.
· Assemble large, complex data sets that meet functional / non-functional business requirements.
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability
· Perform data preparation and deep dives, helping engineering, process, and quality teams understand process and machine behavior more closely using the available data
· Deploy and monitor solutions
· Work with data and analytics experts to strive for greater functionality in our data systems.
· Work together with Data Architects and data modeling teams.
SKILLS / COMPETENCIES
· Good knowledge of the business vertical, with prior experience solving a variety of use cases in manufacturing or a similar industry
· Ability to apply cross-industry learning to use cases aimed at improving the manufacturing process
· Problem Scoping/Definition Skills:
§ Experience in problem scoping, solving, and quantification
§ Strong analytical skills related to working with unstructured datasets
§ Build processes supporting data transformation, data structures, metadata, dependency and workload management.
§ Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
§ Ability to foresee and identify all the data required to solve the problem
· Data Wrangling Skills:
§ Strong skills in data mining and data wrangling techniques for creating the required analytical datasets
§ Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
§ An adaptive mindset to improvise on data challenges and employ techniques that drive the desired outcomes
· Programming Skills:
§ Experience with big data tools: Spark, Delta, CDC, NiFi, Kafka, etc.
§ Experience with relational SQL and NoSQL databases and query languages, including Oracle, Hive, and Spark SQL
§ Experience with object-oriented languages: Scala, Java, C++, etc.
· Visualization Skills
§ Know-how of visualization tools such as Power BI and Tableau
§ Good storytelling skills to present data in a simple and meaningful manner
· Data Engineering Skills
§ Strong skills in data analysis techniques to generate findings and insights through exploratory data analysis
§ Good understanding of how to transform and connect data of various types and forms
§ Great numerical and analytical skills
§ Identify opportunities for data acquisition
§ Explore ways to enhance data quality and reliability
§ Build algorithms and prototypes
§ Reformulate existing frameworks to optimize their functioning
§ Good understanding of optimization techniques to make systems meet performance requirements
Job Types: Full-time, Permanent
Pay: ₹1,800,000.00 - ₹2,000,000.00 per year
Schedule:
- Morning shift
Experience:
- Total work: 10 years (Required)
- Scala: 4 years (Required)
- Delta: 4 years (Required)
Work Location: Remote