
Overview
Location: Bengaluru, Pune
Job code: 101183
Posted on: Apr 08, 2025
About Us:
AceNet Consulting is a fast-growing global business and technology consulting firm specializing in business strategy, digital transformation, technology consulting, product development, start-up advisory and fund-raising services to our global clients across banking & financial services, healthcare, supply chain & logistics, consumer retail, manufacturing, eGovernance and other industry sectors.
We are looking for hungry, highly skilled and motivated individuals to join our dynamic team. If you’re passionate about technology and thrive in a fast-paced environment, we want to hear from you.
Job Summary:
We are looking for a skilled Python Developer with experience building large-scale data processing pipelines using Python- and Spark-based frameworks. The role involves working across the Spark ecosystem, including Spark SQL, DataFrames, Datasets, and Streaming, and requires strong SQL skills.
Key Responsibilities:
- Design and development of highly optimized and scalable ETL applications using Python and Spark.
- Undertaking end-to-end project delivery (from inception to post-implementation support), including review and finalization of business requirements, creation of functional specifications and/or system designs, and ensuring that the end solution meets business needs and expectations.
- Development of new transformation processes to load data from source to target, or performance tuning of existing ETL code (mappings, sessions).
- Analysis of existing designs and interfaces and applying design modifications or enhancements.
- Coding and documenting data processing scripts and stored procedures.
- Providing business insights and analysis findings for ad-hoc data requests.
- Testing software components and complete solutions (including debugging and troubleshooting) and preparing migration documentation.
- Providing reporting-line transparency through periodic updates on project or task status.
Requirements:
- Solid understanding of data engineering concepts and best practices.
- Good understanding of dimensional modelling and the Hive/Hadoop ecosystem.
- Excellent understanding of job scheduling mechanisms such as Autosys and TWS.
- Excellent problem-solving and analytical skills, and strong verbal and written communication skills.
- Experience optimizing large data loads.
- Excellent understanding of the Unix ecosystem and experience writing shell scripts.
- Exposure to an Agile development environment is a plus.
- Strong understanding of the data warehousing domain.
- Ability to architect an ETL solution and a data conversion strategy.
Why Join Us:
- Opportunities to work on transformative projects, cutting-edge technology and innovative solutions with leading global firms across industry sectors.
- Continuous investment in employee growth and professional development, with a strong focus on upskilling and reskilling.
- Competitive compensation & benefits, ESOPs and international assignments.
- Supportive environment with healthy work-life balance and a focus on employee well-being.
- Open culture that values diverse perspectives, encourages transparent communication and rewards contributions.
How to Apply:
If you are interested in joining our team and meet the qualifications listed above, please apply and submit your resume highlighting why you are the ideal candidate for this position.