
Overview
Business overview:
Galytix (GX) is delivering on the promise of AI.
GX has built specialised knowledge AI assistants for the banking and insurance industries. Our assistants are fed by sector-specific data and knowledge and are easily adaptable through ontology layers to reflect institution-specific rules.
GX AI assistants are designed for Individual Investors and for Credit and Claims professionals, and are already in use at global financial institutions. Proven, trusted, and non-hallucinating, our assistants empower financial professionals and deliver 10x improvements by supporting them in their day-to-day tasks.
Role overview:
As a Sr. Data Engineer, you’ll drive the design, development, and optimization of our data engineering pipelines and services with a strong software engineering mindset.
You'll also play a pivotal role in fostering growth and collaboration within the team, mentoring junior colleagues, and refining processes to help us maintain excellence.
If you’re excited to make a lasting impact both technically and as a team leader, we’d love to meet you.
Desired skills:
- A university degree from a reputable university, with a record of academic excellence in Mathematics, Computer Science, Engineering, Physics, or a similar field.
- 8+ years of relevant experience in data engineering, data warehousing, ETL, automation, big data, and cloud technologies.
- Ability to write clean, scalable, and maintainable code in Python, with a good understanding of software engineering concepts and patterns. Proficiency in other languages such as Scala, Java, C#, or C++ is an advantage.
- Proven record of building and maintaining data pipelines deployed in at least one of the big 3 cloud ML stacks (AWS, Azure, GCP).
- Hands-on experience with:
  - Open-source ETL and data pipeline tools such as Apache NiFi.
  - Large-scale/big data technologies such as Hadoop, Spark, Hive, Impala, PrestoDB, and Kafka.
  - Workflow orchestration tools such as Apache Airflow.
  - Containerisation with Docker and deployment on Kubernetes.
  - NoSQL and graph databases.
- Unix server administration and shell scripting.
- Experience in building scalable data pipelines for highly unstructured data.
- Experience in building data warehouse (DWH) and data lake architectures.
- Experience in working in a cross-functional team with software engineers, data scientists, and machine learning engineers.
- Experience in working with or leading an offshore team.
- Proven record of building data science environments and deploying ML solutions in at least one of the big 3 cloud ML stacks (Azure/AWS/GCP) and on Kubernetes clusters.
- Excellent written and verbal command of English.
- Strong problem-solving, analytical, and quantitative skills.
- A professional attitude and service orientation with the ability to work with our international teams.
Why you should not miss this career opportunity:
- We are a mission-driven firm that is revolutionising the insurance and banking industries. We are not aiming to incrementally push the current boundaries; we are redefining them.
- A customer-centric organisation with innovation at the core of everything we do.
- Capitalise on an unparalleled career progression opportunity.
- Work closely with senior leaders who have individually served several CEOs in Fortune 100 companies globally.
- Develop highly valued skills and build connections in the industry by working with top-tier Insurance and Banking clients on their mission-critical problems and deploying solutions integrated into their day-to-day workflows and processes.