
Overview
Svitla Systems Inc. is looking for a Middle Data Engineer for a full-time position (40 hours per week) in India. Our client is the world’s largest travel guidance platform, helping hundreds of millions of people each month become better travelers, from planning to booking to taking a trip. Travelers across the globe use the site and app to discover where to stay, what to do, and where to eat, based on guidance from those who have been there before.
With more than 1 billion reviews and opinions covering nearly 8 million businesses, travelers turn to the platform to find deals on accommodations, book experiences, and reserve tables at great restaurants. As a travel guidance company, it helps travelers discover great places nearby and is available in 43 markets and 22 languages.
As a member of the Data Platform Enterprise Services Team, you will collaborate with engineering and business stakeholders to build, optimize, maintain, and secure the full data vertical, including tracking instrumentation, information architecture, ETL pipelines, and tooling that provide key analytics insights for business-critical decisions at the highest levels of product, finance, sales, CRM, marketing, data science, and more. You will work in a dynamic environment with a continuously modernizing tech stack, featuring highly scalable architecture, cloud-based infrastructure, and real-time responsiveness.
REQUIREMENTS:
- 4+ years of experience in data engineering or general software development.
- Knowledge of big data technologies such as Snowflake, Databricks, and BigQuery.
- Expertise in data design and data modeling.
- Experience developing complex ETL processes from concept through implementation, deployment, and operations, including SLA definition, performance measurement, and monitoring.
- Proficiency in writing and optimizing SQL queries; data exploration skills with a proven record of querying and analyzing large datasets.
- Knowledge of the AWS ecosystem, including storage (S3) and compute (EKS, ECS, Fargate) services.
- Experience with relational databases like Postgres and programming languages like Python and/or Java.
- Knowledge of cloud data warehouse concepts.
- Organized and detail-oriented, with a strong sense of ownership and the ability to work in a fast-paced, dynamic environment.
- Strong verbal and written communication skills. Ability to effectively communicate with both business and technical teams.
- Ability to make progress on projects independently, intense curiosity, and an enthusiasm for solving complex problems.
- BS or MS degree in Computer Science or a related technical discipline.
RESPONSIBILITIES:
- Provide the organization’s data consumers with high-quality data sets through curation, consolidation, and manipulation of various large-scale (terabyte and growing) sources.
- Build data pipelines and ETL processes that interact with terabytes of data on leading platforms such as Snowflake and BigQuery.
- Develop and improve enterprise data by creating efficient, scalable data models for use across the organization.
- Partner with the analytics, data science, CRM, and machine learning teams.
- Take responsibility for enterprise data integrity, validation, and documentation.
- Resolve data pipeline failures and implement sound anomaly detection.
WE OFFER:
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
ABOUT SVITLA
If you are interested in this vacancy, please send us your CV.
We will be happy to welcome you to our friendly team :)
LET'S MEET IN PERSON
Yuliia Mamitova
Why hesitate? Apply now