Bangalore, Karnataka, India
Information Technology
Full-Time
Jash Data Sciences
Overview
Do you love solving real-world data problems with the latest and best techniques, and having fun while solving them in a team? Then come join our high-energy team of passionate data people. Jash Data Sciences is the right place for you.
We are a cutting-edge Data Sciences and Data Engineering startup based in Pune, India.
We believe in continuous learning and evolving together. And we let the data speak!
What will you be doing?
- Discover trends in data sets and develop algorithms to transform raw data for further analytics.
- Create data pipelines that bring in data from various sources and formats, transform it, and load it into the target database.
- Implement ETL/ELT processes in the cloud using tools like Airflow, Glue, Stitch, Cloud Data Fusion, and Dataflow.
- Design and implement Data Lakes, Data Warehouses, and Data Marts on AWS, GCP, or Azure using Redshift, BigQuery, PostgreSQL, etc.
- Write efficient SQL queries and read query execution plans to tune queries on engines like PostgreSQL.
- Tune the performance of OLAP/OLTP databases by creating indices, tables, and views.
- Write Python scripts to orchestrate data pipelines (see the sketch after this list).
- Hold thoughtful discussions with customers to understand their data engineering requirements, and break complex requirements into smaller tasks for execution.
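To give a concrete flavour of this work, here is a minimal sketch (not production code; the DAG name, schedule, and task bodies are placeholders, and it assumes Airflow 2.4 or later) of the kind of Python/Airflow pipeline our data engineers write:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull the day's raw records from a source system (placeholder).
    ...

def transform(**context):
    # Clean, deduplicate, and reshape the raw records (placeholder).
    ...

def load(**context):
    # Upsert the transformed records into the target warehouse (placeholder).
    ...

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the steps in extract -> transform -> load order.
    extract_task >> transform_task >> load_task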
What do we need from you?
- Strong Python coding skills with basic knowledge of algorithms/data structures and their application.
- Strong understanding of Data Engineering concepts including ETL, ELT, Data Lake, Data Warehousing, and Data Pipelines.
- Experience designing and implementing Data Lakes, Data Warehouses, and Data Marts that support terabyte-scale data.
- A track record of implementing Data Pipelines on public cloud environments (AWS/GCP/Azure) is highly desirable.
- A clear understanding of database concepts like indexing, query performance optimization, views, and various types of schemas.
- Hands-on SQL programming experience with knowledge of window functions, subqueries, and various types of joins (see the example after this list).
- Experience working with Big Data technologies like PySpark/Hadoop.
- A good team player with the ability to communicate with clarity.
- Show us your Git repo/blog!
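As an illustration of the hands-on SQL and PySpark level we are looking for, here is a small sketch (the bucket, paths, and column names are hypothetical) that uses a window function to keep only each customer's most recent order; the equivalent SQL would use ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC):

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("latest_orders_example").getOrCreate()

# Placeholder input: an orders table stored as Parquet in object storage.
orders = spark.read.parquet("s3://example-bucket/orders/")

# Rank each customer's orders by recency and keep only the latest one.
w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())
latest = (
    orders
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

latest.write.mode("overwrite").parquet("s3://example-bucket/latest_orders/")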
Qualifications
- 1-2 years of experience working on Data Engineering projects for the Data Engineer I role
- 2-5 years of experience working on Data Engineering projects for the Data Engineer II role
- 1-5 years of hands-on Python programming experience
- A Bachelor's/Master's degree in Computer Science is good to have
- Courses or certifications in Data Engineering will be given preference
- Candidates who demonstrate a drive for learning and keep up to date with technology through ongoing courses and self-learning will be given high preference
Talk to us
Feel free to call, email, or hit us up on our social media accounts.
Email: info@antaltechjobs.in