Bangalore, Karnataka, India
Information Technology
Full-Time
nexionpro
Overview
Job Title: Data Engineer (with Spark, Snowflake, and Hive expertise)
Location: Bangalore, Karnataka, India
Employment Type: Full-time
Department: Data Engineering / Big Data
We're looking for a passionate Data Engineer to join our growing team and help build scalable, high-performance data pipelines using modern big data technologies like Spark, Snowflake, and Hive.
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain efficient data pipelines using tools like Apache Spark, Snowflake, and Hive.
- Big Data Architecture: Architect and implement scalable solutions for processing and storing large datasets using distributed systems.
- ETL Processes: Develop ETL processes to extract, transform, and load data from various sources into Snowflake and other data warehouses (a sketch of such a job follows this list).
- Data Integration: Work closely with data scientists, analysts, and other stakeholders to integrate data from multiple sources.
- Performance Optimization: Optimize data pipelines for performance, scalability, and reliability.
- Data Quality & Monitoring: Implement data quality checks and monitoring tools to ensure data consistency and availability.
- Collaboration: Work with cross-functional teams to understand business requirements and deliver data solutions that drive insights and value.
- Documentation: Maintain detailed documentation of data workflows, architectures, and processes.
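To make the responsibilities concrete, here is a minimal sketch of the kind of batch ETL job described above: read raw data with PySpark, aggregate it, and load the result into Snowflake through the Spark-Snowflake connector. Every path, table name, and connection option below is a hypothetical placeholder, not a detail of our actual environment.

    # Minimal ETL sketch: extract raw orders, aggregate daily revenue,
    # and load into Snowflake. Requires the Spark-Snowflake connector on
    # the classpath; all names and credentials here are illustrative only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_orders_etl").getOrCreate()

    # Extract: raw order events from cloud storage (hypothetical path)
    orders = spark.read.parquet("s3a://raw-bucket/orders/")

    # Transform: keep completed orders, roll revenue up per day
    daily_revenue = (
        orders
        .filter(F.col("status") == "completed")
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue"),
             F.count("*").alias("order_count"))
    )

    # Load: write the aggregate into Snowflake (placeholder credentials;
    # in practice these come from a secrets manager, never source code)
    sf_options = {
        "sfURL": "<account>.snowflakecomputing.com",
        "sfUser": "<user>",
        "sfPassword": "<password>",
        "sfDatabase": "ANALYTICS",
        "sfSchema": "PUBLIC",
        "sfWarehouse": "ETL_WH",
    }
    (daily_revenue.write
        .format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .option("dbtable", "DAILY_REVENUE")
        .mode("overwrite")
        .save())

In a production pipeline, a job like this would typically be scheduled and monitored by an orchestrator such as Airflow, with data quality checks applied to the output table.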
Requirements:
- Experience: 3+ years of experience as a Data Engineer or in a similar role working with big data technologies.
- Spark: Strong experience with Apache Spark, including Spark SQL, Spark Streaming, and batch processing.
- Snowflake: Proficiency with Snowflake for building data warehouses, writing SQL queries, and managing cloud-based data solutions.
- Hive: Experience with Apache Hive for data warehousing, including partitioning, optimization, and query performance tuning (see the example after this list).
- Cloud Platforms: Experience with cloud platforms like AWS, Azure, or GCP (Google Cloud Platform).
- ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache NiFi, Airflow, Talend).
- SQL: Advanced SQL skills for querying and manipulating large datasets.
- Programming Languages: Proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Version Control: Knowledge of version control systems like Git.
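As an illustration of the Hive skills above, the sketch below creates a date-partitioned Hive table and loads it with dynamic partitioning, so queries filtering on the partition column prune partitions instead of scanning the full table. The table and column names are assumptions for the example, and it presumes an existing raw_events table in the Hive metastore.

    # Hive partitioning sketch via Spark SQL with Hive support enabled.
    # Assumes a raw_events table already exists in the metastore.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive_partitioning_demo")
             .enableHiveSupport()
             .getOrCreate())

    # Allow dynamic partition inserts
    spark.sql("SET hive.exec.dynamic.partition=true")
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

    # Date-partitioned table: each event_date becomes its own partition
    spark.sql("""
        CREATE TABLE IF NOT EXISTS events_partitioned (
            user_id BIGINT,
            event_type STRING
        )
        PARTITIONED BY (event_date DATE)
        STORED AS PARQUET
    """)

    # Dynamic-partition load; the partition column goes last in the SELECT
    spark.sql("""
        INSERT OVERWRITE TABLE events_partitioned PARTITION (event_date)
        SELECT user_id, event_type, CAST(created_at AS DATE) AS event_date
        FROM raw_events
    """)

    # This filter touches only one partition rather than the whole table
    spark.sql("""
        SELECT COUNT(*) FROM events_partitioned
        WHERE event_date = DATE '2024-01-01'
    """).show()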
Preferred Skills:
- Big Data Ecosystem: Experience with other big data tools such as Kafka, Hadoop, and Flink.
- Data Modeling: Strong understanding of data modeling techniques and best practices for big data systems.
- Machine Learning: Familiarity with machine learning concepts and how to support data science teams with data engineering tasks.
- DevOps: Familiarity with DevOps practices, including CI/CD pipelines for data engineering projects.
Soft Skills:
- Problem-Solving: Strong analytical and troubleshooting skills.
- Communication: Excellent communication skills to collaborate effectively with cross-functional teams.
- Attention to Detail: Ability to identify issues and optimize solutions for both performance and data accuracy.
Why Join Us?
- Innovative Projects: Work on cutting-edge data engineering projects that push the boundaries of technology.
- Collaborative Environment: Join a talented and passionate team that values collaboration and continuous learning.
- Growth Opportunities: We believe in investing in our employees’ professional growth through mentorship and career development.
Job Types: Full-time, Permanent
Pay: ₹1,000,000 - ₹3,000,214 per year
Schedule:
- Day shift
Work Location: In person