Overview
Job Information
Industry: IT Services
Date Opened: 02/20/2025
Job Type: Software Engineering
Work Experience: 1-3 years
City: Mumbai
State/Province: Maharashtra
Country: India
Zip/Postal Code: 400080
Job Description
What we want:
We are looking for a savvy Data Engineer to join our growing team of analytics experts. You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
Who we are:
Vertoz (NSEI: VERTOZ) is an AI-powered MadTech and CloudTech platform offering Digital Advertising, Marketing and Monetization (MadTech) and Digital Identity and Cloud Infrastructure (CloudTech). It caters to Businesses, Digital Marketers, Advertising Agencies, Digital Publishers, Cloud Providers, and Technology companies. For more details, please visit our website.
What you will do:
- Responsible for the documentation, design, development, and architecture of Hadoop applications.
- Must have at least 1-2 years of hands-on working knowledge of Big Data technologies such as Impala, Hive, Hadoop, Spark, Spark Streaming, Kafka, etc.
- Excellent programming skills in Python.
- Experience with stream-processing systems such as Storm and Spark Streaming.
- Experience with relational SQL and NoSQL databases, including Vertica.
- Experience with cloud services.
- Experience with the Cloudera Hadoop distribution, shell scripting, and Superset; hands-on with cluster management.
- Development: Create and maintain scalable big data applications using Python, Spark, Hive, and Impala (see the batch sketch after this list).
- Data Pipelines: Develop and optimize data processing pipelines to handle large datasets.
- Integration: Implement data ingestion, transformation, and loading processes.
- Collaboration: Work with data scientists and analysts to meet data requirements.
- Quality Control: Ensure data quality, integrity, and security.
- Performance: Monitor and troubleshoot performance issues to improve efficiency.
- Documentation: Contribute to code reviews, testing, and comprehensive documentation.
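The posting names Python, Spark, Hive, and Impala as the core batch stack. As a rough, non-authoritative sketch of the kind of batch job this role involves, here is a minimal PySpark rollup; the table names (raw.ad_events, analytics.daily_impressions), columns, and app name are hypothetical placeholders, and a Hive-enabled Spark cluster is assumed.

```python
# A minimal sketch, assuming a Hive-enabled Spark cluster.
# All table and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-impressions-rollup")  # hypothetical job name
    .enableHiveSupport()                  # lets Spark read/write Hive tables
    .getOrCreate()
)

# Read raw events from a Hive table (hypothetical schema).
events = spark.table("raw.ad_events")

# Aggregate impressions per campaign per day.
daily = (
    events
    .filter(F.col("event_type") == "impression")
    .groupBy("campaign_id", F.to_date("event_ts").alias("event_date"))
    .agg(F.count("*").alias("impressions"))
)

# Write the rollup back to Hive, partitioned by date.
(
    daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_impressions")
)

spark.stop()
```

Partitioning the output by date is what keeps a table like this cheap to query from Impala after a metadata refresh.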
Requirements
- Education: Bachelor’s or Master’s degree in Computer Science, IT, or a related field.
- Experience: 1-2 years in a Big Data Developer role.
- Proficiency in Python.
- Strong experience with Apache Spark.
- Hands-on experience with Hive and Impala.
- Familiarity with Hadoop, HDFS, Kafka, and other big data tools (see the streaming sketch after this list).
- Knowledge of data modeling, ETL processes, and data warehousing concepts.
- Soft Skills: Excellent problem-solving, communication, and teamwork skills.
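Since the requirements pair Kafka with Spark, here is a similarly hedged sketch of a Kafka-to-HDFS ingest using Spark Structured Streaming; the broker address, topic, message schema, and HDFS paths are invented for illustration, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# A minimal sketch of a Kafka-to-HDFS streaming ingest.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Expected JSON payload of each Kafka message (hypothetical schema).
schema = StructType([
    StructField("campaign_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", LongType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "ad-events")                   # hypothetical topic
    .load()
)

# Kafka delivers bytes; decode and parse the JSON value column.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append micro-batches to Parquet on HDFS; the checkpoint directory
# tracks progress so output is exactly-once across restarts.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/ad_events")            # hypothetical path
    .option("checkpointLocation", "hdfs:///chk/ad_events")
    .start()
)
query.awaitTermination()
```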
Benefits
- No dress codes
- Flexible working hours
- 5 days working
- 24 Annual Leaves
- International Presence
- Celebrations
- Team outings