Multicloud4u Technologies
Job Overview:
We are seeking a highly skilled Data Engineer to join our team and help us build and maintain robust, scalable data pipelines and architectures. You will play a key role in optimizing and managing our data infrastructure, enabling our data scientists and analysts to make better-informed decisions and produce meaningful business insights.
Key Responsibilities:
- Data Pipeline Development: Design, implement, and maintain data pipelines that ingest, process, and store large amounts of structured and unstructured data from various sources.
- Database Management: Develop and manage relational and non-relational databases (SQL and NoSQL), ensuring high availability, performance, and scalability.
- Data Integration: Integrate data from multiple sources, ensuring accurate and consistent data availability for downstream applications and analytical teams.
- Optimization & Performance Tuning: Monitor, troubleshoot, and optimize data processing workflows for speed, reliability, and efficiency.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to ensure seamless data flows and empower teams with the right data at the right time.
- ETL Processes: Develop and maintain ETL processes to ensure that data is clean, consistent, and timely.
- Data Quality & Governance: Implement and maintain data validation, quality control, and governance processes to ensure data integrity.
- Automation: Automate repetitive data engineering tasks and processes using appropriate tools and frameworks.
Required Skills & Qualifications:
- Experience: 5+ years of experience in data engineering, data architecture, or a related field.
- Technical Expertise:
  - Proficiency in programming languages such as Python, Java, or Scala.
  - Strong experience with SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
  - Expertise in data pipeline frameworks like Apache Kafka, Apache Airflow, or AWS Glue.
  - Experience with cloud platforms like AWS, Google Cloud, or Azure.
  - Familiarity with big data technologies like Hadoop, Spark, or Hive.
  - Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Tools & Frameworks:
  - Familiarity with version control systems like Git.
  - Experience working with data warehouses (e.g., Snowflake, Redshift, BigQuery).
  - Experience with data integration tools like Talend, Informatica, or similar ETL tools.
- Analytical Skills: Strong problem-solving ability and the capacity to troubleshoot and resolve complex data issues.
- Collaboration: Excellent communication skills to work across teams and with both technical and non-technical stakeholders.
Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Familiarity with machine learning frameworks and models.
- Experience with real-time data processing and stream processing frameworks.
Benefits:
- Competitive salary and benefits package.
- Health, dental, and vision insurance.
- Flexible work hours and remote options.
- Opportunities for growth and career development.
- Collaborative and dynamic work environment.
Job Type: Contractual / Temporary
Contract length: 12 months
Pay: ₹1,500,000 - ₹2,500,000 per year
Benefits:
- Health insurance
- Provident Fund
Schedule:
- Day shift
- Monday to Friday
Work Location: In person
Application Deadline: 03/04/2025
Expected Start Date: 21/04/2025