Bangalore, Karnataka, India
Information Technology
Full-Time
YASH Technologies Middle East
Overview
Date: Mar 25, 2025
Job Requisition Id: 60689
Location:
Bangalore, KA, IN
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation.
At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond generational gaps and the disruptions of the future.
We are looking to hire Python professionals in the following areas:
Job Description:
Job Title: Data Engineer / DevOps - Enterprise Big Data Platform
Right to Hire requirement
In this role, you will be part of a growing, global team of data engineers who collaborate in DevOps mode to enable the business with state-of-the-art technology, leveraging data as an asset to support better-informed decisions.
The Enabling Functions Data Office team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on the Enabling Functions' data management and analytics platform (Palantir Foundry, AWS, and other components).
The Foundry platform comprises multiple technology stacks hosted on Amazon Web Services (AWS) infrastructure or in our own data centers. Developing pipelines and applications on Foundry requires:
- Proficiency in SQL / Scala / Python (Python is required; all three are not)
- Proficiency in PySpark for distributed computation (a minimal sketch follows this list)
- Familiarity with Ontology and Slate
- Familiarity with the Workshop App; basic design/visual competency
- Familiarity with common databases (e.g., Oracle, MySQL, Microsoft SQL Server); not all are required
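As a hedged illustration of the PySpark proficiency listed above, here is a minimal sketch of a distributed transformation. The input path, column names, and filter value are hypothetical, not details of the actual platform.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a Spark session; on a cluster this coordinates the executors.
spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Hypothetical input dataset.
orders = spark.read.parquet("s3://example-bucket/orders/")

# Filter, then aggregate; the groupBy triggers a distributed shuffle.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")  # assumed column and value
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the result back out; mode("overwrite") replaces any previous run.
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/daily_totals/")
```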
- This position is project-based and may span multiple smaller projects or a single large project, following an agile methodology.
- B.Tech/B.Sc./M.Sc. in Computer Science or a related field, with 6+ years of overall industry experience
- Strong experience in Big Data & Data Analytics
- Experience in building robust ETL pipelines for batch as well as streaming ingestion.
- Big Data engineers with a firm grounding in object-oriented programming and advanced, commercially proven experience in Python, PySpark, and SQL
- Experience interacting with RESTful APIs, including authentication via SAML and OAuth2 (see the sketch after this list)
- Experience with test-driven development and CI/CD workflows
- Knowledge of Git for source control management
- Agile experience in Scrum environments, with tools such as Jira
- Experience with visualization tools such as Tableau or Qlik is a plus
- Experience with Palantir Foundry, AWS, or Snowflake is an advantage
- Basic knowledge of statistics and machine learning is desirable
- Problem-solving abilities
- Proficient in English with strong written and verbal communication
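Since the list above asks for RESTful API experience with OAuth2, here is a minimal sketch of the client-credentials flow using Python's requests library. The token and API URLs and the credentials are placeholders, not real endpoints. SAML flows are typically browser-driven, so only OAuth2 is sketched.

```python
import requests

# Placeholder endpoints and credentials for illustration only.
TOKEN_URL = "https://auth.example.com/oauth2/token"
API_URL = "https://api.example.com/v1/datasets"

# OAuth2 client-credentials flow: exchange a client id/secret for a bearer token.
token_resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=("my-client-id", "my-client-secret"),  # assumed credentials
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Call the protected REST API with the bearer token.
resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```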
- Design, develop, test, and support data pipelines and applications
- Industrialize data pipelines
- Establish a continuous quality-improvement process to systematically optimize data quality (a hedged example follows this list)
- Collaborate with various stakeholders, including business and IT
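As a hedged example of the quality-improvement responsibility above, the sketch below runs two simple data-quality checks in PySpark and fails the run when thresholds are breached. The dataset, columns, and thresholds are assumptions for illustration only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical input; on Palantir Foundry this would typically be a dataset read.
df = spark.read.parquet("s3://example-bucket/customers/")

total = df.count()
null_emails = df.filter(F.col("email").isNull()).count()            # assumed column
duplicate_ids = total - df.dropDuplicates(["customer_id"]).count()  # assumed key

# Fail fast so bad data never reaches downstream consumers.
if total and null_emails / total > 0.05:  # illustrative 5% threshold
    raise ValueError(f"Null-email rate too high: {null_emails}/{total}")
if duplicate_ids > 0:
    raise ValueError(f"Found {duplicate_ids} duplicate customer_id rows")
```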
- Bachelor's (or higher) degree in Computer Science, Engineering, Mathematics, Physical Sciences, or related fields
- 6+ years of experience in system engineering or software development
- 3+ years of engineering experience, including ETL-type work with databases and Hadoop platforms
- Hadoop general: Deep knowledge of distributed file system concepts, MapReduce principles, and distributed computing; knowledge of Spark and how it differs from MapReduce; familiarity with encryption and security in a Hadoop cluster
- Data management/data structures: Proficiency in technical data management tasks, i.e., writing code to read, transform, and store data
- XML/JSON knowledge
- Experience working with REST APIs
- Spark: Experience launching Spark jobs in client mode and cluster mode; familiarity with Spark job property settings and their performance implications (see the configuration sketch after this list)
- Application development: Familiarity with HTML, CSS, and JavaScript, plus basic design/visual competency
- SCC/Git: Experience with source code control systems such as Git
- ETL: Experience developing ELT/ETL processes, including loading data from enterprise-scale RDBMSs such as Oracle, DB2, and MySQL
- Authorization: Basic understanding of user authorization (Apache Ranger preferred)
- Programming: Able to code in Python, or expert in at least one other high-level language such as Java, C, or Scala; experience using REST APIs
- SQL: Expert at manipulating database data using SQL; familiarity with views, functions, stored procedures, and exception handling
- AWS: General knowledge of the AWS stack (EC2, S3, EBS, …)
- IT process compliance: SDLC experience and formalized change controls
- Working in DevOps teams based on Agile principles (e.g., Scrum)
- ITIL knowledge (especially incident, problem, and change management)
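For the Spark property settings mentioned above, here is a minimal configuration sketch; the values are illustrative, not tuning recommendations. Client versus cluster mode is normally selected at submit time rather than in code.

```python
from pyspark.sql import SparkSession

# Deploy mode is chosen when the job is launched, e.g.:
#   spark-submit --master yarn --deploy-mode cluster my_job.py   (driver on cluster)
#   spark-submit --master yarn --deploy-mode client my_job.py    (driver on gateway)
spark = (
    SparkSession.builder
    .appName("tuned-etl-job")
    .config("spark.executor.memory", "4g")          # memory per executor
    .config("spark.executor.cores", "2")            # cores per executor
    .config("spark.sql.shuffle.partitions", "200")  # partitions after shuffles
    .getOrCreate()
)

# Confirm the settings actually took effect.
print(spark.sparkContext.getConf().get("spark.executor.memory"))
```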
- Specific information related to the position:
- Physical presence in primary work location (Bangalore)
- Flexibility to work CEST and US EST time zones (according to the team rotation plan)
- Willingness to travel to Germany, the US, and potentially other locations (per project demand)
Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and ethical corporate culture
Talk to us
Feel free to call, email, or hit us up on our social media accounts.
Email: info@antaltechjobs.in