Hyderabad, Telangana, India
Social Good & Community Development
Full-Time
Amazon
Overview
DESCRIPTION
The Mesa team enables other Amazon businesses to move faster by providing tools and solutions to common problems, with a notable focus on ensuring vendors and content providers are paid globally. If you are ready to be part of a global program and work hard to help many Amazon teams succeed in this mission, this position is right for you. We offer a creative, fast-paced, innovative work environment where you'll help drive a key part of Amazon's innovation on behalf of vendors and content providers across the globe.

The Mesa team builds reusable software for a multitude of Amazon businesses, including Kindle, Amazon Video, Amazon Music, and more. You'll work directly with engineers, product managers, leadership, customers, and other stakeholders from these businesses to address some of the highest customer pain points in our space.
Company - ADCI MAA 15 SEZ - K20
Job ID: A2943019
BASIC QUALIFICATIONS
- 2+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience in Unix
- Experience troubleshooting data and infrastructure issues
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
- Knowledge of distributed systems as it pertains to data storage and computing
- Experience building or administering reporting/analytics platforms
- Deep understanding of data and analytical techniques, the ability to connect insights to the business, and practical experience insisting on the highest standards for operating ETL and big data pipelines. With our Amazon Music Unlimited and Prime Music services, and our top music provider spot on the Alexa platform, providing high-quality, highly available data to our internal customers is critical to the customer experience.

KEY JOB RESPONSIBILITIES
- Assist the DISCO team with management of our existing environment, which consists of Redshift and SQL-based pipelines. The activities around these systems are well defined via standard operating procedures (SOPs) and typically involve approving data access requests and subscribing or adding new data to the environment.
- Manage SQL data pipelines (create new pipelines or update existing ones).
- Perform maintenance tasks on the Redshift cluster.
- Assist the team with management of our next-generation AWS infrastructure. Tasks include infrastructure monitoring via CloudWatch alarms, infrastructure maintenance through code changes or enhancements, and troubleshooting and root-cause analysis of infrastructure issues; in some cases, this role may also submit code changes to address those issues.