Overview
Upwork has partnered with an Enterprise client that specializes in helping businesses across all industries modernize through cloud adoption to achieve data-driven transformation. Their team of experts leverages the latest Google Cloud technologies and best practices to create customized solutions for each client.
They are currently seeking a Senior Consultant with deep AWS expertise for a 12-month assignment with the potential to extend.
In this role, you will lead and scale cloud infrastructure, ensuring high availability, automation, and security across AWS, GCP, and Kubernetes environments.
Job Description
As a key member of the Platform Engineering team, you will:
- Design, build, and maintain highly scalable, resilient, and cost-optimized cloud infrastructure.
- Implement best-in-class DevOps practices, CI/CD pipelines, and observability solutions.
- Automate workflows, optimize cloud performance, and strengthen the microservices architecture.
- Collaborate closely with developers, SREs, and security teams to enhance automation and security.
This is an in-office position based in Noida or Gurgaon, India, requiring overlap with U.S. working hours and occasional weekend work.
Qualifications
- 7+ years of hands-on DevOps experience with strong AWS expertise; SRE or Platform Engineering experience is a plus.
- Proven experience managing high-throughput workloads with variable traffic spikes. Industry experience in live sports and media streaming is highly valued.
- Deep knowledge of Kubernetes architecture, workload management, networking, RBAC, and autoscaling.
- Hands-on expertise with AWS services including VPC, IAM, EC2, Lambda, RDS, EKS, and S3; familiarity with GCP (GKE) is a plus.
- Experience with Terraform for cloud provisioning (Helm knowledge is desirable).
- Strong understanding of FinOps principles for cloud cost optimization.
- Hands-on experience with CI/CD automation using Jenkins, ArgoCD, and GitHub Actions.
- Proficiency in service mesh technologies such as Istio, Linkerd, or Consul.
- Familiarity with monitoring and logging tools such as CloudWatch, Google Cloud Logging, Prometheus, and Grafana, and with distributed tracing tools such as Jaeger.
- Proficiency in Python and/or Go for automation, infrastructure tooling, and performance tuning is highly desirable.
- Strong understanding of DNS, routing, load balancing, VPNs, firewalls, WAF, TLS, and IAM.
- Experience managing MongoDB, Kafka, or Pulsar for large-scale data processing is a plus.
- Proven ability to troubleshoot production issues, optimize performance, and ensure high availability with multi-region disaster recovery architectures.
Nice to have:
- Contributions to open-source DevOps projects or a strong technical blogging presence.
- Experience with KEDA-based autoscaling in Kubernetes.
Additional Information
Opportunity to work with a diverse team!
Competitive hourly rate!