The Senior DevOps Engineer will design, build, and operate highly scalable, secure, and resilient Big Data platforms. The role focuses on Kubernetes-based infrastructure, data engineering stacks, and cloud-native automation in support of real-time and batch analytics workloads, and works closely with data engineers, backend engineers, security, and product teams to enable fast, reliable, and compliant data delivery.
Qualification and Experience
- Minimum of a bachelor's degree in IT, Computer Science/Engineering, or another related technical discipline.
- Minimum of 5 years of prior work experience in DevOps / Platform Engineering.
Job Description
Platform & Infrastructure
- Design and manage Kubernetes clusters (on-premises K8s)
- Build and operate high-availability infrastructure for Big Data workloads
- Implement Infrastructure as Code (IaC) using Terraform
- Optimize cluster performance, networking, and storage for data-intensive systems
Big Data & Streaming Support
- Support and operate Big Data stacks, including: Apache Spark (batch & streaming), Apache Flink, Kafka, Hadoop / HDFS (where applicable), Trino / Presto
- Enable scalable execution of data pipelines on Kubernetes
- Manage object storage systems (S3, MinIO)
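To give a flavor of this work, below is a minimal sketch of verifying an S3-compatible object store with boto3. The endpoint, credentials, and bucket name are illustrative assumptions, not a description of our actual stack:

```python
# Minimal sketch: verifying an S3-compatible object store (MinIO or AWS S3)
# with boto3. The endpoint, credential variables, and bucket name are
# assumptions for illustration; real values would come from Vault or the
# environment.
import os

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("S3_ENDPOINT", "http://minio.example.internal:9000"),
    aws_access_key_id=os.environ["S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET_KEY"],
)

# List buckets to confirm connectivity and credentials.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Write a small health-check object so pipelines can assert write access.
s3.put_object(Bucket="data-platform-healthchecks", Key="ping", Body=b"ok")
```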
CI/CD & Automation
- Design and maintain CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, etc.)
- Implement GitOps workflows for Kubernetes deployments
- Automate build, test, deployment, and rollback processes (see the sketch below)
- Enforce versioning and release management best practices
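As a hedged illustration of the rollback automation listed above, here is a short Python wrapper around kubectl; the deployment and namespace names are hypothetical placeholders:

```python
# Sketch: an automated deploy-with-rollback step, assuming kubectl is on
# PATH. The workload and namespace names are placeholders for illustration.
import subprocess
import sys

DEPLOYMENT = "deployment/data-api"  # hypothetical workload
NAMESPACE = "data-platform"         # hypothetical namespace


def run(*args: str) -> int:
    """Run a kubectl command in the target namespace; return its exit code."""
    return subprocess.run(["kubectl", "-n", NAMESPACE, *args]).returncode


# Wait for the rollout; if it fails or times out, roll back automatically.
if run("rollout", "status", DEPLOYMENT, "--timeout=120s") != 0:
    print("rollout failed, rolling back", file=sys.stderr)
    run("rollout", "undo", DEPLOYMENT)
    sys.exit(1)
```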
Observability & Reliability
- Implement monitoring, logging, and alerting with Prometheus and Grafana
- Define and track SLIs, SLOs, and SLAs (see the sketch below)
- Lead incident response, root cause analysis (RCA), and postmortems
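By way of example, a minimal sketch of tracking an SLI against an SLO through the Prometheus HTTP API; the Prometheus address, PromQL expression, and 99.9% target are illustrative assumptions:

```python
# Sketch: compute an availability SLI from Prometheus and compare it to an
# SLO target. URL, query, and target are assumptions for illustration only.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical address
QUERY = (
    'sum(rate(http_requests_total{status!~"5.."}[30d]))'
    " / sum(rate(http_requests_total[30d]))"
)
SLO_TARGET = 0.999  # assumed 99.9% availability target

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

# An instant-query vector result carries each value as [timestamp, "value"];
# this assumes the query returned at least one series.
sli = float(resp.json()["data"]["result"][0]["value"][1])
print(f"SLI={sli:.5f} target={SLO_TARGET} {'OK' if sli >= SLO_TARGET else 'BREACH'}")
```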
Security & Compliance
- Implement DevSecOps best practices
- Manage secrets using Vault / AWS Secrets Manager / K8s Secrets (see the sketch below)
- Enforce RBAC, network policies, and container security
- Support compliance requirements (ISO)
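For the secrets-management item above, a minimal sketch using the hvac Vault client; the Vault address, token source, and secret path are assumptions for illustration:

```python
# Sketch: reading a secret from HashiCorp Vault (KV v2) with hvac. Address,
# token source, and secret path are illustrative; a real deployment would
# typically use Kubernetes auth rather than a raw token.
import os

import hvac

vault = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://vault.vault.svc:8200"),
    token=os.environ["VAULT_TOKEN"],
)

# "data-platform/db" is a hypothetical secret path on the default KV mount.
secret = vault.secrets.kv.v2.read_secret_version(path="data-platform/db")
db_password = secret["data"]["data"]["password"]
```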
Leadership & Collaboration
- Mentor junior DevOps engineers
- Define platform standards, architecture patterns, and best practices
- Collaborate with architects and engineering leaders on roadmap planning
- Participate in architecture and design reviews
Required Skills
Core DevOps & Cloud
- Strong experience with Kubernetes (production-grade clusters)
- Deep knowledge of Linux, networking, and distributed systems
- Infrastructure as Code: Terraform (mandatory)
Big Data & Data Platforms
- Hands-on experience supporting Big Data ecosystems
- Understanding of data lake, data warehouse, and lakehouse architectures
- Experience with streaming and real-time data processing
CI/CD & Tooling
- Strong experience with CI/CD tools and Git-based workflows
- Containerization: Docker, Helm, Kustomize
Scripting & Programming
- Proficiency in Bash, Python
- Ability to write automation scripts and tooling
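As one example of the kind of automation tooling this implies, a short sketch using the official kubernetes Python client; the namespace and restart threshold are assumptions:

```python
# Sketch: flag pods with excessive container restarts, using the official
# `kubernetes` Python client. Namespace and threshold are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

THRESHOLD = 5  # assumed restart budget

for pod in v1.list_namespaced_pod("data-platform").items:
    for status in pod.status.container_statuses or []:
        if status.restart_count > THRESHOLD:
            print(f"{pod.metadata.name}/{status.name}: "
                  f"{status.restart_count} restarts")
```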
Additional Skills
- Experience with FinTech or data-heavy platforms
- Knowledge of Airflow / Dagster for data orchestration
- Experience with multi-tenant Kubernetes platforms
- Cost optimization experience
Benefits of Working at eXtensoData
- A stellar opportunity to work with a rising company.
- An amazing and passionate young team and a beautiful office space.
- The trust of the biggest FinTech company.
- One-of-a-kind company culture and growth opportunities to accelerate your career progression.
How to apply?
We are always keen to meet energetic and talented professionals who would like to join our team. Click the button below and submit your application for the post.