We found 21 jobs matching your search

Job Description

Responsibilities:
• DevOps specialist supporting the client's AWS Cloud environments
• Set up and automate high-availability clusters with AWS Auto Scaling, Load Balancers and Route 53 SSO in the cloud
• Estimate AWS usage costs and identify operational cost-control mechanisms
• Monitoring via Splunk, CloudWatch and CloudTrail
Required Skills:
• 6-8 years of experience with the AWS Cloud Platform and DevOps engineering
• Strong experience developing infrastructure-as-code build and deployment pipelines
• Hands-on experience with infrastructure as code using Terraform
• Previous work experience with Agile and Scrum methodologies and practices
• Previous experience designing and developing a multitude of applications using most of the main services of the AWS stack (e.g. EC2, ECS, EKS, S3, RDS, VPC, IAM, ELB, CloudWatch, Route 53, Lambda and CloudFormation)
• Working knowledge of AWS VPCs, subnets, Internet Gateways and Route Tables
• Experience with Java Spring Boot and CI/CD pipeline tools would be advantageous
• Worked on deployment automation using shell scripting, with a concentration on DevOps and CI/CD with Jenkins
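As an illustration of the Auto Scaling and load-balancer work this posting describes, here is a minimal Python (boto3) sketch that inspects an Auto Scaling group and the health of its attached target groups; the region and group name are hypothetical placeholders, not values from the posting.

# Hypothetical sketch: inspect an Auto Scaling group and its instance health
# with boto3. The region and group name are placeholders.
import boto3

REGION = "us-east-1"        # assumed region
ASG_NAME = "web-app-asg"    # hypothetical Auto Scaling group name

autoscaling = boto3.client("autoscaling", region_name=REGION)
elbv2 = boto3.client("elbv2", region_name=REGION)

# Describe the Auto Scaling group and compare desired vs. in-service capacity.
resp = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
for group in resp["AutoScalingGroups"]:
    healthy = [i for i in group["Instances"] if i["LifecycleState"] == "InService"]
    print(f"{group['AutoScalingGroupName']}: "
          f"desired={group['DesiredCapacity']}, in_service={len(healthy)}")
    # Check the health of targets in each attached load-balancer target group.
    for tg_arn in group.get("TargetGroupARNs", []):
        health = elbv2.describe_target_health(TargetGroupArn=tg_arn)
        states = [t["TargetHealth"]["State"] for t in health["TargetHealthDescriptions"]]
        print(f"  target group {tg_arn.split('/')[-2]}: {states}")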

  • Salary : Rs. 11,00,000.0 - Rs. 25,00,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : AWS DevOps

Job Description

Software development and continuous delivery are not just buzzwords for you? Your specialties are Python, SQL, Postgres, Grafana, ELK, Docker and Jenkins. Come share your skills and be creative within our feature team, in charge of the inventories of the Cloud service on our Hybrid Cloud Platform, as a Build and Run Developer / DevOps.
Responsibilities
• Help design, develop and maintain robust applications in an as-a-service model on the Cloud platform
• Evaluate, implement and standardize new tools / solutions to continuously improve the Cloud Platform
• Leverage expertise to drive the organization's and department's technical vision in development teams
• Liaise with global and local stakeholders and influence technical roadmaps
• Passionately contribute towards hosting a thriving developer community
• Encourage contribution towards inner and open sourcing, leading by example
Profile
• Experience with and exposure to good programming practices, including coding and testing standards
• Passion for and experience in proactively investigating, evaluating and implementing new technical solutions for continuous improvement
• Good development culture and familiarity with industry-wide best practices
• Production mindset with a keen focus on reliability and quality
• Passionate about being part of a distributed, self-sufficient feature team with regular deliverables
• Proactive learner who owns skills in Scrum, Data and Automation
• Strong technical ability to monitor, investigate, analyze and fix production issues
• Ability to ideate and collaborate through inner and open sourcing
• Ability to interact with client managers, developers, testers and cross-functional teams such as architects
• Experience working in an Agile team and exposure to agile / SAFe development methodologies
• Minimum 5+ years of experience in software development and architecture
• Good experience in design and development, including object-oriented programming in Python, cloud-native application development, APIs and microservices
• Good experience with relational databases like PostgreSQL and the ability to build robust SQL queries
• Knowledge of Grafana for data visualization and the ability to build dashboards from various data sources
• Experience with technologies like Elasticsearch and FluentD
• Experience hosting applications using containerization (Docker, Kubernetes)
• Good understanding of CI/CD and DevOps; proficient with tools like Git, Jenkins, Sonar
• Good system skills with Linux OS and bash scripting
• Understanding of the Cloud and cloud services
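As a small illustration of the "robust SQL queries" item above, here is a minimal psycopg2 sketch using a parameterized query; the DSN, table and columns are hypothetical placeholders, not part of the posting.

# Hypothetical sketch: a parameterized PostgreSQL query with psycopg2.
# The DSN, table and columns are placeholders, not values from the posting.
import psycopg2

DSN = "dbname=inventory user=app password=secret host=localhost"  # assumed

def hosts_by_platform(platform):
    """Return (hostname, created_at) rows for one cloud platform."""
    query = """
        SELECT hostname, created_at
        FROM cloud_inventory          -- hypothetical table
        WHERE platform = %s
        ORDER BY created_at DESC
        LIMIT 100;
    """
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(query, (platform,))   # bound parameter, not string formatting
            return cur.fetchall()

if __name__ == "__main__":
    for hostname, created_at in hosts_by_platform("aws"):
        print(hostname, created_at)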

  • Salary : Rs. 12,00,000.0 - Rs. 14,00,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Specialist Software Engineer - Python + Cloud (Insights & Observability)

Job Description

Please share profiles for the skills below.
• Location: first preference Pune, then other locations
• Level 10 and 9
• Work mode: 5 days work from office
• 4+ years of DevOps experience with Python scripting
• 3+ years of experience with Kubernetes and Ansible
• Conversant with Linux/Unix environments
• Extensive experience with Helm, Argo CD, Vault, Rancher and Git repos is required
• Experience with Azure will be a plus
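To illustrate the combination of Kubernetes with Python scripting mentioned above, here is a minimal sketch using the official Kubernetes Python client to report Deployment readiness; the namespace is an assumed placeholder.

# Hypothetical sketch: Python scripting against a Kubernetes cluster with the
# official client library. The namespace is a placeholder.
from kubernetes import client, config

NAMESPACE = "default"   # assumed namespace

def report_deployments(namespace):
    """Print ready vs. desired replicas for every Deployment in a namespace."""
    config.load_kube_config()            # uses the local kubeconfig context
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        ready = dep.status.ready_replicas or 0
        desired = dep.spec.replicas or 0
        marker = "OK" if ready == desired else "WARN"
        print(f"[{marker}] {dep.metadata.name}: {ready}/{desired} replicas ready")

if __name__ == "__main__":
    report_deployments(NAMESPACE)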

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : DevOps Engineer

Job Description

JD for the DevOps requirement: expert knowledge of Red Hat OpenShift Kubernetes, Argo CD, Helm charts and Git, with good hands-on experience in deployments, certificate rotation and worker node updates (experience with IBM Cloud would be an added advantage).
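As a rough illustration of scripting the worker-node-update step mentioned above, here is a minimal Python sketch that shells out to the standard oc adm cordon/drain/uncordon commands; the node name is hypothetical, and this is only one possible way to automate the step.

# Hypothetical sketch: automate the cordon/drain step of a worker node update
# by shelling out to the OpenShift CLI. The node name is a placeholder.
import subprocess

NODE = "worker-0.example.internal"   # hypothetical worker node name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # raise if the command fails

# Mark the node unschedulable, then evict workloads so it can be updated.
run(["oc", "adm", "cordon", NODE])
run(["oc", "adm", "drain", NODE, "--ignore-daemonsets", "--force"])
# ... apply the node update here, then make the node schedulable again:
run(["oc", "adm", "uncordon", NODE])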

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : DevOps Engineer

Job Description

DevOps Engineer

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : DevOps Engineer

Job Description

Client: Tech Mahindra
Position: DevOps Engineer
Location: Santa Clara, CA (day one onsite)
Duration: 6-12 months
Start Date: ASAP
Rate: $60/hr
Note: The candidate must be in the PST or MST time zone only.
Job Description
• Skilled engineer to join our team and contribute to the successful implementation and management of our cloud-based infrastructure.
• Stay up to date with the latest trends and advancements in Python, APIs, web server administration, application server administration, and Machine Learning technologies.
• The ideal candidate will have expertise in Cloud Kubernetes/OpenShift and application lifecycle management, experience with Cloud Kubernetes clusters, and a strong background in CI/CD, architecture design, containerization, Docker, and AWS.
Detailed JD:
• Know the installation and configuration of all the latest Python versions, the know-how of a Python upgrade, and how to maintain Python environments.
• Knowledge of storage management for unstructured data. Good understanding of on-prem (NAS) and cloud storage technologies (FSx, SnapMirror, S3).
• Administer and maintain Python-based APIs, web servers, and application servers to ensure optimal performance and availability.
• Monitor and troubleshoot system issues, including performance bottlenecks, server crashes, and connectivity problems. Collaborate with development teams to ensure the seamless integration of APIs and web services into existing systems.
• Implement security measures and best practices to protect APIs and servers from unauthorized access and potential threats.
• Manage and maintain Machine Learning infrastructure, including the deployment and monitoring of models, data pipelines, and other ML components.
• Administer and maintain the Databricks platform, including cluster management, user access, and security configuration.
• Monitor and troubleshoot issues related to Databricks clusters, job scheduling, data pipelines, and data processing workflows.
• Collaborate with data engineers and data scientists to optimize and tune Databricks performance, ensuring efficient data processing and analytics.
• Work closely with IT operations teams to ensure seamless integration of Databricks with other data storage systems, data lakes, and data warehouses.
• Automate and streamline Databricks administration tasks using scripting and automation tools.
• Manage the entire lifecycle of applications running on Cloud Kubernetes/OpenShift platforms.
• Monitor and troubleshoot application issues, perform upgrades, and ensure high availability.
• Design, deploy, and maintain Kubernetes clusters on cloud platforms such as AWS. Deploy and manage applications on AWS using services like EC2, S3, RDS, etc.
• Ensure scalability, security, and optimal performance of the Kubernetes infrastructure.
• Implement Continuous Integration and Continuous Deployment (CI/CD) pipelines for application releases.
• Configure and maintain CI/CD tools and frameworks such as Jenkins, GitLab CI/CD, or similar. Collaborate with cross-functional teams to design and architect scalable and resilient cloud-based solutions.
• Containerize applications using Docker to enable portability and scalability.
• Implement and configure monitoring tools and frameworks such as Prometheus, Grafana, or similar.
• Set up monitoring dashboards to track the health, performance, and availability of applications and infrastructure.
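As an illustration of the monitoring-dashboard item above, here is a minimal Python sketch that probes a health endpoint and exposes the result via prometheus_client so a Prometheus/Grafana dashboard could scrape it; the endpoint URL, port and metric names are hypothetical.

# Hypothetical sketch: expose an application health metric that a Prometheus/
# Grafana dashboard could scrape. The URL, port and metric names are placeholders.
import time
import requests
from prometheus_client import Gauge, start_http_server

APP_URL = "http://localhost:8080/health"   # hypothetical health endpoint
UP = Gauge("app_up", "1 if the health endpoint responded with HTTP 200")
LATENCY = Gauge("app_health_latency_seconds", "Latency of the last health probe")

def probe():
    start = time.monotonic()
    try:
        ok = requests.get(APP_URL, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    LATENCY.set(time.monotonic() - start)
    UP.set(1 if ok else 0)

if __name__ == "__main__":
    start_http_server(9100)       # metrics served at :9100/metrics
    while True:
        probe()
        time.sleep(15)            # probe at roughly scrape interval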

  • Salary : $60.0 - $60.0 per hour
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : DevOps Engineer

Job Description

Position: DevOps Engineer
Location: Denver, CO (day one onsite)
Duration: 6-12 months
Start Date: ASAP
Client: Tech Mahindra
Candidates should have 5+ years of experience.

Responsibilities

Job Description
• Experience as a DevOps Engineer or in a similar software engineering role
• Extensive experience with Docker, Kubernetes, and containers
• Experience working with AWS Cloud infrastructure and networking: IAM, VPC, Security Groups, CodePipeline, CodeBuild, CodeDeploy
• Experience creating job monitoring and performance scripts for containerized applications
• Strong cloud-native fundamentals
• Strong knowledge of AWS and Big Data engineering technologies: Spark, Elasticsearch, Logstash, Kibana, Airflow, AWS Kinesis, AWS Lambda
• Strong UNIX/Linux systems administration skills, including configuration, troubleshooting and automation
• Awareness of critical concepts in DevOps and Agile principles
• Configuring and managing databases such as PostgreSQL, Druid, Redis, etc.
• Extensive knowledge of Python and Git
• Experience with Agile / Scrum methodologies
• Knowledge of Java or Scala a plus
• High energy, problem solving and a curious attitude
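As a small illustration of the "job monitoring and performance scripts for containerized applications" item, here is a minimal sketch using the Docker SDK for Python to report per-container memory and CPU counters; it assumes a local Docker daemon and is only a sketch, not the client's tooling.

# Hypothetical sketch of a container monitoring script using the Docker SDK
# for Python; reports memory and CPU usage for every running container.
import docker

def container_report():
    client = docker.from_env()                    # talks to the local daemon
    for c in client.containers.list():            # running containers only
        stats = c.stats(stream=False)             # one-shot stats snapshot
        mem_mib = stats["memory_stats"].get("usage", 0) / (1024 ** 2)
        cpu_ns = stats["cpu_stats"]["cpu_usage"]["total_usage"]
        print(f"{c.name:30s} status={c.status:8s} mem={mem_mib:8.1f} MiB cpu_ns={cpu_ns}")

if __name__ == "__main__":
    container_report()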
  • Salary : Rs. 70.0 - Rs. 70.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : DevOps Engineer

Job Description

MSA Designation: API Dev+Microsvs+Python+DevOps+Cloud-L2-IMS
Level: L2
Category: Niche / Special
Key Responsibilities:
• Drive the Assets Referential functional strategy, solution definition and execution for client needs.
• Responsible for project planning, work product quality, project execution and project reviews.
• Strive to become a subject matter expert on the internal Group Asset Referential System.
• Become an advocate for Assets and CIs along with their usage, and support user enablement and adoption.
• Develop in-depth business requirements for large projects across a variety of business functions, in projects of all sizes.
• Work with all related business and support unit stakeholders, including the product owner and technical leader, to analyze and challenge the business requirements, leading to functional discussions and specifications, precise usage and detailed user stories.
• Demonstrate expertise on business processes, life cycles, and data drivers.
• Drive testing activities on new solutions introduced into production.
• Drive input to the development team on requirements and enhancements to existing data sets and reporting technologies.
• Collaborate on the definition of KPIs and metrics to monitor and run the business.
• Plan and execute communication, training, and documentation to enable successful user adoption.
• Drive proofs of concept, tool evaluations, and data quality activities.
• Guide and mentor other Business Analysts in the team.
• Demonstrate innovation and thought leadership toward enhancing business value, process improvements and customer impact.
• Participate in design discussions and be able to implement any design decisions.
• Participate in high-level architecture designs for the Assets Referential strategy.
• Understanding of the Infrastructure Domain/ITIL to bring more of the business side to reporting utilization.
• Excellent upstream and downstream communication skills; ability to communicate complex ideas verbally and through documentation.
Profile: Senior Software Engineer (Cloud Engineer) with 5-6+ years of overall experience in the IT industry and 6+ years of experience in DevOps and Public Cloud (AWS/Azure) as a Cloud Engineer.
Profile Required:
• Test, build, and maintain a landing zone environment in AWS based on security best practices and industry standards.
• Implement infrastructure automation using tools like Terraform, Ansible, or Puppet.
• Configure and manage services such as VPCs, subnets, security groups, IAM roles, ALB/ACM, S3, SSO, IAM and related resources.
• Control Tower, VPC Peering, Transit Gateway, EKS, etc.
• Firm understanding of networking concepts.
• Develop and implement CI/CD pipelines for application deployments.
• Understanding of the benefits of continuous integration and continuous delivery (CI/CD) in a cloud environment.
• Integrate security tools and processes into the DevOps workflow.
• Monitor infrastructure health and performance; identify and troubleshoot issues proactively.
• Collaborate with developers and other engineers to define and implement infrastructure standards and best practices.
• Stay up to date with the latest DevOps technologies and trends.
• Develop and maintain documentation for infrastructure automation and configuration management.
• Contribute to the continuous improvement of the DevOps process and tooling.
• Ability to learn new technologies quickly and adapt to changing environments.
• Knowledge of agile methodologies like Scrum or Kanban and experience working in agile teams.
• Build, manage and operate infrastructure and configuration of all platform environments with a focus on automation and infrastructure as code.
• Kubernetes microservices deployment.
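As an illustration of the kind of landing-zone security check described above, here is a minimal boto3 sketch that audits whether each S3 bucket in the account has a public access block configured; it is a sketch of one possible check, not the client's actual baseline.

# Hypothetical sketch: a small landing-zone security check with boto3 that
# verifies every S3 bucket has a public access block enabled.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())   # all four block settings must be True
        print(f"{name}: public access block {'OK' if fully_blocked else 'PARTIAL'}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise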

  • Salary : Rs. 8,00,000.0 - Rs. 12,00,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Senior Software Engineer - Public Cloud

Job Description

Missions
• Design and development of functionality, development of automated tests, and production and operation of infrastructure products on AWS and/or Azure.
• Apply good development practices in an Agile methodology.
• Development of APIs for managing these products.
• Supporting Public Cloud services in a multi-cloud environment (AWS).
• CI/CD maintenance, support and validation, ensuring respect of standards and good practices.
• Analyzing, correcting and improving issues on the platform infrastructure.
• Maintenance and evolution of the monitoring stack.
• Helping the dev team on every subject related to infrastructure, network and security, including the big data perimeter.
• All in an open-source universe and a software-company culture (Jira, GitHub, Jenkins, Sonar, Ansible, Terraform ...); you may have to publish your products as open source (on github.com).
• Exposure to agile development methodologies.
• Understanding of the Infrastructure Domain/ITIL.
• Ability to convert business requirements into a technical design document and deliver an end-to-end solution.
• Strong debugging skills to identify issues and provide solutions.
• Capable of quickly learning and implementing new tools/technologies.
• Effective communication and presentation skills.
• Ability to interact with client managers, developers, testers and other cross-functional teams.
• Strong involvement in process improvement.
Profile
• 7-10 years of professional experience in a software development environment, primarily in a DevOps role.
• Bachelor's degree in Technology, Computer Science or Information Systems.
• Knowledge of Agile methodology.
• Experienced, quality- and product-oriented developer.
• Fluent in the AWS APIs.
• Independently set up automated testing tools in the CI pipeline.
• Exceptional knowledge of the Public Cloud Platform (AWS) and tools for development using Python / Terraform / Ansible, plus Python / JSON scripting.
• CI/CD (Ansible, AWS, Jenkins, Git).
• Good knowledge of Python / PowerShell.
• Good knowledge of security on Public Cloud (AWS or Azure).
• Good knowledge of Public Cloud APIs/services (AWS and/or Azure).
• Awareness of network communications.
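As an illustration of AWS API fluency applied to the monitoring stack mentioned above, here is a minimal boto3 sketch that creates a CloudWatch CPU alarm; the region, instance ID, SNS topic ARN and thresholds are placeholders, not values from the posting.

# Hypothetical sketch: create a CPU alarm via the CloudWatch API with boto3.
# Region, instance ID, topic ARN and thresholds are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")  # assumed region

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-01",                       # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                                        # 5-minute datapoints
    EvaluationPeriods=3,                               # 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
print("Alarm created or updated.")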

  • Salary : Rs. 0.0 - Rs. 18,00,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Senior DevOps Engineer - AWS