We found 243 jobs matching your search

Advanced Search

Skills

Locations

Experience

Job Description

Digital : Snowflake~PL/SQL~Unix Shell Scripting and text processing tools
  • Administer and manage Snowflake environments including users, roles, warehouses, databases, and schemas
  • Monitor system performance and optimize warehouse usage, queries, and costs
  • Implement and manage security controls, data masking, and auditing
  • Perform capacity planning, resource monitoring, and usage analysis
  • Support data loads and integrations, and troubleshoot Snowflake-related issues
  • Work closely with development, data engineering, and business teams for operational support
  • Automate routine administrative tasks using SQL, Python, or shell scripting
  • Ensure high availability, backup, recovery, and disaster recovery readiness
  • Maintain documentation, operational procedures, and best practices
  • Demonstrate strong problem-solving skills and proactive incident prevention
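The administrative-automation bullet above can be sketched in Python. This is only an illustration, not part of the posting: the database, schema, and role names are hypothetical, and the statements follow standard Snowflake grant syntax.

```python
# Illustrative sketch: composing the GRANT statements a Snowflake admin
# typically runs to give a role read-only access to a schema.
# All object names below are hypothetical.

def read_only_grants(database: str, schema: str, role: str) -> list:
    """Return the GRANT statements for read access to database.schema."""
    target = f"{database}.{schema}"
    return [
        f"GRANT USAGE ON DATABASE {database} TO ROLE {role};",
        f"GRANT USAGE ON SCHEMA {target} TO ROLE {role};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {target} TO ROLE {role};",
        # FUTURE grant covers tables created after this runs:
        f"GRANT SELECT ON FUTURE TABLES IN SCHEMA {target} TO ROLE {role};",
    ]

stmts = read_only_grants("ANALYTICS", "SALES", "REPORTING_RO")
```

In practice a script like this would feed the statements to a Snowflake session (e.g., via the `snowflake-connector-python` library) rather than just building strings.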

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Digital : Snowflake~PL/SQL~Unix Shell Scripting and text processing tools

Job Description

Workday Skills: Workday HCM Technical

Responsibilities

Workday Skills: Workday HCM Technical
  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Workday

Job Description

Digital : Microsoft Azure~Digital : Databricks

Role Description: Azure Databricks Engineer

Key Responsibilities

Platform & Infrastructure
  • Oversee Databricks platform configuration, resource management, workspace structuring, and cluster optimization.
  • Monitor and troubleshoot performance issues across clusters, jobs, notebooks, and pipelines.
  • Implement governance, security, compliance, and data access control using Role-Based Access Control (RBAC) and Unity Catalog.

Pipeline Development & Architecture
  • Design and implement end-to-end data pipelines using PySpark, SQL, and Delta Lake within a medallion architecture using Data Factory and Databricks.
  • Build real-time and batch DLT pipelines using Databricks Delta Live Tables with a focus on reliability and scalability.
  • Optimize Lakehouse architecture for performance, cost-efficiency, and data integrity.
  • Automate data ingestion, transformation, and validation, including support for streaming (Auto Loader) and scheduled workflows.
  • Perform data transformations, cleansing, and validation using data quality rules for consistent and accurate data sets.
  • Manage and monitor job orchestration, ensuring efficient and reliable pipeline runs.

CI/CD & DevOps
  • Design and maintain CI/CD pipelines for Databricks artifacts (notebooks, jobs, libraries) using tools such as Azure DevOps, GitHub Actions, Terraform, or Jenkins.
  • Support trunk-based development, deployment workflows, and infrastructure-as-code practices.
  • Manage version control and automated testing using Git and related DevOps practices.

Collaboration & Delivery
  • Collaborate with product owners, business stakeholders, and data teams to gather requirements and translate them into technical solutions.
  • Drive the adoption of best practices in coding, versioning, testing, deployment, monitoring, and security.
  • Provide thought leadership on best practices in data engineering, architecture, and cloud computing.

Performance Optimization
  • Deliver optimized Spark jobs and SQL queries for large-scale data processing.
  • Implement partitioning, caching, and indexing strategies to improve performance and scalability of big data workloads.
  • Conduct POCs for capacity planning and recommend appropriate infrastructure optimizations for cost effectiveness.

Documentation & Knowledge Sharing
  • Create detailed documentation, and review it, for data workflows, SOPs, architectural reviews, etc.
  • Mentor junior team members and promote a culture of learning and innovation.
  • Promote a culture of optimization and cost saving, and enable research-driven development.

Required Qualifications: Technical Expertise
  • 5 years in data engineering, with a strong focus on Databricks and Azure ecosystems.
  • Deep hands-on experience with Data Factory, Databricks Lakehouse architecture, Delta Lake, PySpark, and Spark job optimization.
  • Proficiency in Python, SQL, and optionally Scala for building scalable ETL/ELT pipelines.
  • Strong SQL skills are essential, with hands-on experience in SQL Server or other RDBMS platforms.
  • Strong experience in designing and optimizing DLT pipelines.
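The "data quality rules" responsibility above can be sketched in plain Python. This sketch is not from the posting: the rule names, fields, and thresholds are hypothetical, and in Databricks itself such checks would typically be declared as Delta Live Tables expectations rather than hand-rolled.

```python
# Illustrative sketch: simple data-quality rules of the kind applied when
# cleansing bronze records into a silver table. Field names are hypothetical.

RULES = {
    "id_present": lambda row: row.get("id") is not None,
    "amount_positive": lambda row: isinstance(row.get("amount"), (int, float))
                                   and row["amount"] >= 0,
    "country_known": lambda row: row.get("country") in {"IN", "US", "GB"},
}

def cleanse(rows):
    """Split rows into (valid, rejected) according to RULES.
    Rejected rows carry the names of the rules they failed."""
    valid, rejected = [], []
    for row in rows:
        failed = [name for name, check in RULES.items() if not check(row)]
        if failed:
            rejected.append({**row, "_failed_rules": failed})
        else:
            valid.append(row)
    return valid, rejected

good, bad = cleanse([
    {"id": 1, "amount": 10.0, "country": "IN"},
    {"id": None, "amount": -5, "country": "FR"},
])
```

Routing failures to a quarantine list (rather than dropping them) keeps the rejected records auditable, which mirrors how DLT expectations can drop, warn, or fail per rule.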

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Digital : Microsoft Azure~Digital : Databricks

Job Description

Microsoft Power Platform

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Microsoft Power Platform

Job Description

Digital : Cloud DevOps

Description:
  - Knowledge of cloud platforms: AWS, Azure, or GCP
  - Hands-on Linux working experience
  - Hands-on with Docker containers and building Infrastructure as Code for Kubernetes (e.g., Terraform)
  - Application monitoring/logging tools such as Prometheus, Grafana, Zabbix, etc.
  - Familiarity with one or more DevOps/IT operations tools: code management and build tools (e.g., GitLab, Maven), continuous integration tools (e.g., Jenkins, Nexus, JFrog)
  - Configuration/deployment tools (e.g., Ansible, Chef); hands-on experience with Ansible playbooks
  - Exposure to setup and configuration of Elasticsearch, Kafka, Memcached, databases, etc.
  - Experience with web servers such as nginx, Tomcat, and JBoss
  - Experience in Shell and Python scripting; Linux knowledge
  - Experience with container orchestration tools such as Kubernetes, OpenShift, etc.
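The monitoring requirement above can be sketched in Python. This is purely illustrative: the metric names and threshold are hypothetical, the parser handles only the simple `name value` lines of the Prometheus text format (no labels), and a real setup would express this as a Prometheus alerting rule instead.

```python
# Illustrative sketch: parsing Prometheus-style text-exposition metric lines
# and flagging values above an alert threshold. Simplified: ignores labels.

def parse_metrics(text: str) -> dict:
    """Parse 'name value' lines, skipping # HELP / # TYPE comments."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

def over_threshold(metrics: dict, name: str, limit: float) -> bool:
    """Return True when the named metric exceeds the alert limit."""
    return metrics.get(name, 0.0) > limit

SCRAPE = """\
# HELP node_cpu_usage Fraction of CPU in use
node_cpu_usage 0.93
node_memory_usage 0.41
"""
alerts = over_threshold(parse_metrics(SCRAPE), "node_cpu_usage", 0.90)
```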

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Digital : Cloud DevOps

Job Description

  • Minimum of a bachelor's degree in engineering from a reputed institution
  • Minimum 3-5 years of Java/J2EE development experience, with a minimum of 1-2 years of Finacle DEH/FEBA experience
  • Good understanding of Java concepts and technologies: Spring Boot, web services (REST), security, etc.
  • Good understanding of at least one RDBMS (Oracle, SQL Server)
  • Awareness of design principles, design patterns, performance tuning, and profiling
  • Strong debugging and troubleshooting skills
  • Demonstrated good judgement in selecting methods and techniques for obtaining solutions
  • Exposure to one or more tools: Git, Jira, Jenkins, SonarQube, Maven, Gradle
  • Preferably exposure to web containers (Tomcat/JBoss), Docker, Kubernetes, and cloud deployments
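The REST and troubleshooting skills listed above often come down to resilience patterns such as retry with exponential backoff. The posting is Java-centric; this Python sketch only illustrates the pattern itself, with hypothetical names and delays.

```python
import time

# Illustrative sketch of retry-with-exponential-backoff, a common pattern
# when calling flaky REST endpoints. Delays here are tiny for demo purposes.

def call_with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn(); on exception, retry up to `attempts` total tries,
    doubling the sleep between tries. Re-raise the last error."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky():
    """Simulated endpoint that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retry(flaky)
```

In a Spring Boot codebase the equivalent would usually be handled declaratively (e.g., a retry annotation or resilience library) rather than hand-written.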

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Edgeverve Open Req.

Job Description

Incident Management

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Incident Management

Job Description

Digital : Microsoft Azure~Digital : Databricks

Role Description: Azure Databricks Platform Administrator

Key Responsibilities

Databricks Administration
  • Workspace setup, configuration, and governance
  • Unity Catalog configuration and management
  • Cluster policies, job clusters, and access controls
  • Secret scopes and credential management
  • External locations and storage credentials setup
  • Monitoring cluster performance and cost optimization
  • Managing notebooks, repos, workflows, and permissions
  • Managing Delta Lake storage and table properties

Azure Cloud Administration
  • ADLS Gen2 storage setup and management
  • Azure networking (VNet, private endpoints, firewall rules)
  • Azure Key Vault integration
  • Managed identities and service principals configuration
  • Azure RBAC and IAM governance
  • Storage access via SAS, OAuth, and service principal
  • Azure monitoring and logging setup

Platform Operations
  • Environment setup (Dev/QA/Prod)
  • CI/CD integration (Azure DevOps/GitHub)
  • Deployment automation
  • Incident troubleshooting
  • Capacity planning and scaling

Essential Skills: Azure Databricks Platform Administrator
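The cluster-policy and cost-optimization duties above can be sketched as composing a policy definition in Python. The limits below are hypothetical; the `{"type": "fixed"/"range"}` shape follows the documented Databricks cluster-policy format, but verify against your workspace before use.

```python
import json

# Illustrative sketch: a Databricks cluster-policy definition that caps
# cluster size and enforces auto-termination for cost control.
# All numeric limits are hypothetical examples.

def cost_control_policy(max_workers: int, autoterminate_min: int) -> str:
    """Return a cluster-policy JSON document as a string."""
    policy = {
        # Force clusters under this policy to shut down when idle:
        "autotermination_minutes": {"type": "fixed",
                                    "value": autoterminate_min},
        # Cap how large a cluster users can request:
        "num_workers": {"type": "range", "maxValue": max_workers},
    }
    return json.dumps(policy, indent=2)

policy_json = cost_control_policy(8, 30)
```

Such a document would then be registered through the Databricks REST API or Terraform; this sketch stops at building it.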

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Digital : Microsoft Azure~Digital : Databricks

Job Description

67594951:
  • Primary mandatory skills: strong L3 security incident analysis/lead; SIEM: Microsoft Sentinel, Microsoft Defender EDR, SentinelOne EDR; KQL query knowledge
  • Secondary mandatory skills: MITRE ATT&CK framework knowledge, networking concepts, use-case optimization suggestions
  • Flexible to hire in any location; if not, job location: Chennai / Bangalore / Pune / rest of India except Hyderabad
  • Detailed job description: JD attached

67595201:
  • Primary mandatory skills:
    1. EDR platform management and optimization experience in MS Defender and SentinelOne
    2. SIEM solution management and implementation in MS Sentinel:
       - Detection engineering
       - Log source management
       - KQL logic and Defender advanced hunting query building
       - Logic App implementation
       - SOAR playbook and use-case creation
       - AIR implementation
       - M365 Copilot agent creation and implementation
       - Dashboard creation and optimization
  • Secondary mandatory skills: Azure WAF, AWS WAF, and F5 DCS WAF (Distributed Cloud): configuration, maintenance, and optimization
  • Flexible to hire in any location; if not, job location: Chennai / Bangalore / Pune / Hyderabad
  • Detailed job description: JD attached
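The KQL query-building skill above can be sketched by composing a Sentinel hunting query in Python. The `SecurityEvent` table and event ID 4625 (failed logon) are standard in Sentinel, but the time window and threshold here are hypothetical examples.

```python
# Illustrative sketch: composing a Microsoft Sentinel KQL hunting query
# that surfaces accounts with repeated failed logons. Parameters are
# hypothetical; a real detection would live in an analytics rule.

def failed_logon_query(hours: int, min_count: int) -> str:
    """Build a KQL query string for brute-force-style failed logons."""
    return "\n".join([
        "SecurityEvent",
        f"| where TimeGenerated > ago({hours}h)",
        "| where EventID == 4625",  # Windows failed-logon event
        "| summarize Attempts = count() by Account, IpAddress",
        f"| where Attempts >= {min_count}",
        "| order by Attempts desc",
    ])

query = failed_logon_query(24, 10)
```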

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : SOC Analyst