Digital : Snowflake~PL/SQL~Unix Shell Scripting and text processing tools
• Administer and manage Snowflake environments including users, roles, warehouses, databases, and schemas
• Monitor system performance and optimize warehouse usage, queries, and costs
• Implement and manage security controls, data masking, and auditing
• Perform capacity planning, resource monitoring, and usage analysis
• Support data loads, integrations, and troubleshoot Snowflake-related issues
• Work closely with development, data engineering, and business teams for operational support
• Automate routine administrative tasks using SQL, Python, or shell scripting
• Ensure high availability, backup, recovery, and disaster recovery readiness
• Maintain documentation, operational procedures, and best practices
• Demonstrate strong problem-solving skills and proactive incident prevention
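The automation bullet above can be illustrated with a short sketch. This is a minimal, hypothetical example (the warehouse names and idle threshold are invented, not part of the posting) of the kind of SQL-generating logic an admin script might feed to a Snowflake connection; only the pure statement-building step is shown.

```python
# Sketch: build ALTER WAREHOUSE ... SUSPEND statements for idle warehouses.
# Warehouse names and the 30-minute threshold are hypothetical examples.

def suspend_statements(warehouses, idle_minutes, threshold=30):
    """Return SUSPEND statements for warehouses idle at least `threshold` minutes."""
    stmts = []
    for name in warehouses:
        # Warehouses missing from the metrics dict are treated as active (0 min idle).
        if idle_minutes.get(name, 0) >= threshold:
            stmts.append(f"ALTER WAREHOUSE {name} SUSPEND;")
    return stmts
```

In a real script, each returned statement would be executed through a database cursor; keeping the statement-building step as a pure function makes the automation easy to test before it touches a live account.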
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Role Category : Programming & Design
Role : Digital : Snowflake~PL/SQL~Unix Shell Scripting and text processing tools
Digital : Microsoft Azure~Digital : Databricks
Role Description: Azure Databricks Engineer
Key Responsibilities
Platform & Infrastructure
• Oversee Databricks platform configuration, resource management, workspace structuring, and cluster optimization.
• Monitor and troubleshoot performance issues across clusters, jobs, notebooks, and pipelines.
• Implement governance, security, compliance, and data access control using Role-Based Access Control (RBAC) and Unity Catalog.
Pipeline Development & Architecture
• Design and implement end-to-end data pipelines using PySpark, SQL, and Delta Lake within a medallion architecture using Data Factory and Databricks.
• Build real-time and batch pipelines using Databricks Delta Live Tables (DLT) with a focus on reliability and scalability.
• Optimize Lakehouse architecture for performance, cost-efficiency, and data integrity.
• Automate data ingestion, transformation, and validation, including support for streaming (Auto Loader) and scheduled workflows.
• Perform data transformations, cleansing, and validation using data quality rules for consistent and accurate data sets.
• Manage and monitor job orchestration, ensuring efficient and reliable pipeline runs.
CI/CD & DevOps
• Design and maintain CI/CD pipelines for Databricks artifacts (notebooks, jobs, libraries) using tools such as Azure DevOps, GitHub Actions, Terraform, or Jenkins.
• Support trunk-based development, deployment workflows, and infrastructure-as-code practices.
• Manage version control and automated testing using Git and related DevOps practices.
Collaboration & Delivery
• Collaborate with product owners, business stakeholders, and data teams to gather requirements and translate them into technical solutions.
• Drive the adoption of best practices in coding, versioning, testing, deployment, monitoring, and security.
• Provide thought leadership on best practices in data engineering, architecture, and cloud computing.
Performance Optimization
• Deliver optimized Spark jobs and SQL queries for large-scale data processing.
• Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads.
• Conduct POCs for capacity planning and recommend appropriate infrastructure optimizations for cost-effectiveness.
Documentation & Knowledge Sharing
• Create and review detailed documentation for data workflows, SOPs, architectural reviews, etc.
• Mentor junior team members and promote a culture of learning and innovation.
• Promote a culture of optimization and cost saving, and enable research-driven development.
Required Qualifications
Technical Expertise
• 5 years in data engineering, with a strong focus on Databricks and Azure ecosystems.
• Deep hands-on experience with Data Factory, Databricks Lakehouse Architecture, Delta Lake, PySpark, and Spark job optimization.
• Proficiency in Python, SQL, and optionally Scala for building scalable ETL/ELT pipelines.
• Strong SQL skills, with hands-on experience in SQL Server or other RDBMS platforms.
• Strong experience in designing and optimizing DLT pipelines.
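The cleansing-and-validation responsibility above can be sketched in plain Python; in practice this logic would run as PySpark transformations in a Databricks notebook, and the field names and rules below are illustrative assumptions, not part of the posting.

```python
# Sketch: apply simple data quality rules to raw ("bronze") records before
# promoting them to a cleansed ("silver") layer. Field names are hypothetical.

def clean_records(records):
    """Split records into (valid, rejected).

    Rules: keep records with a non-empty id and a non-negative numeric amount;
    normalize whitespace in the id field on the way through.
    """
    valid, rejected = [], []
    for rec in records:
        rec_id = (rec.get("id") or "").strip()
        amount = rec.get("amount")
        if rec_id and isinstance(amount, (int, float)) and amount >= 0:
            valid.append({**rec, "id": rec_id})
        else:
            rejected.append(rec)
    return valid, rejected
```

Keeping rejected records separate (rather than silently dropping them) mirrors the common quarantine pattern, so data quality failures remain auditable.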
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Role Category : Programming & Design
Role : Digital : Microsoft Azure~Digital : Databricks
Digital: Cloud DevOps
Description:
- Knowledge of cloud platforms: AWS, Azure, or GCP
- Hands-on working experience with Linux
- Hands-on experience with Docker containers and building infrastructure as code for Kubernetes (e.g. Terraform)
- Application monitoring/logging tools such as Prometheus, Grafana, Zabbix, etc.
- Familiarity with one or more DevOps/IT operations tools: code management and build tools (e.g. GitLab, Maven), continuous integration tools (e.g. Jenkins, Nexus, JFrog)
- Configuration/deployment tools (e.g. Ansible, Chef); hands-on experience with Ansible playbooks
- Exposure to setup and configuration of Elasticsearch, Kafka, Memcached, databases, etc.
- Experience with web servers such as Nginx, Tomcat, and JBoss
- Experience in Shell and Python scripting; Linux knowledge
- Experience with container orchestration tools such as Kubernetes, OpenShift, etc.
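As a small illustration of the monitoring skills listed above, here is a hedged Python sketch of the threshold logic that a Prometheus or Zabbix alert rule encodes; the metric names and thresholds are invented examples, not part of the posting.

```python
# Sketch: flag metrics that meet or exceed their alert threshold,
# as an alerting rule (e.g. in Prometheus) would. Names are hypothetical.

def breached_alerts(metrics, thresholds):
    """Return the sorted names of metrics at or above their threshold.

    Metrics without a configured threshold are ignored.
    """
    return sorted(name for name, value in metrics.items()
                  if name in thresholds and value >= thresholds[name])
```

A real deployment would express this declaratively in the monitoring tool's rule language; the point of the sketch is only the comparison semantics (inclusive threshold, unconfigured metrics skipped).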
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
• Minimum of a bachelor’s degree in engineering from a reputed institution
• Minimum 3-5 years of Java/J2EE development experience, with a minimum of 1-2 years of Finacle DEH/FEBA experience
• Good understanding of Java concepts and technologies: Spring Boot, web services (REST), security, etc.
• Good understanding of at least one RDBMS (Oracle, SQL Server)
• Awareness of design principles, design patterns, performance tuning, profiling
• Strong debugging and troubleshooting skills
• Demonstrate good judgement in selecting methods and techniques for obtaining solutions
• Exposure to one or more tools: Git, Jira, Jenkins, SonarQube, Maven, Gradle
• Preferable to have exposure to web containers (Tomcat/JBoss), Docker, Kubernetes, and cloud deployments.
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance