Job Posting Title: USI-CTS SAP Basis 5 - 8 years
Description: Should have hands-on experience on implementation and support projects
• Hands-on experience with S/4HANA and Fiori
• Proficient with SAP Java systems; worked on Java add-ons like MII and PI
• Hands-on experience with ABAP systems such as TM, GTS, and EWM
• Hands-on experience with system integration and configuration
• Hands-on experience with HANA DB
• Worked on Linux OS, preferably on Azure
• Exposure to BODS and BOIS, and replication with SLT
• Good to have exposure to SolMan functionalities like Managed System
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Role : Digital : Snowflake~PL/SQL~Unix Shell Scripting and text processing tools
• Administer and manage Snowflake environments including users, roles, warehouses, databases, and schemas
• Monitor system performance and optimize warehouse usage, queries, and costs
• Implement and manage security controls, data masking, and auditing
• Perform capacity planning, resource monitoring, and usage analysis
• Support data loads, integrations, and troubleshoot Snowflake-related issues
• Work closely with development, data engineering, and business teams for operational support
• Automate routine administrative tasks using SQL, Python, or shell scripting
• Ensure high availability, backup, recovery, and disaster recovery readiness
• Maintain documentation, operational procedures, and best practices
• Demonstrate strong problem-solving skills and proactive incident prevention
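The administration tasks listed above (warehouse management, role-based access, cost control via auto-suspend) can be sketched as a small Python helper that builds the corresponding Snowflake SQL statements. All object names (`ANALYTICS_WH`, `REPORTING_ROLE`, etc.) are hypothetical, and in practice the strings would be executed through the Snowflake connector rather than printed.

```python
# Sketch: generating Snowflake warehouse / RBAC setup statements of the kind
# a Snowflake administrator runs. Object names are illustrative only.

def warehouse_ddl(name: str, size: str = "XSMALL", auto_suspend_secs: int = 60) -> str:
    """Build a CREATE WAREHOUSE statement with cost-friendly defaults."""
    return (
        f"CREATE WAREHOUSE IF NOT EXISTS {name} "
        f"WAREHOUSE_SIZE = '{size}' "
        f"AUTO_SUSPEND = {auto_suspend_secs} "
        f"AUTO_RESUME = TRUE INITIALLY_SUSPENDED = TRUE;"
    )

def role_grants(role: str, database: str, schema: str, warehouse: str) -> list[str]:
    """Role-based access: create a role and grant usage on db/schema/warehouse."""
    return [
        f"CREATE ROLE IF NOT EXISTS {role};",
        f"GRANT USAGE ON DATABASE {database} TO ROLE {role};",
        f"GRANT USAGE ON SCHEMA {database}.{schema} TO ROLE {role};",
        f"GRANT USAGE ON WAREHOUSE {warehouse} TO ROLE {role};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {database}.{schema} TO ROLE {role};",
    ]

statements = [warehouse_ddl("ANALYTICS_WH")] + role_grants(
    "REPORTING_ROLE", "ANALYTICS_DB", "PUBLIC", "ANALYTICS_WH"
)
for stmt in statements:
    print(stmt)
```

The low `AUTO_SUSPEND` value is the usual lever for the "optimize warehouse usage and costs" responsibility: an idle warehouse stops billing after the suspend interval.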
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Role Category : Programming & Design
Role : Digital : Snowflake~PL/SQL~Unix Shell Scripting and text processing tools
We are looking for a skilled Azure Data Engineer with hands-on experience in building scalable data solutions using Azure Data Factory and Azure Databricks. The candidate will be responsible for developing robust data pipelines, optimizing data workflows, and supporting analytics solutions in the Azure cloud environment.
Roles & Responsibilities:
• Design and develop end-to-end data pipelines using Azure Data Factory.
• Build scalable data processing solutions using Azure Databricks (PySpark / Spark SQL).
• Develop and optimize ETL/ELT workflows for structured and unstructured data.
• Integrate data from multiple sources such as APIs, databases, and cloud platforms.
• Implement and manage Azure Data Lake Storage (ADLS Gen2) and data warehouse solutions.
• Monitor, troubleshoot, and optimize pipeline performance and processing efficiency.
• Implement data quality checks, validation rules, and governance standards.
• Collaborate with data analysts, business stakeholders, and DevOps teams for delivery.
• Manage CI/CD pipelines and deployments using Azure DevOps.
• Ensure data security, compliance, and cost/performance optimization in Azure.
Professional & Technical Skills:
• 5–6 years of experience in Data Engineering.
• Strong hands-on expertise in Azure Data Factory (pipelines, triggers, linked services, Integration Runtime) and Azure Databricks (PySpark, Spark SQL, Delta Lake).
• Experience with Azure Data Lake Storage (ADLS Gen2).
• Strong knowledge of SQL and relational databases.
• Proficiency in Python for data processing and automation.
• Solid understanding of data warehousing and dimensional modeling.
• Experience in large-scale distributed data processing.
• Working knowledge of CI/CD, Git, and Azure DevOps.
• Strong analytical and problem-solving skills.
Additional Information:
• Hands-on experience in Delta Lake performance tuning.
• Knowledge of streaming frameworks (Kafka / Azure Event Hub).
• Experience with Power BI integration.
• Microsoft Certified: Azure Data Engineer Associate (or equivalent).
• Experience working in Agile / Scrum environments.
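The "data quality checks and validation rules" responsibility above can be illustrated with a minimal, framework-agnostic sketch: rows are validated against per-column rules before loading, and failures are routed to a reject set. Column names and rules are illustrative, not taken from the posting; in Databricks the same idea would run over a DataFrame.

```python
# Minimal sketch of row-level data-quality checks (null, type/range, and
# domain validation) applied before loading. Columns and rules are examples.

from typing import Any

RULES = {
    "order_id": lambda v: v is not None,                      # null check
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,  # range check
    "country": lambda v: v in {"IN", "US", "DE"},             # domain check
}

def validate(row: dict[str, Any]) -> list[str]:
    """Return the names of columns that fail their rule."""
    return [col for col, rule in RULES.items() if not rule(row.get(col))]

def split_rows(rows: list[dict[str, Any]]):
    """Partition rows into (good, rejected); rejects carry failure reasons."""
    good, rejected = [], []
    for row in rows:
        failures = validate(row)
        if failures:
            rejected.append({"row": row, "failed": failures})
        else:
            good.append(row)
    return good, rejected

good, rejected = split_rows([
    {"order_id": 1, "amount": 10.5, "country": "IN"},
    {"order_id": None, "amount": -3, "country": "FR"},
])
```

Keeping rejects alongside their failure reasons (rather than silently dropping them) is what makes the checks auditable, which is the governance half of that bullet.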
Salary : Rs. 0.0 - Rs. 1,80,000.0
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
9 HDC2 Summary: We are looking for a skilled Azure Data Engineer with hands-on experience in building scalable data solutions using Azure Data Factory and Azure Databricks. The candidate will be responsible for developing robust data pipelines, optimizing data workflows, and supporting analytics solutions in the Azure cloud environment.
Roles & Responsibilities:
• Design and develop end-to-end data pipelines using Azure Data Factory.
• Build scalable data processing solutions using Azure Databricks (PySpark / Spark SQL).
• Develop and optimize ETL/ELT workflows for structured and unstructured data.
• Integrate data from multiple sources such as APIs, databases, and cloud platforms.
• Implement and manage Azure Data Lake Storage (ADLS Gen2) and data warehouse solutions.
• Monitor, troubleshoot, and optimize pipeline performance and processing efficiency.
• Implement data quality checks, validation rules, and governance standards.
• Collaborate with data analysts, business stakeholders, and DevOps teams for delivery.
• Manage CI/CD pipelines and deployments using Azure DevOps.
• Ensure data security, compliance, and cost/performance optimization in Azure.
Professional & Technical Skills:
• 5–6 years of experience in Data Engineering.
• Strong hands-on expertise in Azure Data Factory (pipelines, triggers, linked services, Integration Runtime) and Azure Databricks (PySpark, Spark SQL, Delta Lake).
• Experience with Azure Data Lake Storage (ADLS Gen2).
• Strong knowledge of SQL and relational databases.
• Proficiency in Python for data processing and automation.
• Solid understanding of data warehousing and dimensional modeling.
• Experience in large-scale distributed data processing.
• Working knowledge of CI/CD, Git, and Azure DevOps.
• Strong analytical and problem-solving skills.
Additional Information:
• Hands-on experience in Delta Lake performance tuning.
• Knowledge of streaming frameworks (Kafka / Azure Event Hub).
• Experience with Power BI integration.
• Microsoft Certified: Azure Data Engineer Associate (or equivalent).
• Experience working in Agile / Scrum environments.
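The "develop and optimize ETL/ELT workflows" bullet usually means incremental (delta) loads rather than full reloads. A common pattern, which Azure Data Factory documents as the watermark pattern, is sketched below in plain Python: only rows newer than the last stored watermark are extracted, then the watermark advances. The in-memory dict and field names are illustrative; in ADF the watermark would live in a control table.

```python
# Sketch of watermark-based incremental extraction: pick only rows modified
# after the last successful load, then advance the watermark. Names are
# illustrative, not from any real pipeline.

from datetime import datetime

watermarks = {"orders": datetime(2024, 1, 1)}  # last successfully loaded point

def incremental_extract(table: str, rows: list[dict]) -> list[dict]:
    """Return rows modified after the current watermark and advance it."""
    cutoff = watermarks[table]
    new_rows = [r for r in rows if r["modified_at"] > cutoff]
    if new_rows:
        watermarks[table] = max(r["modified_at"] for r in new_rows)
    return new_rows

source = [
    {"id": 1, "modified_at": datetime(2023, 12, 31)},  # already loaded
    {"id": 2, "modified_at": datetime(2024, 2, 1)},    # new since watermark
]
loaded = incremental_extract("orders", source)
```

Advancing the watermark only after a successful load is what makes the pipeline safely re-runnable: a failed run leaves the cutoff untouched, so the next run picks the same rows up again.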
Salary : Rs. 0.0 - Rs. 2,16,000.0
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Job Posting Title: USI-SAP Syniti with 4+ years of experience
Description: Should be hands-on on SAP Syniti. Experience in SAP retail projects is preferable.
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
As a Custom Software Engineer, you will engage in the development of custom software solutions designed to meet specific business needs. Your typical day will involve coding, enhancing components across various systems or applications, and collaborating with team members to ensure the delivery of scalable and high-performing solutions using modern frameworks and agile practices. You will also participate in discussions to address challenges and contribute to the overall success of your projects.
Roles & Responsibilities:
• Expected to perform independently and become an SME.
• Required active participation/contribution in team discussions.
• Contribute to providing solutions to work-related problems.
• Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
• Conduct code reviews and provide constructive feedback to peers to ensure code quality and best practices.
Professional & Technical Skills:
• Must-Have Skills: Proficiency in Microsoft Azure Analytics Services.
• Experience with cloud computing platforms and services.
• Strong understanding of software development methodologies, particularly agile.
• Familiarity with programming languages such as C#, Java, or Python.
• Knowledge of database management and data integration techniques.
Additional Information:
• The candidate should have a minimum of 3 years of experience in Microsoft Azure Analytics Services.
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
We are looking for a skilled Azure Data Engineer with hands-on experience in building scalable data solutions using Azure Data Factory and Azure Databricks. The candidate will be responsible for developing robust data pipelines, optimizing data workflows, and supporting analytics solutions in the Azure cloud environment.
Roles & Responsibilities:
• Design and develop end-to-end data pipelines using Azure Data Factory.
• Build scalable data processing solutions using Azure Databricks (PySpark / Spark SQL).
• Develop and optimize ETL/ELT workflows for structured and unstructured data.
• Integrate data from multiple sources such as APIs, databases, and cloud platforms.
• Implement and manage Azure Data Lake Storage (ADLS Gen2) and data warehouse solutions.
• Monitor, troubleshoot, and optimize pipeline performance and processing efficiency.
• Implement data quality checks, validation rules, and governance standards.
• Collaborate with data analysts, business stakeholders, and DevOps teams for delivery.
• Manage CI/CD pipelines and deployments using Azure DevOps.
• Ensure data security, compliance, and cost/performance optimization in Azure.
Professional & Technical Skills:
• 5–6 years of experience in Data Engineering.
• Strong hands-on expertise in Azure Data Factory (pipelines, triggers, linked services, Integration Runtime) and Azure Databricks (PySpark, Spark SQL, Delta Lake).
• Experience with Azure Data Lake Storage (ADLS Gen2).
• Strong knowledge of SQL and relational databases.
• Proficiency in Python for data processing and automation.
• Solid understanding of data warehousing and dimensional modeling.
• Experience in large-scale distributed data processing.
• Working knowledge of CI/CD, Git, and Azure DevOps.
• Strong analytical and problem-solving skills.
Additional Information:
• Hands-on experience in Delta Lake performance tuning.
• Knowledge of streaming frameworks (Kafka / Azure Event Hub).
• Experience with Power BI integration.
• Microsoft Certified: Azure Data Engineer Associate (or equivalent).
• Experience working in Agile / Scrum environments.
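The Delta Lake experience asked for above centers on upsert ("MERGE INTO") semantics: incoming change records update matching keys and insert new ones. A pure-Python sketch of those semantics is below; key and field names are illustrative, and Delta/Spark would of course perform this transactionally and at scale.

```python
# Pure-Python sketch of Delta-style MERGE (upsert) semantics: update on key
# match, insert otherwise. Field names are examples only.

def merge_upsert(target: dict[int, dict], updates: list[dict],
                 key: str = "id") -> dict[int, dict]:
    """Apply updates to target keyed by `key`: overwrite on match, insert otherwise."""
    merged = dict(target)  # leave the original table untouched
    for rec in updates:
        merged[rec[key]] = rec  # matched -> update, unmatched -> insert
    return merged

target = {1: {"id": 1, "qty": 5}, 2: {"id": 2, "qty": 7}}
updates = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]
result = merge_upsert(target, updates)
```

In Spark SQL the equivalent is a `MERGE INTO target USING updates ON target.id = updates.id` with `WHEN MATCHED THEN UPDATE` and `WHEN NOT MATCHED THEN INSERT` clauses.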
Responsibilities
We are looking for a skilled Azure Data Engineer with hands-on experience in building scalable data solutions using Azure Data Factory and Azure Databricks. The candidate will be responsible for developing robust data pipelines, optimizing data workflows, and supporting analytics solutions in the Azure cloud environment.Roles & Responsibilities: -Design and develop end-to-end data pipelines using Azure Data Factory.-Build scalable data processing solutions using Azure Databricks (PySpark / Spark SQL).-Develop and optimize ETL/ELT workflows for structured and unstructured data.-Integrate data from multiple sources such as APIs, databases, and cloud platforms.-Implement and manage Azure Data Lake Storage (ADLS Gen2) and data warehouse solutions.-Monitor, troubleshoot, and optimize pipeline performance and processing efficiency.-Implement data quality checks, validation rules, and governance standards.-Collaborate with data analysts, business stakeholders, and DevOps teams for delivery.-Manage CI/CD pipelines and deployments using Azure DevOps.-Ensure data security, compliance, and cost/performance optimization in Azure.Professional & Technical Skills: --5–6 years of experience in Data Engineering.-Strong hands-on expertise in:- Azure Data Factory (pipelines, triggers, linked services, Integration Runtime)-Azure Databricks (PySpark, Spark SQL, Delta Lake)-Experience with Azure Data Lake Storage (ADLS Gen2).-Strong knowledge of SQL and relational databases.-Proficiency in Python for data processing and automation.-Solid understanding of data warehousing and dimensional modeling.-Experience in large-scale distributed data processing.-Working knowledge of CI/CD, Git, and Azure DevOps.-Strong analytical and problem-solving skills.Additional Information: -Hands-on experience in Delta Lake performance tuning.-Knowledge of streaming frameworks (Kafka / Azure Event Hub).-Experience with Power BI integration.-Microsoft Certified: Azure Data Engineer Associate (or 
equivalent).-Experience working in Agile / Scrum environments.
Salary : Rs. 12,00,000.0 - Rs. 15,00,000.0
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Informatica CDI (Cloud Data Integration)
Relevant Experience Range in Required Skills: 6 to 8 Years
Job Description:
Informatica CDI - Production Support
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance