We found 1,584 jobs matching your search

Job Description

As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data effectively across various systems, contributing to the overall efficiency and reliability of data management within the organization.

Roles & Responsibilities:
  • Perform independently and grow into a subject-matter expert (SME).
  • Participate actively in team discussions.
  • Contribute solutions to work-related problems.
  • Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
  • Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
  • Must-have: proficiency in PySpark.
  • Strong understanding of data modeling and database design principles.
  • Experience with ETL tools and processes.
  • Familiarity with cloud platforms such as AWS or Azure.
  • Knowledge of data warehousing concepts and technologies.

Additional Information:
  • The candidate should have a minimum of 2 years of experience in PySpark.
  • This position is based at our Bengaluru office.
  • 15 years of full-time education is required.

  • Salary: Rs. 0.0 - Rs. 1,10,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Engineer
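The ETL pipeline work described in the listing above can be illustrated with a brief sketch. This example is not part of the listing: the record layout and function names are invented, and it uses plain Python for portability, whereas the role itself calls for PySpark (where the same extract, transform, and load steps would use `spark.read`, `DataFrame.filter`, and `DataFrame.write`).

```python
# Minimal extract-transform-load (ETL) sketch with a data-quality step.
# Illustrative only; all data and names are invented for the example.

def extract():
    # Stand-in for reading from a source system (file, table, API).
    return [
        {"id": 1, "amount": "120.50"},
        {"id": 2, "amount": ""},        # bad row: missing amount
        {"id": 3, "amount": "87.00"},
    ]

def transform(rows):
    # Data-quality step: drop rows with missing amounts, cast types.
    clean = []
    for row in rows:
        if row["amount"]:
            clean.append({"id": row["id"], "amount": float(row["amount"])})
    return clean

def load(rows, target):
    # Stand-in for writing to a warehouse table.
    target.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(len(warehouse))  # 2 rows survive the quality check
```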

Job Description

SAP Developer

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: SAP Developer

Job Description

As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems, contributing to the overall efficiency and effectiveness of data management within the organization.

Roles & Responsibilities:
  • Perform independently and grow into a subject-matter expert (SME).
  • Participate actively in team discussions.
  • Contribute solutions to work-related problems.
  • Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
  • Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
  • Must-have: proficiency in PySpark.
  • Strong understanding of data modeling and database design principles.
  • Experience with ETL tools and processes.
  • Familiarity with cloud platforms such as AWS or Azure.
  • Knowledge of data warehousing concepts and technologies.

Additional Information:
  • The candidate should have a minimum of 3 years of experience in PySpark.
  • This position is based at our Bengaluru office.
  • 15 years of full-time education is required.

  • Salary: Rs. 0.0 - Rs. 1,300.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Engineer

Job Description

Key Responsibilities:
  • Provide 24x7 production support for Mainframe databases (such as DB2 for z/OS, IMS, VSAM, or IDMS) by participating in rotational shifts and on-call duties.
  • Administer, monitor, and maintain Mainframe databases in production and non-production environments.
  • Implement and manage backup, recovery, tuning, high-availability, and disaster-recovery (DR) strategies.
  • Perform routine health checks, proactively identify performance bottlenecks, and optimize queries and database configurations.
  • Automate routine operations and reporting using Mainframe utilities, scripting languages, or job-scheduling tools (JCL, REXX, Python, Shell, etc.).
  • Manage database security, user permissions, and compliance with organizational policies.
  • Coordinate and execute database patching, upgrades, and migrations with minimal downtime.
  • Collaborate with application and infrastructure teams to support deployment activities and resolve database-related issues.
  • Monitor and resolve incidents, including root-cause analysis and documentation of resolutions.
  • Maintain accurate documentation of database procedures, configurations, and support processes for internal knowledge sharing.

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Mainframe IMS & DB2 DBA

Job Description

Mandatory Skills:
  • Strong experience in Guidewire ClaimCenter, with hands-on experience making complex configuration changes.
  • Knowledge of claims integration.
  • Experience with FNOL and post-FNOL flows.
  • Experience working in Guidewire Cloud.

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Guidewire ClaimCenter Developer

Job Description

Service Desk Management

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Service Desk Management

Job Description

As a Quality Engineer, you will enable full-stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Your typical day will involve performing continuous testing for security, API, and regression suites, creating automation strategies, and supporting data and environment configurations. You will also participate in code reviews, monitor and report defects, and engage in continuous-improvement activities for the end-to-end testing process, ensuring that the highest quality standards are met throughout the project lifecycle.
  • Data analysis: SQL and visualization (visualization skills are low priority, but strong SQL is essential).
  • Knowledge of or exposure to the Azure ecosystem, especially ADF, Azure Databricks, and Python.
  • Understanding of building data products: data mapping, transformation (business and technical), and validation of data products for data quality and adherence to specs.

Roles & Responsibilities:
  • Perform independently and grow into a subject-matter expert (SME).
  • Participate actively in team discussions.
  • Contribute solutions to work-related problems.
  • Develop and implement automated testing scripts to enhance testing efficiency.
  • Collaborate with cross-functional teams to ensure seamless integration of testing processes.

Professional & Technical Skills:
  • Must-have: proficiency in data analytics.
  • Strong analytical skills to interpret complex data sets.
  • Experience with data visualization tools to present findings effectively.
  • Familiarity with automation testing frameworks and tools.
  • Ability to troubleshoot and resolve issues in testing environments.

Additional Information:
  • The candidate should have a minimum of 3 years of experience in data analytics.
  • This position is based at our Bengaluru office.
  • 15 years of full-time education is required.

  • Salary: Rs. 10,00,000.0 - Rs. 12,00,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Analytics
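The "validation of data products" duty in the listing above can be sketched as a SQL data-quality check. This example is not part of the listing: the `orders` table and its columns are invented, and sqlite3 stands in for the Azure Databricks SQL engines the role actually names; the pattern (one counting query per quality rule) carries over.

```python
import sqlite3

# Build a tiny in-memory table standing in for a data product.
# Table name, columns, and rows are invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, country TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.5, "IN"), (2, None, "IN"), (3, 87.0, None)],
)

# Data-quality rules: null amounts and null countries violate the spec.
checks = {
    "null_amount": "SELECT COUNT(*) FROM orders WHERE amount IS NULL",
    "null_country": "SELECT COUNT(*) FROM orders WHERE country IS NULL",
}
failures = {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}
print(failures)  # every count should be 0 for a clean data product
```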

Job Description

Looking for an SAP TM Consultant/Senior Consultant with S/4 TM sidecar or standalone implementation experience. This is a client-facing role.

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: SAP TM

Job Description

As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data effectively across various systems, contributing to the overall efficiency and reliability of data management within the organization.

Roles & Responsibilities:
  • Perform independently and grow into a subject-matter expert (SME).
  • Participate actively in team discussions.
  • Contribute solutions to work-related problems.
  • Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
  • Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
  • Must-have: proficiency in PySpark.
  • Strong understanding of data modeling and database design principles.
  • Experience with ETL tools and processes.
  • Familiarity with cloud platforms such as AWS or Azure.
  • Knowledge of data warehousing concepts and technologies.

Additional Information:
  • The candidate should have a minimum of 2 years of experience in PySpark.
  • This position is based at our Bengaluru office.
  • 15 years of full-time education is required.

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Engineer