We found 1704 jobs matching your search

Job Description

INFYSYJP00003472/560590-Business Analyst

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: INFYSYJP00003472/560590-Business Analyst

Job Description

As a Custom Software Engineer, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to gather requirements, developing innovative solutions, and ensuring that the applications meet the highest standards of quality and performance. You will also engage in problem-solving activities, providing guidance and support to your team members while fostering a collaborative environment that encourages creativity and efficiency.

Roles & Responsibilities:
  • Expected to be an SME.
  • Collaborate with and manage the team to perform.
  • Responsible for team decisions.
  • Engage with multiple teams and contribute to key decisions.
  • Provide solutions to problems for the immediate team and across multiple teams.
  • Mentor junior team members to enhance their skills and knowledge.
  • Continuously evaluate and improve development processes to increase efficiency.

Professional & Technical Skills:
  • Must-have skills: proficiency in Spring Boot.
  • Good-to-have skills: experience with microservices architecture.
  • Strong understanding of RESTful API design and development.
  • Familiarity with cloud platforms such as AWS or Azure.
  • Experience with database technologies such as MySQL or MongoDB.

Additional Information:
  • The candidate should have a minimum of 5 years of experience in Spring Boot.
  • This position is based at our Pune office.
  • 15 years of full-time education is required.
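The posting asks for a strong understanding of RESTful API design. As a minimal, hedged illustration of the conventions involved (resource-oriented URLs, HTTP verbs, JSON bodies, status codes), here is a sketch using only Python's standard library; it stands in for, and is not, a Spring Boot implementation, and the `ITEMS` resource is invented for the example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical in-memory resource store (illustration only).
ITEMS = {1: {"id": 1, "name": "widget"}}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /items/<id> returns one resource as JSON, or 404 if absent.
        try:
            item = ITEMS[int(self.path.rstrip("/").rsplit("/", 1)[-1])]
        except (ValueError, KeyError):
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(item).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve on a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/items/1") as resp:
    data = json.loads(resp.read())
server.shutdown()
```

A Spring Boot equivalent would express the same route as a `@GetMapping`-annotated controller method; the point here is only the REST contract, not the framework.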

  • Salary: Rs. 0.0 - Rs. 2,16,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Custom Software Engineer

Job Description

Forgerock Identity Management

Essential Skills: Digital : Microsoft Power BI
Location: Noida / Thane
Skills: Digital : Microsoft Power BI
Experience Required: 6-8 years

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Digital Microsoft Power BI

Job Description

Forgerock Identity Management

Role Description / Essential Skills:
  • Proven experience as a Fusion Tester
  • Oracle Fusion People HCM
  • Oracle Fusion workflow (People Stream) testing
  • Manual testing

Skills: Oracle Fusion HCM Workforce Deployment
Experience Required: 4-6 years

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Oracle Fusion HCM Consultant

Job Description

  • Design, develop, and maintain automation test scripts using Playwright with TypeScript.
  • Maintain and extend existing Playwright automation frameworks (preferably custom-built frameworks).
  • Automate test scenarios for retail business workflows and ensure high-quality coverage.
  • Contribute to and improve automation strategies, standards, and best practices.
  • Work with cross-functional teams to understand requirements and convert them into robust automation test cases.
  • Integrate automated tests into CI/CD pipelines using tools such as Azure DevOps, Jenkins, GitHub Actions, or similar.
  • Identify, document, and track defects using Jira; participate in defect triage and support teams during root cause analysis.
  • Perform automation feasibility analysis and provide effort estimations.
  • Support execution of regression suites and continuous testing activities.
  • Communicate effectively with stakeholders, providing clear updates and insights on automation progress.
  • Independently handle automation tasks, solve technical challenges, and drive automation initiatives with minimal supervision.
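One of the pipeline tools named in the posting is GitHub Actions. A minimal sketch of wiring a Playwright TypeScript suite into it might look like the config fragment below; the workflow name, triggers, and Node version are assumptions, while the `checkout`/`setup-node` actions and the `npx playwright` commands are standard:

```yaml
name: playwright-tests        # hypothetical workflow name
on: [push, pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps   # install browsers + OS deps
      - run: npx playwright test                  # run the TypeScript test suite
```

In practice such a job is usually extended with report uploads and retries, but this is the core shape of the CI integration.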

  • Salary: Rs. 0.0 - Rs. 1,00,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: 560435-Playwright With TypeScript

Job Description

Summary: As a Data Engineer, your typical day involves designing, developing, and maintaining comprehensive data solutions that support the generation, collection, and processing of data. You will be responsible for creating efficient data pipelines that facilitate smooth data flow across various systems. Ensuring the accuracy and quality of data is a key part of your role, along with implementing extract, transform, and load (ETL) processes to enable seamless data migration and deployment. Your work will be integral to supporting data-driven decision-making and operational efficiency within the organization.

Roles & Responsibilities:
  • Expected to perform independently and become an SME.
  • Active participation and contribution in team discussions is required.
  • Contribute to providing solutions to work-related problems.
  • Collaborate with cross-functional teams to understand data requirements and deliver scalable data solutions.
  • Monitor and optimize data workflows to improve performance and reliability.
  • Document data processes and maintain clear communication with stakeholders regarding data pipeline status.
  • Assist junior team members by sharing knowledge and providing guidance on best practices.

Professional & Technical Skills:
  • Must-have skills: proficiency in PySpark, Apache Spark, AWS Glue.
  • Good-to-have skills: experience with Apache Spark, AWS Glue.
  • Strong knowledge of distributed computing frameworks and big data processing techniques.
  • Experience in building and managing scalable ETL pipelines.
  • Familiarity with cloud-based data services and infrastructure.
  • Ability to troubleshoot and resolve data pipeline issues efficiently.

Additional Information:
  • The candidate should have a minimum of 3 years of experience in PySpark.
  • This position is based at our Bhubaneswar office.
  • 15 years of full-time education is required.
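The role centers on building ETL pipelines in PySpark and AWS Glue. As a minimal stand-in sketch of the extract-transform-load shape using only the Python standard library (the source data, field names, and data-quality rule are invented; a real pipeline would use Spark DataFrames and Glue jobs), it might look like:

```python
import csv
import io
import sqlite3

# Extract: read raw rows (an in-memory CSV stands in for a source file or bucket).
raw = io.StringIO("id,amount\n1,10\n2,\n3,25\n")
rows = list(csv.DictReader(raw))

# Transform: drop incomplete records and cast types (a trivial data-quality rule).
clean = [(int(r["id"]), float(r["amount"])) for r in rows if r["amount"]]

# Load: write cleaned rows into a target table (SQLite stands in for a warehouse).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)", clean)
total = con.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
```

The same three stages map directly onto a Glue job: read from S3, apply transforms on a distributed DataFrame, and write to the target store.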

  • Salary: Rs. 0.0 - Rs. 1,56,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Engineer

Job Description

.Net Full Stack

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: .Net Full Stack

Job Description

Project Details: The project is focused on RLT accounts cleanup, specifically identifying duplicate RLT accounts in Salesforce, distinguishing between Golden and duplicate accounts, and aligning all relevant data to the Golden record according to business rules.
Project Scope: Until end of 2026
Resources required: 4-5
Start date: ASAP

Skill set requirements:

Data Profiling & Data Quality Analysis
  • Identify duplicates, incomplete records, invalid values, and inconsistencies across facilities and contacts.
  • Understand "Golden Record" vs. duplicate record logic; de-duping based on business rules.
  • Good understanding of HCO (Health Care Organization) data.
  • Check completeness, accuracy, consistency, and reliability before analysis.

Match & Merge Concepts
  • Record matching (exact and fuzzy) to create a 360-degree view of the account.
  • Survivorship rules (which data stays on the Golden Record).

Data Cleaning
  • Handle missing values.
  • Remove duplicates.
  • Correct inconsistencies and align/move data from the duplicate to the Golden Record.

SQL Skills
  • Strong proficiency in SQL, including joins, GROUP BY, and subqueries.
  • Filter, sort, aggregate, and join datasets on Salesforce.

Programming for Analysis
  • Python or R.
  • Exposure to cloud platforms such as AWS, Azure, or GCP.

Business and Domain Understanding
  • Pharma business understanding, especially of HCO data.
  • Recommend actions based on findings.

Must-haves:
  • Data quality concepts; able to identify duplicate accounts for US healthcare organizations.
  • Salesforce object understanding and basic knowledge of SOQL to join different objects.
  • Basic understanding of de-duping concepts for healthcare accounts.
  • Good communication skills; able to work with stakeholders.
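The match-and-merge and survivorship ideas the posting describes can be sketched in Python. The records, the fuzzy-name matching rule, and the "most complete record wins" survivorship rule below are all invented for illustration; production HCO matching uses much richer rules (addresses, identifiers, business hierarchies):

```python
import difflib

# Hypothetical account records standing in for Salesforce HCO accounts.
accounts = [
    {"id": "A1", "name": "City General Hospital", "phone": "555-0100", "email": ""},
    {"id": "A2", "name": "City General Hosp.",    "phone": "",         "email": "info@cgh.example"},
    {"id": "A3", "name": "Lakeside Clinic",       "phone": "555-0199", "email": ""},
]

def is_match(a, b, threshold=0.85):
    """Fuzzy match on name: treat records as duplicates above a similarity threshold."""
    ratio = difflib.SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return ratio >= threshold

# Cluster records into groups of probable duplicates (naive pairwise matching).
clusters = []
for rec in accounts:
    for cluster in clusters:
        if is_match(rec, cluster[0]):
            cluster.append(rec)
            break
    else:
        clusters.append([rec])

def merge(cluster):
    """Survivorship: the most complete record becomes Golden; fill its gaps from duplicates."""
    golden = max(cluster, key=lambda r: sum(bool(v) for v in r.values())).copy()
    for rec in cluster:
        for field, value in rec.items():
            if not golden[field] and value:
                golden[field] = value
    return golden

golden_records = [merge(c) for c in clusters]
```

Here A1 and A2 cluster together; A1 survives as the Golden Record and inherits A2's email, while the duplicate would then be flagged for cleanup per the business rules.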

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: RLT Account Cleanup Program

Job Description

SAP BTP + CPI

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: SAP BTP + CPI