We found 1,751 jobs matching your search.


Job Description

As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as a subject-matter expert (SME).
- Collaborate with and manage the team to deliver.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Develop and optimize data pipelines to enhance data processing efficiency.
- Monitor and troubleshoot data quality issues to ensure accuracy and reliability (see the sketch after this description).

Professional & Technical Skills:
- Must-have skills: proficiency in MicroStrategy Business Intelligence.
- Good-to-have skills: experience with Snowflake Data Warehouse.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 5 years of experience in MicroStrategy Business Intelligence.
- This position is based at our Pune office.
- 15 years of full-time education is required.
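For illustration only (not part of the posting): a minimal, hypothetical sketch of the kind of data-quality check such a pipeline might run before promoting a batch. It uses pandas; the column names and the duplicate-key rule are invented assumptions.

```python
# Hypothetical data-quality gate for a pipeline batch.
# Column names and rules are illustrative, not from the posting.
import pandas as pd

def quality_report(df: pd.DataFrame, key: str) -> dict:
    """Return basic accuracy/reliability signals for a loaded batch."""
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_ratio": df.isna().mean().round(3).to_dict(),
    }

batch = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, None, 5.0]})
report = quality_report(batch, key="order_id")
assert report["duplicate_keys"] == 1  # would block promotion of this batch
print(report)
```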

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: MicroStrategy Business Intelligence

Job Description

INFYSYJP00004605/ECMS 552361 | SAP PM EAM

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: INFYSYJP00004605/ECMS 552361 | SAP PM EAM

Job Description

Required Skillset and Experience:
1. Overall up to 5 years of working experience, preferably in SQL and ETL (Talend).
2. Must have 2+ years of experience with Talend Enterprise/Open Studio and related tools such as Talend API, Talend Data Catalog, TMC, TAC, etc.
3. Must have an understanding of database design and data modeling.
4. Hands-on experience in a coding language such as Java or Python (see the sketch after this list).

Secondary Skillset/Good to Have:
1. 1+ years of experience with MS Power Apps/Power Automate.
2. Basic experience using Power Apps (Canvas/Model-Driven).
3. Understanding of Power Automate workflows.
4. Familiarity with Microsoft Dataverse and SharePoint.
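For illustration only: a minimal, hypothetical Python routine of the kind an ETL job might call out to from a custom-code step, validating CSV rows and staging the good ones into a relational table. The table, columns, and rejection rule are invented; the posting does not specify any of this.

```python
# Hypothetical staging load: validate rows, then insert into SQLite.
# Table/column names are invented for illustration.
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_orders (order_id INTEGER PRIMARY KEY, amount REAL)")

raw = io.StringIO("order_id,amount\n1,10.5\n2,not_a_number\n3,7.25\n")
good, rejected = [], []
for row in csv.DictReader(raw):
    try:  # basic type validation before the load
        good.append((int(row["order_id"]), float(row["amount"])))
    except ValueError:
        rejected.append(row)

conn.executemany("INSERT INTO stg_orders VALUES (?, ?)", good)
conn.commit()
print(f"loaded={len(good)} rejected={len(rejected)}")  # loaded=2 rejected=1
```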

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Business Intelligence - Ranjita Kumari

Job Description

INFYSYJP00002550/557965 - SAP FICO

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: INFYSYJP00002550/557965 - SAP FICO

Job Description

Full Stack Development / DT893JP00010083 / 3 – 6 years / Phani Sarraju

Description: 4–6 years of software development experience building and supporting backend services using .NET.

Role Summary: We are looking for a .NET engineer with strong backend fundamentals and experience working with relational databases and messaging systems. You will design, build, and maintain reliable services, integrate with message queues (preferably IBM MQ), and collaborate with cross-functional teams to deliver scalable solutions. Exposure to Kubernetes and observability tools is a plus.

Key Responsibilities:
• Design, develop, test, and maintain applications/services using .NET (C#).
• Build and optimize data access layers using PostgreSQL (or other relational databases such as SQL Server/MySQL).
• Develop and maintain integrations using message queues (preferably IBM MQ; RabbitMQ/ActiveMQ/Kafka also acceptable); a pattern sketch follows this description.
• Write clean, maintainable, and well-tested code; participate in code reviews and engineering best practices.
• Troubleshoot production issues and improve the performance, reliability, and scalability of services.
• Collaborate with product, QA, DevOps, and other engineering teams for end-to-end delivery.
• Contribute to CI/CD practices, documentation, and operational readiness.

Must-Have Skills / Qualifications:
• 4–6 years of hands-on experience with .NET / C# (APIs/services, background workers, integrations).
• Strong experience with PostgreSQL or another relational database, including schema design and SQL query optimization.
• Experience working with message queues and asynchronous processing patterns (preferably IBM MQ).
• Solid understanding of software engineering fundamentals: OOP, design patterns, REST APIs, debugging, and performance tuning.
• Familiarity with Git-based version control and CI/CD workflows.

Good to Have:
• Experience deploying and running workloads on Kubernetes.
• Exposure to Razor templates (ASP.NET MVC/Razor Pages) for server-side rendering.
• Experience with monitoring/observability using Grafana (dashboards, alerts) and related telemetry concepts.
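The posting's stack is .NET with IBM MQ; as a language-neutral illustration of the queue-consumer pattern it asks for, here is a minimal sketch using only the Python standard library. The bounded queue size, message shape, and sentinel shutdown convention are invented assumptions, not details from the posting.

```python
# Minimal producer/consumer sketch of asynchronous message processing.
# Illustrative only; the posting's actual stack is .NET + IBM MQ.
import queue
import threading

SENTINEL = None                     # shutdown marker, an assumed convention
q = queue.Queue(maxsize=100)        # bounded queue applies backpressure

def consumer() -> None:
    while True:
        msg = q.get()               # blocks until a message arrives
        if msg is SENTINEL:
            q.task_done()
            break
        print(f"processed order {msg['order_id']}")
        q.task_done()               # ack: message handled, safe to forget

t = threading.Thread(target=consumer)
t.start()
for i in range(3):                  # producer side: enqueue work items
    q.put({"order_id": i})
q.put(SENTINEL)
q.join()                            # wait until every message is acked
t.join()
```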

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Full Stack Development - Phani Sarraju

Job Description

ESP Scheduling

  • Salary: Rs. 0.0 - Rs. 1,00,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: INFYSYJP00002645/557785_ESP Scheduling

Job Description

Proficiency in the Ab Initio Graphical Development Environment (GDE): building, modifying, and debugging graphs using standard components (Reformat, Join, Sort, Rollup, Normalize, Lookup, etc.), custom transforms, and embedded code.

Understanding and Assessing Auto-Converted Graphs: graphs produced by Ab Initio's automated COBOL/IMS conversion tool are not standard hand-built graphs. They follow machine-generated patterns — often verbose, deeply nested, or structured in ways that differ significantly from graphs written from scratch. In this environment, these converted graphs must be assessed and modified to implement new requirements or fix defects. This requires the ability to trace generated logic back to the original COBOL source, identify the relevant transform or component within an auto-generated structure, and make targeted, safe changes without disrupting the surrounding converted logic.

Metadata Management: working with the Enterprise Meta Environment (EME) for version control, dependency analysis, impact analysis, and data lineage.

Parameter Handling: using Parameter Definition Language (PDL) effectively.

Orchestration and Workflow: Conduct It (or Express It) for scheduling and managing job flows — this largely replaces JCL and IMS transaction management. Job scheduling is handled via Atomic Automation, which orchestrates Ab Initio workloads in the production environment. A critical aspect of this environment is that Atomic Automation workflows contain parallel job dependencies — multiple jobs may execute concurrently with interdependencies that must be understood when diagnosing failures or assessing the impact of a change. This is distinct from the sequential step-by-step flow within an individual job; the broader workflow topology must also be considered.

Data Flow Traceability and File/Dataset Lineage: a critical problem-solving skill in this environment is the ability to trace data content backwards and forwards through job flows and workflows — following a file or dataset from its point of creation through each transformation it undergoes across jobs, graphs, and workflow stages. This includes understanding what populates a file, how it is transformed at each step, where it is consumed downstream, and how parallel workflow paths may contribute to or depend on its content. This traceability underpins three core data concerns that must always be considered (a traversal sketch follows this description):
- Data Integrity: ensuring that transformations preserve the accuracy and consistency of data values as they move through the system — detecting where values may be incorrectly computed, overwritten, or corrupted relative to what the original IMS application would have produced.
- Missing Data: identifying conditions under which records or fields may be absent, dropped, or skipped — whether due to filtering logic, join mismatches, conditional branches, or upstream job failures — and understanding the downstream impact of that absence.
- Data Retention: understanding how long data persists at each stage — which files are transient (used within a single run), which are retained across cycles, and how GDG-style generational patterns control the lifecycle of datasets. Knowing what data is available, for how long, and under what conditions is essential for recovery, reprocessing, and audit support.

Data Processing and Integration: handling large-scale ETL/ELT processes, including migrated IMS segment data, copybooks, EBCDIC, packed decimal, and zoned decimal formats (a decoding sketch follows as well).

Administration and Operations: Co Operating System runtime management, monitoring, logging, error handling, deployment, and performance tuning (parallelism, multifile systems, resource optimization).
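For illustration only (not part of the posting): the lineage tracing described above amounts to walking a dependency graph of jobs and datasets downstream (impact) or upstream (provenance). A minimal, hypothetical Python sketch follows; the job and dataset names are invented, and real tracing would draw on EME lineage metadata.

```python
# Hypothetical impact walk over a job/dataset dependency graph.
# An edge A -> B means "A produces or feeds B"; names are invented.
from collections import deque

edges = {
    "extract_ims":   ["stg.customers"],
    "load_orders":   ["stg.orders"],
    "stg.customers": ["join_orders"],
    "stg.orders":    ["join_orders"],
    "join_orders":   ["rpt.daily_positions"],
}

def downstream(node: str) -> list[str]:
    """Everything reachable from `node`, i.e. impacted by a change to it."""
    seen, order, q = set(), [], deque([node])
    while q:
        for nxt in edges.get(q.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                q.append(nxt)
    return order

print(downstream("stg.orders"))  # ['join_orders', 'rpt.daily_positions']
```

And a sketch of the numeric formats mentioned under data processing and integration: unpacking a packed-decimal (COMP-3) field, assuming the common BCD layout in which the low nibble of the final byte carries the sign (0xC/0xF positive, 0xD negative). Real copybook field layouts vary, so treat this as an assumption-laden illustration.

```python
# Minimal COMP-3 (packed decimal) decoder: two BCD digits per byte,
# with the sign in the low nibble of the last byte. Illustrative only.
def unpack_comp3(raw: bytes, scale: int = 0):
    digits, sign = [], 1
    for i, b in enumerate(raw):
        hi, lo = b >> 4, b & 0x0F
        if i < len(raw) - 1:
            digits.extend((hi, lo))
        else:
            digits.append(hi)
            sign = -1 if lo == 0x0D else 1
    if any(d > 9 for d in digits):
        raise ValueError("invalid BCD digit")
    value = sign * int("".join(map(str, digits)) or "0")
    return value / 10**scale if scale else value

# 0x12 0x34 0x5C encodes +12345; with two implied decimals, 123.45.
assert unpack_comp3(b"\x12\x34\x5C") == 12345
assert unpack_comp3(b"\x12\x34\x5D", scale=2) == -123.45
```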

  • Salary: Rs. 0.0 - Rs. 1,74,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Ab Initio

Job Description

Vblock

  • Salary: Rs. 0.0 - Rs. 1,00,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: INFYSYJP00002639/558057_Vblock

Job Description

Job Summary: We are looking for a skilled SAP BODS + FICO Consultant with strong expertise in data integration and financial processes. The candidate will be responsible for designing, developing, and supporting data migration and ETL processes using SAP BODS, and for working closely with SAP FICO modules for financial data management and reporting.

Key Responsibilities:

SAP BODS (BusinessObjects Data Services):
- Design, develop, and maintain ETL jobs using SAP BODS.
- Perform data extraction, transformation, and loading from multiple sources.
- Handle data migration and data quality processes.
- Optimize data workflows and ensure performance tuning.
- Troubleshoot and resolve ETL-related issues.

SAP FICO (Financial Accounting & Controlling):
- Work on core FICO modules: GL, AP, AR, Asset Accounting, Cost Center Accounting.
- Support financial data integration between SAP and non-SAP systems.
- Assist in financial reporting and reconciliation processes (see the sketch after this description).
- Collaborate with business users to gather requirements and provide solutions.
- Support month-end and year-end closing activities.

Required Skills:
- Strong hands-on experience in SAP BODS (ETL development).
- Good functional knowledge of SAP FICO.
- Experience in data migration projects (LSMW / BODS / S/4HANA migrations preferred).
- Knowledge of SQL and data warehousing concepts.
- Understanding of financial processes and reporting.
- Strong problem-solving and analytical skills.

Preferred Skills:
- Experience with SAP S/4HANA.
- Exposure to SAP BW / HANA.
- Knowledge of data quality and data governance tools.
- Experience integrating with third-party systems.

Qualifications:
- Bachelor’s degree in Finance, Accounting, IT, or a related field.
- SAP certification in FICO or BODS is a plus.

Soft Skills:
- Good communication and stakeholder management.
- Ability to work independently and in team environments.
- Strong documentation skills.
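For illustration only: a minimal, hypothetical sketch of the kind of source-to-target reconciliation check run after a financial data migration. The account names and amounts are invented; a real check would compare extracts from the legacy system and SAP.

```python
# Hypothetical post-migration reconciliation: compare per-account totals
# between a legacy extract and the migrated SAP data. Figures are invented.
from collections import defaultdict

def totals(rows: list[tuple[str, float]]) -> dict[str, float]:
    acc: dict[str, float] = defaultdict(float)
    for account, amount in rows:
        acc[account] += amount
    return dict(acc)

legacy = [("GL-4000", 1200.00), ("GL-4000", -200.00), ("AP-2100", 550.75)]
migrated = [("GL-4000", 1000.00), ("AP-2100", 550.75), ("AR-1200", 99.99)]

src, tgt = totals(legacy), totals(migrated)
for account in sorted(src.keys() | tgt.keys()):
    diff = round(tgt.get(account, 0.0) - src.get(account, 0.0), 2)
    if diff:  # any nonzero difference needs investigation before sign-off
        print(f"{account}: mismatch of {diff}")
# Only AR-1200 differs here (present in target, absent in source).
```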

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: SAP BODS/FICO