RGS: 10716492
Location: Pune / Hyderabad / Indore
Skills: Digital: Microsoft Azure; Digital: Site Reliability Engineering (SRE)
Experience Required: 6-8 years
Descriptions:
- Lead reliability, availability, and performance engineering for large-scale production systems. Architect and automate infrastructure using IaC tools and advanced CI/CD pipelines. Develop and operate highly scalable environments across AWS, Azure, and GCP.
- Implement and optimize observability stacks using Splunk, Elastic, Grafana, Honeycomb, or similar tools.
- Establish and refine SLIs and SLOs while driving a data-driven reliability culture (see the sketch after this list).
- Own incident response, post-mortems, and long-term preventive engineering.
- Enhance system resilience through chaos testing, auto-healing mechanisms, and advanced automation.
- Collaborate with engineering teams to design fault-tolerant, maintainable architectures.
- Strengthen infrastructure security, compliance, and cost management practices.
- Mentor engineers and champion SRE best practices across teams.
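To make the SLI/SLO expectation above concrete, here is a minimal, hypothetical sketch (not part of the requisition) of computing an availability SLI and its error-budget burn from request counts; the 99.9% target and the request numbers are illustrative assumptions.

```python
# Hypothetical sketch: availability SLI and error-budget burn for a rolling window.
# The 99.9% SLO target and the request counts are illustrative assumptions.

def availability_sli(good_requests: int, total_requests: int) -> float:
    """SLI = fraction of requests that succeeded."""
    if total_requests == 0:
        return 1.0
    return good_requests / total_requests

def error_budget_burn(sli: float, slo_target: float = 0.999) -> float:
    """Fraction of the error budget consumed (1.0 = budget exhausted)."""
    allowed_failure = 1.0 - slo_target
    actual_failure = 1.0 - sli
    return actual_failure / allowed_failure if allowed_failure > 0 else float("inf")

if __name__ == "__main__":
    sli = availability_sli(good_requests=9_985_000, total_requests=10_000_000)
    print(f"SLI: {sli:.5f}, error budget burned: {error_budget_burn(sli):.1%}")
```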
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Role Category: Programming & Design
Role: Digital: Microsoft Azure; Digital: Site Reliability Engineering (SRE)
Please help us with one IAM M-level profile for the below JD.
Below is the SO for the M level.
• Minimum of 8 to 12 years of IAM product suite implementation experience.
• Okta Certified Professional (preferred); should have 4 years of hands-on Okta experience.
• Okta Auth0 experience (preferred).
• Ansible and Terraform automation (preferred).
• Experience with Okta Access Gateway and Okta IWA design, installation, configuration, and operation. Experience with on-premises application lifecycle management and provisioning.
• Experience with cloud lifecycle management, including SCIM and API integration. Strong skills in designing and configuring Service Provider interfaces, SAML v2, MFA, and OAuth2 implementations (see the sketch after this list).
• Provide expert product and service support as a highly technical resource: problem resolution, platform support, and feature creation and implementation, including updates to design, code, and specifications.
• Multi-factor authentication (MFA), password resets, AD synchronization health, and onboarding of new applications and services.
• IBM Security Access Manager experience (preferred).
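As an illustration of the OAuth2 skills called out above, here is a minimal, hypothetical sketch of a client-credentials token request against an Okta-style authorization server; the org URL, client ID/secret, and scope are placeholders, not values from this requisition.

```python
# Hypothetical sketch: OAuth2 client-credentials grant against an Okta-style org.
# The org URL, client credentials, and scope below are placeholders.
import base64
import requests  # assumes the 'requests' package is installed

TOKEN_URL = "https://example.okta.com/oauth2/default/v1/token"  # placeholder org
CLIENT_ID = "your-client-id"          # placeholder
CLIENT_SECRET = "your-client-secret"  # placeholder

def fetch_access_token(scope: str = "api.read") -> str:
    """Request an access token using the client-credentials grant."""
    basic = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
    resp = requests.post(
        TOKEN_URL,
        headers={"Authorization": f"Basic {basic}",
                 "Content-Type": "application/x-www-form-urlencoded"},
        data={"grant_type": "client_credentials", "scope": scope},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```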
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Requirement ID: 10596845
Job Title: Developer
Work Location: Chennai, TN / Mumbai, MH / Pune, MH
Skill Required: BPS: Data Modeller
Experience Range in Required Skills: 7+ Years
Duration: 6 months (extendable)
Job Description:
Key responsibilities
You will be responsible for:
• Defining the data architecture framework covering data modelling, security, virtualization, governance, reference and master data, and data visualization.
• Defining reference architectures and patterns that others can follow to create and improve data systems aligned with industry standards.
• Designing data models leveraging ER, Dimensional, and Data Vault modelling techniques.
• Designing data security and controls to address the customer's data privacy needs in line with current regulations such as GDPR and CCPA.
• Designing technical solutions leveraging data virtualization techniques and tools such as Denodo.
• Collaborating and coordinating with multiple departments, senior stakeholders (including C-level), partners, and external vendors.
• Designing architecture solutions that are in line with business objectives.
• Providing technical leadership, oversight, and direction to the project/execution team.
• Building effective relationships with customers, the CoE, partners, and vendors.
• Verifying requirements, assuring data solution architecture and design, developing a delivery plan, and providing thought leadership for all data solutions, including design and development that meet and exceed customer expectations.
• Analyzing and translating business needs into long-term solution data models.
• Developing best practices for data modeling to ensure consistency within the data landscape.
• To be successful in this role, you will be experienced with cloud-based data solution architectures, the Software Development Life Cycle (both Agile and Waterfall), data engineering and ETL tools/platforms, and data modeling practices.
• Partnering closely with data engineers, developers, DevOps engineers, data scientists, and technical leads to build data pipelines and develop feedback loops that improve data quality, data ingestion, data streaming, workflows, and automation of data transformation jobs.
• Designing, building, testing, and deploying data pipelines and blueprints at scale (a small data-quality check is sketched after the skills list below).
• Designing data-driven product and solution frameworks in collaboration with enterprise data architects and cloud solution architects.
• Code review, code quality, and data and application security experience.
• Driving productivity and stability while accelerating time-to-market through automation of the software lifecycle.
• Driving continuous quality, stability, and compliance to reduce risk.
• Automating and orchestrating releases at scale.
• Measuring, tracking, and improving software delivery; increasing efficiency and reducing risks associated with software delivery.
Key Skills/Knowledge:
• Bachelor's/Master's degree in a relevant field (e.g. Computer Science, Software Engineering, Data Science, AI/ML).
• 7+ years of experience developing, deploying, and monitoring end-to-end data and analytics solutions, with extensive knowledge of evaluation metrics and best practices.
• 7+ years of data warehouse / data lake architecture and development.
• 5+ years of data modeling and architecture, with experience implementing at least one market-leading banking and financial services data model (e.g. from Oracle, Teradata, or IBM).
• Experience with data modelling tools such as Erwin, ER/Studio, or SQL Modeler is preferred.
• Experience in ETL/ELT, data pipelines, data quality, and blueprint development.
• Non-relational database experience (document DB, graph DB, etc.).
• Understanding of data structures, data modeling, and data architecture.
• Strong communication skills and ability to work with ambiguity.
• Able to provide direction and support to data modelling team members.
• Strong architectural knowledge of data analytics system patterns and their pros and cons.
• Strong knowledge of data quality, metadata management, and security frameworks/tools implemented for data on cloud architectures.
• Skilled at understanding customer needs and providing target solutions that are scalable, reliable, and highly available.
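As a small illustration of the data-quality feedback loops mentioned in the responsibilities, here is a hypothetical sketch of a pipeline-level check (null rate and key uniqueness) using pandas; the column names and thresholds are assumptions, not part of the requisition.

```python
# Hypothetical sketch: simple data-quality gate for a pipeline batch.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

def quality_check(df: pd.DataFrame, key_col: str, max_null_rate: float = 0.01) -> list[str]:
    """Return a list of data-quality violations found in one batch."""
    issues = []
    null_rate = df[key_col].isna().mean()
    if null_rate > max_null_rate:
        issues.append(f"{key_col}: null rate {null_rate:.2%} exceeds {max_null_rate:.2%}")
    dup_count = df[key_col].duplicated().sum()
    if dup_count > 0:
        issues.append(f"{key_col}: {dup_count} duplicate keys found")
    return issues

if __name__ == "__main__":
    batch = pd.DataFrame({"customer_id": [1, 2, 2, None], "amount": [10.0, 20.0, 20.0, 5.0]})
    for issue in quality_check(batch, key_col="customer_id"):
        print("FAIL:", issue)
```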
Salary: Rs. 90,000 - Rs. 1,65,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Requirement ID: 10596315
Job Title: Big Data Developer
Work Location: Chennai, TN / Pune, MH / Mumbai, MH
Skill Required: Digital: Big Data and Hadoop Ecosystem - Python, Spark, Hive
Experience Range in Required Skills: 8-10 years; Desired Experience Range: 6+ years relevant
Duration: 6 months (extendable)
Job Description: Big Data
Must-Have
• Proficient in Scala, preferably with certification from an accredited institution.
• Experience building enterprise software solutions; knowledge of OOP concepts and patterns.
• Basic knowledge of HDFS, Hive, and Spark; able to build required skills through quick research, including Google search.
• Computer science background preferred.
• Ability to present information in a concise and clear manner.
Good-to-Have
• Basic knowledge of HDFS, Hive, and Spark (a minimal Spark example is sketched below).
• Basic knowledge of an OOP language such as Java.
• Scripting languages: Unix shell, Python.
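To ground the Spark/Hive expectation, here is a minimal, hypothetical PySpark sketch that reads a Hive table and computes a simple aggregation; the database, table, and column names are placeholders, not artifacts of this project.

```python
# Hypothetical sketch: read a Hive table with Spark and aggregate.
# Database/table/column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-aggregation-sketch")
    .enableHiveSupport()          # requires a configured Hive metastore
    .getOrCreate()
)

# Placeholder table: sales.transactions(txn_date, region, amount)
txns = spark.table("sales.transactions")

daily_by_region = (
    txns.groupBy("txn_date", "region")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("txn_count"))
        .orderBy("txn_date", "region")
)

daily_by_region.show(20, truncate=False)
spark.stop()
```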
Salary: Rs. 90,000 - Rs. 1,65,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
RR- 68629331
Location: Pune and Chennai
Level: SA
Experience: 6 to 10 years
Work mode: 3 days per week from office
Immediate joiners only
Skill Matrix (total experience to be indicated per skill):
• Mainframe
• CICS
• COBOL
• VSAM
Mode of interview: Virtual
Recruiter POC: Mohan Raj
ID: 745505
Mandatory Skills Set – COBOL, DB2, CICS, JCL, VSAM
Nice to have – MQ, Stored Procedures.
• Extensive hands-on experience with COBOL-DB2-CICS-VSAM programs.
• Strong analytical skills.
• Analyze system requirements and develop Cobol programs to meet business needs and improve operational workflows.
• Ensure compliance with industry standards and best practices in mainframe technology to maintain system integrity.
• Monitor system performance and troubleshoot issues to minimize downtime and maximize productivity.
• Collaborate with cross-functional teams to integrate DB2 databases, enhancing data accessibility and security.
• Develop and execute test plans to validate system functionality and ensure seamless integration.
• Implement process improvements to enhance system scalability and reduce operational costs.
• Drive innovation by exploring new technologies and methodologies to enhance mainframe capabilities.
• Knowledge of Git, IDz, and Windsurf is a plus.
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Technical Skills
• Strong understanding of GenAI, LLMs, vector databases, and ML workflows, with continuous learning (a small vector-search sketch follows this list).
• Experience integrating AI into development workflows (copilots, test automation, documentation).
• Proficiency in Python/Java and cloud platforms (Azure/AWS/Google).
• Good grasp of enterprise SDLC, DevOps, APIs, microservices, security, and compliance.
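To make the vector-database item above concrete, here is a minimal, hypothetical sketch of brute-force cosine-similarity search over embeddings using NumPy; the embedding dimension and documents are stand-ins, and no specific vector database or embedding model used in this role is implied.

```python
# Hypothetical sketch: brute-force cosine-similarity search over stored embeddings.
# In practice a vector database (FAISS, pgvector, etc.) would do this at scale;
# the vectors here are random stand-ins for real embeddings.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                                   # illustrative embedding dimension
doc_vectors = rng.normal(size=(5, DIM))   # pretend these are document embeddings
doc_ids = [f"doc-{i}" for i in range(5)]

def top_k(query: np.ndarray, k: int = 3) -> list[tuple[str, float]]:
    """Return the k most similar documents to the query vector (cosine similarity)."""
    q = query / np.linalg.norm(query)
    d = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    scores = d @ q
    best = np.argsort(scores)[::-1][:k]
    return [(doc_ids[i], float(scores[i])) for i in best]

print(top_k(rng.normal(size=DIM)))
```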
Influence & Leadership
• Proven ability to influence teams without formal authority.
• Excellent stakeholder management across verticals and global counterparts.
• Translate complex AI topics into simple, actionable guidance.
• Align with central initiatives to drive AI adoption in the respective department.
Mindset & Traits
• Evangelist mindset, proactive learner, strong communicator.
• Comfortable with ambiguity and fast experimentation.
• Collaborative, customer-centric, and outcome-driven.
Preferred Qualifications
• 9+ years in software engineering, architecture, delivery, or enterprise architecture.
• Experience in transformation programs; a self-driver of owned initiatives.
• Exposure to enterprise-scale systems.
• Certifications in AI/ML, cloud, or agile practices.
Salary: Rs. 10,00,000 - Rs. 14,00,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
These skills are essential because the applications now exist as Ab Initio graphs rather than COBOL
programs.
Proficiency in the Ab Initio Graphical Development Environment (GDE): Building, modifying,
and debugging graphs using standard components (Reformat, Join, Sort, Rollup, Normalize,
Lookup, etc.), custom transforms, and embedded code.
Understanding and Assessing Auto-Converted Graphs: Graphs produced by Ab Initio's
automated COBOL/IMS conversion tool are not standard hand-built graphs. They follow
machine-generated patterns — often verbose, deeply nested, or structured in ways that differ
significantly from graphs written from scratch. In this environment, these converted graphs must
be assessed and modified to implement new requirements or fix defects. This requires the
ability to trace generated logic back to the original COBOL source, identify the relevant transform
or component within an auto-generated structure, and make targeted, safe changes without
disrupting the surrounding converted logic.
Metadata Management: Working with the Enterprise Meta Environment (EME) for version
control, dependency analysis, impact analysis, and data lineage.
Parameter Handling: Using Parameter Definition Language (PDL) effectively.
Orchestration and Workflow: Conduct It (or Express It) for scheduling and managing job flows
— this largely replaces JCL and IMS transaction management. Job scheduling is handled via
Atomic Automation, which orchestrates Ab Initio workloads in the production environment. A
critical aspect of this environment is that Atomic Automation workflows contain parallel job
dependencies — multiple jobs may execute concurrently with interdependencies that must be
understood when diagnosing failures or assessing the impact of a change. This is distinct from the
sequential step-by-step flow within an individual job; the broader workflow topology must also be
considered.
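As a small illustration of reasoning about parallel job dependencies, here is a hypothetical Python sketch that models a job dependency graph and lists every downstream job affected when one job fails; the job names and edges are invented for the example and do not describe any real workflow in this environment.

```python
# Hypothetical sketch: downstream-impact analysis over a parallel job dependency graph.
# Job names and dependencies are invented for illustration only.
from collections import defaultdict, deque

# edges: job -> jobs that depend on it (i.e., run after it)
downstream = defaultdict(list)
for upstream, dependent in [
    ("EXTRACT_CUST", "TRANSFORM_CUST"),
    ("EXTRACT_ACCT", "TRANSFORM_ACCT"),
    ("TRANSFORM_CUST", "MERGE_CUST_ACCT"),
    ("TRANSFORM_ACCT", "MERGE_CUST_ACCT"),
    ("MERGE_CUST_ACCT", "LOAD_WAREHOUSE"),
]:
    downstream[upstream].append(dependent)

def impacted_jobs(failed_job: str) -> set[str]:
    """Return every job that transitively depends on the failed job."""
    seen, queue = set(), deque([failed_job])
    while queue:
        for nxt in downstream[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(impacted_jobs("EXTRACT_CUST")))
# ['LOAD_WAREHOUSE', 'MERGE_CUST_ACCT', 'TRANSFORM_CUST']
```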
Data Flow Traceability and File/Dataset Lineage: A critical problem-solving skill in this
environment is the ability to trace data content backwards and forwards through job flows
and workflows — following a file or dataset from its point of creation through each transformation
it undergoes across jobs, graphs, and workflow stages. This includes understanding what
populates a file, how it is transformed at each step, where it is consumed downstream, and how
parallel workflow paths may contribute to or depend on its content. This traceability underpins
three core data concerns that must always be considered:
Data Integrity: Ensuring that transformations preserve the accuracy and consistency of data
values as they move through the system — detecting where values may be incorrectly
computed, overwritten, or corrupted relative to what the original IMS application would have
produced.
Missing Data: Identifying conditions under which records or fields may be absent, dropped,
or skipped — whether due to filtering logic, join mismatches, conditional branches, or
upstream job failures — and understanding the downstream impact of that absence.
Data Retention: Understanding how long data persists at each stage — which files are
transient (used within a single run), which are retained across cycles, and how GDG-style
generational patterns control the lifecycle of datasets. Knowing what data is available, for
how long, and under what conditions is essential for recovery, reprocessing, and audit
support.
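To illustrate the GDG-style generational pattern described above, here is a hypothetical Python sketch that selects the latest generation of a dataset and identifies generations past a retention limit; the GnnnnV00 naming convention and the retention count are assumptions made for the example.

```python
# Hypothetical sketch: GDG-style generation selection and retention.
# The GnnnnV00 naming convention and retention limit are illustrative assumptions.
import re

GEN_PATTERN = re.compile(r"\.G(\d{4})V\d{2}$")

def generation_number(name: str) -> int:
    m = GEN_PATTERN.search(name)
    if not m:
        raise ValueError(f"not a generation dataset: {name}")
    return int(m.group(1))

def latest_generation(names: list[str]) -> str:
    """Return the dataset name with the highest generation number."""
    return max(names, key=generation_number)

def expired_generations(names: list[str], keep: int = 3) -> list[str]:
    """Return generations beyond the newest `keep`, i.e., candidates for deletion."""
    ordered = sorted(names, key=generation_number, reverse=True)
    return ordered[keep:]

gens = ["CUST.DAILY.G0007V00", "CUST.DAILY.G0008V00",
        "CUST.DAILY.G0009V00", "CUST.DAILY.G0010V00"]
print(latest_generation(gens))       # CUST.DAILY.G0010V00
print(expired_generations(gens))     # ['CUST.DAILY.G0007V00']
```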
Data Processing and Integration: Handling large-scale ETL/ELT processes, including migrated
IMS segment data, copybooks, EBCDIC, packed decimal, and zoned decimal formats.
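Since the migrated data includes packed decimal (COMP-3) fields, here is a hypothetical Python sketch of decoding one such field; the sample bytes and scale are invented and do not reflect any specific copybook in this application.

```python
# Hypothetical sketch: decode a COMP-3 (packed decimal) field.
# Each byte holds two decimal digits; the low nibble of the last byte is the sign
# (0xC/0xF = positive, 0xD = negative). Sample bytes and scale are illustrative.
from decimal import Decimal

def unpack_comp3(raw: bytes, scale: int = 0) -> Decimal:
    digits = []
    sign = 1
    for i, byte in enumerate(raw):
        high, low = byte >> 4, byte & 0x0F
        digits.append(str(high))
        if i < len(raw) - 1:
            digits.append(str(low))
        else:
            sign = -1 if low == 0x0D else 1
    value = int("".join(digits)) * sign
    return Decimal(value) / (Decimal(10) ** scale)

# 0x12 0x34 0x5C -> digits 1,2,3,4,5 with positive sign -> 123.45 at scale 2
print(unpack_comp3(b"\x12\x34\x5C", scale=2))   # 123.45
```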
Administration and Operations: Co Operating System runtime management, monitoring,
logging, error handling, deployment, and performance tuning (parallelism, multifile systems,
resource optimization).
Testing, Validation, and Move to Production (MTP): This is a multi-layered discipline in the
converted Ab Initio environment and must be treated as a structured process, not an afterthought.
Unit Testing of Converted Graphs: Changes to auto-converted graphs require targeted unit
testing at the graph or component level — isolating the modified logic, constructing or
sourcing representative input data, and verifying that outputs match expected results relative
to the original IMS behavior. Because the converted code was machine-generated, even
small changes can have non-obvious ripple effects within the surrounding graph structure;
unit testing must be thorough and deliberate.
Data-Driven Validation: Test cases must be grounded in real or representative data —
including edge cases common in the original IMS environment (e.g., packed decimal
boundary values, missing segments, GDG rollover conditions). Comparing Ab Initio output
against known-good baseline results from the original system (or a prior run) is the most
reliable validation approach.
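As a minimal illustration of the baseline-comparison approach described above, here is a hypothetical Python sketch that diffs an output file against a known-good baseline keyed on the first field of each record; the file names and delimiter are assumptions for the example.

```python
# Hypothetical sketch: compare pipeline output against a known-good baseline,
# keyed on the first field of each delimited record. File names and the
# delimiter are illustrative assumptions.
import csv

def load_records(path: str, delimiter: str = "|") -> dict[str, list[str]]:
    with open(path, newline="") as fh:
        return {row[0]: row for row in csv.reader(fh, delimiter=delimiter) if row}

def compare(baseline_path: str, output_path: str) -> None:
    baseline = load_records(baseline_path)
    output = load_records(output_path)
    for key in baseline.keys() - output.keys():
        print(f"MISSING in output: {key}")
    for key in output.keys() - baseline.keys():
        print(f"UNEXPECTED in output: {key}")
    for key in baseline.keys() & output.keys():
        if baseline[key] != output[key]:
            print(f"MISMATCH for {key}: {baseline[key]} != {output[key]}")

if __name__ == "__main__":
    compare("baseline_run.dat", "candidate_run.dat")   # placeholder file names
```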
End-to-End and Integration Testing: Because jobs within workflows have parallel
dependencies, changes must be tested not just at the graph level but across the full job flow
— verifying that upstream outputs feed correctly into downstream jobs and that no parallel
branches are disrupted.
Move to Production (MTP) Coordination: MTP in this environment requires understanding
and coordinating multiple interdependent activities: packaging and promoting Ab Initio graph
changes through the EME; updating or validating Atomic Automation workflow definitions if
job dependencies change; confirming that MFS screen-related graph changes are consistent
with the deployed screen definitions; communicating the scope and timing of changes to
operations and business stakeholders; and verifying that production data files and GDG
generations are in the correct state prior to cutover. A practitioner must also understand the
rollback implications of a failed MTP — what state files and workflows will be in, and what
steps are needed to recover.
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
These skills are essential because the applications now exist as Ab Initio graphs rather than COBOL
programs.
Proficiency in the Ab Initio Graphical Development Environment (GDE): Building, modifying,
and debugging graphs using standard components (Reformat, Join, Sort, Rollup, Normalize,
Lookup, etc.), custom transforms, and embedded code.
Understanding and Assessing Auto-Converted Graphs: Graphs produced by Ab Initio's
automated COBOL/IMS conversion tool are not standard hand-built graphs. They follow
machine-generated patterns — often verbose, deeply nested, or structured in ways that differ
significantly from graphs written from scratch. In this environment, these converted graphs must
be assessed and modified to implement new requirements or fix defects. This requires the
ability to trace generated logic back to the original COBOL source, identify the relevant transform
or component within an auto-generated structure, and make targeted, safe changes without
disrupting the surrounding converted logic.
Metadata Management: Working with the Enterprise Meta Environment (EME) for version
control, dependency analysis, impact analysis, and data lineage.
Parameter Handling: Using Parameter Definition Language (PDL) effectively.
Orchestration and Workflow: Conduct It (or Express It) for scheduling and managing job flows
— this largely replaces JCL and IMS transaction management. Job scheduling is handled via
Atomic Automation, which orchestrates Ab Initio workloads in the production environment. A
critical aspect of this environment is that Atomic Automation workflows contain parallel job
dependencies — multiple jobs may execute concurrently with interdependencies that must be
understood when diagnosing failures or assessing the impact of a change. This is distinct from the
sequential step-by-step flow within an individual job; the broader workflow topology must also be
considered.
Data Flow Traceability and File/Dataset Lineage: A critical problem-solving skill in this
environment is the ability to trace data content backwards and forwards through job flows
and workflows — following a file or dataset from its point of creation through each transformation
it undergoes across jobs, graphs, and workflow stages. This includes understanding what
populates a file, how it is transformed at each step, where it is consumed downstream, and how
parallel workflow paths may contribute to or depend on its content. This traceability underpins
three core data concerns that must always be considered:
Data Integrity: Ensuring that transformations preserve the accuracy and consistency of data
values as they move through the system — detecting where values may be incorrectly
computed, overwritten, or corrupted relative to what the original IMS application would have
produced.
Missing Data: Identifying conditions under which records or fields may be absent, dropped,
or skipped — whether due to filtering logic, join mismatches, conditional branches, or
upstream job failures — and understanding the downstream impact of that absence.
Data Retention: Understanding how long data persists at each stage — which files are
transient (used within a single run), which are retained across cycles, and how GDG-style
generational patterns control the lifecycle of datasets. Knowing what data is available, for
how long, and under what conditions is essential for recovery, reprocessing, and audit
support.
Data Processing and Integration: Handling large-scale ETL/ELT processes, including migrated
IMS segment data, copybooks, EBCDIC, packed decimal, and zoned decimal formats.
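The migrated data also includes zoned decimal fields; as a companion to packed decimal handling, here is a hypothetical Python sketch of decoding an EBCDIC zoned decimal value; the sample bytes and scale are illustrative assumptions.

```python
# Hypothetical sketch: decode an EBCDIC zoned decimal field.
# Each byte is an EBCDIC digit (0xF0-0xF9); the zone nibble of the last byte
# carries the sign (0xC/0xF = positive, 0xD = negative). Sample bytes are illustrative.
from decimal import Decimal

def unpack_zoned(raw: bytes, scale: int = 0) -> Decimal:
    digits = [str(b & 0x0F) for b in raw]
    sign = -1 if (raw[-1] >> 4) == 0x0D else 1
    value = int("".join(digits)) * sign
    return Decimal(value) / (Decimal(10) ** scale)

# 0xF1 0xF2 0xD3 -> digits 1,2,3 with negative sign -> -1.23 at scale 2
print(unpack_zoned(b"\xF1\xF2\xD3", scale=2))   # -1.23
```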
Administration and Operations: Co Operating System runtime management, monitoring,
logging, error handling, deployment, and performance tuning (parallelism, multifile systems,
resource optimization).
Testing, Validation, and Move to Production (MTP): This is a multi-layered discipline in the
converted Ab Initio environment and must be treated as a structured process, not an afterthought.
Unit Testing of Converted Graphs: Changes to auto-converted graphs require targeted unit
testing at the graph or component level — isolating the modified logic, constructing or
sourcing representative input data, and verifying that outputs match expected results relative
to the original IMS behavior. Because the converted code was machine-generated, even
small changes can have non-obvious ripple effects within the surrounding graph structure;
unit testing must be thorough and deliberate.
Data-Driven Validation: Test cases must be grounded in real or representative data —
including edge cases common in the original IMS environment (e.g., packed decimal
boundary values, missing segments, GDG rollover conditions). Comparing Ab Initio output
against known-good baseline results from the original system (or a prior run) is the most
reliable validation approach.
End-to-End and Integration Testing: Because jobs within workflows have parallel
dependencies, changes must be tested not just at the graph level but across the full job flow
— verifying that upstream outputs feed correctly into downstream jobs and that no parallel
branches are disrupted.
Move to Production (MTP) Coordination: MTP in this environment requires understanding
and coordinating multiple interdependent activities: packaging and promoting Ab Initio graph
changes through the EME; updating or validating Atomic Automation workflow definitions if
job dependencies change; confirming that MFS screen-related graph changes are consistent
with the deployed screen definitions; communicating the scope and timing of changes to
operations and business stakeholders; and verifying that production data files and GDG
generations are in the correct state prior to cutover. A practitioner must also understand the
rollback implications of a failed MTP — what state files and workflows will be in, and what
steps are needed to recover.
Salary: Rs. 0 - Rs. 2,80,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Proficiency in the Ab Initio Graphical Development Environment (GDE): Building, modifying,
and debugging graphs using standard components (Reformat, Join, Sort, Rollup, Normalize,
Lookup, etc.), custom transforms, and embedded code.
Understanding and Assessing Auto-Converted Graphs: Graphs produced by Ab Initio's
automated COBOL/IMS conversion tool are not standard hand-built graphs. They follow
machine-generated patterns — often verbose, deeply nested, or structured in ways that differ
significantly from graphs written from scratch. In this environment, these converted graphs must
be assessed and modified to implement new requirements or fix defects. This requires the
ability to trace generated logic back to the original COBOL source, identify the relevant transform
or component within an auto-generated structure, and make targeted, safe changes without
disrupting the surrounding converted logic.
Metadata Management: Working with the Enterprise Meta Environment (EME) for version
control, dependency analysis, impact analysis, and data lineage.
Parameter Handling: Using Parameter Definition Language (PDL) effectively.
Orchestration and Workflow: Conduct It (or Express It) for scheduling and managing job flows
— this largely replaces JCL and IMS transaction management. Job scheduling is handled via
Atomic Automation, which orchestrates Ab Initio workloads in the production environment. A
critical aspect of this environment is that Atomic Automation workflows contain parallel job
dependencies — multiple jobs may execute concurrently with interdependencies that must be
understood when diagnosing failures or assessing the impact of a change. This is distinct from the
sequential step-by-step flow within an individual job; the broader workflow topology must also be
considered.
Data Flow Traceability and File/Dataset Lineage: A critical problem-solving skill in this
environment is the ability to trace data content backwards and forwards through job flows
and workflows — following a file or dataset from its point of creation through each transformation
it undergoes across jobs, graphs, and workflow stages. This includes understanding what
populates a file, how it is transformed at each step, where it is consumed downstream, and how
parallel workflow paths may contribute to or depend on its content. This traceability underpins
three core data concerns that must always be considered:
Data Integrity: Ensuring that transformations preserve the accuracy and consistency of data
values as they move through the system — detecting where values may be incorrectly
computed, overwritten, or corrupted relative to what the original IMS application would have
produced.
Missing Data: Identifying conditions under which records or fields may be absent, dropped,
or skipped — whether due to filtering logic, join mismatches, conditional branches, or
upstream job failures — and understanding the downstream impact of that absence.
Data Retention: Understanding how long data persists at each stage — which files are
transient (used within a single run), which are retained across cycles, and how GDG-style
generational patterns control the lifecycle of datasets. Knowing what data is available, for
how long, and under what conditions is essential for recovery, reprocessing, and audit
support.
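As a small illustration of tracing a dataset backwards through job flows as described above, here is a hypothetical Python sketch that walks a producer/consumer mapping to list the upstream jobs and datasets feeding a given file; all names are invented for the example.

```python
# Hypothetical sketch: upstream lineage trace for a dataset.
# 'produced_by' maps each dataset to the job that writes it; 'inputs' maps each
# job to the datasets it reads. All names are invented for illustration.
produced_by = {
    "acct_summary.dat": "BUILD_SUMMARY",
    "acct_clean.dat":   "CLEANSE_ACCT",
    "acct_raw.dat":     "EXTRACT_ACCT",
}
inputs = {
    "BUILD_SUMMARY": ["acct_clean.dat", "rates.dat"],
    "CLEANSE_ACCT":  ["acct_raw.dat"],
    "EXTRACT_ACCT":  [],
}

def upstream_lineage(dataset: str) -> list[str]:
    """Return the chain of jobs and datasets that feed `dataset`, nearest first."""
    lineage, frontier = [], [dataset]
    while frontier:
        ds = frontier.pop()
        job = produced_by.get(ds)
        if job is None:
            continue  # externally supplied dataset; the trace stops here
        lineage.append(f"{ds} <- {job}")
        frontier.extend(inputs.get(job, []))
    return lineage

print(upstream_lineage("acct_summary.dat"))
# ['acct_summary.dat <- BUILD_SUMMARY', 'acct_clean.dat <- CLEANSE_ACCT', 'acct_raw.dat <- EXTRACT_ACCT']
```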
Data Processing and Integration: Handling large-scale ETL/ELT processes, including migrated
IMS segment data, copybooks, EBCDIC, packed decimal, and zoned decimal formats.
Administration and Operations: Co Operating System runtime management, monitoring,
logging, error handling, deployment, and performance tuning (parallelism, multifile systems,
resource optimization).
Salary: Rs. 0 - Rs. 1,74,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance