
Job Description

As a Custom Software Engineer, you will develop custom software solutions: designing, coding, and enhancing components across systems and applications. A typical day includes collaborating with team members to apply modern frameworks and agile practices, delivering scalable, high-performing solutions tailored to specific business needs, and contributing ideas to improve the software development process.

Roles & Responsibilities:
  • Proven ability to work with Java Spring Boot.
  • Understanding of event-based communication with Apache Kafka.
  • Experience with test automation and continuous integration/continuous delivery (GitHub, Jenkins, Azure, test and lint tooling, etc.).
  • Expected to perform independently and become an SME.
  • Active participation and contribution in team discussions.
  • Contribute to solving work-related problems.
  • Assist in the design and architecture of software components to ensure they meet business requirements.
  • Collaborate with cross-functional teams to gather and analyze requirements for software development.

Professional & Technical Skills:
  • Must-Have Skills: proficiency in Spring Boot.
  • Good-to-Have Skills: JavaScript, automated testing, Java Enterprise Edition, design/UX sensibility, expert knowledge of Kafka, and an understanding of AI/ML (specifically GenAI and LLMs).
  • Strong understanding of RESTful API design and development.
  • Experience with microservices architecture and cloud deployment.
  • Familiarity with database technologies such as SQL and NoSQL.

Additional Information:
  • The candidate should have a minimum of 3 years of experience with Spring Boot.
  • This position is based at our Bengaluru office.
  • 15 years of full-time education is required.
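The event-based communication this role calls for can be pictured with a minimal in-memory publish/subscribe sketch. This is plain Python standing in for Apache Kafka, and the topic and field names are invented for illustration:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-memory stand-in for topic-based messaging (the role uses Apache Kafka)."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every subscriber of the topic receives the event; the producer
        # never needs to know who (or how many) the consumers are.
        for handler in self._subscribers[topic]:
            handler(event)

received = []
bus = EventBus()
# "order-created" is a hypothetical topic name, not from the job description.
bus.subscribe("order-created", lambda e: received.append(e["order_id"]))
bus.publish("order-created", {"order_id": 42})
print(received)  # [42]
```

In a real deployment, Kafka producers and consumers (and a broker) would replace the in-process bus, but the decoupling of publisher from subscribers is the same idea.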

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Application Developer

Job Description

INFYSYJP00003141 539424_DNAFS_AMEX_Technology Lead

  • Salary : Rs. 10,00,000 - Rs. 25,00,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : INFYSYJP00003141 539424_DNAFS_AMEX_Technology Lead

Job Description

Location: HYDERABAD, TS; NOIDA, UP; or BANGALORE, KA
Role Description: Back-end Engineer
Essential Skills: Java 11/17, Spring Boot, .NET, Apollo GraphQL, Express, databases
Skills: Advanced Java Concepts; Digital: Microservices; Digital: Spring Boot; Digital: Vue.js
Experience Required: 6-8 years


  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Back-end Engineer (Advanced Java, Microservices, Spring Boot, Vue.js)

Job Description

These skills are essential because the applications now exist as Ab Initio graphs rather than COBOL programs.
  • Proficiency in the Ab Initio Graphical Development Environment (GDE): Building, modifying, and debugging graphs using standard components (Reformat, Join, Sort, Rollup, Normalize, Lookup, etc.), custom transforms, and embedded code.
  • Understanding and Assessing Auto-Converted Graphs: Graphs produced by Ab Initio's automated COBOL/IMS conversion tool are not standard hand-built graphs. They follow machine-generated patterns: often verbose, deeply nested, or structured in ways that differ significantly from graphs written from scratch. In this environment, these converted graphs must be assessed and modified to implement new requirements or fix defects. That requires the ability to trace generated logic back to the original COBOL source, identify the relevant transform or component within an auto-generated structure, and make targeted, safe changes without disrupting the surrounding converted logic.
  • Metadata Management: Working with the Enterprise Meta Environment (EME) for version control, dependency analysis, impact analysis, and data lineage.
  • Parameter Handling: Using Parameter Definition Language (PDL) effectively.
  • Orchestration and Workflow: Conduct It (or Express It) for scheduling and managing job flows, largely replacing JCL and IMS transaction management. Job scheduling is handled via Atomic Automation, which orchestrates Ab Initio workloads in the production environment. Critically, Atomic Automation workflows contain parallel job dependencies: multiple jobs may execute concurrently with interdependencies that must be understood when diagnosing failures or assessing the impact of a change. This is distinct from the sequential step-by-step flow within an individual job; the broader workflow topology must also be considered.
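The parallel-dependency reasoning described above can be sketched as a small impact analysis over a job dependency graph. The job names and edges below are hypothetical, not taken from a real Atomic Automation schedule:

```python
from collections import deque

# Hypothetical workflow: extract fans out to two parallel transforms,
# which join again at load. Each job maps to the jobs it depends on.
dependencies = {
    "transform_a": ["extract"],
    "transform_b": ["extract"],
    "load": ["transform_a", "transform_b"],
    "report": ["load"],
}

def downstream_impact(failed_job: str, deps: dict) -> set:
    """Return every job that directly or transitively consumes failed_job's output."""
    # Invert the dependency map: parent -> children that depend on it.
    children: dict = {}
    for job, parents in deps.items():
        for parent in parents:
            children.setdefault(parent, []).append(job)
    impacted, queue = set(), deque([failed_job])
    while queue:
        for child in children.get(queue.popleft(), []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted

# A failure in extract impacts both parallel branches and everything past the join.
print(sorted(downstream_impact("extract", dependencies)))
# ['load', 'report', 'transform_a', 'transform_b']
```

The point of the sketch is that diagnosing a failure requires the workflow topology, not just the sequential steps of one job: a single upstream failure can invalidate several concurrently running branches.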
  • Data Flow Traceability and File/Dataset Lineage: A critical problem-solving skill in this environment is the ability to trace data content backwards and forwards through job flows and workflows, following a file or dataset from its point of creation through each transformation it undergoes across jobs, graphs, and workflow stages. This includes understanding what populates a file, how it is transformed at each step, where it is consumed downstream, and how parallel workflow paths may contribute to or depend on its content. This traceability underpins three core data concerns:
    - Data Integrity: Ensuring that transformations preserve the accuracy and consistency of data values as they move through the system, detecting where values may be incorrectly computed, overwritten, or corrupted relative to what the original IMS application would have produced.
    - Missing Data: Identifying conditions under which records or fields may be absent, dropped, or skipped (filtering logic, join mismatches, conditional branches, or upstream job failures) and understanding the downstream impact of that absence.
    - Data Retention: Understanding how long data persists at each stage: which files are transient (used within a single run), which are retained across cycles, and how GDG-style generational patterns control the lifecycle of datasets. Knowing what data is available, for how long, and under what conditions is essential for recovery, reprocessing, and audit support.
  • Data Processing and Integration: Handling large-scale ETL/ELT processes, including migrated IMS segment data, copybooks, EBCDIC, packed decimal, and zoned decimal formats.
  • Administration and Operations: Co>Operating System runtime management, monitoring, logging, error handling, deployment, and performance tuning (parallelism, multifile systems, resource optimization).
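As a concrete example of the migrated data formats mentioned above, a packed decimal (COMP-3) field stores two digits per byte with the sign in the final nibble. The sketch below is for illustration only; in practice the conversion is handled by Ab Initio's record formats, not hand-rolled code:

```python
def unpack_comp3(data: bytes) -> int:
    """Decode a COBOL COMP-3 (packed decimal) field.

    Two digits per byte; the low nibble of the last byte holds the sign
    (0xC or 0xF means positive, 0xD means negative).
    """
    digits = []
    sign_nibble = 0x0C
    for i, byte in enumerate(data):
        hi, lo = byte >> 4, byte & 0x0F
        digits.append(hi)
        if i < len(data) - 1:
            digits.append(lo)       # interior low nibble is another digit
        else:
            sign_nibble = lo        # final low nibble is the sign
    value = int("".join(str(d) for d in digits))
    return -value if sign_nibble == 0x0D else value

# 0x12 0x34 0x5C encodes +12345; the same digits with a 0xD sign nibble flip the sign.
print(unpack_comp3(bytes([0x12, 0x34, 0x5C])))   # 12345
print(unpack_comp3(bytes([0x12, 0x34, 0x5D])))   # -12345
```

Boundary values of exactly this kind (maximum digits, negative signs, unsigned 0xF nibbles) are the packed decimal edge cases the validation section below warns about.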
  • Testing, Validation, and Move to Production (MTP): A multi-layered discipline in the converted Ab Initio environment that must be treated as a structured process, not an afterthought.
    - Unit Testing of Converted Graphs: Changes to auto-converted graphs require targeted unit testing at the graph or component level: isolating the modified logic, constructing or sourcing representative input data, and verifying that outputs match expected results relative to the original IMS behavior. Because the converted code was machine-generated, even small changes can have non-obvious ripple effects within the surrounding graph structure; unit testing must be thorough and deliberate.
    - Data-Driven Validation: Test cases must be grounded in real or representative data, including edge cases common in the original IMS environment (e.g., packed decimal boundary values, missing segments, GDG rollover conditions). Comparing Ab Initio output against known-good baseline results from the original system (or a prior run) is the most reliable validation approach.
    - End-to-End and Integration Testing: Because jobs within workflows have parallel dependencies, changes must be tested not just at the graph level but across the full job flow, verifying that upstream outputs feed correctly into downstream jobs and that no parallel branches are disrupted.
    - MTP Coordination: MTP in this environment requires understanding and coordinating multiple interdependent activities: packaging and promoting Ab Initio graph changes through the EME; updating or validating Atomic Automation workflow definitions if job dependencies change; confirming that MFS screen-related graph changes are consistent with the deployed screen definitions; communicating the scope and timing of changes to operations and business stakeholders; and verifying that production data files and GDG generations are in the correct state prior to cutover.
    - Rollback: A practitioner must also understand the rollback implications of a failed MTP: what state files and workflows will be in, and what steps are needed to recover.
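The baseline-comparison validation described above amounts to a record-level diff between a known-good run and the converted graph's output. A minimal sketch, with invented field names and toy account records:

```python
def compare_to_baseline(baseline: list, converted: list, key: str) -> dict:
    """Diff converted-graph output against a known-good baseline run.

    Records are dicts; `key` names the field that uniquely identifies a record.
    """
    base = {r[key]: r for r in baseline}
    conv = {r[key]: r for r in converted}
    missing = sorted(base.keys() - conv.keys())   # records dropped by the converted graph
    extra = sorted(conv.keys() - base.keys())     # records the baseline never produced
    changed = sorted(k for k in base.keys() & conv.keys() if base[k] != conv[k])
    return {"missing": missing, "extra": extra, "changed": changed}

# Hypothetical account records for illustration.
baseline = [{"acct": 1, "bal": 100}, {"acct": 2, "bal": 250}]
converted = [{"acct": 1, "bal": 100}, {"acct": 2, "bal": 251}, {"acct": 3, "bal": 0}]
print(compare_to_baseline(baseline, converted, "acct"))
# {'missing': [], 'extra': [3], 'changed': [2]}
```

The three buckets map directly onto the concerns above: "missing" flags dropped data, "extra" and "changed" flag integrity deviations from what the original IMS application would have produced.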

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Custom Software Engineer

Job Description

INFYSYJP00003695 559917 - Senior Appian Developer - Hyd - EAIS

  • Salary : Rs. 10,00,000 - Rs. 25,00,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : INFYSYJP00003695 559917 - Senior Appian Developer - Hyd - EAIS

Job Description

Training & Culture Building
  • Conduct workshops, labs, and AI coaching sessions for engineers and managers.
  • Lead internal communities of practice for AI and GenAI.
  • Promote innovation through showcases, hackathons, and associated central initiatives.

Metrics & Continuous Improvement
  • Establish and report KPIs: productivity gains, adoption rates, automation impact (aligned with THRIVE).

Required Skills & Experience

Technical Skills
  • Strong understanding of GenAI, LLMs, vector databases, and ML workflows (continuous learning).
  • Experience integrating AI into development workflows (copilots, test automation, documentation).
  • Proficiency in Python/Java and cloud platforms (Azure/AWS/Google).
  • Good grasp of enterprise SDLC, DevOps, APIs, microservices, security, and compliance.

Influence & Leadership
  • Proven ability to influence teams without formal authority.
  • Excellent stakeholder management across verticals and global counterparts.
  • Ability to translate complex AI topics into simple, actionable guidance.
  • Alignment with central initiatives to drive AI adoption in the respective department.

Mindset & Traits
  • Evangelist mindset, proactive learner, strong communicator.
  • Comfortable with ambiguity and fast experimentation.
  • Collaborative, customer-centric, and outcome-driven.

Preferred Qualifications
  • 9+ years in software engineering, architecture, delivery, or enterprise architecture.
  • Experience in transformation programs; self-driven ownership of initiatives.
  • Exposure to enterprise-scale systems.
  • Certifications in AI/ML, cloud, or agile practices.
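The vector-database concept in the skills list can be illustrated with a toy cosine-similarity search. Three-dimensional vectors stand in for real embeddings, and all document names are invented; a real system would use an embedding model and a vector store:

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": each document is a point whose direction encodes its topic.
docs = {
    "ci_pipeline_guide": [0.9, 0.1, 0.0],
    "kafka_runbook": [0.1, 0.9, 0.2],
    "holiday_policy": [0.0, 0.1, 0.9],
}

query = [0.8, 0.2, 0.1]  # an embedded query, closest in direction to the CI guide
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # ci_pipeline_guide
```

Retrieval like this (embed the query, rank stored vectors by similarity) is the core operation behind the GenAI retrieval workflows the role would coach teams on.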

  • Salary : Rs. 15,00,000 - Rs. 20,00,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : AI Platform & MLOps Engineer (Azure)

Job Description

Lead Software Engineer - Test Automation - (25000JDZ)

Missions
  • Lead and support change management activities across the product change and release management process
  • Define, implement, and own test strategies, test plans, and quality metrics
  • Hands-on involvement in manual and automation testing, with a focus on risk-based testing
  • Design, review, and maintain test cases, test scenarios, and automation frameworks
  • Guide and mentor QA team members on best practices, tools, and frameworks
  • Review test results, identify defects, perform root cause analysis, and drive defect resolution
  • Collaborate closely with developers, product owners, DevOps, and release teams
  • Ensure testing alignment with CI/CD pipelines and enterprise quality standards
  • Act as a point of contact for QA during releases and production rollouts

Profile: Required Experience
  • 5-10 years of experience in Manual Testing and/or Automation Testing (automation preferred)
  • Proven experience in leading or mentoring QA teams
  • Strong experience in test design, execution, and maintenance
  • Solid knowledge of JUnit and TestNG frameworks
  • Hands-on experience in building or maintaining test automation frameworks
  • Experience working in Agile/Scrum environments

Preferred / Nice-to-Have Skills
  • Working knowledge of Tricentis Tosca (preferred, not mandatory)
  • Proficiency in Selenium and Core Java
  • Strong understanding of object-oriented programming (OOP) concepts
  • Experience with version control systems (e.g., Git)
  • Knowledge of API test automation
  • Knowledge of Tosca Standard and image-based identification strategies
  • Experience with test management tools such as Zephyr Scale or Xray
  • Exposure to CI/CD tools such as GitHub Actions and Jenkins
  • Experience in the Investment Banking / Financial Services domain
  • Exposure to enterprise-scale testing and regulated environments

Soft Skills & Competencies
  • Strong leadership and team-player mindset
  • Excellent analytical and problem-solving skills
  • Clear and effective communication and stakeholder collaboration
  • Ability to work independently while driving team accountability
  • Proactive, detail-oriented, and quality-focused
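Risk-based testing, mentioned above, is commonly implemented by scoring each test on failure likelihood and business impact and running the riskiest first. A minimal sketch; the test names and scores are invented:

```python
# Hypothetical test inventory: each test carries a likelihood of failure
# and a business impact, both on a 1-5 scale.
tests = [
    {"name": "test_login",         "failure_likelihood": 2, "business_impact": 5},
    {"name": "test_report_export", "failure_likelihood": 4, "business_impact": 2},
    {"name": "test_payment_flow",  "failure_likelihood": 4, "business_impact": 5},
    {"name": "test_tooltip_text",  "failure_likelihood": 1, "business_impact": 1},
]

def prioritize(tests: list) -> list:
    """Order tests by risk score (likelihood x impact), highest risk first."""
    return sorted(
        tests,
        key=lambda t: t["failure_likelihood"] * t["business_impact"],
        reverse=True,
    )

print([t["name"] for t in prioritize(tests)])
# ['test_payment_flow', 'test_login', 'test_report_export', 'test_tooltip_text']
```

Under a tight release window, the team executes the list top-down and cuts from the bottom, so whatever testing time exists is spent where a defect would hurt most.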

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Lead Software Engineer - Test Automation - (25000JDZ)

Job Description

  • Strong expertise in Azure Data Services: Azure Data Lake, Azure Databricks, Azure Synapse, Azure Data Factory.
  • Proficiency in Python and PySpark for large-scale data processing.
  • Solid understanding of data lakehouse architectures, Delta Lake, and Parquet formats.
  • Experience with ETL/ELT design, data modeling, and pipeline orchestration.
  • Familiarity with SQL (T-SQL, Spark SQL) for querying and transformations.
  • Knowledge of CI/CD practices and source control (Azure DevOps, Git, GitHub).
  • Strong problem-solving skills with the ability to debug complex data issues.
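The SQL-based transformation work described above can be sketched with in-memory SQLite standing in for Spark SQL or T-SQL. The table and column names are illustrative, not from a real pipeline:

```python
import sqlite3

# In-memory SQLite plays the role of the query engine; in a lakehouse this
# step would run as Spark SQL over Delta/Parquet tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_sales VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0), ("south", 50.0)],
)

# A typical transform step: aggregate raw records into a curated table.
conn.execute("""
    CREATE TABLE curated_sales AS
    SELECT region, SUM(amount) AS total_amount
    FROM raw_sales
    GROUP BY region
""")
rows = conn.execute(
    "SELECT region, total_amount FROM curated_sales ORDER BY region"
).fetchall()
print(rows)  # [('north', 200.0), ('south', 50.0)]
```

The raw-to-curated pattern shown here (land data untouched, then derive aggregated tables from it) mirrors the layered ETL/ELT design the role asks for, just at toy scale.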

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Azure Data Engineer

Job Description

SSE Bigdata Talend - (26000A43)

Missions
  • Design, develop, and maintain complex ETL pipelines using Talend Big Data components executed on the Spark framework.
  • Build and manage ETL solutions to ingest data from structured and unstructured sources.
  • Develop Talend Jobs, Joblets, and custom Java-based components.
  • Perform installation, configuration, and maintenance of Talend Job Server, TAC Server, and related components.
  • Optimize Talend jobs for performance, scalability, and parallel execution across multiple environments.
  • Deploy Talend jobs across environments and support automated deployment pipelines.
  • Create and manage Context Groups, parameterization frameworks, and custom routines.
  • Implement robust error handling, monitoring, logging, alerting, and reporting mechanisms.
  • Write and execute unit test cases and support integration testing.
  • Participate in performance tuning, troubleshooting, and best-practice recommendations.
  • Use Talend Administration Console (TAC) for job scheduling, deployment, and administration (an advantage).
  • Provide operational support for data services, including production issue resolution and coordination with platform teams.

Profile: Must-Have Skills
  • 4+ years of hands-on experience with the Talend Big Data ETL platform.
  • Strong experience developing Talend jobs on Spark-based architectures.
  • Strong SQL programming skills (preferably SQL Server).
  • Strong understanding of: the end-to-end ETL lifecycle and data integration fundamentals; Talend core components, transformations, orchestration, and optimization; Joblets, custom components, and Java-based custom logic; error handling, monitoring, and performance-tuning frameworks; and Java debugging fundamentals, especially during TAC executions.
  • Excellent analytical, problem-solving, and troubleshooting skills.
  • Strong communication skills and ability to collaborate with cross-functional teams.

Big Data & Cloudera Platform Skills (Required)
  • Advanced development and optimization using Hive and Impala.
  • Strong hands-on experience with HDFS and Hadoop at the user level.
  • Solid working knowledge of Linux and command-line environments.
  • Experience in data modeling and data layout design on HDFS for analytical workloads.
  • Proven ability in performance tuning of Hive and Impala queries, partitioning strategies, and file formats (Parquet, ORC).
  • Operational support experience involving troubleshooting, monitoring, and coordination with Cloudera platform administrators.
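Talend's Context Groups, mentioned above, amount to per-environment parameter sets resolved at run time. A minimal sketch of the same idea in plain Python; the environment names and values are invented:

```python
# Hypothetical per-environment parameters, mirroring how Talend Context Groups
# switch connection settings between environments; all values are illustrative.
CONTEXTS = {
    "DEV":  {"db_host": "dev-db.local",  "batch_size": 100},
    "PROD": {"db_host": "prod-db.local", "batch_size": 5000},
}
DEFAULTS = {"retries": 3}  # shared across every environment

def resolve_context(env: str) -> dict:
    """Merge shared defaults with environment-specific overrides."""
    if env not in CONTEXTS:
        raise ValueError(f"unknown environment: {env}")
    return {**DEFAULTS, **CONTEXTS[env]}

print(resolve_context("PROD"))
# {'retries': 3, 'db_host': 'prod-db.local', 'batch_size': 5000}
```

Keeping job logic identical and letting only the resolved context vary is what makes the same Talend job deployable unchanged from development through production.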

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : SSE Bigdata Talend