We found 43 jobs matching your search

Job Description

Job Title: Engineer
Work Location: Chennai, TN / Kochi, KL / Bangalore, KA / Gurgaon, HA / Noida, UP / Bhubaneswar, OD / Kolkata, WB / Hyderabad, TG / Pune, MH
Skill Required: Digital : Microsoft Azure~Digital : Databricks~Digital : PySpark
Experience Range: 6-8 years
Job Description:
• Lead large-scale, complex, cross-functional projects to build the technical roadmap for the WFM Data Services platform; lead and review design artifacts.
• Build and own the automation and monitoring frameworks that present reliable, accurate, easy-to-understand metrics and operational KPIs on data pipeline quality to stakeholders (see the sketch below).
• Execute proofs of concept on new technologies and tools to pick the best tools and solutions.
• Support business objectives by collaborating with business partners to identify opportunities and drive resolution; communicate status and issues to senior Starbucks leadership and stakeholders.
• Direct the project team and cross-functional teams on all technical aspects of the projects.
• Lead the engineering team to build and support real-time, highly available data, data pipelines, and technology capabilities.
• Translate strategic requirements into business requirements to ensure solutions meet business needs.
• Define and implement data retention policies and procedures.
• Define and implement data governance policies and procedures.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
• Enable the team to pursue insights and applied breakthroughs while driving solutions to Starbucks scale.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of structured and unstructured data sources using big data technologies.
• Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
• Perform root cause analysis to identify permanent resolutions to software or business process issues.
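
A minimal PySpark sketch of the kind of pipeline-quality metric such a monitoring framework might report, assuming a hypothetical source table and columns:

    # PySpark sketch: row count and per-column null rates as pipeline-quality KPIs.
    # The table name (wfm.shifts_raw) and columns (shift_id, store_id) are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("pipeline-quality-metrics").getOrCreate()

    df = spark.read.table("wfm.shifts_raw")  # hypothetical source table

    metrics = df.agg(
        F.count("*").alias("row_count"),
        *[
            (F.sum(F.col(c).isNull().cast("int")) / F.count("*")).alias(f"null_rate_{c}")
            for c in ["shift_id", "store_id"]
        ],
    )
    metrics.show()  # these figures would feed a stakeholder-facing KPI dashboard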

  • Salary : Rs. 55,000 - Rs. 95,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Engineer

Job Description

Digital : Anaplan
Job Description: Anaplan solution architect
Essential Skills: Anaplan solution architect

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Digital : Anaplan

Job Description

Oracle eBS R12 Financials
Job Description:
• Gather and analyze business requirements for financial processes.
• Configure and support Oracle EBS Financial modules: General Ledger (GL), Accounts Payable (AP), Accounts Receivable (AR), Fixed Assets (FA), Cash Management, Subledger Accounting (SLA).
• Prepare functional design documents (MD50), configuration documents (BR100), and test scripts (TE40).
• Perform gap analysis and propose solutions.
• Support the full project lifecycle: implementation, testing, UAT, go-live, and post-production.
• Coordinate with technical teams for RICEW components (Reports, Interfaces, Conversions, Extensions, Workflows).
• Handle production support tickets and resolve issues within SLA.
• Assist in month-end and year-end financial closing activities.
• Provide end-user training and documentation.

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Oracle eBS R12 Financials

Job Description

Oracle EBS Supply Chain Management - Distribution
Job Description:
• SCM Modules: OM, INV, PO, iProc, WMS, Shipping, BOM, WIP, ASCP
• Integration Skills: Interfaces, data migration, SQL/PLSQL, SOA, ServiceNow, APIs
• Implementation Methodology: Oracle AIM, RUP, Agile, release management, ITIL
• 5 to 8 years of Oracle EBS SCM R12 experience (SCM modules); strong hands-on skills in configuration, SQL/PLSQL, test scripts, and data migration; exposure to AIM methodology, ITIL, and integration tools (SOA, APIs); excellent analytical, communication, and client-facing abilities
• Experience with cloud migrations or Oracle Integration Cloud preferred
• Bonus: Oracle SCM certifications, ITIL, experience with ServiceNow, Healthcare/FMCG domains, Spanish/Arabic language skills in certain regions

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Oracle EBS Supply Chain Management - Distribution

Job Description

Oracle Financial Services Analytical Applications (OFSAA)

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Oracle Financial Services Analytical Applications (OFSAA)

Job Description

Oracle EBS Supply Chain Management - Distribution

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Oracle EBS Supply Chain Management - Distribution

Job Description

Agile Way of Working~Progress - Openedge
Job Description:
Must-Have Technical Skills
• Experience in Progress 4GL/OpenEdge CHUI / GUI / REST Services / Developer Studio
• Knowledge of and experience as an architect on Progress applications
• Knowledge of databases, DB schemas, and key DBA activities
• Exposure to OpenEdge object-oriented programming
• Experience in support, development, and digital migration projects
• Worked on OpenEdge 10.2B and above
• Developed applications using the Progress 4GL/OpenEdge development platform
• Functional testing to ensure deliverables meet customer requirements
• Analyzing requirements and performing gap analysis
• Responsible for improving the performance and reliability of software applications and IT systems
• Exploring better ways to develop the product and fixing latency and other issues in existing applications
• Knowledge of SDLC processes, Agile, and project implementation life cycles
• Experience in Agile development methodology is a must
• Working experience in a DevOps environment is a must
Support Skills
• Analyzing tickets and providing resolutions to users

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Agile Way of Working~Progress - Openedge

Job Description

Windows Powershell~ServiceNow
Job Description:
• Collaborate with business and IT stakeholders to gather and analyze automation requirements.
• Design and develop ServiceNow workflows, flows (Flow Designer), and Orchestration / Integration Hub solutions.
• Implement end-to-end automation for ITSM, ITOM, HRSD, or other ServiceNow modules.
• Create custom applications and extend ServiceNow platform capabilities as needed.
• Integrate ServiceNow with third-party systems (e.g., via REST/SOAP APIs, Integration Hub); a sketch follows below.
• Maintain technical documentation including process flows, solution designs, and test cases.
• Perform testing, validation, and deployment of automation solutions.
• Monitor the performance and efficiency of automated workflows; identify and implement improvements.
• Ensure compliance with change management and governance processes.
• Provide training and support to end-users and internal teams on ServiceNow automation features.
Required Skills:
• Strong understanding of ServiceNow modules such as ITSM, ITOM, HRSD, or CSM.
• Experience with ServiceNow Flow Designer, Workflow Editor, and Integration Hub.
• Knowledge of scripting languages like JavaScript and familiarity with the Glide API.
• Experience with web technologies (REST/SOAP, XML, JSON).
• Strong problem-solving, analytical, and communication skills.
• Ability to work independently and in a cross-functional team environment.
• ServiceNow Certified System Administrator (CSA) is required.
• Additional certifications (e.g., Certified Application Developer, ITSM, ITOM) are a plus.
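
A minimal Python sketch of the REST-based integration described above, using the ServiceNow Table API to create an incident record; the instance URL, credentials, and field values are hypothetical:

    # Create an incident in ServiceNow via the REST Table API.
    # Instance URL, credentials, and field values are hypothetical placeholders.
    import requests

    INSTANCE = "https://example.service-now.com"   # hypothetical instance URL
    AUTH = ("integration.user", "secret")          # hypothetical credentials

    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={"short_description": "Automated workflow failure", "urgency": "2"},
    )
    resp.raise_for_status()
    print(resp.json()["result"]["sys_id"])  # sys_id of the newly created incident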

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Windows Powershell~ServiceNow

Job Description

Job Title: Developer
Work Location: Hyderabad, TG / Kolkata, WB
Skill Required: Digital : PySpark~Azure Data Factory
Experience Range: 6 to 8 Yrs
Role Summary: We are looking for a Data Engineer with 2-4 years (more experience is welcome) of hands-on experience in PySpark, Azure Data Factory (ADF), and Azure-based data pipelines. The ideal candidate should have strong skills in building ETL workflows, working with big-data technologies, and supporting production data processes.
Key Responsibilities:
1. PySpark Development
• Develop and optimize ETL pipelines using PySpark and Spark SQL.
• Work with DataFrames, transformations, and partitioning.
• Handle data ingestion from various formats (Parquet, CSV, JSON, Delta).
2. Azure Data Factory (ADF)
• Build and maintain ADF pipelines, datasets, linked services, and triggers.
• Integrate ADF pipelines with Azure Databricks, ADLS, Azure SQL, APIs, etc.
• Monitor pipeline runs and troubleshoot failures.
3. Azure Cloud / Databricks
• Work with ADLS Gen2 for data storage and management.
• Run, schedule, and debug Databricks notebooks.
• Use Delta Lake for data processing and incremental loads (see the sketch after this description).
4. ETL / Data Management
• Implement data cleansing, transformations, and validation checks.
• Follow standard data engineering best practices.
• Support production jobs and ensure data quality.
5. DevOps / Collaboration
• Use Git or Azure DevOps for code versioning.
• Participate in code reviews and documentation.
• Collaborate with analysts and data architects on requirements.
Required Skills:
• 2-4 years of hands-on experience with PySpark and Spark SQL.
• Experience building data pipelines in Azure Data Factory.
• Working knowledge of Azure Databricks and ADLS Gen2.
• Good SQL knowledge.
• Understanding of ETL concepts and data pipelines.
Good to Have:
• Experience with Delta Lake (MERGE, schema evolution).
• Familiarity with CI/CD (Azure DevOps/GitHub).
• Exposure to Snowflake is a plus.
Soft Skills:
• Strong analytical and problem-solving abilities.
• Good communication and teamwork skills.
• Ability to learn quickly and adapt to new tools.
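
A minimal PySpark / Delta Lake sketch of the incremental-load (MERGE) pattern referenced above; the storage paths and join key are hypothetical:

    # Upsert a new batch into a Delta table: update matching rows, insert new ones.
    # Paths and the join key (order_id) are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("incremental-load").getOrCreate()

    # Batch landed by an ADF copy activity into the raw zone (hypothetical path).
    updates = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/orders/")

    target = DeltaTable.forPath(
        spark, "abfss://curated@account.dfs.core.windows.net/orders_delta/"
    )

    (
        target.alias("t")
        .merge(updates.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )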

  • Salary : Rs. 90,000 - Rs. 1,65,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Developer