As a Technology Platform Engineer, you will be responsible for creating both production and non-production cloud environments, using software tools appropriate to each project or product. Your typical day will involve deploying automation pipelines and automating environment creation and configuration, ensuring that all systems are optimized for performance and reliability. You will collaborate with various teams to ensure seamless integration and functionality across platforms, contributing to the overall success of your projects.
Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate the performance of cloud environments to ensure optimal operation.
Professional & Technical Skills:
- Must-have skills: proficiency in TIBCO BusinessWorks.
- Strong understanding of cloud infrastructure and services.
- Experience with automation tools and scripting languages.
- Familiarity with CI/CD practices and tools.
- Ability to troubleshoot and resolve technical issues efficiently.
Additional Information:
- The candidate should have a minimum of 5 years of experience in TIBCO BusinessWorks.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Salary : Rs. 0.0 - Rs. 2,16,000.0
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Site Reliability Engineer
Role Descriptions: Site Reliability Engineer
Essential Skills: Site Reliability Engineer
Desirable Skills: Site Reliability Engineer
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Job Summary:
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will play a crucial role in our data ecosystem by working with cloud technologies to enable data accessibility, quality, and insights across the organization. This role requires expertise in Azure Databricks, Azure Data Factory, Snowflake, PySpark, SQL, any cloud platform (preferably Azure), and data modelling.
Requirements:
Experience Level: 3 to 5 Years
• Bachelor’s in Computer Science, Data Engineering, or related field.
• Proficiency in Azure Databricks for data processing and pipeline orchestration.
• Strong SQL skills and understanding of data modeling principles.
• Ability to troubleshoot and optimize data workflows.
Key Responsibilities:
• Data Pipeline Development: Design, build, and optimize data pipelines to ingest, transform, and load data from multiple sources using Azure Databricks; experience with Snowflake and DBT is good to have.
• Data Architecture: Develop and manage data models within Snowflake, ensuring efficient data organization and accessibility.
• Data Transformation: Implement transformations in DBT, standardizing data for analysis and reporting.
• Performance Optimization: Monitor and optimize pipeline performance, troubleshooting and resolving issues as needed.
• Collaboration: Work closely with data scientists, analysts, and other stakeholders to support data-driven projects and provide access to reliable, well-structured data.
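The pipeline responsibilities above follow a common ingest → transform → load pattern. As a rough, hedged illustration of that shape only: in this role the stages would be PySpark jobs on Azure Databricks orchestrated by Azure Data Factory, whereas this pure-Python sketch uses an in-memory CSV and SQLite as stand-ins, and all names and data are hypothetical.

```python
# Minimal ingest -> transform -> load sketch of one batch pipeline step.
# Pure Python / SQLite stand in for PySpark / Snowflake; only the
# three-stage structure is the point.

import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,currency
1,19.99,usd
2,5.00,USD
3,,usd
"""

def ingest(raw: str) -> list:
    """Read raw records from a source (here: an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list) -> list:
    """Standardize and filter: drop rows with missing amounts,
    normalize currency codes to upper case."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue  # data-quality rule: skip incomplete records
        out.append((int(r["order_id"]), float(r["amount"]), r["currency"].upper()))
    return out

def load(rows: list) -> int:
    """Load into a target table and report the row count."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (order_id INT, amount REAL, currency TEXT)")
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    return con.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

loaded = load(transform(ingest(RAW_CSV)))
print(loaded)  # two valid rows survive the quality check
```

The same separation of concerns (source extraction, quality/standardization rules, and target load as distinct steps) is what makes such a pipeline testable and easy to monitor stage by stage.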
Qualifications:
• Relevant experience in MS Azure, Snowflake, DBT, and Big Data Hadoop ecosystem components.
• Understanding of Hadoop architecture and the underlying framework, including storage management.
• Strong understanding of, and implementation experience in, Hadoop, Spark, and Hive/Databricks.
• Expertise in implementing data lake solutions using both Scala and Python.
• Expertise with orchestration tools such as Azure Data Factory.
• Strong SQL and programming skills.
• Experience with Databricks is desirable.
• Understanding of, or implementation experience with, CI/CD tools such as Jenkins, Azure DevOps, and GitHub is desirable.
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
As a Custom Software Engineering Lead, a typical day involves overseeing the technical direction and architectural design of bespoke software solutions. This role requires guiding teams through the entire development lifecycle, from initial design concepts to final delivery. The position demands a focus on maintaining high standards for code quality, ensuring that applications are scalable and perform efficiently while aligning with broader business goals. Collaboration and leadership are central, as the role involves coordinating efforts across multiple teams to achieve cohesive and effective software outcomes.
Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the establishment and enforcement of development standards to ensure consistency and quality across projects.
- Mentor and support team members to foster professional growth and enhance technical capabilities.
- Drive continuous improvement initiatives to optimize development processes and delivery timelines.
Professional & Technical Skills:
- Must-have skills: proficiency in Spring Boot.
- Strong knowledge of microservices architecture and RESTful API design.
- Experience with cloud platforms and containerization technologies.
- Familiarity with database design and optimization techniques.
- Ability to implement scalable and maintainable software solutions.
- Competence in debugging, performance tuning, and code review practices.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Spring Boot.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Salary : Rs. 0.0 - Rs. 2,17,000.0
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Desired Competencies (Technical/Behavioral Competency)
Must-Have • Expert in using REST and SOAP APIs.
• Expert in building MuleSoft integrations and APIs.
• Experience integrating a portfolio of SaaS applications.
• Document designs and runbooks.
• Design, develop, and maintain complex applications.
• Support the MuleSoft architecture process, including system, process, and experience APIs.
• Responsible for the analysis, design, implementation, and deployment phases of the full software development life cycle (SDLC) of MuleSoft projects using the Agile process.
• Implement orchestration APIs for scale ticket integrations.
• Responsible for impact analysis document review, code review, and JUnit test cases.
• Integration of various systems utilizing queues, topics, HTTP, file systems, databases, and SFTP components.
• Experience with Mule components including File, SMTP, FTP, SFTP, and Database connectors.
• Develop RESTful/SOAP web services in Mule ESB based on SOA architecture.
• Design and implement RESTful web services using various data formats (JSON, XML) to provide an interface to third-party applications.
• Experience with the MuleSoft Anypoint API platform, designing and implementing Mule APIs and documenting and designing REST APIs using RAML.
• Hands-on experience with transformers, exception handling, testing, and securing Mule ESB endpoints through OAuth.
• Write MUnit test cases to validate Mule flows.
• Experience with SOAP and/or web services.
• Strong problem-solving and troubleshooting skills with the ability to exercise mature judgment.
Good-to-Have • Experience architecting solutions with the MuleSoft Anypoint Platform on Mule 4; MCPA and MCIA certifications would be preferred.
• Experience with performance, scalability, reliability, monitoring, and other operational concerns of integration solutions on the Anypoint Platform.
• Deployment of integration flows to higher environments and MuleSoft environment-management best practices.
• Providing guidance on MuleSoft API Manager, Runtime Manager, Anypoint MQ, and Exchange.
• Experience with Runtime Fabric or Kubernetes.
• Platform upgrades, security upgrades, and patches.
• Knowledgeable about securing data; understands PGP, SSH, OAuth, HTTPS, and SFTP.
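Several bullets above involve securing API endpoints through OAuth. As a generic, hedged sketch of the OAuth 2.0 client-credentials flow commonly used for service-to-service calls: the token URL, client ID, secret, and scope below are placeholders, and a real MuleSoft/Anypoint deployment would typically configure this in the platform's policies rather than in hand-written client code. No network call is made; the function only assembles the request a client would POST to the token endpoint.

```python
# Sketch of an OAuth 2.0 client-credentials token request (RFC 6749
# section 4.4), the grant used when one backend service calls a
# protected API with no end user involved.

def build_token_request(token_url: str, client_id: str, client_secret: str,
                        scope: str = "") -> dict:
    """Assemble the POST parameters for a client-credentials token request."""
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        body["scope"] = scope
    return {"url": token_url, "data": body,
            "headers": {"Content-Type": "application/x-www-form-urlencoded"}}

def auth_header(access_token: str) -> dict:
    """Bearer header attached to subsequent API calls using the token."""
    return {"Authorization": "Bearer " + access_token}

# Placeholder endpoint and credentials for illustration only.
req = build_token_request("https://idp.example.com/oauth/token",
                          "my-client-id", "my-client-secret",
                          scope="read:orders")
print(req["data"]["grant_type"])  # client_credentials
```

The identity provider responds with an access token, which the integration then presents via `auth_header` on each API request; token caching and refresh-before-expiry are the usual next concerns.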
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Gen AI Job Description:
o Experience Level: 3 to 5 Years
o Design, implement, and manage workflows for integrating and deploying GenAI applications from Azure, Amazon, or Snowflake. Analyse systems and applications and provide recommendations for design, enhancement and development, and play an active part in their execution.
o Platform engineering: Collaborate with other teams to integrate AI solutions into existing workflows and systems to get the platform running and available. They configure and manage the underlying infrastructure that supports the platform, ensuring scalability, reliability, and high availability.
o Develop and implement best practices for managing the lifecycle of large language AI models, including version control, testing, and validation.
o Troubleshoot and resolve issues related to the performance and deployment of large language AI models.
o Stay up to date with the latest advancements in large language AI models and operations technologies to continuously improve our AI infrastructure.
o Develop and suggest best practices for designing infrastructures that support fine-tuning of models to improve performance and efficiency, and troubleshoot any issues that arise during development or deployment.
o Creating and maintaining documentation: Ensure clear and comprehensive documentation of AI/ML/LLM systems.
o Security integration: GenAI platform engineers weave security best practices throughout the development lifecycle to safeguard the platform from vulnerabilities and data breaches.
o Monitoring and logging: Implement robust monitoring and logging systems and LLMOps best practices that allow proactive identification and resolution of potential issues.
o Responsible AI guardrails: GenAI platform engineers are responsible for ensuring all Responsible AI metrics are governed through proper system infrastructure and monitoring.
o Data privacy and governance: Ensuring user data privacy and adhering to data governance regulations are paramount considerations for GenAI platform engineers.
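The guardrail and data-privacy responsibilities above often start with input filtering before a prompt ever reaches a model. As a toy, hedged illustration only: production guardrail stacks (managed guardrail services, PII-detection models) are far more sophisticated than this regex filter, and the patterns here are simplistic placeholders, but the sketch shows where such a check sits relative to the model call.

```python
# Toy input guardrail: redact obvious PII (emails, 10-digit phone-like
# numbers) from a prompt before it is passed to an LLM. Real systems
# use dedicated PII-detection services; this only shows the hook point.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{10}\b")

def redact_pii(prompt: str) -> str:
    """Mask emails and 10-digit numbers before model invocation."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    return PHONE.sub("[PHONE]", prompt)

def guarded_invoke(prompt: str, model_call) -> str:
    """Apply the input guardrail, then call the (injected) model client.

    model_call is any callable taking the sanitized prompt; injecting it
    keeps the guardrail independent of a specific LLM provider SDK.
    """
    return model_call(redact_pii(prompt))

safe = redact_pii("Contact jane.doe@example.com or 9876543210 re: invoice")
print(safe)  # Contact [EMAIL] or [PHONE] re: invoice
```

Output-side guardrails (checking model responses before they reach users) and the logging of every redaction event for audit would hang off the same `guarded_invoke` seam.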
Requirements:
• Bachelor’s or master’s degree in Statistics, Economics, Operations Research, Data Science, Computer Science, or a related field.
• 2 years of relevant experience in managing Gen AI applications, model monitoring, model validation, and implementing I/O guardrails and FinOps monitoring.
• Strong cross-cultural communication and negotiation skills, including the demonstrated ability to solicit opinions and accept feedback and the ability to effectively manage collaboration across time zones.
• Understanding of OpenAI, Llama, Claude, Arctic, and Mistral large language models, how to deploy them on cloud/on-premises, and how to use their APIs to build industry solutions.
• Experience with AI/ML frameworks and tools (e.g., LangChain, Semantic Kernel, TensorFlow, PyTorch).
• Experience in using LLM models on the cloud, e.g., OpenAI on Azure, Amazon Bedrock, Snowflake Cortex AI.
• Familiarity with cloud platforms (e.g., AWS, Azure, Snowflake) and containerization technologies (e.g., Docker, Kubernetes).
• Advanced & secure coding experience in at least one language (Python, PySpark, TypeScript)
• Exposure to Vector/Graph/SQL Databases, non-deterministic automated testing, workflow platforms
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills and experience in operating effectively as part of cross-functional teams.
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance