Trucks, Buses, Transportation, and Logistics
Position: Data Engineer (External)
Experience range: 3-6 years (Z2)
Key Responsibilities:
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will play a crucial role in our data ecosystem by working with cloud technologies to enable data accessibility, quality, and insights across the organization. This role requires expertise in Azure Databricks, Snowflake, and DBT.
Requirements:
• Bachelor's in Computer Science, Data Engineering, or a related field.
• Proficiency in Azure Databricks for data processing and pipeline orchestration.
• Experience with Snowflake as a data warehouse platform and DBT for transformations.
• Strong SQL skills and understanding of data modeling principles.
• Ability to troubleshoot and optimize data workflows.
Responsibilities for Internal Candidates:
• Data Pipeline Development: Design, build, and optimize data pipelines to ingest, transform, and load data from multiple sources using Azure Databricks, Snowflake, and DBT (see the sketch after the Qualifications list).
• Data Architecture: Develop and manage data models within Snowflake, ensuring efficient data organization and accessibility.
• Data Transformation: Implement transformations in DBT, standardizing data for analysis and reporting.
• Performance Optimization: Monitor and optimize pipeline performance, troubleshooting and resolving issues as needed.
• Collaboration: Work closely with data scientists, analysts, and other stakeholders to support data-driven projects and provide access to reliable, well-structured data.
Qualifications:
• Relevant experience with MS Azure, Snowflake, DBT, and Big Data Hadoop ecosystem components.
• Understanding of Hadoop architecture and the underlying framework, including storage management.
• Strong understanding of, and implementation experience with, Hadoop, Spark, and Hive/Databricks.
• Expertise in implementing data lake solutions using both Scala and Python.
• Expertise with orchestration tools such as Azure Data Factory.
• Strong SQL and programming skills.
• Experience with Databricks is desirable.
• Understanding of or implementation experience with CI/CD tools such as Jenkins and Azure DevOps.
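The Data Pipeline Development responsibility above can be illustrated with a minimal sketch: an Azure Databricks (PySpark) job that lands raw files from ADLS into a Snowflake staging table via the Spark-Snowflake connector. All paths, credentials, and table names below are hypothetical, and downstream modeling would typically live in DBT rather than here.

# Minimal sketch, not this employer's actual pipeline: ingest raw CSVs from
# Azure Data Lake Storage with PySpark, apply a basic transformation, and
# write the result to a Snowflake staging table. All names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest: raw order files landed in ADLS (path is illustrative).
raw = spark.read.option("header", True).csv(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/"
)

# Transform: basic cleansing and typing before DBT models take over downstream.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
)

# Load: write to a Snowflake staging table (connection options are placeholders).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "SVC_DATABRICKS",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
}
(orders.write.format("snowflake")
       .options(**sf_options)
       .option("dbtable", "STG_ORDERS")
       .mode("overwrite")
       .save())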
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Job Description:
Development experience with AWS QuickSight; good communication skills; must have worked on at least one development project.
Essential Skills:
AWS QuickSight development experience; good communication skills; at least one development project.
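For illustration only (the posting does not specify how QuickSight would be used programmatically): a minimal boto3 sketch that lists the dashboards in an account. The account ID and region are placeholders.

# Minimal sketch of programmatic QuickSight access with boto3; not part of the
# job description itself. Account ID and region are placeholders.
import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

response = quicksight.list_dashboards(AwsAccountId="123456789012")
for dashboard in response.get("DashboardSummaryList", []):
    print(dashboard["DashboardId"], dashboard["Name"])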
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Job Description: SAP BODS (Integration)
• 6+ years of relevant experience with SAP Data Services (BODS) Integration and Migration.
• Hands-on experience with SAP BODS 4.2 and above
• Working knowledge of SQL scripting
• Working knowledge of optimization techniques
• Hands-on experience with implementation of, and upgrades to, S/4HANA (1909 or 2020)
• Provide technical/functional subject matter expertise and technical/functional clarification for SAP data objects and configuration in support of design sessions with related Process Teams to confirm functionality-driven data standards.
• Establish connectivity to legacy systems and develop and schedule ETL extraction jobs from legacy systems.
• Design, develop, unit test, and debug complex data conversion jobs and workflows using SAP Data Services, IDocs, LSMWs, etc.
• Work closely with the Functional Teams to identify and document the field mapping, transformation, and validation rules for each data object (a rough illustration follows this list).
• Support the development of unit and end-to-end data migration test plans and test scripts (including testing for data extraction, transformation, data loading, and data validation).
• Perform data load activities for each mock load, cutover simulation, and production deployment identified in the plan, into the designated environments.
• Provide technical support, defect management, and issue resolution during all testing cycles, including Mock Data Load cycles.
• Complete all data migration documentation necessary to support system validation/compliance requirements.
• Provide technical support, defect management, and issue resolution during Production deployment and hypercare support.
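SAP Data Services (BODS) jobs are built in the Designer GUI rather than in code, but the field mapping and validation work described above can be illustrated tool-agnostically. Below is a minimal Python/pandas sketch; every legacy and S/4HANA field name in it is hypothetical.

# Hypothetical illustration of a legacy-to-S/4HANA field mapping with simple
# validation rules; not how BODS jobs are authored (those are built in the
# Data Services Designer), only a tool-agnostic sketch of the logic.
import pandas as pd

# Legacy extract (column names are made up for the example).
legacy = pd.DataFrame({
    "CUST_NO": ["1001", "1002", None],
    "CUST_NAME": ["Acme Corp ", "Globex", "Initech"],
    "CTRY": ["DE", "US", "XX"],
})

# Field mapping: legacy column -> target field.
field_map = {"CUST_NO": "CUSTOMER", "CUST_NAME": "NAME1", "CTRY": "COUNTRY"}
target = legacy.rename(columns=field_map)

# Transformation rules: trim names, upper-case country codes.
target["NAME1"] = target["NAME1"].str.strip()
target["COUNTRY"] = target["COUNTRY"].str.upper()

# Validation rules: key must be present, country must be in the allowed list.
valid_countries = {"DE", "US", "FR"}
errors = target[target["CUSTOMER"].isna() | ~target["COUNTRY"].isin(valid_countries)]
clean = target.drop(errors.index)

print(f"{len(clean)} rows ready to load, {len(errors)} rows rejected")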
Essential Skills: SAP BODS (Integration)
• 6+ years of relevant experience with SAP Data Services (BODS) Integration and Migration.
• Hands-on experience with SAP BODS 4.2 and above
• Working knowledge of SQL scripting
• Working knowledge of optimization techniques
• Hands-on experience with implementation of, and upgrades to, S/4HANA (1909 or 2020)
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
• Python: experience with data science libraries, especially Gen AI frameworks such as LangChain and NLP libraries; the candidate must also be strong in core data science libraries and concepts.
• Azure AI Services: Azure configuration, Azure AI Search, Language services, Document services, and Azure OpenAI services.
An understanding of Gen AI solution patterns (e.g., RAG) is required; Copilot Studio experience is a plus.
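A minimal sketch of the RAG pattern mentioned above, assuming Azure AI Search for retrieval and Azure OpenAI for generation; the endpoints, keys, index name, deployment name, and the "content" field are all placeholders, not details from this posting.

# Minimal RAG sketch: retrieve passages from an Azure AI Search index, then
# ground an Azure OpenAI chat completion on them. All names are hypothetical.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint="https://example-search.search.windows.net",
    index_name="kb-index",
    credential=AzureKeyCredential("<search-key>"),
)
llm = AzureOpenAI(
    azure_endpoint="https://example-openai.openai.azure.com",
    api_key="<openai-key>",
    api_version="2024-02-01",
)

question = "What is our data retention policy?"

# Retrieve: top passages from the search index (field name "content" is assumed).
hits = search.search(search_text=question, top=3)
context = "\n\n".join(doc["content"] for doc in hits)

# Generate: answer grounded only on the retrieved context.
response = llm.chat.completions.create(
    model="gpt-4o-deployment",  # name of an Azure OpenAI deployment (placeholder)
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)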
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Job Description:
1. Incident Resolution - general L1/L2 troubleshooting.
2. Major Incident Support.
3. Must have - L2/L3 Solaris OS administration activities, including LDOM/CDOM, Solaris Zones (local/global), UFS, ZFS, and Solaris Volume Manager.
4. L2/L3 Redhat/Oracle Linux OS administration activities.
5. HPUX server management.
6. OS administration activities include but are not limited to Build, Troubleshooting, Server down, Resource (CPU/RAM/Disk/Network) management.
7. Azure and Oracle cloud knowledge including Exadata on cloud.
8. Storage, network, or disk interface issues.
9. Working knowledge of Veritas Cluster, Oracle RAC cluster, and Oracle ASM.
10. Working knowledge of Pacemaker cluster.
11. Working knowledge of Cisco UCS.
12. MetaDisk (MD) Filesystem management.
13. Veritas filesystem management.
14. Multipath filesystem management.
15. Solaris zpool/apppool filesystem management.
16. Oracle FMADM management.
17. CRU replacement/installation for devices that require opening the server.
18. Respond immediately to any escalated requests.
19. Preparation of SOPs.
20. Perform against all key performance indicators, including the resolution and response times in the SLA matrix.
21. Follow a 24x7 business support model.
22. Follow the shift roster and complete the daily shift HOTO (handover/takeover).
23. Good to know - Oracle engineered systems: Exadata and SuperCluster.
24. Good to know - Python and Bash scripting, Ansible (a minimal sketch follows this list).
25. Good to know - Nagios monitoring tool.
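Illustrating the scripting skills in item 24: a minimal Python health-check sketch that wraps a few standard Solaris commands. This is an assumption about what such a script could look like, not a deliverable of the role; command availability and output formats are assumed from stock Solaris 11.

# Quick Solaris health check: wraps a few standard commands and reports status.
import subprocess

CHECKS = {
    "ZFS pools": ["zpool", "status", "-x"],   # "all pools are healthy" when OK
    "Zones": ["zoneadm", "list", "-cv"],      # configured/installed/running zones
    "FMA faults": ["fmadm", "faulty"],        # empty output means no open faults
}

def run_checks() -> None:
    for name, cmd in CHECKS.items():
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        except FileNotFoundError:
            print(f"[{name}] command not found: {' '.join(cmd)}")
            continue
        status = "OK" if result.returncode == 0 else f"rc={result.returncode}"
        print(f"[{name}] {status}\n{result.stdout.strip() or '(no output)'}\n")

if __name__ == "__main__":
    run_checks()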
Essential Skills:
L2/L3 Solaris OS administration activities, including LDOM/CDOM, Solaris Zones (local/global), UFS, ZFS, Solaris Volume Manager, Veritas Cluster, Oracle RAC cluster, and Oracle ASM.
At least 5 to 7 years of hands-on experience resolving L2/L3 technical incidents on Solaris/OEL/Linux/Unix/RHEL/HPUX platforms.
Desirable Skills:
Good to know - Oracle engineered systems (Exadata and SuperCluster); Python and Bash scripting; Ansible; Nagios monitoring tool.
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance