Job Title: Engineer
Work Location: Hyderabad / Bangalore / Bhubaneswar
Skill Required: Digital: Databricks, Digital: PySpark
Experience Range in Required Skills: 8 to 10
Job Description:
· At least 8-10 years of experience developing data solutions on cloud platforms
· Strong programming skills in Python, Spark, and Databricks
· Strong SQL skills and experience writing complex yet efficient stored procedures, functions, and views in T-SQL
· Solid understanding of Spark architecture and experience with performance tuning of big data workloads in Spark
· Experience building complex data transformations on both structured and semi-structured data (XML/JSON) using Spark and SQL
· Familiarity with the Azure Databricks environment
· Good understanding of the Azure cloud ecosystem; Azure data certification (DP-200/201/203) is an advantage
· Proficient in source control using Git
· Good understanding of Agile, DevOps, and CI/CD automated deployment (e.g., Azure DevOps, Jenkins)
· Good understanding of ServiceNow and BAU processes; ready to work in shifts
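The structured/semi-structured transformation requirement above can be sketched in miniature. This is a plain-Python stand-in with an invented order schema; in Databricks the same flattening would typically use PySpark's JSON functions (e.g. explode over a parsed array), but the shape of the result is the same:

```python
import json

def flatten_order(raw: str) -> list[dict]:
    """Flatten a semi-structured JSON order (hypothetical schema) into
    one flat row per line item -- the tabular shape a Spark explode()
    over the items array would produce."""
    order = json.loads(raw)
    return [
        {
            "order_id": order["order_id"],
            "customer": order["customer"]["name"],
            "sku": item["sku"],
            "qty": item["qty"],
        }
        for item in order["items"]
    ]

raw = '''{"order_id": 1001,
          "customer": {"name": "Acme"},
          "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 5}]}'''
rows = flatten_order(raw)
# Each nested line item becomes its own flat record
```

The nested customer object is projected into a column and the items array is unnested into rows, which is the core of most XML/JSON-to-tabular transformations.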
Essential Skills:
Azure, Python/Spark, SQL, Databricks
Desirable Skills:
Azure, Python/
Salary : As per industry standard.
Industry :IT-Software / Software Services
Functional Area : IT Software - Application Programming , Maintenance
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, ensuring that the applications meet the required standards and specifications while fostering a collaborative environment for your team.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform effectively.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously assess and improve application performance and user experience.
- Shift C is mandatory.
Professional & Technical Skills:
- Must-have skills: Proficiency in DevOps.
- Strong understanding of continuous integration and continuous deployment practices.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Knowledge of scripting languages such as Python, Bash, or Ruby.
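The CI/CD practices listed above can be illustrated with a toy deployment gate; the branch name, coverage threshold, and function are invented for this sketch, not taken from any particular pipeline tool:

```python
def should_deploy(branch: str, tests_passed: bool, coverage: float,
                  min_coverage: float = 0.80) -> bool:
    """A continuous-deployment gate in miniature: only ship builds from
    the main branch whose test suite passed and whose coverage clears
    the configured bar."""
    return branch == "main" and tests_passed and coverage >= min_coverage

# A feature-branch build never deploys, even with green tests
blocked = should_deploy("feature/login", True, 0.95)
```

Real pipelines (Azure DevOps, Jenkins) express the same idea declaratively as stage conditions, but the logic reduces to a predicate like this one.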
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in DevOps.
- This position is based in Hyderabad.
- A full-time education of 15 years is required.
Salary : Rs. 0.0 - Rs. 2,30,000.0
Industry :IT-Software / Software Services
Functional Area : IT Software - Application Programming , Maintenance
Job Title: Engineer
Work Location: Chennai, TN / Kochi, KL / Bangalore, KA / Gurgaon, HA / Noida, UP / Bhubaneswar, OD / Kolkata, WB / Hyderabad, TG / Pune, MH
Skill Required: Digital: Microsoft Azure, Digital: Databricks, Digital: PySpark
Experience Range: 6-8 years
Job Description:
Leads large-scale, complex, cross-functional projects to build the technical roadmap for the WFM Data Services platform:
- Lead and review design artifacts.
- Build and own the automation and monitoring frameworks that present reliable, accurate, easy-to-understand metrics and operational KPIs on data pipeline quality to stakeholders.
- Execute proofs of concept on new technologies and tools to pick the best tools and solutions.
- Support business objectives by collaborating with business partners to identify opportunities and drive resolution.
- Communicate status and issues to senior Starbucks leadership and stakeholders.
- Direct the project team and cross-functional teams on all technical aspects of projects.
- Lead the engineering team to build and support real-time, highly available data, data pipelines, and technology capabilities.
- Translate strategic requirements into business requirements to ensure solutions meet business needs.
- Define and implement data retention policies and procedures.
- Define and implement data governance policies and procedures.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
- Enable the team to pursue insights and applied breakthroughs while driving solutions to Starbucks scale.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of structured and unstructured data sources using big data technologies.
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Perform root cause analysis to identify permanent resolutions to software or business process issues.
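The pipeline-quality monitoring responsibility in the description above can be sketched as a minimal data-quality check; the metric names, columns, and sample batch are illustrative, not from any actual WFM pipeline:

```python
def pipeline_kpis(rows: list[dict], required: list[str]) -> dict:
    """Compute simple pipeline-quality KPIs for one batch: total row
    count and the per-column null rate for each required column."""
    total = len(rows)
    null_rate = {
        col: sum(1 for r in rows if r.get(col) is None) / max(total, 1)
        for col in required
    }
    return {"row_count": total, "null_rate": null_rate}

# Hypothetical batch: one record is missing its sales figure
batch = [
    {"store_id": 1, "sales": 250.0},
    {"store_id": 2, "sales": None},
    {"store_id": 3, "sales": 410.5},
]
kpis = pipeline_kpis(batch, ["store_id", "sales"])
```

A monitoring framework would compute metrics like these per run, compare them against thresholds, and surface the results to stakeholders on a dashboard or in alerts.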
Salary : Rs. 55,000.0 - Rs. 95,000.0
Industry :IT-Software / Software Services
Functional Area : IT Software - Application Programming , Maintenance
1. Manage and administer IBM Storage platforms.
2. Perform storage provisioning, zoning, replication, firmware upgrades, and performance tuning.
3. Administer the IBM Spectrum Protect (TSM) backup environment, including schedules, policies, restores, failover, failback, and disk storage operations.
4. Manage SAN fabrics (Brocade/Cisco), zoning, switch upgrades, and fabric health.
Salary : As per industry standard.
Industry :IT-Software / Software Services
Functional Area : IT Software - Application Programming , Maintenance