Skill: Non-Voice
Shifts: Rotational UK Shifts
Work from Office
Responsibilities
Role Requirement: We are looking for a highly skilled and detail-oriented professional with advanced proficiency in Microsoft PowerPoint and the full Microsoft Office Suite, including Excel, Word, Outlook, and Power BI. The ideal candidate will excel at designing and delivering visually compelling, data-driven, business-relevant presentations, dashboards, reports, and documents. The role requires translating complex information into clear, impactful visual narratives that support executive communication, business proposals, strategic reviews, and client-facing engagements.
Expertise in developing interactive dashboards and performance reports through Power BI will be a strong advantage.
Salary: Rs. 21,000 - Rs. 25,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Role Category: Programming & Design
Role: Customer Contact Communications Representative
Job Description:
Lead the end-to-end migration of data pipelines and workloads from Teradata to Databricks.
Analyze existing Teradata workloads to identify optimization opportunities and migration strategies.
Design and implement scalable, efficient, and cost-effective data pipelines using PySpark/Scala on Databricks, as sketched below.
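To make the last point concrete, here is a minimal, illustrative sketch of one migration step on Databricks: copying a Teradata table into a Delta Lake table over JDBC. The host, credentials, and table names are hypothetical placeholders, and the Teradata JDBC driver (com.teradata.jdbc.TeraDriver) is assumed to be installed on the cluster; this is a sketch under those assumptions, not the team's actual pipeline.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("teradata-to-delta").getOrCreate()

# Read the source table from Teradata over JDBC (connection details are
# placeholders; in practice they would come from a secret scope).
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:teradata://td-host.example.com/DATABASE=sales")
    .option("driver", "com.teradata.jdbc.TeraDriver")
    .option("dbtable", "sales.daily_orders")
    .option("user", "etl_user")
    .option("password", "<redacted>")
    .load()
)

# Land the data as a managed Delta table; partitioning by a date column
# keeps incremental loads and downstream query pruning cheap.
(
    source_df.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("bronze.daily_orders")
)
```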
Skills Required:
Strong experience with Databricks, Apache Spark, and Delta Lake.
Proficiency in PySpark, SQL, and optionally Scala.
Strong understanding of Spark and Databricks performance-tuning techniques (see the sketch after this list).
Understanding of Teradata utilities, BTEQ scripts, and stored procedures.
Experience with cloud platforms (Azure, AWS, or GCP), preferably Azure Databricks.
Familiarity with data modeling, data warehousing concepts, and big data best practices.
Strong problem-solving and communication skills.
Experience with version control and CI/CD tools.
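As a hedged illustration of the performance-tuning skills listed above, the sketch below shows two routine Databricks/Spark optimizations: broadcasting a small dimension table to avoid shuffling a large fact table, and compacting a Delta table with Z-ordering. Table and column names are invented for the example; OPTIMIZE ... ZORDER BY is Databricks-specific Delta Lake SQL.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

orders = spark.table("bronze.daily_orders")   # large fact table (hypothetical)
regions = spark.table("bronze.region_dim")    # small dimension table (hypothetical)

# Broadcasting the small side turns a shuffle join into a map-side join.
joined = orders.join(broadcast(regions), "region_id")
(
    joined.write.format("delta")
    .mode("overwrite")
    .saveAsTable("silver.orders_by_region")
)

# Compact small files and co-locate rows on a frequent filter column so
# Delta can skip irrelevant files at read time.
spark.sql("OPTIMIZE silver.orders_by_region ZORDER BY (order_date)")
```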
Salary: Rs. 10,00,000 - Rs. 25,00,000