Missions
Transform technology and practices, moving from traditional infrastructure towards a platform model
Apply hands-on experience from platform migration programs, with the ability to devise measures that reduce time to market (TTM)
Execute and drive delivery of similar high-impact global programs
Position our teams well and foster a high-performing team culture that successfully delivers high-impact programs
Ensure that all aspects of cost/budget compliance and operational risk management are covered
Collaborate closely with a wide range of stakeholders
Profile
Strong knowledge of Big Data architecture and the administrator's role
Candidates should have 4+ years of experience
Excellent communication skills
Ability to work independently with strong attention to detail
Ability to work well in a global team environment
Ability to structure work, split it into tasks, and follow every step with continuous feedback
Execute jobs to migrate data and apply local controls
Ability to analyse failed executions and investigate data discrepancies (see the reconciliation sketch after this list)
Good scripting knowledge of Bash, Python, Anaconda, and Ansible
Knowledge of automation/DevOps tools: GitHub, Jenkins, Docker, Kubernetes
Data ingestion, data access, and data storage using Hadoop-ecosystem tools such as HBase, Flume, Kafka, NiFi, and Elasticsearch
Application deployment using Java and Python APIs
Cluster connectivity, security, backup, and disaster recovery
Knowledge of Kibana, Talend, Grafana, and Control-M
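As an illustration of the migration-control and discrepancy-analysis points above, here is a minimal reconciliation sketch in Python. It is a hypothetical example, not this team's actual tooling: the file names and CSV layout are assumptions, and a real check would read from the actual source and target stores.

```python
#!/usr/bin/env python3
"""Minimal post-migration reconciliation sketch (hypothetical example).

Compares row counts and an order-insensitive content digest between a
source extract and its migrated copy. File paths and the CSV layout are
assumptions; a real check would read from the actual stores. Note that
XOR-combining per-row hashes cannot detect differences consisting only
of duplicated rows.
"""
import csv
import hashlib
import sys


def dataset_fingerprint(path: str) -> tuple[int, str]:
    """Return (row_count, hex digest) for a CSV file. Per-row SHA-256
    hashes are XOR-combined so the digest ignores row order."""
    count = 0
    combined = 0
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            count += 1
            row_hash = hashlib.sha256("|".join(row).encode()).digest()
            combined ^= int.from_bytes(row_hash, "big")
    return count, f"{combined:064x}"


def main() -> int:
    # Hypothetical file names standing in for source and target extracts.
    src_count, src_digest = dataset_fingerprint("source_extract.csv")
    dst_count, dst_digest = dataset_fingerprint("migrated_extract.csv")
    if src_count != dst_count:
        print(f"FAIL: row count mismatch ({src_count} vs {dst_count})")
        return 1
    if src_digest != dst_digest:
        print("FAIL: content digest mismatch despite equal row counts")
        return 1
    print(f"OK: {src_count} rows reconciled")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Because per-row hashes are XOR-combined, a migration that reshuffles rows but preserves content still passes; a key-by-key comparison would be the next step when the digests differ.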
Salary: Rs. 5,00,000 - Rs. 12,00,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
• 3+ years of proven commercial experience as a Data Engineer / Big Data Developer, preferably building data lake solutions by ingesting and processing data from various source systems
• Demonstrated software engineering / Big Data development background using Python (preferred), Java, or Scala
• Experience with multiple Big Data technologies and concepts such as HDFS, NiFi, Kafka, Hive, Spark, Spark Streaming, HBase, EMR, and GCP
• Background in the Hadoop ecosystem (Cloudera, Spark, Scala, and PySpark)
• Experience with Spark performance tuning and troubleshooting Spark jobs
• Strong grasp of object-oriented programming concepts and memory management, e.g. garbage collection
• Experience building ETL pipelines (see the PySpark sketch after this list)
• In-depth understanding of Data Management practices and Database technologies
• Ability to work in a team in a diverse, fast-paced Agile environment
• Apply DevOps, Continuous Integration and Continuous Delivery principles to build automated pipelines for deployment and production assurance on the data platform.
• Knowledge of building self-contained applications using Docker and OpenShift
• Share knowledge with immediate peers and build communities and connections that promote better technical practices across the organisation
• Implement test cases and test automation.
• Experience building frameworks for an enterprise data lake is highly desirable
• Good data analytics skills, using SQL to troubleshoot data issues
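To make the Spark, ETL, and SQL expectations above concrete, the sketch below shows a small PySpark pipeline: extract, transform, a basic tuning step, a SQL sanity check of the kind used to troubleshoot data issues, and a partitioned load. It is a hedged illustration; the paths, column names, and partition count are assumptions, not details from this role.

```python
"""Minimal PySpark ETL sketch: extract -> transform -> load, with a
basic tuning step and a SQL sanity check. Paths and column names are
hypothetical."""
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl-sketch")
    # Example tuning knob: size shuffle parallelism to the data volume
    # rather than accepting the default of 200 partitions.
    .config("spark.sql.shuffle.partitions", "64")
    .getOrCreate()
)

# Extract: read raw JSON landed by an upstream ingestion job (e.g. NiFi/Kafka).
raw = spark.read.json("hdfs:///landing/orders/")  # hypothetical path

# Transform: drop obvious bad records and derive a partition column.
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Cache only because the frame is reused below; unpersisting afterwards
# frees executor memory for later stages.
orders.cache()

# SQL troubleshooting check: duplicate keys often explain row-count
# discrepancies downstream.
orders.createOrReplaceTempView("orders")
dupes = spark.sql("""
    SELECT order_id, COUNT(*) AS n
    FROM orders
    GROUP BY order_id
    HAVING COUNT(*) > 1
""")
if dupes.limit(1).count() > 0:
    raise RuntimeError("duplicate order_id values in source extract")

# Load: write partitioned Parquet for downstream consumers.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("hdfs:///lake/orders/"))  # hypothetical path

orders.unpersist()
spark.stop()
```

Partitioning the output by a date column is one common layout choice; the right partition key and shuffle-partition count depend on data volumes, so the values here are placeholders.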
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance