We found 4 jobs matching your search

Job Description

  • Band: 7A
  • Hourly rate: 995
  • Daily rate: 7962
  • Rate: 20,06,424
  • Location: Bangalore

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Bigdata PySpark

Job Description

Duration: 6 months
Work Location: Hyderabad
Experience: 5+ years
1. Should have sound knowledge of and hands-on experience with big data, Spark, Scala, and Java technologies.
2. Minimum 5+ years of experience in designing and developing big data applications.

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Technology Lead

Job Description

Bigdata - Hadoop

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Bigdata - Hadoop

Job Description

  • Experience with the Hadoop, Elasticsearch, and machine learning technology stack (Python, R, Spark, etc.).
  • Minimum 3 years of experience in big data, with prior experience in Java and ETL.
  • Provide hands-on leadership for the design and development of ETL data flows using Hadoop and Spark ecosystem components.
  • Lead the development of large-scale, high-speed, low-latency data solutions: large-scale data manipulation, long-term data storage, data warehousing, low-latency retrieval systems, real-time reporting and analytics, and data applications (visualization, BI, dashboards, ad-hoc analytics).
  • Must have hands-on experience with Spark, Kafka, Hive/Pig, API development, and an ETL tool, preferably Talend (any other tool is fine as long as the candidate has a strong hold on the ETL process).
  • Must have Core Java knowledge; Spring and Hibernate are good to have. Strong hold on SQL / PL/SQL.
  • Must have hands-on experience with Unix scripting.
  • Translate complex functional and technical requirements into detailed designs.
  • Perform analysis of data sets and uncover insights.
  • Maintain security and data privacy.
  • Propose best practices and standards.
  • Excellent communication skills.
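As a purely illustrative aside (not part of the posting), the sketch below shows the kind of Spark ETL flow this role describes: read a source table from Hive, apply a simple filter and aggregation, and write partitioned output for downstream reporting. The table name raw_events, the column names, and the output path are hypothetical placeholders.

# Minimal PySpark ETL sketch; assumes a Hive table "raw_events" with
# columns "event_ts" and "event_type" (hypothetical names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("etl-daily-summary")
    .enableHiveSupport()  # read source data registered in the Hive metastore
    .getOrCreate()
)

# Extract: load the raw events table.
raw = spark.table("raw_events")

# Transform: keep rows with a timestamp, then aggregate per day and event type.
summary = (
    raw.filter(F.col("event_ts").isNotNull())
       .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet for downstream reporting/BI dashboards.
(summary.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/warehouse/daily_summary"))

spark.stop()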

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Bigdata Hadoop