We found 8 jobs matching your search

Advanced Search

Skills

Locations

Experience

Job Description

Must Have Skills (Top 3 technical skills only)*:
  1. Big Data, Hive
  2. Unix, SQL, Spark, Scala
  3. Automation skills
Nice to have skills (Top 2 only): 1. 2.
Detailed Job Description: Strong experience in SQL, Unix, Hive, and Spark/Scala or Java; should have basic knowledge of the full Hadoop ecosystem. An ETL or batch-processing testing background is a value add. Test automation is optional; other Hadoop tools are optional.
Minimum years of experience required (mention the bare minimum acceptable for this position)*: 4+
Certifications Needed (if any):
Top 3 responsibilities you would expect the Subcon to shoulder and execute*:
  1. Technical skill set as mentioned
  2. Client interfacing skills
  3. Communication skills
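The testing profile above centers on validating batch loads across SQL, Unix, and Hadoop. A minimal sketch of one such check — row-count reconciliation between a source and target table — using in-memory SQLite stand-ins (table name `orders` is hypothetical; real work would query Hive or an RDBMS):

```python
import sqlite3

def rowcount_reconciliation(src_conn, tgt_conn, table):
    """Compare row counts between a source and a target table -- a basic
    batch-load validation used in ETL testing."""
    src = src_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    tgt = tgt_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    return {"source": src, "target": tgt, "match": src == tgt}

# Stand-in databases; in practice these would be Hive/RDBMS connections.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for conn in (src, tgt):
    conn.execute("CREATE TABLE orders (id INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(3)])

result = rowcount_reconciliation(src, tgt, "orders")
```

A real suite would extend the same pattern to checksums and column-level diffs, not just counts.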

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Bigdata-249424,249420,249418

Job Description

Total Yrs. of Experience*: 3-6 yrs
Relevant Yrs. of Experience*: at least 3 years
Detailed JD (Roles and Responsibilities)*: PFA
Mandatory skills*: Big Data + Scala + Python
Desired skills*: Python, Python data-science libraries, ML algorithms (Neural Networks, Naive Bayes, SVM, Decision Forests), NLP and text-analytics technologies, R, MATLAB, HDFS, Hive, Spark, Scala, etc.; data-visualization tools such as Tableau; query languages such as SQL and Hive
Domain*: Sourcing and Procurement

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Bigdata,scala,Python-249025

Job Description

Qualifications
● Master's or bachelor's degree in Physics, Applied Math, Statistics, Computer Science, Engineering, or a related field, or equivalent experience; a graduate degree is heavily preferred.
● At least 3-5 years of experience as a software or data engineer.
● Experience with and knowledge of the SAP ecosystem or similar ERP ecosystems.
● Familiarity with ticket-management tools such as ServiceNow or Jira.
● Experience with code version control and collaboration platforms such as Bitbucket and GitHub.
● Knowledge of reporting and visualization with tools such as Tableau and Business Objects.
● Knowledge of or experience in ETL process architecture, design, and implementation.
● Ability to manage multiple, conflicting, high-priority requests.
● Strong analytical and problem-solving skills with real-life datasets, and the ability to abstract mathematical models from real-life problems.
● Knowledge of Google Cloud storage/data products and services.
● Strong coding skills in Python, R, or other scripting languages, with experience delivering production-ready code.
● Experience in SQL development.
● Ability to prioritize and manage projects to completion without guidance.
● Translates data and information into clear, professional reports and analyses that offer insightful interpretation.
● Understanding of AI/ML development projects and technical architecture is a plus.
● Technical certifications such as Google Cloud Data Engineer, or advanced certifications in data science, are a plus.
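The ETL and SQL-development items above can be sketched end to end. A toy extract-transform-load step over an in-memory SQLite database (table and column names `raw_sales`/`clean_sales` are illustrative, not from any listing):

```python
import sqlite3

def etl(conn):
    """Toy extract-transform-load: read raw rows, normalise them, load the
    result into a reporting table. Schema is illustrative only."""
    conn.execute("CREATE TABLE IF NOT EXISTS clean_sales (region TEXT, amount REAL)")
    rows = conn.execute("SELECT region, amount FROM raw_sales").fetchall()      # extract
    cleaned = [(r.strip().upper(), float(a)) for r, a in rows if a is not None]  # transform
    conn.executemany("INSERT INTO clean_sales VALUES (?, ?)", cleaned)           # load
    return len(cleaned)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_sales VALUES (?, ?)",
                 [(" east ", 10.0), ("west", 5.5), ("north", None)])
loaded = etl(conn)
```

Production ETL would of course add staging, error handling, and incremental loads; the shape of the three phases is the same.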

Responsibilities

Qualifications ● Master’s or bachelor's degree in Physics, Applied Math, Statistics, Computer Science, Engineering or a related degree or equivalent experience. Graduate degree heavily preferred. ● At least 3-5 years of experience as a software or data engineer. ● Experience with and knowledge of SAP ecosystem or similar ERP ecosystems. ● Understanding with ticket management tools like ServiceNow or Jira ● Experience with code version control and collaboration platforms like Bitbucket, GitHub ● Knowledge of reporting and visualization with tools such as Tableau & Business Objects ● Knowledge or experience in ETL process architecture, design, and implementation. ● Ability to manage multiple, conflicting, high priority requests ● Ability to demonstrate strong analytical and problem-solving skills with real-life datasets, and be able to abstract mathematical models from real-life problems ● Knowledge with Google cloud storage /data products and services. ● Strong coding skills in Python, R, or other scripting languages with experience in delivering production-ready code ● Experience in SQL development ● Ability to prioritize and manage projects to completion without guidance ● Translates data and information into clear, professional reports and analyses that offer an insightful interpretation ● Understanding on AI/ML development projects and technical architecture specifically is a plus ● Technical certifications such as Google Cloud Data Engineer or advanced certifications in data science a plus.
  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Engineer

Job Description

The candidate is expected to be proficient in Python with big-data analytics; hands-on experience is a must. Should be able to coordinate teams and discussions, possess excellent communication skills, and be willing to work from the client office.
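"Hands-on Python with big-data analytics" typically means processing data too large to hold in memory. A minimal sketch of the chunked-streaming pattern (chunk size and data are illustrative):

```python
from itertools import islice

def chunked(iterable, size):
    """Yield fixed-size chunks so a large dataset never sits in memory at once."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def streaming_mean(values, chunk_size=1000):
    """Running mean computed chunk by chunk, as one would over a large file
    or table scan rather than a fully materialised list."""
    total, count = 0.0, 0
    for chunk in chunked(values, chunk_size):
        total += sum(chunk)
        count += len(chunk)
    return total / count if count else 0.0

mean = streaming_mean(range(1, 10001), chunk_size=500)  # mean of 1..10000
```

The same pattern scales from a generator over a local file up to Spark, where the framework handles the chunking.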

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: python,Bigdata-248780

Job Description

Must Have Skills (Top 3 technical skills only)*:
  1. Unix
  2. SQL
  3. Hadoop
Nice to have skills (Top 2 only): 1. 2.
Detailed Job Description: The candidate should have strong testing experience in Hadoop, Spark, Hive, SQL, Unix shell scripting, and Java, as well as strong knowledge of preparing test artifacts such as test cases, test plans, and test-result documents.
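A test artifact for a Hadoop/Hive job pairs a small transformation with a named, repeatable test case. A hedged sketch using the standard `unittest` module (the `dedupe_keep_latest` transform and its record shape are hypothetical examples, not from the listing):

```python
import unittest

def dedupe_keep_latest(records):
    """Keep the latest record per key -- the kind of transformation a
    Hive/Spark test suite would verify. Records are (key, ts, value)."""
    latest = {}
    for key, ts, value in records:
        if key not in latest or ts > latest[key][0]:
            latest[key] = (ts, value)
    return {k: v for k, (_, v) in latest.items()}

class DedupeTestCase(unittest.TestCase):
    """A minimal test artifact: named case, fixed input, expected result."""
    def test_latest_wins(self):
        rows = [("a", 1, "old"), ("a", 2, "new"), ("b", 1, "only")]
        self.assertEqual(dedupe_keep_latest(rows), {"a": "new", "b": "only"})

unittest.main(argv=["dedupe"], exit=False)
```

The test plan and test-result documents the listing asks for would enumerate cases like this one alongside their observed outcomes.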

Responsibilities

Must Have Skills (Top 3 technical skills only) * 1. Unix 2. SQL 3. Hadoop Nice to have skills (Top 2 only) 1. 2. Detailed Job Description: Candidate should have strong testing experience in Hadoop, Spark, Hive, SQL, Unix Shell Script and Java. Also should have strong knowledge to prepare test artifacts like Test Case, Test plan, Test Result documents.
  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Bigdata Testing - 247934

Job Description

• 5+ years of IT experience, with 3+ years on Big Data skills
• Experience in Hadoop, MapReduce, and Hive, with hands-on experience supporting and testing Hadoop applications
• Experience in Hive queries and Linux/Unix (Linux preferable); hands-on experience in Spark 2.0
• Experience testing ETL, reports, and BI in a high-volume environment
• Experience validating data mapping, selection criteria, aggregations, sorting, lookups, transformations, and data loads
• Hands-on experience generating test data and test-related procedures, packages, and triggers
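Validating aggregations, as the list above requires, usually means recomputing a reported figure from detail rows and diffing the two. A minimal sketch (the SUM-by-region shapes are illustrative):

```python
from collections import defaultdict

def validate_aggregation(detail_rows, report_rows):
    """Recompute a SUM-by-key aggregation from detail data and diff it against
    the reported figures -- a common ETL/BI reconciliation test.
    Returns {key: (expected, reported)} for every mismatch."""
    expected = defaultdict(float)
    for key, amount in detail_rows:
        expected[key] += amount
    return {k: (expected.get(k, 0.0), v)
            for k, v in report_rows.items()
            if expected.get(k, 0.0) != v}

detail = [("east", 10.0), ("east", 5.0), ("west", 7.0)]
report = {"east": 15.0, "west": 8.0}   # "west" is deliberately wrong
mismatches = validate_aggregation(detail, report)
```

The same diff pattern extends to the other validations listed (sorting, lookups, transformations): recompute independently, then compare.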

Responsibilities

• 5+ years of IT experience with 3+ years on Big Data skills • Experience in Hadoop, MapReduce, Hive with hands-on experience in supporting Hadoop applications and testing. • Experience in Hive queries, Linux/Unix – Linux is preferable, hands on experience in Spark 2.0 • Experience in testing ETL, Reports, BI and reporting in a high volume environment. • Experience validating data mapping, selection criteria, aggregations, sorting, lookups, transformations, data loads • Hands-on experience in generating test data and test related procedures, packages, triggers
  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Bigdata Testing

Job Description

Big Data ETL Developers:
• Proficient understanding of distributed computing principles and management of a Hadoop cluster with all included services
• Proficiency with HiveQL and a scripting language
• Knowledge of various ETL techniques and frameworks, such as Flume, Kafka, and Sqoop (optional)
• Understanding of Hadoop, Hive, and SQL
• Very good hold on Hive scripting and performance tuning
• Data-processing (ETL) ability using Hive scripting
• Must not be limited to data-migration capability from a legacy DB to a Hadoop cluster
• Able to analyze, develop, and debug Hive scripts independently
• Proficient with partitioning, analytical aggregation, and dealing with large tables
• Understanding of MySQL is good to have
• Unix shell scripting
• Conceptual understanding of data modeling and master/metadata management (MDM)
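The partitioning and aggregation skills above can be sketched conceptually: Hive lays a partitioned table out by partition-column value, so a filter on that column touches only one partition's data instead of scanning the whole table. A toy Python model of that idea (the sales dataset is hypothetical; real work would be HiveQL on a cluster):

```python
from collections import defaultdict

def partition_by(rows, key_index):
    """Group rows by a partition column, mirroring how Hive stores a
    partitioned table as one directory per partition value."""
    parts = defaultdict(list)
    for row in rows:
        parts[row[key_index]].append(row)
    return parts

def sum_for_partition(parts, key, value_index):
    """Aggregate within a single partition only -- the effect of partition
    pruning, where a WHERE clause on the partition column skips the rest."""
    return sum(row[value_index] for row in parts.get(key, []))

sales = [("2024-01", "widgets", 100), ("2024-01", "gears", 50),
         ("2024-02", "widgets", 70)]
parts = partition_by(sales, 0)        # partitioned by month
jan_total = sum_for_partition(parts, "2024-01", 2)
```

In HiveQL the pruning happens automatically when the query filters on the partition column; the point of the sketch is only the data layout that makes it possible.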

Responsibilities

Big Data ETL Developers: Proficient understanding of distributed computing principles, Management of Hadoop cluster, with all included services • Proficiency with Hive-QL and scripting language • Knowledge of various ETL techniques and frameworks, such as Flume, Kafka, Sqoop( Optional) • Understanding of Hadoop, Hive and SQL • Very Good hold on Hive Scripting Language & Performance Tuning • Candidate should have Data Processing ability (ETL techniques) using hive scripting experience. • Candidate MUST NOT be limited to Data Migration capability from legacy DB to Hadoop Cluster • Candidate must be able to Analyze, Develop and Debug the Hive Scripts on his own. • Proficient with Partitioning, Analytical aggregation and dealing with large tables. • Understanding of MySQL is a good to have criteria • Unix shell scripting • Conceptual understanding of Data Modeling , Master/Meta data management (MDM)
  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Engineer

Job Description

Big Data with Azure

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Bigdata with Azure