We found 6 jobs matching your search


Job Description

• A degree in information science, business informatics, physics, mathematics, economics, or a similar field, with 0-6 months of training or certification
• Know-how of ETL and databases, particularly in the Big Data and Data Warehousing environment
• Programming knowledge in Python (especially Pandas, NumPy, scikit-learn, Keras) or R; SQL and Bash are a plus
• Familiarity with analysis and visualization tools, or with the corresponding Python or R libraries (Bokeh, Matplotlib, ggplot, Plotly)
• Ideally you have worked with at least one of these technologies: Dataiku, Jupyter, Axway, Databricks, APIs, Git, SAP ERP & SAP BW
• Knowledge of DevOps, frontend development, coding standards, testing, and profiling is a plus
• Knowledge of MS Azure with Fabric tools such as ADF, OneLake, and PySpark notebooks is essential

Responsibilities

  • Salary: Rs. 0.0 - Rs. 6.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Big Data & Data Warehousing - Rao, Kameshwara

Job Description

Job Description: At least 5 to 8 years of relevant experience in the development of Big Data applications. Hands-on experience with Unix/Linux-based applications and scripting knowledge in Shell/Bash/Python. Experience working in agile teams; knowledge of Scrum or SAFe. Experience building Apache NiFi pipelines and batch applications. Good understanding of performance design patterns and anti-patterns. Strong knowledge of payment systems and/or payment applications from the financial/banking domain. Essential Skills: Hadoop/Big Data, Hive, Impala, Spark, Apache NiFi, Unix, Shell Scripting, SQL

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: BigData and Hadoop Ecosystems~Digital : Scala~Digital : PySpark~Core Java

Job Description

Big Data Specialist Engineer

Minimum Qualifications:
  • Hands-on Hadoop ecosystem application development experience
  • Hands-on Spark and Scala development experience

Preferred Qualifications:
  • 5+ years of application-building experience in a professional environment
  • Thorough understanding of Hadoop and its ecosystem, i.e. Spark, HDFS, Hive
  • Good understanding of modern and traditional databases and SQL
  • Working knowledge of microservices
  • Excellent coding skills in Scala; hands-on experience with Spark and advanced Scala is a must
  • Hands-on experience with Apache NiFi and Kafka is good to have
  • Proficiency in the Linux environment and its tools, e.g. grep, awk, sed
  • Hands-on experience with Git, Jenkins, Ansible
  • Financial domain knowledge would be an added advantage
  • Good problem-solving skills and attention to detail
  • Excellent communication and interpersonal skills; able to work independently as well as in a team environment

Responsibilities

  • Salary: Rs. 0.0 - Rs. 16.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Specialist Software Engineer

Job Description

Job Description: Excellent problem-solving and analytical skills. Essential Skills: Experience in data engineering, with a focus on the Java, PySpark, and SQL programming languages. Strong understanding of data architectures and data warehousing principles. Hands-on experience with Big Data technologies (Spark, Hive, Hadoop).

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: BigData and Hadoop Ecosystems~Advanced Java Concepts

Job Description

Job Description: Excellent problem-solving and analytical skills. Essential Skills: Experience in data engineering, with a focus on the Java, PySpark, and SQL programming languages. Strong understanding of data architectures and data warehousing principles. Hands-on experience with Big Data technologies (Spark, Hive, Hadoop).

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: BigData and Hadoop Ecosystems~Advanced Java Concepts

Job Description

  • Should have 3 to 5 years of hands-on experience working with Java and Spring Framework components
  • Should have at least 2 years of hands-on experience using Java Spark on HDInsight or SoK8s
  • Should have at least 2 years of hands-on experience with container and orchestration tools such as Docker and Kubernetes
  • Should have experience working on projects using Agile methodologies and CI/CD pipelines
  • Should have experience with at least one RDBMS, such as Oracle, PostgreSQL, or SQL Server
  • Nice to have: exposure to Linux platforms such as RHEL, and to cloud platforms such as Azure Data Lake
  • Nice to have: exposure to the Investment Banking domain

Responsibilities

  • Salary: Rs. 0.0 - Rs. 12.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Specialist Software Engineer - Java + BigData (250009V9)