Big Data - Architect/Senior Architect:
Job description
• Architecture experience with Spark, AWS and Big Data.
• Experience designing and building a Data Lake on AWS Cloud.
• Hands-on experience with AWS Cloud and its related tools, storage services, and architectural aspects is required.
• Experience integrating different data sources with a Data Lake is required.
• Experience with Big Data/analytics/data science tools and a good understanding of the industry's leading products are required, along with passion, curiosity, and technical depth.
• Thorough understanding of and working experience with Cloudera/Hortonworks Hadoop distributions.
• Solid functional understanding of Big Data technologies, streaming, and NoSQL databases.
• Experience working with the Big Data ecosystem, including tools such as YARN, Impala, Hive, Flume, HBase, Sqoop, Apache Spark, Apache Storm, Crunch, Java, Oozie, Pig, Scala, Python, and Kerberos/Active Directory/LDAP.
• Experience solving streaming use cases using Spark, Kafka, and NiFi.
• Thorough understanding, strong technical/architectural insight, and working experience with Docker and Kubernetes.
• Containerization experience with the Big Data stack using OpenShift/Azure.
• Exposure to cloud computing and object storage services/platforms.
• Experience with Big Data deployment architecture, configuration management, monitoring, debugging, and security.
• Experience performing cluster sizing exercises based on capacity requirements.
• Ability to build strong partnerships with internal teams and vendors to resolve product gaps/issues, and to escalate to management in a timely manner.
• Good exposure to CI/CD tools, application hosting, and containerization concepts.
• Excellent verbal and written communication skills, strong team and interpersonal skills, and proficiency with MS Visio.
• Strong analytical and problem-solving skills.
• Must be a self-starter.
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Big Data Engineer Skills/Requirements:
• 5 – 7 years of recent experience in data engineering.
• Bachelor’s Degree or more in Computer Science or a related field.
• A solid track record in data management, demonstrating flawless execution and attention to detail.
• Ability to select and integrate the Big Data tools and frameworks required to provide requested capabilities.
• Extensive experience with SQL, streaming, and ETL.
• Experience integrating data from multiple data sources.
• Technical expertise in data models, database design and development, data mining, and segmentation techniques.
• Strong knowledge of and experience with statistics.
• Programming experience, ideally in Python, Spark, Kafka, Sqoop, or Java, and a willingness to learn new programming languages to meet goals and objectives.
• Experience in C, Perl, JavaScript, or other programming languages is a plus.
• Knowledge of data cleaning, wrangling, visualization, and reporting, with an understanding of the most efficient tools and applications for completing these tasks.
• Experience in MapReduce is a plus.
• Deep knowledge of data mining, machine learning, natural language processing, or information retrieval.
• Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
• Experience with machine learning toolkits such as H2O, Spark MLlib, or Mahout.
• A willingness to explore new alternatives for solving data mining issues, combining industry best practices, data innovations, and your own experience to get the job done.
• Experience in production support and troubleshooting.
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance