Essential:
· At least 2 years of professional experience developing Python applications with Django and/or Flask (Python scripting alone does not count)
· Full-stack development (front end and back end)
· Software development fundamentals: PEP 8, SOLID, MVC, TDD, CI/CD, etc. (a computer science degree and/or formal software development training)
· Interest in learning new skills in networking and automation (ideally already present, but we can teach it)
· Smart, proactive, and eager to learn.
Highly Desirable:
· Developing containerised applications, e.g. with Docker, Kubernetes
· Developing applications to work in a cloud environment
· Network and security fundamentals (a vendor qualification like a CCNA would be perfect)
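By way of illustration of the fundamentals named above, here is a minimal sketch of a PEP 8-style Flask view with a TDD-style pytest test. Everything in it (create_app, the /health route, the test) is an invented example, not project code:

    # Minimal Flask view plus a TDD-style pytest test; all names are illustrative.
    from flask import Flask, jsonify

    def create_app() -> Flask:
        # Application factory, a common Flask pattern.
        app = Flask(__name__)

        @app.get("/health")
        def health():
            # Trivial JSON endpoint exercised by the test below.
            return jsonify(status="ok")

        return app

    # Under TDD this test would be written first, then made to pass.
    def test_health_endpoint():
        client = create_app().test_client()
        response = client.get("/health")
        assert response.status_code == 200
        assert response.get_json() == {"status": "ok"}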
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
AWS Cloud Engineer – AWS application development and deployment; develop Infrastructure as Code (IaC) using tools such as Terraform or AWS CloudFormation.
Strong knowledge of VPC, EC2, S3, Lambda, RDS, IAM, CloudTrail, and CloudWatch.
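To make the IaC requirement concrete, a minimal sketch in Python using AWS CDK v2, which synthesizes a CloudFormation template (the posting equally accepts Terraform). Stack and bucket names are illustrative assumptions, and the aws-cdk-lib package must be installed:

    # Minimal IaC sketch with AWS CDK v2; names are illustrative assumptions.
    from aws_cdk import App, Stack, RemovalPolicy
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class AppDataStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # One versioned S3 bucket, declared in code rather than clicked together.
            s3.Bucket(
                self,
                "AppDataBucket",
                versioned=True,
                removal_policy=RemovalPolicy.DESTROY,  # acceptable for a demo stack
            )

    app = App()
    AppDataStack(app, "AppDataStack")
    app.synth()  # emits the CloudFormation template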
Salary: Rs. 10,00,000 - Rs. 25,00,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
3+ years of quality-assurance testing experience
2+ years of experience working in Agile teams
A graduate degree in Computer Science, Engineering or a related field
Experience with bug-tracking systems such as JIRA
Attention to detail, ability to follow test plans and scripts, and good management skills
Preferred skills and qualifications
A Java certification
Exposure to CI/CD tools such as GitLab
Excellent communication skills that can cross multiple technical disciplines
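To make "following test plans and scripts" concrete, a minimal pytest sketch of one test-plan step captured as an automated regression test. The function under test and the JIRA ticket id are invented for illustration:

    import pytest

    def apply_discount(total: float, percent: float) -> float:
        # Toy function standing in for real application code.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(total * (1 - percent / 100), 2)

    def test_discount_happy_path():
        # Test plan step: a 10% discount on 200.00 yields 180.00.
        assert apply_discount(200.00, 10) == 180.00

    def test_discount_rejects_bad_input():
        # Regression test for a hypothetical JIRA ticket (e.g. QA-123).
        with pytest.raises(ValueError):
            apply_discount(200.00, 150)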
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Role Category: Programming & Design
Role: Specialist Software Engineer - Manual Tester
We are looking for a Data Engineer with strong experience in Python, Apache Spark, and Apache Airflow to join our data engineering team. You will be responsible for designing, building, and maintaining scalable data pipelines and workflows that support our analytics and data science initiatives.
8+ years of experience required.
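As an illustration of this stack, a minimal sketch of an Airflow DAG that submits one Spark job daily. The dag_id, application path, and conn_id are illustrative assumptions, and the apache-airflow-providers-apache-spark package is required:

    # Minimal Airflow DAG sketch: one Spark job scheduled daily.
    from datetime import datetime
    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="daily_events_pipeline",  # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        transform_events = SparkSubmitOperator(
            task_id="transform_events",
            application="/opt/jobs/transform_events.py",  # hypothetical PySpark script
            conn_id="spark_default",
        )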
Salary: Rs. 10,00,000 - Rs. 25,00,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Job Title: Developer
Work Location: Mumbai, MH / Hyderabad, TG / Chennai, TN / Bhubaneswar, OR / Bangalore, KA
Duration: 6 months (Extendable)
Skills Required: Microsoft Azure, Python for Data Science, Databricks, PySpark, Azure Data Factory
Experience Range in Required Skills: 6-8 Years
Job Description:
1. Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks.
2. Developing scalable and reusable frameworks for ingesting data sets.
3. Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times.
4. Working with event-based streaming technologies to ingest and process data.
5. Working with other members of the project team to support delivery of additional project components (API interfaces, Search).
6. Evaluating the performance and applicability of multiple tools against customer requirements.
Key Responsibilities:
1. Develop and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Azure Databricks.
2. Implement data ingestion flows from diverse sources including Azure Blob Storage, Azure Data Lake, On-Prem SQL, and SFTP.
3. Design and optimize data models and transformations using Oracle, Spark SQL, PySpark, SQL Server, and Progress DB SQL (a short PySpark sketch appears below this list).
4. Build orchestration workflows in ADF using activities like Lookup, ForEach, Execute Pipeline, and Set Variable.
5. Perform root cause analysis and resolve production issues in pipelines and notebooks. Collaborate on CI/CD pipeline creation using Azure DevOps and Jenkins.
6. Apply performance tuning techniques to Azure Synapse Analytics and SQL DW.
7. Maintain documentation including runbooks, technical design specs, and QA test cases.
8. Data Pipeline Engineering: design and implement scalable, fault-tolerant data pipelines using Azure Synapse and Databricks.
9. Ingest data from diverse sources including flat files, DB2, NoSQL, and cloud-native formats (CSV, JSON).
Technical Skills Required:
Cloud Platforms: Azure (ADF, ADLS, ADB, Azure SQL, Synapse, Cosmos DB)
ETL Tools: Azure Data Factory, Azure Databricks
Programming: SQL, PySpark, Spark SQL
DevOps / Automation: Azure DevOps, Git, CI/CD, Jenkins
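The PySpark sketch referenced in item 3: a minimal read-clean-write transformation of the kind described above. Storage paths and column names are illustrative assumptions; the Delta output assumes Databricks, where Delta Lake is available by default:

    # Minimal PySpark sketch: read CSV from ADLS, clean, write Delta.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_transform").getOrCreate()

    raw = (
        spark.read.option("header", True)
        .csv("abfss://landing@youraccount.dfs.core.windows.net/orders/")  # hypothetical path
    )

    clean = (
        raw.dropDuplicates(["order_id"])  # invented column names
        .withColumn("order_date", F.to_date("order_date"))
        .filter(F.col("amount").isNotNull())
    )

    (
        clean.write.format("delta")
        .mode("overwrite")
        .save("abfss://curated@youraccount.dfs.core.windows.net/orders/")  # hypothetical path
    )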
Salary: Rs. 70,000 - Rs. 1,30,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance