• Independently designs, implements, and maintains GraphQL backends using TypeScript, Apollo Server, Axios/Fetch (REST SDKs), MariaDB (SQL) with ObjectionJS + KnexJS, Redis, AWS, and modern authorization frameworks including AuthZ libraries or GraphQL Envelop.
• Applies strong engineering fundamentals, including Linux, Docker, OWASP, SOLID/DRY/KISS/YAGNI, and sound data structures & algorithms for correctness, safety, and efficiency.
• Authors strongly typed schemas using GraphQL SDL or code first frameworks (e.g., GiraphQL/Pothos, TypeGraphQL) and uses GraphQL Code Generator to produce type safe definitions for schemas, resolvers, queries, mutations, and subscriptions.
• Configures Apollo Server with TypeScript; develops type safe resolvers; implements REST and database data sources; manages context initialization (auth, tenancy, request scoping); and enforces query depth/complexity limits, rate limiting, and persisted queries.
• Demonstrates expert SQL capabilities in MariaDB, including schema design, indexing, migrations, query optimization, and resilient data access through ObjectionJS and KnexJS, ensuring idempotent operations and safe transactions.
• Optimizes performance by eliminating N+1 through DataLoader, implementing caching (in memory and Redis), optimizing pagination and batching, and profiling GraphQL resolver and SQL hot paths.
• Implements secure by design GraphQL services, including OAuth/OIDC, encryption in transit/at rest, secret management, input validation, output encoding, least privilege access, and resolver level authorization to mitigate CSRF/CORS and other abuse vectors.
• Defines and executes high quality GraphQL and REST API tests across all API testing types (contract, functional, integration, negative, security, performance) using Browser DevTools, Bruno, and Insomnia, and writes comprehensive unit, integration, and end to end tests.
• Produces maintainable, reusable, type safe code that is fast, idempotent, resilient, observable, and fault tolerant, incorporating retries, exponential backoff, graceful degradation, circuit breakers, and robust structured error surfaces.
• Diagnoses failures across the stack using logs, metrics, traces, database analysis, and network inspection; performs root cause analysis and provides actionable remediations with evidence based findings.
• Operates services locally via Docker and deploys to AWS using appropriate configuration, secrets management, monitoring, and observability tooling; tunes caching, database performance, and GraphQL query execution in production environments.
• Designs stable, versioned, backward compatible GraphQL contracts; maintains API documentation and operational runbooks; and ensures seamless integration between backend logic, database layers, and frontend clients.
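The DataLoader technique named above can be sketched without the library itself. The following minimal batching loader (class and data names are illustrative, not from any real codebase) shows the core idea: per-key `load` calls made in the same tick are collected and resolved with one batch query instead of N separate ones.

```typescript
// Minimal batching loader in the style of DataLoader: individual .load(id)
// calls queued in the same microtask turn share a single batch query.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void; reject: (e: unknown) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise<V>((resolve, reject) => {
      this.queue.push({ key, resolve, reject });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush after the current microtask queue drains, so all resolver
        // calls from one GraphQL execution pass land in the same batch.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    try {
      const values = await this.batchFn(batch.map((item) => item.key));
      batch.forEach((item, i) => item.resolve(values[i]));
    } catch (err) {
      batch.forEach((item) => item.reject(err));
    }
  }
}

// Usage: three "resolver" calls, one simulated SQL round trip.
let batchCalls = 0;
const userLoader = new TinyLoader<number, string>(async (ids) => {
  batchCalls += 1; // stands in for: SELECT ... FROM users WHERE id IN (...)
  return ids.map((id) => `user-${id}`);
});

async function demo(): Promise<string[]> {
  return Promise.all([userLoader.load(1), userLoader.load(2), userLoader.load(3)]);
}
```

The real DataLoader adds per-request caching and key deduplication on top of this batching core.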
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Role Summary:
Senior individual contributor across 4–5 concurrent projects. You engage Business and Transformation Leaders to assess feasibility, deliver POCs in 1–4 weeks, define solution architecture, and build the complex pieces yourself. The quality of your upfront design determines how fast the team builds and how clean the testing is.
Key Responsibilities:
• Business Engagement & Feasibility
o Meet Business and Transformation Leaders to understand pain points and assess AI solution feasibility.
o Recommend Pro Code (LangGraph/AgentCore) or Low Code (Copilot Studio/Power Automate) based on the use case — document and communicate the rationale.
o Deliver working POCs within 1–4 weeks. Evaluate Forward Engineer POCs and decide to scale or rebuild based on quality.
o Present feasibility and POC outcomes to business stakeholders with clear scope, effort, and value framing.
• Architecture & Design
o Define solution architecture on AWS AgentCore and LangGraph — the primary stack for all Pro Code solutions.
o Invest heavily upfront in design robustness: strong architecture enables smooth builds; weak architecture amplifies every downstream problem.
o Design systems integration: API architecture, MCP connections, database and data platform access patterns, SAP, Salesforce, and internal systems.
o Define agent state management, tool orchestration, human-in-the-loop escalation, and data flow.
o Ensure all solutions comply with TR’s established security, governance, and compliance standards.
o Continuously evaluate emerging agentic AI frameworks, platform updates, and industry patterns — provide evidence-based recommendations on adoption timing and fit for TR's stack.
• Hands-On Build & Team Leadership
o Build complex and architecturally critical solution components directly — this is a coding role.
o Guide the Solutions Lead, Developer, and Associate through architecture, implementation patterns, and production readiness.
o Enable the Lead to own day-to-day decisions during build by ensuring architecture is unambiguous before stepping back.
o Use AI coding tools (Claude Code, GitHub Copilot, Cursor, Cline) to accelerate POC and development. Own all generated code fully.
Required Skills & Experience:
Must Have:
o AWS AgentCore — Runtime, Memory, Tools Gateway — production hands-on required.
o LangGraph — Multi-agent state machines, conditional routing, checkpointing, HITL — primary framework.
o LangChain — Advanced chains, memory, custom tool integration.
o AWS Bedrock — Multi-model deployment, knowledge bases, guardrails.
o Database & AI Data Access — SQL proficiency, NL-to-SQL, LLM-powered query and insight layers. Snowflake a plus.
o Systems Integration — API design (REST), MCP server/client, A2A patterns, SAP/Salesforce/internal system connectors.
o RAG Architecture — Hybrid search, re-ranking, agentic RAG, graph RAG — select and justify per use case.
o Multi-Model Strategy — OpenAI, Claude, Gemini, Llama — provider trade-offs and cost governance.
o Pro Code vs Low Code — Evaluate each use case and recommend. Copilot Studio and Power Automate for the right automation scenarios.
o AI Development Tools — Claude Code, GitHub Copilot, Cursor, or Cline — accelerate delivery; own and fix all generated code in production.
o Python — Expert-level production code — you write, review, and fix code.
o Production Deployment — Docker, CI/CD, post-deployment monitoring, cost optimisation.
o Business Communication — Present feasibility and POC outcomes to business leaders clearly.
o Cloud Adaptability — Google Agentspace and Azure AI Foundry exposure welcome — AWS is the primary stack.
o Experience — 10+ years total; 3–5 years solution architecture with direct delivery accountability; production agentic AI systems deployed.
Good to Have:
o MCP / A2A — Production server/client implementations.
o Document Intelligence — Azure Document Intelligence, Textract, layout-aware chunking.
o Fine-Tuning — LoRA / QLoRA for domain adaptation.
o Graph Databases — Neo4j for knowledge graph RAG.
o Domain Experience — Legal, financial, or regulatory AI applications.
o Certifications — AWS Solutions Architect Pro, Google Professional Cloud Architect, Azure Solutions Architect Expert.
What We Expect From You
• Customer Obsession
o Proactively understand customer goals and deliver measurable value.
• Competitive Drive
o Set high standards, demonstrate tenacity, and ensure our solutions lead in quality.
• Challenging Mindset
o Foster fact-based dialogue, challenge assumptions, and encourage disruptive thinking.
• Action and Learning Velocity
o Build fast, fail fast, learn fast. Iterate rapidly and make data-driven decisions.
• Collaboration and Accountability
o Collaborate across a global team with humility, ownership, and mutual accountability.
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Please work on the below demand.
• Primary mandatory skill: Tanium
• Secondary mandatory skill: Intune
• Contractor (CWR) profiles acceptable: Yes
• Job location: Open (flexible to hire in any location)
Detailed Job Description:
Tanium Admin
• Deploying, configuring, and maintaining the Tanium platform and its modules.
• Performing system health checks and ensuring robust endpoint management.
• Identifying, analyzing, and remediating security vulnerabilities and compliance issues.
• Creating and managing Tanium sensors and questions to gather data.
• Troubleshooting endpoint issues across various operating systems like Windows, Linux, and macOS.
• Assisting customers with support cases and answering questions.
• Developing and testing new product features.
• Creating and managing deployment strategies for software and patches.
• Working with automation and low-code tools like Tanium Automate.
Intune Admin :
• Design, implement, and manage Intune policies for Windows, macOS, iOS, and Android platforms.
• Oversee application deployment strategies using Microsoft Endpoint Manager (MEM).
• Configure and maintain app protection and configuration policies.
• Provide L3/L4 support for escalated issues related to Intune, device compliance, and application deployment.
• Analyze logs and telemetry data to resolve complex technical issues.
• Collaborate with Microsoft support and internal teams for issue resolution.
• Implement and manage compliance policies, conditional access, and endpoint security baselines.
• Integrate Intune with Microsoft Defender for Endpoint and other security tools.
• Ensure endpoint configurations align with organizational security standards and regulatory requirements.
• Develop and maintain PowerShell scripts to automate Intune tasks and generate reports.
• Utilize Microsoft Graph API for advanced automation and integration.
• Design and implement integrations with Azure AD, Autopilot, SCCM (co-management), and third-party tools.
• Participate in architectural planning and contribute to the endpoint management roadmap.
• Create and maintain dashboards and reports for device compliance, deployment status, and user activity.
• Monitor system health and performance of Intune services.
• Maintain comprehensive documentation of configurations, procedures, and troubleshooting steps.
• Provide training and mentorship to L1/L2 support teams.
Key skills and qualifications:
• Tanium & Intune expertise: Deep knowledge of the platform and its various modules.
• Operating systems: Strong knowledge of Windows, Linux, and macOS environments.
• Scripting and automation: Experience in automating tasks and creating sensors and questions.
• Security and compliance: Understanding of vulnerability management, threat hunting, and compliance reporting.
• Troubleshooting: Ability to identify and solve issues on endpoints.
• Deployment experience: Familiarity with Tanium & Intune-based deployments and other tools like SCCM.
• Customer support: Skills in triaging and solving support cases.
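The Microsoft Graph automation listed above can be illustrated with a small request builder. The endpoint is Graph's documented `deviceManagement/managedDevices` resource, but the helper name, filter choice, and token handling are assumptions for the sketch, and no request is actually sent here.

```typescript
// Sketch: compose a Microsoft Graph request for non-compliant Intune devices.
// Assumes an access token obtained elsewhere (e.g. Azure AD client credentials).
interface GraphRequest {
  url: string;
  headers: Record<string, string>;
}

function nonCompliantDevicesRequest(accessToken: string, top = 50): GraphRequest {
  const base = "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices";
  const params = new URLSearchParams({
    // OData query options; complianceState is a managedDevice property.
    "$filter": "complianceState eq 'noncompliant'",
    "$select": "deviceName,operatingSystem,complianceState",
    "$top": String(top),
  });
  return {
    url: `${base}?${params}`,
    headers: { Authorization: `Bearer ${accessToken}` },
  };
}
```

In practice the same query is often driven from PowerShell via the Microsoft Graph module; the URL shape is identical.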
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
• 10+ years of overall experience in SAP integration and middleware technologies.
• 5+ years of hands-on experience with SAP CPI (Cloud Platform Integration), including iFlows, adapters, mappings, and integration patterns.
• Strong knowledge of SAP BTP, API Management, and Cloud Connector.
• Experience working on SAP S/4HANA conversion or greenfield/brownfield implementation projects.
• Expertise in integration technologies such as IDocs, BAPIs, RFCs, SOAP/REST APIs, OData services.
• Hands-on experience with Groovy scripting and message mappings in CPI.
• Strong understanding of security concepts (OAuth, certificates, encryption, etc.).
• Excellent analytical, problem solving, and communication skills.
• Ability to lead teams and coordinate with global stakeholders.
Preferred Qualifications:
• SAP Certification in CPI or Integration Suite.
• Experience with migration tools, AIF, Integration Advisor, or Event Mesh.
• Experience in Agile/DevOps driven project delivery.
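On the OData side of the integration skills above, a query URL for an S/4HANA-style service can be composed as below. The host is hypothetical; the `/sap/opu/odata/sap/` path segment follows SAP's common ICF convention, and the service and entity names are illustrative.

```typescript
// Sketch: compose an OData query URL for a hypothetical S/4HANA service.
// Only the URL is built; authentication and the actual call are out of scope.
function odataQueryUrl(
  host: string,
  service: string,
  entitySet: string,
  filter: string,
): string {
  const params = new URLSearchParams({
    "$filter": filter,   // OData filter expression
    "$format": "json",   // request JSON instead of the Atom XML default
  });
  return `https://${host}/sap/opu/odata/sap/${service}/${entitySet}?${params}`;
}

const url = odataQueryUrl(
  "s4.example.com",               // hypothetical gateway host
  "API_BUSINESS_PARTNER",         // standard SAP API service name
  "A_BusinessPartner",
  "BusinessPartnerCategory eq '1'",
);
```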
Salary: Rs. 0.0 - Rs. 1,00,000.0
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
• Job Summary:
We are seeking a skilled Data Engineer with strong experience in SQL, Python, Tableau, and ETL tools to design, build, and maintain reliable data pipelines and analytics solutions. This role focuses on ensuring data quality, enabling scalable data workflows, and supporting business intelligence and reporting needs.
• Roles and Responsibilities
• Collect, clean, and validate data from multiple sources to ensure accuracy and reliability.
• Develop ETL pipelines to process data from multiple sources such as CSV files, flat files, and live databases.
• Build, maintain, and optimize SQL queries, stored procedures, and data pipelines.
• Use Python for data manipulation, automation, statistical analysis, and exploratory data analysis.
• Collaborate with cross-functional teams (data analysts, product teams, business stakeholders) to understand data requirements.
• Perform trend analysis, forecasting, and KPI reporting.
• Support data governance, documentation, and metadata management.
• Troubleshoot data issues and identify opportunities for process improvement.
• Work with cross-functional teams such as IT, engineering, operations, and finance.
• Stay up to date with emerging technologies and trends in the data analytics space, and recommend innovative solutions to improve data efficiency and quality.
• Manage changes and refresh the data for all dashboards.
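The collect-clean-validate step described above can be sketched as a small typed function. The record shape and validation rules are illustrative only; a real pipeline would source them from the schema of the target table.

```typescript
// Minimal sketch of a "clean and validate" ETL step: parse CSV-style rows,
// coerce types, and separate valid records from rejects for inspection.
interface SaleRecord {
  id: number;
  region: string;
  amount: number;
}

function cleanRows(rows: string[]): { valid: SaleRecord[]; rejects: string[] } {
  const valid: SaleRecord[] = [];
  const rejects: string[] = [];
  for (const row of rows) {
    const [idRaw, region, amountRaw] = row.split(",").map((f) => f.trim());
    const id = Number(idRaw);
    const amount = Number(amountRaw);
    if (!Number.isInteger(id) || !region || !Number.isFinite(amount)) {
      rejects.push(row); // quarantine bad rows instead of silently dropping them
      continue;
    }
    valid.push({ id, region, amount });
  }
  return { valid, rejects };
}

// One good row, one with a missing region, one with a non-numeric id.
const { valid, rejects } = cleanRows(["1, EMEA, 120.5", "2, , 99", "x, APAC, 10"]);
```

Keeping a reject stream alongside the valid output is what makes data-quality issues visible rather than silently lost.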
• Required Skills
What you need to bring: Bachelor's/Master's in Engineering, Computer Science, or equivalent experience. 3 to 5 years of experience in the IT industry; experience in the data space is a must.
Technical Skills (experience level: 3 to 5 years):
• SQL & Database Management: Expertise in querying, transforming, and optimizing data, with solid experience in relational databases (PostgreSQL, MySQL, Oracle) and NoSQL databases (MongoDB, Cassandra).
• Programming: Strong Python programming for automation and pipeline development; Scala is good to have.
• ETL/ELT Frameworks: Building pipelines to extract, transform, and load data using leading industry ETL tools such as DataIQ, Informatica, DataStage, or Alteryx.
• Data Processing Frameworks: Solid experience processing large-scale data using Apache Spark or another data processing framework; should have delivered at least one project on a large distributed system.
• Data Modeling: Designing efficient schemas and applying normalization/denormalization to ensure fast data retrieval; good experience creating logical and physical data models.
• Version Control: Proficiency in Git is mandatory for managing code and collaborating on pipelines.
• Cloud Platforms: Expertise in at least one cloud platform (GCP, AWS, or Azure) and its data services; GCP experience is good to have.
• Orchestration: Automating and scheduling complex workflows using tools like Apache Airflow, Prefect, or Dagster.
• Data Warehousing: Knowledge of modern cloud-native warehouses such as Snowflake, Google BigQuery, Teradata, or Amazon Redshift.
• Real-Time Processing: Knowledge of handling data streams as they arrive using Kafka, Flink, or Spark Streaming.
• Data Governance & Security: Implementing encryption and access controls, and ensuring compliance with regulations such as GDPR or HIPAA.
• AI/ML Integration: Building infrastructure and feature pipelines to support machine learning models.
Good-to-have technical skills:
• Knowledge of or experience with Agile software development methodology.
• Knowledge of ITIL industry best practices.
• Knowledge of Google Looker Studio is a plus.
• Knowledge of any BI tool is a plus, preferably Tableau.
• Knowledge of Pulse and Tableau Prep is an added advantage.
• Knowledge of UI/UX tools (Figma) is an added advantage.
• Manufacturing domain experience is a great value add, but not mandatory.
• Soft Skills:
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• Ability to work independently and as part of a global team.
• Self-motivated and able to work in a fast-paced environment.
• Detail-oriented and committed to delivering high-quality work.
• Displays one-team behavior while thinking about end-to-end solutioning.
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
• Designs, implements, and maintains front end applications using TypeScript, Vue 3 (Composition API), Pinia, Vue Router, Vite, SCSS, Tailwind, PrimeVue, Pino logging, Axios, and GraphQL clients (Apollo/URQL/Relay), integrating Auth0 for authentication and authorization.
• Demonstrates strong engineering fundamentals across Linux, Node.js (npm/pnpm), and Docker, applying OWASP practices and SOLID/DRY/KISS/YAGNI principles with sound data structures & algorithms.
• Exhibits deep Vue expertise: reactivity system, directives, component design, props/emits, slots, and lifecycle hooks; organizes code via Composition Functions and type safe patterns in TypeScript.
• Consumes GraphQL SDL and REST OpenAPI specifications, employing client generation where available; connects components to APIs with Axios/Fetch and GraphQL clients, handling auth flows, pagination, caching, and error surfaces.
• Translates Figma designs into accessible, responsive HTML5/SCSS using BEM methodology, Tailwind utility patterns, and PrimeVue or equivalent component libraries; documents components in Storybook.
• Implements secure front end architecture with Auth0 (OAuth/OIDC), token handling, secure storage, CSP, XSS/CSRF mitigation, input validation/encoding, and safe error handling.
• Optimizes web performance via code splitting, lazy loading, tree shaking, asset and image optimization, caching strategies, and efficient rendering; monitors and improves Core Web Vitals using browser performance tooling.
• Writes maintainable, reusable, component driven code that is secure, fast, idempotent, reliable, and resilient, with clear separation of concerns and consistent logging via Pino (browser).
• Tests thoroughly with Jest or Vitest and Vue Test Utils for unit and integration coverage; performs end to end testing; uses msw (mswjs) to mock backend APIs; validates APIs with Bruno or Insomnia and Browser DevTools (console, network, performance).
• Troubleshoots effectively by tracing logs, inspecting errors, and isolating root causes across UI, API, and network layers; produces actionable defect reports with evidence.
• Operates locally in Docker and collaborates on CI/CD workflows; familiar with AWS deployment patterns and front end observability (logging, metrics, tracing) for production support.
• Maintains API and component documentation, aligns with versioned contracts (GraphQL/OpenAPI), and ensures seamless integration between front end experiences and backend data/services.
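The resilience expectations above (retries with exponential backoff around flaky API calls) can be sketched generically. The attempt count and delay figures here are illustrative defaults, not a recommendation, and the helper name is invented for the example.

```typescript
// Generic retry helper with exponential backoff and full jitter, usable
// around any async call (an Axios request, a GraphQL client operation, etc.).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        // Full jitter: random delay in [0, baseDelayMs * 2^i) to avoid
        // synchronized retry storms from many clients.
        const delay = Math.random() * baseDelayMs * 2 ** i;
        await new Promise((res) => setTimeout(res, delay));
      }
    }
  }
  throw lastErr; // surface the last failure after exhausting attempts
}
```

Usage would look like `withRetry(() => api.get("/orders"))`; only idempotent operations should be retried this way.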
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance