1. Purpose

The purpose of this SOP is to establish a standardized framework for the Software Development Life Cycle (SDLC) of AMRIT, an open-source Electronic Health Record (EHR) platform. It aims to ensure the development, deployment, and maintenance of AMRIT adhere to best practices, regulatory compliance, and healthcare standards such as SNOMED CT, HL7, and LOINC. This SOP facilitates collaboration among developers, QA teams, product managers, and other stakeholders while maintaining data security, system interoperability, and usability across diverse healthcare environments.

2. Scope

This SOP applies to all phases of AMRIT’s SDLC, including requirement analysis, design, development, testing, deployment, and maintenance. It covers software products developed under AMRIT’s umbrella—ranging from web applications to mobile apps—and addresses integration with healthcare systems like ABHA cards and Point-of-Care Testing (POCT) devices. The primary audience includes developers, QA engineers, DevOps teams, product managers, implementation managers, and community contributors. The SOP also emphasizes offline functionality for remote healthcare delivery and multilingual support for diverse user bases.

3. Roles & Responsibilities


Business Systems Analyst (BSA)
  • Deeply understand government health programs (RCH, NCD, TB, Immunization, etc.) and how AMRIT supports them.

  • Collaborate with field teams, program leads, and government stakeholders to:

    • Gather detailed requirements

    • Clarify reporting logic, workflows, and business rules (e.g., eligibility for high-risk pregnancy alerts)

  • Identify gaps between existing product capabilities and program requirements.

  • Create as-is and to-be workflows, clearly outlining system changes required.

  • Map manual processes (registers, Excel trackers) to digital equivalents in AMRIT.

  • Translate requirements into:

    • Functional specs (e.g., “Add new risk factor dropdown in ANC form”)

    • Data mapping sheets (e.g., “How RCH high-risk pregnancy logic maps to DB fields”)

    • Report formulas and indicator definitions

  • Ensure these are developer-ready and testable.

  • Sit in on product refinement discussions and explain the “why” behind every feature.

  • Help devs understand program logic, field constraints, and data flows.

  • Explain tech constraints/possibilities to non-tech stakeholders in simple language.

  • Help create test scenarios based on functional specs.

  • Coordinate with QA and field teams to:

    • Validate that features match business logic

    • Reproduce and document edge case bugs

  • Be the go-to person for "Is this working as expected?" type questions.

  • Create user-facing documents or training support docs if needed.

  • Understand what data is required for NHM/State dashboards and how AMRIT supports it.

  • Work with data teams to:

    • Define indicator logic

    • Validate data exports

    • Ensure completeness and consistency of program reporting

  • Work with architects and external systems (e.g., NDHM, eAushadhi, Nikshay) to map:

    • Data fields

    • Consent flows

    • Reporting standards (e.g., FHIR)

  • Help ensure AMRIT integrates cleanly with public health data ecosystems.

  • Deliverables:

    • Functional Requirement Documents (FRD)

    • Use Case & Workflow Diagrams

    • Mapping Sheets (form fields to database)

    • Program Indicator Definitions (used in reporting)

    • Test Scenarios for UAT

    • Release Notes from a business logic standpoint
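
Indicator definitions (one of the deliverables above) travel best when they are expressed as executable logic rather than prose. A minimal sketch, assuming hypothetical field names (`anc_registered`, `is_high_risk`) rather than AMRIT's actual schema:

```python
# Hedged sketch: the indicator "% of ANC-registered beneficiaries flagged
# high-risk" as executable logic. Field names are illustrative placeholders,
# not AMRIT's actual database columns.

def high_risk_pregnancy_rate(records):
    """Return the percentage of ANC-registered beneficiaries flagged high-risk."""
    registered = [r for r in records if r.get("anc_registered")]
    if not registered:
        return 0.0
    flagged = sum(1 for r in registered if r.get("is_high_risk"))
    return round(100.0 * flagged / len(registered), 1)

sample = [
    {"anc_registered": True, "is_high_risk": True},
    {"anc_registered": True, "is_high_risk": False},
    {"anc_registered": False, "is_high_risk": False},
]
print(high_risk_pregnancy_rate(sample))  # 50.0
```

Writing the formula this explicitly removes ambiguity about the denominator, which is where most indicator disputes arise.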

Product Owner
  • Champion the mission of AMRIT — better digital healthcare delivery through scalable, user-friendly tech.

  • Translate that mission into a clear product vision for each module (Facility App, Sakhi App, Admin portal, etc.).

  • Ensure alignment with goals from funders, NHM/state partners, and internal leadership.

  • Maintain and prioritize the product backlog.

  • Write clear and concise user stories with acceptance criteria (e.g., "As an ASHA, I want to see my assigned beneficiaries for today").

  • Balance field needs with tech feasibility and roadmap priorities.

  • Deeply understand ASHAs, ANMs, facility staff, and their workflows.

  • Continuously gather feedback from field pilots, training sessions, and WhatsApp groups.

  • Validate mockups, workflows, and prototypes with real users before pushing to dev.

  • Be the bridge between:

    • Design (for UI/UX that works in low-literacy, low-tech settings)

    • Engineering (for breaking features into dev-friendly stories)

    • Field Ops/Project Managers (to ensure rollouts are smooth and aligned with realities on the ground)

  • Join sprint planning and review calls to provide domain context and prioritization decisions.

  • Coordinate cross-team dependencies and testing to ensure releases are smooth and field-ready.

  • Track feature adoption and impact — not just usage, but health outcomes or operational efficiencies (e.g., "Did Sakhi reduce paper records?").

  • Define success for each epic or feature. Example:

    • "Increase Sakhi app usage in State X to 80% weekly active users"

    • "Reduce paper forms usage at facilities by 50% in 3 months"

  • Ensure all features comply with:

    • Data privacy laws (DPDP, HIPAA-like principles)

    • Public digital infrastructure standards (NDHM guidelines, FHIR, etc.)

  • Be aware of local workflows, language requirements, and device constraints.

  • Regularly reflect on what’s working or not — from both tech and adoption perspective.

  • Iterate on workflows and tools to increase ease-of-use and reduce friction.

  • Help make AMRIT a DPG-quality product — usable, adaptable, and open.

Scrum Master
  • Run efficient, focused ceremonies like:

    • Daily Standups – Keep updates crisp, surface blockers early.

    • Sprint Planning – Help the team break down stories and estimate work.

    • Sprint Reviews – Showcase outcomes to internal/external stakeholders (like Project Managers or field teams).

    • Retrospectives – Encourage honest reflection and process improvement.

  • Actively identify and unblock tech or coordination bottlenecks.

    • E.g., a backend API not ready for the frontend, or a field dependency delaying testing; resolve these by nudging or aligning stakeholders.

  • Escalate issues when needed to the Project Manager, Tech Architect, or Product Owner.

  • Help devs, QA, and designers understand and follow agile best practices.

  • Encourage:

    • Sustainable pace

    • Respectful communication

    • Ownership of delivery

  • Guide the team in self-organization and accountability.

  • Monitor:

    • Sprint velocity

    • Story spillover

    • Bug-to-feature ratio

    • Cycle time

  • Share trends with the team and help tune the delivery process accordingly.

  • Help ensure backlog is groomed and ready.

  • Assist the PO in:

    • Prioritization

    • Story writing support (esp. tech refinement)

    • Making sure stories are INVEST-compliant (Independent, Negotiable, Valuable, Estimable, Small, Testable)

  • Facilitate coordination between:

    • Frontend & backend teams

    • QA & developers

    • Field ops & tech

  • Support alignment in multi-module sprints (e.g., Facility + Sakhi + Admin Portal releases).

  • Maintain up-to-date boards (JIRA, Trello, etc.).

  • Ensure everyone knows what’s being built and what’s done.

  • Encourage clear status updates, definition of done, and ownership of tasks.

  • Create a safe space for team members to:

    • Raise concerns

    • Give feedback

    • Admit mistakes

  • Track action items from retros and ensure follow-through.

  • Suggest improvements in:

    • Sprint cadence

    • QA process

    • Field feedback loops

    • Deployment planning

  • Champion adoption of tools like JIRA dashboards, CI/CD pipeline visibility, etc.

  • Drive initiatives like “Tech Debt Thursdays” or bug bashes.

  • Facilitate the onboarding of new engineers or QAs into the sprint rituals.

Developers
  • Write clean, modular code for frontend or backend based on tickets shared by the tech lead or product team.

  • Stick closely to functional specs and design wireframes.

  • Start with simple forms, UI components, services, or minor database updates.

  • Manually verify your features in the dev or staging environment.

  • Check:

    • Form validations

    • API responses

    • UI consistency

  • Run unit tests where applicable.

  • Implement modules and features as per functional requirements and UI/UX designs.

  • Handle both frontend and backend tasks depending on your stack expertise.

  • Follow modular, reusable, and maintainable coding practices.

  • Manually validate your features before raising a PR.

  • Write and run unit tests; aim for decent test coverage.

  • Understand and fix issues found in integration or regression testing.

  • Review others' code (as per your comfort) and seek feedback on your own.

  • Incorporate suggestions, learn better practices, and improve over time.

  • Know how AMRIT is used by ASHAs, ANMs, Facility Staff, and Health Departments.

  • Understand workflows like RCH tracking, ANC visits, or NCD follow-ups.

  • Ask domain-related questions to BAs or Product when in doubt.

  • Join sprint ceremonies (refinement, planning, demos, retros).

  • Work closely with QA to clarify functionality and validate bug fixes.

  • Pair with L2 support to reproduce or investigate field issues.

  • Follow Git branching strategy, linting, formatting rules, and commit conventions.

  • Use tools like Postman, Swagger, ELK dashboard, Firebase Crashlytics (for mobile).

  • Help with CI/CD improvements or developer tooling automation where relevant.

  • Ask for help when blocked, and share knowledge when you solve problems.

  • Take ownership of tickets, but stay curious about how your work connects to the larger system.

  • Document reusable patterns or decisions you implement.
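
The commit conventions mentioned above are easiest to follow when they are machine-checkable. A minimal sketch; the pattern shown (Conventional-Commits-style type, scope, and an `AMM-` ticket ID) is an assumption for illustration, not AMRIT's documented rule:

```python
import re

# Assumed convention for illustration only; AMRIT's actual commit rules may
# differ, so check the repository's contributing guide. The "AMM-" ticket
# prefix is a hypothetical placeholder.
COMMIT_RE = re.compile(
    r"^(feat|fix|chore|docs|refactor|test)\([a-z0-9-]+\): .+ \(AMM-\d+\)$"
)

def is_valid_commit(msg: str) -> bool:
    """Return True if a commit message matches the assumed convention."""
    return bool(COMMIT_RE.match(msg))

print(is_valid_commit("feat(anc): add risk factor dropdown (AMM-123)"))  # True
print(is_valid_commit("fixed stuff"))                                    # False
```

A check like this typically runs as a Git `commit-msg` hook or a CI step, so malformed messages are rejected before review.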

Senior Developers
  • Break down epics or large features into manageable technical components.

  • Choose appropriate design patterns and data structures.

  • Ensure modular, extensible code design across microservices and front-end modules.

  • Review junior/mid-level code with a focus on teaching.

  • Conduct knowledge-sharing sessions (e.g., clean code, domain logic, system design).

  • Help onboard new devs into the codebase and workflows.

  • Define and enforce code quality standards, testing practices, and CI/CD checks.

  • Ensure test coverage and automation are part of sprint planning.

  • Introduce tools or practices to reduce bugs and improve developer experience.

  • Own technical components like authentication, offline sync, data sync logic, caching, etc.

  • Coordinate across teams/modules to ensure consistency.

  • Manage technical debt and prioritize refactoring efforts.

  • Lead root cause analysis for bugs affecting live users.

  • Coordinate with QA, L2 support, and DevOps for logs, stack traces, and monitoring data.

  • Implement hotfixes while balancing long-term solutions.

  • Influence scope and implementation approach during backlog refinement.

  • Validate technical feasibility and estimate effort.

  • Help bridge communication gaps between product and engineering.

  • Ensure logging, monitoring, and alerting are present for new features.

  • Collaborate with DevOps to optimize build pipelines and deployments.

  • Drive performance profiling and optimization efforts.

  • Represent engineering in cross-functional conversations.

  • Champion user needs, especially for low-end devices, poor connectivity, and field usability.

  • Align tech decisions with AMRIT’s mission and long-term sustainability.

Technical Architect
  • Design scalable, modular, and secure system architectures across:

    • Backend APIs (Spring Boot, REST/GraphQL)

    • Frontend (Angular-based apps for facility/ASHAs)

    • Data pipelines (for reporting, observability, NHM analytics)

  • Define and document the technical architecture diagrams, data flow, deployment models (cloud/on-prem), and integration points.

  • Ensure these integrations are secure, standards-compliant, and fault-tolerant.

  • Set up and enforce coding standards, design patterns, and documentation practices.

  • Review PRs for architecture decisions, performance, and maintainability.

  • Mentor developers and help resolve technical blockers.

  • Design the deployment strategy (AWS, EC2, RDS, etc.) and CI/CD workflows.

  • Define monitoring, logging, and alerting standards (e.g., ELK stack, Prometheus/Grafana).

  • Work with the dev and ops teams to ensure high availability and resilience of services.

  • Ensure the architecture meets data protection and security guidelines, especially given that AMRIT handles health data.

  • Lead security audits and compliance checks (like HIPAA principles, even if not legally binding).

  • Implement API gateways, RBAC, and token/auth mechanisms.

  • Optimize database queries, caching strategies, and API response times.

  • Work closely with:

    • Product Managers – to translate feature specs into technical plans

    • Project Managers – to align sprints with technical priorities

    • QA/Testing teams – to define test automation strategy, mock services

  • Help with technical scoping and effort estimation.

  • Ensure the platform remains extensible, vendor-agnostic, and developer-friendly for the future.

  • Promote use of Open Source and Digital Public Goods standards (like OpenHIE, FHIR).

  • Document the system so that new developers or partner orgs can easily onboard.

  • Represent AMRIT in external forums (DPG showcases, digital health summits, etc.).

  • Contribute to open-source projects, standards discussions, and FHIR profiles.
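
The RBAC responsibility above can be pictured as a role-to-permission map. The roles and actions below are illustrative assumptions, not AMRIT's actual authorization model:

```python
# Hedged sketch of role-based access control: each role maps to the set of
# actions it may perform. Role names and actions are hypothetical examples.
ROLE_PERMISSIONS = {
    "ASHA": {"view_own_beneficiaries", "record_visit"},
    "ANM": {"view_own_beneficiaries", "record_visit", "approve_referral"},
    "ADMIN": {"manage_users", "configure_programs"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("ASHA", "record_visit"))   # True
print(is_allowed("ASHA", "manage_users"))   # False
```

In practice this check sits behind the API gateway or in a Spring Security filter, keyed off the role claims in the auth token.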

Project Manager
  • Work with government partners (NHM, State Health Societies, etc.), funders, and internal teams to ensure alignment on project goals.

  • Act as a bridge between tech, field operations, and leadership—ensuring each understands the other's priorities, timelines, and constraints.

  • Regularly capture user feedback from ASHAs, facility staff, and other ground-level users.

  • Create detailed implementation roadmaps for rollouts in new states or facilities.

  • Manage timelines, dependencies, and risks across:

    • Tech development (from design to release)

    • Field operations (training, onboarding, adoption)

    • Integrations (NDHM, other health systems)

  • Use tools like JIRA, Asana, or Excel trackers (depending on maturity) to track progress.

  • Coordinate with field teams for pilot launches, feedback loops, and issue resolution.

  • Proactively manage change management—helping people adapt to new tech workflows.

  • Identify bottlenecks early—whether it's tech, policy, or adoption issues.

  • Maintain an escalation matrix to resolve blockers quickly.

  • Track issues from the field and tech and ensure they’re resolved in a timely manner.


L2 Support 
  • Analyze issues escalated from L1 support (call center, state ops, field teams) that need deeper investigation.

  • Reproduce bugs using test environments or staging apps.

  • Determine whether the issue is:

    • User error (e.g., wrong workflow)

    • Data-related (e.g., sync issues, missing records)

    • Tech-related (e.g., API failure, app crash)

  • Query logs via ELK stack or log aggregators.

  • Run SQL queries on reporting or production read-only DBs to check data inconsistencies.

  • Use Postman or Swagger to test APIs and identify backend issues.

  • Create clear, reproducible bug reports with:

    • Steps to replicate

    • Screenshots or logs

    • Metadata (device ID, state, app version)

    • Any temporary workaround

  • Categorize them correctly (severity, module, root-cause area).

  • Escalate bugs to the right developers or QA leads.

  • Provide devs with enough context so they don’t have to go digging.

  • Follow up and track resolution until deployed in a release.

  • Join bug triage meetings or sprint reviews when needed.

  • Once a fix is done or workaround is found, update L1 with:

    • Explanation in simple language

    • Clear instructions to share with users

    • ETA for permanent resolution (if applicable)

  • Help maintain a knowledge base or issue tracker (JIRA, Confluence, etc.).

  • Identify recurring bugs and raise them as candidates for:

    • Better UX

    • Dev fixes

    • Training interventions

  • Help Product/QA teams prioritize based on volume or impact.

  • Assist QA during release cycles by:

    • Validating critical bug fixes in staging

    • Helping test edge cases based on past L2 issues

  • Serve as a second layer of QA when releases are rushed but high-risk.

  • Handle minor configuration issues (e.g., facility ID mismatch, program enablement not reflecting).

  • Work with admin tools or dashboards to:

    • Reset user access

    • Troubleshoot program config errors

  • Never share full patient data or credentials while debugging.

  • Follow data protection protocols, especially in logs or SQL access.

  • Escalate to tech leads if an issue affects a large user base or critical workflows (e.g., ANC visit not saving).

  • Maintain an internal wiki or playbook of common issue categories and resolutions.

  • Share monthly reports on L2 ticket volumes, types, and resolution SLAs.

  • Contribute to improving observability (e.g., suggest better logging, error messages).
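
For the SQL-based triage described above, a sketch of a read-only diagnostic query for a sync-related complaint. Table and column names are hypothetical placeholders, not AMRIT's schema:

```sql
-- Illustrative read-only diagnostic query: find visits stuck in a pending
-- sync state for more than two days. Table and column names (t_benvisit,
-- sync_status, ...) are placeholders; always run against a read-only replica
-- and never export identifiable patient data.
SELECT beneficiary_id, visit_date, sync_status, last_mod_date
FROM t_benvisit
WHERE sync_status = 'PENDING'
  AND last_mod_date < NOW() - INTERVAL 2 DAY
ORDER BY last_mod_date
LIMIT 50;
```

Saving queries like this in the internal playbook shortens triage for the next recurrence of the same issue.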

QA Testers
  • Read user stories, program guidelines, and business rules to understand:

    • What each feature should do (e.g., “ANC visit form should allow risk tagging”)

    • Program logic (e.g., who counts as a high-risk pregnancy)

    • Expected workflows in field and facility contexts

  • Collaborate with Product Owner or BSA for clarity on edge cases.

  • Create detailed manual test cases that cover:

    • Positive flows (happy path)

    • Negative cases (validation, limits)

    • Boundary conditions (e.g., age, dates)

    • Role-based access (e.g., what an ASHA vs. ANM can see)

  • Use Excel, TestLink, Zephyr, or other tools as preferred.

  • Perform manual testing in staging and pre-prod environments.

  • Validate:

    • Forms and data entry

    • Calculations (e.g., age, EDD)

    • Offline sync behavior (especially in Sakhi app)

    • Multi-lingual support (e.g., Hindi, Bengali UI texts)

  • Cross-browser testing for web apps and cross-device testing for mobile (low-end devices too).

  • Ensure Android apps (Sakhi, Facility app) work correctly on:

    • Low RAM phones

    • Different screen sizes

    • Patchy internet

    • OS versions common in the field (Android 8+)

  • Validate sync behavior, local storage, error states, etc.

  • Reproduce and document bugs with:

    • Steps to replicate

    • Screenshots or videos

    • Logs (if available)

    • Severity/Priority tagging

  • Use JIRA (or your bug tracking system) consistently.

  • Collaborate with L2 support to validate bugs from the field.

  • Maintain a regression suite for every module (e.g., RCH, NCD, TB).

  • Run sanity tests after each build before releases.

  • Work with the release lead or Scrum Master to ensure release quality.

  • Join refinement and sprint planning meetings to:

    • Understand new features in advance

    • Highlight test complexity

    • Raise testing risks early (e.g., too many changes in one form)

  • Ensure data shown in reports (e.g., line lists, indicators) matches backend calculations.

  • Help validate:

    • Program indicators (e.g., % of high-risk pregnancies identified)

    • Downloadable Excel reports

    • Dashboards used by state teams

  • Clarify bugs with devs and confirm fixes post-deployment.

  • Work with BSAs to understand field use-cases better.

  • Join hands with L2 to verify field-raised bugs and reproduce issues.

  • Maintain basic working knowledge of:

    • Postman (to hit APIs directly)

    • Browser dev tools (to check network calls)

    • Firebase logs (for mobile crashes)

    • Android emulators and device testing platforms (like BrowserStack)

  • Maintain a QA checklist per release (shared with team).

  • Build simple automation for regression tests using tools like Selenium, Appium, or Playwright.

  • Contribute to product improvement with feedback from a “tester’s eye”.

QA Manager
  • Manage a team of manual testers, automation engineers, and interns (if any).

  • Assign tasks, ensure coverage, and provide regular feedback and mentoring.

  • Help junior QAs grow into domain experts who think like end-users (ASHAs, ANMs, etc.).

  • Ensure every ticket/story has sufficient test case coverage (functional, edge, regression).

  • Validate that new features are tested across devices, roles, and workflows (e.g., facility app, ASHA app).

  • Own cross-browser, mobile, and API testing matrices.

  • Work with automation engineers to identify test cases for automation.

  • Prioritize automation for critical workflows (e.g., RCH form submission, follow-ups).

  • Integrate test automation into CI/CD pipelines (e.g., GitHub Actions, Jenkins).

  • Track key QA health indicators:

    • Bug rejection rate

    • Escaped defects in production

    • Test coverage trends

    • Test execution times

  • Share regular quality reports with product and engineering leadership.

  • Give final QA sign-off before staging/production deployments.

  • Ensure that no critical or high-severity bugs are open before greenlighting.

  • Coordinate smoke tests and rollback plans when needed.

Database Administrator (DBA)
  • Collaborate with architects and developers to review or improve database schema design.

  • Normalize data where needed while keeping performance in mind.

  • Ensure schema changes follow version control (e.g., Liquibase/Flyway).

  • Install, configure, and maintain database servers (e.g., PostgreSQL, MySQL).

  • Monitor database health, uptime, replication status, and query performance.

  • Set up replication, high availability, and failover strategies.

  • Implement user access controls and role-based permissions.

  • Apply database encryption at rest and in transit.

  • Monitor and audit database access logs, especially for PII/PHI compliance.

  • Analyze slow queries and tune indexes, joins, and execution plans.

  • Recommend changes in code or schema to reduce load on the DB.

  • Use monitoring tools (e.g., pgAdmin, AWS CloudWatch, Datadog) to track query latency and resource usage.

  • Configure automated and manual backups with clear retention policies.

  • Regularly test restore procedures to validate data integrity and RTO/RPO goals.

  • Maintain runbooks for recovery from database failure or corruption.

  • Set up and maintain dev/staging DBs with sanitized or obfuscated data for testing.

  • Help QA and Dev teams debug data-related issues.

  • Manage database versioning across environments.

  • Track data growth trends and recommend storage capacity upgrades.

  • Archive or purge stale data as per data retention policies.

  • Plan for horizontal or vertical scaling if usage increases.

  • Help developers simulate load-heavy queries or bulk data scenarios.

  • Monitor DB behavior during load tests and advise improvements.

  • Maintain clear SOPs for backups, migrations, role management, and restores.

  • Automate routine tasks using scripts or tools (e.g., cron jobs, AWS Lambda).

  • Document schema structure for devs and BAs using tools like dbdiagram.io or ERD generators.

  • Work with:

    • Developers to guide efficient queries.

    • QA to seed test data.

    • Product to assess impact of large schema changes.

    • DevOps to fine-tune DB monitoring, backups, and infrastructure-as-code (IaC) setups.
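
Schema changes under version control (as with Flyway, mentioned above) look roughly like this. The file name follows Flyway's versioned-migration convention; the table and column names are illustrative assumptions, not AMRIT's schema:

```sql
-- V7__add_anc_risk_factor.sql
-- Flyway versioned migration (naming pattern V<version>__<description>.sql).
-- Table and column names below are hypothetical examples.
ALTER TABLE t_ancvisit
    ADD COLUMN risk_factor VARCHAR(64) NULL;

CREATE INDEX idx_ancvisit_risk_factor
    ON t_ancvisit (risk_factor);
```

Because each migration is an immutable, numbered file in the repository, every environment can be brought to the same schema version automatically at deploy time.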

IT and DevOps Engineer
  • Provision and manage infrastructure (on-premise or cloud, e.g., AWS, GCP, Azure) for both staging and production environments.

  • Maintain compute, storage, and networking resources to ensure scalability, high availability, and fault tolerance.

  • Automate infrastructure provisioning using Infrastructure-as-Code (IaC) tools like Terraform, CloudFormation, or Ansible.

  • Ensure proper management of cloud resources (e.g., EC2, RDS, S3) with optimal cost efficiency and security.

  • Set up and maintain CI/CD pipelines for smooth, automated code deployment.

  • Integrate tools like Jenkins, GitHub Actions, CircleCI, or GitLab CI for testing, building, and deploying code.

  • Ensure pipelines are secure, efficient, and handle all necessary steps, from code commit to production deployment.

  • Monitor deployment success, rollback processes, and integrate feature flags for seamless releases.

  • Work closely with security teams to implement best practices for system hardening, including firewall configurations, encryption, and identity access management (IAM).

  • Audit infrastructure regularly for vulnerabilities and ensure compliance with healthcare standards such as HIPAA or GDPR.

  • Manage secret management solutions (e.g., HashiCorp Vault, AWS Secrets Manager) to keep sensitive information secure.

  • Implement and maintain secure access to systems (e.g., SSH key management, VPNs).

  • Set up monitoring and alerting using tools like Prometheus, Grafana, Datadog, or AWS CloudWatch to ensure high availability and proactive issue resolution.

  • Implement logging solutions, integrating ELK stack (Elasticsearch, Logstash, Kibana) or Splunk, to collect, analyze, and visualize logs for troubleshooting and performance monitoring.

  • Set thresholds for critical metrics such as CPU usage, memory consumption, response times, and error rates, and configure alerts for abnormal behaviors.

  • Automate manual tasks and repetitive processes using scripting languages like Bash, Python, or PowerShell.

  • Develop automated backup scripts, data migration routines, and log aggregation tools.

  • Ensure that manual interventions are minimized and failures are automatically addressed.

  • Implement disaster recovery plans (DRP) and backup solutions with clear RPO (Recovery Point Objective) and RTO (Recovery Time Objective).

  • Set up multi-region replication and load balancing to ensure business continuity in case of outages.

  • Regularly test backup and recovery procedures to ensure they meet SLAs.

  • Regularly patch and update servers, operating systems, and third-party software to ensure security and stability.

  • Optimize performance of databases, applications, and infrastructure by analyzing load patterns and implementing relevant changes.

  • Keep track of system resource utilization, and recommend scaling decisions (vertical and horizontal scaling).

  • Collaborate with developers to ensure that infrastructure and environment configurations match the codebase needs.

  • Help developers troubleshoot environment-related issues, such as network configurations, database connections, or deployment failures.

  • Work with the development team to set up development and staging environments that closely resemble production.

  • Maintain documentation for all infrastructure, automation scripts, and deployment processes.

  • Provide clear instructions for disaster recovery, backup restoration, and troubleshooting common system issues.

  • Share knowledge with other team members on emerging DevOps practices, tools, and trends.

  • Coordinate changes to production environments, ensuring smooth rollouts of new features and bug fixes.

  • Maintain rollback strategies and ensure proper testing for all changes before live deployments.
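
A minimal sketch of such a CI pipeline, here as a GitHub Actions workflow assuming a Maven-built Spring Boot service. Branch names, tool versions, and job layout are illustrative, not AMRIT's actual configuration:

```yaml
# Hedged sketch of a CI workflow: build and unit-test on every push to the
# develop branch and on pull requests. Values here are illustrative.
name: ci
on:
  push:
    branches: [develop]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - name: Build and run unit tests
        run: mvn -B verify
```

Deployment stages (staging, production) would typically be separate jobs gated on environment approvals, so a failed test run can never reach production.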

4. Development Workflow

a. Agile Framework

🗓️ 2-Week Sprints with Backlog Grooming

  • Sprint Length: Each sprint lasts for 2 weeks. This timeframe ensures that the team has enough time to complete tasks without being overly stretched.

  • Backlog Grooming:

    • Before each sprint, conduct a backlog grooming session to review and prioritize the tasks for the upcoming sprint. This ensures that the backlog is up-to-date and that high-priority issues are addressed promptly.

    • Product owners and key stakeholders should participate to ensure that requirements are clearly understood and documented.

    • Each task should have clear acceptance criteria, dependencies, and effort estimates (story points).

🧑‍💻 Daily Standups

  • Held every day at a consistent time to promote alignment and communication.

  • Each team member answers:

    • What did I work on yesterday?

    • What will I work on today?

    • Any blockers or issues?

  • Keep the meeting time-boxed (typically 15 minutes).

  • Scrum Master ensures that blockers are addressed and tracked.

📅 Sprint Planning

  • At the start of each sprint, a Sprint Planning meeting is held. This meeting is where:

    • Product Owner presents prioritized tasks from the backlog.

    • Development Team discusses the scope, technical approach, and effort estimation for each item.

    • Acceptance criteria and definition of done are clarified.

  • Goal Setting: A clear sprint goal should be set to ensure the team is aligned on priorities and expected outcomes.

  • Task Breakdown: Tasks should be broken down into manageable units, making them achievable within the sprint's timeline.

  • Story Point Assignment:

    • Role of Product Owner: The product owner presents and clarifies the requirements for the user stories, ensuring that each story has well-defined acceptance criteria.

    • Team Estimation: During Sprint Planning, the team assigns story points to each user story based on complexity, effort, and uncertainty.

    • Historical Data: Use past sprints’ data to ensure consistency in estimating story points.

    • Story Point Criteria:

      • Small stories (1-3 points): Can be done quickly and with low uncertainty.

      • Medium stories (5-8 points): Requires some effort, but no significant complexity or dependencies.

      • Large stories (13+ points): Involves significant effort, dependencies, or requires research and investigation.

    • Refer to the story points documentation for more details.

🏃‍♂️ Ticket Movement

  • The flow of tickets follows a clear path from creation to completion:

    1. OPEN
       • Description: The ticket is created and logged into the system. This stage indicates that the task is newly initiated and requires analysis.
       • Responsible Roles: Project Manager, Business Systems Analyst (BSA).

    2. ANALYSIS
       • Description: The requirements and scope of the ticket are analyzed. This includes gathering business needs, technical specifications, and feasibility studies.
       • Responsible Roles: BSA, Product Owner.
       • Outcome: Once the analysis is complete, the ticket moves to "Ready for Development."

    3. READY FOR DEVELOPMENT
       • Description: The ticket is reviewed and approved for development. All prerequisites (e.g., design documents, acceptance criteria) are finalized.
       • Responsible Roles: Scrum Master, Technical Architect.
       • Outcome: Developers can start working on the task.

    4. IN DEVELOPMENT
       • Description: The development team works on implementing the feature or fixing the issue described in the ticket.
       • Responsible Roles: Developers, Senior Developers.
       • Outcome: Once development is complete, the ticket moves to "Pending QA."

    5. PENDING QA
       • Description: The development work is completed, and the ticket awaits QA testing.
       • Responsible Roles: QA Testers, QA Manager.
       • Outcome: The ticket moves to "In QA" for testing.

    6. IN QA
       • Description: QA testers validate the functionality against requirements and acceptance criteria. They perform manual and automated tests to ensure quality.
       • Responsible Roles: QA Testers, QA Manager.
       • Outcome: If all tests pass successfully, the ticket moves to "QA Approved."

    7. QA APPROVED
       • Description: The QA team certifies that the feature or fix meets quality standards and is ready for deployment in a development environment.
       • Responsible Roles: QA Manager.
       • Outcome: The ticket moves to "DEV ENV Deployed."

    8. DEV ENV DEPLOYED
       • Description: The feature or fix is deployed in the development environment for further validation by developers or stakeholders.
       • Responsible Roles: Senior Developers, Technical Architect.
       • Outcome: Once approved in the development environment, it moves to "UAT ENV Deployed."

    9. UAT ENV DEPLOYED
       • Description: The feature or fix is deployed in the User Acceptance Testing (UAT) environment for end-user validation.
       • Responsible Roles: Product Owner, L2 Support Team.
       • Outcome: If end-users approve it during UAT testing, it moves to "UAT Approved."

    10. UAT APPROVED
       • Description: End-users or stakeholders approve the functionality after testing in UAT. It is now ready for production deployment.
       • Responsible Roles: Product Owner.
       • Outcome: The ticket moves to "Production Deployed."

    11. PRODUCTION DEPLOYED
       • Description: The feature or fix is deployed in the live production environment for actual use by end-users.
       • Responsible Roles: Senior Developers, L2 Support Team.
       • Outcome: Once verified in production, the ticket is marked as "Closed."

    12. CLOSED
       • Description: The ticket is marked as resolved and closed after successful deployment and verification in production.
       • Responsible Roles: Project Manager, Scrum Master.
  • Tickets must be updated regularly based on progress, and it’s essential to move tickets between states to reflect the current status.

  • Tickets should be closed promptly after successful deployment or if issues arise, with comments explaining the reason.

🔄 Sprint Retrospectives

  • After each sprint, a retrospective meeting is held to reflect on the sprint and identify areas of improvement. This could cover:

    • What went well?

    • What can be improved?

    • Actionable items to improve processes for the next sprint.

  • Scrum Master facilitates and tracks action items from the retrospective.


b. Git Branching 

🌳 Branching Strategy

  • Main Branch (main or master): Represents the stable production-ready version of the code. Code in this branch should always be deployable.

  • Development Branch (develop): The primary branch where integration occurs. This is where features from feature branches are merged after they pass development and testing stages.

  • Feature Branches: Each new feature, enhancement, or bug fix is developed in a separate feature branch. These branches are named with relevant identifiers, e.g., feature/ABC-123-add-login-screen or bugfix/XYZ-456-fix-api-issue.

    • Start Point: Always create feature branches from the develop branch.

    • End Point: After completing development and local testing, feature branches are merged back into the develop branch.

🔀 Merging Process

  • When a feature is complete:

    1. Pull Request (PR): A developer submits a PR to merge their feature branch into develop.

    2. Code Review: At least one peer reviews the code for quality, functionality, and alignment with coding standards.

    3. CI/CD Check: Automated tests (unit, integration, linting, etc.) must pass as part of the PR validation process.

    4. Merge: Once the PR passes review, it is merged into develop. Avoid merging directly into main unless it’s for a production release.

⚙️ Hotfixes

  • If an urgent issue arises in production, hotfixes are addressed by creating a hotfix branch from main (e.g., hotfix/XYZ-789-fix-critical-bug).

  • Once the hotfix is deployed, it should be merged into both main (for production) and develop (to ensure the fix is included in future releases).

c. Pull Request and Code Review

i. Pull Request Process

Step 1: Creating the Pull Request

  • PR Creation: Developers must create a pull request (PR) against the appropriate branch (typically develop for active development, main or master for production-ready code).

  • PR Description: Developers should provide a meaningful description in the PR, outlining what is being done (e.g., bug fixes, new features, refactoring) and mentioning any related issues from JIRA or other task management tools.

Step 2: Assign Reviewers

  • Review Assignment: The PR should be assigned to relevant team members for review. The choice of reviewers depends on the area of work (e.g., front-end or back-end) and the complexity of the changes.

  • Notification: Reviewers should be notified promptly to ensure timely feedback and avoid unnecessary delays in the process.

Step 3: Automated Checks

  • CI/CD Pipeline Integration: The PR should trigger the Continuous Integration/Continuous Deployment (CI/CD) pipeline, which includes:

    • For Angular, ensure that tests (unit and e2e) are executed, and linting checks are done (using tools like ESLint and Prettier).

    • For Java, run unit tests (using JUnit, Mockito) and static code analysis tools (SonarQube, Checkstyle).

    • For Kotlin, ensure that unit tests (using JUnit or KotlinTest) and static code analysis tools (Detekt) run correctly.

  • Build Verification: Ensure the code successfully compiles or builds before merging.

ii. Code Review Guidelines

Each repository type (Angular, Java, Kotlin) has slightly different focuses based on the framework, language, and best practices. Below are specific code review areas for each.

For Angular Repositories:

  1. Code Structure & Readability:

    • Component Structure: Ensure that components are modular, small, and reusable. They should follow the Single Responsibility Principle (SRP).

    • Naming Conventions: Ensure component, service, and variable names are descriptive and follow Angular naming conventions.

    • Separation of Concerns: Ensure business logic is separated from UI logic. Complex logic should reside in services, not components.

  2. Template & Styling:

    • HTML Template: Ensure that the template uses Angular directives like *ngIf, *ngFor correctly and follows best practices.

    • CSS/SCSS: Verify that styles are scoped properly (e.g., using ViewEncapsulation in Angular), and that stylesheets follow the Angular Style Guide.

    • UI Consistency: Verify that UI elements follow the design system and are consistent with the application’s theme (e.g., using Angular Material if applicable).

  3. State Management:

    • Reactive Programming: Ensure proper use of RxJS operators for handling asynchronous operations like HTTP requests.

    • State Management: Check for appropriate use of state management tools (e.g., NgRx or BehaviorSubject) when managing application state.

  4. Error Handling:

    • Ensure errors are handled properly, including in HTTP requests (i.e., handling HTTP errors gracefully and displaying user-friendly messages).

    • Provide a custom global ErrorHandler (Angular’s equivalent of an error boundary) to catch unhandled errors in the UI.

  5. Testing:

    • Unit Testing: Ensure that unit tests exist for components, services, and other business logic, and they are written using Jasmine and run with Karma.

    • End-to-End Testing: If applicable, ensure e2e tests are in place using Protractor or Cypress.

    • Verify that test coverage is sufficient and tests run in CI.

  6. Performance:

    • Check that the application is optimized for performance, such as implementing lazy loading for large modules and reducing unnecessary API calls.

For Java (Spring Boot) Repositories:

  1. Code Structure & Readability:

    • Layered Architecture: Ensure the code adheres to a layered architecture (e.g., controllers, services, repositories). Controllers should handle HTTP requests only, while services contain business logic.

    • Modularity: Ensure that classes, methods, and services are modular and follow SOLID principles to make the code easier to maintain.

    • Naming Conventions: Follow Java naming conventions for classes, methods, and variables.

  2. Dependency Injection & Spring Annotations:

    • Spring Annotations: Ensure that Spring’s dependency injection is used correctly with annotations like @Autowired, @Service, @Repository, @Controller, and @RestController.

    • Service Layer: Ensure that business logic is not present in controllers but in dedicated service classes.

  3. Security:

    • Authentication and Authorization: Review the use of JWT tokens, OAuth, or Spring Security for user authentication and authorization.

    • Input Validation: Ensure input validation is present, especially in APIs. Use annotations like @Valid or @NotNull where necessary.

    • Avoid Hardcoding Sensitive Data: Ensure sensitive data (e.g., passwords, API keys) is never hardcoded or exposed in source code.

  4. Error Handling:

    • Global Exception Handling: Ensure that there is a centralized approach for handling exceptions in the application (e.g., using @ControllerAdvice).

    • Ensure proper HTTP status codes are returned for different types of errors (e.g., 400 for bad requests, 404 for not found, 500 for internal server errors).

  5. Database & ORM (JPA/Hibernate):

    • Efficient Queries: Ensure that database queries are efficient, optimized, and avoid N+1 query problems.

    • Transactions: Ensure transaction management is handled appropriately for operations that need atomicity (e.g., using @Transactional).

    • Database Migrations: Ensure proper database migrations are applied when there are schema changes (e.g., using Flyway or Liquibase).

  6. Testing:

    • Unit Testing: Verify that unit tests are present for business logic, using JUnit and Mockito.

    • Integration Testing: Ensure that there are adequate integration tests to verify the interaction between components, particularly with Spring Boot Test.

    • Test Coverage: Ensure that test coverage is sufficient, and use tools like JaCoCo to monitor coverage.

  7. Performance:

    • Ensure efficient database queries, caching strategies, and review the use of async processing for tasks that don't need to block the main thread.

For Kotlin Repositories:

  1. Code Structure & Readability:

    • Kotlin Best Practices: Ensure the code follows Kotlin best practices, such as using extension functions, null safety features, and concise syntax.

    • Naming Conventions: Ensure that classes use PascalCase while functions, properties, and variables use camelCase, per Kotlin conventions.

  2. Use of Kotlin Features:

    • Null Safety: Ensure the code makes use of Kotlin’s null safety features (e.g., nullable types and the ?. safe-call operator, avoiding unchecked !! assertions).

    • Data Classes: Ensure data classes are used where appropriate for modeling immutable objects with automatically generated equals, hashCode, and toString methods.

  3. Concurrency & Coroutines:

    • Coroutines: Ensure Kotlin coroutines are used for asynchronous programming instead of traditional callback mechanisms. Review for proper use of launch, async, and structured concurrency.

  4. Error Handling:

    • Sealed Classes: Ensure sealed classes are used for handling specific types of errors or states, making the error handling more type-safe.

    • Custom Exceptions: If there are custom exceptions, ensure they are appropriately defined and used.

  5. Testing:

    • Unit Testing: Ensure unit tests exist for business logic and other components, using JUnit with KotlinTest or MockK for mocking.

    • Integration Testing: Ensure proper integration tests using Spring Boot Test or other relevant tools.

    • Test Coverage: Ensure sufficient test coverage, and validate the correctness of coroutines with mocking frameworks like MockK.

  6. Performance:

    • Ensure that Kotlin’s performance optimizations, such as inline functions and tail recursion, are used where appropriate.

iii. Final Approval and Merging

  • PR Review Feedback: Reviewers should leave comments or suggestions on the PR, addressing any concerns related to code quality, security, performance, or best practices.

  • Changes Requested: If changes are requested, developers must address them and push updates to the same PR.

  • Approval: Once the review process is complete and all feedback has been incorporated, reviewers should approve the PR. The PR can then be merged into the target branch (usually develop or main).

  • Squash and Merge: For a cleaner git history, PRs should be merged using the Squash and Merge strategy, which condenses all commits in the PR into a single commit.

4. Security & Compliance

In AMRIT, security and compliance are paramount, especially because the platform handles sensitive health information and Personally Identifiable Information (PII). The team must follow rigorous standards and best practices to ensure that data is handled securely and in compliance with relevant regulations (e.g., the DPDP Act). The key practices are outlined below:

a. Secure Engineering Practices

Secure Coding Guidelines

  • Input Validation & Sanitization: All user inputs should be validated on both the client side and the server side to prevent SQL injection, XSS (Cross-Site Scripting), CSRF (Cross-Site Request Forgery), and other injection attacks.

    • Use of Prepared Statements: In SQL queries, always use prepared statements to prevent SQL injection.

    • Sanitize User Input: Use input sanitization libraries to clean user inputs before processing.

    • Sanitize Output: Always sanitize and escape data before rendering it on the front end to avoid XSS attacks.
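As a concrete illustration of the “sanitize output” rule, the sketch below HTML-escapes untrusted text before rendering. The helper is hypothetical (not part of AMRIT); production code should prefer a vetted library such as the OWASP Java Encoder.

```java
// Hypothetical helper, for illustration only: neutralizes the characters an
// XSS payload relies on before the value is rendered in HTML.
class HtmlEscaper {
    static String escape(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```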

  • Authentication & Authorization:

    • Use multi-factor authentication (MFA) wherever possible, especially for accessing admin panels or sensitive data.

    • Follow OAuth 2.0 and JWT (JSON Web Tokens) standards for secure API authentication and authorization.

    • Implement role-based access control (RBAC) or attribute-based access control (ABAC) to enforce permissions based on the user’s role and privileges.

    • For sensitive actions, ensure two-person rule or approval workflows.
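The RBAC idea above reduces to a role-to-permission lookup, sketched below. Role and permission names are invented for illustration and are not AMRIT’s actual access model; each role gets the smallest permission set it needs (least privilege).

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Illustrative RBAC check: a request is allowed only if the caller's role
// grants the required permission. Unknown roles get no permissions at all.
class RbacCheck {
    enum Permission { VIEW_RECORD, EDIT_RECORD, DELETE_RECORD }

    static final Map<String, Set<Permission>> ROLE_PERMISSIONS = Map.of(
        "health_professional", EnumSet.of(Permission.VIEW_RECORD, Permission.EDIT_RECORD),
        "administrator",       EnumSet.allOf(Permission.class),
        "data_entry",          EnumSet.of(Permission.VIEW_RECORD)
    );

    static boolean isAllowed(String role, Permission required) {
        return ROLE_PERMISSIONS.getOrDefault(role, EnumSet.noneOf(Permission.class))
                               .contains(required);
    }
}
```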

  • Password Management:

    • Hash passwords using secure hashing algorithms like bcrypt or PBKDF2 with strong salts.

    • Never store passwords or sensitive information in plaintext.

    • Enforce password complexity rules, such as requiring a mix of characters, and minimum password length.
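A minimal salted-hashing sketch using the JDK’s built-in PBKDF2 implementation (bcrypt is not in the JDK, so PBKDF2 stands in here). The iteration count and key length are illustrative defaults, not mandated values — tune them against current guidance.

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Same password + same salt -> same hash; a fresh random salt per user
// defeats rainbow-table attacks. Store salt and hash, never the password.
class PasswordHasher {
    private static final int ITERATIONS = 120_000;  // illustrative work factor
    private static final int KEY_BITS = 256;

    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    static String hash(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
            byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                             .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(derived);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("PBKDF2 unavailable", e);
        }
    }
}
```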

  • Data Encryption:

    • In-Transit Encryption: Use TLS (Transport Layer Security) to encrypt all network communication (i.e., HTTPS for web traffic).

    • At-Rest Encryption: Encrypt sensitive data stored in databases using strong encryption algorithms (e.g., AES-256).

    • Encryption Keys: Store encryption keys securely, using services like AWS KMS or Azure Key Vault, and ensure they are rotated periodically.
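At-rest encryption with AES-256 can be sketched with the JDK’s AES/GCM cipher, which also authenticates the ciphertext. Key handling below is deliberately simplified for illustration — in practice the key comes from a managed key store (e.g., AWS KMS) and is never generated and held in application memory like this.

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Illustrative field-level encryption: a fresh random IV per value is
// prepended to the ciphertext so decryption can recover it later.
class FieldEncryptor {
    private static final int IV_BYTES = 12;   // 96-bit IV, the recommended size for GCM
    private static final int TAG_BITS = 128;

    static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);                     // AES-256
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] iv = new byte[IV_BYTES];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
            byte[] ct = cipher.doFinal(plaintext);
            byte[] out = Arrays.copyOf(iv, IV_BYTES + ct.length);  // IV prepended
            System.arraycopy(ct, 0, out, IV_BYTES, ct.length);
            return out;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] decrypt(SecretKey key, byte[] blob) {
        try {
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key,
                    new GCMParameterSpec(TAG_BITS, Arrays.copyOf(blob, IV_BYTES)));
            return cipher.doFinal(blob, IV_BYTES, blob.length - IV_BYTES);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```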

  • Security Headers:

    • Use HTTP Security Headers like Strict-Transport-Security (HSTS), Content-Security-Policy (CSP), and X-Content-Type-Options to mitigate various attacks such as XSS, clickjacking, and man-in-the-middle attacks.

  • API Security:

    • Rate Limiting: Implement rate limiting to prevent abuse and DDoS attacks.

    • API Key Management: Use API keys for authentication with external services. Never hardcode API keys in the source code or expose them in public repositories.

    • Input Validation for APIs: Apply strict validation rules for all incoming API requests to prevent malicious data from being processed.
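Rate limiting is usually enforced at the gateway, but the underlying token-bucket logic can be sketched as the following hypothetical in-memory limiter (a real deployment would use infrastructure-level limiting, e.g. an API gateway or Redis-backed counters, rather than per-instance state).

```java
// Token bucket: a client may burst up to `capacity` requests, after which
// requests are allowed only as fast as tokens refill.
class TokenBucket {
    private final int capacity;
    private final double refillPerNano;
    private double tokens;
    private long lastRefill;

    TokenBucket(int capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;   // over the limit: caller should return HTTP 429
    }
}
```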

Code Reviews for Security

  • All PRs must include a review specifically for security concerns, including:

    • Checking for hardcoded credentials, secrets, or API keys.

    • Verifying encryption methods and key management practices.

    • Reviewing authentication and authorization logic to ensure proper access control.

    • Ensuring that no sensitive data (e.g., passwords, personal information) is logged or exposed in error messages.

Secure Dependencies Management

  • Third-party Libraries: Always use trusted, well-maintained third-party libraries. Regularly update libraries to ensure that any known vulnerabilities are patched.

    • Use tools like Dependabot (for GitHub) or Snyk to automatically check for vulnerabilities in dependencies.

  • Minimize External Dependencies: Avoid using external dependencies unless absolutely necessary, and review their source code to verify that they follow secure practices.

b. Health Information & PII (Personally Identifiable Information)

Since AMRIT handles health information and Personally Identifiable Information (PII), it’s crucial that all team members follow the appropriate protocols to protect this sensitive data and maintain compliance with laws and regulations.

Data Classification and Segmentation

  • Data Sensitivity Classification: Classify all data based on its sensitivity (e.g., public, internal, confidential, sensitive). Health information and PII should always be categorized as confidential or sensitive.

    • Sensitive Data: This includes any data related to health (such as diagnoses, medical history, treatment plans), identity (name, address, phone number, email), and financial information.

  • Data Segmentation: Use segmentation techniques to ensure that sensitive data is only accessible to authorized roles. Implement the principle of least privilege (PoLP), ensuring that only those who need access to sensitive data can access it.

Data Retention and Minimization

  • Data Retention Policy: Define a clear data retention policy for health information and PII. Ensure that personal data is only stored for as long as necessary to fulfill the intended purpose and legal obligations.

    • Implement automatic data purging or archiving mechanisms for expired or obsolete records.

  • Data Minimization: Only collect and store the minimum amount of PII and health data necessary to fulfill business requirements. Avoid collecting excessive data points unless absolutely necessary.
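The retention rule above reduces to a cutoff-date filter, sketched below. The record shape and the period passed in are placeholders, not AMRIT’s actual policy.

```java
import java.time.LocalDate;
import java.time.Period;
import java.util.List;
import java.util.stream.Collectors;

// Records created before (today - retention period) are candidates for
// purging or archival; everything newer is kept.
class RetentionPolicy {
    record StoredRecord(String id, LocalDate createdOn) {}

    private final Period retention;

    RetentionPolicy(Period retention) { this.retention = retention; }

    List<StoredRecord> expired(List<StoredRecord> records, LocalDate today) {
        LocalDate cutoff = today.minus(retention);
        return records.stream()
                      .filter(r -> r.createdOn().isBefore(cutoff))
                      .collect(Collectors.toList());
    }
}
```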

Access Control for Sensitive Data

  • Role-Based Access: Enforce strict access control mechanisms to ensure that only authorized users can access health information and PII. This can include:

    • Access control based on roles (e.g., health professionals, administrators, etc.).

    • Time-based or context-based access controls (e.g., limiting access to PII only during certain times or from certain locations).

  • Audit Logs: Maintain comprehensive audit logs for access to sensitive data. Logs should include:

    • Who accessed the data

    • When it was accessed

    • What actions were performed on the data (view, modify, delete)

    • Any failed access attempts

    These logs should be stored securely and be readily accessible for auditing purposes.
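A minimal sketch of the audit fields listed above (who, when, what action, on which record, success or failure). The in-memory list is for illustration only; a real implementation writes to tamper-evident, centrally stored logs.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Each access to sensitive data produces one immutable entry; failed
// attempts are recorded too, so suspicious activity can be reviewed.
class AuditLog {
    record Entry(String userId, Instant at, String action,
                 String recordId, boolean succeeded) {}

    private final List<Entry> entries = new ArrayList<>();

    void record(String userId, String action, String recordId, boolean succeeded) {
        entries.add(new Entry(userId, Instant.now(), action, recordId, succeeded));
    }

    List<Entry> failedAttempts() {
        return entries.stream().filter(e -> !e.succeeded()).toList();
    }

    List<Entry> byUser(String userId) {
        return entries.stream().filter(e -> e.userId().equals(userId)).toList();
    }
}
```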

Data Masking and Anonymization

  • Data Masking: Where appropriate, mask or encrypt sensitive information in databases, especially in development or test environments.

  • Anonymization/De-identification: Use techniques to anonymize or de-identify sensitive health data when used for analytics or machine learning. This can help to prevent exposure of personally identifiable data while still allowing for useful analysis.
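Both techniques can be sketched in a few lines — masking keeps a recognizable shape for test environments, while a salted one-way hash yields a stable de-identified token for analytics. The formats are illustrative, and note that hashing low-entropy identifiers without a secret salt remains re-identifiable.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Hypothetical helpers: mask() for display/test data, pseudonym() for a
// stable de-identified analytics key.
class Deidentifier {
    /** Masks all but the last four characters, e.g. a phone number. */
    static String mask(String value) {
        if (value.length() <= 4) return "****";
        return "*".repeat(value.length() - 4) + value.substring(value.length() - 4);
    }

    /** Stable pseudonym: SHA-256 of the identifier plus a project-level salt. */
    static String pseudonym(String identifier, String salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest((salt + identifier).getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest).substring(0, 16);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```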

Regulatory Compliance

  • Local Compliance Regulations: Ensure that AMRIT complies with data protection laws and regulations in India, notably the Digital Personal Data Protection (DPDP) Act.

Training and Awareness

  • Regular Training: All team members must receive regular training on data security, privacy regulations, and secure handling of health information and PII.

  • Security Awareness: Team members should be trained to identify common phishing attacks, social engineering, and other tactics used to compromise sensitive data.

  • Incident Response Plan: Ensure that the team is familiar with the incident response plan in case of a data breach or security incident. This includes promptly notifying affected individuals and relevant authorities as per regulatory requirements.

c. Regular Security Audits and Penetration Testing

  • Vulnerability Scanning: Use automated tools to regularly scan the application and infrastructure for vulnerabilities (e.g., OWASP ZAP, Burp Suite).

  • Penetration Testing: Conduct annual penetration testing of the application and network infrastructure to identify potential vulnerabilities and ensure security measures are effective.

  • Compliance Audits: Regularly audit the platform’s compliance with security frameworks and privacy regulations.

5. Quality Assurance

In AMRIT, Quality Assurance (QA) plays a crucial role in ensuring the reliability, security, and functionality of the platform. The QA process must be integrated throughout the Software Development Life Cycle (SDLC) to maintain high quality standards and deliver a dependable product to end-users.

a. QA Process Overview

The QA process ensures that the product meets the defined quality standards and is free of defects. This process is integrated throughout the SDLC, from requirement gathering to final deployment, and involves:

  • Test Planning: Define the scope, strategy, and types of testing to be performed.

  • Test Execution: Execute various tests such as unit tests, integration tests, and user acceptance tests.

  • Defect Management: Track and manage defects found during testing.

  • Test Reporting: Provide regular reports on testing progress, defects, and test coverage.

  • Test Closure: Final review and closure of testing activities.

b. Types of Testing to be Performed

Unit Testing

  • Purpose: Verify individual components or units of the application, such as functions or methods, to ensure they work as expected.

  • Responsibility: Developers are responsible for writing unit tests for their code, typically using frameworks like JUnit (for Java) or Jasmine with Karma (for Angular).

  • Frequency: Performed continuously, with every feature or bug fix being accompanied by unit tests.

Integration Testing

  • Purpose: Ensure that different modules or services within the application work together correctly.

  • Responsibility: QA engineers should create integration tests to verify that components interact properly, focusing on end-to-end data flow and system behavior.

  • Frequency: Performed after unit testing and before the full system testing phase.

System Testing

  • Purpose: Evaluate the entire system's functionality, performance, and behavior as a whole.

  • Responsibility: QA engineers will execute system-level tests based on the requirements to ensure the application performs as intended.

  • Frequency: Conducted after integration testing and before the user acceptance testing phase.

Regression Testing

  • Purpose: Ensure that new changes do not negatively impact the existing functionality of the system.

  • Responsibility: QA engineers will run a suite of tests to confirm that new code doesn’t break existing features.

  • Frequency: Performed after each sprint, typically before every release.

Performance Testing

  • Purpose: Assess the performance and scalability of the application, ensuring it handles expected user loads and performs efficiently.

  • Responsibility: QA engineers use tools like JMeter, Gatling, or LoadRunner to simulate high-traffic conditions and identify potential bottlenecks.

  • Frequency: Done during system testing and pre-production phases.

Security Testing

  • Purpose: Identify security vulnerabilities in the application, ensuring the system is secure from threats and risks.

  • Responsibility: Security-focused QA engineers will run vulnerability scans and penetration tests to verify the system's security posture.

  • Frequency: Performed at every major release, with a focus on critical features handling sensitive data.

User Acceptance Testing (UAT)

  • Purpose: Validate that the application meets the end user's needs and business requirements.

  • Responsibility: QA, product owners, L1 team and key stakeholders collaborate to ensure the application delivers as expected in real-world scenarios.

  • Frequency: UAT is typically performed in the final stage before the release is made to production.

c. Test Strategy and Plan

Each phase of testing should be planned and executed according to a Test Strategy and Test Plan to ensure that testing aligns with the project goals.

Test Strategy:

  • Scope: Outline which features will be tested and the types of testing to be performed.

  • Test Levels: Define whether testing will be done at the unit, integration, system, and acceptance levels.

  • Resources: Identify the team members responsible for different types of testing.

  • Tools: Define tools and technologies to be used in testing (e.g., JUnit for unit tests, Selenium for UI tests, JMeter for performance testing).

Test Plan:

  • Test Objectives: Define clear goals for each phase of testing.

  • Test Cases: Each test should have a predefined test case that specifies the test’s objectives, input data, expected result, and the steps to execute.

  • Test Execution: Set timelines for when tests will be executed, along with responsibility assignments for different team members.

  • Acceptance Criteria: Define the criteria that must be met for the testing phase to be considered complete.

d. Test Automation

Automating repetitive and critical tests helps increase test coverage, reduce testing time, and ensure consistency. Key points to consider for automation:

  • Identify Critical Tests for Automation: Focus on automating high-impact and frequently executed tests, such as unit tests, smoke tests, and regression tests.

  • Automation Framework: Define and implement an automation framework (e.g., Selenium for UI tests, TestNG for integration tests) to streamline test creation, execution, and reporting.

  • Continuous Integration (CI): Integrate automated tests into the CI/CD pipeline (e.g., using Jenkins, GitHub Actions, or GitLab CI) to run tests automatically on every code push, ensuring that issues are caught early.

  • Maintenance: Ensure that the automated test suite is regularly updated and maintained to adapt to changes in the application.

e. Defect Management

Defect management is crucial to identify, track, and resolve defects efficiently.

Defect Lifecycle:

  • Defect Logging: Every defect found during testing should be logged in the issue tracking system (e.g., JIRA) with detailed information (e.g., steps to reproduce, expected vs. actual behavior, severity).

  • Defect Prioritization: Prioritize defects based on severity and impact. Critical issues must be fixed before release.

  • Defect Tracking: Track the status of defects (e.g., Open, In Progress, Fixed, Reopened, Closed) and ensure timely resolution.

  • Defect Review: Defects should be reviewed during sprint retrospectives to identify recurring patterns and address root causes.

Severity and Priority Levels:

  • Critical: Application-breaking issues (e.g., crashes, data loss).

  • Major: High-impact issues affecting core functionality but with workarounds.

  • Minor: Issues with low impact, often cosmetic or related to non-critical features.

  • Trivial: Cosmetic issues with no impact on functionality.

f. Continuous Testing and Quality Metrics

Test Coverage:

  • Ensure adequate test coverage for all key features, especially those related to patient data, PII, and health records.

  • Use code coverage tools (e.g., JaCoCo for Java, Istanbul for JavaScript) to measure test coverage and ensure it meets defined thresholds (e.g., 80% or higher).

Test Metrics:

  • Defect Density: Measure defects per unit of code or functionality.

  • Test Execution Rate: Measure how many tests were executed successfully vs. total tests created.

  • Escaped Defects: Track defects found after the release to measure the effectiveness of the testing process.
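The arithmetic behind these metrics is simple; the sketch below shows one reasonable set of definitions (the exact formulas are illustrative, not mandated by this SOP).

```java
// One plausible way to compute the three metrics named above.
class QualityMetrics {
    /** Defect density: defects per thousand lines of code (KLOC). */
    static double defectDensity(int defects, int linesOfCode) {
        return defects / (linesOfCode / 1000.0);
    }

    /** Test execution rate: share of created tests executed successfully, as a percentage. */
    static double testExecutionRate(int passed, int total) {
        return 100.0 * passed / total;
    }

    /** Escaped-defect ratio: post-release defects over all defects found. */
    static double escapedDefectRatio(int foundAfterRelease, int foundBeforeRelease) {
        return (double) foundAfterRelease / (foundAfterRelease + foundBeforeRelease);
    }
}
```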

g. Reporting and Communication

  • Test Reports: Generate comprehensive reports for each testing phase, including pass/fail rates, defect trends, and test coverage.

  • Status Meetings: Conduct regular QA status meetings to communicate testing progress, risks, and issues. Share test reports with stakeholders regularly.

  • Sprint Retrospectives: QA insights should be shared during sprint retrospectives to improve the process and address any issues faced during testing.

h. Post-Release Testing and Monitoring

After the release to production, QA should continue to monitor the application for any issues that arise in real-world use.

  • Smoke Testing in Production: Perform light testing on critical paths after deployment to ensure basic functionality.

  • User Feedback: Gather user feedback and bug reports to identify issues that may have been missed during testing.

  • Continuous Monitoring: Set up monitoring systems (e.g., Prometheus, Grafana, ELK) to continuously track application performance, security, and uptime.


6. Release Management

The Release Management process ensures that the deployment of new features, bug fixes, and updates to the AMRIT platform is smooth, controlled, and efficient. This section covers key aspects such as updating JIRA releases, informing support teams, build verification testing, semantic versioning, and ownership of the release process.

a. JIRA Releases Update

JIRA is the central tool for tracking the progress of features, bugs, and enhancements. Properly updating the release versions in JIRA is critical for maintaining visibility, traceability, and clear communication with stakeholders.

Updating JIRA Releases:

  • Release Version Creation: At the beginning of each sprint or release cycle, a new version is created in JIRA. This version corresponds to the upcoming release and will serve as the target for the issues and stories that will be worked on.

  • Linking Issues to the Release: Every issue (user story, bug, enhancement) that will be part of the release must be linked to the respective release version in JIRA. This ensures that all work completed within that sprint or release cycle is accounted for.

    • Release Version Field: Ensure that the release version is selected in the "Fix Version" field for each issue.

  • Progress Monitoring: As development work progresses, update the JIRA issues to reflect their status (e.g., in progress, code review, testing, done). The progress is automatically tracked, and the team can have real-time insights into how much work is done and what remains.

  • Release Notes: Once the release is ready, generate the release notes directly from JIRA. These notes should summarize the key changes made in the release, including new features, bug fixes, and any significant improvements.

  • Closing the Release: After deployment to production, close the release version in JIRA and archive it to prevent further changes. Ensure that all issues linked to the release are resolved or marked as "done" in JIRA.

b. Informing L1 Support

The L1 Support team must be informed about upcoming releases to prepare for potential issues, monitor the deployed features, and respond to customer queries effectively.

Steps to Inform L1 Support:

  • Pre-Release Notification: A few days before the release, send a detailed release note to the L1 Support team. This should include:

    • New features being released.

    • Any bug fixes or enhancements.

    • Known issues or regressions from previous releases.

    • Specific areas to monitor after release (e.g., health data-related features).

    • Any configuration or environment changes.

  • Release Day Communication: On the day of the release, inform the L1 Support team when the deployment starts and when it completes. This allows them to monitor the system and be ready for user-reported issues.

  • Post-Release Monitoring: After the release, the L1 Support team should monitor the system for anomalies or bugs reported by users. They should be able to quickly verify whether an issue is related to the new release.

  • Support Documentation: Ensure that L1 Support has access to updated support documentation, including troubleshooting steps and FAQs related to the new release.

c. Build Verification Testing (BVT)

Build Verification Testing (BVT) is a crucial step in ensuring the integrity and stability of the application after each deployment.

BVT Process:

  • Definition: BVT is a set of preliminary tests executed after each build to verify that the major functionality of the system works and that the build is stable enough to proceed with further testing.

  • Responsibility: The QA team is typically responsible for running the BVTs.

  • Scope of BVT:

    • Ensure that the application is deployed successfully and is accessible.

    • Verify that the core features, like login, user registration, and key workflows, are functioning.

    • Check for any obvious errors in the application, such as missing assets, broken links, or crashes.

  • Automation: If automated tests are in place, BVT should run these automated tests first, ensuring basic functionality is verified quickly.

  • Manual Checks: If automated testing is unavailable, QA should perform critical path testing to manually verify the build.

  • Sign-Off: The BVT must be completed successfully before any further testing (e.g., regression, UAT) or deployment to production is allowed.
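The BVT sign-off rule above can be sketched as a small aggregation helper: every critical check (login, registration, key workflows) must pass before further testing or deployment proceeds. The check names and the result shape below are illustrative assumptions, not part of this SOP.

```typescript
// Minimal sketch of a BVT sign-off aggregator (illustrative only).
// The BvtCheck shape and check names are assumptions, not AMRIT code.

interface BvtCheck {
  name: string;       // e.g. "deployment reachable", "login", "registration"
  passed: boolean;
  critical: boolean;  // critical checks block sign-off on failure
}

// Sign-off is granted only when every critical check passes; non-critical
// failures are reported but do not block further testing.
function bvtSignOff(checks: BvtCheck[]): { signedOff: boolean; failures: string[] } {
  const failures = checks.filter(c => !c.passed).map(c => c.name);
  const criticalFailures = checks.filter(c => c.critical && !c.passed);
  return { signedOff: criticalFailures.length === 0, failures };
}
```

In practice the `passed` values would come from automated smoke tests or the QA team's manual critical-path checks, and the failure list would go into the release's JIRA comments.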

d. Semantic Versioning

Semantic Versioning (SemVer) is a versioning scheme that aims to convey meaning about the underlying changes in a release. It helps manage dependencies and compatibility between systems, ensuring that the release process is transparent and predictable.

Versioning Format:

Semantic Versioning uses the format MAJOR.MINOR.PATCH:

  • MAJOR: Incremented when backward-incompatible changes are made (e.g., breaking changes in the API).

  • MINOR: Incremented when backward-compatible new features or enhancements are added.

  • PATCH: Incremented when backward-compatible bug fixes or minor improvements are made.

Versioning Strategy:

  • Pre-Release Versions: For pre-production or staging environments, use labels like 1.0.0-alpha or 1.0.0-beta to indicate that the release is not final.

  • Release Candidates: Before the final release, use 1.0.0-rc1, 1.0.0-rc2, etc., to indicate that the release is a candidate for production but may still have unresolved issues.

  • Stable Releases: Once all critical issues are resolved and the release is ready for production, increment the version to a stable number (e.g., 1.0.0, 2.3.1).
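As a sketch of how the MAJOR.MINOR.PATCH format and pre-release labels behave in practice, the illustrative parser/comparator below orders versions per the SemVer rules, with pre-releases such as 1.0.0-rc1 sorting before the stable 1.0.0. It is an example, not an AMRIT utility.

```typescript
// Illustrative SemVer parse-and-compare helper (not an official tool).
interface SemVer {
  major: number;
  minor: number;
  patch: number;
  preRelease?: string; // e.g. "alpha", "beta", "rc1"
}

function parseSemVer(v: string): SemVer {
  const m = /^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?$/.exec(v);
  if (!m) throw new Error(`Invalid version: ${v}`);
  return { major: +m[1], minor: +m[2], patch: +m[3], preRelease: m[4] };
}

// Negative if a < b, positive if a > b, 0 if equal.
// Per the SemVer spec, a pre-release sorts before its stable release.
function compareSemVer(a: string, b: string): number {
  const x = parseSemVer(a), y = parseSemVer(b);
  if (x.major !== y.major) return x.major - y.major;
  if (x.minor !== y.minor) return x.minor - y.minor;
  if (x.patch !== y.patch) return x.patch - y.patch;
  if (!x.preRelease && y.preRelease) return 1;   // stable > pre-release
  if (x.preRelease && !y.preRelease) return -1;
  return (x.preRelease ?? "").localeCompare(y.preRelease ?? "");
}
```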

e. Ownership of the Release Process

Clear ownership of the release process ensures accountability and a smooth transition from development to production.

Roles and Responsibilities:

  • Product Owner: Owns the product roadmap and ensures that the release aligns with business objectives. They prioritize the features and fixes that go into the release.

  • Release Manager: Responsible for overseeing the entire release process. They plan, coordinate, and communicate the release timeline, updates, and the necessary steps across teams.

  • Development Team: Ensures that the code is written, tested, and ready for release. Developers create the release branch and ensure that features are production-ready.

  • QA Team: Owns the testing phase of the release. They run BVTs, regression tests, and UAT to ensure that the release is stable and meets quality standards.

  • DevOps Engineer: Handles the deployment process, including setting up environments, ensuring the infrastructure is in place, and performing the actual deployment to production. They also manage rollback procedures in case of deployment failure.

  • L2 Support Team: Takes ownership of post-release monitoring and support. They are responsible for troubleshooting any production issues and supporting end-users post-release.

f. Post-Release Monitoring and Support

Once the release is live in production, continuous monitoring is necessary to ensure the system performs as expected.

  • Monitoring: Set up automated systems to track application performance, such as uptime, response times, and error rates. Tools like Prometheus, Grafana, or New Relic can help with this.

  • User Feedback: Encourage users to report any issues or anomalies encountered post-release.

  • Hotfixes: If critical issues arise after the release, the development and QA teams should be ready to implement hotfixes. These should be planned and released quickly, following the established release process.

  • Post-Release Review: After the release, conduct a retrospective meeting to discuss what went well, what could have been improved, and how to streamline the release process for future versions.
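As a minimal illustration of the monitoring step, the sketch below flags an alert when the error rate in a monitoring window crosses a threshold. Real deployments would source these counts from tools like Prometheus or New Relic; the 5% threshold and the stats shape are assumed values for illustration.

```typescript
// Hypothetical post-release health check: raises a flag when the error rate
// over a monitoring window exceeds a threshold. Numbers are illustrative.

interface WindowStats {
  totalRequests: number;
  errorResponses: number; // e.g. HTTP 5xx count for the window
}

function errorRateAlert(stats: WindowStats, threshold = 0.05): boolean {
  if (stats.totalRequests === 0) return false; // no traffic, nothing to flag
  return stats.errorResponses / stats.totalRequests > threshold;
}
```

A check like this would typically run on a schedule and page the L2 Support team when it returns true, feeding into the hotfix process described above.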

7. Third-Party Components

AMRIT leverages various third-party libraries, APIs, tools, and SDKs across its frontend, backend, and infrastructure layers. Proper governance around their use ensures security, license compliance, performance, and long-term maintainability.

a. Evaluation Before Adoption

Before incorporating any third-party component into the AMRIT platform:

  • Functionality Fit: Ensure it addresses a real functional or technical need and doesn’t introduce unnecessary dependencies.

  • Security Review:

    • Check for known vulnerabilities using tools like npm audit, OWASP Dependency Check, Snyk, or Trivy.

    • Prefer actively maintained projects with good community support.

  • License Review:

    • Only use components with permissive open-source licenses (e.g., MIT, Apache 2.0, BSD).

    • Avoid restrictive or viral licenses like GPL unless explicitly approved.

  • Performance Impact: Ensure the library doesn’t bloat the application bundle (frontend) or significantly impact backend performance.

  • Community Health:

    • Check for last commit date, number of contributors, open issues, and responsiveness on GitHub.

    • Review documentation quality.

b. Approval Process

  • All third-party components must be reviewed and approved by the Tech Lead / Architect before being added to the codebase.

  • For components that handle data processing, especially health data or PII, an additional compliance review must be conducted.

c. Usage Guidelines

  • Pin versions in package.json, pom.xml, or build.gradle to avoid unexpected changes due to automatic upgrades.

  • Avoid over-reliance on a single component for core business logic.

  • Wrap critical third-party functions (e.g., encryption, health algorithms) in an internal abstraction layer to ease future migration.

  • Document usage and purpose in the repository’s README.md or internal wiki.
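The abstraction-layer guideline above can be sketched as follows: application code depends on an internal interface, and only a thin adapter touches the third-party library, so swapping vendors later means replacing one class. `VendorCryptoAdapter` is hypothetical, and simple string reversal stands in for real encryption purely to keep the sketch self-contained and runnable.

```typescript
// Internal abstraction over a (hypothetical) third-party crypto dependency.
interface EncryptionService {
  encrypt(plaintext: string): string;
  decrypt(ciphertext: string): string;
}

// Adapter: the ONLY place that would import the vendor library. String
// reversal is a placeholder for real cryptography in this sketch.
class VendorCryptoAdapter implements EncryptionService {
  encrypt(plaintext: string): string {
    return plaintext.split("").reverse().join("");
  }
  decrypt(ciphertext: string): string {
    return ciphertext.split("").reverse().join("");
  }
}

// Application code is written against EncryptionService only, never the vendor API.
function storeSecret(svc: EncryptionService, value: string): string {
  return svc.encrypt(value);
}
```

If the vendor library is later deprecated or fails a security review, only `VendorCryptoAdapter` changes; callers of `EncryptionService` are untouched.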

d. Maintenance and Updates

  • Schedule periodic dependency audits (e.g., once per sprint/month).

  • Use tools like Dependabot, Renovate, or npm-check-updates to monitor outdated or vulnerable dependencies.

  • Avoid upgrading major versions unless tested thoroughly in staging.
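As an illustration of auditing against the version-pinning guideline, the sketch below scans a dependency map for ranges that are not exact versions. In practice, tools like Dependabot, Renovate, or npm-check-updates handle this; `findUnpinned` is a hypothetical helper, not a real package.

```typescript
// Hypothetical audit helper: flags package.json-style version ranges that are
// not pinned to an exact MAJOR.MINOR.PATCH version (e.g. ^, ~, *, >=).

function findUnpinned(deps: Record<string, string>): string[] {
  const exact = /^\d+\.\d+\.\d+$/; // only an exact triple counts as pinned
  return Object.entries(deps)
    .filter(([, range]) => !exact.test(range))
    .map(([name]) => name);
}
```

Running a check like this in CI during the periodic audit makes accidental floating ranges visible before they cause an unexpected upgrade.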

e. Monitoring & Risk Management

  • Continuously monitor for CVEs (Common Vulnerabilities and Exposures) associated with third-party libraries.

  • Subscribe to GitHub watchlists or mailing lists for critical dependencies.

  • In case of critical vulnerability:

    • Assess the impact.

    • Patch or upgrade immediately.

    • Notify the Technical Architect if applicable.

f. Third-Party APIs

For any external APIs (e.g., SMS gateways, identity providers, health registries):

  • Use well-defined, versioned APIs with SLA and documentation.

  • Handle API failures gracefully (timeouts, retries, fallbacks).

  • Do not hardcode API tokens or secrets—store them securely in environment variables or secret managers.

  • Ensure proper rate limiting, data validation, and logging are in place.
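The failure-handling guideline above can be sketched as a generic retry helper with exponential backoff and a fallback value; a per-request timeout could be layered on via an AbortController. The parameters and defaults here are assumptions for illustration, and secrets would come from environment variables (e.g. a hypothetical `process.env.SMS_API_TOKEN`), never from source code.

```typescript
// Illustrative retry-with-fallback wrapper for external API calls.
// attempts and backoffMs defaults are assumed values, not a standard.

async function callWithRetry<T>(
  fn: () => Promise<T>,
  fallback: T,
  attempts = 3,
  backoffMs = 200,
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch {
      if (i < attempts - 1) {
        // exponential backoff: 200ms, 400ms, 800ms, ...
        await new Promise(res => setTimeout(res, backoffMs * 2 ** i));
      }
    }
  }
  return fallback; // graceful degradation instead of surfacing a crash
}
```

For example, an SMS-gateway call could be wrapped so that a gateway outage degrades to a "queued for retry" state rather than blocking a clinical workflow.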

g. Logging and Telemetry

  • Do not log raw third-party responses if they contain sensitive data.

  • Ensure error logs are scrubbed of tokens, secrets, or patient-identifiable data.
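The scrubbing guideline above can be sketched as a recursive redaction pass applied to a payload before it reaches the logs. The sensitive-key list is an assumption; it should be extended to match the actual third-party responses and patient-data fields in use.

```typescript
// Minimal log-scrubbing sketch: redacts secret-bearing fields (by key name)
// before a payload is logged. The key list is illustrative, not exhaustive.

const SENSITIVE_KEYS = ["token", "authorization", "password", "apikey", "secret"];

function scrubForLogging(payload: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    if (SENSITIVE_KEYS.some(k => key.toLowerCase().includes(k))) {
      out[key] = "[REDACTED]";
    } else if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      out[key] = scrubForLogging(value as Record<string, unknown>); // recurse into nested objects
    } else {
      out[key] = value;
    }
  }
  return out;
}
```

Key-name matching alone will not catch patient identifiers embedded in free-text values, so it complements, rather than replaces, the rule against logging raw third-party responses.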

h. Offboarding/Deprecation

  • Before removing a third-party component:

    • Audit where it's used.

    • Replace it with internal or alternative solutions.

    • Clean up residual references (docs, configs, environment files).

8. Incident Response

This section outlines the workflow for managing incidents raised from the field, operations, and support teams, ensuring timely resolution and accountability. Given AMRIT's deployment in health systems, responsiveness to field issues is critical for maintaining service continuity and trust.

a. Sources of Incident Tickets

Incidents can be raised from:

  • Field Users (e.g., ASHAs, lab techs, facility staff)

  • Operations Team

  • L1 Support (external helpdesk or internal first responders)

All issues are initially routed through the JIRA Service Desk (Support Portal).

b. Initial Triage (L1 & Ops)

  • L1 and Operations teams perform first-level triage to:

    • Validate the issue (reproduce it if possible)

    • Check if it is a known error or user training issue

    • Tag the ticket with proper severity and category (Bug, Enhancement, Infra, etc.)

    • Attach relevant logs, screenshots, or replication steps

If resolvable, L1 handles the issue directly (e.g., config corrections, user guidance).

c. Escalation to L2 Support

If L1 is unable to resolve the ticket:

  • It is escalated to the L2 Support team.

  • L2 Support performs a technical investigation:

    • Analyze logs

    • Check database/API health

    • Identify if it's a backend/frontend bug, performance issue, or infra problem.

If L2 confirms a code-level or system-level issue, the ticket is moved to the AMM JIRA board for engineering attention.

d. Ticket Handoff to Engineering

  • L2 creates or moves the ticket to the AMM JIRA Board, linking it to the original service desk ticket.

  • The ticket must include:

    • Summary and detailed description

    • Replication steps and environment

    • Error logs and screenshots (if available)

    • Priority and component labels

    • Suggested RCA (if found)

e. Engineering Resolution

  • The AMRIT Engineering Team triages the incoming issue during daily standup or within a defined SLA window.

  • Based on severity:

    • P1/P2 issues are hotfixed or prioritized in the current sprint

    • P3/P4 issues are taken into the product backlog for future sprints
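The severity-based routing above can be expressed as a small sketch; the type and label names are illustrative, not JIRA field values.

```typescript
// Sketch of the P1-P4 routing rule described above (illustrative names).
type Priority = "P1" | "P2" | "P3" | "P4";
type Destination = "current-sprint-hotfix" | "product-backlog";

// P1/P2: hotfix or current sprint; P3/P4: product backlog.
function routeBySeverity(priority: Priority): Destination {
  return priority === "P1" || priority === "P2"
    ? "current-sprint-hotfix"
    : "product-backlog";
}
```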

Once resolved:

  • QA verifies the fix in staging.

  • Build is deployed after verification and documented under Release Management.

  • L2 closes the engineering ticket and updates the original Service Desk issue with resolution notes.

f. Communication Protocols

  • Field Ops/L1 must be informed:

    • When an issue is escalated

    • When it’s being worked on

    • Once it is resolved and deployed

Communication may happen via comments on the JIRA service desk ticket.

g. Documentation and Learnings

  • Post-mortems for P1 incidents must be documented (blameless format).

  • Add any known errors to the internal Knowledge Base.

  • Tag issues that frequently recur for deeper analysis or product improvements.

h. Responsibility Matrix

  • L1 Support/Ops: First triage, check known issues, basic support

  • L2 Support: Technical investigation, escalation to AMM

  • Engineering: Root cause analysis, fix, QA, deployment

  • QA: Build verification, fix validation

  • Release Manager: Confirm fix deployment and update JIRA

  • Scrum Master/PO: Sprint-level prioritization and communication

i. RCA and CAPA Requirement

All bugs or incidents escalated from support (L2 or Service Desk) must include a Root Cause Analysis (RCA) before the ticket can be closed. This requirement applies regardless of severity (P1–P4), though P1 and P2 issues demand deeper analysis and more thorough documentation.

The RCA should be documented as per this template, and a link should be added to the dedicated field within the JIRA issue template for consistency and traceability.

Recommended RCA Format

To ensure clarity and uniformity, the following structure should be used when documenting RCAs:

  • Root Cause: What precisely caused the issue?
    E.g., unhandled null value, configuration drift, stale data, broken API contract

  • Trigger: What condition or event surfaced the issue?
    E.g., new data, infrastructure change, user action

  • Impact: What was the observable effect on users or systems?
    Include number of users affected and duration if available.

  • Resolution / Fix: What action was taken to resolve the issue?

  • Preventive Measures: What steps will be implemented to prevent recurrence?
    E.g., additional test coverage, improved monitoring, refactoring, documentation updates

Roles and Responsibilities

  • RCA Ownership:
    The RCA is owned by the developer who resolved the issue, with support from:

    • L2 Support – for replication details and logs

    • QA – for verification and assessing regression impact

    • Tech Lead / Architect – for complex or systemic issues

  • RCA Review:
    The Tech Lead or QA Manager must review the RCA for:

    • Clarity and completeness

    • Depth of analysis

    • Inclusion of meaningful preventive actions

    If the RCA is insufficient, lacks preventive measures, or appears superficial, it must be revised before the ticket is closed.

Continuous Improvement

Recurring RCA patterns and systemic issues should be flagged and discussed in:

  • Sprint Retrospectives

  • Monthly Quality Reviews

  • Engineering Guild or Knowledge Sharing Sessions

This helps drive organizational learning and long-term improvements in product quality and system reliability.

RCA Best Practices

Do:

  • Dig beyond surface-level symptoms

  • Collaborate with QA and Ops for context

  • Recommend systemic improvements when applicable

Don’t:

  • Assign blame to individuals or teams

  • Write vague root causes like “code issue” or “logic bug”

  • Skip documenting preventive actions
