1. Purpose
The purpose of this SOP is to establish a standardized framework for the Software Development Life Cycle (SDLC) of AMRIT, an open-source Electronic Health Record (EHR) platform. It aims to ensure the development, deployment, and maintenance of AMRIT adhere to best practices, regulatory compliance, and healthcare standards such as SNOMED CT, HL7, and LOINC. This SOP facilitates collaboration among developers, QA teams, product managers, and other stakeholders while maintaining data security, system interoperability, and usability across diverse healthcare environments.
2. Scope
This SOP applies to all phases of AMRIT’s SDLC, including requirement analysis, design, development, testing, deployment, and maintenance. It covers software products developed under AMRIT’s umbrella—ranging from web applications to mobile apps—and addresses integration with healthcare systems like ABHA cards and Point-of-Care Testing (POCT) devices. The primary audience includes developers, QA engineers, DevOps teams, product managers, implementation managers, and community contributors. The SOP also emphasizes offline functionality for remote healthcare delivery and multilingual support for diverse user bases.
3. Roles & Responsibilities
| Role | Responsibilities |
| --- | --- |
| Business Systems Analyst (BSA) | Analyzes requirements and scope for new tickets; documents business needs and acceptance criteria. |
| Product Owner | Prioritizes the backlog, clarifies requirements and acceptance criteria, and signs off on UAT. |
| Scrum Master | Facilitates standups, sprint planning, and retrospectives; tracks and removes blockers. |
| Developers | Implement features and fixes on feature branches, write unit tests, and raise pull requests. |
| Senior Developers | Review code, mentor developers, and oversee development and production deployments. |
| Technical Architect | Approves designs and the technical approach; validates work deployed to the development environment. |
| Project Manager | Creates and tracks tickets, coordinates across teams, and closes completed work. |
| L2 Support | Supports UAT and production deployments; handles escalated issues from the field. |
| QA Testers | Execute manual and automated tests against requirements and acceptance criteria. |
| QA Manager | Owns the QA process, certifies releases, and manages defect triage. |
| Database Administrator (DBA) | Manages database schemas, migrations, query performance, and data integrity. |
| IT and DevOps Engineer | Maintains CI/CD pipelines, environments, and deployment infrastructure. |
4. Development Workflow
a. Agile Framework
🗓️ 2-Week Sprints with Backlog Grooming
Sprint Length: Each sprint lasts for 2 weeks. This timeframe ensures that the team has enough time to complete tasks without being overly stretched.
Backlog Grooming:
Before each sprint, conduct a backlog grooming session to review and prioritize the tasks for the upcoming sprint. This ensures that the backlog is up-to-date and that high-priority issues are addressed promptly.
Product owners and key stakeholders should participate to ensure that requirements are clearly understood and documented.
Each task should have clear acceptance criteria, dependencies, and effort estimates (story points).
🧑‍💻 Daily Standups
Held every day at a consistent time to promote alignment and communication.
Each team member answers:
What did I work on yesterday?
What will I work on today?
Any blockers or issues?
Keep the meeting time-boxed (typically 15 minutes).
Scrum Master ensures that blockers are addressed and tracked.
📅 Sprint Planning
At the start of each sprint, a Sprint Planning meeting is held. This meeting is where:
Product Owner presents prioritized tasks from the backlog.
Development Team discusses the scope, technical approach, and effort estimation for each item.
Acceptance criteria and definition of done are clarified.
Goal Setting: A clear sprint goal should be set to ensure the team is aligned on priorities and expected outcomes.
Task Breakdown: Tasks should be broken down into manageable units, making them achievable within the sprint's timeline.
Story Point Assignment:
Role of Product Owner: The product owner presents and clarifies the requirements for the user stories, ensuring that each story has well-defined acceptance criteria.
Team Estimation: During Sprint Planning, the team assigns story points to each user story based on complexity, effort, and uncertainty.
Historical Data: Use past sprints’ data to ensure consistency in estimating story points.
Story Point Criteria:
Small stories (1-3 points): Can be done quickly and with low uncertainty.
Medium stories (5-8 points): Requires some effort, but no significant complexity or dependencies.
Large stories (13+ points): Involves significant effort, dependencies, or requires research and investigation.
Refer to the Story Points documentation for more details.
🏃‍♂️ Ticket Movement
The flow of tickets follows a clear path from creation to completion:
1. OPEN
- Description: The ticket is created and logged into the system. This stage indicates that the task is newly initiated and requires analysis.
- Responsible Roles: Project Manager, Business Systems Analyst (BSA).
2. ANALYSIS
- Description: The requirements and scope of the ticket are analyzed. This includes gathering business needs, technical specifications, and feasibility studies.
- Responsible Roles: BSA, Product Owner.
- Outcome: Once the analysis is complete, the ticket moves to "Ready for Development."
3. READY FOR DEVELOPMENT
- Description: The ticket is reviewed and approved for development. All prerequisites (e.g., design documents, acceptance criteria) are finalized.
- Responsible Roles: Scrum Master, Technical Architect.
- Outcome: Developers can start working on the task.
4. IN DEVELOPMENT
- Description: The development team works on implementing the feature or fixing the issue described in the ticket.
- Responsible Roles: Developers, Senior Developers.
- Outcome: Once development is complete, the ticket moves to "Pending QA."
5. PENDING QA
- Description: The development work is completed, and the ticket awaits QA testing.
- Responsible Roles: QA Testers, QA Manager.
- Outcome: The ticket moves to "In QA" for testing.
6. IN QA
- Description: QA testers validate the functionality against requirements and acceptance criteria. They perform manual and automated tests to ensure quality.
- Responsible Roles: QA Testers, QA Manager.
- Outcome: If all tests pass successfully, the ticket moves to "QA Approved."
7. QA APPROVED
- Description: The QA team certifies that the feature or fix meets quality standards and is ready for deployment in a development environment.
- Responsible Roles: QA Manager.
- Outcome: The ticket moves to "DEV ENV Deployed."
8. DEV ENV DEPLOYED
- Description: The feature or fix is deployed in the development environment for further validation by developers or stakeholders.
- Responsible Roles: Senior Developers, Technical Architect.
- Outcome: Once approved in the development environment, it moves to "UAT ENV Deployed."
9. UAT ENV DEPLOYED
- Description: The feature or fix is deployed in the User Acceptance Testing (UAT) environment for end-user validation.
- Responsible Roles: Product Owner, L2 Support Team.
- Outcome: If end-users approve it during UAT testing, it moves to "UAT Approved."
10. UAT APPROVED
- Description: End-users or stakeholders approve the functionality after testing in UAT. It is now ready for production deployment.
- Responsible Roles: Product Owner.
- Outcome: The ticket moves to "Production Deployed."
11. PRODUCTION DEPLOYED
- Description: The feature or fix is deployed in the live production environment for actual use by end-users.
- Responsible Roles: Senior Developers, L2 Support Team.
- Outcome: Once verified in production, the ticket is marked as "Closed."
12. CLOSED
- Description: The ticket is marked as resolved and closed after successful deployment and verification in production.
- Responsible Roles: Project Manager, Scrum Master.
Tickets must be updated regularly based on progress, and it’s essential to move tickets between states to reflect the current status.
Tickets should be closed promptly after successful deployment or if issues arise, with comments explaining the reason.
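Because the twelve states above form a strictly linear pipeline, the workflow can be encoded directly in code so that transitions are validated programmatically. The sketch below is illustrative only; the enum and method names are not part of AMRIT's actual tooling:

```java
// Illustrative sketch of the ticket workflow above; names are hypothetical.
public enum TicketState {
    OPEN, ANALYSIS, READY_FOR_DEVELOPMENT, IN_DEVELOPMENT, PENDING_QA,
    IN_QA, QA_APPROVED, DEV_ENV_DEPLOYED, UAT_ENV_DEPLOYED, UAT_APPROVED,
    PRODUCTION_DEPLOYED, CLOSED;

    /** Returns the next state in the pipeline, or null if the ticket is CLOSED. */
    public TicketState next() {
        int i = ordinal() + 1;
        return i < values().length ? values()[i] : null;
    }

    /** A ticket may only advance one state at a time. */
    public boolean canMoveTo(TicketState target) {
        return target != null && target == next();
    }
}
```

A guard like `canMoveTo` makes it easy for tooling to reject out-of-order moves (e.g., jumping from OPEN straight to CLOSED).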
🔄 Sprint Retrospectives
After each sprint, a retrospective meeting is held to reflect on the sprint and identify areas of improvement. This could cover:
What went well?
What can be improved?
Actionable items to improve processes for the next sprint.
Scrum Master facilitates and tracks action items from the retrospective.
b. Git Branching
🌳 Branching Strategy
Main Branch (`main` or `master`): Represents the stable, production-ready version of the code. Code in this branch should always be deployable.
Development Branch (`develop`): The primary branch where integration occurs. This is where feature branches are merged after they pass development and testing stages.
Feature Branches: Each new feature, enhancement, or bug fix is developed in a separate feature branch. These branches are named with relevant identifiers, e.g., `feature/ABC-123-add-login-screen` or `bugfix/XYZ-456-fix-api-issue`.
Start Point: Always create feature branches from the `develop` branch.
End Point: After completing development and local testing, feature branches are merged back into the `develop` branch.
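The branch-naming convention can be checked mechanically, for example in a pre-push hook or CI step. The sketch below is an assumption derived from the example names in this SOP, not an official AMRIT rule:

```java
import java.util.regex.Pattern;

// Validates branch names like feature/ABC-123-add-login-screen or
// bugfix/XYZ-456-fix-api-issue. The exact pattern is an assumption
// based on the examples in this SOP.
public final class BranchNames {
    private static final Pattern CONVENTION =
        Pattern.compile("^(feature|bugfix|hotfix)/[A-Z]+-\\d+(-[a-z0-9]+)+$");

    public static boolean isValid(String branch) {
        return CONVENTION.matcher(branch).matches();
    }
}
```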
🔀 Merging Process
When a feature is complete:
Pull Request (PR): A developer submits a PR to merge their feature branch into `develop`.
Code Review: At least one peer reviews the code for quality, functionality, and alignment with coding standards.
CI/CD Check: Automated tests (unit, integration, linting, etc.) must pass as part of the PR validation process.
Merge: Once the PR passes review, it is merged into `develop`. Avoid merging directly into `main` unless it's for a production release.
⚙️ Hotfixes
If an urgent issue arises in production, hotfixes are addressed by creating a hotfix branch from `main` (e.g., `hotfix/XYZ-789-fix-critical-bug`).
Once the hotfix is deployed, it should be merged into both `main` (for production) and `develop` (to ensure the fix is included in future releases).
c. Pull Request and Code Review
i. Pull Request Process
Step 1: Creating the Pull Request
PR Creation: Developers must create a pull request (PR) against the appropriate branch (typically `develop` for active development, `main` or `master` for production-ready code).
PR Description: Developers should provide a meaningful description in the PR, outlining what is being done (e.g., bug fixes, new features, refactoring) and mentioning any related issues from JIRA or other task management tools.
Step 2: Assign Reviewers
Review Assignment: The PR should be assigned to relevant team members for review. The choice of reviewers depends on the area of work (e.g., front-end or back-end) and the complexity of the changes.
Notification: Reviewers should be notified promptly to ensure timely feedback and avoid unnecessary delays in the process.
Step 3: Automated Checks
CI/CD Pipeline Integration: The PR should trigger the Continuous Integration/Continuous Deployment (CI/CD) pipeline, which includes:
For Angular, ensure that tests (unit and e2e) are executed, and linting checks are done (using tools like ESLint and Prettier).
For Java, run unit tests (using JUnit, Mockito) and static code analysis tools (SonarQube, Checkstyle).
For Kotlin, ensure that unit tests (using JUnit or KotlinTest) and static code analysis tools (Detekt) run correctly.
Build Verification: Ensure the code successfully compiles or builds before merging.
ii. Code Review Guidelines
Each repository type (Angular, Java, Kotlin) has slightly different focuses based on the framework, language, and best practices. Below are specific code review areas for each.
For Angular Repositories:
Code Structure & Readability:
Component Structure: Ensure that components are modular, small, and reusable. They should follow the Single Responsibility Principle (SRP).
Naming Conventions: Ensure component, service, and variable names are descriptive and follow Angular naming conventions.
Separation of Concerns: Ensure business logic is separated from UI logic. Complex logic should reside in services, not components.
Template & Styling:
HTML Template: Ensure that the template uses Angular directives like `*ngIf` and `*ngFor` correctly and follows best practices.
CSS/SCSS: Verify that styles are scoped properly (e.g., using ViewEncapsulation in Angular) and that stylesheets follow the Angular Style Guide.
UI Consistency: Verify that UI elements follow the design system and are consistent with the application's theme (e.g., using Angular Material if applicable).
State Management:
Reactive Programming: Ensure proper use of RxJS operators for handling asynchronous operations like HTTP requests.
State Management: Check for appropriate use of state management tools (e.g., NgRx or BehaviorSubject) when managing application state.
Error Handling:
Ensure errors are handled properly, including in HTTP requests (i.e., handling HTTP errors gracefully and displaying user-friendly messages).
Use ErrorBoundary or similar techniques to catch unhandled errors in the UI.
Testing:
Unit Testing: Ensure that unit tests exist for components, services, and other business logic, and they are written using Jasmine and run with Karma.
End-to-End Testing: If applicable, ensure e2e tests are in place using Protractor or Cypress.
Verify that test coverage is sufficient and tests run in CI.
Performance:
Check that the application is optimized for performance, such as implementing lazy loading for large modules and reducing unnecessary API calls.
For Java (Spring Boot) Repositories:
Code Structure & Readability:
Layered Architecture: Ensure the code adheres to a layered architecture (e.g., controllers, services, repositories). Controllers should handle HTTP requests only, while services contain business logic.
Modularity: Ensure that classes, methods, and services are modular and follow SOLID principles to make the code easier to maintain.
Naming Conventions: Follow Java naming conventions for classes, methods, and variables.
Dependency Injection & Spring Annotations:
Spring Annotations: Ensure that Spring's dependency injection is used correctly with annotations like `@Autowired`, `@Service`, `@Repository`, `@Controller`, and `@RestController`.
Service Layer: Ensure that business logic is not present in controllers but in dedicated service classes.
Security:
Authentication and Authorization: Review the use of JWT tokens, OAuth, or Spring Security for user authentication and authorization.
Input Validation: Ensure input validation is present, especially in APIs. Use annotations like `@Valid` or `@NotNull` where necessary.
Avoid Hardcoding Sensitive Data: Ensure sensitive data (e.g., passwords, API keys) is never hardcoded or exposed in source code.
Error Handling:
Global Exception Handling: Ensure that there is a centralized approach for handling exceptions in the application (e.g., using `@ControllerAdvice`).
Ensure proper HTTP status codes are returned for different types of errors (e.g., 400 for bad requests, 404 for not found, 500 for internal server errors).
Database & ORM (JPA/Hibernate):
Efficient Queries: Ensure that database queries are efficient, optimized, and avoid N+1 query problems.
Transactions: Ensure transaction management is handled appropriately for operations that need atomicity (e.g., using `@Transactional`).
Database Migrations: Ensure proper database migrations are applied when there are schema changes (e.g., using Flyway or Liquibase).
Testing:
Unit Testing: Verify that unit tests are present for business logic, using JUnit and Mockito.
Integration Testing: Ensure that there are adequate integration tests to verify the interaction between components, particularly with Spring Boot Test.
Test Coverage: Ensure that test coverage is sufficient, and use tools like JaCoCo to monitor coverage.
Performance:
Ensure efficient database queries, caching strategies, and review the use of async processing for tasks that don't need to block the main thread.
For Kotlin Repositories:
Code Structure & Readability:
Kotlin Best Practices: Ensure the code follows Kotlin best practices, such as using extension functions, null safety features, and concise syntax.
Naming Conventions: Ensure that classes, methods, and variables are named using CamelCase and adhere to Kotlin conventions.
Use of Kotlin Features:
Null Safety: Ensure the code makes use of Kotlin's null safety features (e.g., nullable types and the `?.` and `!!` operators).
Data Classes: Ensure data classes are used where appropriate for modeling immutable objects with automatically generated `equals`, `hashCode`, and `toString` methods.
Concurrency & Coroutines:
Coroutines: Ensure Kotlin coroutines are used for asynchronous programming instead of traditional callback mechanisms. Review for proper use of `launch`, `async`, and structured concurrency.
Error Handling:
Sealed Classes: Ensure sealed classes are used for handling specific types of errors or states, making the error handling more type-safe.
Custom Exceptions: If there are custom exceptions, ensure they are appropriately defined and used.
Testing:
Unit Testing: Ensure unit tests exist for business logic and other components, using JUnit with KotlinTest or MockK for mocking.
Integration Testing: Ensure proper integration tests using Spring Boot Test or other relevant tools.
Test Coverage: Ensure sufficient test coverage, and validate the correctness of coroutines with mocking frameworks like MockK.
Performance:
Ensure that Kotlin’s performance optimizations, such as inline functions and tail recursion, are used where appropriate.
iii. Final Approval and Merging
PR Review Feedback: Reviewers should leave comments or suggestions on the PR, addressing any concerns related to code quality, security, performance, or best practices.
Changes Requested: If changes are requested, developers must address them and push updates to the same PR.
Approval: Once the review process is complete and all feedback has been incorporated, reviewers should approve the PR. The PR can then be merged into the target branch (usually `develop` or `main`).
Squash and Merge: For a cleaner git history, PRs should be merged using the Squash and Merge strategy, which condenses all commits in the PR into a single commit.
5. Security & Compliance
In AMRIT, security and compliance are paramount, especially because the platform deals with sensitive health information and Personally Identifiable Information (PII). The team must follow rigorous standards and best practices to ensure that data is handled securely and in compliance with relevant regulations (e.g. DPDP). Below are the key aspects that should be practiced:
a. Secure Engineering Practices
Secure Coding Guidelines
Input Validation & Sanitization: All user inputs should be validated on both the client side and the server side to prevent SQL injection, XSS (Cross-Site Scripting), CSRF (Cross-Site Request Forgery), and other injection attacks.
Use of Prepared Statements: In SQL queries, always use prepared statements to prevent SQL injection.
Sanitize User Input: Use input sanitization libraries to clean user inputs before processing.
Sanitize Output: Always sanitize and escape data before rendering it on the front end to avoid XSS attacks.
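Output sanitization can be as simple as escaping HTML metacharacters before rendering. The sketch below is a minimal, framework-free illustration; in practice a framework's built-in escaping (e.g., Angular's template binding) or a vetted library should be preferred:

```java
// Minimal HTML-escaping helper illustrating output sanitization.
// Production code should prefer framework or library escapers.
public final class HtmlEscaper {
    public static String escape(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```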
Authentication & Authorization:
Use multi-factor authentication (MFA) wherever possible, especially for accessing admin panels or sensitive data.
Follow OAuth 2.0 and JWT (JSON Web Tokens) standards for secure API authentication and authorization.
Implement role-based access control (RBAC) or attribute-based access control (ABAC) to enforce permissions based on the user’s role and privileges.
For sensitive actions, ensure two-person rule or approval workflows.
Password Management:
Hash passwords using secure hashing algorithms like bcrypt or PBKDF2 with strong salts.
Never store passwords or sensitive information in plaintext.
Enforce password complexity rules, such as requiring a mix of characters, and minimum password length.
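The hashing guidance above can be implemented with the JDK alone via PBKDF2 (bcrypt requires a third-party library). A sketch, where the iteration count and key length are illustrative assumptions rather than AMRIT-mandated values:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// PBKDF2 password-hashing sketch. Iteration count and key length
// are illustrative, not AMRIT-mandated values.
public final class Passwords {
    private static final int ITERATIONS = 210_000;
    private static final int KEY_BITS = 256;

    /** Generates a fresh random salt; store it alongside the hash. */
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static String hash(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
            byte[] key = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(key);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The same password with the same salt always produces the same hash, so verification is a recompute-and-compare; different passwords diverge.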
Data Encryption:
In-Transit Encryption: Use TLS (Transport Layer Security) to encrypt all communications over HTTP (e.g., HTTPS for web traffic).
At-Rest Encryption: Encrypt sensitive data stored in databases using strong encryption algorithms (e.g., AES-256).
Encryption Keys: Store encryption keys securely, using services like AWS KMS or Azure Key Vault, and ensure they are rotated periodically.
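At-rest encryption with AES-256 can be sketched with the JDK's AES/GCM support. This is an illustrative sketch only; as noted above, in production the key would come from a KMS, and key rotation is outside this snippet:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// AES-256-GCM sketch for at-rest encryption; in production the key
// would come from a KMS (e.g., AWS KMS or Azure Key Vault), never code.
public final class FieldCrypto {
    private static final SecureRandom RNG = new SecureRandom();
    private static final int IV_BYTES = 12;   // 96-bit GCM nonce
    private static final int TAG_BITS = 128;  // authentication tag length

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] iv = new byte[IV_BYTES];  // unique nonce per message
            RNG.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
            byte[] ct = c.doFinal(plaintext);
            byte[] out = new byte[IV_BYTES + ct.length];  // nonce || ciphertext
            System.arraycopy(iv, 0, out, 0, IV_BYTES);
            System.arraycopy(ct, 0, out, IV_BYTES, ct.length);
            return out;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key,
                    new GCMParameterSpec(TAG_BITS, blob, 0, IV_BYTES));
            return c.doFinal(blob, IV_BYTES, blob.length - IV_BYTES);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

GCM authenticates as well as encrypts, so tampered ciphertext fails to decrypt rather than yielding garbage.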
Security Headers:
Use HTTP Security Headers like Strict-Transport-Security (HSTS), Content-Security-Policy (CSP), and X-Content-Type-Options to mitigate various attacks such as XSS, clickjacking, and man-in-the-middle attacks.
API Security:
Rate Limiting: Implement rate limiting to prevent abuse and DDoS attacks.
API Key Management: Use API keys for authentication with external services. Never hardcode API keys in the source code or expose them in public repositories.
Input Validation for APIs: Apply strict validation rules for all incoming API requests to prevent malicious data from being processed.
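Rate limiting is usually delegated to an API gateway or a library, but the underlying idea can be sketched as a token bucket. The class below is a hypothetical in-process illustration, not AMRIT's actual mechanism:

```java
// Token-bucket rate-limiter sketch: `capacity` requests are allowed
// in a burst, after which requests are rejected until tokens refill
// at `refillPerSecond`. Illustrative only.
public final class TokenBucket {
    private final int capacity;
    private final double refillPerMillisecond;
    private double tokens;
    private long lastRefill;

    public TokenBucket(int capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMillisecond = refillPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefill = System.currentTimeMillis();
    }

    /** Returns true if the request is allowed, false if it should be rejected. */
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        tokens = Math.min(capacity,
                tokens + (now - lastRefill) * refillPerMillisecond);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```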
Code Reviews for Security
All PRs must include a review specifically for security concerns, including:
Checking for hardcoded credentials, secrets, or API keys.
Verifying encryption methods and key management practices.
Reviewing authentication and authorization logic to ensure proper access control.
Ensuring that no sensitive data (e.g., passwords, personal information) is logged or exposed in error messages.
Secure Dependencies Management
Third-party Libraries: Always use trusted, well-maintained third-party libraries. Regularly update libraries to ensure that any known vulnerabilities are patched.
Use tools like Dependabot (for GitHub) or Snyk to automatically check for vulnerabilities in dependencies.
Minimize External Dependencies: Avoid using external dependencies unless absolutely necessary, and review their source code to verify that they follow secure practices.
b. Health Information & PII (Personally Identifiable Information)
Since AMRIT handles health information and Personally Identifiable Information (PII), it’s crucial that all team members follow the appropriate protocols to protect this sensitive data and maintain compliance with laws and regulations.
Data Classification and Segmentation
Data Sensitivity Classification: Classify all data based on its sensitivity (e.g., public, internal, confidential, sensitive). Health information and PII should always be categorized as confidential or sensitive.
Sensitive Data: This includes any data related to health (such as diagnoses, medical history, treatment plans), identity (name, address, phone number, email), and financial information.
Data Segmentation: Use segmentation techniques to ensure that sensitive data is only accessible to authorized roles. Implement the principle of least privilege (PoLP), ensuring that only those who need access to sensitive data can access it.
Data Retention and Minimization
Data Retention Policy: Define a clear data retention policy for health information and PII. Ensure that personal data is only stored for as long as necessary to fulfill the intended purpose and legal obligations.
Implement automatic data purging or archiving mechanisms for expired or obsolete records.
Data Minimization: Only collect and store the minimum amount of PII and health data necessary to fulfill business requirements. Avoid collecting excessive data points unless absolutely necessary.
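An automatic purging mechanism boils down to filtering records against a retention cutoff. The sketch below uses java.time; the record shape and the seven-year window in the usage example are hypothetical, not AMRIT's actual policy:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Retention sketch: keeps only records newer than the retention window.
// The StoredRecord type is an illustrative assumption.
public final class RetentionPolicy {
    public record StoredRecord(String id, Instant createdAt) {}

    public static List<StoredRecord> purgeExpired(
            List<StoredRecord> records, Duration retention, Instant now) {
        Instant cutoff = now.minus(retention);
        return records.stream()
                .filter(r -> r.createdAt().isAfter(cutoff))
                .toList();
    }
}
```

A scheduled job would run this (or its SQL equivalent) periodically and archive or delete whatever falls outside the window.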
Access Control for Sensitive Data
Role-Based Access: Enforce strict access control mechanisms to ensure that only authorized users can access health information and PII. This can include:
Access control based on roles (e.g., health professionals, administrators, etc.).
Time-based or context-based access controls (e.g., limiting access to PII only during certain times or from certain locations).
Audit Logs: Maintain comprehensive audit logs for access to sensitive data. Logs should include:
Who accessed the data
When it was accessed
What actions were performed on the data (view, modify, delete)
Any failed access attempts
These logs should be stored securely and be readily accessible for auditing purposes.
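The audit fields listed above map naturally onto an immutable record. The shape below is hypothetical, not AMRIT's actual log schema:

```java
import java.time.Instant;

// Hypothetical audit-log entry capturing who/when/what/outcome,
// mirroring the fields this SOP requires. Not AMRIT's real schema.
public record AuditEntry(
        String userId,       // who accessed the data
        Instant accessedAt,  // when it was accessed
        String action,       // e.g., VIEW, MODIFY, DELETE
        String resourceId,   // which record was touched
        boolean succeeded) { // false records a failed access attempt

    public String toLogLine() {
        return String.join("|", userId, accessedAt.toString(),
                action, resourceId, succeeded ? "OK" : "DENIED");
    }
}
```

Being a record, each entry is immutable once written, which fits the append-only nature of an audit trail.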
Data Masking and Anonymization
Data Masking: Where appropriate, mask or encrypt sensitive information in databases, especially in development or test environments.
Anonymization/De-identification: Use techniques to anonymize or de-identify sensitive health data when used for analytics or machine learning. This can help to prevent exposure of personally identifiable data while still allowing for useful analysis.
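A simple masking helper for non-production environments might look like the following; the masking rule (keep the last few characters) is an illustrative assumption:

```java
// Masks all but the last `visible` characters of an identifier,
// e.g., for showing phone numbers in test environments.
// The masking rule is illustrative, not an AMRIT standard.
public final class Mask {
    public static String keepLast(String value, int visible) {
        if (value == null || value.length() <= visible) {
            return value;
        }
        int hidden = value.length() - visible;
        return "*".repeat(hidden) + value.substring(hidden);
    }
}
```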
Regulatory Compliance
Local Compliance Regulations: Ensure that AMRIT complies with local data protection regulations and laws in India, such as the Digital Personal Data Protection (DPDP) Act.
Training and Awareness
Regular Training: All team members must receive regular training on data security, privacy regulations, and secure handling of health information and PII.
Security Awareness: Team members should be trained to identify common phishing attacks, social engineering, and other tactics used to compromise sensitive data.
Incident Response Plan: Ensure that the team is familiar with the incident response plan in case of a data breach or security incident. This includes promptly notifying affected individuals and relevant authorities as per regulatory requirements.
c. Regular Security Audits and Penetration Testing
Vulnerability Scanning: Use automated tools to regularly scan the application and infrastructure for vulnerabilities (e.g., OWASP ZAP, Burp Suite).
Penetration Testing: Conduct annual penetration testing of the application and network infrastructure to identify potential vulnerabilities and ensure security measures are effective.
Compliance Audits: Regularly audit the platform’s compliance with security frameworks and privacy regulations.
6. Quality Assurance
In AMRIT, Quality Assurance (QA) plays a crucial role in ensuring the reliability, security, and functionality of the platform. The QA process must be meticulously integrated into the Software Development Life Cycle (SDLC) to maintain high-quality standards for the system and deliver a flawless product to end-users.
a. QA Process Overview
The QA process ensures that the product meets the defined quality standards and is free of defects. This process is integrated throughout the SDLC, from requirement gathering to final deployment, and involves:
Test Planning: Define the scope, strategy, and types of testing to be performed.
Test Execution: Execute various tests such as unit tests, integration tests, and user acceptance tests.
Defect Management: Track and manage defects found during testing.
Test Reporting: Provide regular reports on testing progress, defects, and test coverage.
Test Closure: Final review and closure of testing activities.
b. Types of Testing to be Performed
Unit Testing
Purpose: Verify individual components or units of the application, such as functions or methods, to ensure they work as expected.
Responsibility: Developers are responsible for writing unit tests for their code, typically using frameworks like JUnit (for Java), Mocha or Jasmine (for Angular).
Frequency: Performed continuously, with every feature or bug fix being accompanied by unit tests.
Integration Testing
Purpose: Ensure that different modules or services within the application work together correctly.
Responsibility: QA engineers should create integration tests to verify that components interact properly, focusing on end-to-end data flow and system behavior.
Frequency: Performed after unit testing and before the full system testing phase.
System Testing
Purpose: Evaluate the entire system's functionality, performance, and behavior as a whole.
Responsibility: QA engineers will execute system-level tests based on the requirements to ensure the application performs as intended.
Frequency: Conducted after integration testing and before the user acceptance testing phase.
Regression Testing
Purpose: Ensure that new changes do not negatively impact the existing functionality of the system.
Responsibility: QA engineers will run a suite of tests to confirm that new code doesn’t break existing features.
Frequency: Performed after each sprint, typically before every release.
Performance Testing
Purpose: Assess the performance and scalability of the application, ensuring it handles expected user loads and performs efficiently.
Responsibility: QA engineers use tools like JMeter, Gatling, or LoadRunner to simulate high-traffic conditions and identify potential bottlenecks.
Frequency: Done during system testing and pre-production phases.
Security Testing
Purpose: Identify security vulnerabilities in the application, ensuring the system is secure from threats and risks.
Responsibility: Security-focused QA engineers will run vulnerability scans and penetration tests to verify the system's security posture.
Frequency: Performed at every major release, with a focus on critical features handling sensitive data.
User Acceptance Testing (UAT)
Purpose: Validate that the application meets the end user's needs and business requirements.
Responsibility: QA, product owners, L1 team and key stakeholders collaborate to ensure the application delivers as expected in real-world scenarios.
Frequency: UAT is typically performed in the final stage before the release is made to production.
c. Test Strategy and Plan
Each phase of testing should be planned and executed according to a Test Strategy and Test Plan to ensure that testing aligns with the project goals.
Test Strategy:
Scope: Outline which features will be tested and the types of testing to be performed.
Test Levels: Define whether testing will be done at the unit, integration, system, and acceptance levels.
Resources: Identify the team members responsible for different types of testing.
Tools: Define tools and technologies to be used in testing (e.g., JUnit for unit tests, Selenium for UI tests, JMeter for performance testing).
Test Plan:
Test Objectives: Define clear goals for each phase of testing.
Test Cases: Each test should have a predefined test case that specifies the test’s objectives, input data, expected result, and the steps to execute.
Test Execution: Set timelines for when tests will be executed, along with responsibility assignments for different team members.
Acceptance Criteria: Define the criteria that must be met for the testing phase to be considered complete.
d. Test Automation
Automating repetitive and critical tests helps increase test coverage, reduce testing time, and ensure consistency. Key points to consider for automation:
Identify Critical Tests for Automation: Focus on automating high-impact and frequently executed tests, such as unit tests, smoke tests, and regression tests.
Automation Framework: Define and implement an automation framework (e.g., Selenium for UI tests, TestNG for integration tests) to streamline test creation, execution, and reporting.
Continuous Integration (CI): Integrate automated tests into the CI/CD pipeline (e.g., using Jenkins, GitHub Actions, or GitLab CI) to run tests automatically on every code push, ensuring that issues are caught early.
Maintenance: Ensure that the automated test suite is regularly updated and maintained to adapt to changes in the application.
e. Defect Management
Defect management is crucial to identify, track, and resolve defects efficiently.
Defect Lifecycle:
Defect Logging: Every defect found during testing should be logged in the issue tracking system (e.g., JIRA) with detailed information (e.g., steps to reproduce, expected vs. actual behavior, severity).
Defect Prioritization: Prioritize defects based on severity and impact. Critical issues must be fixed before release.
Defect Tracking: Track the status of defects (e.g., Open, In Progress, Fixed, Reopened, Closed) and ensure timely resolution.
Defect Review: Defects should be reviewed during sprint retrospectives to identify recurring patterns and address root causes.
Severity and Priority Levels:
Critical: Application-breaking issues (e.g., crashes, data loss).
Major: High-impact issues affecting core functionality but with workarounds.
Minor: Issues with low impact, often cosmetic or related to non-critical features.
Trivial: Cosmetic issues with no impact on functionality.
f. Continuous Testing and Quality Metrics
Test Coverage:
Ensure adequate test coverage for all key features, especially those related to patient data, PII, and health records.
Use code coverage tools (e.g., JaCoCo for Java, Istanbul for JavaScript) to measure test coverage and ensure it meets defined thresholds (e.g., 80% or higher).
Test Metrics:
Defect Density: Measure defects per unit of code or functionality.
Test Execution Rate: Measure how many tests were executed successfully vs. total tests created.
Escaped Defects: Track defects found after the release to measure the effectiveness of the testing process.
g. Reporting and Communication
Test Reports: Generate comprehensive reports for each testing phase, including pass/fail rates, defect trends, and test coverage.
Status Meetings: Conduct regular QA status meetings to communicate testing progress, risks, and issues. Share test reports with stakeholders regularly.
Sprint Retrospectives: QA insights should be shared during sprint retrospectives to improve the process and address any issues faced during testing.
h. Post-Release Testing and Monitoring
After the release to production, QA should continue to monitor the application for any issues that arise in real-world use.
Smoke Testing in Production: Perform light testing on critical paths after deployment to ensure basic functionality.
User Feedback: Gather user feedback and bug reports to identify issues that may have been missed during testing.
Continuous Monitoring: Set up monitoring systems (e.g., Prometheus, Grafana, ELK) to continuously track application performance, security, and uptime.
6. Release Management
The Release Management process ensures that the deployment of new features, bug fixes, and updates to the AMRIT platform is smooth, controlled, and efficient. This section covers key aspects such as updating JIRA releases, informing support teams, build verification testing, semantic versioning, and ownership of the release process.
a. JIRA Releases Update
JIRA is the central tool for tracking the progress of features, bugs, and enhancements. Properly updating the release versions in JIRA is critical for maintaining visibility, traceability, and clear communication with stakeholders.
Updating JIRA Releases:
Release Version Creation: At the beginning of each sprint or release cycle, a new version is created in JIRA. This version corresponds to the upcoming release and will serve as the target for the issues and stories that will be worked on.
Linking Issues to the Release: Every issue (user story, bug, enhancement) that will be part of the release must be linked to the respective release version in JIRA. This ensures that all work completed within that sprint or release cycle is accounted for.
Release Version Field: Ensure that the release version is selected in the "Fix Version" field for each issue.
Progress Monitoring: As development work progresses, update the JIRA issues to reflect their status (e.g., in progress, code review, testing, done). The progress is automatically tracked, and the team can have real-time insights into how much work is done and what remains.
Release Notes: Once the release is ready, generate the release notes directly from JIRA. These notes should summarize the key changes made in the release, including new features, bug fixes, and any significant improvements.
Closing the Release: After deployment to production, close the release version in JIRA and archive it to prevent further changes. Ensure that all issues linked to the release are resolved or marked as "done" in JIRA.
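The release-notes step can also be scripted from JIRA issue data. A hedged Python sketch: the issue dicts below are a simplified stand-in for what JIRA's REST search endpoint returns (e.g., `GET /rest/api/2/search` with a `fixVersion` JQL filter); the field names, issue types, and the AMM issue keys are illustrative:

```python
# Sketch: draft release notes from JIRA issues linked to a Fix Version.
# Fetching from JIRA is omitted; the dicts mimic a simplified search result.

def draft_release_notes(version, issues):
    """Group resolved issues by type into a simple Markdown release note."""
    sections = {"Story": "New Features", "Bug": "Bug Fixes", "Improvement": "Improvements"}
    lines = [f"# Release {version}"]
    for issue_type, heading in sections.items():
        matching = [i for i in issues if i["type"] == issue_type]
        if matching:
            lines.append(f"## {heading}")
            lines += [f"- {i['key']}: {i['summary']}" for i in matching]
    return "\n".join(lines)

notes = draft_release_notes("1.4.0", [
    {"key": "AMM-210", "type": "Story", "summary": "ABHA card lookup"},
    {"key": "AMM-215", "type": "Bug", "summary": "Fix offline sync crash"},
])
print(notes)
```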
b. Informing L1 Support
The L1 Support team must be informed about upcoming releases to prepare for potential issues, monitor the deployed features, and respond to customer queries effectively.
Steps to Inform L1 Support:
Pre-Release Notification: A few days before the release, send a detailed release note to the L1 Support team. This should include:
New features being released.
Any bug fixes or enhancements.
Known issues or regressions from previous releases.
Specific areas to monitor after release (e.g., health data-related features).
Any configuration or environment changes.
Release Day Communication: On the day of the release, inform the L1 Support team when the deployment starts and when it completes. This allows them to monitor the system and be ready for user-reported issues.
Post-Release Monitoring: After the release, the L1 Support team should monitor the system for anomalies or bugs reported by users. They should be able to quickly verify whether an issue is related to the new release.
Support Documentation: Ensure that L1 Support has access to updated support documentation, including troubleshooting steps and FAQs related to the new release.

c. Build Verification Testing (BVT)
Build Verification Testing (BVT) is a crucial step in ensuring the integrity and stability of the application after each deployment.
BVT Process:
Definition: BVT is a set of preliminary tests executed after each build to verify that the major functionality of the system works and that the build is stable enough to proceed with further testing.
Responsibility: The QA team is typically responsible for running the BVTs.
Scope of BVT:
Ensure that the application is deployed successfully and is accessible.
Verify that the core features, like login, user registration, and key workflows, are functioning.
Check for any obvious errors in the application, such as missing assets, broken links, or crashes.
Automation: If automated tests are in place, BVT should run these automated tests first, ensuring basic functionality is verified quickly.
Manual Checks: If automated testing is unavailable, QA should perform critical path testing to manually verify the build.
Sign-Off: The BVT must be completed successfully before any further testing (e.g., regression, UAT) or deployment to production is allowed.
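The BVT flow above can be expressed as a small check runner. A Python sketch in which each check is a named callable; the check names and bodies are placeholders for real probes (an HTTP health check, a test-user login, a critical workflow), and any failure blocks sign-off:

```python
# Minimal BVT runner sketch: each check is a named callable returning True/False.

def run_bvt(checks):
    """Run all checks; return (passed, failures). Any failure blocks sign-off."""
    failures = []
    for name, check in checks.items():
        try:
            ok = bool(check())
        except Exception:
            ok = False  # a crashing check counts as a failure, not an abort
        if not ok:
            failures.append(name)
    return (len(failures) == 0, failures)

checks = {
    "app_reachable": lambda: True,   # e.g., HTTP 200 on the landing page
    "login_works": lambda: True,     # e.g., test-user login returns a session
    "registration": lambda: False,   # a failing critical-path check
}
passed, failures = run_bvt(checks)
print(passed, failures)  # False ['registration']
```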
d. Semantic Versioning
Semantic Versioning (SemVer) is a versioning scheme that aims to convey meaning about the underlying changes in a release. It helps manage dependencies and compatibility between systems, ensuring that the release process is transparent and predictable.
Versioning Format:
Semantic Versioning uses the format `MAJOR.MINOR.PATCH`:
MAJOR: Incremented when backward-incompatible changes are made (e.g., breaking changes in the API).
MINOR: Incremented when backward-compatible new features or enhancements are added.
PATCH: Incremented when backward-compatible bug fixes or minor improvements are made.
Versioning Strategy:
Pre-Release Versions: For pre-production or staging environments, use labels like `1.0.0-alpha` or `1.0.0-beta` to indicate that the release is not final.
Release Candidates: Before the final release, use `1.0.0-rc1`, `1.0.0-rc2`, etc., to indicate that the release is a candidate for production but may still have unresolved issues.
Stable Releases: Once all critical issues are resolved and the release is ready for production, increment the version to a stable number (e.g., `1.0.0`, `2.3.1`).
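The precedence rules above (alpha before rc before the stable release) can be sketched in Python. This is a simplified ordering — full SemVer also splits pre-release identifiers on dots and compares numeric identifiers numerically — but it handles the version shapes shown here:

```python
import re

# Simplified SemVer parsing and precedence: a pre-release version
# (e.g., 1.0.0-rc1) sorts before the corresponding final release (1.0.0).
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?$")

def parse(version):
    """Return a sort key implementing (simplified) SemVer precedence."""
    m = SEMVER.match(version)
    if not m:
        raise ValueError(f"not a semantic version: {version}")
    major, minor, patch, pre = m.groups()
    # A present pre-release tag lowers precedence: (0, tag) sorts before (1,).
    pre_key = (0, pre) if pre else (1,)
    return (int(major), int(minor), int(patch), pre_key)

versions = ["1.0.0", "2.3.1", "1.0.0-alpha", "1.0.0-rc1"]
print(sorted(versions, key=parse))  # ['1.0.0-alpha', '1.0.0-rc1', '1.0.0', '2.3.1']
```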
e. Ownership of the Release Process
Clear ownership of the release process ensures accountability and a smooth transition from development to production.
Roles and Responsibilities:
Product Owner: Owns the product roadmap and ensures that the release aligns with business objectives. They prioritize the features and fixes that go into the release.
L2 Manager: Responsible for overseeing the entire release process. They plan, coordinate, and communicate the release timeline, updates, and the necessary steps across teams.
Development Team: Ensures that the code is written, tested, and ready for release. Developers will create the release branch and ensure that the feature is production-ready.
QA Team: Owns the testing phase of the release. They run BVTs, regression tests, and UAT to ensure that the release is stable and meets quality standards.
DevOps Engineer: Handles the deployment process, including setting up environments, ensuring the infrastructure is in place, and performing the actual deployment to production. They also manage rollback procedures in case of deployment failure.
L2 Support Team: Takes ownership of post-release monitoring and support. They are responsible for troubleshooting any production issues and supporting end-users post-release.
f. Post-Release Monitoring and Support
Once the release is live in production, continuous monitoring is necessary to ensure the system performs as expected.
Monitoring: Set up automated systems to track application performance, such as uptime, response times, and error rates. Tools like Prometheus, Grafana, or New Relic can help with this.
User Feedback: Encourage users to report any issues or anomalies encountered post-release.
Hotfixes: If critical issues arise after the release, the development and QA teams should be ready to implement hotfixes. These should be planned and released quickly, following the established release process.
Post-Release Review: After the release, conduct a retrospective meeting to discuss what went well, what could have been improved, and how to streamline the release process for future versions.
7. Third-Party Components
AMRIT leverages various third-party libraries, APIs, tools, and SDKs across its frontend, backend, and infrastructure layers. Proper governance around their use ensures security, license compliance, performance, and long-term maintainability.
a. Evaluation Before Adoption
Before incorporating any third-party component into the AMRIT platform:
Functionality Fit: Ensure it addresses a real functional or technical need and doesn’t introduce unnecessary dependencies.
Security Review:
Check for known vulnerabilities using tools like `npm audit`, `OWASP Dependency Check`, `Snyk`, or `Trivy`.
Prefer actively maintained projects with good community support.
License Review:
Only use components with permissive open-source licenses (e.g., MIT, Apache 2.0, BSD).
Avoid restrictive or viral licenses like GPL unless explicitly approved.
Performance Impact: Ensure the library doesn’t bloat the application bundle (frontend) or significantly impact backend performance.
Community Health:
Check for last commit date, number of contributors, open issues, and responsiveness on GitHub.
Review documentation quality.
b. Approval Process
All third-party components must be reviewed and approved by the Tech Lead / Architect before being added to the codebase.
For components that handle data processing, especially health data or PII, an additional compliance review must be conducted.
c. Usage Guidelines
Pin versions in `package.json`, `pom.xml`, or `build.gradle` to avoid unexpected changes due to automatic upgrades.
Avoid over-reliance on a single component for core business logic.
Wrap critical third-party functions (e.g., encryption, health algorithms) in an internal abstraction layer to ease future migration.
Document usage and purpose in the repository’s `README.md` or internal wiki.
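The version-pinning guideline can be enforced mechanically. A Python sketch that flags range-prefixed entries in a `package.json` (the sample manifest below is illustrative, not AMRIT's actual dependency list):

```python
import json

# Sketch: flag dependencies in a package.json that are not pinned to an exact
# version (range prefixes like ^ and ~ allow silent upgrades on install).

def unpinned_dependencies(package_json_text):
    pkg = json.loads(package_json_text)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    return sorted(name for name, ver in deps.items()
                  if ver[:1] in ("^", "~", ">", "<", "*") or ver == "latest")

sample = '{"dependencies": {"lodash": "4.17.21", "express": "^4.18.0"}}'
print(unpinned_dependencies(sample))  # ['express']
```

A check like this can run in CI and fail the build when it returns a non-empty list.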
d. Maintenance and Updates
Schedule periodic dependency audits (e.g., once per sprint/month).
Use tools like `Dependabot`, `Renovate`, or `npm-check-updates` to monitor outdated or vulnerable dependencies.
Avoid upgrading major versions unless tested thoroughly in staging.
e. Monitoring & Risk Management
Continuously monitor for CVEs (Common Vulnerabilities and Exposures) associated with third-party libraries.
Subscribe to GitHub/watchlists or mailing lists of critical dependencies.
In case of critical vulnerability:
Assess the impact.
Patch or upgrade immediately.
Notify the Technical Architect if applicable.
f. Third-Party APIs
For any external APIs (e.g., SMS gateways, identity providers, health registries):
Use well-defined, versioned APIs with SLA and documentation.
Handle API failures gracefully (timeouts, retries, fallbacks).
Do not hardcode API tokens or secrets—store them securely in environment variables or secret managers.
Ensure proper rate limiting, data validation, and logging are in place.
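The failure-handling and secrets guidance above can be sketched as follows. `SMS_GATEWAY_TOKEN`, the retry parameters, and the fallback behavior are illustrative, not an actual AMRIT integration:

```python
import os
import time

# Sketch of graceful external-API handling: secrets from the environment,
# bounded retries with exponential backoff, and a fallback on repeated failure.

def call_with_retries(fn, attempts=3, base_delay=0.5, fallback=None):
    """Call fn, retrying with exponential backoff; return fallback if all fail."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                return fallback  # degrade gracefully instead of crashing
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

def get_token():
    """Read the gateway token from the environment; never hardcode secrets."""
    token = os.environ.get("SMS_GATEWAY_TOKEN")
    if not token:
        raise RuntimeError("SMS_GATEWAY_TOKEN is not set")
    return token

result = call_with_retries(lambda: "sent", attempts=3)
print(result)  # sent
```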
g. Logging and Telemetry
Do not log raw third-party responses if they contain sensitive data.
Ensure error logs are scrubbed of tokens, secrets, or patient-identifiable data.
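A minimal scrubbing pass might look like this in Python. The patterns are illustrative starting points — extend them for whatever identifiers and token formats your deployment actually emits (the ABHA-style pattern below assumes the common 2-4-4-4 digit grouping):

```python
import re

# Sketch: scrub obvious secrets and patient identifiers before logging.
PATTERNS = [
    (re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]+"), "Bearer [REDACTED]"),
    (re.compile(r"(?i)(token|secret|password)=\S+"), r"\1=[REDACTED]"),
    (re.compile(r"\b\d{2}-\d{4}-\d{4}-\d{4}\b"), "[ABHA REDACTED]"),  # assumed 14-digit grouping
]

def scrub(message):
    """Apply every redaction pattern to a log message before it is written."""
    for pattern, repl in PATTERNS:
        message = pattern.sub(repl, message)
    return message

print(scrub("auth failed: token=abc123 for Bearer eyJhbGci"))
# auth failed: token=[REDACTED] for Bearer [REDACTED]
```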
h. Offboarding/Deprecation
Before removing a third-party component:
Audit where it's used.
Replace it with internal or alternative solutions.
Clean up residual references (docs, configs, environment files).
8. Incident Response
This section outlines the workflow for managing incidents raised from the field, operations, and support teams, ensuring timely resolution and accountability. Given AMRIT's deployment in health systems, responsiveness to field issues is critical for maintaining service continuity and trust.
a. Sources of Incident Tickets
Incidents can be raised from:
Field Users (e.g., ASHAs, lab techs, facility staff)
Operations Team
L1 Support (external helpdesk or internal first responders)
All issues are initially routed through the JIRA Service Desk (Support Portal).
b. Initial Triage (L1 & Ops)
L1 and Operations teams perform first-level triage to:
Validate the issue (reproduce it if possible)
Check if it is a known error or user training issue
Tag the ticket with proper severity and category (Bug, Enhancement, Infra, etc.)
Attach relevant logs, screenshots, or replication steps
If resolvable, L1 handles the issue directly (e.g., config corrections, user guidance).
c. Escalation to L2 Support
If L1 is unable to resolve the ticket:
It is escalated to the L2 Support team.
L2 Support performs a technical investigation:
Analyze logs
Check database/API health
Identify if it's a backend/frontend bug, performance issue, or infra problem.
If L2 confirms a code-level or system-level issue, the ticket is moved to the AMM JIRA board for engineering attention.
d. Ticket Handoff to Engineering
L2 creates or moves the ticket to the AMM JIRA Board, linking it to the original service desk ticket.
The ticket must include:
Summary and detailed description
Replication steps and environment
Error logs and screenshots (if available)
Priority and component labels
Suggested RCA (if found)
e. Engineering Resolution
The AMRIT Engineering Team triages the incoming issue during daily standup or within a defined SLA window.
Based on severity:
P1/P2 issues are hotfixed or prioritized in the current sprint
P3/P4 issues are taken into the product backlog for future sprints
Once resolved:
QA verifies the fix in staging.
Build is deployed after verification and documented under Release Management.
L2 closes the engineering ticket and updates the original Service Desk issue with resolution notes.
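The severity-based routing above reduces to a small decision function — a sketch, assuming the P1–P4 labels used in this SOP:

```python
# Sketch of the routing rule: P1/P2 are hotfixed or pulled into the current
# sprint; P3/P4 go to the product backlog for a future sprint.

def route_incident(priority):
    """Return where an escalated engineering ticket should land."""
    if priority in ("P1", "P2"):
        return "current-sprint"   # hotfix or prioritized immediately
    if priority in ("P3", "P4"):
        return "product-backlog"  # scheduled into a future sprint
    raise ValueError(f"unknown priority: {priority}")

print(route_incident("P1"))  # current-sprint
print(route_incident("P4"))  # product-backlog
```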
f. Communication Protocols
Field Ops/L1 must be informed:
When an issue is escalated
When it’s being worked on
Once it is resolved and deployed
Communication may happen via comments on the JIRA service desk ticket.
g. Documentation and Learnings
Post-mortems for P1 incidents must be documented (blameless format).
Add any known errors to the internal Knowledge Base.
Tag issues that frequently recur for deeper analysis or product improvements.
h. Responsibility Matrix
Role | Responsibility |
---|---|
L1 Support/Ops | First triage, check known issues, basic support |
L2 Support | Technical investigation, escalation to AMM |
Engineering | Root cause analysis, fix, QA, deployment |
QA | Build verification, fix validation |
Release Manager | Confirm fix deployment and update JIRA |
Scrum Master/PO | Sprint-level prioritization and communication |
i. RCA and CAPA Requirement
All bugs or incidents escalated from support (L2 or Service Desk) must include a Root Cause Analysis (RCA) before the ticket can be closed. This requirement applies regardless of severity (P1–P4), though P1 and P2 issues demand deeper analysis and more thorough documentation.
The RCA should be documented as per this template, and a link should be added to the dedicated field within the JIRA issue template for consistency and traceability.
Recommended RCA Format
To ensure clarity and uniformity, the following structure should be used when documenting RCAs:
Root Cause: What precisely caused the issue? E.g., unhandled null value, configuration drift, stale data, broken API contract.
Trigger: What condition or event surfaced the issue? E.g., new data, infrastructure change, user action.
Impact: What was the observable effect on users or systems? Include the number of users affected and the duration if available.
Resolution / Fix: What action was taken to resolve the issue?
Preventive Measures: What steps will be implemented to prevent recurrence? E.g., additional test coverage, improved monitoring, refactoring, documentation updates.
Roles and Responsibilities
RCA Ownership: The RCA is owned by the developer who resolved the issue, with support from:
L2 Support – for replication details and logs
QA – for verification and assessing regression impact
Tech Lead / Architect – for complex or systemic issues
RCA Review: The Tech Lead or QA Manager must review the RCA for:
Clarity and completeness
Depth of analysis
Inclusion of meaningful preventive actions
If the RCA is insufficient, lacks preventive measures, or appears superficial, it must be revised before the ticket is closed.
Continuous Improvement
Recurring RCA patterns and systemic issues should be flagged and discussed in:
Sprint Retrospectives
Monthly Quality Reviews
Engineering Guild or Knowledge Sharing Sessions
This helps drive organizational learning and long-term improvements in product quality and system reliability.
RCA Best Practices
Do:
Dig beyond surface-level symptoms
Collaborate with QA and Ops for context
Recommend systemic improvements when applicable
Don’t:
Assign blame to individuals or teams
Write vague root causes like “code issue” or “logic bug”
Skip documenting preventive actions