Effective Summative Evaluations in Pharmaceutical Industry

Welcome to our exploration of summative evaluations for the pharmaceutical industry, a pivotal phase in the pharmaceutical product validation process. Here, I'll share insights on preparing effectively for these evaluations, focusing on their critical role in assessing the safety and efficacy of pharmaceutical products. We aim to guide manufacturers through this crucial process, emphasizing key considerations and steps to achieve a successful outcome. However, before delving into the specifics, let's gain a better understanding of what summative evaluations entail.

Understanding Formative vs. Summative Evaluations

In the pharmaceutical industry, two main types of usability testing exist: formative and summative.

  • Formative Usability Testing: This phase involves evaluating a product's design throughout its development cycle. It's about finding areas for improvement and ensuring the design moves in the right direction. Formative testing typically involves 5 to 8 participants and, while not a regulatory requirement, is a recommended practice, especially for refining off-the-shelf devices.

  • Summative Usability Testing: This testing occurs once the design is finalized. Its goal is to validate that the design meets user requirements and supports safe, effective interactions. Summative testing is essential for Class II and III medical devices and involves a formal approach with a larger participant sample to account for diverse user characteristics.

Understanding these differences is vital for manufacturers to adhere to regulatory requirements and best practices, ensuring products are both safe and user-friendly.

Defining Test Setup

The test setup is crucial in summative evaluations, encompassing study design, participant selection, materials, methods, and data analysis. This setup ensures the experiment's validity and reproducibility, while demonstrating the pharmaceutical company’s commitment to rigorous device validation.

Setting Clear Evaluation Objectives

Establishing clear objectives is the first step in any Human Factors evaluation, especially in summative evaluations, where the focus is on confirming the safety and effectiveness of a medical device for its intended users. This involves verifying that the device meets user needs through the predefined tasks in the Task Analysis and tracing those tasks to the Use-Related Risk Analysis (URRA).
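To make this traceability concrete, here is a minimal sketch (in Python, purely illustrative) of how critical tasks from the Task Analysis might be linked to URRA entries and acceptance criteria. The task IDs, URRA IDs, and criteria are hypothetical examples, not taken from any specific product.

```python
# Illustrative sketch: tracing evaluation objectives to the URRA.
# All IDs and criteria below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class CriticalTask:
    task_id: str               # from the Task Analysis
    description: str
    urra_id: str               # related entry in the Use-Related Risk Analysis
    acceptance_criterion: str  # what "success" means for this task

objectives = [
    CriticalTask("T-03", "Remove needle cap", "URRA-12",
                 "No use errors that could lead to needle-stick injury"),
    CriticalTask("T-07", "Inject full dose and hold for 10 seconds", "URRA-21",
                 "All participants deliver the complete dose"),
]

for t in objectives:
    print(f"{t.task_id} -> {t.urra_id}: {t.acceptance_criterion}")
```

Keeping this mapping in a structured form makes it easy to confirm, before the study starts, that every risk-related task has an objective and an acceptance criterion.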

Choosing the Right Evaluation Approach

The study design is central to the summative evaluation. Common approaches include:

  • Simulated-use Testing: This entails creating realistic usage conditions to assess device performance, especially for self-injection drug delivery devices, where we simulate normal household environments.

  • Clinical Evaluations: Researchers conduct these evaluations in clinical or real-world settings, with a primary focus on assessing the device's safety, efficacy, and medication delivery accuracy.

To better understand these differences, there is an FDA guidance document that includes a clear explanation of the two techniques and how they differ.

Ensuring Representative Participant Selection

Selecting participants who represent the intended user population is critical. This involves recruiting actual patients, caregivers, and healthcare professionals, while avoiding surrogates, to ensure authentic interactions with the device. Additionally, pay special attention to including vulnerable populations, often described as the patients in a 'higher critical condition' among the intended users.

Moreover, the most frequently defined user groups for pharmaceutical products typically include patients, caregivers, and healthcare professionals. For instance, patient user profiles often represent adolescents, adults, and elderly individuals with and without previous experience using drug delivery devices.
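As an illustration, a recruitment plan can be expressed as simple quotas per user group and checked against the recruited sample. The group names and strata below are hypothetical, and the 15-per-group figure echoes the sampling guidance discussed later in this article.

```python
# Hypothetical recruitment quotas per user group; in practice, patient strata
# may also be split by prior experience with drug delivery devices.
quotas = {
    "patients_adolescent": 15,
    "patients_adult": 15,
    "patients_elderly": 15,
    "caregivers": 15,
    "healthcare_professionals": 15,
}

# Example recruited counts (hypothetical) checked against the quotas.
recruited = {
    "patients_adolescent": 15,
    "patients_adult": 16,
    "patients_elderly": 14,
    "caregivers": 15,
    "healthcare_professionals": 15,
}

for group, target in quotas.items():
    actual = recruited.get(group, 0)
    status = "OK" if actual >= target else f"SHORT by {target - actual}"
    print(f"{group}: {actual}/{target} {status}")
```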

Stimuli for the Evaluation

For summative evaluations, the stimuli, including the medical device and supporting materials, must accurately represent the commercial product. This ensures that the interface tested reflects the actual product, in compliance with regulatory standards. The FDA requires conducting the summative evaluation with the "final finished combination product," a term defined in its 2023 guidance.

Documenting the Methodology for Summative Evaluations in Pharma

As a professional deeply involved in pharmaceutical product development, I can't stress enough the importance of meticulous documentation in summative evaluations. This detailed record-keeping is not just a procedural formality; it's the backbone of ensuring consistent, reliable, and replicable evaluations. In this section, I'll share how to effectively document the methodology for summative evaluations, focusing on drug delivery devices.

Detailing the Experiment's Blueprint

Documenting the summative evaluation methodology involves creating a comprehensive blueprint of the entire process (a minimal sketch of one blueprint entry follows the list below). This includes:

  1. Sequence and Order of Tasks: Clearly outline the sequence of evaluated tasks. This helps in maintaining the focus of the study and ensures that all necessary aspects are covered.

  2. Moderator's Script: The script guides the moderator on how to conduct the evaluation. It should be detailed enough to maintain consistency across different sessions.

  3. Participant Instructions: Detailed scenarios and instructions provided to participants are crucial. They should be clear, unambiguous, and mimic real-life situations as closely as possible.

  4. Data Collection Methods: Enumerate and explain the methods used for data collection, like direct observation, video recording, and note-taking.

  5. Anticipated Use Errors: Identify potential use errors by task and how they will be recorded. This anticipates challenges users might face and prepares the team to address them effectively.
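To make the blueprint above concrete, here is a minimal, hypothetical sketch of a single blueprint entry covering task order, participant instructions, the moderator's guidance, anticipated use errors, and data collection methods. The field names and content are illustrative only, not a required format.

```python
# Hypothetical sketch of one entry in the evaluation blueprint.
protocol_task = {
    "order": 3,
    "task": "Prepare the autoinjector for injection",
    "scenario": "You are at home and it is time for your weekly dose. "
                "Prepare the device as you normally would.",
    "moderator_script": "Read the scenario verbatim; do not coach the participant.",
    "anticipated_use_errors": [
        "Skips inspection of the medication window",
        "Removes the cap before the injection site is prepared",
    ],
    "data_collection": ["direct observation", "video recording", "note-taking"],
}

print(f"Task {protocol_task['order']}: {protocol_task['task']}")
print("Watch for:", "; ".join(protocol_task["anticipated_use_errors"]))
```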

Adhering to Industry Standards

When undertaking summative evaluations, especially for drug delivery devices, it's essential to align with recognized standards like IEC 62366 and FDA guidelines. These standards offer a structured approach for conducting evaluations, including:

  • Participant Sampling: Typically, 15 participants from each user group are included, ensuring a diverse and comprehensive pool of user feedback.

  • Task Definition: Clearly defined tasks with potential errors allow moderators to accurately assess device interaction.

  • Naturalistic Testing Approach: Conduct tests in a way that mimics real-world conditions, avoiding any undue influence on participant behavior.

Comprehensive Data Collection

Diverse data collection methods enrich the evaluation process (a minimal logging sketch follows this list). These include:

  • Recording Use Errors: Keeping a precise count and description of any errors encountered.

  • Observation and Recording: Using video recording and note-taking for detailed observation.

  • Debrief Interviews: Conducting interviews post-evaluation to understand participant experiences and perceptions.
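Bringing these methods together, a simple observation log per participant and task can capture the outcome, the observer's notes, and the debrief comment in one place. The sketch below is a hypothetical example of such a log, not a prescribed format.

```python
# Hypothetical per-observation log combining counts, notes, and debrief input.
from dataclasses import dataclass

@dataclass
class Observation:
    participant_id: str
    task_id: str
    outcome: str              # e.g. "success", "use error", "difficulty", "close call"
    notes: str = ""
    debrief_comment: str = ""

session_log: list[Observation] = [
    Observation("P-07", "T-03", "use error",
                notes="Attempted to inject through clothing",
                debrief_comment="Said the IFU picture was ambiguous"),
    Observation("P-07", "T-07", "success"),
]

use_errors = sum(1 for o in session_log if o.outcome == "use error")
print(f"Use errors recorded in this session: {use_errors}")
```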

Root Cause Investigation and Analysis

A key aspect of the methodology is the investigation and analysis of root causes for any use error, difficulty, or close call observed during testing. I always recommend a consistent root cause approach so that any differences identified stem from differences between users rather than from variations in the experiment. To learn about root cause techniques, I always recommend that my teammates read the book "Medical Device Use Error: Root Cause Analysis" by Wiklund.
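One way to keep the root cause approach consistent is to apply a single, fixed taxonomy to every observed event and then tally events per category. The sketch below illustrates this idea; the categories and events are hypothetical examples.

```python
# Illustrative tally of observed events by root cause category,
# assuming one fixed taxonomy is applied across all sessions.
from collections import Counter

events = [
    {"task": "T-03", "type": "use error",  "root_cause": "IFU graphic misread"},
    {"task": "T-03", "type": "close call", "root_cause": "IFU graphic misread"},
    {"task": "T-07", "type": "difficulty", "root_cause": "Device feedback not noticed"},
]

by_root_cause = Counter(e["root_cause"] for e in events)
for cause, count in by_root_cause.most_common():
    print(f"{cause}: {count} event(s)")
```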

In conclusion, well-documented methodology is a cornerstone of successful summative evaluations in the pharmaceutical industry. By adhering to these guidelines, we ensure that our drug delivery devices are not only effective and safe but also meet the highest standards of user satisfaction and regulatory compliance.

Practical Tips for Planning a Summative Evaluation in the Pharmaceutical Industry

From my personal experience, I highly recommend using a physical board to trace each User Requirement Specification (URS) or task to the corresponding section of the Risk Analysis and to the potential use errors identified in the Use-Related Risk Analysis (URRA). This visual representation helps in understanding the connections between requirements, risks, and potential errors.
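If the physical board is later transcribed for reporting, the same structure can be kept as simple rows of data. The sketch below is a hypothetical example of one such row, linking a URS/task to the related URRA entry, the potential use error, the current mitigation, and the scenario that exercises it.

```python
# Hypothetical row of the traceability "board"; all IDs and text are examples.
board = [
    {
        "urs_or_task": "URS-014 / T-03 Remove needle cap",
        "urra_entry": "URRA-12 Needle-stick injury",
        "potential_use_error": "Cap removed by pulling toward the hand",
        "mitigation": "IFU warning + cap geometry",
        "test_scenario": "Simulated-use task 3; knowledge-based assessment question 5",
    },
]

for row in board:
    print(" | ".join(row.values()))
```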

I suggest collaborating with a colleague to discuss how to present the results if instances of each error are identified. Engage in a thoughtful conversation about how to rationalize those errors, explore whether there are any current mitigations implemented that could reduce the occurrence of those events, and consider the need for additional scenarios (e.g., Knowledge Based Assessment) to test the effectiveness of those mitigations.

It is important to note that the example provided above is not an exhaustive list of all the potential tasks to be tested in a summative evaluation. Instead, it serves as an illustrative example of how to structure the board and establish connections between tasks, potential errors, and how to test them. Adaptation and customization are necessary to cater to the specific needs of the drug delivery device under evaluation.

Ultimately, remember that the aim is to construct a compelling narrative based on empirical data that showcases the safety and effectiveness of the entire interface, including the device, instructions for use (IFU), packaging, app, and any other relevant components. The pharmaceutical company can demonstrate its commitment to providing a safe and effective product to the market by presenting a comprehensive story backed by empirical evidence.