
Progress in the FDA’s transition from CSV to CSA

Where does Computer Software Assurance (CSA) come from?

The US FDA Center for Devices and Radiological Health (CDRH) Case for Quality program promotes a risk-based, product quality–focused, and patient-centric approach to computerized systems. This initiative encourages critical thinking based on product and process knowledge, and quality risk management over prescriptive, documentation-driven approaches.

Over the last six years the FDA has been working on CSA in partnership with industry and consultants. The guide’s details and progress were highly anticipated: members of the working group regularly communicated it as offering a ‘paradigm shift’ for the entire industry, and it was marketed as a coming confrontation of CSA vs CSV, tacitly referring to the methods defined in the ISPE GAMP approach. Behind the hype, however, little content was released during the draft work to help outsiders understand the FDA’s thinking or how far CSA might go in pushing needed innovation in software qualification and validation.

In September 2022, the FDA published a draft entitled “Computer Software Assurance for Manufacturing, Operations, and Quality System Software”. This article summarises the main contents and considers CSA in practice for new ways to validate software in GxP applications, and its fit alongside more established Computer System Validation approaches.

Does CSA represent a paradigm shift in software validation?

The published draft guidance on CSA is stated to be relevant for “computers and automated data processing systems used as part of medical device production”. It is important to note that the guidance is still a draft and its defined scope covers only medical devices. Medical devices are a significant sector with a high dependence on software, both in production and as embedded software forming part of, or in some cases the entirety of, the product itself. CSA’s impact and scope are therefore limited to this sector and not applicable to the larger pharmaceutical industry, at least for now. It may, as many have speculated, be an initial step by the FDA to influence CSV practice more widely in the future, but that is currently out of scope.

The draft’s contents are stated early on to build on the FDA’s 2002 “General Principles of Software Validation”, so rather than being an entirely new approach, CSA is at heart an extension of a 20-year-old guidance. This was a further limitation that was unexpected, given early communications from the group suggesting a foundational change and paradigm shift.

The 2002 General Principles of Software Validation included an important statement on the ‘least burdensome approach’, which the FDA supported to encourage lean ways of software assurance. The new guidance extends the same thinking to optimize methods without detriment to product quality and data integrity; however, common CSV practice in industry has been adopting these concepts for a long time, and they are already well established and described in best practices. It is interesting to note that, since the draft was released, members of the CSA initiative are now actively engaged with GAMP under a dedicated Special Interest Group, which will reduce the risk of unaligned approaches to CSV going forward and enables collaboration between CSA and GAMP. The initial positioning of CSA vs. CSV has thus been superseded by a common understanding of where CSA fits within the overall CSV practices that have been in development since the early 1990s, and so CSA does not currently represent a significantly new methodology.

We will now review the guide and summarise the key points. There are two main findings from the CSA guidance:

  1. To classify systems and functions as being direct/indirect impact to define the high-level need and depth of validation.
  2. To promote the use of both unscripted testing and scripted testing based on a risk/maturity assessment of the software functions.
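As a minimal sketch of how these two findings might be combined in practice, the illustrative decision rule below maps impact classification and process risk to a testing approach. The function name, categories, and returned descriptions are our own simplifications, not terms defined in the FDA draft:

```python
def select_assurance_approach(impact: str, risk: str) -> str:
    """Illustrative (non-FDA) decision rule combining the CSA draft's
    two ideas: impact classification (direct/indirect) and a risk-based
    choice between scripted and unscripted testing."""
    if impact == "indirect":
        # Supporting systems: lighter assurance, leverage existing records
        return "unscripted testing (e.g. ad-hoc or exploratory) with summary records"
    if risk == "high":
        # Direct-impact, high process risk: apply more rigor
        return "scripted testing with detailed, traceable test cases"
    # Direct impact but lower process risk: a middle ground
    return "limited scripted or unscripted testing with checklist evidence"


print(select_assurance_approach("direct", "high"))
```

A real CSV procedure would of course define its impact and risk categories formally and record the rationale; the point here is only that the classification can be made explicit and repeatable rather than re-argued per project.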


Initial CSA focus is on impact and types of testing

CSA’s goal is to reduce superfluous documentation while still confirming that the software is usable and any risks are minimised. The topics stated in the guide relevant to this are:

  • Intended use and impact to Patient Safety, Product Quality, and Data Integrity

In FDA-regulated production, patient safety is always the top priority. The CSA guide gives examples of systems whose intended use is directly related to the product and its quality, such as software used in production automation. It also gives examples of systems that do not have a direct impact because they play a supporting role. For systems with direct impact, more detailed steps and evidence, including testing, are expected to show that the system meets its intended use. Using the impact of a system to justify the level of validation required is not a new tool: it is stated in the original 2002 guide and has been a foundational aspect of GAMP from its early versions. In the CSA guide, however, it is clearly stated and intended to re-focus companies on applying critical thinking at the outset to decide where and how to apply CSV.

One example provided notes that systems that “collect and record data from the process for monitoring and review purposes (potentially) do not have a direct impact on production or process performance”. Risk-based thinking therefore has to consider carefully the intended usage of systems and their data, and not make a simplistic connection between the role of the system and the extent of validation effort.

It is interesting to note that the CSA guidance makes little direct reference to Data Integrity. It mentions the integrity of a record in an IT system, but no connection is made to the wider Data Integrity initiative, such as the concepts deeply incorporated into the latest GAMP bodies of work. The guidance does not make clear how to cover Data Integrity risks, which is recommended in today’s world given the high attention the topic receives from global auditors.

  • Risk based approach

Using risk assessments in software validation to determine the scope and depth of verification (test) activities is also not new in common CSV practice. Since GAMP4, and through GAMP5 and the latest GAMP5 Release 2, it has been a foundational tool to determine what might go wrong with software and link the required controls and functional testing. The goal of risk assessment in CSV is to demonstrate that potential failures are mitigated and the system operates according to its intended use (see above). The CSA guide describes the role of risk assessment as identifying “reasonably foreseeable software failures, determining whether such a failure poses a high process risk, and systematically selecting and performing assurance activities commensurate with the medical device or process risk”.

The CSA guide builds on risk assessment to further classify functions and determine a) the need for test activities, and b) the types of testing (see below) that can be applied. This next section contains the most useful and practical content of the guide.

  • Encourage wider testing types such as unscripted testing and tools

The CSA guide classifies two main types of testing, ‘scripted’ and ‘unscripted’, the latter being less burdensome to generate due to the simplicity of test checklists instead of highly detailed and specific test instructions and procedures. The main guidance in the draft for how to apply the types of testing is “For high-risk software features, functions, and operations, manufacturers may choose to consider more rigor such as the use of scripted testing or limited scripted testing”. It also specifically mentions the use of “Computer System Validation tools (e.g., bug tracker, automated testing) for the assurance of software” and recommends them to be considered “part of the quality system whenever possible”. These are helpful points, and we encourage companies to consider how to adopt flexible testing approaches into their CSV practice.
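To illustrate the draft’s point that automated testing tools can form part of the quality system, the sketch below shows an automated check that captures its own timestamped evidence record. The function names and record format are hypothetical, not an FDA- or GAMP-prescribed structure:

```python
import datetime
import json


def run_automated_check(name: str, check) -> dict:
    """Run a single automated verification and capture a timestamped
    evidence record. Illustrates (in simplified, hypothetical form) how
    tool output can serve as objective evidence in place of a manually
    executed, hand-signed test script."""
    try:
        check()
        outcome, detail = "pass", ""
    except AssertionError as exc:
        outcome, detail = "fail", str(exc)
    return {
        "test": name,
        "outcome": outcome,
        "detail": detail,
        "executed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }


# Example: verify a hypothetical unit-conversion function used in production
def check_temperature_conversion():
    assert round(100 * 9 / 5 + 32) == 212, "Celsius-to-Fahrenheit conversion failed"


record = run_automated_check("temperature_conversion", check_temperature_conversion)
print(json.dumps(record, indent=2))
```

In practice such records would feed a test-management or bug-tracking tool under the quality system, as the draft suggests; the design choice shown here is simply that the evidence is generated by the tool run itself rather than transcribed afterwards.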

One aspect to consider when incorporating unscripted testing into your CSV verification activities is that unscripted and exploratory testing may actually uncover more issues and bugs than tests that stick rigidly to a test instruction and procedure! The CSA guide does not reflect this, and whilst overall the need to apply different types of testing is clear and well stated, it may not always be appropriate to reserve unscripted testing only for lower impact/risk systems and functions.

Example: scaling test activities and type of testing


Other non-CSA topics that are important in the context of new ways to do CSV:

In parallel to the CSA initiative, there are several related GxP topics which will have an impact on how we validate systems and should be considered when undertaking and shaping CSV practices:

  • Effective Data Governance to Achieve and Maintain GxP Compliance

Good data and IT governance is critical to achieving and maintaining compliance with GxP requirements. GxP covers a wide range of activities, including but not limited to quality assurance, quality control, manufacturing, distribution, and storage, and in today’s world many of the important records and data are held within IT systems. To comply with GxP, a company must have strong policies and procedures for IT, covering both systems and data governance. Particularly important is the role of information security and the protection of IT assets and data from failure, intrusion, and cyber-crime. With regard to CSV, it is recommended that current practices for IT and data governance are built into procedures and responsibilities at the company level, kept up to date, applied in systems and projects, and used in audits to explain how they impact CSV.

  • Quality by Design (QbD)

Quality by Design (QbD) is a systematic approach to quality that is becoming increasingly popular in the manufacturing industry. QbD is based on the principle that quality should be designed into a product from the outset, rather than inspected and tested for after the fact. As a wide concept QbD is still emerging in terms of real examples; however, in process control and validation some sectors have applied QbD using Process Analytical Technology (PAT) to adapt operations in real time. For Biotech and Cell & Gene in particular this is required, and both regulatory and procedural guidance has to find practical ways to enable flexibility within a Design Space.

In terms of software and computer systems, applying QbD can take several forms. One is the systems that execute advanced control capability, such as the QbD/PAT applications above, but the same concepts can be applied elsewhere, such as the real-time threat monitoring provided by Cloud Service Providers: instead of reacting to incidents that impact the availability of IT systems, CSPs increasingly offer highly redundant and actively monitored environments to ensure reliability and security. These controls can be considered an application of QbD that reduces risks to data integrity inherently, rather than relying on manual and routine administrative procedures.

Interestingly, the CSA guidance mentions an example relevant for QbD: an MES “that is used to automatically control and adjust established critical production parameters. Changes in this system as direct impact may be a change to a manufacturing procedure that affects the safety or effectiveness of the device. If so, changes affecting this specific operation would require a 30-day notice”. Here the Agency is stating the need for pre-approval of changes that, under QbD/PAT, would be required on a regular basis within a batch. This points to the need for further regulatory discussion to allow companies to move towards design-space-based thinking for the production of new treatments.

  • Continuous Improvement within QMS

Within any Quality Management System (QMS), there is always room for continuous improvement. No matter how effective your QMS is, there are always ways to make it better through regular audits, both internal and external. You can also seek feedback from your employees and customers, and for guidance such as CSA it is recommended to review new ways of working and consider bringing them into your QMS to improve processes and procedures. CSA’s guidance on unscripted testing and automation is a great reason to start reviewing your SOPs to define a leaner approach to CSV at your company.

  • Configurable Systems and Development Models

Configurable systems and development models allow teams to select the tools, processes, and workflows that best fit their needs. This flexibility enables teams to optimize their workflows for speed, quality, and efficiency.

In the CSA guide there is a powerful statement to encourage the adoption of technology, “advances in manufacturing technologies, including the adoption of automation, robotics, simulation, and other digital capabilities, have allowed manufacturers to reduce sources of error, optimize resources, and reduce patient risk. FDA recognizes the potential for these technologies to provide significant benefits for enhancing the quality, availability, and safety of medical devices, and has undertaken several efforts to help foster the adoption and use of such technologies”.

Perhaps the most useful aspect of the CSA guide is not so much its detailed contents, but that the US FDA is continuing its role of aligning with and promoting initiatives that push the digitalization and modernization of compliance practices. This should be applauded, and whilst CSA may not actually represent any seismic change in CSV practice, it is timely to question how CSV has been applied and to encourage critical thinking with direct support from the world’s most influential regulator.

  • Use of existing documentation and effective supplier relationships

Since the start of CSV the importance of the supplier’s role has been a foundation and key aspect of the GAMP methodology. As a large amount of work is done by suppliers to verify their solutions and document these activities before they reach a project, the regulated company has looked to leverage this to avoid duplicate work and lower their own validation tasks and testing.

The CSA guidance makes the importance of this integration and the benefits of working with suitable suppliers very clear: “the manufacturer could incorporate the practices, validation work, and electronic information already performed by developers of the software as the starting point and determine what additional activities may be needed. For some lower risk software features, functions, and operations, this may be all the assurance that is needed by the manufacturer.”

With this statement, more pragmatic approaches to leveraging the supplier’s existing work and the relationship with the supplier are encouraged. Particularly for modern cloud solutions, significantly more responsibility is held by the supplier and their infrastructure/service partners than in traditional on-premise deployments, so it is helpful that the Agency actively promotes collaboration between suppliers and customers to meet the basic CSV expectations.

How can we help?

Factorytalk are highly experienced in CSV from the early days of GAMP and have validated systems of all types and complexities used in Pharma. We regularly update our practices and recommendations, and can help you understand what CSA is and the latest GAMP5 Release 2, and find practical, compliant and value-added ways to undertake CSV.

Our methods incorporate GAMP5 Release 2, CSA, and how to bring Data Integrity into your assessments and operation of IT systems. We can assist with everything from training and knowledge-building, through consulting, to entire CSV assignments for digital plants. Contact us for more information.

