Computer System Validation



INTRODUCTION & OVERVIEW:- Validation is an essential part of GMP. It is an ongoing set of activities that continues from the initiation of the project until system retirement.
Computerized System Validation (CSV) is performed through activities that occur throughout the entire life cycle. Validation activities must therefore be planned, specified, built/configured, verified against the specification, and ultimately reported. Where a computerized system replaces a manual operation, there should be no resultant decrease in product quality, process control or quality assurance. Computerized systems used in the manufacture of pharmaceutical products should be properly designed, validated and maintained to ensure that the system serves its intended purpose and meets its quality attributes in a consistent manner. The applications should be validated, and the infrastructure on which the validated applications depend is expected to be compliant and controlled. Therefore, the IT infrastructure that supports GMP-regulated activities should be qualified.

Risk-Based Approach to Computerised System Validation:- Risk management should be applied throughout the lifecycle of the computerised system, taking into account patient safety, data integrity and product quality. Several risk assessments may need to be performed at various stages of a computerized system life cycle. The objective of quality risk management is to identify, analyse and categorize GMP risks and to determine the appropriate controls required to manage these risks.

Roles and Responsibilities:- In accordance with the PIC/S Guide to Good Manufacturing Practice for Medicinal Products PE 009-10, Annex 11 (Computerised Systems), roles and responsibilities (e.g. Business Process Owner, System Owner, Supplier, IT, etc.) must be clearly defined and documented for the life cycle of a computerised system.
The Process Owner is responsible for ensuring the compliance of the computerised system and has end-to-end responsibility for the business processes, whereas the System Owner is responsible for ensuring the system is supported, maintained and available for use throughout the lifecycle. In some instances, the Process Owner may take over the role of the System Owner and vice versa. Furthermore, responsibilities for writing, approving and authorising documents should also be defined. Activities and responsibilities can be assigned using a matrix that lists the responsible parties and deliverables for each task.

For example:- When a PLC-based capsule filling machine is installed in the manufacturing area, roles and responsibilities might be assigned as follows:
A. Project Owner: Lead of the project team.
B. System Owner: Lead of the manufacturing (production) team.
C. Technical Owner: Lead of the engineering team.
D. IT Infrastructure: Lead of the IT compliance team.
E. Validation Lead: Owner of the validation team, i.e. the process validation or CSV team.
F. QA Lead: Involved QA member (review of documentation).

Prospective and Legacy Systems Validation:- Prospective validation applies to a new system; legacy system validation applies to an existing system.
Validation is expected to be conducted prospectively for all new systems and, where possible, for existing (legacy) systems. Where legacy system validation is required, it may be supported by a comprehensive review of historical data in addition to re-defining, documenting, re-qualifying, prospectively validating applications and introducing GMP-related life-cycle controls to assure that existing systems are operating correctly.
Good historical data may be used instead of testing. Lack of adequate evidence to support the validation process will make it difficult to perform a meaningful validation, and this can lead to suspension or shut-down of systems if imposing life-cycle controls and testing is also not possible.
Retrospective validation is not equivalent to prospective validation and is not a preferred method for computerised systems; it is used only in exceptional cases (continued use is necessary, good data are available and re-testing is not feasible) and is not an option for new systems.

Personnel & Training :- Persons involved with computerised systems validation activities should be appropriately qualified in order to carry out their assigned duties. Personnel and contractors who are responsible for the development, operation, maintenance and administration of the computerised systems must have relevant training, education and experience for the particular system and role. Training measures and qualifications should be documented and stored as part of the Quality Management System (QMS).

ER/ES (Electronic Records; Electronic Signatures) :-

Electronic records :- It must be possible for electronic records to be printed in a readable format. For batch release related records, it should be possible to generate and print out the changes made to the original data.

Electronic signatures :- Electronic signatures should be unique to one individual, and there must be procedures to ensure that the owners of electronic signatures are aware of their responsibilities for their actions, i.e. users must be aware that electronic signatures have the same impact as hand-written signatures. Electronic signatures must be permanently linked to their respective electronic records to ensure that the signatures cannot be edited, duplicated or removed in any way.
Electronic signatures must clearly indicate:
· The displayed/printed name of the signer;
· The date and time when the signature was executed; and
· The reason for signing (such as review, approval, responsibility, or authorship) associated with the signature.
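To illustrate the permanent record-signature link, here is a minimal Python sketch, assuming a simple in-house records application (all names are hypothetical). It binds the signer's displayed name, a timestamp, and the signing reason to a hash of the record content, so any later edit to the record breaks the link and is detectable:

from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)   # frozen: a signature object cannot be edited after creation
class ElectronicSignature:
    signer_name: str      # displayed/printed name of the signer
    signed_at: datetime   # date and time the signature was executed
    meaning: str          # reason for signing: "review", "approval", etc.
    record_hash: str      # permanently links the signature to one record version

def sign_record(record_content: bytes, signer_name: str, meaning: str) -> ElectronicSignature:
    """Bind a signature to the exact record content via a cryptographic hash."""
    return ElectronicSignature(
        signer_name=signer_name,
        signed_at=datetime.now(timezone.utc),
        meaning=meaning,
        record_hash=hashlib.sha256(record_content).hexdigest(),
    )

def signature_still_valid(record_content: bytes, sig: ElectronicSignature) -> bool:
    """Any edit to the record after signing makes the hashes disagree."""
    return hashlib.sha256(record_content).hexdigest() == sig.record_hash

A production system would of course add signer authentication and tamper-proof storage; the point here is only the one-way binding between record content and signature.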

Basic Needs of Computer System Validation
System elements that need to be considered in computerized system validation include computer hardware and software, related equipment and network components and operating system environment, procedures and systems documentation including user manuals and people (such as, but not limited to, users, data reviewers, system application administrators, network engineers, database administrators and people involved in archiving). Computerized system validation activities should address both system configuration as well as any custom-developed elements.

Computerized systems should be maintained in the validated state with risk-based controls appropriate to the different stages of the system life cycle. These stages include system planning, specification, programming and configuration, system testing, preparation and verification of standard operating procedures (SOPs) and training programmes, system operation and maintenance including handling of software and hardware updates, monitoring and review, followed by system retirement.

Depending on the types of systems or typical applications such as process control systems (distributed control system (DCS), programmable logic controller (PLC), supervisory control and data acquisition (SCADA), laboratory information management systems (LIMS), laboratory instrument control systems and business systems (enterprise resource planning (ERP), manufacturing resource planning (MRP II)) used by the manufacturer, a document covering (but not limited to) the following information should be available on-site:
· Purpose and scope
· Roles and responsibilities
· Validation approach
· Risk management principles
· System acceptance criteria
· Vendor selection and assessment
· Computerized system validation steps
· Configuration management and change control procedures
· Back-up and recovery
· Error handling and corrective action
· Contingency planning and disaster recovery
· Maintenance and support
· System requirements
· Validation deliverables and documentation
· Templates, formats, annexes

V Model of Computer System Development Life Cycle:-
A systematic approach to computerized system validation, which begins with an initial risk assessment and continues throughout the life of the system, must be defined to ensure quality is built into the computerized systems.

Hardware Qualification:-
Without stable hardware any program will fail. The frustration and expense of supporting bad hardware can drain an organization, delay progress, and frustrate everyone involved. At Stanford Linear Accelerator Center (SLAC), we have created a testing method that helps our group, SLAC Computer Services (SCS), weed out potentially bad hardware and purchase the best hardware at the best possible cost. Commodity hardware changes often, so new evaluations happen periodically each time we purchase systems and minor re-evaluations happen for revised systems for our clusters, about twice a year. This general framework helps SCS perform correct, efficient evaluations.

Defining System Requirements: - It is difficult to maintain system homogeneity in a growing cluster environment. The hardware available to build systems changes often. This has the negative effect of adding complexity in management, software support for new hardware, and system stability. Introducing new hardware can introduce new hardware bugs. To constrain change and efficiently manage our systems, SCS developed a number of tools and requirements to enable an easy fit into our management and computing framework. We reduced the features to a minimum that would fit our management infrastructure and produce valid results with our code. This is our list of requirements:
1. One rack unit (1U) case with mounting rails for a 19-inch rack.
2. At least two Intel PIII CPUs at 1 GHz or greater.
3. At least 1 GB of ECC memory for every two CPUs.
4. 100 Mb Ethernet interface with PXE support on the network card and in the BIOS.
5. Serial console support with BIOS-level access support.
6. One 9 GB or larger system disk, 7200 RPM or greater.
7. All systems must be FCC and UL compliant.

Starting Our System Testing: - The eleven vendors we chose ranged from the largest system builders to small, screwdriver shops. The criteria for being in the evaluation were to meet the list of basic requirements and to send three systems for testing. We needed the systems for ninety days. In many cases, we did not need the systems that long, but it’s good to have the time to thoroughly investigate the hardware.
 Two of the three systems were racked; the third was placed on a table for visual inspection and testing. The systems on the tables had their lids removed, and were digitally photographed.
Later the tabled systems would be used for the power and cooling tests and visual inspection. The other two systems were integrated into a rack in the same manner as all our clustered systems, but they did not join the pool of production systems. Some systems had unique physical sizing and racking restrictions that prevented our being able to use them.
Each model of system had a score sheet. The score sheets were posted to our working group’s web-page. Each problem was noted on the website, and we tried to contact the vendor to resolve any issues. In this way we tested the system, the vendors’ willingness to work with us, and their ability to fix problems. We had a variety of experiences. Some vendors just shipped us another model, some worked through the problem with us, others responded that it was not a problem, and one or two ignored us. This quickly narrowed the systems that we considered manageable.
Throughout the period of testing, if a system was not doing a specific task it was running hardware testing scripts or run-in scripts. Each system did ’run-in’ for at least thirty days. No vendor does ’run-in’ for more than seventy-two hours, and this allowed us to see failures over the long term. Other labs reported that they too saw problems over long testing cycles.

We wanted to evaluate a number of aspects of all the systems: first, the quality of the physical engineering; second, how well each system operated and whether it was stable; third, each system’s performance; and last, the contract, support, and vendor’s responsiveness.

Physical Inspection
The systems placed on the table were evaluated by several criteria:
1.  Quality of construction
2.   Physical design
3.   Accessibility
4.   Quality of the power supply
5.  Cooling design

Quality of construction: - The systems varied greatly in quality of construction. We found bent-over, jammed ribbon-cables, blocked airflow, flexible cases, and cheap, multi-screw access panels that were unbelievably bad for a professional product. There were poor design decisions, like a power switch offset in the back of a system that was nearly inaccessible once the system was racked. On the positive side of the experience, there were a few well-engineered systems.

Physical Design: - This evaluation included quality of airflow and cooling, rackability, size/weight, and system layout. Features such as drive bays accessible from the front were also noted. Airflow is a big problem with hot x86 CPUs, especially in a restricted space like a 1U rack system. Some systems had blocked airflow or little to no circulation. Heat can cause instability in systems and reduce operational lifetimes, so good airflow is critical.

Physical Construction:- Rigidity of the case, no sharp edges, how the system fit together, and cabling are part of this category. These might seem small, uninteresting factors until you get cut by a system case, or have a huge percentage of ’dead on arrivals’ because the systems were mishandled by the shipper and the cases were too weak to take the abuse.
We have to use these systems for a number of years, and a simple yet glaring problem is a pain and potentially expensive to maintain.

Accessibility:- Tool-less access should be a standard on all clustered systems. When you have thousands of systems, you are always servicing some. To keep the cost of that service low, parts should be quickly and easily replaceable. Unscrewing and screwing six to eight tiny machine screws slows down access to the hardware. Parts that fit so that one part does not have to come out to get to another, and easy access to drives, are pluses. Some features that we did not ask for, like keyboard and monitor connections on the front of the case, are OK but not really necessary.

Power :- We tested the quality of the power supply using a Dranetz-BMI Power Quality Analyzer (see sidebar). Power correction is often noted in the literature for a system, but we have seen radically different measurements relative to the published number. For example, one power supply that was published to have a power factor correction of .96 actually had a .49 correction. This can have terrible consequences when multiplied by 512 systems. We tested the system at idle and under heavy load. The range of quality was dramatic and an important factor in choosing a manageable system.
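To see why a poor power factor matters at cluster scale, here is a back-of-the-envelope Python calculation. The two power factors are the published and measured figures above; the per-system wattage is an assumed, illustrative value:

# Apparent power (VA) = real power (W) / power factor.
real_power_w = 300.0   # assumed real draw per system under load (illustrative)
systems = 512          # cluster size mentioned above

for pf in (0.96, 0.49):
    apparent_va = real_power_w / pf
    total_kva = apparent_va * systems / 1000.0
    print(f"PF {pf}: {apparent_va:.0f} VA per system, {total_kva:.0f} kVA for {systems} systems")

# PF 0.96 -> ~313 VA each (~160 kVA total); PF 0.49 -> ~612 VA each (~313 kVA total).

At the measured power factor, the facility must supply nearly twice the apparent power for the same real load, which is exactly the consequence the measurement above warns about.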

The physical inspection, features, cooling and power-supply quality test weeded out a number of systems early. Getting these out of the way first reduced the number of systems that we had to do extensive testing on, thereby reducing the amount of time for testing in general. System engineering, design, and quality of parts ranged broadly. Moving to the next testing stage would also cull the herd and result in systems that we have been pleased to support.

Non-Engineering Work :- Non-engineering factors (contractual agreements, warranties, and terms) are critical to the success of bringing new systems into production work. The warranty terms and length affect the long-term cost of system support. We also try to assess the financial health of the company. A warranty does little good if the vendor is not around to honor it.

Another aspect that couples the non-engineering work with the engineers is the acceptance criteria, which are seldom talked about until it is too late. These criteria determine the point in the deployment at which the vendor is done and the organization is willing to accept the systems. This should be in writing in your purchase order. If the vendor drops the systems off at the curb, and later during the rollout period some hardware-related problem surfaces, you need to be within your rights to tell the vendor to fix the problem or remove the systems. On the vendor side, the separation between what is a hardware problem and what is a software problem needs to be clear. Often a vendor will have to work with the client to determine the nature of the problem, so that cost will need to be built into the price of the system.

OS Hardening:-
● OS hardening (short for operating system hardening) refers to adding extra security measures to your operating system in order to strengthen it against the risk of cyber attack.
● All mainstream modern operating systems are designed to be secure by default, of course. But on most systems, you can add extra security features and modify the default configuration settings in ways that make the system less vulnerable to attacks than it would be with a default install alone.
● OS hardening is especially important in situations where a system faces above-average security risks, such as a Web server that is exposed to the public Internet or a data server that contains data subject to strict regulatory privacy requirements. However, given the high rate of cyber attacks today, operating system hardening is a best practice even in cases where servers or data face only average security risks. The time it takes to harden an OS is often well worth it because, as they say, an ounce of prevention equals a pound of cure.
● You can think of OS hardening as akin to adding a security system to your home or installing deadbolts on your doors. Although your house was built to be basically secure, these extra security measures provide additional confidence that intruders won't be able to break past your home's basic defenses.

OS Hardening Checklist
The exact steps that you take to harden an operating system will vary depending on the type of operating system, its level of exposure to the public Internet, the types of applications it hosts and other factors.
However, the following OS hardening checklist is a good place to start when hardening any type of operating system:

● Firewall configuration: - Your operating system may or may not have a firewall set up by default. Even if it does have a firewall running, the firewall rules may not be as strict as they could be. For this reason, OS hardening should involve reviewing firewall configurations and modifying them so that traffic is accepted only from the IP addresses and on the ports where it is strictly needed. Any non-essential open ports are an unnecessary security risk (see the sketch below).
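As a starting point for such a review, here is a minimal Python sketch that scans the local machine for listening TCP ports and flags any that are not on an approved list. The allowlist is hypothetical; adapt it to the services the host is actually meant to expose:

import socket

ALLOWED_PORTS = {22, 443}   # hypothetical allowlist: SSH and HTTPS only

def open_tcp_ports(host="127.0.0.1", ports=range(1, 1025)):
    """Return the set of ports accepting TCP connections on the given host."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.add(port)
    return found

unexpected = open_tcp_ports() - ALLOWED_PORTS
if unexpected:
    print(f"Review firewall rules: unexpected open ports {sorted(unexpected)}")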

● Access control: - Windows, Linux and OS X all provide user, group and account management features that can be used to restrict access to files, networking and other resources. But these features are often not as strict as they could be by default. Review them to make sure that access to a given resource is granted only to users who truly need it. For example, if you have a Linux server where each user account has read access to other users' home directories, and this access is not actually required for the use case that the server supports, you would want to change file permissions to close off the unnecessary access.
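For the home-directory example above, a small audit script can find the directories that need attention. This is a sketch assuming a conventional Linux layout under /home:

import stat
from pathlib import Path

def overly_readable_home_dirs(home_root="/home"):
    """Flag home directories whose permissions grant read access to group or others."""
    flagged = []
    for entry in Path(home_root).iterdir():
        if not entry.is_dir():
            continue
        mode = entry.stat().st_mode
        if mode & (stat.S_IRGRP | stat.S_IROTH):   # any group/other read bit set
            flagged.append((entry, oct(mode & 0o777)))
    return flagged

for path, perms in overly_readable_home_dirs():
    print(f"{path}: {perms} -- consider chmod 700 {path}")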

● Anti-virus: - Depending on the type of system you are hardening and the workloads running on it, you may want to install and configure anti-virus software to detect and remediate malware. For example, if you are hardening a Windows workstation where users will be opening email messages, having anti-virus software in place provides useful extra security in case users open a malicious file attachment.

● Software updates: - Be sure to determine whether the operating system that you are hardening will install security updates automatically, and then change that setting as needed. In most cases, automatic software updates are a good idea because they help keep your system ahead of security threats as they emerge. But in certain situations, you may want to avoid auto-updates and instead require administrators to approve software changes manually in order to minimize the risk of an update that could disrupt a critical service.

● Hardening frameworks :- Some operating systems provide frameworks that are designed for the specific purpose of adding extra access control and anti-buffer-overflow features to the system and the applications it hosts. AppArmor and SELinux are examples of this type of software on Linux. In general, installing or enabling these tools is a good system hardening best practice.

● Data and workload isolation :- For OS hardening, it is a good idea to isolate data and  workloads from one another as much as possible. Isolation can be achieved by hosting different databases or applications inside different virtual machines or containers, or restricting network access between different workloads. That way, if an attacker is able to gain control of one workload, he won't necessarily be able to access others as well.

● Disable unnecessary features: - It is also a best practice to disable any operating system or application features that you are not using. For example, if your Linux server runs a graphical interface by default but you will only be accessing the system through an SSH client, you should disable (or, better, uninstall completely) the graphical interface. Similarly, if your Windows workstation has Skype installed by default but the users will not actually be running Skype, disable or uninstall the program. In addition to consuming system resources unnecessarily, features that are not being used create potential security holes.

      GLOSSARY:-
Archival :- Archiving is the process of protecting records from the possibility of being further altered or deleted, and storing these records under the control of independent data management personnel throughout the required retention period. Archived records should include, for example, associated metadata and electronic signatures.
  
Audit trail :- The audit trail is a form of metadata that contains information associated with actions that relate to the creation, modification or deletion of GXP records. An audit trail provides for secure recording of life-cycle details such as creation, additions, deletions or alterations of information in a record, either paper or electronic, without obscuring or overwriting the original record. An audit trail facilitates the reconstruction of the history of such events relating to the record regardless of its medium, including the “who, what, when and why” of the action.

Backup: A backup means a copy of one or more electronic files created as an alternative in case the original data or system are lost or become unusable (for example, in the event of a system crash or corruption of a disk). It is important to note that backup differs from archival in that back-up copies of electronic records are typically only temporarily stored for the purposes of recovery and may be periodically overwritten. Such temporary back-up copies should not be relied upon as an archival mechanism.
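To make the recovery purpose concrete, here is a minimal Python sketch of a backup step that verifies the copy is bit-identical before it is trusted (paths and names are illustrative):

import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_with_verification(source: Path, backup_dir: Path) -> Path:
    """Copy a file and confirm the copy matches the original before relying on it."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / source.name
    shutil.copy2(source, dest)              # copy2 also preserves file metadata
    if sha256(source) != sha256(dest):      # verification: checksums must match
        raise IOError(f"backup of {source} failed integrity check")
    return dest

# Example: backup_with_verification(Path("results.db"), Path("/mnt/backups/daily"))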

Business continuity Plan:- A written plan that is documented and maintained that defines the ongoing process supported by management and funded to ensure that the necessary steps are taken to identify the impact of potential losses, maintain viable recovery strategies and recovery plans, and ensure the continuity of services through personnel training, plan testing and maintenance. 

Change control :- The process of assuring that a computerized system remains validated following a change. It includes assessing the impact of the change to determine when and if repetition of a validation or verification process or specific portion of it is necessary and performing appropriate activities to ensure the system remains in a validated state.

Computerized system: A broad range of systems including, but not limited to, automated laboratory equipment, laboratory information management, and document management systems. The computerized system consists of the hardware, software, and network components, together with the controlled functions and associated documentation.

 Commercial (off-the-shelf, configurable) computerized system:- Software defined by a market driven need, commercially available, and whose fitness for use has been demonstrated by a broad spectrum of commercial users; also known as COTS.

User Requirement Specifications (URS): describes what the system should do. The user requirements contain scientific, business, legal, regulatory, safety, performance and quality aspects of the future system. The user requirements serve as the basis for the Performance Qualification (PQ).

 Computerized System validation plan: The validation plan shall be an approved document, which describes the validation activities and responsibilities. The validation plan specifies the Computerized System subjected to validation and compiles the validation activities to be performed and the validation targets/criteria to be fulfilled. The validation plan shall be prepared and approved prior to conducting the test.

DQ (Design Qualification): Documented verification that the proposed design of facilities, systems, and equipment is suitable for the intended purpose.

IQ (Installation qualification): Documented verification that a system is installed according to written and pre-approved specifications.

OQ (Operational qualification): Documented verification that a system operates according to written and pre-approved specifications throughout specified operating ranges at the customer site.

PQ (Performance qualification) or User Acceptance Testing (UAT): Documented verification that a system is capable of performing the activities of the processes it is required to perform, according to written and pre-approved specifications, within the scope of the business process and operating environment.

Data life cycle:- All phases of the process by which data are created, recorded, processed, reviewed, analyzed and reported, transferred, stored and retrieved and monitored until retirement and disposal. There should be a planned approach to assessing, monitoring and managing the data and the risks to those data in a manner commensurate with potential impact on patient safety, product quality and/or the reliability of the decisions made throughout all phases of the  data life cycle.

Disaster recovery:-  Process for planning or engaging appropriate resources to restore the normal business function in the event of a disaster.

 Functional specifications:- The functional specifications document, if created, defines functions  and technological solutions that are specified for the computerized system based upon technical requirements needed to satisfy user requirements (e.g. specified bandwidth required to meet the user requirement for anticipated system usage).


Good documentation practices :- In the context of these guidelines, good documentation practices are those measures that collectively and individually ensure documentation, whether  paper or electronic, is secure, attributable, legible, traceable, permanent, contemporaneously recorded, original and accurate.
       

 Production environment: - The business and computing operating environment in which a computerized system is used by end-users. For regulated computerized systems, the production environment is the business and computing operating environment in which the computerized system is being used for good laboratory practice-regulated purposes.

System Development life cycle: The period of time that starts when a computerized system is conceived and ends when the product is no longer available for use by end-users. The system life cycle typically includes: a requirements and planning phase; a development phase that includes a design phase and a programming and testing phase; a system qualification and release phase that includes system integration and testing, system validation and system release; a system operation and maintenance phase; and a system retirement phase.

User acceptance testing:- Verification of the fully-configured computerized system installed in the production environment (or in a validation environment equivalent to the production environment) to perform as intended in the automated business process when operated by end-users trained in end-user standard operating procedures (SOPs) that define system use and control. User-acceptance testing may be a component of the performance qualification (PQ) or a validation step separate from the PQ.

CAPA :- CAPA is used to bring about improvements to an organization's processes, and is often undertaken to eliminate causes of non-conformities or other undesirable situations. CAPA is a concept within good manufacturing practice (GMP), Hazard Analysis and Critical Control Points/Hazard Analysis and Risk-based Preventive Controls (HACCP/HARPC) and numerous ISO business standards. It focuses on the systematic investigation of the root causes of identified problems or identified risks in an attempt to prevent their recurrence (for corrective action) or to prevent occurrence (for preventive action).

Deviation :- A deviation that is a difference between an observed value and the true value of a quantity of interest (such as a population mean) is an error, and a deviation that is the difference between the observed value and an estimate of the true value (such an estimate may be a sample mean) is a residual.
There are two types of deviations:
      
Planned Deviation :- Planned deviations are described and pre-approved deviations from the currently approved documentation/system.
●    Planned deviations shall be approved before execution.
●    Planned deviations should be handled through QA-approved change control.
●    All changes should be evaluated for product impact and significance.
●    The need for requalification and revalidation should be assessed.
●    Changes are ultimately approved or rejected by QA.


Unplanned Deviation: - An unplanned deviation is also called an incident.
●    An incident can be defined as an unplanned and uncontrolled event, in the form of non-compliance with the designed system or procedures, arising from the system at any stage of manufacturing, warehousing, packing, engineering, testing or storage of drug product.

Classification of Deviation:-
a. Critical
b. Major
c. Minor

Quality Impact Incident: - Quality impact incidents are errors or occurrences during execution of an activity which will affect the quality, purity and strength of the drug product.

Quality Non-Impact Incident: - Quality non-impact incidents are errors or occurrences during execution of an activity which have no impact on the quality, purity and strength of the drug product.

VALIDATION MASTER PLAN
There should be a computerized system validation master plan that describes the policy, approach, organization and planning, resources, execution and management of computerized system validation for all of the GXP systems in use on-site. The computerized system validation master plan (CSVMP) should contain, for example, the scope, the risk management approach and a complete inventory list of all GXP systems. The CSVMP should also outline the controls, including but not limited to backup and recovery of data, contingency planning, disaster recovery, change control management, configuration management, error handling, maintenance and support, corrective measures and system access control policies, that will be in place to maintain the validated state of the systems. The CSVMP should refer to protocols and reports as appropriate for the conduct of validation. Where appropriate, computerized systems should be classified based on a risk assessment relating to their GXP impact.

Validation Protocol:- Validation should be executed in accordance with the validation protocol and applicable SOPs. A validation protocol should define the validation strategy, including roles and responsibilities and documentation and activities to be performed. The protocol should cover the specification, development, testing, review and release of the computerized system for GXP use. The validation protocol should be tailored to the system type, impact, risks and requirements applicable to the system in which it will be used.
Validation Report: - A validation summary report should be prepared, summarizing system validation activities. It should outline the validation process and activities and describe and justify any deviations from the process and activities specified in the protocol.  The report should include all critical and major test discrepancies that occurred during the verification/validation testing and describe how these were resolved. The report should be approved after the resolution of any issue identified during validation and the system should then be released and ready for GXP use.

VENDOR MANAGEMENT:- For vendor-supplied and/or vendor-managed computerized systems or system components, including cloud-based systems, an evaluation of the vendor-supplied system and the vendor’s quality systems should be conducted and recorded. The scope and depth of this evaluation should be based upon risk management principles. Vendor evaluation activities may include: completion of an audit checklist by the vendor; gathering of vendor documentation related to system development, testing and maintenance including vendor procedures, specifications, system architecture diagrams, test evidence, release notes and other relevant vendor documentation; and/or on-site audit of the vendor facilities to evaluate and continuously monitor as necessary the vendor’s system life cycle control procedures, practices and documentation. Appropriate quality agreements should be in place with the vendor defining the roles and responsibilities and quality procedures throughout the system life cycle.

REQUIREMENTS SPECIFICATIONS: - Requirements specifications should be written to document the minimum user requirements, functional or operational requirements and performance requirements. Requirements may be documented in separate URS and functional requirements specifications (FRS) documents or in a combined document.
User requirements specifications. The authorized URS document, or equivalent, should state the intended uses of the proposed computerized system and should define critical data and data life-cycle controls that will assure consistent and reliable data throughout the processes by which data is created, processed, transmitted, reviewed, reported, retained, retrieved and eventually disposed of. The URS should include requirements to ensure that the data will meet regulatory requirements such as ALCOA principles and WHO guidelines on good documentation practices. Other aspects that should be specified include, but are not limited to, those related to:
·        the data to be entered, processed, reported, stored and retrieved by the system, including any master data and other data considered to be the most critical to system control and data output;
·        the flow of data, including that of the business process(es) in which the system will be used, as well as the physical transfer of the data from the system to other systems or network components. Documentation of data flows and data process maps is recommended to facilitate the assessment, mitigation and control of data integrity risks across the actual, intended data process(es);
·        networks and operating system environments that support the data flows;
·        how the system interfaces with other systems and procedures;
·        the limits of any variable and the operating program and test program;
·        synchronization and security control of time/date stamps;
·        technical and procedural controls of both the application software and the operating systems to assure system access only to authorized persons;
·        technical and procedural controls to ensure that data will be attributable to unique individuals (for example, to prohibit use of shared or generic login credentials);
·        technical and procedural controls to ensure that data is legibly and contemporaneously recorded to durable (“permanent”) media at the time of each step and event, and controls that enforce the sequencing of each step and event (for example, controls that prevent alteration of data in temporary memory in a manner that would not be documented);
·        technical and procedural controls that assure that all steps that create, modify or delete electronic data will be recorded in independent, computer-generated audit trails or other metadata or alternate documents that record the “what” (e.g. original entry), “who” (e.g. user identification), “when” (e.g. time/date stamp) and “why” (e.g. reason) of the action;
·        backups and the ability to restore the system and data from backups;
·        the ability to archive and retrieve the electronic data in a manner that assures that the archive copy preserves the full content of the original electronic data set, including all metadata needed to fully reconstruct the GXP activity. The archive copy should also preserve the meaning of the original electronic data set, including its dynamic format, which would allow the data to be reprocessed, queried and/or tracked and trended electronically as needed;
·        input/output checks, including implementation of procedures for the review of original electronic data and metadata, such as audit trails;
·        technical and procedural controls for electronic signatures;
·        alarms and flags that indicate alarm conditions and invalid and altered data in order to facilitate detection and review of these events;
·        system documentation, including system specifications documents, user manuals and procedures for system use, data review and system administration;
·        system capacity and volume requirements based upon the predicted system usage and performance requirements;
·        performance monitoring of the system;
·        controls for orderly system shutdown and recovery;
·        business continuity;
·        user requirements related to the tests carried out in the qualification phase (typically either the operational qualification (OQ) or the PQ); a traceability sketch follows this list.
In the case of, e.g. a chromatography data system (CDS), it is further important to define the requirements for the basic functions, taking into account the following details:
·        requirements for hardware, workstations and operating systems;
·        system requirements such as number of users and locations;
·        compliance requirements, i.e. open or closed system, security and access configuration, data integrity, time and date stamp, electronic signature and data migration;
·        workflow of the CDS;
·        information technology (IT) support requirements;
·        interface requirements.
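One lightweight way to maintain the requirement-to-test relationship mentioned above is a traceability matrix. The sketch below uses entirely hypothetical requirement and test IDs; it simply maps each URS item to the qualification tests that verify it and flags any requirement with no coverage:

# Hypothetical URS items mapped to hypothetical OQ/PQ test case IDs.
requirements_to_tests = {
    "URS-001 (unique user logins)":        ["OQ-TC-014"],
    "URS-002 (audit trail on data edits)": ["OQ-TC-021", "PQ-TC-005"],
    "URS-003 (batch report printout)":     ["PQ-TC-011"],
    "URS-004 (backup and restore)":        [],   # gap: no linked test yet
}

untested = [req for req, tests in requirements_to_tests.items() if not tests]
if untested:
    print("Requirements with no linked qualification test:")
    for req in untested:
        print(f"  - {req}")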

Functional specifications:- The functional specifications should define specific functions of the computerized system based upon the technical requirements needed to satisfy user requirements. The functional specifications provide a basis for the system design and configuration specifications. Functional specifications should consider requirements for operation of the computerized system in the intended computing environment, such as network infrastructure requirements, as well as functions provided by vendor-supplied software and functions required for user business processes that are not met by out-of-the-box software functionality and default configurations and that will require custom code development. With regard to the proper functioning of computer software, the following general aspects should be kept in mind when specifying installation and user/functional requirements:
‒       language, name, function (purpose of the program);
‒       inputs;
‒       outputs, including electronic data and metadata that constitute the “original records”;
‒       fixed set points (process variables that cannot be changed by the operator);
‒       variable set points (entered by the operator);
‒       edits (reject input/output that does not conform to limits and minimize errors);
‒       input processing parameters (and equations);
‒       program overrides (e.g. to stop a mixer before time).

The personnel access roles who have the ability and/or are authorized to write, alter or have access to programs should be identified. There should be appropriate segregation of roles between personnel responsible for the business process and personnel in system administration and maintenance roles who will have the ability to alter critical master data, critical set points, and system policies and configuration settings.

   With regard to the proper functioning of computer hardware and to prevent damage, the following general aspects should be kept in mind when specifying installation and functional requirements:
      ‒       location;
      ‒       power supply;
      ‒       environmental conditions;
      ‒       magnetic disturbances;
      ‒       mechanical disturbances;
       ‒       physical security.

SYSTEM DESIGN CONFIGURATION SPECIFICATIONS :-
System design and configuration specifications should be developed based on user and functional requirements. Specification of design parameters and configuration settings (separate or combined) should ensure data integrity and compliance with “good documentation practices for electronic data”.
System design and configuration specifications should provide a high-level system description as well as an overview of the system physical and logical architecture, and should map out the automated system business process and relevant work flows and data flows if these have not already been documented in other requirements specifications documents.
The system design and configuration specifications may include, as applicable, specifications to define the design of software code, for software code that is developed in-house, if any, and configuration specifications of configurable elements of the software application, such as security profiles, audit trail configuration, data libraries and other configurable elements.
In addition, the system design and configuration specifications may also include, based upon risk, the hardware design and configuration specifications as well as those of any supporting network infrastructure.

Example configuration settings and design controls for good documentation practices that should be enabled and managed across the computing environment (for both the software application, including off-the-shelf software, and operating systems environments) include, but are not limited to:

·        restricting security configuration settings for system administrators to independent persons, where technically feasible;
·        disabling configuration settings that allow overwriting and reprocessing of data without traceability, disabling use of “hidden fields”, and disabling the ability to delete data and the ability to obscure data with data annotation tools;
·        restricting access to time/date stamps;
·        for systems to be used in clinical trials, implementing configuration and design controls to protect the blinding of the trial, for example, by restricting access to who can view randomization data that may be stored electronically.
·        System design and configuration specifications should include secure, protected, independent computer-generated audit trails to track changes to these settings in the system.

DESIGN QUALIFICATION

Design review should be conducted to verify that the proposed design and configuration of the system is suitable for its intended purpose and will meet all applicable user and functional requirements specifications.
This process, which may be referred to as design qualification, may include a review of vendor documentation, if applicable, and verification that requirements specifications are traceable to proposed design and configuration specifications.

BUILD AND PROJECT IMPLEMENTATION
Once the system requirements and the system design and configuration are specified and verified, system development or “build and test” activities may begin. The development activities may occur as a dedicated phase following completion of the specification of system requirements and design. Alternatively, development activities may occur iteratively as requirements are specified and verified (such as when prototyping or rapid-development methodologies are employed).
Vendor-supplied systems:- For vendor-supplied systems, development controls for the vendor-supplied portion of the computerized system should be assessed during the vendor evaluation or supplier qualification. For custom-built systems and configurable systems, as well as for vendor-supplied systems that include custom components (such as custom-coded interfaces or custom report tools) and/or require configuration (such as configuration of security profiles in the software or configuration of the hardware within the network infrastructure), the system should be developed under an appropriate documented quality management system.
Custom-developed systems :-  For custom-developed systems or modules, the quality management system controls should include development of code in accordance with documented programming standards, review of code for adherence to programming standards and design specifications, and development testing that may include unit testing and module/integration testing.

System prototyping and rapid, agile development methodologies may be employed during the system build and development testing phase. There should be an adequate level of documentation of these activities.

Preparation for the system qualification phases:-
The system development and build phase should be followed by the system qualification phase. This typically consists of installation, operational and performance testing, but actual qualification required may vary depending on the scope of the validation project as defined in the validation plan and based upon a documented and justified risk assessment.
Prior to the initiation of the system qualification phase, the software program and the requirements and specifications documents should be finalized and subsequently managed under formal change control.

Persons who will be conducting the system qualification should be trained to adhere to the following requirements for system qualification:
·        test documentation should be generated to provide evidence of testing;
·        test documentation should comply with good documentation practices;
·        any discrepancies between actual test results and expected results should be documented and adequately resolved based upon risk prior to proceeding to subsequent test phases.
INSTALLATION QUALIFICATION
 The first phase of system testing is installation qualification (IQ), also referred to as installation verification testing. IQ should provide documented evidence that the computerized system, including software and associated hardware, is installed and configured in the intended system testing and production environments according to written specifications.
The IQ will verify, for example, that the computer hardware on which the software application is installed has the proper firmware and operating system; that all components are present and in the proper condition; and that each component is installed per the manufacturer or developer instructions.
IQ should include verification that configurable elements of the system are configured as specified. Where appropriate, this could also be done during OQ.
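As a flavor of what scripted IQ evidence can look like, here is a minimal Python sketch that compares the installed environment against a written specification. The approved values are hypothetical placeholders for whatever the IQ protocol actually specifies:

import platform

APPROVED_SPEC = {                      # hypothetical approved environment
    "os_system": "Linux",
    "kernel_prefix": "5.",
    "python_prefix": "3.11",
}

def iq_checks():
    """Compare the installed environment against the written specification."""
    failures = []
    if platform.system() != APPROVED_SPEC["os_system"]:
        failures.append(f"OS is {platform.system()}, expected {APPROVED_SPEC['os_system']}")
    if not platform.release().startswith(APPROVED_SPEC["kernel_prefix"]):
        failures.append(f"Kernel {platform.release()} outside the approved series")
    if not platform.python_version().startswith(APPROVED_SPEC["python_prefix"]):
        failures.append(f"Python {platform.python_version()} is not the approved version")
    return failures

for failure in iq_checks():
    print("IQ discrepancy:", failure)   # each discrepancy becomes documented evidence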

OPERATIONAL QUALIFICATION
The OQ, or operational/functional verification testing, should provide documented evidence that the software and hardware function as intended throughout anticipated operating ranges.
Functional testing should include, based upon risk:
·        an appropriate degree of challenge testing (such as boundary, range, limit and nonsense-entry testing) to verify that the system appropriately handles erroneous entries or erroneous use (see the sketch below);
·        verification that alarms are raised based upon alarm conditions;
·        verification that flags are raised to signal invalid or altered data.
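Here is what such challenge testing can look like in code: a minimal Python unittest sketch against a toy set-point function (the function and its 10-200 rpm limits are invented for illustration):

import unittest

def set_mixer_speed(rpm):
    """Toy function under test: accepts a set point within validated limits."""
    if not isinstance(rpm, (int, float)) or not (10 <= rpm <= 200):
        raise ValueError(f"set point {rpm!r} outside validated range 10-200 rpm")
    return float(rpm)

class ChallengeTests(unittest.TestCase):
    def test_boundaries_accepted(self):        # limit testing: the exact edges pass
        self.assertEqual(set_mixer_speed(10), 10.0)
        self.assertEqual(set_mixer_speed(200), 200.0)

    def test_out_of_range_rejected(self):      # range testing: just past the edges fails
        for bad in (9.99, 200.01, -5):
            with self.assertRaises(ValueError):
                set_mixer_speed(bad)

    def test_nonsense_entries_rejected(self):  # nonsense-entry testing
        for bad in ("fast", None, float("nan")):
            with self.assertRaises(ValueError):
                set_mixer_speed(bad)

if __name__ == "__main__":
    unittest.main()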
Note: the section below provides examples and is not an exhaustive list. Static, dust, power feed voltage fluctuations and electromagnetic interference could influence the system.

Hardware:-

The extent of validation should depend on the complexity of the system. Appropriate tests and challenges to the hardware should be performed as part of validation.

Hardware is considered to be equipment, and the focus should be on location, maintenance and calibration of hardware, as well as on qualification.
The qualification of the hardware should prove:
·        that the capacity of the hardware matches its assigned function (e.g. foreign language);
·        that it operates within the operational limits (e.g. memory, connector ports, input ports);
·        that the hardware configuration settings are appropriate and meet user and functional requirements;
·        that it performs acceptably under challenging conditions (e.g. long hours, temperature extremes);
·        reproducibility/consistency.

 Some of the hardware qualification may be performed by the computer vendor. However, the ultimate responsibility for the suitability of equipment used remains with the company.

Qualification protocols, reports (including data) should be kept by the company for the hardware in its configured state. When qualification information is produced by an outside firm, e.g. computer vendor, the records should be sufficiently complete (including general results and protocols) to allow the company to assess the adequacy of the qualification and verification activities. A mere certification of suitability from the vendor, for example, will be inadequate.

Functional testing of software should provide assurance that computer programs (especially those that control critical activities in manufacturing and processing) will function consistently within pre-established limits for both normal conditions as well as under worst-case conditions (e.g. out-of-limit, out-of-range, alarm conditions).

Functional testing, also known as “black box” testing, involves inputting normal and abnormal test cases and then evaluating outputs against those expected. It can apply to computer software or to a total system (reference: CEFIC GMP).

STANDARD OPERATING PROCEDURES AND TRAINING

Prior to the conduct of the PQ and user acceptance testing (UAT), and prior to the release of the computerized system for GXP use, there should be adequate written procedures and documents and training programmes created defining system use and control. These may include vendor-supplied user manuals as well as SOPs and training programmes developed in-house.
Example procedures and training programmes that should be developed include, but are not necessarily limited to:
·        system use procedures that address:
‒       routine operation and use of the system in the intended business process(es),
‒       review of the electronic data and associated metadata (such as audit trails) and how the source electronic records will be reconciled with printouts, if any,
‒       mechanisms for signing electronic data,
‒       system training requirements prior to being granted system access;
·        system administration procedures that address:
‒       granting and disabling user access and maintaining security controls,
‒       backup/restore,
‒       archival/retrieval,
‒       disaster recovery and business continuity,
‒       change management,
‒       incident and problem management,
‒       system maintenance.

PERFORMANCE QUALIFICATION AND USER ACCEPTANCE TESTING 

Note: The user requirements specifications should provide a basis for UAT that will be conducted by the system users during the PQ of the system.

PQ, which includes UAT, should be conducted to verify the intended system use and administration outlined in the URS, or equivalent document.
The PQ should be conducted in the production environment or in a validation environment that is equivalent to the production environment in terms of overall software and hardware configuration.

PQ testing should also include, as applicable, an appropriate degree of stress/load/volume testing based upon the anticipated system use and performance requirements in the production environment.
In addition, an appropriate degree of end-to-end or regression testing of the system should be conducted to verify the system performs reliably when system components are integrated in the fully-configured system deployed in the production environment.

 UAT should be conducted by system users to verify the adequacy of system use SOPs and data review SOP(s) and training programmes. The UAT should include verification of the ability to readily discern invalid and altered data, including the ability to efficiently review electronic data and metadata, such as audit trails.

IT system administrators should verify the adequacy of system administration SOP(s) and controls that will be routinely executed during normal operational use and administration of the system, including backup/restore and archival/retrieval processes.

 SYSTEM OPERATION AND MAINTENANCE :-

Manufacturers should have systems and procedures in place to ensure security of data and control access to computerized systems.

Suitable security systems should be in place to prevent unauthorized entry or manipulation or deletion of data through both the application software as well as in operating system environments in which data may be stored or transmitted. Data should be entered or amended only by persons authorized to do so.

The activity of entering data, changing or amending incorrect entries and creating backups should be done in accordance with SOPs.

Security should extend to devices used to store programs, such as tapes, disks and magnetic strip cards or other means. Access to these devices should be controlled.

Procedures for review of metadata, such as audit trails, should define the frequency, roles and responsibilities, and nature of these reviews.

Details on user profiles, access rights to systems, networks, servers, computer systems and software should be documented, and an up-to-date list of the individual user rights for the software, individual computer systems and networks should be maintained and subjected to change control. The level of detail should be sufficient to enable computer system validation personnel, IT personnel and any external auditor/inspector to ascertain that security features of the system and of software used to obtain and process critical data cannot be circumvented.

 All GXP computerized systems in a company, either stand-alone or in a network, should be monitored using an audit trail for the system that is configured to capture events that are relevant. These events should include all elements that need to be monitored to ensure that the integrity of the data could not have been compromised, such as but not limited to, changes in data, deletion of data, dates, times, backups, archives, changes in user access rights, addition/deletion of users and logins. The configuration and archival of these audit trails should be documented and also be subjected to change control. These audit trails should be validated to show that these cannot be modified in their archived form. Actions, performance of the system and acquisition of data should be traceable and identify the persons who made entries and or changes, approved decisions or performed other critical steps in system use or control.
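To make the “who, what, when and why” concrete, here is a minimal Python sketch of writing one audit trail entry to an append-only log. The file name and record identifier are illustrative, and a real system would additionally secure the log against modification and deletion:

import getpass
import json
from datetime import datetime, timezone

AUDIT_LOG = "audit_trail.jsonl"   # illustrative append-only log file

def audit(action, record_id, old, new, reason):
    """Append a who/what/when/why entry; the original record is never overwritten."""
    entry = {
        "who": getpass.getuser(),
        "when": datetime.now(timezone.utc).isoformat(),
        "what": {"action": action, "record": record_id, "old": old, "new": new},
        "why": reason,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:   # mode "a": append-only
        f.write(json.dumps(entry) + "\n")

audit("modify", "BATCH-042/net-weight", old=12.1, new=12.4, reason="transcription error")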

The entry of master data into a computerized system should be verified by an independent authorized person and locked before release for routine use.

Validated computerized systems should be maintained in the validated state once released to the GXP production environment.

There should be written procedures governing system operation and maintenance, including, for example:
·        performance monitoring;
·        change management and configuration management;
·        problem management;
·        programme and data security;
·        programme and data backup/restore and archival/retrieval (see the backup-verification sketch after this list);
·        system administration and maintenance;
·        data flow and data life cycle;
·        system use and review of electronic data and metadata (such as audit trails);
·        personnel training;
·        disaster recovery and business continuity;
·        availability of spare parts and technical support;
·        periodic re-evaluation.
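To make the backup/restore item concrete, here is a minimal, purely illustrative sketch that copies a file to a backup location and verifies the copy by checksum; the paths shown in the usage comment are hypothetical:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(source: Path, backup_dir: Path) -> str:
    """Copy a file to the backup location and verify the copy by checksum."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    target = backup_dir / source.name
    shutil.copy2(source, target)
    src_hash, dst_hash = sha256_of(source), sha256_of(target)
    if src_hash != dst_hash:
        raise IOError(f"Backup verification failed for {source}")
    return src_hash  # retain with the backup record for later restore checks

# Hypothetical usage:
# checksum = backup_and_verify(Path("batch_records/BR-0421.pdf"), Path("/mnt/backup/2024-05"))
```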

Computerized systems should be periodically reviewed to determine whether the system remains in a validated state or whether there is a need for revalidation. The scope and extent of the revalidation should be determined using a risk-based approach. The periodic review should cover, for example:
·        review of changes;
·        review of deviations;
·        review of incidents;
·        systems documentation;
·        procedures;
·        training;
·        effectiveness of corrective and preventive action (CAPA).
CAPA should be taken where indicated as a result of the periodic review. Automatic updates should be subject to review prior to becoming effective.
    
     SYSTEM RETIREMENT
Once the computerized system or components are no longer needed, the system or components should be retired in accordance with a change control procedure and formal plan for retirement.

Retirement of the system should include decommissioning of the software and hardware and retirement of applicable procedures, as necessary. Measures should be in place to ensure that the electronic records are maintained and readily retrievable throughout the required records retention period.

Records should be kept in a readable form and in a manner that preserves the content and meaning of the source electronic records. For example, if critical quality and/or compliance data need to be reprocessed after retirement of the system, the business owner may arrange for migration of the critical records to a new system and for verification of correct reprocessing of the data on the new system.
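One simple, illustrative way to support such verification (not prescribed by the guidelines) is to compare record-level checksums between the source export and the migrated data:

```python
import hashlib
import json

def record_checksum(record: dict) -> str:
    """Deterministic checksum of a record, independent of key order."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def verify_migration(source_records: list, migrated_records: list) -> list:
    """Return IDs of records that are missing or altered after migration."""
    src = {r["id"]: record_checksum(r) for r in source_records}
    dst = {r["id"]: record_checksum(r) for r in migrated_records}
    return [rid for rid, h in src.items() if dst.get(rid) != h]

# Hypothetical data: LOT-002 was altered during migration.
old = [{"id": "LOT-001", "assay": 99.1}, {"id": "LOT-002", "assay": 98.4}]
new = [{"id": "LOT-001", "assay": 99.1}, {"id": "LOT-002", "assay": 98.5}]
print(verify_migration(old, new))  # ['LOT-002'] -> investigate before sign-off
```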

The outcome of the retirement activities, including traceability of the data and computerized systems, should be presented in a report.

GAMP5 :-

GAMP stands for Good Automated Manufacturing Practice. Usually, when one hears the term GAMP5, it is in reference to a guidance document entitled GAMP5: A Risk-Based Approach to Compliant GxP Computerized Systems. This document is published by an industry trade group, the International Society for Pharmaceutical Engineering (ISPE), based on input from pharmaceutical industry professionals.
In a nutshell, GAMP5: A Risk-Based Approach to Compliant GxP Computerized Systems provides a framework for the risk-based approach to computer system validation where a system is evaluated and assigned to a predefined category based on its intended use and complexity. Categorizing the system helps guide the writing of system documentation (including specifications and test scripts and everything in between).
GAMP5’s approach can be summed up by the V-model diagram. The V-model juxtaposes the specifications produced for a system with the testing performed as part of the verification process. The types of specifications associated with a system are tied to its degree of complexity, as reflected in the GAMP categories below (Category 2, firmware under GAMP 4, is no longer used in GAMP5):
§ Category 1 – Infrastructure Software
§ Category 3 – Non-Configured Products
§ Category 4 – Configured Products
§ Category 5 – Custom Applications
§ Hardware Category 1 – Standard Hardware Components
§ Hardware Category 2 – Custom Built Hardware Components
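For orientation only, the software categories can be pictured as a simple lookup table; the example systems and effort descriptions below are illustrative, not taken from the GAMP guide itself:

```python
# Illustrative mapping of GAMP5 software categories to typical examples
# and the relative validation effort they attract (not an official list).
GAMP_CATEGORIES = {
    1: {"name": "Infrastructure Software",
        "examples": ["operating systems", "database engines"],
        "effort": "record version; qualify the platform"},
    3: {"name": "Non-Configured Products",
        "examples": ["COTS software used as installed"],
        "effort": "verify against requirements; limited testing"},
    4: {"name": "Configured Products",
        "examples": ["LIMS", "SCADA", "ERP configured to the process"],
        "effort": "test the configuration against requirements"},
    5: {"name": "Custom Applications",
        "examples": ["bespoke PLC code", "custom interfaces"],
        "effort": "full life-cycle testing, including design and code review"},
}

cat = GAMP_CATEGORIES[4]
print(f"Category 4 ({cat['name']}): {cat['effort']}")
```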

Computer System Validation Flow and Documentation Needs
·        System Classification Assessment
·        Validation Plan
·        User Requirement Specification
·        System Requirement Specification
·        Functional Risk Assessment
·        Design Specification
·        Configuration Baseline
·        Test Plan
·        Installation Qualification
·        Operational Qualification
·        Performance Qualification
·        Test Report
·        Requirements Traceability Matrix
·        Validation Summary Report
·        System Release Report
·        System Retirement
·        System Decommissioning Report

System Classification Assessment :- Assessment of GxP or non-GxP impact, applicability of electronic records and electronic signatures, computer categorization (GAMP5), business requirements and business process requirements.

Validation Plan :- The FDA uses the NIST definition: a management document describing the approach taken for a project. The plan typically describes the work to be done, resources required, methods to be used, configuration management and quality assurance procedures to be followed, schedules to be met, project organization, etc. "Project" in this context is a generic term; some projects may also need integration plans, security plans, test plans, quality assurance plans, etc. In practice, the validation plan describes how the validation project is going to be performed.

A Validation Plan should include:
• deliverables (documents) to be generated during the validation process;
• resources, departments and personnel who will participate in the validation project;
• time-lines for completing the validation project;
• acceptance criteria to confirm that the system meets defined requirements;
• compliance requirements for the system, including how the system will meet these requirements.
The plan should be written with an amount of detail that reflects system complexity. Plans should be approved, at a minimum, by the System Owner and Quality Assurance. Once approved, the plan should be retained according to your site document control procedures.

User Requirements Specification
The User Requirements Specification describes the business needs for what users require from the system. User Requirements Specifications are written early in the validation process, typically before the system is created. They are written by the system owner and end-users, with input from Quality Assurance. Requirements outlined in the URS are usually tested in the Performance Qualification or User Acceptance Testing. The URS is not intended to be a technical document; readers with only a general knowledge of the system should be able to understand the requirements outlined in it.
The URS is generally a planning document, created when a business is planning to acquire a system and is trying to determine its specific needs. When a system has already been created or acquired, or for less complex systems, the user requirements specification can be combined with the functional requirements document.

System Requirement Specification
A System Requirements Specification (SRS) (also known as a Software Requirements Specification) is a document or set of documentation that describes the features and behavior of a system or software application. It includes a variety of elements that attempt to define the functionality required by the customer to satisfy their different users. In addition to specifying how the system should behave, the specification also defines, at a high level, the main business processes that will be supported, the simplifying assumptions that have been made and the key performance parameters that will need to be met by the system.

Functional Risk Assessment
In validation, a Risk Assessment documents potential business and compliance risks associated with a system and the strategies that will be used to mitigate those risks. Risk Assessments justify the allocation of validation resources and can streamline the testing process. They also serve as a forum for users, developers, system owners and quality to discuss the system, which can have other intangible benefits. 21 CFR Part 11 does not require risk assessments, but Annex 11 does require a risk-management strategy.

Assigning risk should be a multi-disciplinary function. System owners, key end-users, system developers, information technology, engineers, and Quality should all participate if they are involved with the system. The Risk Assessment should be signed by the personnel who participated in the assessment.
There are many methods for Risk Assessment, but they generally all include rating the risk of each requirement in at least three specific categories:

• Criticality – how important a function is to the system. Low criticality means that the system can continue to function relatively normally even if the function is completely compromised; high criticality means that if the function is damaged, one of the primary functions of the system cannot be accomplished.

• Detectability – the ease of detecting an issue arising with a particular function. Risk is higher when the chance of detection is low; a high chance of detection corresponds to lower risk.

• Probability – the probability of an issue arising with a particular function. Low probability means there is little chance that the function will fail; high probability means there is a high chance that the function will fail.
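A common (though not mandated) way to combine the three ratings is an FMEA-style risk priority number, i.e. the product of the three scores. The scales and thresholds in this sketch are purely illustrative:

```python
def risk_priority(criticality: int, probability: int, detectability: int) -> tuple:
    """
    FMEA-style score: each factor rated 1 (low risk) to 3 (high risk).
    Note that detectability is rated inversely: hard-to-detect = 3.
    The thresholds below are illustrative, not from any guideline.
    """
    rpn = criticality * probability * detectability  # range 1..27
    if rpn >= 18:
        level = "HIGH: full challenge testing required"
    elif rpn >= 6:
        level = "MEDIUM: targeted functional testing"
    else:
        level = "LOW: verify via standard IQ/OQ"
    return rpn, level

# A highly critical function that rarely fails and is easy to detect:
print(risk_priority(criticality=3, probability=1, detectability=1))  # (3, 'LOW: ...')
# A critical, failure-prone, hard-to-detect function:
print(risk_priority(criticality=3, probability=3, detectability=3))  # (27, 'HIGH: ...')
```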

Design Specification
Design Specifications describe how a system performs the requirements outlined in the Functional Requirements. Depending on the system, this can include instructions on testing specific requirements, configuration settings, or review of functions or code. All requirements outlined in the functional specification should be addressed; requirements in the functional specification and the design specification are linked via the Traceability Matrix.
Good requirements are objective and testable. Design Specifications may include:
• specific inputs, including data types, to be entered into the system;
• calculations/code used to accomplish defined requirements;
• outputs generated from the system;
• technical measures to ensure system security;
• how the system meets applicable regulatory requirements.

Configuration Baseline
Configuration baselines are standard setups used when configuring machines in organizations. They provide a starting point from which machines can then be customized with respect to their specific roles in the network. For example, a Windows domain controller may not require Windows Media Services to be installed, since its primary function is that of a directory service, and a web server would not necessarily require a database to be installed. Additionally, specific services would be installed, turned off, or even removed completely on the basis of the final location of the system in the network architecture.
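An illustrative sketch of checking a machine's actual configuration against a role baseline; all service names and states below are hypothetical:

```python
# Hypothetical baseline for a web-server role: services that must be on/off.
BASELINE_WEB_SERVER = {
    "IIS": "enabled",
    "WindowsMediaServices": "disabled",
    "SQLServer": "disabled",
    "RemoteDesktop": "enabled",
}

def baseline_deviations(actual: dict, baseline: dict) -> dict:
    """Return settings where the machine deviates from its role baseline."""
    return {
        key: {"expected": expected, "actual": actual.get(key, "absent")}
        for key, expected in baseline.items()
        if actual.get(key, "absent") != expected
    }

machine = {"IIS": "enabled", "WindowsMediaServices": "enabled",
           "SQLServer": "disabled", "RemoteDesktop": "enabled"}
print(baseline_deviations(machine, BASELINE_WEB_SERVER))
# {'WindowsMediaServices': {'expected': 'disabled', 'actual': 'enabled'}}
```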

 Test Plan

A test plan is a document describing the software testing scope and activities. It is the basis for formally testing any software/product in a project. The ISTQB definition of a test plan is: a document describing the scope, approach, resources and schedule of intended test activities.

Installation Qualification :- IQ is an acronym for "Installation Qualification", which is defined by the FDA as establishing by objective evidence that all key aspects of the process equipment and ancillary system installation adhere to the manufacturer's approved specification and that the recommendations of the supplier of the equipment are suitably considered.

Operational Qualification :- Documented verification that a system operates according to written and pre-approved specifications throughout all specified operating ranges at the customer site.

Performance Qualification :- Documented verification that a system is capable of performing the activities of the processes it is required to perform, according to written and pre-approved specifications, within the scope of the business process and operating environment.

Test Report :- A "test package" consisting of a corresponding set of system tests and business cases, including a final test report, covering validation needs in OQ and PQ.

Requirements Traceability Matrix :-
The RTM shall document the relationship between the requirements defined in the URS and FRS documents and the qualification tests defined in the respective test scripts (such as the IQ, OQ and PQ scripts).

The Requirements Traceability Matrix (RTM) is a document that links requirements throughout the validation process. Its purpose is to ensure that all requirements defined for a system are tested in the test protocols. The traceability matrix is a tool both for the validation team, to ensure that requirements are not lost during the validation project, and for auditors, to review the validation documentation. The RTM is usually developed in concurrence with the initial list of requirements (either the User Requirements Specification or the Functional Requirements Specification). As the Design Specifications and Test Protocols are developed, the traceability matrix is updated to include the new documents. Ideally, requirements should be traced to the specific test step in the testing protocol in which they are tested.
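A minimal illustrative sketch of an RTM as a mapping from requirement IDs to test steps, with a check that flags untraced requirements; all identifiers below are hypothetical:

```python
# Hypothetical requirement and test-step identifiers.
requirements = ["URS-001", "URS-002", "URS-003", "URS-004"]

rtm = {
    "URS-001": ["OQ-TC-01 step 3"],
    "URS-002": ["OQ-TC-02 step 1", "PQ-TC-05 step 2"],
    "URS-003": [],                      # not yet traced to a test
    "URS-004": ["IQ-TC-01 step 4"],
}

untested = [r for r in requirements if not rtm.get(r)]
if untested:
    print(f"Requirements without test coverage: {untested}")  # ['URS-003']
```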

Validation Summary Report :- Provides an overview of the entire validation project. Once the summary report is signed, the validation project is considered complete. When regulatory auditors review validation projects, they typically begin by reviewing the summary report. When validation projects use multiple testing systems, some organizations will produce a testing summary report for each test protocol, then summarize the project with a final summary report.
The amount of detail in the reports should reflect the relative complexity, business use and regulatory risk of the system. The report is often structured to mirror the validation plan that initiated the project, and is reviewed and signed by the System Owner and Quality.
The collection of documents produced during a validation project is called a Validation Package. Once the validation project is complete, all validation packages should be stored according to your site document control procedures. Summary reports should be approved by the System Owner and Quality Assurance.

System Release Report :- The System Release Report (SRR) is approved and issued to notify users that the system has been successfully validated and released for use in the production department.

System Retirement :- Retirement of computer systems used in FDA-regulated and equivalent international environments is part of the validation life cycle and should follow well-defined and documented processes. While the earlier phases are well understood, the industry is often unsure what to do when systems are taken out of service. Most critical are strategies and procedures to retrieve data after system retirement.

System Decommissioning Report :-
A decommissioning plan must be prepared for systems that are to be retired from operational service, so that the process is documented and controlled. Consideration must be given to the archiving of data and records retention requirements, along with any hardware disposal.


Required Documentation for Different Categories (GAMP5)

| Sr. No. | Required Document                  | Category 1 | Category 2 | Category 3 | Category 4 | Category 5 |
|---------|------------------------------------|------------|------------|------------|------------|------------|
| 1       | Validation Master Plan             | N          | NA         | N          | Y          | N          |
| 2       | System Classification Assessment   | N          | NA         | Y          | Y          | Y          |
| 3       | Validation Plan                    | Y          | NA         | Y          | Y          | Y          |
| 4       | User Requirement Specification     | Y          | NA         | N          | Y          | Y          |
| 5       | System Requirement Specification   | N          | NA         | N          | Y          | Y          |
| 6       | Functional Risk Assessment         | N          | NA         | Y          | Y          | Y          |
| 7       | Functional Specification           | N          | NA         | Y          | Y          | Y          |
| 8       | Test Plan                          | Y          | NA         | Y          | Y          | Y          |
| 9       | IQ                                 | Y          | NA         | Y          | Y          | Y          |
| 10      | OQ                                 | Y          | NA         | NA         | Y          | Y          |
| 11      | PQ *                               | Y          | NA         | Y          | Y          | N          |
| 12      | IOQ #                              | Y          | NA         | Y          | Y          | Y          |
| 13      | OPQ #                              | Y          | NA         | Y          | Y          | Y          |
| 14      | Test Report                        | Y          | NA         | Y          | Y          | Y          |
| 15      | Requirements Traceability Matrix   | Y          | NA         | NA         | Y          | Y          |
| 16      | Validation Summary Report          | Y          | NA         | Y          | Y          | Y          |
| 17      | Factory Acceptance Test            | N          | NA         | N          | Y          | N          |
| 18      | Site Acceptance Test               | N          | NA         | N          | Y          | N          |

(Y = required, N = not required, NA = not applicable)

* Performance Qualification is not required where the PQ is performed as part of the equipment qualification.

# Applicable to HMI/MMI or non-GxP based systems.
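The matrix above can also be encoded for programmatic lookup; the sketch below covers only a few rows of the table as an example:

```python
# A few rows of the deliverables matrix above, encoded for lookup
# (Y = required, N = not required, NA = not applicable).
DELIVERABLES = {
    "Validation Plan":                   {1: "Y", 3: "Y", 4: "Y", 5: "Y"},
    "User Requirement Specification":    {1: "Y", 3: "N", 4: "Y", 5: "Y"},
    "Functional Risk Assessment":        {1: "N", 3: "Y", 4: "Y", 5: "Y"},
    "Requirements Traceability Matrix":  {1: "Y", 3: "NA", 4: "Y", 5: "Y"},
}

def required_docs(category: int) -> list:
    """Return the documents marked 'Y' for a given GAMP category."""
    return [doc for doc, row in DELIVERABLES.items() if row.get(category) == "Y"]

print(required_docs(4))  # all four encoded documents are required for Category 4
```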

Reference :-
1. WHO guidelines on validation of computerized systems
2. ISPE GAMP5: A Risk-Based Approach to Compliant GxP Computerized Systems
3. US FDA 21 CFR Part 11 (Electronic Records; Electronic Signatures)
4. EU GMP Annex 11 (Computerised Systems)






