ANALYTICAL METHOD VALIDATION


Introduction: After an analytical procedure has been developed, it is essential to ensure that it will consistently produce the intended result with a high degree of accuracy and precision. The method should give a specific result that is not affected by external factors. This creates the requirement to validate analytical procedures.

Validation of an analytical procedure is the process by which it is established, through laboratory studies, that the performance characteristics of the procedure meet the requirements for the intended analytical applications.

Analytical method validation is to be performed for new analytical methods, and for existing methods whenever changes are made to the procedure, the composition of the drug product, or the synthesis of the drug substance.

Common types of analytical procedures that can be validated:

  • Identification tests;
  • Quantitative tests for impurities content;
  • Limit tests for the control of impurities;
  • Quantitative tests of the active moiety in samples of drug substance or drug product or other selected component(s) in the drug product.

Typical validation characteristics which should be considered are listed below:

  • Accuracy
  • Precision
  • Specificity
  • Detection Limit
  • Quantitation Limit
  • Linearity
  • Range
  • Robustness
  • Stability of solution

The validation characteristics are to be evaluated on the basis of the type of analytical procedures.

Parameter          | Identification | Impurities: Quantitative | Impurities: Limit | Quantitative Tests (Assay)
-------------------|----------------|--------------------------|-------------------|---------------------------
Accuracy           | Not Required   | Required                 | Not Required      | Required
Precision          | Not Required   | Required                 | Not Required      | Required
Specificity        | Required       | Required                 | Required          | Required
Detection Limit    | Not Required   | Not Required             | Required          | Not Required
Quantitation Limit | Not Required   | Required                 | Not Required      | Not Required
Linearity          | Not Required   | Required                 | Not Required      | Required
Range              | Not Required   | Required                 | Not Required      | Required


Methods and Terminology

1.   Accuracy
The accuracy of an analytical method is the closeness of the test results obtained by that method to the true value. It is recommended that accuracy be determined using a minimum of nine determinations over a minimum of three concentration levels covering the specified range (i.e., three concentrations with three replicates each).

Recovery (%) = (Analytical Result / True Value) × 100

The recovery should fall within the established control limits.

The following method can be applied to calculate the Upper Control Limit (UCL) and Lower Control Limit (LCL). It involves the moving range, defined as the absolute difference between two consecutive measurements (|xi − xi−1|).
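As a sketch of this moving-range approach, the following assumes the common individuals-chart convention, which places the limits at the mean ± 2.66 times the average moving range (2.66 = 3/d2 with d2 = 1.128); the recovery values are illustrative:

```python
def control_limits(values):
    """UCL/LCL for a series of results using the moving-range method.

    Individuals-chart convention: limits at mean +/- 2.66 * average
    moving range (2.66 = 3/d2, where d2 = 1.128 for subgroups of 2).
    """
    n = len(values)
    mean = sum(values) / n
    # Moving range: absolute difference between consecutive measurements
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    ucl = mean + 2.66 * mr_bar
    lcl = mean - 2.66 * mr_bar
    return lcl, ucl

# Illustrative percent recoveries from an accuracy study
recoveries = [99.1, 100.4, 98.7, 101.2, 99.8, 100.1]
lcl, ucl = control_limits(recoveries)
```

Any recovery outside [LCL, UCL] would then be flagged for investigation.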

2. Precision
The precision of an analytical method is the degree of agreement among individual test results when the method is applied repeatedly to multiple samplings of a homogeneous sample. Precision is usually expressed as the standard deviation or relative standard deviation (coefficient of variation) of a series of measurements. The relative standard deviation (RSD) is determined by the equation:

RSD (%) = (100 × s) / x̄,  where  s = √[ Σ(xi − x̄)² / (n − 1) ]

where xi is an individual measurement in a set of n measurements and x̄ is the arithmetic mean of the set. Generally, the RSD should not be more than 2%.
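A minimal Python sketch of the RSD calculation, using illustrative replicate assay results:

```python
import math

def rsd_percent(values):
    """Relative standard deviation (coefficient of variation) in percent."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (n - 1 degrees of freedom)
    s = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    return 100 * s / mean

# Six illustrative replicate results at 100% of the test concentration
results = [99.8, 100.2, 99.5, 100.4, 99.9, 100.1]
rsd = rsd_percent(results)  # well below the 2% guideline
```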

2.1  Repeatability
Repeatability refers to the use of the analytical procedure within a laboratory over a short period of time using the same analyst with the same equipment. Repeatability should be assessed using a minimum of nine determinations covering the specified range for the procedure (i.e., three concentrations and three replicates of each concentration or using a minimum of six determinations at 100% of the test concentration).

2.2  Reproducibility
Reproducibility expresses the precision between laboratories (collaborative studies, usually applied to standardization of methodology). Reproducibility is usually demonstrated by means of an inter-laboratory trial.

2.3  Intermediate Precision
Intermediate precision expresses within-laboratory variation due to random events such as different days, different analysts, different equipment, etc.

The standard deviation, relative standard deviation (coefficient of variation) and confidence interval should be reported for each type of precision investigated.

3. Specificity
Specificity is the ability to measure accurately and specifically the analyte of interest in the presence of other components that may be expected to be present in the sample matrix such as impurities, degradation products and matrix components. It must be demonstrated that the analytical method is unaffected by the presence of spiked materials (impurities and/or excipients).

In case of identification tests, the method should be able to discriminate between compounds of closely related structures which are likely to be present. Similarly, in case of assay and impurity tests by chromatographic procedures, specificity can be demonstrated by the resolution of the two components which elute closest to each other.[9]

It is not always possible to demonstrate that an analytical procedure is specific for a particular analyte (complete discrimination). In this case a combination of two or more analytical procedures is recommended to achieve the necessary level of discrimination.

4.  Linearity
Linearity is the ability of the method to elicit test results that are directly, or by a well-defined mathematical transformation, proportional to analyte concentration within a given range.[10] It should be established initially by visual examination of a plot of signal as a function of analyte concentration or content. If there appears to be a linear relationship, the results should be evaluated by appropriate statistical methods. Data from the regression line provide mathematical estimates of the degree of linearity. The correlation coefficient, y-intercept, and slope of the regression line should be submitted.

It is recommended to have a minimum of five concentration levels, along with certain minimum specified ranges. For assay, the minimum specified range is from 80% -120% of the target concentration.

Regression line, y = ax + b

where a is the slope of the regression line and b is the y-intercept.

Here, x may represent analyte concentration and y may represent the signal responses.

Correlation coefficient,

r = Σ(xi − x̄)(yi − ȳ) / √[ Σ(xi − x̄)² · Σ(yi − ȳ)² ]

where xi and yi are individual measurements in a set of n measurements, and x̄ and ȳ are the arithmetic means of the x and y values, respectively.
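A linearity evaluation along these lines can be sketched as follows; the concentration levels and peak-area responses are illustrative, not real data:

```python
import math

def linear_fit(xs, ys):
    """Least-squares regression y = a*x + b and correlation coefficient r."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    syy = sum((y - y_mean) ** 2 for y in ys)
    a = sxy / sxx                 # slope of the regression line
    b = y_mean - a * x_mean       # y-intercept
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r

# Five concentration levels spanning 80-120% of the target (for assay)
conc = [80.0, 90.0, 100.0, 110.0, 120.0]        # percent of target
resp = [802.0, 898.0, 1001.0, 1103.0, 1196.0]   # illustrative peak areas
slope, intercept, r = linear_fit(conc, resp)
```

A correlation coefficient very close to 1 supports the visual impression of linearity over the tested range.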

5. Detection Limit and Quantitation Limit
The Detection Limit is defined as the lowest concentration of an analyte in a sample that can be detected, not quantified. The Quantitation Limit is the lowest concentration of an analyte in a sample that can be determined with acceptable precision and accuracy under the stated operational conditions of the analytical procedures. Some of the approaches to determine the Detection Limit and Quantitation Limit are:

a. Visual Evaluation
Visual evaluation may be used for non-instrumental methods. For such procedures, the detection limit is generally determined by analyzing samples with known concentrations of analyte and establishing the minimum level at which the analyte can be reliably detected; the quantitation limit is determined in the same way, by establishing the minimum level at which the analyte can be determined with acceptable accuracy and precision. The visual evaluation approach may also be used with instrumental methods.

b. Signal to Noise
This approach can only be applied to analytical procedures that exhibit baseline noise. The signal-to-noise ratio is determined by comparing measured signals from samples with known low concentrations of analyte with those of blank samples, and establishing the minimum concentration at which the analyte can be reliably detected (for the Detection Limit) or reliably quantified (for the Quantitation Limit). A signal-to-noise ratio of about 3:1 or 2:1 is generally considered acceptable for estimating the detection limit, and a typical signal-to-noise ratio of 10:1 is used for establishing the quantitation limit.
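As a small sketch, one common pharmacopoeial convention computes the signal-to-noise ratio as 2H/h, where H is the peak height measured from the extrapolated baseline and h is the peak-to-peak noise of a blank; the numeric values below are purely illustrative:

```python
def signal_to_noise(peak_height, blank_noise):
    """S/N using the 2H/h convention: peak_height (H) is measured from
    the extrapolated baseline; blank_noise (h) is the peak-to-peak
    noise observed in a blank chromatogram over a comparable window."""
    return 2 * peak_height / blank_noise

# Illustrative values: a small peak of height 15 units over noise of 3 units
sn = signal_to_noise(15.0, 3.0)  # S/N of 10, suitable for the quantitation limit
```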

c. Standard Deviation of the response and the Slope.

The Detection Limit may be expressed as:

DL = 3.3σ / s

The Quantitation Limit may be expressed as:

QL = 10σ / s

where σ is the standard deviation of the response and s is the slope of the calibration (linearity) curve.

The method used for determining the detection limit and the quantitation limit should be presented. If DL and QL are determined based on visual evaluation or based on signal to noise ratio, the presentation of the relevant chromatograms is considered acceptable for justification.

6. Range

The range of an analytical procedure is the interval between the upper and lower levels of analyte (including these levels) that have been demonstrated to be determined with a suitable level of precision, accuracy, and linearity using the procedure as written. The range is normally expressed in the same units as test results (e.g., percent) obtained by the analytical procedure.

The following minimum specified ranges should be considered:

  • For Assay of a Drug Substance (or a drug product) the range should be from 80% to 120% of the test concentration.
  • For Determination of an Impurity: from 50% to 120% of the acceptance criterion.
  • For Content Uniformity: a minimum of 70% to 130% of the test concentration, unless a wider or more appropriate range based on the nature of the dosage form (e.g., metered-dose inhalers) is justified.
  • For Dissolution Testing: ±20% over the specified range

(e.g., if the acceptance criteria for a controlled-release product cover a region from 20%, after 1 hour, and up to 90%, after 24 hours, the validated range would be 0% to 110% of the label claim).

7. Robustness
The robustness of an analytical procedure is a measure of its capacity to remain unaffected by small but deliberate variations in the procedural parameters listed in the procedure documentation, and it provides an indication of the procedure's suitability during normal usage. Robustness may be determined during development of the analytical procedure.[15]

If measurements are susceptible to variations in analytical conditions, the analytical conditions should be suitably controlled or a precautionary statement should be included in the procedure. One consequence of the evaluation of robustness should be that a series of system suitability parameters (e.g., resolution test) is established to ensure that the validity of the analytical procedure is maintained whenever used.[16]

Examples of typical variations are:

  • stability of analytical solutions;
  • extraction time.

In the case of liquid chromatography, examples of typical variations are:

  • influence of variations of pH in a mobile phase;
  • influence of variations in mobile phase composition;
  • different columns (different lots and/or suppliers);
  • temperature;
  • flow rate.

In the case of gas-chromatography, examples of typical variations are:

  • different columns (different lots and/or suppliers);
  • temperature;
  • flow rate.

System Suitability Testing
System suitability testing is an integral part of many analytical procedures. The tests are based on the concept that the equipment, electronics, analytical operations and samples to be analyzed constitute an integral system that can be evaluated as such. System suitability test parameters to be established for a particular procedure depend on the type of procedure being validated. They are especially important in the case of chromatographic procedures.

Interpretation and Treatment of Variation of Analytical Data
Analytical procedures are developed and validated to ensure the quality of drug products. The analytical data can be treated and interpreted for the scientific acceptance. The statistical tools that may be helpful in the interpretation of analytical data are described. Many descriptive statistics, such as the mean and standard deviation, are in common use. Other statistical tools, such as calculating confidence interval, outlier tests, etc. can be performed using several different, scientifically valid approaches.

1. Confidence Interval:
A confidence interval for the mean may be considered in the interpretation of data. Such intervals are calculated from several data points using the sample mean (x̄) and sample standard deviation (s) according to the formula:

x̄ ± t(α/2, n−1) · s / √n

in which t(α/2, n−1) is a statistical number dependent upon the sample size (n), the number of degrees of freedom (n − 1), and the desired confidence level (1 − α).

Its values are obtained from published tables of the Student t-distribution. The confidence interval provides an estimate of the range within which the “true” population mean (µ) falls, and it also evaluates the reliability of the sample mean as an estimate of the true mean. If the same experimental set-up were to be replicated over and over and a 95% (for example) confidence interval for the true mean is calculated each time, then 95% of such intervals would be expected to contain the true mean, µ. One cannot say with certainty whether or not the confidence interval derived from a specific set of data actually collected contains µ. However, assuming the data represent mutually independent measurements randomly generated from a normally distributed population the procedure used to construct the confidence interval guarantees that 95% of such confidence intervals contain µ.
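A minimal sketch of this calculation, with an illustrative data set; the critical value t(0.025, 5) = 2.571 is taken from a published Student t table rather than computed:

```python
import math

def confidence_interval(values, t_crit):
    """Two-sided CI for the mean: x_bar +/- t * s / sqrt(n).

    t_crit is the two-sided Student t value for the chosen confidence
    level and n - 1 degrees of freedom, taken from published tables
    (e.g., t = 2.571 for 95% confidence and 5 degrees of freedom).
    """
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    half_width = t_crit * s / math.sqrt(n)
    return mean - half_width, mean + half_width

# Six illustrative replicate results; df = 5, so t(0.025, 5) = 2.571
data = [99.8, 100.2, 99.5, 100.4, 99.9, 100.1]
low, high = confidence_interval(data, t_crit=2.571)
```

The interval (low, high) is the range expected to contain the true mean µ in 95% of repeated experiments.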

2. Outlying Results:
Occasionally, observed analytical results are very different from those expected. Aberrant, anomalous, contaminated, discordant, spurious, suspicious or wild observations; and flyers, rogues, and mavericks are properly called outlying results. Like all laboratory results, these outliers must be documented, interpreted, and managed. Such results may be accurate measurements of the entity being measured, but are very different from what is expected. Alternatively, due to an error in the analytical system, the results may not be typical, even though the entity being measured is typical. When an outlying result is obtained, systematic laboratory and process investigations of the result are conducted to determine if an assignable cause for the result can be established. Factors to be considered when investigating an outlying result include—but are not limited to—human error, instrumentation error, calculation error, and product or component deficiency. If an assignable cause that is not related to a product or component deficiency can be identified, then retesting may be performed on the same sample, if possible, or on a new sample.

When used appropriately, outlier tests are valuable tools for pharmaceutical laboratories. Several tests exist for detecting outliers such as the Extreme Studentized Deviate (ESD) Test, Dixon's Test, and Hampel's Rule.

Choosing the appropriate outlier test will depend on the sample size and distributional assumptions. Many of these tests (e.g., the ESD Test) require the assumption that the data generated by the laboratory on the test results can be thought of as a random sample from a population that is normally distributed, possibly after transformation.

3. Generalized Extreme Studentized Deviate (ESD) Test
This is a modified version of the ESD Test that allows for testing up to a previously specified number, r, of outliers from a normally distributed population. Let r equal 1, and n equal 10.

Normalize each result by subtracting the mean from each value and dividing this difference by the standard deviation.

Take the absolute value of these results, select the maximum value (|R1|), and compare it to a previously specified tabled critical value λ1 based on the selected significance level (for example, 5%). If the maximum value is larger than the tabled critical value, it is identified as being inconsistent with the remaining data; if it is less than the tabled critical value, no outlier is identified. Sources for λ-values are included in many statistical textbooks.
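The steps above (for the r = 1, n = 10 case) can be sketched as follows; the data set is illustrative, and λ1 = 2.29 is the approximate tabled 5% critical value for n = 10, which should be confirmed against a published table before use:

```python
import math

def max_studentized_deviate(values):
    """Normalize each result by (x - mean) / s and return the largest
    absolute deviate |R1| together with its index in the data."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    deviates = [abs(x - mean) / s for x in values]
    r1 = max(deviates)
    return r1, deviates.index(r1)

# Ten illustrative results, the last one suspiciously high
data = [100.0, 100.3, 99.7, 100.1, 99.9, 100.2, 99.8, 100.0, 100.1, 103.0]
r1, idx = max_studentized_deviate(data)

lambda_1 = 2.29  # approximate tabled critical value for n = 10, alpha = 5%
is_outlier = r1 > lambda_1
```

If `is_outlier` is true, the flagged result would then be subjected to the laboratory investigation described above rather than simply discarded.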

CONCLUSION
Method validation is an important analytical tool to ensure the accuracy, specificity, and precision of analytical procedures. The process also determines the detection and quantitation limits for the estimation of drug components, and the validation procedures are performed along with system suitability testing. Statistical tools are used to interpret the analytical results for the validation characteristics.
The validation of analytical methods requires not only the evaluation of performance characteristics but also the statistical treatment of the analytical data. The acceptability of variation in the analytical data is determined by these treatments.



