# Section 8: Analytical Standardization and Calibration
How to Cite this Reference

The recommended form of citation is: John R. Rumble, ed., CRC Handbook of Chemistry and Physics, 102nd Edition (Internet Version 2021), CRC Press/Taylor & Francis, Boca Raton, FL. If a specific table is cited, use the format: "Physical Constants of Organic Compounds," in CRC Handbook of Chemistry and Physics, 102nd Edition (Internet Version 2021), John R. Rumble, ed., CRC Press/Taylor & Francis, Boca Raton, FL.

Thomas J. Bruno

# Overview

Most modern instrumental techniques used in analytical chemistry produce an output or signal that is not absolute; the signal or peak is not a direct quantitative measure of concentration or target analyte quantity. Thus, to perform quantitative analysis, one must convert the raw output from an instrument (information) into a quantity (knowledge). This is done by standardizing or calibrating the raw response from an instrument (Refs. 1-4). Here, we briefly summarize the most common methods applied in analytical chemistry, recognizing that this is a very large field. We note that the common use of the term “standardization” is not to be confused with the application of standard methods as specified by regulatory or consensus standard organizations.

# Samples

In all of the discussion to follow, we assume that the sample has been properly drawn from the parent population material and properly prepared. Clearly, the most precise analytical methods and the most painstaking calibration methods are useless if applied to a sample that does not represent reality. Nevertheless, the term “sampling,” which describes the process of obtaining the sample (from the population material), implies the existence of a sampling uncertainty (arising mainly from population material heterogeneity) (Refs. 5 and 6). Thus, the analytical result is an estimate of what would be obtained from the parent population material. The theory, concepts, and nomenclature regarding samples and sampling constitute a complex, statistically based, sub-specialty of analytical chemistry well beyond the scope presented here. We begin with some simplified definitions (Ref. 7):

• Aliquot: An aliquot is a known fraction of a homogeneous mass or volume.
• Amount of substance: The amount of substance is the fundamental quantity of material, measured in moles.
• Analyte: The analyte is the target component or compound in the sample for which one desires a measurement.
• Bias: As applied to sampling, bias refers to a systematic displacement, error, uncertainty, or mistake caused by a flaw in the sampling procedure.
• Convenience sample: A sample chosen on the basis of accessibility, expediency, cost, efficiency, or other reason not directly concerned with sampling parameters.
• Determination: The determination is the entire analytical procedure or method performed on a test portion.
• Matrix: The matrix is the background or carrier material of the sample that includes all components except the analyte(s) of interest.
• Phase: The phase describes the physical state of a substance: primarily solid, liquid, gas, but this term might include more detailed descriptions to include supercritical fluid, plasma, etc. Note that the term “vapor” typically refers to a gas phase above and in equilibrium with a condensed phase (often called the headspace in chemical analysis).
• Population material: The population material is the entirety of the bulk material from which the sample is drawn. This might be a plot of earth, a warehouse full of sugar, or a tank of jet fuel.
• Quantity: The quantity refers to the mass or volume of a substance.
• Random sample: A sample selected so that any portion of the sample has an equal or known chance of being chosen for measurement.
• Representative sample: A sample resulting from a sampling process that can be expected to adequately reflect the properties of interest of the parent population.  A representative sample may be a random sample. The degree of representativeness of the sample is limited by cost or convenience.
• Selective sample: A sample that is deliberately chosen by using a sampling plan that eliminates materials with certain characteristics or selects only a material with other relevant characteristics desired for an analysis.
• Test portion: The test portion is the actual material removed from a sample for analysis.
• Umpire sample: A sample taken, prepared, and stored in an agreed-upon manner for the purpose of settling a dispute; the agreement specifies the test method and procedure, and the result serves as the basis for acceptance, rejection, or economic adjustment. This is sometimes called a referee sample.
• Unknown: The unknown is a term that describes the target measurement or unknown quantity that is desired for the analyte, or the analyte itself.

Sampling uncertainty is that part of the total uncertainty in an analytical procedure or determination that results from using only a fraction of the population material. In this respect, sampling by any method is an extrapolation process. Because the sampling uncertainty is usually ignored for an individual analysis on an individual test portion, the sampling uncertainty is considered as being due entirely to the variability of the test portion. It is therefore assessed, when necessary, by replicating the sampling from the parent population material and statistically isolating the uncertainty thus introduced through analysis of variance. Typically, the problems associated with liquid population material are less complex, but they must not be ignored. Sample stratification, concentration and thermal gradients, poor mixing, and gradients associated with flow are all real effects that must be considered. Sampling uncertainty is often minimized by field and laboratory processing, with procedures that can include mixing, reduction, coning and quartering, riffling, milling, and grinding.

Another aspect that must be considered subsequent to sampling is sample preservation and handling. The integrity of the sample must be preserved during the inevitable delay between sampling and analysis. Sample preservation may include the addition of preservatives or buffer solutions, pH adjustment, use of an inert gas “blanket,” and cold storage or freezing.

# Calibration and Standardization

## External Standard Methods

The external standard method can be applied to nearly all instrumental techniques, within the general limits discussed here and any specific limitations applicable to individual techniques. This method is conceptually simple; the user constructs a calibration curve of instrumental response with prepared mixtures containing the analyte(s) over a range of concentrations, an example of which is shown in Figure 1a. Thus, the curve represents the raw instrumental response of the analyte as a function of analyte concentration or amount. Each point on this plot must be measured several times so that the repeatability can be assessed. Only random uncertainty should be observed in the replicates; trends of increasing or decreasing response (hysteresis) must be remedied by identifying the source and adjusting the method accordingly. The calibration solutions should be randomized (that is, measured in random order). Although called a calibration “curve,” ideally the signal versus concentration plot is linear, or substantially linear (that is, any areas of nonlinearity are either unimportant for the analysis or are localized, minor, and properly treated by the measurement technique). In some cases, the response may be linearizable (for example, by calculating the logarithm of the raw response). If a curve shows nonlinearity in an area that is important for the analysis, one must measure more concentrations (data points) in the region of curvature.

In practice, the line that results from the calibration is fit with an appropriate model, and the desired value for the unknown concentration is calculated. The curve can be used graphically if approximation suffices. Mixtures prepared for external standard calibration can contain one or many analytes. Once a calibration curve is prepared, it can often be used for some time period, provided such a procedure has been previously validated (that is, the stability of the standards and the instrument over the time of use has been assessed). Otherwise, it is best to measure the unknown and the standards within a short period of time. Moreover, if any major change is made to the instrumentation (changing a detector or detector parameters, changing a chromatographic column, etc.), the standards must be remeasured.
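As a minimal sketch of this fit-and-invert procedure, the snippet below fits a straight line to a calibration curve by ordinary least squares and then inverts it for an unknown. All concentrations, signals, and units are hypothetical; a real application would also carry the replicate uncertainties through the fit.

```python
# Hypothetical external standard calibration: fit signal = m*conc + b by
# least squares, then invert the fitted line for an unknown signal.

def fit_line(x, y):
    """Return slope m and intercept b of the least-squares line y = m*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    m = sxy / sxx
    b = mean_y - m * mean_x
    return m, b

# Calibration standards: concentration (mg/L) and mean instrument response.
conc   = [1.0, 2.0, 4.0, 8.0, 16.0]
signal = [2.1, 4.0, 8.2, 16.1, 31.9]   # means of randomized replicates

m, b = fit_line(conc, signal)

# Invert the curve for an unknown that gave a signal of 12.0; this signal
# falls inside the bracketed calibration range, so no extrapolation occurs.
unknown_signal = 12.0
unknown_conc = (unknown_signal - b) / m
```

Note that the unknown's signal lies within the range spanned by the standards, as the text requires; extrapolating the fitted line beyond the highest standard would need separate justification.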

FIGURE 1a. An example calibration curve prepared by use of the external standard method. The instrument response is represented by A, and the concentration resulting in that response is [A]. While curves for two analytes are shown, in principle, one can plot as many analytes as desired. While five points per analyte have been shown, one can measure as many as required. Note that a region of nonlinearity is shown in the latter part of the curve for one of the components. One would require a larger number of points to adequately represent and fit any nonlinear areas.

To successfully use the method, the standard mixtures must be in a concentration range that is comparable to that of the unknown analyte, and ideally should bracket the unknown. Multiple measurements of each standard mixture should be made to establish the repeatability of points on the curve. Many instrumental methods have operating ranges (frequency, temperature, etc.) in which the uncertainty is minimized, so the components and concentrations of the standard mixtures must be chosen with these ranges in mind. The standard mixtures should be in the same matrix as the unknown, and the matrix must not interfere with the unknown or other standard mixture components. Any pretreatment of the unknown must also be reflected in the standard mixtures. As with any calibration method, components in the standard mixtures must be available at a high (or at least known) purity, must be stable during preparation, and must be soluble in the required matrix. Unless the physical phenomenon of a measurement is well understood, extrapolation beyond the curve is not recommended (and indeed is usually strongly discouraged); nevertheless, extrapolation is occasionally done in practice. In those cases, one must be cautious, report exactly how the extrapolation was done, and assess any increase in uncertainty that may result. Note that the curve might not extrapolate through the origin. This is usually the result of adsorption (of components on container walls), carryover hysteresis, absorption (of components in seals or septa), or component degradation or evaporation.

A major consideration with external standardization is that typically, the sample size (for example, the injection volume in chromatography) must be maintained constant for standard mixtures and the unknowns. If the sample size varies slightly, it is often possible to apply a correction to the raw signal. One should not attempt to generate a calibration curve by varying the sample size (that is, for example, injecting increasing volumes into a gas chromatograph). This caution does not preclude serial dilution methods (see below), in which multiple solutions are generated for separate measurement. Other issues that can hinder successful application of the external standards method include instrumental aspects that might not be readily apparent. In chromatographic methods, for example, one can overload the column or detector. In older instruments, settings of signal attenuation were typically made manually, while in newer instruments, this may occur through software, sometimes without operator interaction or knowledge.

Note, inter alia, that in Figure 1a (and indeed all the examples presented here), the uncertainty is only indicated for the variable on the y-axis. In reality, we must recognize that there is uncertainty for the values plotted on the x-axis as well, but we often only treat the largest uncertainty, or the uncertainty that is most important for our application. Note, also, that it is critical to maintain the integrity of standards; decomposition, degradation, moisture uptake, etc., will adversely affect the validity of the calibration.

## Abbreviated External Standard Methods

In many situations in chemical analysis, a full calibration curve is not prepared because of the complexity, time, or cost. In such situations, abbreviated external standard methods are often used. Under no circumstances can an abbreviated method be used if the raw signal response is nonlinear. Moreover, these methods are not generally appropriate for analyses in regulatory, forensic, or health care environments where the consequences can be far reaching.

### Single Standard

This method uses a simple proportion approach to standardize an instrument response. It can be used only when the system has no constant determinate error or bias, and when the reagents used give a zero blank response (that is, the instrument response from the matrix and measurement system only, without the analyte). A standard should be prepared such that the concentration is close to that of the unknown. One then calculates the concentration of the unknown, [X], as:

[X] = (Ax/As) [S]     (1)

where Ax is the instrument response of the unknown, As is the instrument response of the standard, and [S] is the concentration of the standard.

### Single Standard Plus Assumed Zero

This method, illustrated schematically in Figure 1b, assumes that the blank reading will be zero. One uses a two-point calibration in which the origin is included as the first point. It is important to ensure, by experiment or experience, that such a method is adequate to the task.

### Single Standard Plus Blank

If the analytical method has no determinate error or bias, but does produce a finite blank value, then one must also perform a blank measurement, which is subtracted from the instrument response of the standard and the unknown. Then the same procedure (Eq. 1) is used as for the single standard. If multiple samples are to be measured, it is important to measure the blank between each measurement.
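The blank-corrected form of Eq. 1 can be sketched as below; the responses and standard concentration are hypothetical values chosen only for illustration.

```python
# Single standard plus blank (Eq. 1 after blank subtraction), with
# hypothetical instrument responses and a hypothetical standard at 10 mg/L.

def single_standard(A_x, A_s, A_blank, conc_s):
    """Concentration of the unknown: [X] = ((Ax - Ablank)/(As - Ablank)) * [S]."""
    return (A_x - A_blank) / (A_s - A_blank) * conc_s

conc_x = single_standard(A_x=0.52, A_s=0.63, A_blank=0.03, conc_s=10.0)
```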

FIGURE 1b. An example of a single-point calibration curve. The instrument response is represented by A, and the concentration resulting in that response is [A]. The origin (0,0) is assigned as part of the curve and is assumed to have no uncertainty.

FIGURE 1c. An example of two standards plus a blank calibration curve. The blank is subtracted from each of the standards. The instrument response is represented by A, and the concentration resulting in that response is [A].

### Two Standards Plus Blank

When the analytical method has both a determinate error (or bias), and a finite blank value, at least three calibrations must be made: two standards and one blank. The standard concentrations are typically prepared widely spaced in concentration, and the higher concentration should be chosen to represent the limit of linearity of the instrument or method. If this is not practical, the higher concentration should simply be the highest expected concentration of the analyte (unknown). This method is illustrated schematically in Figure 1c. If multiple samples are to be measured, it is important to measure the blank between each measurement.
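The two-standards-plus-blank case can be sketched as a two-point line: the two blank-corrected standards fix both the slope (sensitivity) and the intercept (determinate bias), and the unknown is read off that line. All values are hypothetical.

```python
# Hypothetical two-standards-plus-blank calibration. The two standards are
# widely spaced in concentration, as the text recommends.

def two_point_calibration(c1, a1, c2, a2, blank):
    """Return (slope, intercept) of the line through two blank-corrected points."""
    y1, y2 = a1 - blank, a2 - blank
    slope = (y2 - y1) / (c2 - c1)
    intercept = y1 - slope * c1
    return slope, intercept

slope, intercept = two_point_calibration(c1=2.0, a1=4.3, c2=20.0, a2=40.3, blank=0.3)

# Unknown with raw signal 12.3; subtract the blank, then invert the line.
unknown = (12.3 - 0.3 - intercept) / slope
```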

## Internal Normalization Method

As mentioned above, the raw signal from an analytical instrument is typically not an absolute measure of the concentration of the analyte(s), because the instrument may respond differently to each component. In some cases, such as with chromatographic methods, it is possible to apply response factors, determined from a standard mixture containing all constituents of the unknown sample, for standardization (Ref. 8). The standard mixture is gravimetrically prepared (with known mass percents for each component), and the instrument response is measured, for example as chromatographic areas. The total mass percent and the total area percent each sum to 100. One calculates the ratio of each mass percentage to the corresponding area, choosing one component as the reference, which is assigned a response factor of unity. To obtain the response factors of all the other components, one divides each component's mass%-to-area ratio by that of the reference. This produces a response factor for every component except, of course, the reference, which is defined as unity. When the unknown sample is measured, each raw area is multiplied by its response factor, and the resulting area percent provides the normalized mass percent of each component in the unknown.

This method corrects for minor variations in sample size (earlier defined as the test portion), although large differences in sample size must be avoided so that one is assured of consistent instrument performance. Similarly, although the method corrects for differences in response among components, very large differences in response must be avoided. The detector must also respond linearly to the concentrations of each component, even if the concentrations are very different; this may require dilution or concentration of the sample in some situations. In chromatographic applications, all components of a mixture must be analyzed and standardized, since normalization must be performed on the entire sample.

Some techniques, such as gas chromatography with flame ionization detection and thermal conductivity detection, have well defined physical phenomena associated with output signals. With these techniques, there are some limited, published response factor data that can be used in an approximate way to standardize the response from these devices.
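The response-factor bookkeeping described above can be sketched as follows. Component names, mass percents, and areas are invented for illustration, with component "A" chosen as the reference (RF = 1).

```python
# Internal normalization sketch: response factors from a gravimetric
# standard mixture, then applied to the raw areas of an unknown sample.

def response_factors(mass_pct, areas, reference):
    """Response factor for each component, relative to the reference (RF = 1)."""
    ratio = {c: mass_pct[c] / areas[c] for c in mass_pct}
    return {c: ratio[c] / ratio[reference] for c in mass_pct}

def normalize(areas, rf):
    """Corrected areas renormalized to 100 mass %."""
    corrected = {c: areas[c] * rf[c] for c in areas}
    total = sum(corrected.values())
    return {c: 100.0 * corrected[c] / total for c in corrected}

# Standard mixture: known mass % (sums to 100) and measured areas.
std_mass = {"A": 50.0, "B": 30.0, "C": 20.0}
std_area = {"A": 50.0, "B": 25.0, "C": 25.0}
rf = response_factors(std_mass, std_area, reference="A")

# Unknown sample: raw areas only; the result is normalized mass %.
unknown_mass_pct = normalize({"A": 40.0, "B": 30.0, "C": 30.0}, rf)
```

Because the result is renormalized to 100 %, every component of the unknown must appear in the area dictionary, mirroring the requirement that all components be analyzed.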

## In Situ Standardization

While it is rare that an analytical method can be calibrated by use of a single solution, some instances of spectrophotometry and electroanalytical methods can qualify. To use this method, one sequentially and incrementally adds known masses of the standard analyte to a solution, with the instrument response being measured after each addition. This procedure can be used only if the analytical method itself does not change the analyte concentration (that is, it is nondestructive) and does not lead to a loss of solution volume; addition of a solid crystalline analyte, which contributes negligible volume, is one example. One must also minimize changes in solution volume over the course of the standardization.

## Standard Addition Methods

Samples presented for analysis often are contained in complex matrices with many impurities that may interact with the analyte, potentially enhancing or diminishing the signal from an instrumental technique. In such cases, the preparation of an external standard calibration curve can be impossible, because it might be very difficult to reproduce the matrix. In these cases, the standard addition method may be used. A standard solution containing the target analyte is prepared and added to the sample, thus maintaining the unknown impurities and their effects. While the quantity of target analyte in the sample is unknown, the added quantity is known, and its incremental additive effect on the instrument signal can be measured. Then, the quantity of the unknown analyte is determined by what is effectively an extrapolation. In practice, the volume of the standard solution added is kept small so that dilution of the unknown impurities changes the total signal by no more than 1%. This method can be used only if there is a verified linear relationship between the signal and the quantity of analyte. If a determinate error is present, then the slope of the line must be known. Moreover, the sample cannot contain any components that can respond as the analyte (that is, masquerade).

### Single Standard Addition

In the simplest case, one addition of analyte is made after first measuring the response of the analyte in the unknown sample. Thus, two measurements are required:

Axo = m[X0]     (2)

Axi = m([X0] + [S])     (3)

where Axo is the instrument response of the analyte in the unknown sample, [X0] is the concentration in the unknown sample, and Axi is the instrument response upon the addition of the standard, [S] (additive in equation because X and S are the same compound). The assumed slope is the proportionality constant, m. The two equations are solved simultaneously for [X0]. This technique is very rapid and economical, but there are serious drawbacks. There is no built in check for mistakes on the part of the analyst, there is no means to average random uncertainties, and there is no way to detect interference (mentioned above as masquerade).
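Solving Eqs. 2 and 3 simultaneously eliminates the slope m and gives [X0] = [S]·Axo/(Axi − Axo), which can be sketched as below with hypothetical numbers.

```python
# Single standard addition:
#   Axo = m*[X0]   and   Axi = m*([X0] + [S])
# Subtracting gives m = (Axi - Axo)/[S], hence [X0] = [S]*Axo/(Axi - Axo).

def single_addition(A_x0, A_xi, conc_added):
    """Unknown concentration from one spike of the same analyte."""
    return conc_added * A_x0 / (A_xi - A_x0)

# Hypothetical values: response 0.40 before, 0.65 after a 5.0 mg/L spike.
x0 = single_addition(A_x0=0.40, A_xi=0.65, conc_added=5.0)
```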

### Multiple Standard Addition

This method alleviates some of the problems inherent in single standard addition. Here, the unknown sample is first measured in the instrument. Then that sample is “spiked” with incrementally increasing concentrations of the analyte, generating a curve such as that shown in Figure 1d. The curve should extrapolate to zero signal at zero concentration. The concentration of the analyte in the unknown is read or calculated from the abscissa (x-axis).
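The extrapolation in Figure 1d amounts to fitting a line of signal against added concentration and finding the magnitude of its x-intercept. The spike concentrations and signals below are invented for illustration.

```python
# Multiple standard addition: fit signal vs. added (spiked) concentration;
# the unknown concentration [X0] is |x-intercept| = b/m of the fitted line.

def fit_line(x, y):
    """Least-squares slope and intercept of y = m*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return m, my - m * mx

added  = [0.0, 2.0, 4.0, 6.0]   # spike concentrations (first point: no spike)
signal = [3.0, 5.0, 7.0, 9.0]   # instrument response after each spike

m, b = fit_line(added, signal)
x0 = b / m   # extrapolated unknown concentration (magnitude of x-intercept)
```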

## Internal Standard Methods

An internal standard is a compound added to a sample at a known concentration, the purpose of which is to exhibit a similar signal when measured in an instrument but be distinguishable from the signal of the desired analyte. It provides the highest level of reliability in quantitation by chromatographic methods and is not affected by large differences in sample size (Ref. 8). Unlike the internal normalization method, it is not necessary to elute or measure all the components of the sample, one need focus only on the component(s) of interest. In atomic spectrometry, this method is not affected by changes in gas flow rates, sample aspiration rates, and flame suppression or enhancement. Another situation in which this method is valuable is when the sample matrix is either unknown or very complex, precluding the preparation of external standards.

### Multiple Internal Standards

A set of calibration solutions is prepared by mass, containing the target analyte, X, and an internal standard, S, that is not present in the unknown sample. The instrument response (for example, a chromatographic area) is measured for each calibration solution, and a plot is made to establish linearity as in Figure 1a. The ordinate axis is the ratio of the response of the unknown analyte component, Ax, to the response of the chosen standard, As. The abscissa is the ratio of the mass of X to the mass of S for that standard mixture. Once the linearity is confirmed in the concentration range of interest, the unknown is spiked with a known mass of S, the instrument response is measured, and the area ratio Ax/As is calculated. Either the graph or a fit of the data in Figure 1e is then used to determine the corresponding mass ratio, from which the mass of X may be determined. Note that the calculations can be simplified if the same mass per volume of the internal standard is added to both the unknown samples and the calibration standards.

FIGURE 1d. An example of calibration by multiple standard addition. Three additions (spikes) of the analyte X are shown, as is the extrapolation to the unknown concentration, X0.

FIGURE 1e. An example of the multiple internal standard method. The ordinate (y) axis is the ratio of the response of the unknown analyte component, Ax, to the response of the chosen standard, As. The abscissa (x) axis is the ratio of mass of X to the mass of S for that standard mixture.

### Single Internal Standard

In practice, once linearity has been established for a given mixture, it is no longer necessary to use multiple standard solutions, although doing so remains the most precise approach. Subsequent to the verification of linearity, one standard solution can be used to fix the slope, provided it is close in concentration to that of the target analyte. In this case, the mass of the unknown can be found from:

X/S = (Ax/As)(1/R)     (4)

where X is the mass of the unknown analyte in the sample, S is the mass of the added internal standard in the sample, and Ax and As are the instrument responses (areas) of the unknown and internal standard, respectively. R is a response ratio determined from the standard solution prepared with both X and S:

R = (signal, unknown analyte/signal, internal standard)/(mass, unknown analyte/mass, internal standard)     (5)

Because R is the slope of the calibration curve discussed above, once linearity is established, one solution suffices. There are many conditions that must be fulfilled in order to use the internal standard method, and it is rare that all of them can actually be met; in practice, one tries to meet as many as possible, though several are mandatory:

• The compound chosen must not be present already in the unknown.
• The compound chosen must be separable from the analyte present in the unknown. An exception occurs when an isotopically labeled standard is used in conjunction with mass discrimination or radioactive counting detection. In a chromatographic measurement, this typically means at least baseline resolution, although that is a minimally acceptable degree of separation; on the other hand, the unknown analyte peak and the internal standard peak should be close to each other (temporally) on the chromatogram.
• The compound chosen must be miscible with the solvent at the temperature of reagent preparation and measurement.
• The compound chosen must not react chemically with the sample or solvent, or interfere in any way with the analysis. In the case of a chromatographic measurement, the same applies to interactions with the stationary phase.
• The compound chosen must be chemically similar (for example, in functionality and thermophysical properties) to the analyte. If such a compound is not available (for example, in a chromatographic measurement), an appropriate hydrocarbon should be chosen as a surrogate.
• The standard solution should be prepared at a concentration similar to that of the analyte in the unknown matrix; ratio correction of large differences is no substitute for an appropriate concentration.
• In a chromatographic measurement, the compound chosen should elute as closely as possible to the analyte but should not be the last peak to elute (the final peak often shows different geometry, such as tailing).
• The compound chosen must be sufficiently nonvolatile to allow for storage as needed.

As with any calibration method, it is critical to maintain the integrity of the standards; decomposition, degradation, moisture uptake, etc., will adversely affect the validity of the calibration. When there is the potential for the unknown analyte to be lost by adsorption, absorption, or some other interaction with the matrix or container, a compound called a carrier is sometimes added in large excess. The carrier is similar, chemically and physically, to the unknown analyte, but easily separated from it. Its purpose is to saturate or season the matrix and prevent analyte loss.
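The single internal standard calculation (Eqs. 4 and 5) can be sketched as follows, taking R as the slope of the calibration curve, that is, the signal ratio divided by the mass ratio of the standard solution. Masses and areas are hypothetical.

```python
# Internal standard quantitation with one standard solution.
# R (Eq. 5) is the signal ratio divided by the mass ratio of the standard;
# the unknown mass then follows from Eq. 4: X = S * (Ax/As) * (1/R).

def response_ratio(mass_x, mass_s, area_x, area_s):
    """R = (Ax/As) / (X/S), determined from the standard solution."""
    return (area_x / area_s) / (mass_x / mass_s)

def mass_unknown(area_x, area_s, mass_s, R):
    """Mass of analyte in the unknown sample (Eq. 4)."""
    return mass_s * (area_x / area_s) / R

# Standard solution: 5 mg each of X and S give areas 1200 and 1000.
R = response_ratio(mass_x=5.0, mass_s=5.0, area_x=1200.0, area_s=1000.0)

# Unknown spiked with 4 mg of S; measured areas are 900 (X) and 1000 (S).
X = mass_unknown(area_x=900.0, area_s=1000.0, mass_s=4.0, R=R)
```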

## Serial Dilution

Serial dilution is less a standardization method than a method of generating the solutions used for standardization. Nevertheless, its importance and utility, as well as the popularity of its application, warrant mention in this section. A serial dilution is the stepwise dilution of a substance following a specified, constant progression, usually geometric (or logarithmic). One first prepares a known volume of stock solution of known concentration, and then withdraws some small fraction of it into another container or vial. This subsequent container is then filled to the same volume as the stock solution with the same solvent or buffer. The process is then repeated for as many standard solutions as are desired. A ten-fold serial dilution could be 1 M, 0.1 M, 0.01 M, 0.001 M, etc. A ten-fold dilution at each step is called a logarithmic dilution or log-dilution, a 3.16-fold (10^0.5-fold) dilution is called a half-logarithmic dilution or half-log dilution, and a 1.78-fold (10^0.25-fold) dilution is called a quarter-logarithmic dilution or quarter-log dilution. In practice, the ten-fold dilution is the most common. The serial dilution procedure is used not only in chemical analysis but also in serological preparations in which cellular materials such as bacteria are diluted. A critical aspect of serial dilution is that the initial stock solution must be prepared and its concentration determined with great care, since any mistake there will be propagated into all resulting solutions.
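The dilution progression can be sketched as a one-line geometric series; the stock concentration and number of steps below are arbitrary illustration values.

```python
# Serial dilution series: each step divides the concentration by a constant
# factor (10 for a log dilution, 10**0.5 ~ 3.16 for a half-log dilution).

def serial_dilution(stock, factor, steps):
    """Concentrations of the stock plus each successive dilution."""
    return [stock / factor ** i for i in range(steps + 1)]

# A ten-fold (log) dilution series starting from a 1 M stock:
tenfold = serial_dilution(stock=1.0, factor=10.0, steps=3)
```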

# Traceability

Analytical measurements and certifications often contain a statement of traceability. Traceability is the property of a “result or measurement whereby it can be related to appropriate standards, generally international or national standards, through an unbroken chain of comparisons” (Ref. 9). Traceability typically includes the application of a reference material (RM) or a standard reference material (SRM) for instrument calibration before standardization for the analytes of interest. The true value of a measured quantity (τ) cannot typically be determined. The true value is defined as characterizing a quantity that is perfectly defined; it is an ideal value that could be arrived at only if all causes of measurement uncertainty were eliminated and the entire population were sampled.

# Uncertainty

As stated in Section 2, the result of a measurement is only an approximation or estimate of the true value of the measurand or quantity subject to measurement. In the determination of the combined standard uncertainty and ultimately the expanded uncertainty, it is critical to include the uncertainty of calibration in the process, as discussed above. The process of arriving at the uncertainty Uy of a quantity y that is based upon the measured quantities x1, x2, ..., xz is called the propagation of uncertainty. A full discussion of propagation of uncertainty is beyond the scope of this section; a simplified prescription, in the form of general and specific formulae, is provided here. In general, the propagated random uncertainty in y can be determined from:

Uy = [(∂y/∂x1)^2 U(x1)^2 + (∂y/∂x2)^2 U(x2)^2 + ... + (∂y/∂xz)^2 U(xz)^2]^1/2

This approach can be used when the uncertainties are random (not systematic), are relatively small, and are independent or uncorrelated (that is, in the absence of covariance). Relatively large uncertainties (such as those approaching the magnitude of the measurand itself) cannot be treated with this approach, especially if the measurand is a nonlinear function of the measured quantity. Note that by convention the use of upper case Uy denotes the expanded uncertainty, which is the uncertainty multiplied by a coverage factor k in excess of unity (for the 95% confidence level, the coverage factor is 2). A coverage factor k = 1 represents the 68% confidence level. In scientific and technical reports and publications, the goal is to report measurements and the standard uncertainty (k = 1) or the expanded uncertainty (k > 1).

It is possible to reduce this general formulation to more specific formulae in the cases of common mathematical operations. These are provided in Table 1.
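Two of the familiar special cases of the general formula, quadrature addition of absolute uncertainties for sums and differences, and of relative uncertainties for products and quotients, can be sketched as below. The numeric inputs are hypothetical, and the rules assume small, random, uncorrelated uncertainties.

```python
# Propagation of uncertainty for two common operations, derived from the
# general quadrature formula with y = a + b and y = a*b respectively.
import math

def u_sum(ua, ub):
    """Uncertainty of y = a + b (or a - b): quadrature sum of absolute uncertainties."""
    return math.hypot(ua, ub)

def u_product(a, ua, b, ub):
    """Uncertainty of y = a*b: quadrature sum of relative uncertainties, rescaled."""
    return abs(a * b) * math.hypot(ua / a, ub / b)

u1 = u_sum(0.3, 0.4)                   # sqrt(0.09 + 0.16) = 0.5
u2 = u_product(10.0, 0.1, 5.0, 0.05)   # 50 * sqrt(0.01**2 + 0.01**2)
```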

# Compliance against Limits

It is often necessary or desirable to compare an analytical result against some established regulatory standard or limit, for example, to determine if the concentration of a toxic substance falls above or below a legal limit. It is imperative to consider the uncertainty when determining compliance against limits; indeed, most established limits are set with some consideration or allowance for uncertainty. A decision rule is often built into tests of compliance limits. A common decision rule is that a measured result indicates non-compliance if the measurand exceeds the limit by the expanded uncertainty. A similar approach is applied for measurands that fall below an established limit.
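A common decision rule of the kind described above can be sketched as a one-line comparison; the limit and expanded uncertainty values are hypothetical and not tied to any particular regulation.

```python
# Hypothetical decision rule for compliance against an upper limit: declare
# non-compliance only when the measured value exceeds the limit by more
# than the expanded uncertainty U (coverage factor already applied).

def non_compliant(measured, limit, U):
    """True only if the result demonstrates non-compliance beyond uncertainty."""
    return measured - U > limit

# Illustrative upper limit 0.08 with expanded uncertainty 0.005:
case_clearly_over = non_compliant(0.087, 0.08, 0.005)   # 0.082 > 0.08 -> True
case_within_U     = non_compliant(0.083, 0.08, 0.005)   # 0.078 > 0.08 -> False
```

Note the asymmetry: a result slightly above the limit but within the expanded uncertainty does not trigger the non-compliance decision under this rule.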

## Uncertainty and Error Rates

A disconnect often occurs when scientific personnel attempt to convey their measured results in legal or regulatory venues. For example, the courts in the United States routinely must deal with scientific testimony based on measurements, and scientists are often asked to characterize their measurements in terms of error rates.

Statistically, the error rate is the frequency of type I and type II errors in null hypothesis significance testing. This has importance in forensic chemistry, e.g., when the blood alcohol content (BAC) of a sample is measured. Here, a null hypothesis might be that the BAC of sample X is not below 0.08% (mass/mass). A measurement above that level, and therefore a failure to reject the null hypothesis, can result in a legal finding of intoxication. A type I error occurs when a correct null hypothesis is rejected (a false positive); a type II error occurs when a false null hypothesis is accepted (a false negative). Independent of the frequency of type I and type II errors (the statistical error rate), each measurement of BAC has an uncertainty. The uncertainty of each measurement is determined by propagating the contributions represented in the uncertainty budget and multiplying by the appropriate coverage factor. The error rate of a particular laboratory or technique is not so easily determined. In some large state forensic laboratories, error rates can be approximated by inserting known standard samples anonymously into the normal workflow, but even this approach has limitations.
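The step from an uncertainty budget to the reported uncertainty of a single measurement can be sketched as follows. The budget components and their values here are hypothetical, chosen only to illustrate the quadrature combination and coverage factor; a real BAC uncertainty budget would be laboratory- and method-specific.

```python
import math

# Hypothetical uncertainty budget for one BAC measurement
# (standard uncertainties, % mass/mass; values are illustrative only)
budget = {
    "calibration standard": 0.0006,
    "repeatability": 0.0010,
    "headspace sampling": 0.0008,
}

# Combine the independent components in quadrature, then apply
# a coverage factor of k = 2 for the expanded uncertainty (~95%).
u_combined = math.sqrt(sum(u ** 2 for u in budget.values()))
U_expanded = 2 * u_combined
```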

It is important to understand that the concept of error rate is distinct from the frequency at which an analytical instrument “throws an error.” For example, in the headspace gas chromatographic analysis of BAC, the instrument might report a sampling error, and the operator might notice a damaged needle, which is then replaced. The frequency of this type of error is different from the error rate mentioned above.

## References

1. Chalmers, R. A., Chapter 2: Standards and Standardization in Chemical Analysis, Vol. 3, Elsevier, Amsterdam, 1975.
2. Danzer, K., and Currie, L. A., Pure Appl. Chem. 70, 993, 1998. [https://doi.org/10.1351/pac199870040993]
3. Danzer, K., Otto, M., and Currie, L. A., Pure Appl. Chem. 76, 1215, 2004. [https://doi.org/10.1351/pac200476061215]
4. Woodget, B. W., and Cooper, D., Samples and Standards, Analytical Chemistry by Open Learning, John Wiley & Sons, Chichester, 1987.
5. Gy, P., Sampling for Analytical Purposes, John Wiley & Sons, Chichester, 1998.
6. Vitt, J. E., and Engstrom, R. C., J. Chem. Educ. 76, 99, 1999. [https://doi.org/10.1021/ed076p99]
7. Horwitz, W., Pure Appl. Chem. 62, 1193, 1990. [https://doi.org/10.1351/pac199062061193]
8. Grob, R. L., Modern Practice of Gas Chromatography, Wiley Interscience, New York, 1995.
9. Inczedy, J., Lengyel, T., and Ure, A. M., Compendium of Analytical Nomenclature, Third Edition, International Union of Pure and Applied Chemistry, 1997.
 *Determinate error and bias are related terms that describe uncertainty that arises from a fixed cause, and that can, in principle, be eliminated if recognized. Determinate error (or systematic error) is most often associated with a measurement, while bias can be associated with either a measurement or with the sampling procedure.

## TABLE 1. Specific Formulae for the Propagation of Random, Independent Uncertainty

| Measurand | Uncertainty formula |
| --- | --- |
| $y$ = counted random event over a time interval | $U_y = \sqrt{y}$ |
| $y = A \times x$ (where $A$ is a constant with no uncertainty) | $U_y = A \times U_x$ |
| $y = x_1 + x_2$ | $U_y = \sqrt{U_{x_1}^2 + U_{x_2}^2}$ |
| $y = x_1 \times x_2$ or $y = x_1/x_2$ | $\frac{U_y}{\left|y\right|} = \sqrt{\left(\frac{U_{x_1}}{x_1}\right)^2 + \left(\frac{U_{x_2}}{x_2}\right)^2}$ |
| $y = (x_1 \times x_2)/x_3$ | $\frac{U_y}{\left|y\right|} = \sqrt{\left(\frac{U_{x_1}}{x_1}\right)^2 + \left(\frac{U_{x_2}}{x_2}\right)^2 + \left(\frac{U_{x_3}}{x_3}\right)^2}$ |
| $y = \log_{10}(x)$ | $U_y = 0.434 \times \left(\frac{U_x}{x}\right)$ |
| $y = \ln(x)$ | $U_y = \frac{U_x}{x}$ |
| $y = e^x$ | $\frac{U_y}{\left|y\right|} = U_x$ |
| $y = x^a$ | $\frac{U_y}{\left|y\right|} = \left|a\right| \times \left(\frac{U_x}{x}\right)$ |
| $y = 10^x$ | $\frac{U_y}{\left|y\right|} = 2.303 \times U_x$ |
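The product and quotient rules in Table 1, in which relative uncertainties add in quadrature, can be applied programmatically. The following is an illustrative Python sketch; the function name `u_product` and the numerical values are assumptions for the example.

```python
import math

def u_product(y, terms):
    """Uncertainty of a measurand built from products and quotients.

    terms is a list of (value, standard_uncertainty) pairs for the
    factors; their relative uncertainties add in quadrature, and the
    result is scaled back to an absolute uncertainty in y.
    """
    rel = math.sqrt(sum((u / x) ** 2 for x, u in terms))
    return abs(y) * rel

# Example: y = (x1 * x2) / x3 with illustrative values
x1, u1 = 10.0, 0.2
x2, u2 = 5.0, 0.1
x3, u3 = 2.0, 0.05
y = (x1 * x2) / x3
uy = u_product(y, [(x1, u1), (x2, u2), (x3, u3)])
```

Because the rule works entirely in relative terms, the same function covers products, quotients, and mixed forms such as $(x_1 \times x_2)/x_3$.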
