Techniques


Acid base concept

Arrhenius's concept: An acid is a substance which dissociates in aqueous solution, releasing the hydrogen ion H+ (a proton):

HA ⇌ A− + H+

The equilibrium constant for this dissociation reaction is known as a dissociation constant. The liberated proton combines with a water molecule to give a hydronium (or oxonium) ion H3O+, and so Arrhenius later proposed that the dissociation should be written as an acid–base reaction:

HA + H2O ⇌ A− + H3O+

Brønsted and Lowry generalized this further to a proton exchange reaction:

acid + base ⇌ conjugate base + conjugate acid

The acid loses a proton, leaving a conjugate base; the proton is transferred to the base, creating a conjugate acid. For aqueous solutions of an acid HA, the base is water; the conjugate base is A− and the conjugate acid is the hydronium ion. The Brønsted–Lowry definition applies to other solvents, such as dimethyl sulfoxide: the solvent S acts as a base, accepting a proton and forming the conjugate acid SH+. The designation of an acid or base as "conjugate" depends on the context. The conjugate acid BH+ of a base B dissociates according to

BH+ + OH− ⇌ B + H2O

which is the reverse of the equilibrium

H2O (acid) + B (base) ⇌ OH− (conjugate base) + BH+ (conjugate acid)

The hydroxide ion OH−, a well-known base, is here acting as the conjugate base of the acid water. Acids and bases are thus regarded simply as donors and acceptors of protons respectively. Water is amphiprotic: it can react as an acid or a base. Another example of an amphiprotic molecule is the bicarbonate ion HCO3−, which is the conjugate base of the carbonic acid molecule H2CO3 in the equilibrium

H2CO3 + H2O ⇌ HCO3− + H3O+

but also the conjugate acid of the carbonate ion CO32− in (the reverse of) the equilibrium

HCO3− + OH− ⇌ CO32− + H2O

Carbonic acid equilibria are important for acid–base homeostasis in the human body.

An acid dissociation constant, Ka (also known as an acidity constant or acid-ionization constant), is a quantitative measure of the strength of an acid in solution. It is the equilibrium constant for a chemical reaction known as dissociation in the context of acid–base reactions. The equilibrium can be written symbolically as:

HA ⇌ A− + H+

where HA is a generic acid which dissociates by splitting into A−, known as the conjugate base of the acid, and the hydrogen ion or proton, H+, which, in the case of aqueous solutions, exists as a solvated hydronium ion. The dissociation constant is usually written as a quotient of the equilibrium concentrations, denoted by [HA], [A−] and [H+]:

Ka = [H+][A−] / [HA]

Due to the many orders of magnitude spanned by Ka values, a logarithmic measure of the acid dissociation constant is more commonly used in practice. pKa, which is equal to −log10 Ka, may also be referred to as an acid dissociation constant:

pKa = −log10 Ka

The larger the value of pKa, the smaller the extent of dissociation. A weak acid has a pKa value in the approximate range −2 to 12 in water. Acids with a pKa value of less than about −2 are said to be strong acids; a strong acid is almost completely dissociated in aqueous solution, to the extent that the concentration of the undissociated acid becomes undetectable. pKa values for strong acids can, however, be estimated by theoretical means or by extrapolating from measurements in non-aqueous solvents in which the dissociation constant is smaller, such as acetonitrile and dimethyl sulfoxide. When the analytical concentrations and pKa values of all the acids and bases present are known, the pH of a solution can be predicted; conversely, the equilibrium concentrations of the acids and bases in solution can be calculated when the pH is known.

pH: pH is a measure of the acidity or basicity of a solution. It is defined as the cologarithm of the activity of dissolved hydrogen ions (H+). Hydrogen ion activity coefficients cannot be measured experimentally, so they are based on theoretical calculations. The pH scale is not an absolute scale; it is relative to a set of standard solutions whose pH is established by international agreement.
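The Ka–pKa relationship above is easy to check numerically; a minimal sketch, assuming Python, with an illustrative acetic acid value not taken from the text:

```python
import math

def pKa_from_Ka(Ka):
    # pKa = -log10(Ka): the larger the pKa, the smaller the extent of dissociation
    return -math.log10(Ka)

def Ka_from_pKa(pKa):
    # inverse relation: Ka = 10**(-pKa)
    return 10 ** (-pKa)

# Acetic acid (illustrative value): Ka ~ 1.8e-5, a weak acid (pKa within -2..12)
print(round(pKa_from_Ka(1.8e-5), 2))  # 4.74
```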

pH is defined as minus the decimal logarithm of the hydrogen ion activity in an aqueous solution. By virtue of its logarithmic nature, pH is a dimensionless quantity:

pH = −log10 aH = log10 (1/aH)

where aH is the (dimensionless) activity of hydrogen ions. The reason for this definition is that aH is a property of a single ion, which can only be measured experimentally by means of an ion-selective electrode which responds, according to the Nernst equation, to hydrogen ion activity. pH is commonly measured by means of a combined glass electrode, which measures the potential difference, or electromotive force, E, between an electrode sensitive to the hydrogen ion activity and a reference electrode, such as a calomel electrode or a silver chloride electrode. The combined glass electrode ideally follows the Nernst equation:

E = E0 + (RT/nF) ln aH;  pH = (E0 − E) / (2.303 RT/F)

where E is the measured potential, E0 is the standard electrode potential (that is, the electrode potential for the standard state in which the activity is one), R is the gas constant, T is the temperature in kelvin, F is the Faraday constant, and n is the number of electrons transferred (one in this instance). The electrode potential, E, is proportional to the logarithm of the hydrogen ion activity. This definition, by itself, is wholly impractical, because the hydrogen ion activity is the product of the concentration and an activity coefficient, and the single-ion activity coefficient of the hydrogen ion is a quantity which cannot be measured experimentally. To get round this difficulty, the electrode is calibrated with solutions of known activity.
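The electrode relation above can be sketched numerically; a minimal sketch assuming Python, where the potentials used are illustrative rather than measured values:

```python
R = 8.314    # gas constant, J mol^-1 K^-1
F = 96485.0  # Faraday constant, C mol^-1

def pH_from_potential(E, E0, T=298.15):
    # pH = (E0 - E) / (2.303 R T / F); the Nernst slope is ~59.2 mV per pH unit at 25 C
    slope = 2.303 * R * T / F
    return (E0 - E) / slope

# Illustrative: with E0 = 0.414 V, a measured E of 0 V corresponds to pH ~ 7
print(round(pH_from_potential(E=0.0, E0=0.414), 1))  # 7.0
```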

pH in living systems

| Compartment | pH |
|---|---|
| Gastric acid | 0.7 |
| Lysosomes | 4.5 |
| Granules of chromaffin cells | 5.5 |
| Urine | 6.0 |
| Neutral H2O at 37 °C | 6.81 |
| Cytosol | 7.2 |
| Cerebrospinal fluid (CSF) | 7.3 |
| Blood | 7.34–7.45 |
| Mitochondrial matrix | 7.5 |
| Pancreas secretions | 8.1 |

Henderson–Hasselbalch equation

pH = pKa − log([AH]/[A−]) = pKa + log([A−]/[AH])
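A minimal numerical sketch of applying this equation, assuming Python; the acetate pKa below is an illustrative value, not from the text:

```python
import math

def buffer_pH(pKa, conc_base, conc_acid):
    # Henderson-Hasselbalch: pH = pKa + log10([A-]/[AH])
    return pKa + math.log10(conc_base / conc_acid)

def ratio_from_pH(pKa, pH):
    # [A-]/[AH] at a given pH; independent of total concentration
    return 10 ** (pH - pKa)

# Equal concentrations of acid and conjugate base give pH = pKa
print(buffer_pH(4.76, 0.10, 0.10))          # 4.76
# One pH unit above the pKa, the base/acid ratio is 10
print(round(ratio_from_pH(4.76, 5.76), 3))  # 10.0
```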

At half-neutralization [AH]/[A−] = 1; since log(1) = 0, the pH at half-neutralization is numerically equal to pKa. Conversely, when pH = pKa, the concentration of AH is equal to the concentration of A−. The buffer region extends over the approximate range pKa ± 2, though buffering is weak outside the range pKa ± 1; at pKa ± 1, [AH]/[A−] = 10 or 1/10. If the pH is known, the ratio [AH]:[A−] may be calculated. This ratio is independent of the analytical concentration of the acid. A buffer solution of a desired pH can be prepared as a mixture of a weak acid and its conjugate base. In practice, the mixture can be created by dissolving the acid in water and adding the requisite amount of strong acid or base. The pKa of the acid must be within about two units of the target pH.

Polyprotic acids are acids that can lose more than one proton. The constant for dissociation of the first proton may be denoted Ka1 and the constants for dissociation of successive protons Ka2, etc. Phosphoric acid, H3PO4, is an example of a polyprotic acid, as it can lose three protons.

| Equilibrium | pKa value |
|---|---|
| H3PO4 ⇌ H2PO4− + H+ | pKa1 = 2.15 |
| H2PO4− ⇌ HPO42− + H+ | pKa2 = 7.20 |
| HPO42− ⇌ PO43− + H+ | pKa3 = 12.37 |
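From the three stepwise constants one can compute the fraction of each phosphate species at a given pH; a sketch assuming Python (the function name is hypothetical, the pKa values are those tabulated above):

```python
def phosphate_fractions(pH, pKas=(2.15, 7.20, 12.37)):
    # Relative amounts of H3PO4, H2PO4-, HPO4^2-, PO4^3- at a given pH,
    # using the stepwise constants Ka_i = 10**(-pKa_i)
    h = 10 ** (-pH)
    Ka1, Ka2, Ka3 = (10 ** (-p) for p in pKas)
    terms = [h ** 3, h ** 2 * Ka1, h * Ka1 * Ka2, Ka1 * Ka2 * Ka3]
    total = sum(terms)
    return [t / total for t in terms]

# At physiological pH 7.4, H2PO4- and HPO4^2- dominate (the phosphate buffer pair)
print([round(f, 3) for f in phosphate_fractions(7.4)])
```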

Applications and significance of pKa

A knowledge of pKa values is important for the quantitative treatment of systems involving acid–base equilibria in solution. Many applications exist in biochemistry; for example, the pKa values of proteins and amino acid side chains are of major importance for the activity of enzymes and the stability of proteins. Protein pKa values cannot always be measured directly, but may be calculated using theoretical methods. Buffer solutions are used extensively to provide solutions at or near the physiological pH for the study of biochemical reactions; the design of these solutions depends on a knowledge of the pKa values of their components. Important buffer solutions include MOPS, which provides a solution with pH 7.2, and tricine, which is used in gel electrophoresis. Buffering is an essential part of acid–base physiology, including acid–base homeostasis, and is key to understanding disorders such as acid–base imbalance. The isoelectric point of a given molecule is a function of its pK values, so different molecules have different isoelectric points. This permits a technique called isoelectric focussing, which is used for separation of proteins by two-dimensional polyacrylamide gel electrophoresis.

Common buffer compounds used in biology

Buffer solutions also play a key role in analytical chemistry. They are used whenever there is a need to fix the pH of a solution at a particular value. Compared with an aqueous solution, the pH of a buffer solution is relatively insensitive to the addition of a small amount of strong acid or strong base. The buffer capacity of a simple buffer solution is largest when pH = pKa. In acid–base extraction, the efficiency of extraction of a compound into an organic phase, such as an ether, can be optimised by adjusting the pH of the aqueous phase using an appropriate buffer. At the optimum pH, the concentration of the electrically neutral species is maximised; such a species is more soluble in organic solvents having a low dielectric constant than it is in water. This technique is used for the purification of weak acids and bases.

Useful buffer mixtures

| Components | pH range |
|---|---|
| HCl, sodium citrate | 1–5 |
| Citric acid, sodium citrate | 2.5–5.6 |
| Acetic acid, sodium acetate | 3.7–5.6 |
| Na2HPO4, NaH2PO4 | 6–9 |
| Borax, sodium hydroxide | 9.2–11 |

| Common name | pKa at 25 °C | Buffer range | Mol. weight | Full compound name |
|---|---|---|---|---|
| TAPS | 8.43 | 7.7–9.1 | 243.3 | 3-{[tris(hydroxymethyl)methyl]amino}propanesulfonic acid |
| Bicine | 8.35 | 7.6–9.0 | 163.2 | N,N-bis(2-hydroxyethyl)glycine |
| Tris | 8.06 | 7.5–9.0 | 121.14 | tris(hydroxymethyl)methylamine |
| Tricine | 8.05 | 7.4–8.8 | 179.2 | N-tris(hydroxymethyl)methylglycine |
| HEPES | 7.48 | 6.8–8.2 | 238.3 | 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid |
| TES | 7.40 | 6.8–8.2 | 229.20 | 2-{[tris(hydroxymethyl)methyl]amino}ethanesulfonic acid |
| MOPS | 7.20 | 6.5–7.9 | 209.3 | 3-(N-morpholino)propanesulfonic acid |
| PIPES | 6.76 | 6.1–7.5 | 302.4 | piperazine-N,N′-bis(2-ethanesulfonic acid) |
| Cacodylate | 6.27 | 5.0–7.4 | 138.0 | dimethylarsinic acid |
| SSC | 7.0 | 6.5–7.5 | 189.1 | saline sodium citrate |
| MES | 6.15 | 5.5–6.7 | 195.2 | 2-(N-morpholino)ethanesulfonic acid |
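When choosing from such a table, the usual rule is to pick a buffer whose pKa lies closest to the target pH; a small sketch assuming Python (the helper is hypothetical and the dictionary transcribes a subset of the table values for illustration):

```python
# Subset of the buffer table: name -> (pKa at 25 C, buffer range low, high)
BUFFERS = {
    "TAPS":  (8.43, 7.7, 9.1),
    "Tris":  (8.06, 7.5, 9.0),
    "HEPES": (7.48, 6.8, 8.2),
    "MOPS":  (7.20, 6.5, 7.9),
    "PIPES": (6.76, 6.1, 7.5),
    "MES":   (6.15, 5.5, 6.7),
}

def candidate_buffers(target_pH):
    # Keep buffers whose useful range covers the target pH,
    # ranked by |pH - pKa| (buffer capacity is largest when pH = pKa)
    hits = [(abs(pKa - target_pH), name)
            for name, (pKa, lo, hi) in BUFFERS.items()
            if lo <= target_pH <= hi]
    return [name for _, name in sorted(hits)]

print(candidate_buffers(7.2))  # MOPS ranks first: its pKa matches the target
```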

PHYSICAL VARIABLES

Scientists measure all kinds of quantities in the lab. While many observations are qualitative (what color, what state, etc.), many others are quantitative. Measuring the mass of a reactant or the volume of a liquid, performing a titration, and other more sophisticated measurements require careful determination of a value, which must be recorded along with the proper unit. Both the magnitude of the number and the unit are essential for communicating information to other chemists wishing to repeat an experiment. The magnitudes of these measurable observations are called physical variables. Variables are of two types:

Substantial variables: These variables have a unit; they are measured against a precise physical standard, and these standards are called units. Examples are mass, length, volume, time, viscosity, heat, and temperature.

Natural variables: These are variables known as dimensionless numbers or groups; they do not require any units to express their measurement. Examples of unitless variables are refractive index, specific gravity, and specific viscosity. Another example is the Reynolds number (Re), a dimensionless quantity in fluid mechanics used to characterize the nature of flow of a fluid through tunnels and pipes.

DIMENSIONS AND UNITS

The physical variables that we use in physics and chemistry can be classified into two categories: fundamental quantities and derived quantities. Some physical variables form the basis of all measurements and quantities and are known as fundamental quantities, dimensions, or base quantities; the units used to express them are known as base units. All other quantities are derived from the fundamental quantities; the units used to express them are derived from the base units and are called derived units, and their dimensions are combinations of the fundamental quantities.
For example:

Area = length × length
Volume = length × length × length
Velocity = distance/time

UNITS

Physical variables are measured against certain standards known as units. Base units are those used to express the dimensions or the fundamental quantities, and derived units are those derived from the fundamental or base units.

There are different systems of units, such as the MKS, CGS, SI, and FPS systems. Units of one system can be converted into units of another. SI is the officially accepted system and is the most widely used. There are two clusterings of metric units in science and engineering: one, based on the centimeter, the gram, and the second, is called the CGS system; the other, based on the meter, kilogram, and second, is called the MKS system. The FPS system is the old British system that uses the foot, pound, and second as its basic units.

SI Unit

All systems of weights and measurements, metric and non-metric, are linked through a network of international agreements supporting the International System of Units. The International System is called the SI, using the first two initials of its French name, Le Système International d'Unités. The SI is maintained by a small agency in Paris, the International Bureau of Weights and Measures (BIPM, for Bureau International des Poids et Mesures), and it is updated every few years by an international conference, the General Conference on Weights and Measures (CGPM, for Conférence Générale des Poids et Mesures), attended by representatives of all the industrial countries and international scientific and engineering organizations.

TABLE: Base Measurements and Base Units - SI

A few units

Ampere [A]: The ampere is the basic unit of electrical current. It is the current that produces a specified force between two parallel wires which are one meter apart in a vacuum. It is named after the French physicist André-Marie Ampère (1775-1836).

Kelvin [K]: The kelvin is the basic unit of temperature. It is 1/273.16 of the thermodynamic temperature of the triple point of water. It is named after the Scottish mathematician and physicist William Thomson, 1st Lord Kelvin (1824-1907).

Mole [mol]: The mole is the basic unit of amount of substance. It is the amount of substance that contains as many elementary units as there are atoms in 0.012 kg of carbon-12.

Candela [cd]: The candela is the basic unit of luminous intensity. It is the intensity of a source of light of a specified frequency which gives a specified amount of power in a given direction.

Farad [F]: The farad is the SI unit of the capacitance of an electrical system, that is, its capacity to store electricity. It is a rather large unit as defined, and the microfarad is more often used. It is named after the English chemist and physicist Michael Faraday (1791-1867).

Hertz [Hz]: The hertz is the SI unit of the frequency of a periodic phenomenon. One hertz indicates that 1 cycle of the phenomenon occurs every second. For most work much higher frequencies are needed, such as the kilohertz [kHz] and megahertz [MHz]. It is named after the German physicist Heinrich Rudolph Hertz (1857-1894).

Joule [J]: The joule is the SI unit of work or energy. One joule is the amount of work done when an applied force of 1 newton moves through a distance of 1 meter in the direction of the force. It is named after the English physicist James Prescott Joule (1818-1889).

Newton [N]: The newton is the SI unit of force. One newton is the force required to give a mass of 1 kilogram an acceleration of 1 meter per second per second. It is named after the English mathematician and physicist Sir Isaac Newton (1642-1727).

Ohm [Ω]: The ohm is the SI unit of resistance of an electrical conductor. Its symbol is the capital Greek letter omega. It is named after the German physicist Georg Simon Ohm (1789-1854).

Pascal [Pa]: The pascal is the SI unit of pressure. One pascal is the pressure generated by a force of 1 newton acting on an area of 1 square meter. It is a rather small unit as defined, and the kilopascal [kPa] is more often used. It is named after the French mathematician, physicist, and philosopher Blaise Pascal (1623-1662).
Volt [V]: The volt is the SI unit of electric potential. One volt is the difference of potential between two points of an electrical conductor when a current of 1 ampere flowing between those points dissipates a power of 1 watt. It is named after the Italian physicist Count Alessandro Giuseppe Anastasio Volta (1745-1827).

Watt [W]: The watt is used to measure power, the rate of doing work. One watt is a power of 1 joule per second. It is named after the Scottish engineer James Watt (1736-1819).

MEASUREMENT CONVENTIONS

The magnitudes of these variables should be expressed with the correct conventions for further analysis and understanding. A convention indicates the conditions of measurement and analysis and is essential mainly for comparing magnitudes. For example, the magnitudes of substantial variables like density and pressure are always expressed with respect to temperature. Density is defined as mass per volume at a fixed temperature, whereas specific gravity is the ratio of the density of a material to that of water; it is therefore a dimensionless variable. The density of water at 4 °C is exactly 1.000 g cm−3, so a specific gravity of 0.789 for ethanol tells us immediately that its density is 0.789 g cm−3.

ERRORS IN DATA AND CALCULATIONS

Measurements are prone to errors; therefore, all techniques for data analysis must take measurement error into account. Experimental errors are sometimes unavoidable and depend on the accuracy of the measuring instrument. For example, consider the measurement of length. When we measure the length of a table as 5 meters, we are actually comparing the length of the table with that of a standard that is 1 meter long. In this comparison, there is always some uncertainty regarding accuracy, which depends on the accuracy of the scale used for measuring the length. If the measured length lies between 5 and 6 meters and the scale used has no subdivisions of the meter marked on it, the measurement is not accurate. To get a more accurate measurement, use a scale where the meter is subdivided into centimeters; the length of the table can then be measured to the accuracy of centimeters, say 5 meters and 3 centimeters. Experimentally determined quantities always have errors to varying degrees.
The reliability of the conclusions drawn from such data must take experimental errors into consideration. Minimization of errors by adopting accurate measurement scales, estimation of the errors, and application of the principles of error propagation in calculations are very important in all sciences to prevent deceptive and confusing interpretation of facts.

ABSOLUTE AND RELATIVE UNCERTAINTY

Experimental and measurement errors always create uncertainty in the final data. This uncertainty can be expressed by introducing the rules of significant figures.

In this method, we specify the range of error by which each of the given values can vary; each reading is uncertain within this range. This error value is known as the absolute error. The same error can be represented as a percentage, in which case it is called the relative error. For example, the temperature of a solution may be reported as 37 ± 3 °C. Here, ±3 °C represents the range by which the reading is uncertain and is known as the absolute error. When the same error is represented as a percentage of the measured value, it is known as the relative error: 37 ± 3 °C corresponds to 37 °C ± 8.1%, since 3/37 ≈ 0.081.

TYPES OF ERRORS

Experimental errors can be broadly classified into two categories: (a) systematic errors and (b) random errors. When an error affects all measurements in the same way, it is called a systematic error. In most cases, the cause of this error is known, and introducing a correction factor can minimize it. For example, consider a watch showing an error of +5 minutes (five minutes fast); we can subtract five minutes from the time shown by the watch to get the correct time. A balance that shows an error of −0.5 g can likewise be corrected if the error is known. An error that occurs due to unknown causes is called a random error or accidental error. This type of error can be detected by repeating the experiment under the same conditions: if different values or results are obtained when the experiment is repeated without changing the experimental conditions, random errors are present. These errors can be quantified and minimized by applying methods of statistical analysis. The results or data of an experiment should be reliable and reproducible. The term precision refers to the reliability and reproducibility of results; it also indicates the extent to which the data are free from random errors.
We also use the term accuracy to refer to the quality of the data: when both systematic and random errors are minimal, or almost zero, and the results are reproducible, the data are said to be accurate.
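The absolute/relative error conversion described above is a one-line calculation; a minimal sketch, assuming Python:

```python
def relative_error(value, abs_err):
    # relative error expressed as a percentage of the measured value
    return 100.0 * abs_err / value

def absolute_error(value, rel_err_pct):
    # inverse conversion: percentage back to the same units as the value
    return value * rel_err_pct / 100.0

# 37 +/- 3 degrees C: the relative error is 3/37, i.e. about 8.1 %
print(round(relative_error(37.0, 3.0), 1))  # 8.1
```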

Liquid chromatography of biomolecules Proteins, peptides, DNA, RNA, lipids, and organic cofactors have various characteristics such as electric charge, molecular weight, hydrophobicity, and surface relief. Purification is usually achieved by using methods that separate the biomolecules according to their differences in these physical characteristics, such as ion exchange, gel filtration, and affinity chromatography.

Ion-Exchange Chromatography of Proteins

In ion exchange chromatography, the stationary solid phase commonly consists of a resin with covalently attached anions or cations. Solute ions of the opposite charge in the liquid mobile phase are attracted to these ions by electrostatic forces. Adsorbed sample components are then eluted by application of a salt gradient, which gradually desorbs the sample molecules in order of increasing electrostatic interaction with the ions of the column (Figs. 1–2). Because of its excellent resolving power, ion exchange chromatography is probably the most important chromatographic method in many protein preparations. The choice of ion exchange resin for the purification of a protein largely depends on the isoelectric point, pI, of the protein. At a pH value above the pI of a protein, it will have a negative net charge and adsorb to an anion exchanger; below the pI, the protein will adsorb to a cation exchanger. For example, if the pI is 4, then in most cases it is advisable to choose a resin which binds the protein at pH > 4. Since at pH > 4 this protein is negatively charged, the resin has to be an anion exchanger, e.g., DEAE. One could also use pH < 4 and a cation exchanger, but many proteins are not stable or aggregate under these conditions. If, in contrast, the protein we want to purify has a pI of 10, it is positively charged under the conditions usually suitable for protein ion exchange chromatography, i.e., at a pH around 7. Thus, in general, for this protein type we have to choose a cation exchange resin, e.g., CM, which is negatively charged at neutral pH. The capacity of the resin strongly depends on the pH and the pI of the proteins to be separated (Fig. 4; Table), but also on the quality of the resin, the applied pressure, and the number of runs of the column (Fig. 5).
To improve the life of the resin, it should be stored in a clean condition in the appropriate solvent and not be used outside the specified pH range and pressure limit.
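The resin-selection rule described above (net charge relative to the pI) can be expressed as a small helper; a sketch assuming Python, with a hypothetical function name:

```python
def choose_exchanger(pI, working_pH):
    # Above its pI a protein is net negative and binds an anion exchanger;
    # below its pI it is net positive and binds a cation exchanger
    if working_pH > pI:
        return "anion exchanger (e.g., DEAE)"
    if working_pH < pI:
        return "cation exchanger (e.g., CM)"
    return "no net charge at this pH; shift the working pH"

print(choose_exchanger(pI=4.0, working_pH=7.0))   # anion exchanger (e.g., DEAE)
print(choose_exchanger(pI=10.0, working_pH=7.0))  # cation exchanger (e.g., CM)
```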

Fig. 1 Example of ion exchange chromatography. (a)–(c) Loading the column: mobile anions (or cations) are held near cations (or anions) that are covalently attached to the resin (stationary phase). (d)–(f) Elution of the column with a salt gradient: the salt ions weaken the electrostatic interactions between sample ions and ions of the resin; sample molecules with different electrostatic properties are eluted at different salt concentrations, typically between 0 and 2 M. (g) Interaction of sample molecules with ions attached to the resin: at a suitable pH and low salt concentration, most of the three types of biomolecules to be separated in this example reversibly bind to the ions of the stationary phase.

Fig. 2 Two ion exchangers: diethylaminoethyl (DEAE) and carboxymethyl (CM). The positive charge of DEAE attracts negatively charged biomolecules; CM is suitable for purification of positively charged biomolecules.

Fig. 3 Example of the salt concentration profile during adsorption of a sample to an ion exchange column, subsequent elution of the sample, and cleaning of the column. Example of a purification protocol: first, the solution of biomolecules and impurities in buffer, contained in a syringe, is loaded onto the column. The biomolecules and some of the impurities bind to the ions attached to the resin. Once loading is completed, non-binding molecules are partly rinsed through the column with some further buffer. The next step is to apply a salt gradient with a programmable pump which mixes the buffer with a salt-containing buffer. The steep salt gradient at the beginning elutes most of the weakly binding impurities. At a certain salt concentration, the biomolecules to be purified elute from the column. Elution is monitored with an absorption detector at 280 nm wavelength and the sample fraction is collected. After each run the column is cleaned with 1–2 M KCl, which removes most of the strongly binding sample impurities.

Fig. 4 Charge properties of anion and cation exchangers. DEAE has a significant capacity at low and medium pH; CM has a high capacity at high and medium pH.

The experimental set-up (Fig. 6) often just consists of a bottle with buffer, a bottle with buffer with salt, a programmable FPLC or HPLC pump, the column, a detector and recorder of absorption at 280 nm, or occasionally at 220 nm, and a sample collector. If the right conditions for protein preparation are unknown, a pre-run is performed with a small fraction of the sample. Attention should be paid not to overload the column in preparative runs since this can shift peak positions and lead to substantial sample losses. In many cases of modern high expression of recombinant proteins, it is possible to obtain a protein with 99% purity with a single ion exchange chromatographic step.

Fig. 6 Typical setup for chromatographic purification of proteins with ion exchange FPLC. The pump mixes the salt gradient for sample elution after the sample was loaded, e.g., with a syringe.

Fig. 5 Change of the capacity of ion exchange columns with usage. High performance columns operated at the appropriate pressure and pH can last for many thousands of runs.

Gel Filtration Chromatography

Gel filtration chromatography (sometimes referred to as size exclusion chromatography) separates biomolecules based on differences in their molecular size. The process employs a gel medium suspended in an aqueous buffer solution, commonly packed into a chromatographic column; these columns can vary in size from very small spin columns to large preparative columns. Molecules too large to enter the pores of the gel are excluded and elute first (partition coefficient K = 0), while smaller molecules, with K between 0 and 1, are within the fractionation range for the column and elute later.
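The fractionation behaviour can be quantified with the partition coefficient conventionally used in gel filtration; a sketch assuming Python and the standard notation Kav = (Ve − V0)/(Vt − V0), where V0 (void volume) and Vt (total bed volume) are not defined explicitly in the text:

```python
def Kav(Ve, V0, Vt):
    # Kav = (Ve - V0) / (Vt - V0): 0 for fully excluded molecules,
    # close to 1 for small molecules that sample the entire pore volume
    return (Ve - V0) / (Vt - V0)

# Illustrative volumes in mL: a protein eluting midway through the range
print(Kav(Ve=15.0, V0=10.0, Vt=20.0))  # 0.5
```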

Applications of gel filtration chromatography

Desalting: Desalting is often necessary in the purification of biochemicals. A gel with a low exclusion limit (MW 1000–2000) is used. A short column and a high flow rate can be used because of the vast difference in size between the solutes and the contaminants. Macromolecules elute with little dilution, while salts are retained on the column.

Concentration of dilute solutions: The solution is mixed with a small quantity of dry gel, which absorbs 10 to 20 times its weight in water. Some salts and small molecules are taken up as well. The macromolecules end up in a solution of almost unchanged pH and ionic strength but significantly decreased volume.

Molecular weight determination: Size is approximately proportional to molecular weight, M, and the elution volume, VE, can be expressed as VE = a + b log M, where a and b are constants that depend on the mobile and stationary phases.

Affinity chromatography

Affinity chromatography is a method enabling purification of biomolecules and other macromolecules with respect to individual structure or function. It utilizes the highly specific binding of the macromolecule to a second molecule which is attached to the stationary phase. The principle of operation is as follows: (a) the sample is injected into the column; (b) buffer is rinsed through the column, so that sample molecules with no affinity for the stationary phase are eluted from the column, while sample molecules with a high affinity for the stationary phase are retained; (c) the retained sample molecules are eluted from the column by a buffer with a high salt concentration, a different pH, or a different solvent composition (Fig. below). The preparation of the protein can be performed by using a number of protein tags. The tags should not cause artificial interactions and should not alter the conformation of the tagged protein. Very common are poly-histidine tags, which are attached to the protein by genetic engineering (Fig. below). The tag typically consists of 8–12 histidine residues; it binds to nickel compounds at the surface of the chromatography beads. A somewhat different variant of affinity chromatography, illustrated in a figure below, continuously refolds misfolded proteins with chaperones before eluting them with buffer.

Proteins are separated according to the difference in their molecular masses, as described previously.

Procedure:
1. The column must first be equilibrated with the desired buffer: pass several column volumes of buffer through the column. This is an important step, because the equilibration buffer is the buffer in which the protein sample will elute.
2. Next, the sample is loaded onto the column and allowed to enter the resin.
3. More equilibration buffer is then passed through the column to separate the sample and elute it from the column.
4. Fractions are collected as the sample elutes from the column. Larger proteins elute in the early fractions and smaller proteins elute in subsequent fractions.

Types of column: exclusion ranges for some common gel filtration chromatography media

| Medium | Fractionation range |
|---|---|
| Sephadex G-50 | 1–30 kDa |
| Sephadex G-100 | 4–150 kDa |
| Sephadex G-200 | 5–600 kDa |

Sephadex is a trademark of Pharmacia.
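Within these fractionation ranges, elution volume varies approximately linearly with log M (VE = a + b log M, as noted above), which allows molecular weight estimation from a calibration curve. A minimal least-squares sketch assuming Python; the standards and volumes below are hypothetical:

```python
import math

def fit_calibration(standards):
    # Least-squares fit of VE = a + b*log10(M) from (M, VE) pairs;
    # a and b depend on the mobile and stationary phases
    xs = [math.log10(M) for M, _ in standards]
    ys = [VE for _, VE in standards]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

def molecular_weight(VE, a, b):
    # Invert the calibration to estimate M from an observed elution volume
    return 10 ** ((VE - a) / b)

# Hypothetical standards: (molecular weight, elution volume in mL)
a, b = fit_calibration([(13_700, 18.0), (66_000, 14.0), (440_000, 9.2)])
print(round(molecular_weight(12.0, a, b) / 1000))  # estimated size in kDa
```

Note that the fitted b is negative: larger molecules elute earlier, so VE decreases as log M increases.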

Fig. Purification of antibodies with affinity chromatography: The antigen is chemically bound to the beads of the column and the mixture of antibodies is rinsed through the column. Antibodies with high binding constants bind to the antigen and are eluted later with a buffer with a high salt concentration

Immobilized Metal Ion Affinity Chromatography

Immobilized metal ion affinity chromatography (IMAC) of proteins can differentiate a single histidine residue on the surface of a protein, can bind proteins with dissociation constants of 10−5 to 10−7, and has found wide application in the field of molecular biology for the rapid purification of recombinant proteins. Since the first work was published describing the immobilization of metal ions, using a chelating agent covalently attached to a stationary support, to purify proteins, there have been several modifications and adaptations of this technique. The traditional approach uses immobilized metal ions, in particular borderline Lewis metal ions such as Cu2+, Ni2+, and Zn2+, to purify proteins on the basis of their histidine content. This application was further extended by incorporating a hexa-histidine tail onto the protein and using the tail as a purification handle in combination with a highly selective stereochemistry in the form of a Ni-nitrilotriacetic acid (Ni-NTA) complex for binding the purification handle. Yet another mode of interaction involved in the IMAC of proteins is mixed-mode interaction involving aspartate and/or glutamate surface residues on proteins along with electrostatic interactions, again independent of histidine interactions. The traditional use of IMAC for proteins has been to select proteins on the basis of their histidine content. The approach uses a chelating agent immobilized on a stationary surface to capture a metal ion and form an immobilized metal chelate complex (IMCC) (see Fig.). Generally, Cu2+, Ni2+, and Zn2+ have been used in this mode, but other metal ions such as Co2+, Cd2+, Fe2+, and Mn2+ have also been examined. Histidine selection by the IMCC exploits the preference of borderline Lewis metals for nitrogen donors: with a pKa of about 6.0, the imidazole side chain of histidine is largely deprotonated at neutral pH and can donate electrons effectively.

Fig. Attachment of a protein to a bead of an affinity column with a histidine tag. About 10 histidine residues were attached to the protein by genetic engineering, e.g., by polymerase chain reaction (PCR) mutagenesis. The histidine residues strongly bind to the bead made from a nickel chelate resin

Fig. Refolding of expensive, poorly folding proteins: folding chaperones, also known as chaperonins, are attached to the beads and the unfolded or misfolded protein is rinsed through the column. The chaperone interacts with the sample protein and catalyses its folding into the correct conformation.

Separation by affinity chromatography is based on a biological property of macromolecules rather than on a physical property. It is a highly sensitive technique and, in principle, can give an absolutely pure sample in a single step. The technique requires that the macromolecule be able to bind to a specific ligand attached to an insoluble matrix under certain conditions, and then detach itself under certain other conditions. In other words, the binding should be specific and reversible.

Fig: Histidine residues binding to an immobilized metal ion affinity adsorbent.

Thin layer chromatography In this case, the stationary phase is applied to a glass, plastic or metal foil plate as a uniform, thin layer, and the sample is applied near one edge of the layer using a micro-pipette or syringe. This technique can be used for partition chromatography, adsorption chromatography and exclusion chromatography. Its advantages are that a large number of samples can be run simultaneously, and that it is quick and simple.

Paper chromatography In this technique, the cellulose fibres of a sheet of special chromatographic paper act as a support or the stationary phase (Figure below). There are generally two methods that are employed in paper chromatography – the ascending method, and the descending method. In both of them the solvent is placed at the bottom of a jar or tank so that the chamber is saturated with solvent vapor. The chromatographic paper is held vertically inside the tank. In the ascending method, the lower end of the paper dips into the solvent and the sample spot is applied just above the surface of the solvent. As the solvent moves up the sheet of paper by capillary action, the sample constituents move along with it and get separated. In the descending method, the upper end of the paper, where sample is applied, is held in a trough of solvent at the top of the tank, and the rest of the paper is allowed to hang down, without touching the solvent at the bottom of the tank. Again by capillary action (and gravity), the solvent moves down the sheet of paper, taking the sample along with it and separating the components. The components are detected by spraying the paper with specific colouring reagents, for example, ninhydrin, which binds only to amino acids and proteins. Sometimes fluorescent dyes are used. For example, ethidium bromide is used with DNA samples. When the paper is subsequently examined under ultraviolet light, the position of the fluorescent or ultraviolet absorbing spots can be seen. The corresponding compounds are identified on the basis of their Rf values where

Rf = (distance moved by solute) / (distance moved by solvent front)

The Rf value is the relative displacement of the solute with respect to the solvent front and is a constant for a given compound under standard conditions. Thus, by looking up a table of Rf values, it is possible to identify the compound after separation.
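The Rf calculation can be sketched directly; the distances below are hypothetical measurements from a developed chromatogram, not values from the text:

```python
def rf_value(solute_distance_cm, solvent_front_cm):
    """Rf = distance moved by the solute / distance moved by the
    solvent front, both measured from the origin spot."""
    if solvent_front_cm <= 0:
        raise ValueError("solvent front must have moved a positive distance")
    return solute_distance_cm / solvent_front_cm

# e.g. an amino acid spot at 3.6 cm with the solvent front at 9.0 cm:
rf = rf_value(3.6, 9.0)  # 0.40
```

Comparing this value against a table of reference Rf values (measured under the same solvent and paper conditions) identifies the compound.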

Adsorption chromatography This technique separates the sample into its components based on the degree of their adsorption by the adsorbent and on their solubility in the solvent used. Silicic acid (silica gel), aluminium oxide, calcium carbonate and cellulose may be used as the stationary phase in this method. The optimum combination of adsorbent and eluting solvent depends on the type of separation under consideration. For example, hydroxyapatite (calcium phosphate) is used for DNA separation, as this adsorbent binds double-stranded DNA but not single-stranded DNA or RNA. Normally any organic solvent can be chosen as the mobile phase, e.g. hexane, heptane, propanol, butanol, etc.

Partition chromatography There are two types of partition chromatography. (a) Liquid-liquid chromatography: In this type of partition chromatography, separation is achieved based on the differing solubilities of the material in two immiscible liquid phases. The sample is dissolved in a relatively non-polar solvent and is passed through a column that contains immobilized water, or some other polar solvent, in a supporting matrix such as silica gel. As the sample moves through the column it is partitioned between the mobile phase and the immobile, polar phase. In the reverse-phase liquid-liquid chromatographic method, the stationary phase is a non-polar substance, like liquid paraffin, supported by the matrix. High Performance Liquid Chromatography (HPLC) is a liquid-liquid partition chromatographic technique. It is widely used for the separation of biological compounds, either as a normal-phase method or as a reverse-phase method; the latter is useful for separating polar compounds. (b) Counter current chromatography: This is also partition chromatography, with the major difference that there is no supporting matrix. The sample is equilibrated between two immiscible solvents (e.g. ethyl acetate and water) in a test tube. The partition coefficient K is defined as:

K = (concentration of solute in upper phase) / (concentration of solute in lower phase)

After equilibration the upper phase is withdrawn and transferred to a fresh volume of lower phase solvent in another tube. The original lower phase is added to a fresh volume of upper phase solvent, and so on. The process is repeated several times, yielding several test tubes, half of them finally containing the upper phase solvent, and half containing the lower phase solvent. A solute which is more soluble in upper phase (K > 1) will concentrate in tubes containing the upper phase solvent. If the solubility is greater for lower phase (K < 1) then it will concentrate in tubes of the lower phase solvent. Counter current chromatography is successfully used for cell organelle fractionation.
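The transfer scheme just described is the classical Craig countercurrent distribution: with equal phase volumes, the fraction of solute found in each tube after n transfers follows a binomial distribution, with p = K/(K + 1) entering the upper phase at each equilibration. A minimal sketch (function name and example values are ours):

```python
from math import comb

def craig_distribution(K, n_transfers):
    """Fraction of solute in each tube after n transfers of the upper
    phase, assuming equal upper- and lower-phase volumes."""
    p = K / (K + 1.0)   # fraction entering the upper phase per step
    q = 1.0 - p
    # After n transfers the solute is spread over n+1 tubes; tube r
    # holds the binomial fraction C(n, r) * p**r * q**(n-r).
    return [comb(n_transfers, r) * p ** r * q ** (n_transfers - r)
            for r in range(n_transfers + 1)]

# A solute with K > 1 concentrates in the later (upper-phase) tubes,
# one with K < 1 in the earlier (lower-phase) tubes:
d_upper = craig_distribution(3.0, 10)
d_lower = craig_distribution(1.0 / 3.0, 10)
```

Running more transfers sharpens the separation between two solutes of different K, which is why the process is repeated several times.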

Gas liquid chromatography (GLC) The two phases used are a liquid and a gas. The method is very sensitive and reproducible. The partition coefficient of a volatile sample between the liquid and gas phases, as it is carried through the column by the carrier gas, may differ considerably from unity, and this feature is exploited to separate or purify the compound. The stationary or immobile phase is generally non-volatile and thermally stable at the temperature of the experiment: usually organic compounds with a high boiling point are coated onto a base, generally made of celite (diatomaceous silica), to form the stationary phase. This coated support is packed into a narrow, coiled, steel or glass column, which is placed in an oven at an elevated temperature. An inert gas like argon, nitrogen or helium is passed through the column. The sample is injected, volatilised, and carried along by the mobile gas phase; as it passes over the packed stationary phase, it distributes itself between the gas and liquid phases, and its components are thereby separated.

Chromatofocusing Chromatofocusing is a protein-separation technique that was introduced by Sluyterman and his colleagues between 1977 and 1981. It combines the high capacity of ion-exchange procedures with the high resolution of isoelectric focusing in a single chromatographic focusing procedure. During chromatofocusing, a weak ion-exchange column of suitable buffering capacity is equilibrated with a buffer that defines the upper pH of the separation gradient to follow. A second "focusing" buffer is then applied to elute bound proteins, roughly in order of their isoelectric points (pI). The pH of the focusing buffer is adjusted to a value that defines the lower limit of the pH gradient. The gradient is formed internally during isocratic elution with the single focusing buffer; no external gradient-forming device is required. It forms as the eluting (focusing) buffer titrates the buffering groups on the ion exchanger. Peak widths in the range of 0.05 pH unit can be achieved, and samples containing several hundred milligrams of protein can be processed in a single step. Chromatofocusing is therefore a powerful analytical probe of protein surface charge, as well as an effective preparative technique for protein isolation. The application of chromatofocusing to silica-based stationary phases for use in a high-performance mode has extended the utility of this technique.

Electrophoresis Important biological molecules like peptides, proteins, carbohydrates and nucleic acids have ionizable chemical groups and hence, under suitable conditions, exist in solution as charged species. Even nonpolar molecules can be made to exist as weakly charged species by preparing phosphate, borate or similar derivatives. Molecules with a similar charge may have different charge/mass ratios, since their molecular weights can differ. Molecules with differing charges or differing charge/mass ratios migrate at different rates in an electric field; the separation of molecules by exploiting these differences in migration rate is known as electrophoresis. If a molecule has a net charge q, the application of an electric field E results in a force

F = qE

This force accelerates the particle in the fluid until a steady state is reached, at which the frictional force is equal and opposite to the applied force. If v is the steady-state velocity and f the frictional coefficient, then

fv = qE, so v = qE/f

For a spherical molecule with radius a and charge q = Ze (where e is the charge on the electron and Z the number of elementary charges on the molecule), Stokes' law gives f = 6πηa, so

v = ZeE/(6πηa)

where η is the viscosity of the solution. The electrophoretic mobility u is defined as the velocity per unit electric field:

u = v/E

However, due to the presence of counter-ions in any aqueous solution, this expression does not completely account for the electrophoretic mobility. Counter-ions can neutralize some of the charges on the macromolecule if they bind tightly to it; alternatively, unbound or loosely bound ions can modify the ionic strength of the solution and thus alter the electric field felt by the polymer. In such a situation the electrophoretic mobility of a spherical particle is modified by a factor K, where K is a function of the screening parameter, which has the dimensions of L-1 (inverse of length).
Therefore,

u = ZeK/(6πηa)

Thus the mobility increases with increasing charge on the macromolecule. Electrophoresis procedures can generally be divided into three categories: (1) moving boundary electrophoresis, (2) zone electrophoresis, and (3) continuous flow electrophoresis.
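Under the simplifying assumptions above (spherical particle, Stokes friction, screening factor K taken as 1), the steady-state drift velocity can be evaluated numerically. The parameter values below (net charge, radius, field strength, viscosity of water) are illustrative assumptions, not values from the text:

```python
from math import pi

E_CHARGE = 1.602e-19   # C, elementary charge
ETA_WATER = 1.0e-3     # Pa*s, viscosity of water near 20 C (assumed)

def drift_velocity(Z, radius_m, field_V_per_m, eta=ETA_WATER):
    """v = Z e E / (6 pi eta a): Stokes-law steady state for a
    spherical particle, with the counter-ion factor K set to 1."""
    return Z * E_CHARGE * field_V_per_m / (6 * pi * eta * radius_m)

# Illustrative: net charge 10 e, radius 3 nm, field 1000 V/m.
v = drift_velocity(10, 3e-9, 1000.0)   # on the order of 1e-5 m/s
```

Doubling the charge Z doubles v, while doubling the radius a halves it, in line with the u = ZeK/(6πηa) expression.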

Moving boundary electrophoresis In this method, the sample is dissolved in a buffer and placed in a cell, and an additional quantity of the buffer is layered on top of it. When the electric field is applied, closely related molecules tend to move as a band, since their charges are similar, and boundaries form between groups of molecules with different electrophoretic mobilities. The separations can be followed by direct optical observation. This method suffers from many experimental difficulties, such as errors due to convection, and hence is not very useful as a preparative technique. Zone electrophoresis In this method the mixture to be analyzed is applied as a small spot or very thin band to the supporting medium. When the electric field is applied, the molecules move as distinct bands or zones, and can be identified by staining, UV absorption, etc. There are several different types of zone electrophoresis; we treat some of them below. Low voltage electrophoresis The electrophoresis unit (see Figure below) consists of a power pack, which supplies DC current across two electrodes, buffer reservoirs, a support, and a transparent insulating cover. The low voltage power pack can give either a constant voltage or a constant current, with an output of 0 to 500 V and 0 to 150 mA. Either stainless steel or platinum electrodes are used. The supporting medium, which can be paper or cellulose acetate, must first be saturated with buffer and held on a sheet of insulating material like Perspex. The sample is then applied as a small spot using a micropipette. The location of the spot depends on the nature of the molecular mixture in the sample: for example, if there are molecules with opposite charges, they will separate and move towards opposite electrodes, and hence the sample must be applied at the centre. The two reservoirs on either side of the medium are isolated from the electrodes so that any pH change there does not affect the buffer.
After application of the sample, power is switched on for the required amount of time. At the end of the experiment, the migration of the different constituents of the mixture may be analyzed. The experiment may be performed in different ways. For example, instead of paper or cellulose acetate, thin layers of silica, alumina or cellulose may be applied on to a glass plate and used as the supporting medium. This technique is similar to TLC (thin layer chromatography) and is known as thin layer electrophoresis (TLE).

High voltage electrophoresis A disadvantage of low voltage paper electrophoresis is that diffusion processes may influence the migration of the molecules, so that it is not strictly according to their charges. One way to overcome this is to resort to high voltages, of the order of 10,000 volts; such a voltage can produce potential gradients of up to 200 volts/cm. If the separation between the electrodes is d metres and V is the potential difference in volts, then the potential gradient is V/d volts/metre, and the force on a molecule carrying a charge of q coulombs is Vq/d newtons. The migration of the molecules is caused by this force and is proportional to it, so an increase in the potential gradient leads to an increased rate of migration. The distance migrated by the ions depends on the applied voltage as well as on the time for which it is applied. This technique gives higher resolution and faster separation, but because of the high voltage it generates considerable heat, and the apparatus requires cooling. Gel electrophoresis When applied to the separation of a mixture of proteins, simple zone electrophoresis has limited resolving power, due to the similarity in mobility of the components. The molecular sieving property of the gel helps in separating molecules that have similar charge properties but differ in size and shape. The gels can be run as horizontal slabs or as vertical slabs, and are prepared in glass or Perspex containers.

Cross-linker Most protocols use acrylamide and the cross-linker bisacrylamide (bis) for the gel matrix. TEMED (N,N,N′,N′-tetramethylethylenediamine) and ammonium persulfate are used to catalyze the polymerization of the acrylamide and bis. TEMED, a base, interacts with ammonium persulfate at neutral to basic pH to produce free radicals; the free radical form of ammonium persulfate initiates the polymerization reaction via addition to a vinyl group. Another cross-linker, BAC (bis-acrylylcystamine), can be dissolved by β-mercaptoethanol and is useful for nucleic acid electrophoresis. One other cross-linker, piperazine diacrylamide (PDA), can replace bis-acrylamide in isoelectric focusing (classical tube gel or flatbed gel) experiments.

Initiators or catalysts: TEMED and APS

Figure: Polymerization of acrylamide

How Do You Control Pore Size? Pore size is most efficiently and predictably regulated by manipulating the concentration of acrylamide in the gel. Pore size also changes with the amount of cross-linker, but the effect is minimal and less predictable. Note the greater impact of acrylamide concentration on pore size, especially at the levels of cross-linker usually present in gels. Practical experience with various acrylamide : bis ratios has shown that it is best to change pore size by changing the acrylamide concentration. Why Should You Overlay the Gel? An overlay is essential for adequate resolution. If you do not overlay, the bands will take the shape of a meniscus: two closely spaced bands will overlap, with the middle of the top band extending down between the front and back of the bottom band. Overlaying the gel during polymerization prevents this problem. Common overlays are best-quality water, the buffer used in the gel at 1x dilution, and water-saturated t-butanol; the choice is a matter of personal preference. Reproducible Polymerization: Reproducible polymerization is one of the most important ways to ensure that your samples migrate as sharp, thin bands to the same location in the gel every time. Attention to polymerization will also help keep the background of your stained gels low. Acrylamide polymerization is affected by the amount of oxygen gas dissolved in the solution, the concentrations and condition of the catalysts, the temperature and pH of the stock solutions, and the purity of the gel components. Gel Percentage vs. Catalyst Concentration

WHICH GEL SHOULD YOU USE? SDS-PAGE, NATIVE PAGE OR ISOELECTRIC FOCUSING? The strategy you choose depends on your goal, of course. If you want to determine the molecular weight of your protein, use SDS-PAGE. If you want to measure the isoelectric point of your protein, choose isoelectric focusing (IEF). For proteomics work, use 2D electrophoresis (IEF followed by SDS-PAGE). Native PAGE is used to assay enzyme activity, or other biological activity, for example during a purification procedure. Each kind of protein PAGE has issues to consider, and these are addressed in the next section.

Will Your SDS Gel Accurately Indicate the Molecular Weight of Your Proteins? Estimation of the molecular weight of the protein of interest, accurate to within 2000 to 5000 daltons, requires the protein band(s) to run within the middle two-thirds of the gel. This is illustrated in the graph of the log of the molecular weight of a set of standard proteins versus the relative mobility of each one. Note that proteins with a relative mobility below 0.3 or above 0.7 fall off the linear portion of the curve; the most accurate molecular weight values are therefore obtained when the relative mobility of the protein of interest is between 0.3 and 0.7. This means that if your protein does not enter the gel very well, you must change the gel %T before you can get a good molecular weight value. The sample may also require a different (better) solubilization procedure.

Figure: Log of the molecular weight (in daltons) of a protein versus the relative mobility. Reproduced with permission from Bio-Rad Laboratories.

Straight % Gel or a Gradient Gel? If you want to resolve proteins that are within a few thousand daltons of each other in molecular weight, use a straight percent gel (the same concentration of acrylamide throughout the gel). To get baseline resolution for such proteins, that is, clear unstained space between bands, you may need to use a longer gel: mini gels have 6 to 8 cm of resolving space, while large gels have 12 to 20 cm. The closer the bands are in molecular weight, the longer the gel must be. A gradient gel is used to resolve a larger molecular weight range than a straight percent gel. A 10% gel resolves proteins from 15 to 100 kDa, while a 4% to 20% gradient gel resolves proteins from 6 to 300 kDa, although the restriction on good molecular weight determination discussed above still holds. Accurate molecular weights can be determined with gradient gels.
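The standard-curve procedure (log MW versus relative mobility, linear between roughly 0.3 and 0.7) can be sketched as a simple least-squares fit. The marker set below is hypothetical, chosen only so that the unknown falls on the linear region:

```python
from math import log10

def fit_standard_curve(standards):
    """Least-squares fit of log10(MW) vs relative mobility for marker
    proteins. standards: list of (relative_mobility, MW_daltons)."""
    n = len(standards)
    xs = [rm for rm, _ in standards]
    ys = [log10(mw) for _, mw in standards]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def estimate_mw(relative_mobility, slope, intercept):
    """Interpolate an unknown protein's MW from the fitted curve."""
    return 10 ** (slope * relative_mobility + intercept)

# Hypothetical marker set (relative mobility, MW in daltons):
markers = [(0.30, 97_000), (0.45, 66_000), (0.60, 45_000), (0.75, 31_000)]
m, b = fit_standard_curve(markers)
unknown_mw = estimate_mw(0.50, m, b)   # falls between the 66 and 45 kDa markers
```

The slope comes out negative, as the figure shows: larger proteins move less far, so log MW falls as relative mobility rises.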

Isoelectric Focusing: Isoelectric focusing (IEF) measures the isoelectric point, or pI, of a protein. The main problem for IEF is sample solubility, seen as streaking or in-lane background on the stained IEF gel, or horizontal streaking on a 2-D gel. Sample solubilization should be optimized for each new sample. At present there are two kinds of IEF gels in use: gels formed with carrier ampholytes, and gels formed with acrylamido buffers, known as IPG gels (immobilized pH gradient gels).

CRITICAL FOR SUCCESSFUL NATIVE PAGE Sample Solubility Native PAGE is performed under conditions that do not denature proteins or reduce their sulfhydryl groups. Solubilizing samples for native PAGE is especially challenging because most nondenaturing detergents do not solubilize complex samples well, and the unsolubilized proteins stick at the gel origin and bleed in, causing in-lane background.

Location of Band of Interest Sample proteins move in a native gel as a function of their charge as well as their mass and conformation, and because of this the location of the protein band of interest may be difficult to determine. For instance, in some buffer systems BSA, at 64 kDa, will move in front of soybean trypsin inhibitor, at 17 kDa. The easiest way to detect the protein of interest is to determine its location by Western blotting. Alternatively, the protein's location can be monitored by enzyme activity or bioassay, which usually requires elution from the gel. How Can You Be Sure That Your Proteins Have Sufficient Negative Charge to Migrate Well into a Native PAGE Gel? To determine this, it is useful to have some idea of the pI of the protein of interest. The pH of the buffer should be at least 2 pH units more basic than the pI of the protein of interest. An alternative is to use an acidic buffer system and reverse the polarity of the electrodes; this works well for very basic proteins. Buffer Systems for Native PAGE Buffer systems for native PAGE are either continuous or discontinuous. Discontinuous buffer systems focus the protein bands into thin, fine lines in the stacking gel, and these systems are preferred because they provide superior resolution and allow larger, more dilute sample volumes. In a discontinuous buffer system, the buffers in the separating gel and stacking gel, and the upper and lower tank buffers, may all differ in concentration, molecular species, and pH. The reader should initially try the standard Laemmli SDS-PAGE buffer system without the SDS and reducing agent: that buffer system is relatively basic, so most proteins will be negatively charged and run toward the anode. For protein gels, the choice between continuous and discontinuous buffer systems is usually made on the basis of what works, and of the pI of the protein(s) of interest.
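The 2-pH-unit rule of thumb above can be expressed as a small decision helper. This is only a sketch of the guideline; the function name and returned strings are ours:

```python
def native_page_setup(protein_pI, buffer_pH):
    """Apply the rule of thumb: the running buffer should be at least
    2 pH units more basic than the protein's pI for the protein to
    carry enough negative charge to run toward the anode. Very basic
    proteins call for an acidic system with reversed polarity."""
    if buffer_pH >= protein_pI + 2:
        return "run toward anode (normal polarity)"
    if buffer_pH <= protein_pI - 2:
        return "acidic system: reverse electrode polarity"
    return "marginal charge: adjust buffer pH or choose another system"

# e.g. a pI-5 protein in the (roughly pH 8.8) Laemmli separating buffer:
decision = native_page_setup(5.0, 8.8)
```

A protein whose pI sits within 2 units of the buffer pH carries little net charge and will migrate poorly either way, which is the case the third branch flags.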
POWER SUPPLY Macromolecules move through a polyacrylamide or agarose gel because they carry a charge at the pH of the buffer used in the system, and the voltage potential put across the cell by the power supply drives them through the gel. This is the effect of the main voltage potential, set by the power supply.

Constant Current or Constant Voltage—When and Why? The choice of constant current or constant voltage depends on the buffer system, and especially on the size of the gel. Historically constant voltage was used because constant current power supplies were not available. However, currently available programmable power supplies, with constant voltage, constant current, or constant power options, permit any power protocol to be used as needed. Generally speaking, constant current provides better resolution because the heat in the cell can be controlled more precisely (The higher the current, the higher the heat, and the poorer is the resolution, due to diffusion of the bands.) However, constant current runs will take longer than constant voltage runs (Table below).
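The behaviour of the three power-supply modes follows from Ohm's law (V = IR, P = VI): as the gel's resistance changes during the run, the supply holds one quantity fixed and the other two adjust. A minimal sketch (function name and example values are ours):

```python
def run_parameters(R_ohms, mode, setpoint):
    """Return (V, I, P) for a gel of resistance R_ohms when the power
    supply holds `mode` fixed at `setpoint` (volts, amperes, or watts)."""
    if mode == "constant_voltage":
        V = setpoint
        I = V / R_ohms
    elif mode == "constant_current":
        I = setpoint
        V = I * R_ohms
    elif mode == "constant_power":
        I = (setpoint / R_ohms) ** 0.5   # from P = I**2 * R
        V = I * R_ohms
    else:
        raise ValueError("unknown mode: " + mode)
    return V, I, V * I

# As the run proceeds and resistance rises, a constant-power supply
# lowers the current automatically, keeping the heat in the gel steady:
V1, I1, P1 = run_parameters(100.0, "constant_power", 25.0)
V2, I2, P2 = run_parameters(150.0, "constant_power", 25.0)
```

Since band diffusion tracks the heat dissipated in the gel, modes that cap I or P keep resolution more consistent than fixing V while the resistance drifts.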

Why Are Sequencing Gels Electrophoresed under Constant Power? Sequencing gels are run under constant voltage or constant power, at a temperature between 50 and 55°C. If constant voltage is used, the voltage must be changed during the run once the desired temperature is reached. If constant power is used, the power can be set once, and the voltage and current will adjust as the run proceeds, maintaining the elevated temperature required for good band resolution. The elevated temperature and the urea in the sequencing gel keep the DNA denatured. IMPROVING RESOLUTION AND CLARITY OF PROTEIN GELS How Can You Generate Reproducible Gels with Perfect Bands Every Time? High-quality, reproducible results are generated by using pure, electrophoresis-grade chemicals and electrophoresis-grade water; by preparing solutions the same way every time, with exact measurement of volumes; by correctly polymerizing your gels the same way every time, as discussed above; and by preparing the samples so that they enter the gel completely, without contaminating components that can degrade the resolution. The most important factors for good band resolution and clarity are correct sample preparation and the amount of protein loaded onto the gel; they are discussed in greater detail below. Finally, the detection procedure must be followed carefully, with attention to detail and elapsed time.

Why Are Nucleic Acids Almost Always Separated via Constant Voltage? Nucleic acids are usually separated with a continuous buffer system (the same buffer everywhere). Under these conditions, runs take the same time with constant voltage as with any other parameter held constant, and the resolution is not improved by holding another parameter constant. This is usually true for both agarose and PAGE. The use of continuous buffers in nucleic acid electrophoresis makes the gels easy to pour and to run. As with protein separation, small sample sizes must be used with continuous buffer systems, particularly in vertical systems, to prevent bands from overlapping.

What Procedures and Strategies Should Be Used to Optimize Protein Sample Preparation? Consider the cellular location of your protein of interest, and attempt to eliminate contaminating materials at the earliest stages of the purification. If it is a nuclear binding protein, first isolate the nuclei from your sample, usually with differential centrifugation, and then isolate the proteins from the nuclei. If it is a mitochondrial protein, use differential centrifugation to isolate mitochondria (spin the cell lysate at 3000 x g to remove nuclei, then at 10,000 x g to bring down mitochondria). If the protein is membrane bound, use a step gradient of sucrose or other centrifugation medium to isolate the specific membrane of interest. For soluble proteins, spin the cell lysate at 100,000 x g to remove all cellular membranes and use the supernatant. Note that nucleic acids are very sticky; they can cause proteins to aggregate together with a loss of electrophoretic resolution. If you have smearing in your sample, add 1mg/ml of DNase and RNase to remove the nucleic acids.

Is the Problem Caused by the Sample or the Sample Buffer? Sometimes it is difficult to determine whether a problem lies in the sample or in the sample buffer. Run the standard both with and without the sample buffer to determine this. It is best to prepare the sample buffer without reducing agent, i.e. dithiothreitol (DTT), beta-mercaptoethanol (BME), or dithioerythritol (DTE); freeze it into aliquots; and add the reducing agent to each aliquot before use. All these reducing agents evaporate readily from aqueous solution, so adding the reducing agent fresh for each use ensures it is always at full strength. Buffer components may separate out during freezing, especially urea, glycerol, and detergents, so aliquots of sample buffer must be mixed thoroughly after thawing to ensure a homogeneous solution. How Do You Choose a Detergent for IEF or Native PAGE? Triton X-100 is often used to keep proteins soluble during IEF or native PAGE, but it may solubilize only 70% of the protein in a cell. SDS is the best solubilizer, but it cannot be used for IEF because it imparts a negative charge to the proteins: during the IEF it is stripped off the proteins by the voltage potential, and the formerly soluble proteins precipitate in the IEF gel, resulting in a broad smear. Of course, SDS cannot be used in native PAGE either, because it denatures proteins very effectively. Some authors state that SDS may be used in combination with other detergents at 0.1% or less, and it may help solubilize some proteins when used this way; however, this is not recommended, as the protein loads must remain low and other problems may result. Many non-ionic or zwitterionic detergents can be used for IEF or native PAGE to keep proteins soluble. CHAPS (3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate) is most often used, as it is a very good solubilizer and is non-denaturing; it should be used at 0.1% up to 4.0%.
Another very effective solubilizer is SB 3-10 (decyldimethylammoniopropanesulfonate), but it is denaturing. Other detergents, designed especially for IEF on IPG gels, have recently been introduced and used successfully. The minimum detergent concentration for effective solubilization must be determined for each sample. What Other Additives Can Be Used to Enhance Protein Solubility?

Some proteins are very difficult to solubilize for electrophoresis. Urea can be used, from 2 to 8 M or 9.5 M. Thiourea can be used at up to 2 M; it greatly enhances solubility but cannot be used at higher concentrations, because above 2 M the urea, thiourea, or detergent may precipitate out. For the same solubility reasons, the total urea concentration (urea + thiourea) cannot be above approximately 7.0 M if thiourea is used with a bis gel. AGAROSE ELECTROPHORESIS What Is Agarose? Agarose, an extract of seaweed, is a polymer of galactose: 1,3-linked β-d-galactopyranose and 1,4-linked 3,6-anhydro-α-l-galactopyranose. The primary applications are electrophoresis of nucleic acids, electrophoresis of very large proteins, and immunoelectrophoresis. What Is Electroendosmosis (-Mr or EEO)? -Mr is a measure of the amount of electroendosmosis that occurs during electrophoresis with a particular grade of agarose. Electroendosmosis is the mass movement of water toward the cathode, against the movement of the macromolecules, which is usually toward the anode; a high -Mr means high electroendosmosis. This mass flow of water toward the cathode is caused by fixed negative charges in the agarose gel (sulfate and carboxyl groups on the agarose). Depending on the application, electroendosmosis either causes loss of resolution or enables certain kinds of separations to occur, for instance during counter immunoelectrophoresis. Applications for agarose preparations of different -Mr values are shown in the Table here. Agarose Preparations of Different -Mr Values

What Causes Nucleic Acids to Migrate at Unexpected Migration Rates? Supercoiled DNA is so twisted about itself that it has a smaller Stokes radius (hydrated radius), and moves faster than some smaller DNA fragments. If supercoiled DNA is nicked, it will unwind, or start to unwind, during the electrophoresis and become entangled in the agarose. As this occurs, the DNA's migration slows, producing unpredictable migration rates.

What Causes Fuzzy Bands?
1. The sample may have been degraded by endogenous DNase.
2. The run was too long, or at too high a temperature.
3. Samples were loaded too high in the well (overloading).
4. Poor-quality agarose.
5. Contaminated buffers (buffers should be autoclaved).

ELUTION OF NUCLEIC ACIDS AND PROTEINS FROM GELS Table here summarizes the features, benefits and limitations of different elution strategies. DNA purification and elution.

DETECTION What Should You Consider before Selecting a Stain? There are several factors to consider before selecting a stain, primary among them the sensitivity needed. The tables given below provide a general guide to stain sensitivity and mention other considerations.

Common Protein Stains

Common Nucleic Acid Stains

Will the Choice of Stain Affect a Downstream Application? This is an important question. Colloidal Coomassie and Sypro® Ruby can be used on 2-D gels when mass spectrometry is the detection procedure. Certain silver stains can also be used to stain samples for mass spec analysis, because of improvements in the sensitivity of mass spectrometers. Sypro Red covers three orders of magnitude, Coomassie covers two, and silver stains provide coverage over one. Not all silver stains give good mass spectrometry results, and those that do are not as good as Coomassie or Sypro Ruby. For amino acid sequencing, the gel is usually blotted to PVDF, stained for the protein of interest, and then sequenced. Immunodetection or other more sensitive methods can be used, but sequencing usually requires at least 1 mg of protein. For this reason we suggest that you stain your blot with Coomassie; this does not interfere with sequencing. Note that if you want to blot your gel after staining, only reversible stains, such as copper stain and zinc stain, can be used with good success. If you stain your gel with Coomassie or silver, the proteins are fixed in the gel and are very difficult to transfer to a membrane; only copper or zinc stains are recommended before blotting a gel for immunodetection. How Much Time Is Required for the Various Stains? The speed of staining is quite variable, depending on the quality of the water, the temperature, and how closely the staining steps are timed. Gels stained with Coomassie can be left in stain from 30 minutes to overnight, but longer staining times require much longer destaining times and more changes of destain solution. Colloidal Coomassie may require several days in the stain for optimum sensitivity and uniformity of staining.

Silver stain must be timed carefully for best results. There are many silver staining protocols; most can be completed in 1.5 to 4 hours. Both copper and zinc staining require only 5 to 10 minutes. The fluorescent stains have various time requirements, usually from a few minutes to an hour at most. It is recommended that the protocols for fluorescent staining be followed carefully for best results.

Figure: Gel electrophoresis. Proteins of known molecular weight are used to establish a standard curve that allows the MW of sample proteins to be determined.

Will the Presence of Stain on Western-Blotted Proteins Interfere with Subsequent Hybridization or Antibody Detection Reactions? Proteins can be detected on a blot after staining the blot with a general protein stain such as Coomassie or colloidal gold, but the interference with subsequent immunodetection will be high. The interference can be 50% or more, although this may not matter if the protein of interest is abundant. Proteins that have been stained in the gel will not transfer out of the gel properly, and it is unlikely that an immunodetection procedure will be successful. If you want to both stain and blot the protein of interest, it is usual to run duplicate gels, or to run duplicate lanes on the same gel and cut the gel in half.
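The standard-curve procedure described in the figure caption can be sketched numerically. The marker mobilities (Rf) and molecular weights below are illustrative, not from any particular gel; the fit assumes the usual near-linear relationship between log(MW) and relative mobility:

```python
import math

# Hypothetical calibration data: relative mobility (Rf) of marker
# proteins of known molecular weight (kDa). Values are illustrative.
markers = [(0.20, 116.0), (0.35, 66.2), (0.50, 45.0), (0.65, 31.0), (0.80, 21.5)]

def fit_standard_curve(markers):
    """Least-squares fit of log10(MW) against Rf: log10(MW) = a*Rf + b."""
    xs = [rf for rf, _ in markers]
    ys = [math.log10(mw) for _, mw in markers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def estimate_mw(rf, a, b):
    """Interpolate the MW (kDa) of an unknown band from its Rf."""
    return 10 ** (a * rf + b)

a, b = fit_standard_curve(markers)
print(round(estimate_mw(0.50, a, b), 1))  # kDa, near the 45 kDa marker
```

An unknown band is then read off the fitted line from its measured mobility, just as one would do graphically on semi-log paper.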

This method can separate proteins which differ by as little as a single charge, corresponding to a pI difference of about 0.001. Isoelectric focusing (IEF) can be performed with native proteins, or with proteins whose secondary and tertiary structure has been destroyed with (nonionic) chaotropes such as urea. If, after separation, the IEF gel is mounted across an SDS-PAGE gel, the proteins that have already been separated by pI can be separated again by molecular weight.

Does Ethidium Bromide Interfere with the Common Enzymatic Manipulation of Nucleic Acids? Ethidium bromide does not usually interfere with the activities of most common DNA-modifying enzymes; however, it has been shown to interfere with restriction endonucleases.

Isoelectric focusing electrophoresis and 2D-gel

If the gel contains several thousand different buffer molecules with slightly different pKa-values, these buffers can be sorted by an electrical field according to pKa. This results in a continuous pH-gradient across the gel. If proteins are added to the gel, they move in the pH-gradient until they reach their pI, where they are uncharged and no longer move. As a result, proteins are sorted by pI.
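The behavior described here, a protein migrating until its net charge reaches zero, can be illustrated with a rough Henderson-Hasselbalch model. The pKa values below are illustrative (an aspartate-like pattern), and the model ignores interactions between ionizable groups:

```python
# Net charge of a simple model molecule as a function of pH, using the
# Henderson-Hasselbalch equation. pKa values are illustrative only.
def net_charge(ph, acidic_pkas, basic_pkas):
    # Acidic groups (carboxyls) contribute 0 to -1 each;
    # basic groups (amino groups) contribute +1 to 0 each.
    neg = sum(-1.0 / (1.0 + 10 ** (pka - ph)) for pka in acidic_pkas)
    pos = sum(1.0 / (1.0 + 10 ** (ph - pka)) for pka in basic_pkas)
    return pos + neg

def isoelectric_point(acidic_pkas, basic_pkas):
    """Bisect for the pH at which the net charge is zero (the pI)."""
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if net_charge(mid, acidic_pkas, basic_pkas) > 0:
            lo = mid  # still positively charged: pI lies at higher pH
        else:
            hi = mid
    return (lo + hi) / 2

# Example: two carboxyl groups (pKa 2.1 and 3.9) and one amino group
# (pKa 9.0), roughly an aspartate-like pattern; the pI falls midway
# between the two acidic pKa values, near (2.1 + 3.9)/2 = 3.0.
print(round(isoelectric_point([3.9, 2.1], [9.0]), 2))
```

Below this pH the molecule is net positive and migrates toward the cathode; above it, net negative and toward the anode; at the pI it stops, which is exactly the focusing effect described above.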

Figure: Isoelectric focusing. A pH gradient is established across the gel by electrophoresis at relatively low voltage (≈ 500 V). The sample is applied to this gradient and run at high voltage until all proteins have reached their equilibrium positions.

This “two-dimensional” electrophoresis can separate about a thousand different proteins from a cell lysate. Comparing the protein expression profiles of cells from different developmental stages, or of healthy and diseased cells, can turn up proteins involved in developmental regulation or in disease processes. With luck, such a protein may be a useful drug target. The field of proteomics (the proteome of a cell is the collection of all proteins it contains, just as the genome is the collection of all its genes) has therefore attracted considerable attention in the pharmaceutical industry.

Proteomics

The method of choice for fractionating the large number of proteins contained in a natural extract is two-dimensional gel electrophoresis (figure below). This technique separates proteins in a plane, first in one direction as a function of their isoelectric point, then in the orthogonal direction according to their molecular weight. After staining, the result is a two-dimensional image consisting of a large number of spots corresponding to the constituent proteins. With certain stains, the intensity of spot coloring is approximately proportional to the quantity of protein present.

Figure: Two-dimensional electrophoresis. A protein mixture is first separated on an IEF gel, which is then mounted across a Laemmli gel. The second electrophoresis separates the bands of proteins of identical pI by molecular weight. About 1000 different proteins can be distinguished on a 2-D gel of mammalian cells.

However, spot resolution may not be sufficient to separate all the proteins, so results obtained with two-dimensional electrophoresis gels are subject to problems of reproducibility and artifacts. Recently, these problems have been partially resolved by using highly standardized protocols and high-precision techniques; today it is possible to separate 2000 spots on a single gel. Proteins with extreme isoelectric points are under-represented (supplementary gels with an extreme pH range can partly remedy this), as are particularly hydrophobic proteins, which are not solubilized by the weak detergent used in the first separation.

Figure: Two-dimensional protein electrophoresis gel. Proteins contained in a natural extract are fractionated in a polyacrylamide gel subjected to an electric field. They are separated first according to their isoelectric point, in a stabilized pH gradient and a weak detergent (horizontal axis), then by their molecular weight, in the presence of a strong ionic detergent (vertical axis). Finally, they are stained in the gel to render them visible.

It is impossible to know at the outset which proteins are present in a spot. Identification may be achieved in part by referring to a database containing two-dimensional gel electrophoresis results. Such databases exist, notably for bacteria, yeast, fruit fly, mouse, rat, and human proteins. When such information is not available in a database, and if the sequences of the proteins are known, their positions may sometimes be estimated by calculating their isoelectric points and molecular weights, provided they have not undergone post-translational modification. Otherwise, the spots must be excised and the proteins they contain eluted from them. If the sequence of a protein is known, it may be identified after mild hydrolysis, either by microsequencing or by mass spectrometry of the polypeptides obtained. In rare instances, an eluted protein can be renatured and its biological activity tested.

Intracellular localization

It is more difficult to evaluate the cellular localization of proteins on a large scale; however, some attempts have been made to do so for yeast proteins. All coding sequences have been fused to a fluorescent reporter protein, such as green fluorescent protein (GFP), each strain containing only one such fusion sequence. The strain collection includes at least one representative of each coding sequence fused to GFP. The fluorescence is then located in the cells of each strain using a light microscope equipped with a fluorescence device. This method is limited by the low resolution of the light microscope (0.3 µm) compared with the size of the cell, as well as by localization artifacts sometimes caused by the fused reporter protein or by over-expression.

Protein–protein interactions

The vast majority of proteins interact with other proteins, either in a stable manner, in which case they are known as a complex of polypeptide subunits, or in a more or less transitory manner.
All degrees of stability are possible, and may be expressed in terms of the half-dissociation time or of the dissociation constant, to which the half-dissociation time is inversely proportional. The variation in free energy is proportional to the logarithm of the association/dissociation equilibrium constant. Like the tertiary structure of individual polypeptides, association between polypeptides involves weak chemical bonds, or covalent disulfide bridges between two cysteine residues. These interactions may be demonstrated in various ways.
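The stated relationship between free energy and the equilibrium constant is, for dissociation, ΔG° = RT·ln(Kd). A minimal numerical sketch (the example Kd is illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def delta_g_dissociation(kd, temp_k=298.15):
    """Standard free energy of binding from a dissociation constant Kd
    (mol/L): dG = R*T*ln(Kd). Tighter binding (smaller Kd) gives a
    more negative dG."""
    return R * temp_k * math.log(kd)

# Illustrative: a nanomolar interaction (Kd = 1e-9 M) at 25 degrees C
dg = delta_g_dissociation(1e-9)
print(round(dg / 1000, 1))  # about -51.4 kJ/mol
```

Each tenfold tightening of Kd changes ΔG° by RT·ln(10), about 5.7 kJ/mol at room temperature, which is why even modest free-energy differences correspond to large differences in complex stability.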

CSIR Favorites

1. Explain the Beer-Lambert law.
2. Define spectroscopy. What are the different types of spectroscopy?
3. Name the techniques that can be used for separating molecules of different sizes.
4. What is the driving force in electrophoresis?
5. Addition of salts to an aqueous solution of a protein causes the protein to precipitate. How?
6. What is the basis of gel permeation chromatography?
7. What is the use of high salt concentration in hydrophobic interaction chromatography?
8. Distinguish between adsorption chromatography and partition chromatography.
9. A sample of a protein mixture at pH 7.5 is given, along with CM cellulose and DEAE cellulose. You are asked to separate a protein with isoelectric point 5.4 from the mixture by ion exchange chromatography. Which ion exchanger will you select, and why?
10. Describe the principle and use of X-ray crystallography.
11. What is meant by absorption spectroscopy? What are its applications in biochemistry?
12. Explain the principle of mass spectrometry and its applications in protein science.
13. Define the following: (a) Zonal centrifugation (b) Amphoteric molecules (c) Extrinsic fluorescence (d) Isoelectric pH (e) Reverse phase chromatography (f) Isoelectric focusing (g) Osmotic pressure (h) Isopycnic centrifugation (i) Ampholytes
14. Explain how osmotic pressure can be used for the determination of the molecular weight of a solute.
15. Differentiate between a colorimeter and a spectrophotometer.
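As a companion to question 1, a minimal numerical sketch of the Beer-Lambert law, A = ε·c·l. The NADH extinction coefficient is the standard literature value; the absorbance reading is invented for illustration:

```python
def beer_lambert_concentration(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l).
    epsilon in L/(mol*cm), path length in cm; returns mol/L."""
    return absorbance / (epsilon * path_cm)

# Illustrative: NADH at 340 nm has epsilon ~ 6220 L/(mol*cm).
# A (hypothetical) absorbance of 0.311 in a 1 cm cuvette corresponds to:
c = beer_lambert_concentration(0.311, 6220)
print(c)  # mol/L (50 micromolar)
```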

Light Microscopy

The light microscope, so called because it employs visible light to detect small objects, is probably the best-known and most widely used research tool in biology. A beginner tends to think that the challenge of viewing small objects lies in getting enough magnification. In fact, when it comes to looking at living things, the biggest challenges are, in order:

1. obtaining sufficient contrast
2. finding the focal plane
3. obtaining good resolution
4. recognizing the subject when one sees it

The smallest objects that are considered to be living are the bacteria. The smallest bacteria can be observed, and cell shape recognized, at a mere 100x magnification, but unstained they are nearly invisible in a bright field microscope. These pages describe the types of optics used to obtain contrast, suggestions for finding specimens and focusing on them, and advice on using measurement devices with a light microscope.

Types of light microscopes

The bright field microscope is best known to students. Better-equipped labs may have dark field and/or phase contrast optics. Differential interference contrast (Nomarski), Hoffman modulation contrast, and their variations produce considerable depth of resolution and a three-dimensional effect. Fluorescence and confocal microscopes are specialized instruments used for research, clinical, and industrial applications. Other than the compound microscope, a simpler instrument for low-magnification use may also be found in the laboratory. The stereo microscope, or dissecting microscope, usually has a binocular eyepiece tube, a long working distance, and a range of magnifications typically from 5x to 35-40x. Some instruments supply lenses for higher magnifications, but there is no improvement in resolution. Such "false magnification" is rarely worth the expense.

Theory of microscopy

The resolving power of a microscope can be defined as the ability to see two neighboring points in the visual field as distinct entities.

Bright Field Microscopy

With a conventional bright field microscope, light from an incandescent source is aimed toward a lens beneath the stage called the condenser, through the specimen, through an objective lens, and to the eye through a second magnifying lens, the ocular or eyepiece. We see objects in the light path because natural pigmentation or stains absorb light differentially, or because the objects are thick enough to absorb a significant amount of light despite being colorless. A Paramecium shows up fairly well in a bright field microscope, although it will not be easy to see cilia or most organelles. Living bacteria will hardly show up at all unless the viewer hits the focal plane by luck and distorts the image by using maximum contrast.

A good-quality microscope has a built-in illuminator, an adjustable condenser with an aperture diaphragm (contrast) control, a mechanical stage, and a binocular eyepiece tube. The condenser is used to focus light on the specimen through an opening in the stage. After passing through the specimen, the light is displayed to the eye with an apparent field that is much larger than the area illuminated. The magnification of the image is simply the objective lens magnification (usually stamped on the lens body) times the ocular magnification. The bright field condenser usually contains an aperture diaphragm, a device that controls the diameter of the light beam coming up through the condenser: when the diaphragm is stopped down (nearly closed) the light comes straight up through the center of the condenser lens and contrast is high; when it is wide open the image is brighter and contrast is low.

Bright field microscopy is best suited to viewing stained or naturally pigmented specimens, such as stained prepared slides of tissue sections or living photosynthetic organisms. It is useless for living specimens of bacteria, and inferior for non-photosynthetic protists or metazoans, and for unstained cell suspensions or tissue sections.

Dark Field Microscopy

Dark field microscopy is another method of creating contrast between the object and the surrounding field. As the name implies, the background is dark and the object is bright. An annular stop is used, but the stop is now outside the field of view: only light from the outside of the beam passes the stop, and the direct beam cannot be seen. The object becomes visible only when it deflects and deviates light coming from around the stop. This method also produces a great deal of glare, so the specimen often appears as a bright silhouette rather than as a bright object in which much detail can be discerned.

Principle

To view a specimen in dark field, an opaque disc is placed underneath the condenser lens, so that only light that is scattered by objects on the slide can reach the eye (see figure below). Instead of coming up through the specimen, the light is reflected by particles on the slide. Everything is visible regardless of color, usually bright white against a dark background. Pigmented objects are often seen in "false colors"; that is, the reflected light is of a color different from the color of the object. Better resolution can be obtained using dark field as opposed to bright field viewing. Dark field illumination is most readily set up at low magnifications (up to 100x), although it can be used with any dry objective lens. Any time you wish to view everything in a liquid sample, debris and all, dark field is best; even tiny dust particles are obvious. Dark field is especially useful for finding cells in suspension, and it makes it easy to obtain the correct focal plane at low magnification for small, low-contrast specimens.

Uses

1. Initial examination of suspensions of cells such as yeast, bacteria, small protists, or cell and tissue fractions, including cheek epithelial cells, chloroplasts, mitochondria, and even blood cells (the small diameter of pigmented cells sometimes makes it tricky to find them despite the color).
2. Initial survey and observation at low powers of pond water samples, hay or soil infusions, and purchased protist or metazoan cultures.
3. Examination of lightly stained prepared slides.

Bright field

Dark field

Phase Contrast Microscopy

Most of the detail of living cells is undetectable in bright field microscopy because there is too little contrast between structures of similar transparency and insufficient natural pigmentation. However, the various organelles show wide variation in refractive index (the tendency of a material to bend light), providing an opportunity to distinguish them.

Principle

Highly refractive structures bend light to a much greater angle than structures of low refractive index. The same properties that cause the light to bend also delay its passage by roughly a quarter of a wavelength. In a light microscope in bright field mode, light from highly refractive structures bends farther away from the center of the lens than light from less refractive structures, and arrives about a quarter of a wavelength out of phase. Light from most objects passes through the center of the lens as well as to the periphery. If the light from an object reaching the edges of the objective lens is retarded by a further quarter wavelength while the light through the center is not retarded at all, the two sets of rays are a half wavelength out of phase and cancel each other when the objective lens brings the image into focus. A reduction in the brightness of the object is observed, and the degree of reduction depends on the refractive index of the object.

Figure: Simplified optical pathway for phase contrast microscopy. 1 = light source; 2 = annular light mask (in the condenser); 3 = condenser; 4 = specimen; 5 = background light; 6 = light bent by the specimen; 7 = phase ring; 8 = eyepiece with intermediate image; 9 = eye.

With this arrangement the specimen is concentrically illuminated by the apex of a cone of light. The light beams which are diffracted by the specimen pass the objective lens at various angles that depend on the relative refractive index and the thickness of the specimen. The other light components, corresponding to the background, pass through the phase ring in the objective, which produces an additional phase difference. Thus the phase differences between the specimen, its details, and the background are amplified in the final image, so that minimal differences in refractive index are visible even in colorless specimens of low contrast and thickness. Depending on the configuration and properties of the phase ring in the objective, phase contrast can be positive or negative. In positive phase contrast the specimen is visible with medium or dark grey features, surrounded by a bright halo; the background is of higher intensity than the specimen. In negative phase contrast the background is darker and the specimen appears brighter, surrounded by a dark halo. The bright and dark halos are artifacts and are one of the major disadvantages of phase contrast; they are especially prevalent in specimens inducing large phase shifts.

Applications

Phase contrast is preferable to bright field microscopy when high magnifications (400x, 1000x) are needed and the specimen is colorless, or the details so fine that color does not show up well. Cilia and flagella, for example, are nearly invisible in bright field but show up in sharp contrast in phase contrast. Amoebae look like vague outlines in bright field, but show a great deal of detail in phase. Most living microscopic organisms are much more obvious in phase contrast.
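The quarter-wavelength delay that phase contrast exploits follows from the optical path difference light accumulates in the specimen. A small sketch, with illustrative refractive indices and thickness (typical textbook figures, not measurements):

```python
import math

def phase_shift_radians(n_specimen, n_medium, thickness_nm, wavelength_nm):
    """Phase delay introduced by a transparent specimen:
    optical path difference OPD = (n_s - n_m) * t, phase = 2*pi*OPD/lambda."""
    opd = (n_specimen - n_medium) * thickness_nm
    return 2 * math.pi * opd / wavelength_nm

# Illustrative: cytoplasm (n ~ 1.36) in water (n = 1.33), 5 um thick,
# green light (550 nm).
phi = phase_shift_radians(1.36, 1.33, 5000, 550)
print(round(phi / (2 * math.pi), 2))  # fraction of a wavelength delayed
```

The result, a bit over a quarter wavelength for these values, is exactly the scale of delay the phase ring is designed to convert into an intensity difference.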

Differential Interference Contrast (DIC) Microscopy

Principle

Differential interference microscopy requires several optical components and can therefore be very expensive to set up. Light from an incandescent source is passed through a polarizer, so that all of the light getting through must vibrate in a single plane. The beam is then passed through a prism that separates it into components separated by a very small distance, equal to the resolution of the objective lens. The beams pass through the condenser, then the specimen. In any part of the specimen in which adjacent regions differ in refractive index, the two beams are delayed or refracted differently. When they are recombined by a second prism in the objective lens, there are differences in brightness corresponding to differences in refractive index or thickness in the specimen. Regions such as the edge of a cell or nucleus are very distinct because the character of the specimen changes so much over a very short distance. One or more components of the system are adjustable to obtain maximum contrast. When the contrast is optimized, one obtains a very distinct image that appears three-dimensional. The effect is very much like what you see when a subject is shadowed by a strong light coming from one side, as with craters on the moon near the terminator (the boundary between the sunlit portion of the Moon's surface and the dark side).

Interference Microscopy

The ideas behind high-resolution interferometry were introduced by Tychinski. By combining an optical microscope with an interferometer, it is possible to obtain high-resolution maps of the phase of the optical field at high magnification. As observed by Tychinski, the structure of the phase distribution inside the diffraction-limited spot can be seen with such equipment. Fast phase variations are produced by phase dislocations in the optical field and are readily observable using an interference microscope. In contrast to the amplitude, the phase is not governed by the classical resolution limit of the optical imaging system. Unfortunately, there is no trivial relation between a structure and the optical phase field it produces, which makes it difficult to extract surface-profile information without a priori knowledge.

This microscope is used with live and unstained biological samples, such as a smear from a tissue culture or individual water-borne single-celled organisms. Its resolution and clarity under such conditions are unrivaled among standard optical microscopy techniques. The main limitation is the requirement for a transparent sample of fairly similar refractive index to its surroundings; the method is unsuitable for thick samples, such as tissue slices, and for highly pigmented cells.

Fluorescence Microscopy

A fluorescence microscope is a light microscope used to study the properties of organic or inorganic substances using the phenomena of fluorescence and phosphorescence instead of, or in addition to, reflection and absorption. In fluorescence microscopy, the excitatory light is passed from above (or, for inverted microscopes, from below) through the objective and onto the specimen, instead of being passed first through the specimen. (In the latter case the transmitted excitatory light would reach the objective together with the light emitted from the specimen.) The fluorescence in the specimen gives rise to emitted light, which is focused to the detector by the same objective that is used for the excitation. A filter between the objective and the detector removes the excitation light from the fluorescent light. Since most of the excitatory light is transmitted through the specimen, only reflected excitatory light reaches the objective together with the emitted light, so this method gives an improved signal-to-noise ratio. A common use in biology is to apply fluorescent (fluorochrome) stains to the specimen in order to image a protein or other molecule of interest.

The fluorochromes emit light of a given wavelength when excited by incident light of a different (shorter) wavelength. To view this fluorescence in the microscope, several light-filtering components are needed. Specific filters are used to isolate the excitation and emission wavelengths of a fluorochrome. A dichroic beam splitter (partial mirror) reflects shorter wavelengths of light and allows longer wavelengths to pass. A dichroic beam splitter is required because the objective acts as both a condenser lens (for the excitation light) and an objective lens (for the emission light); the beam splitter therefore isolates the emitted light from the excitation wavelength. This epi-illumination light path is required to create a dark background so that the fluorescence can be easily seen.
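The reason excitation and emission can be separated by filters is that emitted photons carry less energy than the exciting ones (the Stokes shift). A sketch with a GFP-like filter set; the specific wavelengths are illustrative assumptions:

```python
# Energy of a photon at a given wavelength, showing why fluorescence is
# emitted at a longer wavelength than the excitation light (Stokes shift).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, expressed in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Illustrative GFP-like filter set: excite at 488 nm, collect emission
# around 510 nm, with the dichroic mirror splitting near 500 nm.
excite, emit = photon_energy_ev(488.0), photon_energy_ev(510.0)
print(round(excite - emit, 3))  # energy lost between absorption and emission, eV
```

Because the two wavelengths never coincide, a filter cube (excitation filter, dichroic mirror, emission filter) can pass one band and block the other almost completely.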

Biologists usually need to enhance and differentiate details within living or fixed cells. To do this, special dyes are applied. To increase the sensitivity of observation, dyes (fluorochromes) that fluoresce under ultraviolet or visible radiation are used to stain and delineate the structures of interest. Sometimes the inherent tendency of some cellular components to fluoresce without additional staining may be usefully employed; this autofluorescence can be seen in cellular components ranging from plant chlorophylls to proteins in eye tissue. Unlike traditional biological staining (e.g., methylene blue and eosin for tissue differentiation), the remarkable efficiency and detectability of fluorochromes fluorescing under UV or visible light means that much smaller concentrations of these (generally toxic) dyes can be used. This translates into fewer side effects on living cells, with minimal disruption of their normal physiology. The biochemical workings of cells, such as calcium ion transport, can also be recorded in time-lapse mode using fluorescence microscopy. Fluorescent dyes are commonly used for the detection of live cells and key functional activities in a variety of cell-based assays. Fluorescent dyes such as calcein acetoxymethyl ester can be used to label cells when performing analyses such as tumor cell invasion, endothelial cell migration, endothelial cell tubulogenesis, and other cell-based assays.

Fluorescence Resonance Energy Transfer

Fluorescence resonance energy transfer (FRET) allows the study of molecular interactions in the lower nanometer range. FRET offers a variety of donor/acceptor pairs. This mechanism has become the basis of a useful technique for studying molecular interactions and associations at distances far below the lateral resolution of the optical microscope.

What is the difference between resolution, resolving power, and limit of resolution of a microscope?
Resolution is the ability to tell two points apart as separate points. If the resolving power of a lens is 2 µm, two points that are 2 µm apart can be seen as separate points; if they are closer together than that, they blend into one. Magnification is something different: the ability to make an object larger. If the resolving power of a microscope is poor, it will merely magnify a blurry object. Resolution, resolving power, and limit of resolution all mean the same thing: they refer to the ability of a microscope to distinguish between two objects. If a microscope has a resolving power of 0.3 µm, it can distinguish two points that are at least 0.3 µm (300 nm) apart.
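Resolving-power figures like these come from the diffraction limit; by the Rayleigh criterion, d = 0.61·λ/NA. A short sketch (the wavelength and numerical aperture are typical values, not prescriptive):

```python
def resolution_limit_nm(wavelength_nm, numerical_aperture):
    """Rayleigh criterion for a light microscope: d = 0.61 * lambda / NA,
    the smallest separation at which two points are resolved."""
    return 0.61 * wavelength_nm / numerical_aperture

# Illustrative: green light (550 nm) with a good oil-immersion objective
# (NA = 1.4) gives a limit in the familiar 0.2-0.3 um range.
d = resolution_limit_nm(550, 1.4)
print(round(d))  # nm
```

Shorter wavelengths or higher numerical apertures improve (reduce) d, which is why no amount of extra eyepiece magnification helps: it enlarges the image without shrinking d.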

Electron Microscopy

The first electron microscope prototype was built in 1931 by the German engineers Ernst Ruska and Max Knoll. An electron microscope is a type of microscope that uses a beam of electrons to illuminate a specimen and create a highly magnified image. Electron microscopes have much greater resolving power than light microscopes, which use electromagnetic radiation, and can reach magnifications of up to 2 million times, while the best light microscopes are limited to magnifications of about 2000 times. Both electron and light microscopes have resolution limits imposed by the wavelength of the radiation they use. The greater resolution and magnification of the electron microscope arise because the de Broglie wavelength of an electron is much smaller than that of a photon of visible light. The electron microscope uses electrostatic and electromagnetic lenses to form the image, controlling the electron beam to focus it at a specific plane relative to the specimen, much as a light microscope uses glass lenses to focus light on or through a specimen to form an image.
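The wavelength comparison can be made concrete. The de Broglie wavelength of an electron accelerated through a potential V, with the standard relativistic correction, is λ = h/√(2m·eV·(1 + eV/2mc²)). A sketch using textbook constants:

```python
import math

H = 6.626e-34   # Planck constant, J*s
M = 9.109e-31   # electron rest mass, kg
E = 1.602e-19   # elementary charge, C
C = 2.998e8     # speed of light, m/s

def electron_wavelength_pm(volts):
    """de Broglie wavelength (picometers) of an electron accelerated
    through `volts`, with the relativistic correction (significant
    above ~50 kV)."""
    p = math.sqrt(2 * M * E * volts * (1 + E * volts / (2 * M * C ** 2)))
    return H / p * 1e12

# At 100 kV, a typical TEM accelerating voltage, the wavelength is a few
# picometers, five orders of magnitude below green light (~550,000 pm).
print(round(electron_wavelength_pm(100e3), 2))
```

This is why electron microscopes resolve atomic-scale detail: the limiting wavelength is far smaller than any atom, and resolution is instead limited by lens aberrations.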

Transmission Electron Microscope (TEM)

The original form of electron microscope, the transmission electron microscope (TEM), uses a high-voltage electron beam to create an image. The electrons are emitted by an electron gun, commonly fitted with a tungsten filament cathode as the electron source. The electron beam is accelerated by an anode, typically at +100 keV (40 to 400 keV) with respect to the cathode, focused by electrostatic and electromagnetic lenses, and transmitted through the specimen, which is in part transparent to electrons and in part scatters them out of the beam. When it emerges from the specimen, the electron beam carries information about the structure of the specimen, which is magnified by the objective lens system of the microscope. The spatial variation in this information (the "image") is viewed by projecting the magnified electron image onto a fluorescent viewing screen coated with a phosphor or scintillator material such as zinc sulfide. The image can be recorded photographically by exposing a film or plate directly to the electron beam, or a high-resolution phosphor may be coupled, by means of a lens optical system or a fibre-optic light guide, to the sensor of a CCD (charge-coupled device) camera. The image detected by the CCD may be displayed on a monitor or computer.

Resolution of the TEM is limited primarily by spherical aberration, but a new generation of aberration correctors has been able to partially overcome spherical aberration to increase resolution. Software correction of spherical aberration in the high-resolution TEM (HRTEM) has allowed the production of images with sufficient resolution to show carbon atoms in diamond separated by only 0.89 ångström (89 picometers), and atoms in silicon at 0.78 ångström (78 picometers), at magnifications of 50 million times. The ability to determine the positions of atoms within materials has made the HRTEM an important tool for nanotechnology research and development.

1. The "Virtual Source" at the top represents the electron gun, producing a stream of monochromatic electrons.
2. This stream is focused into a small, thin, coherent beam by condenser lenses 1 and 2. The first lens (usually controlled by the "spot size knob") largely determines the "spot size", the general size range of the final spot that strikes the sample. The second lens (usually controlled by the "intensity" or "brightness" knob) actually changes the size of the spot on the sample, from a wide dispersed spot to a pinpoint beam.
3. The beam is restricted by the condenser aperture (usually user-selectable), knocking out high-angle electrons (those far from the optic axis, the dotted line down the center).
4. The beam strikes the specimen and parts of it are transmitted.
5. This transmitted portion is focused by the objective lens into an image.

6. Optional objective and selected-area metal apertures can restrict the beam: the objective aperture enhances contrast by blocking out high-angle diffracted electrons, while the selected-area aperture enables the user to examine the periodic diffraction of electrons by ordered arrangements of atoms in the sample.
7. The image is passed down the column through the intermediate and projector lenses, being enlarged all the way.
8. The image strikes the phosphor image screen and light is generated, allowing the user to see the image. The darker areas of the image represent areas of the sample through which fewer electrons were transmitted (they are thicker or denser); the lighter areas represent areas through which more electrons were transmitted (they are thinner or less dense).

The display of the SEM maps the varying intensity of any of these signals into the image in a position corresponding to the position of the beam on the specimen when the signal was generated. In the SEM image of an ant shown at right, the image was constructed from signals produced by a secondary electron detector, the normal or conventional imaging mode in most SEMs. Generally, the image resolution of an SEM is about an order of magnitude poorer than that of a TEM. However, because the SEM image relies on surface processes rather than transmission it is able to image bulk samples up to several centimetres in size (depending on instrument design) and has a much greater depth of view, and so can produce images that are a good representation of the 3D structure of the sample.

1. The "Virtual Source" at the top represents the electron gun, producing a stream of monochromatic electrons. 2. The stream is condensed by the first condenser lens (usually controlled by the "coarse probe current knob"). This lens is used to both form the beam and limit the amount of current in the beam. It works in conjunction with the condenser aperture to eliminate the high-angle electrons from the beam

Scanning Electron Microscope (SEM) Unlike the TEM, where electrons of the high voltage beam carry the image of the specimen, the electron beam of the Scanning Electron Microscope (SEM) does not at any time carry a complete image of the specimen. The SEM produces images by probing the specimen with a focused electron beam that is scanned across a rectangular area of the specimen (raster scanning). At each point on the specimen the incident electron beam loses some energy, and that lost energy is converted into other forms, such as heat, emission of low-energy secondary electrons, light emission (cathodoluminescence) or x-ray emission.

3. The beam is then constricted by the condenser aperture (usually not user-selectable), eliminating some high-angle electrons.
4. The second condenser lens forms the electrons into a thin, tight, coherent beam and is usually controlled by the "fine probe current knob".
5. A user-selectable objective aperture further eliminates high-angle electrons from the beam.
6. A set of coils then "scan" or "sweep" the beam in a grid fashion (like a television), dwelling on each point for a period of time determined by the scan speed (usually in the microsecond range).

7. The final lens, the objective, focuses the scanning beam onto the desired part of the specimen.
8. When the beam strikes the sample (and dwells for a few microseconds), interactions occur inside the sample and are detected with various instruments.
9. Before the beam moves to its next dwell point, these instruments count the number of interactions and display a pixel on a CRT whose intensity is determined by this number (the more interactions, the brighter the pixel).
10. This process is repeated until the grid scan is finished; the entire pattern can be scanned up to 30 times per second.

Reflection Electron Microscope (REM) In the Reflection Electron Microscope (REM), as in the TEM, an electron beam is incident on a surface, but instead of using the transmitted electrons (TEM) or secondary electrons (SEM), the reflected beam of elastically scattered electrons is detected. This technique is typically coupled with Reflection High-Energy Electron Diffraction (RHEED) and Reflection High-Energy Loss Spectroscopy (RHELS). Another variation is Spin-Polarized Low-Energy Electron Microscopy (SPLEEM), which is used for looking at the microstructure of magnetic domains.

Scanning Transmission Electron Microscope (STEM)

The STEM rasters a focused incident probe across a specimen that (as with the TEM) has been thinned to facilitate detection of electrons scattered through the specimen. The high resolution of the TEM is thus possible in STEM. The focusing action (and aberrations) occur before the electrons hit the specimen in the STEM, but afterward in the TEM.

The STEM's use of SEM-like beam rastering simplifies annular dark-field imaging and other analytical techniques, but also means that image data are acquired serially rather than in parallel.

Sample preparation
Materials to be viewed under an electron microscope may require processing to produce a suitable sample. The technique required varies depending on the specimen and the analysis required:
Chemical fixation – for biological specimens, aims to stabilize the specimen's mobile macromolecular structure by chemical cross-linking of proteins with aldehydes such as formaldehyde and glutaraldehyde, and of lipids with osmium tetroxide. Samples are preserved by fixation first with glutaraldehyde (which covalently cross-links proteins) and then with osmium tetroxide (which binds to and stabilizes lipid bilayers and proteins). The tissue is dehydrated, permeated with a polymerizing resin, and sectioned into ultra-thin sections 50–100 nm thick (about 1/200 the thickness of a cell). The sections are stained with electron-dense material (e.g. uranyl acetate) to achieve contrast: tissue is composed of atoms of low atomic number (e.g. carbon, oxygen, nitrogen, hydrogen), and to make these visible it is impregnated with salts of heavy metals.
Cryofixation – freezing a specimen so rapidly, to liquid nitrogen or even liquid helium temperatures, that the water forms vitreous (non-crystalline) ice. This preserves the specimen in a snapshot of its solution state. An entire field called cryo-electron microscopy has branched from this technique. With the development of cryo-electron microscopy of vitreous sections (CEMOVIS), it is now possible to observe samples from virtually any biological specimen close to its native state.
Dehydration – freeze drying, or replacement of water with organic solvents such as ethanol or acetone, followed by critical point drying or infiltration with embedding resins.

Embedding, biological specimens – after dehydration, tissue for observation in the transmission electron microscope is embedded so it can be sectioned ready for viewing. To do this the tissue is passed through a 'transition solvent' such as epoxy propane and then infiltrated with a resin such as Araldite epoxy resin; tissues may also be embedded directly in a water-miscible acrylic resin. After the resin has been polymerised (hardened) the sample is thin-sectioned (ultrathin sections) and stained; it is then ready for visualization.
Embedding, materials – after embedding in resin, the specimen is usually ground and polished to a mirror-like finish using ultra-fine abrasives. The polishing process must be performed carefully to minimize scratches and other polishing artifacts that reduce image quality.
Sectioning – produces thin slices of the specimen, semitransparent to electrons. These can be cut on an ultramicrotome with a diamond knife to produce ultrathin slices about 60 to 90 nm thick.
Staining – uses heavy metals such as lead, uranium or tungsten to scatter imaging electrons and thus give contrast between different structures, since many (especially biological) materials are nearly "transparent" to electrons (weak phase objects). Typically thin sections are stained for several minutes with an aqueous or alcoholic solution of uranyl acetate followed by aqueous lead citrate.
Freeze-fracture or freeze-etch – a preparation method particularly useful for examining lipid membranes and their incorporated proteins in "face on" view. The fresh tissue or cell suspension is frozen rapidly (cryofixed), then fractured by simply breaking or by using a microtome while maintained at liquid nitrogen temperature. The cold fractured surface (sometimes "etched" by increasing the temperature to about –100 °C for several minutes to let some ice sublime) is then shadowed with evaporated platinum or gold at an average angle of 45° in a high-vacuum evaporator.
A second coat of carbon, evaporated perpendicular to the average surface plane, is often applied to improve the stability of the replica coating. The specimen is returned to room temperature and pressure, and the extremely fragile "pre-shadowed" metal replica of the fracture surface is then released from the underlying biological material by careful chemical digestion with acids, hypochlorite solution or SDS detergent. The still-floating replica is thoroughly washed free of residual chemicals, carefully fished up on fine grids, dried, then viewed in the TEM.
Ion beam milling – thins samples until they are transparent to electrons by firing ions (typically argon) at the surface from an angle and sputtering material from the surface.

A subclass of this is focused ion beam milling, where gallium ions are used to produce an electron-transparent membrane in a specific region of the sample, for example through a device within a microprocessor. Ion beam milling may also be used for cross-section polishing prior to SEM analysis of materials that are difficult to prepare using mechanical polishing.
Conductive coating – an ultrathin coating of electrically conducting material, deposited either by high-vacuum evaporation or by low-vacuum sputter coating of the sample. This is done to prevent the accumulation of static electric fields at the specimen due to the electron irradiation required during imaging. Such coatings include gold, gold/palladium, platinum, tungsten, graphite, etc., and are especially important for the study of specimens with the scanning electron microscope. Another reason for coating, even when there is more than enough conductivity, is to improve contrast, a situation more common in the operation of a FESEM (field emission SEM).

Applications
1) Diagnostic electron microscopy
2) Cryobiology
3) Protein localization
4) Electron tomography
5) Cellular tomography
6) Cryo-electron microscopy
7) Toxicology
8) Biological production and viral load monitoring
9) Particle analysis
10) Pharmaceutical QC
11) Structural biology
12) 3D tissue imaging
13) Virology
14) Vitrification
15) Forensics

A few questions
1. What is the significance of the N.A. number inscribed on the outer barrel of an objective?
2. What is meant by the resolving power of an objective, and how is that distinguished from resolution?
3. What is the relationship between numerical aperture and the brightness of an image? Between magnification and the brightness of an image?
4. Can phase contrast objectives be used for regular brightfield observation?
5. When a 40x objective is used, the image may appear worse than with a 20x objective. Why?

[Figure: The electromagnetic spectrum. Wavelength in meters (about 10⁻¹⁴ to 10⁸) is plotted against energy in electron volts (about 10¹⁰ down to 10⁻¹⁴). Ionizing radiation (cosmic rays, gamma rays, X rays) lies at the short-wavelength, high-energy end; nonionizing radiation (ultraviolet, visible, infrared near and far, radar, FM TV, short wave, broadcast, power transmission) extends toward longer wavelengths and lower energies.]

Radioactivity
Roentgen: discoverer of X rays, 1895. Becquerel: discoverer of radioactivity, 1896. The Curies: discoverers of radium and polonium, 1900-1908. Rutherford: discoverer of alpha and beta rays, 1897.
Radioactivity can be described mathematically without reference to the specific mode of decay of a sample of radioactive atoms. The rate of decay (the number of atoms decaying per unit time) is directly proportional to the number of radioactive atoms N present in the sample:
∆N/∆t = −λN
where ∆N/∆t is the rate of decay. The constant λ is the decay constant of the particular species of atoms in the sample, and the negative sign reveals that the number of radioactive atoms in the sample is diminishing as the sample decays. The decay constant can be expressed as λ = −(∆N/∆t)/N, revealing that it represents the fractional rate of decay of the atoms. The value of λ is characteristic of the type of atoms in the sample and changes from one nuclide to the next. The units of λ are (time)⁻¹. Larger values of λ characterize more unstable nuclides that decay more rapidly. The rate of decay of a sample of atoms is termed the activity A of the sample (i.e., A = λN, the magnitude of ∆N/∆t). A rate of decay of 1 atom per second is termed an activity of 1 becquerel (Bq). That is, 1 Bq = 1 disintegration per second (dps).
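The decay relations above can be checked numerically, using Python as a calculator. The atom count and half-life below are illustrative values only (the relation T1/2 = 0.693/λ is used, as in the summary later in this chapter):

```python
import math

def decay_constant(half_life):
    """lambda = ln(2) / T_half, in inverse units of half_life."""
    return math.log(2) / half_life

def activity(n_atoms, lam):
    """A = lambda * N, the number of decays per unit time."""
    return lam * n_atoms

# Illustrative numbers only: 1e12 atoms of a nuclide with an 8-day half-life.
lam = decay_constant(8 * 24 * 3600)   # s^-1
print(f"A = {activity(1e12, lam):.3e} Bq")
```

Note that the activity comes out directly in becquerels because λ was expressed per second.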


A common unit of activity is the megabecquerel (MBq), where 1 MBq = 10⁶ dps. An earlier unit of activity, the curie (Ci), is defined as 1 Ci = 3.7 × 10¹⁰ dps. Multiples of the curie are the picocurie (10⁻¹² Ci), nanocurie (10⁻⁹ Ci), microcurie (10⁻⁶ Ci), millicurie (10⁻³ Ci), kilocurie (10³ Ci), and megacurie (10⁶ Ci). The becquerel and the curie are related by 1 Bq = 1 dps = 2.7 × 10⁻¹¹ Ci. The activity of a radioactive sample per unit mass (e.g., MBq/mg) is known as the specific activity of the sample. The rutherford (Rf) was once proposed as a unit of activity, where 1 Rf = 10⁶ dps.
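These unit conversions can be encoded directly; the constant 3.7 × 10¹⁰ dps per curie is the definition given above, and the sample values are arbitrary:

```python
def ci_to_bq(curies):
    """1 Ci = 3.7e10 disintegrations per second (Bq), by definition."""
    return curies * 3.7e10

def bq_to_ci(becquerels):
    """Inverse conversion: 1 Bq is about 2.7e-11 Ci."""
    return becquerels / 3.7e10

print(ci_to_bq(1e-3))   # 1 mCi expressed in Bq (37 MBq)
print(bq_to_ci(1.0))    # 1 Bq expressed in Ci (~2.7e-11)
```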

WHY ATOMS BECOME RADIOACTIVE The nuclei of many atoms are stable. In general, it is these atoms that constitute ordinary matter. In stable nuclei of lighter atoms, the number of neutrons is about equal to the number of protons. A high level of symmetry exists in the placement of protons and neutrons into nuclear energy levels similar to the electron shells constituting the extranuclear structure of the atom. The assignment of nucleons to energy levels in the nucleus is referred to as the “shell model” of the nucleus. For heavier stable atoms, the number of neutrons increases faster than the number of protons, suggesting that the higher energy levels are spaced more closely for neutrons than for protons. The number of neutrons (i.e., the neutron number) in nuclei of stable atoms is plotted in Figure below as a function of the number of protons (i.e., the atomic number). Above Z = 83, no stable forms of the elements exist, and the plot depicts the neutron/proton (N/Z) ratio for the least unstable forms of the elements (i.e., isotopes that exist for relatively long periods before changing).

[Figure: Stability curve. Neutron number is plotted against atomic number; stable nuclei lie along the curve, and unstable nuclei lie away from it.]

Nuclei that have an imbalance in the N/Z ratio are positioned away from the stability curve. These unstable nuclei tend to undergo changes within the nucleus to achieve more stable configurations of neutrons and protons. The changes are accompanied by the emission of particles and electromagnetic radiation (photons) from the nucleus, together with the release of substantial amounts of energy related to an increase in binding energy of the nucleons in their final nuclear configuration. These changes are referred to as radioactive decay of the nucleus, and the process is described as radioactivity. If the number of protons is different between the initial and final nuclear configurations, Z is changed and the nucleus is transmuted from one elemental form to another. The various processes of radioactive decay are summarized in Table 1-1.

TYPES OF RADIOACTIVE DECAY The process of radioactive decay often is described by a decay scheme in which energy is depicted on the vertical (y) axis and atomic number is shown on the horizontal (x) axis. A generic decay scheme is illustrated in Figure. The original nuclide (or "parent") is depicted as ZXA (element X with atomic number Z and mass number A), and the product nuclide (or "progeny") is denoted as element P, Q, R, or S depending on the decay path. In the path from X to P, the nuclide gains stability by emitting an alpha (α) particle: two neutrons and two protons ejected from the nucleus as a single particle. In this case, the progeny nucleus has an atomic number of Z − 2 and a mass number of A − 4, and is positioned at a reduced elevation in the decay scheme to demonstrate that energy is released as the nucleus gains stability through radioactive decay. The released energy is referred to as the transition energy. In the path from X to Q, the nucleus gains stability through a process in which a proton in the nucleus changes to a neutron.

This process can be either positron decay or electron capture, and yields an atomic number of Z − 1 and an unchanged mass number A. The path from X to R represents negatron decay, in which a neutron is transformed into a proton, leaving the progeny with an atomic number of Z + 1 and an unchanged mass number A. In the path from R to S, the constant Z and constant A signify that no change occurs in nuclear composition. This pathway is termed an isomeric transition between nuclear isomers and results only in the release of energy from the nucleus through the processes of gamma emission and internal conversion.
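The four decay paths of the generic scheme can be summarized as (ΔZ, ΔA) pairs. A small sketch, using the 226Ra to 222Rn alpha transition discussed in this chapter as a check:

```python
# (delta_Z, delta_A) for each decay path in the generic scheme
DECAY_MODES = {
    "alpha":       (-2, -4),  # X -> P
    "positron/EC": (-1,  0),  # X -> Q
    "negatron":    (+1,  0),  # X -> R
    "isomeric":    ( 0,  0),  # R -> S
}

def progeny(z, a, mode):
    """Return the (Z, A) of the progeny nuclide for a given decay mode."""
    dz, da = DECAY_MODES[mode]
    return z + dz, a + da

# 226Ra (Z = 88) alpha-decays to 222Rn (Z = 86):
print(progeny(88, 226, "alpha"))  # prints: (86, 222)
```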

Alpha Decay: Alpha decay is a decay process in which greater nuclear stability is achieved by emission of 2 protons and 2 neutrons as a single alpha (α) particle (a nucleus of helium) from the nucleus. Alpha emission is confined to relatively heavy nuclei:
ZXA → Z−2YA−4 + 2He4
where 2He4 represents the alpha particle. The sum of the mass numbers and the sum of the atomic numbers after the transition equal the mass and atomic numbers of the parent before the transition. In α decay, energy is released as kinetic energy of the α particle, and is sometimes followed by energy released during an isomeric transition, resulting in emission of a γ ray or conversion electron. Alpha particles are always ejected with an energy characteristic of the particular nuclear transition.

An alpha transition is depicted in the margin, in which the parent 226Ra decays directly to the final energy state (ground state) of the progeny 222Rn in 94% of all transitions. In 6% of the transitions, 226Ra decays to an intermediate higher energy state of 222Rn, which then decays to the ground state by isomeric transition. For each of the transition pathways, the transition energy between parent and ground state of the progeny is constant. In the example of 226Ra, the transition energy is 4.78 MeV.

Beta Decay Nuclei with an N/Z ratio that is above the line of stability tend to decay by a form of beta (β) decay known as negatron emission. In this mode of decay, a neutron is transformed into a proton, and the Z of the nucleus is increased by 1 with no change in A. In this manner, the N/Z ratio is reduced, and the product nucleus is nearer the line of stability. Simultaneously, an electron (termed a negative beta particle, or negatron) is ejected from the nucleus together with a neutral, massless particle termed a neutrino (actually an "antineutrino" in negatron decay) that carries away the remainder of the released energy not accounted for by the negatron. The neutrino (or antineutrino) seldom interacts with matter and is not important to applications of radioactivity in medicine.

Gamma Emission and Internal Conversion Gamma rays are high-energy electromagnetic radiation that differ from x rays only in their origin: gamma rays are emitted during transitions between isomeric energy states of the nucleus, whereas x rays are emitted during electron transitions outside the nucleus. Gamma rays and other electromagnetic radiation are described by their energy E and frequency ν, two properties that are related by the expression E = hν, where h is Planck's constant (h = 6.62 × 10⁻³⁴ J·sec). The frequency ν and wavelength λ of electromagnetic radiation are related by the expression ν = c/λ, where c is the speed of light in a vacuum. Internal conversion is a competing process to gamma emission for an isomeric transition between energy states of a nucleus. In a nuclear transition by internal conversion, the released energy is transferred from the nucleus to an inner electron, which is ejected with a kinetic energy equal to the transferred energy reduced by the binding energy of the electron.
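The relations E = hν and ν = c/λ can be combined to find the frequency and wavelength of a photon of given energy. A sketch using rounded values of the constants (the 1 MeV example energy is purely illustrative):

```python
H = 6.62e-34          # Planck's constant, J*sec (value quoted in the text)
C = 3.0e8             # speed of light in vacuum, m/s (rounded)
J_PER_EV = 1.602e-19  # joules per electron volt

def frequency(energy_joules):
    """E = h*nu, so nu = E / h."""
    return energy_joules / H

def wavelength(energy_joules):
    """nu = c / lambda, so lambda = h*c / E."""
    return H * C / energy_joules

# Illustrative: a 1 MeV gamma ray.
e = 1e6 * J_PER_EV
print(f"nu = {frequency(e):.2e} Hz, lambda = {wavelength(e):.2e} m")
```

The wavelength that results (on the order of 10⁻¹² m) lands at the gamma-ray end of the spectrum chart shown earlier in this handout.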

RADIOACTIVE EQUILIBRIUM Some progeny nuclides produced during radioactive decay are themselves unstable and undergo radioactive decay in a continuing quest for stability. For example, 226Ra decays to 222Rn, which, in turn, decays by alpha emission to 218Po. When a radioactive nuclide is produced by radioactive decay of a parent, a condition can be reached in which the rate of production of the progeny equals its rate of decay. In this condition, the number of progeny atoms, and therefore the progeny activity, reach their highest level and are constant for a moment in time. This constancy reflects an equilibrium condition known as transient equilibrium because it exists only momentarily. In cases in which a shorter-lived radioactive progeny is produced by decay of a longer-lived parent, the activity curves for parent and progeny intersect at the moment of transient equilibrium. This intersection reflects the occurrence of equal activities of parent and daughter at that particular moment. After the moment of transient equilibrium has passed, the progeny activity decays with an apparent half-life equal to that of the longer-lived parent.
NATURAL RADIOACTIVITY AND DECAY SERIES Most radionuclides in nature are members of one of three naturally occurring radioactive decay series. Each series consists of a sequence of radioactive transformations that begins with a long-lived radioactive parent and ends with a stable nuclide. In a closed environment such as the earth, intermediate radioactive progeny exist in secular equilibrium with the long-lived parent, and decay with an apparent half-life equal to that of the parent.
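The parent-progeny activity relationship described under radioactive equilibrium can be sketched numerically. The function below is the standard two-member Bateman solution for a progeny that starts at zero activity; the half-lives chosen are hypothetical, not tied to any nuclide in this chapter:

```python
import math

def progeny_activity(parent_a0, lam_parent, lam_progeny, t):
    """Activity of a progeny (zero at t = 0) fed by a decaying parent:
    A2(t) = A1(0) * lam2/(lam2 - lam1) * (exp(-lam1*t) - exp(-lam2*t)).
    Valid only when lam1 != lam2."""
    lam1, lam2 = lam_parent, lam_progeny
    return parent_a0 * lam2 / (lam2 - lam1) * (
        math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Hypothetical pair: parent T_half = 66 h, progeny T_half = 6 h.
lam1 = math.log(2) / 66.0
lam2 = math.log(2) / 6.0
for t in (0.0, 6.0, 24.0, 96.0):   # hours after a pure-parent start
    print(t, round(progeny_activity(100.0, lam1, lam2, t), 1))
```

Evaluating the function over time shows the behavior described above: the progeny activity rises, peaks, and then falls off with the parent's apparent half-life.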

All naturally occurring radioactive nuclides decay by emitting either alpha or negative beta particles. Hence, each transformation in a radioactive series changes the mass number by either 4 or 0 and changes the atomic number by −2 or +1. The uranium series depicted in Figure 1-8 begins with the isotope 238U and ends with the stable nuclide 206Pb. The parent and each product in this series have a mass number that is divisible by 4 with a remainder of 2; the uranium series is therefore also known as the 4n + 2 series. The naturally occurring isotopes 226Ra and 222Rn are members of the uranium series. The actinium (4n + 3) series begins with 235U and ends with 207Pb, and the thorium (4n) series begins with 232Th and ends with 208Pb. Members of the hypothetical neptunium (4n + 1) series do not occur in nature because no long-lived parent is available. Fourteen naturally occurring radioactive nuclides are not members of a decay series. These nuclides, all with relatively long half-lives, are 3H, 14C, 40K, 50V, 87Rb, 115In, 130Te, 138La, 142Ce, 144Nd, 147Sm, 176Lu, 187Re, and 192Pt.
ARTIFICIAL PRODUCTION OF RADIONUCLIDES Radioactive isotopes with properties useful in biomedical research and clinical medicine may be produced by bombarding selected nuclei with neutrons and high-energy charged particles. Nuclides with excess neutrons that subsequently decay by negatron emission are created by bombarding nuclei with neutrons in a nuclear reactor or from a neutron generator. Typical reactions are
6C13 + 0n1 → 6C14 + γ
15P31 + 0n1 → 15P32 + γ
Useful isotopes produced by neutron bombardment include 3H, 35S, 51Cr, 60Co, 99Mo, 133Xe, and 198Au. Because the isomeric transition frequently results in prompt emission of a gamma ray, neutron bombardment often is referred to as an (n, γ) reaction. The reaction yields a product nuclide with an increase in A of 1 and no increase in Z.
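The series membership rule above (alpha steps change A by 4, beta steps by 0, so A mod 4 is unchanged along a series) can be written as a one-line classifier:

```python
SERIES = {0: "thorium (4n)", 1: "neptunium (4n+1)",
          2: "uranium (4n+2)", 3: "actinium (4n+3)"}

def decay_series(mass_number):
    """Every alpha step changes A by 4 and every beta step by 0,
    so A mod 4 is invariant along a decay series."""
    return SERIES[mass_number % 4]

print(decay_series(238))  # prints: uranium (4n+2)
print(decay_series(235))  # prints: actinium (4n+3)
print(decay_series(232))  # prints: thorium (4n)
```

Note that 226 mod 4 = 2 as well, confirming that 226Ra belongs to the uranium series as stated above.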

Radioactive nuclides are also produced as a result of nuclear fission. These nuclides can be recovered as fission byproducts from the fuel elements used in nuclear reactors. Isotopes such as 90Sr, 99Mo, 131I, and 137Cs can be recovered in this manner. Fission-produced nuclides (fission byproducts) are often mixed with other stable and radioactive isotopes of the same element, and cannot be separated chemically as a solitary radionuclide. As a consequence, fission byproducts are less useful in research and clinical medicine than are radionuclides produced by neutron or charged-particle bombardment.
SUMMARY
• Radioactive decay is the consequence of nuclear instability.
• Negatron decay occurs in nuclei with a high n/p ratio.
• Positron decay and electron capture occur in nuclei with a low n/p ratio.
• Alpha decay occurs with heavy unstable nuclei.
• Isomeric transitions occur between different energy states of nuclei and result in the emission of γ rays and conversion electrons.
• The activity A of a sample is A = A0 e^(−λt), where λ is the decay constant (fractional rate of decay).
• The half-life T1/2 is the time required for half of a radioactive sample to decay.
• The half-life and the decay constant are related by T1/2 = 0.693/λ.
• The common unit of activity is the becquerel (Bq), with 1 Bq = 1 disintegration/second.
• Transient equilibrium may exist when the progeny nuclide decays with a T1/2 slightly shorter than that of the parent.
• Secular equilibrium may exist when the progeny nuclide decays with a T1/2 much shorter than that of the parent.
• Most radioactive nuclides found in nature are members of naturally occurring decay series.

1. Natural oxygen contains three isotopes with atomic masses (in amu) of 15.9949, 16.9991, and 17.9992 and relative abundances of 2500:1:5. Determine, to three decimal places, the average atomic mass of oxygen.
2. Determine the energy required to move an electron from the K to the L shell in tungsten and in hydrogen, and explain the difference.
3. How many MBq of 132I (T1/2 = 2.3 hours) should be ordered so that the sample activity will be 500 MBq when it arrives 24 hours later?
4. How many atoms and grams of 90Y are in secular equilibrium with 50 mCi of 90Sr?
5. If a radionuclide decays for an interval of time equal to its average life, what percentage of the original activity remains?
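As a sketch of the arithmetic that a question like problem 3 calls for (an illustration of the method, not an answer key), the decay law can be run backwards to find the activity to order:

```python
def activity_to_order(target_mbq, half_life_h, delay_h):
    """Invert A = A0 * 2**(-t/T_half): the activity to order now so that
    target_mbq remains after delay_h hours of decay in transit."""
    return target_mbq * 2 ** (delay_h / half_life_h)

# Problem 3's numbers: 132I with T_half = 2.3 h, 500 MBq wanted after 24 h.
a0 = activity_to_order(500.0, 2.3, 24.0)
print(f"order about {a0:.2e} MBq")
```

Because 24 hours is more than ten half-lives of 132I, the required starting activity is more than a thousand times the target.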

Measurement of Dose When alpha or beta particles, or gamma radiation, pass through matter, they form ions by knocking electrons from the orbits of the molecules they pass through. We can monitor this ionization effect by allowing the radiation to pass through dry air and measuring the number of ions formed. This is most often done by designing a chamber with an electrical charge capacitance, allowing the radiation to pass through the chamber, and monitoring the amount of capacitance discharge caused by the formation of ions. The device is a Geiger-Müller counter, and it has many variations. Ionizing ability is measured in roentgens; a roentgen is the amount of ionization necessary to form one electrostatic unit (esu) of charge in 1 cc of dry air. Since the roentgen is a large unit, dosages for cell research use are normally expressed in milliroentgens (mR). Curies measure the amount of radioactive decay, and roentgens measure the amount of radiation transmitted through matter over distance. Neither unit is useful in determining biological effect, since biological effect implies that the radiation is absorbed by the tissues that are irradiated. The rad (radiation absorbed dose) is a unit of absorbed dose and equals 100 ergs absorbed in 1 gram of matter.
Detection of Radioactivity Ionization chambers. The most common method of measuring radiation exposure is the use of an ionization chamber. Among the more common forms of ionization chambers are the Geiger-Müller counter, the scintillation counter, and the pocket dosimeter. The chambers are systems composed of two electrical plates, with a potential established between them by a battery or other electrical source; in effect, they function as capacitors. The plates are separated by an inert gas, which prevents any current flow between the plates. When ionizing radiation enters the chamber, it induces the formation of an ion, which in turn is drawn to one of the electrical plates.
The negative ions are drawn to the anode (+ plate), while the positive ions are drawn to the cathode (− plate). As the ions reach the plates, they induce an electric current through the system attached to the plates. This is then expressed as a calibrated output, either on a digital or analog meter, or as a series of clicks produced by converting the current through a speaker.

The sensitivity of the system depends on the voltage applied between the electric plates. Since gamma rays are significantly easier to detect than beta particles, a lower voltage suffices to detect high-energy gamma radiation. In addition, gamma rays will penetrate the metal casing of the counter tube, whereas beta particles can pass only through a quartz window on the tube. Consequently, ionization chambers are most useful for measuring gamma emissions. High-energy beta emissions can be measured if the tube is equipped with a thin quartz window and the distance between the source of emission and the tube is minimal. A modification of the basic ionization chamber is the pocket dosimeter. This device is a capacitor that is charged by a base unit and can then be carried as a portable unit. Dosimeters are often the size and shape of a pen and can thus be carried in the pocket of a lab coat. When exposed to an ionizing radiation source, the capacitor discharges slightly. Over a period of time, the charge remaining on the dosimeter can be monitored and used as a measure of radiation exposure. The dosimeters are usually inserted into a reading device that is calibrated to convert the average exposure of the dosimeter directly into roentgens or rems. Since the instrument works by discharging the built-up charge, and the charge is on a thin wire in the center of the dosimeter, it can be completely discharged by the flexing of that wire as it touches the outer shell upon impact. When the dosimeter is later read for exposure, the investigator will be informed that they have been exposed to dangerously high levels of radiation, since there will be no charge left in it. Besides causing great consternation for the radiation safety officer, and a good deal of paperwork, this also causes some unrest for the investigator. Dosimeters should therefore be used in a location where they cannot impact other objects.
Since the dosimeters normally lack the fragile and vulnerable quartz windows of a Geiger tube, and carry lower voltage potentials, they are used for the measurement of x-ray and high-energy gamma radiation, and will not detect beta emissions.

[Diagram: Geiger-Müller tube. An incoming beta particle passes through an insulator and ionizes the argon fill gas (Ar⁰ → Ar⁺ + e⁻); the Ar⁺ ions drift to the cathode (the tube wall, coated with Ag or graphite) while the electrons drift to the central tungsten (W) wire anode; non-absorbed beta particles exit the tube.]

Photographic Film Low-energy emissions are detected more conveniently through the use of a film badge. This is simply a piece of photographic film sandwiched between cardboard and made into a badge, which can be pinned or clipped onto the outer clothing of the investigator. Badges can be worn routinely and collected on a regular basis for analysis. When the film is exposed to radiation, it causes the conversion of the silver halide salts to reduced silver (exactly as with exposure of the film to light). When the film is developed, the amount of reduced silver (black) can be measured and calibrated for average exposure to radiation. This is normally done by a lab specializing in such monitoring. Because of the simplicity of the system, its relatively low cost, and its sensitivity to nearly all forms of radiation, it is the primary means of monitoring radiation exposure of personnel.
Scintillation Counters For accurate quantitative measurement of low-energy beta emissions and for rapid measurement of gamma emissions, nothing surpasses the use of scintillation counters. Since they can range from low- to high-energy detection, they are also useful for alpha emissions. Scintillation counters are based on the use of light-emitting substances (scintillants). When a scintillant is placed with a radioactive source (as in a liquid scintillation counter), the radiation strikes the scintillant molecule, which then fluoresces as it re-emits the energy. Thus, the scintillant gives a flash of light for each radiation particle it encounters. The counter then converts the light energy (either as counts of flashes, or as an integrated light intensity) to an electrical measure calibrated as either direct counts or counts per minute (CPM). If the efficiency of the system is known (the percentage of actual radioactive decays that result in a collision with a scintillant), then disintegrations per minute (DPM) can readily be calculated. DPM is an absolute value, whereas CPM is a function of the specific instrument used.
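The CPM-to-DPM correction described above is a one-line calculation; the sample numbers below are hypothetical:

```python
def dpm(cpm, efficiency):
    """Absolute disintegrations per minute from counts per minute:
    DPM = CPM / efficiency, where efficiency is the fraction of
    decays the instrument actually registers."""
    if not 0.0 < efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return cpm / efficiency

# A hypothetical sample counted at 12,000 CPM with 40% efficiency:
print(dpm(12000, 0.40))  # prints: 30000.0
```

The guard clause matters in practice: an efficiency of zero (or a value entered as a percentage rather than a fraction) would silently produce a meaningless DPM.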
Low-energy beta emissions can be detected with efficiencies of 40% or better with the inclusion of the scintillant directly into a cocktail solution. Alpha emissions can be detected with efficiencies in excess of 90%. Thus, with a liquid scintillation counter, very low doses of radiation can be detected. This makes it ideal for both sensitivity of detection and safety. If the system is modified so that the scintillant is a crystal placed outside of the sample chamber (vial), then the instrument becomes a gamma counter. Gamma emissions are capable of exiting the sample vial and entering into a fluorescent crystal.

The light emitted from the crystal is then measured. Gamma counters are usually smaller than liquid scintillation counters, but are limited to use with gamma emitters. Modern scintillation counters usually combine the functional capabilities of both liquid scintillation and direct gamma counting. Since all use of radioactive materials, and particularly of the expensive counting devices, is subject to local radiation safety regulations, the specific details of use must be left to institutional discretion. Under no circumstances should radioactive materials be used without the express supervision of the radiation safety officer of the institution, following all specific institutional guidelines and manufacturer directions for the instrument used.

e−

Photocathode

Dynodes

Photomultiplier tube, sensitive light meter Solute •Solutes (or fluors) exhibit properties which in many respects are just the opposite of those of solvents. •They tend to decay rapidly mainly through the emission of light photons, thus having a high quantum fluorescent yield. •Solutes that directly absorb the excitation energy of the solvent are also known as primary solutes. •Secondary solutes were added to ampilify the primary emissions. •Secondary solutes were also complex organic compounds with the ability to absorb the decay energy of the primary solute and rapidly emitting it at a longer wavelength, shifting the overall signal to a wavelength more easily detectable by photomultiplier tubes. •As more sensitive photomultiplier tubes were constructed, secondary solutes became unnecessary. However, they may still be used to improve counting efficiency, as both the shorter and longer wavelengths can be detected.

Photomultiplier tube (PMT)
• The emitted light causes the emission of photoelectrons from the PMT, which are multiplied within the PMT into a measurable electrical pulse.
• The amplitude of the pulse is proportional to the number of photons which interact in the PMT.
• The pulse height at the output of the PMT is proportional to the energy of the beta particle in the sample.
• These pulses can be analyzed to provide the energy of the beta particle and the rate of beta emission in the sample.
• It is also possible to count very low-energy gamma emitters by LSC, since most of the gammas are absorbed in the counting solution.

Quenching
A reduction in the total photon output of a sample resulting from a reduction in energy-transfer efficiency. There are four basic types of quenching:
1. Impurity: strong inorganic acids, oxidizing agents, some organic compounds
2. Color: chlorophyll, hemoglobin
3. Dilution: insufficient number of fluor molecules
4. Absorption: heterogeneous samples

Gamma Scintillation Detector
• Scintillators can be made of a variety of materials, depending on the intended applications.
• The most common scintillators used in gamma-ray detectors are made of inorganic materials, usually an alkali halide salt such as NaI or CsI.
• To help these materials do their job, a bit of impurity is often added. This material is called an 'activator'. Thallium and sodium are often used for this purpose.
• So one often sees detectors described as NaI(Tl), meaning a sodium iodide crystal with a thallium activator, or CsI(Na), a cesium iodide crystal with a sodium activator.
• Tl and Na are used to shift the wavelength of the photon emitted by the excited molecule to a value which is not absorbed by the crystal.
• In a solid-state sense, a gamma ray interacting with the crystal moves electrons from the valence band (by ionization of the molecules) to the conduction band.

Positron Emission Tomography (PET) is a technique used in clinical medicine and biomedical research to create images that show anatomical structure as well as how certain tissues are performing their physiological function. Functional imaging is its major strength.

How does PET work?
• The radioisotopes used in PET have very short half-lives compared to conventional nuclear medicine radioisotopes (on the order of minutes).
• A PET radioisotope emits a positron (a positively charged electron) in the process of decay. When this positron collides with an electron, the two particles annihilate each other and produce two photons traveling in opposite directions.
• Two detectors positioned opposite one another can be used to detect the event.
• Many such events are collected in computer memory and used to make up a three-dimensional image of the original distribution of tracer within the patient.
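The annihilation described above fixes the photon energy: each of the two photons carries the electron rest energy, E = m<sub>e</sub>c², which is about 511 keV. A quick check from standard physical constants (values are the usual rounded CODATA figures):

```python
# Each annihilation photon carries the electron rest energy, E = m_e * c^2.
m_e = 9.109e-31   # electron rest mass, kg
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt

E_joules = m_e * c**2
E_keV = E_joules / eV / 1000.0
print(round(E_keV))  # 511 keV per photon
```

This fixed 511 keV energy is what the coincidence circuitry of a PET scanner looks for on the two opposing detectors.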

• The most common radiopharmaceutical or tracer used in clinical PET imaging is 2-18F-deoxy-D-glucose (FDG).
• This is transported into the cell like normal glucose, but cannot be metabolized beyond the initial phosphorylation step, and so is trapped within the cell.
• The greater the uptake of FDG by a cell, the more metabolically active the cell is.

Detector
• A scintillation crystal of bismuth germanate (BGO) to capture the annihilation photons.
• A PMT to amplify the signal generated by the BGO photon capture and convert it into an electrical signal.
• Detection/coincidence circuitry to ensure that the photon which caused the signal is actually an annihilation photon.

AUTORADIOGRAPHY
The process of localizing radioactive materials within a cell is known as autoradiography. 3H (tritium) is used in cell analysis because it is a relatively weak beta emitter (thus making it safer to handle) and, more significantly, can be localized within cell organelles. 14C and 32P are also used, but emit more energetic radiation, require significantly more precautions in handling, and are inherently less capable of resolving intracellular details; they are used at the tissue or organ level of analysis. Radioactive isotopes can be incorporated into cellular molecules. After the cell is labeled with radioactive molecules, it can be placed in contact with photographic film. Ionizing radiations are emitted during radioactive decay, and silver ions in the photographic emulsion become reduced to metallic silver grains. The silver grains not only serve as a means of detecting radioactivity but, because of their number and distribution, provide information regarding the amount and cellular distribution of the radioactive label. The process of producing this picture is called autoradiography, and the picture is called an autoradiogram. The number of silver grains produced depends on the type of photographic emulsion and the kind of ionizing particles emitted from the cell. Alpha particles produce straight, dense tracks a few micrometers in length. Gamma rays produce long random tracks of grains and are useless for autoradiograms. Beta particles or electrons produce single grains or tracks of grains. High-energy beta particles (such as those produced by 32P) may travel more than a millimeter before producing a grain. Low-energy beta particles (such as those produced by 3H) produce silver grains within a few micrometers of the radioactive disintegration site, and so provide very satisfactory resolution for autoradiography.

The site of synthesis of cellular molecules may be detected by feeding cells a radioactive precursor for a short period and then fixing the cells. During this pulse labeling, radioactivity is incorporated at the site of synthesis but does not have time to move from this site. The site of utilization of a particular molecule may be detected by chase labeling: cells are exposed to a radioactive precursor, the radioactivity is then washed or diluted away, and the cells are allowed to grow for a period of time. In this case, radioactivity is incorporated at the site of synthesis, but then has time to move to a site of utilization in the cell.

3H-thymidine can be used to locate sites of synthesis and utilization of DNA. Thymidine, the deoxyribose nucleoside of thymine, can be purchased with the tritium label attached to the methyl group of thymine. Thymidine is specifically incorporated into DNA in Tetrahymena. Some organisms can remove the methyl group from thymine and incorporate the uracil product into RNA. Even in this case, RNA would not be labeled, because the tritium label would be removed along with the methyl group. Methyl-labeled thymidine therefore serves as a very specific label for DNA. Cells are exposed to the label for a short period (pulse labeling), after which they are washed free of the radioactive medium. All remaining radioactivity is due to the incorporation of the thymidine into the macromolecular structure of DNA. The cells are then fixed, covered with a photographic emulsion, and allowed to develop. During this time, the radiation emanating from the 3H exposes the photographic emulsion, producing reduced silver grains immediately above the location of the radioactive source (DNA). Thus it is possible to localize the newly synthesized DNA, or the DNA that was in the S phase of the cell cycle during the period of the pulse labeling.

The autoradiography workflow, as steps:
1. Incubate tissue with radioactive ligand.
2. The isotope emits radiation (usually beta).
3. Expose the tissue to film or emulsion.
4. The radiation hits silver grains in the emulsion and exposes them.

Radioimmunoassay (RIA)
Determination of Hormone Concentration in the Plasma
The concentration of most hormones in the blood is very low, from the low picomolar (10⁻¹² M) to the high nanomolar (10⁻⁹ M) range. This very low concentration presented serious challenges to early investigators interested in the study of hormones, because traditional analytical means of measurement for steroids and proteins did not allow detection in the physiological range of hormone levels. Therefore, the initial tests of hormone presence or absence were bioassays. Bioassays involved the injection of tissue extracts into experimental animal models, followed by observation of some specific anticipated phenotype. For example, long before sensitive laboratory pregnancy tests were developed, urine samples from suspected pregnant women were administered to female African clawed frogs (Xenopus laevis). Induction of egg laying by the animal was indicative of the presence of human chorionic gonadotropin (hCG), a hormone secreted by the placenta. The technique that replaced such bioassays is radioimmunoassay (RIA), developed by Rosalyn Yalow and Solomon Berson and so named because it combines the high specificity of immunological methods with the high sensitivity of radiotracer methods. RIA analysis can allow sensitivity down to the femtomolar range (10⁻¹⁵ M)! Therefore, RIA is said to have very high sensitivity (i.e., detection at very low concentrations). In addition, because specific antibodies are used (they are carefully screened during antibody selection), these antibodies react only with the hormone of interest. Thus, RIA is said to have very high specificity (i.e., the antibody does not cross-react with other hormones). For this work, Yalow received the Nobel Prize in Physiology or Medicine in 1977. She was also the first woman to receive the Lasker Award, one of the highest honors bestowed on a scientist in the U.S.; this prize is often called the "American Nobel".
Principles of RIA
RIA is based on competition between non-labeled ("cold") and isotope-labeled ("hot") hormone for binding to its specific antibody. Typically, a fixed amount of the antibody is attached to the bottom of a tube, and then a fixed amount of hot hormone is added to the tube.

The amount of hot hormone is typically high enough to saturate all of the antibody molecules attached to the bottom of the tube. Because the interaction between the antibody and the hormone involves very strong noncovalent forces, the contents of the tube can be discarded without disturbing the hormone molecules that are bound to the antibody molecules at the bottom of the tube. The tube can now be counted in a gamma counter to obtain a counts-per-minute (CPM) reading. This value represents the total count, the situation in which all of the antibody binding sites are occupied by hot hormone; this CPM measurement is labeled B0. When a sample containing cold hormone is added along with the hot hormone, the two compete for the limited antibody sites, so the bound counts fall below B0 in proportion to the cold hormone concentration.

Summary of Requirements for RIA
1. A highly specific antibody against the hormone of interest. In modern commercial kits, these are covalently attached to the bottom of the tube in a way that leaves the antibody binding site available to bind hormone.
2. Radiolabeled hormone of interest, typically labeled with 125I. Thankfully, this also comes in the commercial kit.
3. Known amounts of cold hormone, in order to obtain a standard curve. These known standards are also provided with commercial kits.
4. Sample tubes that do not have antibody attached to their bottom. These are in every respect identical to the tubes containing the antibody but, of course, lack the antibody. They are used to determine non-specific binding of the hormone to the walls of the experimental tubes.
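The competitive principle can be sketched numerically: bound counts for a series of cold-hormone standards define a curve of B/B0 against concentration, and an unknown is read off that curve. The counts and concentrations below are hypothetical, and a real assay would fit a smooth (e.g., four-parameter logistic) curve rather than interpolate linearly between points:

```python
import math

# Hypothetical standard curve: known cold-hormone concentrations (pg/ml)
# and the bound counts (CPM) measured for each. B0 is the bound count with
# no cold hormone present (all antibody sites occupied by hot hormone).
B0 = 10000.0
standards = [(10, 9000.0), (100, 6000.0), (1000, 2500.0)]  # (conc, CPM)

def interpolate_conc(cpm_unknown):
    """Linearly interpolate log10(concentration) between bracketing
    standards, using B/B0 as the response (which falls as cold rises)."""
    ratios = [(math.log10(c), cpm / B0) for c, cpm in standards]
    r_u = cpm_unknown / B0
    for (lc1, r1), (lc2, r2) in zip(ratios, ratios[1:]):
        if r2 <= r_u <= r1:  # response decreases with concentration
            frac = (r1 - r_u) / (r1 - r2)
            return 10 ** (lc1 + frac * (lc2 - lc1))
    raise ValueError("unknown falls outside the standard curve")

print(interpolate_conc(6000.0))  # 100.0 pg/ml (falls on the middle standard)
```

An unknown tube that binds fewer hot counts than B0 therefore reports a correspondingly higher cold-hormone concentration.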

SPECTROPHOTOMETRY
Absorption spectroscopy, also referred to as UV-Visible (UV-Vis) spectroscopy, is one of the simplest techniques of optical spectroscopy. The technique is most frequently employed in life science laboratories as a method to determine concentration or to monitor changes in state (such as DNA melting) or chromophore environment (such as receptor binding). Absorption spectroscopy can be used to evaluate molecules that undergo electronic transitions excited by ultraviolet (UV) and visible light (190–800 nm). Biologically relevant chromophores in this region include the peptide bond (210 nm), nucleic acid bases (250–260 nm), aromatic amino acid side chains (260–280 nm), heme (400 and 600 nm), and flavin (450 nm).

A spectrophotometer measures the relative amounts of light energy passed through a substance that are absorbed or transmitted. We will use this instrument to determine how much light of a certain wavelength (or wavelengths) is absorbed by (or transmitted through) a solution. Transmittance (T) is the ratio of transmitted light to incident light, and absorbance (A) = −log T. Absorbance is usually the more useful measure, because there is a linear relationship between absorbance and the concentration of a substance. This relationship is expressed by the Beer–Lambert law:

A = εbc

where ε = extinction coefficient (a proportionality constant that depends on the absorbing species), b = pathlength of the cuvette (most standard cuvettes have a 1-cm path, so this term can often be ignored), and c = concentration.

A spectrophotometer or colorimeter makes use of the transmission of light through a solution to determine the concentration of a solute within the solution. The two differ in the manner in which light is separated into its component wavelengths: a spectrophotometer uses a prism to separate light, whereas a colorimeter uses filters.
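The transmittance-to-absorbance relation above can be sketched directly; the values are chosen simply to show the logarithmic relationship:

```python
import math

def absorbance_from_transmittance(T):
    """A = -log10(T), where T = transmitted / incident intensity (0-1)."""
    if not 0 < T <= 1:
        raise ValueError("T must be a fraction between 0 and 1")
    return -math.log10(T)

# 10% transmittance corresponds to A = 1.0; 1% transmittance to A = 2.0
print(absorbance_from_transmittance(0.10))  # 1.0
print(absorbance_from_transmittance(0.01))  # 2.0
```

Each additional absorbance unit therefore means tenfold less light reaching the detector, which is why very high absorbance readings become unreliable in practice.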

Both are based on a simple design: light of a known wavelength is passed through a sample, and the amount of light energy transmitted is measured by a photocell placed on the other side of the sample. All molecules absorb radiant energy at one wavelength or another. Those that absorb energy from within the visible spectrum are known as pigments. Proteins and nucleic acids absorb light in the ultraviolet range. The following figure shows the radiant energy spectrum, with an indication of the molecules that absorb in various regions of that spectrum. The design of the single-beam spectrophotometer involves a light source, a prism, a sample holder, and a photocell. Connected to each are the appropriate electrical or mechanical systems to control the illuminating intensity, the wavelength, and the conversion of energy received at the photocell into a voltage fluctuation. The voltage fluctuation is then displayed on a meter scale, displayed digitally, or recorded via connection to a computer for later analysis.

Absorption Spectrophotometers A UV-Vis spectrophotometer is a device that measures the amount of light absorbed by the chromophore as a function of the wavelength of the electromagnetic radiation. Conventional spectrophotometers consist of the following: a light source (deuterium lamp for the UV region: 190–350 nm and tungsten lamp for the wavelength region above 350 nm); a collimator, for focusing the beam of light; a monochromator, for wavelength selection; a sample compartment; and a detector. Conventional spectrophotometers have been largely replaced by diode array spectrophotometers, which have an advantage of speed over the former. A diode array spectrophotometer uses reverse optics, that is, the polychromatic light passes through the sample first and is then dispersed onto the diode array. The array comprises a series of photodiode detectors juxtaposed on a silicon chip, with each diode designed to measure a finite but narrow band of the spectrum. Thus, the entire spectrum can be collected in a matter of seconds. The short exposure to the incident light can also minimize photodecomposition of samples.

Common Applications
1. Concentration Determination
This is a direct application of the Beer–Lambert law. If the absorption coefficient for a molecule is known, then the concentration of the molecule in solution can be obtained by taking an absorption spectrum of the solution and solving the equation:

c = A / (εb)
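Solving the Beer–Lambert law for concentration can be sketched with the paclitaxel coefficient quoted below and the standard dsDNA O.D.-unit conversion; the absorbance readings themselves are hypothetical:

```python
# Concentration via the Beer-Lambert law, c = A / (eps * b), using a
# literature absorption coefficient (paclitaxel in ethanol, from the text).
eps_paclitaxel_228 = 2.79e4   # M^-1 cm^-1 at 228 nm

def molar_conc(A, eps, b_cm=1.0):
    return A / (eps * b_cm)

A_sample = 0.558              # hypothetical reading, 1-cm cuvette
print(molar_conc(A_sample, eps_paclitaxel_228))  # ~2e-05 M (20 uM)

# dsDNA quantification: 1 O.D. unit at 260 nm ~ 50 ug/ml of dsDNA
def dsdna_ug_per_ml(A260, dilution_factor=1.0):
    return A260 * 50.0 * dilution_factor

print(dsdna_ug_per_ml(0.8, dilution_factor=10))  # 400.0 ug/ml
```

Note that the dilution factor must be tracked explicitly: the instrument only reports the absorbance of whatever is in the cuvette.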

If a literature value for the absorption coefficient is used, it is important to know the buffer concentration, the concentration of any additives, and the pH of the solution, since differences in these solution variables can affect an absorption coefficient. An absorption coefficient is always reported with the wavelength and the solvent/buffer solution. For instance, the two commonly used absorption coefficients for measuring the concentration of the anticancer drug paclitaxel for laboratory purposes are ε228 nm, ethanol = 2.79 × 10⁴ M⁻¹ cm⁻¹ and ε273 nm, DMSO = 1.70 × 10³ M⁻¹ cm⁻¹. Absorption coefficients for proteins are frequently encountered as percent-solution absorption coefficients (ε%), which have units of (g/100 ml)⁻¹ cm⁻¹. Absorption coefficients for DNA and RNA are frequently expressed as (g/L)⁻¹ cm⁻¹. Absorption coefficient data for nucleic acid polymers are also expressed in terms of O.D. units at 260 nm (1 O.D. unit is equivalent to an absorbance of 1.0). For example, 1 O.D. unit at 260 nm for dsDNA corresponds to a concentration of 50 µg/ml. The most critical aspect of this procedure is that the concentration of the original stock solution must be very accurate and the subsequent dilutions must be carefully performed.

2. Ligand–Receptor Interactions
Absorption spectra can be sensitive to the environment of the chromophore, and this sensitivity may be exploited to evaluate ligand–receptor interactions, particularly when the ligand absorbs light in a region of the spectrum that is different from that of the receptor. Environmental conditions that affect the ionization state of a chromophore can result in large changes in the absorption maximum and molar absorptivity. For example, the absorption spectrum of tyrosine at pH above and below the pKa of the phenol is shown in the figure below. Ionization of the phenol results in a 19 nm shift in the lowest-energy absorption maximum and an increase in the molar absorptivity.
A shift in an absorption band to longer wavelength is a shift to lower energy, and is also called a red shift or a bathochromic shift. A shift in an absorption band to shorter wavelength is a shift to higher energy, and is also called a blue shift or a hypsochromic shift.

Fig.: Absorption spectra of L-tyrosine below and above the pKa. The absorption maximum shifts from 275 nm in acidic to 294 nm in basic medium, with a concomitant increase in the absorption intensity.

Small changes in absorption spectra are most easily observed by difference spectroscopy. An absorption spectrum of the unperturbed chromophore, such as a ligand in the absence of the receptor, is recorded. The absorption spectrum of an identical concentration of the ligand in the presence of the receptor is then recorded, and the first (unperturbed) spectrum is subtracted from the second (perturbed) spectrum to yield the difference spectrum. Such measurements can be repeated with varying concentrations of one of the components until saturation in the signal change is reached. These data can then be used to construct a binding curve.

3. Turbidity Measurements
The figure on the next page illustrates another function of an absorption spectrophotometer, which is to detect light scattering indirectly. Particles in a solution will scatter light in a manner dependent on the size of the particle and the wavelength of the light. The wavelength dependence of the turbidity of a solution of biological macromolecules can be used to estimate the size and shape of the macromolecule. Since turbidity is directly proportional to absorbance, measurements performed on the absorption spectrophotometer can provide information about molecular weight and dimensions and the concentrations of particles or macromolecules in the solution. For example, bacteria scatter light as if they were small particles, and suspensions of bacteria at sufficient concentration will appear turbid. Measurement of turbidity (apparent absorption) is common laboratory practice to monitor bacterial growth and estimate bacterial concentration. Reactions that proceed with protein aggregation or polymer formation may be monitored by measuring turbidity changes.
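The bacterial-growth application above can be sketched: if turbidity (apparent absorbance) is proportional to cell density, exponential growth gives OD(t₂) = OD(t₁)·e^(µΔt), so the growth rate follows from two readings. The OD600 values here are hypothetical:

```python
import math

# Specific growth rate from two turbidity readings during exponential growth:
#   mu = ln(OD2 / OD1) / (t2 - t1),   doubling time = ln(2) / mu
def growth_rate(od1, od2, dt_hours):
    return math.log(od2 / od1) / dt_hours

def doubling_time(mu):
    return math.log(2) / mu

# Hypothetical culture: OD600 rises from 0.1 to 0.4 over 2 hours
mu = growth_rate(0.1, 0.4, 2.0)
print(round(doubling_time(mu), 2))  # 1.0 hour (two doublings in 2 h)
```

This only holds while the culture is in exponential phase and the OD stays within the range where turbidity tracks cell density linearly.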

Fluorescence Spectroscopy
Fluorescence spectroscopy is one of the most widely used optical techniques in biochemistry and cell biology. Highly sensitive and tremendously versatile, fluorescence spectroscopy can be found in virtually all areas of life science research. The instruments commonly encountered in life science laboratories are the steady-state spectrofluorimeter and the fluorescence plate reader. This chapter will be limited to the types of information available from the basic versions of these instruments. Even with these limitations, the coverage here is far from comprehensive.

Fluorescence
Absorption of a photon of appropriate energy causes an electronic transition from the ground state to an excited state. Once a molecule is in an electronically excited state, the energy must eventually be dissipated for the molecule to return to its ground state.

Fig. Difference spectrum for the hydrazone formation reaction as a function of time.

4. Microplate Reader Spectrophotometers
The use of plate readers for measuring optical properties of multiple samples is increasingly common in life science. Microtiter plates are commonly available in 6-, 12-, 24-, 48-, 96-, 384-, and 1536-well formats, and a majority of the commercial instruments will read 96- and 384-well plates. A significant difference between absorption data collected in plates compared to those collected in a standard spectrophotometer is that the path length in the plate is a function of the sample volume, while the path length in a spectrophotometer is physically defined by the dimensions of the cuvette regardless of the sample volume. Since the absorbance of a solution is a function of the path length, it is necessary to use identical sample volumes in each well of a plate and/or know how the plate reader responds to samples of varying volumes. There are commercially available plate readers that can correct the output data for variations in sample volume. The most common applications using microtiter plates are assays involving multiple samples of similar composition and identical volumes, such as colorimetric assays for cytotoxicity or enzyme-linked immunosorbent assays (ELISAs) using alkaline phosphatase-conjugated secondary antibodies. Many protocols that require absorption measurements on multiple samples are much easier to perform using plate readers than conventional spectrophotometers.

Fig. Simplified Jablonski diagram depicting the excitation of an electron by absorption of a photon (hνA) to higher electronic states S1 or S2. The electron returns to the ground electronic state, S0, from the lowest vibrational state of S1. Fluorescence is observed with the release of a photon (hνF). The dotted and solid arrows in the diagram represent nonradiative and radiative electronic transitions, respectively.

The absorption of a photon occurs very quickly (10⁻¹⁵ s), from the lowest vibrational level of the ground state (S0) to higher electronic and vibrational states (e.g., S1, S2). The electron relaxes quickly (10⁻¹⁰ to 10⁻¹² s) from higher energy states to the lowest vibrational level of the first excited state (S1).

This is known as internal conversion. In the absence of photochemical reactions, there are two general paths for loss of excited state energy: radiative and nonradiative. Loss of energy in a radiative pathway involves release of a photon (Fig. on previous page). When the photon comes from the first excited singlet state to the ground state, the light released is fluorescence. An electron can also undergo intersystem crossing, that is, move to an excited triplet state from the excited singlet state. The return of the electron from this triplet state to the ground state may be accompanied by release of a photon. This emission is referred to as phosphorescence and will not be discussed in this section. Excited state energy may also be dissipated by nonradiative paths (without emission of a photon) through mechanisms such as release of heat, interactions with solvent molecules, collisions with other molecules, or resonance energy transfer to another chromophore.

Fluorophores
A fluorophore may be endogenous to the biological system, such as an aromatic amino acid residue in a protein. Endogenous or intrinsic fluorophores frequently absorb and emit in the UV region. Visible-region fluorophores are generally exogenous, and may be introduced to a biological system through the use of fluorescently labeled antibodies, chemical labeling, or expression of green fluorescent protein or one of its variants. Exogenous or extrinsic fluorophores can be designed to absorb and emit virtually any frequency of light, including near-infrared radiation. Both endogenous and exogenous fluorophores may be characterized by the following experimental observations.

1. Excitation and Emission Spectra
An emission spectrum is a plot of the number of photons emitted at a particular wavelength when the molecule is irradiated at a single wavelength in an absorption band. In general, the shape of an emission spectrum is a mirror image of the absorption spectrum. Absorption of a photon primarily occurs from the lowest vibrational level of the ground state to multiple vibrational levels within the excited state.

Vibrational relaxation of the excited state results in the emitted photon originating from the lowest vibrational level of the excited state. Emission of a photon returns the fluorophore to various vibrational levels within the ground state. The vibrational energy levels of the ground and excited states are about equally spaced; therefore, the absorption and emission spectra appear to be reflected through a mirror plane. An excitation spectrum is also a plot of the number of photons emitted at a particular wavelength; however, in this case, the emission wavelength is held constant and the excitation wavelength is varied. The shape of an excitation spectrum of a molecule is generally the same as that of its absorption spectrum. Normally, the shape of an excitation or emission spectrum is constant regardless of excitation wavelength, although the overall intensity will vary depending on the absorption coefficient at the excitation wavelength.

2. Stokes' Shift
The emission maximum of a fluorophore is observed at a lower energy (longer wavelength) than the absorption maximum (see figure below). This is a consequence of the loss of vibrational energy in the excited state before emission of the photon (see the Jablonski diagram above). The energy difference between the absorption maximum (νA) and the emission maximum (νF) is the Stokes' shift, which is expressed in wavenumbers (cm⁻¹). The magnitude of the Stokes' shift of a fluorophore is a function of its molecular structure and, for many fluorophores, the nature of its surroundings. Environmental effects on the Stokes' shift of a fluorophore can be useful to probe some aspects of biological systems.
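Since the Stokes' shift is reported in wavenumbers, it is computed from the absorption and emission maxima via ν̄ (cm⁻¹) = 10⁷ / λ (nm). A sketch for a hypothetical fluorophore:

```python
# Stokes' shift in wavenumbers (cm^-1) from absorption and emission maxima
# given in nm: nu_bar = 1e7 / lambda_nm (since 1 cm = 1e7 nm).
def stokes_shift_cm1(abs_max_nm, em_max_nm):
    return 1e7 / abs_max_nm - 1e7 / em_max_nm

# Hypothetical fluorophore absorbing at 350 nm and emitting at 450 nm
print(round(stokes_shift_cm1(350.0, 450.0)))  # 6349 cm^-1
```

Working in wavenumbers rather than nanometers keeps the shift proportional to the actual energy difference between the two maxima.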

Fig. Absorption and emission spectra of 8-anilino-1-naphthalenesulfonic acid (ANS) in dimethyl sulfoxide.

3. Fluorescence Lifetime
The lifetime of a fluorophore is the average time the molecule spends in the excited state. Loss of excited state energy is due to radiative and nonradiative processes. Therefore, the lifetime (τ) is the inverse of the sum of the rate constant for radiative emission (fluorescence, kr) and the rate constants for all nonradiative dissipation of excited state energy (knr):

τ = 1 / (kr + knr)

A fluorophore with a quantum yield of 1.0 emits all absorbed photons as fluorescence. Relative ‘‘brightness’’ of fluorophores can be assessed by multiplying the absorption coefficient at the excitation wavelength by the fluorescence quantum yield. A comparison of a molecule’s fluorescence intensity with the emission intensity of a fluorophore whose quantum yield is available in the literature is another common way to express relative quantum yield.

The lifetime of a single electronic transition is characterized by a single exponential decay of fluorescence, and is the time required for the fraction of the population of molecules in the excited state to decrease by a factor of 1/e, or ~37%:

F(t) = F0 e^(−t/τ)

In the above equation, t is the time, τ is the fluorescence lifetime, and F0 is the initial fluorescence at t = 0. Fluorescence lifetimes are measured by either pulse fluorometry or phase-modulated fluorometry, both of which are beyond the scope of this chapter. It may be noted that the fluorescence lifetime is not directly affected by the energy or intensity of the light emitted. Thus, fluorophores that have similar spectral characteristics can be distinguished if their lifetimes are different. Discrimination between fluorescence lifetimes is the basis for the technique known as fluorescence lifetime imaging microscopy.

5. Fluorescence Anisotropy
Fluorophores absorb photons that have their electric dipoles aligned parallel to the transition moment of the fluorophore. When freely diffusing fluorophores are excited with polarized light, the emitted light will normally be depolarized as a result of rotational diffusion during the lifetime of the excited state. The extent of depolarization is assessed by determining the intensity of light emitted parallel (I∥) and perpendicular (I⊥) to the excitation light. The fluorescence anisotropy (r) is calculated from these measurements by the following equation:

r = (I∥ − I⊥) / (I∥ + 2I⊥)

Applications of anisotropy measurements include determination of equilibrium binding constants for ligand–receptor interactions, particularly if the fluorescence of the ligand is monitored.
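The anisotropy calculation described above, r = (I∥ − I⊥)/(I∥ + 2I⊥), can be sketched directly; the intensity values are hypothetical (arbitrary units):

```python
def anisotropy(i_par, i_perp):
    """Steady-state fluorescence anisotropy:
    r = (I_par - I_perp) / (I_par + 2 * I_perp).
    The denominator is the total emitted intensity for a randomly
    oriented population of fluorophores."""
    return (i_par - i_perp) / (i_par + 2.0 * i_perp)

print(anisotropy(3.0, 1.0))  # 0.4, the one-photon theoretical maximum
print(anisotropy(2.0, 2.0))  # 0.0, fully depolarized emission
```

A small, rapidly tumbling free ligand gives a low r, while the same ligand bound to a large, slowly rotating receptor gives a higher r, which is what makes anisotropy useful for binding measurements.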

C. Fluorescence Instrumentation

4. Quantum Yield
The quantum yield (Φ) of a fluorophore is the ratio of the number of photons emitted as fluorescence to the number of photons absorbed by the molecule:

Φ = (photons emitted) / (photons absorbed)

The quantum yield can also be expressed in terms of the fluorescence lifetime:

Φ = kr / (kr + knr) = kr τ
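The lifetime and quantum-yield relations can be combined numerically; the rate constants below are hypothetical:

```python
# Relations among radiative rate (kr), nonradiative rate (knr),
# lifetime (tau), and quantum yield (phi):
#   tau = 1 / (kr + knr)      phi = kr / (kr + knr) = kr * tau
def lifetime(kr, knr):
    return 1.0 / (kr + knr)

def quantum_yield(kr, knr):
    return kr / (kr + knr)

# Hypothetical rate constants (per second)
kr, knr = 1e8, 3e8
print(lifetime(kr, knr))       # 2.5e-09 s (2.5 ns)
print(quantum_yield(kr, knr))  # 0.25
```

Anything that speeds up the nonradiative pathways (knr), such as collisional quenching, shortens the lifetime and lowers the quantum yield together.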

Two types of instruments frequently available in life science laboratories are fluorescence spectrophotometers and fluorescence plate readers. Fluorescence spectrophotometers have some resemblance to absorption spectrophotometers: both have light sources and accessories to control the energy of light incident on the sample, which is typically in a quartz cuvette. In an absorption spectrophotometer, the detector is in a straight path to the light source, but in a fluorescence spectrophotometer, the detector is at a right angle to the light source. Fluorescence spectrophotometers usually have two monochromators, hence the wavelength of light that hits the sample and the wavelength that reaches the detector can be modulated independently. These instruments can collect two types of spectra: excitation spectra and emission spectra. Emission spectra are the more common. The sample is irradiated with light of a particular energy, selected with the excitation monochromator. The light emitted from the sample passes through the emission monochromator, which is usually scanned over a range sufficient to collect data from the entire emission band.

[Figure: general layout of a fluorimeter — a tunable light source (laser, LED, or lamp plus excitation monochromator), a beamsplitter directing part of the beam to a reference detector (IRef), the sample, a spectral dispersion apparatus (filters or monochromator), and a detector.]
Absorption Versus Fluorescence
The absorption intensity of a molecule in solution will be the same regardless of the absorption spectrophotometer used. The same cannot be said for the fluorescence intensity of the same solution. Absorption intensity is defined as the ratio of transmitted to incident photons, whereas fluorescence intensity is proportional to the number of photons emitted. The number of photons emitted will depend on the number of photons incident on the sample, which in turn depends on instrumental parameters such as lamp intensity and slit width. With the exception of quantum yield measurements, fluorescence spectra are not ratioed, so the absolute value of fluorescence intensity from the same sample is not necessarily constant from day to day, even on the same instrument under the same experimental conditions. Fluorescence spectra are therefore shown with arbitrary units in the legend on the y-axis, although other identifying legends may be employed (number of photons, fluorescence intensity, etc.). Unlike an absorption spectrum, the intensity of fluorescence is strongly influenced by temperature. Fluorescence quantum yields are sensitive to temperature because the processes by which excited state energy is dissipated (vibrations, collisions) are affected by temperature to a greater extent than the process by which the excited state is formed. Therefore, the solution in the cell compartment should be held at a constant temperature, even when spectra are obtained at "room temperature."

Measuring Emission and Excitation Spectra It is always a good idea to take an absorption spectrum of the sample that will be examined by fluorescence spectroscopy prior to recording fluorescence spectra. 1. Collecting Emission Spectra The most common steady-state fluorescence spectrum is the emission spectrum. To collect the spectrum, a cuvette containing the fluorophore is placed in the instrument and the solution is equilibrated to the desired temperature. The excitation monochromator is adjusted to the wavelength chosen for excitation. This is frequently the absorption maximum for the fluorophore, but another wavelength within the absorption band may also be chosen. The emission monochromator is set to collect a range of wavelengths. The lower limit should be a value greater than the excitation wavelength to avoid collecting stray excitation light. The upper limit should be at a wavelength beyond the end of the emission band. An initial scan of the emission spectrum is performed, and the wavelength of the maximum emission intensity is noted. The intensity should be compared to the acceptable range provided with the instrument documentation. The intensity of the emission can be adjusted by methods described in the instrument documentation, which frequently consists of increasing or decreasing the slits on the excitation or emission monochromator until a suitable value is achieved. The sample is scanned again using the adjusted slits and wavelength limits. A blank spectrum of the solution components without the fluorophore must also be collected using the same parameters employed for the emission spectrum. The blank emission spectrum is subtracted from the sample emission spectrum to yield the emission spectrum of the fluorophore.
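Once the sample and blank scans have been collected with identical parameters, the subtraction itself is a simple point-by-point operation. A minimal sketch in Python (the arrays here are synthetic stand-ins, not real data):

```python
import numpy as np

# Synthetic illustration: a Gaussian emission band centered at 340 nm
# sitting on a small constant blank (buffer-only) signal.
wl = np.arange(300, 401, 1.0)                        # emission wavelengths, nm
sample = 100 * np.exp(-((wl - 340) / 15) ** 2) + 2.0 # fluorophore + blank
blank = np.full_like(wl, 2.0)                        # blank scan, same settings

corrected = sample - blank          # blank-subtracted emission spectrum
peak = wl[np.argmax(corrected)]     # emission maximum: 340.0 nm
print(peak)
```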

Emission spectra may be corrected for fluctuations in the emission intensity due to features of emission monochromator and emission photomultiplier tube. 2. Collecting Excitation Spectra The procedure for collecting an excitation spectrum is similar to the procedure for collecting emission spectra, except that the emission wavelength is held constant and the excitation wavelength region is scanned. A major difference is that excitation spectra must be corrected to be meaningful because the intensity of light from the excitation source varies with wavelength. The standard method for correcting excitation spectra is to use a quantum counter in a reference channel. The specific procedure for a particular instrument should be found in the instrument manual.

Common Experimental Problems and Their Solutions Most steady-state fluorescence experiments are straightforward to perform. There are, however, some trivial sources of error that are frequently encountered by novice researchers. "Trivial" is used in the sense of simple, not unimportant. 1. Inner Filter Effect Emission spectra are collected assuming that the same intensity of light hits the front and the back of the sample (see Fig.). When the sample absorbs the excitation light strongly, the intensity of light diminishes as it passes through the cell. The emission intensity emanating from the back of the cell is then less than that emanating from the front of the cell (see Fig.). Therefore, the overall emission intensity will be lower in a sample with higher absorbance at the excitation wavelength. If the absorbance of the sample is 0.05 units or less over the effective path of the sample cell, then no inner filter effect is observed. If the absorbance of the sample is >0.05, then a linear relationship between the concentration of fluorophore and emission intensity cannot be assumed. Two ways of managing the inner filter effect are avoiding it or correcting for it. If possible, avoiding an inner filter effect is the better choice. A simple way to decrease the light absorbed by the sample is to move the excitation wavelength to a region of the band with lower molar absorptivity. Correction for an inner filter effect can be done using the equation Fcorr = Fobs × 10^((Aex + Aem)/2), where Aex and Aem are the absorbances of the sample at the excitation and emission wavelengths.
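The widely used form of this correction multiplies the observed intensity by 10^((Aex + Aem)/2). A minimal sketch (the numerical values are illustrative only):

```python
def inner_filter_correct(f_obs, a_ex, a_em):
    """Standard inner filter correction:
    F_corr = F_obs * 10**((A_ex + A_em) / 2),
    where a_ex and a_em are the absorbances at the excitation and
    emission wavelengths (standard 1 cm cuvette geometry assumed)."""
    return f_obs * 10 ** ((a_ex + a_em) / 2)

# Illustrative values: observed intensity 1000, A_ex = 0.10, A_em = 0.04
print(round(inner_filter_correct(1000.0, 0.10, 0.04), 1))  # 1174.9
```

Note that the correction grows rapidly with absorbance, which is why dilution or a shorter excitation path is preferred when absorbance is well above 0.05.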

Fig. Fluorescence of a sample is observed at a right angle to the excitation. (A) The excitation at the front and back of the cuvette is the same, so no inner filter effect is observed. (B) For a concentrated solution, the excitation of the sample in the cuvette is not uniform, and hence less emission is observed; an inner filter correction needs to be performed for this sample. (C) The cuvette has a shorter excitation path (2 mm) while the emission path is 10 mm. The use of this dual path length cuvette decreases the inner filter effect. 2. Secondary Absorption Effect A secondary absorption effect occurs when a component in the sample absorbs the emission light. This is less frequently encountered, and typically occurs when more than one chromophore is in the sample. The secondary absorption effect will depend on the concentration of the species, the absorption coefficient of the acceptor (A), and the quantum yield of the donor (D). The absorption spectrum of the sample should be examined to ensure that there is little to no absorptivity in the spectral region in which the emission data are collected. There is no satisfactory way to correct emission spectra for secondary absorption effects, so they should be avoided. Secondary absorption effects can often be eliminated by sample dilution. 3. Photobleaching Molecules in the excited state can also undergo photochemical reactions. If these reactions lead to products with different emissive properties, the emission intensity of the sample will appear to decrease over time. For most fluorophores, this process is irreversible. Photobleaching has been exploited in fluorescence microscopy as fluorescence recovery after photobleaching (FRAP). In in vitro assays, however, photobleaching is normally undesirable. Photobleaching is decreased by limiting the amount of time a fluorophore is exposed to excitation energy.
If supplies and instrumentation permit, the effect of bleaching on a sample can be minimized by employing relatively large volumes in the cuvette and stirring the cuvette during data acquisition. (Many instruments have magnetic cell stirrers included or as optional accessories.)

4. Light Scattering The presence of particles and bubbles may result in the scattering of light; these should be removed from a sample whenever possible. A Raman band from the solvent may also be observed in an emission scan. In biological systems, the most commonly employed solvent is water, and its Raman signal is observed 3600 cm-1 lower in energy than the excitation wavelength. Therefore, for an excitation wavelength of 280 nm, the Raman peak will be observed at 311 nm. When the sample fluorescence is intense, the contribution of the Raman band is negligible. Identifying a peak as scatter rather than fluorescence can be accomplished by changing the excitation wavelength and rescanning the solution. If the peak is fluorescence, the emission intensity of the peak should change, but the wavelength of the emission maximum will not. If it is a scatter peak, the emission maximum will change as the excitation wavelength changes, and the intensity of the peak will remain approximately the same. For example, the water Raman peak will move from 311 to 362 nm if the excitation wavelength is changed from 280 to 320 nm.
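The Raman peak positions quoted above follow from a simple unit conversion: convert the excitation wavelength to wavenumbers, subtract the 3600 cm-1 shift, and convert back. A sketch of the arithmetic:

```python
def raman_peak_nm(excitation_nm, shift_cm=3600.0):
    # Water's Raman band appears ~3600 cm^-1 below the excitation energy.
    # nm -> cm^-1 via 1e7 / nm; subtract the shift; convert back to nm.
    return 1e7 / (1e7 / excitation_nm - shift_cm)

print(round(raman_peak_nm(280)))  # 311
print(round(raman_peak_nm(320)))  # 362
```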

Fluorescence Quenching When the fluorescence intensity of a fluorophore is decreased by its interaction with its environment, the fluorescence is said to be "quenched." Collisional encounters with other molecules in solution that deactivate the excited state produce collisional quenching. Contact between the fluorophore and the quencher is required for collisional quenching to occur; therefore, measurements of collisional quenching can be useful for determining the accessibility of a fluorophore on a biological macromolecule. There are two common types of quenching: dynamic quenching and static quenching. In the former, the quencher collides with the fluorophore in its excited state, dissipating its energy without release of a photon. Dynamic quenching of fluorescence is described by the Stern–Volmer equation:

F0/F = 1 + kq t0 [Q] = 1 + KD [Q]

where F0 and F are the fluorescence intensities in the absence and presence of quencher, respectively. kq is the bimolecular quenching constant, t0 is the lifetime of fluorophore in the absence of quencher, [Q] is the concentration of

the quencher, and KD (= kq t0) is the Stern–Volmer constant for dynamic quenching. A plot of F0/F against [Q] gives KD as the slope. A linear Stern–Volmer plot is usually indicative of a single class of fluorophores, all equally accessible to the quencher. In static quenching, a non-fluorescent complex is formed between the fluorophore and the quencher. The quenching equation then becomes:

F0/F = 1 + Ks [Q]

where Ks is the association constant for the formation of the complex in static quenching. Note that the form of the two equations is the same. In order to determine whether a process is due to static or dynamic quenching, additional experiments must be performed. One simple method is to repeat the quenching experiment at a higher temperature. The slope of the plot of F0/F versus [Q] should increase if the process is dynamic quenching and decrease if the loss of fluorescence is due to a static quenching mechanism.
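Extracting KD from quenching data amounts to a linear fit of F0/F against [Q]. A sketch with synthetic data (the KD value of 20 M^-1 is invented for illustration):

```python
import numpy as np

# Synthetic quenching data obeying ideal Stern-Volmer behavior, KD = 20 M^-1
q = np.array([0.0, 0.02, 0.04, 0.06, 0.08])  # quencher concentration, M
f0_over_f = 1 + 20.0 * q                     # F0/F at each concentration

# Linear fit: slope = KD, intercept should be 1
slope, intercept = np.polyfit(q, f0_over_f, 1)
print(round(slope, 1), round(intercept, 1))  # 20.0 1.0
```

With real data the intercept deviating significantly from 1 is itself a diagnostic that something other than simple dynamic quenching is occurring.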

Environmental Effects on Fluorescence Fluorophores whose electron density distribution changes little upon excitation are barely affected by their environment; that is, the absorption and emission maxima will be similar whether the fluorophore is in an aqueous environment or in an apolar pocket of a protein. These are called environmentally insensitive fluorophores, and they are particularly useful in imaging. By contrast, when the difference in electron density distribution between the ground and excited states is pronounced, the absorption and emission spectra can be strongly affected by the molecule's milieu. These are referred to as environmentally sensitive fluorophores. Fluorophores that are environmentally sensitive can also be used for imaging, but are frequently used as sensors or to monitor ligand–receptor interactions. Many of the environmentally sensitive fluorophores used as biological probes have an excited state that is more polar than the ground state. The effect of changing the polarity of the environment for such a probe is illustrated in the next Fig. An apolar environment will stabilize the ground state but destabilize the excited state. The energy difference between the two states will increase as the polarity of the solvent decreases; thus, the absorption and emission maxima will shift to shorter wavelength (higher energy). A more polar environment will stabilize the excited state and destabilize the ground state.

Fluorescence Resonance Energy Transfer Fluorescence resonance energy transfer (FRET, or simply RET) occurs when a molecule in its excited state transfers energy to another molecule through dipole–dipole interactions, without the appearance of a photon (see Fig.). The transfer is highly dependent on the distance between the donor (D) and acceptor (A) species. The efficiency of energy transfer (E) is described by the following equation:

E = R0^6 / (R0^6 + R^6)

Fig. Jablonski diagram depicting the influence of solvent on the fluorescence of a fluorophore. In this representation, the dipole moment of the fluorophore in the excited state is greater than in the ground state (µE > µG), and hence a polar environment stabilizes the excited state better.

In a more polar environment, then, the absorption and emission maxima will shift to longer wavelength (lower energy). The nature of the environment of a fluorophore in a biological system can therefore be assessed by examining the Stokes' shift of the fluorophore as a function of solvent and comparing those data to the Stokes' shift of the fluorophore in the biological system. The quantum yield of environmentally sensitive fluorophores may also be affected by the environment, although the relationship between environment and quantum yield is less well defined than that between fluorophore environment and Stokes' shift. Many of the environmentally sensitive fluorophores routinely used in biological systems undergo an increase in quantum yield when the polarity of the environment is decreased. Since receptor sites are typically less polar than the medium on the exterior of the binding site, the increase in fluorescence intensity can be used to quantitatively assess ligand–receptor interactions. A few examples of environmentally sensitive fluorophores are shown in the Fig. Some environmentally sensitive probes have structural features that will be affected in a specific way by the environment, and therefore such molecules can be used as sensors.

where R is the distance between the D and A and R0 is the Förster distance, the distance at which energy transfer is 50% efficient. The sharp dependence of RET efficiency on distance means that energy transfer between D and A will be observed only when the pair is within a limited range of distances. The figure below illustrates the relationship between R0 and RET efficiency. A D–A pair with a Förster distance of 20 Å will undergo transfer with 15% efficiency at 27 Å and 85% efficiency at 15 Å; thus, in order for energy transfer to be readily observed, the D–A distance must fall within a window only about 12 Å wide. The dynamic range is larger when the Förster distance is larger: a D–A pair with a Förster distance of 60 Å will undergo transfer with 15% efficiency at 80 Å and 85% efficiency at 45 Å. A good rule of thumb is that the D–A distance (R) should be within a factor of 2 of R0 for reliable distance measurements.
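The efficiencies quoted above can be reproduced directly from E = R0^6/(R0^6 + R^6). A quick check:

```python
def fret_efficiency(r, r0):
    # E = R0^6 / (R0^6 + R^6); transfer is 50% efficient when r == r0
    return r0 ** 6 / (r0 ** 6 + r ** 6)

print(round(fret_efficiency(27, 20), 2))  # 0.14  (~15% at 27 A, R0 = 20 A)
print(round(fret_efficiency(15, 20), 2))  # 0.85  (~85% at 15 A)
print(round(fret_efficiency(20, 20), 2))  # 0.5   (by definition of R0)
```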

Fluorescence resonance energy transfer (FRET or RET) between a donor (D) and acceptor (A). The dipole–dipole interaction of the excited-state electron of D with A results in excitation of A.

The efficiency of resonance energy transfer is determined by the distance between the D–A pair. The figure represents D–A pair Forster distance, R0, of: 20 Å (—), 40 Å (. . .), 60 Å (– –), and 80 Å (– –).

Infrared Spectroscopy Theory An important tool of the organic chemist is infrared spectroscopy, or IR. IR spectra are acquired on a special instrument, called an IR spectrometer. IR is used both to gather information about the structure of a compound and as an analytical tool to assess the purity of a compound. The Electromagnetic Spectrum Infrared refers to that part of the electromagnetic spectrum between the visible and microwave regions. The electromagnetic spectrum refers to the seemingly diverse collection of radiant energy, from cosmic rays to X-rays to visible light to microwaves, each of which can be considered as a wave or particle traveling at the speed of light. These waves differ from each other in wavelength and frequency, as shown in the next figure. Wavelength, λ, is the length of one complete wave cycle; it is often measured in cm (centimeters). Wavelength and frequency are inversely related, and energy is directly proportional to frequency and inversely proportional to wavelength (as expressed by the Planck–Einstein relation, E = hν = hc/λ). The IR region is divided into three regions: the near, mid, and far IR. The mid IR region is of greatest practical use to the organic chemist; this is the region of wavelengths between 3 x 10–4 and 3 x 10–3 cm. Chemists prefer to work with numbers which are easy to write; therefore IR spectra are sometimes reported in µm, although another unit, ν̄ (nu bar, or wavenumber), is currently preferred.

In wave numbers, the mid IR range is 4000–400 cm–1. An increase in wave number corresponds to an increase in energy. As you will learn later, this is a convenient relationship for the organic chemist. Infrared radiation is absorbed by organic molecules and converted into energy of molecular vibration. In IR spectroscopy, an organic molecule is exposed to infrared radiation. When the radiant energy matches the energy of a specific molecular vibration, absorption occurs. A typical IR spectrum is shown below. The wave number, plotted on the X-axis, is proportional to energy; therefore, the highest energy vibrations are on the left. The percent transmittance (%T) is plotted on the Y-axis. An absorption of radiant energy is therefore represented by a “trough” in the curve: zero transmittance corresponds to 100% absorption of light at that wavelength.

Figure: The IR regions of the electromagnetic spectrum. Band intensities can also be expressed as absorbance (A). Absorbance is the logarithm, to the base 10, of the reciprocal of the transmittance: A = log10(1/T). Note how the same spectrum appears when plotted as T and when plotted as A in the figure below.
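The T-to-A conversion is a one-line calculation. A sketch (percent transmittance in, absorbance out):

```python
import math

def absorbance_from_transmittance(pct_t):
    # A = log10(1/T), with T expressed as a fraction of incident light
    t = pct_t / 100.0
    return math.log10(1.0 / t)

print(round(absorbance_from_transmittance(10.0), 2))   # 1.0
print(round(absorbance_from_transmittance(50.0), 3))   # 0.301
```

Note that 100% T corresponds to A = 0, and each additional absorbance unit cuts the transmitted light by another factor of ten.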

Figure: The electromagnetic spectrum.
Figure: The IR spectrum of octane, plotted as transmittance (left) and absorbance (right).

As illustrated in the spectrum of octane, even simple organic molecules give rise to complex IR spectra. Both the complexity and the wavenumbers of the peaks in the spectra give the chemist information about the molecule. The complexity is useful for matching an experimental spectrum with that of a known compound through a peak-by-peak correlation. To facilitate this analysis, compilations of IR spectra are available, the most well known of which are those by Sadtler and Aldrich. The wavenumbers (sometimes referred to as frequencies) at which an organic molecule absorbs radiation give information on the functional groups present in the molecule. Certain groups of atoms absorb energy and therefore give rise to bands at approximately the same frequencies. The chemist analyzes a spectrum with the help of tables which correlate frequencies with functional groups. The theory behind this relationship is discussed in the next section on molecular vibrations. Molecular Vibrations There are two types of molecular vibrations, stretching and bending. It is tempting to think of a molecule as having rigid bond lengths and bond angles, as when you work with molecular model sets. This is not the actual case: bond lengths and angles represent the average positions about which atoms vibrate. A molecule consisting of n atoms has a total of 3n degrees of freedom, corresponding to the Cartesian coordinates of each atom in the molecule. In a nonlinear molecule, 3 of these degrees are rotational and 3 are translational, and the remaining correspond to fundamental vibrations; in a linear molecule, 2 degrees are rotational and 3 are translational. The net number of fundamental vibrations for nonlinear and linear molecules is therefore 3n – 6 and 3n – 5, respectively.

Calculation reveals that a simple molecule such as propane, C3H8, has 27 fundamental vibrations, and therefore, you might predict 27 bands in an IR spectrum! The fundamental vibrations for water, H2O , are given in Figure below. Water, which is nonlinear, has three fundamental vibrations.
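The vibration counts quoted for water, CO2, and propane follow directly from the 3n – 6 and 3n – 5 rules:

```python
def fundamental_vibrations(n_atoms, linear=False):
    # 3n - 5 fundamental vibrations for a linear molecule, 3n - 6 otherwise
    return 3 * n_atoms - (5 if linear else 6)

print(fundamental_vibrations(3))               # water (nonlinear): 3
print(fundamental_vibrations(3, linear=True))  # CO2 (linear): 4
print(fundamental_vibrations(11))              # propane C3H8, 11 atoms: 27
```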

On the other hand, carbon dioxide, CO2, is linear and hence has four fundamental vibrations (see Figure below). The asymmetrical stretch of CO2 gives a strong band in the IR at 2350 cm–1. The two scissoring or bending vibrations are equivalent and therefore have the same frequency; they are said to be degenerate, appearing in an IR spectrum at 666 cm–1. The symmetrical stretch of CO2 is inactive in the IR because this vibration produces no change in the dipole moment of the molecule. In order to be IR active, a vibration must cause a change in the dipole moment of the molecule. (The reason for this involves the mechanism by which the photon transfers its energy to the molecule, which is beyond the scope of this discussion.) Only two IR bands (2350 and 666 cm–1) are seen for carbon dioxide, instead of the four corresponding to the four fundamental vibrations. Carbon dioxide is an example of why one does not always see as many bands as implied by our simple calculation. In the case of CO2, two bands are degenerate, and one vibration does not cause a change in dipole moment. Other reasons why fewer than the theoretical number of IR bands are seen include: an absorption is not in the 4000–400 cm–1 range; an absorption is too weak to be observed; absorptions are too close to each other to be resolved on the instrument. Additional weak bands which are overtones or combinations of fundamental vibrations may also be observed.

The stretching and bending vibrations for the important organic group –CH2 are illustrated in the figure on the next page. (The 3n–6 rule does not apply, since the –CH2 group represents only a portion of a molecule.) Note that bending vibrations occur at lower frequencies than the corresponding stretching vibrations. Both the stretching and bending vibrations of a molecule as illustrated in the above figures can be predicted mathematically, at least to a useful approximation, especially using computers. The mathematics of stretching vibrations is sketched in the following section. An understanding of these vibrations can help even the beginning student correlate high and low frequencies in an IR spectrum.

Figure : Stretching and bending vibrational modes for a CH2 group.

Stretching Vibrations

single bond: 5 x 10^5 dyne/cm
double bond: 10 x 10^5 dyne/cm
triple bond: 15 x 10^5 dyne/cm

As the mass of the atoms increases, the vibration frequency decreases. The following mass values are used:
C, carbon: 12/6.02 x 10^23 g
H, hydrogen: 1/6.02 x 10^23 g

Figure: Energy curve for a vibrating spring (left) and energy constrained to a quantum mechanical model (right).

Vibrational motion is quantized: it must follow the rules of quantum mechanics, and the only transitions which are allowed fit the following formula:

E = (n + 1/2)hν

where ν is the frequency of the vibration and n is the vibrational quantum number (0, 1, 2, 3, . . .). The lowest energy level is E0 = 1/2 hν, and the next highest is E1 = 3/2 hν. According to the selection rule, only transitions to the next energy level are allowed; therefore molecules will absorb an amount of energy equal to 3/2 hν – 1/2 hν, or hν. This rule is not inflexible, and occasionally transitions of 2hν, 3hν, or higher are observed. These correspond to bands called overtones in an IR spectrum; they are of lower intensity than the fundamental vibration bands.

A molecule is not just two atoms joined on a spring, of course. A bond can come apart, and it cannot be compressed beyond a certain point. A molecule is actually an anharmonic oscillator. As the interatomic distance increases, the energy reaches a maximum, as seen in the next figure.

Figure: Energy curve for an anharmonic oscillator (showing the vibrational levels for a vibrating bond).

Note how the energy levels become more closely spaced with increasing interatomic distance in the anharmonic oscillator. The allowed transitions, hν, become smaller in energy. Therefore, overtones can be lower in energy than predicted by the harmonic oscillator theory.

The following formula has been derived from Hooke's law for the case of a diatomic molecule:

ν̄ = (1/2πc) √(f/µ), where µ = (m1 x m2)/(m1 + m2)

where ν̄ is the wavenumber (cm–1), c is the speed of light (cm/s), f is the force constant of the bond (dyne/cm), and m1 and m2 are the masses (g) of the two atoms. This equation shows the relationship of bond strength and atomic mass to the wavenumber at which a molecule will absorb IR radiation: as the force constant increases, the vibrational frequency (wavenumber) also increases. Using the force constant and mass values given above, ν̄ for a C–H bond is calculated to be 3032 cm–1. (Try this calculation!) The actual range for C–H absorptions is 2850–3000 cm–1. The region of an IR spectrum where bond stretching vibrations are seen depends primarily on whether the bonds are single, double, or triple, or are bonds to hydrogen. The following table shows where absorption by single, double, and triple bonds are observed in an IR spectrum. You should try calculating a few of these values to convince yourself that the Hooke's law approximation is a useful one.

Although a useful approximation, the motion of two atoms in a large molecule cannot be isolated from the motion of the rest of the atoms in the molecule. Two oscillating bonds in a molecule can share a common atom; when this happens, the vibrations of the two bonds are coupled. As one bond contracts, the other bond can either contract or expand, as in asymmetrical and symmetrical stretching. In general, when coupling occurs, bands at different frequencies are observed, instead of superimposed bands as you might expect from two identical atoms in a bond vibrating with an identical force constant. In the case of the –CH2 group in Figure 15.6, note there are two bands in the region for C–H bonds: 2926 cm–1 and 2853 cm–1.
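The C–H estimate of 3032 cm–1 can be checked by plugging the tabulated force constant and atomic masses into the Hooke's law expression:

```python
import math

def stretch_wavenumber(f, m1, m2):
    """Hooke's law estimate of a diatomic stretching frequency.
    f in dyne/cm, masses in g; returns the wavenumber in cm^-1."""
    c = 2.998e10                      # speed of light, cm/s
    mu = m1 * m2 / (m1 + m2)          # reduced mass, g
    return math.sqrt(f / mu) / (2 * math.pi * c)

NA = 6.022e23                          # Avogadro's number
# C-H single bond: f = 5e5 dyne/cm, masses of C and H in grams
print(round(stretch_wavenumber(5e5, 12 / NA, 1 / NA)))  # ~3032 cm^-1
```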

General Uses
• Identification of all types of organic and many types of inorganic compounds
• Determination of functional groups in organic materials
• Determination of the molecular composition of surfaces
• Identification of chromatographic effluents
• Quantitative determination of compounds in mixtures
• Nondestructive method
• Determination of molecular conformation (structural isomers) and stereochemistry (geometrical isomers)
• Determination of molecular orientation (polymers and solutions)

Common Applications
• Identification of compounds by matching the spectrum of an unknown compound with a reference spectrum (fingerprinting)
• Identification of functional groups in unknown substances
• Identification of reaction components and kinetic studies of reactions
• Identification of molecular orientation in polymer films
• Detection of molecular impurities or additives present in amounts of 1% and in some cases as low as 0.01%
• Identification of polymers, plastics, and resins
• Analysis of formulations such as insecticides and copolymers

Circular Dichroism Circular dichroism (CD) is an excellent method for studying the conformations adopted by proteins and nucleic acids in solution. Although not able to provide the beautifully detailed residue-specific information available from nuclear magnetic resonance (NMR) and X-ray crystallography, CD measurements have two major advantages: they can be made on small amounts of material in physiological buffers, and they provide one of the best methods for monitoring any structural alterations that might result from changes in environmental conditions, such as pH, temperature, and ionic strength. This chapter describes the important basic steps involved in obtaining reliable CD spectra: careful instrument and sample preparation, the selection of appropriate parameters for data collection, and methods for subsequent data processing. The principal features of protein and nucleic acid CD spectra are then described, and the main applications of CD are discussed. These include: methods for analyzing CD data to estimate the secondary structure composition of proteins, methods for following the unfolding of proteins as a function of temperature or added chemical denaturants, the study of the effects of mutations on protein structure and stability, and methods for studying macromolecule–ligand and macromolecule–macromolecule interactions. CD, the differential absorption of left- and right-handed circularly polarized light, is a spectroscopic property uniquely sensitive to the conformation of molecules, and so has been very widely used in the study of biomolecules. CD often provides important information about the function and conformation of biomolecules that is not directly available from more conventional spectroscopic techniques, such as fluorescence and absorbance. The experimentally measured parameter in CD is the difference in absorbance for left- and right-handed circularly polarized light, ΔA (= AL − AR).
Because CD is an absorption phenomenon, the chromophores that contribute to the CD spectrum are exactly the same as those contributing to a conventional absorption spectrum. In order to show a CD signal, a chromophore must be either inherently chiral (asymmetric) or must be located in an asymmetric environment. It is the interaction between the chromophores in the chiral field of the protein that introduces the perturbations leading to optical activity.

The near-UV CD bands of proteins (310–255 nm) derive from Trp, Tyr, Phe, and Cys and reflect the tertiary, and occasionally quaternary, structure of the protein. Although several amino acid side chains (notably Tyr, Trp, Phe, His, and Met) absorb light strongly in the far-UV region of the spectrum (below 250 nm), the most important contributor here is the peptide bond (amide chromophore), with n→π* and π→π* transitions at ~220 and ~190 nm, respectively. The far-UV CD bands of proteins reflect the secondary structure of the protein (α-helix, β-sheet, β-turn, and unordered content). In the case of nucleic acids and oligonucleotides, the aromatic bases are the principal chromophores, with absorption beginning at around 300 nm and extending far into the vacuum UV region. The electronic transitions of the ether and hydroxyl groups of the sugars begin at 200 nm, but their intensity is much weaker than that of the bases, and the electronic transitions of the phosphate groups begin even further into the vacuum UV. Although CD spectroscopy generally provides only low-resolution structural information, it does have two major advantages. First, it is extremely sensitive to changes in conformation, whatever their origin, and second, an extremely wide range of solvent conditions is accessible to study with relatively small amounts of material. The principal applications of CD spectroscopy in the study of biomolecules are: 1. The estimation of protein secondary structure content from far-UV CD spectra. 2. The detection of conformational changes in proteins and nucleic acids brought about by changes in pH, salt concentration, and added co-solvents, and the structural analysis of recombinant native proteins and their mutants. 3. Monitoring protein or nucleic acid unfolding brought about by changes in temperature or by the addition of chemical denaturants (such as urea and guanidine hydrochloride). 4. Monitoring protein–ligand, protein–nucleic acid, and protein–protein interactions.
5. Studying (in favorable cases) the kinetics of macromolecule–macromolecule and macromolecule–ligand interactions (particularly slow dissociation processes), and the kinetics of protein folding reactions.

Instrumentation CD instruments are commercially available from several sources. A Peltier system for temperature control and thermal ramping is an invaluable accessory, particularly for studies of the thermal unfolding of proteins and nucleic acids. The only other significant requirement is a set of high-quality quartz cuvettes with good far-UV transmission, with path lengths ranging from 0.1 to 10 mm. Cuvettes should always be cleaned immediately after use in order to avoid the buildup of hard-to-remove protein deposits. Instrument Care and Calibration The CD instrument should always be purged with high-purity, oxygen-free nitrogen (generally run at 3–5 L/min) for at least 20 min before starting the light source and throughout the measurements. If oxygen is present, it may be converted to ozone by the far-UV light from the high-intensity arc, and ozone will damage the expensive optical surfaces. Higher nitrogen flow rates will generally be necessary for measurements made at very short wavelengths. The calibration of the instrument should be checked periodically. Although several CD standards are available, the one used most frequently is d-10-camphorsulfonic acid (d-10-CSA). The exact concentration (C) of a solution of d-10-CSA in water (at about 2.5 mM) should be determined from an absorption spectrum (using ε285 = 34.5 M-1 cm-1). This measurement also provides a useful check on the wavelength calibration of the instrument, although this can also simply be done by scanning with a holmium oxide filter in the light path and monitoring the voltage on the instrument's photomultiplier. Sample Preparation All samples should, of course, be of the highest possible purity. For example, the weak near-UV signals of proteins can be swamped by the strong signals from relatively small levels of contaminating nucleic acids. The absorption spectrum of the sample should always be checked to see if and where the absorbance limit of the instrument is going to be exceeded.
In far-UV measurements the absorbance of the sample itself is generally rather small, and the major problems arise from absorption by buffer components, almost all of which will limit far-UV penetration to some extent. The majority of simple buffer components will generally permit CD measurements to below 200 nm.

The CD spectra of membrane proteins are often recorded in detergent-solubilized form in order to avoid artifacts arising from differential light scattering and absorption flattening. The far-UV CD spectra of proteins (260–178 nm) are intense, and relatively small amounts of material are required to record them. Because all peptide bonds contribute to the observed spectrum, the amount of material required (measured in mg/ml) is effectively the same for any protein. The near-UV CD spectra of proteins are generally more than an order of magnitude weaker than the far-UV CD spectra. Recording them therefore requires more concentrated material and/or longer optical path lengths.

Determination of Sample Concentration
Accurate sample concentrations are absolutely essential for the analysis of far-UV CD spectra for secondary structure content and whenever one wishes to make meaningful comparisons between different protein or nucleic acid samples. We routinely determine protein and nucleic acid concentrations using absorption spectroscopy. When the extinction coefficient is known, the concentration can be calculated with considerable accuracy. The absorption spectrum should ideally be recorded with temperature control, and careful attention should be given to correct baseline subtraction, especially when buffers containing reducing agents are being used. Highly scattering samples should always be clarified by low-speed centrifugation or filtration prior to concentration determination. If the spectrum still shows significant light scattering, that is, significant background absorption above 315 nm, a correction must be applied.

Data Collection
Having chosen the appropriate sample concentration and cuvette path length for the measurement, the user will need to select suitable instrument settings. Consideration should also be given to the selection of an appropriate temperature for the measurement.

A. Wavelength Range
Far-UV spectra of proteins should generally be scanned from 260 nm to the lowest attainable wavelength. This low-wavelength limit will depend largely on the composition of the buffer being used. Near-UV spectra are routinely scanned over the range 340–255 nm for proteins and from 340 nm to the lowest attainable wavelength for nucleic acids.

B. Scanning Speed and Time Constant
In the case of analogue instruments, the product of the scanning speed (nm min-1) and the time constant (sec) should be less than 20 nm min-1 sec. If the instrument uses a response time (equal to three time constants), then the product of scanning speed and response time should be less than 60 nm min-1 sec. If significantly higher values are used, there will be potentially serious errors in both the positions and intensities of the observed CD bands. Good S/N ratios can therefore be achieved either by averaging multiple fast scans recorded with a short time constant or by recording a small number of slow scans with a long time constant. The choice is largely one of personal preference.

C. Spectral Bandwidth
The spectral bandwidth is generally set to 1 nm, but it may occasionally be necessary to use lower values in order to resolve fine structure in near-UV spectra of proteins. Increasing the spectral bandwidth will reduce the noise by increasing light throughput, but it should always be 2 nm or less in order to avoid distorting the spectrum.

D. Temperature Control
It is good practice to always record CD spectra with temperature control. This is particularly important for the far-UV CD spectra of proteins, which often show quite pronounced temperature dependence, even outside the range of any thermally induced unfolding of the protein. These small changes in the signal from the folded protein with temperature reflect a true change in conformation and are not simply due to changes in the optical properties of a helix or strand. The changes, which are often linear with temperature, are probably due to fraying of the ends of a helix or to changes in helix–helix interactions.

Data Processing and Spectral Characteristics
A. Data Processing
The first step is to subtract the baseline scan from the sample scan.
All spectra should have been collected with a starting wavelength that gives at least 15–20 nm at the start of the scan where there should be no signal. After baseline subtraction this region should be, and generally is, flat, but the signal may not be zero. The spectrum should then be converted to the desired units. In the case of proteins, the observed CD signal, S, in millidegrees (note: an absorbance difference of 1 ΔA corresponds to 32,980 millidegrees), is generally converted to either the molar CD extinction coefficient (ΔεM) or the mean residue CD extinction coefficient (ΔεMRW) using:

ΔεM = S / (32,980 × CM × L)
ΔεMRW = S × MRW / (32,980 × Cmg/ml × L)

where L is the path length (in cm), CM is the molar concentration, Cmg/ml is the concentration in mg/ml, and MRW is the mean residue weight (molecular weight divided by the number of residues). Although large globular proteins generally have a mean residue weight of approximately 111, the actual value must always be calculated in order to avoid potentially large errors in the calculated intensities. Far-UV intensities are almost invariably calculated on a per-residue basis in order to facilitate comparison between proteins and peptides with different molecular weights. Near-UV CD intensities should generally be reported on a molar rather than a per-residue basis, because only four of the amino acid side chains contribute to the CD signals in this region. In the case of nucleic acids, the CD intensities can be calculated using the base, base-pair, or molar concentrations. CD intensities are also sometimes reported as the molar ellipticity ([θ]M) or mean residue ellipticity ([θ]mrw), which may be calculated directly as:

[θ]M = S / (10 × CM × L)
[θ]mrw = S × MRW / (10 × Cmg/ml × L)

[θ] and Δε values may be interconverted using the relationship [θ] = 3298 Δε.
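The unit conversions above can be sketched in a few lines of Python. This is a minimal illustration: the function names and the worked example (−10 mdeg, 0.2 mg/ml, 1 mm path, MRW 111) are made up for demonstration; the formulas are the standard ones implied by the text (S in millidegrees, 1 ΔA = 32,980 mdeg, [θ] = 3298 Δε).

```python
def mdeg_to_delta_eps_molar(s_mdeg, conc_molar, path_cm):
    """Molar CD extinction coefficient (M^-1 cm^-1) from a signal in millidegrees."""
    return s_mdeg / (32980.0 * conc_molar * path_cm)

def mdeg_to_delta_eps_mrw(s_mdeg, conc_mg_per_ml, path_cm, mrw):
    """Mean residue CD extinction coefficient; MRW = molecular weight / residues."""
    return s_mdeg * mrw / (32980.0 * conc_mg_per_ml * path_cm)

def delta_eps_to_theta(delta_eps):
    """Ellipticity (deg cm^2 dmol^-1) from Delta-epsilon: [theta] = 3298 * Delta-eps."""
    return 3298.0 * delta_eps

# Hypothetical example: -10 mdeg at 222 nm, 0.2 mg/ml protein, 0.1 cm path, MRW = 111
d_eps = mdeg_to_delta_eps_mrw(-10.0, 0.2, 0.1, 111.0)
print(round(d_eps, 2), round(delta_eps_to_theta(d_eps)))  # -1.68 -5550
```

Note that the per-residue Δεmrw of about −1.7 M-1 cm-1 at 222 nm would suggest a modest helix content, consistent with the band intensities quoted later for all-α proteins.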

B. Spectral Characteristics
1. Near-UV Spectra of Proteins
Near-UV CD bands from individual residues in a protein may be either positive or negative and may vary dramatically in intensity, with residues held in rigid, asymmetric environments generally producing the strongest signals. Knowledge of the position and intensity of the CD bands expected for a particular chromophore is helpful in understanding the observed near-UV CD spectrum of a protein, and the principal characteristics of the four chromophores are therefore summarized below.
i). Phenylalanine has sharp fine structure in the range 255–270 nm, with peaks generally observed close to 262 and 268 nm (ΔεM = ±0.3 M-1 cm-1).
ii). Tyrosine generally has a maximum in the range 275–282 nm (ΔεM = ±2.0 M-1 cm-1), possibly with a shoulder some 6 nm to the red.
iii). Tryptophan often shows fine structure above 280 nm in the form of two Lb bands [one at 288–293 nm and one some 7 nm to the blue, with the same sign (ΔεM = ±5.0 M-1 cm-1)] and an La band (around 265 nm) with little fine structure (ΔεM = ±2.5 M-1 cm-1).

iv). Cystine CD begins at long wavelength (>320 nm) and shows one or two broad peaks above 240 nm (ΔεM = ±1.0 M-1 cm-1).
2. Far-UV Spectra of Proteins
Far-UV CD spectra of proteins depend on secondary structure content, and simple inspection of a spectrum will generally reveal information about the structural class of the protein. The characteristic features of the spectra of the different protein classes may be summarized as follows.
i). All-α proteins show an intense negative band with two peaks (at 208 and 222 nm) and a strong positive band (at 191–193 nm). The intensities of these bands reflect the α-helical content. Δεmrw values for a totally helical protein would be of the order of −11 M-1 cm-1 (at 208 and 222 nm) and +21 M-1 cm-1 (at 191–193 nm).
ii). The spectra of regular all-β proteins are significantly weaker than those of all-α proteins. These spectra usually have a negative band (at 210–225 nm, Δεmrw: −1 to −3.5 M-1 cm-1) and a stronger positive band (at 190–200 nm, Δεmrw: +2 to +6 M-1 cm-1).
iii). Unordered peptides and denatured proteins have a strong negative band (at 195–200 nm, Δεmrw: −4 to −8 M-1 cm-1) and a much weaker band (which can be either positive or negative) between 215 and 230 nm (Δεmrw: +0.5 to −2.5 M-1 cm-1).
iv). α+β and α/β proteins almost always have spectra dominated by the α-helical component and therefore often show bands at 222, 208, and 190–195 nm. In some cases there may be a single broad minimum between 210 and 220 nm because of overlapping α-helical and β-sheet contributions.
3. Nucleic Acids
Because the aromatic bases themselves are planar, they do not possess any intrinsic CD signals; it is the presence of the sugars that creates the asymmetry which leads to the small CD signals of the monomeric nucleotides. Likewise, the stacking of the bases in the different polymeric forms results in the close contacts and electronic interactions that produce the intense CD signals of the nucleic acids and oligonucleotides.
CD is sensitive to secondary structure because the precise nature of these interactions determines the shape of the spectrum. The principal conformational forms of the nucleic acids are the A- and B-forms. In neutral aqueous buffers at moderate salt concentrations, DNA is usually in the B-form, while RNA adopts the A-form. These conformations have characteristic CD spectra that depend on base composition and somewhat on sugar type. Variation in spectral shape with base composition is, of course, significantly more important with short oligonucleotides.

APPLICATIONS
1. Secondary Structure Content of Proteins
The estimation of protein secondary structure content from far-UV CD spectra is one of the most widely used applications of CD. Several recent experiments, particularly those using site-directed mutagenesis, have shown that aromatic residues can make surprisingly large contributions to the far-UV CD spectra of some proteins, and this obviously has serious implications for secondary structure estimation. Some useful programs are available for the analysis of far-UV CD data of proteins; these include K2D and CDNN. In addition, three very popular programs (SELCON, CDSSTR, and CONTIN/LL), provided with several reference sets covering different wavelength ranges, are available on the Internet.
2. Detecting Altered Conformation
The stability of any particular secondary structure element in a protein or nucleic acid will depend on several different factors. Changes in the pH and ionic strength of the solution will generally affect interactions between charged side chains in proteins, and this can lead to significant changes in secondary and/or tertiary structure. Such changes are readily studied using CD. Similarly, a change in structure with increasing temperature can be detected by measuring the CD spectrum at the temperature of interest and comparing it with the spectrum measured at room temperature.
3. Changes Accompanying Complex Formation
Protein–protein and protein–nucleic acid interactions are often accompanied by changes in the intrinsic CD of one or both of the components owing to changes in secondary structure and/or the environment of aromatic groups. Such changes may also be caused by the binding of small ligands, such as drugs and metal ions, to macromolecules, and in certain cases the ligand itself may change its optical activity or become optically active.
4. Structural Analysis of Recombinant Native Proteins and Their Mutants
When working with mutant proteins it is, of course, good practice to test for any effect of the mutation on the general conformation of the protein, and here again CD provides a convenient means of doing this with the limited amounts of material that are sometimes available. A significant difference in shape between the far-UV CD spectra of the wild-type and mutant proteins can be an indication that the mutation has produced some change in the secondary structure.

5. CD in the Study of Protein Stability
Unfolding of macromolecules is generally studied by using an optical method (absorbance, fluorescence, or CD) or by using differential scanning calorimetry (DSC). CD is very widely employed in the study of protein stability because unfolding is almost invariably accompanied by major changes in both the near- and far-UV CD spectra. The free energy of unfolding is determined by monitoring an appropriate CD signal as a function of the concentration of a chemical denaturant (urea or guanidine hydrochloride) or as a function of temperature.

6. Determination of Equilibrium Dissociation Constants
As noted in the previous section, the interaction of a macromolecule with another macromolecule or a small molecule is often associated with a change in the CD signal of one or both of the components. We will discuss this with reference to the simplest possible binding model, formation of a 1:1 complex according to the following scheme. The primary requirement for a direct titration, as with any spectroscopic method, is that the sum of the spectra of the components (A and B) should be different from that of the complex (AB) at some wavelength.

Summary
In summary, we hope to have shown that CD remains an excellent technique for studying the structures of proteins and nucleic acids in solution. CD measurements allow one to quickly determine whether a protein is folded and, if so, to characterize its secondary structure content. Such measurements therefore enable one to compare the structures of different mutants of a protein, and of different samples of a protein obtained from different species or using different expression systems. The great sensitivity of CD to changes in conformation also makes it an excellent technique for studying the effects of changes in environmental conditions (such as pH, temperature, and ionic strength), for determining protein stability using either chemical or thermal denaturation studies, and (in appropriate cases) for determining equilibrium dissociation constants. One final remark is worth making. It must be emphasized that, unfortunately, CD measurements are often severely compromised by inappropriate experimental design or by a lack of attention to important aspects of instrument calibration and sample characterization. We hope that this chapter shows how reliable CD data can be obtained, analyzed, and understood.

NMR Spectroscopy
NMR is a versatile tool, and it has applications in a wide variety of subjects in addition to its chemical and biomedical applications, including materials science and quantum computing. In 1939 Rabi et al. first detected the nuclear magnetic resonance phenomenon by applying r.f. energy to a beam of hydrogen molecules in the Stern-Gerlach set-up and observed a measurable deflection of the beam. In 1991 and 2002 the Nobel Prize in Chemistry was awarded to Richard Ernst and Kurt Wüthrich, respectively, for their contributions to NMR spectroscopy.
Before going further into NMR, we should know what N, M, and R stand for.
Nucleus: nuclear spin and nuclear magnetic moments.
Magnetic: the nucleus in a magnetic field, including the nuclear Zeeman effect and the Boltzmann distribution.
Resonance: when the nucleus meets the right magnet and radio wave.

Nuclear spin
Nuclear spin is the total nuclear angular momentum quantum number. It is characterized by a quantum number I, which may be integral, half-integral, or 0. Only nuclei with spin number I ≠ 0 can absorb or emit electromagnetic radiation. The magnetic quantum number mI has the values −I, −I+1, …, +I (e.g., for I = 3/2, mI = −3/2, −1/2, 1/2, 3/2).
1. A nucleus with an even mass A and even charge Z → nuclear spin I is zero. Example: 12C, 16O, 32S → no NMR signal.
2. A nucleus with an even mass A and odd charge Z → integer value of I. Example: 2H, 10B, 14N → NMR detectable.
3. A nucleus with odd mass A → I = n/2, where n is an odd integer. Example: 1H, 13C, 15N, 31P → NMR detectable.

Nuclear magnetic moments
The magnetic moment µ is another important parameter of a nucleus: µ = γI(h/2π), where I is the spin number, h is the Planck constant, and γ is the gyromagnetic ratio (a property of the nucleus).

Precession and the Larmor frequency
The magnetic moment of a spinning nucleus precesses with a characteristic angular frequency called the Larmor frequency ω, which is a function of γ and B0. Remember µ = γI(h/2π)?
Angular momentum: dJ/dt = µ × B0
Larmor frequency: ω = γB0
Linear precession frequency: ν = ω/2π = γB0/2π
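The Larmor relation ν = γB0/2π is easy to check numerically. A minimal Python sketch follows; the 1H gyromagnetic ratio is the standard CODATA value, not a number given in the text, and the field strengths are the common spectrometer fields mentioned later in the chapter.

```python
import math

GAMMA_1H = 2.6752218744e8  # gyromagnetic ratio of 1H, rad s^-1 T^-1 (CODATA value)

def larmor_mhz(gamma_rad_per_s_t, b0_tesla):
    """Linear precession frequency nu = gamma * B0 / (2*pi), returned in MHz."""
    return gamma_rad_per_s_t * b0_tesla / (2.0 * math.pi) / 1.0e6

# A "100 MHz" spectrometer corresponds to 1H in a 2.35 T field
print(round(larmor_mhz(GAMMA_1H, 2.35), 1))  # 100.1
# A "600 MHz" spectrometer corresponds to 1H in a 14.1 T field
print(round(larmor_mhz(GAMMA_1H, 14.1), 1))  # 600.3
```

This is why NMR magnets are conventionally labeled by their 1H precession frequency rather than by the field in tesla.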

Quantum mechanics tells us that, for net absorption of radiation to occur, there must be more particles in the lower-energy state than in the higher one. If the two populations are equal, no net absorption is possible, a condition called saturation. Fortunately, at thermal equilibrium the Boltzmann distribution guarantees a small excess in the lower state:

P(m = −1/2) / P(m = +1/2) = e^(−ΔE/kT)

where P is the fraction of the particle population in each state, T is the absolute temperature, and k is the Boltzmann constant, 1.381 × 10-23 J K-1.

Nuclear Magnetic Resonance Spectrometer
How are the signals generated?
B0: the static field of the magnet.
B1: a small applied r.f. field that supplies the energy.

Example: At 298 K, what fraction of 1H nuclei in a 2.35 T field are in the upper and lower states? (m = −1/2: 0.4999959; m = +1/2: 0.5000041.) The difference in the populations of the two states is only of the order of a few parts per million. However, this difference is sufficient to generate an NMR signal, and anything that increases the population difference will give rise to a more intense NMR signal.
Nuclear Magnetic Resonance
For a particle to absorb a photon of electromagnetic radiation, the particle must first be in some sort of uniform periodic motion. If the particle moves with uniform periodicity (i.e., precesses) at νprecession, it can absorb energy
E = hνprecession
For I = 1/2 nuclei in a B0 field, the energy gap between the two spin states is
ΔE = γhB0/2π
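The population example above can be reproduced from the Boltzmann distribution with ΔE = γhB0/2π. This sketch uses standard CODATA constants (not values given in the text); small rounding differences from the quoted 0.4999959/0.5000041 are expected.

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
H = 6.62607015e-34          # Planck constant, J s
GAMMA_1H = 2.6752218744e8   # 1H gyromagnetic ratio, rad s^-1 T^-1

def spin_half_populations(b0_tesla, temp_k):
    """Fractions of spin-1/2 nuclei in the lower (m = +1/2) and upper (m = -1/2)
    states, from the Boltzmann distribution with dE = gamma*h*B0/(2*pi)."""
    d_e = GAMMA_1H * H * b0_tesla / (2.0 * math.pi)
    ratio = math.exp(-d_e / (K_B * temp_k))  # P(upper) / P(lower)
    p_lower = 1.0 / (1.0 + ratio)
    return p_lower, 1.0 - p_lower

lower, upper = spin_half_populations(2.35, 298.0)
print(round(lower, 7), round(upper, 7))  # 0.500004 0.499996
```

The excess population, roughly (ΔE/kT)/4 per spin, is only a few parts per million, which is why NMR is an intrinsically insensitive technique and why higher fields give stronger signals.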

What happens before irradiation
Before irradiation, the nuclei in both spin states are precessing with their characteristic frequency, but they are completely out of phase, i.e., randomly oriented around the z axis. The net nuclear magnetization M is aligned statically along the z axis (M = Mz, Mxy = 0).

What happens during irradiation
When irradiation begins, all of the individual nuclear magnetic moments become phase coherent, and this phase coherence forces the net magnetization vector M to precess around the z axis. As such, M has a component in the x,y plane, Mxy = M sin α, where α is the tip angle, determined by the power and duration of the electromagnetic irradiation.

The radiation frequency must exactly match the precession frequency

This is the so-called "Nuclear Magnetic RESONANCE"!

What happens after irradiation ceases
After irradiation ceases, not only do the populations of the states revert to a Boltzmann distribution, but the individual nuclear magnetic moments also begin to lose their phase coherence and return to a random arrangement around the z axis. (NMR spectroscopy records this process!) This process is called relaxation. There are two types of relaxation process: T1 (spin-lattice relaxation) and T2 (spin-spin relaxation).
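The text does not give the functional forms of the two relaxation processes; under the standard assumption that both are simple exponentials (Bloch-type behavior), a minimal sketch is:

```python
import math

def mz_recovery(t, t1, m0=1.0):
    """Longitudinal (spin-lattice, T1) recovery toward equilibrium:
    Mz(t) = M0 * (1 - exp(-t/T1))."""
    return m0 * (1.0 - math.exp(-t / t1))

def mxy_decay(t, t2, m0=1.0):
    """Transverse (spin-spin, T2) loss of phase coherence:
    Mxy(t) = M0 * exp(-t/T2)."""
    return m0 * math.exp(-t / t2)

# After one time constant, Mz has recovered ~63% and Mxy has decayed to ~37%
print(round(mz_recovery(1.0, 1.0), 3), round(mxy_decay(1.0, 1.0), 3))  # 0.632 0.368
```

The decaying Mxy component is what the spectrometer actually detects; Fourier transformation of this free-induction decay yields the spectrum.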

B1 (the irradiating field, induced by a current in the coil):
(1) It supplies the energy for the nuclei to absorb while they still precess at ω (or νprecession):
Ephoton = hνphoton = ΔE = γhB0/2π = hνprecession
The spin then jumps to the higher-energy state (from m = +1/2 to m = −1/2).

NMR Parameters
Chemical Shift
The chemical shift of a nucleus is the difference between the resonance frequency of the nucleus and that of a standard, expressed relative to the standard. This quantity is reported in ppm and given the symbol delta:
δ = (ν − νREF) × 10^6 / νREF
In NMR spectroscopy this standard is often tetramethylsilane, Si(CH3)4, abbreviated TMS, or, in biomolecular NMR, 2,2-dimethyl-2-silapentane-5-sulfonate (DSS). The good thing is that, since it is a relative scale, the δ for a sample in a 100 MHz magnet (2.35 T) is the same as that obtained in a 600 MHz magnet (14.1 T).
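The field-independence of δ can be illustrated directly from the definition above. The 200 Hz offset is a made-up example value; only the δ formula comes from the text.

```python
def chemical_shift_ppm(nu_hz, nu_ref_hz):
    """Chemical shift: delta = (nu - nu_ref) * 1e6 / nu_ref, in ppm."""
    return (nu_hz - nu_ref_hz) * 1.0e6 / nu_ref_hz

# A hypothetical peak 200 Hz downfield of TMS on a 100 MHz instrument...
print(chemical_shift_ppm(100.0e6 + 200.0, 100.0e6))   # 2.0
# ...appears 1200 Hz from TMS on a 600 MHz instrument, but at the same delta
print(chemical_shift_ppm(600.0e6 + 1200.0, 600.0e6))  # 2.0
```

The absolute frequency separation grows with B0 (better peak dispersion at high field), while the ppm value stays constant, which is why shifts are always tabulated in ppm.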

The electrons surrounding each nucleus in a molecule serve to shield that nucleus from the applied magnetic field. This shielding effect changes ΔE, and thus a different ν will be observed in the spectrum.

Beff = B0 − Bi, where Bi is induced by the electron cloud
Bi = σB0, where σ is the shielding constant
Beff = (1 − σ)B0
νprecession = (γB0/2π)(1 − σ)
σ = 0 → naked nucleus
σ > 0 → nucleus is shielded by the electron cloud
σ