Optical methods are widely used to determine physical properties because they are usually non-perturbing and do not involve taking samples. The techniques available may be broadly classified in terms of the ratio of the particle size to the wavelength of the probing light, expressed through the particle size parameter x = πD/λ. If the particles are sufficiently large, photography may be used. With care, particles as small as 10 μm (x ≈ 50) can be photographed directly. For smaller particles, however, large magnification is required, which inevitably restricts the depth of field. Use of a microscope enables particles down to about 1 μm to be observed, but here the depth of field is so restricted that only particles sampled onto a glass slide can be measured. This difficulty can be partly overcome by holography, which can reconstruct the distribution of particles in space; the resulting image can be studied in any plane, for example by a digital television system. Reconstruction of images of particles down to about 0.1 μm has been claimed. A number of instruments are available to perform automated image analysis. Both methods need short-pulse light sources (lasers for holography) to freeze the particle motion. Particle velocity and trajectory can be obtained by double-flash methods. (See also Photographic Techniques, Holograms, Holographic Interferometry and Interferometry.)
The above methods have the advantage that the image can be seen and the size and shape obtained directly. However, analysis can be very time-consuming. Difficulties also arise for small particles, and at high concentrations where shadowing can occur. The alternative is to use light scattering methods. These can, in principle, be applied to any particle size depending on the technique employed, and analysis can be automated and rapid. However, the analysis is indirect and relies upon a suitable theory to interpret the observations. Because of its convenience, Mie theory is the most widely used, but it describes scattering by a sphere only. In reality, almost all solid particles are nonspherical, as are many liquid drops. Consideration must therefore be given to the errors that the spherical assumption introduces.
A light wave incident upon a particle can undergo two processes: scattering and absorption. The former can be thought of as being due to reflection, refraction and diffraction. The fraction of the incident light power that is scattered is the scattering efficiency (Qsca). An absorption efficiency (Qabs) is similarly defined, and the total loss due to both processes is the extinction, for which

Qext = Qsca + Qabs

Propagation through a cloud of particles is described by the Beer-Lambert law

τ = I/I0 = exp(−Kext L)   (1)

where L is the path length and

Kext = ∫ Qext(D) A(D) n(D) dD   (2)

is the extinction coefficient. A is the cross-sectional area of the particle (πD²/4 for a sphere) and n(D) is the particle size distribution function defined such that

N = ∫ n(D) dD

is the number of particles per unit volume. τ is the transmissivity. The opposite of this (1 − τ) is the loss of light, or the opacity.
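As a numerical sketch of the Beer-Lambert law, the fragment below discretizes the size distribution into bins; the monodisperse cloud, the value Qext = 2 (the large-particle limit) and the path length are illustrative assumptions, not values from the text.

```python
import math

def transmissivity(q_ext, diameters_m, number_densities_m3, path_m):
    """Beer-Lambert law for a discretized size distribution:
    tau = exp(-K_ext * L), with K_ext the sum over bins of Qext * A * n,
    where A = pi*D**2/4 is the particle cross-section."""
    k_ext = sum(q * (math.pi / 4.0) * d ** 2 * n
                for q, d, n in zip(q_ext, diameters_m, number_densities_m3))
    return math.exp(-k_ext * path_m)

# Illustrative monodisperse cloud: 10 um particles, Qext = 2,
# 1e11 particles per m^3, 0.1 m path.
tau = transmissivity([2.0], [10e-6], [1e11], 0.1)
```

For these numbers Kext·L works out to exactly π/2, so τ = exp(−π/2) ≈ 0.21, i.e. only about a fifth of the beam survives unscattered.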
The principle of measuring extinction is illustrated in Figure 1. A parallel beam of light illuminates the particle cloud. Unscattered light is brought to a focus at the center of a small aperture and passes through to a detector. The aperture transmits scattered light only if the scattering angle is less than θ. The definition of extinction efficiency assumes that all scattered light is excluded from the measurement. As the particles become larger, scattering is increasingly concentrated into very narrow angles around the forward direction. Measurement of transmitted light must therefore be restricted to extremely small angular ranges if true extinction is to be obtained, and a sufficiently small aperture must be chosen to ensure this.
A second difficulty arises in dense particle clouds due to multiple scattering. This is found to occur for τ < 0.9. In this case, the Beer-Lambert law ceases to hold unless, again, all scattered light is excluded. For finite detector apertures, the scattered light collected is added back and must be taken into account unless it is a very small fraction of the losses. For dense clouds, multiple scattering theory becomes necessary to describe the result.
The basic setup for measuring scattering is shown in Figure 2. In this case, the size of the volume within the particle cloud that is seen by the detector is defined by the aperture and the depth of field of the lens along θ. The aperture of the lens determines the angular range over which scattered light is collected. An alternative scheme is to collect parallel light in the fashion of Figure 1, but with the lens and aperture rotated through the angle θ. In this case, the aperture determines the angular range.
Light scattering depends upon the particle size, refractive index, shape and concentration. In principle all these parameters can be measured, though most effort has been directed at the determination of size. The technique to be used depends upon the size. The very smallest particles (x << 1) fall into the Rayleigh scattering regime, where the variation of scattering with angle or wavelength is independent of size. The scattered intensity is proportional to NV², where V is the particle volume. For absorbing particles, the extinction is proportional to NV. The ratio of these yields V, and N can then also be found. For nonabsorbing particles, it is necessary to measure the equivalent refractive index of the particle cloud, which is also proportional to NV. Typical examples of such particles are smokes or the very early stages of nucleation and condensation.
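The algebra of the Rayleigh-regime ratio can be sketched as follows. The proportionality constants c_sca and c_ext are hypothetical placeholders (in practice they come from theory or calibration), and the numerical values are made up purely for a round-trip check.

```python
def rayleigh_recover(scattered, extinction, c_sca=1.0, c_ext=1.0):
    """Rayleigh regime (x << 1), absorbing particles:
    scattered = c_sca * N * V**2 and extinction = c_ext * N * V.
    Dividing the first by the second isolates V; substituting back
    into the extinction relation then gives N."""
    v = (scattered / c_sca) / (extinction / c_ext)  # particle volume V
    n = (extinction / c_ext) / v                    # number density N
    return v, n

# Round trip with made-up values: N = 1e12 m^-3, V = 5e-19 m^3.
N_true, V_true = 1e12, 5e-19
v, n = rayleigh_recover(N_true * V_true ** 2, N_true * V_true)
```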
Another method appropriate to small particles is photon correlation spectroscopy. Particles suspended in a fluid are subjected to collisions with molecules. If they are sufficiently small, this results in Brownian Motion. This random velocity causes a frequency broadening of the incident light due to the Doppler effect. The increase in bandwidth results in a reduction in temporal coherence and a consequent decrease in the autocorrelation when the scattered signal is compared with itself after increasing time delays. The decay is exponential with a rate directly related to the diffusion coefficient, which is inversely proportional to the particle size. In turbulent flow the technique is complicated by the random velocity fluctuations, though it has been successfully applied to soot in flames.
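The chain from measured correlation decay rate to particle size can be sketched as below, assuming the standard single-exponential decay Γ = D_diff·q² of the field autocorrelation and the Stokes-Einstein relation; the temperature, viscosity, wavelength and angle are illustrative values for a dilute aqueous suspension.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def pcs_diameter(decay_rate_hz, wavelength_m, theta_rad,
                 temp_k=293.0, viscosity_pa_s=1.0e-3, medium_index=1.33):
    """Photon correlation spectroscopy: the autocorrelation decays as
    exp(-Gamma*t) with Gamma = D_diff * q**2, where q is the scattering
    vector magnitude.  Stokes-Einstein then gives the hydrodynamic
    diameter d = kT / (3*pi*eta*D_diff)."""
    q = (4.0 * math.pi * medium_index / wavelength_m) * math.sin(theta_rad / 2.0)
    d_diff = decay_rate_hz / q ** 2  # diffusion coefficient, m^2/s
    return K_B * temp_k / (3.0 * math.pi * viscosity_pa_s * d_diff)

# Round trip at 633 nm and 90 degrees: choose d = 100 nm, compute the
# decay rate it implies, then recover d from that rate.
lam, th = 633e-9, math.pi / 2
q = (4.0 * math.pi * 1.33 / lam) * math.sin(th / 2.0)
d_true = 100e-9
gamma = (K_B * 293.0 / (3.0 * math.pi * 1.0e-3 * d_true)) * q ** 2
d_rec = pcs_diameter(gamma, lam, th)
```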
As the particle size increases, so does the ratio of forward scatter to backscatter, and the extinction becomes size-sensitive. For sizes less than about 1 μm, these vary in a simple monotonic way. The ratio of scattered intensity at two symmetric angles (commonly 45° and 135°) can be used; this is the dissymmetry method. The alternative measurement of the variation of extinction with wavelength is the spectral extinction method.
For particles greater than 1 μm, forward scatter techniques are the most widely used, because forward scatter is recognized to be insensitive to shape and refractive index. They are also simple, especially for particles above about 4 μm, where Fraunhofer diffraction can be used to describe the scattering. It is preferable, however, to use Mie theory and to supply a refractive index. The particles are illuminated by a parallel laser beam and the scattered light is detected in the focal plane of a receiving lens, commonly by ring photodiodes, as seen in Figure 3. Unscattered light is brought to a focus at O and light scattered through the angle θ at P. The angular distribution of the scattering is characteristic of the particle size distribution. By using a range of lens focal lengths, it is claimed that a size range from 1 μm to 1800 μm can be covered. The method is normally limited to transmissivities greater than 50% due to multiple scattering, though ways are being developed to deal with this.
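As a rough guide to how the forward lobe narrows with size, the first dark ring of the Fraunhofer (Airy) pattern of a particle of diameter D lies at sin θ = 1.22 λ/D. A minimal sketch, with a 633 nm He-Ne wavelength assumed for illustration:

```python
import math

def airy_first_minimum_deg(diameter_m, wavelength_m=633e-9):
    """Angle of the first dark ring of the Fraunhofer (Airy) diffraction
    pattern: sin(theta) = 1.22 * lambda / D.  Meaningful only for
    x = pi*D/lambda >> 1, i.e. roughly D > 4 um at 633 nm."""
    return math.degrees(math.asin(1.22 * wavelength_m / diameter_m))

# The forward lobe narrows as the particle grows: the first ring of a
# 10 um particle sits near 4.4 degrees, that of a 100 um particle near
# 0.44 degrees, which is why the detection angles must shrink with size.
ring_10um = airy_first_minimum_deg(10e-6)
ring_100um = airy_first_minimum_deg(100e-6)
```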
All the above methods examine clouds of large numbers of particles with a distribution of sizes. These distributions have been recovered by matching the scattering measurements against predicted results for a range of assumed distributions. Typically, two-parameter distributions are used for simplicity, the most common being the Rosin-Rammler and log-normal distributions. However, there is no guarantee that these represent the true situation, especially if the actual distribution is multimodal. Thus, attention is now switching to "model independent" techniques of direct inversion, perhaps the most widely used of which is that of Phillips and Twomey.
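A minimal sketch of a Phillips-Twomey style inversion: the oscillations of a direct least-squares solution of g = K f are damped by a second-difference smoothing penalty. The kernel below is a made-up, well-conditioned matrix used only for a round-trip check; real scattering kernels are far more ill-conditioned and need careful choice of the smoothing parameter.

```python
import numpy as np

def phillips_twomey(K, g, gamma=1e-3):
    """Smoothed direct inversion of g = K f:
    minimize |K f - g|^2 + gamma * |H f|^2, where H is the
    second-difference operator penalizing oscillatory solutions.
    The minimizer solves (K'K + gamma*H'H) f = K' g."""
    m = K.shape[1]
    H = np.zeros((m - 2, m))
    for i in range(m - 2):
        H[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(K.T @ K + gamma * H.T @ H, K.T @ g)

# Illustrative round trip: a smooth "size distribution" and a benign
# exponential-decay kernel (not a physical scattering kernel).
m = 20
x = np.linspace(0.0, 1.0, m)
f_true = np.exp(-((x - 0.5) / 0.15) ** 2)
K = np.array([[0.5 ** abs(i - j) for j in range(m)] for i in range(m)])
f_rec = phillips_twomey(K, K @ f_true, gamma=1e-6)
```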
The number density of the particles is readily obtained if the size distribution is known. This is most easily achieved by measurement of the transmissivity. For large particles (x > 10) it is found that Qext ≈ 2, a constant. It can then be demonstrated from Eqs. (1) and (2) that

fv = −(D32/3L) ln τ

where fv is the particle volume fraction and D32 is the Sauter mean diameter. However, the measurement of transmissivity has the difficulty, already outlined above, that forward scatter must be excluded. Also, if the concentration is low, it becomes difficult to measure a small change in a high intensity. In that circumstance, it is preferable to measure the scattered intensity at some large angle (as in Figure 2), where there is no interference from the incident light. This technique is nephelometry. The intensity is proportional to the particle number concentration multiplied by the volume of the test space; this latter component must be found either from geometry or by calibration. There is the added complication that the collected power depends upon the measurement angle and the aperture of the lens.
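In the large-particle limit Qext ≈ 2, Eqs. (1) and (2) reduce to fv = −(D32/3L) ln τ. A one-line sketch with illustrative values (the transmissivity, Sauter mean diameter and path length are made up):

```python
import math

def volume_fraction(tau, d32_m, path_m):
    """fv = -(D32 / (3 L)) * ln(tau); valid for x > 10 where Qext ~ 2."""
    return -(d32_m / (3.0 * path_m)) * math.log(tau)

# tau = 0.5 measured over a 0.1 m path, D32 = 50 um:
fv = volume_fraction(0.5, 50e-6, 0.1)
```

For these numbers fv ≈ 1.2 × 10⁻⁴, i.e. about 0.01% of the volume is occupied by particles even though the cloud blocks half the light.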
The alternative to making measurements on clouds of particles is to examine them individually, sometimes referred to as particle counting. In the simplest instruments of this type, particle-laden fluid is drawn through an enclosure containing the light beam and collecting optics. The use of mirrors ensures that scattered light is collected over a very large solid angle. The size is obtained from a measurement of the received power. In this way particles can be measured down to 0.1 μm diameter. Ultimately there is a limit due to noise caused by scattering from gas molecules. Because the particles arrive at random, they are governed by Poisson statistics and the error in the count is proportional to the square root of the number. This means that to ensure high accuracy in the size distribution very large numbers of particles have to be counted. The smallest concentration that can be tolerated is determined by the allowable count time and the flow rate. Typical flow rates are of the order of 0.03 m³·min⁻¹. The maximum concentration is governed by the requirement that there should only be one particle at a time in the test space. Typically, this is of the order of 10¹¹ m⁻³.
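The Poisson argument can be made concrete. The flow rate below is the typical figure quoted in the text; the target accuracy and concentration are illustrative assumptions.

```python
import math

def counts_for_accuracy(rel_error):
    """Poisson statistics: sigma_N / N = 1 / sqrt(N), so the number of
    counts needed for a given relative error is N = 1 / rel_error**2."""
    return math.ceil(1.0 / rel_error ** 2)

def count_time_s(n_counts, concentration_m3, flow_m3_per_min):
    """Time to accumulate n_counts at a given number concentration."""
    return n_counts / (concentration_m3 * flow_m3_per_min / 60.0)

# 1 % accuracy needs 1e4 counts; at an assumed 1e8 m^-3 concentration
# and the typical 0.03 m^3/min sample flow, that takes only 0.2 s.
# At much lower concentrations the count time grows in proportion.
n = counts_for_accuracy(0.01)
t = count_time_s(n, 1e8, 0.03)
```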
When the light source is a laser, a difficulty arises from the Gaussian beam profile: a small particle passing through the center of the beam, where the intensity is highest, can produce the same scattered intensity as a large particle passing through the outer parts of the beam. This is the trajectory problem. There are three ways to resolve it. One is to filter the beam so that it has a uniform intensity profile, the so-called top-hat profile. The second is to provide a second, small-diameter laser beam which acts as a pointer to the center of the main beam; the two may be discriminated using differences in either wavelength or polarization. The third method is deconvolution, in which a calibration leads to a matrix inversion procedure to recover the true size distribution.
Particle counting yields a temporal average, whereas cloud methods give a spatial average. In order to convert the temporal average into a spatial one, the particle velocities are needed. If the beam profile has a variation which is known, then the particle may be timed between two points on the profile. The alternative is to use laser Doppler velocimetry (LDV). (See also Anemometers, Laser Doppler.) The principle of this is that two laser beams cross to produce an interference pattern at the test space. A particle crossing this pattern generates a signal which varies sinusoidally in time, the frequency of which is simply related to the velocity. The sense of the velocity may be established by frequency shifting one of the beams so that the interference fringes move through the test space.
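The LDV fringe geometry can be sketched as follows, assuming the standard relation δ = λ/(2 sin κ) for two beams crossing at a full angle 2κ; the wavelength, crossing angle and velocity are illustrative values.

```python
import math

def fringe_spacing_m(wavelength_m, beam_half_angle_rad):
    """Spacing of the interference fringes where two coherent beams cross
    at a full angle 2*kappa: delta = lambda / (2 * sin(kappa))."""
    return wavelength_m / (2.0 * math.sin(beam_half_angle_rad))

def doppler_frequency_hz(velocity_m_s, wavelength_m, beam_half_angle_rad):
    """A particle crossing the fringes at velocity v produces a burst
    whose frequency is simply f = v / delta."""
    return velocity_m_s / fringe_spacing_m(wavelength_m, beam_half_angle_rad)

# 633 nm beams crossing at a 10 degree full angle (half angle 5 degrees);
# a 10 m/s particle then gives a burst in the low-MHz range.
delta = fringe_spacing_m(633e-9, math.radians(5.0))
f = doppler_frequency_hz(10.0, 633e-9, math.radians(5.0))
```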
The general form of the Doppler signal is

I(t) = I1 + I2 cos(ωt + φ)

where I1 is the mean scattered intensity, I2 is the amplitude of the modulation, ω is the Doppler angular frequency and φ is the phase.
The particle size can be established from the mean scattered intensity (I1), as for particle counters. However, methods which rely on the measurement of an absolute quantity have a number of disadvantages, including the need for calibration with particles having known properties. Also, in many systems, access is only available via windows. If these become soiled, their transmissivity is reduced, affecting both the incident and scattered intensities. For these reasons, relative methods are preferable. One such variable is the ratio I2/I1, which is the visibility. While this has been shown to work satisfactorily in a limited range of circumstances, it is normally restricted to particles which are smaller than the fringe spacing because the visibility is not monotonic beyond that. Also, if the particle does not pass exactly through the center of intersection of the laser beams, the visibility is distorted. In dense systems scattering along the paths of the incident beams can result in an imbalance of their intensities, which causes a reduction in visibility not related to particle size.
The third term in the Doppler signal is the phase (φ). It has been found that this increases linearly with particle size over a wide range. The particle can be larger than the fringe spacing, which means that small test volumes can be retained, giving high resolution and enabling high concentrations to be measured. There is one complication due to the fact that a phase change of 2nπ, where n is an integer, cannot be discriminated. This problem has been resolved over a limited range by the use of multiple detectors, each with a small angular offset; typically, three detectors are used. This technique is phase Doppler anemometry (PDA). Currently, it is claimed that the size range 1 μm to 10 mm can be studied at concentrations up to 10¹² m⁻³. (For further information see Anemometers, Laser Doppler.)
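The resolution of the 2nπ ambiguity by detector pairs of different phase sensitivity can be sketched as follows. The sensitivities k (radians of phase per metre of diameter) are hypothetical round numbers; in a real instrument they follow from the detector geometry and refractive index.

```python
import math

def resolve_diameter(phi_fine, k_fine, phi_coarse, k_coarse):
    """Two detector pairs measure phases phi = k*D (mod 2*pi) with
    different sensitivities k.  The coarse pair (small k) gives an
    unambiguous but low-resolution diameter, which fixes the fringe
    order n of the fine pair; the fine pair then gives D precisely."""
    d_coarse = phi_coarse / k_coarse
    n = round((k_fine * d_coarse - phi_fine) / (2.0 * math.pi))
    return (phi_fine + 2.0 * math.pi * n) / k_fine

# Hypothetical sensitivities: the fine pair wraps every 10 um of
# diameter, the coarse pair every 100 um.  Simulate a 37 um particle.
k_fine = 2.0 * math.pi / 10e-6
k_coarse = 2.0 * math.pi / 100e-6
d_true = 37e-6
phi_f = math.fmod(k_fine * d_true, 2.0 * math.pi)
phi_c = math.fmod(k_coarse * d_true, 2.0 * math.pi)
d = resolve_diameter(phi_f, k_fine, phi_c, k_coarse)
```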
Particle counting methods have excellent spatial resolution, but the size distribution function is built up over a period of time. The concentration is established by knowledge of the area of the test space through which the particles flow and the volumetric flow rate. On the other hand, cloud methods yield a size distribution almost immediately, and so are capable of measuring rapid temporal fluctuation. However, they have very poor spatial resolution. Further, by measurement close to the forward direction, cloud methods can be made insensitive to unknown shape and refractive index. Particle counters are sensitive to both of these, and phase Doppler instruments normally incorporate a technique for indicating particle nonsphericity.
The fact that scattering depends upon shape and refractive index implies that it can be used to measure these parameters. There is a growing interest in these areas. Shape can be determined provided that it can be described in terms of simple variables—such as axial ratio. This has been applied, for example, to the sizing and counting of fibres. The full complex refractive index has been measured, and in combustion studies the real refractive index of drops has been used remotely to indicate their temperature.