Computer-Generated Phase-Only Holograms for Real-Time Image Display

Advanced Holography - Metrology and Imaging covers digital holographic microscopy and interferometry, including interferometry in the infrared. Other topics include synthetic imaging, the use of reflective spatial light modulators for writing dynamic holograms and image display using holographic screens. Holography is discussed as a vehicle for artistic expression and the use of software for the acquisition of skills in optics and holography is also presented. Each chapter provides a comprehensive introduction to a specific topic, with a survey of developments to date.


Introduction
Despite their esoteric-sounding title, computer-generated holograms (CGHs) are now commonplace in a wide variety of applications and are a vital component in some surprisingly familiar consumer products. Such devices can be realized as fixed, etched structures (commonly called diffractive optical elements, or DOEs) or displayed on dynamically addressable liquid-crystal on silicon (LCOS) microdisplays. In either case, the principal attraction is the ability of these devices to generate arbitrary complex-valued optical fields from a small, thin device. As discussed in Bernhardt et al. (1991), one CGH is able to perform the entire functionality associated with a multiple-element glass lens design, leading to low-cost, lightweight optical assemblies. Furthermore, the process by which CGHs are made is simple, and lends itself to volume manufacturing through embossing and injection molding techniques; it is even possible to obtain adequate performance from CGHs patterned onto overhead transparencies by a standard office laser printer. It is also possible to fabricate phase-modulating DOEs which do not absorb incident optical illumination, leading to very high efficiencies. Naturally, the flexibility and potential of CGH technology and its ability to implement multiple optical functions and exert control over optical fields, including very near-field evanescent waves as demonstrated by Brauer & Bryngdahl (1997); Elschner & Schmidt (1998); Gupta & Peng (1991); Kowarz (1995); Liu & Kowarz (1998); Madrazo & Nieto-Vesperinas (1997); Schmitz et al. (1996); Thompson et al. (1999), has resulted in huge commercial utilization. For example, CD and DVD drives contain a diffractive optical element to appropriately condition and direct the laser beam onto the disc surface and, with the advent of the DVD disc, simultaneous optical pick-up from multiple disc layers can be achieved by employing an injection molded hybrid refractive-diffractive lens.
In addition to fixed holograms, there exist numerous methods for representing dynamic CGHs on reconfigurable microdisplay devices. A wealth of papers describe dynamic CGHs in applications as diverse as laser beam shaping in Dresel et al. (1996); fanouts and splitters for dynamic routing and multiplexing of laser beams into fibers in telecommunications applications in Bengtsson et al. (1996); Gillet & Sheng (2003); Jean-Numa Gillet (1999); Keller & Gmitro (1993); optical traps for biophotonics in Jesacher et al. (2004); Sinclair et al. (2004); transformations of optical fields in Case et al. (1981); Gu et al. (1986); Roux (1991; 1993); Stuff & Cederquist (1990); self-adjusting CGHs in Lofving (1997); aspheric testing in Tang & Chang (1992); and wavelength discrimination for wavelength-division multiplexing (WDM) applications in Dong et al. (1998; 1996); Layet et al. (1999); Yang et al. (1994). Despite the obvious benefits of computer-generated holography for a wide range of applications, however, it is only recently that CGHs have been demonstrated for the projection and display of two-dimensional video-style images. Indeed, such a method of image projection and display has long been desired, but was never previously realized, due to the high computational complexity of hologram generation and the poor quality of the resultant images.

CGHs for two-dimensional image display
Presenting visual information using a phase-only holographic approach provides a significant efficiency advantage compared to conventional video projection techniques.
Unlike conventional projection displays, which utilize amplitude-modulating microdisplays to selectively block incident optical energy to form the desired image, a holographic display employing an ideal dynamic phase-modulating CGH has a transmission of near unity. Significant efficiency gains could therefore be realized compared to conventional LCOS- or DLP-based projectors, in which the illumination is set at a level sufficient to produce a peak white value regardless of the average pixel level (APL) of the scene. Furthermore, the use of an LCOS display as the dynamic modulating element in a laser-based holographic projector allows the removal of the front polarizer, which wastes an additional 50% of the available light in LED-illuminated systems. The properties of diffraction potentially also allow for projection angles several times greater than are currently possible in conventional LCOS-based systems. Such systems are limited by the necessity for a relatively large projection lens, since the function of the projection lens assembly is to enlarge an already sizeable image; to miniaturize the projection optics, then, the resultant image size must be shrunk concomitantly, or the image becomes subject to severe aberrations which can only be reduced through the use of highly complex and expensive lens systems. A phase-only holographic projector, on the other hand, is able to exert control over the entire optical field, and consequently Buckley et al. (2009) were able to demonstrate that ultra-wide projection angles and novel projection geometries can be achieved without residual optical aberrations. In addition to the compact, simple opto-mechanical assembly of a CGH-based projector, the use of solid-state light sources and LCOS-based light modulators results in a system containing no moving parts.
Fault tolerance of the optical system, which arises because the hologram pattern is decoupled from the desired image by a Fourier relationship, is also an attractive property in applications where display integrity is required and "dead pixels" are unacceptable. Although there are many examples of fixed holograms used for 2D image formation, by Heggarty & Chevalier (1998); Kirk et al. (1992); Lesem & Hirsch (1969); Taghizadeh (1998; 2000); Takaki & Hojo (1999), previous attempts at real-time image projection and display using CGHs have been mainly limited to the 3D case, and the demonstrations by Ito et al. (2005); Ito & Okano (2004); Ito & Shimobaba (2004); Sando et al. (2004) have required significant computational resources. The few attempts at an implementation of real-time 2D holographic projection by, for example, Mok et al. (1986); Papazoglou et al. (2002); Poon et al. (1993) have been affected by critical limitations imposed both by the computational complexity of the hologram generation algorithms required, and by the poor quality of the images produced by the binary holograms they generate. Recently, a great deal of progress has been made in using binary-phase CGHs for projection, as detailed in Buckley (2008a; b; 2011a), and a new approach to hologram generation and display, based on a psychometrically-determined perceptual measure of image quality, has been shown to overcome both of these problems and has resulted in the commercialization of a real-time 2D holographic projector. This chapter brings together, for the first time, recent theoretical and practical advances in realizing 2D and 3D holographic projection systems based on binary phase CGHs.

Motivation
For video display applications, in which the APL is significantly less than the full-white maximum, a projection display based on phase-only computer-generated holography could offer a significant efficiency advantage over amplitude-modulating LCOS displays, since light is not blocked from the desired image pixels. Quantifying this benefit has proven difficult, however, since there is widespread disagreement in the published literature from, for example, Bhatia et al. (2009); Buckley et al. (2008); Lee et al. (2009); Weber (2005) as to an acceptable value to use for the APL. The variation in reported values appears to result from the point at which the APL measurement is defined. In a generalized display, the light intensity produced, L_out, is related to the video signal voltage V by L_out ∝ V^γ, where γ is the display gamma. To obtain a display intensity response L_out which is linear with respect to the video image P, the transmitted video signal V is encoded by an inverse gamma-correction function, so that V ∝ P^{1/γ}. To ensure a uniform perceptual response, the display gamma is typically set to γ = 2.2 to match the approximate lightness sensitivity of a human viewer. In a projection architecture in which the light sources can be modulated in response to average scene or per-pixel brightness, the resultant efficiency benefit is directly related to the mean value of L_out, E[L_out], which is clearly not equal to E[V] when γ ≠ 1. In order to calculate this mean value, and since neither the form of L_out nor V is known a priori, we must derive a statistical model for the pixel distribution pre- and post-gamma. Consider an image pixel P that can take a value in the range [0, p), quantized into n bins of size b so that b = p/n. The number of occurrences of a pixel value within the bin [p_{i−1}, p_i) is k_i, and the total number of occurrences k = Σ_i k_i is fixed.
We define Pr_n(b) to be the probability that a pixel value falls into the b-th bin n times.
Since each pixel has an equal probability of taking a value in the range [0, p), the probability that a pixel is addressed once with a value in the b-th bin is Pr_1(b) = λb, where λ is a constant, and the probability that a pixel is not addressed is Pr_0(b) = 1 − λb. We wish to find Pr(P > p), where P is the smallest pixel value, which is equivalent to finding the probability that a pixel is not addressed with any value in the range (0, p). If we suppose further that the pixel value probabilities in any bin are independent of each other, then we obtain

Pr(P > p) = (1 − λb)^n = (1 − λp/n)^n.

From elementary calculus,

lim_{n→∞} (1 − λp/n)^n = exp(−λp),

so that Pr(P ≤ p) = 1 − exp(−λp), and it follows that the corresponding probability density function (PDF) f_P(p) is

f_P(p) = λ exp(−λp),

where p > 0, thus completing the proof that the pixel values are exponentially distributed. Let the image pixels P be subject to a gamma encoding process with value γ, such that V ∝ P^{1/γ}. If P is exponentially distributed with mean λ, written as P ∼ exp(λ), then Leemis & McQueston (2008) provide the standard result that the transformed variable V = P^{1/γ} is Weibull distributed,

f_V(v) = (β/α)(v/α)^{β−1} exp[−(v/α)^β],

with mean value given by

E[V] = α Γ(1 + 1/β),

where Γ is the Gamma function, α = λ^{1/γ} and β = γ. A number of measurements of V for typical TV content are provided by Jones & Harrison (2007); Lee et al. (2009); Stobbe et al. (2008); Weber (2005), and Jones & Harrison (2007) present curves for experimentally-measured APL data by country, to which a Weibull-distributed variable V ∼ Weibull[α = 0.43, β = 2.2], with a mean of approximately 38%, is an excellent fit.
Since we know from experimentally-measured transmission data and the result above that the average pixel value in a video image is E[P] = λ = α^γ ≈ 16%, we can reasonably state that, due to the nature of a typical video image, the average optical utilization efficiency of a holographic projector should be a factor of six greater than that of an LCOS-based system, excluding all other inefficiencies. When compared to LED-illuminated systems, which require a front polarizer and careful étendue matching, the efficiency gain could approach an order of magnitude, which clearly motivates the investigation of a projection system based on phase-only holography.
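As a numerical check of the figures quoted above, a short script reproduces the 38% APL and the factor-of-six estimate from the Weibull fit parameters α = 0.43 and β = γ = 2.2 (the variable names are illustrative):

```python
import math

# Weibull fit to measured APL data: V ~ Weibull(scale alpha, shape beta)
alpha, beta = 0.43, 2.2

# Mean of a Weibull variable: E[V] = alpha * Gamma(1 + 1/beta)
mean_apl = alpha * math.gamma(1 + 1 / beta)

# Pre-gamma (linear-light) pixel mean: E[P] = lambda = alpha**gamma
mean_p = alpha ** beta

# An amplitude-blocking display wastes (1 - E[P]) of the light on average,
# so the holographic efficiency factor is roughly 1 / E[P]
gain = 1 / mean_p

print(f"E[V] (measured APL fit) = {mean_apl:.2f}")  # ~0.38
print(f"E[P] (linear-light mean) = {mean_p:.2f}")   # ~0.16
print(f"efficiency factor ~ {gain:.1f}")            # ~6.4
```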

2D Fourier holography
A holographic display employs a phase-modulating display element in combination with a coherent light source to form images by diffraction, rather than projection. A Fraunhofer (or far-field) holographic display is based on the result that, when a hologram h(u, v) is illuminated by coherent collimated light of wavelength λ, the complex field F(x, y) formed in the back focal plane of a lens of focal length f, due to Fraunhofer diffraction from the pattern h(u, v), is the two-dimensional spatial Fourier transform of the hologram pattern:

F(x, y) ∝ ∬ h(u, v) exp[−j(2π/λf)(xu + yv)] du dv.  (9)

The relationship of equation 9 is illustrated in Figure 1. If the continuous hologram pattern is then replaced by an element with pixel size Δ, then the image F_xy formed (or replayed) in the focal plane of the lens is related to the pixellated hologram pattern h_uv by the discrete Fourier transform F[·], and is written as

F_xy = F[h_uv].  (10)

Despite the potential advantages of a holographic display, previous attempts at constructing such a system, as detailed by Georgiou et al. (2008); Heggarty & Chevalier (1998); Mok et al. (1986); Papazoglou et al. (2002); Poon et al. (1993), have been unable to overcome two fundamental technical problems. The first difficulty is that of calculating a hologram h_uv such that, when illuminated by coherent light, a high quality image F_xy is formed. It is not possible simply to invert the Fourier transform relationship of equation 10 to obtain the desired hologram h_uv, since the result of this calculation would be fully complex and there is no material in existence that can independently modulate both amplitude A_uv and phase ϕ_uv, where h_uv = A_uv exp(jϕ_uv). Even if such a material became available, the result contains amplitude components which would absorb incident light and reduce system efficiency. A much better approach is to restrict the hologram h_uv to a set of phase-only values exp(jϕ_uv).
Performing this operation on h_uv whilst maintaining high image quality in F_xy is far from trivial, and requires computation to mitigate the effects of the information lost in the quantization. The second problem is one of computation. Until recently, there was no hologram-generation method in existence that could produce images of sufficient quality for video-style content whilst calculating the holograms quickly enough to allow real-time image display. Figure 2 provides a good example: the 512 × 512-pixel hologram h_uv of Figure 2(b) took 10 hours to compute using the standard direct binary search (DBS) algorithm proposed in Dames et al. (1991); Seldowitz et al. (1987), and the resultant reconstruction F_xy, shown in Figure 2(c), is a very poor representation of the desired image T_xy of Figure 2(a). In this section it is shown that the twin barriers to the realization of a real-time, high quality holographic display can be overcome by defining a new, psychometrically-determined measure of image quality that is matched to human visual perception. A method of displaying phase holograms that is optimized with respect to this new measure is presented, and is shown to result in high-quality image reproduction.
Fig. 2. Image |F_xy|² resulting from the reconstruction of a desired image T_xy from a binary phase-only hologram h_uv calculated using the DBS algorithm.
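The Fourier relationship of equation 10, and the loss incurred when the fully complex inverse transform is reduced to phase-only values, can be sketched numerically with NumPy's FFT (the target image and array sizes here are illustrative):

```python
import numpy as np

# Target image T_xy: a simple bright square (illustrative)
P = Q = 64
T = np.zeros((P, Q))
T[24:40, 24:40] = 1.0

g = np.fft.ifft2(T)                       # fully complex "ideal" hologram
h_phase_only = np.exp(1j * np.angle(g))   # keep phase, discard amplitude

F = np.fft.fft2(h_phase_only)             # replayed image (equation 10)

# Residual error of the phase-only replay against the target
err = np.abs(np.abs(F) / np.abs(F).max() - T).mean()
print(f"mean error of phase-only replay: {err:.3f}")  # nonzero: quality loss
```

The nonzero residual illustrates why the quantization step needs careful algorithmic treatment rather than naive inversion.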

An improved method for hologram generation
Conventional hologram generation algorithms, such as DBS and the Gerchberg-Saxton (GS) algorithm described in Gerchberg & Saxton (1972), attempt to exhaustively optimize a hologram to minimize some metric J, which is calculated by comparing the projected image F_xy with a target image T_xy within some region Ω. Typically, such algorithms employ the mean-squared error (MSE) measure

J = (1/|Ω|) Σ_{(x,y)∈Ω} (γ|F_xy| − T_xy)²,  (11)

where γ is a normalizing constant chosen to minimize equation 11. This seems intuitively satisfying, since zero MSE implies a perfect reconstruction. Unfortunately, the metric is particularly insensitive for the low MSE values typically encountered in holographically-generated images.
An effective demonstration of the deficiency of the MSE measure is provided by the following example, in which three images F¹_xy, F²_xy and F³_xy are generated from a target image T_xy. Image F¹_xy is equivalent to T_xy except for a small contrast change, F²_xy contains additive Gaussian noise of variance σ_n², and F³_xy exhibits both the change in contrast and the additive noise. If the change in contrast is given by c and the mean value of the image pixels is μ, then the MSE metric for each of the images can be shown to be

MSE₁ ≈ c²μ²,  MSE₂ = σ_n²,  MSE₃ ≈ c²μ² + σ_n².  (12)

The resultant images are shown in Figure 3, together with MSE figures calculated using equation 12. Although F¹_xy exhibits the highest perceptual image quality, and F³_xy the lowest, the MSE metrics in fact indicate the opposite. It is clear from equation 12 and Figure 3 that the MSE is dominated by the mean image errors caused by the contrast change, rather than by the additive Gaussian noise which corresponds with poor perceptual image quality. In order to determine an improved optimization metric, it is necessary to derive the properties of noise in holographic replay and, in particular, of the noise resulting from the approximation of the complex Fourier transform information by a phase-only hologram.
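The ranking inversion described above is easy to reproduce. In this sketch (the contrast factor and noise level are illustrative choices, not values from the chapter), the contrast-only image scores the worst MSE even though it looks the best:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((128, 128))        # stand-in target image

c, sigma_n = 0.5, 0.1             # contrast factor and noise std (illustrative)
F1 = c * T                                    # contrast change only
F2 = T + rng.normal(0, sigma_n, T.shape)      # additive Gaussian noise only
F3 = c * T + rng.normal(0, sigma_n, T.shape)  # both degradations

mse = lambda F: np.mean((F - T) ** 2)

# MSE ranks the noisy F2 "better" than the merely contrast-shifted F1,
# the opposite of the perceptual ranking
print(mse(F1), mse(F2), mse(F3))
```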

General properties of holographic replay
Without loss of generality, we consider the one-dimensional image F_x, which is the discrete Fourier transform (DFT) of the corresponding P-pixel hologram h_u, and is termed the replay field (RPF):

F_x = F[h_u] = Σ_{u=0}^{P−1} h_u exp(−j2πux/P).  (13)

Since the form of h_u is not known a priori, because it is the result of some unspecified calculation, it can only be assumed that h_u is a random variable with some, as yet unknown, distribution. The quest to determine the properties of holographic replay therefore begins by considering the properties of samples of the DFT of a random sequence h_u, proceeding to determine the properties of the absolute value of the signal and noise components of the image as would be detected by the eye. Consider a P × 1 vector of independent, identically distributed (i.i.d.) random variables h_1, h_2, ..., h_P, each of which has the same arbitrary probability density function (PDF) f_{h_i}(u), i = 1, ..., P. The central limit theorem (CLT) states that the sum of these i.i.d. random variables will tend to the Normal distribution which, remarkably, holds true even if the random variables are not themselves Normally distributed, provided that the sample size P is large enough.
Since equation 13 shows that the DFT of h_u is merely a weighted sum of the h_u, with the weights being complex exponential factors, the samples resulting from the DFT operation will therefore be governed by the CLT. Hence, regardless of the distribution of the samples h_u, the real and imaginary parts of the DFT will be Normally distributed provided that P is large enough. This is an important result in determining the properties of noise occurring in holographic replay. We consider further a P × Q set of complex random samples h_uv which can be written as

h_uv = a_uv + j b_uv,  (14)

where the real and imaginary parts a_uv, b_uv have mean and variance (μ_r, σ_r²) and (μ_i, σ_i²) respectively.
The DFT of these samples, obtained from equation 10, is therefore Normally distributed in real and imaginary parts and, following some lengthy calculations, the samples of the DFT are found to be distributed as

Re{F_xy}, Im{F_xy} ∼ N[0, PQ(σ_r² + σ_i²)/2],  (x, y) ≠ (0, 0),  (16)

where F_xy ∼ N[·] indicates that the samples F_xy are Normally distributed.
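The CLT argument can be checked numerically: even for hologram pixels drawn from a decidedly non-Normal (binary ±1) distribution, the real part of a DFT sample is very nearly Normal. The sizes and the tested frequency bin below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
P = 4096        # hologram pixels
trials = 2000   # independent random holograms

# Binary-valued hologram pixels: as non-Normal as it gets
h = rng.choice([-1.0, 1.0], size=(trials, P))

# One fixed DFT sample per hologram: a weighted sum of all P pixels
k = 7
w = np.exp(-2j * np.pi * k * np.arange(P) / P)
Fk = h @ w / np.sqrt(P)    # normalized so the variance stays O(1)

re = Fk.real
# Excess kurtosis: 0 for a Normal distribution, -2 for the binary pixels
kurt = np.mean((re - re.mean()) ** 4) / np.var(re) ** 2 - 3
print(f"excess kurtosis of Re(F_k): {kurt:+.2f}")  # close to 0
```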

Effect of hologram quantization upon the image
In order to determine the properties of noise in holographic replay, it is necessary to determine the effects of quantizing the hologram h_u. Let the samples e_u represent the error introduced into the hologram by quantization, and E_x = F[e_u] be the resultant noise introduced into the image. It is clear from equation 16 that, regardless of the PDF of the error samples e_u, the image error samples E_x are always Normally distributed in real and imaginary parts; hence, the amplitude of this error is Rayleigh distributed and is given by

f_{|E_x|}(r) = (2r/σ_n²) exp(−r²/σ_n²),  r ≥ 0,  (17)

where the noise energy σ_n² = (σ_r² + σ_i²)PQ depends upon the nature of the quantization performed. It follows that the noise amplitude in any holographically-formed image, regardless of the algorithm used to generate the hologram, will always be Rayleigh distributed and dependent only upon the noise variance σ_n². A holographically generated image will therefore consist of a desired signal component of average value V plus additive noise E_xy due to the hologram quantization, and the samples of the total complex image amplitude F_xy are distributed as

F_xy ∼ N[V, σ_n²/2] + jN[0, σ_n²/2],  (18)

so that the magnitude of the image |F_xy| is Ricean distributed and described by

f_{|F_xy|}(r) = (2r/σ_n²) exp[−(r² + V²)/σ_n²] I₀(2rV/σ_n²),  r ≥ 0,  (19)

with energy

E[|F_xy|²] = V² + σ_n².  (20)

Equation 20 is the crucial result for deriving an improved hologram generation algorithm, because it describes the statistical properties of the images produced by any holographic display. Surprisingly, equation 20 shows that holographically-generated images can be completely characterized by just two parameters, V and σ_n², regardless of the algorithm used to create the hologram. By appropriate manipulation of these parameters, therefore, it is possible to control the noise properties of a holographic display.
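A quick Monte Carlo check of the energy relation E[|F_xy|²] = V² + σ_n², with σ_n² taken as the total (two-quadrature) noise energy and with illustrative values for V and the noise level:

```python
import numpy as np

rng = np.random.default_rng(2)
V = 1.0          # desired signal amplitude (illustrative)
s = 0.3          # per-component noise std, so sigma_n^2 = 2 * s**2
n = 200_000

# Complex image sample: signal V plus complex Gaussian quantization noise
E = rng.normal(0, s, n) + 1j * rng.normal(0, s, n)
amp = np.abs(V + E)          # Ricean-distributed amplitude

sigma_n2 = 2 * s ** 2
print(np.mean(amp ** 2))     # ~ V^2 + sigma_n^2 = 1.18
```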

Perceptual significance of noise in holographic replay
Although equation 20 characterizes the statistics of holographic replay with just two parameters, the relationship between the choice of values for each parameter and the resultant perceived image quality is not clear. Since it is not obvious what values a human viewing an image with Ricean-distributed pixel values would assign to V and σ_n², the only logical way to proceed is to characterize the perceptual degradation of image quality with respect to these parameters by performing a suitably-designed psychometric test on a representative sample of the population, as shown in Cable et al. (2004). The general question of the comparative perceptual importance of artifacts in images is too broad to consider in this chapter. Instead, we deal with the more tractable problem of the relative perceptual significance of noise (that is, the deviation of the RPF from the target) that is inevitably present in any holographic reproduction, and how the statistical parameters of the noise affect perception. The psychometric test was designed to present the subject with 300 sequential stimuli, examples of which are shown in Figure 4. Each stimulus comprises a pair of images, each generated from a set of basis images and presented at random positions with random intensities. To simulate the effect of holographic replay, intensity noise |E_xy|² with mean μ and variance σ_n² was added to each image pair, according to equation 19. A subject was placed in front of a monitor screen displaying such stimuli, which in combination are termed the 'veridical field'. To give the impression of a video image, the stimuli were updated 20 times per second. The subject was then asked to record their subjective interpretation of the most pleasing image or, if no distinction was possible, to record no preference. This is known as the three-alternative forced choice (3AFC) paradigm, described in Greene & d'Oliveira (1999).
To ensure that the subjective choice of image quality was made instinctively, as it would be for a typical video stream, a time limit of four seconds per image was imposed; if the response time of the subject was longer, the result was discarded. The results were analyzed by constructing the scatter plots of Figure 5, indicating, for each sample, the subject's preference, and demarcating each scatter plot into regions where the subject considers the left image to be superior ("left preferred"), the right image to be superior ("right preferred"), or has no preference ("cannot tell"). Boundaries of best fit between these three regions were then constructed using a linear least-squares measure. The results contained in Figures 5(a) and 5(b) clearly show, as indicated by the dominant horizontal component in the boundary lines, that noise variance in holographic replay is far more significant than the noise mean as a determinant of the perceptual significance of noise. This experiment suggests that a hologram generation algorithm which employs an error metric that minimizes the noise variance σ_n² is likely to produce RPFs that are subjectively regarded as far higher in quality than the equivalent RPFs obtained from other metrics, such as MSE minimization, which attempt to minimize the noise energy μ² + σ_n².

Reduction of noise variance
The conclusion that noise variance is an improved determinant of the perceptual significance of noise in a video image suggests a method for perceptual reduction of noise by exploiting temporal averaging. Consider a holographic display which generates N video subframes, each the result of some, as yet unspecified, hologram generation algorithm. The intensity of subframe i is |F_xy^{(i)}|², with mean μ and variance σ², for i = 1, ..., N. If the average of all such subframes is displayed, the time-averaged percept is

V_xy = (1/N) Σ_{i=1}^{N} |F_xy^{(i)}|²,  (21)

and, from the CLT, it follows that the variance of this time-averaged field is given by

Var[V_xy] = σ²/N,  (22)

which is N times smaller than the variance of each individual subframe |F_xy^{(i)}|². Hence, a reduction in the noise variance of a video frame can be achieved by displaying the average of N noisy subframes. This property precisely fulfils the requirements suggested by the analytical and psychometric test results.
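The factor-of-N variance reduction of equation 22 can be demonstrated in a few lines; exponential intensity noise is used here as a stand-in for the subframe noise energy, and N = 24 is an illustrative subframe count:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 24                  # subframes per displayed frame (illustrative)
pixels = 100_000

# Per-subframe intensity samples with mean mu and variance sigma^2
# (exponential noise energy; here mu = sigma = 1)
sub = rng.exponential(scale=1.0, size=(N, pixels))

percept = sub.mean(axis=0)   # time-averaged percept, equation 21
print(np.var(sub[0]), np.var(percept))  # second value ~N times smaller
```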

Practical implementation
A simple method for the creation of the time-averaged percept of equation 21 relies upon the properties of the human visual system. The eye is a square-law detector, because it responds to intensity, and due to its composition has a finite response time. Kelly & van Norren (1977) performed a series of experiments using flickering veridical fields to deduce the temporal frequency characteristics of the eye, which resulted in the frequency response curves of Figure 6. Since the rod and cone structures respond slightly differently to flicker, there are disparities between pure luminous and chromatic (red-green) flicker responses; nevertheless, the frequency response of the human eye can be well approximated by a brick-wall filter function with a temporal bandwidth of approximately 25 Hz. Using this approximation and accounting for the square-law response, the time-averaged intensity percept V_xy is approximately equal to the integral of the veridical field |F_xy|² within a 40 ms time window, and can be expressed as

V_xy ≈ ∫_t^{t+40 ms} |F_xy(τ)|² dτ.  (23)

If a suitable microdisplay is used to show N subframes within this 40 ms period, then the integral of equation 23 becomes the summation of equation 21. Hence, by displaying N frames quickly enough to exploit the limited temporal bandwidth of the eye, a human subject will perceive an image which is the average of N noisy subframes and which is, from equation 22, substantially noise-free.

Fig. 6. Temporal frequency response curves of the eye (adapted from Kelly & van Norren (1977)). The curves show that the eye can be modeled as a brick-wall filter function with temporal bandwidth of approximately 25 Hz.

The one-step phase retrieval (OSPR) algorithm
What remains is to design a hologram-generation algorithm that can generate N sets of holograms both efficiently and in real time. The one-step phase retrieval (OSPR) algorithm is a simple and computationally efficient method for generating n = 1, ..., N holograms, each of which gives rise to an independent noisy reconstruction of the target.

Algorithm 1: The OSPR algorithm for calculating N P × Q-pixel binary phase holograms h_uv^{(n)}, n = 1, ..., N, from a P × Q-pixel target image T_xy:
1. Form G_xy^{(n)} = T_xy exp(jϕ_xy^{(n)}), where ϕ_xy^{(n)} is uniformly distributed in [0, 2π).
2. Compute g_uv^{(n)} = F^{−1}[G_xy^{(n)}].
3. Take the real part, m_uv^{(n)} = Re{g_uv^{(n)}}.
4. Threshold about zero: h_uv^{(n)} = −1 if m_uv^{(n)} < 0, and h_uv^{(n)} = 1 otherwise.
The results of the previous sections allow us to verify that Algorithm 1 generates holograms with the correct properties. Provided that the mean quantization error introduced into the hologram in the last step of Algorithm 1 is zero, which follows from thresholding about zero, the results of equation 16 can be applied to show that the real and imaginary parts of the reconstruction error E_xy, given by the Fourier transform of the quantization error e_uv, F[e_uv], are independently distributed with zero mean and a variance which depends on the second moment of the quantization error only. It follows from equation 17 that the magnitude of the reconstruction error has a Rayleigh distribution, and we can ensure that each of the N holograms generated will exhibit i.i.d. noise in its RPF if each ϕ_xy in step 1 is i.i.d.
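A minimal NumPy sketch of the four OSPR steps as summarized above (the function name, target image, and subframe count are illustrative):

```python
import numpy as np

def ospr(T, N, rng):
    """Minimal sketch of OSPR: N binary-phase holograms for target T."""
    holograms = []
    for _ in range(N):
        phi = rng.uniform(0.0, 2 * np.pi, T.shape)     # step 1: random phases
        g = np.fft.ifft2(T * np.exp(1j * phi))         # step 2: inverse DFT
        m = g.real                                     # step 3: real part
        holograms.append(np.where(m >= 0, 1.0, -1.0))  # step 4: threshold at 0
    return holograms

rng = np.random.default_rng(4)
T = np.zeros((64, 64))
T[20:44, 20:44] = 1.0          # illustrative target

hs = ospr(T, N=8, rng=rng)

# Averaging the N replay intensities mimics the eye's temporal integration
replay = np.mean([np.abs(np.fft.fft2(h)) ** 2 for h in hs], axis=0)
```

Because each hologram is real-valued, the replay also contains the conjugate image; in practice the target is placed so that image and conjugate do not overlap.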

Diffraction efficiency
A simple expression for the maximum diffraction efficiency η of a phase-quantized hologram is provided by Goodman & Silvestri (1970):

η = sinc²(1/M),  where sinc(x) = sin(πx)/(πx),

and M is the number of phase levels uniformly distributed in the interval [0, 2π]. For binary phase devices with M = 2, the maximum achievable diffraction efficiency is just sinc²(1/2) = 4/π² ≈ 41%. This figure can be further refined to account for the desired image pattern, per Wyrowski (1991); the computation algorithm, as shown in Mait (1990); and spatial quantization effects, as covered by Arrizón & Testorf (1997); Wyrowski (1992). It is relatively straightforward to determine the maximum achievable diffraction efficiency when the OSPR algorithm is used to calculate the hologram patterns h_uv. We first consider an unquantized hologram pattern m_uv ∼ N[0, σ²/PQ] resulting from Algorithm 1, which reconstructs to form an image F_xy of sample energy σ², by equation 16 and Parseval's theorem. A quantization operation is applied to the random variable m_uv to obtain a quantized random variable h_uv such that

h_uv = a if m_uv < q,  h_uv = b otherwise,

where q is the quantization threshold. Restricting the analysis to one dimension for a moment, the noise e_u = h_u − m_u introduced by quantizing the hologram pixel values about a point q to reconstruction points a, b can be characterized by its mean E[e_u] and energy E[e_u²]:

E[e_u] = ∫_{−∞}^{q} (a − u) f_m(u) du + ∫_{q}^{∞} (b − u) f_m(u) du,
E[e_u²] = ∫_{−∞}^{q} (a − u)² f_m(u) du + ∫_{q}^{∞} (b − u)² f_m(u) du,  (27)

where f_m(u) is the PDF of the random variable m_u. In the case of binary phase holography, the mean hologram quantization noise E[e_u] is minimized for q = (a + b)/2; since a = −b, it follows that q → 0 and E[e_u] ≃ 0, as previously shown. E_xy = F[e_uv] then represents an upper bound for the RPF noise resulting from hologram quantization, by the triangle inequality. The noise in the RPF due to quantization can therefore be determined by evaluating equations 27 at the appropriate reconstruction points a and b.
For binary phase quantization, the points lie at the centroids of the negative and positive halves of the distribution of m_u respectively, so that

a = −b = −√(2/π) · σ/√(PQ).

Using equations 27, it can further be shown that the RPF noise due to binary phase quantization is

σ_n² = (1 − 2/π) σ² ≈ 0.36 σ²,  (30)

so that the reconstruction error E_xy ∼ N[0, σ_n²] from equation 16, and it follows that σ_n²/σ² ≃ 36% of the reconstruction energy resulting from a binary phase hologram generated by the OSPR algorithm is noise. The diffraction efficiency η, defined as the proportion of usable energy directed into the first-order intensity samples in the presence of an RPF noise energy σ_n², is then

η = (σ² − σ_n²)/2σ² = 1/π,

since the remaining signal energy is split equally between the image and its conjugate in a binary-phase RPF, and is approximately 32% for binary phase holograms generated using OSPR. A similar calculation by Buckley & Wilkinson (2007) results in a figure of 88% for OSPR-generated continuous-phase holograms.
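The 1 − 2/π noise fraction and the resulting ≈32% binary-phase efficiency can be verified by directly quantizing Normally distributed hologram samples to the half-distribution centroids (the array size and energy normalization are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
PQ = 256 * 256
sigma2 = 1.0                                 # total image-sample energy

# Unquantized OSPR hologram samples m ~ N(0, sigma^2 / PQ)
m = rng.normal(0.0, np.sqrt(sigma2 / PQ), PQ)

# Binary-phase reconstruction points: centroids of each half of the Normal PDF
c = np.sqrt(2 / np.pi) * np.sqrt(sigma2 / PQ)
h = np.where(m >= 0, c, -c)

e = h - m                                    # quantization error per pixel
noise_frac = PQ * np.mean(e ** 2) / sigma2   # sigma_n^2 / sigma^2
eta = (1 - noise_frac) / 2                   # half the signal energy goes to
                                             # the conjugate image
print(noise_frac, eta)                       # ~0.363 (= 1 - 2/pi) and ~0.318
```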

Signal-to-noise ratio
Signal-to-noise ratio (SNR) is an important metric for image display applications, since it defines the maximum achievable contrast ratio. In a holographically formed two-dimensional image, the SNR is defined as the ratio of the mean signal energy to the mean noise energy, where the RPF |F_xy|² contains the desired target image T_xy, with mean value V², in addition to the additive noise |E_xy|² caused by hologram quantization.
In an OSPR-based holographic display system, the overall noise field is the time average of N contributions |E_xy^{(i)}|², due to the square-law detection properties of the eye, as shown by equation 21. Each noise energy sample |E_xy^{(i)}|² is exponentially distributed with mean σ_n², and, using a standard result, the sum of N such independent, identically distributed exponential random variables is distributed according to the Gamma distribution

f(s) = s^{N−1} exp(−s/σ_n²) / [Γ(N) σ_n^{2N}],  s ≥ 0,  (33)

where Γ(·) is the complete Gamma function. Since the mean of this Gamma distribution is $N\sigma_n^2$, it follows from equation 21 that the mean noise energy in a veridical field $V_{xy}$ composed of N frames is $E[V_{xy}] = \sigma_n^2$ (34). The noise energy present in the field $V_{xy}$, and hence the SNR, is clearly independent of the number of subframes N and, as for the diffraction efficiency, is determined by the number of hologram phase levels and the choice of hologram computation algorithm.
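The Gamma result, and the independence of the mean from N, can be verified numerically. The sketch below sums N unit-mean exponential noise samples (illustrative parameter values) and checks that the sum's mean matches the Gamma(N, σ²ₙ) prediction, while the eye's 1/N time average keeps mean σ²ₙ for every N.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_n2 = 1.0            # per-subframe mean noise energy (illustrative)
draws = 200_000

means, avg_means = {}, {}
for N in (1, 4, 16):
    # sum of N iid exponential noise energies ~ Gamma(N, sigma_n2)
    total = rng.exponential(sigma_n2, size=(N, draws)).sum(axis=0)
    means[N] = total.mean()            # Gamma mean: N * sigma_n2
    avg_means[N] = (total / N).mean()  # eye's time average: sigma_n2 for any N
    print(f"N={N:2d}  sum mean={means[N]:.3f}  averaged mean={avg_means[N]:.3f}")
```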
If we further define a fractional coverage value η as the ratio of the sum of the normalized pixel values to PQ, so that $1/PQ \leq \eta \leq 1$, then, since the quantization noise is determined by the number of phase levels and is constant, an SNR S can be defined and, since the total RPF energy is $\sigma^2 = V^2/\eta + \sigma_n^2$, equation 30 can be used to evaluate it. Typical video images exhibit η = 0.24, giving S ≃ 7 independent of the number of subframes N, which immediately highlights an obvious limitation of binary phase holographic video projection. There are several algorithmic methods capable of improving the contrast ratio of a holographically generated image, each of which depends upon quantizing the hologram in such a way that noise can be selectively placed in the RPF. The Gerchberg-Saxton (GS) and Direct Binary Search (DBS) hologram generation algorithms can both be modified so that each attempts to minimize the quantization noise energy within a predefined signal window in the RPF, thereby obtaining a local signal-to-noise ratio improvement, as Brauer et al. (1991); Meister & Winfield (2002); Wyrowski et al. (1986); Wyrowski & Bryngdahl (1988) previously found. However, both algorithms generate RPFs of insufficient quality and impose computational burdens that are incompatible with a high-quality real-time holographic display. The error diffusion (ED) algorithm, whilst not capable of generating holograms, was shown by Kirk et al. (1992) to be able to quantize holograms so as to generate RPFs with this useful characteristic. As demonstrated in Buckley (2011b), it is possible to employ a multiple subframe approach, using OSPR to calculate holograms which are subsequently binarized using ED, to combine the benefits of image uniformity and high contrast. By implementing a parallel-processor design, the ED algorithm can be realized at the rate required by a multiple-subframe holographic projection system.
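The quantize-with-error-feedback idea can be sketched as follows. This is a plain scan-order Floyd-Steinberg variant for illustration only (the weights and scan order are assumptions), not the parallel-processor design of Buckley (2011b): each pixel is binarized to a phase of 0 or π, and the complex quantization error is diffused to unprocessed neighbours, pushing the noise toward high spatial frequencies and away from a signal window in the RPF.

```python
import numpy as np

def binarize_error_diffusion(m):
    """Binarize a complex hologram to {+1, -1} (phase 0 or pi), diffusing
    the complex quantization error onto unprocessed neighbours using
    Floyd-Steinberg weights (an illustrative choice)."""
    m = m.astype(complex).copy()
    P, Q = m.shape
    h = np.empty((P, Q))
    weights = [((0, 1), 7 / 16), ((1, -1), 3 / 16),
               ((1, 0), 5 / 16), ((1, 1), 1 / 16)]
    for u in range(P):
        for v in range(Q):
            h[u, v] = 1.0 if m[u, v].real >= 0 else -1.0
            err = m[u, v] - h[u, v]          # complex quantization error
            for (du, dv), w in weights:      # diffuse to unvisited pixels
                if 0 <= u + du < P and 0 <= v + dv < Q:
                    m[u + du, v + dv] += w * err
    return h

rng = np.random.default_rng(3)
m = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
h = binarize_error_diffusion(m)   # values in {+1.0, -1.0}
```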

Choice of microdisplay
The requirements imposed upon the microdisplay used in the holographic projection system described previously are very different from those for the equivalent imaging system in terms of the liquid crystal material, backplane circuitry and pixel geometry. For a microdisplay employed in an imaging system, the pixel size is usually chosen as a compromise between maintaining an adequate aperture ratio and minimizing diffractive effects; in a projection system which exploits diffraction, however, such a restriction does not apply. It is a standard result, given in Hecht (1998), that the diffraction angle θ from a hologram pattern of pixel size Δ, placed behind a lens and illuminated with coherent collimated light of wavelength λ, is given by $\sin\theta = \frac{\lambda}{2\Delta}$ (37). This inverse relationship between diffraction angle and feature size suggests that the pixel size in a microdisplay employed in a holographic projection system should be as small as possible, so that the subsequent lens power required to achieve the desired projection angle is minimized. It is also of paramount importance to provide predictable phase modulation over a wide temperature range and, because multiple subframes are displayed per video frame for the purposes of noise reduction, a high frame rate is required. These requirements can be fulfilled by the use of a ferroelectric Liquid Crystal on Silicon (LCOS) device operating as a phase-only modulator, as shown by O'Brien et al. (1994). In phase modulating mode, a ferroelectric LCOS device with a cell gap providing optical retardation Γ can act as a pixellated binary phase hologram in which each of the pixels can independently impose a phase shift of either 0 or π radians. To achieve phase modulation, the direction of polarization of the incident light (with components $E_x$ and $E_y$) is aligned to bisect the switching angle 2θ of the two LC states.
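The inverse relationship between diffraction angle and pixel size can be made concrete with a short calculation, assuming the common form sin θ = λ/2Δ used above (the pixel pitches chosen here are illustrative):

```python
import numpy as np

def diffraction_half_angle(wavelength, pixel_size):
    """First-order diffraction half-angle in degrees for a pixellated
    hologram, assuming sin(theta) = lambda / (2 * pixel_size)."""
    return np.degrees(np.arcsin(wavelength / (2 * pixel_size)))

# Smaller pixels diffract through larger angles (green illumination).
for delta_um in (13.62, 8.0, 4.0):
    theta = diffraction_half_angle(532e-9, delta_um * 1e-6)
    print(f"pixel {delta_um:6.2f} um -> half-angle {theta:.2f} deg")
```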
The resultant modulated light components $E'_x$ and $E'_y$ in the switched and unswitched states can be written in Jones matrix notation as

$$\begin{pmatrix} E'_x \\ E'_y \end{pmatrix} = \begin{pmatrix} e^{-j\Gamma/2}\cos^2\theta + e^{j\Gamma/2}\sin^2\theta & \pm j \sin\frac{\Gamma}{2}\sin 2\theta \\ \pm j \sin\frac{\Gamma}{2}\sin 2\theta & e^{-j\Gamma/2}\sin^2\theta + e^{j\Gamma/2}\cos^2\theta \end{pmatrix} \begin{pmatrix} E_x \\ E_y \end{pmatrix}$$

It follows that the diffraction efficiency of the FLC material is determined by $\eta_{FLC} = \sin^2\frac{\Gamma}{2}\sin^2 2\theta$, where the optical retardation $\Gamma = \frac{2\pi d \Delta n}{\lambda}$, with d the thickness of the LC layer and Δn its birefringence. It is clear from equations 39 and 40 that, in order to maximize the diffraction efficiency, the LC material switching angle must be $2\theta = \pi/2$ radians; the pixels of a microdisplay employing such a material can then independently impose phase shifts of either 0 or π radians, giving $\phi_{uv} \in \{0, \pi\}$ as required. Development devices with a switching angle of 88° in the smectic C* phase (SmC*) at operating temperature have previously been demonstrated by Heggarty et al. (2004) and have been deployed as phase modulators in optical switching applications. A commonly encountered issue with ferroelectric LC devices is the need to DC balance the device by displaying inverse compensating images, during which time the device cannot be illuminated. When used in an imaging architecture, O'Callaghan et al. (2009) has shown that this requirement can effectively halve the maximum achievable optical efficiency. In a phase-modulating system employing the OSPR algorithm, however, since the holograms can be chosen to be automatically DC balanced and because a hologram and its inverse both result in the same image, the device can be illuminated during both the valid and compensating fields, resulting in the maximum optical efficiency. Figure 7 shows the simplest optical architecture for a holographic projector.
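The binary phase behaviour follows directly from the Jones matrix above. The sketch below (sign conventions are an assumption) evaluates the two FLC states for a half-wave cell with optic axes at ±45°, i.e. a 90° switching angle: the modulated output emerges at unit amplitude with a pure π phase difference between the states.

```python
import numpy as np

def flc_jones(gamma, theta):
    """Jones matrices for the two FLC states, optic axis at +/- theta
    about the bisecting input polarization (sketch of the reconstructed
    matrix above; sign conventions assumed)."""
    mats = []
    for s in (+1.0, -1.0):
        d1 = np.exp(-1j * gamma / 2) * np.cos(theta) ** 2 + np.exp(1j * gamma / 2) * np.sin(theta) ** 2
        d2 = np.exp(-1j * gamma / 2) * np.sin(theta) ** 2 + np.exp(1j * gamma / 2) * np.cos(theta) ** 2
        off = s * 1j * np.sin(gamma / 2) * np.sin(2 * theta)
        mats.append(np.array([[d1, off], [off, d2]]))
    return mats

# Half-wave retardation (gamma = pi), 90-degree switching angle
# (states at +/- pi/4): outputs are [0, +j] and [0, -j], i.e. equal
# amplitude with a pi phase shift between states.
E_in = np.array([1.0, 0.0])
out_plus, out_minus = (J @ E_in for J in flc_jones(np.pi, np.pi / 4))
```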
The lens pair $L_1$ and $L_2$ forms a telescope, which expands the laser beam to fill the entire hologram pattern so that the resultant intensity RPF $V_{xy}$ is not low-pass filtered. The reverse arrangement is used for the lens pair $L_3$ and $L_4$, which acts to demagnify the hologram pixels of size Δ and consequently increase the diffraction angle. The demagnification D is set by the ratio of the focal lengths $f_3$ and $f_4$ and, due to the properties of Fraunhofer diffraction, the images remain in focus at all distances from $L_4$.

Optical system
Hologram Fig. 7. Optical design for a simple holographic projector. Beam-expansion of the laser diode is performed by lenses L 1 and L 2 , and demagnification by lenses L 3 and L 4 .

Color architecture
The realization of a color holographic projector is relatively straightforward. A desired image is converted into sets of holograms and displayed on a phase modulating microdisplay illuminated by red, green and blue coherent light. Color images can be formed either by spatially segmenting the microdisplay as per Ito & Okano (2004), by designing multi-focal CGHs by the method of Makowski et al. (2008), or by employing the frame-sequential color approach of Buckley (2008a;2011a), which has the advantage of maximizing the output resolution. In the latter case, sets of holograms $h^{(i)}_{uv}$, i = 1, ..., N, are calculated and displayed for each wavelength $\lambda_r$, $\lambda_g$ and $\lambda_b$, with the RPF scaled to account for the wavelength-dependent diffraction angle. The subsequent diffraction patterns pass through the simple lens pair $L_3$ and $L_4$, which increases the projection angle by demagnifying the microdisplay pixel size Δ. Since the color planes are displayed and illuminated at the subframe rate, the color-sequential approach does not suffer from color breakup. Figure 8 shows an image obtained from a phase-only holographic projection system, employing the techniques described in this chapter, manufactured by Light Blue Optics Ltd. The projector was imaged onto a commercially available rear-projection screen and the resultant image was captured by a digital camera. The nominal resolution at the projection screen was approximately WVGA (850 × 480 pixels). It is clear that the image exhibits the highly saturated primaries associated with laser-based display systems, but that the speckle artefacts traditionally associated with this method of projection are substantially suppressed. Fig. 8. Projected image at WVGA resolution resulting from a phase-only holographic projection system, employing the techniques described in this chapter, manufactured by Light Blue Optics Ltd. In this instance, $\lambda_r$ = 642 nm, $\lambda_g$ = 532 nm and $\lambda_b$ = 445 nm.
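The wavelength-dependent RPF scaling mentioned above can be sketched numerically. Since the replay-field extent is proportional to λ, the red field reconstructs largest; pre-scaling each colour's target by the inverse wavelength ratio makes the three images coincide. Taking blue as the reference is an assumption for illustration.

```python
# Replay field extent scales as lambda * z / Delta, so red reconstructs
# largest. Scaling each colour's target by lambda_b / lambda_i before
# hologram computation makes the three colour images coincide.
wavelengths = {"red": 642e-9, "green": 532e-9, "blue": 445e-9}
ref = wavelengths["blue"]                       # reference (assumption)
target_scale = {c: ref / lam for c, lam in wavelengths.items()}
for c, s in target_scale.items():
    print(f"{c:5s}: draw target at {s:.3f} of its replay-field extent")
```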
Several methods can be combined in a holographic projector in order to reduce speckle. In particular, the use of multiple holograms per video frame is beneficial to the speckle contrast: since N phase-independent subframes are shown within the eye's integration period, the eye adds N independent speckle patterns on an intensity basis, and the contrast of the low-frequency components of the speckle in the field $V_{xy}$ falls as $N^{-1/2}$. Due to computational and LC switching speed limitations, N cannot be increased indefinitely, so additional methods can be combined to further reduce the speckle contrast. The presence of an intermediate image plane between the lens pair $L_3$ and $L_4$ makes it straightforward to employ optical speckle reduction techniques, as previously presented by Buckley (2008c).
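The $N^{-1/2}$ contrast reduction is easy to demonstrate with fully developed speckle statistics, modeling each subframe's intensity as a unit-mean exponential pattern (an idealization of the actual subframe noise):

```python
import numpy as np

rng = np.random.default_rng(2)
pixels = 100_000

def speckle_contrast(N):
    """Contrast (std/mean) of the average of N independent, fully
    developed speckle intensity patterns."""
    I = rng.exponential(1.0, size=(N, pixels)).mean(axis=0)
    return I.std() / I.mean()

contrast = {N: speckle_contrast(N) for N in (1, 4, 16)}
for N, c in contrast.items():
    print(f"N={N:2d}  contrast={c:.3f}  1/sqrt(N)={N ** -0.5:.3f}")
```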

2D Fresnel holography
Previous sections have been concerned with far-field (or Fraunhofer) diffraction, in which the RPF $F_{xy}$ and hologram $h_{uv}$ are related by the Fourier transform $F_{xy} = \mathcal{F}[h_{uv}]$. In the near-field (or Fresnel) propagation regime, the RPF $F_{xy}$ at a distance z is related to the $P \times Q$-pixel hologram $h_{uv}$ of feature size $\Delta_x \times \Delta_y$ by the Fresnel transform which, using the same notation, can be written as

$$F_{xy} = \exp\left[j\pi\lambda z\left(\frac{x^2}{P^2\Delta_x^2} + \frac{y^2}{Q^2\Delta_y^2}\right)\right] \sum_{u=0}^{P-1}\sum_{v=0}^{Q-1} h_{uv} \exp\left[\frac{j\pi}{\lambda z}\left(u^2\Delta_x^2 + v^2\Delta_y^2\right)\right] \exp\left[-2\pi j\left(\frac{ux}{P} + \frac{vy}{Q}\right)\right]$$

so that the sample pitch in the reconstruction plane is $\frac{\lambda z}{P\Delta_x} \times \frac{\lambda z}{Q\Delta_y}$ and the dimensions of the RPF are $\frac{\lambda z}{\Delta_x} \times \frac{\lambda z}{\Delta_y}$, consistent with the size of the RPF in the Fraunhofer diffraction regime, as per Schnars & Juptner (2002). The Fresnel diffraction geometry is illustrated in Figure 9.
Fig. 9. Fresnel diffraction geometry. When the hologram $h_{uv}$ is illuminated by coherent light, the RPF $F_{xy}$ at a distance z is determined by Fresnel (or near-field) diffraction.
As previously shown by Dorsch et al. (1994); Fetthauer et al. (1995), it is straightforward to generalize hologram generation algorithms to the calculation of Fresnel holograms. Here, the OSPR algorithm 1 is employed, replacing the conventional Fourier transform step by the discrete Fresnel transform of equation 43. The samples of the discrete Fresnel transform are found to be distributed as a zero-mean Gaussian, where $P' = P\Delta_x^2$ and $Q' = Q\Delta_y^2$. The use of Fresnel holography has two beneficial effects. First, the diffracted near-field at the propagation distance z does not contain the conjugate image evident in the Fraunhofer region, for which z is necessarily greater than the far-field distance given by Goodman (1996). Second, because Fresnel propagation is characterized by a distance z, the hologram incorporates lens power determined by the computed hologram itself rather than by the optical system. It therefore follows that the lens count in a holographic projection system could be reduced simply by removing $L_3$ of Figure 7 and instead employing a Fresnel hologram which encodes the equivalent lens power $z = f_3$.
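The Fourier-transform step of OSPR can thus be swapped for a discrete Fresnel transform. The sketch below uses a standard single-FFT chirp-then-transform formulation; the normalization and phase conventions are assumptions and may differ from equation 43.

```python
import numpy as np

def fresnel_transform(h, dx, dy, wavelength, z):
    """Single-FFT discrete Fresnel transform of hologram h with pixel
    pitch (dx, dy) to a plane at distance z (sketch; normalization and
    phase conventions assumed)."""
    P, Q = h.shape
    u = (np.arange(P) - P // 2) * dx
    v = (np.arange(Q) - Q // 2) * dy
    U, V = np.meshgrid(u, v, indexing="ij")
    # pre-multiply by the quadratic (chirp) phase factor, then FFT;
    # output sample pitch is lambda*z/(P*dx) x lambda*z/(Q*dy)
    chirp = np.exp(1j * np.pi * (U ** 2 + V ** 2) / (wavelength * z))
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(h * chirp)))

rng = np.random.default_rng(5)
h = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
F = fresnel_transform(h, 10e-6, 10e-6, 532e-9, 0.1)
```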

Holographic projector with variable demagnification
In the Fourier projection system of Figure 7, the demagnification D of the hologram pixels, and the concomitant enlargement of the RPF, is determined optically and is given by the ratio $D = f_3/f_4$. The use of a Fresnel hologram displayed on a dynamically addressable microdisplay, however, allows a novel variable demagnification effect, since the effective focal length of the Fresnel hologram encoding $L_3$ can be varied simply by recomputing the hologram. An experimental verification of this variable demagnification principle was performed by removing $L_3$ of Figure 7 and setting $f_4$ = 100 mm. Three Fresnel holograms were calculated using OSPR with N = 24 subframes, each designed to form a target image in one of the planes z = 100 mm, z = 200 mm and z = 400 mm. A microdisplay with pixel pitch $\Delta_x = \Delta_y$ = 13.62 μm was used to display the holograms, and the resulting RPFs, which were reconstructed at λ = 532 nm and imaged onto a non-diffusing screen, were captured with a digital camera. The results are shown in Figure 10, and clearly show the RPF scaling caused by the variable demagnification introduced by each of the Fresnel holograms.
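The replay-field scaling observed in this experiment follows directly from the λz/Δ extent formula of the Fresnel section, evaluated here with the experimental parameters above:

```python
wavelength = 532e-9
delta = 13.62e-6          # microdisplay pixel pitch from the experiment
extent_mm = {z_mm: wavelength * (z_mm * 1e-3) / delta * 1e3
             for z_mm in (100, 200, 400)}
for z_mm, e in extent_mm.items():
    # RPF extent lambda * z / Delta doubles with each doubling of z
    print(f"z = {z_mm:3d} mm -> replay field extent ~ {e:.1f} mm")
```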

Lens sharing in a holographic projector
In the previous section, it was shown that lens L 3 of the demagnification lens pair could be removed by encoding the equivalent lens power into the hologram. From inspection of Figure  7, it is clear that the same argument could also be applied to L 2 of the beam-expansion lens pair. It follows that, if f 2 = f 3 , the common lens can be shared between the beam-expansion and demagnification assemblies by encoding it into a Fresnel hologram displayed on a reflective microdisplay. The remaining lens L 4 is typically the smallest in the optical path in order to maximize the demagnification D.
An experimental projector was constructed to demonstrate the lens-sharing concept; the optical configuration is shown in Figure 11(a). A fiber-coupled laser was used to illuminate the same reflective microdisplay, which displayed N = 24 sets of Fresnel holograms, each with z = 100 mm. Since the light from the fiber end was highly divergent, the need for lens $L_1$ was removed. The output lens $L_4$ had a focal length of $f_4$ = 36 mm, giving a demagnification D of approximately three. Polarizers were used to remove the large zero order associated with Fresnel diffraction, but have been omitted from Figure 11(a) for clarity. The angle of reflection was also kept small to avoid defocus aberrations. An example image, projected onto a screen and captured in low-light conditions with a digital camera, is shown in Figure 11(b). The RPF has been optically enlarged by a factor of approximately three due to the demagnification of the hologram pixels. As the architecture is functionally equivalent to the simple holographic projector of Figure 7, the image is in focus at all points and, due to the use of Fresnel holography, the conjugate image is absent.

3D holography
A 3D hologram of an object is simply a recording of the complex electromagnetic field (produced by light scattered by the object) at a plane in front of the object. By Huygens' principle, as detailed in Hecht (1998), if we know the EM field distribution on a plane P, we can propagate Huygens wavelets through space to evaluate the field at any point in 3D space. As such, the plane hologram encodes all the information necessary to view the object from any position and angle in front of the plane and hence is, in theory, optically indistinguishable from the object. In practice, limitations in the pixel resolution of the recording medium restrict the viewing angle which, as in the 2D case, varies inversely with the pixel size Δ, as given by equation 37. Consider a plane, perpendicular to the z-axis, intersecting the origin, and one point source emitter of wavelength λ and amplitude A at position (x, y, z) behind it. The field h(u, v) present at the plane (u, v, z = 0), i.e. the hologram h(u, v), is given by

$$h(u, v) = \frac{zA}{j\lambda r^2} \exp\left(\frac{2\pi j}{\lambda} r\right) \quad \text{with} \quad r = \sqrt{(u-x)^2 + (v-y)^2 + z^2} \qquad (47)$$

If we regard a 3D scene as M sources of amplitude $A_i$ at $(x_i, y_i, z_i)$, the linear nature of EM propagation gives the total field hologram

$$h(u, v) = \sum_{i=1}^{M} \frac{z_i A_i}{j\lambda r_i^2} \exp\left(\frac{2\pi j}{\lambda} r_i\right) \qquad (48)$$

If we wish to sample h(u, v) over the region $u_{min} \leq u \leq u_{max}$, $v_{min} \leq v \leq v_{max}$ to form a $P \times P$ hologram $h_{uv}$, then $r_i$ is evaluated at the correspondingly sampled coordinates (u, v). In Algorithm 2 we present a version of OSPR that generates N full-parallax 3D holograms $h^{(n)}_{uv}$, n = 1, ..., N, for a given set of M point sources $A_i$, i = 1, ..., M, at positions $(x_i, y_i, z_i)$. To test this algorithm, we consider the calculation of N = 8 holograms of resolution 512 × 512 and size 2 mm × 2 mm centered at the origin of the plane P, giving a pixel size of Δ = 4 μm and hence a viewing angle of around 9 degrees under coherent red illumination at λ = 632 nm.
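The point-source summation of equations 47 and 48 can be sketched directly. This is a naive O(MP²) evaluation for illustration (parameters chosen to echo the test scenario above), not the chapter's Algorithm 2 with its phase randomization and quantization steps:

```python
import numpy as np

def point_source_hologram(P, size, sources, wavelength):
    """P x P sampled hologram of M point sources, summing the spherical
    wavefronts of equations 47/48. sources: iterable of (x, y, z, A)
    tuples. Direct evaluation -- illustrative, not optimized."""
    coords = np.linspace(-size / 2, size / 2, P)
    U, V = np.meshgrid(coords, coords, indexing="ij")
    h = np.zeros((P, P), dtype=complex)
    for x, y, z, A in sources:
        r = np.sqrt((U - x) ** 2 + (V - y) ** 2 + z ** 2)
        h += (z * A) / (1j * wavelength * r ** 2) * np.exp(2j * np.pi * r / wavelength)
    return h

# a single on-axis source 1.91 m behind a 2 mm hologram aperture
h = point_source_hologram(128, 2e-3, [(0.0, 0.0, 1.91, 1.0)], 632e-9)
```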
The 3D scene used was a set of M = 944 point sources forming a wireframe cuboid of dimensions 12 cm × 12 cm × 18 cm, located at a distance of 1.91 m from the plane. The simulated RPFs were calculated by propagating Huygens wavelets from each of the N holograms $h^{(i)}_{uv}$ in turn through a pinhole aperture K onto a virtual screen (a plane perpendicular to the line from the center of the cube to the pinhole), and recording the intensity distribution on the screen $|F^{(i)}_{xy}|^2$; as before, the time-averaged percept is $V_{xy} = \frac{1}{N}\sum_{i=1}^{N} |F^{(i)}_{xy}|^2$.

Conclusion
This chapter has described a number of technical innovations that have enabled the realization of a real-time, phase-only holographic projection technology. By defining a new psychometrically determined optimization metric far better suited to human perception than the conventional MSE measure, a method for the generation of phase-only holograms which results in perceptually pleasing video-style images was demonstrated. This allows the realization of phase-only holographic video projection systems which, for the first time, overcome the twin barriers of the computational complexity of calculating diffraction patterns in real time and the poor quality of the resultant images. Using these techniques, the chapter has demonstrated algorithms and methods for the generation of 2D and 3D images in both the Fraunhofer and Fresnel regimes. As shown in simulation and by preliminary experiment, the RPFs produced by the calculated holograms exhibit a substantial improvement in quality, and a reduction in computation time of orders of magnitude, compared with the other techniques demonstrated thus far. A number of commercially available products, notably from Light Blue Optics Inc. (2010), now employ variants of this technology. This chapter contains a thorough description of state-of-the-art holographic projection technology and provides a complete reference to enable an interested reader to simulate, construct and characterize a 2D or 3D phase-only holographic projector.