
Hybrid phase retrieval with chromatic dispersion in single-lens system

Cheng Hong, Liu Yong, Hu Jiajie, Zhang Xiaolong, Deng Huilong, Wei Sui

Cheng Hong, Liu Yong, Hu Jiajie, Zhang Xiaolong, Deng Huilong, Wei Sui. Hybrid phase retrieval with chromatic dispersion in single-lens system[J]. Infrared and Laser Engineering. doi: 10.3788/IRLA20200017


Publication history
  • Online publication date: 2020-05-09


Abstract: Phase retrieval recovers the original phase information from observable intensity information. The transport of intensity equation (TIE), a traditional non-interferometric phase retrieval technique, computes the phase from intensity measurements in at least two closely spaced planes. This usually requires moving the measured object or the camera to acquire the intensity images, which inevitably introduces mechanical errors. A new phase retrieval method is proposed: a hybrid phase retrieval algorithm fused with chromatic dispersion (CD-HPR). By passing light of different wavelengths through a single-lens system, images of the object are formed at the same position, so in-focus and defocus intensity images are obtained without any mechanical motion. The defocus distance is then computed from its relation to wavelength, and the transport of intensity equation yields an initial phase. Finally, an angular spectrum iterative algorithm refines this initial phase. In simulation, the root mean square error between the phase recovered by this method and the original phase is 0.1076; in experiment, the phase of a lens array was recovered with an error of 3.4% relative to the actual parameters, demonstrating the correctness and effectiveness of the method. The method removes the monochromatic-source limitation of traditional approaches and improves computational accuracy.


    • Phase retrieval recovers the original phase information from observed intensity information [1,2], and includes interference techniques [3] and non-interference techniques. Phase retrieval based on the transport of intensity equation (TIE) is an important non-interference technique. Compared with traditional interferometric approaches, it does not rely on the superposition of two highly coherent beams, complicated interference devices, or strict environmental stability [4]. It requires only a minimum of two intensity measurements at closely spaced planes for quantitative phase retrieval. The technique has therefore found applications in microscopy, X-ray phase contrast imaging, diffractive optics and optical measurement [5-7]. The conventional TIE method requires moving the object or the CCD to acquire the intensity images, which inevitably introduces errors into the acquired data. To address this, Cheng Hong proposed using liquid crystal on silicon (LCoS) as a tunable lens for phase retrieval [8]. Different defocus images can be formed by changing the phase distribution loaded on the LCoS, avoiding the errors caused by mechanical movement. However, introducing the LCoS zoom lens affects the acquisition of the intensity images and complicates the experimental operation. Zuo et al. [9] proposed a method called SQPM, in which two laterally separated images from different focal planes are obtained simultaneously in a single camera exposure, yet accurate registration of the two experimental images is necessary.

      The angular spectrum iterative algorithm is another classic non-interference phase retrieval algorithm. It offers high calculation accuracy and strong adaptability, but it converges slowly and depends on the initial solution, so it tends to converge to local minima. Guo Junhu et al. combined TIE with an iterative algorithm to mitigate this shortcoming [10], but their method still needs to move the object or camera mechanically when acquiring the intensity.

      At the same time, the methods mentioned above all operate under monochromatic light. To apply a light source with a multi-wavelength continuous spectrum to phase retrieval, several numerical methods have been designed. Cheng Hong et al. [11] proposed a phase extraction algorithm within a lens-based wave propagation model to relieve the effects of phase modulation; it only needs to calculate the three color components of the phase from the acquired intensity images and then synthesize the final phase. However, this method processes intensity images after sampling and is not an application of multiple wavelengths in the real sense. Besides, Laura Waller et al. [12] proposed using white light and a Bayer color camera to acquire a color image at a fixed position; three monochrome images of the RGB channels are then extracted from the captured color image and processed to obtain the phase. But this technique requires a specific color camera and reduces the resolution of the recovered phase.

      Here, a chromatic dispersion hybrid phase retrieval method (CD-HPR) in a single-lens system is proposed. It ensures that intensity images of the object with different defocus distances can be obtained in the same plane without moving the object or the CCD, avoiding both the error caused by mechanical movement in the conventional method and the resolution reduction problem. At the same time, the application of TIE can be extended to multi-wavelength sources, in particular toward complex phase reconstruction in natural-light scenes in the future. The CD-HPR method is applied to a single-lens system in this paper, and the relevant experimental results are given.

    • TIE is a classic phase retrieval algorithm. Suppose a sample is illuminated by a monochromatic plane wave with constant intensity along the axis $z$; the sample's intensity $I(x,y,{z_0})$ and phase $\varphi (x,y,{z_0})$ of the field at the focal plane satisfy the following equation [13]

      $$ - \nabla \cdot \left[ {I(x,y,{z_0})\nabla \varphi (x,y,{z_0})} \right] = {\left. {k\frac{{\partial I(x,y,z)}}{{\partial z}}} \right|_{z = {z_0}}}$$ (1)

      where $k$ is the wave number, $k = 2\pi /\lambda $, $\lambda $ is the wavelength, and ${z_0}$ denotes the distance propagated along the optical axis $z$. $\partial I(x,y,z)/\partial z$ is the intensity derivative, which may be estimated by the finite difference between the in-focus intensity image and the defocus intensity image [14], as shown in Fig.1.

      Figure 1.  Intensity derivative diagram.

      $${\left. {\frac{{\partial I(x,y,z)}}{{\partial z}}} \right|_{z = {z_0}}} \approx \frac{{I(x,y,{z_0}) - I(x,y,{z_0} - \Delta z)}}{{\Delta z}}$$ (2)

      where $\Delta z$ is the defocus distance. Substituting (2) into (1) and solving by Fourier methods yields the phase [15]:

      $$\varphi (x,y,{z_0}) = k{\Im ^{ - 1}}\left[ {{{\left[ {2{\pi ^2}(f_x^2 + f_y^2)} \right]}^{ - 1}}\Im \left[ {{{\partial I(x,y,z)} / {\partial z}}} \right]} \right]$$ (3)

      where $\Im $ and ${\Im ^{ - 1}}$ denote the Fourier and inverse Fourier transforms, respectively, and ${f_x},{f_y}$ are the spatial frequencies in the Fourier domain. $\varphi (x,y,{z_0})$ is equal to the product of the optical path length (OPL) through the sample and the wavenumber of the illumination. For a sample consisting of multiple materials with different refractive indices, the physical thickness $h(x,y,{z_0})$ of the sample is related to the OPL by

      $$h(x,y,{z_0}) = \frac{\lambda }{{{n_o} - {n_m}}}\frac{{\varphi (x,y,{z_0})}}{{2\pi }}$$ (4)

      where ${n_o}$ is the refractive index of the object to be measured and ${n_m}$ is the refractive index of the surrounding medium. The medium is usually air, so ${n_m} \approx 1$.
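As a sketch of how Eqs. (2)-(3) can be implemented, the following Python/NumPy function inverts the TIE by Fourier methods. The function name `tie_solve` is illustrative, and the sketch assumes an approximately uniform unit in-focus intensity so that Eq. (3) applies directly.

```python
import numpy as np

def tie_solve(I_focus, I_defocus, dz, wavelength, pixel_size):
    """Fourier-domain TIE solver following Eqs. (2)-(3).

    Assumes an approximately uniform (unit) in-focus intensity.
    """
    k = 2 * np.pi / wavelength                # wave number
    dIdz = (I_focus - I_defocus) / dz         # finite difference, Eq. (2)

    ny, nx = I_focus.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)     # spatial frequencies
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    denom = 2 * np.pi**2 * (FX**2 + FY**2)    # kernel of Eq. (3)
    denom[0, 0] = np.inf                      # suppress the undefined DC term

    phi = k * np.fft.ifft2(np.fft.fft2(dIdz) / denom)
    return np.real(phi)
```

In practice the two intensity arrays would be the in-focus and defocus images captured under the two illumination wavelengths.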

      In addition, angular spectrum iteration is another classic phase retrieval method; its principle is shown in Fig.2. Here, ${U_1} = \sqrt {{I_1}} \exp [j\phi ]$ and ${U_2} = \sqrt {{I_2}} \exp [j{\phi '}]$ are the complex amplitudes at two different positions along the propagation direction of the light field, and ${I_1}$ and ${I_2}$ are the intensities captured at those two positions, respectively. First, a guessed value is taken as the phase of ${U_1}$, and this phase is combined with the measured amplitude $\sqrt{{I_1}}$ into the complex amplitude $U'_1$. Angular spectrum propagation of $U'_1$ then gives the complex amplitude ${U_2}$, whose amplitude is replaced by the measured amplitude $\sqrt{{I_2}}$ to obtain $U'_2$. Finally, ${U_1}$ is obtained by angular spectrum propagation of $U'_2$ back to the first plane. This operation is repeated until the phase converges or a preset number of iterations is reached. In this paper, ${I_1}$ and ${I_2}$ are the intensities of the focal plane under green and blue illumination, respectively.

      Figure 2.  Schematic diagram of the angular spectrum iterative algorithm.
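The iteration described above can be sketched in Python/NumPy as follows; the helper names `angular_spectrum` and `as_iterate` are illustrative assumptions, and evanescent frequency components are simply truncated in this sketch.

```python
import numpy as np

def angular_spectrum(U, dz, wavelength, pixel_size):
    """Propagate the complex field U over a distance dz (angular spectrum method)."""
    ny, nx = U.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    mask = arg > 0                                   # drop evanescent waves
    kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    H = np.exp(1j * kz * dz) * mask                  # transfer function
    return np.fft.ifft2(np.fft.fft2(U) * H)

def as_iterate(I1, I2, dz, wavelength, pixel_size, phi0=None, n_iter=50):
    """Iterate between two measured intensity planes, starting from phi0."""
    phi = np.zeros_like(I1, dtype=float) if phi0 is None else phi0
    for _ in range(n_iter):
        U1 = np.sqrt(I1) * np.exp(1j * phi)          # enforce plane-1 amplitude
        U2 = angular_spectrum(U1, dz, wavelength, pixel_size)
        U2 = np.sqrt(I2) * np.exp(1j * np.angle(U2)) # enforce plane-2 amplitude
        U1 = angular_spectrum(U2, -dz, wavelength, pixel_size)
        phi = np.angle(U1)                           # updated phase estimate
    return phi
```

Passing the TIE result as `phi0` corresponds to the hybrid scheme used later in the paper.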

      In this paper, due to dispersion, the defocus distance $\Delta z$ in Eq. (2) is large, so the finite difference used to approximate the intensity derivative causes a large error. The angular spectrum iterative algorithm, in turn, depends strongly on the initial value; if the initial value is not chosen properly, it easily converges to a local minimum. To solve these problems, a hybrid phase retrieval algorithm is used: the recovery result of TIE is taken as the initial value of the angular spectrum iteration, which then iterates between the in-focus plane and the defocus plane until the final phase is obtained by convergence.

      In phase retrieval methods based on TIE and GS iteration, accurate acquisition of the intensity images is very important [16]. However, this acquisition is usually realized by translating the CCD or the object manually or mechanically, which inevitably leads to slow speed and low accuracy. The CD-HPR method proposed in this paper addresses this problem well and is mainly applied to a single-lens system to retrieve phase.

    • The imaging principle of obtaining intensity difference by chromatic dispersion in a single-lens system is shown in Fig.3. The object is placed in front of the lens, ${d_0}$ is the distance between the object and the lens, ${d_i}$ is the distance between the image plane (CCD plane) and the lens. $U({x_0},{y_0})$ and $U({x_i},{y_i})$ are the complex amplitudes of the object plane and the image plane, respectively, which satisfy the following relation

      Figure 3.  Schematic diagram of obtaining intensity difference by chromatic dispersion application in a single lens system.

      $$U({x_i},{y_i}) = \iint {h({x_i},{y_i};{x_0},{y_0})}U({x_0},{y_0})d{x_0}d{y_0}$$ (5)

      where $h({x_i},{y_i};{x_0},{y_0})$ is the amplitude point-spread function.

      In the case of white light illumination, the preset green light with center wavelength ${\lambda _G}$, obtained by a green filter, vertically irradiates the object; the measured intensity at the image plane is

      $${I_G}(x,y) = {\left| {{U_G}({x_i},{y_i})} \right|^2}$$ (6)

      Then the blue light with center wavelength ${\lambda _B}$ is obtained by a blue filter; the measured intensity at the image plane is

      $${I_B}(x,y) = {\left| {{U_B}({x_i},{y_i})} \right|^2}$$ (7)

      ${I_G}(x,y)$ and ${I_B}(x,y)$ are the intensities measured at the same image plane under the two wavelengths. However, due to the chromatic dispersion of the imaging system, at the same location the intensity image obtained with green light is in focus, while the intensity image obtained with blue light is defocused with respect to green light. Suppose the wavelength of the green light is ${\lambda _G}$, its refractive index in the lens is ${n_G}$, and the corresponding focal length is ${f_G}$; likewise the blue light has wavelength ${\lambda _B}$, refractive index ${n_B}$, and focal length ${f_B}$. By the lensmaker's formula

      $$\frac{1}{{{f_G}}} = ({n_G} - 1)\left(\frac{1}{{{r_1}}} - \frac{1}{{{r_2}}}\right)$$ (8)
      $$\frac{1}{{{f_B}}} = ({n_B} - 1)\left(\frac{1}{{{r_1}}} - \frac{1}{{{r_2}}}\right)$$ (9)

      where ${r_1}$ and ${r_2}$ are the radii of curvature of the left and right surfaces of the lens, respectively. Because ${n_G} \ne {n_B}$, we have ${f_G} \ne {f_B}$, so the green and blue images form at different positions, producing a defocus distance $\Delta z$. Below we derive the expression for $\Delta z$ from the imaging formula, which is

      $$\frac{1}{u} + \frac{1}{v} = \frac{1}{f}$$ (10)

      where $u$, $v$ and $f$ are the object distance, image distance and focal length, respectively. Consider green light first: let the object distance under green illumination be ${u_G}$ and the image distance be ${v_G}$; the imaging formula gives ${v_G} = \dfrac{{{f_G}{u_G}}}{{{u_G} - {f_G}}}$. When the green light is changed to blue, the object distance does not change, i.e. ${u_B}={u_G}$, and the imaging formula gives the image distance ${v_B} = \dfrac{{{f_B}{u_G}}}{{{u_G} - {f_B}}}$ under blue light. The defocus distance $\Delta z$ is the difference between the image distances under green and blue illumination:

      $$\Delta z = {v_G} - {v_B} = {u_G}\left( {\frac{{{f_G}}}{{{u_G} - {f_G}}} - \frac{{{f_B}}}{{{u_G} - {f_B}}}} \right)$$ (11)

      Substituting (8) and (9) into (11) gives

      $$\Delta z = {u_G}\left( {\frac{1}{{{u_G}({n_G} - 1)\left(\frac{1}{{{r_1}}} - \frac{1}{{{r_2}}}\right) - 1}} - \frac{1}{{{u_G}({n_B} - 1)\left(\frac{1}{{{r_1}}} - \frac{1}{{{r_2}}}\right) - 1}}} \right)$$ (12)

      In particular, when ${u_G} = 2{f_G}$, i.e. the object is placed at twice the focal length on the left side of the single-lens system,

      $$\Delta z = 2{f_G} - \frac{{2{f_G}{f_B}}}{{2{f_G} - {f_B}}} = 4{f_G}\left(\frac{{{f_G} - {f_B}}}{{2{f_G} - {f_B}}}\right)$$ (13)
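As a quick numerical check of Eqs. (10)-(13), the snippet below evaluates the defocus distance for the parameter values used later in the simulation (f = 150 mm at the 2f geometry, n_G = 1.527, n_B = 1.519). The thin symmetric-lens relation f_B = f_G(n_G - 1)/(n_B - 1), which follows from Eqs. (8)-(9) without knowing the radii, is an assumption of this sketch; the resulting |Δz| is on the order of 9 mm, consistent with the Δz = 9.2 mm quoted in the simulation section.

```python
def defocus_distance(u_G, f_G, f_B):
    """Defocus distance between the green and blue image planes, Eq. (11)."""
    v_G = f_G * u_G / (u_G - f_G)   # image distance under green light, Eq. (10)
    v_B = f_B * u_G / (u_G - f_B)   # image distance under blue light (same object distance)
    return v_G - v_B

# Parameters from the simulation section (lengths in mm).
f_G = 150.0
n_G, n_B = 1.527, 1.519
# Eqs. (8)-(9) share the factor (1/r1 - 1/r2), so the radii cancel:
f_B = f_G * (n_G - 1) / (n_B - 1)
dz = defocus_distance(2 * f_G, f_G, f_B)   # object at twice the focal length
```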

      Substitution of (6) and (7) into (2) gives

      $$\frac{{\partial I}}{{\partial z}} \approx \frac{{{I_G}({x_i},{y_i}) - {I_B}({x_i},{y_i})}}{{\Delta z}}$$ (14)

      The phase ${\varphi _0}(x,y)$ of the object is then obtained by substituting (14) into (1):

      $${\varphi _0}(x,y) = \frac{{2\pi }}{\lambda }{\Im ^{ - 1}}\left\{ {{{\left[ {2{\pi ^2}(f_x^2 + f_y^2)} \right]}^{ - 1}}\Im \left[ {{{\partial I} / {\partial z}}} \right]} \right\}$$ (15)

      In addition, it is worth noting that phase retrieval in a single-lens imaging system introduces an additional quadratic phase aberration. In the experiment, the phase ${\varphi _0}(x,y)$ is compensated by the following phase mask.

      $$\Phi (m,n) = \exp\left[ {\frac{{ - i\pi }}{{\lambda D}}({m^2}\Delta {\xi ^2} + {n^2}\Delta {\eta ^2})} \right]$$ (16)

      where $m \times n$ is the size of the phase mask, $\Delta \xi $ and $\Delta \eta $ are the discretized sampling intervals, and $D$ is an adjustable parameter that compensates for the wavefront curvature. The compensated phase ${\varphi _i}(x,y)$ is

      $${\varphi _i}(x,y) = {\varphi _0}(x,y) - C\Phi (m,n)$$ (17)

      where $C$ is a constant and the value is adjusted according to the results in the experiment.
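The compensation of Eqs. (16)-(17) can be sketched as follows. Two details are assumptions of this sketch rather than statements from the paper: the mask indices are centred on the optical axis, and the subtracted term is taken as the phase (argument) of the complex mask; C and D must still be tuned against the experimental result as the text describes.

```python
import numpy as np

def phase_mask(m, n, wavelength, D, d_xi, d_eta):
    """Quadratic phase mask of Eq. (16); indices centred on the optical axis."""
    mm = np.arange(m) - m / 2
    nn = np.arange(n) - n / 2
    M, N = np.meshgrid(mm, nn, indexing="ij")
    return np.exp(-1j * np.pi / (wavelength * D)
                  * ((M * d_xi)**2 + (N * d_eta)**2))

def compensate(phi0, wavelength, D, d_xi, d_eta, C=1.0):
    """Eq. (17): subtract the scaled mask phase from the retrieved phase phi0."""
    mask = phase_mask(*phi0.shape, wavelength, D, d_xi, d_eta)
    return phi0 - C * np.angle(mask)
```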

      Considering the long propagation distance of the light field during actual imaging and the large defocus distance, the hybrid phase retrieval algorithm is used to improve the retrieved phase. In this paper, the phase recovered by TIE is used as the initial phase of the angular spectrum iterative algorithm; the process is shown in Fig.4. Angular spectrum propagation is then applied repeatedly between the in-focus and defocus planes, and each time the amplitude of the propagated complex field is replaced by the measured amplitude of the corresponding plane. When the preset number of iterations is reached or the phase converges, an improved retrieved phase $\varphi $ is finally obtained.

      Figure 4.  The flow chart of hybrid phase retrieval algorithm.

    • The relevant simulation experiments test the method according to the theory described above. A pure-phase object with phase shifts ranging from 0 rad to 2π rad is illuminated by a monochromatic plane wave, as shown in Fig.5(a). The following parameters are chosen for the simulation: image size $N \times M = 256 \times 256$, pixel size dx × dy = 4 μm × 4 μm, focal length of the lens f = 150 mm, object distance and image distance under green illumination 2f = 300 mm, ${n_G}= 1.527$ and ${n_B}=1.519$, giving a defocus distance $\Delta z =$ 9.2 mm by (12). The in-focus and defocus intensity distributions shown in Fig. 5(b) and Fig. 5(c) are simulated under green and blue light with center wavelengths ${\lambda _G} =$ 532 nm and ${\lambda _B} =$ 470 nm, respectively. The two intensity images are processed by (15) to obtain the initial phase distribution in the image plane, and the compensation mask of (16) is then applied to this phase, shown in Fig.5(d). The compensated phase is still blurred because of the large propagation distance of the light field and the long defocus distance in the experiment. The angular spectrum iterative algorithm is then used to improve the phase, which becomes much clearer, as shown in Fig. 5(e). The gray values of Fig. 5(a) and (e) along the transverse center line are compared in Fig.5(f), and the curves agree well.

      Figure 5.  Simulation experiment results. (a) Original phase. (b) In-focus intensity image. (c) Defocus intensity image. (d) Initial phase. (e) Final phase. (f) Comparison of the gray values along the transverse center lines of Fig. 5(a) and (e).

      In order to further verify the accuracy of the retrieved results, here the RMSE defined in (18) is adopted.

      $$ RMSE = \sqrt {\frac{{\sum\limits_{x,y} {{{\left[ {\varphi (x,y) - {\varphi _{ex}}(x,y)} \right]}^2}} }}{{M \times N}}} $$ (18)

      where $\varphi (x,y)$ and ${\varphi _{ex}}(x,y)$ represent the recovered phase and the original phase, respectively; the RMSE between them is 0.1076.
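Eq. (18) is straightforward to evaluate; the small check below uses synthetic arrays, not the paper's data.

```python
import numpy as np

def rmse(phi, phi_ex):
    """Root mean square error between recovered and original phase, Eq. (18)."""
    return np.sqrt(np.mean((phi - phi_ex) ** 2))

# Illustrative arrays (not the paper's data):
a = np.zeros((4, 4))
b = np.ones((4, 4))
```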

      To compare the accuracy of the TIE and hybrid algorithms, the dispersion-based experimental results of both are presented in Fig.6. Fig.6(a) is the simulated original phase; Fig.6(b) and Fig.6(c) are the phases retrieved by the TIE and hybrid algorithms, respectively, with root mean square errors of 0.2675 and 0.0987 against the original phase. The hybrid algorithm thus improves the accuracy significantly. These numerical results demonstrate the correctness and effectiveness of CD-HPR in a single-lens system.

      Figure 6.  Comparison between TIE and hybrid algorithm. (a) Original phase. (b) Retrieved phase obtained by TIE algorithm. (c) Retrieved phase obtained by hybrid algorithm.

    • The experimental arrangement used to test CD-HPR is illustrated in Fig.7. An LED white light source (GCI-060411, Daheng Optics, China) is used. A bandpass filter with known central wavelength is placed before the white LED, and a variable aperture is placed between the two to control the extent of the light field and keep it strictly symmetric about the optical axis. A plane wave is obtained by a collimating lens (f = 150 mm). The sample is a micro-lens array consisting of single lenses made of silicone oil with refractive index 1.579; the filling material surrounding the lenses is PDMS with refractive index 1.403, and the maximal thickness of a lens is 1.15 mm. The focal length of the lens in the single-lens system is f = 150 mm, and a CCD (1280 pixel × 1024 pixel, pixel size 5.2 μm × 5.2 μm) is placed at the image plane.

      Figure 7.  Practical intensity acquisition system.

      First, the preset illumination wavelength (green light) is obtained by a bandpass filter with central wavelength 532 nm and full width at half maximum of 22 nm. In this case the in-focus image can be captured at the image plane, as shown in Fig.8(a). The filter is then replaced by one with central wavelength 470 nm to obtain blue light, and the defocus image is acquired by the CCD at the same position, as shown in Fig.8(b). Finally, the phase recovered by CD-HPR is shown in Fig. 8(c). The red line in Fig.8(c) is converted from phase to thickness by (4), as shown in Fig.8(d), and a 3D display is given in Fig.8(e). The maximum thickness of the lens measured by CD-HPR is approximately 1.19 mm, close to the actual thickness, and the whole experiment only requires swapping filters, which verifies the effectiveness of CD-HPR in a single-lens system.

      Figure 8.  Experimental results. (a) In-focus intensity image. (b) Defocus intensity image. (c) Phase by CD-HPR. (d) Thickness along the red line of (c). (e) 3D display.

    • In this paper, a chromatic dispersion hybrid phase retrieval (CD-HPR) method for single-lens systems is proposed. It ensures that intensity images of the object with different defocus distances can be obtained in the same plane without moving the object or the CCD; the error caused by mechanical movement in the conventional method is avoided, and the phase is recovered without lowering the resolution. The CD-HPR method is applied to a single-lens system, which proves its validity and correctness. The method also removes the monochromatic-source limitation on TIE, in particular toward complex phase reconstruction in natural-light scenes in the future.
