## Abstract

Imaging turbid media is range limited. In contrast, sensing the medium’s optical properties is possible at larger depths using the iterative multi-plane optical properties extraction technique, which analyzes the reconstructed reemitted light phase image. The root mean square of the phase image yields two graphs with opposite behaviors that intersect at *µ’_{s,cp}*. These graphs enable the extraction of a certain range of the reduced scattering coefficient, *µ’_{s}*. Here, we aim to extend the range of *µ’_{s}* detection by optical magnification. We use a modified diffusion theory and show how *µ’_{s,cp}* shifts with the varying magnification. The theoretical results were tested experimentally, showing that the technique can be adapted to different ranges of *µ’_{s}* by changing the magnification.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Light-matter interactions are mainly correlated with absorption, emission, and scattering phenomena. The attenuation, i.e., the total loss of intensity during the interaction, is related to the absorption and is represented by the absorption coefficient *µ_{a}*. The scattering, in which the incident light's phase and direction are changed by light-matter interactions along the light's propagation through the medium, is represented by the reduced scattering coefficient *µ’_{s}*. Since the absorption and scattering phenomena are frequency-dependent, their coefficients are wavelength-dependent as well [1]. Most optical imaging techniques can acquire high-resolution images of the surface. However, imaging inside a turbid medium is challenging even when the medium does not absorb light. The phase, together with the direction of the light, is lost, and imaging becomes more difficult as the optical path length within the medium grows. Despite the challenge, different methods for imaging under scattering conditions have been developed. The field of underwater imaging has made significant progress, driven by growing necessity, especially for the expanding field of autonomous underwater vehicles [2], mainly used for naval purposes. In recent years, various imaging technologies have tried to overcome the scattering in underwater images; some are based on Sonar acoustic imaging, a technology that is constantly developing [3]. In parallel, other methods were developed based on light illumination [4], requiring compensation algorithms [5]. Another type of method uses the movement of the object [6]. Recent progress in the last two methods includes machine learning tools [7–9]. Yet, the above techniques are limited too; Sonar is limited to big or stiff objects and far distances [10], and optical methods are limited to short distances [11]. Indeed, imaging techniques can acquire important data that is also intuitive to understand. However, in addition to their limitations, imaging techniques do not profess to recover all the information contained in the lost phase. Thus, some information seems to be lost together with the phase [12], and so there are constant attempts to retrieve the phase in various fields such as biology [13], materials science [14], astronomy [15], etc.
Although the phase cannot be measured directly, some of the information can be obtained from a proper analysis of the medium's response to light, which is defined by the medium’s optical properties. Therefore, a sensing technique, the iterative multi-plane optical property extraction (IMOPE) [16–21] technique, was suggested, aiming to detect changes in the reduced scattering coefficient. The IMOPE is a non-invasive technique for the detection of media scattering. It reconstructs and analyzes the reemitted light phase from the irradiated medium, based on the relation between the medium's scattering properties and the reemitted light phase. For reconstructing the phase, the IMOPE uses a multi-plane version of the iterative Gerchberg-Saxton (GS) algorithm [22]. The GS algorithm is an error reduction algorithm [23] for phase retrieval [24] and image reconstruction [25]. The IMOPE technique does not use the retrieved phase itself but its root mean square (RMS). After the RMS of the reemitted light phase is obtained, it is compared to the theoretical model, which allows the extraction of the medium’s *µ’_{s}*. As mentioned above, the IMOPE technique was originally developed in the red regime of the electromagnetic (EM) spectrum (*633nm* in wavelength) [18,19,26], for medical applications using small distances. Later on, we extended the IMOPE to the other edge of the visible spectrum, the blue regime, applying it to a *473nm* wavelength [27]. The red wavelength mainly targets biological applications. The blue wavelength, however, has higher potential for underwater research, since blue light has the lowest absorption undersea [28,29]. Since underwater research will require significant magnification of the IMOPE technique, we show here the potential of the technique to be modified for different, yet small, magnifications; for more significant magnifications, future work is required.

In this work we have extended the detection range of *µ’_{s}* using the IMOPE technique in the blue regime. This extension allows tailoring the linear detection range of the technique to a desired *µ’_{s}* range. First, we show an adaptation of the theoretical model to the blue regime. Then, we change the magnification of the image captured by the detector, showing how different magnifications affect the theoretical model. We also observe the change in the technique’s linear detection range for each magnification. Last, we present the magnification effect on the IMOPE using phantoms with known *µ’_{s}*, and show how the experiments support the theoretical work.

## 2. Theoretical model

Inside a turbid medium, the radiation’s propagation is commonly described by the Radiative Transport Equation (RTE) [30]. It is an integro-differential equation that considers the different displacements and directions of a photon, then calculates the losses and gains in its energy due to scattering and absorption within the medium. The RTE is a complex equation for which different solutions have been suggested over the years [31]; the most common one is the diffusion approximation (DA) [32]. The DA describes the diffusion reflection (DR), i.e., the intensity of the scattered light reflected from the surface, as well as the diffusive transmission. The DA uses the method of images with fixed boundary conditions [33], where the boundary (red dotted line in Fig. 1) can be defined either on the surface [32] or higher [34]. In the method of images, the light source is described as a real isotropic light source beneath the surface together with its image light source symmetrically above the chosen boundary (black circles in Fig. 1). The real source is located at a depth of *1/µ’_{s}* beneath the surface, and its image is reflected from the chosen boundary. The resulting intensity is represented as a function of *ρ*, the cylindrical coordinate of the distance from the center of the source along the surface (green arrow in Fig. 1). This method works well for relatively large *ρ*, but is limited at smaller distances, where it cannot describe the reflectance accurately [33]. To address this, Piao et al. [34] published a more accurate model for the DR at short paths of *ρ* and for a low-scattering semi-infinite homogeneous medium. According to Piao’s model, the boundary is located at a distance of *2AD*, where *A=(1+R_{eff})/(1-R_{eff})* (with *R_{eff}=0.477* [32]) is the mismatch factor and *D=(1/3)/(µ_{a}+µ’_{s})* is the diffusion coefficient. In Piao’s model, the real and image sources are referred to as *master sources* (black circles in Fig. 1), while another *slave source* lies beneath the surface, and its image reflects on the other side of the boundary (gray circles in Fig. 1). This model, for shorter light paths and lower scattering coefficients, is referred to as the dual-source configuration (while the simpler DA model, which includes the master sources only, is referred to as the single-source configuration). The IMOPE’s theoretical model combines the intensity described by the single and dual source configurations, *I_{Rdual}(ρ)*, with a phase model [19] *φ(ρ)*, in order to describe the electromagnetic field for calculating its phase RMS as a function of *µ’_{s}*:
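(The displayed equation is missing at this point; based on the description above, in which the intensity *I_{Rdual}(ρ)* is combined with the phase model *φ(ρ)*, it presumably has the form of a complex field)

```latex
E(\rho) \;=\; \sqrt{I_{Rdual}(\rho)}\; e^{\,i\varphi(\rho)}
```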

The distance *ρ* represents the distance between the center of the pencil beam illumination and a specific pixel. However, the actual pathlength of the light beneath the surface is longer than *ρ*, and therefore the phase is accumulated along this larger distance. The ratio between the actual optical pathlength and *ρ* is the differential pathlength factor (*DPF*):
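(The displayed definition is missing here; as stated in the text, with *ℓ(ρ)* denoting the actual optical pathlength, it presumably reads)

```latex
DPF \;=\; \frac{\ell(\rho)}{\rho}
```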

The *DPF* calculation refers to the intensities of the real and master sources (*S*), i.e., the steady-state photon fluence rates (Ψ) at the detector.

Thus, the phase itself is:
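(The displayed expression is missing here; following the phase model of Ref. [26], it presumably reads)

```latex
\varphi(\rho) \;=\; \frac{2\pi\, n}{\lambda}\, DPF \cdot \rho \;\;(\mathrm{mod}\ 2\pi)
```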

where $n$ is the refractive index of the object and *λ* is the laser source wavelength. It was previously shown [26] that the phase image can be divided into two regions of interest (ROIs) according to the phase RMS. The latter behaves differently in the single scattering and multiple scattering areas, which are defined according to the transport mean free path (MFP’) of the photon within the medium:
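(Eq. (4)'s display is missing here; consistent with the source depth *1/µ’_{s}* quoted in Section 2, it presumably reads)

```latex
MFP' \;=\; \frac{1}{\mu_s'}
```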

where the multiple scattering area is obtained at *ρ>MFP’* (referred to as the ‘ring’), and the single scattering area is the area closer to the light source (blue arrow in Fig. 1), where *ρ<MFP’* (referred to as the ‘center’). The phase RMS of each ROI is calculated separately.
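As an illustration of this ROI analysis, the following sketch (NumPy; the synthetic phase image, beam center and MFP’ radius are made-up stand-ins, not the paper's data) splits a phase image at a given MFP’ radius and computes the phase RMS of each ROI:

```python
import numpy as np

def roi_phase_rms(phase, center, mfp_px):
    """Split a phase image into 'center' (rho < MFP') and 'ring'
    (rho > MFP') ROIs and return the phase RMS of each."""
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx]
    rho = np.hypot(x - center[0], y - center[1])  # distance from beam center, in pixels
    phase = phase - phase.mean()                  # subtract the mean phase, as in the IMOPE analysis
    rms = lambda a: float(np.sqrt(np.mean(a ** 2)))
    return rms(phase[rho < mfp_px]), rms(phase[rho > mfp_px])

# usage with a synthetic (hypothetical) phase image
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.3, (64, 64))
center_rms, ring_rms = roi_phase_rms(img, center=(32, 32), mfp_px=10)
```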

The basic IMOPE technique uses the dual source configuration (Piao’s model) to describe the intensities at both the single and multiple scattering areas. This is due to the improvement this model offers for areas close to the illumination point, and since for an optical magnification of M=1 both ROIs are relatively close to the center. Magnifying the field of view, we found that calculating the intensities for the multiple scattering area (i.e., the ring) with the single source configuration alone fits the experimental results better; this is the model that better describes the larger distances. However, for the single scattering area (i.e., the center), the dual source configuration remained.
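The diffusion quantities defined above can be collected into a small helper (a sketch only; the *µ_a* and *µ’_s* inputs are illustrative, and the ratio form of *A* follows the standard boundary-condition expression [32]):

```python
# Sketch of the DA quantities used above (coefficients in mm^-1, lengths in mm).
R_EFF = 0.477  # effective reflection coefficient, as quoted from [32]

def diffusion_params(mu_a, mu_s_prime):
    """Return the diffusion coefficient D, mismatch factor A,
    the boundary height 2AD, and MFP' = 1/mu_s'."""
    D = (1.0 / 3.0) / (mu_a + mu_s_prime)        # D = (1/3)/(mu_a + mu_s')
    A = (1.0 + R_EFF) / (1.0 - R_EFF)            # A = (1 + R_eff)/(1 - R_eff)
    return D, A, 2.0 * A * D, 1.0 / mu_s_prime

# illustrative values, not the paper's phantoms
D, A, zb, mfp = diffusion_params(mu_a=0.01, mu_s_prime=1.0)
```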

## 3. Materials and methods

#### 3.1 IMOPE technique

The IMOPE is a technique originally developed to extract the reduced scattering coefficient. It evaluates *µ’_{s}* from the phase image reconstructed by the GS algorithm. The basic GS algorithm requires two intensity images taken at two planes of the electromagnetic field. At the entrance plane the intensity is denoted as *I_{en}*, and at the exit plane as *I_{ex}*. The intensity at each plane is captured by a camera. The mathematical expression for the field propagation is the Fresnel transform (FRT) [35,36]. From the intensities and the field propagation equations, the algorithm aims to reconstruct the field's phase that was lost when the intensities were captured. Since the GS algorithm converges to the intensity’s RMS, but not to the global minimum [22], an improved multi-plane version was suggested [37,38]. The multi-plane GS technique uses N planes, rather than only two, along the propagation axis. Thus, the IMOPE uses N light intensity images taken at N planes along the z-axis (Fig. 2(a), along the light-blue arrows). The method starts by applying the multi-plane GS algorithm. The result is a phase image at the desired N^{th} plane, which is the reconstructed phase of the reemitted light. The average value of the phase image ${\varphi }$ is then subtracted from the received phase image. As mentioned, the received phase images can be separated into their ROIs, where the border between them is the *MFP’* (Eq. (4), and the orange circle in Fig. 2(b)). However, in order to explore materials with unknown *µ’_{s}*, we automated the process that extracts the border between the ROIs, and experimentally confirmed it using phantoms with known *µ’_{s}*. Knowing the border location, the RMS of each ROI in the phase image can be calculated and compared with the theoretical model (Fig. 2(c); the blue line is the center ROI and the red line is the outer ring ROI), and thus the reduced scattering coefficient, *µ’_{s}*, can be extracted.
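The multi-plane GS loop described above can be sketched as follows (NumPy; an angular-spectrum propagator stands in for the FRT of [35,36], and the grid size, pixel pitch and iteration count are illustrative assumptions, not the paper's values):

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum propagation over distance dz (a stand-in
    for the Fresnel transform used by the IMOPE)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # propagating components only; evanescent terms are clipped to kz = 0
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength ** 2 - FX ** 2 - FY ** 2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def multiplane_gs(intensities, dz, wavelength, dx, n_iter=50):
    """Multi-plane Gerchberg-Saxton: cycle through the N measured
    intensity planes, enforcing the measured amplitude at each plane,
    and return the retrieved phase at the last (N-th) plane."""
    amps = [np.sqrt(I) for I in intensities]
    field = amps[0] * np.exp(1j * np.zeros_like(amps[0]))  # flat initial phase guess
    for _ in range(n_iter):
        for k in range(1, len(amps)):
            field = propagate(field, dz, wavelength, dx)
            field = amps[k] * np.exp(1j * np.angle(field))  # amplitude constraint
        # propagate back to the first plane and re-apply its amplitude
        field = propagate(field, -dz * (len(amps) - 1), wavelength, dx)
        field = amps[0] * np.exp(1j * np.angle(field))
    # one final forward pass to read out the phase at the last plane
    for k in range(1, len(amps)):
        field = propagate(field, dz, wavelength, dx)
        field = amps[k] * np.exp(1j * np.angle(field))
    return np.angle(field)
```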

#### 3.2 Optical setup

The experimental setup for the light intensity images (Fig. 2(a)) is composed of a laser with a wavelength of *λ=473nm*, a beam diameter of *1.2mm* and a power of *100mW*; an attenuator that transfers 1% of the laser's intensity; polarizers, for optical clearing purposes; and a lens (focal length *f=75mm*) to focus the illumination beam. A CMOS camera was used for the intensity image acquisition. The lens, polarizer and camera are set on a moving stage at a small angle *θ* from the laser source; hence the distances between the images are corrected appropriately. In order to study the effect of the magnification *M* on the IMOPE technique, we changed the distances between the lens, the camera and the sample (blue arrows in Fig. 2(a)). The samples are set on a 3-axis micrometer stage to enable their fine-tuning during the experiments [37].

#### 3.3 Agar-based phantoms

Agar-based phantoms played a significant role in this research. The phantoms are suitable for this purpose since we are able to control their *µ’_{s}* accurately using a known lipid concentration, while keeping a relatively low absorption coefficient. According to Mie theory, the size of the unit cell and the material type determine the scattering coefficient of a substance; the diameter of the intralipid’s unit cell varies between 25-625 *nm* [39]. The phantoms were first used for the validation of the IMOPE technique under *473nm* laser illumination [27] and the different magnifications. The phantoms are composed of Intralipid (IL) (Intralipid 20% Emulsion, Sigma-Aldrich, Israel) with varying concentrations, 1% Agarose powder (Agarose, low gelling temperature, Sigma-Aldrich, Israel) and water [21]. The *µ’_{s}* varies as a function of the intralipid concentration [1,21]. The IL concentrations of the phantoms used in the following experiments are 0.23%, 0.495%, 0.71%, 1.06%, 1.27%, 1.48%, 1.85%, 2.13%, 2.39%, 2.62%, corresponding to *µ’_{s}* values of 0.25, 0.5, 0.7, 1.03, 1.23, 1.4, 1.77, 2.03, 2.28, 2.49 *mm^{-1}* [40].
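The roughly linear dependence of *µ’_{s}* on the IL concentration [1,21] can be checked against the values listed above with a least-squares fit (a sketch; the fitted coefficients are illustrative, not calibration constants):

```python
import numpy as np

# IL concentration (%) -> measured mu_s' (mm^-1) pairs quoted above
il = np.array([0.23, 0.495, 0.71, 1.06, 1.27, 1.48, 1.85, 2.13, 2.39, 2.62])
mu = np.array([0.25, 0.5, 0.7, 1.03, 1.23, 1.4, 1.77, 2.03, 2.28, 2.49])

# first-degree polynomial fit: mu_s' ~ slope * concentration + intercept
slope, intercept = np.polyfit(il, mu, 1)
```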

## 4. Results and discussion

The theoretical model was first applied to the blue regime, using different magnifications (*M*: 1, $\frac{2}{3}\;,\frac{1}{2}$, 0.4 and $\frac{1}{3}$, represented in Fig. 3(a) by red, yellow, green, blue and purple, respectively). For each magnification, the solid lines describe the center ROI, and the dashed lines describe the ring ROI. Our results suggest that different magnifications change the theoretical graphs: for a lower *M* the graphs present a steeper slope and reach saturation earlier. Also, the ring graphs start at lower *µ’_{s}* values. For each magnification *M*, the curves of the ring and the center have a crossing point around a phase RMS value of 1; we noticed that this crossing point shifts with *M*. Extracting the *µ’_{s}* value at which this crossing occurs, *µ’_{s,cp}*, for each *M* (asterisks in Fig. 3(b)) yields the linear fit (line in Fig. 3(b)) *µ’_{s,cp}=M*.

Next, we applied the measurements to the phantoms using the basic *M=1* setup, i.e., for each phantom N intensity images were taken using the optical setup described above and their phase RMS was calculated. Since the *µ’_{s}* values of the phantoms were known, we could calculate a theoretical radius that discriminates between the ROIs of each phase image obtained for every phantom; this border is the phantoms’ *MFP’* (Eq. (4)). We aimed to find the range of *MFP’* which can be detected by our method. Therefore, we used an algorithm that scans the phase image and finds the border radius of each phantom. This algorithm is useful when measuring samples with unknown *µ’_{s}*; here, however, we used it to examine the capabilities of the IMOPE.

For *M=1*, the extracted border radius values (asterisks in Fig. 4(a)) do not match the theory (line in Fig. 4(a)) for the lower *µ’_{s}* values. As expected, the IMOPE cannot detect the radii of the lower *µ’_{s}* since they are larger than the size of the image, and thus cannot be captured by the detector. Adjusting the optical setup to *M=*$\frac{1}{2}$, we saw a significant improvement: we were able to reveal the radii with the algorithm (asterisks in Fig. 4(b)), achieving a good fit between them and the theoretical *MFP’* curve (solid line in Fig. 4(b)) even for low *µ’_{s}* values.
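The extraction of the crossing point *µ’_{s,cp}* from a pair of sampled center/ring curves (as in Fig. 3(b)) can be sketched generically (the two curves below are made-up monotone stand-ins, not the paper's model):

```python
import numpy as np

def crossing_point(mu_s, rms_center, rms_ring):
    """Find the mu_s' value at which the center and ring phase-RMS
    curves intersect, by linear interpolation of their difference."""
    d = np.asarray(rms_center) - np.asarray(rms_ring)
    i = np.flatnonzero(np.sign(d[:-1]) != np.sign(d[1:]))[0]  # first sign change
    t = d[i] / (d[i] - d[i + 1])                              # interpolation weight
    return mu_s[i] + t * (mu_s[i + 1] - mu_s[i])

# illustrative curves with opposite trends (decreasing center, increasing ring)
mu = np.linspace(0.2, 2.5, 100)
cp = crossing_point(mu, rms_center=2.0 - mu, rms_ring=0.5 * mu)
```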

Next, by changing the distances between the camera, the lens and the phantoms, we created several magnifications (*M=1,* $\frac{1}{2},\;\frac{1}{3},\;$*2* and *3*, presented in Fig. 5(a)-(e), respectively). We present the theoretical phase RMS graphs of the ring and the center (red and blue solid lines, respectively), together with the experimental results (red and blue asterisks, respectively). We see that the basic IMOPE technique, for *M=1* (Fig. 5(a)), cannot yield the ring’s values for low *µ’_{s}*. In addition, the high *µ’_{s}* values suffer from saturation. The limitations at both edges of the *µ’_{s}* axis narrow the linear range of informative *µ’_{s}* in which the analysis is possible (between *0.7mm^{-1}* to *1.77mm^{-1}*). These limitations were expected, due to the image size limitation; when $\mu _s^{\prime}$ is low, the image captured by the detector is smaller than the *MFP’* (Fig. 4(a)). When the $\mu _s^{\prime}$ values are high, they are limited due to the low number of pixels which fulfill the condition *ρ<MFP’*, causing the differences between the phase RMS values to become negligible. For *M<1* (Fig. 5(b) and (c)), we see that the experimental results, i.e., the phase RMS of the ring and the center (red and blue asterisks, respectively), are in high agreement with the theoretical model. The *µ’_{s}* values at which this agreement occurs are lower for *M<1* than for *M=1*. However, for higher *µ’_{s}* we reach saturation at lower values. For *M=*$\frac{1}{2}$ (Fig. 5(b)) the informative *µ’_{s}* range is between *0.5mm^{-1}* to *1.4mm^{-1}*, and for *M=*$\frac{1}{3}$ (Fig. 5(c)) between *0.25mm^{-1}* to *1.23mm^{-1}*.

Zooming in, i.e., configuring the setup to *M>1* (Fig. 5(d) and (e)), solved the sensitivity problem expressed by the saturation at the higher *µ’_{s}* values, as it shifts the linear range to the left. For *M=2*, the ring’s phase RMS is in good correlation with the theory, where the *µ’_{s}* ranges between *1.03mm^{-1}* to *2.03mm^{-1}*, and for *M=3* it seems that the informative *µ’_{s}* range is above *2.03mm^{-1}*. The phase RMS graphs of the centers in this range show a linear behavior with some offset from the theoretical values, as expected, since the diffusion-based theory is inaccurate closer to the light source [33]. In other words, this deviation does not indicate a mismatch of the method, but an inaccuracy of the DA-based theory describing the area close to the center. Nonetheless, at the low *µ’_{s}* values the ring ROIs were not captured by the detector (red points are missing) and the center values are saturated. Table 1 summarizes the *µ’_{s}* ranges that were found to be most informative due to their linear behavior.

As mentioned previously, the GS algorithm requires N images of the same phantoms, taken from N planes with a distance *Δz* between them. All the above experiments were done for different magnifications, based on the same constant distance *Δz=0.635mm*. We wanted to verify that the deterioration of the fit between the experimental results and the theoretical graphs for *M>1* (Fig. 5(d) and (e)) does not indicate that *Δz* should have been magnified according to *M*. Hence, for $M = 1,\;\frac{1}{2},\;\frac{1}{3}$ magnifications (Fig. 6(a), (b) and (c), respectively) we compared the theoretical results with the experimental results for *Δz* (Fig. 6, blue and red asterisks for center and ring, respectively) and *2Δz* (Fig. 6, blue and red circles for center and ring, respectively). Note that the above analysis presents a case in which the reconstruction depth (i.e., (N-1)*Δz*) of each magnification remains the same when changing the number of images. The results show that there is no significant difference between the phase RMS obtained for the different analysis methods (*Δz* vs. *2Δz*).

In this paper, we examined the influence of the optical magnification of the IMOPE’s optical system on its linear range. The change in magnification effectively changes the pixel size, and hence influences the RMS statistics of the phase. Theoretical DA-based models for the intensity profiles, combined with Eq. (4) for the theoretical phase model, predicted a shift in the linear range of the phase RMS values in both the center (*ρ<MFP’*) and ring (*ρ>MFP’*) ROIs of the phase image. Experimental results from phantoms with varying scattering coefficients confirmed this behavior; however, some offset from the expected theoretical values was observed. The change of the optical magnification therefore extends the range of detection or enhances the detection accuracy for different *µ’_{s}* ranges, and may also enable scanning deeper into the medium. To verify this, we first had to check that the *µ’_{s}* values we used meet the conditions required for the phase reconstruction. The change in the phase is extracted from the intensity image captured by a detector with a pixel size *Δx*. Each pixel contains data for the phase change along *Δz*. Thus, *Δz* must be large enough to enable a phase change along it, but small enough so that there are not too many phase changes averaged by the pixel. The condition on *Δz* is given by:

where *λ* is the used wavelength.

In the beginning, we reconstructed the phase from the phantoms using the same analysis for each magnification; the analysis used the same distance *Δz=0.635mm* between the intensity planes and the same number of planes (*N=7*) recorded by the detector, scanning a total depth *Δz⋅(N-1)=D=3.81mm* for each phantom. Later, when we changed the number of images *N* and the distance *Δz* while scanning the same depth (*D=3.81mm*), the phase RMS of each magnification remained the same. In other words, we have shown that the results obtained for different *Δz* and *N*, but with a constant reconstruction depth *D*, yield the same phase RMS experimental results (Fig. 6). In addition, we examined how changing the reconstruction depth, i.e., taking different *Δz* with the same number of images, results in different scanned depths *D* for the same magnification. Therefore, for *M=*$\frac{1}{3}$ we compared two reconstruction depths based on the same number of images with different distances between them: single and triple distance *Δz* between the image planes (asterisks and circles, respectively, in Fig. 7). The distances between the images are therefore *0.635* and *1.905mm* for the single and triple distances; hence the total depths *D* are *3.81* and *11.43mm*. The reconstruction along the different depths yielded the same phase RMS values for the same phantoms, confirming that the *Δz* we chose fulfills the condition of Eq. (5). Note that the results shown in Fig. 7 were achieved with an *M=*$\frac{1}{3}$ configuration. The center is therefore smaller, and the averaging over the pixels is too coarse. This explains the deterioration of the fit between the theory and the experimental results. In contrast, the ring theory and results agree well along the *µ’_{s}* axis.
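The reconstruction-depth bookkeeping used here reduces to *D=(N-1)⋅Δz*; a quick check with the values quoted in the text:

```python
def scan_depth(n_planes, dz_mm):
    """Total reconstruction depth D = (N - 1) * dz, in mm."""
    return (n_planes - 1) * dz_mm

d_single = scan_depth(7, 0.635)  # single spacing -> 3.81 mm
d_triple = scan_depth(7, 1.905)  # triple spacing -> 11.43 mm
```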

## 5. Conclusion

The IMOPE technique aims to extract the reduced scattering coefficient *µ’_{s}* from a phase RMS reconstructed from a turbid medium. It is based on a multi-planar version of the Gerchberg-Saxton algorithm for phase retrieval. The optical setup takes several images that are fed to the algorithm, which results in a reconstructed phase image. The phase image is analyzed, and from a comparison with the theoretical model, *µ’_{s}* can be extracted. In this research, we searched for the IMOPE’s optimal range of detection for different magnifications. We show that the technique’s linear range is limited in different ranges of *µ’_{s}*: at the lower values of *µ’_{s}* it is limited by the size of the image captured by the detector, whereas at the higher *µ’_{s}* values it is limited due to the saturation the phase RMS reaches with a low number of pixels. The work presented here aims to improve the capabilities of the IMOPE technique for the detection of lower and higher reduced scattering coefficients by changing the magnification of the image. In fact, we suggest adapting the setup in order to tailor the technique for a desired *µ’_{s}* range (summarized in Table 1). We did so by varying the magnification of the image and finding the linear *µ’_{s}* range for each magnification. The phantom experiments supported this theoretical work as well. For a magnification of *M<1*, the sensitivity for lower *µ’_{s}* is improved since they are included in the linear range. For *M>1* magnification, the linear range includes higher *µ’_{s}* values. Since these values no longer suffer from saturation, the sensitivity of the technique for these ranges has improved. In addition, we noted the cross-point between the center and ring phase RMS curves that shifts with the magnification *M*. Overall, this paper shows that the IMOPE, designed to detect changes in biological tissues, can be extended to other fields beyond its original purpose; it can be tailored to detect wider ranges of *µ’_{s}* than necessary for biological research, as well as larger distances along the turbid media. In the blue wavelength, for example, it could be utilized for underwater research. The IMOPE itself is a promising sensing method that breaks through the imaging limitations and achieves information about an otherwise invisible area. Here we show the great potential of the IMOPE to be extended to various applications.

## Acknowledgments

The research conceptualization was formed by D.F., as were the project administration and funding acquisition. In addition, D.F., together with H.D., was responsible for the research supervision and methodology. The software and the theoretical model were written by I.Y. and C.S. The experiments, investigation and formal analysis were performed by C.S., and final validation was conducted by C.S. and H.D. The original draft was written by C.S. and H.D.; D.F. and R.A. were responsible for the review and editing to improve the paper.

## Disclosures

The authors declare no conflicts of interest.

## Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

## References

**1. **H. Assadi, R. Karshafian, and A. Douplik, “Optical scattering properties of intralipid phantom in presence of encapsulated microbubbles,” Int. J. Photoenergy (2014).

**2. **S. Chutia, N. M. Kakoty, and D. Deka, “A review of underwater robotics, navigation, sensing techniques and applications,” Proceedings of the Advances in Robotics **8**, 1–6 (2017). [CrossRef]

**3. **J. Joslin, “Imaging sonar review for marine environmental monitoring around tidal turbines,” (2019).

**4. **M. Massot-Campos and G. Oliver-Codina, “Optical sensors and methods for underwater 3D reconstruction,” Sensors **15**(12), 31525–31557 (2015). [CrossRef]

**5. **H. Lu, Y. Li, Y. Zhang, M. Chen, S. Serikawa, and H. Kim, “Underwater optical image processing: a comprehensive review,” Mobile Netw Appl **22**(6), 1204–1211 (2017). [CrossRef]

**6. **G. S. Kumar, U. V. Painumgal, M. C. Kumar, and K. Rajesh, “Autonomous underwater vehicle for vision based tracking,” Procedia Computer Science **133**, 169–180 (2018). [CrossRef]

**7. **N. Wang, Y. Wang, and M. J. Er, “Review on deep learning techniques for marine object recognition: architectures and algorithms,” Control Engineering Practice, 104458 (2020).

**8. **D. Gomes, A. S. Saif, and D. Nandi, “Robust Underwater Object Detection with Autonomous Underwater Vehicle: A Comprehensive Study,” in Proceedings of the International Conference on Computing Advancements (2020), pp. 1–10.

**9. **Z. Chen, Z. Zhang, F. Dai, Y. Bu, and H. Wang, “Monocular vision-based underwater object detection,” Sensors **17**(8), 1784 (2017). [CrossRef]

**10. **G. Neves, M. Ruiz, J. Fontinele, and L. Oliveira, “Rotated object detection with forward-looking sonar in underwater applications,” Expert Systems with Applications **140**, 112870 (2020). [CrossRef]

**11. **D. Berman, D. Levy, S. Avidan, and T. Treibitz, “Underwater single image color restoration using haze-lines and a new quantitative dataset,” IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).

**12. **J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature **491**(7423), 232–234 (2012). [CrossRef]

**13. **S. Wang, L. Xue, J. Lai, Y. Song, and Z. Li, “Phase retrieval method for biological samples with absorption,” J. Opt. **15**(7), 075301 (2013). [CrossRef]

**14. **I. Häggmark, W. Vågberg, H. M. Hertz, and A. Burvall, “Comparison of quantitative multi-material phase-retrieval algorithms in propagation-based phase-contrast X-ray tomography,” Opt. Express **25**(26), 33543–33558 (2017). [CrossRef]

**15. **R. A. Gonsalves, “Perspectives on phase retrieval and phase diversity in astronomy,” in *Adaptive Optics Systems IV* (International Society for Optics and Photonics, 2014), p. 91482P.

**16. **I. Yariv, G. Rahamim, E. Shliselberg, H. Duadi, A. Lipovsky, R. Lubart, and D. Fixler, “Detecting nanoparticles in tissue using an optical iterative technique,” Biomed. Opt. Express **5**(11), 3871–3881 (2014). [CrossRef]

**17. **I. Yariv, Y. Kapp-Barnea, E. Genzel, H. Duadi, and D. Fixler, “Detecting concentrations of milk components by an iterative optical technique,” J. Biophotonics **8**(11-12), 979–984 (2015). [CrossRef]

**18. **I. Yariv, M. Haddad, H. Duadi, M. Motiei, and D. Fixler, “New optical sensing technique of tissue viability and blood flow based on nanophotonic iterative multi-plane reflectance measurements,” Int. J. Nanomed. **11**, 5237–5244 (2016). [CrossRef]

**19. **I. Yariv, H. Duadi, and D. Fixler, “An optical method to detect tissue scattering: theory, experiments and biomedical applications,” presented at SPIE BiOS, 2019.

**20. **I. Yariv, H. Duadi, and D. Fixler, * An optical method to detect tissue scattering: theory, experiments and biomedical applications* (SPIE, 2019).

**21. **I. Yariv, H. Duadi, and D. Fixler, “Depth Scattering Characterization of Multi-Layer Turbid Media Based on Iterative Multi-Plane Reflectance Measurements,” IEEE Photonics J. **12**(5), 1–13 (2020). [CrossRef]

**22. **R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase image and diffraction plane pictures,” Optik **35**, 237–246 (1972).

**23. **R. Gerchberg, “Super-resolution through error energy reduction,” Optica Acta: International Journal of Optics **21**, 709–720 (1974). [CrossRef]

**24. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**(15), 2758–2769 (1982). [CrossRef]

**25. **D. Fixler, H. Duadi, R. Ankri, and Z. Zalevsky, “Determination of coherence length in biological tissues,” Lasers Surg. Med. **43**(4), 339–343 (2011). [CrossRef]

**26. **I. Yariv, H. Duadi, and D. Fixler, “Optical method to extract the reduced scattering coefficient from tissue: theory and experiments,” Opt. Lett. **43**(21), 5299–5302 (2018). [CrossRef]

**27. **I. Yariv, C. Shapira, H. Duadi, and D. Fixler, “Media Characterization under Scattering Conditions by Nanophotonics Iterative Multiplane Spectroscopy Measurements,” ACS Omega **4**(10), 14301–14306 (2019). [CrossRef]

**28. **A. Yamashita, M. Fujii, and T. Kaneko, “Color registration of underwater images for underwater sensing with consideration of light attenuation,” in Proceedings 2007 IEEE international conference on robotics and automation (IEEE2007), pp. 4570–4575.

**29. **J. Mueller, G. S. Fargion, and C. R. McClain, “Ocean Optics Protocols for Satellite Ocean Color Sensor Validation. Volume 6; Special Topics in Ocean Optics Protocols and Appendices; Revised,” (2003).

**30. **K. Sen and S. J. Wilson, * Radiative transfer in curved media* (World Scientific, 1990).

**31. **R. Ankri and D. Fixler, “Gold nanorods based diffusion reflection measurements: current status and perspectives for clinical applications,” Nanophotonics **6**(5), 1031–1042 (2017). [CrossRef]

**32. **R. C. Haskell, L. O. Svaasand, T.-T. Tsay, T.-C. Feng, M. S. McAdams, and B. J. Tromberg, “Boundary conditions for the diffusion equation in radiative transfer,” J. Opt. Soc. Am. A **11**(10), 2727–2741 (1994). [CrossRef]

**33. **S. L. Jacques and B. W. Pogue, “Tutorial on diffuse light transport,” J. Biomed. Opt. **13**(4), 041302 (2008). [CrossRef]

**34. **D. Piao and S. Patel, “Simple empirical master–slave dual-source configuration within the diffusion approximation enhances modeling of spatially resolved diffuse reflectance at short-path and with low scattering from a semi-infinite homogeneous medium,” Appl. Opt. **56**(5), 1447–1452 (2017). [CrossRef]

**35. **Z. Zalevsky, R. G. Dorsch, and D. Mendlovic, “Gerchberg–Saxton algorithm applied in the fractional Fourier or the Fresnel domain,” Opt. Lett. **21**(12), 842–844 (1996). [CrossRef]

**36. **D. Mendlovic, Z. Zalevsky, and N. Konforti, “Computation considerations and fast algorithms for calculating the diffraction integral,” Journal of Modern Optics **44**(2), 407–414 (1997). [CrossRef]

**37. **D. Sazbon, Z. Zalevsky, and E. Rivlin, “Qualitative real-time range extraction for preplanned scene partitioning using laser beam coding,” Pattern Recognition Letters **26**(11), 1772–1781 (2005). [CrossRef]

**38. **E. Grossman, R. Tzioni, A. Gur, E. Gur, and Z. Zalevsky, “Optical through-turbulence imaging configuration: experimental validation,” Opt. Lett. **35**(4), 453–455 (2010). [CrossRef]

**39. **H. J. Van Staveren, C. J. Moes, J. van Marle, S. A. Prahl, and M. J. Van Gemert, “Light scattering in Intralipid-10% in the wavelength range of 400–1100 nm,” Appl. Opt. **30**(31), 4507–4514 (1991). [CrossRef]

**40. **D. Fixler, J. Garcia, Z. Zalevsky, A. Weiss, and M. Deutsch, “Pattern projection for subpixel resolved imaging in microscopy,” Micron **38**(2), 115–120 (2007). [CrossRef]