Citation: JIANG Ying, DENG Huiping, XIANG Sen, WU Jin. Joint focus measure and context-guided filtering for depth from focus[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250540.
[1] XIONG Haolin, MUTTUKURU S, XIAO Hanyuan, et al. SparseGS: Sparse view synthesis using 3D Gaussian splatting[C]. 2025 International Conference on 3D Vision, Singapore, 2025: 1032–1041. doi: 10.1109/3DV66043.2025.00100.
[2] WESTERMEIER F, BRÜBACH L, WIENRICH C, et al. Assessing depth perception in VR and video see-through AR: A comparison on distance judgment, performance, and preference[J]. IEEE Transactions on Visualization and Computer Graphics, 2024, 30(5): 2140–2150. doi: 10.1109/TVCG.2024.3372061.
[3] ZHOU Xiaoyu, LIN Zhiwei, SHAN Xiaojun, et al. DrivingGaussian: Composite Gaussian splatting for surrounding dynamic autonomous driving scenes[C]. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2024: 21634–21643. doi: 10.1109/CVPR52733.2024.02044.
[4] JIANG Wentao, LIU Xiaoxuan, TU Chao, et al. Adaptive spatial and anomaly target tracking[J]. Journal of Electronics & Information Technology, 2022, 44(2): 523–533. doi: 10.11999/JEIT201025.
[5] CHEN Rongshan, SHENG Hao, YANG Da, et al. Pixel-wise matching cost function for robust light field depth estimation[J]. Expert Systems with Applications, 2025, 262: 125560. doi: 10.1016/j.eswa.2024.125560.
[6] WANG Yingqian, WANG Longguang, LIANG Zhengyu, et al. Occlusion-aware cost constructor for light field depth estimation[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 19777–19786. doi: 10.1109/CVPR52688.2022.01919.
[7] KE Bingxin, OBUKHOV A, HUANG Shengyu, et al. Repurposing diffusion-based image generators for monocular depth estimation[C]. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2024: 9492–9502. doi: 10.1109/CVPR52733.2024.00907.
[8] PATNI S, AGARWAL A, and ARORA C. ECoDepth: Effective conditioning of diffusion models for monocular depth estimation[C]. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2024: 28285–28295. doi: 10.1109/CVPR52733.2024.02672.
[9] SI Haozhe, ZHAO Bin, WANG Dong, et al. Fully self-supervised depth estimation from defocus clue[C]. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 9140–9149. doi: 10.1109/CVPR52729.2023.00882.
[10] YANG Xinge, FU Qiang, ELHOSEINY M, et al. Aberration-aware depth-from-focus[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, 47(9): 7268–7278. doi: 10.1109/TPAMI.2023.3301931.
[11] JEON H G, SURH J, IM S, et al. Ring difference filter for fast and noise robust depth from focus[J]. IEEE Transactions on Image Processing, 2020, 29: 1045–1060. doi: 10.1109/TIP.2019.2937064.
[12] FAN Tiantian and YU Hongbin. A novel shape from focus method based on 3D steerable filters for improved performance on treating textureless region[J]. Optics Communications, 2018, 410: 254–261. doi: 10.1016/j.optcom.2017.10.019.
[13] SURH J, JEON H G, PARK Y, et al. Noise robust depth from focus using a ring difference filter[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2444–2453. doi: 10.1109/CVPR.2017.262.
[14] THELEN A, FREY S, HIRSCH S, et al. Improvements in shape-from-focus for holographic reconstructions with regard to focus operators, neighborhood-size, and height value interpolation[J]. IEEE Transactions on Image Processing, 2009, 18(1): 151–157. doi: 10.1109/TIP.2008.2007049.
[15] MAHMOOD M T and CHOI T S. Nonlinear approach for enhancement of image focus volume in shape from focus[J]. IEEE Transactions on Image Processing, 2012, 21(5): 2866–2873. doi: 10.1109/TIP.2012.2186144.
[16] MAHMOOD M T. Shape from focus by total variation[C]. IVMSP 2013, Seoul, South Korea, 2013: 1–4. doi: 10.1109/IVMSPW.2013.6611940.
[17] MOELLER M, BENNING M, SCHÖNLIEB C, et al. Variational depth from focus reconstruction[J]. IEEE Transactions on Image Processing, 2015, 24(12): 5369–5378. doi: 10.1109/TIP.2015.2479469.
[18] HAZIRBAS C, SOYER S G, STAAB M C, et al. Deep depth from focus[C]. 14th Asian Conference on Computer Vision, Perth, Australia, 2019: 525–541. doi: 10.1007/978-3-030-20893-6_33.
[19] CHEN Zhang, GUO Xinqing, LI Siyuan, et al. Deep eyes: Joint depth inference using monocular and binocular cues[J]. Neurocomputing, 2021, 453: 812–824. doi: 10.1016/j.neucom.2020.06.132.
[20] MAXIMOV M, GALIM K, and LEAL-TAIXÉ L. Focus on defocus: Bridging the synthetic to real domain gap for depth estimation[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 1068–1077. doi: 10.1109/CVPR42600.2020.00115.
[21] WON C and JEON H G. Learning depth from focus in the wild[C]. 17th European Conference on Computer Vision, Tel Aviv, Israel, 2022: 1–18. doi: 10.1007/978-3-031-19769-7_1.
[22] WANG N H, WANG Ren, LIU Yulun, et al. Bridging unsupervised and supervised depth from focus via all-in-focus supervision[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 12601–12611. doi: 10.1109/ICCV48922.2021.01239.
[23] YANG Fengting, HUANG Xiaolei, and ZHOU Zihan. Deep depth from focus with differential focus volume[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 12632–12641. doi: 10.1109/CVPR52688.2022.01231.
[24] DENG Huiping, SHENG Zhichao, XIANG Sen, et al. Depth estimation based on semantic guidance for light field image[J]. Journal of Electronics & Information Technology, 2022, 44(8): 2940–2948. doi: 10.11999/JEIT210545.
[25] HE Mengfei, YANG Zhiyou, ZHANG Guangben, et al. IIMT-net: Poly-1 weights balanced multi-task network for semantic segmentation and depth estimation using interactive information[J]. Image and Vision Computing, 2024, 148: 105109. doi: 10.1016/j.imavis.2024.105109.
[26] PERTUZ S, PUIG D, and GARCIA M A. Analysis of focus measure operators for shape-from-focus[J]. Pattern Recognition, 2013, 46(5): 1415–1432. doi: 10.1016/j.patcog.2012.11.011.
[27] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 936–944. doi: 10.1109/CVPR.2017.106.
[28] WU Tianyi, TANG Sheng, ZHANG Rui, et al. CGNet: A light-weight context guided network for semantic segmentation[J]. IEEE Transactions on Image Processing, 2021, 30: 1169–1179. doi: 10.1109/TIP.2020.3042065.
[29] SUWAJANAKORN S, HERNANDEZ C, and SEITZ S M. Depth from focus with your mobile phone[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 3497–3506. doi: 10.1109/CVPR.2015.7298972.
[30] FUJIMURA Y, IIYAMA M, FUNATOMI T, et al. Deep depth from focal stack with defocus model for camera-setting invariance[J]. International Journal of Computer Vision, 2024, 132(6): 1970–1985. doi: 10.1007/s11263-023-01964-x.