SFNet

Enhanced fringe-to-phase framework using deep learning

paper

dataset

Graphical Abstract




Abstract


In Fringe Projection Profilometry (FPP), achieving robust and accurate 3D reconstruction with a limited number of fringe patterns remains a challenge in structured light 3D imaging. Conventional methods require a set of fringe images, but using only one or two patterns complicates phase recovery and unwrapping. In this study, we introduce SFNet, a symmetric fusion network that transforms two fringe images into an absolute phase. To enhance output reliability, our framework predicts refined phases by incorporating information from fringe images of a frequency different from that of the input images. This allows us to achieve high accuracy with just two images. Comparative experiments and ablation studies validate the effectiveness of the proposed method.
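For context, the conventional baseline that SFNet compresses into two images is N-step phase-shifting followed by dual-frequency temporal phase unwrapping. The sketch below is a minimal illustrative implementation of that classical pipeline (it is not SFNet itself, and the pattern model `A + B*cos(phi + delta_n)` with the chosen frequencies is an assumption for illustration):

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N phase-shifted fringe images.

    Assumes the standard model I_n = A + B*cos(phi + 2*pi*n/N).
    Returns phi wrapped to (-pi, pi].
    """
    N = len(images)
    deltas = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return -np.arctan2(num, den)

def temporal_unwrap(phi_high, phi_low_abs, freq_ratio):
    """Dual-frequency temporal unwrapping.

    Uses an absolute low-frequency phase (scaled by the frequency
    ratio) to select the fringe order k of the wrapped high-frequency
    phase, yielding the absolute high-frequency phase.
    """
    k = np.round((freq_ratio * phi_low_abs - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic example: a unit-frequency phase stays within (-pi, pi),
# so its wrapped phase is already absolute and can anchor unwrapping
# of an 8x higher frequency.
x = np.linspace(0.05, 0.95, 200)
phi_low = 2 * np.pi * x - np.pi          # absolute low-frequency phase
ratio = 8
phi_high = ratio * phi_low               # absolute high-frequency phase
deltas = 2 * np.pi * np.arange(4) / 4    # 4-step phase shifts
imgs_low = [0.5 + 0.4 * np.cos(phi_low + d) for d in deltas]
imgs_high = [0.5 + 0.4 * np.cos(phi_high + d) for d in deltas]

w_low = wrapped_phase(imgs_low)          # equals phi_low (no wrapping)
w_high = wrapped_phase(imgs_high)        # wrapped; ambiguous by 2*pi*k
recovered = temporal_unwrap(w_high, w_low, ratio)
```

In this classical pipeline, the low frequency costs extra projected patterns solely to resolve the fringe-order ambiguity; SFNet instead learns to exploit cross-frequency information so that only two captured fringe images are needed.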



Method

Pipeline of the proposed method

Architecture of SFNet



Experiments


Quantitative results of different methods on the SynthFringe test dataset


Qualitative results of different methods on the SynthFringe test dataset



Datasets


Our newly constructed dataset, named the SynthFringe dataset, not only contains a greater number of image pairs than other datasets but also includes fringe images at diverse frequencies. Moreover, we designed the dataset to cover a variety of scenes so that it applies to more general situations.


Representative samples in the dataset


Examples of the collected data, where each column represents the same fringe pattern and each row represents the same view