publications
publications by category in reverse chronological order. * denotes the corresponding author.
2024
- Calibration-free deep optics for depth estimation with precise simulation. Zhengyue Zhuge, Hao Xu, Shiqi Chen, and 5 more authors. Optics and Lasers in Engineering, 2024
Monocular depth estimation is an important computer vision task widely explored in fields such as autonomous driving, robotics, and more. Recently, deep optics approaches that optimize diffractive optical elements (DOEs) with differentiable frameworks have improved depth estimation performance. However, they only consider on-axis point spread functions (PSFs) and are highly dependent on system calibration. We propose a precise end-to-end paradigm combining ray tracing and angular spectrum diffraction. With this approach, we jointly trained a DOE and a reconstruction network for depth estimation and image restoration. Compared with conventional deep optics approaches, we accurately simulate both on-axis and off-axis PSFs, eliminating the need for calibration. We have validated the high similarity between captured PSFs and simulated ones at 19.4° field-of-view (FOV). Our optimized phase mask and network achieve state-of-the-art performance among semantic-based monocular depth estimation and existing deep optics methods. During real-world experiments, our prototype camera shows depth distributions similar to the Intel D455 infrared structured light camera.
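Since the abstract hinges on combining ray tracing with angular spectrum diffraction to simulate PSFs, a minimal NumPy sketch of the angular spectrum step may help. The grid size, aperture, wavelength, and propagation distance below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    # Transfer function; evanescent components (negative argument) are zeroed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy example: PSF of a circular aperture after free-space propagation.
n, dx, wavelength, z = 512, 2e-6, 550e-9, 5e-3
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 < (0.4e-3)**2).astype(np.complex128)
psf = np.abs(angular_spectrum_propagate(aperture, wavelength, dx, z))**2
psf /= psf.sum()                                 # normalize to unit energy
```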
- Wavelength encoding spectral imaging based on the combination of deeply learned filters and an RGB camera. Hao Xu, Shiqi Chen, Haiquan Hu, and 7 more authors. Opt. Express, Mar 2024
Hyperspectral imaging is a critical tool for gathering spatial-spectral information in various scientific research fields. As a result of improvements in spectral reconstruction algorithms, significant progress has been made in reconstructing hyperspectral images from commonly acquired RGB images. However, due to the limited input, reconstructing spectral information from RGB images is ill-posed. Furthermore, conventional camera color filter arrays (CFA) are designed for human perception and are not optimal for spectral reconstruction. To increase the diversity of wavelength encoding, we propose to place broadband encoding filters in front of the RGB camera. In this condition, the spectral sensitivity of the imaging system is determined by the filters and the camera itself. To achieve an optimal encoding scheme, we use an end-to-end optimization framework to automatically design the filters’ transmittance functions and optimize the weights of the spectral reconstruction network. Simulation experiments show that our proposed spectral reconstruction network has excellent spectral mapping capabilities. Additionally, our novel joint wavelength encoding imaging framework is superior to traditional RGB imaging systems. We develop the deeply learned filter and conduct real-world shooting experiments; the spectral reconstruction results exhibit attractive spatial resolution and spectral accuracy.
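The end-to-end optimization described above couples a learnable filter transmittance with the camera's fixed spectral response. The PyTorch sketch below shows one way the encoding side might look; the class name, band count, and the random stand-in for the RGB response are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LearnableFilterEncoder(nn.Module):
    """Encode a hyperspectral cube to RGB through a learnable broadband filter."""
    def __init__(self, num_bands=31):
        super().__init__()
        # Unconstrained parameters; sigmoid keeps transmittance in (0, 1).
        self.filter_logits = nn.Parameter(torch.zeros(num_bands))
        # Fixed (here random, illustrative) camera sensitivity: 3 x num_bands.
        self.register_buffer("rgb_response", torch.rand(3, num_bands))

    def forward(self, hsi):                      # hsi: (B, num_bands, H, W)
        transmittance = torch.sigmoid(self.filter_logits)
        effective = self.rgb_response * transmittance  # filtered sensitivity
        # Per-pixel spectral integration to a 3-channel measurement.
        return torch.einsum("cl,blhw->bchw", effective, hsi)

encoder = LearnableFilterEncoder()
rgb = encoder(torch.rand(2, 31, 64, 64))         # -> (2, 3, 64, 64)
```

Because the sigmoid is differentiable, the filter parameters receive gradients from the downstream reconstruction loss, which is the essence of joint encoding-decoding design.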
- Deep Linear Array Pushbroom Image Restoration: A Degradation Pipeline and Jitter-Aware Restoration Network. Zida Chen, Ziran Zhang, Haoying Li, and 6 more authors. Jan 2024
Linear Array Pushbroom (LAP) imaging technology is widely used in the realm of remote sensing. However, images acquired through LAP always suffer from distortion and blur because of camera jitter. Traditional methods for restoring LAP images, such as algorithms estimating the point spread function (PSF), exhibit limited performance. To tackle this issue, we propose a Jitter-Aware Restoration Network (JARNet) to remove the distortion and blur in two stages. In the first stage, we formulate an Optical Flow Correction (OFC) block to refine the optical flow of the degraded LAP images, resulting in pre-corrected images where most of the distortions are alleviated. In the second stage, for further enhancement of the pre-corrected images, we integrate two jitter-aware techniques within the Spatial and Frequency Residual (SFRes) block: 1) introducing Coordinate Attention (CoA) to the SFRes block in order to capture the jitter state in orthogonal directions; 2) manipulating image features in both spatial and frequency domains to leverage local and global priors. Additionally, we develop a data synthesis pipeline, which applies a Continuous Dynamic Shooting Model (CDSM) to simulate realistic degradation in LAP images. Both the proposed JARNet and the LAP image synthesis pipeline establish a foundation for addressing this intricate challenge. Extensive experiments demonstrate that the proposed two-stage method outperforms state-of-the-art image restoration models. Code is available at https://github.com/JHW2000/JARNet.
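Coordinate Attention, which the SFRes block adopts to capture the jitter state along orthogonal directions, factorizes global pooling into the height and width axes so the attention map keeps positional information along each axis. A minimal PyTorch sketch of the published Coordinate Attention design (Hou et al., 2021) follows; the nonlinearity and reduction ratio are simplified assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Pool along H and W separately, then build per-axis attention maps."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                      # (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                  # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w                                   # per-axis gating

ca = CoordinateAttention(64)
out = ca(torch.rand(2, 64, 32, 48))   # same shape as the input
```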
- Revealing the preference for correcting separated aberrations in joint optic-image design. Jingwen Zhou, Shiqi Chen*, Zheng Ren, and 5 more authors. Optics and Lasers in Engineering, Jan 2024
The joint design of the optical system and the downstream algorithm is a challenging and promising task. Due to the demand for balancing the global optima of imaging systems and the computational cost of physical simulation, existing methods cannot achieve efficient joint design of complex systems such as smartphones and drones. In this work, starting from the perspective of optical design, we characterize the optics with separated aberrations. Additionally, to bridge the hardware and software without gradients, an image simulation system is presented to reproduce the genuine imaging procedure of lenses with large fields of view. As for aberration correction, we propose a network to perceive and correct the spatially varying aberrations and validate its superiority over state-of-the-art methods. Comprehensive experiments reveal that the preference for correcting separated aberrations in joint design is as follows: longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, field curvature, and coma, with astigmatism coming last. Drawing from this preference, a 10% reduction in the total track length of the consumer-level mobile phone lens module is accomplished. Moreover, this procedure spares more space for manufacturing deviations, realizing extreme-quality enhancement of computational photography. The optimization paradigm provides innovative insight into the practical joint design of sophisticated optical systems and post-processing algorithms.
2023
- Dark2Light: multi-stage progressive learning model for low-light image enhancement. Rui-Kang Li, Meng-Hao Li, Shiqi Chen*, and 2 more authors. Opt. Express, Dec 2023
Due to severe noise and extremely low illuminance, restoring normal-light images from low-light images remains challenging. Unpredictable noise can tangle the weak signals, making it difficult for models to learn signals from low-light images, while simply restoring the illumination can lead to noise amplification. To address this dilemma, we propose a multi-stage model, namely Dark2Light, that progressively restores normal-light images from low-light images. Within each stage, we divide low-light image enhancement (LLIE) into two main problems: (1) illumination enhancement and (2) noise removal. Firstly, we convert the image space from sRGB to linear RGB to ensure that illumination enhancement is approximately linear, and design a contextual transformer block to conduct illumination enhancement in a coarse-to-fine manner. Secondly, a U-Net-shaped denoising block is adopted for noise removal. Lastly, we design a dual-supervised attention block to facilitate progressive restoration and feature transfer. Extensive experimental results demonstrate that the proposed Dark2Light outperforms state-of-the-art LLIE methods both quantitatively and qualitatively.
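The first step above relies on the fact that illumination scaling is approximately linear only after decoding the sRGB gamma. Below is a minimal NumPy sketch of the standard sRGB transfer functions; the fixed gain stands in for whatever ratio the network would predict.

```python
import numpy as np

def srgb_to_linear(srgb):
    """Invert the standard sRGB transfer function (input in [0, 1])."""
    return np.where(srgb <= 0.04045, srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(lin):
    lin = np.clip(lin, 0.0, 1.0)
    return np.where(lin <= 0.0031308, lin * 12.92,
                    1.055 * lin ** (1 / 2.4) - 0.055)

# In linear space, brightening is a simple, approximately physical gain.
low = np.random.rand(64, 64, 3) * 0.05   # stand-in for a low-light image
gain = 8.0                                # stand-in for a predicted ratio
enhanced = linear_to_srgb(srgb_to_linear(low) * gain)
```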
- Design of an optimized Alvarez lens based on the fifth-order polynomial combination. Zhichao Ye, Jiapu Yan, Tingting Jiang, and 5 more authors. Appl. Opt., Dec 2023
This paper proposes an optimized design of the Alvarez lens by utilizing a combination of three fifth-order X-Y polynomials. It can effectively minimize the curvature of the lens surface to meet the manufacturing requirements. The phase modulation function and aberration of the proposed lens are evaluated by using first-order optical analysis. Simulations compare the proposed lens with the traditional Alvarez lens in terms of surface curvature, zoom capability, and imaging quality. The results demonstrate the exceptional performance of the proposed lens, achieving a remarkable 26.36% reduction in the maximum curvature of the Alvarez lens (with a coefficient A value of 4×10⁻⁴ and a diameter of 26 mm) while preserving its original zoom capability and imaging quality.
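For context, the first-order analysis mentioned above is easiest to see in the classical third-order (cubic) Alvarez pair, of which the fifth-order design is a generalization. A worked sketch of the standard result (not taken from the paper):

```latex
% Complementary cubic plates (third-order Alvarez pair), shifted by ±δ along x:
%   t_{1,2}(x, y) = \pm A \left( x y^2 + x^3/3 \right)
\Delta t(x, y) = t_1(x - \delta, y) - t_1(x + \delta, y)
             = -2 A \delta \left( x^2 + y^2 \right) - \tfrac{2}{3} A \delta^3
% The quadratic (r^2) term is a thin lens; with refractive index n,
% the optical power varies linearly with the lateral shift δ:
\phi(\delta) = 4 A (n - 1)\, \delta
```

Higher-order (fifth-order) terms reshape the surface to lower its maximum curvature, as the paper reports, while preserving this first-order zoom behavior.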
- Image restoration for optical zooming system based on Alvarez lenses. Jiapu Yan, Zhichao Ye, Tingting Jiang, and 5 more authors. Opt. Express, Oct 2023
Alvarez lenses are known for their ability to achieve a broad range of optical power adjustment by utilizing complementary freeform surfaces. However, these lenses suffer from optical aberrations, which restrict their potential applications. To address this issue, we propose a field of view (FOV) attention image restoration model for continuous zooming. In order to simulate the degradation of optical zooming systems based on Alvarez lenses (OZA), a baseline OZA is designed where the polynomial for the Alvarez lenses consists of only three coefficients. By computing spatially varying point spread functions (PSFs), we simulate the degraded images of multiple zoom configurations and conduct restoration experiments. The results demonstrate that our approach surpasses the compared methods in the restoration of degraded images across various zoom configurations while also exhibiting strong generalization capabilities under untrained configurations.
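Simulating degradation with spatially varying PSFs, as described above, is commonly done by tiling the field and convolving each tile with its local kernel. A minimal NumPy/SciPy sketch with stand-in Gaussian PSFs follows (the paper computes real PSFs from the designed Alvarez system); practical pipelines also overlap and blend tiles to avoid seams.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def spatially_varying_blur(img, grid=4):
    """Blur each tile with its own PSF; width grows toward the field corners."""
    h, w = img.shape
    out = np.zeros_like(img)
    th, tw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            # Field position -> PSF width: corners are blurrier than center.
            r = np.hypot(i - (grid - 1) / 2, j - (grid - 1) / 2)
            psf = gaussian_psf(21, sigma=0.5 + r)
            tile = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = \
                fftconvolve(tile, psf, mode="same")
    return out

degraded = spatially_varying_blur(np.random.rand(256, 256))
```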
- Reliable Image Dehazing by NeRF. Zheyan Jin, Shiqi Chen, Huajun Feng, and 3 more authors. May 2023
We present an image dehazing algorithm with high quality and wide applicability that requires no training data or priors. We analyze the defects of the original dehazing model and propose a new, reliable dehazing model that combines the optical scattering model with a computer-graphics lighting rendering model. Based on the new haze model and the images obtained by the cameras, we reconstruct the three-dimensional space, accurately compute the objects and haze within it, and use the transparency relationship of haze to perform accurate haze removal. To obtain a 3D simulation dataset, we used the Unreal 5 computer graphics rendering engine. To obtain real-shot data in different scenes, we used fog generators, array cameras, mobile phones, underwater cameras, and drones to capture haze data. We validate the feasibility of the new method through formula derivation and experiments on both the simulated and real-shot datasets. Compared with various other methods, ours is far ahead in quantitative metrics (on average 4 dB higher across scenes), preserves more natural color, and is more robust across scenarios and best in subjective perception.
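The optical scattering model such methods build on is the classical I(x) = J(x)·t(x) + A·(1 − t(x)); once the transmittance t and airlight A are known (here, from the reconstructed 3D space), dehazing is a direct inversion. A minimal NumPy sketch with synthetic values:

```python
import numpy as np

def add_haze(J, t, A=1.0):
    """Atmospheric scattering model: I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def remove_haze(I, t, A=1.0, t_min=0.1):
    """Invert the model; clamping t avoids amplifying noise in dense haze."""
    return (I - A) / np.maximum(t, t_min) + A

J = np.random.rand(64, 64, 3)                  # clean scene (stand-in)
depth = np.linspace(1.0, 5.0, 64)[None, :, None]
t = np.exp(-0.4 * depth) * np.ones_like(J)     # Beer-Lambert transmittance
I = add_haze(J, t)                             # hazy observation
J_hat = remove_haze(I, t)                      # close to J wherever t > t_min
```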
- Let Segment Anything Help Image Dehaze. Zheyan Jin, Shiqi Chen, Yueting Chen, and 2 more authors. Jun 2023
Large language models and high-level vision models have achieved impressive performance improvements with large datasets and model sizes. However, low-level computer vision tasks, such as image dehazing and blur removal, still rely on small datasets and small models, which generally leads to overfitting and local optima. We therefore propose a framework to integrate large-model priors into low-level computer vision tasks. Just as in image segmentation, the degradation caused by haze is texture-related, so we propose gray-scale coding, network channel expansion, and pre-dehaze structures to integrate large-model prior knowledge into any low-level dehazing network. We demonstrate the effectiveness and applicability of large models in guiding low-level vision tasks through comparison experiments across different datasets and algorithms. Finally, we verify the effect of gray-scale coding, network channel expansion, and the recurrent network structure through ablation experiments. Without requiring additional data or training resources, we show that integrating large-model prior knowledge improves dehazing performance and saves training time for low-level vision tasks.
- Toward Real Flare Removal: A Comprehensive Pipeline and A New Benchmark. Zheyan Jin, Shiqi Chen, Huajun Feng, and 2 more authors. Jul 2023
When photographing under-illuminated scenes, complex light sources often leave strong flare artifacts in images, where the intensity, the spectrum, the reflection, and the aberration all contribute to the deterioration. Beyond degrading image quality, flare also impairs the performance of downstream visual applications. Thus, removing lens flare and ghosts is a challenging issue, especially in low-light environments. However, existing methods for flare removal are mainly restricted by inadequate simulation and real-world capture, where the categories of scattered flares are singular and reflected ghosts are unavailable. Therefore, a comprehensive deterioration procedure is crucial for constructing a flare-removal dataset. Based on theoretical analysis and real-world evaluation, we propose a well-developed methodology for generating data pairs with flare deterioration. The procedure is comprehensive, reproducing the similarity of scattered flares and the symmetric effect of reflected ghosts. Moreover, we construct a real-shot pipeline that separately handles the effects of scattering and reflective flares, aiming to directly generate data for end-to-end methods. Experimental results show that the proposed methodology adds diversity to existing flare datasets and establishes a comprehensive mapping procedure for flare data pairs. Our method enables data-driven models to achieve better restoration of flare images and offers a better evaluation system based on real shots, promoting progress in the area of real flare removal.
- Imaging Simulation and Learning-Based Image Restoration for Remote Sensing Time Delay and Integration Cameras. Menghao Li, Ziran Zhang, Shiqi Chen, and 4 more authors. IEEE Transactions on Geoscience and Remote Sensing, Aug 2023
Time delay and integration (TDI) cameras are widely used in remote sensing because they capture high-resolution, high signal-to-noise ratio (SNR) images, even in low-light environments. However, the image quality captured by TDI cameras may be affected by many degradation factors, including jitter, charge transfer time mismatch, and drift angle. Moreover, compared with the single-line push-broom cameras and area gaze cameras used in remote sensing, the degradation of a TDI camera may accumulate during the charge accumulation process. In this article, we present a fast imaging simulation method for remote sensing TDI cameras based on image resampling that can accurately simulate the degraded image quality affected by different degradation factors. The simulated image pairs can provide a sufficient dataset for modern supervised-learning image restoration methods. In addition, we present a novel network, containing a row-attention block and a row-encoder block, to resolve the row-variant blur in degraded images. We test our image restoration method on the simulated degraded image datasets and real images; the results show that the proposed method can effectively restore degraded images. Our restoration method does not rely on auxiliary information detected by high-frequency sensors or multispectral bands, and it achieves better results than other blind restoration methods.
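The accumulation effect described above can be illustrated in a few lines: ideal TDI charge transfer cancels the nominal line motion, so what remains is the residual jitter mis-registering each stage before summation. A minimal NumPy/SciPy sketch, where the stage count and jitter magnitude are illustrative assumptions and the paper's resampling pipeline is considerably more detailed:

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def tdi_accumulate(scene, stages=16, jitter_std=0.3, rng=None):
    """Sum `stages` exposures; ideal charge transfer cancels the nominal
    push-broom motion, so each stage is displaced only by residual jitter."""
    rng = np.random.default_rng(rng)
    acc = np.zeros_like(scene)
    for _ in range(stages):
        jitter = rng.normal(0.0, jitter_std, size=2)   # (row, col) error in px
        acc += subpixel_shift(scene, jitter, order=1, mode="nearest")
    return acc / stages   # row-variant blur grows with jitter and stage count

blurred = tdi_accumulate(np.random.rand(128, 128))
```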
- Computational Optics for Mobile Terminals in Mass Production. Shiqi Chen, Ting Lin, Huajun Feng, and 3 more authors. IEEE Transactions on Pattern Analysis and Machine Intelligence, Apr 2023
Correcting the optical aberrations and the manufacturing deviations of cameras is a challenging task. Due to the limitation on volume and the demand for mass production, existing mobile terminals cannot rectify optical degradation. In this work, we systematically construct the perturbed lens system model to illustrate the relationship between the deviated system parameters and the spatial frequency response (SFR) measured from photographs. To further address this issue, an optimization framework is proposed based on this model to build proxy cameras from the machining samples’ SFRs. Engaging with the proxy cameras, we synthesize data pairs, which encode the optical aberrations and the random manufacturing biases, for training the learning-based algorithms. Although convolutional neural networks have recently shown promising results in correcting aberrations, they are hard to generalize to stochastic machining biases. Therefore, we propose a dilated Omni-dimensional dynamic convolution (DOConv) and implement it in post-processing to account for the manufacturing degradation. Extensive experiments, which evaluate multiple samples of two representative devices, demonstrate that the proposed optimization framework accurately constructs the proxy camera, and the dynamic processing model is well adapted to the manufacturing deviations of different cameras, realizing perfect computational photography. The evaluation shows that the proposed method bridges the gap between optical design, system machining, and the post-processing pipeline, shedding light on the joint of image signal reception (lens and sensor) and image signal processing (ISP).
- Hyperspectral image reconstruction based on the fusion of diffracted rotation blurred and clear images. Hao Xu, Haiquan Hu, Shiqi Chen, and 4 more authors. Optics and Lasers in Engineering, Jan 2023
Because it overcomes the imaging-speed and bulky-volume problems of traditional hyperspectral imaging systems, the recently proposed compact, snapshot hyperspectral imaging system with diffracted rotation has attracted a lot of interest. Due to the severe degradation of the diffracted rotation blurred image, the restored hyperspectral image (HSI) suffers from a lack of spatial detail information and spectral accuracy. To improve the quality of the reconstructed HSI, we present a joint imaging system of diffractive imaging and clear imaging, as well as a convolutional neural network (CNN) based method with two input branches for HSI reconstruction. In the reconstruction network, we develop a feature extraction block (FEB) to extract the features of the two input images separately. Subsequently, a double residual block (DRB) is designed to fuse and reconstruct the extracted features. Experimental results show that HSI with high spatial resolution and spectral accuracy can be reconstructed. Our method outperforms the state-of-the-art methods in terms of quantitative metrics and visual quality.
- Direct distortion prediction method for AR-HUD dynamic distortion correction. Fangzheng Yu, Nan Xu, Shiqi Chen, and 5 more authors. Appl. Opt., Jul 2023
Dynamic distortion is one of the most critical factors affecting the experience of automotive augmented reality head-up displays (AR-HUDs). A wide range of views and the extensive display area result in extraordinarily complex distortions. Most existing neural-network-based methods first obtain distorted images and then derive the predistorted data for training. This paper proposes a distortion prediction framework based on a neural network. It directly trains the network with the distorted data, realizing dynamic adaptation for AR-HUD distortion correction and avoiding errors in coordinate interpolation. Additionally, we predict the distortion offsets instead of the distortion coordinates and present a field of view (FOV)-weighted loss function based on the spatial-variance characteristic to further improve the prediction accuracy of distortion. Experiments show that our methods improve the prediction accuracy of AR-HUD dynamic distortion without increasing the network complexity or data processing overhead.
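The FOV-weighted loss is the most self-contained idea here: errors are penalized more heavily toward the edge of the field, where distortion varies fastest. The exact weighting is not given in the abstract, so the radius-based scheme below is only a plausible PyTorch instance, not the paper's formula.

```python
import torch

def fov_weighted_mse(pred_offsets, gt_offsets, coords, alpha=1.0):
    """MSE on predicted distortion offsets, up-weighted toward the FOV edge.
    coords: (N, 2) view coordinates normalized to [-1, 1]."""
    radius = coords.norm(dim=1)            # 0 at center, ~sqrt(2) at a corner
    weight = 1.0 + alpha * radius          # assumed linear-in-radius weighting
    err = ((pred_offsets - gt_offsets) ** 2).sum(dim=1)
    return (weight * err).mean()

coords = torch.rand(1024, 2) * 2 - 1
loss = fov_weighted_mse(torch.rand(1024, 2), torch.rand(1024, 2), coords)
```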
- Snapshot hyperspectral imaging based on equalization designed DOE. Nan Xu, Hao Xu, Shiqi Chen, and 6 more authors. Opt. Express, Jun 2023
Hyperspectral imaging attempts to capture distinctive information in the spatial and spectral domains of a target. Over the past few years, hyperspectral imaging systems have developed toward lighter and faster designs. In phase-coded hyperspectral imaging systems, a better coding aperture design can improve spectral accuracy. Using wave optics, we propose an equalization-designed phase-coded aperture to achieve the desired equalized point spread functions (PSFs), which provide richer features for subsequent image reconstruction. For image reconstruction, our proposed hyperspectral reconstruction network, CAFormer, achieves better results than state-of-the-art networks with less computation by substituting self-attention with channel attention. Our work revolves around the equalization design of the phase-coded aperture and optimizes the imaging process from three aspects: hardware design, reconstruction algorithm, and PSF calibration. It brings compact snapshot hyperspectral technology closer to practical application.
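The efficiency claim rests on swapping self-attention for channel attention, whose cost scales with the channel count rather than with the square of the pixel count. A minimal squeeze-and-excitation-style PyTorch sketch of channel attention follows; CAFormer's actual block may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: O(C^2) cost instead of the O((HW)^2)
    cost of spatial self-attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))     # global average pool -> (B, C)
        return x * w[:, :, None, None]       # reweight each channel

attn = ChannelAttention(64)
out = attn(torch.rand(2, 64, 32, 32))
```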
- Mobile image restoration via prior quantization. Shiqi Chen, Jingwen Zhou, Menghao Li, and 2 more authors. Pattern Recognition Letters, Sep 2023
In mobile-terminal photography, image degradation is a multivariate problem, where the spectrum of the scene, the lens imperfections, the sensor noise, and the field of view together contribute to the result. Besides eliminating it at the hardware level, the post-processing system, which utilizes various prior information, is significant for correction. However, due to the content differences among priors, a pipeline that directly aligns these factors shows limited efficiency and unoptimized restoration. Here, we propose a prior quantization model to correct the degradation introduced in the image formation pipeline. To integrate the multivariate messages, we encode various priors into a latent space and quantize them with learnable codebooks. After quantization, the prior codes are fused with the image restoration branch to realize targeted optical degradation correction. Moreover, we propose a comprehensive synthetic flow to acquire data pairs at relatively low computational overhead. Comprehensive experiments demonstrate the flexibility of the proposed method and validate its potential to accomplish targeted restoration for mass-produced mobile terminals. Furthermore, our model promises to analyze the influence of various priors and the degradation of devices, which is helpful for joint software-hardware design.
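One standard way to realize "encode priors into a latent space and quantize them with learnable codebooks" is VQ-VAE-style nearest-neighbor lookup with straight-through gradients; the sketch below shows that mechanism, though the paper's exact design may differ.

```python
import torch
import torch.nn as nn

class PriorQuantizer(nn.Module):
    """Nearest-neighbor codebook lookup with a straight-through estimator."""
    def __init__(self, num_codes=256, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                         # z: (B, N, dim) prior features
        codes = self.codebook.weight.expand(z.size(0), -1, -1)
        d = torch.cdist(z, codes)                 # (B, N, num_codes) distances
        idx = d.argmin(dim=-1)
        zq = self.codebook(idx)                   # quantized prior codes
        # Straight-through: copy gradients from zq back to the encoder output.
        zq_st = z + (zq - z).detach()
        commit = ((zq.detach() - z) ** 2).mean() + ((zq - z.detach()) ** 2).mean()
        return zq_st, idx, commit

q = PriorQuantizer()
zq, idx, commit_loss = q(torch.rand(2, 100, 64))
```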
- DR-UNet: dynamic residual U-Net for blind correction of optical degradation. Jinwen Zhou, Shiqi Chen, Qi Li, and 2 more authors. In Conference on Infrared, Millimeter, Terahertz Waves and Applications (IMT2022), May 2023
Due to the size limitations of mobile devices, their optical designs can hardly reach the level of professional equipment. Corresponding restoration methods are then needed to compensate for this shortage. However, most models are still static, which limits their ability to represent images. To tackle this problem, we propose a plug-and-play deformable residual block for efficiently sampling the spatially related features at different scales. Moreover, considering that optical degradation is closely correlated with the field of view (FOV), we introduce a FOV attention block based on omni-dimensional dynamic convolution to integrate spatial features. On this basis, we further propose a novel optical degradation correction model called DR-UNet. It is constructed on an encoder-decoder structure to capture multiscale information, along with several context blocks. By correcting the optical degradation in images from coarse to fine, we finally obtain high-quality, degradation-free images. Extensive results demonstrate that our method can compete favorably with state-of-the-art methods.
2022
- Non-blind optical degradation correction via frequency self-adaptive and finetune tactics. Ting Lin, Shiqi Chen*, Huajun Feng, and 3 more authors. Opt. Express, Jun 2022
In mobile photography applications, limited volume constrains the diversity of optical design. In addition to the narrow space, the deviations introduced in mass production cause random bias to the real camera. In consequence, these factors introduce spatially varying aberration and stochastic degradation into the physical formation of an image. Many existing methods obtain excellent performance on one specific device but are not able to quickly adapt to mass production. To address this issue, we propose a frequency self-adaptive model to restore realistic features of the latent image. The restoration is mainly performed in the Fourier domain, and two attention mechanisms are introduced to match features between the Fourier and spatial domains. Our method applies a lightweight network, without requiring modification when the field of view (FoV) changes. Considering the manufacturing deviations of a specific camera, we first pre-train a simulation-based model, then finetune it with additional manufacturing error, which greatly decreases the time and computational overhead in implementation. Extensive results verify the promising applications of our technique for being integrated with existing post-processing systems.
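A minimal PyTorch sketch of what "restoration mainly performed in the Fourier domain" can look like: transform features with an FFT, modulate them with a learnable per-frequency filter, and transform back. The block name, residual connection, and filter parameterization are assumptions; the paper's actual module differs in detail.

```python
import torch
import torch.nn as nn

class FrequencyAdaptiveBlock(nn.Module):
    """FFT -> learnable pointwise complex filter -> inverse FFT, with a
    residual path so the spatial content is preserved."""
    def __init__(self, channels, height, width):
        super().__init__()
        w_freq = width // 2 + 1              # rfft2 keeps half the spectrum
        self.filter = nn.Parameter(
            torch.ones(channels, height, w_freq, dtype=torch.cfloat))

    def forward(self, x):                    # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")
        out = torch.fft.irfft2(spec * self.filter, s=x.shape[-2:], norm="ortho")
        return x + out

blk = FrequencyAdaptiveBlock(16, 64, 64)
y = blk(torch.rand(2, 16, 64, 64))
```

Because each frequency bin gets its own weight, such a block adapts naturally to blur kernels whose spectra vary, which is one motivation for frequency-domain restoration.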
- SRDiff: Single image super-resolution with diffusion probabilistic models. Haoying Li, Yifan Yang, Shiqi Chen, and 5 more authors. Neurocomputing, Feb 2022
Single image super-resolution (SISR) aims to reconstruct high-resolution (HR) images from given low-resolution (LR) images. It is an ill-posed problem because one LR image corresponds to multiple HR images. Recently, learning-based SISR methods have greatly outperformed traditional methods. However, PSNR-oriented, GAN-driven, and flow-based methods suffer from over-smoothing, mode collapse, and large model footprint issues, respectively. To solve these problems, we propose a novel SISR diffusion probabilistic model (SRDiff), which is the first diffusion-based model for SISR. SRDiff is optimized with a variant of the variational bound on the data likelihood. Through a Markov chain, it can provide diverse and realistic super-resolution (SR) predictions by gradually transforming Gaussian noise into a super-resolution image conditioned on an LR input. In addition, we introduce residual prediction to the whole framework to speed up model convergence. Our extensive experiments on facial and general benchmarks (CelebA and DIV2K datasets) show that (1) SRDiff can generate diverse SR results with rich details and achieve competitive performance against other state-of-the-art methods, when given only one LR input; (2) SRDiff is easy to train with a small footprint (the word “footprint” in this paper denotes “model size”, i.e., the number of model parameters); (3) SRDiff can perform flexible image manipulation operations, including latent space interpolation and content fusion.
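The two SRDiff-specific ingredients named above, conditioning on the LR input and predicting the residual between the HR image and its upsampled LR counterpart, slot directly into a standard DDPM noise-prediction objective. A minimal PyTorch sketch under those assumptions (the paper conditions through an LR encoder; the plain concatenation and toy denoiser below are simplifications):

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def srdiff_style_loss(denoiser, hr, lr):
    """DDPM noise-prediction loss on the residual x0 = HR - upsample(LR)."""
    lr_up = F.interpolate(lr, size=hr.shape[-2:], mode="bicubic",
                          align_corners=False)
    x0 = hr - lr_up                                    # residual target
    t = torch.randint(0, T, (hr.size(0),))
    a = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise       # forward diffusion
    pred = denoiser(torch.cat([x_t, lr_up], dim=1), t) # predict the noise
    return F.mse_loss(pred, noise)

# Toy denoiser: any network mapping (B, 6, H, W) -> (B, 3, H, W) works here.
conv = torch.nn.Conv2d(6, 3, 3, padding=1)
denoiser = lambda x, t: conv(x)
loss = srdiff_style_loss(denoiser, torch.rand(4, 3, 64, 64),
                         torch.rand(4, 3, 16, 16))
```

Predicting the residual rather than the full image gives the chain a near-zero-mean target, which is why it speeds up convergence.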
- Epistemic-Uncertainty-Based Divide-and-Conquer Network for Single-Image Super-Resolution. Jiaqi Yang, Shiqi Chen, Qi Li, and 3 more authors. Electronics, Nov 2022
The introduction of convolutional neural networks (CNNs) into single-image super-resolution (SISR) has resulted in remarkable performance over the last decade. There is a contradiction in SISR between indiscriminate processing and the different processing difficulties in different regions, leading to the need for locally differentiated processing in SR networks. In this paper, we propose an epistemic-uncertainty-based divide-and-conquer network (EU-DC) to address this problem. Firstly, we build an image-gradient-based divide-and-conquer network (IG-DC) that utilizes gradient-based division to separate degraded images into easy and hard processing regions. Secondly, we model the IG-DC’s epistemic uncertainty map (EUM) by using Monte Carlo dropout and, thus, measure the output confidence of the IG-DC. The lower the output confidence, the harder the region is for the IG-DC to process. The EUM-based division is generated by quantizing the EUM into two levels. Finally, the IG-DC is transformed into an EU-DC by substituting the gradient-based division with EUM-based division. Our extensive experiments demonstrate that the proposed EU-DC achieves better reconstruction performance than multiple state-of-the-art SISR methods in terms of both quantitative and visual quality.
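Monte Carlo dropout, the mechanism behind the EUM, amounts to keeping dropout active at inference and reading the variance across stochastic forward passes as epistemic uncertainty. A minimal PyTorch sketch with a toy model; the final thresholding mirrors the two-level quantization described above, though the paper's exact threshold is not specified here.

```python
import torch
import torch.nn as nn

def epistemic_uncertainty_map(model, x, n_samples=20):
    """MC dropout: several stochastic forward passes; per-pixel variance = EUM."""
    model.train()           # enables dropout; for real models, freeze BN stats
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    model.eval()
    return samples.var(dim=0)            # (B, C, H, W) uncertainty map

# Toy SR-like model with dropout between convolutions.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Dropout2d(0.2), nn.Conv2d(16, 3, 3, padding=1))
eum = epistemic_uncertainty_map(model, torch.rand(1, 3, 32, 32))
hard_mask = (eum.mean(dim=1, keepdim=True) > eum.mean()).float()  # two levels
```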
2021
- Extreme-Quality Computational Imaging via Degradation Framework. Shiqi Chen, Huajun Feng, Keming Gao, and 2 more authors. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct 2021
To meet the space limitations of optical elements, free-form surfaces or high-order aspherical lenses are adopted in mobile cameras to compress volume. However, the application of free-form surfaces also introduces the problem of image quality mutation. Existing model-based deconvolution methods are inefficient in dealing with degradation that shows a wide range of spatial variation over regions. Meanwhile, deep learning techniques in low-level and physics-based vision suffer from a lack of accurate data. To address this issue, we develop a degradation framework to estimate the spatially variant point spread functions (PSFs) of mobile cameras. Given extreme-quality digital images as input, the proposed framework generates degraded images that share a common domain with real-world photographs. Supplied with the synthetic image pairs, we design a Field-Of-View shared kernel prediction network (FOV-KPN) to perform spatial-adaptive reconstruction on real degraded photos. Extensive experiments demonstrate that the proposed approach achieves extreme-quality computational imaging and outperforms the state-of-the-art methods. Furthermore, we illustrate that our technique can be integrated into existing postprocessing systems, resulting in significantly improved visual quality.
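A kernel prediction network performs spatial-adaptive reconstruction by predicting a small filter for every pixel and applying it locally. The PyTorch sketch below shows the application step; the stand-in predictor and kernel size are assumptions, and FOV-KPN additionally shares kernels within fields of view rather than predicting them fully independently.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def apply_predicted_kernels(img, kernels, k=5):
    """Filter each pixel with its own predicted k x k kernel.
    img: (B, C, H, W); kernels: (B, k*k, H, W), softmax-normalized per pixel."""
    b, c, h, w = img.shape
    patches = F.unfold(img, k, padding=k // 2)       # (B, C*k*k, H*W)
    patches = patches.view(b, c, k * k, h, w)
    weights = kernels.softmax(dim=1).unsqueeze(1)    # (B, 1, k*k, H, W)
    return (patches * weights).sum(dim=2)            # (B, C, H, W)

# Stand-in predictor; FOV-KPN would condition this on field-of-view cues.
predictor = nn.Conv2d(3, 25, 3, padding=1)
img = torch.rand(1, 3, 64, 64)
restored = apply_predicted_kernels(img, predictor(img))
```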
- Optical Aberrations Correction in Postprocessing Using Imaging Simulation. Shiqi Chen, Huajun Feng, Dexin Pan, and 3 more authors. ACM Trans. Graph. (Post.Rec in SIGGRAPH 2022), Sep 2021
As the popularity of mobile photography continues to grow, considerable effort is being invested in the reconstruction of degraded images. Due to the spatial variation in optical aberrations, which cannot be avoided during the lens design process, recent commercial cameras have shifted some of these correction tasks from optical design to postprocessing systems. However, without engaging with the optical parameters, these systems only achieve limited correction for aberrations. In this work, we propose a practical method for recovering the degradation caused by optical aberrations. Specifically, we establish an imaging simulation system based on our proposed optical point spread function model. Given the optical parameters of the camera, it generates the imaging results of these specific devices. To perform the restoration, we design a spatial-adaptive network model on synthetic data pairs generated by the imaging simulation system, eliminating the overhead of capturing training data by a large amount of shooting and registration. Moreover, we comprehensively evaluate the proposed method in simulations and experimentally with a customized digital-single-lens-reflex camera lens and HUAWEI HONOR 20, respectively. The experiments demonstrate that our solution successfully removes spatially variant blur and color dispersion. When compared with the state-of-the-art deblurring methods, the proposed approach achieves better results with a lower computational overhead. Moreover, the reconstruction technique does not introduce artificial texture and is convenient to transfer to current commercial cameras.