Imaging Simulation

How can we synthesize the imaging results of an optical system?

Abstract:

As mobile photography continues to grow in popularity, considerable effort is being invested in reconstructing degraded images. Because the spatially variant optical aberrations introduced during lens design cannot be avoided, recent commercial cameras have shifted some of this correction from optical design to postprocessing systems. However, without engaging the optical parameters, these systems achieve only limited aberration correction. In this work, we propose a practical method for recovering the degradation caused by optical aberrations. Specifically, we establish an imaging simulation system based on our proposed optical point spread function (PSF) model. Given the optical parameters of a camera, it generates the imaging results of that specific device. For restoration, we train a spatially adaptive network on synthetic data pairs generated by the imaging simulation system, eliminating the overhead of capturing training data through extensive shooting and registration. Moreover, we comprehensively evaluate the proposed method both in simulation and experimentally, with a customized digital single-lens reflex (DSLR) camera lens and a HUAWEI HONOR 20, respectively. The experiments demonstrate that our solution successfully removes spatially variant blur and color dispersion. Compared with state-of-the-art deblurring methods, the proposed approach achieves better results with lower computational overhead. Furthermore, the reconstruction does not introduce artificial texture and is easy to transfer to current commercial cameras.

We present a practical method for optical aberration correction. We compute the spatially variant PSFs (shown on the left at 10× resampling) of a specific lens with the proposed optical PSF model. The PSFs are used in imaging simulation to generate training data for the proposed postprocessing chain. Our approach significantly improves image quality (shown on the right) and is easy to deploy to new devices.

Background:

Different optical systems exhibit different aberration behavior. Our aim is to find a general way to correct the various degradations introduced by optical aberrations.

Typical optical aberration degradations appearing in photos. From (a) to (c): blurring (Canon EOS 80D + EF 50mm f/1.8 STM), chromatism (Canon EOS 80D + EF 50mm f/1.2L USM), and the “ringing effect” (HUAWEI HONOR 20).

Method:

The goal of our work is to propose a practical method for recovering the degradation caused by optical aberrations. To achieve this,

  • we first develop an imaging simulation system for dataset generation, which simulates the imaging results of a specific camera and generates a large training corpus for deep learning methods.

  • then, to eliminate the blurring, displacement, and chromatic aberrations caused by lens design, we design a spatially adaptive network architecture and insert it into the postprocessing chain.

In Imaging Simulation, we

  • Built the optical PSF model based on ray tracing and coherent superposition, considering both the geometric propagation and the wave properties of light.
Each position uniformly sampled on the wavefront distribution acts as a spherical wavelet, which coherently superposes on the image plane. In other words, the spherical wavelets interfere with each other when reaching the image plane. In the lower-right corner, we show the PSFs at different fields of a customized DSLR camera lens, computed by ray tracing and coherent superposition.
  • Engaging the invertible ISP pipeline, constructed an imaging simulation framework that accurately synthesizes the effects of optical aberrations, which is more reliable than commercial optical design programs (e.g., Zemax) and other state-of-the-art algorithms.
The input image is first converted to the energy domain. Then, the assembled optical PSFs are applied to the energy-domain image in a partitioned-convolution manner. Finally, the degraded image is transformed into raw Bayer data and converted to an aberrated image. The details of the energy-domain transformation and its inverse are shown at the bottom.
We compare our result with the image generated by Zemax and with data captured by the real camera.
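The coherent-superposition step above can be sketched as a Huygens–Fresnel summation: every sampled pupil point emits a spherical wavelet with a phase delayed by the wavefront error, and the wavelets are summed coherently at each image-plane point. The grid sizes, wavelength, pupil diameter, and focal length below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def psf_coherent_superposition(opd, aperture, wavelength=550e-9,
                               pupil_size=5e-3, focal=50e-3,
                               image_half=20e-6, n_img=65):
    """Sum spherical wavelets from sampled pupil points onto the image plane.

    opd      : optical path difference map over the exit pupil (meters)
    aperture : boolean mask of valid pupil samples
    """
    n = opd.shape[0]
    coords = np.linspace(-pupil_size / 2, pupil_size / 2, n)
    px, py = np.meshgrid(coords, coords)        # pupil sample positions
    xi = np.linspace(-image_half, image_half, n_img)
    field = np.zeros((n_img, n_img), dtype=complex)
    k = 2 * np.pi / wavelength
    for ix, x in enumerate(xi):
        for iy, y in enumerate(xi):
            # distance from every pupil sample to this image point
            r = np.sqrt((px - x) ** 2 + (py - y) ** 2 + focal ** 2)
            # each sample is a spherical wavelet; add them coherently,
            # including the aberration phase k * OPD
            wavelet = aperture * np.exp(1j * k * (r + opd)) / r
            field[iy, ix] = wavelet.sum()
    psf = np.abs(field) ** 2                     # intensity = |field|^2
    return psf / psf.sum()                       # normalize total energy to 1

# toy example: half a wave of defocus over a 32x32 pupil grid
n = 32
u = np.linspace(-1, 1, n)
ux, uy = np.meshgrid(u, u)
mask = ux ** 2 + uy ** 2 <= 1.0
opd = 0.5 * 550e-9 * (ux ** 2 + uy ** 2)
psf = psf_coherent_superposition(opd * mask, mask)
```

The double loop makes the wavelet interference explicit; a practical implementation would vectorize it or use a Fourier propagator instead.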
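The partitioning convolution of the simulation framework can be sketched as follows: convert the image to the linear energy domain, convolve each tile with the PSF for that field position, and convert back. Here a simple 2.2 gamma stands in for the paper's full invertible ISP, and placeholder Gaussian PSFs that widen with field of view stand in for the computed optical PSFs.

```python
import numpy as np

def centered_pad(psf, shape):
    """Embed a small PSF in a full-size array with its center at (0, 0)."""
    out = np.zeros(shape)
    kh, kw = psf.shape
    out[:kh, :kw] = psf
    return np.roll(out, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def conv_same(img, psf):
    """Circular FFT convolution, a cheap stand-in for 'same' convolution."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(centered_pad(psf, img.shape))))

def gaussian_psf(sigma, size=7):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def simulate(img_srgb, tile=32, gamma=2.2):
    """Apply spatially variant blur tile by tile in the energy domain."""
    energy = img_srgb ** gamma                  # sRGB -> linear energy
    out = np.empty_like(energy)
    h, w = energy.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # blur widens with normalized distance from the image center
            cy, cx = y + tile / 2, x + tile / 2
            r = np.hypot(cy - h / 2, cx - w / 2) / np.hypot(h / 2, w / 2)
            psf = gaussian_psf(sigma=0.5 + 2.0 * r)
            out[y:y + tile, x:x + tile] = conv_same(
                energy[y:y + tile, x:x + tile], psf)
    return np.clip(out, 0, 1) ** (1 / gamma)    # back to sRGB

img = np.random.rand(64, 64)
degraded = simulate(img)
```

A production version would overlap and blend adjacent tiles to hide seams and would also mosaic to raw Bayer data before re-running the ISP, as the framework above does.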

In Aberration Correction, we

  • Designed a novel spatially adaptive CNN architecture, which introduces a FOV input, deformable ResBlocks, and a context block to adapt to spatially variant degradation.
Our spatially adaptive CNN architecture for optical aberration correction.

Experiments:

Trained only on synthetic data, the proposed deep-learning method achieves excellent restoration on natural scenes. First, we evaluate the performance of the proposed model:

Real-shot image restoration comparison. The first and second rows present the test results of the DSLR camera lens. The third and fourth rows show the results of the HUAWEI HONOR 20.

In the practical deployment,

Aberration correction vs. the HUAWEI ISP. Experimental results of the aberration-correction pipeline and of the HUAWEI ISP are magnified on both sides of the image. We show patches at different FOVs for validation.

Where is the limit of the correction? We use MTF evaluations to answer this question.

After correction, our method greatly enhances the MTF of the optical system.
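The MTF evaluation above reduces to a Fourier transform of the PSF: the MTF is the magnitude of the optical transfer function, normalized to 1 at zero frequency. The Gaussian PSFs below are only stand-ins for the measured before/after PSFs; a narrower post-correction PSF keeps more contrast at every spatial frequency.

```python
import numpy as np

def mtf_from_psf(psf):
    """MTF = |FFT of the PSF|, normalized so the DC term equals 1."""
    otf = np.fft.fftshift(np.fft.fft2(psf / psf.sum()))
    return np.abs(otf)

def gaussian_psf(sigma, size=33):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

mtf_blurred = mtf_from_psf(gaussian_psf(3.0))    # wide PSF before correction
mtf_corrected = mtf_from_psf(gaussian_psf(1.0))  # narrow PSF after correction
# away from DC, the narrower PSF yields a uniformly higher MTF
```

Real measurements would use a slanted-edge or point-source target rather than an analytic PSF, but the frequency-domain comparison is the same.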

Our Main Observation:

When the forward simulation is comprehensive, the imaging results of a camera can be accurately predicted. The PSF calculation and the energy transformation pipeline are of equal importance. In this way, trained only on synthetic data, the proposed deep-learning method achieves excellent restoration on natural scenes.

References

2021

  1. Optical Aberrations Correction in Postprocessing Using Imaging Simulation
    Shiqi Chen, Huajun Feng, Dexin Pan, and 3 more authors
    ACM Trans. Graph. (Post.Rec in SIGGRAPH 2022), Sep 2021