Noise, Image Reconstruction with Noise EE367/CS448I: Computational Imaging and Display stanford.edu/class/ee367 Lecture 10
Gordon Wetzstein Stanford University
Topics
• Fixed pattern noise
• Gaussian noise
  • Image reconstruction using MAP
• Poisson noise
  • Richardson-Lucy algorithm
  • RL + TV prior
• The SNR with three types of noise sources
What’s a Pixel?
A pixel is a photon-to-electron converter → photoelectric effect!
source: Molecular Expressions, Wikipedia
What’s a Pixel?
• microlens: focuses light onto the photodiode
• color filter: selects the color channel
• quantum efficiency: ~50%
• fill factor: fraction of the surface area used for light gathering
• photon-to-charge conversion and analog-to-digital conversion (ADC) include noise!
source: Molecular Expressions
ISO (“film speed”)
• Sensitivity of the sensor to light – digital gain
source: bobatkins.com
Noise
• Noise is (usually) bad!
• Many sources of noise: heat, electronics, amplifier gain, photon-to-electron conversion, pixel defects, read noise, …
• Let’s start with something simple: fixed pattern noise
Fixed Pattern Noise
• Dead or “hot” pixels, dust, pixel sensitivity variations, …
• Remove with dark-frame calibration:
  I = (I_captured − I_dark) / (I_white − I_dark)
• Apply to the RAW image, not the JPEG (which is nonlinear)!
source: Emil Martinec
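A minimal sketch of this calibration in NumPy (the function name and the eps guard against division by zero are mine, not from the slides); it assumes the three frames are linear RAW data already loaded as arrays:

```python
import numpy as np

def dark_frame_calibrate(I_captured, I_dark, I_white, eps=1e-8):
    """Dark-frame / flat-field correction:
       I = (I_captured - I_dark) / (I_white - I_dark).
    All inputs must be linear RAW frames, not gamma-encoded JPEGs."""
    I_captured = I_captured.astype(np.float64)
    I_dark = I_dark.astype(np.float64)
    I_white = I_white.astype(np.float64)
    # eps guards against dead pixels where I_white == I_dark
    return (I_captured - I_dark) / np.maximum(I_white - I_dark, eps)
```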
Noise
• Beyond fixed pattern noise, different noise sources follow different statistical distributions; these two are crucial:
  • Gaussian
  • Poisson
Gaussian Noise
• Thermal, read, and amplifier noise
• Additive, signal-independent, zero-mean
[figure: clean image + Gaussian noise = noisy image]
Gaussian Noise
• With i.i.d. (independent and identically distributed) additive Gaussian noise:
  b = x + η,   η ~ N(0, σ²)
  (the latent image x itself is deterministic: x ~ N(x, 0))
• We want to find x
• The measurement is then normally distributed:
  b ~ N(x, σ²),   p(b | x, σ) = (1/√(2πσ²)) · exp(−(b − x)² / (2σ²))
• Bayes’ rule:
  p(x | b) = p(b | x) · p(x) / p(b)
Gaussian Noise - MAP
• With i.i.d. Gaussian noise: b = x + η, b ~ N(x, σ²),
  p(b | x, σ) = (1/√(2πσ²)) · exp(−(b − x)² / (2σ²))
• Bayes’ rule: p(x | b) = p(b | x) · p(x) / p(b)
• Maximum-a-posteriori estimation:
  x_MAP = argmax_x p(x | b) = argmax_x p(b | x) · p(x) = argmin_x (b − x)² / (2σ²) − log p(x)
• p(x) carries some information, a prior, for the image
Gaussian Noise – MAP: “Flat” Prior
• Trivial solution with a flat prior p(x) = 1 (not useful in practice):
  x_MAP = argmax_x p(b | x) = b
  i.e., the MAP estimate is just the noisy measurement itself.
Gaussian Noise – MAP: Self-Similarity Prior
• Gaussian “denoisers” like non-local means and other self-similarity priors actually solve this MAP problem.
General Self-Similarity Prior
• Generic proximal operator for a function f(x):
  prox_{f,ρ}(v) = argmin_x f(x) + (ρ/2) ‖x − v‖₂²
• This serves as the proximal operator for some image prior.
General Self-Similarity Prior
• We can use self-similarity as a general image prior (not just for denoising)
• A Gaussian denoiser acts as the proximal operator for the self-similarity prior, with noise level σ² = λ/ρ, i.e., σ = √(λ/ρ) (the h parameter in most NLM implementations is this standard deviation)
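For some priors the proximal operator has a simple closed form; as a concrete sketch (the ℓ1 prior is my example here, not the slides' self-similarity prior), here is the prox of f(x) = λ‖x‖₁, which is elementwise soft-thresholding:

```python
import numpy as np

def prox_l1(v, thresh):
    """Proximal operator of f(x) = thresh * ||x||_1:
       argmin_x thresh * ||x||_1 + 0.5 * ||x - v||_2^2
    which has the closed form sign(v) * max(|v| - thresh, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)
```

For the self-similarity prior there is no such closed form; instead, one runs a Gaussian denoiser such as non-local means on v with its h parameter set to √(λ/ρ), as noted on the slide.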
Image Reconstruction with Gaussian Noise
• With i.i.d. (independent and identically distributed) Gaussian noise:
  b = Ax + η,   η ~ N(0, σ²)
  Ax is deterministic (Ax ~ N(Ax, 0)), so b ~ N(Ax, σ²)
  p(b | x, σ) ∝ exp(−‖b − Ax‖₂² / (2σ²))
• Bayes’ rule: p(x | b) = p(b | x) · p(x) / p(b)
• Maximum-a-posteriori estimation:
  x_MAP = argmax_x p(b | x) · p(x) = argmin_x (1/(2σ²)) ‖b − Ax‖₂² − log p(x)
• This is regularized least squares (use ADMM)
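For a quadratic prior, the MAP problem even has a closed form. A sketch under my own simplifying assumptions (Tikhonov/ℓ2 regularizer −log p(x) = λ‖x‖², A = circular convolution with a PSF h, solved per frequency via the FFT; the general TV-regularized case needs ADMM as the slide says):

```python
import numpy as np

def tikhonov_deconv(b, h, lam=1e-2):
    """Closed-form regularized least squares for convolutional A:
       x = argmin ||h * x - b||^2 + lam * ||x||^2
    solved per frequency: X = conj(H) * B / (|H|^2 + lam)."""
    H = np.fft.fft2(h, s=b.shape)   # transfer function of the PSF
    B = np.fft.fft2(b)
    X = np.conj(H) * B / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```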
Scientific Sensors
• Scientific CMOS & CCD sensors, e.g., Andor iXon Ultra 897: cooled to −100 °C
• Reduce pretty much all noise, except for photon noise
What is Photon (or Shot) Noise?
• Fluctuations in the light emitted by a source, i.e., due to the particle nature of light (emission)
• Fluctuations when the incident light is converted to charge, i.e., due to variability in the number of electrons (detection)
• The same holds for re-emission → cascaded Poisson processes are also described by a Poisson process [Teich and Saleh 1998]
Photon Noise
• Noise is signal dependent!
• For N measured photo-electrons on average, the photon count k follows a Poisson distribution:
  f(k; N) = Nᵏ e⁻ᴺ / k!
• mean is N, variance is σ² = N, standard deviation is σ = √N
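A quick numerical check of these statistics (the photon count and sample size below are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                          # mean number of photo-electrons
samples = rng.poisson(N, size=100_000)

mean_est = samples.mean()           # ≈ N
std_est = samples.std()             # ≈ sqrt(N) = 100
```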
Photon Noise - SNR
• Signal-to-noise ratio (mean / σ):
  SNR = N / √N = √N
• N photons: σ = √N;  2N photons: σ = √(2N)
• SNR is nonlinear in the signal!
[figure: SNR in dB as a function of N, for N between 1,000 and 10,000 photons]
Photon Noise - SNR
• SNR = N / √N = √N: nonlinear in the signal
[figure: images at increasing photon counts; source: Wikipedia]
Maximum Likelihood Solution for Poisson Noise
• Image formation: bᵢ ~ Pois((Ax)ᵢ)
• Probability of measurement i:
  p(bᵢ | x) = (Ax)ᵢ^{bᵢ} · e^{−(Ax)ᵢ} / bᵢ!
• Joint probability of all M measurements (use the notation trick of applying products, powers, and divisions element-wise):
  p(b | x) = ∏ᵢ₌₁ᴹ (Ax)ᵢ^{bᵢ} · e^{−(Ax)ᵢ} / bᵢ!
Maximum Likelihood Solution for Poisson Noise
• Log-likelihood function:
  L(x) = Σᵢ bᵢ log (Ax)ᵢ − (Ax)ᵢ − log(bᵢ!)
• Gradient:
  ∇L(x) = Aᵀ (b / (Ax)) − Aᵀ 1
Maximum Likelihood Solution for Poisson Noise
• Richardson-Lucy algorithm: iterative approach to ML estimation for Poisson noise
• Simple idea:
  1. At the solution, the gradient will be zero
  2. When converged, further iterations will not change the estimate, i.e., x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾
Richardson-Lucy Algorithm
• Equate the gradient to zero:
  Aᵀ (b / (Ax)) − Aᵀ 1 = 0
• Rearrange so that 1 is on one side of the equation:
  Aᵀ (b / (Ax)) / (Aᵀ 1) = 1
• Set x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ · Aᵀ (b / (Ax⁽ᵏ⁾)) / (Aᵀ 1), which leaves x unchanged at a fixed point
Richardson-Lucy Algorithm
• RL multiplicative update rule:
  x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ · Aᵀ (b / (Ax⁽ᵏ⁾)) / (Aᵀ 1)
• For any multiplicative update scheme:
  • start with a positive initial guess (e.g., random values)
  • apply the iteration scheme
  • future updates are guaranteed to remain positive
  • the residual always gets smaller
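The update rule above can be sketched in a few lines for a convolutional A (my own assumptions: circular convolution by a PSF via the FFT, a small eps to guard divisions, and a constant positive initial guess):

```python
import numpy as np

def conv2(x, H):
    """Circular convolution of x with the kernel whose FFT is H."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * H))

def richardson_lucy(b, h, n_iter=50, eps=1e-12):
    """RL multiplicative updates x <- x * A^T(b / (Ax)) / (A^T 1),
    with A = circular convolution by the PSF h."""
    H = np.fft.fft2(h, s=b.shape)
    Ht = np.conj(H)                       # adjoint A^T = correlation
    x = np.full_like(b, b.mean())         # positive initial guess
    At1 = conv2(np.ones_like(b), Ht)      # A^T 1
    for _ in range(n_iter):
        ratio = b / (conv2(x, H) + eps)
        x = x * conv2(ratio, Ht) / (At1 + eps)
    return x
```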
Richardson-Lucy Algorithm - Deconvolution
[figure: blurry & noisy measurement vs. RL deconvolution result]
Richardson-Lucy Algorithm
• What went wrong?
• Poisson deconvolution is a tough problem; without priors it’s pretty much hopeless
• Let’s try to incorporate the one prior we have learned: total variation
Richardson-Lucy Algorithm + TV
• Log-likelihood function with a total variation prior (weight λ):
  L(x) = Σᵢ [ bᵢ log (Ax)ᵢ − (Ax)ᵢ ] − λ TV(x)
• Gradient:
  ∇L(x) = Aᵀ (b / (Ax)) − Aᵀ 1 − λ ∇TV(x)
Richardson-Lucy Algorithm + TV
• Gradient of the anisotropic TV term:
  ∇TV(x) = −div( ∇x / |∇x| )
• This is “dirty”: possible division by 0 where the image gradient vanishes!
Richardson-Lucy Algorithm + TV
• Following the same logic as RL, the RL+TV multiplicative update rule [Dey et al. 2004]:
  x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ / (1 − λ div(∇x⁽ᵏ⁾ / |∇x⁽ᵏ⁾|)) · Aᵀ (b / (Ax⁽ᵏ⁾)) / (Aᵀ 1)
• 2 major problems & solution “hacks”:
  1. still a possible division by 0 when the image gradient is zero → set the fraction to 0 if the gradient is 0
  2. the multiplicative update may become negative! → only work with (very) small values for λ
[figure: results for RL, RL+TV λ=0.005, RL+TV λ=0.025]
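The TV correction factor and its division-by-zero "hack" can be sketched as follows (the function name, the forward-difference discretization, and the extra positivity clamp in the last line are my choices, not from the slides; for anisotropic TV, ∇x/|∇x| reduces to sign(∇x) per component):

```python
import numpy as np

def tv_correction_factor(x, lam, eps=1e-8):
    """Factor 1 / (1 - lam * div(grad x / |grad x|)) from the RL+TV update,
    with hack 1: normalized gradient set to 0 where the gradient is ~0."""
    gx = np.roll(x, -1, axis=1) - x                      # forward differences
    gy = np.roll(x, -1, axis=0) - x
    nx = np.where(np.abs(gx) > eps, np.sign(gx), 0.0)    # hack 1: avoid 0/0
    ny = np.where(np.abs(gy) > eps, np.sign(gy), 0.0)
    div = (nx - np.roll(nx, 1, axis=1)) + (ny - np.roll(ny, 1, axis=0))
    # extra clamp keeps the factor positive even for larger lam
    return 1.0 / np.maximum(1.0 - lam * div, eps)
```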
(A Dirty but Easy Approach to) Richardson-Lucy with a TV Prior
[figure: measurements, log residual, and mean squared error over iterations]
Signal-to-Noise Ratio (SNR)
  SNR = μ/σ = (mean pixel value) / (standard deviation of pixel value)
      = P·Q_e·t / √(P·Q_e·t + D·t + N_r²)   (signal / noise)
  P = incident photon flux (photons/pixel/sec)
  Q_e = quantum efficiency
  t = exposure time (sec)
  D = dark current (electrons/pixel/sec), including hot pixels
  N_r = read noise (rms electrons/pixel), including fixed pattern noise
Signal-to-Noise Ratio (SNR)
  SNR = μ/σ = P·Q_e·t / √(P·Q_e·t + D·t + N_r²)   (signal / noise)
  • under the square root, P·Q_e·t is the signal-dependent (shot noise) term, while D·t + N_r² is signal-independent
  P = incident photon flux (photons/pixel/sec), Q_e = quantum efficiency, t = exposure time (sec), D = dark current (electrons/pixel/sec) including hot pixels, N_r = read noise (rms electrons/pixel) including fixed pattern noise
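Plugging numbers into this SNR formula (a sketch; the parameter values below are illustrative, not from the slides):

```python
import math

def sensor_snr(P, Qe, t, D, Nr):
    """SNR = P*Qe*t / sqrt(P*Qe*t + D*t + Nr^2)."""
    signal = P * Qe * t
    noise = math.sqrt(signal + D * t + Nr ** 2)
    return signal / noise

# Shot-noise-limited case (D = 0, Nr = 0): SNR reduces to sqrt(P*Qe*t),
# matching the sqrt(N) photon-noise SNR from earlier in the lecture.
```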
Next: Compressive Imaging
• Single pixel camera [Wakin et al. 2006]
• Compressive hyperspectral imaging
• Compressive light field imaging [Marwah et al. 2013]
References and Further Reading
• https://en.wikipedia.org/wiki/Shot_noise
• L. Lucy, 1974. “An iterative technique for the rectification of observed distributions”. Astron. J. 79, 745–754.
• W. Richardson, 1972. “Bayesian-based iterative method of image restoration”. J. Opt. Soc. Am. 62(1), 55–59.
• N. Dey, L. Blanc-Feraud, C. Zimmer, Z. Kam, J. Olivo-Marin, J. Zerubia, 2004. “A deconvolution method for confocal microscopy with total variation regularization”. IEEE Symposium on Biomedical Imaging: Nano to Macro, 1223–1226.
• M. Teich, E. Saleh, 1998. “Cascaded stochastic processes in optics”. Traitement du Signal 15(6).
• Please read the lecture notes, especially for the “clean” ADMM derivation for solving the maximum likelihood estimation of Poisson reconstruction with the TV prior!