Editing the kernel in SmartDeblur
I was asked to explain how I created the “blur model” (a.k.a. kernel) for the placard.
SmartDeblur uses a point spread function image as the convolution kernel. This small grayscale image is displayed on the left.
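For anyone curious about what that kernel actually does: the blur model is a convolution of the sharp image with that small grayscale PSF image. Here is a minimal sketch of the idea in Python. This is not SmartDeblur's code, and the 15×15 streak kernel is just a made-up example:

```python
# Sketch of the blur model: observed image = sharp image convolved with a PSF.
# The PSF here is an invented 15x15 horizontal streak, not a real estimate.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))          # stand-in for the sharp image

psf = np.zeros((15, 15))
psf[7, 3:12] = 1.0                    # a short streak as the point spread function
psf /= psf.sum()                      # PSF kernels are normalized to sum to 1

blurred = convolve2d(sharp, psf, mode="same", boundary="symm")
print(blurred.shape)                  # (64, 64): same size as the input
```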
“Analyze Blur” generates a kernel automatically. The result depends heavily on which area you select (or on selecting nothing and letting it use the whole image). With this small cropped image, the initial result is good enough to read, maybe, a few words:
The original image is blurred by camera motion, not out of focus. The software cannot assume that and tries to compute the best kernel for the general case, where motion blur and out-of-focus blur may be combined. Here, though, the kernel is simply the path the camera traced during the exposure: a short curve. So the automatically generated kernel can be simplified by removing all the clutter:
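In code, “removing the clutter” amounts to something like keeping only the dominant streak and zeroing out the weak pixels, then renormalizing. A rough sketch of that idea, where the function name and the 25% threshold are my own assumptions, not SmartDeblur internals:

```python
# Hedged sketch: keep only pixels near the kernel's peak (the motion path),
# zero out the weak clutter, and renormalize so the kernel still sums to 1.
import numpy as np

def clean_kernel(kernel: np.ndarray, keep_fraction: float = 0.25) -> np.ndarray:
    """Zero out pixels below a fraction of the peak, then renormalize."""
    cleaned = kernel.copy()
    cleaned[cleaned < keep_fraction * cleaned.max()] = 0.0
    total = cleaned.sum()
    if total > 0:
        cleaned /= total
    return cleaned

# Example: a noisy estimate with a faint diagonal streak plus random clutter.
rng = np.random.default_rng(1)
estimate = rng.random((15, 15)) * 0.1
for i in range(10):
    estimate[2 + i, 2 + i] = 1.0      # the "true" short curve
print(clean_kernel(estimate).round(2))
```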
From there, progress is slower and comes down to trial and error: add a few pixels, test, repeat. This can be done in SmartDeblur’s small kernel editor dialog, or the kernel can be saved as a PNG image (in v2.3 PRO, not HOME) and edited in any image editor.
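If you edit the PNG outside SmartDeblur, each round trip looks roughly like this. The file names are placeholders, and Richardson–Lucy deconvolution here just stands in for whatever algorithm SmartDeblur actually uses; it is only a way to sanity-check a hand-edited kernel with free tools:

```python
# One "edit, test, repeat" iteration outside SmartDeblur (file names are placeholders).
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

blurred = img_as_float(io.imread("placard_blurred.png", as_gray=True))
kernel = img_as_float(io.imread("kernel_edited.png", as_gray=True))  # the PNG you edited

kernel /= kernel.sum()                            # re-normalize after hand editing
restored = richardson_lucy(blurred, kernel, 30)   # 30 iterations; tune by eye

io.imsave("placard_test.png", (np.clip(restored, 0, 1) * 255).astype(np.uint8))
```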
A good-looking final result:
I wasn’t sure I could explain it. Curt Collins told me, “You can explain. Not quitting is step one.” That’s exactly how it was done.