The Wavelet Transform Development and Images

In this post, we will look at how the DWT works for images and what its implications are. A brief look at why wavelets were developed in the first place should also give more insight into the compression technology used in JPEG2000.

We know that Fourier analysis transforms our view of a signal from the time domain to the frequency domain. A serious drawback, however, is that all time information is lost once we are in the frequency domain, which makes the analysis poorly suited to most real-life non-stationary signals. This was addressed by the Short-Time Fourier Transform (STFT), which gives frequency-versus-time information by windowing the signal to form a spectrogram. The problem with this approach is choosing the window size, which stays fixed over the entire signal: whatever length we pick, we lose critical information wherever the frequency content varies sharply.
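To see that fixed-window trade-off numerically, here is a minimal sketch using SciPy's spectrogram routine; the chirp-plus-burst test signal and the window length of 256 samples are my own illustrative choices, not anything from the original post.

```python
import numpy as np
from scipy import signal

# Illustrative test signal: a slow sine plus a short high-frequency burst.
fs = 1000                                # sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)           # slowly varying 10 Hz component
x[1000:1050] += np.sin(2 * np.pi * 200 * t[1000:1050])   # 50 ms burst at 200 Hz

# One fixed window length for the whole signal: a long window resolves the
# 10 Hz line well but smears the 50 ms burst in time, while a short window
# does the opposite. The STFT cannot adapt the window per frequency band.
f, frames, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)
print(Sxx.shape)   # (frequency bins, time frames)
```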

Thus arose the need for wavelet analysis, which employs a variable-size window: a long window where precise low-frequency information is needed, and a shorter window to localize high-frequency events in time. The wavelets themselves are waveforms of limited duration whose integral is zero.

Wavelet analysis involves breaking a signal up into shifted and scaled versions of the above-mentioned wavelet. The result is a time-scale representation whose magnitude reflects the correlation between the wavelet and a section of the signal. A low-scale, i.e. compressed, wavelet captures rapidly changing details at higher frequencies, while a high-scale, i.e. stretched, wavelet captures slowly changing coarse features at lower frequencies.
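To make the "shifted and scaled" idea concrete, here is a brute-force NumPy sketch of a continuous wavelet transform: for each scale, a stretched copy of a wavelet is correlated with the signal at every shift. The Ricker ("Mexican hat") wavelet, the scale grid and the test signal are all illustrative assumptions on my part.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet: limited duration, integral (nearly) zero."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_by_correlation(x, scales, wavelet=ricker):
    """CWT by direct correlation: one row per scale, one column per shift."""
    out = np.empty((len(scales), len(x)))
    for i, a in enumerate(scales):
        w = wavelet(min(10 * int(a), len(x)), a)   # stretched copy of the wavelet
        out[i] = np.correlate(x, w, mode='same')   # slide it along the signal
    return out

# Low scales (compressed wavelet) respond to the fast oscillation,
# high scales (stretched wavelet) to the slow one.
t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
coeffs = cwt_by_correlation(x, scales=np.arange(1, 31))
print(coeffs.shape)   # (30 scales, 1024 positions)
```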

Now, calculating this “local” Continuous Wavelet Transform at every point and scale generates far too much data, so we choose a subset of these scales and positions. This is the Discrete Wavelet Transform (DWT). It turns out that if the scales and positions are based on powers of two, i.e. dyadic, the analysis is much more efficient and just as accurate. The practical approach is a simple filtering algorithm, as in the sketch below: a low-pass filter generates the high-scale (long) approximation coefficients, and a high-pass filter generates the low-scale (short) detail coefficients. The resulting coefficients are downsampled by 2 so as to keep the total amount of data the same.
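As a concrete instance of that filter-and-downsample step, the sketch below runs one level of a DWT by hand using the Haar filter pair; the Haar choice and the toy input are just the simplest illustration I could pick (JPEG2000 itself uses longer biorthogonal 5/3 and 9/7 filters), but the structure is the same.

```python
import numpy as np

# Haar analysis filters: the low-pass averages, the high-pass differences.
g = np.array([1.0, 1.0]) / np.sqrt(2)    # low-pass  -> approximation
h = np.array([1.0, -1.0]) / np.sqrt(2)   # high-pass -> detail

def haar_dwt_1d(x):
    """One level of the Haar DWT: filter, then keep every second sample."""
    approx = np.convolve(x, g)[1::2]     # high-scale, slowly varying content
    detail = np.convolve(x, h)[1::2]     # low-scale, rapidly varying content
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
cA, cD = haar_dwt_1d(x)
print(cA)   # 4 approximation coefficients
print(cD)   # 4 detail coefficients; 4 + 4 = 8, same total as the input
```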

In the case of 2-dimensional data (an image), say a matrix A, the rows are first wavelet transformed (the columns could just as well be transformed first, it doesn’t matter) to give GrA (approximation) and HrA (detail). These are then transformed column-wise to give GcGrA (top left), GcHrA (top right, horizontal details), HcGrA (bottom left, vertical details) and HcHrA (bottom right, diagonal details). The resulting coefficients are shown in tree form for the image below.

Wavelet Decomposition of Image
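A single level of this row-then-column transform is available off the shelf; the sketch below uses PyWavelets (my choice of library, not something the post depends on) with the Haar wavelet to produce the four sub-bands described above and arrange them in the quadrant layout of the figure.

```python
import numpy as np
import pywt   # PyWavelets

# A small test "image" (an 8x8 gradient); any 2-D array works here.
A = np.arange(64, dtype=float).reshape(8, 8)

# One decomposition level: rows and columns are filtered and downsampled.
# cA: approximation, cH: horizontal details, cV: vertical details,
# cD: diagonal details.
cA, (cH, cV, cD) = pywt.dwt2(A, 'haar')
print(cA.shape, cH.shape, cV.shape, cD.shape)   # four 4x4 sub-bands

# Arranging the sub-bands in quadrants (approximation top left, horizontal
# top right, vertical bottom left, diagonal bottom right) reproduces the
# tree-form picture shown above.
quadrants = np.block([[cA, cH], [cV, cD]])
print(quadrants.shape)                          # 8x8, same size as A
```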

Well, that's it. To sum up, the coefficients are filtered parts of the signal, with the filters designed so that together they capture the entire frequency range (e.g. digital frequencies 0 to 0.5 and 0.5 to 1).
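One quick way to see that the two filters tile the whole band is to look at their frequency responses; the sketch below (Haar pair again, an assumption on my part, with frequencies normalised so that 1 is the Nyquist rate, matching the 0 to 1 convention above) shows the low-pass magnitude concentrated towards 0, the high-pass towards 1, and their power responses summing to a constant.

```python
import numpy as np

g = np.array([1.0, 1.0]) / np.sqrt(2)    # low-pass (approximation) filter
h = np.array([1.0, -1.0]) / np.sqrt(2)   # high-pass (detail) filter

# Frequency responses on a dense grid; axis normalised so Nyquist = 1,
# i.e. the digital-frequency range 0 to 1 used in the text.
w = np.linspace(0, np.pi, 512)
G = g[0] + g[1] * np.exp(-1j * w)
H = h[0] + h[1] * np.exp(-1j * w)
freq = w / np.pi

# The low-pass magnitude peaks near 0, the high-pass near 1, and the two
# power responses sum to a constant, so together they cover the full band.
print(freq[np.argmax(np.abs(G))])                        # ~0.0
print(freq[np.argmax(np.abs(H))])                        # ~1.0
print(np.allclose(np.abs(G)**2 + np.abs(H)**2, 2.0))     # True
```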
