As of ImgSource v2.1, you have a choice of five methods to resize RGB images: ISResizeImage, ISResizeImageBicubic, ISSimpleResampleImage, ISDecimateRGB and ISFilterResampleImage. Each of these is designed to resize an image with a different speed vs. quality or accuracy vs. perceived quality tradeoff. (There are 8-bit and even 1-bit versions of some of these - their operation is just a variation on the basic ideas I'll describe here.)
In signal processing circles, the act of resizing a stream of data is known as resampling. This means that you take "samples" of the data source at specific intervals in such a way that you can reproduce the original signal with a different amount of data, within the constraints of your system. In image processing, we're generally trying to resize a two dimensional image in such a way that the resized image resembles the original as much as possible, given a limited amount of time and memory. The goal when reducing images is to preserve the character of the source data, but with fewer points. The goal when enlarging images is to invent data to fill in the holes where there is no source data.
In these examples, I'll assume you have "Ns" samples in the source and you want to pick "Nd" samples from it. And, I'll discuss these in one dimensional terms. The leap to two dimensions is left as an exercise for the reader.
Nearest-neighbor : ISSimpleResampleImage
The easiest way to resample anything is to pick every Floor(Ns/Nd)th sample out of the source. Ex. : if Ns = 10 and Nd = 5 (you want your output data to be 1/2 the size of the source), you will pick every 2nd data point from the source. The math is simple, and so this is a very quick operation. As with every process that is both quick and simple, the results are terrible. This method ignores too much of the source data to accurately reproduce it.
Even worse, when enlarging, you end up picking the same data points again and again. If Ns = 5 and Nd = 20, you end up using a new source pixel every four destination pixels. This leads to the dreaded "fat pixel" effect.
Strictly speaking, when enlarging with this method, you are in fact preserving the data in the original image exactly: every source point is used in the destination image. And, you are not introducing any new data into the new image. But, the results are not visually pleasing. More sophisticated techniques
can get rid of the fat pixel effect by making educated guesses as to what the data points between source pixel N and N+1 would be.
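In one dimension, the whole technique fits in a few lines. Here's a sketch in Python (the function name is my own; ImgSource itself is a C library) showing both the reduction case and the fat-pixel enlargement case:

```python
def nearest_neighbor(src, nd):
    """Resample src to nd points by picking the nearest source sample.
    Illustrative sketch only - not ImgSource's actual code."""
    ns = len(src)
    # Each destination index i maps back to Floor(i * Ns / Nd) in the source.
    return [src[i * ns // nd] for i in range(nd)]

# Reducing: Ns = 10, Nd = 5 -> every 2nd sample survives.
print(nearest_neighbor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 5))   # [0, 2, 4, 6, 8]

# Enlarging: Ns = 5, Nd = 20 -> each source sample repeats 4 times,
# producing the "fat pixel" effect.
print(nearest_neighbor([10, 20, 30, 40, 50], 20))
```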
Bi-linear resampling : ISResizeImage
This is a simple technique. Here's how it works:
For every destination pixel, find the location of the ideal source pixel by using the Ns/Nd ratio, as above. But this time, don't use the Floor function; preserve the fractional information. Ex. if Ns = 5 and Nd = 15 and we're trying to find the 2nd point in the destination, the ideal source point is at 0.666 (2 * (5 / 15) = 0.666). Because we can't address data at fractional locations, we'll do a weighted average of the two data points closest to our ideal location: 0 and 1. The closer neighbor gets the larger weight: our destination pixel is then (1.0 - 0.666) of the data at point 0 plus 0.666 of the data at point 1.
This technique is called linear interpolation. If you were to graph the two data points used in the calculation above, with a line between them, the ideal
data value will be somewhere on that line. Bi-linear means that you do it twice - once horizontally and once vertically. The math is the same in either direction.
Using this technique on image data gives results that are far better than the nearest neighbor technique. The images lose the fat pixel effect and you can almost believe that the resizing code has somehow recovered data that was missing from the source - almost.
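A one-dimensional sketch of the idea (Python, with names of my own invention); running it once per axis gives you the bi-linear version:

```python
def linear_resample(src, nd):
    """Resample src to nd points using linear interpolation (1-D).
    Illustrative sketch only - not ImgSource's actual code."""
    ns = len(src)
    out = []
    for i in range(nd):
        pos = i * ns / nd                 # ideal (fractional) source location
        lo = min(int(pos), ns - 1)        # nearest sample at or below pos
        hi = min(lo + 1, ns - 1)          # nearest sample above (clamped)
        frac = pos - lo
        # Weighted average: the closer neighbor gets the larger weight.
        out.append((1.0 - frac) * src[lo] + frac * src[hi])
    return out

# The worked example from the text: Ns = 5, Nd = 15, destination point 2
# lands at source position 0.666, so it blends points 0 and 1.
print(linear_resample([0, 30, 60, 90, 120], 15)[2])   # close to 20.0
```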
Bi-cubic resampling : ISResizeImageBicubic
Cubic interpolation is similar to linear interpolation in that you use your existing data to come up with an equation to model that data so that you can make an educated guess at what other points in that data set will be. Linear interpolation uses two data points to generate a simple line, and you pick your
destination data from that line; in cubic interpolation, you use four data points to generate a 3rd degree equation (of the form ax^3 + bx^2 + cx + d) and pick your destination data from the curve. This makes the math much more complicated: so complicated that I don't understand it well enough to explain it more than that.
In linear interpolation, you use two source data points to find one destination point. In cubic interpolation, you use four source points to find
one destination point. So, each destination point is created from twice as many source points, and a cubic equation can model more complex mathematical relationships in the data than a simple line can.
The image results from bi-cubic interpolation (cubic interpolation performed in both X and Y directions) are often sharper than bi-linear interpolation because of the higher accuracy possible with 3rd degree equations and the higher number of source pixels used. The downside is that the math required to do this is complex and time consuming. You need to be sure the time spent generating the image is worth the slight visual improvement.
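There are several families of cubic interpolation; the Catmull-Rom spline is one common choice in image resizing (I'm not claiming it's the exact formula ImgSource uses). In one dimension, given four consecutive samples, it evaluates the curve segment that runs between the middle two:

```python
def cubic_interp(p0, p1, p2, p3, t):
    """Catmull-Rom cubic through four consecutive samples, evaluated at
    fraction t in [0, 1]. The curve segment runs between p1 and p2, so
    t = 0 returns p1 and t = 1 returns p2. Illustrative sketch only."""
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

# The curve passes through the middle two samples exactly...
print(cubic_interp(1, 2, 3, 4, 0.0))   # 2.0
print(cubic_interp(1, 2, 3, 4, 1.0))   # 3.0
# ...and on data that is already linear, it reduces to linear interpolation.
print(cubic_interp(0, 1, 2, 3, 0.5))   # 1.5
```

Note where the extra accuracy comes from: the two outer samples (p0 and p3) bend the curve so it follows the trend of the data, instead of just drawing a straight line between p1 and p2.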
Area-averaging : ISDecimateRGB
This technique is quite different from the resampling techniques described above. In those techniques, the number of source points per destination point was fixed by the requirements of the math : 2 for linear resampling, 4 for cubic resampling. The intent of this algorithm is also different. The resampling techniques are designed to reproduce or mimic the source data as closely as possible; area averaging is designed to find the average data value in a given
range of data.
The way it works is simple: divide the source data into Nd regions. Each destination point is the average data value from the corresponding source region. This is a very intuitive way to reduce the size of a data stream. Every data point in the source contributes equally to the output, and a destination pixel is the average of all source pixels that it represents.
Unfortunately, this technique can only be used when reducing images. If Ns < Nd, the source regions end up being less than one pixel each and the technique degenerates into a nearest-neighbor equivalent. Also, this technique isn't good when you are only slightly reducing an image: the regions become too small to do much more than echo single source data points. But, for large reductions, area-averaging can give results that are equal to or better than any of the resampling techniques described above.
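A one-dimensional sketch (Python; to keep it simple, it assumes Ns is an exact multiple of Nd - a real implementation has to handle fractional regions):

```python
def area_average(src, nd):
    """Reduce src to nd points; each output is the mean of its source
    region. Illustrative sketch only - assumes len(src) is an exact
    multiple of nd."""
    step = len(src) // nd                 # source points per region
    return [sum(src[i * step:(i + 1) * step]) / step for i in range(nd)]

# Ns = 4, Nd = 2: every source point contributes to exactly one output.
print(area_average([1, 3, 5, 7], 2))   # [2.0, 6.0]
```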
Filter resampling : ISFilterResampleImage
The ISFilterResampleImage function provides many different resizing methods, but all of them are handled by a single resizing "engine". This engine works something like this: for each output pixel, a number of neighboring source pixels are combined in a weighted average to form the output pixel. Each method uses a different number of neighbor pixels, and the weighting for these pixels is determined by a filtering function; each method uses a different filtering function, ranging in complexity from trivial to sinister.
The results from these different methods vary dramatically. Some of the methods are probably useless for all purposes, while some of the others give results that far exceed any other ImgSource resizing method. But, as with all things, the better the results, the longer the calculation takes.
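To make the "engine" idea concrete, here's a one-dimensional sketch using a triangle (tent) filter - one of the simplest filtering functions. This is my own illustration, not ImgSource's actual engine; among other simplifications, a production engine would also widen the filter in proportion to the reduction ratio:

```python
def filter_resample(src, nd, radius=1.0):
    """Resample src to nd points with a triangle (tent) filter.

    Each output pixel is a weighted average of the source pixels within
    `radius` of its ideal source position; weights fall off linearly
    with distance. Illustrative sketch only."""
    ns = len(src)
    scale = ns / nd
    out = []
    for i in range(nd):
        center = (i + 0.5) * scale - 0.5          # ideal source position
        lo = max(0, int(center - radius))         # neighborhood bounds,
        hi = min(ns - 1, int(center + radius) + 1)  # clamped to the data
        total = weight_sum = 0.0
        for j in range(lo, hi + 1):
            # Tent weight: 1.0 at the center, 0.0 at distance >= radius.
            w = max(0.0, 1.0 - abs(j - center) / radius)
            total += w * src[j]
            weight_sum += w
        out.append(total / weight_sum)
    return out

# With no size change, every output lands exactly on a source pixel,
# so the filter leaves the data untouched.
print(filter_resample([1.0, 2.0, 3.0, 4.0], 4))
```

Swapping the tent weight for a different filtering function (cubic, windowed sinc, etc.) is what distinguishes the engine's methods from one another.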
For most purposes, the simple methods (ISResizeImage, ISResizeImageBicubic, ISDecimateRGB) will give respectable results. If you can afford to wait, ISFilterResampleImage can give outstanding results.
Smaller Animals Software, Inc