
The Story of the Fast Fourier Transform

About Jean Baptiste Joseph Fourier and his Insight

Joseph Fourier was low-born and orphaned at the age of nine, but his talents were recognized and he received a good education. He accompanied Napoleon Bonaparte to Egypt in 1798, where his job was to lecture the troops in mathematics. He eventually became secretary of the French Academy of Sciences in 1822. That was the year he published his treatise on the analytical theory of heat. In it, he asserted that any function can be transformed into a summation of sines and that the process is reversible. It was a profound and fundamental insight. His insight, now called the Fourier Transform, is one of the most versatile tools scientists and engineers have. It is ubiquitous. One example, JPEG image compression, is widely used on the internet to reduce file sizes by as much as 90 percent. The Fourier Transform was used to compress Fourier's image on the previous page.

Hearing is another example. The inner ears of mammals and birds perform, in effect, a Fourier Transform: the cochlea separates sound into its frequency components. Ears are not at all like microphones. The brain receives the Fourier Transform of the acoustical signal.

Oh, and by the way, Fourier is also given credit for discovering the greenhouse effect.

The need for speed

The need to perform Fourier analysis really fast was apparent by 1965. The need arose from seismology, crystallography, sonar and many other applications. There was also a realization that FTNMR (Fourier Transform Nuclear Magnetic Resonance) and FTIR (Fourier Transform Infrared) spectrometers could be useful.

If you were a scientific computer programmer in 1965 and you were told to evaluate the discrete Fourier transform for thousands of samples, you would quickly realize that your 1965 computer was going to choke.

[Figure: the discrete Fourier transform, X(k) = sum from n = 0 to N-1 of x(n)·e^(-2πikn/N)]

The problem is that the computational effort increases with the square of N. A thousand-sample sequence requires a million complex multiplies, and a 1965 computer could do only a few hundred complex multiplies per second.

A clever algorithm saves the day

Two mathematicians, J. W. Cooley and J. W. Tukey, presented a paper in 1965 describing an algorithm that radically reduced the computational effort. The Cooley-Tukey algorithm came to be known as the fast Fourier transform, or FFT. Its value was quickly realized all over the world. Corporations were founded to exploit the FFT. One of those corporations, Federal Scientific, used the FFT to analyze the famous 18½-minute gap in the recordings made by Richard Nixon in 1973. Federal Scientific was a subsidiary of Nicolet, which was itself an early adopter of the FFT.

Apparently, Cooley and Tukey themselves did not realize the value of the FFT right away. They did not order reprints of their paper; they already had closets full of reprints of their previous papers. They were shocked when the reprint requests poured in.

It turns out that Cooley and Tukey were not the first to discover the FFT. Carl Friedrich Gauss was first. He worked it out around 1805, in Latin, though it was not published until after his death.

[Figure: flow chart of the FFT "butterfly" stages]

The algorithm might as well be called "divide and conquer." You start with a large number of 2-point Fourier transforms. These are combined into 4-point arrays, then 8-point arrays, then 16-point arrays, and so forth until you are left with a single array; the flow chart above may help explain it. The number of passes through the data is log base 2 of N, and each pass touches N points, so the total number of complex calculations is N log base 2 of N rather than N squared. For 8192 points the FFT is roughly 600 times faster than the direct method; for a million points it is roughly 50,000 times faster.

If you need to do Fourier analysis, you also need the FFT to get the job done in a reasonable amount of time. Here is a fragment of what the FFT looks like in C. The key to understanding it is in understanding the pointers.

while (pass != size) {             /* fft done? */
     /* compute the next pointers */
     if (++index & pass) {         /* done with a cell? */
          index += pass;           /* yes */
          if (index == size) {     /* done with pass? */
               dtheta >>= 1;       /* yes */
          }
     }
     /* ... butterfly arithmetic elided ... */
}

A few things should be noted.

• The sine and cosine terms are pre-calculated and stored in a table.

• The number of data points (size) must be a power of two.

• The data are stored in bit-reversed sequence. That means, for example, that the contents of locations 10110000 and 00001101 are swapped. (The result of the FFT then comes out in normal sequence.) (There are other ways to arrange it.)

• The data are manipulated in place. A separate array is not needed to store the result, which halves the memory requirement.

The FFT, in its most natural form, starts with complex data and produces a complex result. However, some applications produce only real data, with the imaginary part set to zero. In that case the result of an FFT is an array with complex-conjugate symmetry, which means that half of the array can easily be reconstructed from the other half. This fact can be exploited to cut memory requirements and computation time roughly in half.