Fast Fourier transform
The fast Fourier transform (mostly abbreviated to FFT) is an algorithm for the efficient calculation of the discrete Fourier transform (DFT). With it, a time-discrete signal can be broken down into its frequency components and thus analyzed.
Similarly, there is the inverse fast Fourier transform (IFFT) for the inverse discrete Fourier transform. The IFFT uses the same algorithms, but with conjugated coefficients.
The FFT has numerous applications (see below) in engineering, the natural sciences, and applied mathematics. It is also used in mobile radio technologies such as UMTS and LTE and in wireless data transmission, for example in WLAN radio networks.
The FFT belongs to the divide-and-conquer methods: in contrast to direct calculation, previously computed intermediate results can be reused, saving arithmetic operations. The best-known method is attributed to James Cooley and John W. Tukey, who published it in 1965. Strictly speaking, a form of the algorithm had already been devised by Carl Friedrich Gauss in 1805, who used it to calculate the trajectories of the asteroids (2) Pallas and (3) Juno. A variant of the algorithm by Carl Runge was first published in 1903 and 1905. In addition, restricted forms of the algorithm were developed several times before Cooley and Tukey, e.g. by Irving John Good (1960). After Cooley and Tukey there have also been numerous proposals for improvements and variations, for example by Georg Bruun, C. M. Rader, and Leo I. Bluestein.
Informal description of the algorithm (Cooley and Tukey)
Cooley and Tukey's algorithm is a classic divide-and-conquer method. The prerequisite for its use is that the number of support points (sampling points) is a power of two.
The algorithm is based on the observation that the calculation of a DFT of size 2n can be broken down into two calculations of a DFT of size n (one over the vector of entries with even indices, one over the entries with odd indices), whose two partial results are then combined back into a Fourier transform of size 2n.
Since a DFT of half the length requires only a quarter of the complex multiplications and additions of the original DFT, and since, depending on the length of the input vector, this rule can be applied repeatedly, the recursive application of this basic idea finally allows a calculation in O(N log N) time. To save trigonometric operations, the FFT can additionally exploit the symmetry properties of the roots of unity in the Fourier matrix.
Cooley and Tukey's algorithm
The discrete Fourier transform (DFT) of a vector $x$ of dimension $N = 2n$ is:

 $f_m = \sum_{k=0}^{N-1} x_k \, e^{-\frac{2\pi i}{N} mk}$, $m = 0, \dots, N-1$.

The entries with even indices are noted as

 $x'_k = x_{2k}$, $k = 0, \dots, n-1$,

and their DFT of dimension $n$ as $f'_m$.
Accordingly, the entries with odd indices are noted as

 $x''_k = x_{2k+1}$, $k = 0, \dots, n-1$,

with DFT $f''_m$.
Then it follows that:

 $f_m = f'_m + e^{-\frac{2\pi i}{N} m} f''_m$ and $f_{m+n} = f'_m - e^{-\frac{2\pi i}{N} m} f''_m$, $m = 0, \dots, n-1$.

With the calculation of $f'_m$ and $f''_m$ ($m = 0, \dots, n-1$), both $f_m$ and $f_{m+n}$ are determined. This decomposition practically halves the computational effort.
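The even/odd decomposition described above can be sketched numerically, e.g. in Python. This is a minimal illustration, not an optimized implementation; the function names are chosen here for clarity:

```python
import cmath

def dft(x):
    """Naive DFT: f_m = sum_k x_k * exp(-2*pi*i*m*k/N)."""
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / N) for k in range(N))
            for m in range(N)]

def dft_via_split(x):
    """DFT of even length N = 2n assembled from the two half-length DFTs."""
    N = len(x)
    n = N // 2
    f_even = dft(x[0::2])  # DFT of the entries with even indices (f'_m)
    f_odd = dft(x[1::2])   # DFT of the entries with odd indices (f''_m)
    out = [0j] * N
    for m in range(n):
        t = cmath.exp(-2j * cmath.pi * m / N) * f_odd[m]
        out[m] = f_even[m] + t      # f_m     = f'_m + w^m * f''_m
        out[m + n] = f_even[m] - t  # f_{m+n} = f'_m - w^m * f''_m
    return out
```

Both functions produce the same spectrum; the split version performs two half-length DFTs plus one combination pass.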
Mathematical description (general case)
In mathematics, the fast discrete Fourier transform is treated in a much more general context:
Let $R$ be a commutative ring with unity. In $R$, let the number $N = 2^n$ be a unit (i.e. invertible); further, let $w$ be a primitive $N$-th root of unity with $w^{N/2} = -1$. For example, in the residue class ring

 $R = \mathbb{Z}/m\mathbb{Z}$ with $m = 2^{dN/2} + 1$, $N = 2^n$, $d$ odd,

the element $w = 2^d$ is such a root of unity; the corresponding FFT is used in the Schönhage–Strassen algorithm.
Then, in the module $R^N$, the discrete Fourier transform

 $\hat{f}_m = \sum_{k=0}^{N-1} w^{mk} f_k$

can be calculated in an optimized way as follows:
First, we split the indices $m$ and $k$ as follows:

 $k = 2k' + k_0$ with $k_0 \in \{0, 1\}$, and $m = m' + \frac{N}{2} m_1$ with $m_1 \in \{0, 1\}$.

So we have the following recursion:

 $\hat{f}_{m'} = \hat{f}'_{m'} + w^{m'} \hat{f}''_{m'}$,
 $\hat{f}_{m' + N/2} = \hat{f}'_{m'} - w^{m'} \hat{f}''_{m'}$,

where $\hat{f}'$ and $\hat{f}''$ denote the DFTs of length $N/2$ (with root of unity $w^2$) of the even- and odd-indexed entries, respectively.
Because of $w^{N/2} = -1$,
we get the discrete Fourier transform $\hat{f}$ from this.
Complexity
In contrast to the DFT, this classic variant of the FFT according to Cooley and Tukey can only be carried out if the length of the input vector is a power of two. The number of sampling points can therefore be, for example, 1, 2, 4, 8, 16, 32, etc. One speaks here of a radix-2 FFT. Other lengths are possible using the alternative algorithms listed below.
From the above recursion, the following recursion equation for the runtime of the FFT results:

 $T(N) = 2\,T\!\left(\tfrac{N}{2}\right) + f(N)$.

The term $f(N)$ describes the effort of multiplying the partial results by a power of the root of unity and adding them: there are $N$ pairs of numbers to add, and $N/2$ numbers are multiplied by roots of unity beforehand. Overall, $f(N)$ is linearly bounded:

 $f(N) \in O(N)$.

With the master theorem, the running time is:

 $T(N) \in O(N \log N)$.
The structure of the data flow can be described by a butterfly graph , which defines the order of the calculation.
Implementation as a recursive algorithm
The direct implementation of the FFT in pseudocode according to the above rule takes the form of a recursive algorithm :
 The array with the input values is passed as a parameter to a function that splits it into two arrays of half the length (one with the values at even indices and one with the values at odd indices).
 These two arrays are then passed to new instances of this function.
 At the end, each function returns the FFT of the array passed to it as a parameter. Before an instance of the function terminates, it combines these two FFTs into a single FFT according to the formula shown above, and returns the result to its caller.
This continues until the argument of a call to the function consists of only a single element (recursion termination): the FFT of a single value is the value itself (it has itself as its constant component, and no other frequencies). The function that receives only a single value as a parameter can therefore return the FFT of this value without any calculation. The function that called it combines the two FFTs of length 1 that it receives back, the function that called that one in turn combines the two 2-point FFTs, and so on.
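The recursive procedure described above can be sketched in Python roughly as follows (a minimal, unoptimized illustration; the function name is chosen here for clarity):

```python
import cmath

def fft_recursive(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return [complex(x[0])]  # recursion termination: FFT of one value is the value itself
    even = fft_recursive(x[0::2])  # FFT of the even-indexed entries
    odd = fft_recursive(x[1::2])   # FFT of the odd-indexed entries
    out = [0j] * N
    for m in range(N // 2):
        # combine according to f_m = f'_m + w^m f''_m, f_{m+N/2} = f'_m - w^m f''_m
        t = cmath.exp(-2j * cmath.pi * m / N) * odd[m]
        out[m] = even[m] + t
        out[m + N // 2] = even[m] - t
    return out
```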
The speed advantage of the FFT compared to the DFT can be estimated well using this algorithm:
 In order to calculate the FFT of a vector of length $N = 2^n$, $n = \log_2 N$ recursion levels are required when using this algorithm. In each level the number of FFTs to be computed doubles while their length is halved, so that in every level (except the trivial last recursion level) exactly $N/2$ complex multiplications and $N$ complex additions are necessary. So the total number of operations is $\tfrac{N}{2} \log_2 N$ complex multiplications and $N \log_2 N$ complex additions.
 In contrast, the direct DFT requires $N^2$ complex multiplications and $N(N-1)$ complex additions for the same input vector.
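The size of this advantage can be made concrete with a small helper, based on the standard operation counts of the radix-2 FFT and the direct DFT (the function names are chosen here for illustration):

```python
from math import log2

def fft_op_counts(N):
    """Complex (multiplications, additions) of the radix-2 FFT; N a power of two."""
    levels = int(log2(N))        # number of combining levels
    return (N // 2) * levels, N * levels

def dft_op_counts(N):
    """Complex (multiplications, additions) of the direct DFT, for comparison."""
    return N * N, N * (N - 1)
```

For N = 1024, the FFT needs 5120 multiplications versus 1,048,576 for the direct DFT.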
Implementation as a nonrecursive algorithm
The implementation as a recursive algorithm is usually not ideal in terms of resource consumption, since the many nested function calls cost computing time and memory for storing the return addresses. In practice, a non-recursive algorithm is therefore used instead; depending on the application, it can be optimized further than the form shown here, which is geared toward ease of understanding:
 In the above algorithm, if the two halves of the array are first swapped with each other, then the two halves of these halves, and so on, the result is the same as if all elements of the array were numbered in ascending order from 0 and then the bit order of the element indices were reversed (bit reversal).
 After the input values have been reordered in this way, the only remaining task is to combine the short FFTs from the last recursion level outward into longer FFTs, e.g. in the form of three nested loops:
 The outermost loop counts through the recursion levels (from 0 to $\log_2(N) - 1$).
 The next loop counts through the FFT sections into which the FFT is still divided at this recursion level.
 The innermost loop counts through the elements within an FFT section.
 In the innermost of these loops, the two samples whose indices differ by half the current section length are always combined via a butterfly graph.
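The three nested loops described above can be sketched in Python as follows (a minimal illustration, not a tuned implementation; the function name is chosen here for clarity):

```python
import cmath

def fft_iterative(x):
    """Non-recursive radix-2 FFT: bit-reversal permutation, then butterfly stages."""
    a = list(map(complex, x))
    N = len(a)
    bits = N.bit_length() - 1
    # Reorder by bit-reversed index (equivalent to the repeated half-swapping).
    for i in range(N):
        j = int(format(i, f'0{bits}b')[::-1], 2)
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Combine sections of length L = 2, 4, ..., N (one pass per recursion level).
    L = 2
    while L <= N:
        w_L = cmath.exp(-2j * cmath.pi / L)  # principal L-th root of unity
        for start in range(0, N, L):         # loop over the FFT sections
            w = 1.0 + 0j
            for p in range(L // 2):          # loop over elements within a section
                u = a[start + p]
                t = w * a[start + p + L // 2]
                a[start + p] = u + t         # butterfly: indices differ by L/2
                a[start + p + L // 2] = u - t
                w *= w_L
        L *= 2
    return a
```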
Alternative forms of FFT
In addition to the FFT algorithm by Cooley and Tukey shown above, also called the radix-2 algorithm, there are a number of other algorithms for the fast Fourier transform. The variants differ in how certain parts of the "naive" algorithm are transformed so that fewer (high-precision) multiplications are necessary. Typically, the reduction in the number of multiplications comes at the cost of an increased number of additions and of intermediate results that must be held in memory simultaneously.
Some further algorithms are shown below in an overview. Details and exact mathematical descriptions including derivations can be found in the literature given below.
Radix4 algorithm
The radix-4 algorithm, and analogously the radix-8 algorithm or, in general, the radix-2^N algorithm, is a further development of the above radix-2 algorithm. The main difference is that the number of data points to be processed must be a power of 4 (or of 2^N, respectively). The processing structure remains the same, except that in the butterfly graph, instead of two data paths, four or eight (in general 2^N) data paths must be linked per element. The advantage is a further reduced computational effort and thus a speed advantage. Compared with the above algorithm by Cooley and Tukey, the radix-4 algorithm requires about 25% fewer multiplications; with the radix-8 algorithm, the number of multiplications is reduced by about 40%.
Disadvantages of this method are the coarser structure and more complex program code. With the radix-4 algorithm, only blocks of lengths 4, 16, 64, 256, 1024, 4096, … can be processed. With the radix-8 algorithm, the restrictions apply analogously.
Winograd algorithm
With this algorithm, only a certain finite set of block lengths N is possible, namely products

 N = N₁ · N₂ · … · N_j with N_j ∈ {2, 3, 4, 5, 7, 8, 9, 16},

where all combinations of pairwise relatively prime factors N_j are permissible. This means that a maximum block length of only 5040 = 16 · 9 · 5 · 7 is possible. In the range up to 5040, the possible values for N lie closer together on the number line than the powers of two, which enables better fine-tuning of the block length. The algorithm is built up from basic DFT blocks whose lengths correspond to the N_j. With this method, the number of multiplications is reduced compared to the radix-2 algorithm, but at the same time the number of required additions increases. In addition, a complex permutation of the data is necessary at the input and output of each DFT, formed according to the rules of the Chinese remainder theorem.
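The admissible block lengths can be enumerated directly from the pairwise-coprime condition. A small sketch, assuming the standard set of Winograd basic block lengths {2, 3, 4, 5, 7, 8, 9, 16}:

```python
from itertools import combinations
from math import gcd, prod

BASE_LENGTHS = [2, 3, 4, 5, 7, 8, 9, 16]  # lengths of the basic DFT blocks

def winograd_lengths():
    """All block lengths formed from pairwise relatively prime basic blocks."""
    lengths = set()
    for r in range(1, len(BASE_LENGTHS) + 1):
        for combo in combinations(BASE_LENGTHS, r):
            # keep only combinations whose factors are pairwise coprime
            if all(gcd(a, b) == 1 for a, b in combinations(combo, 2)):
                lengths.add(prod(combo))
    return sorted(lengths)
```

The largest admissible length is 16 · 9 · 5 · 7 = 5040, matching the maximum stated above.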
In practical implementations, this type of fast Fourier transformation has advantages over the Radix2 method when the microcontroller used for the FFT does not have its own multiplication unit and a great deal of computing time has to be used for the multiplications. In today's signal processors with their own multiplier units, this algorithm is no longer of major importance.
Prime factor algorithm
This FFT algorithm is based on similar ideas as the Winograd algorithm, but the structure is simpler and thus the multiplication effort is higher than with the Winograd algorithm. The main advantage of the implementation lies in the efficient use of the available memory through optimal adaptation of the block length. If a fast multiplying unit is available in a particular application and memory is tight at the same time, this algorithm can be optimal. With a similar block length, the execution time is comparable to that of the Cooley and Tukey algorithm.
Goertzel algorithm
The Goertzel algorithm is a special form for the efficient calculation of individual spectral components. When only a few spectral components (bins) are needed, it is more efficient than the block-based FFT algorithms, which always compute the complete discrete spectrum.
Chirp z transform
 Bluestein's FFT algorithm for data sets of arbitrary size (including prime lengths).
The inverse FFT
The inverse of the discrete Fourier transform (DFT) agrees with the DFT except for a normalization factor of 1/N and a sign change in the exponent. Since the fast Fourier transform is an algorithm for calculating the DFT, this naturally also applies to the IFFT.
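This relationship means a forward FFT routine can compute the inverse transform via conjugation and scaling. A minimal sketch (the function names are chosen here for illustration):

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT (len(x) must be a power of two)."""
    N = len(x)
    if N == 1:
        return [complex(x[0])]
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * N
    for m in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * m / N) * odd[m]
        out[m], out[m + N // 2] = even[m] + t, even[m] - t
    return out

def ifft(X):
    """IFFT via the forward FFT: conjugate, transform, conjugate, scale by 1/N."""
    N = len(X)
    return [v.conjugate() / N for v in fft([v.conjugate() for v in X])]
```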
Applications
Computer algebra
Classic applications of the fast Fourier transform can be found, for example, in computer algebra in connection with the implementation of fast polynomial-processing algorithms. The multiplication of two polynomials can be realized in subquadratic time: first, the coefficient sequences of the two polynomials are transformed by fast Fourier transformation in O(n log n) time; then the Fourier-transformed coefficient sequence of the product polynomial is obtained by componentwise multiplication in O(n) time; finally, this is transformed back in O(n log n) time using a fast inverse Fourier transform. The total running time is in O(n log n) and is therefore asymptotically more efficient than classic polynomial multiplication with running time O(n²).
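The transform–multiply–transform-back scheme can be sketched as follows (a minimal illustration for integer coefficients; the function names are chosen here for clarity, and the final rounding assumes exact integer results):

```python
import cmath

def fft(x, inverse=False):
    """Recursive radix-2 FFT; inverse=True gives the unscaled inverse transform."""
    N = len(x)
    if N == 1:
        return [complex(x[0])]
    sign = 1 if inverse else -1
    even, odd = fft(x[0::2], inverse), fft(x[1::2], inverse)
    out = [0j] * N
    for m in range(N // 2):
        t = cmath.exp(sign * 2j * cmath.pi * m / N) * odd[m]
        out[m], out[m + N // 2] = even[m] + t, even[m] - t
    return out

def poly_multiply(p, q):
    """Multiply two coefficient sequences in O(n log n) via the FFT."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2  # pad to a power of two large enough to hold the product
    P = fft(list(p) + [0] * (n - len(p)))
    Q = fft(list(q) + [0] * (n - len(q)))
    prod = [a * b for a, b in zip(P, Q)]  # componentwise multiplication, O(n)
    coeffs = fft(prod, inverse=True)      # transform back, then scale by 1/n
    return [round(c.real / n) for c in coeffs[:len(p) + len(q) - 1]]
```

For example, (1 + 2x)(3 + 4x) = 3 + 10x + 8x².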
Further areas of application
The other areas of application of the FFT are so varied that only a selection can be shown here:

Financial mathematics
 The calculation of option prices (cf. Carr / Madan 1999)

Signal analysis
 Acoustics (audio measurements). A relatively simple application is found in many guitar tuners and similar programs, which benefit from the high speed of the FFT.
 Calculation of spectrograms (diagrams showing the amplitudes of the respective frequency components)
 Reconstruction of the image with the magnetic resonance tomograph or the analysis of crystal structures by means of Xrays , in each of which the Fourier transform of the desired image or the square of this Fourier transform is created.
 Measurement technology / general
 Digital network analyzers, which try to determine the behavior of a circuit, a component, or a line on a circuit board when operated with an arbitrary frequency mixture.

Digital signal processing
 Synthesis of audio signals from individual frequencies via the inverse FFT
 Reducing the computational effort of circular convolution of FIR filters in the time domain by replacing it with the fast Fourier transform and simple multiplications in the frequency domain (see also fast convolution). Fast convolution offers, for example, the possibility of passing audio or similar signals through very complex filters (equalizers, etc.) with little computational effort.
 Compression algorithms often use the FFT or, like the well-known MP3 format, the related discrete cosine transform. The FFT of images or sounds often yields only relatively few frequency components with high amplitudes. This is advantageous when using a storage method that requires fewer bits to represent smaller numbers, such as Huffman coding. In other cases it is exploited that some of the frequencies can be omitted without greatly impairing the result, so that the data stream can be reduced.
 Telecommunications
 Long-wave reception with the PC
 Broadband data transmission via OFDM, the basis for ADSL and WLAN (Internet), the various DVB transmission standards for digital television (e.g. via antenna, cable, and satellite), DRM, DAB (radio), and LTE (4th-generation mobile communications). The high data transmission speed is achieved by operating many relatively slow data transmissions simultaneously on many carrier frequencies. The complex signal created by superimposing the individual signals is then decomposed back into the individual signal carriers by the receiving station using the FFT.
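The fast convolution mentioned under digital signal processing can be sketched as follows: zero-pad signal and filter kernel to a common power-of-two length, multiply their spectra, and transform back (a minimal illustration; the function names are chosen here for clarity):

```python
import cmath

def fft(x, inverse=False):
    """Recursive radix-2 FFT; inverse=True gives the unscaled inverse transform."""
    N = len(x)
    if N == 1:
        return [complex(x[0])]
    sign = 1 if inverse else -1
    even, odd = fft(x[0::2], inverse), fft(x[1::2], inverse)
    out = [0j] * N
    for m in range(N // 2):
        t = cmath.exp(sign * 2j * cmath.pi * m / N) * odd[m]
        out[m], out[m + N // 2] = even[m] + t, even[m] - t
    return out

def fast_convolve(signal, kernel):
    """Linear convolution of a real signal with an FIR kernel via the frequency domain."""
    n = 1
    while n < len(signal) + len(kernel) - 1:
        n *= 2  # zero-pad so the circular convolution equals the linear one
    S = fft(list(signal) + [0.0] * (n - len(signal)))
    K = fft(list(kernel) + [0.0] * (n - len(kernel)))
    y = fft([a * b for a, b in zip(S, K)], inverse=True)
    return [v.real / n for v in y[:len(signal) + len(kernel) - 1]]
```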
Literature
Magazine articles
 James W. Cooley, John W. Tukey: An algorithm for the machine calculation of complex Fourier series. In: Math. Comput. 19, 1965, pp. 297–301.
 C. M. Rader: Discrete Fourier transforms when the number of data samples is prime. In: Proc. IEEE. 56, 1968, pp. 1107–1108.
 Leo I. Bluestein: A linear filtering approach to the computation of the discrete Fourier transform. In: Northeast Electronics Research and Engineering Meeting Record. 10, 1968, pp. 218–219.
 Georg Bruun: z-Transform DFT filters and FFTs. In: IEEE Trans. on Acoustics, Speech and Signal Processing (ASSP). 26, No. 1, 1978, pp. 56–63.
 M. T. Heideman, D. H. Johnson, C. S. Burrus: Gauss and the History of the Fast Fourier Transform. In: Arch. Hist. Sc. 34, No. 3, 1985.
Books
 Alan V. Oppenheim, Ronald W. Schafer: Discrete-time signal processing. 3rd edition. R. Oldenbourg Verlag, Munich / Vienna 1999, ISBN 3486241451.
 E. Oran Brigham: FFT. Fast Fourier transform. R. Oldenbourg Verlag, Munich / Vienna 1995, ISBN 3486231774 .
 Steven W. Smith: The Scientist and Engineer's Guide to Digital Signal Processing . 1st edition. Elsevier Ltd, Oxford, 2002, ISBN 9780750674447 , chap. 18 (English, dspguide.com ).
Web links
 www.inf.fh-flensburg.de/lang/algorithmen/fft/fft.htm - description of the Fourier transform and roots of unity (German)
 Importance of FFT analysis in audio technology (example graphic: square wave signal) (German)
 www.sprut.de/electronic/pic/16bit/dsp/fft/fft.htm - introduction to the FFT for non-academic readers, e.g. apprentices (German)
 www.fftw.org (English)
 Paul Bourke (1993): DFT (Discrete Fourier Transform)  FFT (Fast Fourier Transform) (nice FFT code in C, in 1D and 2D) (English)
 Kevin McGee: An introduction to signal processing and fast fourier transform (FFT). Retrieved April 27, 2010 .