Talk:Fast Fourier transform


The "Other FFT Algorithms" segment is just an unreadable blob of text; I've separated the algorithms out (I think), which doesn't look as nice but is easier to read. Frankly, however, the whole thing ought to be rewritten for coherency, or expanded and put into its own section. I'm not qualified to do either of those things.

I think breaking this into more than three paragraphs is overkill. As for expanding it, any additional information about the FFT algorithms in question should go in a separate article, like for Prime-factor FFT algorithm and Rader's FFT algorithm. —Steven G. Johnson 23:58, 29 May 2006 (UTC)


More information on 2D or 3D FFTs would be super.

Alex.Szatmary

Done. At least as a start (the fancier multidimensional algorithms such as vector-radix could eventually use their own pages). —Steven G. Johnson 04:41, August 23, 2005 (UTC)

More information about the practical uses of FFT (e.g. how it is used for digital signal processing) would be very useful. If I ever find out what it does, I may write it up myself... Jevon 12:11, 23 November 2005 (UTC)

As is described in the first paragraph, information about applications belongs in discrete Fourier transform. —Steven G. Johnson 16:34, 23 November 2005 (UTC)

in finite fields

The whole article seems to address floating-point arithmetic as an approximation to the complex field.

However, the same algorithms may be used in any field that has a primitive nth root of unity, where n is the length of the vector, including finite fields GF(p^k). Since the multiplicative group of a finite field is cyclic, the condition for the existence of such a root is n | p^k − 1. If k = 1, such a field is easily implemented on a machine with modular arithmetic.
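
For a concrete illustration (the modulus p = 17, length n = 8, and root w = 9 below are my own example choices, not taken from this discussion): 9 has multiplicative order 8 modulo 17, and 8 | 17 − 1, so a length-8 transform over GF(17) is just the DFT sum with 9 substituted for the complex root of unity. A direct O(n²) sketch in C:

#include <stdio.h>

/* Modular exponentiation: returns b^e mod p. */
static unsigned powmod(unsigned b, unsigned e, unsigned p)
{
     unsigned r = 1;
     b %= p;
     while (e) {
          if (e & 1) r = r * b % p;
          b = b * b % p;
          e >>= 1;
     }
     return r;
}

/* Direct length-8 transform over GF(17): y[k] = sum_j x[j] * 9^(j*k) mod 17,
   where 9 is a primitive 8th root of unity mod 17 (9^4 = 16 = -1, 9^8 = 1). */
static void ntt8_mod17(const unsigned x[8], unsigned y[8])
{
     const unsigned p = 17, w = 9;
     for (unsigned k = 0; k < 8; ++k) {
          unsigned sum = 0;
          for (unsigned j = 0; j < 8; ++j)
               sum = (sum + x[j] * powmod(w, j * k, p)) % p;
          y[k] = sum;
     }
}

int main(void)
{
     unsigned x[8] = {1, 1, 0, 0, 0, 0, 0, 0}, y[8];
     ntt8_mod17(x, y);
     for (int k = 0; k < 8; ++k) printf("%u ", y[k]);   /* prints 2 10 14 16 0 9 5 3 */
     printf("\n");
     return 0;
}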

There is an algorithm for multiplying long integers (a rough C sketch of these steps follows the list) by

  • considering them written in base b, i.e. as polynomials whose coefficients are the digits
  • transforming them by FFT modulo p
  • writing the product as the convolution c_k = Σ_{i+j=k} a_i·b_j of the two digit sequences
  • computing this convolution product by an inverse FFT of the product of the two FFTs
  • doing a little carrying.
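
A rough sketch of those steps in C (forward_ntt and inverse_ntt are hypothetical helper routines, not shown; the digit base B, a modulus p > n·(B−1)² so the convolution sums do not wrap around mod p, and p < 2^32 so the 64-bit products do not overflow, are all assumptions of this sketch):

#include <stdint.h>

/* Hypothetical helpers (not shown): length-n transform and inverse transform mod p,
   with the inverse including the 1/n normalization. */
void forward_ntt(const uint64_t *in, uint64_t *out, int n, uint64_t p);
void inverse_ntt(const uint64_t *in, uint64_t *out, int n, uint64_t p);

/* Multiply two numbers given as base-B digit arrays a[] and b[] (step 1: already
   written as polynomial coefficients), zero-padded to a power-of-two length n at
   least as large as the sum of their digit counts. Result digits go in c[]. */
void big_multiply(const uint64_t *a, const uint64_t *b, uint64_t *c,
                  int n, uint64_t B, uint64_t p)
{
     uint64_t fa[n], fb[n];
     forward_ntt(a, fa, n, p);                /* step 2: transform both inputs mod p */
     forward_ntt(b, fb, n, p);
     for (int i = 0; i < n; ++i)
          fa[i] = fa[i] * fb[i] % p;          /* step 3: pointwise product of transforms */
     inverse_ntt(fa, fb, n, p);               /* step 4: inverse transform = convolution
                                                 c_k = sum over i+j=k of a_i * b_j */
     uint64_t carry = 0;                      /* step 5: propagate carries in base B */
     for (int i = 0; i < n; ++i) {
          uint64_t t = fb[i] + carry;
          c[i] = t % B;
          carry = t / B;
     }
}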

A little web search brings up: J. M. Pollard, Mathematics of Computation, Vol. 25, No. 114. (Apr., 1971), pp. 365-374. Stable URL: http://links.jstor.org/sici?sici=0025-5718%28197104%2925%3A114%3C365%3ATFFTIA%3E2.0.CO%3B2-U

I'm not an expert in the field, so I don't feel comfortable writing about this. I'll look at how GNU MP does it. David.Monniaux 22:07, 13 October 2006 (UTC)

Only one section of the article discusses floating-point approximation, so I'm not sure why you're saying that the "whole article" is concerned with this. However, it's true that the article is concerned with algorithms for the discrete Fourier transform over vectors of complex numbers. It's certainly true that many of the algorithms have direct analogues for any finite field (although I can give counter-examples of some that can't, such as the Rader-Brenner algorithm); in particular, those algorithms that only use the fact that e^(−2πi/n) is a primitive nth root of unity (e.g. Cooley-Tukey, PFA, Rader...) are generalizable in this way. But such transforms are generally given different names, like the number theoretic transform. —Steven G. Johnson 22:49, 13 October 2006 (UTC)
I've added a brief mention of such generalizations in the introduction. Any further details belong in articles like number theoretic transform. —Steven G. Johnson 23:02, 13 October 2006 (UTC)
Ah, interesting. I actually didn't know different names were used in English (I learned mathematics in French :-) ). David.Monniaux 08:27, 14 October 2006 (UTC)

example

Last century, I was frustrated with the complexity of the FFT programs suggested in the literature. I believe beginners can add all the accessories and improvements once the short algorithm already works. So, I added the short portable example. If you can write an even shorter copylefted example, it may be even better. dima 03:23, 24 January 2007 (UTC)

I removed this. First, Wikipedia is not a code repository, and using a particular programming language (as opposed to English pseudo-code) is inappropriate in a mathematical article. Second, it makes the common mistake of confusing an "algorithm" (an abstract decomposition of the DFT) with an "implementation". Third, an example of the Cooley-Tukey algorithm would belong on Cooley-Tukey FFT algorithm, not here. Fourth, that page already gives the radix-2 example, but as math and English, not C++ code. Fifth, your code was far from minimal. Here is a shorter (but still not minimal) example in C (using the C99 complex type) that you would call with rec_fft(-1, N, in, out, 1, 1), for N a power of 2:
#include <complex.h>   /* C99 complex type, cexp, I */

/* Out-of-place recursive radix-2 FFT: x = input (stride is), y = output (stride os),
   N a power of 2, sign = -1 for the forward transform. */
void rec_fft(int sign, int N, complex double *x, complex double *y, int is, int os)
{
     if (N == 1) *y = *x;
     else {
          int N2 = N/2;
          rec_fft(sign, N2, x, y, is*2, os);          /* DFT of even-indexed inputs */
          rec_fft(sign, N2, x+is, y+os*N2, is*2, os); /* DFT of odd-indexed inputs */
          complex double omega = 1.0, omega1 = cexp(sign * I * 6.2831853071795864769 / N);
          for (int i = 0; i < N2; ++i) {              /* combine with radix-2 butterflies */
               complex double a = y[os*i], b = omega * y[os*(i + N2)];
               y[os*i] = a + b;
               y[os*(i + N2)] = a - b;
               omega *= omega1;
          }
     }
}
(Making the code work out-of-place, and recursively, allows you to basically copy the two-line mathematical derivation of the radix-2 case, and leads to the simplest implementations because you don't have to worry about bit-reversal etc. It is not nearly as fast as a highly optimized code, but then again no textbook radix-2 FFT is remotely optimal these days.) If you want a code library that is pretty short and simple and of reasonable speed, Google "kissfft".
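
A minimal driver for the routine above might look like the following (the transform size and test input are just illustrative):

#include <stdio.h>
#include <complex.h>

void rec_fft(int sign, int N, complex double *x, complex double *y, int is, int os); /* defined above */

int main(void)
{
     enum { N = 8 };                       /* any power of two */
     complex double in[N], out[N];
     for (int i = 0; i < N; ++i)
          in[i] = i;                       /* arbitrary test input 0, 1, ..., N-1 */
     rec_fft(-1, N, in, out, 1, 1);        /* forward transform, unit strides */
     for (int i = 0; i < N; ++i)
          printf("out[%d] = %g %+gi\n", i, creal(out[i]), cimag(out[i]));
     return 0;
}

Calling it with sign = +1 instead gives the unnormalized inverse transform, which then has to be divided by N.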
—Steven G. Johnson 07:00, 25 January 2007 (UTC)