Ask Sawal

Discussion Forum

Where is FFT used?

4 Answer(s) Available
Answer # 1 #

As explained in the first part, the sampling rate fs of the measuring system and the block length BL are the two central parameters of an FFT. The sampling rate indicates how often the analog signal to be analyzed is sampled. When recording wav files via a commercially available PC sound card, for example, the audio signal is usually sampled 44,100 times per second.

Harry Nyquist discovered a fundamental rule for the sampling of analog signals: the sampling frequency must be at least double the highest frequency in the signal. If, for example, a signal containing frequencies up to 24 kHz is to be sampled, a sampling rate of at least 48 kHz is required. Half the sampling rate, in this example 24 kHz, is called the "Nyquist frequency". But what happens if signals above the Nyquist frequency are fed into the system?

For the most part, a signal is sampled with a more-than-sufficient number of samples. With a 48 kHz sampling rate, for example, a 6 kHz frequency is sampled 8 times per cycle, while a 12 kHz frequency is sampled only 4 times per cycle. At the Nyquist frequency, only 2 samples are available per cycle. With 2 samples or more per cycle it is still possible to reconstruct the signal without loss. If, however, fewer than 2 samples per cycle are available, artifacts are generated which do not occur in the sampled (original) signal.

In the FFT, these artifacts appear as mirror frequencies. If the Nyquist frequency is exceeded, the signal is reflected at this imaginary limit and falls back into the useful frequency band. As an example, consider an FFT system with a 44.1 kHz sampling rate into which a sweep signal from 15 kHz to 25 kHz is fed: once the sweep passes the Nyquist frequency of 22.05 kHz, the displayed frequency folds back and runs downward again.
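
To make the mirror-frequency effect concrete, here is a minimal NumPy sketch (an illustration with an arbitrarily chosen block length): a 25 kHz tone sampled at 44.1 kHz shows up at roughly 44.1 kHz − 25 kHz ≈ 19.1 kHz.

    import numpy as np

    fs = 44100            # sampling rate in Hz
    f_in = 25000          # input tone above the Nyquist frequency (22050 Hz)
    N = 4096              # FFT block length

    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * f_in * t)

    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(N, d=1/fs)
    print(f"Strongest component near {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~19100 Hz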

These unwanted mirror frequencies are counteracted with an analog low-pass filter (anti-aliasing filter) applied before sampling. The filter ensures that frequencies above the Nyquist frequency are suppressed.

Because the FFT treats each block as if it were continued periodically, time windowing serves to smooth the undesired jumps that would otherwise appear at the block boundaries (see part 1). This prevents smearing (leakage) in the spectrum. There are numerous types of windows, some of which differ only slightly. When selecting the time window, the following rule applies: each window is a compromise between frequency selectivity and amplitude accuracy.
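
The selectivity/accuracy compromise can be seen with a minimal NumPy sketch (illustrative values): a tone that is not centred on a bin is transformed once without a window and once with a Hann window; the Hann window reduces the leakage but widens the main lobe.

    import numpy as np

    fs, N = 48000, 1024
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 1000.3 * t)      # tone that does not fall exactly on a bin centre

    spectrum_rect = np.abs(np.fft.rfft(x))                  # rectangular window: strong leakage
    spectrum_hann = np.abs(np.fft.rfft(x * np.hanning(N)))  # Hann window: less leakage, wider main lobe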

In the analysis of non-periodic signals, e.g. noise or music, it is often advantageous to capture multiple FFT blocks and average them. There are two possible approaches:

Modern high-resolution FFT analyzers offer the possibility to decouple the number of measurement results from the FFT block length. This increases measurement performance, especially for high-resolution FFTs. With a 2M block length, for example, it is no longer necessary to measure and represent more than one million points (bins), but only the number necessary for the display, e.g. 1024. The value chosen for each FFT bin can be defined in two ways:
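
A common pair of choices is peak-hold and averaging; the following NumPy sketch (an assumption for illustration, not the analyzer's actual method) reduces a high-resolution spectrum to a fixed number of display bins either way.

    import numpy as np

    def reduce_to_display_bins(spectrum, n_display=1024, mode="peak"):
        # Split the high-resolution spectrum into n_display groups and keep
        # either the maximum (peak-hold) or the mean of each group.
        groups = np.array_split(spectrum, n_display)
        if mode == "peak":
            return np.array([g.max() for g in groups])
        return np.array([g.mean() for g in groups])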

FFTs are mainly used to visualize signals. However, there are also applications where FFT results are used in further calculations. For example, levels of defined frequency bands can easily be calculated by summing the corresponding bins with a root-sum-square (RSS) algorithm. Another application is the comparison of spectra. In an acoustic measurement of a cordless screwdriver, for instance, the measured spectrum is subtracted from a defined reference spectrum and the difference is compared against an upper and a lower tolerance. A functional cordless screwdriver stays within these tolerances, whereas a defective test specimen produces a spectrum that falls outside them.
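
As a rough sketch of these two calculations (the function names and the dB-based tolerance check are illustrative assumptions, not the measurement system's actual API):

    import numpy as np

    def band_level_rss(spectrum, freqs, f_lo, f_hi):
        # Root-sum-square of all FFT bins that fall inside the band [f_lo, f_hi).
        band = spectrum[(freqs >= f_lo) & (freqs < f_hi)]
        return np.sqrt(np.sum(band ** 2))

    def within_tolerance(spectrum_db, reference_db, lower_db, upper_db):
        # Compare a measured spectrum against a reference spectrum with tolerances.
        diff = spectrum_db - reference_db
        return bool(np.all((diff >= lower_db) & (diff <= upper_db)))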

Gerard Harris
SUPERVISOR CONTINGENTS
Answer # 2 #

Gilbert Strang, author of the classic textbook Linear Algebra and Its Applications, once referred to the fast Fourier transform, or FFT, as “the most important numerical algorithm in our lifetime.” No wonder. The FFT is used to process data throughout today’s highly networked, digital world. It allows computers to efficiently calculate the different frequency components in time-varying signals—and also to reconstruct such signals from a set of frequency components. You couldn’t log on to a Wi-Fi network or make a call on your cellphone without it. So when some of Strang’s MIT colleagues announced in January at the ACM-SIAM Symposium on Discrete Algorithms that they had developed ways of substantially speeding up the calculation of the FFT, lots of people took notice.

“FFTs are run billions of times a day,” says Richard Baraniuk, a professor of electrical and computer engineering at Rice University, in Houston, and an expert in the emerging field of compressive sensing, which has much in common with the approaches now being applied to speed up the calculation of FFTs.

Efforts to improve the calculation of Fourier transforms have a long and generally overlooked history. While most engineers associate the FFT with the procedure James Cooley of IBM and John Tukey of Princeton described in 1965, specialists recognize that it has much deeper roots. Although he never published it, the renowned German mathematician Carl Friedrich Gauss had worked out the basic approach, probably as early as 1805—predating, remarkably enough, even Fourier’s own work on the topic.

Given that great mathematical minds have been thinking about how to speed up this particular calculation for more than two centuries, how is it that progress is still being made? The fundamental reason is that the newer methods are tailored to run fast for only some signals—ones that are termed “sparse” because they contain a relatively small number of frequency components of significant size. The traditional FFT takes the same amount of computational time for any signal.

“Certainly there are applications where you need to run the full FFT because the data are not sparse at all,” says Piotr Indyk of MIT’s Computer Science and Artificial Intelligence Laboratory, who developed the new algorithms in collaboration with his colleague Dina Katabi and two students, Haitham Hassanieh and Eric Price. Fortunately, many real-world signals satisfy this sparsity requirement.

“Most signals are sparse,” says Katabi, who points out that when you send, say, a video file over wireless channels, transmitting only a few percent of the frequency content is typically sufficient—and in line with the sparsity levels that her group’s new algorithms handle well. Baraniuk adds that the frequency content of many natural signals, be they astronomical images or bird chirps, tends to be concentrated among the lower frequencies. “Sparsity is everywhere,” he says.

Manzar Rapper
Sound Technician
Answer # 3 #

In signal processing, FFT forms the basis of frequency domain analysis (spectral analysis) and is used for signal filtering, spectral estimation, data compression, and other applications. Variations of the FFT such as the short-time Fourier transform also allow for simultaneous analysis in time and frequency domains.
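
For example, a short-time Fourier transform can be computed with SciPy (a minimal sketch assuming scipy and numpy are installed; the chirp signal and segment length are arbitrary):

    import numpy as np
    from scipy import signal

    fs = 8000
    t = np.arange(2 * fs) / fs                     # two seconds of signal
    x = np.sin(2 * np.pi * (200 + 300 * t) * t)    # simple chirp

    f, tt, Zxx = signal.stft(x, fs=fs, nperseg=256)
    print(Zxx.shape)                               # (frequency bins, time frames)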

Sugriva Devan
Public Relations (Pr) Officer
Answer # 4 #

A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, it manages to reduce the complexity of computing the DFT from O(N²), which arises if one simply applies the definition of the DFT, to O(N log N), where N is the data size. The difference in speed can be enormous, especially for long data sets where N may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.

Fast Fourier transforms are widely used for applications in engineering, music, science, and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime", and it was included in the Top 10 Algorithms of the 20th Century by the IEEE magazine Computing in Science & Engineering.

The best-known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms depend only on the fact that e^(−2πi/N) is a primitive N-th root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.
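
A small NumPy sketch of that adaptation (illustrative, not a particular library's routine): conjugate the input, run a forward FFT, conjugate the result, and divide by N.

    import numpy as np

    def idft_via_fft(X):
        # Inverse DFT obtained from a forward FFT via conjugation and a 1/N factor.
        N = len(X)
        return np.conj(np.fft.fft(np.conj(X))) / N

    X = np.fft.fft(np.random.randn(128))
    assert np.allclose(idft_via_fft(X), np.fft.ifft(X))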

The development of fast algorithms for DFT can be traced to Carl Friedrich Gauss's unpublished work in 1805 when he needed it to interpolate the orbit of asteroids Pallas and Juno from sample observations. His method was very similar to the one published in 1965 by James Cooley and John Tukey, who are generally credited for the invention of the modern generic FFT algorithm. While Gauss's work predated even Joseph Fourier's results in 1822, he did not analyze the computation time and eventually used other methods to achieve his goal.

Between 1805 and 1965, some versions of FFT were published by other authors. Frank Yates in 1932 published his version, called the interaction algorithm, which provided efficient computation of Hadamard and Walsh transforms. Yates' algorithm is still used in the field of statistical design and analysis of experiments. In 1942, G. C. Danielson and Cornelius Lanczos published their version to compute the DFT for x-ray crystallography, a field where calculation of Fourier transforms presented a formidable bottleneck. While many methods in the past had focused on reducing the constant factor for O(N²) computation by taking advantage of "symmetries", Danielson and Lanczos realized that one could use the "periodicity" and apply a "doubling trick" to "double with only slightly more than double the labor", though like Gauss they did not analyze that this led to O(N log N) scaling.

James Cooley and John Tukey independently rediscovered these earlier algorithms and published a more general FFT in 1965 that is applicable when N is composite and not necessarily a power of 2, as well as analyzing the O(N log N) scaling. Tukey came up with the idea during a meeting of President Kennedy's Science Advisory Committee where a discussion topic involved detecting nuclear tests by the Soviet Union by setting up sensors to surround the country from outside. To analyze the output of these sensors, an FFT algorithm would be needed. In discussion with Tukey, Richard Garwin recognized the general applicability of the algorithm not just to national security problems, but also to a wide range of problems, including one of immediate interest to him: determining the periodicities of the spin orientations in a 3-D crystal of Helium-3. Garwin gave Tukey's idea to Cooley (both worked at IBM's Watson labs) for implementation. Cooley and Tukey published the paper in a relatively short time of six months. As Tukey did not work at IBM, the patentability of the idea was doubted and the algorithm went into the public domain, which, through the computing revolution of the next decade, made the FFT one of the indispensable algorithms in digital signal processing.

Let x_0, …, x_{N−1} be complex numbers. The DFT is defined by the formula

    X_k = Σ_{n=0}^{N−1} x_n · e^(−2πi kn/N),   k = 0, …, N − 1,

where e^(i2π/N) is a primitive N-th root of 1.

Evaluating this definition directly requires O(N²) operations: there are N outputs X_k, and each output requires a sum of N terms. An FFT is any method to compute the same results in O(N log N) operations. All known FFT algorithms require Θ(N log N) operations, although there is no known proof that lower complexity is impossible.
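
For comparison, a direct O(N²) evaluation of the definition can be checked against NumPy's FFT (a minimal sketch; the cost sits in the N-by-N matrix of exponentials):

    import numpy as np

    def dft_direct(x):
        x = np.asarray(x, dtype=complex)
        N = len(x)
        n = np.arange(N)
        k = n.reshape(-1, 1)
        # N outputs X_k, each a sum of N terms x_n * e^(-2*pi*i*k*n/N)
        return np.exp(-2j * np.pi * k * n / N) @ x

    x = np.random.randn(256)
    assert np.allclose(dft_direct(x), np.fft.fft(x))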

To illustrate the savings of an FFT, consider the count of complex multiplications and additions for N = 4096 data points. Evaluating the DFT's sums directly involves N² complex multiplications and N(N − 1) complex additions, of which O(N) operations can be saved by eliminating trivial operations such as multiplications by 1, leaving about 30 million operations. In contrast, the radix-2 Cooley–Tukey algorithm, for N a power of 2, can compute the same result with only (N/2) log₂(N) complex multiplications (again, ignoring simplifications of multiplications by 1 and similar) and N log₂(N) complex additions, in total about 30,000 operations, a thousand times less than with direct evaluation. In practice, actual performance on modern computers is usually dominated by factors other than the speed of arithmetic operations and the analysis is a complicated subject (for example, see Frigo & Johnson, 2005), but the overall improvement from O(N²) to O(N log N) remains.

By far the most commonly used FFT is the Cooley–Tukey algorithm. This is a divide-and-conquer algorithm that recursively breaks down a DFT of any composite size N = N₁N₂ into many smaller DFTs of sizes N₁ and N₂, along with O(N) multiplications by complex roots of unity traditionally called twiddle factors (after Gentleman and Sande, 1966).

This method (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms).

The best known use of the Cooley–Tukey algorithm is to divide the transform into two pieces of size N/2 at each step, and is therefore limited to power-of-two sizes, but any factorization can be used in general (as was known to both Gauss and Cooley/Tukey). These are called the radix-2 and mixed-radix cases, respectively (and other variants such as the split-radix FFT have their own names as well). Although the basic idea is recursive, most traditional implementations rearrange the algorithm to avoid explicit recursion. Also, because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT, such as those described below.
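
A minimal recursive radix-2 sketch of this idea (power-of-two lengths only, NumPy assumed; production libraries rearrange it to avoid explicit recursion, as noted above):

    import numpy as np

    def fft_radix2(x):
        x = np.asarray(x, dtype=complex)
        N = len(x)
        if N == 1:
            return x
        even = fft_radix2(x[0::2])     # DFT of the even-indexed samples (size N/2)
        odd = fft_radix2(x[1::2])      # DFT of the odd-indexed samples (size N/2)
        twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
        return np.concatenate([even + twiddle * odd, even - twiddle * odd])

    x = np.random.randn(1024)
    assert np.allclose(fft_radix2(x), np.fft.fft(x))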

There are FFT algorithms other than Cooley–Tukey.

For N = N₁N₂ with coprime N₁ and N₂, one can use the prime-factor (Good–Thomas) algorithm (PFA), based on the Chinese remainder theorem, to factorize the DFT similarly to Cooley–Tukey but without the twiddle factors. The Rader–Brenner algorithm (1976) is a Cooley–Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability; it was later superseded by the split-radix variant of Cooley–Tukey (which achieves the same multiplication count but with fewer additions and without sacrificing accuracy). Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader–Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite N. Bruun's algorithm applies to arbitrary even composite sizes.) Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial z^N − 1, here into real-coefficient polynomials of the form z^M − 1 and z^{2M} + az^M + 1.

Another polynomial viewpoint is exploited by the Winograd FFT algorithm, which factorizes z^N − 1 into cyclotomic polynomials—these often have coefficients of 1, 0, or −1, and therefore require few (if any) multiplications, so Winograd can be used to obtain minimal-multiplication FFTs and is often used to find efficient algorithms for small factors. Indeed, Winograd showed that the DFT can be computed with only O(N) irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; unfortunately, this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of prime sizes.

Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo prime N, expresses a DFT of prime size N as a cyclic convolution of (composite) size N − 1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the same size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity nk = n²/2 + k²/2 − (k − n)²/2.
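
A sketch of Bluestein's re-expression in NumPy (illustrative only; it zero-pads the convolution to a power of two, as described above, and therefore works for any length, including primes):

    import numpy as np

    def bluestein_dft(x):
        x = np.asarray(x, dtype=complex)
        N = len(x)
        n = np.arange(N)
        w = np.exp(-1j * np.pi * n ** 2 / N)      # "chirp" e^(-i*pi*n^2/N)
        a = x * w
        L = 1 << (2 * N - 1).bit_length()         # power of two >= 2N - 1
        b = np.zeros(L, dtype=complex)
        b[:N] = np.conj(w)                        # b_m = e^(+i*pi*m^2/N), m = 0..N-1
        b[L - N + 1:] = np.conj(w[:0:-1])         # wrap-around for m = -(N-1)..-1
        conv = np.fft.ifft(np.fft.fft(np.concatenate([a, np.zeros(L - N)])) * np.fft.fft(b))
        return w * conv[:N]

    x = np.random.randn(97)                       # prime length
    assert np.allclose(bluestein_dft(x), np.fft.fft(x))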

Hexagonal fast Fourier transform (HFFT) aims at computing an efficient FFT for the hexagonally-sampled data by using a new addressing scheme for hexagonal grids, called Array Set Addressing (ASA).

In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the conjugate symmetry X_{N−k} = X_k* (the asterisk denoting complex conjugation), and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists of taking an ordinary algorithm (e.g. Cooley–Tukey) and removing the redundant parts of the computation, saving roughly a factor of two in time and memory. Alternatively, it is possible to express an even-length real-input DFT as a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real data), followed by O(N) post-processing operations.
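
A quick NumPy check of this conjugate symmetry, together with the rfft routine that stores only the non-redundant half (a minimal sketch):

    import numpy as np

    x = np.random.randn(64)                 # purely real input
    X = np.fft.fft(x)
    N = len(x)
    k = np.arange(1, N)
    assert np.allclose(X[N - k], np.conj(X[k]))    # symmetry X_{N-k} = X_k*

    Xr = np.fft.rfft(x)                     # only the N/2 + 1 non-redundant outputs
    assert np.allclose(Xr, X[:N // 2 + 1])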

It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has not proved popular.

There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of roughly two in time and memory and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with O(N) pre- and post-processing.

A fundamental question of longstanding theoretical interest is to prove lower bounds on the complexity and exact operation counts of fast Fourier transforms, and many open problems remain. It is not rigorously proved whether DFTs truly require Ω(N log N) (i.e., order N log N or greater) operations, even for the simple case of power-of-two sizes, although no algorithms with lower complexity are known. In particular, the count of arithmetic operations is usually the focus of such questions, although actual performance on modern-day computers is determined by many other factors such as cache or CPU pipeline optimization.

Following work by Shmuel Winograd (1978), a tight Θ(N) lower bound is known for the number of real multiplications required by an FFT. It can be shown that only 4N − 2(log₂ N)² − 2 log₂ N − 4 irrational real multiplications are required to compute a DFT of power-of-two length N = 2^m. Moreover, explicit algorithms that achieve this count are known (Heideman & Burrus, 1986; Duhamel, 1990). However, these algorithms require too many additions to be practical, at least on modern computers with hardware multipliers (Duhamel, 1990; Frigo & Johnson, 2005).

A tight lower bound is not known on the number of required additions, although lower bounds have been proved under some restrictive assumptions on the algorithms. In 1973, Morgenstern proved an Ω(N log N) lower bound on the addition count for algorithms where the multiplicative constants have bounded magnitudes (which is true for most but not all FFT algorithms). Pan (1986) proved an Ω(N log N) lower bound assuming a bound on a measure of the FFT algorithm's "asynchronicity", but the generality of this assumption is unclear. For the case of power-of-two N, Papadimitriou (1979) argued that the number N log₂ N of complex-number additions achieved by Cooley–Tukey algorithms is optimal under certain assumptions on the graph of the algorithm (his assumptions imply, among other things, that no additive identities in the roots of unity are exploited). (This argument would imply that at least 2N log₂ N real additions are required, although this is not a tight bound because extra additions are required as part of complex-number multiplications.) Thus far, no published FFT algorithm has achieved fewer than N log₂ N complex-number additions (or their equivalent) for power-of-two N.

A third problem is to minimize the total number of real multiplications and additions, sometimes called the "arithmetic complexity" (although in this context it is the exact count and not the asymptotic complexity that is being considered). Again, no tight lower bound has been proven. Since 1968, however, the lowest published count for power-of-two N was long achieved by the split-radix FFT algorithm, which requires 4N log₂(N) − 6N + 8 real multiplications and additions for N > 1. This was recently reduced to approximately (34/9) N log₂ N (Johnson and Frigo, 2007; Lundy and Van Buskirk, 2007). A slightly larger count (but still better than split radix for N ≥ 256) was shown to be provably optimal for N ≤ 512 under additional restrictions on the possible algorithms (split-radix-like flowgraphs with unit-modulus multiplicative factors), by reduction to a satisfiability modulo theories problem solvable by brute force (Haynal & Haynal, 2011).

Most of the attempts to lower or prove the complexity of FFT algorithms have focused on the ordinary complex-data case, because it is the simplest. However, complex-data FFTs are so closely related to algorithms for related problems such as real-data FFTs, discrete cosine transforms, discrete Hartley transforms, and so on, that any improvement in one of these would immediately lead to improvements in the others (Duhamel & Vetterli, 1990).

All of the FFT algorithms discussed above compute the DFT exactly (i.e. neglecting floating-point errors). A few "FFT" algorithms have been proposed, however, that compute the DFT approximately, with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the approximation error for increased speed or other properties. For example, an approximate FFT algorithm by Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs (time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). The Edelman algorithm works equally well for sparse and non-sparse data, since it is based on the compressibility (rank deficiency) of the Fourier matrix itself rather than the compressibility (sparsity) of the data. Conversely, if the data are sparse (that is, if only K out of N Fourier coefficients are nonzero), then the complexity can be reduced to O(K log(N) log(N/K)), and this has been demonstrated to lead to practical speedups compared to an ordinary FFT for N/K > 32 in a large-N example (N = 2²²) using a probabilistic approximate algorithm (which estimates the largest K coefficients to several decimal places).

FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms. The upper bound on the relative error for the Cooley–Tukey algorithm is O(ε log N), compared to O(ε N^(3/2)) for the naïve DFT formula, where ε is the machine floating-point relative precision. In fact, the root mean square (rms) errors are much better than these upper bounds, being only O(ε √(log N)) for Cooley–Tukey and O(ε √N) for the naïve DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use inaccurate trigonometric recurrence formulas. Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable.

In fixed-point arithmetic, the finite-precision errors accumulated by FFT algorithms are worse, with rms errors growing as O(√N) for the Cooley–Tukey algorithm (Welch, 1969). Achieving this accuracy requires careful attention to scaling to minimize loss of precision, and fixed-point FFT algorithms involve rescaling at each intermediate stage of decompositions like Cooley–Tukey.

To verify the correctness of an FFT implementation, rigorous guarantees can be obtained in O(N log N) time by a simple procedure checking the linearity, impulse-response, and time-shift properties of the transform on random inputs (Ergün, 1995).
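
Randomized checks in this spirit take only a few lines of NumPy (a sketch, not Ergün's exact procedure):

    import numpy as np

    def check_fft(fft, N, trials=10, rng=np.random.default_rng(0)):
        k = np.arange(N)
        for _ in range(trials):
            x, y = rng.standard_normal(N), rng.standard_normal(N)
            a, b = rng.standard_normal(2)
            # linearity: FFT(a*x + b*y) == a*FFT(x) + b*FFT(y)
            assert np.allclose(fft(a * x + b * y), a * fft(x) + b * fft(y))
            # a time shift by one sample multiplies bin k by e^(-2*pi*i*k/N)
            assert np.allclose(fft(np.roll(x, 1)), fft(x) * np.exp(-2j * np.pi * k / N))
        impulse = np.zeros(N)
        impulse[0] = 1.0
        assert np.allclose(fft(impulse), np.ones(N))  # impulse response: all ones

    check_fft(np.fft.fft, 256)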

As defined in the multidimensional DFT article, the multidimensional DFT

    X_k = Σ_{n=0}^{N−1} e^(−2πi k·(n/N)) x_n

transforms an array x_n with a d-dimensional vector of indices n = (n₁, …, n_d) by a set of d nested summations (over n_j = 0 … N_j − 1 for each j), where the division n/N, defined as n/N = (n₁/N₁, …, n_d/N_d), is performed element-wise. Equivalently, it is the composition of a sequence of d sets of one-dimensional DFTs, performed along one dimension at a time (in any order).

This compositional viewpoint immediately provides the simplest and most common multidimensional DFT algorithm, known as the row-column algorithm (after the two-dimensional case, below). That is, one simply performs a sequence of d one-dimensional FFTs (by any of the above algorithms): first you transform along the n₁ dimension, then along the n₂ dimension, and so on (or actually, any ordering works). This method is easily shown to have the usual O(N log N) complexity, where N = N₁ · N₂ ⋯ N_d is the total number of data points transformed. In particular, there are N/N₁ transforms of size N₁, etcetera, so the complexity of the sequence of FFTs is:

    (N/N₁) O(N₁ log N₁) + ⋯ + (N/N_d) O(N_d log N_d) = O(N log N).

In two dimensions, the x_k can be viewed as an n₁ × n₂ matrix, and this algorithm corresponds to first performing the FFT of all the rows (resp. columns), grouping the resulting transformed rows (resp. columns) together as another n₁ × n₂ matrix, and then performing the FFT on each of the columns (resp. rows) of this second matrix, and similarly grouping the results into the final result matrix.
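
This is easy to confirm with NumPy (a minimal sketch): transforming all rows and then all columns reproduces the library's two-dimensional FFT.

    import numpy as np

    x = np.random.randn(8, 16)
    rows_then_cols = np.fft.fft(np.fft.fft(x, axis=1), axis=0)
    assert np.allclose(rows_then_cols, np.fft.fft2(x))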

In more than two dimensions, it is often advantageous for cache locality to group the dimensions recursively. For example, a three-dimensional FFT might first perform two-dimensional FFTs of each planar "slice" for each fixed n₁, and then perform the one-dimensional FFTs along the n₁ direction. More generally, an asymptotically optimal cache-oblivious algorithm consists of recursively dividing the dimensions into two groups (n₁, …, n_{d/2}) and (n_{d/2+1}, …, n_d) that are transformed recursively (rounding if d is not even) (see Frigo and Johnson, 2005). Still, this remains a straightforward variation of the row-column algorithm that ultimately requires only a one-dimensional FFT algorithm as the base case, and still has O(N log N) complexity. Yet another variation is to perform matrix transpositions in between transforming subsequent dimensions, so that the transforms operate on contiguous data; this is especially important for out-of-core and distributed memory situations where accessing non-contiguous data is extremely time-consuming.

There are other multidimensional FFT algorithms that are distinct from the row-column algorithm, although all of them have O(N log N) complexity. Perhaps the simplest non-row-column FFT is the vector-radix FFT algorithm, which is a generalization of the ordinary Cooley–Tukey algorithm where one divides the transform dimensions by a vector r = (r₁, r₂, …, r_d) of radices at each step. (This may also have cache benefits.) The simplest case of vector-radix is where all of the radices are equal (e.g. vector-radix-2 divides all of the dimensions by two), but this is not necessary. Vector radix with only a single non-unit radix at a time, i.e. r = (1, …, 1, r, 1, …, 1), is essentially a row-column algorithm. Other, more complicated, methods include polynomial transform algorithms due to Nussbaumer (1977), which view the transform in terms of convolutions and polynomial products. See Duhamel and Vetterli (1990) for more information and references.

An O(N^(5/2) log N) generalization to spherical harmonics on the sphere S² with N² nodes was described by Mohlenkamp, along with an algorithm conjectured (but not proven) to have O(N² log²(N)) complexity; Mohlenkamp also provides an implementation in the libftsh library. A spherical-harmonic algorithm with O(N² log N) complexity is described by Rokhlin and Tygert.

The fast folding algorithm is analogous to the FFT, except that it operates on a series of binned waveforms rather than a series of real or complex scalar values. Rotation (which in the FFT is multiplication by a complex phasor) is a circular shift of the component waveform.

Various groups have also published "FFT" algorithms for non-equispaced data, as reviewed in Potts et al. (2001). Such algorithms do not strictly compute the DFT (which is only defined for equispaced data), but rather some approximation thereof (a non-uniform discrete Fourier transform, or NDFT, which itself is often computed only approximately). More generally there are various other methods of spectral estimation.

The FFT is used in digital recording, sampling, additive synthesis and pitch correction software.

The FFT's importance derives from the fact that it has made working in the frequency domain just as computationally feasible as working in the temporal or spatial domain. Some of the important applications of the FFT include:

An original application of the FFT in finance, particularly in the valuation of options, was developed by Marcello Minenna.


Bisht Nitesh
POLISHING MACHINE OPERATOR HELPER