Why do we study sine and cosine waves so much? Most waves, like most water waves and most sound waves, do not resemble sine and cosine waves at all (we will henceforth refer to sine and cosine waves as sinusoidal waves).
Well, it turns out that while most waves are not sinusoidal waves, all of them are actually combinations of sinusoidal waves of different sizes and frequencies. Hence we can understand much about essentially any wave simply by studying sinusoidal waves. This idea that any wave is a combination of multiple sinusoidal waves is part of the branch of mathematics called Fourier analysis.
Here’s a suggestion for an experiment from the book Vibrations and Waves by A.P. French: If you speak into the strings of a piano (I believe one of the pedals has to be held down first), the strings will vibrate, and since each string corresponds to a sine wave of a certain frequency, this will give you the breakdown of the sine wave components that make up your voice. If a string vibrates more strongly than the others, it means there is a bigger part of that frequency in your voice, i.e. that sine wave component has a bigger amplitude.
More technically, we can express these concepts in the following manner. Let $f(x)$ be a function that is integrable over some interval from $0$ to $T$ (for a wave, we can take $T$ to be the “period” over which the wave repeats itself). Then over this interval the function can be expressed as the sum of sine and cosine waves of different sizes and frequencies, as follows:

$$f(x)=a_{0}+\sum_{n=1}^{\infty}\left(a_{n}\cos\left(\frac{2\pi nx}{T}\right)+b_{n}\sin\left(\frac{2\pi nx}{T}\right)\right)$$
This expression is called the Fourier series expansion of the function $f(x)$. The coefficient $a_{0}$ is the “level” around which the waves oscillate; the other coefficients $a_{n}$ and $b_{n}$ refer to the amplitude, or the “size”, of the respective waves whose frequencies are equal to $\frac{n}{T}$. Of course, the bigger the frequency, the “faster” these waves oscillate.
Now, given a function $f(x)$ that satisfies the condition given earlier, how do we know what sine and cosine waves make it up? For this we must know what the coefficients $a_{n}$ and $b_{n}$ are.
In order to solve for $a_{n}$ and $b_{n}$, we will make use of the property of the sine and cosine functions called orthogonality (the rest of the post will make heavy use of the language of calculus, therefore the reader might want to look at An Intuitive Introduction to Calculus):
$$\int_{0}^{T}\sin\left(\frac{2\pi mx}{T}\right)\cos\left(\frac{2\pi nx}{T}\right)dx=0\quad\text{for all }m,n$$

$$\int_{0}^{T}\sin\left(\frac{2\pi mx}{T}\right)\sin\left(\frac{2\pi nx}{T}\right)dx=0\quad\text{for }m\neq n$$

$$\int_{0}^{T}\cos\left(\frac{2\pi mx}{T}\right)\cos\left(\frac{2\pi nx}{T}\right)dx=0\quad\text{for }m\neq n$$

$$\int_{0}^{T}\sin^{2}\left(\frac{2\pi nx}{T}\right)dx=\int_{0}^{T}\cos^{2}\left(\frac{2\pi nx}{T}\right)dx=\frac{T}{2}$$
What this means is that when a sine or cosine function is not properly “paired”, its integral over an interval equal to its period will always be zero. It will only give a nonzero value if it is properly paired, and we can “rescale” this value to make it equal to $1$.
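The orthogonality relations are easy to check numerically. Below is a minimal sketch (not from the original post, which works symbolically) using NumPy, with the period $T=2\pi$ and the sample frequencies $m=2$ and $n=3$:

```python
import numpy as np

# Numerical sketch of the orthogonality relations over one period,
# using T = 2*pi and the sample frequencies m = 2, n = 3.
T = 2 * np.pi
N = 100000
x = np.linspace(0.0, T, N, endpoint=False)
dx = T / N

def integrate(y):
    # Riemann sum; very accurate here because the integrands are periodic
    return np.sum(y) * dx

# A mismatched pair integrates to (approximately) zero...
mismatch = integrate(np.sin(2 * x) * np.cos(3 * x))

# ...while a properly paired product integrates to T/2, which the
# "rescaling" factor 2/T turns into exactly 1.
match = (2 / T) * integrate(np.cos(3 * x) ** 2)
```

Trying other mismatched pairs (any $m \neq n$, or any sine against any cosine) gives the same result: only the properly paired products survive the integration.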
Now we can look at the following expression:

$$\frac{2}{T}\int_{0}^{T}f(x)\cos\left(\frac{2\pi nx}{T}\right)dx$$
Knowing that the function $f(x)$ has a Fourier series expansion as above, we now have

$$\frac{2}{T}\int_{0}^{T}\left(a_{0}+\sum_{m=1}^{\infty}\left(a_{m}\cos\left(\frac{2\pi mx}{T}\right)+b_{m}\sin\left(\frac{2\pi mx}{T}\right)\right)\right)\cos\left(\frac{2\pi nx}{T}\right)dx$$
But we know that integrals involving the cosine function will always be zero unless it is properly paired; therefore it will be zero for all terms of the infinite series except for one, in which case it will yield (the constants are all there to properly scale the result)

$$\frac{2}{T}\int_{0}^{T}f(x)\cos\left(\frac{2\pi nx}{T}\right)dx=a_{n}$$
We have therefore used the orthogonality property of the cosine function to “filter” a single frequency component out of the many that make up our function.
Next we might use $\sin\left(\frac{2\pi nx}{T}\right)$ instead of $\cos\left(\frac{2\pi nx}{T}\right)$. This will give us

$$\frac{2}{T}\int_{0}^{T}f(x)\sin\left(\frac{2\pi nx}{T}\right)dx=b_{n}$$
We can continue the procedure to solve for the coefficients $a_{1}$, $a_{2}$, and so on, and we can replace the cosine function by the sine function to solve for the coefficients $b_{1}$, $b_{2}$, and so on. Of course, the coefficient $a_{0}$ can also be obtained by using $n=0$, i.e. the constant function $\cos(0)=1$, taking care that the scaling factor becomes $\frac{1}{T}$ instead of $\frac{2}{T}$.
In summary, we can solve for the coefficients using the following formulas:

$$a_{0}=\frac{1}{T}\int_{0}^{T}f(x)dx$$

$$a_{n}=\frac{2}{T}\int_{0}^{T}f(x)\cos\left(\frac{2\pi nx}{T}\right)dx$$

$$b_{n}=\frac{2}{T}\int_{0}^{T}f(x)\sin\left(\frac{2\pi nx}{T}\right)dx$$
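As a concrete sketch (my own example, not from the post), we can apply these coefficient formulas numerically to a square wave, whose Fourier expansion is the classic result $f(x)=\sum_{n\text{ odd}}\frac{4}{\pi n}\sin\left(\frac{2\pi nx}{T}\right)$:

```python
import numpy as np

# Apply the coefficient formulas numerically to a square wave of
# period T = 1; its known expansion has b_n = 4/(pi*n) for odd n
# and all other coefficients zero.
T = 1.0
N = 100000
x = np.linspace(0.0, T, N, endpoint=False)
dx = T / N
f = np.where(x < T / 2, 1.0, -1.0)  # one period of the square wave

def a(n):
    return (2 / T) * np.sum(f * np.cos(2 * np.pi * n * x / T)) * dx

def b(n):
    return (2 / T) * np.sum(f * np.sin(2 * np.pi * n * x / T)) * dx

a0 = (1 / T) * np.sum(f) * dx  # the "level" around which the wave oscillates

# a0 and the a(n) come out zero; b(1) is close to 4/pi, b(2) to 0,
# and b(3) to 4/(3*pi), matching the known expansion.
```

Note that the even sine coefficients and all cosine coefficients vanish by symmetry; only the odd sine waves contribute to the square wave.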
Now that we have shown how a function can be “broken down” or “decomposed” into a (possibly infinite) sum of sine and cosine waves of different amplitudes and frequencies, we now revisit the relationship between the sine and cosine functions and the exponential function (see “The Most Important Function in Mathematics”) in order to give us yet another expression for the Fourier series. We recall that, combining the concepts of the exponential function and complex numbers, we have the beautiful and important equation

$$e^{i\theta}=\cos\theta+i\sin\theta$$
which can also be expressed in the following forms:

$$\cos\theta=\frac{e^{i\theta}+e^{-i\theta}}{2}$$

$$\sin\theta=\frac{e^{i\theta}-e^{-i\theta}}{2i}$$
Using these expressions, we can rewrite the Fourier series of a function in a more “shorthand” form:

$$f(x)=\sum_{n=-\infty}^{\infty}c_{n}e^{\frac{2\pi inx}{T}}\quad\text{where}\quad c_{n}=\frac{1}{T}\int_{0}^{T}f(x)e^{-\frac{2\pi inx}{T}}dx$$
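The complex and real coefficients carry the same information: substituting the exponential forms of sine and cosine shows that $c_{0}=a_{0}$ and $c_{n}=\frac{a_{n}-ib_{n}}{2}$ for $n\geq 1$. Here is a small numerical sketch of that relation (my own test wave, assuming the coefficient formula $c_{n}=\frac{1}{T}\int_{0}^{T}f(x)e^{-2\pi inx/T}dx$):

```python
import numpy as np

# Check c_0 = a_0 and c_n = (a_n - i*b_n)/2 numerically.
# The test wave below has a_0 = 0.5, a_1 = 2, b_3 = -3 (all others zero).
T = 1.0
N = 100000
x = np.linspace(0.0, T, N, endpoint=False)
dx = T / N
f = 0.5 + 2.0 * np.cos(2 * np.pi * x / T) - 3.0 * np.sin(6 * np.pi * x / T)

def c(n):
    return (1 / T) * np.sum(f * np.exp(-2j * np.pi * n * x / T)) * dx

# c(0) = 0.5, c(1) = (2 - 0i)/2 = 1, c(3) = (0 - i*(-3))/2 = 1.5i
```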
Finally, we discuss more concepts related to the process we used in solving for the coefficients $a_{0}$, $a_{n}$, and $b_{n}$. As we have already discussed, these coefficients express “how much” of the waves with frequency equal to $\frac{n}{T}$ are in the function $f(x)$. We can now abstract this idea to define the Fourier transform $\hat{f}(\xi)$ of a function $f(x)$ as follows:

$$\hat{f}(\xi)=\int_{-\infty}^{\infty}f(x)e^{-2\pi ix\xi}dx$$
There are of course versions of the Fourier transform that use the sine and cosine functions instead of the exponential function, but the form written above is more common in the literature. Roughly, the Fourier transform $\hat{f}(\xi)$ also expresses “how much” of the waves with frequency equal to $\xi$ are in the function $f(x)$. The difference lies in the interval over which we are integrating; however, we may consider the formula for obtaining the coefficients of the Fourier series as taking the Fourier transform of a single cycle of a periodic function, with its value set to $0$ outside of the interval occupied by the cycle, and with variables appropriately rescaled.
The Fourier transform has an “inverse”, which allows us to recover $f(x)$ from $\hat{f}(\xi)$:

$$f(x)=\int_{-\infty}^{\infty}\hat{f}(\xi)e^{2\pi ix\xi}d\xi$$
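A classic sanity check of this pair of formulas (my own sketch, not from the post) uses the Gaussian $e^{-\pi x^{2}}$, which under this convention is its own Fourier transform; we can verify this by direct numerical integration:

```python
import numpy as np

# Under the convention f_hat(xi) = integral of f(x) exp(-2*pi*i*x*xi) dx,
# the Gaussian exp(-pi*x^2) is its own Fourier transform.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x ** 2)

def fourier_transform(xi):
    # Direct numerical integration; the tails beyond |x| = 10 are negligible
    return np.sum(f * np.exp(-2j * np.pi * x * xi)) * dx

# fourier_transform(xi) is (approximately) exp(-pi*xi^2) for any xi
```

The same computation with $e^{+2\pi ix\xi}$ in place of $e^{-2\pi ix\xi}$ implements the inverse transform, and composing the two recovers the original function.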
Fourier analysis, aside from being an interesting subject in itself, has many applications not only in other branches of mathematics but also in the natural sciences and in engineering. For example, in physics, the Heisenberg uncertainty principle of quantum mechanics (see More Quantum Mechanics: Wavefunctions and Operators) comes from the result in Fourier analysis that the more a function is “localized” around a small area, the more its Fourier transform will be spread out over all of space, and vice versa. Since the probability amplitudes for the position and the momentum are related to each other as the Fourier transform and the inverse Fourier transform of each other (a result of the de Broglie relations), this manifests in the famous principle that the more we know about the position, the less we know about the momentum, and vice versa.
Fourier analysis can even be used to explain the distinctive “distorted” sound of electric guitars in rock and heavy metal music. Usually, plucking a guitar string produces a sound wave that is roughly sinusoidal. For electric guitars, the sound is amplified using transistors; however, there is a limit to how much amplification can be done, and at a certain point (technically, this is when the transistor is operating outside of the “linear region”), the sound wave looks like a sine function with its peaks and troughs “clipped”. In Fourier analysis this clipping corresponds to the addition of higher-frequency components, and this results in the distinctive sound of that genre of music.
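This clipping effect is easy to see with a discrete Fourier transform. The sketch below (my own illustration, not from the post) hard-clips an overdriven sine wave and compares the spectra:

```python
import numpy as np

# Hard-clipping a pure tone adds higher-frequency (odd harmonic)
# components that the clean tone lacks.
N = 4096
t = np.arange(N) / N
clean = np.sin(2 * np.pi * 8 * t)           # a pure tone: 8 cycles
clipped = np.clip(3.0 * clean, -1.0, 1.0)   # overdriven, then clipped

clean_spec = np.abs(np.fft.rfft(clean)) / N
clipped_spec = np.abs(np.fft.rfft(clipped)) / N

# The clean tone has energy only at bin 8; the clipped tone also has
# energy at the odd harmonics (bins 24, 40, ...).
```

The harder the clipping, the closer the wave gets to a square wave and the stronger these added harmonics become.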
Yet another application of Fourier analysis, and in fact its original application, is the study of differential equations. The mathematician Joseph Fourier, after whom Fourier analysis is named, developed the techniques we have discussed in this post in order to study the differential equation expressing the flow of heat in a material. It so happens that difficult calculations, for example differentiation, involving a function correspond to easier ones, such as simple multiplication, involving its Fourier transform. Therefore it is a common technique to convert a difficult problem to a simple one using the Fourier transform, and after the problem has been solved, we use the inverse Fourier transform to get the solution to the original problem.
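The correspondence between differentiation and multiplication can be sketched concretely (my own example; the frequency convention matches the series above). For a periodic function, taking the derivative amounts to multiplying the $n$-th Fourier coefficient by $2\pi in$:

```python
import numpy as np

# Differentiating a periodic function corresponds to multiplying its
# (discrete) Fourier transform by 2*pi*i*k, where k is the frequency.
N = 256
x = np.arange(N) / N                       # one period of length 1
f = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)

k = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies 0, 1, ..., -1
df = np.fft.ifft(2j * np.pi * k * np.fft.fft(f)).real

# Exact derivative, for comparison:
exact = 2 * np.pi * np.cos(2 * np.pi * x) - 3 * np.pi * np.sin(6 * np.pi * x)
```

This is exactly the trick described above: transform, do the easy operation (multiplication), then transform back.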
Despite the crude simplifications we have assumed in order to discuss Fourier analysis in this post, the reader should know that it remains a deep and interesting subject in modern mathematics. A more general and more advanced form of the subject is called harmonic analysis, and it is one of the areas where there is much research, both on its own, and in connection to other subjects.
Vibrations and Waves by A.P. French
Fourier Analysis: An Introduction by Elias M. Stein and Rami Shakarchi