Forum Discussion
Would you recommend using Excel's complex-number functions where performance is required?
I have (finally) achieved another Excel formulation of the Cooley-Tukey decimation-in-time (DIT) method for computing the Discrete Fourier Transform (DFT). The core of the implementation is to use the new array-shaping functions so that multiplications involving partitioned matrices can be carried out as simple array products using broadcasting.
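For orientation, a single radix-2 DIT combining stage on two-column [Re, Im] arrays could be sketched as below. The names are mine, not the workbook's, and the comments use the Advanced Formula Environment's // syntax:

    DITStageλ = LAMBDA(Ere, Eim, Ore, Oim,          // even/odd half-DFTs as N/2-row columns
        LET(
            n2, ROWS(Ere),                          // half length, N/2
            k,  SEQUENCE(n2, 1, 0),                 // k = 0 … N/2 - 1
            wr, COS(-PI() * k / n2),                // Re(w_k), where w_k = exp(-2πik/N)
            wi, SIN(-PI() * k / n2),                // Im(w_k)
            tr, wr * Ore - wi * Oim,                // Re(w_k·O_k), a simple broadcast product
            ti, wr * Oim + wi * Ore,                // Im(w_k·O_k)
            HSTACK(
                VSTACK(Ere + tr, Ere - tr),         // real parts of X_k and X_{k+N/2}
                VSTACK(Eim + ti, Eim - ti)          // imaginary parts
            )
        )
    );

The point is that the twiddle multiplication and both butterfly outputs become plain elementwise array arithmetic once the data are shaped as columns.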
I am coming to think of the manipulation and presentation of 2D arrays as the core functionality of any spreadsheet. What TOCOL and WRAPCOLS achieve is to arrange any multidimensional dataset so that one dimension represents the array operation whilst the other represents a stacking of the remaining dimensions, which allows the operator to act on a 'for each element' basis.
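As a minimal illustration of that reshaping pattern, on sample data rather than the FFT workbook itself:

    = LET(
        data, SEQUENCE(4, 6),                       // a sample 4×6 dataset
        flat, TOCOL(data, , TRUE),                  // scan by column into a 24×1 vector
        grp,  WRAPCOLS(flat, 8),                    // restack as 8×3: each column is one 'slice'
        BYCOL(grp, LAMBDA(c, SUM(c)))               // a scalar operation applied to each slice
    )

One dimension (the rows of grp) carries the operation; the other stacks whatever dimensions remain.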
Modern Excel is pretty good at this now, provided the array operation itself returns a scalar value; for the majority of problems one is somewhat screwed (to use a technical phrase that should be quoted with caution). I, for one, am getting irritated by the number of times I have to resort to REDUCE/VSTACK when I really meant to use SCAN, BYCOL, or MAP, which whine about nested arrays even when the sole purpose of the entire calculation was to deliver a set of nested arrays!
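The workaround I mean is the REDUCE/VSTACK accumulation; a hypothetical MapRowsλ makes the pattern explicit (it handles exactly the case MAP rejects, a function returning an array rather than a scalar):

    MapRowsλ = LAMBDA(array, fn,                    // fn returns an array, not a scalar
        REDUCE(
            fn(INDEX(array, 1, 0)),                 // seed with the first row's result
            SEQUENCE(ROWS(array) - 1, 1, 2),        // remaining row indices 2 … n
            LAMBDA(acc, i,
                VSTACK(acc, fn(INDEX(array, i, 0))) // stack each row's result below the last
            )
        )
    );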
Fortunately, this particular workbook was an exception, in that one of the dimensions was always devoted to the real/imaginary parts, and even I can manage arrays of size two without resorting to full-blown array algebra!
- lori_m, Feb 22, 2023 (Iron Contributor)
That's a significant performance increase; yours beats mine now! It goes to show lambda functions can scale well when the built-in array-shaping and multiplication functions are utilised.
Mine essentially follows the introductory section of https://www.osti.gov/servlets/purl/1559731 with N = N1 × N2, but without implementing the suggested radix-4 algorithm. The attachment includes a lambda random-number generator for testing; I actually came across a few references to DFTs being used for testing randomness, but I wouldn't have high expectations for my simple xorshift lambda function.
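For what it's worth, a 32-bit xorshift step can be written with Excel's bit functions. This is a sketch of the standard Marsaglia shifts (13, 17, 5), not necessarily what is in the attachment:

    Xorshift32λ = LAMBDA(state,                     // state: an integer in [1, 2^32)
        LET(
            mask, 2 ^ 32 - 1,                       // keep intermediate results within 32 bits
            a, BITAND(BITXOR(state, BITLSHIFT(state, 13)), mask),
            b, BITXOR(a, BITRSHIFT(a, 17)),
            BITAND(BITXOR(b, BITLSHIFT(b, 5)), mask)   // next state; divide by 2^32 for U(0,1)
        )
    );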
- PeterBartholomew1, Feb 28, 2023 (Silver Contributor)
I think this is the file I sent to you attached to a LinkedIn message, but I will put it here before I lose it! Having written the FFT, I made the slight modifications needed to produce the IFFT, and then went on to implement the two functions as a convolution: complex multiplication of the two FFTs, followed by applying the IFFT to the product.
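In outline, and assuming FFTλ/IFFTλ that return two-column [Re, Im] arrays (the names here are placeholders, not the workbook's):

    Convλ = LAMBDA(x, y,
        LET(
            X,  FFTλ(x),     Y,  FFTλ(y),           // transforms of the two inputs
            Xr, TAKE(X,, 1), Xi, TAKE(X,, -1),      // split each into real/imaginary columns
            Yr, TAKE(Y,, 1), Yi, TAKE(Y,, -1),
            prodRe, Xr * Yr - Xi * Yi,              // real part of the complex product
            prodIm, Xr * Yi + Xi * Yr,              // imaginary part of the complex product
            IFFTλ(HSTACK(prodRe, prodIm))           // convolution theorem: conv = IFFT(FFT × FFT)
        )
    );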
The uses I had in mind were for calculating the depreciation of capital expenditure or the cash balances that result from the delay between invoice and payment. What reminded me of all this was a blog post by Jon Peltier.
Improved Excel Lambda Moving Average - Peltier Tech
The Convolveλ function represents overkill, taking 2.5 ms against 0.5 ms for Jon's direct approach, but I rather like the way the convolution can be written so that the average does not lag the data points. I might be a bit more competitive were we averaging 4096 points over a spread of 256.
= Convolveλ(datarange, nPoints, 32)

- PeterBartholomew1, Mar 03, 2023 (Silver Contributor)
The convolution seems to be pretty flexible, both in having a number of different applications and in allowing experimentation with the effect of selecting different kernels for the smoothing. In my latest attempt I chose to use the binomial coefficients as the kernel; on looking it up, I found this is known as a Gaussian filter, published in 1958.
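A sketch of such a kernel (the name is hypothetical; the normalised binomial coefficients tend towards a Gaussian shape as n grows, by the central limit theorem):

    BinomKernelλ = LAMBDA(n,                        // a smoothing kernel of n + 1 points
        LET(
            k, SEQUENCE(n + 1, 1, 0),               // k = 0 … n
            COMBIN(n, k) / 2 ^ n                    // row n of Pascal's triangle; sums to 1
        )
    );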
Both the smoothness and the fit look pretty good to me, though Jon's dataset is so noisy that it is difficult to judge.
I was surprised by the work you published on GitHub. Whilst I had anticipated that there would be others working with the FFT who would be far more knowledgeable than I (I had never even thought to look under the covers of the discrete form of the Fourier transform), I did not anticipate that any would be working in Excel! I wrongly assumed it would all be Python or MATLAB/Simulink.