Forum Discussion
Would you recommend using Excel's built-in complex-number functions where performance is required?
That is some impressive coding for Excel formulas. Sorry I missed your original post. Do you actually work with FFTs or was it just a case of experimentation with modern Excel?
I found the theoretical basis of the FFT algorithm somewhat heavy-going and finished up by implementing a version with a double recursion. I am now looking at following
Nonrecursive Fast Fourier Transform - YouTube
to see what a non-recursive form might look like.
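For the record, the double recursion is just the textbook radix-2 decimation-in-time split into even- and odd-indexed samples. A minimal sketch using the built-in complex text functions might look like the following, assuming FFTλ is created as a defined name so that it can call itself and that the input length is a power of two (// marks a comment, Advanced Formula Environment style; this is an illustration, not my workbook's code):

FFTλ = LAMBDA(x,                                  // x: n×1 column of complex values, n a power of 2
    LET(
        n, ROWS(x),
        IF(n = 1, x,
            LET(
                even, FFTλ(INDEX(x, SEQUENCE(n/2, 1, 1, 2))),    // first recursive call
                odd,  FFTλ(INDEX(x, SEQUENCE(n/2, 1, 2, 2))),    // second recursive call
                w,    MAP(SEQUENCE(n/2, 1, 0), LAMBDA(k, IMEXP(COMPLEX(0, -2 * PI() * k / n)))),
                t,    MAP(w, odd, LAMBDA(a, b, IMPRODUCT(a, b))),
                VSTACK(
                    MAP(even, t, LAMBDA(a, b, IMSUM(a, b))),
                    MAP(even, t, LAMBDA(a, b, IMSUB(a, b)))
                )
            )
        )
    )
);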
As for the original question on the in-built complex function calculation, your answer is very much in line with my expectations. Thanks for the confirmation.
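For anyone comparing the two approaches, the same complex product can be written either with the text-based complex functions or with the real and imaginary parts held as plain numbers, which is essentially how the later workbooks keep the data (an illustration of the arithmetic only, not a benchmark):

= IMPRODUCT(COMPLEX(3, 4), COMPLEX(1, -2))          // returns the text "11-2i"

= LET(
    ar, 3,  ai, 4,                                  // first factor 3+4i
    br, 1,  bi, -2,                                 // second factor 1-2i
    HSTACK(ar*br - ai*bi, ar*bi + ai*br)            // returns {11, -2} as numbers
)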
I have (finally) achieved another Excel formulation of the Cooley-Tukey Decimation in Time (DIT) method for implementing the Discrete Fourier Transform. The core of the implementation is to use the new array shaping functions to allow multiplications involving partitioned matrices to be carried out as simple array products using broadcasting.
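As a toy illustration of the broadcasting (a real-valued signal and made-up factors, not the workbook's formula): once the data are wrapped into a partitioned matrix, a single column of factors multiplies every partition at once.

= LET(
    x,     SEQUENCE(8),                           // a length-8 'signal', real for simplicity
    parts, WRAPCOLS(x, 4),                        // 4×2 partition: col 1 = x(1..4), col 2 = x(5..8)
    w,     COS(2 * PI() * SEQUENCE(4, 1, 0) / 8), // a 4×1 column of factors
    w * parts                                     // the 4×1 column broadcasts across the 4×2 array
)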
I am getting to think of the manipulation and presentation of 2D arrays as the core functionality of any spreadsheet. What TOCOL and WRAPCOLS achieve is to arrange any multidimensional dataset so that one dimension represents the array operation whilst the other represents a stacking of the remaining dimensions, which allows the operator to act on a 'for each element' basis.
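A small example of that way of working (nothing to do with the FFT itself): each row of a 2D block plays the part of one element, BYROW applies the operation to each of them, and TOCOL / WRAPROWS move between the stacked and rectangular layouts.

= LET(
    data,    SEQUENCE(4, 3),                      // 4 'elements' of 3 values each
    perRow,  BYROW(data, LAMBDA(r, SUM(r))),      // a scalar result for each element
    stacked, TOCOL(data),                         // 12×1: the dimensions stacked
    rebuilt, WRAPROWS(stacked, 3),                // back to the original 4×3 shape
    HSTACK(rebuilt, perRow)                       // the data alongside its row totals
)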
Modern Excel is pretty good at this now, provided the array operation itself returns a scalar value; for the majority of problems one is somewhat screwed (to use a technical phrase that needs to be quoted with caution). I for one am getting to be irritated by the number of times I have to resort to REDUCE/VSTACK when I really meant to use SCAN, BYCOL or MAP, which whine about nested arrays even when the sole purpose of the entire calculation was to deliver a set of nested arrays!
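The workaround in question, for anyone who has not hit the restriction yet: the helper returns a whole row, which MAP refuses to nest, so the rows are accumulated with REDUCE/VSTACK instead (a sketch with a made-up row-building lambda):

= LET(
    rowλ, LAMBDA(i, i * SEQUENCE(1, 3)),          // returns a 1×3 array, so MAP would give #CALC!
    out,  REDUCE(SEQUENCE(1, 3, 0, 0), SEQUENCE(4),
              LAMBDA(acc, i, VSTACK(acc, rowλ(i)))),
    DROP(out, 1)                                  // discard the dummy seed row
)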
Fortunately this particular workbook was an exception, in that one of the dimensions was always devoted to real/imaginary parts and even I can manage arrays of size two, without resorting to full-blown array algebra!
- PeterBartholomew1 | Feb 21, 2023 | Silver Contributor
Some further information for the record. The spreadsheet ran with 64K rows, taking 2360ms. More realistic was the 20ms for 1K rows, growing linearly to 80ms for 8K rows. Far more usable than I had anticipated!
- lori_m | Feb 22, 2023 | Iron Contributor
That's a significant performance increase; yours beats mine now! It goes to show that lambda functions can scale well when the built-in array shaping and multiplication functions are utilised.
Mine essentially follows the introductory section of this link https://www.osti.gov/servlets/purl/1559731 with N = N1 x N2, but without implementing the suggested radix-4 algorithm. The attachment includes a lambda random number generator for testing - I actually came across a few references to using DFTs to test randomness, but wouldn't have high expectations for my simple xorshift lambda function.
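For reference, a minimal 32-bit xorshift step can be written with Excel's bit functions along the following lines - a generic sketch of the standard 13/17/5 variant rather than exactly what is in the attachment:

Xorshift32λ = LAMBDA(state,                       // state: an integer in 1 .. 2^32-1
    LET(
        m, 2 ^ 32,
        a, MOD(BITXOR(state, BITLSHIFT(state, 13)), m),
        b, BITXOR(a, BITRSHIFT(a, 17)),
        c, MOD(BITXOR(b, BITLSHIFT(b, 5)), m),
        c                                         // the next state; divide by 2^32 for a value in [0,1)
    )
);

Chained with SCAN it produces a column of pseudo-random values, e.g. = SCAN(123456789, SEQUENCE(1000), LAMBDA(s, k, Xorshift32λ(s))) / 2^32.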
- PeterBartholomew1 | Feb 28, 2023 | Silver Contributor
I think this is the file I sent you attached to a LinkedIn message, but I will put it here before I lose it! Having written the FFT, I made the slight modifications needed to produce the IFFT and then went on to implement convolution by complex multiplication of the two FFTs, applying the IFFT to the product.
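The structure is just the convolution theorem. Stripped of the housekeeping, and with the complex arrays held as n×2 blocks of real and imaginary parts, the core is something like the following (a sketch with hypothetical FFTλ / IFFTλ names rather than the attached code):

Convλ = LAMBDA(x, y,                              // x, y: n×2 arrays of [real, imaginary]
    LET(
        F,  FFTλ(x),                              // hypothetical forward transform, returning n×2
        G,  FFTλ(y),
        Fr, CHOOSECOLS(F, 1),  Fi, CHOOSECOLS(F, 2),
        Gr, CHOOSECOLS(G, 1),  Gi, CHOOSECOLS(G, 2),
        P,  HSTACK(Fr*Gr - Fi*Gi, Fr*Gi + Fi*Gr), // pointwise complex product
        IFFTλ(P)                                  // hypothetical inverse transform of the product
    )
);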
The uses I had in mind were for calculating the depreciation of capital expenditure or the cash balances that result from the delay between invoice and payment. What reminded me of all this was a blog post by Jon Peltier.
Improved Excel Lambda Moving Average - Peltier Tech
The Convolveλ function represents overkill, taking 2.5ms rather than 0.5ms for Jon's direct approach, but I rather like the way the convolution can be written so that the average does not lag the data points. I might be a bit more competitive were we averaging 4096 points over a spread of 256.
= Convolveλ(datarange, nPoints, 32)
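For comparison, the 'does not lag' point just means the window is centred on each data point. A direct (non-FFT) version of that behaviour might look like the following, shown only to pin down what the convolution reproduces - it is neither Jon's code nor my attached lambda:

MovingAvgλ = LAMBDA(data, nPoints,                // data: an n×1 column; nPoints: window width
    LET(
        n,    ROWS(data),
        half, QUOTIENT(nPoints, 2),
        MAP(SEQUENCE(n), LAMBDA(i,
            LET(
                first, MAX(i - half, 1),          // window centred on point i,
                last,  MIN(i + half, n),          // clipped at both ends of the data
                AVERAGE(INDEX(data, SEQUENCE(last - first + 1, 1, first)))
            )
        ))
    )
);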