Forum Discussion
A LAMBDA Exercise
Hi Patrick
I am not so sure. If anything, the new array-shaping functions reduce the need for formulas that relate indices in order to generate array transformations. I followed your link and posted a formula solution; it required 200 ms to generate a 50,000 x 12 array of results.
The question of locating an element of an array is a little unusual to my mind. More often it is a case of mapping an entire array to a new layout, though one could conceivably play 'find the "X"'. One thing I have done to get a handle on some of the new array shaping functions is to apply them to a data set but, in parallel, to apply the same formula to an equivalent index array.
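For example (just a sketch, assuming a single-column range named dataVector), running the same reshaping formula over the data and over a matching index array shows exactly which original element lands in each cell of the result:
= WRAPROWS(dataVector, 4)
= WRAPROWS(SEQUENCE(ROWS(dataVector)), 4)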
Then again, maybe this isn't what you are trying to achieve!!
Essentially, I was working on a solution where I had created an array and then needed to check each element in the array to determine if it would be a 1 or 0. I was using MAP with a dummy range of identical dimensions to obtain Row and Column. The problem was getting the array to know when to stop filling in 1s.
For example, Item 1 shows 3. By the time I get to the 4th position in the array, I need to tell it to stop filling in 1s even though it is still concerned with Item 1.
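For concreteness (not the MAP construction itself, just a simpler broadcast comparison, assuming a column of counts named itemCounts and a 12-column result):
= IF(SEQUENCE(1, 12) <= itemCounts, 1, 0)
Each row then gets as many 1s as its count before switching to 0s.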
I hope that makes sense. I may revisit de-stacking with a clearer head.
- PeterBartholomew1 · Jun 30, 2022 · Silver Contributor
I am not sure I fully understand the scenario, and it may well be that I am simply agreeing with Matt. That said, I see that helper functions like MAP and SCAN perform calculations element by element without an obvious means of addressing adjacent elements of the array. In such a situation, my suggestion would be to scan an index array rather than scanning the target array directly. Elements of the target array could then be returned by use of the INDEX function, e.g.
= MAP(indexArray, LAMBDA(k, AVERAGE( INDEX( targetArray, k+{0,1,2} ) ) ) )
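Here indexArray might be something like
= SEQUENCE(ROWS(targetArray) - 2)
so that the three-element window k+{0,1,2} stays within targetArray.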
Am I on the right track?
- Patrick2788 · Jul 01, 2022 · Silver Contributor
Yes, I think you and mtarler have it correct. I have a way (SCAN with XLOOKUP, using SEQUENCE for both the lookup array and the return array) to reset sequential numbering (e.g. 1, 2, 3, 4, 5, 6, 1, 2, 3, etc.), but I don't believe it can be scaled to large data sets.
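Not the SCAN/XLOOKUP construction itself, but as a rough sketch of the idea (assuming a Boolean column named isNewGroup that is TRUE wherever the numbering should restart):
= SCAN(0, isNewGroup, LAMBDA(acc, flag, IF(flag, 1, acc + 1)))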
I still need to study your workbook from the destacking thread. When I'm at my desk at work I don't have access to an Insider build so I'm missing out during typical working hours!
- mtarler · Jun 30, 2022 · Silver Contributor
I think what you are getting at could be explained using an n-point weighted average filter (similar to a recent post I answered). Think of a set of data going up and down but with a lot of noise. You can take the average of the points around each spot as a way to smooth the curve. Say you want a 5-point average: starting with data point 3, you average data points 1, 2, 3, 4, 5 and get a new value for that location. Then at point 4 you average 2, 3, 4, 5, 6 for a new point there, and so on. You can also add weight factors like 0.1, 0.2, 0.4, 0.2, 0.1 so that the replacement value is weighted more by the center points and less by the outside points. So basically, as you SCAN or MAP through, you want to access 'nearby' points to use in the calculation of the 'present' point (it doesn't have to be an average; it could be making a replica of that minefield game that inserts a number based on the number of bombs in adjacent cells).
So we CAN use INDEX if we know we want a 3-point average, but as n goes up the number of INDEX references needed goes up with it (or a recursive call might be possible); either way it is bulky, complicated, and inefficient, and a native capability could be much better.
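For illustration, a 5-point weighted version built on that INDEX-over-an-index-array pattern (a sketch only, assuming a single-column range named data) might look like:
= MAP(SEQUENCE(ROWS(data) - 4), LAMBDA(k, SUMPRODUCT({0.1,0.2,0.4,0.2,0.1}, INDEX(data, k + {0,1,2,3,4}))))
It works for a fixed window, but every change to n means rewriting the offset and weight constants, which is the scaling problem described above.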
That all said, if I totally got your intentions (@Patrick2788) wrong I apologize and ignore this. 🙂