Forum Discussion
Patrick2788
Jun 25, 2022 · Silver Contributor
A LAMBDA Exercise
For this exercise I stripped down a calendar template and put in some sample data. The goal is to obtain gross pay for July for 3 employees. The data arrangement: I believe there are seve...
Patrick2788
Jun 30, 2022 · Silver Contributor
I'd have to step you through my thinking regarding de-stacking.
Essentially, I was working on a solution where I had created an array and then needed to check each element to determine whether it should be a 1 or a 0. I was using MAP over a dummy range of identical dimensions to obtain the row and column of each element. The problem was getting the array to know when to stop filling in 1s.
For example, Item 1 shows a count of 3. By the time I get to the 4th position in that row of the array, I need to tell it to stop filling in 1s even though it's still working on Item 1 (rough sketch below).
I hope that makes sense. I may revisit de-stacking with a clearer head.
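Something like this is what I was attempting, though sketched here with MAKEARRAY rather than MAP over a dummy range, and with a made-up counts array standing in for my real data:

=LET(
    counts, {3;4;2},
    width, MAX(counts),
    MAKEARRAY(ROWS(counts), width,
        LAMBDA(r, c, IF(c <= INDEX(counts, r), 1, 0)))
)

Each row r fills 1s only while the column number c is within that item's count, so row 1 (count 3) comes out as 1, 1, 1, 0: the 4th position stops on its own instead of me having to tell it when to stop.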
mtarler
Jun 30, 2022 · Silver Contributor
I think what you are getting at could be explained using an n-point weighted average filter (similar to a recent post I answered). Think of a set of data going up and down but with a lot of noise. You can take the average of the points around each spot as a way to smooth the curve. Let's say you want a 5-point average: starting with data point 3, you average data points 1, 2, 3, 4, 5 and get a new value for that location. Then at point 4 you average 2, 3, 4, 5, 6 for a new point there, and so on. You can also add weight factors like 0.1, 0.2, 0.4, 0.2, 0.1 so that the replacement value is weighted more toward the center points and less toward the outside points. So basically, as you SCAN or MAP through, you want to access nearby points to use in the calculation of the 'present' point (it doesn't have to be an average; it could be making a replica of that minefield game that inserts a number based on the # of bombs in adjacent cells).
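A minimal sketch of that 5-point weighted average with the current dynamic-array functions, with A1:A20 standing in as a placeholder for the data (the two points at each end are just passed through rather than smoothed):

=LET(
    d, A1:A20,
    n, ROWS(d),
    MAP(SEQUENCE(n), LAMBDA(i,
        IF(OR(i < 3, i > n - 2), INDEX(d, i),
            0.1*INDEX(d, i-2) + 0.2*INDEX(d, i-1) + 0.4*INDEX(d, i)
            + 0.2*INDEX(d, i+1) + 0.1*INDEX(d, i+2))))
)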
So we CAN use INDEX if we know we want a 3-point average, but as n goes up you need two more INDEX calls at every step (or a recursive call might be possible); either way it is bulky, complicated, and inefficient, and a native capability could be much better.
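For what it's worth, one way to avoid hard-coding the INDEX calls is to pull the whole window out at once. Again just a sketch, with w as the weight vector (it should sum to 1) and the same pass-through at the edges:

=LET(
    d, A1:A20,
    w, {0.1;0.2;0.4;0.2;0.1},
    h, (ROWS(w) - 1) / 2,
    n, ROWS(d),
    MAP(SEQUENCE(n), LAMBDA(i,
        IF(OR(i <= h, i > n - h), INDEX(d, i),
            SUMPRODUCT(w, INDEX(d, SEQUENCE(ROWS(w), 1, i - h))))))
)

Still clunky compared to what a proper native windowing function could offer, though.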
That all said, if I totally got your intentions (@Patrick2788) wrong, I apologize; just ignore this. 🙂