Nov 20 2021 10:51 PM
@Sergei Baklan @Peter Bartholomew @lori_m @Chris_Gross
Using a single formula involving dynamic array formulas, I am trying to convert data
- from this format
- to this format.
Seeking help from the learned friends.
Nov 21 2021 02:10 AM
As for me, that's much easier to do with Power Query, as in the attached. What's the reason to do that with lambdas?
Nov 21 2021 04:01 AM
A key part of the solution is to develop a Lambda function that will return a record given the centre and transaction numbers
= LAMBDA(centre,transaction,
LET(
blkSrNos, INDEX(Original,,1),
blockHdrRow, XMATCH({1;2;3}, blkSrNos),
field, SEQUENCE(1,7),
idx, XLOOKUP(centre, blockHdrRow,blockHdrRow,,1),
IF(field<=3,
INDEX(Original, idx, field),
INDEX(Original, idx+1+transaction,field-2))
)
)(2,2)
which can be called using
= NewRecordλ(2,2)
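With the record function defined, a full table can in principle be generated by evaluating a record for every centre/transaction pair. The following is a sketch only, assuming three centre blocks and four transactions per block (both counts are assumptions; irregular block sizes would need the dimension indices discussed later in the thread):
= LET(
    nCentres, 3,
    nTrans,   4,
    MAKEARRAY(nCentres * nTrans, 7,
        LAMBDA(r, c,
            LET(
                centre, 1 + QUOTIENT(r - 1, nTrans),
                trans,  1 + MOD(r - 1, nTrans),
                INDEX(NewRecordλ(centre, trans), 1, c)
            )
        )
    )
)
Each output row r is mapped back to a (centre, transaction) pair, and INDEX picks column c from the 1×7 record that NewRecordλ returns.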
Nov 21 2021 08:19 AM
Nov 21 2021 08:24 AM
Nov 21 2021 03:35 PM
Solution
I owe you an apology. I rushed the previous formula before taking the dogs out and introduced an error. I think this is correct.
= LAMBDA(Centre,Transaction,[field],
LET(
k, SEQUENCE(ROWS(TblOriginal)),
blockNum?, ISNUMBER(centreSrNo),
blockHdrRow, FILTER(k,blockNum?),
fld, IF(ISOMITTED(field), SEQUENCE(1,7), field),
idx, INDEX(blockHdrRow,Centre),
IF(fld<=3,
INDEX(Original, idx, fld),
INDEX(Original, idx+1+Transaction,fld-2))
)
)(3,4)
As @Sergei Baklan suggested, PQ is purpose-written for this type of data manipulation and should be the first strategy considered. What I have tried to do in the attached is to take the restructuring in two steps. The first is to write a Lambda function capable of reading any given data item from the original table, treating it as an array in 3 dimensions (Centre, Unit, Transaction).
To normalise the data from there requires one to build a set of dimension indices corresponding to records within the intended normalised table. 'CentreNo', 'UnitNo' and 'FieldNo' do this, but they are not that easy to create for irregular lists.
Having created the first Lambda function, it is also possible to omit the step of creating a normalised form of the dataset altogether and simply work with the original. One loses the flexibility of the pivot table but, in some instances, generating the desired results requires less work than normalising the data.
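For instance, a total for one centre could be computed straight from the original layout by mapping the record function over the transaction numbers. A sketch only: it assumes four transactions and that field 5 holds the value to be summed (neither is confirmed by the thread):
= SUM(MAP(SEQUENCE(4), LAMBDA(t, NewRecordλ(2, t, 5))))
MAP returns the field-5 value for each of the four transactions of centre 2, and SUM aggregates them.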
Another formula of note within the solution is the FillDownλ function used to calculate the 'CentreNo' array
= LAMBDA(values,
SCAN(0, values,
LAMBDA(prev,curr,
IF(ISNUMBER(curr), curr, prev))
)
)
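As an illustration of the fill-down behaviour, applying the function to a column in which numbers appear only on block-header rows (a made-up example):
= FillDownλ({1;"x";"x";2;"x"})
returns {1;1;1;2;2}: SCAN carries the last numeric value forward until the next number is met.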
As yet, I would not claim to know what constitutes good practice when it comes to using Lambda functions. There is too much 'learning as one goes' for that and, as always, multiple strategies are frequently available.
Nov 21 2021 04:26 PM
Nov 21 2021 06:21 PM
Nov 22 2021 01:32 AM
Something that may not be obvious is that I converted the original dataset to a Table; it may not look like one because I suppressed filters, banded rows and headers. The Table allows structured references that expand as data is inserted or appended. The PQ approach requires a refresh following any data update, whereas the dynamic arrays should live up to their billing and recalculate automatically. Then again, fitting a data refresh into the business process should not be that burdensome.
As for closing the thread, selecting a reply as the best response serves that purpose, though it is still possible to post to the thread. Do not worry about offending Sergei or myself; we may follow ideas and make recommendations but, ultimately, it is your decision as to the solution that best meets your needs.
Nov 26 2021 08:51 PM