Microsoft Tech Community

Transposing data related to same ID in row

Copper Contributor


I have a table where column A holds an ID, column B the date of a measurement, and column C the measurement itself.

I would like to transpose the rows sharing the same ID into columns, so that column A holds the ID, B the date of the first measurement, C that measurement, D the date of the second measurement, E that measurement, and so on.
One ID can have between 1 and 11 different measurements.

Could someone help me with this problem?

I have 2300 rows to look through and I would love to find a quick solution. 

Thank you in advance.

Here's an example of how it is now and how I want it to look:

[screenshot: current long-format table]

Which I would like to transform like this:

[screenshot: desired wide-format layout]

(The variable names and the numbers in the screenshots are placeholders, not the ones I actually work with.)


9 Replies




This returns the intended output in my Excel for the web sheet. The table in this example is named "Tabelle7"; replace that name with the name of your own table.

transpose related data.png



A 365 solution.


Reduce, the new workhorse!


=LET(
    uID, SORT(UNIQUE(Table1[ID])),
    Pivot, LAMBDA(a, v,
        LET(
            scores, TOROW(FILTER(Table1[[nscore_date]:[nscore]], Table1[ID] = v)),
            IFERROR(VSTACK(a, HSTACK(v, scores)), "")
        )
    ),
    DROP(REDUCE("", uID, Pivot), 1)
)
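For anyone who wants to check the accumulate-and-stack logic outside Excel, here is a minimal Python sketch of the same wide pivot. The column names and sample data are illustrative, not taken from the original workbook:

```python
def pivot_wide(rows):
    """Group (id, date, score) rows into one wide row per ID:
    [id, date1, score1, date2, score2, ...]."""
    grouped = {}
    for rec_id, date, score in rows:
        # Append this measurement's date/score pair to the ID's row.
        grouped.setdefault(rec_id, []).extend([date, score])
    # Sorted by ID, mirroring SORT(UNIQUE(...)) in the formula.
    return [[rec_id, *vals] for rec_id, vals in sorted(grouped.items())]

data = [
    (1, "2024-01-05", 10),
    (1, "2024-02-05", 12),
    (2, "2024-01-09", 7),
]
# pivot_wide(data) -> [[1, "2024-01-05", 10, "2024-02-05", 12],
#                      [2, "2024-01-09", 7]]
```

Unlike the formula, rows here simply stay ragged; Excel pads the shorter rows with error values, which the IFERROR wrapper turns into empty strings.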




An SQLite alternative; the '</td><td>' literals are HTML cell separators, so the concatenated result renders directly as table rows:

select id,
       group_concat(nscore_date || '</td><td>' || nscore, '</td><td>') AS detail
from stack_related_to_id
group by id;



@OliverScheurich@peiyezhu@Patrick2788, thank you for your input, it worked!

I now want to copy/paste this new table into a separate worksheet, but Excel displays an error because the second table depends on the first.
How do I "break" this dependency so I can put the new data presentation in its own worksheet without the original table?


Reduce, the new workhorse!


Yes, but ...

Although REDUCE does not suffer from the 'array of arrays' problem that cripples SCAN, MAP, BYROW, BYCOL, it does have performance limitations when used with VSTACK to gather prior results.  I have recently found that recursive bisection performs better, despite being even more convoluted in terms of its logic than usual!

Worksheet formula

= LET(
    uID, SORT(UNIQUE(Table1[ID])),
    HSTACK(uID, BMAPλ(uID, Pivotλ))
  )

where the Lambda helper function BMAPλ is given by

= LAMBDA(X, Fnλ,
    LET(
        n, ROWS(X),
        Y, IF(
            n > 1,
            LET(
                ℓ, n - QUOTIENT(n, 2),
                X₁, TAKE(X, ℓ),
                X₂, DROP(X, ℓ),
                Y₁, BMAPλ(X₁, Fnλ),
                Y₂, BMAPλ(X₂, Fnλ),
                IFERROR(VSTACK(Y₁, Y₂), "")
            ),
            Fnλ(X)
        ),
        Y
    )
  )


and Pivotλ wraps the single-ID lookup:

= LAMBDA(v,
    TOROW(FILTER(Table1[[nscore_date]:[nscore]], Table1[ID] = v))
  )

What it does is bisect the list of IDs until only one is left and returns the result associated with that ID.  It then evaluates the result from the other ID of the final pair and stacks the result.  It then repeats with the next pair of IDs and so on back up the calling tree.
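The control flow of BMAPλ can be sketched in Python — a structural analogue only, with list concatenation standing in for VSTACK (the names here are illustrative, not part of the original formula):

```python
def bmap(xs, fn):
    """Recursively bisect xs, apply fn to each element, and stack
    the partial results back up the call tree."""
    n = len(xs)
    if n <= 1:
        # Base case: a single element, so just apply the function.
        return [fn(x) for x in xs]
    split = n - n // 2          # mirrors ℓ = n - QUOTIENT(n, 2)
    # Stack the two half-results, mirroring VSTACK(Y₁, Y₂).
    return bmap(xs[:split], fn) + bmap(xs[split:], fn)

# bmap([3, 1, 4, 1, 5], lambda v: v * 10) -> [30, 10, 40, 10, 50]
```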


Once you are into thousands of rows the performance differences can be large. For example, I had to rework one solution because REDUCE was taking over 3½ minutes to evaluate; with BMAPλ this dropped to under a second.




@Peter Bartholomew 

This is certainly a creative workaround! I can see where calculation speed is much better than REDUCE/VSTACK. If I'm reading this correctly, you're significantly reducing the number of times VSTACK is employed by bisecting the data.


If there are 4 unique IDs like in the example provided, REDUCE/VSTACK is run 4x.

With your workaround, VSTACK is run 2 times?


I don't think I am winning in terms of the number of VSTACK operations.  Where I gain is on the typical size of the operands.  For REDUCE, half the final stack (i.e. ½N) is typical for the first operand.  For bisection it is log₂N (after all, half the operations only involve two records).  I think it is the amount of memory set aside for the result of each VSTACK operation and the time taken to copy records to the fresh memory that brings everything to a standstill.
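A back-of-envelope model of the rows copied under each strategy supports this. The sketch below assumes each VSTACK copies both operands into fresh memory; the functions are illustrative, not from the thread:

```python
def reduce_copies(n):
    """Rows copied by the REDUCE approach: at step k the accumulator
    already holds k rows, and VSTACK copies it plus one new row."""
    return sum(k + 1 for k in range(1, n))

def bisect_copies(n):
    """Rows copied by recursive bisection: each VSTACK copies both
    halves, then the model recurses on each half."""
    if n <= 1:
        return 0
    left = n - n // 2           # same split as BMAPλ's ℓ
    return n + bisect_copies(left) + bisect_copies(n - left)

# For n = 1024 IDs: reduce_copies(1024) -> 524,799 rows copied
# (quadratic growth), bisect_copies(1024) -> 10,240 (n·log₂n).
```

So the gap between the two grows roughly as N/log₂N, which is consistent with minutes collapsing to seconds once N reaches the thousands.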


I had been pretty much ready to write recursion off as a novelty, replaced by helper functions that are more efficient as well as easier to use. I think this puts a somewhat esoteric form of recursion back into play!