Formula challenge: re-shape and cleanup


You're tasked with re-shaping and cleaning up the fictional Faber College alumni list to be used for a mail merge.


Going from this:


To this:


To spice things up, the First and Last names must be in separate columns.  Additionally, 'Bellman' records must not be included in your list (they've already been contacted).


The rules are simple: the solution must be a formula.  VBA, Power Query, helper columns, and the fill handle are not allowed.


I've included my solution.  I believe the best solutions for this exercise will involve some of the newest functions currently available on the Beta channel.  I'm looking forward to seeing some creativity. Have at it!

23 Replies

@Patrick2788  So I don't have the Beta functions; here is an option without them.  I don't know how well your solution performs on the full data set, but mine crashes.  As soon as you get into the couple of thousands it starts to take a lot of time, and at some point before 10K it just crashes.  I added a LAMBDA function (AlumniX) purely to make it easy to select how many rows to test on.  It also tries to adjust that length to whole records (as long as it isn't tripped up by one of those gap rows).  Here is the formula:


          LET(name,INDEX(alum,MROUND(r+1,3)-2,MOD(r-1,3)+1),LEFT(name,SEARCH(" ",name))),
          LET(name,INDEX(alum,MROUND(r+1,3)-2,MOD(r-1,3)+1),MID(name,SEARCH(" ",name),LEN(name))),
     FILTER(list, INDEX(list,,3)<>"Bellman"))
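For what it's worth, the MROUND/MOD pair in the fragment above maps a flat field index r onto the source grid - each record appears to sit on one row of three columns, with blank rows in between. A quick Python sketch of the equivalent integer arithmetic (my reading of the fragment, not a translation of the full formula):

```python
def field_position(r):
    """Map flat field index r (1-based) to (row, col) in the source grid,
    mirroring MROUND(r+1,3)-2 and MOD(r-1,3)+1 from the formula."""
    row = 3 * ((r - 1) // 3) + 1   # records land on rows 1, 4, 7, ...
    col = (r - 1) % 3 + 1          # cycle through the 3 fields of a record
    return row, col

# Fields 1-3 come from row 1, fields 4-6 from row 4, and so on.
print([field_position(r) for r in range(1, 7)])
# → [(1, 1), (1, 2), (1, 3), (4, 1), (4, 2), (4, 3)]
```

MROUND(r+1,3)-2 lands on rows 1, 4, 7, ... because MROUND rounds to the nearest multiple of 3.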


I did not have any trouble with your workbook. I'm running this on a new computer with 32 GB RAM and an i7 processor. I will try it on my old computer.

Did you try replacing AlumniX(200) with AlumniX(10000), or just Alumni? It works great at a couple hundred lines but got slow at a couple thousand, and I'm pretty sure it 'broke' around 10000 - forget about 100000+. That said, my computer has been slogging lately and is well overdue to be replaced.

I tried it with 'Alumni' and Excel refused to calculate it - it returned a 0. There was no memory or CPU spike; it appears to be a limitation of MAKEARRAY. I tried running it in the Excel web app for grins - it wouldn't calculate either.

@Patrick2788 Although you explicitly mentioned NOT to use PQ, I did it anyway. I'm a lazy person and favor easy solutions over complicated ones. Your formula is impressively short and performed lightning fast on my laptop (also a 32GB machine).

Please find the PQ script that I used below. It worked just as fast :))

    let
        Source = Excel.CurrentWorkbook(){[Name="alumni"]}[Content],
        RemoveBlankRows = Table.SelectRows(Source, each not List.IsEmpty(List.RemoveMatchingItems(Record.FieldValues(_), {"", null}))),
        SplitTbl = Table.Split(RemoveBlankRows, 3),
        Convert = Table.FromList(SplitTbl, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
        Transpose = Table.AddColumn(Convert, "Custom", each Table.Transpose([Column1])),
        Remove = Table.SelectColumns(Transpose, {"Custom"}),
        Expand = Table.ExpandTableColumn(Remove, "Custom", {"Column1", "Column2", "Column3"}),
        Split = Table.SplitColumn(Expand, "Column1", Splitter.SplitTextByEachDelimiter({" "}, QuoteStyle.Csv, true), {"Column1.1", "Column1.2"}),
        Filter = Table.SelectRows(Split, each not Text.Contains([Column2], "Bellman"))
    in
        Filter

With some knowledge of M (and I'm far from an expert) this isn't all that complicated and it's easy to follow. Thanks for the challenge, and sorry for violating its rules.
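For readers following along without Power Query, here is a rough Python sketch of the same pipeline, assuming (as the M script implies) a single-column alumni range with three cells per record; the function and sample names are invented:

```python
def reshape(rows):
    """Mirror the M steps: drop blank rows, group every 3 cells into a
    record, split the name on its last space, drop 'Bellman' records."""
    # RemoveBlankRows: a row is blank when every cell is empty or None
    kept = [r for r in rows if any(c not in ("", None) for c in r)]
    # SplitTbl + Transpose: 3 consecutive single-cell rows form one record
    records = [[r[0] for r in kept[i:i + 3]] for i in range(0, len(kept), 3)]
    out = []
    for name, field2, field3 in records:
        first, _, last = name.rpartition(" ")  # split on the LAST space
        if "Bellman" not in field2:            # the Filter step
            out.append([first, last, field2, field3])
    return out

sample = [["Larry Kroger"], ["Member"], ["lk@faber.edu"], [""],
          ["Kent Dorfman"], ["Bellman"], ["kd@faber.edu"]]
print(reshape(sample))
# → [['Larry', 'Kroger', 'Member', 'lk@faber.edu']]
```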

Thanks for sharing! I've been meaning to dive into PQ more and this is something I can study. Thank you!


I've done some more testing with MAKEARRAY. I've been feeding it large ranges trying to get it to quit but no luck.

Granted, there's no re-shaping being done but Excel is able to calculate this:

I'm interested in which part of @mtarler 's formula is causing Excel to refuse to calculate - presumably the presence of MAKEARRAY. Going off the calculation times posted by @Peter Bartholomew in the other discussion, it has to be MAKEARRAY.


Firstly, my congratulations on a very efficient and compact solution.


On the MAKEARRAY discussion, I do not think the function is itself a major problem, provided each calculation is a scalar evaluation.  Where I think I hit problems is where each calculation is actually a full array calc but uses INDEX to restrict the return to a single value.  Once each value comes at the price of a full array calc, the time builds as O(n²).
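Peter's point can be put in numbers with a toy sketch (Python, purely illustrative): if each of the n cells that MAKEARRAY produces triggers a full n-element array calculation just to INDEX out one value, the total work is n², versus n when each cell is a genuine scalar evaluation.

```python
def cost_scalar_per_cell(n):
    """Each cell does O(1) work -> O(n) total operations."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def cost_array_per_cell(n):
    """Each cell recomputes an n-element array and INDEXes one value
    out of it -> O(n^2) total operations."""
    ops = 0
    for _ in range(n):
        ops += n  # the hidden full-array calc behind every cell
    return ops

print(cost_scalar_per_cell(1000), cost_array_per_cell(1000))
# → 1000 1000000
```

At 10,000 records the second pattern is doing 100 million evaluations, which is consistent with the lock-ups reported above.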


My solution was both more complicated and more expensive than yours (x3) but it did allow me to explore some techniques that I thought might be usable for other problems.

= LET(
     Fnλ,          HSTACK(FirstWordλ,FinalWordλ,Identityλ,Identityλ),
     baseTbl,      WRAPROWS(TOCOL(alumni,1,1),3),
     extendedTbl,  CHOOSECOLS(baseTbl,{1,1,2,3}),
     u,            SEQUENCE(ROWS(extendedTbl),,1,0),
     Fnsλ,         CHOOSEROWS(Fnλ,u),
     processedTbl, MAP(Fnsλ,extendedTbl,Applyλ),
     FILTER(processedTbl,CHOOSECOLS(processedTbl,3)<>"Bellman"))

The key part of my experiment was to build an array of Lambda functions of the same dimension as the table to which it was to be applied.  Then the functions could be applied one by one using MAP.  My take is that the approach involves some pretty heavy computational steps but it remains O(n), so should not lock up as the problem size increases.
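The idea translates almost directly into other languages, which may help readers who have not seen it in Excel yet. A Python sketch under my own naming (FirstWordλ and friends are approximated; this is not Peter's actual workbook code):

```python
# Rough equivalents of FirstWordλ, FinalWordλ, Identityλ
first_word = lambda s: s.split(" ")[0]
final_word = lambda s: s.split(" ")[-1]
identity   = lambda s: s

# One function per column, conceptually replicated down every row,
# like CHOOSEROWS(Fnλ, u) with u = SEQUENCE(n,,1,0)
fns = [first_word, final_word, identity, identity]

def map_apply(table):
    """MAP(Fnsλ, extendedTbl, Applyλ): apply fns[j] to cell (i, j)."""
    return [[f(x) for f, x in zip(fns, row)] for row in table]

extended = [["Larry Kroger", "Larry Kroger", "Omega", "lk@faber.edu"]]
print(map_apply(extended))
# → [['Larry', 'Kroger', 'Omega', 'lk@faber.edu']]
```

Each row costs a fixed number of function applications, so the whole pass stays O(n), which is the property Peter is after.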



@Peter Bartholomew 

Thanks for sharing, Peter. I like your approach. It's very clean and easy to follow. MAP has tremendous potential when used with multiple arrays.  Later, when I'm at my other computer, I may extend the rows down to 1 million and have at it with the timer. CHOOSECOLS/CHOOSEROWS/TOCOL/HSTACK appear to be very efficient calculation-wise in handling large arrays.

When I designed this exercise, I was curious how others would go about removing the 'Bellman' records. From the other discussion, I recalled @Sergei Baklan mentioning that your use of FILTER was possibly slowing down your calculation, so I went in determined not to use FILTER.  Ultimately, I used it, thinking it would run smoothly if stored inside the NOBELL LAMBDA and called from my main formula.  FILTER seems to be the best approach for excluding records at the moment.


@Sergei Baklan might know better, but I do not think FILTER is especially slow, given that it evaluates the entire criterion for each row.  The conditional aggregation functions are faster because they don't evaluate secondary conditions once the first condition fails.  If you are only trying to extract one or two matches, XLOOKUP would be the preferred option because its bisection search requires only O(log n) steps.
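To illustrate the bisection point: a binary search over a sorted column touches roughly log2(n) entries, which is why XLOOKUP's binary-search mode wins when only one or two matches are needed. A small Python sketch using the standard library (sample names are invented):

```python
import bisect

def lookup(sorted_keys, key):
    """Binary search: O(log n) comparisons, analogous to XLOOKUP's
    binary-search mode on a sorted column."""
    i = bisect.bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return i
    return None  # not found

names = sorted(["Blutarsky", "Day", "Dorfman", "Kroger", "Schoenstein"])
print(lookup(names, "Kroger"))   # → 3
```

For a million sorted rows that is about 20 comparisons, versus a full scan of all rows for FILTER.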


Doing some more testing with this data set. I expanded the alumni list to just shy of 1 million rows. My solution and @Peter Bartholomew 's formula both produced #NUM! errors.  I found that HSTACK, when run by itself stacking FirstWord and FinalWord, failed at precisely 524,289 rows in the alumni named range.  I shortened the range and HSTACK worked again.


The main formulas still produced #NUM! errors because WRAPROWS, CHOOSECOLS, CHOOSEROWS, TOCOL, etc. had reached their limits and were producing #NUM! errors, too.  Granted, these functions are still in Beta, but it's fascinating to get an idea of what they can handle.  Maybe these limits will change once they're released.


For HSTACK it depends on both dimensions - number of stacked columns and number of rows in each. My short test gives


@Sergei Baklan 

There is a pattern here! (grin)
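In case the pattern is what I suspect: the reported failure threshold sits one past a power of two, and two stacked columns of 524,288 rows make exactly 2^20 cells - the familiar Excel row limit. A quick check (my speculation, not confirmed):

```python
# HSTACK of 2 columns reportedly failed at 524,289 rows, worked at 524,288.
print(2 ** 19)                  # → 524288
# 2 columns x 524,288 rows = 2^20 cells, the classic 1,048,576 limit.
print(2 * 524_288 == 2 ** 20)   # → True
```

That would also square with Sergei's observation that the limit depends on both dimensions, not the row count alone.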


Speaking of how people removed the 'Bellman' records: I noticed in a couple of solutions that it might be more efficient if that step were moved up earlier. If I read the code right, in some cases it could be done before splitting the first and last names, which means the name-splitting step would act on a slightly smaller set.


It has been an interesting exercise; we have learnt something about the limits of HSTACK, but it has also pushed me into trying strategies that were previously only possibilities.  I had to modify your named Lambda to apply it at the start of my formula without invalidating the existing version:

= LAMBDA(arr, col, FILTER(arr, CHOOSECOLS(arr, col) <> "Bellman"));

Overall, I was pleasantly surprised at how easy it was to return to an existing formula and modify it using modern methods.  A traditional formula nested 7-deep would not be fun though, to be fair, there would probably have been a whole raft of helper columns to ease the pain.

For me, the function that I am most likely to reuse is

= LAMBDA(λ, x, λ(x));

Lambdas don't have to be long to be useful!



I've put thought into removing the 'Bellman' records as early as possible. The way I designed this problem (placing blank rows intermittently) makes it very difficult to pull those records before using TOCOL and WRAPROWS. I've considered using MAP to pull the records, but the blank rows are an issue (and using MAP to clean up before TOCOL/WRAPROWS is probably not very efficient).  I had also considered BYROW to identify the rows to be used in the array but, again, the blank rows would have to be removed first.  FILTER appears to be the best option at the moment. Maybe the calculations are a bit quicker if the 'Bellman' records are pulled earlier. It would certainly have to be done if the record set were much larger and the 'Bellman' records had to be removed to allow HSTACK to run.

My first draft of this exercise actually included arbitrary header information every thousand rows or so (the sheet would've appeared to have been pasted from a poorly designed Word document).  I like creating a puzzle and working toward a solution when it seems impossible. I might revisit that first draft.

If you wish to add to the pain, delete Ron Palmer's email address (row 2) and see what happens.

@Peter Bartholomew 

At that point I think it's still doable but maybe not worth the trouble. I'd still ignore blanks with TOCOL, and then I think it comes down to writing logic.  I'd use the @ to identify complete records:

Text - Keep
Text - Keep
@ - Keep
Text - Drop
Text - Drop
Text - Keep
Text - Keep
@ - Keep
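That keep/drop walk can be sketched as a single pass over the flattened cells: every cell containing '@' anchors a complete record, and it plus the two cells before it are kept. A Python sketch of the logic (hypothetical data; three-cell records ending in an email, per the pattern above):

```python
def keep_complete_records(tokens):
    """Keep each '@' cell and the two cells immediately before it;
    everything else (fields of a record whose email was deleted) drops."""
    keep = set()
    for i, t in enumerate(tokens):
        if "@" in t:
            keep.update({i - 2, i - 1, i})
    return [t for i, t in enumerate(tokens) if i in keep]

tokens = ["Larry Kroger", "Omega", "lk@faber.edu",
          "Ron Palmer", "Delta",              # email deleted -> dropped
          "Kent Dorfman", "Delta", "kd@faber.edu"]
print(keep_complete_records(tokens))
# → ['Larry Kroger', 'Omega', 'lk@faber.edu', 'Kent Dorfman', 'Delta', 'kd@faber.edu']
```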