Sep 19 2024 12:06 PM - edited Sep 20 2024 01:50 PM
I'm curious what you all think of this approach for managing state in Excel with minimal computational impact. I'm sure performance may suffer after a few thousand different tracked items, but the formulas appear to produce static values that rarely need to be recalculated. Skip towards the end if you just want to see the formulas. (P.S. I updated the code to make it simpler and a bit more robust against invalid data)
I think one of the important ideas here is that this formula can be used within a table column like any other formula that doesn't return an array; the difference is that calculations are not lost between rows, and the only information a calculation needs is the limited state carried in the prior row.
Any comments welcome. If you've found a better way to do this, I'd be curious to know. I am limiting myself to using only native excel formulas/functionality plus excel labs.
P.S. The formulas contain one or two undeveloped ideas because they are not essential. For instance, there is one inline lambda that uses a SWITCH to return a comparison function for data masking, and it implements only the equals and not-equals operators.
********** Description **********
The general idea is this: write a series of formulas that serialize and deserialize arrays, and combine them with an efficient mechanism to access the most recent state. The example tracks inventory lots, where each lot has a quantity and a cost; transactions either increase the quantity (and cost) or decrease it. The state being managed is the set of existing lots, their quantities, and their costs.
columns are simple:
control; date; lot; qty; cost; state
So here is the overall structure:
table formula checks control column to see if the current row impacts the state -
if it doesn't, return nothing,
if it does, move on
if it is the first control number, just serialize the row data
if it isn't, find the prior state and update it
The way it updates the state is by
1) deserializing the prior state
2) determining whether the prior state already included the current row's item
3) updating that value if so, or just returning the current row's values,
4) filtering the prior state to remove the current row's item
5) vstacking the filtered array onto the current row's entry (potentially updated), and
6) serializing the current state.
Using 40 rows as an example, I might end up with a serialized state that looks like this:
lot_H,65,650;lot_I,100,1000;lot_J,45,450;lot_K,70,700;lot_L,30,300;lot_A,135,1350;lot_B,150,1500;lot_C,145,1450;lot_D,90,900;lot_E,120,1200;lot_F,80,800;lot_G,170,1700
and deserializes to this:
lot_A 135 1350
lot_B 150 1500
lot_C 145 1450
lot_D 90 900
lot_E 120 1200
lot_F 80 800
lot_G 170 1700
lot_H 65 650
lot_I 100 1000
lot_J 45 450
lot_K 70 700
lot_L 30 300
using this function: SORT(array.deserialize_string(I57))
table formula:
=LET(
current_trans_array, HSTACK([@lot], [@qty], [@cost]),
current_trans_date_no, [@date],
prior_state_date_no, MAX(FILTER([date], [date] < current_trans_date_no, current_trans_date_no)),
is_first_trans, (current_trans_date_no = prior_state_date_no),
prior_state_array, IF(
is_first_trans,
current_trans_array,
array.deserialize_string(XLOOKUP(prior_state_date_no, [date], [state], "Not found", 0))
),
updated_array, IF(
is_first_trans,
current_trans_array,
array.track_state(prior_state_array, current_trans_array)
),
result, array.serialize_array(updated_array),
result
)
module formulas:
deserialize_string = LAMBDA(state_string, TEXTSPLIT(state_string, ",", ";", FALSE));
serialize_row = LAMBDA(row, TEXTJOIN(",", FALSE, row));
serialize_array = LAMBDA(state_array,
REDUCE(
"",
BYROW(state_array, LAMBDA(r, serialize_row(r))),
LAMBDA(acc, current_row, IF(acc = "", current_row, acc & ";" & current_row))
)
);
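As a quick sanity check of the module formulas, a round trip from array to text and back can be tested from the sheet (the sample lot values here are made up). Note that TEXTSPLIT returns text, so numbers come back as text values - which is why the sums in track_state are guarded with zero_if_empty:
=LET(
state, VSTACK(HSTACK("lot_A", 10, 100), HSTACK("lot_B", 5, 50)),
as_text, array.serialize_array(state),
as_array, array.deserialize_string(as_text),
as_array
)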
track_state =
LAMBDA(
prior_state_array, current_array,
LET(
zero_if_empty, LAMBDA(x, IF(ISNUMBER(VALUE(x)), x, 0)),
current_id, INDEX(current_array, 1, 1),
current_id_not_found_flag, HSTACK(current_id, 0, 0),
is_only_item,
AND(
ROWS(prior_state_array) = 1,
INDEX(prior_state_array, 1, 1) = current_id
),
filtered_array,
FILTER(
prior_state_array,
CHOOSECOLS(prior_state_array, 1) <> current_id,
current_id_not_found_flag
),
is_new_item,
IF(
ROWS(prior_state_array) = ROWS(filtered_array),
AND(filtered_array = prior_state_array),
FALSE
),
prior_array,
IF(
is_new_item,
current_id_not_found_flag,
FILTER(
prior_state_array,
CHOOSECOLS(prior_state_array, 1) = current_id,
current_id_not_found_flag
)
),
updated_current_array,
HSTACK(
current_id,
zero_if_empty(INDEX(prior_array, 1, 2)) + zero_if_empty(INDEX(current_array, 1, 2)),
zero_if_empty(INDEX(prior_array, 1, 3)) + zero_if_empty(INDEX(current_array, 1, 3))
),
final_array,
IF(
is_only_item,
updated_current_array,
VSTACK(filtered_array, updated_current_array)
),
SORT(FILTER(final_array, CHOOSECOLS(final_array, 2) <> 0))
)
);
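For instance (the values here are hypothetical), applying a partial outflow of lot_A against a two-lot prior state should leave lot_B untouched and reduce lot_A to a quantity of 6 and a cost of 60:
=array.track_state(
array.deserialize_string("lot_A,10,100;lot_B,5,50"),
HSTACK("lot_A", -4, -40)
)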
Sep 21 2024 02:25 PM
I really need to see the formula in the context of its workbook. You are doing a number of things that I wouldn't do but that does not mean they are wrong; they are worth exploring. I think you are calculating running totals for a number of products. The calculations I have experience of are simpler in that only one product stream is considered at a time but more complex in that it matches outputs to inputs on a FIFO basis.
I normally move outside the table to perform such calculations, using arrays and spill ranges rather than the inherent replication of table formulas. There seems to be some benefit to your approach in that persistent storage holds the prior step of a cumulative calculation. The downside would appear to be that the serialised data representing the evolving state could become cumbersome, so it appears there are swings and roundabouts to be evaluated.
I would suggest you use your serialised data to hold the current state but record as little accumulated data as possible. Since I would be using an array calculation, I would have access to every step returned by the calculation, but that means the entire calculation is processed each time data is added. I am sorry that my ability to visualise and evaluate your proposal has proved somewhat limited.
Sep 23 2024 08:27 AM
Hi Peter,
Thanks for replying. I've attached a primitive example for you - at this point I haven't actually implemented it in a meaningful data set (though I have demonstrated it works across thousands of lines of data).
The motivation is overcoming the limitations of excel's requirement for every cell to return a value, i.e. you can't return a thunk or any other fancy container/wrapper to a cell and then reference that cell for its value. This would, theoretically, allow you to persist the intermediate calculations in involved formulas without having to solve the array of array problems (or using the name manager to name intermediate calculation results).
Obviously some refinement is needed (such as using better row/column delimiters) and, if taken seriously, data compression techniques could be applied automatically (removing all but the most essential data and encoding what remains to the extent reasonable). If done correctly, with about 30k characters available in a string, you can save quite a lot of information in one of Excel's most efficient data structures (strings are saved as entire discrete values and given a single lookup key).
A bigger application of the idea, for instance, would be assigning individual lot numbers to every inflow of a security and being able to update each lot's quantity and remaining cost basis according to whatever method the formula using the array wants - specific identification this time, FIFO that time, LIFO another time, cost averaging another, etc. Because the entire lot state is persisted, a single outflow transaction of securities can relatively simply affect all impacted security lots - whether that be one or a thousand. For instance, if you are using an average method over 15 lots of IBM shares, one formula could just search for lots of IBM in the prior state and adjust their values to the new remaining shares/value on a pro rata basis. Because the state was persisted, updating the lots is basically a simple filter of one table consisting exclusively of the updated information for each remaining lot.
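To make that concrete, the pro-rata adjustment might be sketched like this (everything here is hypothetical: prior_state_text and sell_qty are assumed names, and I'm assuming lot ids embed the ticker, e.g. "IBM_01") - scale the quantity and cost of every matching lot by the fraction of shares remaining and leave other lots alone:
=LET(
prior, array.deserialize_string(prior_state_text),
ids, CHOOSECOLS(prior, 1),
qty, VALUE(CHOOSECOLS(prior, 2)),
cost, VALUE(CHOOSECOLS(prior, 3)),
is_ibm, ISNUMBER(SEARCH("IBM", ids)),
total_qty, SUM(FILTER(qty, is_ibm)),
ratio, IF(is_ibm, (total_qty - sell_qty) / total_qty, 1),
array.serialize_array(HSTACK(ids, qty * ratio, cost * ratio))
)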
Even if this type of persistence were limited to something like once every thousand rows, with the in-between steps requiring full calculation/accumulation of all steps subsequent to the last persisted state, surely figuring out the state after 1,000 transactions is better than figuring it out after 10,000.
Sep 23 2024 02:02 PM
Your serialization appears to be well implemented and works. The methods we are developing are far removed from one another and the manner in which they scale is likely to be different. I tend to use thunks and perform entire calculations as one. I haven't given much thought to the way in which I could separate out 'historical' data to leave a smaller dynamic formula to bring the calculation up to date. Perhaps I should!
The idea of serialising the data appears to be considerably more complicated than assigning a field to stats from each lot but it should be more dynamic when new lots are introduced.
I have included the type of calculations I would turn to if faced with a problem such as that you describe, but it is so different that I doubt it will be of much value to you.
Sep 24 2024 02:43 PM
I haven't been of much value to you, but you have given me something to think about. Essentially what I do is record transactions (inflows and outflows) and perform the entire calculation from time zero each time a further transaction occurs. The array formulae are fast and this causes no difficulty at first. The point must come, though, where piling through decades of old data cannot be justified.
That would be where your idea of having persistent data to provide a complete description of the current state would be important. This would, in effect, provide a record of a stock-taking exercise and the focus would be the balances rather than the flows.
Since the MAPλ helper function I have used here is something I only completed recently, I have been presumptuous enough to use your problem to further test the functionality by applying it to a set of FIFO calculations corresponding to the individual Lots. The first image shows the result presented as an array of normalised tables
whilst the second has a grouping option selected (cross-tabs are also possible)
An exercise for the future is to create a persistently stored checkpoint that will support ongoing calculation.
Sep 24 2024 07:33 PM
You've been of help to me on lots of occasions, including this one. If the solution strikes you as meh, it is probably me who is missing the bigger picture. Generally speaking my solutions are solving excel problems that amount to less than a minute of compute time (if that) at the cost of dozens of hours of thinking about computer science and how I can shoehorn it into excel. What really gets me is that despite my effort to act as if I am capable of using excel labs to achieve efficiencies much closer to assembly language techniques, I am always forced to confront that Excel is a fully abstracted language whose internal calculations are non-transparent and whose own internal calculation engine is doing with the data/formulas what it likes (and I'm sure it makes better choices than I do).
The problem I solved here is one that vexes us both with no good solution: how can we use thunks (or any other lazy-evaluation solution to the array-of-arrays problem) as return values, and not just keep them wrapped up in the ether and scope-bound? Serializing to a string is a real possibility, I think, as inelegant as it may be. Given my preference for structured data, this is a legitimate way to get something tantamount to tail recursion happening within a table capable of auto expanding/contracting/updating and subject to re-ordering and filtering on a whim. It is really the re-sorting of tables of thousands of lines and more than two dozen columns that gives me anxiety.
Also, consider that this method allows you to store an array of potentially thousands of cells in a single cell, in a fully resolved form that can instantly be reconverted into a full-blown array capable of all of your array functions just by using a simple deserialization formula. I just don't think the Excel modules (in Excel Labs) or the standard Excel features were really designed to handle the sorts of arrays I want them to consider across thousands of cells.
Sep 25 2024 02:39 PM
Coming from an engineering background I am more used to the idea of formulating the numerical analysis problem and then solving it as efficiently as one can within a single computation. The process model by which transactions are processed one at a time as they arise is less familiar territory for me.
Excel, as a functional programming environment, will prevent you from changing the state iteratively as new transactions are processed. The use of a table which expands along with the input data would allow the new state to be recorded (possibly within hidden fields), and you have demonstrated that it is workable. I do have concerns that the demand on memory and processing (encoding/decoding) might be heavy if an entire copy of the data structure is held at every step.
On the other hand, repeating a complete recalculation from time zero on each occasion a further transaction is appended will also become unworkable as the data grows in volume. It might well be that some compromise solution is called for in which blocks of transactions are processed before the resulting state is stored in order to provide a checkpoint/restart.
As far as thunks are concerned, I see them as simple in-memory references to previously calculated results. Because of their ephemeral nature, I tend to perform quite large blocks of calculation and write back to the grid only the minimum state data required to make sense of the next timestep (opening balances typically, from which the complete model may be built for that timestep).
Although a table would persist calculated data far better than LET variables, that too is limited. Being formulae, the data only persists until the workbook is closed. You have to be able to afford a complete recalculation on workbook open.
Overall, I think the idea of capturing array data as text within one or more cells of a table has its place. I would aim to store the minimum of data required to restart the calculation and possibly only store it at intervals to provide checkpoint restarts if calculation from step to step turns out to be less resource intensive.
I am aware that I have not provided in-depth support for your idea and have even gone off at a tangent, exploring other ideas. I would have liked to be in a position to offer an expert assessment but in reality my comments are somewhat speculative and there are many points at which I would defer to your opinion.
Sep 28 2024 03:40 AM
This is an interesting discussion and I believe points to a limitation in Excel. What I would like to see is formulas being able to return more general 'Data Types' to cells including lambda definitions or array results so that data could be held in memory rather than output to the sheet. I've seen a few options over recent years that may be of relevance to this kind of scenario:
- The Python in Excel extension is taking steps in this direction by supporting dataframes within cells: https://support.microsoft.com/en-gb/office/python-in-excel-dataframes-a10495b2-8372-4f0f-9179-32771f...
- Add-in libraries that support 'object handles' (e.g. XLL+/XLW frameworks) which are of particular relevance with Real-Time Data where array sizes may be variable and for efficiency only dependent cells need be recalculated.
- Without any extensions, I think the suggested text serialisation approach is a decent workaround for smaller data sets, e.g. using ARRAYTOTEXT / TEXTJOIN functions. Another possibility might be to store transactional data as records (#rows, #cols, {data}) via the array-shaping functions TOROW / WRAPROWS - which might overcome the 32k character restriction, though tables could not then be utilised because the data would need to spill.
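That record idea might be sketched as a pair of formulas (my_array and stored_record are assumed names): store the dimensions in front of the flattened data, then reshape on the way back using the stored column count.
Store:   =LET(a, my_array, HSTACK(ROWS(a), COLUMNS(a), TOROW(a)))
Restore: =LET(rec, stored_record, WRAPROWS(DROP(rec, , 2), INDEX(rec, 1, 2)))
Both formulas spill, which is why, as noted above, this variant would not fit inside a table column.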