User Profile
joelb95
Brass Contributor
Joined 2 years ago
Recent Discussions
Re: CountA, Unique & Filter Combined Formula Query
This might be a little wordy for you, but try this formula instead:

=LET(
    count_range, A1:A5,
    criteria1, (A1:A5) = "Check",
    criteria2, (A1:A5) = "Check2",
    filtered_results, FILTER(A1:A5, criteria1 * criteria2),
    result, IF(ISERROR(filtered_results), 0, COUNTA(UNIQUE(filtered_results))),
    result
)

The basic idea is that COUNTA acts as a cell counter regardless of the contents of the cells it is given. Because FILTER returns a #CALC! error when there are no matching results (you did not choose a default value in your FILTER function), failing to match both of your criteria means FILTER returns the error, UNIQUE passes the error through, and COUNTA counts the cell containing the error value as 1. If you don't want error values counted, COUNTA alone can't help you; something like =SUM(IF(NOT(ISERROR(A1:A5)), 1)) would work instead.
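Another way to sidestep the #CALC! case entirely is to test for matches before filtering, so the error never arises. A minimal sketch, keeping the same hypothetical "Check"/"Check2" criteria:

=LET(
    matches, ((A1:A5) = "Check") * ((A1:A5) = "Check2"),
    IF(SUM(matches) = 0, 0, COUNTA(UNIQUE(FILTER(A1:A5, matches))))
)

Since SUM(matches) counts the matching rows up front, FILTER only runs when there is at least one result.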
Re: Advice Needed: Performing complicated nested if calculation

Is it possible to give a small sample of your data and expected outcomes?

Rather than thinking in terms of nested "if" type statements, I strongly recommend you think in terms of "flow", i.e. "given data that looks like X, I should treat it in Y way." Two major ways in Excel to express a thought like that are the IFS formula and the SWITCH formula, depending upon your particular circumstance. Think of IFS as what you use when the data you are considering is not simply different values of the same variable name. Because you mention multiple sources/types of data that would be relevant to how you perform your calculations, it is unlikely that a SWITCH would provide any conceptual benefits or readability enhancements.

For instance, a general description of your various scenarios using a flow-type approach might be: "When these conditions are met, apply this formula. When those conditions are met, apply that formula. When a third set of conditions is met, apply a third formula, and when none of these specified scenarios are present, carry on using a fourth formula." So now you just need to figure out what your conditions are and what you want to do (which is where you started).

Question 1: Where is the data related to your conditions?

If, for instance, you have a data table which includes independent variables, e.g. the date rate A changed, the date rate B changed, and no relation between A and B or the dates of change, things are different than if you have a complicated relationship between A and B, the dates of change, and the information is scattered all over the place. In the simplest case, you might have a data table with headers like "rate_change_date", "rate_type", "new_rate" which includes all rates from the inception of your data through the end of your data. In such a case, you might answer the question "Did a rate change during the period?" with an uncomplicated date filter. Try to focus on the question of your conditions and data structure independent of your questions about what/how to calculate, so that you have a clear logical structure in place that makes sense and is usable regardless of the calculation you end up needing to do.
For instance:

get_rate_changes_during_period = LAMBDA(period_start, period_end,
    FILTER(
        rate_change_table,
        (rate_change_table[date] <= period_end) * (rate_change_table[date] >= period_start),
        ""
    )
)

You might also have helper formulas like:

changed_rate_data_row = LAMBDA(rate_type, period_start, period_end,
    FILTER(
        rate_change_table,
        (rate_change_table[rate_type] = rate_type) *
        (rate_change_table[date] <= period_end) *
        (rate_change_table[date] >= period_start),
        FALSE
    )
)

prior_rate_data = LAMBDA(change_date, rate_type,
    TAKE(
        SORT(
            FILTER(
                rate_change_table,
                (rate_change_table[date] < change_date) * (rate_change_table[rate_type] = rate_type),
                FALSE
            )
        ),
        -1
    )
)

days_in_period_pre_and_post = LAMBDA(change_date,
    HSTACK(
        DAY(change_date) - 1, // days in the month before the change
        EOMONTH(change_date, 0) - change_date + 1 // days from the change through month end
    )
)

And then you start your formula with something like:

=LET(
    rate_change_table, YOUR_TABLE_REFERENCE_HERE,
    target_rate_type, EITHER_TYPE_A_OR_TYPE_B,
    period_start_date, DATE(2021, 1, 1),
    period_end_date, EOMONTH(period_start_date, 0),
    // Define the rate change row for the target type within the period
    period_change_row, changed_rate_data_row(target_rate_type, period_start_date, period_end_date),
    target_change_date, CHOOSECOLS(period_change_row, 1),
    // Retrieve the prior rate row for the target type before the change date
    prior_rate_row, prior_rate_data(target_change_date, target_rate_type),
    // Calculate the days of prior and post-change rate within the period
    days_of_prior_rate_and_post_rate, days_in_period_pre_and_post(target_change_date),
    // Select prior and changed rates
    prior_rate, CHOOSECOLS(prior_rate_row, 3),
    changed_rate, CHOOSECOLS(period_change_row, 3),
    // Continue with your subsequent logic here...
)

At this point, you have your relevant rate data: the date of change, the number of days in the month at each rate (assuming only one change per month), and the rate at the beginning of the month and after it changed. Next, follow a similar approach to retrieve any additional data conditions you might need, such as whether sales reached a certain percentage of total revenues in the period. Once you have all your conditions, you can combine them in a structured IFS formula as follows:

IFS(
    AND(period_change_row, sales_to_revenues_percentage > 0.4),
        calculation_for_tier_1(period_change_row, prior_rate_row, days_of_prior_rate_and_post_rate),
    AND(period_change_row, sales_to_revenues_percentage > 0.3),
        calculation_for_tier_2(period_change_row, prior_rate_row, days_of_prior_rate_and_post_rate),
    NOT(period_change_row),
        calculation_for_no_rate_change_in_period,
    TRUE,
        "Invalid data"
)

Explanation of the logic:
First condition: if period_change_row exists (indicating a rate change) and sales_to_revenues_percentage exceeds 40%, apply calculation_for_tier_1 with the relevant data.
Second condition: if period_change_row exists and sales_to_revenues_percentage exceeds 30% (but not 40%), apply calculation_for_tier_2.
Third condition: if there is no rate change (using NOT(period_change_row)), apply the calculation for periods with no rate change.
Fallback condition: if none of the conditions are met, return "Invalid data".

This formatting clarifies each logical tier and maintains the readability of your IFS formula, making it easier to troubleshoot and update as needed.
By cleanly separating the following elements:
- Data retrieval: collecting all relevant data points needed for calculations
- Flow paths: defining the possible calculations or actions to perform
- Conditions: identifying the specific data conditions that trigger each calculation
you can create a robust structure that allows for easy modification and reuse, and ensures future intelligibility for both yourself and others. This approach provides a clear, modular structure where each component (data retrieval, conditions, and calculations) remains separate and straightforward. This independence simplifies both maintenance and adaptability, making each part easy to adjust or expand without disrupting the others.
Re: Complex look up with output

Hi there, I can see you're looking to cross-reference a table of training courses with a report of students' completed courses, filtering dynamically to identify which courses each student is missing for their specific grade and level. Here's a structured way to think through this in Excel:

Step 1: Understand the data structure. To solve this problem, we'll use three tables:
tbl_courses: contains the course names and their required grade and level.
tbl_courses_completed: contains each student's name and the courses they've completed.
tbl_current_students: contains the current student list with their grade and level.
The goal is to filter these tables to get, for each student, a list of courses they still need to complete for their grade and level.

Step 2: Define the lookup process. For each student, you'll want to:
1. Find their grade and level: using the student's name in tbl_current_students, look up their specific grade and level.
2. List required courses for the grade and level: filter tbl_courses to retrieve the courses required for that grade and level.
3. Identify courses already completed: look up the student in tbl_courses_completed to find the courses they've finished.
4. Calculate missing courses: compare the required courses with the completed courses and list any that are missing.

Step 3: Build the formula. Here's an outline of a potential formula approach: use XLOOKUP to find the student's grade and level based on their name; use FILTER to list courses from tbl_courses that match the student's grade and level; with FILTER or MATCH, identify courses in this list that aren't found in tbl_courses_completed for that student; and if any courses remain, output them as a list for that student.

This approach could be set up in a third sheet with dropdown menus to select the student, grade, and level dynamically, displaying a list of any missing courses for easy review. You could use this formula on a case-by-case basis (change the name in the target cell and have the results displayed), or you could stack the results together for later use (example at the end).

Step 1: Define the data structure. The first table, with course names and grade levels, is "tbl_courses"; it has two columns, "course_name" and "grade_level". The second table, with student names and completed courses, is "tbl_courses_completed"; it has three columns, "student_name", "course_name", and "course_grade_level". The third table, with current student names and grade level, is "tbl_current_students"; it has two columns, "name" and "grade_level".
Step 2.1 - Find their grade and level:

current_student_grade = XLOOKUP(current_student_name, tbl_current_students[name], tbl_current_students[grade_level])

Step 2.2 - List required courses for the grade and level:

classes_for_current_student_grade = FILTER(tbl_courses[course_name], tbl_courses[grade_level] = current_student_grade)

Step 2.3 - Identify courses already completed:

current_student_grade_level_classes_taken = FILTER(
    tbl_courses_completed[course_name],
    (tbl_courses_completed[student_name] = current_student_name) *
    (tbl_courses_completed[course_grade_level] = current_student_grade)
)

Step 2.4 - Calculate missing courses:

has_taken_class_names_in_grade_boolean_mask = BYROW(
    classes_for_current_student_grade,
    LAMBDA(class_name, ISNUMBER(MATCH(class_name, current_student_grade_level_classes_taken, 0)))
)

current_student_grade_level_classes_not_taken = FILTER(
    classes_for_current_student_grade,
    NOT(has_taken_class_names_in_grade_boolean_mask),
    "all_classes_taken"
)

Lambda form for finding the missing classes for a particular student:

get_missing_classes = LAMBDA(current_student_name,
    LET(
        current_student_grade, XLOOKUP(
            current_student_name,
            tbl_current_students[name],
            tbl_current_students[grade_level]
        ),
        classes_for_current_student_grade, FILTER(
            tbl_courses[course_name],
            tbl_courses[grade_level] = current_student_grade
        ),
        current_student_grade_level_classes_taken, FILTER(
            tbl_courses_completed[course_name],
            (tbl_courses_completed[student_name] = current_student_name) *
            (tbl_courses_completed[course_grade_level] = current_student_grade)
        ),
        has_taken_class_names_in_grade_boolean_mask, BYROW(
            classes_for_current_student_grade,
            LAMBDA(class_name,
                ISNUMBER(MATCH(class_name, current_student_grade_level_classes_taken, 0))
            )
        ),
        current_student_grade_level_classes_not_taken, FILTER(
            classes_for_current_student_grade,
            NOT(has_taken_class_names_in_grade_boolean_mask),
            "all_classes_taken"
        ),
        current_student_grade_level_classes_not_taken
    )
);
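Once get_missing_classes is saved as a named lambda, checking a single student is a one-liner. A minimal usage sketch, assuming the student's name sits in a hypothetical cell B2:

=get_missing_classes(B2)

This spills the missing course names for that student, or returns "all_classes_taken" if nothing is outstanding.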
Now let's say you want to use this formula to populate an easily referenced data set. One way to do that is to put it into a table where the student's name and grade are appended to each missing class name, letting you filter by class, grade, or name, and create useful sets of data like "grade 3 students who need algebra" by filtering the grade column for 3 and the class name column for algebra. Here is one possible implementation:

=LET(
    list_of_students, tbl_current_students[name],
    number_of_students, ROWS(list_of_students),
    list_of_student_indices, SEQUENCE(number_of_students),
    get_student_name, LAMBDA(idx, INDEX(list_of_students, idx, 1)),
    get_student_grade, LAMBDA(student_name,
        XLOOKUP(student_name, tbl_current_students[name], tbl_current_students[grade_level])
    ),
    get_missing_classes_for_student, LAMBDA(current_student_name, current_student_grade,
        LET(
            classes_for_current_student_grade, FILTER(
                tbl_courses[course_name],
                tbl_courses[grade_level] = current_student_grade
            ),
            current_student_grade_level_classes_taken, FILTER(
                tbl_courses_completed[course_name],
                (tbl_courses_completed[student_name] = current_student_name) *
                (tbl_courses_completed[course_grade_level] = current_student_grade)
            ),
            has_taken_class_names_in_grade_boolean_mask, BYROW(
                classes_for_current_student_grade,
                LAMBDA(class_name,
                    ISNUMBER(MATCH(class_name, current_student_grade_level_classes_taken, 0))
                )
            ),
            current_student_grade_level_classes_not_taken, FILTER(
                classes_for_current_student_grade,
                NOT(has_taken_class_names_in_grade_boolean_mask),
                "all_classes_taken"
            ),
            current_student_grade_level_classes_not_taken
        )
    ),
    append_grade_and_name_to_missing_class_list, LAMBDA(student_idx,
        LET(
            student_name, get_student_name(student_idx),
            student_grade, get_student_grade(student_name),
            missing_classes, get_missing_classes_for_student(student_name, student_grade),
            HSTACK(
                EXPAND(student_name, ROWS(missing_classes), , student_name),
                EXPAND(student_grade, ROWS(missing_classes), , student_grade),
                missing_classes
            )
        )
    ),
    get_current_student_missing_class_array, REDUCE(
        append_grade_and_name_to_missing_class_list(1),
        DROP(list_of_student_indices, 1),
        LAMBDA(acc, student_idx,
            VSTACK(acc, append_grade_and_name_to_missing_class_list(student_idx))
        )
    ),
    headers, HSTACK("Student Name", "Grade Level", "Missing Class"),
    result, VSTACK(headers, get_current_student_missing_class_array),
    result
)

Let me know if any of this doesn't make sense or if there is an error somewhere. I didn't test the formulas, so some minor tweaks may be required.

[Edit: When I initially posted this reply, I indicated that ChatGPT wrote the intro after a bit of prompting, but it seems that people might have understood that GPT wrote the whole post or designed the solution. All ChatGPT did was make my initial narrative a little more friendly/accessible. At this stage, it is utterly incapable of designing/crafting/effectively implementing the solution I provided. I strongly caution you against taking ChatGPT's Excel solutions at face value or assuming that they are optimized/ideal by any measure.]
Re: Complex look up with output

ChatGPT had no idea how to do it. It just cleaned up the introduction a bit. ChatGPT is actually pretty bad at Excel (at least it is bad at functional programming in Excel). Tell me exactly what it is you need it to do and I can probably expand the formula as necessary.
Re: Excel - sorting rows by balance due

Generally speaking, the answer to such questions depends on your use case - do you just need to see the open balances, or do you need to be able to interact with the rows that have the open balances? If you just need to see them, you can write a dynamic array formula that uses SORTBY, FILTER, etc. and set up your data view however makes sense. If you need to edit the data, that is more complicated, since there is not, so far as I know, a way to have data in a table automatically sort as you add/change data.

The solution for viewing/editing the data in place (outside of a VBA-type solution or other more "outside Excel" tools as NikolinoDE suggested) is to manually update your sort. This can be accomplished in a couple of ways, but a relatively simple way (I believe PeterBartholomew1 uses this one) is to concatenate the columns you wish to sort by into a single helper column and then sort by that column. I believe he uses a character like "-" or some such to split the text strings and takes advantage of TEXTJOIN, but I could be wrong. Despite the helper column, I tend to think this method is probably the most conceptually efficient, as it involves a single, straightforward join formula combined with sorting as needed on the helper column. Keep in mind that Peter offered this solution before SORTBY was generally available.

What I would likely do, depending on the particular need, is write a formula to assign an integer to a helper column representing the sort order of that row and then sort by the helper column. So, for instance, I might take the table range which includes the columns I wish to sort by plus an additional "primary id" or "control number" type column, i.e. a column with relatively simple data to search for unique values, preferably sortable. If you don't have a primary column already, just add one, number it from 1 to whatever, but make sure the values are fixed (meaning you copy and paste the integer values if you number the rows using a formula). Then use SORTBY on the columns of interest and add a similar control number to the sorted array. You then XLOOKUP the original array's control number in the sorted array and return the sorted array's control number. After that setup, you can simply sort by the column with the sorted array's control number to get the current sorted state. The only thing you have to be sure to do is to continue adding control numbers to each new row of the array.

Note that the general difference between my approach and Peter's (besides my taking advantage of the new SORTBY formula) is that he sorts by the data directly, whereas I create an "id" that hides the data behind something that looks prettier. In both cases, the impact is the same - you are adding a new column to your data array that you have to manually sort by every time you want the most up-to-date ordering of your data.
So say you have an array named "sample_array" with these headers and values:

orig_unsorted_id | color | amount | balance
1                |       |        | 0
2                |       |        | 5
3                |       |        | 0
4                |       |        | 25
5                |       |        | 0

You add a column called "sort_order" and insert the following formula (I used a MATCH to get the column index for a header in case you aren't using Excel's table feature - if you are, you could just use, e.g., sample_array[orig_unsorted_id]):

=LET(
    target_orig_id, [@[orig_unsorted_id]],
    source_array, sample_array[#All],
    primary_id_col_name, "orig_unsorted_id",
    sort_by_col_name, "balance",
    source_array_data, DROP(source_array, 1),
    source_array_headers, TAKE(source_array, 1),
    balance_col, CHOOSECOLS(source_array_data, MATCH(sort_by_col_name, source_array_headers, 0)),
    orig_id_col, CHOOSECOLS(source_array_data, MATCH(primary_id_col_name, source_array_headers, 0)),
    sorted_orig_id_col, SORTBY(orig_id_col, balance_col, -1),
    return_array, SEQUENCE(ROWS(source_array_data)),
    XLOOKUP(target_orig_id, sorted_orig_id_col, return_array, "check_values")
)

You should end up with:

orig_unsorted_id | color | amount | balance | sort_order
1                |       |        | 0       | 3
2                |       |        | 5       | 2
3                |       |        | 0       | 4
4                |       |        | 25      | 1
5                |       |        | 0       | 5

You then sort by the "sort_order" column. Every time you update a value or add a new row, you just have to re-sort by the sort_order column. You should get the following after the first sort:

orig_unsorted_id | color | amount | balance | sort_order
4                |       |        | 25      | 1
2                |       |        | 5       | 2
1                |       |        | 0       | 3
3                |       |        | 0       | 4
5                |       |        | 0       | 5

You can enhance this sort method as necessary, including filtering the original array before sorting, filtering it after and re-sorting, etc.
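For the view-only case mentioned at the top of this reply, no helper column is needed at all; a single spill formula gives a sorted snapshot. A minimal sketch, assuming the same sample_array layout with balance in the fourth column:

=LET(
    data, DROP(sample_array[#All], 1),
    open_rows, FILTER(data, CHOOSECOLS(data, 4) > 0),
    SORTBY(open_rows, CHOOSECOLS(open_rows, 4), -1)
)

This returns only the rows with an open balance, largest balance first, and recalculates automatically as the source data changes.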
Re: How to Append Tables with Different Column Order with Automatic Refresh

You can either reorder the columns on import, reorder them once imported, or reorder them when combining. No matter when it happens, it will involve assigning the source columns to the correct final location. Theoretically, you could just sort the columns alphabetically if they had the same names/column count, but you said that the tables come in different column widths. So your best bet is really to find the most efficient way of doing your mapping and adapting that mapping to the method you pick for reordering. I'm sure people here would recommend Power Query for the task, because it is likely the easiest choice for you, but it doesn't sound like something you have used before.
Re: How to Append Tables with Different Column Order with Automatic Refresh

Your process sounds very manual, so I am imagining that somewhere in your workbook you have the four "source" tables that you update periodically and one combined table somewhere in a worksheet. What you need is a programmatic way to combine them on a sheet. The basic idea is that you need to make the columns stackable. I have no idea how many columns you have, but regardless, you need to form a "mapping" between what you have and what you want.

If your columns are always fixed and it is just the row data that changes, the problem is fairly simple - just directly map things like:

=VSTACK(CHOOSECOLS(table1, 3), CHOOSECOLS(table2, 6), CHOOSECOLS(table3, 4), CHOOSECOLS(table4, 2))

In this way, you'll get a single column with each of your source tables' related data included.

A second choice is to take advantage of standardized names. For instance, you can add a row above each source table where you write down the standard name of each column. You then use a match-based approach to put the columns into a VSTACK, e.g.:

"purple", "green", "blue", "red" for table 1
"apple", "plum", "blueberry" for table 2

You create a mapping for table 2, say "red", "purple", "blue", and then you get the proper column using XLOOKUP:

=XLOOKUP("red", table_2_mapping, table_2)

So now you can just stack:

=VSTACK(
    XLOOKUP("red", table_1_headers, table_1),
    XLOOKUP("red", table_2_mapping, table_2),
    etc.
)

Obviously, the more involved your column names, the more complicated the mapping. Well-structured data and consistent naming will go a long way toward making your situation easier.
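Putting the pieces together, the whole combined table can be built in one formula by walking the list of standard names and stacking each mapped column. A sketch under the same assumptions as above (table_1 and table_2 as data ranges, table_1_headers and table_2_mapping as the standardized name rows; the list of standard names is hypothetical):

=LET(
    standard_headers, HSTACK("red", "purple", "blue"),
    get_col, LAMBDA(std_name,
        VSTACK(
            XLOOKUP(std_name, table_1_headers, table_1),
            XLOOKUP(std_name, table_2_mapping, table_2)
        )
    ),
    REDUCE(
        get_col(INDEX(standard_headers, 1, 1)),
        DROP(standard_headers, , 1),
        LAMBDA(acc, std_name, HSTACK(acc, get_col(std_name)))
    )
)

XLOOKUP returns a whole column here because the lookup array is a one-row range and the return array has the same number of columns; REDUCE then stitches the stacked columns side by side in standard-name order.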
Re: Loan Amortization Schedule

P.S. This entire sheet can be made with a single formula if that is your thing, but the way I write it would require you to use a table for the additional payments.

P.P.S. Here you go. The attached file has both an in-cell formula version and a named-lambda version. There is room for change, certainly, but I think it should function on a base level.

=LET(
    total_periods, months_of_payments + years_of_payments * 12,
    monthly_rate, IF(
        OR(ISBLANK(effective_rate), ISOMITTED(effective_rate)),
        yearly_stated_rate / 12,
        (1 + effective_rate) ^ (1 / 12) - 1
    ),
    adj_future_value, IF(OR(ISOMITTED(future_value), ISBLANK(future_value)), 0, future_value),
    regular_payment, PMT(monthly_rate, total_periods, present_value, adj_future_value),
    is_principal_reduction, present_value > future_value,
    adj_payment, IF(is_principal_reduction, -regular_payment, regular_payment),
    has_additional_payments, OR(
        NOT(ISBLANK(additional_payment_amount_col)),
        NOT(ISOMITTED(additional_payment_amount_col))
    ),
    has_principal_increases, OR(
        NOT(ISBLANK(principal_increase_amount_col)),
        NOT(ISOMITTED(principal_increase_amount_col))
    ),
    period_column, SEQUENCE(total_periods),
    date_column, EOMONTH(start_date, SEQUENCE(total_periods, , 0)),
    calcCurPeriod, LAMBDA(cur_end_of_month_date, opening_balance,
        LET(
            cur_additional_payments, IF(
                has_additional_payments,
                SUM(FILTER(
                    additional_payment_amount_col,
                    (additional_payment_date_col <= cur_end_of_month_date) *
                    (additional_payment_date_col >= EOMONTH(cur_end_of_month_date, -1)),
                    0
                )),
                0
            ),
            cur_periodic_interest, opening_balance * monthly_rate,
            cur_increases_to_principal, IF(
                has_principal_increases,
                SUM(FILTER(
                    principal_increase_amount_col,
                    (principal_increase_date_col <= cur_end_of_month_date) *
                    (principal_increase_date_col > EOMONTH(cur_end_of_month_date, -1)),
                    0
                )),
                0
            ),
            cur_payment, MIN(
                (opening_balance + cur_periodic_interest + cur_increases_to_principal),
                -regular_payment
            ),
            cur_scheduled_payment_amount_applied_to_principal, cur_payment - cur_periodic_interest,
            cur_total_payments, cur_payment + cur_additional_payments,
            cur_total_increases_to_principal, cur_periodic_interest + cur_increases_to_principal,
            closing_balance, ROUND(
                opening_balance - cur_total_payments + cur_total_increases_to_principal,
                2
            ),
            values_through_additional_payments, HSTACK(
                cur_end_of_month_date,
                opening_balance,
                -regular_payment
            ),
            values_through_increases_to_principal, IF(
                has_additional_payments,
                HSTACK(
                    values_through_additional_payments,
                    cur_additional_payments,
                    cur_periodic_interest,
                    cur_scheduled_payment_amount_applied_to_principal
                ),
                HSTACK(
                    values_through_additional_payments,
                    cur_periodic_interest,
                    cur_scheduled_payment_amount_applied_to_principal
                )
            ),
            values_through_closing_balance, IF(
                has_principal_increases,
                HSTACK(
                    values_through_increases_to_principal,
                    cur_increases_to_principal,
                    closing_balance
                ),
                HSTACK(values_through_increases_to_principal, closing_balance)
            ),
            values_through_closing_balance
        )
    ),
    payment_schedule_headers_prior_to_additional_payments, HSTACK(
        "period",
        "end_date",
        "opening_balance",
        "scheduled_payment_amount"
    ),
    payment_schedule_headers_prior_to_principal_increases, IF(
        has_additional_payments,
        HSTACK(
            payment_schedule_headers_prior_to_additional_payments,
            "additional_payments_applied_to_principal",
            "periodic_interest",
            "scheduled_payment_applied_to_principal"
        ),
        HSTACK(
            payment_schedule_headers_prior_to_additional_payments,
            "periodic_interest",
            "scheduled_payment_applied_to_principal"
        )
    ),
    payment_schedule_headers_through_closing_balance, IF(
        has_principal_increases,
        HSTACK(
            payment_schedule_headers_prior_to_principal_increases,
            "increases_to_principal",
            "closing_balance"
        ),
        HSTACK(payment_schedule_headers_prior_to_principal_increases, "closing_balance")
    ),
    basic_schedule, DROP(
        REDUCE(
            present_value,
            period_column,
            LAMBDA(acc, cur_period,
                IF(
                    TAKE(acc, -1, -1),
                    VSTACK(
                        acc,
                        HSTACK(
                            cur_period,
                            calcCurPeriod(EOMONTH(start_date, cur_period - 1), TAKE(acc, -1, -1))
                        )
                    ),
                    acc
                )
            )
        ),
        1
    ),
    result, VSTACK(
        payment_schedule_headers_through_closing_balance,
        FILTER(basic_schedule, ISNUMBER(CHOOSECOLS(basic_schedule, 3)), "")
    ),
    result
)
Re: Loan Amortization Schedule

Do you have a particular format in mind? If I were to do it across multiple cells, it would go something like:

principal_amount =
interest_rate =
periods =
ordinary_payment = PMT()
first_date =
date = EDATE(first_date, SEQUENCE(periods))
scheduled_payment = IF(prior_balance > 0, ordinary_payment, 0)
additional_payment = (whatever value you want)
current_period_interest = IF(prior_balance > 0, prior_balance * interest_rate, 0)
total_principal_payment = additional_payment + (scheduled_payment - current_period_interest)
closing_balance = prior_balance - total_principal_payment
total_scheduled_payments = SUMIFS(scheduled_pmts_col, date_col <= date)
total_additional_payments = SUMIFS(additional_payments, date_col <= date)
total_interest_paid = SUMIFS(interest_col, date_col <= date)
total_principal_paid = total_additional_payments + (total_scheduled_payments - total_interest_paid)

Where there are blanks after the equals sign, you can either a) name the cell and enter a value in that cell, or b) insert a reference to the value you want (like the cell next to it). I would strongly suggest using a table so that you can sort and whatnot, but you can use the formulas described in just an ordinary sheet. If you use a table for the additional payments, rather than entering a value/reference for each additional payment, you can do:

additional_payment = SUMIFS(additional_payment_amt_col, (other_date_col <= date) * (other_date_col > EDATE(date, -1)))

The only downside of this method is that if you are interested in more precise interest adjustments based on the date of the additional payment, this formula won't do it (but it could be modified if you needed it to). Any other solution is really just a re-wrapping of this suggestion. You can always calculate the periodic payment in advance, but interest must be calculated based upon the outstanding balance from the prior period. Once you know that number, you can determine how much of the scheduled payment reduced principal. If you want to make additional payments, you just reduce the principal further.

And in case it is non-obvious, the reason for the dynamic formula for the date is that it will automatically change size based upon the criteria you use. If you just have the remaining formulas reference the date column (e.g. $A$5#) and place them next to it, you can fill down all of the formulas to match the size. You just need to delete everything after the first row of formulas if you want to repeat the fill-down. All of the other cell references can be normal; for instance, the prior balance is just E4 if your current ending balance would be E5. No need for OFFSET, INDIRECT, or anything fancy. The first row would get a slightly different formula if you have to worry about headers, but that is not a big deal. Just change the prior balance reference to be conditional, i.e. IF(NOT(ISNUMBER(prior_balance)), original_balance, prior_balance).
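To make that header-row caveat concrete, here is a sketch of what the closing-balance cell might look like, assuming (hypothetically) balances in column E starting at row 5, total principal payments in column F, and a named cell principal_amount:

E5: =IF(NOT(ISNUMBER(E4)), principal_amount, E4) - F5

Filled down, the first data row falls back to the original balance (because E4 holds a header, not a number), while every later row chains off the prior balance above it.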
Stacking row/columns of an array - some module formulas

I just wrote a few functions that should theoretically allow you to take an array, chunk it by rows or columns, and then stack the chunks in the other dimension, e.g. you have 10 rows by 3 columns and you want to stack them as a set of 5 rows by 6 columns. You should be able to use this to completely flatten an array (e.g. take each column and place them one under the other). It has optional parameters for separate fill values for rows and columns. I welcome any comments, suggestions, or alternative approaches.

By way of contrast, if you had an array like:

ctrl | date      | end_date
1    | 1/1/2021  | 1/1/2021
2    | 1/15/2021 | 1/15/2021
3    | 2/2/2021  | 2/2/2021

Excel's native TOCOL function would flatten it into one column like this:

=TOCOL(B4:D6) or =TOCOL(B4:D6,,FALSE)

ctrl
date
end_date
1
44197
44197
2
44211
44211

or this:

=TOCOL(B4:D6,,TRUE)

ctrl
1
2
date
44197
44211
end_date
44197
44211

whereas mine flattens it like this:

=chunk.vStackChunksOfColumns(B4:D7,1)

ctrl
1
2
date
44197
44211
end_date
44197
44211

or this:

=chunk.vStackChunksOfColumns(B4:D6,2)

ctrl     | date
1        | 44197
2        | 44211
end_date |
44197    |
44211    |

or this (which is effectively no change):

=chunk.vStackChunksOfColumns(B4:D6,3)

ctrl | date  | end_date
1    | 44197 | 44197
2    | 44211 | 44211

hStackChunksOfRows = LAMBDA(
    target_array,
    rows_per_slice,
    [no_columns_to_adj_slice_to_df_NO_ADJUSTMENT],
    [fill_value_for_added_columns_df_DBLQTS],
    [no_rows_to_adj_array_to_df_ROWS_PER_SLICE_TIMES_NO_OF_SLICES],
    [fill_value_for_added_rows_df_DBLQTS],
    LET(
        rows_in_array, ROWS(target_array),
        cols_in_array, COLUMNS(target_array),
        no_of_slices, ROUNDUP(rows_in_array / rows_per_slice, 0),
        target_slice_width, IF(
            IF(
                ISOMITTED(no_columns_to_adj_slice_to_df_NO_ADJUSTMENT),
                FALSE,
                no_columns_to_adj_slice_to_df_NO_ADJUSTMENT
            ),
            no_columns_to_adj_slice_to_df_NO_ADJUSTMENT,
            cols_in_array
        ),
        target_array_height, IF(
            ISOMITTED(no_rows_to_adj_array_to_df_ROWS_PER_SLICE_TIMES_NO_OF_SLICES),
            rows_per_slice * no_of_slices,
            no_rows_to_adj_array_to_df_ROWS_PER_SLICE_TIMES_NO_OF_SLICES
        ),
        fill_value_for_added_columns, IF(ISOMITTED(fill_value_for_added_columns_df_DBLQTS), "", fill_value_for_added_columns_df_DBLQTS),
        fill_value_for_added_rows, IF(ISOMITTED(fill_value_for_added_rows_df_DBLQTS), "", fill_value_for_added_rows_df_DBLQTS),
        adj_array, normalizeArray(
            target_array,
            target_array_height,
            target_slice_width,
            FALSE,
            fill_value_for_added_rows,
            fill_value_for_added_columns
        ),
        stack_iter, SEQUENCE(no_of_slices - 1),
        stack_func, LAMBDA(acc, slice_counter,
            HSTACK(acc, TAKE(DROP(adj_array, slice_counter * rows_per_slice), rows_per_slice))
        ),
        IFS(
            rows_in_array <= rows_per_slice, adj_array,
            rows_per_slice = 1, DROP(REDUCE("", SEQUENCE(rows_in_array), LAMBDA(acc, cur_row, HSTACK(acc, CHOOSEROWS(adj_array, cur_row)))), , 1),
            1, REDUCE(TAKE(adj_array, rows_per_slice), stack_iter, stack_func)
        )
    )
);

vStackChunksOfColumns = LAMBDA(
    target_array,
    cols_per_slice,
    [no_rows_to_adj_slice_to_df_NO_ADJUSTMENT],
    [fill_value_for_added_rows_df_DBLQTS],
    [no_columns_to_adj_array_to_df_COLS_PER_SLICE_TIMES_NO_OF_SLICES],
    [fill_value_for_added_columns_df_DBLQTS],
    LET(
        rows_in_array, ROWS(target_array),
        cols_in_array, COLUMNS(target_array),
        no_of_slices, ROUNDUP(cols_in_array / cols_per_slice, 0),
        target_slice_height, IF(
            IF(
                ISOMITTED(no_rows_to_adj_slice_to_df_NO_ADJUSTMENT),
                FALSE,
                no_rows_to_adj_slice_to_df_NO_ADJUSTMENT
            ),
            no_rows_to_adj_slice_to_df_NO_ADJUSTMENT,
            rows_in_array
        ),
        target_array_width, IF(
            ISOMITTED(no_columns_to_adj_array_to_df_COLS_PER_SLICE_TIMES_NO_OF_SLICES),
            cols_per_slice * no_of_slices,
            no_columns_to_adj_array_to_df_COLS_PER_SLICE_TIMES_NO_OF_SLICES
        ),
        fill_value_for_added_rows, IF(ISOMITTED(fill_value_for_added_rows_df_DBLQTS), "", fill_value_for_added_rows_df_DBLQTS),
        fill_value_for_added_columns, IF(ISOMITTED(fill_value_for_added_columns_df_DBLQTS), "", fill_value_for_added_columns_df_DBLQTS),
        adj_array, normalizeArray(
            target_array,
            target_slice_height,
            target_array_width,
            TRUE,
            fill_value_for_added_rows,
            fill_value_for_added_columns
        ),
        stack_iter, SEQUENCE(, no_of_slices - 1),
        stack_func, LAMBDA(acc, slice_counter,
            VSTACK(acc, TAKE(DROP(adj_array, , slice_counter * cols_per_slice), , cols_per_slice))
        ),
        result, IFS(
            cols_in_array <= cols_per_slice, adj_array,
            cols_per_slice = 1, DROP(REDUCE("", SEQUENCE(, cols_in_array), LAMBDA(acc, cur_col, VSTACK(acc, CHOOSECOLS(adj_array, cur_col)))), 1),
            1, REDUCE(TAKE(adj_array, , cols_per_slice), stack_iter, stack_func)
        ),
        result
    )
);

normalizeArray = LAMBDA(
    target_array,
    final_height, // Desired final number of rows
    final_width, // Desired final number of columns
    fill_rows_first, // Should rows be filled first?
    value_for_filling_rows, // Value to fill additional rows
    value_for_filling_columns, // Value to fill additional columns
    LET(
        // Get current array dimensions
        current_height, ROWS(target_array),
        current_width, COLUMNS(target_array),
        // Define row adjustment function (truncate or expand)
        row_func, IF(
            final_height <= current_height,
            LAMBDA(target_array_for_row_adjustment, TAKE(target_array_for_row_adjustment, final_height)),
            LAMBDA(target_array_for_row_adjustment, EXPAND(target_array_for_row_adjustment, final_height, , value_for_filling_rows))
        ),
        // Define column adjustment function (truncate or expand)
        col_func, IF(
            final_width <= current_width,
            LAMBDA(target_array_for_col_adjustment, TAKE(target_array_for_col_adjustment, , final_width)),
            LAMBDA(target_array_for_col_adjustment, EXPAND(target_array_for_col_adjustment, , final_width, value_for_filling_columns))
        ),
        // Apply adjustments
        adjusted_array, IF(
            fill_rows_first,
            row_func(col_func(target_array)),
            col_func(row_func(target_array))
        ),
        adjusted_array
    )
);
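As a quick smoke test of the intended behaviour described above (10 rows by 3 columns restacked as 5 rows by 6 columns), assuming the functions are saved in a module named chunk:

=chunk.hStackChunksOfRows(SEQUENCE(10, 3), 5)

This should slice the 10x3 array into two 5-row chunks and place the second chunk to the right of the first, yielding a 5x6 result.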
Re: Double Thunking Works Wonders!

REDUCE merely requires a single array (of any dimension) as a return - it doesn't inherently limit the type of return (so far as I know). If you are willing to thunk thunks, your return stack can be fairly deep before you notice major slowdowns. You could, for instance, turn each row of an array into a thunk and HSTACK them iteratively via REDUCE, thereby giving you a row of rows. You can write an access function along the lines of:

lambda(
    outer_address,
    row_of_thunks,
    lambda(
        inner_address,
        index(
            index(row_of_thunks, 1, outer_address)(),
            1,
            inner_address
        )
    )
)

In this way, thunks just become 2d buckets that could hold anything at all, and arrays are just the scaffolding for a basic thunk arrangement. Depending on how you choose to write your functions, you can have each thunk in an array represent a different state you want to persist and pass it easily from one name scope to another.

I've also been thinking a bit about the alternative of just adding everything to a single 2d array, with more explicit contained-array tracking as the first few columns; that way you can bind a particular contained array to a variable name and have it extracted from the larger array as you require. Outside of the inherent limits of accessing an array and deconstructing it using TAKE/DROP/CHOOSECOLS/etc., there is no overhead to maintaining the array and no stacking of functions and dependency chains for the data.
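Coming back to the thunk approach, here is a minimal end-to-end sketch of the thunk-per-row idea in one formula, using the same double-INDEX access pattern as above (the sample data and names are hypothetical):

=LET(
    src, SEQUENCE(4, 3, 10, 10),
    thunks, BYROW(src, LAMBDA(row, LAMBDA(row))),
    get_cell, LAMBDA(outer_address, inner_address,
        INDEX(INDEX(thunks, outer_address, 1)(), 1, inner_address)
    ),
    get_cell(3, 2)
)

BYROW can return a lambda per row because a lambda counts as a scalar as far as the array calculus is concerned; get_cell(3, 2) evaluates the third thunk and pulls its second column (80 for this sample data).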
Re: Double Thunking Works Wonders!

PeterBartholomew1 My chunk function (sel.chunk) is just the combination of a slice by rows (sel.sliceRows) and a slice by columns (sel.sliceCols). The reason for all of the parameters is so that you can specify the starting and ending index for the row slice and the column slice using one of three methods:
a) as absolute values, i.e. references to the start of the dimension (e.g. start at a given column and end at column 6),
b) as values with reference to the end of the dimension (e.g. start 8 columns from the end of the array and end 3 columns from the end of the array), or
c) as values with reference to the other value (e.g. start at column 5 and end 3 columns later, or end at column 9 and start 4 columns earlier).

All of the index management is handled within the formula, so you can almost think of it as a declarative type call - "give me the slice with these dimensions". All of the validation/error checking happens within the final formula (sel.calcSliceStartAndEnd), which is dimension-agnostic - you just specify the total dimension size and how the start and end indices should be determined. The default values and ordering of the parameters are aimed at the standard use case - you tell it the starting index, and you tell it either the ending index or how many additional units of the dimension you want (start at row 4 and end at row 8, or start at row 4 and include 5 rows). If you don't specify an end or a start, that just means "from the start, give me 8 rows" or "from row 8, give me to the end." This way, more complicated dimension slicing can be specifically referenced if needed and otherwise ignored entirely.

---------- an FYI on my naming approach/syntax ----------

Syntax: functions in camel case, variables in snake case, classes (in other contexts) in pascal case, and global variables in all caps. In a more nuanced setting, I'd also use leading underscores for "helper" type objects/functions. You don't see it in the "sel" module context, but I generally try to have functions that construct other functions start with "make", functions that retrieve something start with "get", and functions that are about novel value/object creation start with something like "calc".

Excel naming approach: module names should be intuitively related to the "user" type functions in the module (like "sel" stands for "select" and it includes the slicing and chunking functions). Function names plus the module context should give you a strong intuitive sense of what the function expects/returns. Variable names should essentially self-document why they exist. Parameter names should self-document what they expect and end with "_df_" plus the default value if there is one, where the default value should strongly suggest what a valid argument type would be (e.g. "_df_TRUE" when a parameter expects a boolean, or "_df_2" when it expects an int/number).

My main motivation with parameter names is that Excel provides no type hints or hover information for users, so the only way they will know what valid entries are for the function is to divine it from the parameter name, to refer to other documentation, or to go read the function definition - and I'd rather the user have minimal friction (or learning curve) when invoking a function.
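A tiny illustration of that parameter-naming convention, with a hypothetical module function (not part of the sel module):

srt.topValues = LAMBDA(
    target_array,
    [no_of_values_df_5],
    [sort_descending_df_TRUE],
    LET(
        n, IF(ISOMITTED(no_of_values_df_5), 5, no_of_values_df_5),
        desc, IF(ISOMITTED(sort_descending_df_TRUE), TRUE, sort_descending_df_TRUE),
        TAKE(SORT(target_array, 1, IF(desc, -1, 1)), n)
    )
);

Reading the signature alone in the formula bar tells the caller both the defaults (5, TRUE) and the expected argument types, which is the whole point of the _df_ suffix.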
Re: Double Thunking Works Wonders!

Patrick2788 (P.S. Updated chunk function to proper version.) Funnily enough, I've been working on something somewhat similar, but mine is solidly within the bucket of what I'd call chunking. The primary solution I'm currently working towards involves precalculating the spatial arrangement of the chunks and then making iterators by way of things like MAP, BYCOL, or BYROW. For the most part I do fancy accumulating stuff inside of a REDUCE combined with an iter function, but back in the day I was using thunks of thunks as list-like objects and then converting those lists into 2d arrays by use of a layout engine (which included array dimension normalization logic and permitted the combination of arrays with different dimensions without unanticipated fill values). The only reason I haven't stuck with the thunks is that you can't meaningfully persist them, and their use case is basically limited to the exact formula you are currently constructing.

What makes thunks helpful is that they are essentially functions and can be treated as values up until a context where you want them to be evaluated (accomplished by the "()" syntax). Any Excel context which permits you to evaluate a function will give you access to the contents, so it all comes down to creatively playing the shell game of unwinding your thunks. Also, consider stacking thunks in a 2-column array where the first column is some sort of identifier and you use a selection mechanism that can select the thunk you want by the identifier - it is much easier than having to iterate through each thunk to find out what it is. And for what it is worth, you can very easily create a thunk, calculate some value from that thunk, and then save that value as a thunk. If you accumulate those values with your thunk, you can think of it as a caching mechanism and take advantage of key-value pairs.

And just because, here is my latest iteration of an array chunker/slicer. Using this in an iter and then aggregating each chunk as a thunk is something even a punk could do.

// sel module
// for taking portions of other things.
chunk = LAMBDA(
    target_array,
    [start_at_row], [end_at_row],
    [start_at_col], [end_at_col],
    [no_of_next_rows_to_include], [no_of_next_cols_to_include],
    [rows_to_stop_from_end_of_array], [cols_to_stop_from_end_of_array],
    [start_at_rows_before_end_of_slice], [start_at_cols_before_end_of_slice],
    [start_at_rows_before_end_of_array], [start_at_cols_before_end_of_array],
    [accept_partial_df_FALSE],
    [return_first_row_if_no_start_df_FALSE], [return_first_col_if_no_start_df_FALSE],
    [return_last_row_if_no_end_df_FALSE], [return_last_col_if_no_end_df_FALSE],
    LET(
        sliced_rows, sliceRows(
            target_array,
            start_at_row, end_at_row,
            no_of_next_rows_to_include,
            rows_to_stop_from_end_of_array,
            start_at_rows_before_end_of_slice,
            start_at_rows_before_end_of_array,
            accept_partial_df_FALSE,
            return_first_row_if_no_start_df_FALSE,
            return_last_row_if_no_end_df_FALSE
        ),
        error_in_row_slice, ISERROR(INDEX(sliced_rows, 1, 1)),
        sliced_rows_and_columns, IF(
            error_in_row_slice,
            ERROR.TYPE(3),
            sliceCols(
                sliced_rows,
                start_at_col, end_at_col,
                no_of_next_cols_to_include,
                cols_to_stop_from_end_of_array,
                start_at_cols_before_end_of_slice,
                start_at_cols_before_end_of_array,
                accept_partial_df_FALSE,
                return_first_col_if_no_start_df_FALSE,
                return_last_col_if_no_end_df_FALSE
            )
        ),
        sliced_rows_and_columns
    )
);

sliceRows = LAMBDA(
    target_array,
    [start_at_row], [end_at_row],
    [no_of_next_rows_to_include],
    [rows_to_stop_from_end_of_array],
    [start_at_rows_before_end_of_slice],
    [start_at_rows_before_end_of_array],
    [accept_partial_df_FALSE],
    [return_first_if_no_start_df_FALSE],
    [return_last_if_no_end_df_FALSE],
    LET(
        row_bounds, calcSliceStartAndEnd(
            ROWS(target_array), // row slicing is bounded by the row count
            start_at_row, end_at_row,
            no_of_next_rows_to_include,
            rows_to_stop_from_end_of_array,
            start_at_rows_before_end_of_slice,
            start_at_rows_before_end_of_array,
            return_first_if_no_start_df_FALSE,
            return_last_if_no_end_df_FALSE,
            accept_partial_df_FALSE
        ),
        start_row, INDEX(row_bounds, 1, 1),
        end_row, INDEX(row_bounds, 1, 2),
        return_trimmed_array, COLUMNS(row_bounds) = 3,
        trimmed_to_starting_row, IF(return_trimmed_array, DROP(target_array, start_row - 1), FALSE),
        trimmed_to_ending_row, IF(
            return_trimmed_array,
            TAKE(trimmed_to_starting_row, end_row - start_row + 1),
            FALSE
        ),
        result, IF(return_trimmed_array, trimmed_to_ending_row, ERROR.TYPE(3)),
        result
    )
);

sliceCols = LAMBDA(
    target_array,
    [start_at_col], [end_at_col],
    [no_of_next_cols_to_include],
    [cols_to_stop_from_end_of_array],
    [start_at_cols_before_end_of_slice],
    [start_at_cols_before_end_of_array],
    [accept_partial_df_FALSE],
    [return_first_col_if_no_start_df_FALSE],
    [return_last_col_if_no_end_df_FALSE],
    LET(
        col_bounds, calcSliceStartAndEnd(
            COLUMNS(target_array),
            start_at_col, end_at_col,
            no_of_next_cols_to_include,
            cols_to_stop_from_end_of_array,
            start_at_cols_before_end_of_slice,
            start_at_cols_before_end_of_array,
            return_first_col_if_no_start_df_FALSE,
            return_last_col_if_no_end_df_FALSE,
            accept_partial_df_FALSE
        ),
        start_col, INDEX(col_bounds, 1, 1),
        end_col, INDEX(col_bounds, 1, 2),
        return_trimmed_array, COLUMNS(col_bounds) = 3,
        trimmed_to_starting_col, IF(return_trimmed_array, DROP(target_array, , start_col - 1), FALSE),
        trimmed_to_ending_col, IF(
            return_trimmed_array,
            TAKE(trimmed_to_starting_col, , end_col - start_col + 1),
            FALSE
        ),
        result, IF(return_trimmed_array, trimmed_to_ending_col, ERROR.TYPE(3)),
        result
    )
);

calcSliceStartAndEnd = LAMBDA(
    main_dimension_size,
    [start_at_idx], [end_at_idx],
    [end_units_from_start_of_slice],
    [end_units_from_end_of_dimension],
    [start_units_from_end_of_slice],
    [start_units_from_end_of_dimension],
    [return_first_if_no_from_df_FALSE],
    [return_last_if_no_through_df_FALSE],
    [accept_partial_df_FALSE],
    LET(
        // Parameter usage indicators
        start_idx_provided, NOT(ISOMITTED(start_at_idx)),
        calc_start_idx_from_end_of_slice, NOT(ISOMITTED(start_units_from_end_of_slice)),
        calc_start_idx_from_end_of_dimension, NOT(ISOMITTED(start_units_from_end_of_dimension)),
        start_from_first_unit_as_default, IF(ISOMITTED(return_first_if_no_from_df_FALSE), FALSE, return_first_if_no_from_df_FALSE),
        end_idx_provided, NOT(ISOMITTED(end_at_idx)),
        calc_end_idx_from_start_of_slice, NOT(ISOMITTED(end_units_from_start_of_slice)),
        calc_end_idx_from_end_of_dimension, NOT(ISOMITTED(end_units_from_end_of_dimension)),
        end_at_final_unit_as_default, IF(ISOMITTED(return_last_if_no_through_df_FALSE), FALSE, return_last_if_no_through_df_FALSE),
        at_least_one_relative_position, calc_start_idx_from_end_of_slice + calc_end_idx_from_start_of_slice,
        accept_partial, IF(ISOMITTED(accept_partial_df_FALSE), FALSE, accept_partial_df_FALSE),
        // Valid parameter checking
        valid_start_parameter_provided, (start_idx_provided + calc_start_idx_from_end_of_slice + calc_start_idx_from_end_of_dimension) = 1,
        valid_end_parameter_provided, (end_idx_provided + calc_end_idx_from_start_of_slice + calc_end_idx_from_end_of_dimension) = 1,
        valid_relative_start, calc_start_idx_from_end_of_slice * (end_idx_provided + calc_end_idx_from_end_of_dimension),
        valid_relative_end, calc_end_idx_from_start_of_slice * (start_idx_provided + calc_start_idx_from_end_of_dimension),
        valid_relative_parameter, IF(
            at_least_one_relative_position,
            (valid_relative_start + valid_relative_end) = 1,
            TRUE
        ),
        valid_parameters,
            (valid_start_parameter_provided + start_from_first_unit_as_default) *
            (valid_end_parameter_provided + end_at_final_unit_as_default) *
            valid_relative_parameter,
        result, IF(
            NOT(valid_parameters),
            ERROR.TYPE(3),
            LET(
                abs_start, IFS(
                    valid_relative_start, FALSE,
                    start_idx_provided, start_at_idx,
                    calc_start_idx_from_end_of_dimension, main_dimension_size - start_units_from_end_of_dimension,
                    1, ERROR.TYPE(3)
                ),
                abs_end, IFS(
                    valid_relative_end, FALSE,
                    end_idx_provided, end_at_idx,
                    calc_end_idx_from_end_of_dimension, main_dimension_size - end_units_from_end_of_dimension
                ),
                start_idx, IF(valid_relative_start, abs_end - start_units_from_end_of_slice - 1, abs_start),
                end_idx, IF(valid_relative_end, abs_start + end_units_from_start_of_slice - 1, abs_end),
                is_valid_slice, AND(
                    end_idx >= start_idx,
                    start_idx > 0,
                    ((end_idx <= main_dimension_size) + accept_partial)
                ),
                IF(is_valid_slice, HSTACK(start_idx, end_idx, is_valid_slice), ERROR.TYPE(3))
            )
        ),
        result
    )
);
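A quick usage sketch of the happy path, assuming the module is named sel and the functions are defined as above:

=sel.chunk(SEQUENCE(10, 10), 2, 5, 3, 7)

This asks for rows 2 through 5 and columns 3 through 7 of a 10x10 sequence, i.e. a 4-row by 5-column slice, with all of the relative-positioning parameters left to their defaults.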
Re: Excel Labs Array Module - What are your thoughts?

SergeiBaklan Thanks for taking a look. These functions are designed for working with dynamic arrays, so the array traversal logic is embedded (usually) in each of them. My replaceCell and replaceBlankCells, for instance, can replace any cell in an array (dynamic or otherwise) without having to separately call an array traversal function. Yes, it involves constructing a new array, but as far as I know, each "discrete" calculation creates its own array anyway (e.g. binding original_array, then extra_row as VSTACK(original_array, ""), then extra_row2 as VSTACK(extra_row, "") creates two new arrays, while binding original_array and then VSTACK(original_array, extra_row, extra_row2) creates only one new array). The one function that I think is the biggest contributor going forward is the flatten function. It is presumably going to be the basis of all of my set theory module logic.
Re: Return of the Corkscrew with Spilled Arrays

James_Buist I really don't understand your question, but I have some sense of the area. I've made some other complicated posts that touch on this topic, so if you are up for some reading, you can take a look at my serialization-to-preserve-state function, which is aimed primarily at reducing the computational demands of independent formulas. If you are happy to have just a single dynamic formula that rolls forward in time, everything comes down to structure.

Basically, you are aggregating data which consists of a few parts - an opening balance, inflows, outflows, and a closing balance. What gives you the freedom to inject complicated logic for how to change the inflows and outflows is making sure that your data is segregated both from your presentation and your calculations. The primary way I do that is by having an Excel table (i.e. "list object") that keeps amounts more or less as absolute values in a single column, with another column that indicates whether the value is an inflow or outflow for the current purpose - so something like ctrl, date, acct, amt, flow. I left you with the ability to add an optional filter mask for the transactions so that you can adjust the function relatively easily.

This is the data in "tbl_trans":

ctrl | date      | acct          | amt    | flow
1    | 1/1/2021  | first_account | 100    | I
2    | 1/15/2021 | first_account | 22     | O
3    | 2/2/2021  | first_account | 1,000  | I
4    | 2/22/2021 | first_account | 44     | O
5    | 3/15/2021 | first_account | 10,000 | I
6    | 3/16/2021 | first_account | 66     | O

This is the output:

account       | start_date | end_date  | opening_balance | inflows   | outflows | ending_balance
first_account | 1/1/2021   | 1/31/2021 | -               | 100.00    | 22.00    | 78.00
first_account | 2/1/2021   | 2/28/2021 | 78.00           | 1,000.00  | 44.00    | 1,034.00
first_account | 3/1/2021   | 3/31/2021 | 1,034.00        | 10,000.00 | 66.00    | 10,968.00

getDateMask = LAMBDA(start_date, end_date,
    (tbl_trans[date] <= end_date) * (tbl_trans[date] >= start_date)
);

getAcctMask = LAMBDA(acct_id, tbl_trans[acct] = acct_id);

totalByFlow = LAMBDA(flow, filter_mask, [optional_filter_mask],
    ROUND(
        SUM(
            FILTER(
                tbl_trans[amt],
                (tbl_trans[flow] = flow) *
                filter_mask *
                IF(ISOMITTED(optional_filter_mask), TRUE, optional_filter_mask),
                0 // default to 0 so a period with no matching flows sums cleanly
            )
        ),
        2
    )
);

sumInflows = LAMBDA(date_mask, acct_mask, [optional_filter_mask],
    totalByFlow("I", date_mask * acct_mask, optional_filter_mask)
);

sumOutflows = LAMBDA(date_mask, acct_mask, [optional_filter_mask],
    totalByFlow("O", date_mask * acct_mask, optional_filter_mask)
);

getPriorBalance = LAMBDA(accumulator, TAKE(accumulator, -1, -1));

getPriorDate = LAMBDA(accumulator, INDEX(TAKE(accumulator, -1), 1, 3));

calcPeriod = LAMBDA(first_date, EOMONTH(first_date, 0)); // end of the month containing first_date

calcClosingBalance = LAMBDA(opening_balance, inflows, outflows, [optional_add_outflows_df_FALSE],
    IF(
        IF(ISOMITTED(optional_add_outflows_df_FALSE), FALSE, optional_add_outflows_df_FALSE),
        opening_balance + outflows - inflows,
        opening_balance + inflows - outflows
    )
);

calcCurrentPeriodData = LAMBDA(accumulator, acct, [optional_start_date], [optional_filter_mask], [optional_end_date], [optional_add_outflows_df_FALSE],
    LET(
        opening_balance, getPriorBalance(accumulator),
        start_date, IF(ISOMITTED(optional_start_date), getPriorDate(accumulator) + 1, optional_start_date),
        end_date, IF(ISOMITTED(optional_end_date), calcPeriod(start_date), optional_end_date),
        acct_mask, getAcctMask(acct),
        date_mask, getDateMask(start_date, end_date),
        inflows, sumInflows(date_mask, acct_mask, optional_filter_mask),
        outflows, sumOutflows(date_mask, acct_mask, optional_filter_mask),
        closing_balance, calcClosingBalance(opening_balance, inflows, outflows, optional_add_outflows_df_FALSE),
        result, HSTACK(acct, start_date, end_date, opening_balance, inflows, outflows, closing_balance),
        result
    )
);

running_balance_heading = HSTACK("account", "start_date", "end_date", "opening_balance", "inflows", "outflows", "ending_balance");

makeAcctActivityReport = LAMBDA(start_date, initial_balance, acct, [optional_total_periods_df_3],
    LET(
        accumulator, HSTACK(
            EXPAND(HSTACK("", "", start_date - 1), 1, COLUMNS(running_balance_heading) - 1, ""),
            initial_balance
        ),
        periods, IF(ISOMITTED(optional_total_periods_df_3), 3, optional_total_periods_df_3),
        result, REDUCE(
            accumulator,
            SEQUENCE(periods),
            LAMBDA(acc, period, VSTACK(acc, calcCurrentPeriodData(acc, acct)))
        ),
        VSTACK(running_balance_heading, DROP(result, 1))
    )
);

Then you make a few helper formulas depending on your needs.
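Reproducing the output table above is then a single call, with the account id and period count taken from the sample data:

=makeAcctActivityReport(DATE(2021, 1, 1), 0, "first_account", 3)

The second argument seeds the opening balance, and the optional fourth argument controls how many monthly rows are rolled forward.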
Excel Labs Array Module - What are your thoughts?

Sorry for what will seem like a code dump, but I'm curious whether any of you have tried to create similar modules, or whether you have any comments/wisdom to share about the current incarnation of my module. I recently ran it through ChatGPT, so I'm not sure if it slipped in any errors - the structure should, however, be largely accurate and give enough detail to let you understand what I tried to accomplish. Do you notice any shortcomings, obvious enhancements, or alternate approaches to the functions? In particular, I am always concerned about the alternate ways to handle array functions and whether my intuition about formula efficiencies is in the right place. If there is enough interest, I may share some of my other modules.

// arr module
// This module provides a suite of array manipulation functions to enhance and extend Excel's native capabilities.
// The functions are scoped under the `arr.` namespace to prevent naming conflicts with Excel's built-in functions, ensuring reliable use throughout any workbook.
// Function names have been carefully selected to avoid ambiguity or collision with Excel's native features, especially when referenced internally without the `arr.` prefix.
// Below is an overview of the functions provided in this module, organized by their core functionalities:

// 1. Basic Information (Public Interface)
// These functions provide basic array analysis and selection tools.
// - dimensions: Returns the number of rows and columns in an array, optionally including headers.
// - getColumnIdxByName: Retrieves column indices from an array based on header names.
// - uniqueElements: Extracts unique elements from an array, returning them either as a row or column.
// - countsByElement: Counts occurrences of elements in an array with options for ignoring blanks, errors, and sorting.

// 2. Comparisons (Public Interface)
// Functions that allow for comparison between arrays and columns.
// - areEqualDimension: Checks if two arrays have equal dimensions (width, height, or size).
// - compareColumns: Compares columns of an array based on a value and a specified operator.
// - getDiffDimensionFunc: Calculates the difference in dimensions (width, height, or size) between two arrays.

// 3. Miscellaneous Functions (Public Interface)
// General functions for filling arrays and creating values.
// - fillArray: Fills an array with specified text over a defined number of rows and columns.

// 4. Core Operations (Public Interface)
// These high-level array manipulation functions are designed for direct user interaction and support common array tasks.
// Basic Combination and Addition
// - stack: Stack two arrays either vertically or horizontally.
// - stackOn: Stack arrays with user-specified placement (e.g., above, below, left, right).
// - stackAndExpand: Stack two arrays while expanding dimensions to match as needed.
// Subset Selection and Deletion
// - sliceCols: Extract or remove specific columns from an array.
// - sliceRows: Extract or remove specific rows from an array.
// - trimValue: Trim specified values (e.g., blanks) from rows or columns.

// 5. Complex Transformations (Public Interface)
// These functions enable higher-level array manipulations such as flattening, replacing, or transforming data.
// - flatten: Convert a two-dimensional array into a one-dimensional list, with options for sorting and filtering.
// - replaceBlankCells: Replace blank cells in an array with a specified value.
// - replaceCell: Replace specific values in an array based on a condition.
// - replaceCols: Replace or insert entire columns in an array with options to match dimensions.
// - replaceRows: Replace or insert entire rows in an array with options to match dimensions.

// 6. Helper Functions (Internal Use)
// These internal-use functions assist with specific operations and are prefixed with an underscore to denote their private nature.
// Dimension and Size Helpers
// - _areSameHeight: Checks if two arrays have the same height.
// - _areSameWidth: Checks if two arrays have the same width.
// - _areSameSize: Checks if two arrays have the same size.
// - _ensureHeight: Ensures an array has the same or greater height than a reference array.
// - _ensureWidth: Ensures an array has the same or greater width than a reference array.
// - _diffHeight: Calculates the height difference between two arrays.
// - _diffWidth: Calculates the width difference between two arrays.
// - _diffSize: Calculates the size difference (width and height) between two arrays.
// - _maxHeight: Gets the maximum height between two arrays.
// - _maxWidth: Gets the maximum width between two arrays.
// Stacking Logic Helpers
// - _stackSwitch: Determines stacking behavior (e.g., above, below, left, right) based on user input.
// - _stackAndExpandSwitch: Expands dimensions as necessary before stacking based on user preference.
// - _stackAndExpandHeight: Expands and stacks arrays by height.
// - _stackAndExpandWidth: Expands and stacks arrays by width.
// - _stackAndExpandAllDimensions: Expands and stacks arrays in both dimensions (width and height).

// Basic Information
dimensions = lambda(
    target_array,
    [show_names_df_FALSE],
    if(
        if(isomitted(show_names_df_FALSE), FALSE, show_names_df_FALSE),
        vstack(hstack("rows", "columns"), hstack(rows(target_array), columns(target_array))),
        hstack(rows(target_array), columns(target_array))
    )
);

getColumnIdxByName = lambda(
    array_with_headers,
    column_names_row,
    hstack(bycol(column_names_row, lambda(column_name, match(column_name, take(array_with_headers, 1), 0))))
);

uniqueElements = lambda(
    target_array,
    [return_as_col_bool_df_TRUE],
    trimValue(unique(flatten(target_array, return_as_col_bool_df_TRUE)))
);

countsByElement = lambda(
    target_array,
    [search_array_df_SELF],
    [show_element_values_df_FALSE],
    [ignore_blanks_df_FALSE],
    [ignore_errors_df_FALSE],
    [sort_elements_df_0],
    [traverse_cols_first_df_TRUE],
    let(
        flattened_target_array, flatten(target_array,, ignore_blanks_df_FALSE, ignore_errors_df_FALSE,, sort_elements_df_0, traverse_cols_first_df_TRUE),
        flattened_search_array, if(isomitted(search_array_df_SELF), flattened_target_array, flatten(search_array_df_SELF)),
        elements, unique(flattened_target_array),
        pre_result, byrow(
            elements,
            lambda(element, iferror(rows(filter(flattened_search_array, flattened_search_array = element)), 0))
        ),
        result, if(
            if(isomitted(show_element_values_df_FALSE), FALSE, show_element_values_df_FALSE),
            hstack(elements, pre_result),
            pre_result
        ),
        result
    )
);

// Comparisons
areEqualDimension = LAMBDA(dimension, array1, array2,
    SWITCH(
        dimension,
        "width", _areSameWidth(array1, array2),
        "height", _areSameHeight(array1, array2),
        "size", _areSameSize(array1, array2),
        ERROR.TYPE(3)
    )
);

compareColumns = LAMBDA(value_row, array_for_comparison, [comparison_operator], [comparison_col_idx], [value_col_idx],
    LET(
        operator, IF(ISOMITTED(comparison_operator), "=", comparison_operator),
        comp_func, mask.comparisonFunc(operator), // mask.comparisonFunc returns #VALUE! for invalid operators
        col_idx, IF(ISOMITTED(comparison_col_idx), 1, comparison_col_idx),
        val_idx, IF(ISOMITTED(value_col_idx), 1, value_col_idx),
        comp_value, IF(COLUMNS(value_row) > 1, CHOOSECOLS(value_row, val_idx), value_row),
        comp_array, CHOOSECOLS(array_for_comparison, col_idx),
        IF(comp_func = ERROR.TYPE(3), ERROR.TYPE(3), comp_func(comp_value, comp_array)) // Propagate #VALUE! if operator is invalid
    )
);

getDiffDimensionFunc = LAMBDA(dimension, array1, array2,
    SWITCH(
        dimension,
        "width", _diffWidth(array1, array2),
        "height", _diffHeight(array1, array2),
        "size", _diffSize(array1, array2),
        ERROR.TYPE(3)
    )
);

// Miscellaneous functions
fillArray = LAMBDA(r, c, txt, MAKEARRAY(r, c, LAMBDA(row, col, txt)));

// Stack Functions
stack = lambda(array_1, array_2, [vstack_bool_df_TRUE],
    if(
        if(isomitted(vstack_bool_df_TRUE), TRUE, vstack_bool_df_TRUE),
        vstack(array_1, array_2),
        hstack(array_1, array_2)
    )
);

stackOn = lambda(
    array_to_stack,
    fixed_array,
    [stack_placement_df_RIGHT],
    [match_shared_dimensions_df_TRUE],
    [fill_value_df_DBQT],
    let(
        match_shared_dimension, if(isomitted(match_shared_dimensions_df_TRUE), TRUE, match_shared_dimensions_df_TRUE),
        result, if(
            match_shared_dimension,
            _stackAndExpandSwitch(array_to_stack, fixed_array, stack_placement_df_RIGHT, fill_value_df_DBQT),
            _stackSwitch(array_to_stack, fixed_array, stack_placement_df_RIGHT)
        ),
        result
    )
);

stackAndExpand = lambda(array1, array2, [exp_width_bool_df_TRUE], [fill_value_df_blank], [exp_height_bool_df_TRUE], [vstack_bool_df_TRUE],
    let(
        expand_width, IF(ISOMITTED(exp_width_bool_df_TRUE), TRUE, exp_width_bool_df_TRUE),
        expand_height, IF(ISOMITTED(exp_height_bool_df_TRUE), TRUE, exp_height_bool_df_TRUE),
        stack_bool, if(ISOMITTED(vstack_bool_df_TRUE), TRUE, vstack_bool_df_TRUE),
        result, ifs(
            expand_height * expand_width, _stackAndExpandAllDimensions(array1, array2, fill_value_df_blank, stack_bool),
            expand_height, _stackAndExpandHeight(array1, array2, fill_value_df_blank, stack_bool),
            expand_width, _stackAndExpandWidth(array1, array2, fill_value_df_blank, stack_bool),
            1, ERROR.TYPE(3)
        ),
        result
    )
);

// Subset selection and Deletion
getColumnsByName = lambda(
    array_with_headers,
    column_names_row,
    choosecols(drop(array_with_headers, 1), getColumnIdxByName(array_with_headers, column_names_row))
);

getNonZeroCells = LAMBDA(target_row_or_col,
    LET(is_not_zero, is.notZero(target_row_or_col), FILTER(target_row_or_col, is_not_zero, ""))
);

sliceCols = LAMBDA(
    original_array,
    no_columns_to_drop,
    [no_of_columns_to_take],
    [no_columns_to_drop_from_end],
    LET(
        after_first_drop, DROP(original_array, , no_columns_to_drop),
        after_take, IF(ISOMITTED(no_of_columns_to_take), after_first_drop, TAKE(after_first_drop, , no_of_columns_to_take)),
        after_second_drop, IF(ISOMITTED(no_columns_to_drop_from_end), after_take, DROP(after_take, , -no_columns_to_drop_from_end)),
        after_second_drop
    )
);

sliceRows = LAMBDA(
    original_array,
    no_rows_to_drop,
    [no_rows_to_take],
    [no_rows_to_drop_from_end],
    LET(
        after_first_drop, DROP(original_array, no_rows_to_drop),
        after_take, IF(ISOMITTED(no_rows_to_take), after_first_drop, TAKE(after_first_drop, no_rows_to_take)),
        after_second_drop, IF(ISOMITTED(no_rows_to_drop_from_end), after_take, DROP(after_take, -no_rows_to_drop_from_end)),
        after_second_drop
    )
);

trimValue = lambda(
    target_row_or_col,
    [trim_value_df_BLANK],
    let(
        trim_mask, if(
            isomitted(trim_value_df_BLANK),
            not(isblank(target_row_or_col)),
            not(target_row_or_col = trim_value_df_BLANK)
        ),
        filter(target_row_or_col, trim_mask, "")
    )
);

// Complex Transformations
flatten = LAMBDA(
    target_array,
    [return_as_column_bool_df_TRUE],
    [ignore_blanks_df_FALSE],
    [ignore_errors_df_FALSE],
    [unique_elements_only_df_FALSE],
    [sort_elements_df_0],
    [traverse_cols_first_df_TRUE],
    LET(
        make_column_bool, IF(ISOMITTED(return_as_column_bool_df_TRUE), TRUE, return_as_column_bool_df_TRUE),
        ignore_blanks, IF(ISOMITTED(ignore_blanks_df_FALSE), FALSE, ignore_blanks_df_FALSE),
        ignore_errors, IF(ISOMITTED(ignore_errors_df_FALSE), FALSE, ignore_errors_df_FALSE),
        ignore_value, (ignore_blanks * 1) + (ignore_errors * 2),
        traverse_cols_first, if(isomitted(traverse_cols_first_df_TRUE), TRUE, traverse_cols_first_df_TRUE),
        pre_result, IF(
            make_column_bool,
            TOCOL(target_array, ignore_value, traverse_cols_first),
            TOROW(target_array, ignore_value, traverse_cols_first)
        ),
        unique_elements_only_bool, if(isomitted(unique_elements_only_df_FALSE), FALSE, unique_elements_only_df_FALSE),
        sort_elements_value, if(isomitted(sort_elements_df_0), 0, sort_elements_df_0),
        after_unique_result, if(unique_elements_only_bool, unique(pre_result), pre_result),
        after_sort_result, switch(
            sort_elements_value,
            0, after_unique_result,
            1, sort(after_unique_result),
            -1, sort(after_unique_result,, -1),
            error.type(3)
        ),
        after_sort_result
    )
);

replaceBlankCells = LAMBDA(
    array,
    [replacement_value],
    MAP(
        array,
        LAMBDA(cur_cell,
            IF(
                ISBLANK(cur_cell),
                IF(ISOMITTED(replacement_value), "", replacement_value),
                cur_cell
            )
        )
    )
);

replaceCell = LAMBDA(
    array,
    target_cell_value,
    replacement_value,
    [comparison_operator],
    MAP(
        array,
        LAMBDA(cur_cell_value,
            let(
                comparison_func, IF(
                    ISOMITTED(comparison_operator),
                    mask.comparisonFunc("="),
                    mask.comparisonFunc(comparison_operator)
                ),
                comparison_result, comparison_func(cur_cell_value, target_cell_value),
                // non-matching cells keep their current value
                if(comparison_result, replacement_value, cur_cell_value)
            )
        )
    )
);

replaceCols = LAMBDA(
    replacement_cols,
    original_array,
    [target_col_idx],
    [insert_bool_default_false],
    [trim_to_orig_size_bool_df_FALSE],
    [expand_replacement_cols_to_match_rows_df_TRUE],
    [expand_original_cols_to_match_rows_df_TRUE],
    LET(
        col_idx, IF(ISOMITTED(target_col_idx), 1, target_col_idx),
        orig_cols, columns(original_array),
        insert_bool, IF(ISOMITTED(insert_bool_default_false), FALSE, insert_bool_default_false),
        adj_orig_array, if(
            if(isomitted(expand_original_cols_to_match_rows_df_TRUE), TRUE, expand_original_cols_to_match_rows_df_TRUE),
            _ensureHeight(replacement_cols, original_array),
            original_array
        ),
        adj_replacement_cols, if(
            if(isomitted(expand_replacement_cols_to_match_rows_df_TRUE), TRUE, expand_replacement_cols_to_match_rows_df_TRUE),
            _ensureHeight(original_array, replacement_cols),
            replacement_cols
        ),
        first_part, IF(
            col_idx > 1,
            HSTACK(TAKE(adj_orig_array, , col_idx - 1), adj_replacement_cols),
            adj_replacement_cols
        ),
        drop_cols, if(
            orig_cols >= col_idx,
            if(insert_bool, col_idx - 1, col_idx + columns(adj_replacement_cols) - 1),
            0
        ),
        combined_parts, IF(
            or(drop_cols = 0, drop_cols > orig_cols),
            first_part,
            hstack(first_part, drop(adj_orig_array, , drop_cols))
        ),
        if(
            if(isomitted(trim_to_orig_size_bool_df_FALSE), FALSE, trim_to_orig_size_bool_df_FALSE),
            take(combined_parts, , orig_cols),
            combined_parts
        )
    )
);

replaceRows = LAMBDA(
    replacement_rows,
    original_array,
    [target_row_idx],
    [insert_bool_df_false],
    [trim_to_orig_size_bool_df_FALSE],
    [expand_replacement_rows_to_match_cols_df_TRUE],
    [expand_original_rows_to_match_cols_df_TRUE],
    LET(
        row_idx, IF(ISOMITTED(target_row_idx), 1, target_row_idx),
        orig_rows, rows(original_array),
        insert_bool, IF(ISOMITTED(insert_bool_df_false), FALSE, insert_bool_df_false),
        adj_orig_array, if(
            if(isomitted(expand_original_rows_to_match_cols_df_TRUE), TRUE, expand_original_rows_to_match_cols_df_TRUE),
            _ensureWidth(replacement_rows, original_array),
            original_array
        ),
        adj_replacement_rows, if(
            if(isomitted(expand_replacement_rows_to_match_cols_df_TRUE), TRUE, expand_replacement_rows_to_match_cols_df_TRUE),
            _ensureWidth(original_array, replacement_rows),
            replacement_rows
        ),
        first_part, IF(
            row_idx > 1,
            VSTACK(TAKE(adj_orig_array, row_idx - 1), adj_replacement_rows),
            adj_replacement_rows
        ),
        drop_rows, if(
            rows(adj_orig_array) >= row_idx,
            if(insert_bool, row_idx - 1, row_idx + rows(adj_replacement_rows) - 1),
            0
        ),
        combined_parts, IF(
            drop_rows <= 0,
            first_part,
            vstack(first_part, drop(adj_orig_array, drop_rows))
        ),
        result, if(
            if(isomitted(trim_to_orig_size_bool_df_FALSE), FALSE, trim_to_orig_size_bool_df_FALSE),
            take(combined_parts, orig_rows),
            combined_parts
        ),
        result
    )
);

// Dimension and Size Helpers
_areSameHeight = LAMBDA(array1, array2, ROWS(array1) = ROWS(array2));

_areSameWidth = LAMBDA(array1, array2, COLUMNS(array1) = COLUMNS(array2));

_areSameSize = LAMBDA(array1, array2, AND(_areSameWidth(array1, array2), _areSameHeight(array1, array2)));

_ensureHeight = lambda(
    reference_array,
    expansion_array,
    [fill_value_df_DBLQT],
    expand(
        expansion_array,
        max(rows(reference_array), rows(expansion_array)),
        ,
        if(isomitted(fill_value_df_DBLQT), "", fill_value_df_DBLQT)
    )
);

_ensureWidth = lambda(
    reference_array,
    expansion_array,
    [fill_value_df_DBLQT],
    expand(
        expansion_array,
        ,
        max(columns(reference_array), columns(expansion_array)),
        if(isomitted(fill_value_df_DBLQT), "", fill_value_df_DBLQT)
    )
);

_diffHeight = LAMBDA(array1, array2, ROWS(array1) - ROWS(array2));

_diffWidth = LAMBDA(array1, array2, COLUMNS(array1) - COLUMNS(array2));

_diffSize = LAMBDA(array1, array2, HSTACK(_diffHeight(array1, array2), _diffWidth(array1, array2)));

_maxHeight = LAMBDA(arr_1, arr_2, MAX(ROWS(arr_1), ROWS(arr_2)));

_maxWidth = LAMBDA(arr_1, arr_2, MAX(COLUMNS(arr_1), COLUMNS(arr_2)));

// Stacking Logic Helpers
_stackSwitch = lambda(
    array_to_stack,
    fixed_array,
    stack_placement_df_RIGHT,
    switch(
        if(isomitted(stack_placement_df_RIGHT), "right", stack_placement_df_RIGHT),
        "above", vstack(array_to_stack, fixed_array),
        "below", vstack(fixed_array, array_to_stack),
        "left", hstack(array_to_stack, fixed_array),
        "right", hstack(fixed_array, array_to_stack),
        error.type(3)
    )
);

_stackAndExpandSwitch = lambda(
    array_to_stack,
    fixed_array,
    stack_placement_df_RIGHT,
    [fill_value_df_DBQT],
    switch(
        if(isomitted(stack_placement_df_RIGHT), "right", stack_placement_df_RIGHT),
        "above", _stackAndExpandWidth(array_to_stack, fixed_array, fill_value_df_DBQT),
        "below", _stackAndExpandWidth(fixed_array, array_to_stack, fill_value_df_DBQT),
        "left", _stackAndExpandHeight(array_to_stack, fixed_array, fill_value_df_DBQT),
        "right", _stackAndExpandHeight(fixed_array, array_to_stack, fill_value_df_DBQT),
        error.type(3)
    )
);

_stackAndExpandHeight = LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_FALSE],
    LET(
        max_height, _maxHeight(array_1, array_2),
        fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
        stack_bool, if(isomitted(vstack_bool_df_FALSE), FALSE, vstack_bool_df_FALSE),
        expanded_array_1, EXPAND(array_1, max_height, , fill_char),
        expanded_array_2, EXPAND(array_2, max_height, , fill_char),
        stack(expanded_array_1, expanded_array_2, stack_bool)
    )
);

_stackAndExpandWidth = LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_TRUE],
    LET(
        max_width, _maxWidth(array_1, array_2),
        fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
        // vertical stacking by default, matching the parameter name
        stack_bool, if(isomitted(vstack_bool_df_TRUE), TRUE, vstack_bool_df_TRUE),
        expanded_array_1, EXPAND(array_1, , max_width, fill_char),
        expanded_array_2, EXPAND(array_2, , max_width, fill_char),
        stack(expanded_array_1, expanded_array_2, stack_bool)
    )
);

_stackAndExpandAllDimensions = LAMBDA(array_1, array_2, [fill_value_df_blank], [vstack_bool_df_TRUE],
    LET(
        max_width, _maxWidth(array_1, array_2),
        max_height, _maxHeight(array_1, array_2),
        fill_char, IF(ISOMITTED(fill_value_df_blank), "", fill_value_df_blank),
        stack_bool, if(isomitted(vstack_bool_df_TRUE), TRUE, vstack_bool_df_TRUE),
        expanded_array_1, EXPAND(array_1, max_height, max_width, fill_char),
        expanded_array_2, EXPAND(array_2, max_height, max_width, fill_char),
        if(stack_bool, vstack(expanded_array_1, expanded_array_2), hstack(expanded_array_1, expanded_array_2))
    )
);
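For anyone who wants to kick the tires, here is roughly how a few of the public functions look from a worksheet once the module is imported under the arr namespace (the sample ranges are placeholders):

// a 2 x 2 summary with the labels "rows" and "columns" above the counts
=arr.dimensions(A1:C5, TRUE)

// one sorted column of the non-blank, non-error values in the range
=arr.flatten(A1:C5, TRUE, TRUE, TRUE, , 1)

// pads both arrays to a shared size with "-" and then stacks them vertically
=arr.stackAndExpand(A1:C5, E1:F10, TRUE, "-")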
Re: Your thoughts serializing arrays to manage state in excel (by way of inventory tracking)

PeterBartholomew1 You've been of help to me on lots of occasions, including this one. If the solution strikes you as meh, it is probably me who is missing the bigger picture. Generally speaking, my solutions solve Excel problems that amount to less than a minute of compute time (if that) at the cost of dozens of hours of thinking about computer science and how I can shoehorn it into Excel. What really gets me is that despite my effort to act as if Excel Labs lets me achieve efficiencies much closer to assembly-language techniques, I am always forced to confront that Excel is a fully abstracted language whose internal calculations are non-transparent and whose own calculation engine does with the data and formulas what it likes (and I'm sure it makes better choices than I do).

The problem I solved here is one that vexes both of us with no good solution: how can we use thunks (or any other lazy-evaluation answer to the array-of-arrays problem) as return values, rather than keeping them wrapped up in the ether and scope-bound? Serializing to a string is a real possibility, I think, as inelegant as it may be. Given my preference for structured data, this is a legitimate way to have something tantamount to tail recursion happening within a table capable of auto-expanding, contracting, and updating, one that can be re-ordered and filtered on a whim. It is really the re-sorting of tables of thousands of lines and more than two dozen columns that gives me anxiety.

Also, consider that this method allows you to store an array of potentially thousands of cells in a single cell, in fully resolved form, and instantly reconvert it into a full-blown array that supports all of your array functions, just by applying a simple deserialization formula. I just don't think the Excel Labs modules or the standard Excel features were really designed to handle arrays of the size I want them to consider, thousands of cells at a time.
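To make the idea concrete, here is a minimal sketch of the round trip (the names are placeholders, the delimiter must be a character that cannot appear in the data, and every value comes back as text, so numbers would need VALUE applied on the way out):

serializeArray = LAMBDA(target_array,
    // record the shape first so deserialization can rebuild it
    TEXTJOIN("|", FALSE, ROWS(target_array), COLUMNS(target_array), TOCOL(target_array))
);

deserializeArray = LAMBDA(serialized_string,
    LET(
        parts, TEXTSPLIT(serialized_string, "|"),
        row_ct, VALUE(INDEX(parts, 1, 1)),
        col_ct, VALUE(INDEX(parts, 1, 2)),
        flat, DROP(parts, , 2),
        // rebuild the original shape row by row
        TAKE(WRAPROWS(flat, col_ct), row_ct)
    )
);

A table column can then hold one serialized state per row, and any consumer gets the full array back from a single cell with =deserializeArray([@state]).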
Re: Filter Function to be optional depending on blank and non blank values

Jn12345 I spent way more time on this than I should have, but I don't think there is a great answer, due to validation limitations. My final suggestion is in some ways a poor second to the one PeterBartholomew1 appears to have come up with, which uses the CELL() function to return the address of the last active cell for purposes of setting your filters and generating your validation arrays. What I would have done is create a helper column that requires you to check the row you want to validate in order to update your filter arrays. In any event, here is a file with some of what I tried for you. You could, theoretically, adapt my formulas to your purpose relatively easily and expand them as necessary.

validation_filter = lambda(
    choice,
    equip_val,
    make_val,
    model_val,
    serial_val,
    LET(
        equip_col, tbl_equipment_list[Equipment Type],
        make_col, tbl_equipment_list[Make],
        model_col, tbl_equipment_list[Model],
        serial_col, tbl_equipment_list[Serial Number],
        mask_func, LAMBDA(value, column,
            IF(ISBLANK(value), EXPAND(TRUE, ROWS(column), 1, TRUE), value = column)
        ),
        equip_mask, mask_func(equip_val, equip_col),
        make_mask, mask_func(make_val, make_col),
        model_mask, mask_func(model_val, model_col),
        serial_mask, mask_func(serial_val, serial_col),
        combined_mask, equip_mask * make_mask * model_mask * serial_mask,
        no_box_checked, IFERROR(
            AND(NOT(equip_val), NOT(make_val), NOT(model_val), NOT(serial_val)),
            FALSE
        ),
        equip_choices, UNIQUE(FILTER(equip_col, combined_mask, "")),
        make_choices, UNIQUE(FILTER(make_col, combined_mask, "")),
        model_choices, UNIQUE(FILTER(model_col, combined_mask, "")),
        serial_choices, UNIQUE(FILTER(serial_col, combined_mask, "")),
        pick_choice_list, LAMBDA(choice,
            CHOOSE(choice, equip_choices, make_choices, model_choices, serial_choices)
        ),
        return_all, LAMBDA(choice, CHOOSE(choice, equip_col, make_col, model_col, serial_col)),
        result, IF(no_box_checked, return_all(choice), pick_choice_list(choice)),
        result
    )
);

valid_makes = XLOOKUP("x", tbl_results[filter_row], tbl_results[make], FALSE)
valid_model = XLOOKUP("x", tbl_results[filter_row], tbl_results[model], FALSE)
valid_serial_numbers = XLOOKUP("x", tbl_results[filter_row], tbl_results[serial_number], FALSE)
valid_types = XLOOKUP("x", tbl_results[filter_row], tbl_results[type], FALSE)
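To connect the pieces, each data column of tbl_results would hold a call along these lines (the exact column names here are illustrative, not a prescription), with choice 1 through 4 selecting type, make, model, or serial number choices respectively:

=validation_filter(2, [@[type]], [@[make]], [@[model]], [@[serial_number]])

Putting an "x" in filter_row then marks which row's result the XLOOKUPs above hand to the validation dropdowns as their list source, e.g. a list validation whose source is =valid_makes.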
Re: Filter Function to be optional depending on blank and non blank values

Jn12345 Without seeing your data structure it is hard to say, but here is a general approach that I refer to as "masking." Rather than using FILTER like filter(target_array, filter_array=filter_value), you do let(filter_mask, filter_array=filter_value, filter(target_array, filter_mask)). In this way, you are giving the FILTER function an array of boolean values (TRUE or FALSE) rather than asking it to evaluate whether some comparison or operation is true or false.

The reason this is useful is that inside your LET formula you can pick and name each of your three (or four, or five) filter columns, pick and name each of your filter values, and then use boolean operations to combine them. So (f_array_1 = "blue")*(f_array_2>5) is the same thing as saying "tell me which rows in these two arrays have "blue" in the first array and a number greater than 5 in the second." If you want to combine the filters with "and" but you haven't yet decided what one of the filter values should be, you can just multiply your f_array_1 mask by TRUE to have it return TRUE for all rows satisfying your one filter.

The formula below basically says: if there is a value in any one of these three columns in the current row, use that value as a filter mask on the specified column, and return all values in the target_array that satisfy all of the required filters. This means that the first, second, or third filter value can be blank or filled in, and the filtered array returned will range from everything (if no values are supplied) all the way down to nothing (if no value meets all of the criteria you have specified). So you can fill in the third value first, last, or second; it makes no difference. Your filtered list will always represent the values that meet all of the specified criteria.

If you would prefer that the returned list simply meet at least one of the filter criteria (i.e. f1 or f2 or f3), you can add the filter masks instead of multiplying them, as shown in the sketch after the formula. You can also do whatever other boolean algebra you want on them. You can change the names to your preference or add/remove filter columns and values. You must, however, properly reference the cells where your filter values are, the ranges where your filter columns and target columns are, and your operators for each filter (=, <>, >, <, etc.). ISBLANK can be temperamental if the cell referred to is not truly blank (has no value whatsoever) but holds something like an empty string, i.e. ="".

let(
    value_array_for_list, SERIAL_NUMBERS_REF,
    first_column_to_filter_by, PRODUCT_TYPE_COLUMN_REF,
    second_column_to_filter_by, PRODUCT_STYLE_COLUMN_REF,
    third_column_to_filter_by, PRODUCT_COLOR_COLUMN_REF,
    value_to_filter_first_column_by, [@[type]],
    value_to_filter_second_column_by, [@[style]],
    value_to_filter_third_column_by, [@[color]],
    first_column_mask, if(isblank(value_to_filter_first_column_by), TRUE, value_to_filter_first_column_by = first_column_to_filter_by),
    second_column_mask, if(isblank(value_to_filter_second_column_by), TRUE, value_to_filter_second_column_by = second_column_to_filter_by),
    third_column_mask, if(isblank(value_to_filter_third_column_by), TRUE, value_to_filter_third_column_by = third_column_to_filter_by),
    combined_mask, first_column_mask*second_column_mask*third_column_mask,
    filtered_serial_numbers, filter(value_array_for_list, combined_mask, "No matching serial numbers"),
    filtered_serial_numbers
)
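As a sketch of the "or" variant against the same placeholder names, only the combination step changes; the comparison with zero turns the summed mask counts back into clean TRUE/FALSE values:

    combined_mask, (first_column_mask + second_column_mask + third_column_mask) > 0,

One caveat: with the blank-means-TRUE defaults above, any blank filter value will match every row under "or" logic, so for this variant you would probably want blanks to default to FALSE instead.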