This sounds a lot like something I was implementing around the same time you were writing this post.
I also used stored procedures, but instead of deleting the old files, I kept a base folder for each table and wrote each run to a new date/time subfolder (e.g. 20210701135615, i.e. year-month-day-hour(24h)-minute-second).
I also used a base view that included all of the fields and data type conversions, etc.
With this base view, I could then partition over a set of columns and compute a row_number over each partition, which gave me a broad, row-versioned distribution of data across dates. That allowed multiple dataflows in a Power BI workspace: one for the full dataset and one for a trimmed version that a Power BI developer working on their local machine could use for development without hanging their machine on huge datasets.
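A stripped-down version of those two views might look something like the following (the table, column, and view names are invented for illustration, not the real schema):

    -- Base view: all fields, with explicit data type conversions
    CREATE VIEW dbo.Sales_Base AS
    SELECT
        CAST(SaleId   AS bigint)         AS SaleId,
        CAST(SaleDate AS date)           AS SaleDate,
        CAST(Amount   AS decimal(18, 2)) AS Amount,
        Region
    FROM dbo.Sales_External;  -- external table over the current date/time folder
    GO

    -- Trimmed view for local development: cap the rows kept per partition
    CREATE VIEW dbo.Sales_Dev AS
    SELECT SaleId, SaleDate, Amount, Region
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY SaleDate, Region
                                  ORDER BY SaleId) AS rn
        FROM dbo.Sales_Base
    ) AS ranked
    WHERE rn <= 1000;  -- small, date-spread sample for the dev dataflow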
The value of keeping the old files in back-dated folders is that one can go back in time if necessary for testing and/or audit purposes.
The path is parameterized in the stored procedure.
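In very rough form, the core of that procedure was along these lines (the data source, file format, and object names below are placeholders, not the real ones):

    CREATE PROCEDURE dbo.Export_Sales
        @BasePath varchar(200) = 'curated/sales/'  -- base folder for the table
    AS
    BEGIN
        -- Build the date/time subfolder, e.g. curated/sales/20210701135615/
        DECLARE @Stamp  varchar(14)  = FORMAT(SYSUTCDATETIME(), 'yyyyMMddHHmmss');
        DECLARE @Folder varchar(300) = @BasePath + @Stamp + '/';

        -- Drop only the previous staging metadata; the old files stay on the lake
        IF OBJECT_ID('dbo.Sales_Staging') IS NOT NULL
            DROP EXTERNAL TABLE dbo.Sales_Staging;

        -- The CETAS LOCATION cannot take a variable directly, hence dynamic SQL
        DECLARE @Sql nvarchar(max) = N'
            CREATE EXTERNAL TABLE dbo.Sales_Staging
            WITH (
                LOCATION    = ''' + @Folder + N''',
                DATA_SOURCE = LakeDataSource,
                FILE_FORMAT = ParquetFormat
            )
            AS SELECT * FROM dbo.Sales_Source;';

        EXEC sp_executesql @Sql;
    END;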
Old files could also be cleaned up later via a "data Roomba."
One way to do a metadata switchover to the new table source could be this:
Create the new CETAS table with a new name, like _v2.
Within a transaction, drop the original external table and create the new one (not as a CETAS) pointing at the new source; commit.
Then drop the _v2 CETAS table definition.
This effectively keeps the original table and source until the new data is materialized; at that point the switchover is just a metadata change performed within a transaction.
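Put into code, that switchover would look roughly like this (placeholder names again, and assuming, as described above, that the DROP/CREATE of the external table can be wrapped in a transaction):

    -- 1) Materialize the new data under a temporary _v2 name
    CREATE EXTERNAL TABLE dbo.Sales_v2
    WITH (
        LOCATION    = 'curated/sales/20210701135615/',
        DATA_SOURCE = LakeDataSource,
        FILE_FORMAT = ParquetFormat
    )
    AS SELECT * FROM dbo.Sales_Source;

    -- 2) Metadata-only switchover: repoint the original name at the new folder
    BEGIN TRANSACTION;
        DROP EXTERNAL TABLE dbo.Sales;
        CREATE EXTERNAL TABLE dbo.Sales (      -- plain external table, not CETAS
            SaleId   bigint,
            SaleDate date,
            Amount   decimal(18, 2),
            Region   varchar(50)
        )
        WITH (
            LOCATION    = 'curated/sales/20210701135615/',
            DATA_SOURCE = LakeDataSource,
            FILE_FORMAT = ParquetFormat
        );
    COMMIT;

    -- 3) Drop only the _v2 metadata; the files it wrote remain on the lake
    DROP EXTERNAL TABLE dbo.Sales_v2;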