Oh dang, that is a scary thought! I agree doing something ourselves without admin support is likely necessary.
Any ideas on the best way to make a backup? My guess is that @Javelin did this by hand for those two docs; I'm too lazy to do that 😛 but your remark about re-learning tidbits does make doing this manually feasible (and the page counts don't look crazy high).
I spent the evening writing a script to dump high-probability non-spam pages, though it's very possible I'm missing some pages (and I **may** have included some spam too). Based on an initial scrape, I think we're looking at less than 30 MB of data (and that figure includes some spam content). I have a list of just under 1000 pages; see https://pastebin.com/cyqj3MWm
The old version of MediaWiki in use still supports the old "&printable=yes" option, which is the ideal way to get the text out if you're doing a simple scrape (and not extracting the raw MediaWiki markup).
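Roughly what I mean is something along these lines (just a sketch, not my actual script: the base URL is a placeholder and `pages.txt` is assumed to be a local copy of the pastebin list, one title per line):

```python
import time
import urllib.parse
import requests

BASE_URL = "https://wiki.example.org/index.php"  # placeholder: swap in the real wiki URL

def fetch_printable(title: str) -> str:
    """Fetch the printable HTML for one page via the old &printable=yes option."""
    resp = requests.get(BASE_URL, params={"title": title, "printable": "yes"}, timeout=30)
    resp.raise_for_status()
    return resp.text

with open("pages.txt", encoding="utf-8") as f:
    titles = [line.strip() for line in f if line.strip()]

for title in titles:
    html = fetch_printable(title)
    out_name = urllib.parse.quote(title, safe="") + ".html"
    with open(out_name, "w", encoding="utf-8") as out:
        out.write(html)
    time.sleep(1)  # go easy on the old server
```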
Quick and dirty options would be wget and other scraping tools (pandoc is not bad for conversion). I ran out of time this evening to go further. I was hoping Kiwix would be a good option, but I can't see any scrape/download tools or options for MediaWiki. https://www.mediawiki.org/wiki/Manual:Pywikibot might work, assuming the eech wiki is not too old (i.e. it still exposes the API version the bot needs).
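If Pywikibot does turn out to work, grabbing the raw wikitext would look roughly like this (again only a sketch: it assumes you've already run Pywikibot's generate_family_file.py against the wiki and set up user-config.py, and "mywiki" is a placeholder family name; the old API version is exactly where this might fall over):

```python
import pywikibot

# Assumes a family file was generated with generate_family_file.py and
# user-config.py points at it; "mywiki" is a placeholder family/code.
site = pywikibot.Site("en", "mywiki")

with open("pages.txt", encoding="utf-8") as f:
    titles = [line.strip() for line in f if line.strip()]

for title in titles:
    page = pywikibot.Page(site, title)
    if not page.exists():
        continue
    # page.text is the raw wikitext, so the markup survives for a later re-import
    with open(title.replace("/", "_") + ".wiki", "w", encoding="utf-8") as out:
        out.write(page.text)
```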
I've not looked at Archive.org yet.
Any value in:
1. a new thread with a "backup wiki" subject?
2. me posting the script(s) I have so far (to GitHub)?