Best web archiving software for complex sites and sites requiring logins?
For years I've looked, on and off, for web archiving software that can capture most sites, including "complex" ones with lots of AJAX that require logins, like Reddit. Which ones have worked best for you?
Ideally I want one that can be started programmatically or via the command line, opens a Chromium instance (or any browser), and captures everything shown on the page. I could also open the instance myself to log into sites and install add-ons like uBlock Origin. (btw, archiveweb.page must be started manually).
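For what it's worth, something like Playwright (my assumption, not something named above; any CDP-capable driver would do) can launch a persistent Chromium profile programmatically, so cookies and logins survive between runs and you can log in by hand first. A minimal sketch that only saves the rendered HTML, not a replayable WARC/WACZ the way archiveweb.page does:

```python
# Minimal sketch, assuming Playwright. A persistent profile keeps cookies and
# logins between runs, so you can open the window, log in manually, and re-run
# the capture later. Profile directory and target URL are placeholders.
from pathlib import Path
from playwright.sync_api import sync_playwright

PROFILE_DIR = "archive-profile"          # hypothetical profile directory
TARGET_URL = "https://old.reddit.com/"   # placeholder target

with sync_playwright() as p:
    context = p.chromium.launch_persistent_context(
        user_data_dir=PROFILE_DIR,
        headless=False,  # visible window so you can log in or adjust things by hand
    )
    page = context.new_page()
    page.goto(TARGET_URL, wait_until="networkidle")
    # Saves only the rendered DOM, not every network response.
    Path("snapshot.html").write_text(page.content(), encoding="utf-8")
    context.close()
```

Whether manually installed add-ons carry over depends on how Chromium is launched, so treat that part as untested.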
wget is the most comprehensive site cloner there is. What exactly do you mean by "complex"? Because wget works for anything static and public... If you're trying to clone server-side source files, like PHP or something, obviously that's not going to work. If that's what you mean by "complex", then just give up, because you can't.
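For a static, public site, a typical mirror run looks roughly like the sketch below; it's wrapped in Python's subprocess only because the OP wants programmatic startup, and the URL is just a placeholder:

```python
# Rough sketch of wget's usual static-mirroring flags, invoked programmatically.
import subprocess

subprocess.run(
    [
        "wget",
        "--mirror",            # recursive download plus timestamping
        "--page-requisites",   # also grab the CSS/JS/images each page references
        "--convert-links",     # rewrite links so the local copy browses offline
        "--adjust-extension",  # add .html etc. where the server omits extensions
        "--no-parent",         # don't wander above the starting directory
        "https://example.com/docs/",
    ],
    check=True,
)
```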
The site apparently works for the people who browse it, but wget doesn't succeed in simply cloning it.
I want the items the usable site is made of, not endless failed requests chasing recursive errors forever.
Apparently you have to be ultra-competent at configuring all the exclusions and other command-line switches to get wget to handle any particular site.
Sure, on static sites it's magic, but on too many sites with dynamically constructed portions it's a real headache at times.
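For reference, the switches being alluded to are wget's reject and exclude options, which are how you stop it chasing query-string permutations forever. A rough sketch, with the regex and directory names made up purely for illustration:

```python
# Rough sketch of wget's reject/exclude switches for reining in a runaway crawl.
# The pattern and paths below are hypothetical examples, not from the thread.
import subprocess

subprocess.run(
    [
        "wget",
        "--recursive",
        "--level=3",                                # cap recursion depth
        "--page-requisites",
        "--convert-links",
        "--reject-regex", r".*[?&](sort|page)=.*",  # skip endless query-string variants
        "--exclude-directories=/search,/api",       # skip paths that only error or loop
        "https://example.com/",
    ],
    check=True,
)
```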