Topic: Tools or apps (windows) to download all my favorites offline?

Posted under e621 Tools and Applications

Three I know of are gallery-dl, Raccoony (download links at top of page), and RE621.

gallery-dl is a command-line program, but it runs on Windows (once you install Python). I am pretty sure you can tell it to download https://e621.net/favorites and it will do the thing, even though I haven't personally used it that way.
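If it does work that way, the invocation might look something like this. This is only a sketch I haven't tested: "your_username" is a placeholder, and the exact flags should be checked against gallery-dl's documentation. The command is shown with echo so the sketch is inert.

```shell
# Build the favorites search URL; fav:<name> is e621's search syntax.
user="your_username"
url="https://e621.net/posts?tags=fav:${user}"

# The command one might run. --cookies-from-browser can reuse an
# existing browser login instead of configuring credentials by hand.
echo gallery-dl --cookies-from-browser firefox "$url"
```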

Raccoony is a browser extension; at one time it supported both Firefox and Chrome, but it may not work in Chrome at the moment. The way you would use it for this is to turn on the "auto download" option in the settings, visit your favorites page on e621, click on the raccoon head, and then click "Open all in tabs". It will open a new tab for each of the images on that page, and download the image. If you have more than one page of favorites, you'd need to manually click on page 2, raccoon head, "open all in tabs", then page 3, raccoon head, "open all in tabs", etc. It also works on other sites like FA, Weasyl, etc.

RE621 is a script that runs in a browser extension like Tampermonkey. I am less familiar with using it, but I know it offers a "download this group of images" feature.

I'm pretty sure there are more, but I've used all three of the above at one point or another.

alphamule

Privileged

kora_viridian said:
Three I know of are gallery-dl, Raccoony (download links at top of page), and RE621.

gallery-dl is a command-line program, but it runs on Windows (once you install Python). I am pretty sure you can tell it to download https://e621.net/favorites and it will do the thing, even though I haven't personally used it that way.

Raccoony is a browser extension; at one time it supported both Firefox and Chrome, but it may not work in Chrome at the moment. The way you would use it for this is to turn on the "auto download" option in the settings, visit your favorites page on e621, click on the raccoon head, and then click "Open all in tabs". It will open a new tab for each of the images on that page, and download the image. If you have more than one page of favorites, you'd need to manually click on page 2, raccoon head, "open all in tabs", then page 3, raccoon head, "open all in tabs", etc. It also works on other sites like FA, Weasyl, etc.

RE621 is a script that runs in a browser extension like Tampermonkey. I am less familiar with using it, but I know it offers a "download this group of images" feature.

I'm pretty sure there are more, but I've used all three of the above at one point or another.

I highly, HIGHLY recommend option #2! It just uses your browser's cache facility, which means you can do it as you browse, with almost no impact on the site (as far as I can tell).

alphamule said:
It just uses your browser's cache facility which means you can do it as you browse, with almost no impact on the site (As far as I can tell).

I was curious about it, so I Wiresharked myself while using Raccoony. As far as I can tell, it really does get the images out of your browser cache when you tell it to save the image. There was a small amount of extra traffic to the site, but it wasn't anywhere near what you would expect if it was re-downloading the entire image. I didn't check in detail, but I suspect Firefox was hitting the site again, getting a 304 Not Modified for the image, and then handing the image from its cache to Raccoony.

I've used Raccoony for a few years now, but I don't have auto-download turned on. I don't necessarily want to save a copy of every single image I view. I do use its download hot key when I see something I want to save.

OP's use case is a little bit different, in that they already have a collection of stuff they like, and they already know they want to download every image in that collection. gallery-dl might be easier to use in that case, since I'm pretty sure it knows how to page through all of your favorites.

Once you've "caught up" by downloading all your existing favorites, I think tools like Raccoony or RE621 are a good way to stay caught up, by downloading newer images as you find them.

Worth mentioning hydrus as well. It's an application that supports local tag-based searching, as well as downloading with tags from quite a few places, including here.

kora_viridian said:

gallery-dl is a command-line program, but it runs on Windows (once you install Python). I am pretty sure you can tell it to download https://e621.net/favorites and it will do the thing, even though I haven't personally used it that way.

...that doesn't sound quite right. /favorites should only work correctly if you are logged in -- and gallery-dl does support logins in general, but you do have to explicitly configure your login or tell it which browser to import cookies from; it won't just autodetect that your browser has the necessary login info.

...
The explicit search fav:$username (e.g. fav:kusomaru) might work regardless of whether you are logged in. I'm not entirely sure whether some items might be omitted from the listing; if they are not, then ripme should also be a workable option.

I do wonder how much work it is to get these various tools to cooperate -- mainly, are they all dumping the files with 'as-is' filenames (the md5 hash filenames provided by e621), or can they be configured to do so, and are their 'is already downloaded?' checks working on the same basis?
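For what it's worth, if every tool did write <md5>.<ext> into one shared directory, the "is already downloaded?" check reduces to a plain file-existence test. A rough sketch (the directory and hash below are made up for illustration):

```shell
# Sketch: with a shared naming scheme of <md5>.<ext>, any tool can
# detect a previous download with a simple existence test.
dir="${HOME}/Downloads/e621"           # hypothetical shared directory
md5="d41d8cd98f00b204e9800998ecf8427e" # the md5 the site reports for a post
ext="png"

if [ -e "${dir}/${md5}.${ext}" ]; then
  echo "skip: ${md5}.${ext} already downloaded"
else
  echo "fetch: ${md5}.${ext}"
fi
```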

alphamule

Privileged

kora_viridian said:
OP's use case is a little bit different, in that they already have a collection of stuff they like, and they already know they want to download every image in that collection. gallery-dl might be easier to use in that case, since I'm pretty sure it knows how to page through all of your favorites.

Once you've "caught up" by downloading all your existing favorites, I think tools like Raccoony or RE621 are a good way to stay caught up, by downloading newer images as you find them.

You can also do a search on your favorites, get a list of XX of them, and then open them all, but that is sloooooooow. I have such a small number of favorites that it wouldn't take long, but that sounds awful if you have more than a couple hundred. :(

savageorange said:
...that doesn't sound quite right.

Like I said, I don't use gallery-dl regularly. I used it a few times on other sites to verify its functionality, but I'm not super familiar with how it works here. ¯\_(ツ)_/¯

I do wonder how much work it is to get these various tools to cooperate -- mainly, are they all dumping the files with 'as-is' filenames (the md5 hash filenames provided by e621), or can they be configured to do so, and are their 'is already downloaded?' checks working on the same basis?

Raccoony doesn't keep the site's 'as-is' filename out of the box, but can be configured to do that with the {siteFilenameExt} keyword in the Download path option in the settings (I just tried it). The "already downloaded" check only works within the same browser session - if you download an image with Raccoony, close out your browser completely, restart the browser, and then visit the same image again, Raccoony doesn't know that you already downloaded it.

(With the limitations Firefox and Chrome impose on WebExtensions, I'm not sure if it's easily possible for an extension to have the "already downloaded" state persist across browser sessions.)

kora_viridian said:
Like I said, I don't use gallery-dl regularly. I used it a few times on other sites to verify its functionality, but I'm not super familiar with how it works here. ¯\_(ツ)_/¯

Raccoony doesn't keep the site's 'as-is' filename out of the box, but can be configured to do that with the {siteFilenameExt} keyword in the Download path option in the settings (I just tried it).

This is a good hint -- I only guessed at {filenameExt}, which, for some reason I don't understand, piles the top of the tag list (and nothing else) into the filename, so I was getting frustrated.

The "already downloaded" check only works within the same browser session - if you download an image with Raccoony, close out your browser completely, restart the browser, and then visit the same image again, Raccoony doesn't know that you already downloaded it.

(With the limitations Firefox and Chrome impose on WebExtensions, I'm not sure if it's easily possible for an extension to have the "already downloaded" state persist across browser sessions.)

Not really what I meant with that, unless you mean that Raccoony doesn't get to read the listing of the directory it writes to. If a downloader can reproduce the original filename, it should be able to check whether you ever downloaded a post, just by checking whether a file by that name already exists in the configured directory (and optionally comparing HTTP headers for things like file size). Any downloaders that can do that should, in theory, be able to interoperate freely.

(that is a little weird though: are you saying that Raccoony only can have permissions to interrogate part (the currently in-memory part, say) of the cache? Because normally your on-disk cache will contain data from multiple browser sessions)

savageorange said:
This is a good hint -- I only guessed at {filenameExt}, which, for some reason I don't understand, piles the top of the tag list (and nothing else) into the filename, so I was getting frustrated.

I think Raccoony does that because e621 doesn't support the idea of a "title" for an individual post. On sites like FA, which do allow a title, Raccoony uses the title as part of the filename by default.

The documentation is a few versions behind, but it may be useful.

Not really what I meant with that, unless you mean that Raccoony doesn't get to read the listing of the directory it writes to.

I think that might be true. I know for sure that WebExtensions can't write to anything other than the browser's Downloads folder, or a folder below that. I'm not exactly sure what the restrictions on reading are.

If you have any downloader able to reproduce the original filename, it should be able to check whether you already downloaded that ever, just by checking whether a file by that name already exists in the configured directory (and optionally comparing http headers for things like size of file).

This does assume that the user never moves things out of Downloads. I, for one, move things out of Downloads to site-specific directories somewhere else on the filesystem, every month or two.
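One way around the moved-files problem (just an idea on my part, not something any of these tools actually does, as far as I know) would be tracking hashes in a persistent log instead of relying on the files staying put:

```shell
# Sketch: log each downloaded md5 to a file, so the "already
# downloaded?" check survives moving images out of Downloads later.
log="${HOME}/.e621_downloaded.txt"     # hypothetical log file
md5="d41d8cd98f00b204e9800998ecf8427e" # made-up example hash

touch "$log"
if grep -qxF "$md5" "$log"; then
  echo "skip: seen ${md5} before"
else
  echo "fetch: ${md5}"
  echo "$md5" >> "$log"
fi
```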

(that is a little weird though: are you saying that Raccoony only can have permissions to interrogate part (the currently in-memory part, say) of the cache?

I know that if you tell Raccoony to download something that's already in the browser cache - memory for sure, maybe the on-disk cache - the browser will just serve that request from the cache, and not do a full re-download from the site.

For more details, you'd have to ask Simon-Tesla, or read MDN and do experiments. I just hit the "D" button to download the pr0n. :D

alphamule said:
I highly, HIGHLY recommend option #2! It just uses your browser's cache facility which means you can do it as you browse, with almost no impact on the site (As far as I can tell).

So, I'm a Java, GitHub, and addon n00b; after I clone Raccoony onto my machine and install npm, what do I have to do to get Firefox to use it?

Came across a setting in Firefox that people reading this topic might want to be aware of: privacy.partition.network_state
For same-site caching it doesn't matter, but some extensions I've used (Tab Image Saver, for one) require it to be disabled (set to false) in order to get caching working. The setting was created for the hypothetical situation where some s***head decides it would be funny to measure load times of small files to act as a type of tracker, i.e. stalker behavior by telemetry companies.

kaworu said:
So, I'm a Java, GitHub, and addon n00b; after I clone Raccoony onto my machine and install npm, what do I have to do to get Firefox to use it?

What? You just download it and then install it in Firefox. I guess you can build it yourself (it's JavaScript, not Java) if you want, but it's not required. The instructions are pretty clear and concise in the Readme file. If you're a 'noob' to addons, just use the XPI file I linked to.

Don't use this version, but instead, this one. If you tried the older version, it wouldn't work.

BTW: The Chrom(e/ium) version is not supported ATM, for those who didn't want to install Firefox.

I use "raccoony/{siteName}/{author}/{siteFilenameExt}" for the filename/path of e621 posts, as an example of a decent way to sort them out.


kaworu said:
So, I'm a Java, GitHub, and addon n00b; after I clone Raccoony onto my machine and install npm, what do I have to do to get Firefox to use it?

You went down the "I want to develop Raccoony" path. You should take the "I want to use Raccoony" path instead. :)

Like alphamule said, there's a version of it already built and ready to install in Firefox. You can download it from the link alphamule gave.

Once you install it in Firefox, go to e621's post page, or FurAffinity's front page, or a "gallery" page on most other furry art sites. You should see a raccoon-head icon in the lower left corner of your browser window. Mouse over that icon to see what you can do.

The documentation for Raccoony is a few versions behind, but it may be useful.

edit: close quote


alphamule said:
What? You just download it and then install it in Firefox. I guess you can build it yourself (it's JavaScript, not Java) if you want, but it's not required.

kora_viridian said:
You went down the "I want to develop Raccoony" path. You should take the "I want to use Raccoony" path instead. :)

Like alphamule said, there's a version of it already built and ready to install in Firefox. You can download it from the link alphamule gave.

I guess that's what I get for reading the forum on my phone and figuring I can just Google for "racoony extension Firefox" to get it on my desktop rather than bring up the forum and just follow the links...
