Topic: e621 Downloader made in Rust that can download sets, pools, bulk posts, and single posts!

Posted under e621 Tools and Applications

I have returned. This program is my magnum opus. I've wanted software that could do these things for a long time, but I could never find one that handled all of the options I wanted. So, I decided to make it myself.

Release Description

e621_downloader is a fast, native program meant to download a large number of images at a decent pace. It can handle bulk posts, single posts, sets, and pools via a custom, easy-to-read language that I made.

Having tested this software extensively, I managed to download 10,000 posts (over 20 GB of data, with some animations over 100 MB each) in just two hours.

Download Link

FEATURES:
- A tag file with a simple language, generated examples, and comments (see the example below).
- Searching for single posts, bulk posts, sets, and pools.
- Downloads images into folders named after their search tag.
- The ability to log in and use your blacklist.
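
To give a feel for it, a tag file might look something like this (a hypothetical sketch; the group names come up later in this thread, and the generated file's own comments document the real syntax):

```
# Comments and group headers (hypothetical sketch; see the generated
# file for the real syntax).
[artists]
some_artist

[pools]
12345

[general]
canine feral
```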

Notice for users using the new version (1.6.0 and newer)

If you are not logged into E6, a filter (almost like a global blacklist) is applied with the entry young -rating:s. This filter will nullify any posts that fall under it. If you wish to download posts of this kind, you must log in; otherwise, the filter stays in effect.

Notice for users using a VPN

A recurring "bug" has shown up in my issues over the last couple of months, usually right after a new release, so I am adding this notice for VPN users to prevent it from becoming issue spam. Some users experience consistent crashes when parsing, obtaining the blacklist, or downloading, and each person affected so far has been using a VPN, with no other noticeable cause linked. After a multitude of testing, I have concluded that users on VPNs will occasionally have either e621 directly or Cloudflare prompt for a captcha, or some other test for whether you are a robot. Since my program has no GUI and no tangible way of handling that, it crashes immediately. I have looked for fixes to this issue and have yet to find anything. So, if you are using a VPN, be warned: this can happen. The current workaround is switching locations in the VPN (if you have that feature) or disabling the VPN altogether (if you have that option). I understand it is annoying and can be a pain, but this is all I can do until I come across a fix. Sorry for the inconvenience, and apologies if you are one of the users experiencing this issue.

Features Planned

- [ ] Add a menu system with configuration editing, tag editing, and download configuration built in.
- [ ] Transition the tag file language to JSON to integrate easily with the menu system.
- [ ] Update the code to be more sound, structured, and faster.

Installation

All you have to do is decompress e621_downloader.7z and run the software (make sure e621_downloader.exe is in its own directory!). The program will print errors on first run; don't worry! It is simply telling you that the config is missing and will generate it. It will also create the tag.txt file and emergency-exit. The program exits to allow you to edit the tag file, which will contain examples and comments explaining how it works. Don't worry, it's easy :).

You don't have to worry about creating the download directory; it is handled for you. The download directory is configurable in the config.json file (please make sure the directory always ends with /!).

Compiling Source

If you wish to compile the code and run the bleeding edge of the software, you can find installation guides for Windows, Arch Linux, and Debian.

Installation Guide (Windows)
Installation Guide (Arch Linux)
Installation Guide (Debian)

FAQ

See the GitHub repository if you encounter any errors and wish to report them there.

Updated

It's been a long time: about four months, to be exact, since I started working on this software. I'm happy it's done, because now it's in a good enough state for me to use myself. So enjoy; this is a very fun project.

1.0.0 Release notes:

- Added a new date system when downloading artists and characters.
- Ability to enter safe mode.
- Updated the tag parser to support pools, sets, single posts, and bulk posts via groups.
- A tag validator has been added to ensure the tags exist on e621.
- Rewrote the entire web connector.

Improvements and Bug Fixes

- Fixed bug where the tag would have a trailing space when a comment was on the same line.
- Improved client speed, making downloading 20+ GB of images and animations in two hours possible.

Updated by anonymous

There was a small bug that I found and fixed. Nothing breaking, just the wrong order of code executing.

1.0.2 Hotfix Release Notes:

- Fixed the issue where the connector would ask which mode to run in before actually creating the tag file.

Updated by anonymous

I have done another patch to the software after it abruptly crashed while downloading images. I can't tell if this was my crappy internet (which cut out shortly after the program crashed) or a 503 response. So, to be on the safe side for now, I have added a check for whether the server responds with a 503. If it does, the program will tell the user to contact me or to post on this forum so I can fix it.

1.0.3 Release Notes:

- Added a safety guard in case the server responds with a 503 response code while downloading images.
- Updated the user-agent version from 0.0.1 to 1.0.3.

Updated by anonymous

Been testing this all night and so far I really like it for one thing: it works! Which is surprisingly hard to find in one of these, or at least one that stays working for long...
So here's hoping it stays working, unlike the last 3 I have used over the years XD
Right now the only thing I can think of to recommend as a feature tho is cross-comparing the posts to be downloaded between the [pools] and [artists] sections.
If I have a comic to be downloaded in [pools] and the artist who drew the comic in the [artists] section, I will end up downloading 2 copies.
Not a huge problem for me since I have other programs that find duplicate images and deal with that, but it would be one less step~

Anyways, you rock and happy coding XD

Updated by anonymous

Good program; it helped me download artists' collections and comics.
In summary, there is a small bug: putting a species-related tag (shark, renamon) in the "general" option does not show the correct number of posts to download.

Updated by anonymous

umbreon13 said:

Good program; it helped me download artists' collections and comics.
In summary, there is a small bug: putting a species-related tag (shark, renamon) in the "general" option does not show the correct number of posts to download.

Downloading posts by species tag can be iffy in a lot of situations. For instance, the two examples you supplied are "shark" and "renamon". Renamon has over 11,653 posts and the shark tag has over 18,272 posts. That is a large number of posts to download in a single run, but it is manageable. The problem comes in when dealing with other species tags. Mammal, for example, has over 1,308,848 posts tied to it. If I didn't have a check inside the program for this, a user could crash their computer or be denied access by the server for attempting to download over one million posts. Currently, there is a check for whether the tag has under 1,500 posts. If it does, the program will download all the images in one go; if not, it will grab only 5 pages of posts and download those. The best thing I can recommend right now is to use a single post's ID and grab images by that, or make your search more specific for what you want to batch download. This will ensure the program grabs most, or all, of what you need.
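
To illustrate, here is a minimal sketch of that guard (not the program's actual code): tags with under 1,500 posts are downloaded in full, and anything larger is capped at five pages.

```rust
const FULL_DOWNLOAD_LIMIT: u64 = 1_500;
const CAPPED_PAGES: u64 = 5;

/// Decide how many pages of results to grab for a tag.
fn pages_to_grab(post_count: u64, posts_per_page: u64) -> u64 {
    if post_count < FULL_DOWNLOAD_LIMIT {
        // Small tag: grab every page (rounding up).
        (post_count + posts_per_page - 1) / posts_per_page
    } else {
        // Huge tag (e.g. "mammal" with over a million posts): cap the
        // grab so the user doesn't hammer the server or fill their disk.
        CAPPED_PAGES
    }
}

fn main() {
    assert_eq!(pages_to_grab(900, 320), 3); // small tag: every page
    assert_eq!(pages_to_grab(1_308_848, 320), 5); // huge tag: capped
}
```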

Updated by anonymous

Dythul said:
Been testing this all night and so far I really like it for one thing: it works! Which is surprisingly hard to find in one of these, or at least one that stays working for long...
So here's hoping it stays working, unlike the last 3 I have used over the years XD
Right now the only thing I can think of to recommend as a feature tho is cross-comparing the posts to be downloaded between the [pools] and [artists] sections.
If I have a comic to be downloaded in [pools] and the artist who drew the comic in the [artists] section, I will end up downloading 2 copies.
Not a huge problem for me since I have other programs that find duplicate images and deal with that, but it would be one less step~

Anyways, you rock and happy coding XD

I would be happy to add a feature like this! But currently, there is something a little more important that needs to be handled. At this moment, aliases have no support and need to be added to the program. That way you won't have to write "my_little_pony" if you want posts with that tag; it's much easier to just type "mlp" and have the alias system kick in rather than typing the longer form of the tag.

Updated by anonymous

I have released another update after starting work on an issue someone was having with the downloader on Linux. There is still another issue to handle, but what I have currently added is quite important for people who use tags that have Unicode characters in them.

1.1.3 Release Notes:

- Added Unicode character support to the parser [#23].
- Fixed a bug where the parser would see an empty tag at the end of the tag file [#23].

Updated by anonymous

Another quick update!

1.2.3 Release Notes:

- Added an alias checker, so now aliases can be used in the tag file.
- Added the ability for the user to create directories for every tag in the tag file when the images are being saved. This can be enabled and disabled in the config.json file.

Updated by anonymous

P.S. Looking at your list of things to do. You don't have to sign in to get your favorites, you can always access them with fav:username even when signed out. The only thing it would potentially give you access to would be the blacklist and post votes.

Updated by anonymous

KiraNoot said:
P.S. Looking at your list of things to do. You don't have to sign in to get your favorites, you can always access them with fav:username even when signed out. The only thing it would potentially give you access to would be the blacklist and post votes.

I ended up figuring that out when I tinkered with how favorites worked. Since you have posted here, I would love to ask a question!

How do I go about accessing the blacklist? Currently, the only way I see it working is logging in through a request, then accessing the created cookie "blacklisted_tags" and parsing it. Is there an API call I can do to access this easily?

I have looked through the entire API and haven't found anything on it, but I assume I'm either blind or I really do have to go down the route I described above. My goal is to have the blacklist applied while my program is grabbing posts. My plan is to grab a bulk of posts, then check the rating, tags, and users on the client side. But I do want to make sure I'm not overcomplicating anything.
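
For reference, the client-side check I'm describing would look roughly like this (a sketch with assumed field names, not final code):

```rust
// Sketch of client-side blacklist filtering; the Post and Blacklist
// fields here are assumptions for illustration.
struct Post {
    rating: String, // "s", "q", or "e"
    tags: Vec<String>,
    uploader: String,
}

struct Blacklist {
    ratings: Vec<String>,
    tags: Vec<String>,
    users: Vec<String>,
}

impl Blacklist {
    /// True if the post trips any blacklist entry.
    fn blocks(&self, post: &Post) -> bool {
        self.ratings.contains(&post.rating)
            || self.users.contains(&post.uploader)
            || post.tags.iter().any(|tag| self.tags.contains(tag))
    }
}

/// Keep only posts the blacklist does not block.
fn filter_posts(posts: Vec<Post>, blacklist: &Blacklist) -> Vec<Post> {
    posts.into_iter().filter(|p| !blacklist.blocks(p)).collect()
}

fn main() {
    let blacklist = Blacklist {
        ratings: vec!["e".into()],
        tags: vec!["gore".into()],
        users: vec![],
    };
    let posts = vec![Post {
        rating: "s".into(),
        tags: vec!["canine".into()],
        uploader: "someone".into(),
    }];
    assert_eq!(filter_posts(posts, &blacklist).len(), 1);
}
```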

Updated by anonymous

The Blacklist Release is now out; enjoy the fun of using blacklists once more!

1.3.3 Release Notes:

- Users can now use their blacklist by logging in with a username and API key.
- login.json is created for logging into the account, with fields for username, API key, and a boolean for whether or not you wish to download your favorites (see the example below).
- Users can now download their favorites by simply giving the username and making sure DownloadFavorites is true. An API key is not required if you just wish to download your favorites.
- DO NOT SHARE YOUR LOGIN FILE! The API key you supply is exactly like your password, so treat it as such.
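
For illustration, login.json would look something like this (the exact field names below are assumptions; check the generated file for the real ones):

```json
{
  "Username": "your_username",
  "APIKey": "your_api_key_here",
  "DownloadFavorites": true
}
```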

Updated by anonymous

The Turbo Release is here! This brings a lot of code optimization with a ton of improvements, on both the user's end and the programmer's end. There are also minor features added that I'm happy to have, as I use this program myself. I think some are good little quality-of-life things you will enjoy. Have fun!

1.4.3 Release Notes:

- General optimization brought throughout the entire program.
- Removed the date system as it is no longer needed with how the program works with tags.
- Images are now categorized when saved.

Updated by anonymous

The Client Release is done. This implements a lot of general optimizations and a huge change behind the scenes. Without getting too tech-savvy with it, this mostly changes how the clients are handled when making API calls. Before, a client would simply be created when needed, and the user-agent header would be applied manually each time. Now, there is a single client, and every request passes through a call that adds the user-agent header. This will improve the overall security of the program, making sure that every call is passing through all the checks safely. Besides that, there are still other optimizations that make this release even quicker. So enjoy the release!

1.5.3 Release Notes:

- Major update to the way requests are sent.
- Organized and improved more code, and removed the ability to create or not create directories when downloading, as it seems to be an unneeded feature.
- Reversed the posts vector right before starting the download process so that the oldest images are downloaded first and the newest last. This was done for sorting purposes.

Tech-savvy Release Notes:

- Major update to the way requests are sent, relying on a new class called RequestSender rather than Client directly.
- Added a trait for the blacklist and tag parser.
- Added a HashMap macro that's similar to vec! (see the sketch below).
- Renamed EsixWebConnector to WebConnector.
- Updated the grabber to only get what is needed. The memory usage is not lessened a great deal.
- More renames, minor optimizations, and formatting.
- Updated the parser and how the tags are tokenized.
- Huge update to all documentation, along with minor optimizations throughout the program.
- Optimized imports and formatted code.
- A logical error ended up reversing how the blacklist saw and blacklisted posts; this is now fixed.
- Removed Chrono from Cargo.
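
For the curious, a vec!-style HashMap macro is a common Rust pattern; a minimal sketch (not necessarily the exact implementation used here) looks like this:

```rust
use std::collections::HashMap;

// A vec!-style constructor macro for HashMap, sketched from the
// description above rather than taken from the project's source.
macro_rules! hashmap {
    ($($key:expr => $value:expr),* $(,)?) => {{
        let mut map = HashMap::new();
        $( map.insert($key, $value); )*
        map
    }};
}

fn main() {
    let post_counts = hashmap! {
        "renamon" => 11_653,
        "shark" => 18_272,
    };
    assert_eq!(post_counts["shark"], 18_272);
}
```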

Updated by anonymous

A hotfix for the Client Release, felt the need to make this change as I have already written extensively on why it's there.

1.5.4 Release Notes:

- Fixed a bug where the post limit for the grabber went up to 1,600 instead of 1,280.

Updated by anonymous

This is a minor update that brings a lot of visual improvement along with faster image download speeds. I liked how this turned out, so enjoy~

1.5.5 Progress Release Notes:

- Replaced the progress bar with a better one. This one can now show the total file size you need to download, and how far along you are.
- The prompt for when you want to enter safe mode is now handled through a better system.

Tech-savvy Release Notes:

- Fixed bug where the images became directories.
- Renamed function get_length_posts to get_file_size_from_posts.
- Made the download_set function much more readable.
- The posts array is now reversed when grabbing posts instead of right before downloading them.
- Added new progress bar that displays the number of bytes to be downloaded instead of the number of images (see the sketch below).
- Small variable rename in FlagWorker.
- Fixed bug where the flags raised would pass the margin because the artist tags were being flagged twice.
- Grabber now manages the Blacklist completely, no longer relying on the WebConnector for the information.
- Blacklist is no longer created and parsed every time a new page of posts is grabbed.
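
As a rough illustration of a byte-based progress bar, here is a sketch using the indicatif crate (0.15-era API); this is illustrative, not the downloader's actual code:

```rust
use indicatif::{ProgressBar, ProgressStyle};

fn main() {
    // Pretend we have 20 MB of posts to download.
    let total_bytes: u64 = 20 * 1024 * 1024;
    let bar = ProgressBar::new(total_bytes);
    bar.set_style(
        ProgressStyle::default_bar()
            .template("{bytes}/{total_bytes} [{bar:40}] {bytes_per_sec} {eta}")
            .progress_chars("=>-"),
    );

    let chunk: u64 = 1024 * 1024;
    for _ in 0..20 {
        // ...download a chunk here...
        bar.inc(chunk); // advance by the bytes just written
    }
    bar.finish();
}
```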

Updated by anonymous

This release is pretty small and contains minor updates with one big change, which concerns pools. Pools had an issue where only one page would download; this was admittedly due to me misreading the API reference, and it is now fixed. I have also numbered the posts that are downloaded for pools, so now you can read them like an actual comic. So enjoy~!

1.5.6 Pools Release Notes

- Pools are now downloaded in their entirety instead of just one page.
- Pools' posts are now numbered when downloading, so reading them is much easier (see the sketch below).
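
The numbering might look like this (a hypothetical sketch; zero-padding is my choice here so filenames sort in reading order):

```rust
// Hypothetical example of numbering pool posts so they sort like comic pages.
fn pool_file_name(page: usize, post_id: u64, ext: &str) -> String {
    format!("{:04}_{}.{}", page, post_id, ext)
}

fn main() {
    assert_eq!(pool_file_name(1, 123456, "png"), "0001_123456.png");
    assert_eq!(pool_file_name(12, 654321, "jpg"), "0012_654321.jpg");
}
```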

Tech-savvy Release Notes

- Better formatting of tags.txt.
- When a duplicate is skipped, the file_size is now incremented before continuing to the next post to download.
- Removed the "Favorites" category when downloading the user's favorites.
- Documented code I forgot to document.
- Updated reqwest from 0.9.24 to 0.10.1.
- Updated serde from 1.0.103 to 1.0.104.
- The progress bar now fills the line it's on.
- Removed README.txt.

Updated by anonymous

Hello everyone! As I'm sure you have noticed, there was a new update to the e621 website, and with that update came a lot of new and exciting changes. One of these changes is the responses we get when we call the API. My program, if you use it, will crash now. The reason is that the API calls I make are now invalid, because the responses they expect no longer exist. So what I have to do is develop a new update that can handle the new responses I receive from the e621 platform. Hopefully, this makes sense. If not, don't worry; the update that will fix my program shouldn't take too long. It's all just a matter of how quickly I can write my code, and how much free time I have to focus on it (which is a lot, because I am currently sick).

So stay tuned, there should be an update for you all soon!

Update

So... I have taken a break from this program for a while and have been working on life, including my social life, job, and family. While doing this, I decided it was best to put several things on hold, and this ended up being one of them. But after upgrading my computer with the power of Ryzen, playing games to my heart's content, and becoming a manager at my job, I'm back. I've been working non-stop on making the much-needed changes for this program to work with the new API, and I have just finished a fully working version, but there are a few things that need to be worked on a little more.

Firstly, there are now deprecated/dead parts of my code, sections that are no longer used. This happened because I was stepping through my code quickly to get the program into a working state once more. For me to feel right about releasing the next version, I want to remove this code and make the program a little faster.

Secondly, there are bugs and issues that I have noticed recently. One of the large ones, which I assume was here before the API update, is that the blacklist in my program isn't 100% accurate. I'm not sure why, but it seems to be the case. I'm taking a deep look at it, since it was one of the things I had to change for the new API.

Another, almost more worrying issue with this API is that some posts that are grabbed... well... they just don't have a file URL. I'm not sure if these were deleted posts; the reason I say that is the API has a flagging section with a boolean for whether or not the post is deleted, and every case showed it wasn't, just that the file URL is null. It's confusing to me, but my fix for this is to filter such posts out of the download list. I want to add that you are notified when a post like this is filtered, so don't worry.

The blacklist also had another change. The way the blacklist on your account was obtained, at least back then, was to access a certain URL with your login; doing this allowed me to grab your blacklist string, which contained everything you had blacklisted and nothing else. I had built a parser and a flagging system that emulated e621's version to a T. At the time, it had 100% accuracy through the numerous tests that I did. This API change moved the blacklist string I was requesting to a more general area: your user page. To actually get to this page, I need your user ID, so for now it has to be entered manually in the same file that holds your API key and username. I plan on simplifying this in the future by sending one request to the API for your user page, searching by your username. This request won't show me your blacklist at first, but it will give me your ID, so you won't have to input it. After obtaining it, the program will then log in, grab your blacklist, and log out. By log out, I mean it doesn't save the cookie that tells e621 that my client is using your account anymore xp

So yeah, there are a lot of changes to my program, but with my new computer, development has definitely become easier. The fact that I can use a debug version of my program and still download at 5 MB/s is astonishing; I didn't think it could download so quickly in debug mode. I'm also currently thinking about releasing a debug build for people to use, but the issue is that users of the software would need to understand it is in development and highly likely to crash from small things, especially with half of the API I made torn apart. And yes, I have an API that communicates with e621's API; they're buddies, or at least I try to make them buddies, and there are quite a lot of safeguards built in to make sure it doesn't cross a boundary with e621. Beyond that, this is the current update. Thanks for listening to my rambling. A build should be out soon; I've stated before that I use my own program, and I'm getting to a point where I need it again, so I'm doing this for current users and for me. Be patient, I should have more info soon. Bye!~

The file URL is null because it's blocked by the global blacklist; you need to authenticate.
And you can get the profile ID from any of the profile pages.

aobird said:
The file URL is null because it's blocked by the global blacklist; you need to authenticate.

If this is the case, then I'm going to go through and disable my blacklist from my post grabbing and then enable cookies on my client. I'll test it internally for a bit and see if certain blacklisted tags will actually make the file URL null. Back then, the API ignored the blacklist entirely; it would show a post no matter if you had your blacklist enabled or not, which is what resulted in me creating my own blacklist filtering system. I'll play around with it for a bit and see what comes of it. If this ends up being the case, then I can remove one of my systems and give my program a little more room to process posts quicker.

I will probably also look into finding out how to remove the global blacklist. I know the cookie that stores it, but I'd rather not tell my client to access a cookie manually and then delete it. That feels like a major workaround for something that could be solved more simply. But I'll have to look into it.

Thanks for telling me what might be causing it. I'll definitely be looking into it, since it wasn't my first thought when I was encountering this problem. I wondered whether the blacklist on the site would actually filter through with the new API now; I was expecting it to just straight up delete posts from the list that I would grab, but I guess they're just nullifying it and considering it deleted.

Updated

aobird said:
And you can get the profile ID from any of the profile pages.

I did talk about this in my main post; I guess I didn't clarify it too clearly, as I was half asleep when I wrote it. But yeah, I know how to actually grab the ID; it's just that for right now, you have to put it in manually. Until I can get other parts of my API sorted out/deleted, it's going to have to be this way for a moment. The API I'd built to communicate with e621's API is currently torn apart and being worked on. My API had a lot of things I used to grab certain things relatively quickly, and it was also highly optimized; because of this, digging through optimized code is a lot of work, especially when I'm just adding new, unoptimized code. A better way to look at it: I'm basically adding a new room to a house, and then going through and destroying other rooms that don't need to be there anymore.

Something else I didn't really clarify was that I was trying to get the program into a working state as quickly as possible. There were a lot of errors in the code, and all of a sudden it was crashing every time you started it, so I was trying to work around that and get my program into a working state again so I could actively test and improve it. It's currently in that state; it's just that right now there are a lot of issues and small things that will crash my software in a heartbeat. So I'm currently trying to get it leveled out and more stable again. Once it's balanced and I have everything sorted out, I can start production on a more stable build, and then I can release it and have my software open to the public again. For right now, though, the only thing public is my code.

Updated

mcsib said:
If this is the case, then I'm going to go through and disable my blacklist from my post grabbing and then enable cookies on my client. I'll test it internally for a bit and see if certain blacklisted tags will actually make the file URL null. Back then, the API ignored the blacklist entirely; it would show a post no matter if you had your blacklist enabled or not, which is what resulted in me creating my own blacklist filtering system. I'll play around with it for a bit and see what comes of it. If this ends up being the case, then I can remove one of my systems and give my program a little more room to process posts quicker.

I will probably also look into finding out how to remove the global blacklist. I know the cookie that stores it, but I'd rather not tell my client to access a cookie manually and then delete it. That feels like a major workaround for something that could be solved more simply. But I'll have to look into it.

Thanks for telling me what might be causing it. I'll definitely be looking into it, since it wasn't my first thought when I was encountering this problem. I wondered whether the blacklist on the site would actually filter through with the new API now; I was expecting it to just straight up delete posts from the list that I would grab, but I guess they're just nullifying it and considering it deleted.

What AoBird means by this is that it's behind the login filter. The site doesn't make distinctions between API usage and requesting through the normal HTML site. The only entry on this blacklist is currently young -rating:s. So you have to support letting users authenticate using their username/API key if they want to download content of this nature.

kiranoot said:
What AoBird means by this is that it's behind the login filter. The site doesn't make distinctions between API usage and requesting through the normal HTML site. The only entry on this blacklist is currently young -rating:s. So you have to support letting users authenticate using their username/API key if they want to download content of this nature.

I recently pushed a new commit to my Github linked here which allows authentication when grabbing posts through HTTP Basic Auth, which is what was recommended on the wiki. This removes that global blacklist and allows post grabbing without the worry of the filter you stated. I do have a couple of questions, though, since you are here. Is there a way to disable this blacklist for users who aren't willing to authenticate? If there isn't, I will state that in the release notes in case users wonder why they can't grab certain posts.
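
For anyone curious, with reqwest the authentication looks roughly like this (a sketch with an illustrative endpoint, tag, and user-agent string, not my exact code):

```rust
use reqwest::blocking::Client;

// Sketch of HTTP Basic Auth against the new API with reqwest (0.10-era);
// the endpoint, tag, and user-agent string here are illustrative.
fn grab_posts(username: &str, api_key: &str) -> reqwest::Result<String> {
    Client::new()
        .get("https://e621.net/posts.json?tags=canine")
        .header("User-Agent", "e621_downloader/1.6.0 (by your_username)")
        .basic_auth(username, Some(api_key))
        .send()?
        .text()
}
```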

I noticed there is an API limit on my user page too, and I wanted to know what this actually means, since the wiki page for the API doesn't mention it anywhere. Does it correlate to how many calls I can make with my account authenticated through programs, or is it something entirely different?

Beyond that, I also noticed that with authentication the global blacklist is removed, but the user's blacklist isn't applied as a filter on the posts that I'm grabbing. I was able to grab posts with tags I had blacklisted, and those blacklisted tags were ignored, so I'm assuming I'll still have to utilize my client-side blacklist system unless there is a way to use the user's blacklist server-side.

mcsib said:
I recently pushed a new commit to my Github linked here which allows authentication when grabbing posts through HTTP Basic Auth, which is what was recommended on the wiki. This removes that global blacklist and allows post grabbing without the worry of the filter you stated. I do have a couple of questions, though, since you are here. Is there a way to disable this blacklist for users who aren't willing to authenticate? If there isn't, I will state that in the release notes in case users wonder why they can't grab certain posts.

I noticed there is an API limit on my user page too, and I wanted to know what this actually means, since the wiki page for the API doesn't mention it anywhere. Does it correlate to how many calls I can make with my account authenticated through programs, or is it something entirely different?

Beyond that, I also noticed that with authentication the global blacklist is removed, but the user's blacklist isn't applied as a filter on the posts that I'm grabbing. I was able to grab posts with tags I had blacklisted, and those blacklisted tags were ignored, so I'm assuming I'll still have to utilize my client-side blacklist system unless there is a way to use the user's blacklist server-side.

Bypassing the login filter without logging in: you can manually construct URLs, but I'm not encouraging this method in applications that can allow authentication, as it may break in the future if things change. It's only really suited for applications that can't allow user auth. The login filter is there to force consent to view that type of content, and the acceptance of the legalities for the region.

The API limit on the profile page goes down based on changes made and is a window-based throttle. Unless you're doing really weird things, it should be impossible to hit, as it is slightly higher than the hard rate limit on requests. You get a throttle response if you do manage to hit it. Read actions usually don't cause the number to go down.

Think of it as closer to a login filter for images. It is distinct from the blacklisting system, which is still client side. So your client side code is still needed.

kiranoot said:
Bypassing the login filter without logging in: you can manually construct URLs, but I'm not encouraging this method in applications that can allow authentication, as it may break in the future if things change. It's only really suited for applications that can't allow user auth. The login filter is there to force consent to view that type of content, and the acceptance of the legalities for the region.

The API limit on the profile page goes down based on changes made and is a window-based throttle. Unless you're doing really weird things, it should be impossible to hit, as it is slightly higher than the hard rate limit on requests. You get a throttle response if you do manage to hit it. Read actions usually don't cause the number to go down.

Think of it as closer to a login filter for images. It is distinct from the blacklisting system, which is still client side. So your client side code is still needed.

Okay, that makes a lot of sense to me. I don't plan on bypassing the system like that; unless there's an API call or a query entry to tell the server not to apply the filter, I'll manage. Mainly because it's also understandable from a legality standpoint. Should I implement something in my program about the legalities of the login filter, or explain what exactly it is filtering, or is that something I shouldn't really worry about?

I'm at a point now where I can make the last steps towards cleaning up my code and getting it stable again, so I'm in a good spot to do it if I have to.

Release 1.6.0-Indev.2

This is it, finally. After numerous rounds of testing, the build is finally available. While this isn't a release build, as indicated by the indev label in the version, it is a functional and usable build. It can use the tags, download images, and use the blacklist (after I fought through updating it... after finding a huge memory leak...) and basically handle everything you need. There was a lot of change to my code, and it is ugly at the moment, but that is something for me to worry about. I think I'm finally at a point in development where I can focus on stabilizing this version and getting it functioning with release code. I'm proud of how quickly I managed to get this together, albeit with a lot of slicing and cutting in quality that I had to do.

For those curious, I never had to use the #[deprecated(since = "...", note = "...")] attribute until this build. I was moving so fast that I had to start utilizing a lot of features to get this build off the ground, mainly to get it to even run so I could set it up, but it's all done now. There are over 23 warnings at the moment, many functions aren't documented, and there is a ton of commented-out code as well as mismatched naming. So, with all of this, I'll have to dedicate a day to getting it all cleaned up and looking nice. Once I have everything set up and finished, I can begin focusing on other things. The next release will have quite a nice changelog, to say the least. With that, this is all I have to say for now. I hope you all enjoy the build, and I'm sorry if it runs slow for you; this is only a debug build, which means speeds will be horrible, but that will be fixed when the main release is out. Till then, see you all around.

mcsib said:
Okay, that makes a lot of sense to me. I don't plan on bypassing the system like that; unless there's an API call or a query entry to tell the server not to apply the filter, I'll manage. Mainly because it's also understandable from a legality standpoint. Should I implement something in my program about the legalities of the login filter, or explain what exactly it is filtering, or is that something I shouldn't really worry about?

I'm at a point now where I can make the last steps towards cleaning up my code and getting it stable again, so I'm in a good spot to do it if I have to.

You might make a note in the readme that the login filter exists, what it contains, and, if that content is desired, how users can obtain an API token and log in using the config file. That way you have obvious documentation about it for those who want to use the app for that.

Release 1.6.0 In the Future

The 1.6.0 In the Future release is here, and there was a lot altered; the list below contains everything, so be prepared for a very nerdy read. Have fun~

Also, before I fully get to the release notes, I must clarify that there is now a filter on e621. If you are not logged into E6, a filter (almost like a global blacklist) is applied with the entry young -rating:s. This filter will nullify any posts that fall under it. If you wish to download posts of this kind, you must log in; otherwise, the filter stays in effect.

Release Notes:
- Downloader now works with the new API on e621
- Blacklist system updated and reworked to provide much greater accuracy and performance
- Login system uses a much safer method to authenticate (the safer method was added with the new version of e621)
- The server now conforms to a new filtering system that nullifies the file URL of posts containing the tag young without the rating set to safe
- Users no longer need their ID in the login.json file to log into their accounts (this is a specific note for those who used the indev release of 1.6.0)
- The flagging system for the blacklist on the downloader now supports negated ratings, users, and IDs.

Bug fixes:
- Fixed massive memory leak in the blacklist system
- Fixed issue where the program would crash if the user's blacklist is empty
- Fixed issue where deleted posts and filtered posts (part of the new API) would crash the downloader while it was grabbing and downloading posts [#31, #32]
- Fixed bug where the user was required to log in, rather than logging in being an option
- Fixed bug where the blacklist would completely ignore blacklisted users

Misc:
- Added new dependency base64-url at version 1.2.0 for the new authentication system
- Updated dialoguer from 0.5.0 to 0.6.2
- Updated indicatif from 0.13.0 to 0.15.0
- Updated reqwest from 0.10.1 to 0.10.6
- Updated serde from 1.0.104 to 1.0.112
- Updated serde_json from 1.0.44 to 1.0.55
- Updated dialoguer from 0.1.6 to 0.1.8

Updated

So, while the main release has been out, there have been some bugs popping up here and there. I've been waiting for most of them to show, and I feel enough have shown to warrant a new release. This is just a basic hotfix with bug fixes, nothing new or impressive, so enjoy!

1.6.1 Release Notes:

Bug fixes:
- Fixed a bug where the validator required either a tag or an alias, giving no option to emergency-exit in case of a typo or invalid tag
- Fixed bug where the pool name for the image file conflicted with OS file and folder name restrictions, causing unexpected behavior and outright crashing on some occasions
- Fixed bug where the program would crash after not being able to figure out which category a tag was

Misc:
- Backpedaled to a previous form of the safe mode selection, as the current one was starting a separate thread/output and letting the program run beyond the prompt itself, opening a window for bugs in the future.

mcsib said:
So, while the main release has been out, there have been some bugs popping up here and there. I've been waiting for most of them to show, and I feel enough have shown to warrant a new release. This is just a basic hotfix with bug fixes, nothing new or impressive, so enjoy!

1.6.1 Release Notes:

Bug fixes:
- Fixed a bug where the validator required either a tag or an alias, giving no option to emergency-exit in case of a typo or invalid tag
- Fixed bug where the pool name for the image file conflicted with OS file and folder name restrictions, causing unexpected behavior and outright crashing on some occasions
- Fixed bug where the program would crash after not being able to figure out which category a tag was

Misc:
- Backpedaled to a previous form of the safe mode selection, as the current one was starting a separate thread/output and letting the program run beyond the prompt itself, opening a window for bugs in the future.

Hi!
Sorry to bug you, I'm still trying to learn coding myself. I'm having a little trouble with the program crashing or just closing when it encounters a file it can't download for some reason. Is there a way to keep the program open to figure out what subject/item was making it crash? And what would someone have to do to make the program stay open with a [COMPLETED] dialogue, just to show that it had successfully completed without (or with) errors? Just wondering; not sure if that was a consideration or not. Just thought I'd ask. Thanks for making a great program, by the way!

redrioccan said:
Hi!
Sorry to bug you, I'm still trying to learn coding myself. I'm having a little trouble with the program crashing or just closing when it encounters a file it can't download for some reason. Is there a way to keep the program open to figure out what subject/item was making it crash? And what would someone have to do to make the program stay open with a [COMPLETED] dialogue, just to show that it had successfully completed without (or with) errors? Just wondering; not sure if that was a consideration or not. Just thought I'd ask. Thanks for making a great program, by the way!

Running it directly via the executable will open (and close) the window automatically. But if you manually run it via CMD, the window will stay open.

Start -> Run -> Type CMD, hit Enter

OR Windows Key+R -> Type CMD, hit Enter

From there manually navigate to the directory and run the downloader. If it errors out the window will stay open, letting you read it. Once it finishes successfully, you'll see it sitting idle, waiting for further input.
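
For example, assuming the downloader was extracted to C:\e621_downloader (adjust the path to wherever yours lives):

cd C:\e621_downloader
e621_downloader.exe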

drakkenfyre said:
Running it directly via the executable will open (and close) the window automatically. But if you manually run it via CMD, the window will stay open.

Start -> Run -> Type CMD, hit Enter

OR Windows Key+R -> Type CMD, hit Enter

From there manually navigate to the directory and run the downloader. If it errors out the window will stay open, letting you read it. Once it finishes successfully, you'll see it sitting idle, waiting for further input.

Awesome! Thank you. Lol.

redrioccan said:
Awesome! Thank you. Lol.

Exactly what Drakkenfyre said. But with that, feel free to reach out and tell me what error occurred! I don't check e621 very often and have been playing around with video encoding more as of late (learning the AV1 codec xp), but if there are any problems, I will fix them, especially because I'm currently working on an update for the downloader with better debug logging and an actual log file. The log file will not only show what you see printed in the program; it will show much more detailed error output, something this software has needed for quite some time.

If you encounter any issues, please post them on GitHub, or reach out to me privately on e621, since that sends an email notifying me. With that, I hope you have a good day and that your issue can be sorted out :3

…maybe this sounds dumb, but I don't really understand how it works. Can someone tell me how to use it? I'm really confused right now.

help_me_1 said:
…maybe this sounds dumb, but I don't really understand how it works. Can someone tell me how to use it? I'm really confused right now.

This is admittedly something I should add in the future, for the program's sake, since it can be a little more overwhelming compared to other downloaders. I will work on a good explanation of how to use it for the next release; sorry for it being a little complicated at the moment.

Just thought I should ask since I see I'm a few versions behind, but what's the best way to go about updating the program without having to reallocate the download folders? Last time I moved which drive it downloads onto, I had to start from scratch.

Hello. I can't seem to figure out how to get the program to download posts from artists by tag. For example, I'm trying to download psy101's works (great artist btw) for the tags feral and canine by putting psy101 in the Artists section, and feral and canine in the general section (I've tried putting the two tags on their own lines and on the same line, separated both by spaces and by commas).

Thank you :)

Edit 1: Nevermind, I actually resolved it by putting in the artists in the general section + any other tags.

Edit 2: One actual bug I've seemed to have found though is that "safe mode" is a bit too safe, since it actually stops the program from downloading anything. However, I should note that I'm using Bootcamp (a program built into my Mac that allows me to use Windows).

Edit 3: For some reason, some posts get moved towards the end of the download, so that instead of posts 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, you have posts 1, 2, 3; 5, 6, 7; 9, 10; 4, 8.

Edit 4: Spaces before, between, and at the end of tags are not trimmed, so that "canine " is not read properly (due to the trailing space).

Edit 5: Aliases are not properly converted, so that for example, "vulpine" is not converted to "fox", nor "Pokemon" to "Pokémon".

Updated

ketchupee1 said:
Hello. I can't seem to figure out how to get the program to download posts from artists by tag. For example, I'm trying to download psy101's works (great artist btw) for the tags feral and canine by putting psy101 in the Artists section, and feral and canine in the general section (I've tried putting the two tags on their own lines and on the same line, separated both by spaces and by commas).

Thank you :)

Edit 1: Nevermind, I actually resolved it by putting in the artists in the general section + any other tags.

Edit 2: One actual bug I've seemed to have found though is that "safe mode" is a bit too safe, since it actually stops the program from downloading anything. However, I should note that I'm using Bootcamp (a program built into my Mac that allows me to use Windows).

Edit 3: For some reason, some posts get moved towards the end of the download, so that instead of posts 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, you have posts 1, 2, 3; 5, 6, 7; 9, 10; 4, 8.

Edit 4: Spaces before, between, and at the end of tags are not trimmed, so that "canine " is not read properly (due to the trailing space).

Edit 5: Aliases are not properly converted, so that for example, "vulpine" is not converted to "fox", nor "Pokemon" to "Pokémon".

Hello, sorry for the late reply. It's good you found a fix for the first problem.

Problem One

Now for the second: you don't need to use BootCamp on Mac to run the program. If you are tech-savvy in any way, you can install rustup and then compile the program's source code directly. There's no intense setup or anything, nothing to link; just get rustup, get the source, and then run cargo build --release in the project directory, and boom, you have a release build of the program working perfectly for you! :3
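
Roughly, the steps look like this (the clone URL is whatever the source link points to, and the folder name may differ):

git clone <repository URL>
cd e621_downloader
cargo build --release

The finished binary will be under target/release.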

Problem Two

As for the safe mode issue, that's a bug; I didn't test that mode for a long time and, as a result, broke it for a while (I don't know how many versions). However, downloading the latest release should fix it. If I'm wrong, you can compile the master branch and get the fix there. I remember finding and fixing that bug, so hopefully I'm right. I've been in college, so I haven't been able to work on the project much.

Problem Three

For the third issue, are you downloading a pool? This is a known bug that I fixed after noticing it while updating the pool download feature. My best recommendation right now is to compile the master branch into a release build; the code there should be working without issue. I don't usually push bugs up to the master branch, but I will warn that it may be unstable, as there may be bugs I haven't found yet.

Problem Four

The fourth problem is fixed! I'm pretty sure I included that in the latest build, but it may be the master branch too.

Problem Five

Now, as for aliases: they are a little finicky in the program. To explain how it works: aliases on e621 aren't tags with posts tied to them, so when I query the API with an alias, I don't get posts. The program must check if there are posts tied to a tag; if there are none, it will look up the alias and then search with the proper tag. It's super convoluted, and I wish I could've made it better, but API limitations at the time prevented it. All of this is done at runtime and not accurately shown in the program, so if the tag file has "vulpine" as a tag, it will show it's searching for and downloading "vulpine" rather than "fox", which is what it's actually doing. Same with Pokemon; but if you give me more information about what's happening specifically, I can investigate and see whether or not it's a bug.
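
In rough Rust terms, the runtime fallback works like this (the helper names below are hypothetical stand-ins, not my real functions):

```rust
// Sketch of the alias fallback described above; post_count_for and
// lookup_alias are hypothetical stand-ins for real API calls.
fn resolve_tag(tag: &str) -> String {
    if post_count_for(tag) > 0 {
        tag.to_string() // a real tag with posts: use it as-is
    } else if let Some(target) = lookup_alias(tag) {
        target // e.g. "vulpine" resolves to "fox"
    } else {
        tag.to_string() // unknown tag: let the validator complain later
    }
}

fn post_count_for(_tag: &str) -> u64 {
    0 // stand-in: would query the API for the tag's post count
}

fn lookup_alias(tag: &str) -> Option<String> {
    // Stand-in: would query the API's alias listing.
    match tag {
        "vulpine" => Some("fox".to_string()),
        _ => None,
    }
}

fn main() {
    assert_eq!(resolve_tag("vulpine"), "fox");
}
```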

Final Notes

I want to apologize for not reaching out sooner; as I said previously, I'm in college for a bachelor's degree in Computer Science. As a result, I've been tied up with classes and haven't been able to work on the project much. I will get everything sorted once I find the groove of college; tackling two classes at once has kept me busy. Hopefully, my post has fixed some of your problems if they are still persistent. This isn't dead-ware yet, just on a bit of a hiatus while I get some learning done.

redrioccan said:
Just thought I should ask since I see I'm a few versions behind, but what's the best way to go about updating the program without having to reallocate the download folders? Last time I moved which drive it downloads onto, I had to start from scratch.

I apologize for the issue there! You must have been on a version where the downloads folder wasn't categorized. I forget when I changed it, but I made the downloads folder more organized in how it stores tags, and that, as a result, broke many users' download folders. The best way to update the program is to download the new build and copy the exe over your current exe. If it crashes, it's safe to say I may have updated the login.json file or the config.json file with more information; you can delete them and let the program regenerate them. Otherwise, it shouldn't crash, and if it does, it's probably a bug. Hopefully, that answers your question.

General Post

It has been a while since I posted here. Personal life and college have taken up my time, so I wanted to drop a little information on the current happenings of the software. First off, this program is not dead-ware (dead or abandoned software); it's still quite alive, at least in my head. I've just been tied up with college and had some stuff happening in my personal life that kept me from making a post or update. I'm in college right now for a Computer Science degree, so I've been juggling classes, but on the bright side, in terms of this program, it means the quality of the code and of updates in general will improve. I could even port this software to Android if I wanted, since my degree will revolve around making cross-platform applications for iPad, Android, and Windows; so there may be an Android version if I decide to do that. In general, the code will start running smoother and faster and feel more stable than it does right now; I just need time to adjust, since I started college last month. Admittedly, I should have gone to college a lot sooner. However, I was still floating around other things and didn't know what I was doing with my life; after some self-reflection and exploring, I feel programming is the genuine passion that I strive for, and as a result, I will be programming a lot more.

That also means this program will start getting more healthy updates, and it will start feeling newer.

Time to list the significant thing: if you are tech-savvy, I highly recommend moving to the bleeding edge of this software's code right now. It contains tons of fixes for existing bugs, plus UI changes I've been making to improve the user experience. It's unfinished, but I hope to go back and clean it up one day, because once I do, the program will be enjoyable to use fluently. So yeah, stick with the bleeding edge right now until everything is sorted. I'll figure something out; I might help other peeps out and make a pre-release build with what's there right now, possible new bugs included, just so they can download pools in the correct order and not deal with weird safe-mode bugs or other black magic I accidentally made with the code.

As for new features I may add, I'm not quite sure yet. One of the running ideas I've had recently is to either update the parser for the tag file or replace the tag file altogether with a JSON file of some sort. I'm mainly trying to find a way to make the file easily editable without it being complicated to read; I'm aiming for the most straightforward, human-readable file that you can look at. So who knows? Maybe next update I'll create a new file format, or replace it with something more standard that has years of syntax built onto it. For right now, though, it's mainly an idea that's up in the air. As for other fixes, this update especially will focus on fixing bugs and making the UI a little more readable; I'm not planning on going crazy and trying to implement exclusive new features. This is mostly just a run-of-the-mill "keep the pace" kind of update. In actuality, the program is close to being finished; I've pretty much achieved what I had initially planned for it, and there are just a couple of things that need to be fixed before I can fully say it's done. And much like any program, as much as I want to say it's fully completed, it's probably never going to be finished until I finally stop pushing updates for it. But I feel like there's a lot of unfinished work in the project that I need to complete before I get entirely comfortable leaving it alone to live organically. So if anyone still uses this program, I hope you keep patience with the wait time and keep an eye out for any new updates that might release.

This program is built on some old foundations; it was built before many of the more modern mass downloaders were even a thing, and if I'm not mistaken, I was one of the first developers to make a program like this that could download all categories. I think there were two others, but they didn't handle all the categories, and their downloading features were a little finicky at the time. The original goal of this program was to be the fastest downloader possible, and I still think I have achieved that thus far. It could technically be faster, but it would require multithreading capabilities; the download speed you see in the program comes from just one thread, not multiple threads like certain C# or Python implementations can use.

As a result, at some point I feel a temptation to try to modernize the program to be more multithreading-capable; it would be interesting to see this program blaze through workloads utilizing all the cores it can. I remember implementing it through the development branch at one point but never got insane performance increases. When I think back on it, that's mainly because the program wasn't built around multithreading; it was built to be a very linear, single-threaded experience. Who knows? Maybe college will get me to make this program fully multithreaded; that would be interesting.

That is all I have for now. Hopefully, you remain patient and vigilant for updates; I will be working on them shortly, though I have a lot of other stuff to handle. Classes this month have been kicking my butt. So stay safe out there, and have a good day!

Hm. I seem to be suffering from a weird issue. Yesterday, the program worked damn near perfectly, but today, it suddenly started to crash on reading the tag file, even though I literally hadn't changed anything about it. And there doesn't seem to be any sort of log I can look at for further info.

EDIT: Managed to get it to output an error via the command line.

Should enter safe mode? [y/N]
N
Parsed tag file.
thread 'main' panicked at 'Json was unable to deserialize to entry!: Error("missing field 'appeal_count'", line: 0, column: 0)', src\e621\sender.rs:720:31

ONE MORE EDIT: Apparently, it's some sort of issue with the Login info, because it starts going when I remove my info from the login.json file.
The only potential guess I have about this is that I reached the limit of API queries? Not sure.

Updated

system_searcher said:
Hm. I seem to be suffering from a weird issue. Yesterday, the program worked damn near perfectly, but today, it suddenly started to crash on reading the tag file, even though I literally hadn't changed anything about it. And there doesn't seem to be any sort of log I can look at for further info.

EDIT: Managed to get it to output an error via the command line.

Should enter safe mode? [y/N]
N
Parsed tag file.
thread 'main' panicked at 'Json was unable to deserialize to entry!: Error("missing field 'appeal_count'", line: 0, column: 0)', src\e621\sender.rs:720:31

ONE MORE EDIT: Apparently, it's some sort of issue with the Login info, because it starts going when I remove my info from the login.json file.
The only potential guess I have about this is that I reached the limit of API queries? Not sure.

Hi, I happen to be a software developer and created my own downloader (it is neither as fast nor as efficient, but it's there and it works). I can verify that I can use the API myself, but this downloader does not work. At a guess, I would say that the e621 devs decided to update the API format and not tell anyone, and there's a new field "appeal_count" that the program is trying to deserialize but that isn't present in the deserialization object. I wish there was an actual, up-to-date API spec out there; it'd make developing against their API much less of a pain.

nonmoumou said:
Hi, I happen to be a software developer and created my own downloader (it is neither as fast nor as efficient, but it's there and it works). I can verify that I can use the API myself, but this downloader does not work. At a guess, I would say that the e621 devs decided to update the API format and not tell anyone, and there's a new field "appeal_count" that the program is trying to deserialize but that isn't present in the deserialization object. I wish there was an actual, up-to-date API spec out there; it'd make developing against their API much less of a pain.

Then why did it work for me yesterday with nary a hitch? Did e621 literally just update the API today? Also, at the moment, this downloader seems to function mostly well as long as I don't input my account data, if anyone still wants to use it.

nonmoumou said:
Hi, I happen to be a software developer, and created my own downloader (it is neither as fast nor as efficient, but it's there and it works). I can verify that I can use the API myself, but this downloader does not work. At a guess, I would say that the E621 devs decided to update the API format and not tell anyone, and there's a new field "appeal_count" that it's trying to deserialize into an object but isn't present in the deserialization object. I wish there was an actual, up-to-date API spec out there. It'd make developing to use their API much less of a pain.

It's the opposite, actually: I removed the appeal_count field from the user object. I'm currently removing a bunch of stuff that isn't used in any way, appeals being one of them. Ideally you wouldn't notice, but since serde deserialization appears to be strict, it complains about the field suddenly not being there anymore. Just removing the field from the struct should solve this problem.
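
To illustrate (a minimal sketch with hypothetical structs, not the downloader's actual types): serde's derived deserializer rejects JSON that is missing a declared field unless the field is removed or made optional.

use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct UserEntry {
    id: u64,
    appeal_count: u64, // the API stopped sending this field
}

#[derive(Deserialize, Debug)]
struct UserEntryFixed {
    id: u64,
    // Either delete the field entirely, or make it tolerant like this:
    #[serde(default)]
    appeal_count: Option<u64>,
}

fn main() {
    let json = r#"{ "id": 1 }"#;

    // Fails with: missing field `appeal_count`
    assert!(serde_json::from_str::<UserEntry>(json).is_err());

    // Succeeds; appeal_count is simply None.
    let fixed: UserEntryFixed = serde_json::from_str(json).unwrap();
    println!("{fixed:?}");
}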

earlopain said:
It's the opposite, actually: I removed the appeal_count field from the user object. I'm currently removing a bunch of stuff that isn't used in any way, appeals being one of them. Ideally you wouldn't notice, but since serde deserialization appears to be strict, it complains about the field suddenly not being there anymore. Just removing the field from the struct should solve this problem.

Unfortunately, that's not something a user of this specific program can do (other than, I guess, doing it via hex editing, somehow), so we're stuck waiting for the author of this one to poke in, or using other programs in the meantime.

Thank you for clarifying, though.

earlopain said:
It's the opposite, actually: I removed the appeal_count field from the user object. I'm currently removing a bunch of stuff that isn't used in any way, appeals being one of them. Ideally you wouldn't notice, but since serde deserialization appears to be strict, it complains about the field suddenly not being there anymore. Just removing the field from the struct should solve this problem.

Ah, that'd do it - without particular insight into Rust (which I'm entirely unfamiliar with) or the API changes, I could only make a best guess! Thank you for clarifying. Is there a place to see changes to the spec? I've seen some things on github, and the wiki page is also present, but I've not found anything that's particularly up-to-date.

Not really, no. I try to document any changes in the changelog, and if something changes in the API I'll update the wiki page. Generally, I won't make backwards-incompatible changes unless there is a very good reason to do so. In this case, it was a feature that was never used, and the only indicators it even existed were the appeal counter on the profile and the value in the API call.

Edit: I reported the problem on the project's issue tracker.


system_searcher said:
Hm. I seem to be suffering from a weird issue. Yesterday, the program worked damn near perfectly, but today, it suddenly started to crash on reading the tag file, even though I literally hadn't changed anything about it. And there doesn't seem to be any sort of log I can look at for further info.

EDIT: Managed to get it to output an error via the command line.

Should enter safe mode? [y/N]
N
Parsed tag file.
thread 'main' panicked at 'Json was unable to deserialize to entry!: Error("missing field 'appeal_count'", line: 0, column: 0)', src\e621\sender.rs:720:31

ONE MORE EDIT: Apparently, it's some sort of issue with the Login info, because it starts going when I remove my info from the login.json file.
The only potential guess I have about this is that I reached the limit of API queries? Not sure.

I apologize for not getting to this sooner. As I said in my last post, I'm in college now, so I've been tied up with tight schedules the last couple of months. However, I'm free this week and can focus on the issue you're experiencing. I also want to thank Earlopain for bringing this up to me on GitHub, as it sent me an email notifying me of the issue, and Nonmoumou for suggesting an alternative to use while I work on this update. There will be some big changes in it, so that's something you can look forward to as well once I'm finished fixing the bug. Till then, hang tight; it'll be fixed shortly!

earlopain said:
Not really, no. I try to document any changes in the changelog, and if something changes in the API I'll update the wiki page. Generally, I won't make backwards-incompatible changes unless there is a very good reason to do so. In this case, it was a feature that was never used, and the only indicators it even existed were the appeal counter on the profile and the value in the API call.

Edit: I reported the problem on the project's issue tracker.

Thanks for taking the time to open an issue on GitHub; I wouldn't have known about this otherwise. I will get it fixed so the downloader is usable once again.

1.7.0 Redesign Release

Well, the 1.7.0 release is out. There isn't much in this release in terms of overall feature changes. In this update, I wanted to focus primarily on the visuals. I got tired of the black-and-white console and decided that injecting some color would make a nice difference. I also wanted an easier time debugging this program, so I incorporated a logger. This logger not only makes things easier to read, but it also writes its output to a .log file for me to read. If you hit a bug in the future, all you have to do now is explain what's generally happening and then supply your log file; I can get a much deeper understanding of what's happening from there. So with that, I hope you enjoy this new release!
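
For the curious, here is a minimal sketch of how a combined terminal-and-file logger can be wired up with the log and simplelog crates listed in the notes below; the exact configuration (including the log file name) is illustrative, not the project's actual setup:

use log::{info, LevelFilter};
use simplelog::{ColorChoice, CombinedLogger, Config, TermLogger, TerminalMode, WriteLogger};
use std::fs::File;

fn main() {
    CombinedLogger::init(vec![
        // Colored output to the console...
        TermLogger::new(LevelFilter::Info, Config::default(), TerminalMode::Mixed, ColorChoice::Auto),
        // ...and everything mirrored into a log file for bug reports.
        WriteLogger::new(LevelFilter::Debug, Config::default(), File::create("e621_downloader.log").unwrap()),
    ])
    .unwrap();

    info!("Logger initialized.");
}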

Note: You need this new version if you wish to log in through the downloader, as a recent change to e621 causes past versions to crash.

Release Notes:
- A new logging system for the program with the ability to save the logs to a log file.
- All-new color and design changes to the program, making it much more pleasing to the eye.

Bug Fixes:
- Fixed an issue where the pool page number was off when saving pool images.
- Fixed a bug where pool pages were often saved out of order (see the naming sketch after these fixes).
- Fixed a crashing bug where the program would completely crash if the user logged in.
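
Regarding the two pool fixes above, a sketch of the zero-padded page-numbering idea is below; the helper is hypothetical, not the downloader's actual code:

// Zero-pad pool page numbers so files sort in reading order.
fn pool_file_name(pool_name: &str, page: usize, ext: &str) -> String {
    format!("{pool_name}_{page:05}.{ext}")
}

fn main() {
    assert_eq!(pool_file_name("my_pool", 2, "png"), "my_pool_00002.png");
}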

Misc:
- Did a massive code cleanup that made everything generally faster and easier to read through.
- Removed comments that no longer applied to the project code.
- Updated base64-url from 1.4.8 to 1.4.13
- Updated reqwest from 0.11.0 to 0.11.9
- Updated serde from 1.0.118 to 1.0.136
- Updated serde_json from 1.0.61 to 1.0.79
- Added dialoguer at version 0.10.0
- Added console at version 0.15.0
- Added log at version 0.4.14
- Added simplelog at version 0.11.2

Known Issues:
- indicatif will not be updated for some time, as it currently has a performance issue that causes a 98% performance decrease in active release runtime. The version of indicatif used in the project is the closest up-to-date version without this performance impact.

Full Changelog: 1.6.1...1.7.0

mcsib said:

1.7.0 Redesign Release

Well, the 1.7.0 release is out. There isn't much in this release in terms of overall feature changes. In this update, I wanted to focus primarily on the visuals. I got tired of the black-and-white console and decided that injecting some color would make a nice difference. I also wanted an easier time debugging this program, so I incorporated a logger. This logger not only makes things easier to read, but it also writes its output to a .log file for me to read. If you hit a bug in the future, all you have to do now is explain what's generally happening and then supply your log file; I can get a much deeper understanding of what's happening from there. So with that, I hope you enjoy this new release!

Note: You need this new version if you wish to log in through the downloader, as a recent change to e621 causes past versions to crash.

Release Notes:
- A new logging system for the program with the ability to save the logs to a log file.
- All-new color and design changes to the program, making it much more pleasing to the eye.

Bug Fixes:
- Fixed an issue where the pool page number was off when saving pool images.
- Fixed a bug where pool pages were often saved out of order.
- Fixed a crashing bug where the program would completely crash if the user logged in.

Misc:
- Did a massive code cleanup that made everything generally faster and easier to read through.
- Removed comments that no longer applied to the project code.
- Updated base64-url from 1.4.8 to 1.4.13
- Updated reqwest from 0.11.0 to 0.11.9
- Updated serde from 1.0.118 to 1.0.136
- Updated serde_json from 1.0.61 to 1.0.79
- Added dialoguer at version 0.10.0
- Added console at version 0.15.0
- Added log at version 0.4.14
- Added simplelog at version 0.11.2

Known Issues:
- indicatif will not be updated for some time, as it currently has a performance issue that causes a 98% performance decrease in active release runtime. The version of indicatif used in the project is the closest up-to-date version without this performance impact.

Full Changelog: 1.6.1...1.7.0

You forgot one thing: you've moved the downloads folder from "D:/[configured destination]/General Searches" to just "General Searches". Either that, or it's something weird on my end. At least that's what happened to me, so now I have to relocate all of my already-downloaded stuff if I want to update.

For context, my config file is currently:

{
  "downloadDirectory": "D:/e621/"
}


system_searcher said:
You forgot one thing: you've moved the downloads folder from "D:/[configured destination]/General Searches" to just "General Searches". Either that, or it's something weird on my end. At least that's what happened to me, so now I have to relocate all of my already-downloaded stuff if I want to update.

For context, my config file is currently:

{
  "downloadDirectory": "D:/e621/"
}

This is unintended. I don't typically modify my config, so this was a bug I didn't know about. Not to mention that I don't think I even changed the code for how the image is saved, so this is peculiar. I've made a new branch on GitHub and will make a hotfix for this shortly. Apologies for the problem.

Edit: I've double-checked the code I changed between 1.6.1 and 1.7.0 and saw no changes to the code for saving the downloaded images. On top of that, I also tested downloading with the same config and didn't experience any issues. I have a D: drive and downloaded all my images onto it without a problem, so I'm more or less confused about what's happening. I'm going to test downloading different categories of images and see if one of them saves differently (they shouldn't, but at this point, anything is possible with undefined behavior like this).

Edit 2: This issue is now fixed. It was a simple incident of confusion and nothing more. However, there will be a minor hotfix released here in a little bit that will incorporate a much more in-depth logging system with tons of information for me to comb through when an issue similar to this occurs.


1.7.0 Redesign Hotfix 1 Release

In light of an issue that popped up on my board yesterday (#45), I decided to take the time to make a hotfix release that adjusts the logging system to be far more informative. There were questions I had to ask the person in that issue thread that shouldn't have needed to be asked. With these adjustments to the logger, this will no longer be an issue.

With all of that said, the notes are provided below; I hope you have a good day. And if you experience an issue in the future, just share your log file along with a description of the problem, and that should provide more than enough information about what might be happening.

Hotfix Notes:
- The logger can now print low-level information about your system: OS family, architecture, executable information, and DLL information.
- The logger will now be able to print out downloader specific information, such as the downloader name, version, working directory, etc.
- The logger will also print login information, such as your username, API key, and whether you wish to download favorites. Note that your API key is not visible in the .log file; it is replaced with * characters and is merely there to show that the information was provided from login.json. This matters when a bug is related to login issues and I need to verify that the key and username are present, to ensure the user is logged in (see the sketch after these notes).
- The logger can now show image save paths and log collection information, such as collection names (typically search terms) and collection categories (pools, sets, general searches, etc.).
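
A minimal sketch of the kind of key masking described above, with a hypothetical helper name (this is not the downloader's actual code):

// Hypothetical redaction helper for illustration only.
fn mask_api_key(key: &str) -> String {
    // Emit one '*' per character so the log proves a key was supplied
    // without ever revealing it.
    "*".repeat(key.chars().count())
}

fn main() {
    assert_eq!(mask_api_key("abcd1234"), "********");
}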

Misc:
- The release version setup has now been heavily altered to accommodate the crate version.

Pull Requests Accepted:

  • 1.7.0 hotfix 1 by @McSib in #46

Full Changelog: 1.7.0...1.7.0-hotfix.1

A tutorial or a video of how to use this downloader would be nice. It just opens up a black window and I have no idea what to do.
edit: nvm. I actually figured it out


astr4l said:
how do i use this?

Optional:
----------
Install Windows Terminal from Microsoft Store
Open App
Press 'Ctrl' + ',' to open Settings
It should open on the 'Startup' page
Set 'Default terminal application' to 'Windows Terminal'

Thank me later :)

Compulsory:
--------------
Download ZIP
Extract
run program
hit enter
close program
edit 'tags.txt', only enter tags you want to search for (one query per line; see the example after this list)
edit 'config.json' and change the download directory as desired
run program again

voila
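
To make the tag-editing step concrete, a purely illustrative tags.txt could look like the lines below; the comments generated inside your own tag file are the authoritative reference for the real syntax:

# One search query per line (illustrative only).
wolf rating:safe
canine order:score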

an244 said:
Optional:
----------
Install Windows Terminal from Microsoft Store
Open App
Press 'Ctrl' + ',' to open Settings
It should open on the 'Startup' page
Set 'Default terminal application' to 'Windows Terminal'

Thank me later :)

Compulsory:
--------------
Download ZIP
Extract
run program
hit enter
close program
edit 'tags.txt', only enter tags you want to search for (one query per line)
edit 'config.json' and change the download directory as desired
run program again

voila

Thanks for writing such a helpful reply! I've been busy with college life and computer science coursework, so I haven't had the chance to even look at this forum or the comments posted on it. I might take this, add some more detail, and put it up in my GitHub README directly, since I've been told that would be a good idea. Generally, there is a lot I need to update about this forum post and the GitHub page, as they're beginning to age somewhat. I'm going to refrain from rambling, though. Thanks again; I'm happy you are all enjoying my little passion project. Have a great day.

1.7.1 Release

Time has passed since my last update, hasn't it? It has been a while, and life has taken months of my dev time on this project away. But I am back, and I have more refinements for the program. This update mostly contains bug fixes, cleanup of parts of the project I had overlooked, and general housekeeping to keep the program up to date. I am thinking about doing something new for the next update, but it will take time, so I will have to balance it carefully to ensure I can get it finished. Beyond that, enjoy the update, and resume using the program, now with fixes for some annoying bugs that have plagued a few users.

And yes, this version has a Linux build too. Arch Linux and Debian. ;3

Also, please let me know if there are any problems with those builds.

Release Notes:

- You can now set the naming convention of downloaded files (md5 or id). [#47]
- The search term will now shorten in the downloader progress bar if its length is greater than 25 characters.
- The directory named after the search term will now be shortened on Windows if the full path would exceed MAX_PATH (260 characters); see the sketch after these notes. [#53]
- Fixed an incredibly rare bug where the created_at field in API responses would be null.
- Removed an unused function that was overlooked.
- Updated the parser to report parsing errors.
- Fixed numerous security vulnerabilities tied to my dependencies (I recommend updating just for this reason).
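
Regarding the MAX_PATH change above, here is an illustrative sketch of the clamping idea; the helper name and budget math are hypothetical, not the downloader's actual code:

// Clamp a search-term directory name so the full path stays under
// Windows' historical MAX_PATH limit.
const MAX_PATH: usize = 260;

fn shorten_dir_name(parent: &str, term: &str) -> String {
    // Leave room for the parent path plus one separator.
    let budget = MAX_PATH.saturating_sub(parent.len() + 1);
    term.chars().take(budget).collect()
}

fn main() {
    let long_term = "a".repeat(300);
    let dir = shorten_dir_name("D:/e621/General Searches", &long_term);
    assert!("D:/e621/General Searches/".len() + dir.len() <= MAX_PATH);
}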

Bug Fixes:
- Located possible cause for bug that crashes the downloader at the very start of parsing or downloading (read more about this below). [#48]
- Fixed a bug where a tag including a colon crashed the downloader. [#51] [#52]
- Fixed a bug where the program suddenly crashed when grabbing a single explicit or questionable post while in safe mode. [#54]

Misc:
- Bumped regex, smallvec, and tokio to their latest versions to fix security vulnerabilities.
- Pinned indicatif at 0.16.2 until 0.17.0 releases without the performance impact.
- Updated dialoguer to 0.10.1.
- Updated log to 0.4.17.
- Updated simplelog to 0.12.0.
- Updated reqwest to 0.11.10.
- Updated serde to 1.0.137.
- Updated serde_json to 1.0.81.
- Enforced stricter field-visibility rules for structs.
- Created an automated build check to determine whether the build passes once commits are pushed.
- Fixed many Clippy warnings.
- Added debug printing for grabbing tags so that they are traceable as well.
- Created a rustfmt.yml so rustfmt formats to a unified style I work with (this may change in the future as I figure out new things to adjust).

Notice for users who use VPNs

I have had a recurring "bug" show up in my issues over the last couple of months, and these reports tend to crop up right after a new release, so I am adding a notice for those using VPNs to prevent this from becoming issue spam. Some users experience consistent crashes when parsing, obtaining the blacklist, or downloading [#48] [#50] [#27]. The issue is consistent, and each person so far has been using a VPN, with no other noticeable cause linked. After a multitude of testing, I have concluded that users on VPNs will occasionally have either e621 directly or Cloudflare prompt for a captcha, or a test for whether you are a robot. Since my program has no GUI, and no tangible way of handling that, it crashes immediately. I have looked for fixes to this issue and have yet to find anything. So, if you are using a VPN, be warned: this can happen. The current workaround is switching locations in the VPN (if you have that feature) or disabling the VPN altogether (if you have that option). I understand it is annoying and can be a pain, but this is all I can do until I come across a fix. Sorry for the inconvenience, and apologies if you are one of the users experiencing this issue.

What's Changed

  • 1.7.1 Dev Branch by @McSib in #49

Full Changelog: 1.7.0...1.7.1


So I'm no longer able to download pools with the downloader anymore. General tags are fine, but when I try any pool number, all I get is:

Could not convert entry to type "e621_downloader::e621::sender::entries::PoolEntry"!
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error("missing field `is_deleted`", line: 0, column: 0)', src\e621\sender\mod.rs:295:18

I'm assuming it's due to a change on the backend of the site and the downloader will need to be updated for it.

1.7.2 Release

This is more of a QOL update for the codebase and the bugs that were persisting inside of it. I took the time to clean things up, get everything working a little more smoothly, and fix critical, high-severity bugs that were bricking portions of the program. You can view this as more of a hotfix than a full-fledged release, and that will be evident in the release notes, but I feel this was a needed update, especially with how messy and hard the codebase was becoming to work on the more things got added to it. There is only a Windows build right now, but I'm looking to do a Linux build for this version (and future versions) too; I just need the time to sit down and set everything up for that. That might be a minute, but be patient. Enjoy this update, and have a smoother experience from here on.

I also want to note that I moved to a different TLS crate for making connections since the default one was causing "Checking if the connection is secure..." issues. This may or may not have fixed the issue, but so far it has in my testing. This may also help with the VPN issue, but that will have to be tested by those who use VPNs.
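
The post doesn't name the crates involved, but one common way to make this kind of TLS swap with reqwest is to turn off its default native-tls backend in favor of rustls; the Cargo.toml below is purely illustrative of that approach, not necessarily the project's actual change:

# Illustrative only: swap reqwest's default TLS backend for rustls.
[dependencies]
reqwest = { version = "0.11", default-features = false, features = ["blocking", "rustls-tls"] }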

What's Changed

General

  • Fixed a critical bug where is_deleted could not be found when downloading pools [#67, #60, #62].
  • Blacklists can now use score:.
  • Fixed a bug where tags ending in a space would attempt to create invalid directories, resulting in crashes [#59, #58, #68].

Misc

  • Rolled back the workflow's Ubuntu version to 18.04, as the latest causes issues with OpenSSL.
  • Documentation has been expanded and given more detail [#69 ... #91].
  • Tons of code cleanup, listed in the pull request (#65) below.
  • Turned Config and Login into singletons for easier use and access.
  • Imports are now organized across all files.
  • Grabber now has all the code related to grabbing posts broken off into multiple functions, increasing readability greatly.
  • Converted the programming workflow of the entire project to Gitflow for a more structured software development cycle (also reducing bugs from leaking into the main branch).

Pull Requests

  • Removed is_deleted from PoolEntry by @head-gardener in #61
  • Crate toml updates. by @McSib in #63
  • Update workflow for this branch and update crates. by @McSib in #64
  • Codebase cleanup by @McSib in #65

New Contributors

  • @head-gardener made their first contribution in #61

Full Changelog: 1.7.1...1.7.2

Is there a way to configure the downloader to download every image in a gallery as a PNG, even if the image is a JPG?

Downloader is refusing to work at all now. No matter what, I get:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src\e621\io\tag.rs:233:13

Redownloaded the latest version just to be sure, let it recreate the config and tag files, even tried single tag downloads and it does nothing but throw that error.

Hello, it seems as if I'm not able to get the program to work properly, and I'm not sure why. I'm running the program on Windows 11 and downloaded it from the zip file on GitHub, but to no avail. Is there something I'm doing wrong?

Updated so I could get Pools again, and feels like each time I have to do three extra downloads and take a crash-course in programming-- but Iiiiiiiiiiiiiiiiiiiiii gueeeeeeeeeeeeeessssssssssssssss I got it worrrrrrrking? Only in the debug folder do I see something familiar, and iiiiiiiiiiiiiiiiiiiiiiiitttttttt wwooooorrrrkkkkkkkkkkssssss? Embedded not where I want it, but fuck it, not gonna move shit around; just making a shortcut (on Windows 10) and calling it a day so I can just drag and drop to relocate it to my master "batch download porn goes here" folder (i.e., the previous version's folder, with the latest version I got).

QED, it looks like this right now: J:\e621_downloader_1.7.0-hotfix.1_release\e621_downloader-1.7.2\target\debug

ie: my main download folder is in that first folder, and the new one is in debug..... sooo..... yeah.... I was trying to get it to just replace the old version..... Oh well, fuck it, I was lucky enough to get it to work. See you next time.


achiever2501 said:
Is there a way to configure the downloader to download every image in a gallery as a PNG, even if the image is a JPG?

Currently, no. The goal of the program is to download the posts and keep them lossless (i.e., the program retains the exact data the image originally had). You can, however, use ffmpeg to convert the images to PNG afterward; an example is sketched below, and a quick Google search can take you further on Windows and Linux.
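
For example, something along these lines works with a stock ffmpeg install (a one-off conversion, then a Windows cmd-prompt loop for a whole folder):

ffmpeg -i image.jpg image.png
for %f in (*.jpg) do ffmpeg -i "%f" "%~nf.png"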

drakkenfyre said:
Downloader is refusing to work at all now. No matter what, I get:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src\e621\io\tag.rs:233:13

Redownloaded the latest version just to be sure, let it recreate the config and tag files, even tried single tag downloads and it does nothing but throw that error.

This is more than likely tied to another update to the JSON data received from the API. I will have to look into it and see what's happening. I've mostly been silent because life has been taking up a ton of my time, so this project has had to sit on the back burner for a period.

alexmitch69 said:
Hello, it seems as if I'm not able to get the program to work properly, and I'm not sure why. I'm running the program on Windows 11 and downloaded it from the zip file on GitHub, but to no avail. Is there something I'm doing wrong?

If your issue is related to an error message along the lines of thread "main" panicked at "called "Option::unwrap()" on a "None" value", src\e621\io\tag.rs:233:13 or similar, this is due to an API change that breaks how the received JSON data is deserialized into the program. This should be relatively quick to fix, as long as I can find the free time to make the change and recompile everything.

I want to add that I have been reading and keeping up to date with the issues for the downloader and the project. It's just that life picked up, and things have been taking up a lot of my free time. I do plan on updating and fixing these issues rather soon; I just want to ensure I have the free time to dedicate to it. There have been some new feature requests, some bugs (like the ones occurring here), and other things that need modifying. On top of this, I have been having to consider the direction of the project in terms of the language used (Rust), because a newly proposed trademark policy from the Rust Foundation has made Rust a little more uncertain and possibly more annoying to use, due to legal shenanigans I shouldn't even need to worry about. I digress. A new update will be in the works soon, once I have time to dedicate to it. I hope everyone can be a little more patient while I do this. Thank you for reading and understanding, and have a nice day.

mcsib said:
This is more than likely tied to an update on the JSON data received from the API again. I will have to look into this and see what's happening. I've primarily been silent because life taking up a ton of my time, so this project has had to sit on the wayside for a period.

Howdy. I went over all the tags I was trying, and at some point the downloader started working fine again. I found one errant extra space in a copy/pasted tag, and removing it seemed to fix things; but that space didn't exist in every tag I tried, so I'm not sure.

As for the updates, I'm glad to hear you're still maintaining the downloader, as it has become an invaluable tool for me compared to the old one I used. That one had a GUI but was clunky and slower, and when e621 changed the API, the author, for whatever reason, kept insisting that its erroring out had nothing to do with the API (it did); AFAIK, years later, he is STILL insisting the problem is on the users' end and refusing to change his program. So yeah, your program just stomped all over his, and I'm very, very fond of it.

That being said, though, real life does take priority so I completely understand. No worries.

I did have one question that I've been wondering about. There used to be a 1200 picture limit per tag, and sometimes I've seen it go slightly over that. I know the limit was put into place to keep scrapes from getting nuts, but is there some way to paginate it so we can download 1200 in one batch, then another 1200 in a second and so on for the same tag? Just curious.

drakkenfyre said:
Howdy. I went over all the tags I was trying, and at some point the downloader started working fine again. I found one errant extra space in a copy/pasted tag, and removing it seemed to fix things; but that space didn't exist in every tag I tried, so I'm not sure.

As for the updates, I'm glad to hear you're still maintaining the downloader, as it has become an invaluable tool for me compared to the old one I used. That one had a GUI but was clunky and slower, and when e621 changed the API, the author, for whatever reason, kept insisting that its erroring out had nothing to do with the API (it did); AFAIK, years later, he is STILL insisting the problem is on the users' end and refusing to change his program. So yeah, your program just stomped all over his, and I'm very, very fond of it.

That being said, though, real life does take priority so I completely understand. No worries.

I did have one question that I've been wondering about. There used to be a 1200 picture limit per tag, and sometimes I've seen it go slightly over that. I know the limit was put into place to keep scrapes from getting nuts, but is there some way to paginate it so we can download 1200 in one batch, then another 1200 in a second and so on for the same tag? Just curious.

Thanks for understanding the situation. There are a ton of bugs that have popped up recently, new feature requests I need to look through, and those "wonderful" dependency security advisories (have to love them). I can't give time estimates for when I can work on the project, but I will be able to soonish.

As for the question, I have considered making a toggle for the post limit in the downloader. The only reason I haven't is scraping. I don't know how good e621's servers are, but I'm reluctant to make that functionality easier, so as not to anger the e621 staff into emailing me about it (I mean that as a light joke). It more or less depends: if an admin or staff member gives me a thumbs up that I'm good to implement that toggle, I can add it. I just remember, way back when, other downloaders doing the same thing with other sites and immediately DDoSing the servers with hundreds of users downloading 20k+ images. That's something I'd rather not do. The main worry in this case wouldn't be a DDoS situation (most modern servers have gotten better about that) but throttling.
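
For reference, the e621 API itself pages with a limit of 320 posts per request and a before-id cursor, so a toggle like this would most likely loop on something shaped like the sketch below (illustrative request building only, not the downloader's actual code):

// Build the next page URL using e621's before-id paging ("page=b<id>").
fn next_page_url(tags: &str, lowest_id_seen: Option<u64>) -> String {
    let base = format!("https://e621.net/posts.json?tags={tags}&limit=320");
    match lowest_id_seen {
        Some(id) => format!("{base}&page=b{id}"), // posts with ids below `id`
        None => base,
    }
}

fn main() {
    println!("{}", next_page_url("wolf", None));
    println!("{}", next_page_url("wolf", Some(123_456)));
}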
