
Topic: e621 Downloader made in Rust that can download sets, pools, bulk posts, and single posts!

Posted under e621 Tools and Applications

I have returned. This program is my magnum opus. I've wanted software that could do these things for a long time, but could never find one that handled all of the options I wanted. So, I decided to make it myself.

Release Description

e621_downloader is a low-level, close-to-hardware program meant to download a large number of images at a decent pace. It can handle bulk posts, single posts, sets, and pools via a custom easy-to-read language that I made.

Having tested this software extensively, I managed to download 10,000 posts (more than 20 GB of data, with some animations exceeding 100 MB) in just two hours.

Download Link

FEATURES:
- A tag file that has a simple language with generated examples and comments.
- Searching for single posts, bulk posts, sets, and pools.
- Downloads images into their own folders, named after their search tag.
- The ability to log in and use your blacklist.

Notice for users on the new version (1.6.0 and newer)

If you are not logged into E6, a filter (almost like a global blacklist) is applied with the entry young -rating:s. This filter will nullify any posts that fall under it. If you wish to download posts of this kind, you must log in; otherwise, the filter stays in effect.

Features Planned

None at the moment.

Installation

All you have to do is decompress e621_downloader.7z and run the software (make sure e621_downloader.exe is in its own directory!). On first run, the program will print errors; don't worry, it is simply telling you that the config is missing and will generate it. It will also create the tag.txt file and then exit so you can edit the tag file. This file has examples and comments explaining how it works. Don't worry, it's easy :).

You don't have to worry about creating the download directory; it is handled for you. The download directory is configurable in the config.json file (please make sure the directory always ends with /!).
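For reference, reading the config with serde might look roughly like this; the downloadDirectory key is a guess for illustration, not necessarily the real field name in config.json:

```rust
use serde::Deserialize;

// Sketch of loading config.json; "downloadDirectory" is a hypothetical key
// used for illustration, not necessarily the program's actual field name.
#[derive(Deserialize, Debug)]
struct Config {
    #[serde(rename = "downloadDirectory")]
    download_directory: String, // e.g. "downloads/"
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string("config.json")?;
    let config: Config = serde_json::from_str(&raw)?;

    // The directory must end with a slash, as noted above.
    assert!(
        config.download_directory.ends_with('/'),
        "the download directory must end with /"
    );
    println!("Saving posts to {}", config.download_directory);
    Ok(())
}
```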

Why does the program grab only 1,280 posts with certain tags?

When a tag passes the limit of 1,500 posts, it is considered too large a collection for the software to download in full. To compensate for this hard limit, the program will download only 5 pages' worth of posts. Each page uses the highest post limit the e621/e926 servers allow, which is 320 posts per page, so it will grab at most 1,280 posts in total.

Something to keep in mind: depending on the type of tag, the program will either apply or ignore this limit. This is handled internally by categorizing each tag into one of two groups: General and Special.

A tag flagged as General forces the program to use the 1,280-post limit. The tag types that register under this flag are:
- General (basic tags such as fur, smiling, open_mouth).
- Copyright (any form of copyrighted media is assumed to be too large to download in full).
- Species (species tags hold nearly as many posts as general tags, so they are treated the same way).
- Character, in special cases (when a character has more than 1,500 posts tied to it, it is treated as a General tag to avoid long wait times while downloading).

Tags that register under the Special flag are:
- Artist (if you are grabbing an artist's work directly, you generally plan to archive all of it, so artist tags are always considered Special).
- Character (if the number of posts tied to the character is below 1,500, it is considered a Special tag and the program will download every post featuring that character).

This system is more complex than what I have explained so far, but in a basic sense, this is how the downloading function works with tags. These checks and grabs rely on a tight-knit relationship between the parser and the downloader: the parser counts the posts and sorts each tag into its correct category, while the downloader uses those tag types to grab and download posts correctly. A rough sketch of the categorization logic follows.
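Here is a simplified sketch of that General/Special split, with made-up names and rules; the real parser and grabber are more involved than this.

```rust
// A simplified sketch of the tag categorization described above; the names
// and rules here are illustrative, not the program's exact implementation.
const POST_LIMIT: u64 = 1_500; // tags above this are considered "too large"
const POSTS_PER_PAGE: u64 = 320; // highest limit the e621/e926 servers allow
const MAX_PAGES: u64 = 5; // 5 pages * 320 posts = 1,280 posts

#[derive(Debug, PartialEq)]
enum TagCategory {
    General, // capped at 1,280 posts
    Special, // downloaded in full
}

#[allow(dead_code)]
enum TagType {
    General,
    Copyright,
    Species,
    Character,
    Artist,
}

fn categorize(tag_type: TagType, post_count: u64) -> TagCategory {
    match tag_type {
        // Artists are archived in full; characters only if they are small enough.
        TagType::Artist => TagCategory::Special,
        TagType::Character if post_count < POST_LIMIT => TagCategory::Special,
        // Everything else (and large character tags) gets the 1,280-post cap.
        _ => TagCategory::General,
    }
}

fn max_posts(category: &TagCategory, post_count: u64) -> u64 {
    match category {
        TagCategory::Special => post_count,
        TagCategory::General => MAX_PAGES * POSTS_PER_PAGE, // 1,280
    }
}

fn main() {
    let category = categorize(TagType::Character, 900);
    assert_eq!(category, TagCategory::Special);
    println!("will grab up to {} posts", max_posts(&category, 900));
}
```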

Hopefully, this explains how and why the limit is there.

Go to my GitHub if you encounter any errors and wish to report them there, or leave a comment here if you run into unexpected crashes or thrown errors.

Updated

It's been a long time: about four months, to be exact, since I started working on this software. I'm happy it's done, because now it's in a good enough state for me to use myself. So enjoy, this is a very fun project.

1.0.0 Release notes:

- Added a new date system when downloading artists and characters.
- Ability to enter safe mode.
- Updated the tag parser to support pools, sets, single posts, and bulk posts via groups.
- A tag validator has been added to ensure the tags exist on e621.
- Rewrote the entire web connector.

Improvements and Bug Fixes

- Fixed a bug where the tag would have a trailing space when a comment was on the same line.
- Improved client speed, making it possible to download 20+ GB of images and animations in two hours.

Updated by anonymous

There was a small bug that I found and fixed. Nothing breaking, just the wrong order of code executing.

1.0.2 Hotfix Release Notes:

- Fixed the issue where the connector would ask which mode the user wants to run before actually creating the tag file.

Updated by anonymous

I have done another patch to the software after it abruptly crashed while downloading images. I can't tell if this is my crappy internet (which cut out shortly after the program crashed) or if it's a 503 response. So to be on the safe side for right now, I have added a check to see if the server responds with a 503. If it does, it will tell the user to contact me or to post on this forum so I can fix it.

1.0.3 Release Notes:

- Added a safety guard in case the server responds with a 503 response code while downloading images.
- Updated the user-agent version from 0.0.1 to 1.0.3.

Updated by anonymous

Been testing this all night and so far I really like it for one thing: it works! Which is surprisingly hard to find in one of these, or at least one that stays working for long...
So here's hoping it stays working, unlike the last 3 I have used over the years XD
Right now the only thing I can think of to recommend as a feature, though, is cross-comparing the posts to be downloaded between the [pools] and [artists] sections.
If I have a comic to be downloaded in [pools] and the artist who drew the comic in the [artist] section, I will end up downloading 2 copies.
Not a huge problem for me since I have other programs that find duplicate images and deal with that, but it would be one less step~

Anyways, you rock and happy coding XD

Updated by anonymous

Good program; it helped me download artists' collections and comics.
In summary, there is a small bug: when putting a species-related tag (shark, renamon) in the "general" option, it does not show the correct number of posts to download.

Updated by anonymous

umbreon13 said:

Good program; it helped me download artists' collections and comics.
In summary, there is a small bug: when putting a species-related tag (shark, renamon) in the "general" option, it does not show the correct number of posts to download.

The downloading of posts by species tag can be iffy in a lot of situations. For instance, the two examples you supplied are "shark" and "renamon". Renamon contains over 11,653 posts and the shark tag has over 18,272 posts. That is a large number of posts to download in a single run, but it is manageable. The problem comes in when dealing with other species tags. Mammal, for example, has over 1,308,848 posts tied to it. If I didn't have a check inside the program for this, a user could crash their computer or be denied access by the server for attempting to download over one million posts. Currently, there is a check for whether the tag has under 1,500 posts. If it does, the program will download all the images in one go; if not, it will grab only 5 pages of posts and download those. The best thing I can recommend right now is to use a single post's ID and grab images by that, or make your search more specific for what you want to batch download. This will ensure the program grabs most, or all, of what you need.

Updated by anonymous

Dythul said:
Been testing this all night and so far I really like it for one thing: it works! Which is surprisingly hard to find in one of these, or at least one that stays working for long...
So here's hoping it stays working, unlike the last 3 I have used over the years XD
Right now the only thing I can think of to recommend as a feature, though, is cross-comparing the posts to be downloaded between the [pools] and [artists] sections.
If I have a comic to be downloaded in [pools] and the artist who drew the comic in the [artist] section, I will end up downloading 2 copies.
Not a huge problem for me since I have other programs that find duplicate images and deal with that, but it would be one less step~

Anyways, you rock and happy coding XD

I would be happy to add a feature like this! But currently, there is something a little more important that needs to be handled. At this moment, aliases have no support and need to be added to the program. That way you won't have to write "my_little_pony" if you want posts with that tag; it's much easier to just type "mlp" and have the alias system kick in rather than typing the longer form of the tag.
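As a minimal sketch of the idea (with a hard-coded alias map standing in for e621's real alias data), the lookup would happen right before the search:

```rust
use std::collections::HashMap;

// Sketch only: resolve a short alias to its canonical tag before searching.
// The real feature would pull alias data from e621 rather than hard-code it.
fn resolve_alias<'a>(aliases: &'a HashMap<String, String>, tag: &'a str) -> &'a str {
    aliases.get(tag).map(String::as_str).unwrap_or(tag)
}

fn main() {
    let mut aliases = HashMap::new();
    aliases.insert("mlp".to_string(), "my_little_pony".to_string());

    assert_eq!(resolve_alias(&aliases, "mlp"), "my_little_pony");
    assert_eq!(resolve_alias(&aliases, "fur"), "fur"); // not an alias, left unchanged
}
```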

Updated by anonymous

I have released another update after starting work on an issue someone was having with the downloader on Linux. There is still another issue to handle, but what I have currently added is quite important for people who use tags that have Unicode characters in them.

1.1.3 Release Notes:

- Added Unicode character support to the parser (#23).
- Fixed a bug where the parser would see an empty tag at the end of the tag file (#23).

Updated by anonymous

Another quick update!

1.2.3 Release Notes:

- Added an alias checker, so now aliases can be used in the tag file.
- Added the ability for the user to create directories for every tag in the tag file when the images are being saved. This can be enabled and disabled in the config.json file.

Updated by anonymous

P.S. Looking at your list of things to do. You don't have to sign in to get your favorites, you can always access them with fav:username even when signed out. The only thing it would potentially give you access to would be the blacklist and post votes.

Updated by anonymous

KiraNoot said:
P.S. Looking at your list of things to do. You don't have to sign in to get your favorites, you can always access them with fav:username even when signed out. The only thing it would potentially give you access to would be the blacklist and post votes.

I ended up figuring that out when I tinkered with how favorites worked. Since you have posted here, I would love to ask a question!

How do I go about accessing the blacklist? Currently, the only way I see it working is logging in through a request, then accessing the created cookie "blacklisted_tags" and parsing it. Is there an API call I can do to access this easily?

I have looked through the entire API and haven't found anything on it, but I assume I'm either blind or I do have to go down the route I described above. My goal is to have the blacklist applied while my program is grabbing posts. My plan is to grab a bulk of posts, then check for rating, tags, and users on the client side. But I do want to make sure I'm not overcomplicating anything.
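In other words, something roughly like this (a sketch with stand-in types, not the eventual implementation):

```rust
// A minimal sketch of the client-side filtering plan described above:
// grab a bulk of posts, then drop anything that matches a blacklist entry
// by rating, tag, or uploader. The Post struct and rules are simplified
// stand-ins, not the program's real blacklist engine.
struct Post {
    tags: Vec<String>,
    rating: String, // "s", "q", or "e"
    uploader: String,
}

enum BlacklistEntry {
    Tag(String),
    Rating(String),
    User(String),
}

fn is_blacklisted(post: &Post, blacklist: &[BlacklistEntry]) -> bool {
    blacklist.iter().any(|entry| match entry {
        BlacklistEntry::Tag(tag) => post.tags.iter().any(|t| t == tag),
        BlacklistEntry::Rating(rating) => &post.rating == rating,
        BlacklistEntry::User(user) => &post.uploader == user,
    })
}

fn main() {
    let blacklist = vec![
        BlacklistEntry::Tag("gore".to_string()),
        BlacklistEntry::Rating("e".to_string()),
        BlacklistEntry::User("some_uploader".to_string()),
    ];
    let post = Post {
        tags: vec!["fur".to_string(), "smiling".to_string()],
        rating: "s".to_string(),
        uploader: "someone".to_string(),
    };
    println!("blacklisted: {}", is_blacklisted(&post, &blacklist));
}
```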

Updated by anonymous

The Blacklist Release is now out, enjoy the fun of using blacklists once more!

1.3.3 Release Notes:

- Users can now use their blacklist by logging in with username and API key.
- login.json is created for logging into the account; it has fields for username, API key, and a boolean (for whether or not you wish to download your favorites). A rough sketch of its shape follows this list.
- Users can now download their favorites by simply giving the username and making sure DownloadFavorites is true. An API key is not required if you wish to just download your favorites.
- DO NOT SHARE YOUR LOGIN FILE! The API key you supply is exactly like your password, so treat it as such.
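For illustration, login.json might deserialize into something like the struct below; DownloadFavorites is the only field name confirmed above, the other keys are assumptions:

```rust
use serde::Deserialize;

// Sketch of the login.json shape. "DownloadFavorites" is mentioned in the
// release notes; "Username" and "APIKey" are assumed names for illustration.
#[derive(Deserialize, Debug)]
struct Login {
    #[serde(rename = "Username")]
    username: String,
    #[serde(rename = "APIKey")]
    api_key: String,
    #[serde(rename = "DownloadFavorites")]
    download_favorites: bool,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string("login.json")?;
    let login: Login = serde_json::from_str(&raw)?;
    println!(
        "Logging in as {} (download favorites: {})",
        login.username, login.download_favorites
    );
    // Never print or share the API key; it is equivalent to a password.
    println!("API key provided: {}", !login.api_key.is_empty());
    Ok(())
}
```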

Updated by anonymous

The Turbo Release is here! This brings a lot of code optimization and a ton of improvement, on both the user's end and the programmer's end. There are also minor features added that I'm happy to have, since I use this program myself. I think some are good little quality-of-life things you will enjoy. Have fun!

1.4.3 Release Notes:

- General optimization brought throughout the entire program.
- Removed the date system as it is no longer needed with how the program works with tags.
- Images are now categorized when saved.

Updated by anonymous

The Client Release is done. This implements a lot of general optimizations and a huge change behind the scenes. Without getting too tech-savvy with it, this mostly changes how the clients are handled when making API calls. Before, a client would simply be created when needed, and the user-agent header would be applied manually each time. Now, there is a single client, and every request passes through a call that adds the user-agent header. This will improve the overall security of the program, making sure that every call is passing through all the checks safely. Besides that, there are still other optimizations that make this release even quicker. So enjoy the release!
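A rough sketch of that single-client idea, with assumed names (this is not the program's actual RequestSender API):

```rust
use reqwest::blocking::{Client, Response};

// Sketch of the single-client pattern described above: one Client is built
// once, and every request is funneled through a helper that attaches the
// user-agent header. The struct, method, and user-agent string here are
// assumptions for illustration.
const USER_AGENT: &str = "e621_downloader/1.5.3 (example user agent)";

struct RequestSender {
    client: Client,
}

impl RequestSender {
    fn new() -> Self {
        RequestSender {
            client: Client::new(),
        }
    }

    /// Every GET goes through here, so the user-agent is never forgotten.
    fn get(&self, url: &str) -> reqwest::Result<Response> {
        self.client
            .get(url)
            .header(reqwest::header::USER_AGENT, USER_AGENT)
            .send()
    }
}

fn main() -> reqwest::Result<()> {
    let sender = RequestSender::new();
    let response = sender.get("https://e621.net/posts.json?limit=1")?;
    println!("status: {}", response.status());
    Ok(())
}
```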

1.5.3 Release Notes:

- Major update to the way requests are sent.
- Organized and improved more code, and removed the ability to create or not create directories when downloading, as it seems to be an unneeded feature.
- Reversed the posts vector right before starting the download process so that the oldest images are downloaded first and the newest last. This was done for sorting purposes.

Tech-savvy Release Notes:

- Major update to the way requests are sent, relying on a new class called RequestSender rather than Client directly.
- Added a trait for the blacklist and tag parser.
- Added a HashMap macro that's similar to vec! (see the sketch after this list).
- Renamed EsixWebConnector to WebConnector.
- Updated the grabber to only get what is needed. The memory usage is not lessened a great deal.
- More renames, minor optimizations, and formatting.
- Updated the parser and how the tags are tokenized.
- Huge update to all documentation along with minor optimizations throughout the program.
- Optimized imports and formatted code.
- A logical error ended up reversing how the blacklist saw and blacklisted posts; this is now fixed.
- Removed Chrono from cargo.
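For the curious, a vec!-style HashMap macro generally looks something like this (a generic sketch, not necessarily the exact macro used in the program):

```rust
// A minimal vec!-style HashMap macro; "hashmap!" is a hypothetical name here
// and the real macro in the program may differ.
macro_rules! hashmap {
    ($($key:expr => $value:expr),* $(,)?) => {{
        let mut map = std::collections::HashMap::new();
        $(map.insert($key, $value);)*
        map
    }};
}

fn main() {
    let ratings = hashmap! {
        "s" => "safe",
        "q" => "questionable",
        "e" => "explicit",
    };
    assert_eq!(ratings["s"], "safe");
}
```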

Updated by anonymous

A hotfix for the Client Release, felt the need to make this change as I have already written extensively on why it's there.

1.5.4 Release Notes:

- Fixed a bug where the post limit for the grabber went up to 1,600 instead of 1,280.

Updated by anonymous

This is a minor update that brings a lot of visual improvement, along with faster image downloading. I liked how this turned out, so enjoy~

1.5.5 Progress Release Notes:

- Replaced the progress bar with a better one. This one can now show the total file size you need to download and how far along you are (a rough sketch of the idea follows this list).
- The prompt for when you want to enter safe mode is now handled through a better system.
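Something like the following, using indicatif with made-up sizes and a made-up template (the program's actual bar may differ):

```rust
use indicatif::{ProgressBar, ProgressStyle};
use std::{thread, time::Duration};

// Sketch of a byte-based progress bar with indicatif; the sizes and the
// template string here are assumptions for illustration.
fn main() {
    let total_bytes: u64 = 25 * 1024 * 1024; // pretend we must download 25 MB
    let bar = ProgressBar::new(total_bytes);
    bar.set_style(
        ProgressStyle::default_bar()
            .template("[{elapsed_precise}] {bytes}/{total_bytes} {wide_bar}"),
    );

    // Simulate downloading in 1 MB chunks.
    let chunk: u64 = 1024 * 1024;
    let mut downloaded = 0;
    while downloaded < total_bytes {
        thread::sleep(Duration::from_millis(50));
        downloaded += chunk;
        bar.inc(chunk);
    }
    bar.finish_with_message("done");
}
```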

Tech-savvy Release Notes:

- Fixed bug where the images became directories.
- Renamed function get_length_posts to get_file_size_from_posts.
- Made the download_set function much more readable.
- Reversed the array when grabbing posts instead of reversing it right before downloading.
- Added a new progress bar that displays the number of bytes to be downloaded instead of how many images.
- Small variable rename in FlagWorker.
- Fixed a bug where the flags raised would pass the margin because the artist tags were being flagged twice.
- The Grabber now manages the Blacklist completely, no longer relying on the WebConnector for the information.
- The Blacklist is no longer created and parsed every time a new page of posts is grabbed.

Updated by anonymous

This release is pretty small and contains minor updates with one big change, which concerns pools. Pools had an issue where only one page would be downloaded; this was admittedly due to me reading the API reference incorrectly, and it is now fixed. I have also numbered the posts that are downloaded for pools, so now you can read them like an actual comic. So enjoy~!

1.5.6 Pools Release Notes

- Pools are now downloaded in their entirety instead of just one page.
- Pools' posts are now numbered when downloading, so reading them is much easier (see the sketch below).
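The numbering itself is simple. A sketch of the idea, where the padding width and file naming are assumptions:

```rust
// Sketch of numbering pool pages as they are saved so they sort like a
// comic; the padding width and naming scheme here are assumptions.
fn numbered_name(index: usize, post_id: u64, extension: &str) -> String {
    // Zero-pad so "10" doesn't sort before "2" in file browsers.
    format!("{:03}_{}.{}", index + 1, post_id, extension)
}

fn main() {
    let pool_posts = [(1_234_567_u64, "jpg"), (1_234_601, "png"), (1_234_650, "jpg")];
    for (i, (id, ext)) in pool_posts.iter().enumerate() {
        println!("{}", numbered_name(i, *id, ext));
    }
    // Prints: 001_1234567.jpg, 002_1234601.png, 003_1234650.jpg
}
```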

Tech-savvy Release Notes

- Better formatting of tags.txt.
- When a duplicate is skipped, it will now increment the file_size before continuing to the next post to download.
- Removed the "Favorites" category when downloading the user's favorites.
- Documented code I forgot to document.
- Updated reqwest from 0.9.24 to 0.10.1.
- Updated serde from 1.0.103 to 1.0.104.
- The progress bar now fills the line it's on.
- Removed README.txt

Updated by anonymous

Hello everyone! As I'm sure you have noticed, there was a new update to the e621 website, and with that update came a lot of new and exciting changes. One of these changes is the responses we get when we call the API. My program, if you use it, will crash now. The reason is that the API calls I make are now invalid, because the responses they expect no longer exist. So what I have to do is develop a new update that can handle the new responses I receive from the e621 platform. Hopefully, this makes sense. If not, don't worry; the update that will fix my program shouldn't take too long. It's all just a matter of how quickly I can write my code and how much free time I have to focus on it (which is a lot, because I am currently sick).

So stay tuned, there should be an update for you all soon!

Update

So... I have taken a break from this program for a while and have been working on life, which includes my social life, job, and family. While doing this, I decided that it was best to put several things on hold, and this ended up being one of them. But after upgrading my computer with the power of Ryzen, playing games to my heart's content, and becoming a manager at my job, I'm back. I've been working non-stop on making the much-needed changes for this program to work with the new API, and have just finished a fully working version, but there are a few things that need to be worked on a little more.

Firstly, there are now deprecated/dead parts of my code: sections that are no longer used. This is something I ended up doing since I was stepping through my code quickly to get the program into a working state once more. For me to feel right about releasing the next version, I want to remove this code and make the program a little faster.

Secondly, there are bugs and issues that I have noticed recently. One of the large ones, which I'm assuming was there before the API update, is that the blacklist in my program isn't 100% accurate. I'm not sure why, but it seems to be the case. I'm taking a deep look at it, since it was one of the things I had to change for the new API.

Another, almost more worrying, issue with this API is that some posts that are grabbed... well... just don't have a file URL. I'm not sure if these are deleted posts; the API has a flags section with a boolean for whether or not the post is deleted, and in every case it showed the post wasn't deleted, just that the file URL is null. It's confusing to me, but my fix is to filter these out of the posts that will be downloaded. I want to add that you are notified when a post like this is filtered, so don't worry.
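The filtering step is straightforward. A sketch with a stand-in Post type (not the program's real one):

```rust
// Sketch of filtering out posts whose file URL is null before downloading;
// the Post struct here is a stand-in, not the program's actual type.
#[derive(Debug)]
struct Post {
    id: u64,
    file_url: Option<String>,
}

fn filter_downloadable(posts: Vec<Post>) -> Vec<Post> {
    posts
        .into_iter()
        .filter(|post| {
            if post.file_url.is_none() {
                // Let the user know why a post was skipped instead of crashing.
                println!("Skipping post {}: no file URL (filtered or deleted).", post.id);
                false
            } else {
                true
            }
        })
        .collect()
}

fn main() {
    let posts = vec![
        Post { id: 1, file_url: Some("https://static1.e621.net/data/example.png".into()) },
        Post { id: 2, file_url: None },
    ];
    let downloadable = filter_downloadable(posts);
    println!("{} post(s) left to download", downloadable.len());
}
```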

The blacklist also had another change. How the blacklist on your account was obtained, at least back then, was by accessing a certain URL with your login; doing this allowed me to grab your blacklist string, which contained everything you had blacklisted and nothing else. I had built a parser and a flagging system that emulated e621's version to a T. At the time, it had 100% accuracy across the numerous tests I ran. This API change moved the blacklist string I was requesting to a more general area: your user page. To actually get to this page, I need your user ID, so for now it has to be entered manually in the same file that holds your API key and username. I plan on simplifying this in the future by sending one request to the API that searches for your username and returns your user page. This request won't show me your blacklist at first, but it will give me your ID, so you won't have to input it. After obtaining it, the program will then log in, grab your blacklist, and log out. By log out, I mean it doesn't save the cookie that tells e621 that my client is using your account anymore xp

So yeah, there are a lot of changes to my program, but with my new computer, development has definitely become easier. The fact that I can use a debug version of my program and still download at 5 MB/s is astonishing; I didn't think it could download so quickly in debug mode. I'm also currently thinking about releasing a debug build for people to use, but the only issue is that users of the software would need to understand it is in development and is highly likely to crash from small things, especially with half of the API I made torn apart. And yes, I have an API that communicates with e621's API; they're buddies, or at least I try to make them buddies, since there are quite a lot of safeguards built into it to make sure it doesn't cross a boundary with e621. Beyond that, though, this is the current update. Thanks for listening to my rambling. A build should be out soon; I've stated before that I use my own program, and I'm getting to a point where I need it again, so I'm doing this for current users and me. Be patient, I should have more info soon. Bye!~

File URL is null because it's blocked by the global blacklist; you need to authenticate.
And you can get the profile ID from any of the profile pages.

aobird said:
File URL is null because it's blocked by the global blacklist; you need to authenticate.

If this is the case, then I'm going to go through and disable my blacklist from my post grabbing and then enable cookies on my client. I'll test it internally for a bit and see if certain blacklisted tags will actually make the file URL null. Back then, the API ignored the blacklist entirely; it would show a post whether you had your blacklist enabled or not, which is what resulted in me creating my own blacklist filtering system. I'll play around with it for a bit and see what comes of it. If this ends up being the case, then I can remove one of my systems and give my program a little more room to process posts quicker.

I will probably also look into finding out how to remove the global blacklist. I know the cookie that stores it, but I'd rather not tell my client to access a cookie manually and then delete it. I feel like that would be a major workaround for something that could be solved a little more simply. But I'll have to look into it.

Thanks for telling me what might be causing it; I'll definitely be looking into it, since it wasn't my first thought when I was encountering this problem. I wondered whether the blacklist on the site would actually filter through with the new API now. I was expecting it to just straight up delete posts from the list that I would grab, but I guess they're just nullifying the file URL and considering the post deleted.

Updated

aobird said:
And you can get the profile ID from any of the profile pages.

I did talk about this in my main post; I guess I didn't clarify it very well, as I was half asleep when I wrote it. But yeah, I know how to actually grab the ID; it's just that for right now, you have to put it in manually. Until I can get other parts of my API sorted out or deleted, it's going to have to be this way for a moment. The API I built to communicate with e621's API is currently torn apart and being worked on. My API had a lot of things I used to grab certain data relatively quickly, and it was also highly optimized; because of this, digging through optimized code is a lot of work, especially when I'm just adding new, unoptimized code. A better way to look at it: I'm basically adding a new room to a house, and then going through and destroying other rooms that don't need to be there anymore.

Something else I didn't clarify very well is that I was trying to get the program into a working state as quickly as possible. There were a lot of errors in the code, and all of a sudden it was crashing every time you started it, so I was trying to work around that and get my program into a working state again so I could actively test and improve it. It's currently in that state; it's just that right now there are a lot of issues and small things that will crash my software in a heartbeat. So I'm currently trying to get it leveled out and more stable again. Once it's balanced and I have everything sorted out, I can start production on a more stable build, then release it and have my software open for the public again. For right now, though, the only thing public is my code.

Updated

mcsib said:
If this is the case, then I'm going to go through and disable my blacklist from my post grabbing and then enable cookies on my client. I'll test it internally for a bit and see if certain blacklisted tags will actually make the file URL null. Back then, the API ignored the blacklist entirely; it would show a post whether you had your blacklist enabled or not, which is what resulted in me creating my own blacklist filtering system. I'll play around with it for a bit and see what comes of it. If this ends up being the case, then I can remove one of my systems and give my program a little more room to process posts quicker.

I will probably also look into finding out how to remove the global blacklist. I know the cookie that stores it, but I'd rather not tell my client to access a cookie manually and then delete it. I feel like that would be a major workaround for something that could be solved a little more simply. But I'll have to look into it.

Thanks for telling me what might be causing it; I'll definitely be looking into it, since it wasn't my first thought when I was encountering this problem. I wondered whether the blacklist on the site would actually filter through with the new API now. I was expecting it to just straight up delete posts from the list that I would grab, but I guess they're just nullifying the file URL and considering the post deleted.

What AoBird means by this is that it's behind the login filter. The site doesn't make distinctions between API usage and requests through the normal HTML site. The only entry on this blacklist is currently young -rating:s, so you have to support letting users authenticate with their username/API key if they want to download content of this nature.

kiranoot said:
What AoBird means by this is that it's behind the login filter. The site doesn't make distinctions between API usage and requests through the normal HTML site. The only entry on this blacklist is currently young -rating:s, so you have to support letting users authenticate with their username/API key if they want to download content of this nature.

I recently pushed a new commit to my GitHub (linked here) which allows authentication when grabbing posts through HTTP Basic Auth, as recommended on the wiki. This removes that global blacklist and allows post grabbing without worrying about the filter you mentioned. I do have a couple of questions, though, since you are here. Is there a way to disable this blacklist for users who aren't willing to authenticate? If there isn't, I will state that in the release notes in case users wonder why they can't grab certain posts.

I noticed there is an API limit on my user page too, and I wanted to know what it actually means, since the wiki page for the API doesn't mention it anywhere. Does it relate to how many calls I can make with my account authenticated through programs, or is it something entirely different?

Beyond that, I also noticed that with authentication the global blacklist is removed, but the user's blacklist isn't applied to the posts that I'm grabbing. I was able to grab posts that had tags I blacklisted, and those blacklisted tags were ignored, so I'm assuming I'll still have to use my client-side blacklist system unless there is a way to apply the user's blacklist server-side.
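For reference, the Basic Auth call with reqwest can look roughly like this (a sketch, not the exact code from the commit mentioned above):

```rust
use reqwest::blocking::Client;

// Sketch of authenticating an e621 API call with HTTP Basic Auth
// (username + API key) using reqwest's blocking client; the user-agent
// string and query here are placeholders for illustration.
fn main() -> reqwest::Result<()> {
    let username = "your_username";
    let api_key = "your_api_key"; // treat this like a password!

    let client = Client::new();
    let response = client
        .get("https://e621.net/posts.json?limit=5&tags=fur")
        .header(reqwest::header::USER_AGENT, "e621_downloader/1.6.0 (example)")
        .basic_auth(username, Some(api_key))
        .send()?;

    println!("status: {}", response.status());
    Ok(())
}
```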

mcsib said:
I recently pushed a new commit to my GitHub (linked here) which allows authentication when grabbing posts through HTTP Basic Auth, as recommended on the wiki. This removes that global blacklist and allows post grabbing without worrying about the filter you mentioned. I do have a couple of questions, though, since you are here. Is there a way to disable this blacklist for users who aren't willing to authenticate? If there isn't, I will state that in the release notes in case users wonder why they can't grab certain posts.

I noticed there is an API limit on my user page too, and I wanted to know what it actually means, since the wiki page for the API doesn't mention it anywhere. Does it relate to how many calls I can make with my account authenticated through programs, or is it something entirely different?

Beyond that, I also noticed that with authentication the global blacklist is removed, but the user's blacklist isn't applied to the posts that I'm grabbing. I was able to grab posts that had tags I blacklisted, and those blacklisted tags were ignored, so I'm assuming I'll still have to use my client-side blacklist system unless there is a way to apply the user's blacklist server-side.

Bypass login filter without logging in: you can manually construct URLs, but I'm not encouraging this method in applications that can allow authentication, as it may break in the future if things change. It's only really suited for applications that can't allow user auth. The login filter is there to force consent to view that type of content, and acceptance of the legalities for the region.

The API limit on the profile page goes down based on changes made and is a window-based throttle. Unless you're doing really weird things, it should be impossible to hit, as it is slightly higher than the hard rate limit on requests. You get a throttle response if you do manage to hit it. Read actions usually don't cause the number to go down.

Think of it as closer to a login filter for images. It is distinct from the blacklisting system, which is still client-side. So your client-side code is still needed.

kiranoot said:
Bypass login filter without logging in: you can manually construct URLs, but I'm not encouraging this method in applications that can allow authentication, as it may break in the future if things change. It's only really suited for applications that can't allow user auth. The login filter is there to force consent to view that type of content, and acceptance of the legalities for the region.

The API limit on the profile page goes down based on changes made and is a window-based throttle. Unless you're doing really weird things, it should be impossible to hit, as it is slightly higher than the hard rate limit on requests. You get a throttle response if you do manage to hit it. Read actions usually don't cause the number to go down.

Think of it as closer to a login filter for images. It is distinct from the blacklisting system, which is still client-side. So your client-side code is still needed.

Okay, that makes a lot of sense to me. I don't plan on bypassing the system like that; unless there's an API call or a query parameter to tell the server not to apply the filter, I'll manage without it. Mainly because it's also understandable from a legality standpoint. Should I implement something in my program about the legalities of the login filter, or explain what exactly it is filtering, or is that something I shouldn't really worry about?

I'm at a point now where I can take the last steps toward cleaning up my code and getting it stable again, so I'm in a good spot to do it if I have to.

Release 1.6.0-Indev.2

This is it, finally. After numerous rounds of testing, the build is finally available. While this isn't a release build, as indicated by the indev label in the version, it is a functional and usable build. It can use the tags, download images, and use the blacklist (after I fought through updating it... after finding a huge memory leak...) and basically handle everything you need. There was a lot of change to my code, and it is ugly at the moment, but that is something for me to worry about. I think I'm finally at a point in development where I can focus on stabilizing this version and getting it functioning with release code. I'm proud of how quickly I managed to get this together, although there was a lot of slicing and cutting of quality that I had to do.

For those curious, I never had to use #[deprecated(since = "...", note = "...")] until this build. I was moving so fast through it that I had to start utilizing a lot of features to get this build off the ground, mainly to get it to even run so I could set it up, but it's all done now. There are over 23 warnings at the moment, many functions are not documented, and there is a ton of commented-out code as well as mismatched naming. So, with all of this, I'll have to dedicate a day to getting it all cleaned up and looking nice. Once I have everything set up and finished, I can begin focusing on other things. The next release will have quite a nice changelog, to say the least. With that, though, this is all I have to say for now. I hope you all enjoy the build, and I'm sorry if it runs slowly for you; this is only a debug build, which means speeds will be poor, but that will be fixed when the main release is out. Till then, see you all around.

mcsib said:
Okay, that makes a lot of sense to me. I don't plan on bypassing the system like that; unless there's an API call or a query parameter to tell the server not to apply the filter, I'll manage without it. Mainly because it's also understandable from a legality standpoint. Should I implement something in my program about the legalities of the login filter, or explain what exactly it is filtering, or is that something I shouldn't really worry about?

I'm at a point now where I can take the last steps toward cleaning up my code and getting it stable again, so I'm in a good spot to do it if I have to.

You might make a note in the readme that the login filter exists, what it contains, and, if that content is desired, how to obtain an API token and log in using the config file. That way you have obvious documentation about it for those who want to use the app for that.

Release 1.6.0 In the Future

The 1.6.0 In the Future release is here, and there is a lot that I altered. The list below contains everything that changed, so be prepared for a very nerdy list. Have fun~

Also, before I fully get to the release notes, I must clarify that there is now a filter on e621. If you are not logged into E6, a filter (almost like a global blacklist) is applied with the entry young -rating:s. This filter will nullify any posts that fall under it. If you wish to download posts of this kind, you must log in; otherwise, the filter stays in effect.

Release Notes:
- The downloader now works with the new API on e621.
- The blacklist system has been updated and reworked to provide much greater accuracy and performance.
- The login system uses a much safer method to authenticate (the safer method was added with the new version of e621).
- The server now applies a new filtering system that nullifies the file URL of posts containing the tag young whose rating is not set to safe.
- Users no longer need their ID in the login.json file to log into their accounts (this is a specific note for those who used the indev release of 1.6.0).
- The flagging system for the blacklist on the downloader now supports negated ratings, users, and IDs.

Bug fixes:
- Fixed a massive memory leak in the blacklist system.
- Fixed an issue where the program would crash if the user's blacklist was empty.
- Fixed an issue where deleted posts and filtered posts (part of the new API) would crash the downloader while it was grabbing and downloading posts.
- Fixed a bug where the user was required to log in, rather than logging in being an option.
- Fixed a bug where the blacklist would completely ignore blacklisted users.

Misc:
- Added a new dependency, base64-url, at version 1.2.0, for the new authentication system.
- Updated dialoguer from 0.5.0 to 0.6.2.
- Updated indicatif from 0.13.0 to 0.15.0.
- Updated reqwest from 0.10.1 to 0.10.6.
- Updated serde from 1.0.104 to 1.0.112.
- Updated serde_json from 1.0.44 to 1.0.55.
- Updated dialoguer from 0.1.6 to 0.1.8.

Updated
