Topic: Translating favs into upvotes

Posted under General

I did it. In 14 months I added around 20k to my favorites list and capped out at 80k.

Upvotes don't have a limit, so I want to take advantage of that by purging my favorites list and turning it all into upvotes. Is there any way this can be easily done? I've got 268 pages of upvotes and 750 pages of favorites. That's a difference of 482 pages, and I'm not exactly sold on the idea of going back through everything by hand.

Actually, now that I think of it, not everything in my upvotes is something I'd return to; some of it was just an acknowledgment of good work. Shit. /o\

Did a search and didn't find much of anything, aside from others asking this same question after last year's site update. I'd stick this in the tools section, but it's more of a tool request of sorts.

Try putting them into sets. You can have multiple sets and organize your favorites into them.

Edit: The most posts in a set right now seems to be well over a million, which is quite a lot more than your favorites limit.

Updated

Pup

Privileged

furrin_gok said:
Try putting them into sets. You can have multiple sets and organize your favorites into them.

Edit: The most posts in a set right now seems to be well over a million, which is quite a lot more than your favorites limit.

Sets are limited to 10k per set since the update, and I'm pretty sure there's a limit on how many sets you can have.

mdf said:
I did it. In 14 months I added around 20k to my favorites list and capped out at 80k.

Upvotes don't have a limit, so I want to take advantage of that by purging my favorites list and turning it all into upvotes. Is there any way this can be easily done because I got 268 pages of upvotes and 750 pages of favorites.
[..]

I could make a Python script to do it pretty easily, though it'd be awkward for you to run if you've never done anything with Python before.

Quick Edit:
One downside of using votes though is that they don't get transferred to new posts when old ones get deleted as inferior versions, like faves do.

Updated

furrin_gok said:
Try putting them into sets. You can have multiple sets and organize your favorites into them.

One of my initial thoughts, yes, but why bother with organizing when everything is tagged? Before I found E6 I was quite the save and sort extraordinaire since artist set tags could be haphazard and all over the place. Community managed tags on the other hand proved amazingly useful, so I stopped. Just actually got back to saving again, only because artists sometimes like to nuke their galleries.

pup said:
Quick Edit:
One downside of using votes though is that they don't get transferred to new posts when old ones get deleted as inferior versions, like faves do.

Figured there'd be a downside with everything... Ugh, not the cherry on top I wanted.

C'mon, get it together man, these are just lewds, it's not the end of the world, there are more important things in life.

... These are lewds I like though... :J

Haven't done much with python outside some introductory courses I took waaaay back in high school. Won't say no to the offer, but I was just hoping that after over a year now there would have been a solution. Suppose I'll take what I can get and put a smile on my face. C:

pup said:
Sets are limited to 10k per set since the update, and I'm pretty sure there's a limit on how many sets you can have.

Weird that the one is over a million then. You can't search for multiple sets at the same time through a fuzzy search either, so even if the amount of sets is unlimited it doesn't work great.

furrin_gok said:
Weird that the one is over a million then. You can't search for multiple sets at the same time through a fuzzy search either, so even if the amount of sets is unlimited it doesn't work great.

There are users with over 80,000 favourites too. The new limits weren't retroactively applied.

Pup

Privileged

mdf said:
Haven't done much with python outside some introductory courses I took waaaay back in high school. Won't say no to the offer, but I was just hoping that after over a year now there would have been a solution. Suppose I'll take what I can get and put a smile on my face. C:

This should work on both Windows and Linux; when you run it, it'll make an "apikey.txt" for you to add your details to.

To get an API key you'll need to go to https://e621.net/users/home and "Manage API access", which'll ask you for a password and give you your API key.

For the script you'll need Python 3 and the Requests library, then should be able to run it through command prompt/powershell.

Since it only un-faves a post after it's added an upvote, it should be fine to stop it at any point with Ctrl + C and continue later. For 80k posts it'll probably take a while.

E6FavesToUpvotes.py
import requests
from requests.auth import HTTPBasicAuth
import time
import os

# File paths
thisFolder = os.path.dirname(os.path.realpath(__file__))
apikeyFilePath = thisFolder + os.sep + 'apikey.txt'

# API URL paths
favURL='https://e621.net/favorites.json'
postsURL='https://e621.net/posts.json'
postVoteURL='https://e621.net/posts/{}/votes.json'
favIDURL='https://e621.net/favorites/{}.json'

# User-Agent header
# Add "edited by <username>" if you edit this script
headers = {'User-Agent':'Faves_to_upvotes/1.1 (by Pup on E621)'}

# Params
getFavParams = {'limit':320}
voteParams = {'score':1, 'no_unvote':"true"}

# Rate limiting
rateLimit = 1
lastTimeValue = time.time()
def rateLimitThread():
    global lastTimeValue
    elapsedTime = time.time() - lastTimeValue
    if elapsedTime <= rateLimit:
        time.sleep(rateLimit-elapsedTime)
    lastTimeValue = time.time()


# Load API key from file
try:
    with open(apikeyFilePath) as apiFile:
        apiTxt = apiFile.read().splitlines()
except FileNotFoundError:
    with open(apikeyFilePath, 'a') as apiFile:
        apiFile.write("username=" + os.linesep + "api_key=")
    print("apikey.txt created - please add your username and api key and restart the script.")
    exit()
    
apiUsername = apiTxt[0].split('=')[1].strip().replace(" ", "_")
apiKey = apiTxt[1].split('=')[1].strip()

getFavParams['tags'] = 'fav:'+apiUsername

session = requests.Session()

lowestID = -1
stop = False
while stop == False:

    # Get 320 favourited posts
    rateLimitThread()
    response = session.get(postsURL, headers=headers, params=getFavParams, auth=HTTPBasicAuth(apiUsername, apiKey))
    
    if response.status_code != 200:
        print("error - non-200 status code: " + str(response.status_code))
        print(response.json())
        exit()

    else:
        returnedJSON = response.json()
        
        if len(returnedJSON['posts']) < 320:
            stop = True

        for post in returnedJSON['posts']:
        
            if post['id'] < lowestID or lowestID == -1:
                lowestID = post['id']
                getFavParams['page'] = "b" + str(lowestID)
            
            print("Updating post #" + str(post['id']))
            
            # Upvote the post
            rateLimitThread()
            response = session.post(postVoteURL.format(str(post['id'])), headers=headers, params=voteParams, auth=HTTPBasicAuth(apiUsername, apiKey))
            
            if response.status_code != 200:
                print("error - non-200 status code: " + str(response.status_code))
                print(response.json())
                exit()
            
            # Un-favourite the post
            rateLimitThread()
            response = session.delete(favIDURL.format(str(post['id'])), headers=headers, auth=HTTPBasicAuth(apiUsername, apiKey))
            
            if response.status_code != 204:
                print("error - non-204 status code: " + str(response.status_code))
                print(response.json())
                exit()
                
print("All posts checked.")

Updated

pup said:
This should work on both Windows and Linux, when you run it it'll make an "apikey.txt" for you to add your details to.

I got python (3.9.1) for windows installed along with my API key in hand. The code you provided was pasted into notepad and saved under the .py extension on my desktop.

I'm getting hung up on the apikey.txt file not getting generated. The next hurdle appears to be the Requests library; I have no idea what to do there.

Tried running the code in an opened python window but didn't appear to get anywhere.

Pup

Privileged

mdf said:
I got python (3.9.1) for windows installed along with my API key in hand. The code you provided was pasted into notepad and saved under the .py extension on my desktop.

I'm getting hung up on the apikey.txt file not getting generated. The next hurdle appears to be the Requests library; I have no idea what to do there.

Tried running the code in an opened python window but didn't appear to get anywhere.

I think you'd need to run the script from Command Prompt or Powershell with py E6FavesToUpvotes.py, rather than running it from inside the python command line.

I'm not sure why it wouldn't create the file, but if it doesn't you should be able to make a file called apikey.txt in the same place as the script with the contents being:

username=your_username_here
api_key=your_api_key_here

If it gives an authentication error you might need to add an empty line between the username/api key as well. Windows handles new lines differently to Linux, so it can be awkward like that.
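If editing the file by hand keeps tripping over line endings, a small parsing helper sidesteps the whole issue. This is just a sketch (the parse_apikey name is made up, not part of the script above): it splits each line on "=" and skips blanks, so it doesn't matter whether the file uses Windows or Linux newlines.

```python
def parse_apikey(text):
    """Parse key=value lines, ignoring blank lines and stray whitespace."""
    values = {}
    # splitlines() handles both "\n" and "\r\n", so Windows files work too
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Windows-style line endings and a blank line in between parse the same:
print(parse_apikey("username=your_username_here\r\n\r\napi_key=your_api_key_here"))
```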

For the library you should be able to install it with py -m pip install requests on Windows or python -m pip install requests on Linux.

I'll do some bug testing on Windows later today to see what I've missed, in case it's something other than those.

pup said:
py -m pip install requests

Got a couple of warnings that popped up during this process:
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)")': /packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)")': /packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl
WARNING: The script chardetect.exe is installed in 'C:\Users\USER\AppData\Local\Programs\Python\Python39\Scripts' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.

Everything else was successfully installed and the script seems to function as expected as my profile is indicating favorites are being removed.

Something unexpected is happening though and I can't quite explain it. I got some pages open to track my progress:

-upvote:mdf fav:mdf
upvote:mdf -fav:mdf

The latter of the two shows its page count increasing from 2 to 6, while the former is still holding at 498. Going through the former, it doesn't appear as if favorites are being removed (disastrous), but that doesn't explain why it's still at a static 498 pages while the favorite count on my profile went down.

Pup

Privileged

mdf said:
Got a couple of warnings that popped up during this process:
[..]

I'm not sure about those. At some point Python swapped pip over to its own site rather than python.org, so I'm guessing that's what's causing the errors, though if it's working fine then it must have found a working version at some point.

mdf said:
Everything else was successfully installed and the script seems to function as expected as my profile is indicating favorites are being removed.

Really glad to hear it :3

mdf said:
Something unexpected is happening though and I can't quite explain it.
[..] it doesn't appear as if favorites are being removed (disastrous), but that doesn't explain why it's still at a static 498 pages while the favorite count on my profile went down.

With your searches it took me a few minutes to figure out: the script uses /favorites.json, which lists your favourites in the order you faved them, whereas searching with fav:x shows them in order of highest to lowest ID. From the Posts page, if you click "Favourites" on the top bar while running the script, you should see it removing them there when you refresh. What'll be happening is that you've recently favourited some older posts, which show up on a different page when searching, and it's removing those first.
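A toy illustration of why the two views disagree (the post IDs here are made up): the order posts were favourited in and descending-ID order are different sequences, so "the first page" means different posts in each view.

```python
# Hypothetical post IDs, listed in the order they were favourited
faved_order = [900, 120, 850, 300]

# A fav:name search sorts by post ID, highest (newest) first
id_order = sorted(faved_order, reverse=True)

print(faved_order)  # [900, 120, 850, 300] - what the script walks through
print(id_order)     # [900, 850, 300, 120] - what the search page shows
```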

The 498 pages could be the website caching that you have that many pages of faves; I know the beta had trouble updating page counts like that, and they were more of a rough estimate. If you go to page 498, does it return an error? If it does, then it's just that.

Pup

Privileged

mdf said:
It does not.

Did you check the favourites page, rather than searching, to see if they get removed from there? It should give a better idea of what's happening.

If posts get upvoted then removed from there, then it'll be something to do with the site rather than my script, as it'll stop if the site returns an error code.

Updated

pup said:
If posts get upvoted then removed from there, then it'll be something to do with the site rather than my script, as it'll stop if the site returns an error code.

I did check the favorites page and it is still staying at 498, but just so we're on the same page, I'm understanding that it's more likely going through posts that were already upvoted and simply removing the favorites, correct?

Pup

Privileged

mdf said:
I did check the favorites page and it is still staying at 498, but just so we're on the same page, I'm understanding that it's more likely going through posts that were already upvoted and simply removing the favorites, correct?

It could do that, but then you'd see favourites disappearing without new upvotes appearing, whereas you described the other way around: upvotes without unfavouriting.

When it runs it gets 320 posts, upvotes the first one, unfavourites it, then moves onto the next one, getting another batch of 320 posts after it's gone through those. Because it has the "no unvote" flag set it won't remove a vote, so they'll either be swapped to positive votes or left as positive.

With using the favourites page, https://e621.net/favorites, it doesn't need to keep track of which posts it's checked as it unfavourites the posts it's upvoted.

I was more thinking you could check the first page, as every time it removes a favourite it should remove the first one on that favourites page, so you could see if it's actually skipping any. Maybe open the first 5 favourited posts, see if they're upvoted or not, run the script for 5 posts, then refresh the ones you opened to make sure they're upvoted and no longer favourited.

If the last page stays the same when it's running I still think it could be a caching issue, with the site caching "these posts are on page X", to avoid having to recalculate how many faves, how many pages, and what to show on that page. It's another reason that checking the first page is more reliable.

I could be wrong with that idea, but I know that the E6 beta site had trouble with page counts being inaccurate.

Also, just so you know, the script won't touch your deleted favourites as they don't show up on that page, so after it's run you'll still have some favourites with fav:x status:deleted.

pup said:
I was more thinking you could check the first page, as every time it removes a favourite it should remove the first one on that favourites page, so you could see if it's actually skipping any. Maybe open the first 5 favourited posts, see if they're upvoted or not, run the script for 5 posts, then refresh the ones you opened to make sure they're upvoted and no longer favourited.

Also, just so you know, the script won't touch your deleted favourites as they don't show up on that page, so after it's run you'll still have some favourites with fav:x status:deleted.

Took a look into this and indeed, it seems to be working off my most recent favorites, as the older images are all unaffected. It's also a good call on your part not to touch favorites on deleted posts, so thanks for that. Looking through those alone, damn, 36 pages...

Right now the script is doing things at 100 pages before stopping, sounds like I'll have to find a value and finagle it. I'd like to have to only do this once and let it run overnight but I bet that would cause issues with site bandwidth and whatnot so I'll need to poke around.

If you're going to turn this into an actual tool and slap some user-friendly UI with some toggles and dropdowns or input boxes, I'd say you passed an alpha test for the hard part. o7

Pup

Privileged

mdf said:
Took a look into this and indeed, it seems to be working off my most recent favorites, as the older images are all unaffected. It's also a good call on your part not to touch favorites on deleted posts, so thanks for that. Looking through those alone, damn, 36 pages...

I'm not sure I could set it to do the deleted ones if I wanted to, at least not while using that favourites page. I'd need to swap it to use IDs instead, which could end up being better anyway, given it stops after so many posts.

mdf said:
Right now the script is doing things at 100 pages before stopping, sounds like I'll have to find a value and finagle it. I'd like to have to only do this once and let it run overnight but I bet that would cause issues with site bandwidth and whatnot so I'll need to poke around.

I just had a look, and it seems like the favourites page doesn't use the "limit" variable to say "get this many posts", so to have it keep going you want to swap a line over:

if len(returnedJSON['posts']) < 100:
            stop = True

The < 100 used to be 320, as with IDs the code would get 320 at a time, but it doesn't seem to work with that page, so if it's stopping after 100 posts just set it to 100.

And don't worry about the site bandwidth, that's why there's a rate limit of 1 action per second, so 2 seconds per vote/unfave, which at 80k faves would take about 45 hours apparently. It's a while, but at least you can leave it running overnight for a few days, or in the background whenever you're doing anything else.
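That estimate is easy to sanity-check from the numbers above: two rate-limited requests per post (one upvote, one un-favourite) at one request per second.

```python
faves = 80_000
requests_per_fave = 2    # one upvote + one un-favourite
seconds_per_request = 1  # the script's rate limit

total_hours = faves * requests_per_fave * seconds_per_request / 3600
print(round(total_hours, 1))  # → 44.4, i.e. roughly 45 hours
```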

mdf said:
If you're going to turn this into an actual tool and slap some user-friendly UI with some toggles and dropdowns or input boxes, I'd say you passed an alpha test for the hard part. o7

Having a quick google there's a few ways to make python scripts into standalone executables, so I might give that a go at some point, though maybe for a bigger project instead of this one.

I'm glad it's working well though, only had to change that one line :3

I better edit the script I posted earlier to have it be 75, as I think that's the default for new users.

Edit:
Just checked, it does actually see the deleted posts, I'll post an updated one soon, so that it doesn't remove them.

Second Edit:
I just edited the code in the first post (forum #308915) so now it uses IDs instead of the faves page. Now it'll get 320 posts at a time and ignore deleted ones :3

Just replace the old code with that new one and it should work fine.

Also, with it using IDs instead of that faves page, your earlier searches should work now:

-upvote:mdf fav:mdf
upvote:mdf -fav:mdf

(So much for only needing to change one line XD)

Updated

pup said:
I'm not sure I could set it to do the deleted ones if I wanted to, at least not while using that favourites page. I'd need to swap it to use IDs instead, which could end up being better anyway, given it stops after so many posts.

[..]

Can you provide a walkthrough? I can't seem to get it to work

anon25 said:
Can you provide a walkthrough? I can't seem to get it to work

I made some edits to the script to make it a little more user friendly. (Also some useless nitpicks lol)

Walkthrough (Windows)

Step 1: Install Python.
Step 2: Open cmd (the windows Command Prompt). E.g. by searching for cmd in the windows search bar.
Step 3: Make a file and put the script in it. The file name & extension don't matter; Python doesn't care. I recommend you make the file from the Command Prompt. Type notepad main.py. Click yes when it prompts you "do you want to make a file." Then copy and paste the script in and save it.
Step 4: Run it with the command python3 main.py. The prompts should guide you; it will try to install a library called requests, then ask for your username and API key. You can get your API key by going to https://e621.net/users/250094/api_key

Program
# From https://e621.net/forum_topics/29460

# uncomment for a list of all http requests sent out
#import logging
#logging.basicConfig(level=logging.DEBUG)
import subprocess
import sys, time, os
from pathlib import Path
try:
	import requests
	from requests.auth import HTTPBasicAuth
except ModuleNotFoundError:
	print("error - Could not find module 'requests'.")
	print("If this next part fails, you can probably install it manually with this command:")
	print("python3 -m pip install requests")
	a = input("Try to install it automatically? y/n: ")
	if not a.lower().startswith("y"):
		print("Aborting...")
		exit()


	returnCode = subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])
	if returnCode != 0:
		print("Something went wrong installing the module.")
		print("Return code: {}".format(returnCode))
		print("Aborting...")
		exit()
	print("Ok, installed. Proceeding...")
	import requests
	from requests.auth import HTTPBasicAuth


# File paths
scriptDir = Path(__file__).resolve().parent
apikeyPath = scriptDir.joinpath("apikey.txt")

# API URL paths
postsURL="https://e621.net/posts.json"
postVoteURL="https://e621.net/posts/{}/votes.json"
favIDURL="https://e621.net/favorites/{}.json"

# Rate limiting
rateLimit = 1
lastTimeValue = time.time()
def rateLimitThread():
	global lastTimeValue
	elapsedTime = time.time() - lastTimeValue
	if elapsedTime <= rateLimit:
		time.sleep(rateLimit-elapsedTime)
	lastTimeValue = time.time()

def inputLogin():
	print("Please input your username and API key.")
	if sys.platform != "win32": # Bash on linux. idk for macos/darwin
		print("You may be able to paste into this terminal with Ctrl + Shift + v.")
	apiUsername = input("Username: ").rstrip("\n")
	apiKey = input("Key: ").rstrip("\n")
	return apiUsername, apiKey

# Load API key from file
try:
	with open(apikeyPath) as apikeyFile:
		apiTxt = apikeyFile.read().splitlines()
	apiUsername = apiTxt[0].split("=")[1].strip().replace(" ", "_")
	apiKey = apiTxt[1].split("=")[1].strip()
except FileNotFoundError:
	apiUsername, apiKey = inputLogin()
	a = input("Save your login to file for next time? y/n: ")
	if a.lower().startswith("y"):
		with open(apikeyPath, "w") as apikeyFile:
			raw = "username={}\napi_key={}".format(apiUsername, apiKey) # "\n" instead of os.linesep because file.write() convert automagically in text mode
			apikeyFile.write(raw)
	print("Ok. Proceeding...")

session = requests.Session()
session.auth = HTTPBasicAuth(apiUsername, apiKey)

# User-Agent header
# Add "edited by <username>" if you edit this script
session.headers = {"User-Agent":"Faves_to_upvotes_mm/1.0 (by Pup on E621 edited by MatrixMash)"}

# Params
favParams = {"limit":320, "tags":"fav:{}".format(apiUsername)}
voteParams = {"score":1, "no_unvote":"true"}

retry_limit = 5
retry_break = 3

postCount = 0
lowestID = -1
stop = False
try:
	while not stop:
		# Get 320 favourited posts
		rateLimitThread()
		response = session.get(postsURL, params=favParams)
		if response.status_code != 200:
			print('Exception: API returned non-200 status code while fetching favorites list: ' + str(response.status_code))
			print(response.json())
			print('Trying again.')
			for _ in range(retry_limit):
				print('waiting {} seconds...'.format(retry_break))
				time.sleep(retry_break)
				response = session.get(postsURL, params=favParams)
				if response.status_code == 200:
					break
			else:
				print('Error: tried {} times and still failed.'.format(retry_limit))
				print('Aborting...')
				response.raise_for_status()
		
		returnedJSON = response.json()
		
		if len(returnedJSON["posts"]) < 320:
			stop = True

		for post in returnedJSON["posts"]:
		
			if post["id"] < lowestID or lowestID == -1:
				lowestID = post["id"]
				favParams["page"] = "b" + str(lowestID)
			try:
				print("Updating post #" + str(post["id"]))
				
				# Upvote the post
				rateLimitThread()
				response = session.post(postVoteURL.format(str(post["id"])), params=voteParams)
				response.raise_for_status()
				
				# Un-favourite the post
				rateLimitThread()
				response = session.delete(favIDURL.format(str(post["id"])))
				response.raise_for_status()
				
				postCount += 1
			except requests.exceptions.HTTPError:
				print('Exception: api returned non-200 status code ' + str(response.status_code))
				print(response.json())
				print('Skipping that post for now.')
except KeyboardInterrupt:
	print("Updated {} posts.".format(postCount))
else:
	print("All posts checked.")


Edit: Updated to retry a couple times after errors.

Updated

matrixmashsifting said:
I made some edits to the script to make it a little more user friendly. (Also some useless nitpicks lol)

[..]


The process keeps getting interrupted by this every 10 min or so:

File "C:\Users\*User*\main.py", line 112, in <module>
    response.raise_for_status()
  File "C:\Users\*User*\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://e621.net/posts/2785103/votes.json?score=1&no_unvote=true 

Updated

I haven't the faintest idea what could be causing that. The best I could suggest would be commenting out the lines that go response.raise_for_status(), but that may result in posts being removed from favorites without getting upvoted.

I don't have a working computer atm, so I can't do actual testing/coding, sorry. If you know a little Python, I'd suggest wrapping the inner for loop in a try except block swallowing requests.exceptions.HTTPError.
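The suggested pattern looks roughly like this. It's an offline sketch: FakeHTTPError and flaky_upvote are stand-ins for requests.exceptions.HTTPError and the real API calls, just to show the skip-and-continue shape rather than a working client.

```python
class FakeHTTPError(Exception):
    """Stand-in for requests.exceptions.HTTPError in this offline sketch."""

def flaky_upvote(post_id):
    # Pretend the server returns a 500 for one particular post
    if post_id == 2785103:
        raise FakeHTTPError("500 Server Error")

updated = []
for post_id in [111, 2785103, 333]:
    try:
        flaky_upvote(post_id)
        # ...the un-favourite call would go here, inside the same try block...
        updated.append(post_id)
    except FakeHTTPError as err:
        # One bad post gets skipped instead of aborting the whole run
        print("Skipping post #{}: {}".format(post_id, err))

print(updated)  # → [111, 333]
```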

Edit: Added some automatic retries. The script will say "all posts checked" even if it skips some due to errors, so run it again if there's anything left behind.

Updated

I've never used Python in my life before but I still got it to work with no issues thanks to this thread, thank you!
