Is there a way to make this pop-up redirect to the page you were going to instead of the home page? I'm pretty sure some other sites do that.
I get it all the time and it's rather annoying.
Posted under Site Bug Reports & Feature Requests
+1
hate having to re-search something after getting it +1
meaterbeater431 said:
hate having to re-search something after getting it +1
you can just hit back and refresh
+1. It's super annoying to have to copy the URL, click "not a robot", then paste & go. Much easier if just clicking "not a robot" took me where I was trying to go in the first place.
When I get it, after I click and it shunts me to the home page, I can press back in my browser (and sometimes a refresh is needed too) and I always end up where I meant to be.
That's still really annoying though, so +1
there must be some kind of limitation of the site itself that would prevent this, otherwise you'd think they would've done it by now
but if not, then +1, and another +2 to at least give the screen a splash of blue
sipothac said:
you can just hit back and refresh
Still would prefer not having to do that
snpthecat said:
Still would prefer not having to do that
aye, aye.
What I want to know is, is it working? This is supposed to fight off the rule34.xxx scraper bot, right?
why does rule34 have a scraper bot in the first place
dimoretpinel said:
why does rule34 have a scraper bot in the first place
Because of anons. They don't need to leech this website's bandwidth by scraping.
dimoretpinel said:
why does rule34 have a scraper bot in the first place
Beats the heck out of me, but they've been one of the banes of our existence for a while. There have even been conspiracy theories accusing e6 of owning rule34.xxx as a way of circumventing DNP requests or something like that; in fact, quite a few DNP requests are less about e6 specifically and more about people not wanting their work to be scraped by rule34.
Real.
It is absolutely brutal when tagging something with a lot of tags if I forget to copy the tags before submitting. It just obliterates whatever you were doing and redirects to the front page. To mitigate that, I've taken to consolidating all of the tags in one submission field before making a new upload rather than leaving them in each category.
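A userscript could paper over that particular failure mode by stashing the form's text fields right before submit. This is just a hypothetical mitigation sketch, not site code; the selectors and the `e621-form-backup` key are my own assumptions:

```javascript
// Hypothetical sketch: back up text fields before any form submit so the
// bot-check redirect can't eat your tags. Selector choices are assumptions.

// Pure helpers: serialize field {name, value} pairs to JSON and back.
function snapshotFields(fields) {
    return JSON.stringify(fields.map(f => ({ name: f.name, value: f.value })));
}

function restoreFields(json) {
    return JSON.parse(json);
}

// Browser-only glue (skipped when there is no DOM, e.g. under Node):
if (typeof document !== "undefined" && typeof sessionStorage !== "undefined") {
    document.addEventListener("submit", () => {
        const inputs = Array.from(
            document.querySelectorAll("textarea, input[type='text']"));
        sessionStorage.setItem("e621-form-backup", snapshotFields(inputs));
    }, true);
}
```

On the next page load the script could read `e621-form-backup` back out of `sessionStorage` and refill whichever fields match by name.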
I wonder if it's the cause of re621 sometimes getting stuck when loading tag information/suggestions/implications.
Irony: A side-effect of making it do a redirect to the desired page is it would make it so that you don't get the chance to resubmit forms. *cough*tagging*cough*
Firefox 122 just asks me if I want to resubmit the request, when I hit refresh after clicking on back.
alphamule said:
Irony: A side-effect of making it do a redirect to the desired page is it would make it so that you don't get the chance to resubmit forms. *cough*tagging*cough*
Firefox 122 just asks me if I want to resubmit the request, when I hit refresh after clicking on back.
This is yet another on the list of reasons I'm considering switching back to Firefox after changing to Chrome a few years ago. Granted, usually this isn't the behavior you want from the back/refresh buttons.
If this were improved, making it continue through with the form submission would make sense.
zeorp said:
This is yet another on the list of reasons I'm considering switching back to Firefox after changing to Chrome a few years ago. Granted usually this isn't behavior you want with the back/refresh button. If this were improved, making it continue through with the form submission would make sense.
Until they patch it for web compatibility reasons AKA do everything like Chrome on a touchscreen. XD
I actually have a userscript to just do that for me:
// ==UserScript==
// @name         e621 Bot Redirect
// @version      0.1
// @description  Redirects you back to the page you were going to after the confirm-not-a-bot popup occurs.
// @author       DefinitelyNotAFurry4
// @match        https://e621.net/*
// @icon         https://www.google.com/s2/favicons?sz=64&domain=e621.net
// @run-at       document-end
// ==/UserScript==

function waitForElm(selector) {
    return new Promise((resolve) => {
        if (document.querySelector(selector)) {
            return resolve(document.querySelector(selector))
        }
        const observer = new MutationObserver(() => {
            if (document.querySelector(selector)) {
                observer.disconnect()
                resolve(document.querySelector(selector))
            }
        })
        observer.observe(document.body, { childList: true, subtree: true })
    })
}

(async function () {
    'use strict';
    if (document.title.toLowerCase().includes("attention required!")) {
        let wantedLocation = window.location.href
        let button = await waitForElm("input.e6-challenge")
        button.addEventListener("click", async (e) => {
            e.preventDefault()
            e.stopImmediatePropagation()
            const data = new URLSearchParams()
            data.append("_e621", document.querySelector("input[name='_e621']").value)
            let res = await fetch("https://e621.net/challenge", {
                method: "POST",
                body: data,
                credentials: "same-origin"
            })
            if (res.ok) {
                window.location.href = wantedLocation
            } else {
                alert("CHALLENGE FAILED!")
            }
        })
    }
})();
It basically just disables the default behavior of the button and replaces it with a request to do the challenge, then it redirects you to where you were going.
Funnily enough, this could very easily be changed to just automate the task, which, imo, is pretty hilarious.
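For what it's worth, that automated variant would only take a small change: drop the click listener and fire the challenge request as soon as the page loads. The `input[name='_e621']` selector and the `/challenge` endpoint are taken from the script above; the rest is a hedged sketch, not something tested against the live site.

```javascript
// Hypothetical auto-solve variant of the userscript above: submit the
// challenge immediately on load instead of waiting for a button click.

// Pure helper: build the POST body from the hidden token's value.
function buildChallengeBody(tokenValue) {
    const data = new URLSearchParams();
    data.append("_e621", tokenValue);
    return data;
}

// Browser-only glue (skipped when there is no DOM):
if (typeof document !== "undefined") {
    (async function () {
        'use strict';
        if (!document.title.toLowerCase().includes("attention required!")) return;
        const wantedLocation = window.location.href;
        const token = document.querySelector("input[name='_e621']");
        if (!token) return;
        const res = await fetch("https://e621.net/challenge", {
            method: "POST",
            body: buildChallengeBody(token.value),
            credentials: "same-origin"
        });
        if (res.ok) window.location.href = wantedLocation;
    })();
}
```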
definitelynotafurry4 said:
It basically just disables the default behavior of the button and replaces it with a request to do the challenge, then it redirects you to where you were going. Funnily enough, this could very easily be changed to just automate the task, which, imo, is pretty hilarious.
Yeah, but like with robots.txt, the assholes get the 'special' treatment. Escalation works both ways. It's strange to scrape a site that has a literal database export. I mean, I even mirror it on Archive.org so they don't even have to hit e621's servers. 99% of anything you'd be interested in besides the images is available that way, and the images are likely mirrored already if just uploaded (they want the recent ones since they already have the old ones). It would only make sense to download images AFTER failing to find them at the source (or when the source is missing). Not to be nice, but to not get blocked as fast. But explaining this to some people is probably a lot harder than taking technical measures. Not everyone has a pass to board the clue train (a functioning brain), it seems. Common courtesy would be to, you know, ask first/donate for servers/etc.
Reminds me... I haven't asked Wayback to do that archive this month.
Oddly, these don't show up even though they work:
http://web.archive.org/web/20240205002014/https://e621.net/db_export/pools-2024-02-03.csv.gz
http://web.archive.org/web/20240205002014/https://e621.net/db_export/posts-2024-02-03.csv.gz
http://web.archive.org/web/20240205002014/https://e621.net/db_export/tag_aliases-2024-02-03.csv.gz
http://web.archive.org/web/20240205002014/https://e621.net/db_export/tag_implications-2024-02-03.csv.gz
http://web.archive.org/web/20240205002014/https://e621.net/db_export/tags-2024-02-03.csv.gz
http://web.archive.org/web/20240205002014/https://e621.net/db_export/wiki_pages-2024-02-03.csv.gz
Probably explains why the ones from last month are no good. :(
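Since the export filenames follow the obvious `table-YYYY-MM-DD.csv.gz` pattern visible in those links, building the URLs for any given date is trivial to script. `dbExportUrl` below is my own hypothetical helper, not an official API:

```javascript
// Sketch: build e621 db_export URLs following the filename pattern
// visible in the links above (table-YYYY-MM-DD.csv.gz).
function dbExportUrl(table, date) {
    return `https://e621.net/db_export/${table}-${date}.csv.gz`;
}

// Example: URLs for every table exported on 2024-02-03.
const tables = ["pools", "posts", "tag_aliases", "tag_implications", "tags", "wiki_pages"];
const urls = tables.map(t => dbExportUrl(t, "2024-02-03"));
```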