Topic: [QUESTION] want to learn how to make an E621 App!

Posted under e621 Tools and Applications

So gurus, I want to make my first web app, but I'm struggling to wrap my head around where to start. I'm a C++/C# dev; any tips that'll point me in the right direction for just a basic downloader? I'll try the harder stuff later

Updated

Kaitoukitsune said:
So gurus, I want to make my first web app, but I'm struggling to wrap my head around where to start. I'm a C++/C# dev; any tips that'll point me in the right direction for just a basic downloader? I'll try the harder stuff later

I'm a super noob myself, but you didn't even say what the app is supposed to do; without context the question can't be answered.
Like, explain what a regular day using the app would look like.

Updated by anonymous

Kaitoukitsune said:
So gurus, I want to make my first web app, but I'm struggling to wrap my head around where to start. I'm a C++/C# dev; any tips that'll point me in the right direction for just a basic downloader? I'll try the harder stuff later

Which operating system?

Updated by anonymous

DelurC said:
I'm a super noob myself, but you didn't even say what the app is supposed to do; without context the question can't be answered.
Like, explain what a regular day using the app would look like.

Basically, I want the first version of the app to work in the command prompt for testing purposes. If I had to define my use case:
Execute the program from within VS2015
Give it a string of tags
List the number of hits
Confirm a number of images to download
Send those files to a test folder (hard-coded for testing, but an exposed variable later on)
Does that answer your question?
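In other words, something like this console skeleton (all names and the folder path here are placeholders; no actual searching or downloading is wired up yet):

```csharp
using System;
using System.IO;

class Program
{
    // Hard-coded for testing; will become an exposed variable later.
    const string DownloadFolder = @"C:\e621test";

    // Never try to download more images than the search returned.
    public static int ClampCount(int requested, int hits)
    {
        return Math.Max(0, Math.Min(requested, hits));
    }

    static void Main()
    {
        Console.Write("Tags: ");
        string tags = Console.ReadLine();

        int hits = 0; // TODO: query the API for the number of hits
        Console.WriteLine("Found " + hits + " posts.");

        Console.Write("How many to download? ");
        int count = ClampCount(int.Parse(Console.ReadLine()), hits);

        Directory.CreateDirectory(DownloadFolder);
        // TODO: download 'count' files into DownloadFolder
    }
}
```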

Munkelzahn said:
Which operating system?

Windows 10

Also for the sake of clarification, I have never done anything web like before. All my programming experience revolves around Unity/Unreal Engine

Updated by anonymous

Sanitia said:
You'll want to work with the WebRequest class to make calls out to the website, and the API to know what request to send.

Pretty much you'll send out a request (like https://e621.net/post/index.json?tags=cat%20banana&page=1&limit=100 ) to the API and get a response (in XML or JSON).

The response will contain a list of results that you can loop through and grab the file_url link from, and then download that.

I'm going to have to discourage using pages for any sort of mass download operation. Start using before_id: for each set of results, take the lowest post id and use that as the before_id parameter value on the next request. The server, and your sanity, will thank you.

Known caveat: order:x is ignored when using before_id. Posts are always ordered by id when using before_id, but that usually doesn't matter.

P.S. Eventually page numbers above a certain value won't work anymore, so getting this change done early saves you from having to fix it later.
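A rough sketch of that loop in C# (the Post fields, class names, and User-Agent string are assumptions; JSON parsing is stubbed out, a real version would deserialize with something like Json.NET):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;

class Post { public int Id; public string FileUrl; }

class Pager
{
    // Build the index.json URL; before_id replaces page numbers.
    public static string BuildUrl(string tags, int? beforeId)
    {
        string url = "https://e621.net/post/index.json?tags="
                   + Uri.EscapeDataString(tags) + "&limit=100";
        if (beforeId.HasValue)
            url += "&before_id=" + beforeId.Value;
        return url;
    }

    // Stub: a real version would deserialize the JSON body into posts,
    // e.g. with Json.NET's JsonConvert.DeserializeObject<List<Post>>.
    static List<Post> ParsePosts(string json) { return new List<Post>(); }

    public static void DownloadAll(string tags)
    {
        int? beforeId = null;
        while (true)
        {
            var request = (HttpWebRequest)WebRequest.Create(BuildUrl(tags, beforeId));
            request.UserAgent = "MyDownloader/1.0 (by myusername)";

            string json;
            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                json = reader.ReadToEnd();

            var posts = ParsePosts(json);
            if (posts.Count == 0) break; // no more results

            foreach (var post in posts)
            {
                // TODO: download post.FileUrl to disk
            }
            // Results come back ordered by id descending, so the last post
            // in the batch has the lowest id; use it as the next before_id.
            beforeId = posts[posts.Count - 1].Id;
        }
    }
}
```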

Updated by anonymous

Seems easy then; the way I would do it is:
Navigate to https://e621.net/post/index/1/ + <tags> in a web browser.
Check how many pages there are (maybe report back).
Grab all images from the page, navigate to the next page, grab all, navigate to the next...
Then I would loop over the image list and fix ("rename") the preview URL to full resolution ("...data/preview/x/y..." > "...data/x/y...").
Then download.

There's also the API, but I don't know how to work with that stuff since I'm a nub, plus it requires an account, so it wouldn't work for everyone by default. (I mean it could work, but if there's an hourly limit on requests like on Twitter, it's not wise to give out your account.)

It all depends on what exactly you want to do, though, so I can't give exact instructions.
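The rename step is basically a one-line string replace (a sketch; as pointed out further down, previews always end in .jpg while the original file may be .png/.gif/.webm, so the guessed URL can be wrong):

```csharp
class PreviewFix
{
    // Turn a preview URL into a guess at the full-resolution URL.
    // Caveat: previews always have a .jpg extension, but the original
    // file may be .png/.gif/.webm, so this guess can be wrong.
    public static string PreviewToFull(string previewUrl)
    {
        return previewUrl.Replace("/data/preview/", "/data/");
    }
}
```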

Updated by anonymous

Sanitia said:
You'll want to work with the WebRequest class to make calls out to the website, and the API to know what request to send.

Pretty much you'll send out a request (like https://e621.net/post/index.json?tags=cat%20banana&page=1&limit=100 ) to the API and get a response (in XML or JSON).

The response will contain a list of results that you can loop through and grab the file_url link from, and then download that.

Which would you prefer to use, XML or JSON? I found the REST SDK for reading JSON, but I wonder if XML would be easier? (Never worked with either before.)

KiraNoot said:
I'm going to have to discourage using pages for any sort of mass download operation. Start using before_id: for each set of results, take the lowest post id and use that as the before_id parameter value on the next request. The server, and your sanity, will thank you.

Known caveat: order:x is ignored when using before_id. Posts are always ordered by id when using before_id, but that usually doesn't matter.

P.S. Eventually page numbers above a certain value won't work anymore, so getting this change done early saves you from having to fix it later.

Lost me, dude. Like, I haven't even gotten to the point of making a request yet XD

Updated by anonymous

DelurC said:
Seems easy then; the way I would do it is:
Navigate to https://e621.net/post/index/1/ + <tags> in a web browser.
Check how many pages there are (maybe report back).
Grab all images from the page, navigate to the next page, grab all, navigate to the next...
Then I would loop over the image list and fix ("rename") the preview URL to full resolution ("...data/preview/x/y..." > "...data/x/y...").
Then download.

There's also the API, but I don't know how to work with that stuff since I'm a nub, plus it requires an account, so it wouldn't work for everyone by default. (I mean it could work, but if there's an hourly limit on requests like on Twitter, it's not wise to give out your account.)

It all depends on what exactly you want to do, though, so I can't give exact instructions.

This is not a safe way to do it. Preview URLs are not always the same as file URLs: preview URLs always have .jpg extensions, file URLs do not. Also, what happens with gif/swf/webm? Properly using the API (which doesn't require an API key to use) is the way to go.

Updated by anonymous

KiraNoot said:
This is not a safe way to do it. Preview URLs are not always the same as file URLs: preview URLs always have .jpg extensions, file URLs do not. Also, what happens with gif/swf/webm? Properly using the API (which doesn't require an API key to use) is the way to go.

It's easy to check for the extension.
As I said, I'm a noob; it might not be the best way, but it works.

Updated by anonymous

So I've run into the first problem.

To test whether I get a response, I'm just sending Sanitia's example request, and I'm getting a 403 error (Forbidden). Where do I put the User-Agent header? Is it attached to the string somehow?
(Using the WebRequest they recommended earlier)
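For reference, the User-Agent isn't part of the URL string; with HttpWebRequest it's a property you set on the request object before sending (the agent string below is just an example — use your own app name and username):

```csharp
using System;
using System.IO;
using System.Net;

class Fetch
{
    public static string Get(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        // Without a descriptive User-Agent the site answers 403 Forbidden.
        request.UserAgent = "MyE621Downloader/0.1 (by myusername)";

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}
```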

Updated by anonymous
