Hello,
I've already used Expanse (https://github.com/jc9108/eternity/) to scrape all of my saved posts/comments, my own posts/comments, my upvotes, etc., and to import my Reddit data export (which covers more than the 1000-item limit Reddit puts on the regular feed). I now have a .json file with the text of the posts and links for all of them.
Is there a way I can feed that JSON into BDFR to scrape the links, related comments, posts, etc.? I'm sure it's possible if my Python-fu existed.
That way I wouldn't be limited by Reddit's 1000-item cap. Or can I import my Reddit export file directly to scrape?

-
You'd have to take the IDs out of the JSON file and write them to a text file, one ID per line. That's very easy to do in Python, just a snippet of a few lines. I can write the snippet for you if you'd like; you just need to post some of the JSON file so I can see its structure.
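For reference, here is a minimal sketch of the kind of snippet described above. It assumes the Expanse export is a JSON array of objects that each carry a permalink or URL field pointing at a submission; the field names (`permalink`, `url`, `link`) and the filenames are guesses and would need adjusting once the real structure is known:

```python
import json
import re
from pathlib import Path

# Minimal sketch only: the real structure of the Expanse export is unknown,
# so this assumes a JSON array of objects that each contain a permalink or
# URL field pointing at a Reddit submission. Adjust the key names and
# filenames to match the actual file.
INPUT_JSON = Path("expanse_export.json")  # hypothetical input filename
OUTPUT_TXT = Path("post_ids.txt")         # one submission ID per line

# A submission permalink looks like
# https://www.reddit.com/r/<sub>/comments/<id>/<slug>/ where <id> is the
# base-36 submission ID to write out, one per line.
ID_PATTERN = re.compile(r"/comments/([a-z0-9]+)")

def extract_id(item: dict) -> str | None:
    """Pull a submission ID out of one exported record, if possible."""
    for key in ("permalink", "url", "link"):  # guessed field names
        value = item.get(key)
        if isinstance(value, str):
            match = ID_PATTERN.search(value)
            if match:
                return match.group(1)
    return None

def main() -> None:
    items = json.loads(INPUT_JSON.read_text(encoding="utf-8"))
    ids: list[str] = []
    seen: set[str] = set()
    for item in items:
        post_id = extract_id(item)
        if post_id and post_id not in seen:
            seen.add(post_id)
            ids.append(post_id)
    OUTPUT_TXT.write_text("\n".join(ids) + "\n", encoding="utf-8")
    print(f"Wrote {len(ids)} IDs to {OUTPUT_TXT}")

if __name__ == "__main__":
    main()
```

The resulting post_ids.txt holds one submission ID per line, which BDFR can consume via its option for reading IDs/URLs from a file (`--include-id-file` in recent versions, if I remember right; check the BDFR README for the exact flag in your version).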