
Feature request: scrape sites, add to db, but don't download #17

@grantbarrett

Description

Since some book catalogs are very large, it would be a useful feature to run the scrape to build the database for review, then mark books for download, then run the script again to fetch only those. Perhaps that is beyond the purpose of this script, which seems intended more for broad archival use. But if I see a public domain book in an edition I do not have, I want only that edition, not the others, which I already have. It would save a lot of unnecessary data transfer.
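To illustrate, a minimal sketch of what such a two-pass flow might look like, assuming a SQLite metadata store. All table, column, and function names here are hypothetical, not part of the existing script:

```python
import sqlite3

def init_db(conn):
    # Hypothetical schema: one row of metadata per book, no file contents.
    conn.execute("""CREATE TABLE IF NOT EXISTS books (
        isbn TEXT PRIMARY KEY,
        title TEXT,
        marked INTEGER DEFAULT 0,     -- set to 1 by the user during review
        downloaded INTEGER DEFAULT 0  -- set to 1 after the fetch pass
    )""")

def record_scrape(conn, isbn, title):
    # Pass 1 (scrape): store metadata only, transferring no book data.
    conn.execute("INSERT OR IGNORE INTO books (isbn, title) VALUES (?, ?)",
                 (isbn, title))

def mark_for_download(conn, isbn):
    # Review step: the user flags the editions they actually want.
    conn.execute("UPDATE books SET marked = 1 WHERE isbn = ?", (isbn,))

def pending_downloads(conn):
    # Pass 2 (fetch): only books that were marked and not yet downloaded.
    return [row[0] for row in conn.execute(
        "SELECT isbn FROM books WHERE marked = 1 AND downloaded = 0")]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    init_db(conn)
    record_scrape(conn, "9780000000001", "Some Public Domain Edition")
    record_scrape(conn, "9780000000002", "Edition Already Owned")
    mark_for_download(conn, "9780000000001")
    print(pending_downloads(conn))  # only the marked ISBN is fetched
```

The point of the sketch is just that the expensive transfer happens only in the second pass, driven by the `marked` flag.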

Alternately, being able to specify a list of ISBNs, titles, or keywords before scraping would also serve to reduce the total data transfer.
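The pre-scrape filter could be as simple as a predicate checked before each download. The parameters `wanted_isbns` and `keywords` below are hypothetical user-supplied lists, not existing options of the script:

```python
def should_fetch(title, isbn, wanted_isbns, keywords):
    # Fetch if the ISBN was explicitly requested...
    if isbn in wanted_isbns:
        return True
    # ...or if any keyword appears in the title (case-insensitive).
    lowered = title.lower()
    return any(k.lower() in lowered for k in keywords)

if __name__ == "__main__":
    print(should_fetch("Moby Dick", "9780000000003", set(), ["moby"]))   # True
    print(should_fetch("Other Book", "9780000000004", set(), ["moby"]))  # False
```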

Thank you!
