
Cache documents on client-side #4

@aef-


Presigned URLs are generated on every load, so the browser can't cache the documents they point to.

Potential solutions

  1. Increase the expiration time and bucket the signing times so the URLs don't change between loads (see the first sketch below).
    The big issue with this is security. At the moment we use a very short expiration time because we don't want users sharing these files.
  2. Offer a URL from our server that serves the file (see the second sketch below).
    We'd download the file from S3 and serve it from a stable URL. While not ideal, this could be the best solution, as it also gives us better security control. It requires a bunch of infra/dev work to get working optimally: https://9elements.com/blog/streaming-downloads-in-elixir--a-protocol-love-story/ https://dev.to/onpointvn/download-stream-large-file-with-hackney-in-elixir-539m https://hexdocs.pm/phoenix/Phoenix.Controller.html#send_download/3
  3. Break the document apart into pages and serve them directly from S3.
    We expect to do this anyway. While it doesn't remove the issue at hand, it does minimize its effect. Pages are typically <100 KB, but a 100-page document, especially one with text-heavy pages that aren't rasterized, is much smaller as a single PDF than as 100 separate page files.
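
For (1), the trick is to make URL generation deterministic within a time window: sign against the start of the window instead of "now", so every request in that window produces the byte-identical URL and the browser's cache hits. A minimal sketch assuming ExAws; the `:start_datetime` option needs a reasonably recent ex_aws_s3, and the module name and window sizes are placeholders:

```elixir
defmodule MyApp.Documents.PresignedURL do
  @moduledoc """
  Bucketed presigned URLs: sign against the start of a fixed time window so
  every request inside that window yields the byte-identical URL, which lets
  the browser's cache hit.
  """

  # Placeholder values: the URL changes at most once per @window, and
  # @expires_in must be longer than @window so a URL handed out late in a
  # window doesn't expire before the window rolls over.
  @window 15 * 60
  @expires_in 30 * 60

  def presigned_url(bucket, key) do
    config = ExAws.Config.new(:s3)

    ExAws.S3.presigned_url(config, :get, bucket, key,
      expires_in: @expires_in,
      # Pin the signing time to the start of the current window so the
      # signature (and therefore the URL) is stable within the window.
      start_datetime: window_start()
    )
  end

  # Start of the current window as an Erlang datetime tuple, the format
  # ExAws's signer expects.
  defp window_start do
    now = System.os_time(:second)

    (now - rem(now, @window))
    |> DateTime.from_unix!()
    |> DateTime.to_naive()
    |> NaiveDateTime.to_erl()
  end
end
```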

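For (2), Phoenix already has most of what we need in `Phoenix.Controller.send_download/3`. A minimal sketch assuming a hypothetical `GET /documents/:id/file` route; `get_document!/1`, `fetch_file/1`, and `content_hash` are assumed helpers/fields, with `fetch_file/1` pulling the object from S3 to a local path (e.g. via `ExAws.S3.download_file/4`):

```elixir
defmodule MyAppWeb.DocumentController do
  use MyAppWeb, :controller

  def file(conn, %{"id" => id}) do
    document = MyApp.Documents.get_document!(id)
    path = MyApp.Documents.fetch_file(document)

    conn
    # A stable URL plus cache headers is what lets the browser reuse the
    # bytes; "private" keeps shared proxies from storing the file.
    |> put_resp_header("cache-control", "private, max-age=3600")
    # content_hash is a hypothetical stored checksum used as a validator.
    |> put_resp_header("etag", ~s("#{document.content_hash}"))
    |> send_download({:file, path},
      filename: document.filename,
      content_type: "application/pdf",
      # :inline so the browser (or PDF.js) renders the file instead of
      # forcing a save dialog.
      disposition: :inline
    )
  end
end
```

Because this route runs in our app, we can check the user's session before serving anything, which is the security-control upside mentioned above.
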
While PDF.js currently supports piecemeal loading of PDFs, its efficacy depends on the document; if you're towards the end of the document, it will typically have to load the previous X pages first.
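
PDF.js's piecemeal loading works over HTTP Range requests, so if we proxy the file through our server (option 2), the endpoint has to answer `Range:` headers itself. A minimal sketch, assuming the PDF is already on local disk and a hypothetical `MyApp.Documents.local_path/1` helper; it only handles the `bytes=first-last` / `bytes=first-` forms and skips validation of malformed ranges for brevity:

```elixir
defmodule MyAppWeb.DocumentRangeController do
  use MyAppWeb, :controller

  def show(conn, %{"id" => id}) do
    path = MyApp.Documents.local_path(id)
    size = File.stat!(path).size

    case get_req_header(conn, "range") do
      ["bytes=" <> range] ->
        {first, last} = parse_range(range, size)

        conn
        |> put_resp_header("accept-ranges", "bytes")
        |> put_resp_header("content-range", "bytes #{first}-#{last}/#{size}")
        |> put_resp_content_type("application/pdf", nil)
        # 206 Partial Content with just the requested slice of the file.
        |> send_file(206, path, first, last - first + 1)

      _ ->
        # No Range header: serve the whole file, but advertise range
        # support so PDF.js switches to chunked loading.
        conn
        |> put_resp_header("accept-ranges", "bytes")
        |> put_resp_content_type("application/pdf", nil)
        |> send_file(200, path)
    end
  end

  defp parse_range(range, size) do
    case String.split(range, "-") do
      [first, ""] -> {String.to_integer(first), size - 1}
      [first, last] -> {String.to_integer(first), min(String.to_integer(last), size - 1)}
    end
  end
end
```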

The following may help in the meantime, as it's brutal to download a PDF that's 50 MB+:
mozilla/pdf.js#8897
