File listing Optimization Proposal #691

@sculptex

Description

Retrieving file listings, and in particular list-all, will become increasingly resource-hungry as the number of files and directories in an allocation grows.

Separate from adding a folders-only option, here is an idea for how file listings could be produced more efficiently for large numbers of files, compared with the apparent current method, in which the client requests full listings from all blobbers and seeks a sufficient consensus majority before considering a listing correct:

  • Instead of each blobber sending its entire file listing, each blobber builds an internal list containing only file paths and content hashes (consistent between blobbers).
  • This list is sorted.
  • A hash of the sorted list is generated (consistent between blobbers).
  • The list (referenced by its hash) is saved temporarily.
  • Initially, only the hash is returned to the client.
  • The client then only has to compare the hashes and find a consensus majority to be sure the file listing is correct.
  • The client retrieves the actual file listing from just one of the matching blobbers, chosen at random.
  • Retrieval can be paginated, which also solves the consistency issue when time elapses between pages.
  • In fact, pagination requests can be split across blobbers: blobber 1 serves page 1, blobber 2 serves page 2, and so on.
  • The cached list can be retained at least for a short while to serve pagination requests, but it actually remains valid until a write operation is performed on the allocation. So it could be flagged as stale as soon as a CRUD operation is performed, and the stale listing removed after, say, 1 minute.
  • This could form a secondary listing method that kicks in above a certain threshold, perhaps for any allocation with more than, say, 1,000 files, or wherever pagination would be triggered.
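The blobber-side half of the idea (steps 1–4 above) can be sketched as a deterministic hash over sorted (path, content hash) pairs. This is a minimal illustration, not the blobber implementation; the function name, the `path:hash` serialization, and the choice of SHA-256 are all assumptions made here for the example.

```python
import hashlib

def listing_hash(entries):
    """Deterministic hash over a file listing.

    `entries` maps file path -> content hash. Sorting the serialized
    pairs before hashing means every blobber holding the same file set
    produces the same digest, regardless of internal storage order.
    (Serialization format and hash algorithm are illustrative choices.)
    """
    lines = sorted(f"{path}:{chash}" for path, chash in entries.items())
    blob = "\n".join(lines).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

# Two blobbers with the same files, built in different order, agree:
a = {"/docs/a.txt": "h1", "/img/b.png": "h2"}
b = {"/img/b.png": "h2", "/docs/a.txt": "h1"}
assert listing_hash(a) == listing_hash(b)

# A single changed content hash yields a different listing hash:
c = {"/docs/a.txt": "h1-modified", "/img/b.png": "h2"}
assert listing_hash(c) != listing_hash(a)
```

Because the digest is tiny compared with a full listing, returning it first costs each blobber one hash computation plus a few dozen bytes on the wire, and the temporarily saved list can be keyed by this digest for the later pagination requests.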
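The client-side half (consensus on the hashes, then spreading pagination requests across the agreeing blobbers) could look roughly like the following. The threshold value, the round-robin page assignment, and all function names are assumptions for the sake of the sketch.

```python
from collections import Counter

def consensus_hash(hashes, threshold):
    """Return the listing hash reported by at least `threshold` blobbers,
    or None if no hash reaches that count."""
    value, count = Counter(hashes).most_common(1)[0]
    return value if count >= threshold else None

def page_assignments(agreeing_blobbers, num_pages):
    """Round-robin pages across the blobbers that agreed on the hash:
    first agreeing blobber serves page 0, the next serves page 1,
    wrapping around when pages outnumber blobbers."""
    return {page: agreeing_blobbers[page % len(agreeing_blobbers)]
            for page in range(num_pages)}

# Four blobbers report their listing hashes; one is out of sync.
reports = ["abc", "abc", "abc", "xyz"]
winner = consensus_hash(reports, threshold=3)
assert winner == "abc"

# Only blobbers 0, 1 and 2 agreed, so pages rotate among them.
agreeing = [i for i, h in enumerate(reports) if h == winner]
assert page_assignments(agreeing, 5) == {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}
```

Since every page is served against the same listing hash, a write that lands between page requests simply invalidates the cached list on the blobbers, and the client can detect this and restart rather than silently mixing two versions of the listing.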
