CA-396751: write updated RRDD data before headers #5915
```diff
@@ -283,7 +283,8 @@ def wait_until_next_reading(self, neg_shift=1):
                 self.lazy_complete_init()
                 next_reading = self.register()
                 wait_time = next_reading - neg_shift
-                if wait_time < 0: wait_time %= self.frequency_in_seconds
+                if wait_time < 0:
+                    wait_time %= self.frequency_in_seconds
                 time.sleep(wait_time)
                 return
             except socket.error:
```
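The modulo in this hunk is what keeps `time.sleep` from being handed a negative interval (which would raise `ValueError`): a deadline that has already passed is wrapped into the next reporting period. A minimal sketch of that logic, with an illustrative frequency value (the real one comes from the plugin's configuration):

```python
frequency_in_seconds = 5  # illustrative; the real value is plugin-specific

def clamp_wait(next_reading, neg_shift=1):
    """Mirror the diff's logic: shift the deadline back by neg_shift and,
    if that lands in the past, wrap into the next reporting period."""
    wait_time = next_reading - neg_shift
    if wait_time < 0:
        # Python's % always yields a non-negative result for a positive
        # modulus, so e.g. -0.5 maps to 4.5 with a 5 s frequency.
        wait_time %= frequency_in_seconds
    return wait_time

print(clamp_wait(0.5))  # -0.5 wraps to 4.5
print(clamp_wait(3.0))  # 2.0, already non-negative
```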
```diff
@@ -310,20 +311,27 @@ def update(self):
         metadata_json = json.dumps(metadata, sort_keys=True).encode('utf-8')
         metadata_checksum = crc32(metadata_json) & 0xffffffff

-        self.dest.seek(0)
-        self.dest.write('DATASOURCES'.encode())
-        self.dest.write(pack(">LLLQ",
-                             data_checksum,
-                             metadata_checksum,
-                             len(self.datasources),
-                             timestamp))
+        # First write the updated data and metadata
+        encoded_datasource_header = 'DATASOURCES'.encode()
+        # DATASOURCES + 20 for 32 + 32 + 32 + 64
+        self.dest.seek(len(encoded_datasource_header) + 20)
         for val in data_values:
             # This is already big endian encoded
             self.dest.write(val)
```
Review thread:

> The reader can do its reading at any time? Is it reasonable to assume that this action of writing the data/metadata is atomic?

> the reader should not read any more than the header unless and until the

> Header and data are protected by checksums. So if you read garbage, the checksum should tell you. You can't avoid reading garbage in the general case, because there is always a race between the data and its checksum being written, as no locking is implemented. The total amount of data can change, and in particular shrink. I believe the memory-mapped file is of constant size, though.

> Ok, I think at least it is better to write the data before writing the header; that sounds like a safer approach than the other way round.
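The thread's point is that a checksum-validating reader turns a torn read into a detectable (and retryable) failure. A hedged sketch of such a reader, assuming the layout from the diff (`b'DATASOURCES'` magic followed by a big-endian `>LLLQ` header), that each datasource value is 8 bytes, and that the data checksum covers exactly the value bytes; the real rrdd protocol may checksum a slightly different span, and `try_read` is a hypothetical helper, not an actual consumer API:

```python
from struct import unpack, calcsize
from zlib import crc32

HEADER_MAGIC = b'DATASOURCES'
HEADER_FMT = ">LLLQ"  # data_checksum, metadata_checksum, count, timestamp

def try_read(buf):
    """Return the parsed header and data if the data checksum matches,
    else None (torn read: the writer raced us, caller should retry)."""
    if not buf.startswith(HEADER_MAGIC):
        return None
    start = len(HEADER_MAGIC)
    end = start + calcsize(HEADER_FMT)
    data_checksum, metadata_checksum, count, timestamp = unpack(
        HEADER_FMT, buf[start:end])
    data = buf[end:end + 8 * count]  # assumption: 8 bytes per value
    if crc32(data) & 0xffffffff != data_checksum:
        return None
    return (data_checksum, metadata_checksum, count, timestamp, data)
```

With the PR's data-before-header ordering, a header that validates can only be vouching for payload bytes that were flushed before it.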
```diff
         self.dest.write(pack(">L", len(metadata_json)))
         self.dest.write(metadata_json)
         self.dest.flush()
```
MarkSymsCtx marked this conversation as resolved.
```diff
+        # Now write the updated header
+        self.dest.seek(0)
+        self.dest.write(encoded_datasource_header)
+        self.dest.write(pack(">LLLQ",
+                             data_checksum,
+                             metadata_checksum,
+                             len(self.datasources),
+                             timestamp))
+        self.dest.flush()
         self.datasources = []
         time.sleep(
             0.003)  # wait a bit to ensure wait_until_next_reading will block
```
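Pulled out of the diff, the ordering the PR adopts can be sketched as a standalone function: write the payload first, flush, and only then overwrite the header carrying the checksums, so the previous header stays self-consistent until the new payload is durable in the buffer. `write_snapshot` is a hypothetical helper (not xcp-rrdd's actual API), and the assumption that the data checksum covers the concatenated value bytes is illustrative:

```python
from struct import pack
from zlib import crc32
import io
import json
import time

MAGIC = b'DATASOURCES'
HEADER_LEN = len(MAGIC) + 20  # 20 bytes for 32 + 32 + 32 + 64 bits

def write_snapshot(dest, data_values, metadata):
    """Write values + metadata first, header (with checksums) last."""
    data_checksum = crc32(b''.join(data_values)) & 0xffffffff
    metadata_json = json.dumps(metadata, sort_keys=True).encode('utf-8')
    metadata_checksum = crc32(metadata_json) & 0xffffffff

    # 1) Payload first, leaving the old, still self-consistent header intact.
    dest.seek(HEADER_LEN)
    for val in data_values:  # values are already big-endian encoded
        dest.write(val)
    dest.write(pack(">L", len(metadata_json)))
    dest.write(metadata_json)
    dest.flush()

    # 2) Header last: a checksum-validating reader never sees a header that
    #    vouches for payload bytes which have not yet been written.
    dest.seek(0)
    dest.write(MAGIC)
    dest.write(pack(">LLLQ", data_checksum, metadata_checksum,
                    len(data_values), int(time.time())))
    dest.flush()

buf = io.BytesIO()
write_snapshot(buf, [pack(">Q", 42)], {"datasources": {}})
```

As noted in the thread, this does not eliminate the race (there is no locking), but it narrows it: the only inconsistent state a reader can observe is one the checksums will reject.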