Update the support export tool to gather Patroni logs #117
Conversation
Force-pushed from be32d7b to 44f6d70
if stderr != "" {
	writeDebug(cmd, stderr)
}

if strings.Contains(stderr, "No such file or directory") {
	writeDebug(cmd, "Cannot find any Patroni log files. This is acceptable in some configurations.\n")
}
If the stderr is "No such file or directory", we'll print two lines for this one error, right? Is that what we want?
I think so. It follows our existing pattern and only shows up in the DEBUG logs. For reference, here's what you would see in cli.log:
2024-12-18 15:15:31.620 -0500 EST - DEBUG - Error getting Patroni logs
2024-12-18 15:15:31.620 -0500 EST - DEBUG - command terminated with exit code 2
2024-12-18 15:15:31.620 -0500 EST - DEBUG - ls: cannot access 'pgdata/patroni/log/*': No such file or directory
2024-12-18 15:15:31.620 -0500 EST - DEBUG - Cannot find any Patroni log files. This is acceptable in some configurations.
Hmmm, if that's the pattern we've got, I'm fine with it (with the plan to come back and review our logging patterns).
In the other cases, the check at strings.Contains(stderr, "No such file or directory") was necessary because the lack of logs was not necessarily an erroneous event. That extra information confirms there are no files, not that we encountered an error getting them. (If I remember correctly.)
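The distinction described above can be sketched as a small helper. This is an illustrative, hypothetical function (the real code calls writeDebug inline rather than going through a classifier): the exit code alone is ambiguous, so the stderr text is inspected to tell "the log files simply do not exist" apart from a genuine failure.

```go
package main

import (
	"fmt"
	"strings"
)

// classifyStderr is a hypothetical helper illustrating the pattern above:
// `ls` exits non-zero both when files are missing and on real errors, so
// the stderr text decides which debug message to emit.
func classifyStderr(stderr string) string {
	if strings.Contains(stderr, "No such file or directory") {
		// Missing files are acceptable in some configurations.
		return "no log files (acceptable in some configurations)"
	}
	return "error getting Patroni logs"
}

func main() {
	fmt.Println(classifyStderr("ls: cannot access 'pgdata/patroni/log/*': No such file or directory"))
	fmt.Println(classifyStderr("permission denied"))
}
```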
Do you think it's worth adding to the support export kuttl test?
I considered it, but it doesn't look like the goal of that test is to test every item we collect, especially when there's little variance in what's being done. Perhaps the approach should be updated?
Curious if @philrhurst has any feedback on this.
No blockers for me, just a question about our help text/documentation.
Ensures that the on-volume Patroni log file is exported, if that file exists. If the PostgresCluster is not configured to create this file, note in the debug logs that this is acceptable for some configurations. Issue: PGO-1701
Force-pushed from 44f6d70 to 6f52eb0
writeDebug(cmd, fmt.Sprintf("LOG FILE: %s\n", logFile))
var buf bytes.Buffer

stdout, stderr, err := Executor(exec).catFile(logFile)
I'm fine with this call to catFile, but I am concerned that as the Patroni logs grow, we will encounter issues similar to the Postgres logs: the entire file cannot be held in memory. We stream the PG logs to local disk to get around that issue. We've got good error handling here, so I don't think we should refactor for that now, but it's something to keep in mind.
For the Patroni logs, there is a defined storage limit, but I agree this could still be a problem for larger file sizes. I'm wondering if the right answer is to do some refactoring and combine the various functions that pull logs from the instance Pods so they all benefit from the streaming implementation?
Since this isn't a blocker, I'm going to merge this PR now but I'll (hopefully) have a refactor PR up soon for consideration.
Ensures that the on-volume Patroni log file is exported, if that file exists. If the PostgresCluster is not configured to create this file, note in the debug logs that this is acceptable for some configurations.
Issue: PGO-1701