Improve performance when a file has many matching lines #1


Open
krobelus opened this issue Apr 4, 2022 · 2 comments

krobelus commented Apr 4, 2022

We open and close a file for every line. This can be needlessly slow if one file has lots of matching lines. I wonder if there's an elegant way to optimize that.

jtrv (Owner) commented May 16, 2022

Sorry to leave this without a response; it's been in the back of my mind and I was unsure of how to approach it.

Ideally I would like to diff the grep buffer and apply only the inserted lines from the diff. I'm not quite sure how best to implement this cleanly.
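A minimal sketch of the diff idea, assuming the grep buffer uses the usual `file:line:text` format (the filenames and format here are illustrative, not the plugin's actual code): keep a copy of the original grep output, diff it against the edited buffer with zero context, and keep only the `+` lines, which are exactly the edits that need applying.

```shell
# Hypothetical example data: the grep buffer as produced, and the same
# buffer after the user edited one line.
printf 'a.txt:1:foo\na.txt:2:bar\n' > orig.grep
printf 'a.txt:1:foo\na.txt:2:BAZ\n' > edited.grep

# Zero-context unified diff; '+' lines (excluding the '+++' file header)
# are the changed buffer lines, i.e. the only edits worth applying.
diff -U0 orig.grep edited.grep | grep '^+[^+]' | cut -c2- > changed
```

With this, unchanged lines never touch their files at all; only `a.txt:2:BAZ` would be applied.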

Alternatively, we could pipe the buffer out to a faster replacement tool like awk, which should be much faster than scripting Kakoune to open files and replace lines the way we do now.

https://unix.stackexchange.com/questions/677578/sed-replace-line-number-with-new-text-and-variables-in-text-file

I still need to look into this more...
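The awk approach above could be sketched roughly like this, assuming `file:line:text` edit lines, POSIX awk, and filenames without `:` (the sample data and script are illustrative assumptions, not the plugin's implementation). The point is one awk pass per file instead of one file open per matching line:

```shell
# Sample target file and a hypothetical batch of edits ("file:line:text").
printf 'one\ntwo\nthree\nfour\n' > notes.txt
edits=$(mktemp)
cat > "$edits" <<'EOF'
notes.txt:2:TWO
notes.txt:4:FOUR
EOF

# Group edits by file, then rewrite each file in a single awk pass.
cut -d: -f1 "$edits" | sort -u | while read -r file; do
    tmp=$(mktemp)
    # First input (stdin): "line:text" pairs, stored in repl[] keyed by
    # line number. Second input: the file; matching lines are swapped out.
    grep "^$file:" "$edits" | cut -d: -f2- |
        awk -F: 'NR==FNR { repl[$1] = substr($0, index($0, ":") + 1); next }
                 FNR in repl { print repl[FNR]; next }
                 { print }' - "$file" > "$tmp"
    mv "$tmp" "$file"
done
rm -f "$edits"
```

This touches each file once regardless of how many lines match, which is where the current approach loses time. Using `substr` rather than `$2` keeps replacement text containing `:` intact.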

@jtrv jtrv pinned this issue May 19, 2022
krobelus (Author) commented

No worries. I actually don't know what kind of solution I want. I rarely need it these days (usually lsp-rename does the job), but it can be handy sometimes.


2 participants