Proposal to improve performance
We were running logit_bias with a large dictionary and noticed a significant slowdown in generation. Looking into the implementation, we saw that it applies the biases in a Python for loop:
for token_id, bias in logit_bias.items():
Is there any specific reason it is done this way? From quick tests, a scatter_add seems significantly faster. If this hasn't been considered before, I'll spend some time putting together a proper benchmark and a PR.
On my Mac, with a -100 bias applied to 40k tokens out of a 150k-token vocabulary:
In [48]: %timeit f_for(x, logit_bias)
106 ms ± 2.92 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [49]: %timeit f_scatter(x, logit_bias)
3.74 ms ± 13.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
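For reference, here is a minimal sketch of the two variants being timed (`f_for` and `f_scatter`), assuming 1-D logits and a `token_id -> bias` dict; the shapes and the `scatter_add_` call are illustrative, not vLLM's actual sampler code:

```python
import torch

def f_for(logits: torch.Tensor, logit_bias: dict[int, float]) -> torch.Tensor:
    # Loop-based version: one indexed add per biased token, mirroring the
    # current per-item loop.
    for token_id, bias in logit_bias.items():
        logits[token_id] += bias
    return logits

def f_scatter(logits: torch.Tensor, logit_bias: dict[int, float]) -> torch.Tensor:
    # Vectorized version: build index/value tensors once and apply all
    # biases in a single scatter_add_ over the vocab dimension.
    indices = torch.tensor(list(logit_bias.keys()), dtype=torch.long, device=logits.device)
    values = torch.tensor(list(logit_bias.values()), dtype=logits.dtype, device=logits.device)
    return logits.scatter_add_(0, indices, values)

# Setup roughly matching the numbers above: 150k vocab, -100 bias on 40k tokens.
x = torch.randn(150_000)
logit_bias = {i: -100.0 for i in range(40_000)}
assert torch.allclose(f_for(x.clone(), logit_bias), f_scatter(x.clone(), logit_bias))
```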