Add concurrency for local mode #881
Conversation
This PR uses coroutines to implement concurrency. Coroutines may not be the best approach; multiprocessing would be better. My analysis is that the whole translation process can be split into three steps: of these, the second step, if it goes through an API, is an I/O-bound task, and coroutines can accelerate that part well. The first and third steps are compute-bound, and coroutines, by their mechanism, cannot speed up compute-bound work much. For now, though, the speedup from coroutines feels sufficient for our needs; we can upgrade to multiprocessing later if larger batch-translation workloads appear.
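To make the trade-off concrete, here is a minimal sketch of the coroutine pattern being described: a semaphore caps the number of in-flight API calls, so only the I/O-bound step overlaps. All names here are hypothetical illustrations, not the PR's actual code.

```python
import asyncio

CONCURRENCY = 3  # the PR's default

async def translate_page(sem: asyncio.Semaphore, page: str) -> str:
    """Hypothetical per-page pipeline; only the I/O-bound step benefits."""
    async with sem:
        # Step 2: I/O-bound API round-trip; awaiting lets other pages proceed.
        await asyncio.sleep(1.0)  # stand-in for the translation API call
    return f"{page} (translated)"

async def main(pages: list[str]) -> list[str]:
    sem = asyncio.Semaphore(CONCURRENCY)  # cap in-flight API calls
    return await asyncio.gather(*(translate_page(sem, p) for p in pages))

if __name__ == "__main__":
    print(asyncio.run(main([f"page_{i}" for i in range(10)])))
```

Under this pattern the compute-bound first and third steps still run on a single thread, which matches the analysis above: coroutines overlap the waiting, not the computing.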
    
@liyuheng55555 Shouldn't this work in server mode too? It was meant to run on multiple servers instead of one, so I didn't test it and only registered a single instance. I thought that a single instance would just allocate all the resources.
    
@frederik-uni I recently started participating in this project, so there are many things I still don't understand. Is the server mode you're referring to the MangaTranslatorWS class? This sounds like a distributed system, or is it?
    
@liyuheng55555 No, the server is a separate module outside of the translator; the shared module is used by the server. It allows one connection and lets the caller execute functions within the instance, using pickle to send attribute values. I would just start another instance of the translator. This might take up some extra memory, since another Python runtime will be running, but it won't cause any blocking issues or leave temporary variables stored in the translator object. The only remaining concern is that TensorFlow/PyTorch allocating the GPU/CPU could cause problems; I know that working with notebooks can trigger that issue. Not sure if it will be a problem here, though.
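As a rough illustration of the pattern described above (a single connection, with attribute values pickled back to the caller), here is a generic sketch using Python's `multiprocessing.connection`, which pickles messages automatically. This is my own illustration of the idea, not the project's actual server or shared module; the address, authkey, and function names are all assumptions.

```python
from multiprocessing.connection import Client, Listener

ADDRESS = ("localhost", 6000)  # assumed address, not the project's

def serve(instance) -> None:
    """Accept a single connection and answer pickled attribute requests."""
    with Listener(ADDRESS, authkey=b"demo") as listener:
        with listener.accept() as conn:
            try:
                while True:
                    name = conn.recv()                  # unpickle the request
                    conn.send(getattr(instance, name))  # pickle the value back
            except EOFError:
                pass  # client closed the connection

def fetch(name: str):
    """Client side: ask the running instance for one attribute value."""
    with Client(ADDRESS, authkey=b"demo") as conn:
        conn.send(name)
        return conn.recv()
```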
    
@frederik-uni Thanks for your explanation. I'll continue looking into it tomorrow.
    
Has anyone noticed that batch mode has no logger? I've been using an old version, so I don't know when it was removed, but it seems to have been gone for a while; did nobody find that strange? After adding the logger back, I found that fetching files from subfolders is broken and needs to be handled.
    
          
@popcion I noticed it, but assumed it was because I hadn't enabled some option XD. OK, I'll take care of it later.
    
The folder issue, you mean?
    
Any issues left? If not, I'll merge it.
    
Default concurrency is 3.
Use --concurrency to set it.
Based on my tests, a concurrency of 3 speeds up execution by approximately 2.5×, while a concurrency of 10 achieves around a 3.2× speedup; therefore, the default concurrency level is set to 3.
My test environment is a MacBook Air M3 with the deepseek-chat API.
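A hypothetical sketch of how such a flag could feed the semaphore bound (the --concurrency name and default of 3 come from this PR; everything else is assumed, not the actual implementation):

```python
import argparse
import asyncio

parser = argparse.ArgumentParser()
# Default of 3 per the benchmark above; --concurrency overrides it.
parser.add_argument("--concurrency", type=int, default=3,
                    help="number of pages translated in parallel")
args = parser.parse_args()

# The parsed value bounds the number of in-flight translation coroutines.
semaphore = asyncio.Semaphore(args.concurrency)
```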
Other discussion at: #870