Make RuleManager.runNow() run in the dedicated rule thread #5069
Conversation
Not an argument against this, but it is going to be abused even more than it already is. People already try to treat UI Script rules as a way to define a private library of common "functions" to call from their rules. Still, having their rules slow way down is certainly better than throwing multithreaded exceptions all over the place, though it will also lead to subtle "why are my rules so slow?" problems that are hard to diagnose. Removing the blocking could perhaps help a little, but that could also be a breaking change if anyone depends on the current blocking behavior.
I don't mean to take this completely off topic, so this "suggestion" should probably be discussed somewhere else. It's more a vague idea than an actual suggestion, so I'll just throw out the little I have here, and if somebody wants to pick it up, it can be done elsewhere. The subject of "libraries" with shared functions was briefly mentioned here the other day: openhab/openhab-js#490 (comment). It wasn't really serious from my side, and @florian-h05 answered that people could just create their own npm modules. That may be, but if people keep using rules as a substitute, maybe that isn't easy enough for many to handle. I'm thinking that it could be possible to make another, I'm not sure what it's called in JS, "annotation-based entity" like
I think it is essential to document what can currently be executed in parallel, and where sequential execution is enforced (web page widget actions, rules, scripts, transformations, profiles). Besides, I think the parallelism should be increased, allowing one and the same rule, transformation or profile to run in parallel with itself. But in case this consumes more resources, it should be possible to specify in the rule configuration whether a single-thread rule is created or one that can run in parallel with itself once triggered (likewise for transformations). I do not think it is a problem if a rule (or transformation) can be re-run while it is executing, as long as this is documented and examples show how to set locks. That said, I am against the idea of altering runNow() to further restrict parallelism. But I am in favour of adding a per-rule configuration that allows or disallows running that rule in parallel with itself (which could be overridden by a parameter to runNow()).
I don't think you quite understand: OH can't handle rules running in parallel with themselves; that's why the single-threaded execution is there in the first place. It's not just about rule scripts and synchronization/locks, it's about many other things as well. Triggers, Conditions and Actions run sequentially, and are supposed to do so; they pass results on to each other. All of this would have to be redesigned to allow the same rule to run "in parallel with itself".

This PR doesn't "restrict parallelism", it prevents bugs. Even if the same rule were allowed to run multiple times at once in the future, this PR wouldn't prevent that, since all it does is make sure that runNow() executes in the rule's dedicated thread. Also note that this has nothing to do with different rules running in parallel; it's only about preventing the same rule from "running in parallel with itself".

When it comes to transformations and profiles, I completely agree that they should be able to run in parallel, and I think they mostly do; the exception might be those using script engines that have an engine lock. Whether that could be solved in a better way, I'm not really in a position to answer. Theoretically, I would think it should be possible to compile code and run it in separate, independent contexts. They shouldn't have any "state" anyway, IMO; they should apply all their logic there and then, following the rules of the transformation, independently of what other transformations do. But some might disagree with this.

When it comes to rules running in parallel with themselves, I'm not quite sure that would be useful, even as a per-rule setting. Regardless, making such a setting possible would take quite a lot of refactoring.
Signed-off-by: Ravi Nadahar <nadahar@rediffmail.com>
Force-pushed from 2fef8ef to 7d38009
What if the transformation depends on, for example, the previous value, or the previous five values, or how long ago the previous update was done? You can't cover those use cases without holding state. As for individual rules running in parallel: been there, done that. It was a disaster.
This ends up being a matter of design choices/philosophy. As I think of transformations, they should be (normally small/quick) pure functions, because that makes them completely independent and "free": they don't have to coordinate with anything else and can run at their own pace, in parallel with themselves and other parts of the system. The lack of state means there are things you can't do; it always does. The current "functional programming craze" has the same problem, though a lot of people seem to ignore it. There are pros and cons: the pros are speed and simplicity, the cons are flexibility and capability. In this case, I think it's natural that this divide falls between a transformation and a rule. You can make rules that update Items using more complex, stateful evaluations. So those relatively rare cases where that is needed should probably be handled with rules, freeing transformations to be quick and free of concurrency concerns.
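To illustrate the distinction being argued here, the following is a minimal sketch (the class and method names are made up for illustration, not openHAB API): a pure transformation depends only on its input and can be called from any number of threads without coordination, while a stateful one, such as a moving average over the last few values, must guard its state and therefore serializes its callers.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Locale;

public class TransformationStyles {

    // Pure transformation: output depends only on the input, so any number
    // of threads can call it concurrently without any locking.
    static String scaleToPercent(String raw) {
        double value = Double.parseDouble(raw);
        return String.format(Locale.ROOT, "%.0f %%", value * 100.0);
    }

    // Stateful transformation: a moving average over the last 5 values.
    // The shared window forces callers to be serialized (synchronized).
    static final Deque<Double> window = new ArrayDeque<>();

    static synchronized String movingAverage(String raw) {
        window.addLast(Double.parseDouble(raw));
        if (window.size() > 5) {
            window.removeFirst();
        }
        double avg = window.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
        return String.format(Locale.ROOT, "%.2f", avg);
    }

    public static void main(String[] args) {
        System.out.println(scaleToPercent("0.42")); // 42 %
        System.out.println(movingAverage("1.0"));   // 1.00
        System.out.println(movingAverage("3.0"));   // 2.00
    }
}
```

The stateful variant is exactly the kind of use case rkoshak describes above; the argument is only about whether it belongs in a transformation or in a rule.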
You call it a craze, but Lisp is one of the original programming languages (created in 1958). Functional programming is nothing new. And it's pretty awesome at some types of problems, and not so good at others, same as any other programming paradigm.
This divide has never existed. The lack of it is one of the two major reasons why I argued against Profiles back when they were first introduced (the second reason was that they could only be used if there is a Link). The maintainers disagreed, and we have profiles now. Anyone is free to make their transformations stateless, but it's far too late to enforce that without breaking a lot of users' systems.
I've never said that it doesn't have any merit, nor that it's new. In fact, what is traditionally called static utility methods in Java, for example, is basically the same thing. I also like immutability and avoid state where I can; I don't have any problem with any of this. The problem is that these days it has become a "fashion", and "everything" suddenly has to be done this way or be considered "outdated", regardless of whether it makes sense. One of the main reasons it hasn't taken over, despite having been around for a long time, is that it's often a very inefficient way to do things. It's easy for the developer because it helps keep concerns separated, but it generates a lot of function calls, which are expensive, and let's not even start on recursion. It's also very difficult to debug when things do go wrong. So my only issue is with the "trend", not the principles as such, when used where appropriate.
OK, I just assumed, because having stateful transformations doesn't make any sense to me. That said, if that is water under the bridge, so is making them run efficiently. I don't know if there's a way to "tweak" this to enable stateless behavior. It doesn't help if people make their transformations stateless when the system has to take precautions as if they weren't. You still have to live with the limitations.
I expected some more discussion on this issue. The one thing I want to know is: is this something OH wants? If so, I'll convert this into a PR. If somebody wants to test it, I'd be happy; I've done basic testing, and it appears to work for me. @florian-h05 I know you're busy, but do you have an opinion on whether this is a good thing? In "my world" this is a bug fix, but as with anything that changes behavior, I'm sure some will have found a way to rely on the current behavior as well.
As I said above, slow rules are better than MultiThreadedExceptions, even though the root cause is an end user misusing Scripts. So I would say this is a net improvement and would like to see it added.
As long as the call to run the rule behaves the same as it does now (i.e. it blocks), I don't think it would be a breaking change. This enables a new use case; it doesn't require end users to change their code to continue implementing current use cases. Nothing really breaks.
I'm thinking, what if somebody has, e.g. from Rules DSL, successfully implemented locks and uses timers to mass-run some rule in parallel? This will "break" that scenario.
That probably wouldn't work anyway unless the called rule was stateless. Otherwise, each call would interfere with the already-running instances, because even the consts/vals are shared between runs; they would be overwriting each other's variables. I think the likelihood that anyone is successfully doing this is low to the point of being near zero.
I've taken this out of draft then. Let's see how the maintainers feel about it...
Sorry for chiming in a bit late. WRT transformations and profiles: please note that only SCRIPT transformations and profiles have the limitation of running "sequentially". You can also provide custom transformations and profiles from scripts by registering some OSGi services (I would have to look that up though); IIRC JRuby supports this through its helper library. Then you can decide whether to allow parallelism or not.
Running a rule manually, from another rule, from the web UI, from a timer, or in any other way not initiated by a Trigger, has long been mentioned as problematic in discussions. This changes RuleManager.runNow(), which I assume is the source of all these "non-triggered" executions, to use the same rule thread that rules use when triggered. I have done basic testing, and it seems to work well when I test it. I'm however uncertain of what implications this might have on the system, and feel that further testing/evaluation is needed.
@rkoshak, @florian-h05 and anybody else that might have some knowledge on the topic, please chime in.
The call to runNow() will block until execution has completed in the rule's dedicated thread. This means that if the rule thread is already busy with something else, there could be additional waiting before runNow() returns. This might have implications that are hard for me to evaluate. At the same time, it should solve all the concurrency issues that arise when a rule is run outside its dedicated rule thread.
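The blocking behavior described above can be sketched with a plain `ExecutorService` per rule. This is an illustrative sketch only, assuming a hypothetical `RuleExecutorSketch` class; it shows the "submit to the rule's dedicated single thread and block until done" pattern, not openHAB's actual implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the pattern the PR describes, not openhab-core code.
public class RuleExecutorSketch {

    // One single-threaded executor per rule UID guarantees that a rule
    // never runs in parallel with itself, no matter who invokes it.
    private final Map<String, ExecutorService> ruleThreads = new ConcurrentHashMap<>();

    public void runNow(String ruleUID, Runnable ruleBody) {
        ExecutorService executor = ruleThreads.computeIfAbsent(
                ruleUID, uid -> Executors.newSingleThreadExecutor());
        Future<?> result = executor.submit(ruleBody);
        try {
            // Block the caller until the rule's thread has run the body.
            // If that thread is busy with a triggered execution, the call
            // queues behind it; this is the "additional waiting" trade-off.
            result.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (ExecutionException e) {
            throw new IllegalStateException("Rule execution failed", e.getCause());
        }
    }
}
```

Because each call queues behind any already-running execution of the same rule, the caller may block longer than before, but two executions of the same rule can no longer overlap.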