[15/n] Pass monarch tensors to actor endpoints, part 1 #518
Open
zdevito wants to merge 2 commits into gh/zdevito/38/base from gh/zdevito/38/head
Conversation
zdevito added a commit that referenced this pull request on Jul 11, 2025
ghstack-source-id: 295768021
Pull Request resolved: #518
This pull request was exported from Phabricator. Differential Revision: D78196701
zdevito added a commit that referenced this pull request on Jul 11, 2025
Pull Request resolved: #518
ghstack-source-id: 295768514
Differential Revision: [D78196701](https://our.internmc.facebook.com/intern/diff/D78196701/)
This pull request was exported from Phabricator. Differential Revision: D78196701
Stack from ghstack (oldest at bottom):
This makes it possible to send a monarch tensor to an actor endpoint that is defined over the same proc mesh as the tensor.
The send is done locally so the actor can do work with the tensors.
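For illustration, a minimal sketch of what this enables from the Python side; the import path, mesh setup, and endpoint below are assumptions for the example, not code from this PR:

```python
import torch
from monarch.actor import Actor, endpoint  # assumed import path

class Worker(Actor):
    @endpoint
    def scale(self, t: torch.Tensor) -> torch.Tensor:
        # By the time the endpoint runs, `t` is a real local torch.Tensor
        # that the stream actor bound into the call.
        return t * 2

# Hypothetical setup: the Worker actors are spawned on the same proc mesh
# that the monarch tensor lives on, so the send happens locally.
# workers = mesh.spawn("worker", Worker)
# x = ...                          # monarch tensor defined over `mesh`
# out = workers.scale.call(x)      # tensor passed straight to the endpoint
```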
The stream actor in the tensor engine is sent a SendResultOfActorCall message, which it forwards to the actor, binding into the message the real local tensors that were passed as arguments.
The stream actor that owns the tensor waits for the called actor to finish, since the actor 'owns' the tensor for the duration of the call.
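A rough sketch of the forwarding step described above, written as Python pseudocode rather than the actual Rust stream actor; every name here is illustrative:

```python
def handle_send_result_of_actor_call(stream, msg):
    # Bind the tensor references carried by the message to the real local
    # tensors that this stream owns.
    bound_args = [stream.lookup_local_tensor(ref) for ref in msg.tensor_refs]

    # Forward the call to the destination actor with the bound tensors.
    pending = msg.actor.send(msg.endpoint, bound_args)

    # The actor 'owns' the tensors for the duration of the call, so the
    # stream waits for completion before it reuses or frees them.
    pending.wait()
```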
Known limitation: the message to the actor can go out of order with respect to other messages sent from the owner of the tensor engine, because the real message is sent from the stream actor. The next PR will fix this limitation by sending both the tensor engine and the actor a message at the same time. The actor will get a 'wait for SendResultOfActorCall' message, at which point it will stop processing any messages except for the SendResultOfActorCall message it is supposed to be waiting for. This way the correct order is preserved from the perspective of both the tensor engine stream and the actor.
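To make the intended fix concrete, here is a hypothetical sketch of how a receiving actor could pause its mailbox until the forwarded call arrives; this is illustrative only and not the monarch actor runtime:

```python
from collections import deque

class OrderedMailbox:
    """Illustrative sketch of the 'wait for SendResultOfActorCall' idea."""

    def __init__(self, handle):
        self.handle = handle          # normal message dispatch
        self.deferred = deque()       # messages held back while waiting
        self.waiting = False

    def receive(self, msg):
        if msg.kind == "WaitForSendResultOfActorCall":
            # Sent directly by the client, in order, announcing that a
            # forwarded call from the stream actor is on its way.
            self.waiting = True
        elif self.waiting and msg.kind != "SendResultOfActorCall":
            # Hold back everything else until the forwarded call shows up.
            self.deferred.append(msg)
        else:
            self.handle(msg)
            if msg.kind == "SendResultOfActorCall":
                self.waiting = False
                # Drain the deferred messages in their original order.
                while self.deferred:
                    self.handle(self.deferred.popleft())
```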
Differential Revision: D78196701