
Conversation

adam-fowler (Collaborator)

@fabianfett @Joannis

As discussed on Discord

    try await resp3Upgrade(outbound: outbound, inboundIterator: &inboundIterator)
}
for await (request, continuation) in requestStream {
    do {
Collaborator

This shows quite nicely that working with subscriptions at the same time can get weird: we don't want to have to wait for a write in order to read.

Collaborator Author

I guess in this situation a subscription would change the state of the pipeline and turn it into a loop returning the published content. The difficulty is with RESP3, where you can continue to send commands.

Collaborator

Not sure I follow you here. Sure, we can add another case for subscriptions. The problem is that we might not consume those messages if we are stuck writing.
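
To make the read/write concern concrete, here is a minimal, self-contained Swift sketch, not this library's implementation; every name below is a stand-in. The reader and writer run as separate child tasks, so inbound pushes keep being consumed even while a write is stalled.

let (inbound, inboundCont) = AsyncStream.makeStream(of: String.self)
let (requests, requestCont) = AsyncStream.makeStream(of: String.self)

await withTaskGroup(of: Void.self) { group in
    group.addTask {
        // Reader: drains inbound tokens (replies and subscription pushes)
        // without ever waiting on the writer.
        for await push in inbound {
            print("received:", push)
        }
    }
    group.addTask {
        // Writer: only encodes and flushes outgoing commands; a slow flush
        // here does not stall the reader above.
        for await request in requests {
            try? await Task.sleep(nanoseconds: 1_000_000_000)  // simulate a slow write
            print("sent:", request)
        }
    }
    inboundCont.yield("pushed subscription message")
    requestCont.yield("SUBSCRIBE channel")
    inboundCont.finish()
    requestCont.finish()
}

Structured this way, a pending write never prevents subscription messages from being drained, whereas the single request loop quoted above has to interleave both concerns in one task.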

fabianfett (Collaborator) left a comment

Three small comments. All in all I really like this!

public static func clusterSlaves(nodeId: String) -> RESPCommand {
    RESPCommand("CLUSTER", "SLAVES", nodeId)
/// Returns the mapping of cluster slots to shards.
public struct CLUSTERSHARDS: RedisCommand {
Collaborator

Have you considered using namespaces here, so that we get CLUSTER.SHARDS?

Collaborator Author

It will be a little more complex, but it should be easy enough.
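
For reference, a rough sketch of the namespace idea; the RedisCommand, RESPToken, and RESPCommand declarations below are simplified stand-ins rather than the generated code, and exist only to show the CLUSTER.SHARDS spelling.

// Simplified stand-ins for the real types in this package.
public protocol RedisCommand { associatedtype Response }
public struct RESPToken {}
public struct RESPCommand { public init(_ parts: String...) {} }

/// Caseless enum used purely as a namespace for the CLUSTER family.
public enum CLUSTER {
    /// Returns the mapping of cluster slots to shards.
    public struct SHARDS: RedisCommand {
        public typealias Response = [RESPToken]
        public var command: RESPCommand { RESPCommand("CLUSTER", "SHARDS") }
    }
}

// Call sites then read CLUSTER.SHARDS() instead of CLUSTERSHARDS().
let shards = CLUSTER.SHARDS()

Because a caseless enum has no instances, it acts purely as a scope, which matches the "Enum namespaces for containers" item in the merged commit below.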

RESPCommand("CLUSTER", "SLAVES", nodeId)
/// Returns the mapping of cluster slots to shards.
public struct CLUSTERSHARDS: RedisCommand {
public typealias Response = [RESPToken]
Collaborator

Can we autogen the responses here as well?

Collaborator Author

The resp3_replies.json file is a bit shit: it does not define responses particularly well. The text for this one is:

"[Array reply](/docs/reference/protocol-spec#arrays): a nested list of [Map reply](/docs/reference/protocol-spec#maps) of hash ranges and shard nodes describing individual shards."

I was thinking of adding some way to patch the responses, as some of them end up being quite complex.
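
As an illustration of what a patched response could look like for this reply (none of the type or property names below come from the PR; they simply model the quoted reply text), a hand-written shape might be:

/// Hypothetical model of one entry in the CLUSTER SHARDS reply.
public struct ClusterShard {
    /// Hash slot ranges owned by this shard, e.g. 0...5460.
    public var slots: [ClosedRange<Int>]
    /// Nodes (master and replicas) serving this shard.
    public var nodes: [ClusterShardNode]
}

/// Hypothetical model of a single node entry inside a shard map.
public struct ClusterShardNode {
    public var id: String
    public var endpoint: String
    public var port: Int
    public var role: String      // "master" or "replica"
    public var health: String    // "online", "failed", "loading"
}

// A response patch would then swap the generated
// `public typealias Response = [RESPToken]` for `[ClusterShard]`,
// together with a decoder for the nested map tokens.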

adam-fowler merged commit 5ccd431 into main on Apr 1, 2025
2 checks passed
adam-fowler deleted the redis-command branch on April 1, 2025 at 16:02
adam-fowler added a commit that referenced this pull request Jul 11, 2025
* Command types

* fix hello 3 command

* writeToRESPBuffer -> encode

* Add support for returning maps

Makes the assumption the map key is always a string

* Enum namespaces for containers

* Catch more response arrays in replies

* Parameter pack return from pipeline

* Update README

* Make sure we parse all responses from pipelined requests
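
The "Parameter pack return from pipeline" item above refers to Swift 5.9 parameter packs. A minimal, self-contained sketch of the idea, using mock command types rather than this package's API, looks like this:

// Mock stand-ins, only here so the sketch compiles on its own.
protocol MockCommand {
    associatedtype Response
    func decode() -> Response
}
struct PING: MockCommand {
    func decode() -> String { "PONG" }
}
struct DBSIZE: MockCommand {
    func decode() -> Int { 42 }
}

/// Accepts any number of commands and returns a tuple with one strongly
/// typed response per command, in order.
func pipeline<each Command: MockCommand>(
    _ command: repeat each Command
) -> (repeat (each Command).Response) {
    return (repeat (each command).decode())
}

let (pong, size) = pipeline(PING(), DBSIZE())
// pong is inferred as String, size as Int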
adam-fowler added a commit that referenced this pull request Jul 14, 2025