
Questions about architecture details and evaluation criteria #185

@Lim-Sung-Jun


Hi, thank you for your great work!

I have a few questions regarding some implementation details:

  1. In the methodology section, you describe the coordination protocols (Star, Tree, Chain, Graph), but could you provide more architectural details? Specifically:
  • How many agents were used in each setup?
  • What roles did the agents play?
  • How did they communicate under each topology across different benchmarks?
  • Were the agent configurations (e.g., number and roles) tuned individually for each task?
  2. For the main experiment, what was the evaluation criterion?
  • Was the final result averaged across the different multi-agent architecture configurations (e.g., star, chain, tree, graph) for each benchmark task, or was it computed by selecting one configuration per task?
  3. In the ablation studies, Figure 5 shows results from the research scenario, while Figure 7 shows results from Minecraft.
  • Are the insights (e.g., optimal number of iterations or coordination strategy) intended to generalize across all scenarios, or are they specific to each individual task?
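To make question 1 concrete, here is a minimal sketch of how the four topologies might wire agents together. The agent names, the four-agent count, and the role assignments are illustrative assumptions on my part, not details from the paper; I am mainly asking which of these (or something else) matches your actual setup.

```python
# Hypothetical sketch of the four coordination topologies over 4 agents.
# Agent names/roles and the 4-agent count are assumptions for illustration,
# not taken from the paper.
AGENTS = ["planner", "coder", "critic", "executor"]

# Each topology is a directed adjacency list: who sends messages to whom.
TOPOLOGIES = {
    # Star: a central hub communicates with every other agent.
    "star": {"planner": ["coder", "critic", "executor"]},
    # Chain: messages flow down a fixed pipeline.
    "chain": {"planner": ["coder"], "coder": ["critic"],
              "critic": ["executor"]},
    # Tree: the root delegates to children, which delegate further down.
    "tree": {"planner": ["coder", "critic"], "critic": ["executor"]},
    # Graph: arbitrary connections, possibly with cycles (feedback loops).
    "graph": {"planner": ["coder"], "coder": ["critic"],
              "critic": ["planner", "executor"], "executor": ["planner"]},
}

def message_channels(topology: dict) -> int:
    """Count the directed communication channels in a topology."""
    return sum(len(dsts) for dsts in topology.values())
```

For instance, under this sketch the star, chain, and tree each have 3 directed channels while the graph has 5, which is the kind of per-topology detail (agent count, roles, channel structure per benchmark) I am hoping you can clarify.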

Looking forward to your clarification. Thanks again!
