- To build a multimodal agent that can interact with its own PC: it can autonomously operate the mouse and click anywhere on the screen, rather than relying solely on browser analysis to make decisions (a minimal sketch of this kind of control follows this list).
- This agent system will be used for subsequent reinforcement learning training of agents.
- I have not yet conducted large-scale testing of this agent beyond the benchmark; please feel free to report any bugs or submit pull requests. 👋 So far it has only been tested on a Linux server with an NVIDIA Tesla GPU (A100, H200, ...); the GPU is used for open-source model inference. There may be some bugs on Mac/Windows.
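As a rough illustration of what "operating the mouse and clicking anywhere on the screen" means in practice, here is a minimal sketch of OS-level mouse control using the pyautogui library. This is an assumption for illustration only, not necessarily how InfantAgent implements its mouse tools; the coordinates and file name are hypothetical.

```python
# Minimal sketch of OS-level screen observation and mouse control (illustration
# only; InfantAgent's actual tooling may differ). pyautogui is one common
# Python library for driving the mouse outside the browser.
import pyautogui

shot = pyautogui.screenshot()            # capture the desktop for a vision model to analyze
shot.save("observation.png")             # hypothetical file name

# Suppose a visual-grounding model returned a target pixel for some UI element.
target_x, target_y = 640, 400            # hypothetical coordinates
screen_w, screen_h = pyautogui.size()    # full desktop resolution
target_x = min(target_x, screen_w - 1)   # keep the click inside the screen
target_y = min(target_y, screen_h - 1)

pyautogui.moveTo(target_x, target_y, duration=0.3)
pyautogui.click()                        # click anywhere on the desktop, not just inside a browser
```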
- Set up the environment:

```bash
cd InfantAgent
conda create --name infant python=3.11
conda activate infant
conda install -c conda-forge uv
uv pip install -e .
```
- Pull the Docker image (only required on first use; it pulls the image from Docker Hub):

```bash
docker pull winsonchen108/ubuntu-gnome-nomachine:latest
```
- Run:

```bash
export CUDA_VISIBLE_DEVICES=3  # for visual-grounding model inference
uvicorn backend:app --log-level info
```
- Configure the virtual machine (you only need to do this once, the first time you use it):
  - Enter your API key in the settings. By default, you should enter the Claude API key. You can also change this in the `config.py` file.
  - Wait for the backend to configure everything automatically until you see the following instruction: `For first-time users, please go to https://localhost:4443 to set up and skip unnecessary steps.`
  - Go to https://localhost:4443, skip the security warning (this is HTTPS, not HTTP), and you will see the Linux desktop.
  - Go back to the terminal and press `Enter` to dismiss this reminder: `When the computer setup is complete, press Enter to continue`.
  - Go back to the frontend and refresh the page. In the upper-right corner of the virtual machine, enter your username and password. By default, the username is `infant` and the password is `123`. You can also change these in the `config.py` file (a hypothetical sketch of these entries follows below).
Now the agent is ready to use, and you don't need to configure the virtual machine again as long as the container still exists.
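For reference, the relevant `config.py` entries might look roughly like the sketch below. The field names are assumptions for illustration only; check the actual `config.py` in the repository for the exact keys.

```python
# Hypothetical sketch of config.py entries (field names are illustrative
# assumptions; consult the real config.py for the exact keys).
API_KEY = "sk-ant-..."     # Claude API key used by default
VM_USERNAME = "infant"     # virtual-machine login name (default: infant)
VM_PASSWORD = "123"        # virtual-machine password (default: 123)
```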
(Demo video: demo.mp4)
- Add: more emoji and a more user-friendly front end.
Thanks to the many outstanding open-source projects and models:
- OpenHands: Our Docker container's configuration, connection setup, and Jupyter execution method are based on OpenHands, and we used the OpenHands testbed for SWE-Bench testing.
- browser-use: Our web-browsing tools are modified from browser-use.
- docker-ubuntu-gnome-nomachine: We modified this project's code to set up the NoMachine display.
- UI-TARS: We use UI-TARS-1.5 7B as our default visual-grounding model.
```bibtex
@misc{lei2025infantagentnextmultimodalgeneralistagent,
      title={InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction},
      author={Bin Lei and Weitai Kang and Zijian Zhang and Winson Chen and Xi Xie and Shan Zuo and Mimi Xie and Ali Payani and Mingyi Hong and Yan Yan and Caiwen Ding},
      year={2025},
      eprint={2505.10887},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.10887},
}
```