You do NOT need to compile anything if you just want to use the aimbot! Precompiled `.exe` builds are provided for both CUDA (NVIDIA only) and DirectML (all GPUs).
DirectML build

Works on:
- Any modern GPU (NVIDIA, AMD, Intel, including integrated graphics)
- Windows 10/11 (x64)
- No need for CUDA or special drivers!

Recommended for:
- GTX 10xx/9xx/7xx series (older NVIDIA cards)
- Any AMD Radeon or Intel Iris/Xe GPU
- Laptops and office PCs with integrated graphics

Download the DML build: DirectML Release
CUDA build

Works on:
- NVIDIA GPUs: GTX 1660, RTX 2000/3000/4000 series or newer
- Windows 10/11 (x64)

Requires: CUDA 12.8 and TensorRT 10.8 (included in the build).

Not supported: GTX 10xx/Pascal and older (a TensorRT 10 limitation).

Includes both CUDA+TensorRT and DML support (switchable in the settings).

Download the CUDA build: CUDA + TensorRT Release
Both versions are ready to use: just download, unpack, run `ai.exe`, and follow the instructions in the overlay.
- Download and unpack your chosen version (see the links above).
- For the CUDA build, install CUDA 12.8 if it is not already installed.
- For the DML build, no extra software is needed.
- Run `ai.exe`. On first launch, the model will be exported (this may take up to 5 minutes).
- Place your `.onnx` model in the `models` folder and select it in the overlay (HOME key).
- All settings are available in the overlay. Use the HOME key to open/close it.
- Right Mouse Button: Aim at the detected target
- F2: Exit
- F3: Pause aiming
- F4: Reload config
- Home: Open/close overlay and settings
If you want to compile the project yourself or modify code, follow these instructions.
- Visual Studio 2022 Community (Download)
- Windows 10 or 11 (x64)
- Windows SDK 10.0.26100.0 or newer
- CMake (Download)
- OpenCV 4.10.0
  - [For CUDA version]
  - [For DML version]
  - You can use pre-built OpenCV DLLs (just copy `opencv_world4100.dll` to your exe folder)
- Other dependencies: see the list below.

Build configurations:
- DML (DirectML): select `Release | x64 | DML` (works on any modern GPU)
- CUDA (TensorRT): select `Release | x64 | CUDA` (requires a supported NVIDIA GPU; see above)
Before building the project, download and place all third-party dependencies in the following directories inside your project structure:
Required folders inside your repository:
sunone_aimbot_cpp/
└── sunone_aimbot_cpp/
    └── modules/
Place each dependency as follows:
| Library   | Path |
|-----------|------|
| SimpleIni | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/SimpleIni.h |
| serial    | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/serial/ |
| TensorRT  | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/TensorRT-10.8.0.43/ |
| GLFW      | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/glfw-3.4.bin.WIN64/ |
| OpenCV    | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/ |
| cuDNN     | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/ |
- SimpleIni: download `SimpleIni.h` and place it in `modules/`.
- serial: download the serial library (the whole folder). To build it, open `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/serial/visual_studio/visual_studio.sln`:
  - Set C/C++ > Code Generation > Runtime Library to Multi-threaded (/MT)
  - Build in Release x64
  - Use the built DLL/LIB with your project.
- TensorRT: download TensorRT 10.8.0.43 and place the folder as shown above.
- GLFW: download the GLFW Windows binaries and place the folder as shown above.
- OpenCV: use your custom build or the official DLLs (see the CUDA/DML notes below). Place the DLLs either next to your exe or in `modules/opencv/`.
- cuDNN: place the cuDNN files (for the CUDA build) in `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/`
Example structure after setup:
sunone_aimbot_cpp/
└── sunone_aimbot_cpp/
    └── modules/
        ├── SimpleIni.h
        ├── serial/
        ├── TensorRT-10.8.0.43/
        ├── glfw-3.4.bin.WIN64/
        ├── opencv/
        └── cudnn/
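The layout above can be staged up front with a short shell sketch (folder names are taken from the dependency table; the `SimpleIni.h` created here is only an empty placeholder for the real header):

```shell
# Sketch: pre-create the modules layout described above.
# Run from the repository root; paths match the dependency table.
BASE="sunone_aimbot_cpp/sunone_aimbot_cpp/modules"
mkdir -p "$BASE/serial" \
         "$BASE/TensorRT-10.8.0.43" \
         "$BASE/glfw-3.4.bin.WIN64" \
         "$BASE/opencv" \
         "$BASE/cudnn"
# SimpleIni is a single header dropped directly into modules/:
touch "$BASE/SimpleIni.h"   # placeholder; replace with the real header
ls "$BASE"
```

Each dependency is then unpacked into its folder as described above.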
This section is only required if you want to use the CUDA (TensorRT) version and need OpenCV with CUDA support. For the DML build, skip this step; you can use the pre-built OpenCV DLL.
Step-by-step instructions:
- Download sources.
- Prepare directories:
  - Create:
    - `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/`
    - `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build`
  - Extract `opencv-4.10.0` into `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv-4.10.0`
  - Extract `opencv_contrib-4.10.0` into `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv_contrib-4.10.0`
  - Extract cuDNN to `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn`
- Configure with CMake:
  - Open the CMake GUI
  - Source code: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv-4.10.0`
  - Build directory: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build`
  - Click Configure (choose "Visual Studio 17 2022", x64)
- Enable CUDA options. After the first Configure, set the following:
  - `WITH_CUDA` = ON
  - `WITH_CUBLAS` = ON
  - `ENABLE_FAST_MATH` = ON
  - `CUDA_FAST_MATH` = ON
  - `WITH_CUDNN` = ON
  - `CUDNN_LIBRARY` = `full_path_to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/lib/x64/cudnn.lib`
  - `CUDNN_INCLUDE_DIR` = `full_path_to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/include`
  - `CUDA_ARCH_BIN` = the compute capability of your GPU (see the CUDA Wikipedia page; example for an RTX 3080 Ti: `8.6`)
  - `OPENCV_DNN_CUDA` = ON
  - `OPENCV_EXTRA_MODULES_PATH` = `full_path_to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv_contrib-4.10.0/modules`
  - `BUILD_opencv_world` = ON
- Uncheck:
  - `WITH_NVCUVENC`
  - `WITH_NVCUVID`
- Click Configure again (make sure nothing was reset)
- Click Generate
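If you prefer the command line over the CMake GUI, the same options can be passed as `-D` flags. This is a configuration sketch, assuming the repository-relative paths shown above and an RTX 3080 Ti (`CUDA_ARCH_BIN=8.6`); replace `full_path_to` with your absolute path and adjust the architecture for your GPU:

```shell
cmake -G "Visual Studio 17 2022" -A x64 ^
  -S sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv-4.10.0 ^
  -B sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build ^
  -D WITH_CUDA=ON -D WITH_CUBLAS=ON ^
  -D ENABLE_FAST_MATH=ON -D CUDA_FAST_MATH=ON ^
  -D WITH_CUDNN=ON ^
  -D CUDNN_LIBRARY=full_path_to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/lib/x64/cudnn.lib ^
  -D CUDNN_INCLUDE_DIR=full_path_to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/include ^
  -D CUDA_ARCH_BIN=8.6 ^
  -D OPENCV_DNN_CUDA=ON ^
  -D OPENCV_EXTRA_MODULES_PATH=full_path_to/sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv_contrib-4.10.0/modules ^
  -D BUILD_opencv_world=ON ^
  -D WITH_NVCUVENC=OFF -D WITH_NVCUVID=OFF
```

(`^` is the cmd.exe line continuation; use `\` in PowerShell-free POSIX shells.)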
- Build in Visual Studio:
  - Open `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/OpenCV.sln`, or click "Open Project" in CMake
  - Set the build config: x64 | Release
  - Build the `ALL_BUILD` target (this can take up to 2 hours)
  - Then build the `INSTALL` target
- Copy the resulting DLLs:
  - DLLs: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/x64/vc16/bin/`
  - LIBs: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/x64/vc16/lib/`
  - Includes: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/include/opencv2`
  - Copy the needed DLLs (`opencv_world4100.dll`, etc.) next to your project's executable.
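The copy step can be sketched in shell as follows; the `mkdir`/`touch` lines only simulate an already-finished OpenCV install tree for the demo, and the `build_output` folder name is an assumption (use whatever folder actually contains `ai.exe`):

```shell
# Simulate an existing OpenCV install tree for this demo only.
INSTALL_BIN="sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/x64/vc16/bin"
EXE_DIR="build_output"   # hypothetical: the folder containing ai.exe
mkdir -p "$INSTALL_BIN" "$EXE_DIR"
touch "$INSTALL_BIN/opencv_world4100.dll"

# The actual copy step: put the DLL next to the executable.
cp "$INSTALL_BIN/opencv_world4100.dll" "$EXE_DIR/"
ls "$EXE_DIR"
```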
- For the CUDA build (TensorRT backend):
  - You must build OpenCV with CUDA support (see the guide above).
  - Place all built DLLs (e.g., `opencv_world4100.dll`) next to your executable or in the `modules` folder.
- For the DML build (DirectML backend):
  - You can use the official pre-built OpenCV DLLs if you only plan to use DirectML.
  - If you want to use both CUDA and DML modes in the same executable, always use your custom OpenCV build with CUDA enabled (it works for both modes).
- Note: if you run the CUDA backend with non-CUDA OpenCV DLLs, the program will not work and may crash due to missing symbols.
- Open the solution in Visual Studio 2022.
- Choose your configuration (`Release | x64 | DML` or `Release | x64 | CUDA`).
- Build the solution.
- Run `ai.exe` from the output folder.
- Convert PyTorch `.pt` models to ONNX:
  - `pip install ultralytics -U`
  - `yolo export model=sunxds_0.5.6.pt format=onnx dynamic=true simplify=true`
- To convert `.onnx` to `.engine` for TensorRT, use the overlay's export tab (open the overlay with HOME).
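The export command above can also be driven from a script. This is a minimal sketch that only assembles the `yolo export` command line (the helper name is hypothetical; actually running the command requires the `ultralytics` package and a `.pt` model file):

```python
import subprocess  # only needed if you actually run the command

def build_export_cmd(model_path):
    """Assemble the Ultralytics CLI export command shown above."""
    return [
        "yolo", "export",
        "model=" + model_path,
        "format=onnx",    # export target format
        "dynamic=true",   # dynamic input shapes
        "simplify=true",  # simplify the exported graph
    ]

cmd = build_export_cmd("sunxds_0.5.6.pt")
print(" ".join(cmd))
# To run for real: subprocess.run(cmd, check=True)
```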
- See all configuration options and documentation here: config_cpp.md
- License: Apache License 2.0
- License: MIT License
This project is actively developed thanks to the people who support it on Boosty and Patreon.
By supporting the project, you get access to improved and better-trained AI models!
Need help or want to contribute? Join our Discord server or open an issue on GitHub!