This is a guide/write-up on getting things working on the Milk V Duo. This repo mainly looks into getting software and programming languages running on the default buildroot system.
- Image Requirements
- Compilers for Duo
- C Flags
- Other Compilers (Status: None of them working ⛔)
- C on Duo (Status: Working, Official Method ✅)
- WASM on Duo (Status: Working, more testing 🟨)
- Nim on Duo (Status: Working, testing required ✅)
- Rust on Duo (Status: Working, I haven't tested ✅)
- Go on Duo (Status: Working, more testing ✅)
- Contributing
All of my testing is done with Milk V Duo Buildroot V1 on the Milk V Duo 64MB. These should also work on Buildroot V2, but I haven't tested them.
Note: I am using V1 as that is the only one I was able to build with a larger SD card image size; I wasn't able to expand the root partition after boot, and I wasn't able to get V2 to compile with the larger size.
So far we need to use a riscv64-musl toolchain to compile C programs. It is easy to set up with the included setup script in the examples repo from Milk V.
Simply copy `envsetup.sh` from the examples repo into the root of your project (or wherever you want), run `source envsetup.sh`, and select your board with 1 or 2 (I select 1 as I have the base model).
The script downloads and sets up the toolchain. Sadly, the toolchain only supports x86 and has no arm64 support; there might be a hacky way around it.
The script also sets up your `CC` and `CFLAGS` environment variables, so you simply need to run `make` on a project that uses the `CC` and `CFLAGS` variables to compile. However, the variables only exist for the current terminal instance, so you need to re-run the script every time you open a new one.
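For reference, a typical session looks roughly like this (a sketch, assuming `envsetup.sh` has already been copied into the project root):

```sh
# Set up the toolchain for this shell and pick your board when prompted (1 for the base model)
source envsetup.sh
echo "$CC"       # should point at the riscv64 musl cross gcc
echo "$CFLAGS"   # the Duo-specific flags shown below
make             # works as long as the Makefile respects CC and CFLAGS
```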
Here are the custom `CFLAGS` if you want to hardcode them:
-mcpu=c906fdv -march=rv64imafdcv0p7xthead -mcmodel=medany -mabi=lp64d -O3 -DNDEBUG -I/workspace/wasm3/platforms/openwrt/build/include/system
Note: This is only for the Milk V Duo 64MB version. For other models, look at the setup script for their respective flags.
I have found some other compilers that might also work; however, after testing, I couldn't get any of them to work.
- ⛔ toolchains.bootlin.com/releases_riscv64.html: Ensure you get the `musl` version if you are compiling for the default buildroot OS.
  - Cannot get this to work; it gives me the error `error: unrecognized command-line option '-mcpu=c906fdv'`. I need this specific C flag as it defines the CPU version of the Duo.
  - Trying the riscv64-lp64d version gives me a different error: `error: '-mcpu=c906fdv': unknown CPU`.
- ⛔ github.com/riscv-collab/riscv-gnu-toolchain: Under the releases section, get the latest `riscv64-musl-ubuntu-*` version.
  - Cannot get this to work; it gives me the error `lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.36' not found`.
- ⛔ github.com/ejortega/milkv-host-tools: Under the releases section, get the toolchain for your host platform. It is the only one with an arm64 version.
  - Cannot get this to work; it gives me the error `error: '-march=rv64imafdcv0p7xthead': extension 'xthead' starts with 'x' but is unsupported non-standard extension`.
  - Removing `xthead` from the `-march` flag gives me the error `error: '-mcpu=c906fdv': unknown CPU`.
  - Removing the `-mcpu` flag lets it compile, but the linking is wrong: the binary is linked against `/lib/ld-musl-riscv64.so.1`, which will not run (the correct one is `/lib/ld-musl-riscv64v0p7_xthead.so.1`). I think the official compiler has extra custom configuration for that specific CPU.
This section isn't a guide; I am just accumulating info.
Simply use the toolchain from the compilers section. Use the compiler provided at `host-tools/gcc/riscv64-linux-musl-x86_64/bin/riscv64-unknown-linux-musl-gcc` and use the compiler flags for the Milk V Duo specifically.
I think if you compile for riscv64 but ask it to build statically, you can ignore the `CFLAGS` that set the custom CPU and CPU-version flags.
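As a minimal sketch (`hello.c` is just a placeholder source file, and the static variant is my untested assumption from above):

```sh
# After `source envsetup.sh`, CC and CFLAGS point at the musl toolchain and the Duo flags
$CC $CFLAGS -o hello hello.c

# Untested assumption: a plain static riscv64 build, skipping the CPU-specific flags
$CC -static -O2 -o hello-static hello.c
```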
Some C programs I have compiled.
An LLM inference engine in pure C. I edited the Makefile to not overwrite the compiler, added the CFLAGS, and it compiled without an issue. I uploaded the executable, tokens.bin, and the smallest model it recommends; it ran at 0.25 tokens per second, taking 11 minutes to generate a small story. I posted about it here.
A JS interpreter with ES2023 support. Compiling this took some trial and error: I modified the Makefile to use my C flags and had to add a library.
I wasn't able to get it working with the version string being injected into the source, so I just hard-coded it.
You can apply my patch to the GitHub mirror. Ensure your `PREFIX` is set to a temporary location where you want to store the files.
Ensure the `CROSS_PREFIX` environment variable is set to the toolchain prefix (like this: `host-tools/gcc/riscv64-linux-musl-x86_64/bin/riscv64-unknown-linux-musl-`; point it to your actual location and keep the ending blank, removing the `gcc`).
Then run `make qjs` and `make install`. Go to the install location, transfer the files to the Duo, and place the contents in the corresponding directories under `/usr/local`.
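Put together, the steps above look roughly like this (a sketch with placeholder paths; depending on the quickjs Makefile you may need to pass the variables on the `make` command line instead of exporting them):

```sh
# Point CROSS_PREFIX at the toolchain prefix (note the trailing dash, no "gcc")
export CROSS_PREFIX=/path/to/host-tools/gcc/riscv64-linux-musl-x86_64/bin/riscv64-unknown-linux-musl-
export PREFIX=/tmp/qjs-install    # temporary staging location
make qjs
make install
# Copy the contents of /tmp/qjs-install into the matching directories under /usr/local on the Duo
```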
I also ran a Pi calculation on the Duo with QuickJS. For 100,000 digits it takes 26.16 seconds to complete, which is about 100 times slower than the benchmark on a Core i5 4570 CPU at 3.2 GHz.
I think WASM (WebAssembly) will make it easier to port programs to these more obscure architectures by acting as a universal binary format, especially with the rise of WASI (the WebAssembly System Interface), which makes building native applications easier.
There are a few WASM runtimes I would like to port, including Wasmer, Wasmtime, WasmEdge, wazero, and Wasm3. The reason for trying multiple is to get the maximum performance, and they have varying levels of API support.
Wasm3 is a fast WASM runtime with WASI support. Compiling a CLI tool is quite simple using the included CLI example.
Simply set up the toolchain (from before) and run `make` in `/platforms/openwrt/build` (the actual CLI code is in `platforms/app`, and the interpreter is in `/source`); no modifications are required.
If you try running the resulting `wasm3` binary on your host platform you will get an error, but copying it to the MilkV and running it there gives you the proper usage message.
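The full flow is tiny (a sketch, assuming the toolchain environment from `envsetup.sh` is active in the current shell):

```sh
# Build the wasm3 CLI for the Duo using the openwrt platform makefile
cd wasm3/platforms/openwrt/build
make
# Copy the resulting wasm3 binary to the board and run a module there:
#   wasm3 some-module.wasm
```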
Wazero is a Go-based WASM runtime with WASI support. Compiling it was as easy as compiling anything with Go for the Duo.
Simply cd into the `cmd/wazero` directory and run `env GOOS=linux GOARCH=riscv64 go build`, and you get a binary.
However, when it comes to performance, wazero is really bad. I think this stems from the fact that it is written in Go and thus runs a garbage collector at runtime, which leads to slower performance (I think).
Here are the results comparing wasm3 and wazero.
[root@milkv-duo]~# time wasm3 cowsay.wasm -f tux hello
_______
< hello >
-------
\
\
.--.
|o_o |
|:_/ |
// \ \
(| | )
/'\_ _/`\
\___)=(___/
real 0m 0.06s
user 0m 0.03s
sys 0m 0.01s
[root@milkv-duo]~# time wazero run cowsay.wasm -f tux hello
_______
< hello >
-------
\
\
.--.
|o_o |
|:_/ |
// \ \
(| | )
/'\_ _/`\
\___)=(___/
real 0m 0.64s
user 0m 0.54s
sys 0m 0.08s
Approximately a 10x reduction in speed.
Some WASM programs I have compiled. The somewhat hard part is that I have to compile the .wasm files from scratch, as nobody provides prebuilt binaries. Secondly, most WASM projects target the browser; actual CLI projects are few, so I would need to build a wrapper around them (see the sketch at the end of this section for one way to produce a .wasm from C).
The first thing I compiled to WebAssembly and uploaded to the MilkV worked. I posted about it here.
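For building .wasm files from scratch, one option is a WASI-targeting C toolchain; this is an assumption on my part (wasi-sdk's clang), not necessarily how the program above was built:

```sh
# Hypothetical example: compile a C program to a WASI module with wasi-sdk's clang
/opt/wasi-sdk/bin/clang --target=wasm32-wasi -O2 -o hello.wasm hello.c
# Run it on the Duo with one of the runtimes above:
#   wasm3 hello.wasm
#   wazero run hello.wasm
```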
The Nim compiler supports custom C compilers, so I used the riscv64 toolchain as my compiler by adding a `config.nims` to the root of my Nimble project and adding this config:
import std/envvars
switch("cc", "gcc")
switch("gcc.exe", getEnv("CC"))
switch("gcc.linkerexe", getEnv("CC"))
switch("passC", getEnv("CFLAGS"))
switch("passL", getEnv("CFLAGS"))
If you have your toolchain in a more permanent location, replace the `getEnv("CC")` calls with the location of the gcc compiler and hard-code the `CFLAGS` from the compiler section. Alternatively, if you want to pass everything as flags to the compiler:
# Nimble
nimble build --cc:gcc \
--gcc.exe="$CC" \
--gcc.linkerexe="$CC" \
--passC="$CFLAGS" \
--passL="$CFLAGS"
# Nim
nim c --cc:gcc \
--gcc.exe="$CC" \
--gcc.linkerexe="$CC" \
--passC="$CFLAGS" \
--passL="$CFLAGS"
Again, if you have your toolchain in a more permanent location, replace `$CC` with the location of the gcc compiler and hard-code the `CFLAGS` from the compiler section.
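For example, a hard-coded invocation might look like this (a sketch; the toolchain path is a placeholder and `main.nim` stands in for your project's main module):

```sh
# Hypothetical hard-coded build: toolchain gcc plus the Duo CFLAGS from the compiler section
nim c --cc:gcc \
  --gcc.exe=/path/to/host-tools/gcc/riscv64-linux-musl-x86_64/bin/riscv64-unknown-linux-musl-gcc \
  --gcc.linkerexe=/path/to/host-tools/gcc/riscv64-linux-musl-x86_64/bin/riscv64-unknown-linux-musl-gcc \
  --passC="-mcpu=c906fdv -march=rv64imafdcv0p7xthead -mcmodel=medany -mabi=lp64d -O3" \
  --passL="-mcpu=c906fdv -march=rv64imafdcv0p7xthead -mcmodel=medany -mabi=lp64d -O3" \
  main.nim
```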
The compiler produces an executable that will not run on the host system, but runs on the MilkV.
In conclusion, all you have to do is use the riscv64 musl toolchain's C compiler and the custom C compiler flags used for the MilkV Duo.
Some Nim programs I have compiled.
A terminal file browser that I am working on. I had to copy over the default config file, and it runs fine.
github.com/ejortega/milkv-duo-rust
After reading a bit online, it seems like Go has the best cross-compilation system. All you do is ask the Go compiler to build for a different arch; there is no need to point it at a different compiler, and no custom configs, linking, or custom flags.
All you do is set `GOOS` to `linux` and `GOARCH` to `riscv64`.
env GOOS=linux GOARCH=riscv64 go build
Because Go builds static binaries by default, it doesn't need any custom flags or CPU version settings. The downside, however, is larger binaries.
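A quick way to sanity-check the output before copying it over (the `-o` name is arbitrary, and the `file` check is just a generic host-side verification):

```sh
# Cross-compile from any host; no toolchain or special flags needed
env GOOS=linux GOARCH=riscv64 go build -o myprog .
file myprog   # should report a statically linked 64-bit RISC-V ELF executable
```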
Also, to run Go binaries on the Duo, you need all 64MB of RAM. If you are using the default official image, the available RAM is limited to ~23MB, since some of it is allocated to camera processing.
You can disable that by building your own image and setting `ION` to 0, as mentioned here.
Some Go programs I have compiled.
Mentioned in the WASM interpreter section.
A terminal markdown viewer. I use this all the time when viewing docs and READMEs in the terminal. Compiling was simple, and it runs; I don't have much else to say.
If you would like to contribute your knowledge to this repo, open a pull request with your modifications, or open an issue and provide the info there.