NOT PROD READY YET
For running the dev env just:
- server: run via
  cargo run -p server
- client: copy config.yaml.template to config.yaml, fill in the appropriate properties & run via
  cargo run -p client
Assuming a setup where everything runs as an OS service: Linux (Pi) via systemctl, Windows via nssm, and macOS via launchctl.
Got a little bash script for deployment: ./deploy/deploy_server.sh
Or manually:
- cross-compile (e.g. with cross):
  cross build -p server --release --target=aarch64-unknown-linux-gnu
- stop the current service on the Pi:
  systemctl stop ...
- upload & overwrite the old binary
- start the service:
  systemctl start ...
- profit (or new bugs)
Basically just:
  cargo build -p client --release
and run it as an OS service.
- for Windows use nssm:
  ./deploy/deploy_client_windows.sh
- for macOS use launchctl:
  ./deploy/deploy_client_mac.sh
File sync for Obsidian, or a general file exchange? The server should run on the Pi and be written in Rust (obviously).
Maybe something like Syncthing - just not in Go.
Server:
- Axum
- SQLite, or maybe MongoDB - or what about just a plain .txt with event-sourcing entries?
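The plain .txt option could be as simple as an append-only log, one event per line. A minimal std-only sketch - the tab-separated line format and the file name are just assumptions, nothing is decided yet:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::time::{SystemTime, UNIX_EPOCH};

/// Append one event per line to a plain .txt file - the file itself
/// is the event store. Line format (hypothetical): <unix_ts>\t<event>\t<path>
fn append_event(log_path: &str, event: &str, file_path: &str) -> std::io::Result<()> {
    // seconds since epoch as a crude event timestamp
    let ts = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    // append-only: never rewrite history, only add to it
    let mut log = OpenOptions::new().create(true).append(true).open(log_path)?;
    writeln!(log, "{ts}\t{event}\t{file_path}")
}

fn main() -> std::io::Result<()> {
    append_event("events.txt", "create", "notes/idea.md")?;
    append_event("events.txt", "update", "notes/idea.md")?;
    Ok(())
}
```

No locking, no rotation, no compaction - but for a single-writer server on a Pi that might be enough for a long time.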
Clients:
- thin background service to sync with server
Clients can't be pure web apps, because I want to sync files automatically and need full access to the filesystem - maybe finally my chance to try out Tauri?
hm, on second thought - why would I need a second frontend...
obv. a lot of similar community plugins exist, e.g.:
- live-sync
- remotely-save
- or, probably the best option: git
For the UI, writing a similar plugin would probably make for the best UX (see Obsidian's docs).
The client:
- watches the files under tracking
- diffs them (possibly by file size, last-updated date, etc.)
- recognizes a change and communicates it as precisely as possible to the server (for create and update this includes an upload of the new data)
The server:
- owns the master data (meaning it holds the data that is considered the origin)
- keeps a record of every change across the system (event sourcing)
- comes up with a strategy for flashing the current "true" state onto a device that is out of sync (the tricky part)
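One nice property of event sourcing for that tricky part: the "true" state is reproducible by folding the log from the start, and that result is the baseline to diff an out-of-sync device against. A minimal sketch with a made-up in-memory event tuple format `(event, path, data)`:

```rust
use std::collections::BTreeMap;

/// Replay the event log in order to reconstruct the current state
/// (here simplified to path -> latest content).
fn replay<'a>(events: &[(&'a str, &'a str, &'a str)]) -> BTreeMap<&'a str, &'a str> {
    let mut state = BTreeMap::new();
    for &(event, path, data) in events {
        match event {
            // create and update both just set the latest content
            "create" | "update" => { state.insert(path, data); }
            "delete" => { state.remove(path); }
            _ => {} // unknown event types are skipped
        }
    }
    state
}

fn main() {
    let log = [
        ("create", "a.md", "v1"),
        ("update", "a.md", "v2"),
        ("create", "b.md", "v1"),
        ("delete", "b.md", ""),
    ];
    // only a.md with content v2 survives the fold
    println!("{:?}", replay(&log));
}
```

Flashing a stale device would then boil down to: replay the log, diff against what the device reports, and send only the difference.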