FTProxy exposes a localhost HTTP + WebSocket bridge from inside the
Tauri process, plus a small set of Tauri-only commands callable from
the embedded webview (or any Tauri-aware client).
REST base: http://127.0.0.1:7878 (preferred; falls back to an
OS-assigned ephemeral port when 7878 is in use)
Discovery file: <app data dir>/FTProxy/data/bridge.url — written
at startup, contains the actual http:// and ws:// URLs
Auth: every endpoint except /health requires
Authorization: Bearer <token>. The WebSocket validates the token
via the ?token= query parameter. CORS is open for local callers;
the bind is loopback-only.
Token: generated on first launch, stored at
<app data dir>/FTProxy/data/token (48 chars).
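Taken together, the discovery file and the token are enough to bootstrap a client. A minimal TypeScript sketch, assuming `bridge.url` lists the two URLs one per line (the exact layout isn't specified here); the header and `?token=` rules come straight from the notes above:

```typescript
// Bootstrap helpers for the FTProxy bridge.
// ASSUMPTION: bridge.url holds the http:// and ws:// URLs one per line;
// check the file on disk, the real layout may differ.
interface BridgeEndpoints {
  httpBase: string;
  wsBase: string;
}

function parseBridgeFile(text: string): BridgeEndpoints {
  const lines = text.split(/\r?\n/).map((l) => l.trim()).filter(Boolean);
  const httpBase = lines.find((l) => l.startsWith("http://"));
  const wsBase = lines.find((l) => l.startsWith("ws://"));
  if (!httpBase || !wsBase) {
    throw new Error("bridge.url missing http:// or ws:// entry");
  }
  return { httpBase, wsBase };
}

// Every endpoint except /health expects this header.
function authHeaders(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}

// The WebSocket takes the same token via the ?token= query parameter.
function wsUrl(wsBase: string, token: string): string {
  return `${wsBase}?token=${encodeURIComponent(token)}`;
}
```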
Local-sync flavors are for users who have a cloud provider's desktop
client installed: FTProxy reads and writes the local sync folder
directly via LocalCloudTransport (filesystem speed, zero OAuth), and
the desktop client uploads to the cloud asynchronously.
| Method | Path | Purpose |
|--------|------|---------|
|  |  | List remote directory (single page; default for filesystem protocols) |
| GET | `/files/remote/page?path=&continuation=&limit=` | Paginated listing for object stores. Returns `{ entries, nextToken }`. Pass `nextToken` back as `continuation` to fetch the next page; `nextToken: null` means no more pages. Token is opaque and per-protocol: S3 `next_continuation_token`, Azure `NextMarker`, GCS `nextPageToken`, Drive/OneDrive `@odata.nextLink` cursor. Don't parse it, just round-trip the string |
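The `nextToken` round-trip can be sketched with an injected page-fetcher, so the loop runs without a live bridge; the entry type is generic because the real entry shape isn't specified here:

```typescript
// Drain a paginated listing by round-tripping nextToken as continuation.
interface Page<T> {
  entries: T[];
  nextToken: string | null; // null => no more pages
}

type PageFetcher<T> = (continuation: string | null) => Promise<Page<T>>;

async function listAll<T>(fetchPage: PageFetcher<T>): Promise<T[]> {
  const all: T[] = [];
  let continuation: string | null = null;
  do {
    const page = await fetchPage(continuation);
    all.push(...page.entries);
    continuation = page.nextToken; // opaque and per-protocol: never parse it
  } while (continuation !== null);
  return all;
}
```

A real fetcher would GET `/files/remote/page` with the `path`, `continuation`, and `limit` query parameters and the bearer header.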
| Method | Path | Purpose |
|--------|------|---------|
|  |  | Create a new site (password optionally stored in OS keychain) |
| PUT | `/sites/:id` | Update existing site |
| DELETE | `/sites/:id` | Delete (removes keychain entry too) |
| GET | `/sites/:id/password` | `{ hasPassword, password }`: returns the stored keychain value so the UI's "eye" toggle can reveal it |
| GET | `/sites/:id/secret-extras` | `{ secrets: { client_secret, service_account_json, ... } }`: returns the keychain-stored secret-extras (OAuth client_secret, GCS JSON keys) so the Site Form can pre-populate masked fields on edit. Empty `secrets` object when no secret-extras are stored |
| POST | `/scheduler/run-now` | Trigger an on-demand sweep of the scheduler. Any schedule whose cron matched in the past 24h fires immediately. Used by the Site Form's "Run schedule now" button. Body is ignored; just `{}`. Returns `{ ok: true, data: { fired: true } }` |
|  |  | Project every scheduled firing within the half-open window `[from, to)`. Returns `{ ok, data: [{ kind, id, name, fireAt, cron, flavor, enabled }, …] }`. `kind` is `"site"` (legacy) or `"batch"` (post-Phase-F canonical). `flavor` is `upload` / `download` / `mirror` for site rows or `"batch"` for jobs. Capped at 500 firings to keep busy crons from grinding the calendar UI |
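The `/scheduler/run-now` trigger above is a one-liner from the webview. A sketch with the fetch implementation injected so it can run without a live bridge; the response shape is the one documented above:

```typescript
// Trigger an on-demand scheduler sweep (the "Run schedule now" button).
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ json(): Promise<any> }>;

async function runSchedulerNow(
  httpBase: string,
  token: string,
  fetchImpl: FetchLike,
): Promise<boolean> {
  const res = await fetchImpl(`${httpBase}/scheduler/run-now`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: "{}", // body is ignored by the bridge
  });
  const payload = await res.json(); // { ok: true, data: { fired: true } }
  return payload.ok === true && payload.data?.fired === true;
}
```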
After the Phase F unification, Jobs are the single primitive for
all scheduled automation. A Job is a named, ordered list of steps that
run sequentially (fail-fast on first error). The wire path is still
/batch-jobs for back-compat. Sync/file-sync steps reuse the
scheduler's session-isolation (Phase B) and stamp batch_id onto the
schedule-history rows they produce.
A Job carries an optional schedule (scheduleCron + optional
scheduleStartAt / scheduleEndAt). Without a schedule, the Job is
manual-only.
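A hypothetical TypeScript shape assembled only from the fields this section names; the real wire schema may carry more:

```typescript
// Illustrative Job shape; field names follow the camelCase wire convention.
interface Step {
  type: string; // tag, e.g. "file-sync"; other step fields vary by type
  [key: string]: unknown;
}

interface Job {
  id: string;               // empty string on create => server assigns a UUID
  name: string;
  steps: Step[];            // ordered; run sequentially, fail-fast
  scheduleCron?: string;    // absent => manual-only Job
  scheduleStartAt?: string; // optional schedule window bounds
  scheduleEndAt?: string;
}

// Without a schedule, the Job is manual-only.
function isManualOnly(job: Job): boolean {
  return job.scheduleCron === undefined;
}
```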
| Method | Path | Purpose |
|--------|------|---------|
| GET | `/batch-jobs` | List every persisted Job. Empty array when none |
| GET | `/batch-jobs/:id` | Fetch one. 404 if unknown |
| POST | `/batch-jobs` | Create. Body is the full Job JSON; an empty `id` triggers a fresh UUID |
| PUT | `/batch-jobs/:id` | Update. Path `id` wins over body `id` |
| DELETE | `/batch-jobs/:id` | Remove. 404 if unknown |
| POST | `/batch-jobs/:id/run` | Trigger an immediate run. Returns `RunSummary { jobId, jobName, succeeded, failed, total }` |
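The RunSummary can be folded into a status line. One assumption here is mine, not stated by the wire contract: that `total` may exceed `succeeded + failed` when a fail-fast abort skips the remaining steps:

```typescript
// Render the RunSummary returned by POST /batch-jobs/:id/run.
interface RunSummary {
  jobId: string;
  jobName: string;
  succeeded: number;
  failed: number;
  total: number;
}

function describeRun(s: RunSummary): string {
  // ASSUMPTION: steps skipped after a fail-fast abort are counted in
  // total but in neither succeeded nor failed.
  const skipped = s.total - s.succeeded - s.failed;
  const parts = [`${s.jobName}: ${s.succeeded}/${s.total} steps succeeded`];
  if (s.failed > 0) parts.push(`${s.failed} failed`);
  if (skipped > 0) parts.push(`${skipped} not run (fail-fast)`);
  return parts.join(", ");
}
```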
Step shape — tagged on `type`, all fields camelCase on the wire:
- `file-sync` (new, Phase F): `siteId`, `direction` (`upload` / `download`; `mirror` rejected), `localDir`, `remoteDir`, `files: string[]` (relative paths), optional `policy`. For syncing 1–N specific files instead of an entire directory tree. Each file uses the same per-session `/sessions/:id/transfers/{upload,download}` path the manual UI does.
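A plausible `file-sync` step literal built from those fields; every value, including the `siteId`, is illustrative:

```typescript
// Illustrative file-sync step for a Job body. The siteId must reference
// an existing site; this one is a placeholder.
const step = {
  type: "file-sync",
  siteId: "11111111-2222-3333-4444-555555555555", // hypothetical
  direction: "upload" as const, // "upload" | "download"; "mirror" is rejected
  localDir: "/home/me/project",
  remoteDir: "/srv/backup",
  files: ["config.toml", "assets/logo.png"], // relative to the dirs above
  // policy omitted: it is optional
};
```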
Audit log of every scheduled-sync firing. Each row is a ScheduleRun
(see schedule_history.rs::ScheduleRun). Persisted at
<data_dir>/schedule_history.json, capped at 10 000 rows.
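A sketch of the 10 000-row cap, assuming the oldest rows are the ones evicted (the eviction order isn't stated here); the ScheduleRun fields are illustrative, the real shape is `schedule_history.rs::ScheduleRun`:

```typescript
// Append the newest run and drop the oldest overflow.
// ASSUMPTION: cap evicts from the front (oldest-first).
interface ScheduleRun {
  batchId?: string; // stamped by file-sync steps (Phase F)
  firedAt: string;
}

const MAX_ROWS = 10_000;

function appendCapped(
  rows: ScheduleRun[],
  run: ScheduleRun,
  max: number = MAX_ROWS,
): ScheduleRun[] {
  const next = [...rows, run];
  return next.length > max ? next.slice(next.length - max) : next;
}
```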
| Method | Path | Purpose |
|--------|------|---------|
|  |  | `{ "command": "uptime", "maxOutputBytes": 1048576 }`: one-shot SSH exec on the active SFTP session. Returns `{ stdout, stderr, exitStatus, truncated }`. SFTP-only; other protocols return 400 |
| POST | `/benchmark` | `{ "sizeMib": 8, "remoteDir": "/" }`: uploads N MiB of test data, downloads it back, then deletes the test file. Returns `{ uploadBytesPerSec, downloadBytesPerSec, uploadMs, downloadMs, downloadedBytes }` |
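For display, the raw bytes-per-second figures usually get converted to MiB/s; a small helper with nothing bridge-specific about it:

```typescript
// Shape of the /benchmark response, per the table above.
interface BenchmarkResult {
  uploadBytesPerSec: number;
  downloadBytesPerSec: number;
  uploadMs: number;
  downloadMs: number;
  downloadedBytes: number;
}

const MIB = 1024 * 1024;

// Convert bytes/sec to MiB/s, rounded to two decimals for the UI.
function toMibPerSec(bytesPerSec: number): number {
  return Math.round((bytesPerSec / MIB) * 100) / 100;
}
```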
Notes on encryption for FTP:
- auto — try AUTH TLS, fall back to plain. Best-effort; the bridge
logs which path won.
- explicit / implicit — require TLS; connect fails if the server
refuses.
- plain — refuse TLS even if offered. Insecure; tooltip warns.
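The four modes reduce to two questions, which a sketch makes explicit: is TLS mandatory, and is a plaintext connection ever acceptable?

```typescript
// The four FTP encryption modes described above.
type FtpEncryption = "auto" | "explicit" | "implicit" | "plain";

// explicit/implicit hard-require TLS; connect fails if the server refuses.
function requiresTls(mode: FtpEncryption): boolean {
  return mode === "explicit" || mode === "implicit";
}

// auto is best-effort (may fall back to plain); plain refuses TLS outright.
function allowsPlaintext(mode: FtpEncryption): boolean {
  return mode === "auto" || mode === "plain";
}
```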
Notes on logon_type:
- normal — username + password as supplied.
- anonymous — bridge substitutes anonymous / anonymous@example.com.
- ask — caller (the JS modal) prompts the user and includes the
password verbatim in the connect body; the bridge doesn't do
anything special.
- key (SFTP only) — bridge reads key_path from extras, calls
russh_keys::load_secret_key, authenticates via
authenticate_publickey. Optional key_passphrase decrypts the
key in-memory.
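The substitution rules above can be sketched as a small resolver. Only the anonymous substitution is bridge-side behavior stated in the notes; the connect-body shape itself is illustrative:

```typescript
// Resolve the credentials to place in the connect body per logon_type.
type LogonType = "normal" | "anonymous" | "ask" | "key";

interface Credentials {
  username: string;
  password?: string;
}

function resolveCredentials(
  logonType: LogonType,
  supplied: Credentials,
  askedPassword?: string, // what the JS modal collected for "ask"
): Credentials {
  switch (logonType) {
    case "anonymous":
      // The bridge substitutes these regardless of what was supplied.
      return { username: "anonymous", password: "anonymous@example.com" };
    case "ask":
      // Caller prompts; the password goes into the body verbatim.
      return { username: supplied.username, password: askedPassword };
    case "key":
      // SFTP-only: password unused; key_path / key_passphrase live in extras.
      return { username: supplied.username };
    default: // "normal": username + password as supplied
      return supplied;
  }
}
```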
Tauri commands (callable from the embedded webview)
The OAuth flow for Google Drive / OneDrive can't go through REST —
it needs to open the user's default browser and bind a temporary
loopback listener for the redirect. Those steps live in Tauri commands
invoked via window.__TAURI_INTERNALS__.invoke(name, args):