
API

FTProxy exposes a localhost HTTP + WebSocket bridge from inside the Tauri process, plus a small set of Tauri-only commands callable from the embedded webview (or any Tauri-aware client).

  • REST base: http://127.0.0.1:7878 (preferred; falls back to an OS-assigned ephemeral port when 7878 is in use)
  • WebSocket: ws://127.0.0.1:7878/events?token=<token>
  • Discovery file: <app data dir>/FTProxy/data/bridge.url — written at startup, contains the actual http:// and ws:// URLs
  • Auth: every endpoint except /health requires Authorization: Bearer <token>. The WebSocket validates the token via the ?token= query parameter. CORS is open for local callers; the bind is loopback-only.
  • Token: generated on first launch, stored at <app data dir>/FTProxy/data/token (48 chars).
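A client can bootstrap itself from the discovery file and token. A minimal Python sketch — the one-URL-per-line layout of bridge.url is an assumption (the document only says the file contains both URLs):

```python
from pathlib import Path

def load_bridge(app_data: Path):
    """Read bridge.url and token, return (http_base, ws_base, auth headers)."""
    data_dir = app_data / "FTProxy" / "data"
    # Assumed layout: first line is the http:// URL, second line the ws:// URL
    http_url, ws_url = (data_dir / "bridge.url").read_text().splitlines()[:2]
    token = (data_dir / "token").read_text().strip()
    return http_url, ws_url, {"Authorization": f"Bearer {token}"}
```

Point `app_data` at the platform app-data directory (e.g. %APPDATA% on Windows).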

Response envelope

{ "ok": true, "data": ... }

Errors:

{
  "ok": false,
  "error": { "code": "<code>", "message": "...", "retryable": true }
}

Codes: bad_request (400), unauthorized (401), not_connected (409), not_found (404), protocol_error (502), internal_error (500).
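Since every response uses the same envelope, one helper can unwrap it everywhere. A sketch — the exception class is mine, but its fields mirror the documented error object:

```python
class BridgeError(Exception):
    """Raised when the bridge returns { "ok": false, "error": ... }."""
    def __init__(self, code, message, retryable):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.retryable = retryable

def unwrap(envelope: dict):
    """Return envelope['data'] on success, raise BridgeError on failure."""
    if envelope.get("ok"):
        return envelope["data"]
    err = envelope.get("error", {})
    raise BridgeError(err.get("code", "internal_error"),
                      err.get("message", ""),
                      err.get("retryable", False))
```

Callers can then branch on `retryable` to decide whether to back off and retry or surface the error immediately.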

Supported protocols (17)

API-flavor / wire protocols:

| protocol value | Family | Auth |
| --- | --- | --- |
| sftp | SSH file transfer | password / key |
| ftp | RFC 959 | password |
| ftps | FTP over TLS | password |
| webdav | HTTP file share | basic auth |
| smb | SMB / CIFS file share | NTLM credentials (or anonymous) |
| s3 | AWS S3 + compatible (MinIO, R2, Spaces, Wasabi) | access keys |
| azure | Azure Blob Storage (object store) | account name + key |
| azure-files | Azure Files (SMB-mountable file share) — bridge translates to SMB shape (\\<account>.file.core.windows.net\<share>, Azure\<account> username) | storage account name + access key + share name |
| gcs | Google Cloud Storage | service-account JSON |
| dropbox | Dropbox v2 API | OAuth2 + refresh token (per-user app + client_secret) |
| gdrive | Google Drive v3 | OAuth2 + PKCE + refresh token |
| onedrive | Microsoft OneDrive (Graph) | OAuth2 + PKCE + refresh token |

Local-sync flavors — for users who have the desktop client of a cloud provider installed; FTProxy reads/writes the local sync folder directly via LocalCloudTransport (filesystem speed, zero OAuth). The desktop client uploads to the cloud asynchronously:

| protocol value | Backed by | Detected via |
| --- | --- | --- |
| dropbox-local | Dropbox desktop client folder | %LOCALAPPDATA%\Dropbox\info.json (Win), ~/.dropbox/info.json (Mac/Linux) |
| onedrive-local | OneDrive desktop client folder | Windows registry HKCU\Software\Microsoft\OneDrive\Accounts\*\UserFolder; falls back to ~/OneDrive |
| gdrive-local | Google Drive for Desktop virtual drive | drive-letter scan for <drive>:\My Drive (any drive letter) |
| icloud-local | iCloud Drive | ~/Library/Mobile Documents/com~apple~CloudDocs (Mac) |
| localcloud | Generic — caller supplies extras.local_root | Caller-supplied |

Aliases accepted by normalize_protocol: dav → webdav, azureblob / azure-blob → azure, azurefiles / azurefile → azure-files, gs → gcs, dbx → dropbox, google-drive / googledrive → gdrive, one-drive / msgraph → onedrive, dropboxlocal → dropbox-local, onedrivelocal → onedrive-local, gdrivelocal / google-drive-local → gdrive-local, icloudlocal / icloud → icloud-local.
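The alias mapping can be sketched in Python (the real normalize_protocol lives in the Rust bridge; this only mirrors the list above):

```python
# Alias -> canonical protocol value, mirroring normalize_protocol.
ALIASES = {
    "dav": "webdav",
    "azureblob": "azure", "azure-blob": "azure",
    "azurefiles": "azure-files", "azurefile": "azure-files",
    "gs": "gcs",
    "dbx": "dropbox",
    "google-drive": "gdrive", "googledrive": "gdrive",
    "one-drive": "onedrive", "msgraph": "onedrive",
    "dropboxlocal": "dropbox-local",
    "onedrivelocal": "onedrive-local",
    "gdrivelocal": "gdrive-local", "google-drive-local": "gdrive-local",
    "icloudlocal": "icloud-local", "icloud": "icloud-local",
}

def normalize_protocol(value: str) -> str:
    """Map an accepted alias to its canonical value; pass canonicals through."""
    v = value.strip().lower()
    return ALIASES.get(v, v)
```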

Full REST endpoint list (67 distinct routes)

Meta / events (2)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /health | Liveness probe + capability list (public — no auth) |
| GET | /events?token= | WebSocket upgrade (typed event stream) |

Session — single active (3)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /session | Current active-session snapshot |
| POST | /session/connect | Open a session in the active slot |
| POST | /session/disconnect | Close active session |

Sessions — multi-tab (8)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /sessions | List all session slots |
| POST | /sessions | Create a new empty slot (returns its id) |
| GET | /sessions/active | Get the active slot id |
| POST | /sessions/active | { "id": "..." } — switch active slot |
| DELETE | /sessions/:id | Close + remove a slot |
| POST | /sessions/:id/disconnect | Disconnect a specific slot |
| POST | /sessions/:id/transfers/upload | Targeted upload on a non-active slot |
| POST | /sessions/:id/transfers/download | Targeted download on a non-active slot |

Remote filesystem (8)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /files/remote?path= | List remote directory (single page; default for filesystem protocols) |
| GET | /files/remote/page?path=&continuation=&limit= | Paginated listing for object stores. Returns { entries, nextToken }. Pass nextToken back as continuation to fetch the next page; nextToken: null means no more pages. The token is opaque and per-protocol: S3 next_continuation_token, Azure NextMarker, GCS nextPageToken, Drive/OneDrive @odata.nextLink cursor. Don't parse it — just round-trip the string |
| POST | /files/remote/mkdir | { "path": "/foo" } |
| POST | /files/remote/rename | { "from": "", "to": "" } |
| POST | /files/remote/delete | { "path": "", "isDirectory": false } |
| GET | /files/remote/raw?path= | Stream bytes as application/octet-stream |
| PUT | /files/remote/raw?path= | Upload raw body bytes to a path |
| GET | /files/remote/hash?path=&algo=md5 | Server-computed hash (FTP HASH/XMD5; SFTP fallback) |
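Draining the paginated listing reduces to a round-trip loop. A sketch, assuming a `get` callable that performs an authorised GET with query parameters and returns the decoded envelope's data object:

```python
def list_all(get, path: str, limit: int = 1000):
    """Collect every entry from /files/remote/page by following nextToken."""
    entries, continuation = [], None
    while True:
        params = {"path": path, "limit": limit}
        if continuation is not None:
            params["continuation"] = continuation
        page = get("/files/remote/page", params)
        entries.extend(page["entries"])
        continuation = page.get("nextToken")
        if continuation is None:   # null nextToken means no more pages
            return entries
```

Note the token is treated as an opaque string throughout, exactly as the table prescribes.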

Local filesystem (4)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /files/local?path= | List local directory |
| POST | /files/local/mkdir | { "path": "" } |
| POST | /files/local/rename | { "from": "", "to": "" } |
| POST | /files/local/delete | { "path": "" } |

Combined convenience (1)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /files | Returns both panes in one payload (used by the UI on boot) |

Transfers / queue (8)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /transfers | Current queue + history (in-memory) |
| POST | /transfers | Body {} — removes completed/failed entries |
| GET | /transfers/:id | One transfer snapshot |
| DELETE | /transfers/:id | Cancel + remove from queue |
| POST | /transfers/download | { "remotePath": "", "localPath": "" } — remote→disk |
| POST | /transfers/upload | { "localPath": "", "remotePath": "" } — disk→remote |
| POST | /transfers/upload-blob?path= | Multipart upload to a remote directory (used by drag-drop / file-picker single-pane upload) |
| POST | /transfers/verify | { "transferId": "..." } — explicit hash verify after upload |
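Queue-then-verify is a two-call sequence. A sketch, assuming a `post` callable that performs an authorised POST and returns the envelope's data, and assuming the upload response echoes the queue entry with its id:

```python
def upload_and_verify(post, local_path: str, remote_path: str):
    """Queue an upload, then request an explicit hash verify for it."""
    transfer = post("/transfers/upload",
                    {"localPath": local_path, "remotePath": remote_path})
    # transfer["id"] is assumed to be the queue id returned by the bridge
    return post("/transfers/verify", {"transferId": transfer["id"]})
```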

Saved sites / scheduler (7)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /sites | List all saved sites |
| POST | /sites | Create a new site (password optionally stored in OS keychain) |
| PUT | /sites/:id | Update existing site |
| DELETE | /sites/:id | Delete (removes keychain entry too) |
| GET | /sites/:id/password | { hasPassword, password } — returns the stored keychain value so the UI's "eye" toggle can reveal it |
| GET | /sites/:id/secret-extras | { secrets: { client_secret, service_account_json, ... } } — returns the keychain-stored secret-extras (OAuth client_secret, GCS JSON keys) so the Site Form can pre-populate masked fields on edit. Empty secrets object when none are stored |
| POST | /scheduler/run-now | Trigger an on-demand scheduler sweep: any schedule whose cron matched in the past 24h fires immediately. Used by the Site Form's "Run schedule now" button. Body is ignored; just {}. Returns { ok: true, data: { fired: true } } |

Calendar projection (1)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /schedules/upcoming?from=<unix_ts>&to=<unix_ts> | Project every scheduled firing within the half-open window [from, to). Returns { ok, data: [{ kind, id, name, fireAt, cron, flavor, enabled }, …] }. kind is "site" (legacy) or "batch" (post-Phase-F canonical). flavor is upload / download / mirror for site rows, or "batch" for jobs. Capped at 500 firings to keep busy crons from grinding the calendar UI |
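A calendar UI typically buckets these rows by day. A sketch over the row shape above, where `rows` is the decoded data array:

```python
from collections import defaultdict
from datetime import datetime, timezone

def by_day(rows):
    """Group projected firings by UTC calendar date, sorted within each day."""
    buckets = defaultdict(list)
    for row in rows:
        day = datetime.fromtimestamp(row["fireAt"], tz=timezone.utc).date().isoformat()
        buckets[day].append(row)
    for day in buckets:
        buckets[day].sort(key=lambda r: r["fireAt"])
    return dict(buckets)
```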

Notifications (1)

| Method | Path | Purpose |
| --- | --- | --- |
| POST | /notify/test | Body { channel, message? }. Fires one delivery against slack / discord / telegram / webhook-success / webhook-failure / email. Returns { configured, sent, error }. Channels are env-driven (SLACK_WEBHOOK_URL, DISCORD_WEBHOOK_URL, TELEGRAM_BOT_TOKEN + TELEGRAM_CHAT_ID, WEBHOOK_ON_SUCCESS_URL / WEBHOOK_ON_FAILURE_URL, SMTP_*). Email is a stub today |

Jobs (6)

After the Phase F unification, Jobs are the single primitive for all scheduled automation. A Job is a named, ordered list of steps that run sequentially (fail-fast on first error). The wire path is still /batch-jobs for back-compat. Sync/file-sync steps reuse the scheduler's session-isolation (Phase B) and stamp batch_id onto the schedule-history rows they produce.

A Job carries an optional schedule (scheduleCron + optional scheduleStartAt / scheduleEndAt). Without a schedule, the Job is manual-only.

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /batch-jobs | List every persisted Job. Empty array when none |
| GET | /batch-jobs/:id | Fetch one. 404 if unknown |
| POST | /batch-jobs | Create. Body is the full Job JSON; an empty id triggers a fresh UUID |
| PUT | /batch-jobs/:id | Update. Path id wins over body id |
| DELETE | /batch-jobs/:id | Remove. 404 if unknown |
| POST | /batch-jobs/:id/run | Trigger an immediate run. Returns RunSummary { jobId, jobName, succeeded, failed, total } |

Step shape — tagged on type, all fields camelCase on the wire:

  • sync (folder sync): siteId, localPath, remotePath, direction (upload/download/mirror), optional policy, optional maxDepth. Recursive, mirrors /dir/sync semantics.
  • file-sync (new, Phase F): siteId, direction (upload / download; mirror rejected), localDir, remoteDir, files: string[] (relative paths), optional policy. For syncing 1–N specific files instead of an entire directory tree. Each file uses the same per-session /sessions/:id/transfers/{upload,download} path the manual UI does.
  • wait: seconds. Pure delay between steps.
  • webhook: url, optional method (POST/GET/PUT/DELETE), optional body (JSON). Fire-and-forget HTTP call.
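Putting the pieces together, a Job payload might look like the following sketch. The top-level steps field name and overall shape are illustrative (the document only specifies the step shapes and schedule fields); the site id is a placeholder:

```json
{
  "id": "",
  "name": "Nightly push",
  "scheduleCron": "0 2 * * *",
  "steps": [
    { "type": "sync", "siteId": "<site-uuid>", "localPath": "C:/Transfers/out",
      "remotePath": "/inbox", "direction": "upload" },
    { "type": "wait", "seconds": 30 },
    { "type": "webhook", "url": "https://example.com/hook", "method": "POST" }
  ]
}
```

Per the table above, the empty id makes POST /batch-jobs mint a fresh UUID, and dropping scheduleCron would make the Job manual-only.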

Schedule history (4)

Audit log of every scheduled-sync firing. Each row is a ScheduleRun (see schedule_history.rs::ScheduleRun). Persisted at <data_dir>/schedule_history.json, capped at 10 000 rows.

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /schedule-history?since=<unix_ts>&limit=<n> | Cross-site list, newest first. Defaults: since=0, limit=200. Each row includes id, siteId, siteName, direction, localPath, remotePath, startedAt, finishedAt, status (running/succeeded/failed/cancelled), stats: { uploaded, downloaded, failed, bytesIn, bytesOut }, optional error, triggeredBy (scheduler/manual/Phase-D batch values) |
| GET | /sites/:id/schedule-history?since=<ts>&limit=<n> | Per-site list, newest first. Default limit=50. Same row shape as above |
| DELETE | /schedule-history | Clear all rows. Returns { ok: true, data: { cleared: true } } |
| DELETE | /schedule-history/:run_id | Delete one row. Returns { ok: true, data: { removed: true } }. 404 if the id is unknown |

Bookmarks (4)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /bookmarks | List |
| POST | /bookmarks | Create |
| PUT | /bookmarks/:id | Update |
| DELETE | /bookmarks/:id | Delete |

Config (2)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /config | Get app-level settings |
| PUT | /config | Patch settings (concurrency, theme, verify-after-upload, etc.) |

Host keys — SFTP (3)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /hostkeys | List pinned SFTP host keys |
| POST | /hostkeys/trust | Trust a fingerprint (resolves the mismatch modal) |
| DELETE | /hostkeys/:host/:port | Remove a pin |

Directory compare (1)

| Method | Path | Purpose |
| --- | --- | --- |
| POST | /dir/compare | { "localPath", "remotePath", "maxDepth" } — recursive depth-limited diff. Returns { localOnly, remoteOnly, differing, same } |

Server tools (2)

| Method | Path | Purpose |
| --- | --- | --- |
| POST | /sftp/exec | { "command": "uptime", "maxOutputBytes": 1048576 } — one-shot SSH exec on the active SFTP session. Returns { stdout, stderr, exitStatus, truncated }. SFTP-only — other protocols return 400 |
| POST | /benchmark | { "sizeMib": 8, "remoteDir": "/" } — uploads N MiB of test data, downloads it back, then deletes the test file. Returns { uploadBytesPerSec, downloadBytesPerSec, uploadMs, downloadMs, downloadedBytes } |

Observability (3)

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /logs | Last 1000 log entries |
| GET | /metrics | Prometheus scrape body (text/plain). Pre-auth — bypasses bearer-token middleware so scrapers don't need credentials; the loopback-only bind makes this safe. Surfaces: ftproxy_http_requests_total, ftproxy_http_request_duration_seconds, ftproxy_transfers_total, ftproxy_transfer_bytes_total, ftproxy_transfer_duration_seconds, ftproxy_queue_depth, ftproxy_sessions_connected |
| GET | /health | Rich health — returns status (ok / degraded), version, queue depth by status, last-minute transfer rate + error rate + bytes, session counts. Suitable for k8s liveness/readiness probes |

POST /session/connect body

{
  "protocol": "sftp",   // see "Supported protocols" above
  "host": "sftp.example.com",
  "port": 22,
  "username": "bot",
  "password": "optional",   // for OAuth protocols this is unused; for Dropbox this is the access_token
  "remotePath": "/inbox",
  "localPath": "C:/Transfers",
  "passiveMode": true,
  "secureDataChannel": true,
  "acceptAnyHostKey": true,
  "siteId": "optional-saved-site-uuid",
  "saveCredential": false,
  "extra": {                // protocol-specific config — see below
    "bucket": "my-bucket",
    "region": "us-east-1"
  }
}

extra field per protocol

| Protocol | Required keys | Optional keys |
| --- | --- | --- |
| s3 | bucket | region, endpoint, path_style (set for MinIO / R2 / DO Spaces / Wasabi) |
| azure | container | endpoint_suffix (default core.windows.net) |
| gcs | bucket, service_account_json (full JSON; routed to OS keychain on save, never persisted to sites.json) | — |
| webdav | — (host = full base URL) | — |
| dropbox | — (access token goes in the password field, keyringed) | access_token (legacy alias for the token; use password instead) |
| gdrive | client_id, oauth_key (keychain key from oauth_sign_in) | — |
| onedrive | client_id, oauth_key | tenant (default common) |
| sftp | — | logon_type (normal \| ask \| key); when key: key_path, key_passphrase |
| ftp | — | encryption (auto (default) \| explicit \| implicit \| plain), logon_type (normal \| anonymous \| ask) |
| ftps | — | encryption (explicit (default) \| implicit), logon_type (normal \| anonymous \| ask) |

Notes on encryption for FTP:

  • auto — try AUTH TLS, fall back to plain. Best-effort; the bridge logs which path won.
  • explicit / implicit — require TLS; connect fails if the server refuses.
  • plain — refuse TLS even if offered. Insecure; tooltip warns.

Notes on logon_type:

  • normal — username + password as supplied.
  • anonymous — bridge substitutes anonymous / anonymous@example.com.
  • ask — the caller (the JS modal) prompts the user and includes the password verbatim in the connect body; the bridge does nothing special.
  • key (SFTP only) — bridge reads key_path from extras, calls russh_keys::load_secret_key, authenticates via authenticate_publickey. Optional key_passphrase decrypts the key in-memory.
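For example, a plausible /session/connect body for SFTP public-key logon would combine the fields above (host and key path are placeholders):

```json
{
  "protocol": "sftp",
  "host": "sftp.example.com",
  "port": 22,
  "username": "bot",
  "remotePath": "/inbox",
  "extra": {
    "logon_type": "key",
    "key_path": "C:/Users/bot/.ssh/id_ed25519",
    "key_passphrase": "optional"
  }
}
```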

WebSocket event types

{ "type": "hello",              "data": { "service": "ftproxy-bridge" } }
{ "type": "session.changed",    "data": { "connected": true, "sessionId": "..." } }
{ "type": "sessions.changed",   "data": [ /* updated slot list */ ] }
{ "type": "remote.changed",     "data": { "path": "/inbox" } }
{ "type": "transfer.started",   "data": { "id": "...", "direction": "upload", ... } }
{ "type": "transfer.progress",  "data": { "id": "...", "bytesDone": 1234, "bytesTotal": 9999 } }
{ "type": "transfer.completed", "data": { ... } }
{ "type": "transfer.cancelled", "data": { ... } }
{ "type": "transfer.failed",    "data": { ... } }
{ "type": "sites.changed",      "data": [ ... ] }
{ "type": "bookmarks.changed",  "data": [ ... ] }
{ "type": "config.changed",     "data": { ... } }
{ "type": "hostkey.seen",       "data": { "host", "port", "algorithm", "fingerprint", "new": true|false } }
{ "type": "hostkey.mismatch",   "data": { "host", "port", "message" } }
{ "type": "log",                "data": { "at": 177700..., "level": "info", "message": "..." } }
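Because every event is tagged on type, a client can route frames through a small dispatcher. A transport-agnostic sketch — feed it decoded JSON frames from the WebSocket; unknown types are ignored on purpose so new server events never break an old client:

```python
def make_dispatcher(handlers: dict):
    """Build a dispatch function mapping event['type'] to a handler of data."""
    def dispatch(event: dict):
        handler = handlers.get(event.get("type"))
        if handler is not None:
            return handler(event.get("data"))
    return dispatch
```

Usage: `make_dispatcher({"transfer.progress": update_bar, "log": append_log})`, then call the returned function once per received frame.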

Tauri commands (callable from the embedded webview)

The OAuth flow for Google Drive / OneDrive can't go through REST — it needs to open the user's default browser and bind a temporary loopback listener for the redirect. Those steps live in Tauri commands invoked via window.__TAURI_INTERNALS__.invoke(name, args):

| Command | Args | Returns |
| --- | --- | --- |
| bridge_token | — | string (the bearer token) |
| bridge_url | — | string (REST base) |
| bridge_ws_url | — | string (WebSocket base) |
| read_winscp_sites | — | Vec<SitePayload> (Windows registry import — HKCU\Software\Martin Prikryl\WinSCP 2\Sessions) |
| read_putty_sites | — | Vec<SitePayload> (Windows registry import — HKCU\Software\SimonTatham\PuTTY\Sessions) |
| read_coreftp_sites | — | Vec<SitePayload> (Windows registry import — HKCU\Software\FTPware\CoreFTP[LE]\Sites) |
| read_smartftp_sites | — | Vec<SitePayload> (file-tree import — %APPDATA%\SmartFTP\Client 2.0\Favorites\) |
| read_cuteftp_sites | — | Vec<SitePayload> (Windows registry import — CuteFTP 7 / 8 / 9 site trees) |
| install_send_to_shortcut | — | string — drops a .lnk in %APPDATA%\Microsoft\Windows\SendTo\ pointing at ftproxy-cli.exe and returns the shortcut path. Windows-only |
| uninstall_send_to_shortcut | — | bool — true if a Send To shortcut existed and was deleted, false if there was nothing to remove |
| list_drives | — | Vec<Place> — every drive the OS reports (fixed / removable / network / CD), each with kind, label, path, drive (letter on Windows), volumeLabel |
| list_quick_locations | — | Vec<Place> — UserDirs entries (Documents, Downloads, Desktop, Pictures, Music, Videos, Public, Home) that exist on the user's profile |
| map_network_drive | { req: { unc, driveLetter?, username?, password? } } | { path, drive } — mounts an SMB share via net use (Win) / mount_smbfs (Mac) / mount -t cifs (Linux) |
| unmap_network_drive | { target: "Z" or "/Volumes/foo" } | bool — unmounts the drive / mount point |
| oauth_sign_in | { args: { provider, clientId, scopes, key } } | { ok, provider, scopes, expiresAt } — opens browser, blocks until redirect, persists token under key in keychain |
| oauth_status | { key } | { signedIn, provider, scopes, expiresAt, hasRefreshToken } |
| oauth_sign_out | { key } | { ok: true } — drops the keychain entry for key. Used by the Site Form's "Sign out" button |
provider is "google" or "microsoft". key is typically the SavedSite UUID, so each site has its own token slot.
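From the webview, the sign-in flow is two invokes. A JavaScript sketch — the invoke path and command names come from this document, but the client id is a placeholder and the exact shape of scopes (an array here) is an assumption:

```javascript
// Thin wrapper over the Tauri-internal invoke path documented above.
const invoke = (name, args) => window.__TAURI_INTERNALS__.invoke(name, args);

async function signIn(siteId) {
  // key = the SavedSite UUID, so each site gets its own token slot.
  const result = await invoke("oauth_sign_in", {
    args: {
      provider: "google",
      clientId: "<your-client-id>",
      scopes: ["https://www.googleapis.com/auth/drive"],
      key: siteId,
    },
  });
  // Confirm what was persisted for this key.
  const status = await invoke("oauth_status", { key: siteId });
  return { result, status };
}
```

oauth_sign_in blocks until the browser redirect lands on the temporary loopback listener, so await it on a user gesture, not during startup.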

Tests

| Suite | Command | Scope | Current |
| --- | --- | --- | --- |
| Rust unit tests | cargo test --lib | Pure logic, handler-level via tower::oneshot, OAuth scaffolding, FTPS mode parsing, transport contracts, token redaction, hash-query deserialisation, registry-import format mapping, Azure NextMarker + GCS nextPageToken roundtrip, OverwritePolicy wire/back-compat, persistent-queue interrupt-on-restart, throttle pacing, SmartFTP/CuteFTP format mapping — no network | 134/134 PASS |
| Frontend tests | npm test | Vitest + jsdom. Per-protocol action-policy contract (Verify/Compare visibility matrix), pure helpers, FTPS mode parser, native-menu / registry-import dispatch contract | 39/39 PASS |
| Live endpoint suite | .\scripts\test-endpoints.ps1 | Full HTTP round-trip against the configured SFTP site | 25/25 PASS |