# josephversace/IIM.Setup
## “Go-by” for replication & cloud-drive

Use these as operator runbooks you can ship with the appliance.

### A) Active-active (filer ↔ filer)

```sh
# One-shot, or put this behind the profile we created
weed filer.sync -a filerA.example:8888 -b filerB.example:8888 -concurrency 16
```

Active-passive: add `-isActivePassive` (A→B only). This is the same flag we map from `SyncProfile.IsActivePassive`.

### B) Cloud-drive (S3/GCS/Azure/etc.) as primary with write-back cache

1. Configure the remote storage (run in `weed shell`):

   ```sh
   # S3 example (set endpoint/region/keys to your provider)
   remote.configure -name=aws1 -type=s3 -s3.access_key=AKIA... -s3.secret_key=... \
     -s3.endpoint=s3.us-east-1.amazonaws.com -s3.force_path_style=true
   ```

2. Mount a bucket path into the filer’s namespace:

   ```sh
   # Mount bucket "cases-prod" under /buckets/cases-prod
   remote.mount -dir=/buckets/cases-prod -remote=aws1/cases-prod -dirAutoCreate
   ```

3. Start continuous write-back:

   ```sh
   # This uploads changes from the local cache to the remote
   weed filer.remote.sync -filer localhost:8888 -dir=/buckets/cases-prod -concurrency 16

   # Optional initial catch-up from the last N hours:
   # weed filer.remote.sync -filer localhost:8888 -dir=/buckets/cases-prod -timeAgo=24h
   ```

These commands match what we generate in the profile when `BucketDir` is provided.

Why this matters: Cloud Drive makes S3 the system of record while SeaweedFS provides a fast local cache and an S3 API. That satisfies your “primary, not just backup” requirement.

## Security notes (for your docs)

### Surface & auth

- Expose `opsd` only on the appliance admin LAN (firewalled). Require TLS, plus mTLS or OIDC for AuthN.
- Enable the included `OpsAdmin` policy and back it with your IdP (OIDC) or a device-local JWT issuer (OpenIddict).
- Rate limiting is enabled (tune thresholds per environment).

### Least privilege

- Run `weed` and the sync wrapper as the `seaweed` user, not root.
- Write sensitive files only under allow-listed roots (enforced by `AllowListService`).
- Unit files & env files are created with mode `0600` (owner-only) and atomically replaced.

### Input validation

- `SyncProfile` validates: names, `host:port`, directories, bucket names, numeric concurrency, and `timeAgo` grammar.
- Paths must pass the allow-list check.

### Command execution

- No shell interpretation; arguments use `ArgumentList`, eliminating injection via spaces/quotes.
- Bounded wait with kill-on-timeout; logs include stdout/stderr for incident review.

### Logging & audit

- Avoid logging secrets. If you must pass credentials, prefer:
  - `~/.aws/credentials` or IAM roles for S3-style access, where possible
  - Kubernetes/WSL secrets (if deployed that way)

### Certificates

- Terminate TLS on Kestrel or a local reverse proxy. Pin the admin CA in the UI.
- If you later expose the Seaweed S3 gateway externally, terminate TLS there too (or put it behind your IAM gateway).
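The continuous write-back command in section B is typically run as a long-lived, auto-restarting service owned by the `seaweed` user. A minimal systemd unit sketch, assuming the unit name, the `/usr/local/bin/weed` path, and the filer address/bucket dir from the example above (the real generated unit may differ):

```ini
# /etc/systemd/system/weed-remote-sync.service  (hypothetical name and path)
[Unit]
Description=SeaweedFS cloud-drive write-back sync for /buckets/cases-prod
After=network-online.target
Wants=network-online.target

[Service]
# Least privilege: run as the seaweed user, not root
User=seaweed
Group=seaweed
# Same command as in the runbook; adjust the filer address and dir to your deployment
ExecStart=/usr/local/bin/weed filer.remote.sync -filer localhost:8888 -dir=/buckets/cases-prod -concurrency 16
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Install the file with mode `0600`, then `systemctl daemon-reload && systemctl enable --now weed-remote-sync`.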
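The `SyncProfile` input checks above (`host:port`, `timeAgo` grammar) can be sketched in portable shell. These are illustrative helpers, not the actual validation code: the function names are ours, and the `timeAgo` grammar is simplified to a single integer-plus-unit token (e.g. `24h`):

```shell
#!/bin/sh
# Hypothetical sketches of the SyncProfile input checks; names and
# exact grammars are assumptions, not the appliance's real code.

# Accept "host:port" with a plain hostname/IP and a port in 1..65535.
valid_hostport() {
  case "$1" in
    *:*) host=${1%:*}; port=${1##*:} ;;
    *)   return 1 ;;                      # no colon at all
  esac
  printf '%s' "$host" | grep -Eq '^[A-Za-z0-9.-]+$' || return 1
  printf '%s' "$port" | grep -Eq '^[0-9]+$'         || return 1
  [ "$port" -ge 1 ] && [ "$port" -le 65535 ]
}

# Accept a simplified -timeAgo grammar: integer plus h/m/s unit (e.g. 24h).
valid_timeago() {
  printf '%s' "$1" | grep -Eq '^[0-9]+(h|m|s)$'
}
```

Rejecting bad input before it ever reaches `ArgumentList` keeps the command-execution layer from seeing malformed flags at all.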