storage · volumes · sync · ai-agents · cross-platform

# Sync a Local Workspace with a Cloud Sandbox for AI Agents

Sandbox0 Team

Every time you hand a task to an AI agent running in a remote sandbox, you face the same logistics problem: how does the agent see your actual codebase? And when it's done making changes, how do those edits get back to you?

The usual answers are all variants of the same workaround. Push to Git, have the agent clone, let the agent make changes, pull the diff back locally. Or zip the relevant files, upload them, unzip inside the sandbox, download the result. Or — worse — copy the file contents into the context window and reconstruct the changes by hand.

All of these break down at different thresholds. Git workflows introduce round-trip latency and require clean working trees. File uploads are manual and don't stay in sync. Context window transfers are expensive and size-limited: passing a 200KB file costs roughly 50,000 tokens, and files larger than the context limit can't be passed at all.
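The 50,000-token figure is back-of-envelope arithmetic: at the commonly cited rule of thumb of roughly four characters per token (an approximation; real tokenizers vary by model and content), a 200KB file works out to about 50,000 tokens:

```python
def estimated_tokens(file_size_bytes: int, chars_per_token: float = 4.0) -> int:
    # Back-of-envelope estimate: one token per ~4 characters of text.
    # The 4-chars/token figure is a rule of thumb, not an exact tokenizer model.
    return int(file_size_bytes / chars_per_token)

print(estimated_tokens(200 * 1024))  # a 200KB file -> roughly 51,200 tokens
```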

s0 sync takes a different approach. Instead of moving files between environments, it keeps one Volume attached to both at the same time.

## What s0 sync Is

s0 sync is Sandbox0's mechanism for keeping a local directory continuously in sync with a cloud-hosted Volume. Once a local workspace is attached to a Volume, local file changes upload to the Volume automatically; writes made inside a sandbox or by an AI agent replay back to the local directory. The same Volume can be mounted into one or more running sandboxes simultaneously, so the agent works on the real project tree — not a copy of it.

The sync is bidirectional and journal-based. Every write to the Volume — whether it originates locally or inside a sandbox — is appended to a durable journal. Local replicas advance through that journal continuously. When a new machine attaches to an existing Volume, it bootstraps from the current Volume state and then replays any journal entries that arrived since. There is no "primary" side and no manual merge step.
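The journal model can be sketched as an append-only log that every replica replays from its own cursor. This is an illustrative approximation of the idea, not Sandbox0's actual implementation (all class and method names here are invented):

```python
from dataclasses import dataclass, field

@dataclass
class WriteEntry:
    seq: int      # position in the durable journal
    path: str
    data: bytes

@dataclass
class Volume:
    journal: list = field(default_factory=list)

    def append(self, path: str, data: bytes) -> None:
        # Every write, local or sandbox-originated, lands in the same log.
        self.journal.append(WriteEntry(len(self.journal), path, data))

@dataclass
class Replica:
    cursor: int = 0                      # last journal position applied
    files: dict = field(default_factory=dict)

    def catch_up(self, volume: Volume) -> None:
        # Replay every entry that arrived since the last sync,
        # regardless of which side originally wrote it.
        for entry in volume.journal[self.cursor:]:
            self.files[entry.path] = entry.data
        self.cursor = len(volume.journal)

vol = Volume()
laptop, sandbox = Replica(), Replica()

vol.append("src/app.py", b"print('hi')")      # local edit
laptop.catch_up(vol); sandbox.catch_up(vol)

vol.append("src/app.py", b"print('fixed')")   # agent edit in the sandbox
laptop.catch_up(vol)
print(laptop.files["src/app.py"])             # b"print('fixed')"
```

Because every replica converges by replaying the same ordered log, no side needs to be designated "primary".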

## Getting Started

The basic workflow has three steps: attach a local directory to a Volume, mount the same Volume into a sandbox, and work normally from either side.

Step 1: Attach the local workspace.

If you already have a local project directory:

```bash
s0 sync attach vol_abc123xyz ~/work/my-project
```

If you are on a new machine and want to pull the existing Volume contents down first:

```bash
mkdir -p ~/work/my-project
s0 sync attach vol_abc123xyz ~/work/my-project --init-from volume
```

--init-from volume is the safe choice when the remote Volume already has content you want before making any local changes.

Step 2: Mount the same Volume into a sandbox.

```bash
s0 sandbox volume mount \
  --volume-id vol_abc123xyz \
  --path /workspace \
  --sandbox-id sb_abc123xyz
```

The sandbox now sees the same files at /workspace that you are editing locally at ~/work/my-project.

Step 3: Work from either side.

From this point forward:

  • Local edits upload to the Volume and become visible inside the sandbox at /workspace
  • Agent or sandbox writes to /workspace replay back into ~/work/my-project
  • No manual export, download, or re-sync is needed between steps

## The AI Agent Collaboration Pattern

The most useful property of s0 sync for agent workflows is that you can review the agent's work in your local editor while the agent is still running.

A typical loop looks like this:

  1. Attach your project locally with s0 sync attach
  2. Mount the same Volume into a sandbox at /workspace
  3. Start an AI agent task inside the sandbox — a refactor, a test run, a code generation pass
  4. Watch the agent's file changes appear locally in your editor as they land
  5. Make corrections locally; those corrections are visible to the agent in the sandbox within seconds

The agent reads and writes real project files through the standard POSIX interface. There is no special API, no serialization protocol, and no context window transfer needed to hand the codebase to the agent or to receive its output. If the agent modifies ten files across five directories, all ten changes replay locally as discrete file operations — not as a diff to interpret or a zip to unpack.
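To make the "no special API" point concrete: from inside the sandbox, an agent manipulates the workspace with nothing more than ordinary file I/O. The snippet below uses a temporary directory as a stand-in for the `/workspace` mount:

```python
import tempfile
from pathlib import Path

# Stand-in for the /workspace mount inside a real sandbox.
workspace = Path(tempfile.mkdtemp())

# An ordinary write: no sync API, no serialization step.
target = workspace / "notes.md"
target.write_text("refactor complete\n")

# An ordinary rename; on a real Volume this too would replay locally
# as a discrete file operation.
target.rename(workspace / "notes.done.md")
print(sorted(p.name for p in workspace.iterdir()))  # ['notes.done.md']
```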

This pattern also composes with the rest of the Sandbox0 volume model. You can snapshot the Volume before handing it to an agent, let the agent work on the live Volume, and restore the snapshot if the result is not what you wanted — without the agent needing to know any of this is happening.

## Cross-Platform: macOS, Linux, and Windows

s0 sync is designed to move the same workspace across machines and operating systems without silent corruption.

When a local replica attaches, it registers its filesystem capabilities with the Volume. Sandbox0 uses those capabilities to reject path mutations that are not safe for that replica's platform before they are written — producing an explicit sync conflict rather than a broken checkout.

The cross-OS workflow is the same in every direction:

Start work on macOS:

```bash
s0 sync attach vol_abc123xyz ~/work/my-project
```

Continue on a Linux workstation:

```bash
s0 sync attach vol_abc123xyz ~/work/my-project --init-from volume
```

Attach on Windows for review or edits:

```bash
s0 sync attach vol_abc123xyz C:\Users\you\my-project --init-from volume
```

Mount the Volume into a Linux sandbox regardless of which machine initiated the sync:

```bash
s0 sandbox volume mount --volume-id vol_abc123xyz --path /workspace -s sb_abc123xyz
```

Path patterns to avoid across platforms:

| Pattern | Why it causes problems |
| --- | --- |
| `Foo` and `foo` in the same directory | Case-insensitive filesystems (the macOS APFS and HFS+ defaults, Windows NTFS) collapse them to one entry |
| Unicode-equivalent names that look identical | Different normalization forms can produce the same visual name with different bytes |
| Names ending in `.` or trailing spaces | Windows rejects them at the filesystem layer |
| Windows reserved device names (`CON`, `PRN`, `AUX`, `NUL`, `COM1`–`COM9`, `LPT1`–`LPT9`) | These names cannot exist as files on any Windows path |

When Sandbox0 detects a namespace conflict between what a sandbox wrote and what is safe for a registered local replica, it records a sync conflict instead of silently creating an unreadable path. The conflict is surfaced through s0 sync conflicts list and can be resolved before the write propagates.
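A much-simplified version of that kind of pre-write check might look like the following. This is hypothetical logic with invented names, not Sandbox0's actual validator, and it omits the Unicode-normalization case:

```python
# Device names Windows reserves regardless of extension.
WINDOWS_RESERVED = {"CON", "PRN", "AUX", "NUL",
                    *(f"COM{i}" for i in range(1, 10)),
                    *(f"LPT{i}" for i in range(1, 10))}

def portability_problem(name: str, existing: set) -> str:
    """Return a reason string if `name` is unsafe for some registered
    replica platform, or an empty string if it is portable."""
    if name.split(".")[0].upper() in WINDOWS_RESERVED:
        return "Windows reserved device name"
    if name.endswith(".") or name.endswith(" "):
        return "trailing dot or space is rejected by Windows"
    if name not in existing and name.lower() in {n.lower() for n in existing}:
        return "case-insensitive collision with an existing entry"
    return ""

print(portability_problem("NUL.txt", set()))      # Windows reserved device name
print(portability_problem("foo", {"Foo"}))        # case-insensitive collision ...
print(portability_problem("README.md", {"src"}))  # (portable: empty string)
```

A check like this runs on the server before the write propagates, which is why the failure surfaces as a sync conflict rather than a broken checkout on the affected machine.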

## s0 sync vs. the Alternatives

| Approach | Bidirectional | Agent-native | Cross-platform safe | Works while agent is running |
| --- | --- | --- | --- | --- |
| s0 sync | Yes | Yes | Yes | Yes |
| Git push / pull | Yes | Partial | Yes | No — requires commit and push |
| rsync / scp | Manual | No | Partial | No — point-in-time copy |
| Context window transfer | No | Yes | Yes | No — static snapshot |
| devcontainer bind mount | Yes | Yes | Partial | Yes — but local-only, no cloud |

s0 sync is the only option in this table that is bidirectional, requires no agent code changes, handles cross-platform path constraints, and stays live while the agent is running.

## Daily Commands

After a workspace has been attached, s0 sync infers the workspace root from your current directory.

```bash
# Check what is syncing and whether the worker is healthy
s0 sync status

# Follow the sync worker log in real time
s0 sync logs -f

# List all locally attached sync workspaces on this machine
s0 sync list

# List unresolved sync conflicts
s0 sync conflicts list

# Inspect a specific conflict
s0 sync conflicts show path/to/file

# Mark a conflict resolved after you have repaired the local copy
s0 sync conflicts mark path/to/file

# Stop syncing the current workspace without deleting the Volume
s0 sync detach
```

One operational note: unresolved conflicts pause local uploads. If s0 sync status shows the worker blocked, s0 sync conflicts list is the fastest way to understand why. After you fix the local path and mark the conflict, the upload queue drains automatically.

If a local machine falls too far behind the retained sync journal — because it was offline for a long time — the worker detects this and bootstraps from the current Volume state before resuming replay. This is automatic; you do not need to manually trigger a resync.
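That fallback can be sketched by extending the journal idea: when the replica's cursor has fallen behind the oldest retained journal entry, it copies the current Volume state instead of replaying entry by entry. This is illustrative only, with invented names:

```python
def catch_up(cursor, files, journal, journal_start, volume_state):
    """journal holds only entries with seq >= journal_start; older
    entries have been pruned from the retained window."""
    if cursor < journal_start:
        # Too far behind: bootstrap from a snapshot of the current
        # Volume state, then resume replay from the retained head.
        files = dict(volume_state)
        cursor = journal_start
    for seq, path, data in journal[cursor - journal_start:]:
        files[path] = data
    return journal_start + len(journal), files

# A replica that was offline while entries 0-4 were pruned:
journal = [(5, "a.txt", "v6"), (6, "b.txt", "v1")]
cursor, files = catch_up(2, {}, journal, 5, {"a.txt": "v5"})
print(cursor, files)  # 7 {'a.txt': 'v6', 'b.txt': 'v1'}
```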

## FAQ

Does the agent need to change its code to use s0 sync?

No. From the agent's perspective, /workspace is a normal directory. Every POSIX file operation — open, read, write, rename, stat — works through the standard interface. The storage-proxy translates those operations to JuiceFS calls transparently. The agent does not need to know it is working on a distributed filesystem.

What happens if a sandbox crashes while the agent is writing?

The Volume is unaffected. Sandbox crashes disconnect the pod from the mount, but the Volume data written up to that point is durable — it is stored in JuiceFS, backed by S3 and PostgreSQL, outside the sandbox pod lifecycle. Mount the Volume to a new sandbox to continue from where the crash occurred.

How does this interact with conflicts between local edits and agent edits?

s0 sync does not perform content merging. If you and an agent write to the same file simultaneously and the writes conflict at the filesystem level, Sandbox0 records a sync conflict. You resolve it by editing the local file to the desired state and running s0 sync conflicts mark. For active collaboration with an agent, the cleanest pattern is to work in different directories or establish an informal turn-taking convention in your prompts.

Does s0 sync work on Windows?

Yes. The s0 sync attach command works on Windows, and Sandbox0 registers the replica's filesystem capabilities (including Windows path restrictions) at attach time so the server can reject non-portable paths before they reach your machine.

How is this different from a devcontainer bind mount?

A devcontainer bind mount shares your local directory into a container running on the same machine. It requires Docker, works only for that one local machine, and does not persist the workspace independently of your machine or that container. s0 sync stores the workspace in a cloud-hosted Volume backed by JuiceFS. The Volume exists independently of any machine or sandbox — you can attach from any machine, mount into any sandbox in any region, and the workspace is available whether your laptop is on or off.

Can I run multiple agents against the same Volume simultaneously?

Yes. Mount the same Volume into multiple sandboxes at the same time using RWX access mode. Each agent sees the live filesystem state including writes from other agents. As with any concurrent filesystem, you are responsible for write coordination at the application level — for example, having agents write to separate output directories or use a lock file for coordination. The filesystem does not enforce write ordering between concurrent writers.
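One way to implement the lock-file convention mentioned above is a POSIX advisory lock. This is an illustrative pattern, not a Sandbox0 feature: `fcntl.flock` works inside a Linux sandbox, but advisory-lock semantics can differ on network filesystems, and the helper names here are invented:

```python
import fcntl
import os
import tempfile

def with_workspace_lock(lock_path, fn):
    """Run fn() while holding an exclusive advisory lock on lock_path.
    A second agent calling this blocks until the first releases it."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until the lock is free
        return fn()
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

# Example: two "agents" append to a shared log without interleaving.
log_path = os.path.join(tempfile.mkdtemp(), "results.log")
lock_path = log_path + ".lock"

def append_line(text):
    with open(log_path, "a") as f:
        f.write(text + "\n")

with_workspace_lock(lock_path, lambda: append_line("agent-1 done"))
with_workspace_lock(lock_path, lambda: append_line("agent-2 done"))
print(open(log_path).read())
```

Because the lock file lives on the shared Volume, every sandbox mounting it contends for the same lock.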


s0 sync configuration and the underlying sync protocol are documented in the s0 sync section of the Sandbox0 docs. For a broader view of the volume model — persistent storage, snapshots, copy-on-write forks, and multi-agent sharing — see Persistent Storage for AI Agent Sandboxes.