
How & why I stopped using Docker on my Mac

Alec Fong

July 8, 2022 · 10 min read

Why you shouldn't run Docker on your Mac

Sometimes Docker on your laptop sucks. It’s inconvenient to install on a new Apple Silicon Mac and casually eats 11.23 GB of RAM. Docker on macOS incurs significant performance penalties because it runs inside a VM; Docker on Linux does not suffer from this constraint. Here’s one weird trick to improve Mac performance that Docker doesn’t want you to know about.

Run Docker remotely in 2 easy steps:

  • Run Docker daemon on a remote computer
  • Connect your local Docker client to the remote daemon

Run Docker daemon on a remote computer

You will need a remote computer with SSH access. Most cloud providers have guides on how to set one up (AWS, Azure, GCP). Next, install Docker on this remote computer.
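For convenience, you can give the remote machine a short alias in your ~/.ssh/config so you don’t have to spell out the address and key each time. A minimal sketch (the host alias, address, user, and key path below are hypothetical placeholders; substitute your own):

```
Host docker-remote
    HostName 203.0.113.7
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
```

With an entry like this, `ssh docker-remote` works directly, and the alias can later be used anywhere an SSH connection string is expected.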

A simple way to do this

If you’d like a pre-configured Linux computer, you can use Brev, which has SSH, Docker, and docker-compose installed by default. You will need to install the Brev CLI.

Connect your local Docker client to the remote daemon

For the tutorial’s sake, I will use a Brev environment named “brev-environment” as my remote computer.

The Docker client can connect to a remote daemon via the DOCKER_HOST environment variable. Out of the box, Docker supports SSH as a connection protocol.

export SSH_CONNECTION_STRING=brev-environment-b4qd
export DOCKER_HOST=ssh://${SSH_CONNECTION_STRING}
docker ps

Heads up

If you are not using Brev, your SSH_CONNECTION_STRING will look more like user@X.X.X.X

Key Verification Failed?

If you get a key verification failed error, you may need to run ssh ${SSH_CONNECTION_STRING} once and accept the key fingerprint.

You can also pass the Docker host as a CLI argument instead of exporting it:

docker -H ssh://${SSH_CONNECTION_STRING} ps

Improve Client Performance

The Docker SSH protocol creates a new connection for each command, which can be slow. You can reuse a single SSH connection by forwarding the remote Docker endpoint to your local computer. By default, the Docker API is exposed as a UNIX socket at /var/run/docker.sock.

export CONTEXT_NAME=remote-example
export LOCAL_SOCKET_DIR=${HOME}/remote-docker-sockets/${CONTEXT_NAME}
export LOCAL_SOCKET_PATH=${LOCAL_SOCKET_DIR}/docker.sock
mkdir -p ${LOCAL_SOCKET_DIR}
ssh -L ${LOCAL_SOCKET_PATH}:/var/run/docker.sock ${SSH_CONNECTION_STRING} -N &

Docker.sock: Address already in use?

If you get docker.sock: Address already in use, run rm ${LOCAL_SOCKET_PATH} and retry.

Change the Docker host to point to the locally forwarded UNIX socket.
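Continuing the example above, a minimal sketch, assuming the remote socket was forwarded to a docker.sock file under the remote-docker-sockets directory created earlier:

```shell
# path of the locally forwarded socket file (matches the ssh -L target above)
export LOCAL_SOCKET_PATH=${HOME}/remote-docker-sockets/remote-example/docker.sock
# point the Docker client at the forwarded UNIX socket instead of ssh://
export DOCKER_HOST=unix://${LOCAL_SOCKET_PATH}
echo ${DOCKER_HOST}
# docker commands from here on reuse the single ssh tunnel
```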


Globally configure the remote connection with Docker contexts

To permanently use this configuration, you can export the DOCKER_HOST environment variable in your shell config (.zshrc, .bashrc), or you can use Docker contexts, which make it easier to switch between different Docker hosts.

export CONTEXT_NAME=remote-example
docker context create ${CONTEXT_NAME} --docker "host=${DOCKER_HOST}"
docker context ls # see all contexts
docker context use ${CONTEXT_NAME} # set as global default

Use Docker!

You can now use Docker as if it’s running on your local computer!

Set your Docker context.

docker context use ${CONTEXT_NAME} # set context
docker run hello-world # run image
git clone docker-example && cd docker-example
docker-compose up # run docker-compose services
docker build . # build images

Pushing, pulling, and running large Docker images is vastly improved since you can take advantage of data center network speed!

docker run hello-world

Docker containers can be built faster on remote computers since you can provision powerful machines in the cloud. You can also trust that your lengthy builds won’t be interrupted by a dead battery or wifi outage!

git clone docker-example && cd docker-example # example repo
docker build .

Run your dev helper services.

# working directory = docker-example
docker-compose up -d
docker-compose exec nginx sh

Bad CPU type in executable?

If you get fork/exec /usr/local/bin/docker-compose-v1: bad CPU type in executable, you may need to run softwareupdate --install-rosetta on Apple Silicon.

Next Steps & Tradeoffs

Running remote Docker isn’t entirely a free lunch.


One of the first things you may want to do is connect to one of your Docker containers over the network. Since the container runs on a remote computer, you cannot simply access it at localhost:X000. You must forward the desired ports from the remote computer to your local computer to access them.

Port forwarding can be achieved with SSH.

export REMOTE_PORT=3002
export LOCAL_PORT=3002
docker run -p ${REMOTE_PORT}:80 -d nginx
ssh -L ${LOCAL_PORT}:localhost:${REMOTE_PORT} ${SSH_CONNECTION_STRING} -N &
curl localhost:${LOCAL_PORT}

Port forward with the Brev CLI

If you are using Brev, you can also use the port-forward command:

brev port-forward -p ${LOCAL_PORT}:${REMOTE_PORT} ${BREV_NAME}

Another issue you may encounter is building Docker images with large local contexts. Moving large files over the network can result in poor performance. Moving more of your development assets onto the remote computer fixes this problem and is the standard approach at Brev.


Since Docker is running on a remote computer, the Docker daemon creates volumes and bind mounts on the remote filesystem, not your local one. You can use the following tools if you intend to sync files between your local computer and the remote one.

  • scp
  • sshfs
  • rsync
  • mutagen

Besides using these tools, moving more of your development assets into the remote computer removes many file sync issues.

Extra Management and Context

Working across multiple computers (local and remote) increases cognitive load. You need to know, in all contexts, what filesystem and network you’re currently working on and how to interface between them. Managing connections and keys yourself can quickly become tedious. Finally, I want to mention that running cloud computers can become quite costly if you are not actively managing and tracking computer type and usage.

If these problems interest you check out Brev; it’s what we work on full-time!
