How to run Replicate/Cog models with Brev

In this guide, we show you how easy it is to use Cog environments with Brev. Everything is pre-configured, so there's no hassle getting set up!

What is Cog?

Cog is a tool that abstracts away Docker. It lets you package your ML model into a production-ready container without having to touch Docker yourself. Here at Brev, we think it's quite awesome!
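
With Cog, your model lives behind a small Python class. Alongside a cog.yaml file that declares your system and Python dependencies, you write a predict.py roughly like the sketch below. This is a simplified illustration rather than the exact file from the Stable Diffusion repo; the model ID and cache directory are placeholders.

from cog import BasePredictor, Input, Path
import torch
from diffusers import StableDiffusionPipeline

class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts: load the cached weights onto the GPU
        self.pipe = StableDiffusionPipeline.from_pretrained(
            "diffusers-cache", torch_dtype=torch.float16
        ).to("cuda")

    def predict(self, prompt: str = Input(description="Text prompt")) -> Path:
        # Runs per request: generate one image and return it as a file
        image = self.pipe(prompt).images[0]
        image.save("/tmp/output.png")
        return Path("/tmp/output.png")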

Brev will configure a VM for you with all the dependencies you need to run the model. By default, it's configured with an NVIDIA A10G. If you want to use a different GPU, you can change it in the Brev console, but bear in mind that Stable Diffusion requires a GPU with 24GB of VRAM.

Downloading the model:

cog run script/download-weights

(If you're running in a container, prefix each cog command with "sudo".)
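
This runs the repo's weight-download script inside the Cog environment. The exact script varies by repo, but for Stable Diffusion it typically does something like the sketch below; the model ID and cache directory are illustrative, so check the actual script in the repo you cloned.

# Sketch of a download-weights script: fetch the Stable Diffusion weights once
# and save them to a local cache directory so they get baked into the image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model ID
    torch_dtype=torch.float16,
)
pipe.save_pretrained("diffusers-cache")  # placeholder cache directory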

Running inference:

cog predict -i prompt="monkey scuba diving"

Deploying the model:

cog login
cog push r8.im/your-username/your-model
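
Once the push completes, the model is live on Replicate and can be called from any Python environment. Here's a minimal sketch, assuming you've installed the replicate package and set REPLICATE_API_TOKEN; replace the model name with the one you pushed.

import replicate

# Call the model you just pushed. Depending on your client version you may
# need to append ":<version-id>" to the model name.
output = replicate.run(
    "your-username/your-model",
    input={"prompt": "monkey scuba diving"},
)
print(output)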

If you want to do more things with Cog, check out Replicate's docs.