In this guide, we show how you can run the Riffusion model, its inference API, and a Next.js frontend out of the box with our template.
Riffusion is a fine-tuned Stable Diffusion model trained to generate spectrograms from text descriptions of music. Those spectrograms are then converted into audio. It was created by Seth Forsgren and Hayk Martiros.
To get started, hit this link to create a new Brev environment.
We've preset it with the config you'll need, including a Tesla T4 GPU, the cheapest instance type that works with Riffusion.
1) Open your remote environment
brev open riffusion --wait
If you don't have the Brev CLI, you can install it here
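Once you're inside the environment, it's worth sanity-checking that the T4 is actually visible to the NVIDIA drivers before starting the server. A minimal check (the fallback message is just for machines without a GPU attached):

```shell
# Sanity-check that the GPU is visible to the NVIDIA drivers. If no GPU is
# attached (e.g. on a CPU instance), print a message instead of failing.
nvidia-smi --query-gpu=name,memory.total --format=csv 2>/dev/null || echo "No NVIDIA GPU visible"
```

On the template's instance this should list the Tesla T4 and its memory.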
2) Start the inference server
Run the following commands to start the inference API:
cd ~/riffusion-inference
conda activate riffusion-inference
python -m riffusion.server --port 3013 --host 127.0.0.1
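If you'd rather hit the inference API directly instead of going through the frontend, you can POST a JSON body to the server. The endpoint path and field names below are our best reading of the riffusion-inference source (riffusion/server.py) — treat them as assumptions and check the code for the exact schema:

```shell
# Hypothetical request payload -- the field names and the /run_inference/
# endpoint are assumptions based on riffusion/server.py; verify in the source.
cat > /tmp/riffusion_request.json <<'EOF'
{
  "alpha": 0.75,
  "num_inference_steps": 50,
  "seed_image_id": "og_beat",
  "start": {"prompt": "church bells on sunday", "seed": 42, "denoising": 0.75, "guidance": 7.0},
  "end": {"prompt": "jazz with piano", "seed": 123, "denoising": 0.75, "guidance": 7.0}
}
EOF

# With the server from this step running, send the request:
# curl -X POST http://127.0.0.1:3013/run_inference/ \
#   -H "Content-Type: application/json" \
#   -d @/tmp/riffusion_request.json
```

The `alpha` field interpolates between the `start` and `end` prompts, which is how the frontend produces smooth transitions between clips.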
3) Start the Next.js frontend
Then in a new terminal:
cd ~/riffusion-app
npm run dev
And bam, head over to http://localhost:3000 to start generating music!
Exposing ports publicly
To expose your app to the world, go to the Brev console and, under your environment's settings, scroll to Public Ports and expose port 3000.
Changing instance type
With Brev, you can scale between instance types super easily:
brev scale riffusion --gpu g5.2xlarge # change to an A10G instance
brev scale riffusion --cpu 2x8 # change to a CPU instance