
Ollama on Brev

Get Started with Ollama!

Let's launch Ollama on Brev with just one command

First off, what is Ollama?

Ollama is an open-source tool that democratizes the use of LLMs by enabling users to run them locally on their own machines. It simplifies the complex process of setting up an LLM by bundling model weights, configuration, and data into a unified "Modelfile." This approach not only makes these models more accessible but also optimizes their performance, especially in CPU environments. A key advantage of Ollama is that it also runs efficiently on GPU-accelerated cloud infrastructure. By leveraging the power of GPUs, Ollama can process and generate text at lightning-fast speeds, making it an ideal choice for applications that require real-time or high-throughput language processing.
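To make that concrete, here's a minimal Modelfile sketch (the base model name and parameter values are illustrative, not prescriptive):

FROM llama2
# Sampling temperature: higher values make output more creative
PARAMETER temperature 0.7
# System prompt applied to every request
SYSTEM "You are a concise, helpful assistant."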

Why run Ollama on Brev.dev?

Brev allows users to easily provision a GPU and set up a Linux VM. This setup is ideal for running sophisticated models via Ollama, providing a seamless experience from model selection to execution.

Together, Ollama and Brev.dev offer a powerful combination for anyone looking to use LLMs without the traditional complexities of setup and optimization. Let's dive into how to get started with Ollama on Brev!

1. Create an account

Make an account on the Brev console.

2. Launch an instance

Open your terminal and install the Brev CLI:

brew install brevdev/homebrew-brev/brev && brev login

Check out the installation instructions if you need help.

Now run the following command to launch Ollama with a specific model:

brev ollama -m <model name>

You can see the full list of available models here.
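For example, to launch an instance running Llama 2 (using llama2 as an illustrative model name from that list):

brev ollama -m llama2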

Hang tight for a couple of minutes while we provision an instance and load Ollama onto it!

3. Use your Ollama endpoint!

Once the instance is ready, we'll print a curl command in your terminal that you can use to call your Ollama endpoint.
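It will look something like this sketch, assuming Ollama's default API port of 11434 and its /api/generate endpoint; the hostname below is a placeholder that Brev fills in for you:

curl http://<your-instance-url>:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'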

You just deployed Ollama with one command!

Working with Ollama gives you a quick way to get a model running. We'll be adding a lot more support for Ollama in the coming months - if you have any special requests, feel free to email us at eng@brev.dev and we'll be sure to add it as a feature!

🤙🦙🤙🦙🤙🦙🤙🦙🤙🦙🤙🦙🤙🦙🤙🦙🤙🦙🤙🦙🤙🦙🤙🦙