Machine learning doesn’t need to be so hard.

Run models in the cloud at scale.

01
Run

Replicate lets you run machine learning models with a few lines of code, without needing to understand how machine learning works.

Use our Python library:

import replicate
output = replicate.run(
    "stability-ai/sdxl:2b017d9b67edd2ee1401238df49d75da53c523f36e363881e057f5dc3ed3c5b2",
    input={"prompt": "an astronaut riding a rainbow unicorn"},
)

You can also query the API directly with your tool of choice:

$ curl -s -X POST \
    -H "Authorization: Token $REPLICATE_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"version": "2b017d9b67edd2ee1401238df49d75da53c523f36e363881e057f5dc3ed3c5b2", "input": {"prompt": "an astronaut riding a rainbow unicorn"}}' \
    https://api.replicate.com/v1/predictions
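
Note that creating a prediction over HTTP is asynchronous: the POST returns immediately with an id and a status, and you poll the prediction until it finishes. A minimal polling sketch in Python using requests (the id, status, and output field names come from the API's JSON responses):

import os
import time

import requests

headers = {
    "Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create the prediction (same request as the curl example above).
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "2b017d9b67edd2ee1401238df49d75da53c523f36e363881e057f5dc3ed3c5b2",
        "input": {"prompt": "an astronaut riding a rainbow unicorn"},
    },
).json()

# Poll until the prediction reaches a terminal state.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(1)
    prediction = requests.get(
        f"https://api.replicate.com/v1/predictions/{prediction['id']}",
        headers=headers,
    ).json()

print(prediction["output"])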

Thousands of models, ready to use

Machine learning can do some extraordinary things. Replicate's community of machine learning hackers has shared thousands of models that you can run.

Image to text

Models that generate text from images

Text to image

Image and video generation models trained with diffusion processes

Explore models, or learn more about our API

02
Push

You're building new products with machine learning. You don't have time to fight Python dependency hell, get mired in GPU configuration, or cobble together a Dockerfile.

That's why we built Cog, an open-source tool that lets you package machine learning models in a standard, production-ready container.

First, define the environment your model runs in with cog.yaml:

build:
  gpu: true
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.10"
  python_packages:
    - "torch==1.13.1"
predict: "predict.py:Predictor"

Next, define how predictions are run on your model with predict.py:

from cog import BasePredictor, Input, Path
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.model = torch.load("./weights.pth")

    # The arguments and types the model takes as input
    def predict(self,
          image: Path = Input(description="Grayscale input image")
    ) -> Path:
        """Run a single prediction on the model"""
        processed_image = preprocess(image)
        output = self.model(processed_image)
        return postprocess(output)
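
The preprocess and postprocess calls above are model-specific helpers you define yourself; they aren't part of Cog. As a rough sketch, for a colorization model they might look something like this (the torchvision transforms and the output filename are illustrative assumptions):

from pathlib import Path

from PIL import Image
from torchvision import transforms

def preprocess(image_path):
    """Load a grayscale image as a normalized 1x1xHxW tensor batch."""
    image = Image.open(image_path).convert("L")
    return transforms.ToTensor()(image).unsqueeze(0)

def postprocess(output):
    """Write the model's output tensor to disk and return the file path."""
    image = transforms.ToPILImage()(output.squeeze(0).clamp(0, 1))
    out_path = Path("output.png")
    image.save(out_path)
    return out_path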

Now, you can run predictions on this model locally:

$ cog predict -i image=@input.jpg
--> Building Docker image...
--> Running Prediction...
--> Output written to output.jpg

Or, build a Docker image for deployment:

$ cog build -t my-colorization-model
--> Building Docker image...
--> Built my-colorization-model:latest
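
Because the image serves Cog's standard HTTP prediction API, you can also run it anywhere Docker runs. A minimal sketch of calling a locally running container, assuming you've started it with docker run -d -p 5000:5000 my-colorization-model and that the input image is sent as a data URI:

import base64

import requests

# Cog's HTTP server accepts file inputs as data URIs (or URLs).
with open("input.jpg", "rb") as f:
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"image": data_uri}},
)
print(resp.json()["status"])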

Finally, push your model to Replicate, and you can run it in the cloud with a few lines of code:

$ cog push
Pushed model to replicate.com/your-username/my-colorization-model

Then use it from your code like any other model on Replicate:

import replicate
output = replicate.run(
    "your-username/my-colorization-model:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
    input={"image": open("input.jpg", "rb")},
)

Push a model, or learn more about Cog

03
Scale

Deploying machine learning models at scale is horrible. If you've tried, you know. API servers, weird dependencies, enormous model weights, CUDA, GPUs, batching. If you're building a product fast, you don't want to be dealing with this stuff.

Replicate makes it easy to deploy machine learning models. You can use open-source models off the shelf, or you can deploy your own custom, private models at scale.

Get started, or learn more about us