
In this example, we'll deploy an anime style transfer ComfyUI workflow using Truss. This example won't require writing any Python code for the deployment itself, but there are a few prerequisites to get started.

Prerequisites:

  1. Convert your ComfyUI workflow to an API-compatible JSON format. The regular JSON format used to export Comfy workflows will not work here. (In the ComfyUI interface, enabling the developer mode options in the settings exposes a "Save (API Format)" button.)
  2. Have a list of the models your workflow requires, along with URLs where each model can be downloaded.

Setup

Clone the truss-examples repository and navigate to the comfyui-truss directory:

git clone https://github.com/basetenlabs/truss-examples.git
cd truss-examples/comfyui-truss

This repository already contains all the files we need to deploy our ComfyUI workflow. There are just two files we need to modify:

  1. config.yaml
  2. data/comfy_ui_workflow.json

Setting up the config.yaml

build_commands:
- git clone https://github.com/comfyanonymous/ComfyUI.git
- cd ComfyUI && git checkout b1fd26fe9e55163f780bf9e5f56bf9bf5f035c93 && pip install -r requirements.txt
- cd ComfyUI/custom_nodes && git clone https://github.com/LykosAI/ComfyUI-Inference-Core-Nodes --recursive && cd ComfyUI-Inference-Core-Nodes && pip install -e .[cuda12]
- cd ComfyUI/custom_nodes && git clone https://github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini --recursive && cd ComfyUI-Gemini && pip install -r requirements.txt
- cd ComfyUI/custom_nodes && git clone https://github.com/kijai/ComfyUI-Marigold --recursive && cd ComfyUI-Marigold && pip install -r requirements.txt
- cd ComfyUI/custom_nodes && git clone https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92 --recursive
- cd ComfyUI/custom_nodes && git clone https://github.com/Fannovel16/comfyui_controlnet_aux --recursive && cd comfyui_controlnet_aux && pip install -r requirements.txt
- cd ComfyUI/models/controlnet && wget -O control-lora-canny-rank256.safetensors https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors
- cd ComfyUI/models/controlnet && wget -O control-lora-depth-rank256.safetensors https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-depth-rank256.safetensors
- cd ComfyUI/models/checkpoints && wget -O dreamshaperXL_v21TurboDPMSDE.safetensors https://civitai.com/api/download/models/351306
- cd ComfyUI/models/loras && wget -O StudioGhibli.Redmond-StdGBRRedmAF-StudioGhibli.safetensors https://huggingface.co/artificialguybr/StudioGhibli.Redmond-V2/resolve/main/StudioGhibli.Redmond-StdGBRRedmAF-StudioGhibli.safetensors
environment_variables: {}
external_package_dirs: []
model_metadata: {}
model_name: Anime Style Transfer
python_version: py310
requirements:
  - websocket-client
  - accelerate
  - opencv-python
resources:
  accelerator: H100
  use_gpu: true
secrets: {}
system_packages:
  - wget
  - ffmpeg
  - libgl1-mesa-glx

The main part that needs to be filled out is build_commands. Build commands are shell commands that run during the build stage of the Docker image.

In this example, the first two commands clone the ComfyUI repository and install its Python requirements. The remaining commands install various custom nodes and download model weights, placing each one in its respective directory within the ComfyUI repository.
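
If your workflow needs additional models, you can extend build_commands following the same pattern. For example, a command like the following would bake an extra checkpoint into the image (the file name and URL here are placeholders, not part of this example):

- cd ComfyUI/models/checkpoints && wget -O your-model.safetensors https://example.com/path/to/your-model.safetensors

Because build commands run while the Docker image is being built, the downloaded weights ship inside the image and don't need to be fetched again each time a new instance of the model starts up.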

Modifying data/comfy_ui_workflow.json

The comfy_ui_workflow.json file contains the entire ComfyUI workflow in an API-compatible format. This is the workflow that the ComfyUI server will execute.

Here is the workflow we will be using for this example.

Important: If you look at the workflow JSON file, you'll notice that a few items have been templatized using the {{handlebars}} templating style.

If there are any inputs in your ComfyUI workflow that should be variable, such as input prompts or images, templatize them using the handlebars format.

In this example workflow, there are two templatized inputs: {{input_image}} and {{prompt}}.

When making an API call to this workflow, we can pass in a value for each of these two inputs.
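
As a sketch of what this looks like in practice, templatized nodes in the API-format JSON might resemble the following; the node IDs and exact fields are illustrative, not copied from this example's workflow:

{
  "6": {
    "inputs": {
      "text": "{{prompt}}",
      "clip": ["4", 1]
    },
    "class_type": "CLIPTextEncode"
  },
  "10": {
    "inputs": {
      "image": "{{input_image}}"
    },
    "class_type": "LoadImage"
  }
}

At request time, each {{...}} placeholder gets replaced with the matching value from your API call before the workflow is handed to the ComfyUI server.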

Deploying the Workflow to Baseten

Once you have both config.yaml and data/comfy_ui_workflow.json filled out, we can deploy this workflow just like any other model on Baseten.

  1. pip install truss --upgrade
  2. truss push --publish

Running Inference

When you deploy the truss, it spins up a new deployment in your Baseten account. Each deployment exposes a REST API endpoint that we can use to call the workflow.

import base64
import os
from io import BytesIO

import requests
from PIL import Image

# Replace the empty string with your model ID below
model_id = ""
baseten_api_key = os.environ["BASETEN_API_KEY"]
BASE64_PREAMBLE = "data:image/png;base64,"

def pil_to_b64(pil_img):
    # Encode a PIL image as a base64 PNG string
    buffered = BytesIO()
    pil_img.save(buffered, format="PNG")
    return base64.b64encode(buffered.getvalue()).decode("utf-8")

def b64_to_pil(b64_str):
    # Decode a base64 string, stripping the data URL prefix if present
    return Image.open(BytesIO(base64.b64decode(b64_str.replace(BASE64_PREAMBLE, ""))))

values = {
    "prompt": "American Shorthair",
    "input_image": {"type": "image", "data": pil_to_b64(Image.open("/path/to/cat.png"))},
}

resp = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {baseten_api_key}"},
    json={"workflow_values": values},
)

res = resp.json()
results = res.get("result")

# Save each PNG in the results, numbering the files so they don't overwrite each other
for idx, item in enumerate(results):
    if item.get("format") == "png":
        img = b64_to_pil(item.get("data"))
        img.save(f"pet-style-transfer-{idx}.png")

If you recall, we templatized two variables in our workflow: prompt and input_image. In our API call, we can specify values for these two variables like so:

values = {
    "prompt": "Maltipoo",
    "input_image": {"type": "image", "data": pil_to_b64(Image.open("/path/to/dog.png"))},
}

If your workflow contains more variables, simply add them to the dictionary above.
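
For instance, if your workflow JSON also templatized a sampler seed as {{seed}} (a hypothetical extra input, not part of this example's workflow), you would pass it alongside the others:

values = {
    "prompt": "Maltipoo",
    "input_image": {"type": "image", "data": pil_to_b64(Image.open("/path/to/dog.png"))},
    "seed": 42,  # hypothetical; only works if your workflow JSON contains {{seed}}
}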

The API call returns the generated images as base64 strings, which we decode and save as PNG files.