Introduction:
- Repository: comfyui-mixlab-nodes
- Stars: 1600
- Description: Workflow-to-APP、ScreenShare&FloatingVideo、GPT & 3D、SpeechRecognition&TTS
- Author: MixLabPro
This document provides an overview of the LaMaInpainting node, part of the comfyui-mixlab-nodes suite. This node allows users to perform image inpainting with the SimpleLama algorithm directly within ComfyUI.
comfyui-mixlab-nodes Overview
The comfyui-mixlab-nodes repository offers a collection of custom nodes designed to enhance the functionality of ComfyUI, a popular node-based visual programming environment. These nodes cover a wide range of applications, including workflow automation, screen sharing, GPT integration, 3D processing, and speech recognition/TTS capabilities.
LaMaInpainting ♾️Mixlab Introduction
The LaMaInpainting node integrates the SimpleLama inpainting algorithm into ComfyUI. Inpainting is the process of reconstructing missing or damaged parts of an image. The node takes an image and a mask as input, where the mask defines the regions to be inpainted. It leverages the simple_lama_inpainting library, installing it automatically if necessary, provided that the user's PyTorch version is >= 2.1.
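The >= 2.1 gate can be sketched as a simple version comparison. The helper below is hypothetical (the node's actual check may differ), but it illustrates how a PyTorch version string such as "2.1.0+cu121" would be tested before attempting the automatic install:

```python
def torch_version_ok(version_string, minimum=(2, 1)):
    """Return True if a PyTorch version string (e.g. "2.1.0+cu121")
    meets the minimum (major, minor) requirement.

    Hypothetical helper illustrating the >= 2.1 requirement described
    above; the node's real check may be implemented differently.
    """
    # Strip any local build suffix such as "+cu121", then compare major.minor.
    base = version_string.split("+")[0]
    parts = base.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) >= minimum

# Only attempt the automatic install when the version is new enough.
if torch_version_ok("2.1.0+cu121"):
    pass  # e.g. pip install simple_lama_inpainting
```

Note that comparing (major, minor) tuples avoids the classic pitfall of comparing version strings lexically, where "1.13" would sort after "2.1".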
LaMaInpainting ♾️Mixlab Input
The LaMaInpainting node accepts the following inputs:
- image: The input image to be inpainted. This should be a ComfyUI IMAGE tensor.
- mask: A mask defining the regions to be inpainted. This should be a ComfyUI MASK tensor. The masked area will be filled.
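By ComfyUI convention, an IMAGE tensor is a float batch shaped (B, H, W, C) with values in [0, 1], and a MASK tensor is a float batch shaped (B, H, W). Inpainting backends such as SimpleLama typically expect 8-bit arrays, so a node like this one has to convert. The sketch below shows that conversion under those assumed conventions; the helper names are hypothetical and numpy arrays stand in for torch tensors:

```python
import numpy as np

def image_tensor_to_uint8(image):
    """Convert a ComfyUI-style IMAGE batch (B, H, W, C), floats in [0, 1],
    to a uint8 RGB array for the first image in the batch.

    Hypothetical helper; numpy stands in for torch tensors here.
    """
    first = np.clip(image[0], 0.0, 1.0)
    return (first * 255.0).round().astype(np.uint8)

def mask_tensor_to_uint8(mask, threshold=0.5):
    """Convert a ComfyUI-style MASK batch (B, H, W), floats in [0, 1],
    to a binary uint8 mask where 255 marks the region to inpaint."""
    return np.where(mask[0] > threshold, 255, 0).astype(np.uint8)

# A mid-gray 4x4 image and a mask covering its central 2x2 region.
image = np.ones((1, 4, 4, 3), dtype=np.float32) * 0.5
mask = np.zeros((1, 4, 4), dtype=np.float32)
mask[0, 1:3, 1:3] = 1.0

rgb = image_tensor_to_uint8(image)   # (4, 4, 3) uint8
hole = mask_tensor_to_uint8(mask)    # 255 inside the 2x2 region, 0 elsewhere
```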
LaMaInpainting ♾️Mixlab Output
The LaMaInpainting node produces a single output:
- IMAGE: The inpainted image as a ComfyUI IMAGE tensor.
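Since the output is again a ComfyUI IMAGE tensor, the inpainted 8-bit result has to be wrapped back into a (1, H, W, C) float batch in [0, 1]. A minimal sketch of that reverse conversion, again with a hypothetical helper and numpy standing in for torch:

```python
import numpy as np

def uint8_to_image_tensor(rgb):
    """Wrap an inpainted uint8 RGB array (H, W, C) back into a
    ComfyUI-style IMAGE batch (1, H, W, C) of floats in [0, 1].

    Hypothetical helper; numpy stands in for torch tensors."""
    tensor = rgb.astype(np.float32) / 255.0
    return tensor[np.newaxis, ...]  # add the batch dimension

# A pure-white 4x4 result maps to an all-ones IMAGE batch.
result = uint8_to_image_tensor(np.full((4, 4, 3), 255, dtype=np.uint8))
```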
LaMaInpainting ♾️Mixlab Usage Tips
- Ensure that your PyTorch version is compatible (>= 2.1) for automatic installation of simple_lama_inpainting.
- The node attempts to automatically download the big-lama.pt model if it is not found in the expected location (folder_paths.models_dir/lama). If the download fails, you may need to manually download it from the link in the code comments and place it in that directory.
- The node automatically moves the SimpleLama model to the GPU when CUDA is available, and back to the CPU after inpainting to save memory.
- The input mask should accurately represent the areas you wish to inpaint.
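The model-lookup behavior described in the tips above can be sketched as follows. This is an illustrative, hypothetical version of the check, not the node's actual code; the download URL lives in the node's code comments and is deliberately not reproduced here:

```python
from pathlib import Path
import tempfile

def ensure_lama_model(models_dir):
    """Return the expected path of big-lama.pt under <models_dir>/lama,
    creating the directory if needed.

    Hypothetical sketch of the lookup described above; the real node
    also downloads the weights when the file is missing (the URL is
    kept in the node's code comments, so it is not reproduced here).
    """
    lama_dir = Path(models_dir) / "lama"
    lama_dir.mkdir(parents=True, exist_ok=True)
    model_path = lama_dir / "big-lama.pt"
    if not model_path.exists():
        # Placeholder: trigger the automatic download here, or ask the
        # user to place big-lama.pt in this directory manually.
        pass
    return model_path

# Example against a throwaway directory standing in for folder_paths.models_dir.
with tempfile.TemporaryDirectory() as tmp:
    path = ensure_lama_model(tmp)
    dir_created = path.parent.is_dir()
```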