Ways to Fix Hands in Stable Diffusion

Stable Diffusion is a powerful AI tool that can be used to create stunning images from text descriptions. However, one issue that some users have encountered is that Stable Diffusion can sometimes generate bad, ugly, or distorted hands and fingers. This article will show you how to fix this issue using a few simple techniques.

In this article, we address these issues head-on, presenting a set of techniques for fixing the flaws commonly seen in Stable Diffusion's hand and finger generation. What sets this article apart is its focus on straightforward, accessible solutions that practitioners of all skill levels can implement. We believe that making these fixes widely known will empower a broader community of researchers, artists, and developers to produce convincing, true-to-life hands and fingers.

Fix Hands in Stable Diffusion

Negative Prompt


One way is to use a negative prompt. A negative prompt is a separate piece of text that tells Stable Diffusion what to avoid generating. Because the negative prompt already means “avoid this,” you describe the unwanted result directly, for example “bad hands” or “distorted fingers,” rather than writing “no bad hands.”

  • Be specific. The more specific you are with your negative prompt, the more likely Stable Diffusion is to avoid what you don’t want. For example, instead of only “bad hands,” you could add “distorted fingers.”
  • Use multiple negative prompts. You can combine several terms to further discourage unwanted results, for example “bad hands, distorted fingers.”
  • Experiment. The best way to learn how to use negative prompts is to experiment. Try different terms and see what works best for you.

Here are some examples of negative prompts that you can use to fix bad hands and fingers in Stable Diffusion:

  • bad hands
  • distorted fingers
  • ugly hands
  • creepy hands
  • unnatural hands
  • robotic hands
  • alien hands
  • cartoon hands
  • pixelated hands
  • blurry hands
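
If you run Stable Diffusion through the Hugging Face diffusers library rather than a web UI, the negative prompt is passed as a separate argument. The snippet below is a minimal sketch assuming the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU; swap in whatever model and device you actually use.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (model ID and device are assumptions).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a woman waving at the camera, detailed hands, photorealistic"
# List what you want to avoid directly; the negative prompt already means "avoid this".
negative_prompt = "bad hands, distorted fingers, ugly hands, blurry hands"

image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=30).images[0]
image.save("waving.png")
```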

Inpaint Mode

One powerful feature in Stable Diffusion that can effectively address hand-related issues is “Inpaint” mode. By utilizing this mode, users can selectively mask and repair problematic areas, such as hands and fingers, within the generated image.

After generating the initial image, users can send it to the inpaint interface, where they can draw a mask around the areas they want to fix. Setting the inpaint area to “Only masked” ensures that the inpainting process focuses solely on the selected region.

Additionally, adjusting the denoising strength to values between 0.30 and 0.55 can yield improved results. By experimenting with prompts and their weights, for example emphasizing terms like “hand” or “perfect hand” with parenthesis weighting, users can further refine the inpainting process. The combination of these techniques allows for precise, targeted enhancements, resulting in significantly improved hands in Stable Diffusion-generated images.
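
In the web UI this is a point-and-click workflow, but the same idea can be sketched with the diffusers inpainting pipeline. The example below is a rough sketch assuming the runwayml/stable-diffusion-inpainting checkpoint and two local files, character.png and hand_mask.png (white over the hand region to repaint); the strength argument plays the role of the denoising strength discussed above.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Inpainting checkpoint (model ID is an assumption; any SD inpainting model works).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("character.png").convert("RGB").resize((512, 512))
# White pixels mark the hand region to repaint; black pixels are left untouched.
mask_image = Image.open("hand_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a perfect hand, five fingers, detailed skin",
    negative_prompt="bad hands, distorted fingers",
    image=init_image,
    mask_image=mask_image,
    strength=0.45,  # within the 0.30-0.55 range suggested above
    num_inference_steps=30,
).images[0]
result.save("character_hand_fixed.png")
```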

ControlNet

Distorted hands and fingers can also be fixed with ControlNet, a feature that lets you guide the generation process beyond the simple text prompt. ControlNet offers various ways to control Stable Diffusion, but for fixing hand-related issues in generated images we will use a depth-based approach. Here’s a step-by-step guide; a minimal scripted sketch of the depth step follows the list:

  1. Install ControlNet in the Stable Diffusion web UI and select the desired ControlNet method for fixing the hand issue. In this case, we recommend using “openpose” to estimate the pose of your character.
  2. Generate the character using Stable Diffusion and observe that the hands might appear distorted or inaccurate. Download the generated image.
  3. Open the downloaded image in an image editing software that supports layers. We suggest using GIMP as it is a free option, or you can use Photopea if you prefer not to download any software.
  4. Find a suitable hand pose image to replace the distorted hand. There are multiple ways to obtain such an image. You can take a screenshot of the hand 3D model from websites like Sketchfab or create your own hand pose using software like Blender. Alternatively, you can search for a hand image on Google or stock sites. The goal is to find a hand image that roughly matches the position of the character’s hand. It doesn’t need to be perfect.
  5. Once you have the hand pose image, composite it onto the generated image using the image editing software. Align the hand image with the approximate position of the character’s hand. This will involve placing the hand image as a separate layer over the existing hand in the generated image.
  6. Import the composited image back into the Stable Diffusion web UI.
  7. Open the ControlNet settings and choose “depth” as the preprocessor, along with the matching depth ControlNet model.
  8. Proceed to generate the image using the edited composite as the input. You should observe a noticeable improvement in the quality and appearance of the hand.
  9. If you wish to preserve the other elements of the previously generated image while fixing the hand, you can combine this with inpainting. Import the composited image into the “inpaint” tab of the web UI, paint a mask over the hand area, set the denoising strength below 0.5, and set the inpaint area to “Only masked” to isolate the hand region. Finally, click “Generate” to start the image generation process.
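
For readers who prefer scripting, the depth step above can be approximated with the diffusers ControlNet integration. The sketch below assumes the composited image from step 5 is saved as composited.png and uses the lllyasviel/sd-controlnet-depth model plus a depth-estimation pipeline from transformers; the model IDs and file names here are assumptions, not part of the original guide.

```python
import numpy as np
import torch
from PIL import Image
from transformers import pipeline
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Estimate a depth map from the composite (hand photo pasted over the render).
depth_estimator = pipeline("depth-estimation")
composite = Image.open("composited.png").convert("RGB").resize((512, 512))
depth = np.array(depth_estimator(composite)["depth"])
control_image = Image.fromarray(np.stack([depth] * 3, axis=2))  # 3-channel depth map

# Depth ControlNet guiding a standard SD 1.5 checkpoint (model IDs are assumptions).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait of a character with a perfect hand, detailed fingers",
    negative_prompt="bad hands, distorted fingers",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("controlnet_hand_fix.png")
```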

Use Textual Inversion Embeddings


To fix bad, ugly, and distorted hands in Stable Diffusion, we can use Textual Inversion embeddings. Textual Inversion is a technique that captures novel concepts from a small number of example images. While initially demonstrated with a latent diffusion model, it has also been applied to other variants such as Stable Diffusion. By leveraging Textual Inversion, we gain better control over what text-to-image pipelines generate.

Many individuals have trained their own Textual Inversion Embeddings specifically on images of bad hands. To make use of these trained embeddings in Stable Diffusion, follow the steps outlined below:

Download the Textual Inversion Embeddings from the provided list. These embeddings have been trained on examples of bad hands.

Once you have obtained the embeddings, you can incorporate them into Stable Diffusion by adding them to the negative prompt. When generating images, these embeddings will be activated and utilized.

Because the embeddings were trained on images of bad hands and are placed in the negative prompt, Stable Diffusion is steered away from the patterns they capture. By employing this technique, you increase the likelihood of generating better hand poses and reduce the occurrence of distorted or undesirable results.
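
In diffusers, a downloaded embedding can be attached to the pipeline and then referenced by its trigger token in the negative prompt. The snippet below is illustrative only: bad_hands_embedding.pt and the <bad-hands> token are hypothetical placeholders for whichever embedding you actually download.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a Textual Inversion embedding trained on bad hands.
# File name and trigger token are hypothetical; use those of your embedding.
pipe.load_textual_inversion("bad_hands_embedding.pt", token="<bad-hands>")

image = pipe(
    prompt="portrait of a pianist playing, detailed hands",
    # The trigger token goes in the negative prompt, steering generation
    # away from the bad-hand concept the embedding captures.
    negative_prompt="<bad-hands>, blurry hands",
    num_inference_steps=30,
).images[0]
image.save("pianist.png")
```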

Sujeet Kumar
SK, an ardent writer whose creativity knows no bounds. With a profound love for anime, a fascination for the world of VFX, and an insatiable appetite for innovative storytelling, SK embarks on a journey where art and artificial intelligence converge to bring captivating narratives to life.
