[A.I.] Huge Proportions Workflow - 2023-09
This is a small write-up on how I create the images of women with huge proportions, like here.
This process isn’t the most efficient or the only correct one; it’s just something I developed while experimenting. There are probably far better methods that reach the same or better results.
Required Software / Models / LoRAs
I always use SD.Next, a fork of the popular Automatic1111 WebUI. It has some of the most-used extensions already integrated and in general feels like a slightly more refined version of the original UI. The installation instructions are in the README.md, so you shouldn’t have any problems with it; otherwise, there are plenty of tutorials on YouTube for installing either of these two UIs.
With the UI installed, there’s an extension I use for better faces and hands: ADetailer. It can automatically detect faces and hands and then inpaint over them at a higher resolution, fixing the typically ugly results of a plain generation.
Finally, I used the epiCPhotoGasm model, a variant of the popular epiCRealism model. Combined with the epiCRealismHelper LoRA and the epiCRealism - Negative embedding, it can achieve some truly amazing skin textures and genuinely realistic-looking photos. So let’s ruin them with unrealistic gigantic proportions.
For these things, there are two more LoRAs, and a LyCORIS if you want some extremely tiny waists.
- BimboStyleFiveEpic - For the huge breasts and hips
- BoltonTits - The “bolt-on” effect of the breasts
- Waist Slider (Microwaist) - Super tiny waists, can sometimes get some really extreme results
With these models, LoRAs and optional LyCORIS, we can start generating.
1. Good base image
With tools like ComfyUI, more weight is put on creating the best possible image in the initial txt2img step. I don’t really work that way; I tend towards an incremental approach: generate a good base, then keep changing things bit by bit until you get the result you want.
First I come up with a base prompt and test it a few times, at a base resolution of 512x768 with nothing else enabled (no upscaler, no ADetailer, etc.).
Positive Prompt:
RAW photo, three-quarter shot, young slim 1woman, from front, BREAK
(skin-tight white sundress:1.0) BREAK
(long straight light brown hair:1.0), (gigantic round huge breasts:1.0), (super small compressed waist:1.0), BREAK
looking at the camera, (slight smile:1.0)
standing in a park,
<lora:epiCRealismHelper:0.75>
---
Negative Prompt:
(wrinkled cloth:1.0), watermark, epiCNegative,
I use the BREAK keyword to separate different chunks, which achieves better separation of subjects. I want the hair to be brown and the dress to be white, not the other way around, which can easily happen otherwise.
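For anyone scripting prompts instead of typing them into the UI, the chunked structure can be assembled programmatically. A minimal sketch, using the chunks from the prompt above (the joining scheme is an assumption about how you might organize this, not part of the original workflow):

```python
# Assemble a chunked prompt: BREAK separates concept groups so attributes
# (e.g. hair color vs. dress color) are less likely to bleed into each other.
chunks = [
    "RAW photo, three-quarter shot, young slim 1woman, from front",
    "(skin-tight white sundress:1.0)",
    "(long straight light brown hair:1.0), (gigantic round huge breasts:1.0), "
    "(super small compressed waist:1.0)",
    "looking at the camera, (slight smile:1.0) standing in a park, "
    "<lora:epiCRealismHelper:0.75>",
]
prompt = " BREAK ".join(chunks)
negative = "(wrinkled cloth:1.0), watermark, epiCNegative"
print(prompt)
```

Keeping the chunks in a list like this also makes it easy to tweak a single concept group between runs.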
Generating a few test images:
These are a bit low quality, but that is easily fixed with an upscaler. By enabling the second pass (also called hires fix) and choosing the upscaler that epiCRealism recommends, quite different results can be achieved:
These already look a bit better, but to finally fix the face and possible hands, let’s enable ADetailer with short custom prompts describing how exactly the face should look. Both tabs of the extension are used, one for the face and the other for the hands. Make sure to select the correct model in the two tabs: for the face I use face_yolov8n.pt, and for the hands hand_yolov8n.pt.
These are the two prompts:
Face:
super attractive young female face, light brown hair, slight smile, bright blue eyes
negative: epiCNegative
Hands:
detailed hand
negative: epiCNegative
I added the epiCNegative embedding just as a reassurance.
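If you drive the WebUI through its HTTP API rather than the browser, ADetailer’s per-tab settings are passed via the `alwayson_scripts` field. The field names below follow the ADetailer extension’s API and may differ between versions, so treat this as an illustrative sketch and verify against your install:

```python
# Sketch of the two ADetailer tabs (face + hands) as API arguments.
# "ad_model" / "ad_prompt" / "ad_negative_prompt" are ADetailer field names;
# check your installed extension version before relying on them.
adetailer_args = [
    {   # first tab: face
        "ad_model": "face_yolov8n.pt",
        "ad_prompt": "super attractive young female face, light brown hair, "
                     "slight smile, bright blue eyes",
        "ad_negative_prompt": "epiCNegative",
    },
    {   # second tab: hands
        "ad_model": "hand_yolov8n.pt",
        "ad_prompt": "detailed hand",
        "ad_negative_prompt": "epiCNegative",
    },
]
# This fragment would be merged into the txt2img/img2img request payload.
payload_fragment = {"alwayson_scripts": {"ADetailer": {"args": adetailer_args}}}
```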
Running through the entire process again with upscaling and ADetailer enabled leads to something like this:
So now we have a pretty good base image to choose from (here I’ll go with image #1). If none of these fit, just generate a few more with the batch functionality. Once I have something I like, I move on to the inpainting process.
2. Inpainting
Now, in the inpainting step, the two LoRAs come into play to severely change the appearance of the subject. I go for quite a large inpaint mask, since I want the entire chest area to change, so make sure to use a denoising strength above 0.7.
Adding the two LoRAs to the prompt, I generally got good results with bimbostyleFive at a weight of 0.7 and boltontits at something around 0.4.
RAW photo, three-quarter shot, young slim 1woman, from front, BREAK
(skin-tight white sundress:1.0) BREAK
(long straight light brown hair:1.0), (gigantic round huge breasts:1.6), (super small compressed waist:1.0), BREAK
looking at the camera, (slight smile:1.0)
standing in a park,
<lora:epiCRealismHelper:0.4> <lora:bimbostyleFiveEpic:0.7> <lora:boltedontitsV3:0.4>
I increased the weight of the breast part of the prompt to force all the different parts to focus on that. As you can see, I also decreased the weight of epiCRealismHelper so they don’t interfere with each other too much.
Now, instantly hitting generate can produce something usable, but as we’re working with an already upscaled image (1024x1536), the LoRAs might not produce what you want, since they’ve been trained on smaller image sizes. Switching the inpaint area to Only Masked and fixing the resolution at 512x768, I often get better, larger results.
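The inpaint settings described above map roughly onto the WebUI’s img2img endpoint (`/sdapi/v1/img2img` on A1111; SD.Next exposes a compatible API). This is only a sketch: the field names follow the A1111 API, and the base64 placeholders and exact padding value are assumptions to illustrate the settings, not a tested call:

```python
# Sketch of the chest-inpainting request. "inpainting_fill" = 1 corresponds
# to Masked Content: original; "inpaint_full_res" = True is Only Masked.
payload = {
    "init_images": ["<base64 of the upscaled base image>"],  # placeholder
    "mask": "<base64 of the chest mask>",                    # placeholder
    "prompt": "... <lora:bimbostyleFiveEpic:0.7> <lora:boltedontitsV3:0.4>",
    "negative_prompt": "(wrinkled cloth:1.0), watermark, epiCNegative",
    "denoising_strength": 0.7,       # above 0.7 so the masked area really changes
    "inpainting_fill": 1,            # 1 = "original" for Masked Content
    "inpaint_full_res": True,        # "Only Masked" inpaint area
    "inpaint_full_res_padding": 32,  # padding, worth experimenting with
    "width": 512,                    # fixed back to the LoRAs' training scale
    "height": 768,
}
```

The important part is the combination: high denoising strength inside the mask, but a generation resolution pinned to 512x768 so the LoRAs operate at the scale they were trained on.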
We have to play around with the settings and prompt weights a bit, especially if the breasts shouldn’t be naked. Also experiment with the denoising strength and the padding. After a few attempts, we might get something like this:
Now, since we used a pretty high denoising strength and the two LoRAs aren’t really trained on high-quality skin, the result looks quite bad but has the “correct” sizes. We can fix the rough edges and weird skin by sending it to img2img, restoring the original prompt (without the two LoRAs) and using a denoising strength of around 0.4. If the shape of the breasts changes too much in that step, reducing the denoising strength helps, but it can also take repeated runs to fix the edges. Also enable ADetailer in this step, to fix the face and hands after the img2img pass.
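The cleanup pass differs from the inpaint call in only a few fields; a small sketch of the differences (prompt text abbreviated, values from the workflow above):

```python
# Sketch of the smoothing img2img pass: original prompt without the shape
# LoRAs, and a lower denoising strength so the new proportions are kept
# while the rough edges and skin get fixed.
cleanup = {
    "prompt": "RAW photo, ... <lora:epiCRealismHelper:0.75>",  # shape LoRAs removed
    "negative_prompt": "(wrinkled cloth:1.0), watermark, epiCNegative",
    "denoising_strength": 0.4,  # reduce further if the breast shape drifts
    # plus ADetailer enabled via alwayson_scripts to redo face and hands
}
```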
This entire inpainting step for the breasts can now be repeated, to maybe make them even bigger. Since original is used for the Masked Content setting, the current state of the image is used as the base, so starting from an image that already has large breasts can help increase the size further.
I’m satisfied with the size here, but I want to shrink the waist a bit to create a more extreme hourglass shape. To achieve this, the image is sent to inpaint again and the entire waist section is masked:
Here, I make especially sure to follow the contours of the breasts, so the waist actually starts narrowing from that point on (this is up to the creator, of course).
Now, I add the microwaist LyCORIS to the prompt and increase the weight of the waist part of the prompt:
RAW photo, three-quarter shot, young slim 1woman, from front, BREAK
(skin-tight white sundress:1.0) BREAK
(long straight light brown hair:1.0), (gigantic round huge breasts:1.0), (super tiny waist:1.4), BREAK
looking at the camera, (slight smile:1.0)
standing in a park,
<lora:epiCRealismHelper:0.4><lyco:microwaistV05:0.65>
I set the weight of the LyCORIS to somewhere around 0.5 to 1.5, depending on how much I want the waist reduced. Too high a weight can lead to some pretty weird and distorted results.
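If you generate these tags in a script, it can be worth clamping the weight so a typo never pushes it into the distortion zone. A tiny helper sketch; the name and the upper bound of 1.5 are taken from this workflow, not fixed rules:

```python
# Build a LyCORIS prompt tag with the weight clamped to a safe range,
# since overly high weights tend to produce distorted results.
def lyco_tag(name: str, weight: float, lo: float = 0.0, hi: float = 1.5) -> str:
    weight = max(lo, min(hi, weight))
    return f"<lyco:{name}:{weight:g}>"

print(lyco_tag("microwaistV05", 0.65))  # -> <lyco:microwaistV05:0.65>
print(lyco_tag("microwaistV05", 2.0))   # clamped -> <lyco:microwaistV05:1.5>
```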
After multiple attempts and playing around with the settings and weights again, we might get something like this:
Again, like with the previous result, the edges are a bit rough. These can again be fixed with an img2img step.
Repeating that step, we get this:
One can go through these inpainting steps over and over to change whatever you want, always using img2img afterwards to smooth things over. I’m satisfied with the current result, so as a final step, I upscale it again.
3. Final Upscale
For upscaling, I achieved the best results with ControlNet Tile and the Ultimate SD Upscale extension. Using the 4x-Ultrasharp upscaler this time, as well as enabling ADetailer again, one has to play around with the steps, denoising strength and tile size a bit to get the best results. With some fiddling around, we arrive at a final image:
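To get a feel for why the tile size matters here: Ultimate SD Upscale processes the upscaled image as a grid of tiles, so the tile size directly controls how many diffusion passes the final step takes. A rough sketch of the arithmetic, assuming the 1024x1536 working resolution from this workflow and a common 512px tile:

```python
import math

# How many tiles a tiled upscaler needs for a given image, upscale factor,
# and tile size (overlap between tiles is ignored in this rough estimate).
def tile_grid(width: int, height: int, scale: int, tile: int = 512):
    w, h = width * scale, height * scale
    return math.ceil(w / tile), math.ceil(h / tile)

cols, rows = tile_grid(1024, 1536, scale=4)
print(cols, rows, cols * rows)  # 8 x 12 grid = 96 tiles
```

Larger tiles mean fewer passes but more VRAM per pass and more visible seams to blend, which is why this setting is worth experimenting with.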