Furkan Gözükara

MonsterMMORPG

AI & ML interests

Check out my YouTube channel SECourses for Stable Diffusion tutorials. They will help you tremendously on every topic

Huge news for Kohya GUI - You can now fully Fine Tune / DreamBooth FLUX Dev on GPUs with as little as 6 GB of VRAM, with no quality loss compared to 48 GB GPUs - Moreover, fine-tuning yields better results than any LoRA training could

Config Files
I published all configs here : https://www.patreon.com/posts/112099700

Tutorials
A dedicated fine-tuning tutorial is in production

Windows FLUX LoRA training (fine-tuning is the same, only the config changes) : https://youtu.be/nySGu12Y05k

Cloud FLUX LoRA training (RunPod and Massed Compute ultra cheap) : https://youtu.be/-uhL2nW7Ddw

LoRA Extraction
The checkpoint sizes are 23.8 GB, but you can extract a LoRA with almost no quality loss - I did the research and published a public article / guide for this as well

The guide for extracting a LoRA from a fine-tuned checkpoint is here : https://www.patreon.com/posts/112335162
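For intuition, LoRA extraction approximates the difference between the fine-tuned and base weights with a low-rank product. A minimal NumPy sketch of that idea (this is an illustration of the truncated-SVD principle, not the exact script used in the guide):

```python
import numpy as np

def extract_lora(w_base, w_tuned, rank):
    """Approximate (w_tuned - w_base) with rank-r factors B @ A via truncated SVD."""
    delta = w_tuned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # Split each singular value evenly between the two factors
    b = u[:, :rank] * np.sqrt(s[:rank])          # (out, rank) -> "lora up"
    a = np.sqrt(s[:rank])[:, None] * vt[:rank]   # (rank, in)  -> "lora down"
    return b, a

# Toy check: a weight delta that is exactly rank 4 is recovered losslessly
rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 64))
true_delta = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
b, a = extract_lora(w_base, w_base + true_delta, rank=4)
print(np.allclose(b @ a, true_delta))  # True
```

When the real weight delta is not exactly low-rank, the truncated SVD keeps the largest singular directions, which is why a well-chosen rank loses almost no quality.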

Info
This is just mind blowing. The recent improvements Kohya made to block swapping are just amazing.
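The core idea behind block swapping is to keep only a few transformer blocks resident in VRAM at a time and offload the rest to CPU RAM, trading transfer time for memory. A minimal pure-Python sketch of that scheduling idea (this is not Kohya's actual implementation; `budget` here is a made-up name for the number of resident blocks):

```python
# Illustrative block-swapping schedule: at most `budget` blocks are "on GPU"
# at once; the oldest resident block is offloaded to make room for the next.
def run_with_block_swapping(num_blocks, budget, log):
    resident = []  # block indices currently "on GPU", oldest first
    for i in range(num_blocks):
        if i not in resident:
            if len(resident) == budget:
                evicted = resident.pop(0)      # offload oldest block to CPU RAM
                log.append(f"offload {evicted}")
            resident.append(i)                 # upload block i to GPU
            log.append(f"upload {i}")
        log.append(f"forward {i}")             # run this block's forward pass
    return resident

log = []
final = run_with_block_swapping(num_blocks=6, budget=2, log=log)
print(final)  # the last two blocks remain resident at the end
```

With a budget of 2 out of 6 blocks, peak "VRAM" stays at one third of the full model while every block still runs in order - the same trade that lets a 6 GB GPU fine-tune a 23.8 GB checkpoint, at the cost of PCIe transfer time.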

The speeds are also amazing, as you can see in image 2 - of course those values are based on my researched config and were tested on an RTX A6000, which is almost the same speed as an RTX 3090

Also, all training experiments were done at 1024x1024px. If you use a lower resolution, VRAM usage will be lower and speed will be faster

VRAM usage will change according to your own configuration - and likely speed as well
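Rough token arithmetic shows why lower resolution saves both VRAM and time. Assuming a FLUX-style pipeline (8x VAE downsampling per side, then 2x2 patchification into tokens - stated here as an assumption, not measured from the training code):

```python
# How many image tokens the transformer sees at a given square resolution,
# assuming 8x VAE downsampling and 2x2 patchification (FLUX-style).
def image_tokens(px):
    latent = px // 8           # VAE reduces each side 8x
    return (latent // 2) ** 2  # each 2x2 latent patch becomes one token

t1024, t512 = image_tokens(1024), image_tokens(512)
print(t1024, t512)        # 4096 vs 1024 tokens
print(t1024 // t512)      # 4x fewer tokens at 512px...
print((t1024 / t512) ** 2)  # ...and roughly 16x cheaper self-attention
```

Activations scale with token count and self-attention roughly with its square, so halving the resolution cuts far more than half the cost.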

Moreover, Fine Tuning / DreamBooth yields better results than any LoRA could

Installers
Kohya GUI accurate branch, Windows Torch 2.5 installers, and test prompts are shared here : https://www.patreon.com/posts/110879657

Kohya GUI with the accurate branch is here : https://github.com/bmaltais/kohya_ss/tree/sd3-flux.1
Detailed comparison of JoyCaption Alpha One vs JoyCaption Pre-Alpha across 10 different styles of amazing images - I think JoyCaption Alpha One is currently the very best image captioning model for model training - it works very fast and requires as little as 8.5 GB of VRAM

Where To Download And Install

You can download our APP from here : https://www.patreon.com/posts/110613301

1-Click to install on Windows, RunPod and Massed Compute
The official APP, where you can try it, is here : fancyfeast/joy-caption-alpha-one

The APP Has The Following Features

Auto downloads meta-llama/Meta-Llama-3.1-8B into your Hugging Face cache folder and other necessary models into the installation folder

Uses 4-bit quantization - 8.5 GB VRAM total

Overwrite existing caption file

Append new caption to existing caption

Remove newlines from generated captions

Cut off at last complete sentence

Discard repeating sentences

Don’t save processed image

Caption Prefix

Caption Suffix

Custom System Prompt (Optional)

Input Folder for Batch Processing

Output Folder for Batch Processing (Optional)

Fully supported Multi GPU captioning — GPU IDs (comma-separated, e.g., 0,1,2)

Batch Size — Batch captioning
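Several of the caption clean-up options above (remove newlines, cut off at the last complete sentence, discard repeating sentences, prefix/suffix) are easy to picture in code. A hypothetical re-implementation of those steps - the APP's actual code is not shown here, so function and parameter names are made up:

```python
import re

def clean_caption(text, prefix="", suffix=""):
    """Sketch of the listed post-processing options, in order."""
    text = " ".join(text.split())                  # remove newlines / extra whitespace
    sentences = re.findall(r"[^.!?]+[.!?]", text)  # drop any trailing incomplete sentence
    seen, kept = set(), []
    for s in sentences:                            # discard repeating sentences
        key = s.strip().lower()
        if key not in seen:
            seen.add(key)
            kept.append(s.strip())
    return f"{prefix}{' '.join(kept)}{suffix}"     # apply caption prefix / suffix

raw = "A photo of a cat.\nA photo of a cat. It sits on a mat. This caption was cut mid"
print(clean_caption(raw, prefix="photo style, "))
# -> "photo style, A photo of a cat. It sits on a mat."
```

The duplicated sentence and the truncated fragment are both dropped, and the prefix is prepended - the same shape of output the batch captioner would write to each caption file.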