Conversation

Fahim Farook

Now that I’ve got my new #CoreML #StableDiffusion GUI working, I wanted to test the new img2img functionality. That’s when I discovered that none of the existing models worked for img2img … well, at least not the ones I could find 😛

Incidentally, if you search HuggingFace for “coreml img2img”, you will find a bunch of models that are supposed to work. For whatever reason, though, those didn’t work for me. Maybe they were created before the latest changes to the Apple CoreML code?

Either way, I ended up spending the afternoon trying to generate new models which would work for img2img.

My first try was Guernika (https://huggingface.co/Guernika/CoreMLStableDiffusion), since it has worked for me in the past. It has an option to create the encoder that img2img needs, but unfortunately, that didn’t work either.

I then spent some time trying a lot of things with the Apple Python code before I finally got it to work. So in case you are stuck in the same place, here are the important things to remember:

1. You need to use Python 3.8. This is very important; other Python versions did not work for me and resulted in errors. (There’s a setup sketch after this list.)

2. I saw a note about also needing macOS Ventura 13.1 or higher, and this is probably correct since I know Apple made changes to get Stable Diffusion working in Ventura 13.1 … but since I’m on Ventura 13.2.1, I can’t verify this one for sure.
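
For reference, here’s roughly the setup that worked for me, based on the instructions in Apple’s ml-stable-diffusion repo (the environment name is just my own choice):

    # Create and activate a Python 3.8 environment (the version matters!)
    conda create -n coreml_sd python=3.8 -y
    conda activate coreml_sd

    # Grab Apple’s conversion code and install it into the environment
    git clone https://github.com/apple/ml-stable-diffusion.git
    cd ml-stable-diffusion
    pip install -e .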

Once I did #1, the Apple Python code for generating models worked so much better. Their documentation is still a bit confusing and all over the place, but I was finally able to convert existing CKPT models to CoreML and use them with img2img (the rough conversion steps are below). So now I’m going to be doing a lot of conversions, I guess? 🙂
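
In case it helps somebody, this is the general shape of the two-step conversion I ended up with. All paths and file names below are placeholders, step 1 uses the CKPT-to-diffusers script that ships with the Hugging Face diffusers repo, and I believe --model-version also accepts a local diffusers folder like this, but double-check against the current docs:

    # Step 1: turn the CKPT file into a diffusers model folder, using the
    # script from the huggingface/diffusers repo (placeholder paths)
    python convert_original_stable_diffusion_to_diffusers.py \
        --checkpoint_path ./my-model.ckpt \
        --dump_path ./my-model-diffusers

    # Step 2: convert the diffusers model to CoreML, making sure to
    # include the VAE encoder, which is the part img2img needs
    python -m python_coreml_stable_diffusion.torch2coreml \
        --model-version ./my-model-diffusers \
        --convert-unet --convert-text-encoder \
        --convert-vae-decoder --convert-vae-encoder \
        --bundle-resources-for-swift-cli \
        -o ./my-model-coreml

The --convert-vae-encoder bit seems to be the piece the older models on HuggingFace are missing, which would explain why they don’t work for img2img.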