Okay, so today I wanted to mess around with Stable Diffusion and try to generate a decent image of this “Sharon Zhou” person. I’ve seen her AI lectures floating around, and I was curious if I could do it myself.

First, I fired up my local Stable Diffusion install. I’m running the Automatic1111 web UI, because, let’s be real, it’s just easier to deal with. I spent a little time making sure all my models were up to date – gotta have the latest and greatest, right?
The Prompts Begin
Then came the fun part: crafting the perfect prompt. I started simple:
- “sharon zhou, portrait”
Results? Pretty terrible. A bunch of random women, none of whom looked remotely like her. Okay, back to the drawing board.
I started adding more details. I figured maybe specifying things like hair color and style would help:
- “sharon zhou, brown hair, long hair, smiling, professional photo”
Still not great. Some were closer, but it felt like I was playing a guessing game. It can be so hard sometimes!
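Side note: when I got tired of clicking through the UI for every prompt variation, I scripted the calls instead. Here’s a minimal sketch of what that looks like, assuming the Automatic1111 web UI was launched with the --api flag (which exposes a txt2img endpoint on port 7860 by default); the negative prompt and output filenames are just my own placeholders, not anything official.

```python
import base64
import requests

# Assumes the Automatic1111 web UI is running locally with the --api flag,
# e.g. `./webui.sh --api`, so the txt2img endpoint is available on port 7860.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "sharon zhou, brown hair, long hair, smiling, professional photo",
    "negative_prompt": "blurry, deformed, extra fingers",  # my usual catch-alls
    "steps": 20,
    "cfg_scale": 7,        # my default; more on this knob below
    "sampler_name": "Euler a",
    "width": 512,
    "height": 512,
    "seed": -1,            # -1 = random seed each run
}

response = requests.post(API_URL, json=payload, timeout=300)
response.raise_for_status()

# The API returns generated images as base64-encoded PNGs. Some versions
# prefix a data URI, so split on "," just in case before decoding.
for i, img_b64 in enumerate(response.json()["images"]):
    with open(f"sharon_attempt_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64.split(",", 1)[-1]))
```

Same knobs as the UI (steps, CFG, sampler), just way easier to rerun.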

Refining and Iterating and Iterating…
Next, I tried incorporating things I knew about her from the courses, like:
- “sharon zhou, teaching, university lecture hall, whiteboard in background”
This got me some images that kind of looked like a lecture, but the faces were still way off. It was a bit frustrating, to be honest.

I started playing with different samplers and CFG scales. I usually stick with Euler a, but I tried DPM++ 2M Karras and a few others, just to see if it would shake things up. For the CFG scale, I usually keep it around 7, but I bumped it up and down a bit, too. No magic bullet, though.
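If you want to grid-search this stuff instead of eyeballing one run at a time, a loop over the same API works. Again, a rough sketch under the same --api assumption; the sampler names are the ones my install lists, and the CFG values are just the range I poked at, nothing principled.

```python
import base64
import itertools
import requests

API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

samplers = ["Euler a", "DPM++ 2M Karras"]  # the two I compared most
cfg_scales = [5, 7, 9, 11]                 # bumping my usual 7 up and down

base_payload = {
    "prompt": "sharon zhou, teaching, university lecture hall, whiteboard in background",
    "steps": 20,
    "width": 512,
    "height": 512,
    "seed": 1234,  # fixed seed so only the sampler/CFG change between runs
}

for sampler, cfg in itertools.product(samplers, cfg_scales):
    payload = {**base_payload, "sampler_name": sampler, "cfg_scale": cfg}
    resp = requests.post(API_URL, json=payload, timeout=300)
    resp.raise_for_status()
    img_b64 = resp.json()["images"][0]
    # Name each file after its settings so the grid is easy to compare.
    fname = f"{sampler.replace(' ', '_')}_cfg{cfg}.png"
    with open(fname, "wb") as f:
        f.write(base64.b64decode(img_b64.split(",", 1)[-1]))
```

Pinning the seed is the important part; otherwise you can’t tell whether the sampler changed anything or you just rolled different dice.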
My models suck!
Then it hit me: maybe my base model was the problem! I was using the standard Stable Diffusion 1.5 model, which is fine for general stuff, but maybe not great for realistic portraits. I have some other models on my computer, so I’ll need to dig up a better one next time!
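For what it’s worth, checkpoint swapping can be scripted through the same API too, which is probably what I’ll do next time. Another hedged sketch: the endpoints are the stock Automatic1111 ones as far as I know, but the checkpoint title at the bottom is a made-up example; you’d use whatever /sdapi/v1/sd-models actually prints on your machine.

```python
import requests

BASE = "http://127.0.0.1:7860"

# List the checkpoints the web UI knows about.
models = requests.get(f"{BASE}/sdapi/v1/sd-models", timeout=60).json()
for m in models:
    print(m["title"])

# Switch the active checkpoint. The title below is a hypothetical example;
# substitute one of the titles printed above.
requests.post(
    f"{BASE}/sdapi/v1/options",
    json={"sd_model_checkpoint": "realisticVisionV51.safetensors"},
    timeout=600,  # loading a new checkpoint can take a while
).raise_for_status()
```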

After messing around for at least an hour, I realized I was getting nowhere and gave up. Oh well, it’s all about the learning process!