Forum Discussion

Azael123
Copper Contributor
Apr 16, 2026

I showed my son the GPT Image 2 leaks: Is the yellow filter finally gone?

My teenager can usually spot an AI image from across the room. He calls it the "OpenAI glow." You know the exact aesthetic I'm talking about. That weird, overly lit, slightly plastic yellow-orange filter that makes every single DALL-E 3 output look like a high-budget mobile game ad.

So when the LMarena leaks dropped last week, I pulled him over to my monitor. I had managed to catch the "maskingtape-alpha" model before OpenAI scrubbed it from the Image Battles tab. I threw in a prompt I saw floating around the community: "Amateur photograph of an elderly couple sat inside of a Yorkshire pub, amateur composition, candid."

He stared at the screen for a good ten seconds. Then he asked, "Wait, whose grandparents are those?"

That was the exact moment I realized GPT Image 2 is going to completely reset the baseline for generative vision.

Let’s talk about what actually happened on LMarena, because the implications are massive. The brief two-day window we had to test these models told us a lot about OpenAI's current architecture struggles. They snuck three unannounced models onto the public leaderboard: maskingtape-alpha, gaffertape-alpha, and packingtape-alpha. No blog post. No big PR push. Just raw A/B testing against the current-generation models. Someone even noticed it completely crushed "nano banana" in side-by-side prompt adherence.

The community picked up on it almost instantly because the photorealism was jarring. Indistinguishable from real photos. And then, within hours, right as the TikTokers started making noise about it, OpenAI yanked all three models offline. They reverted everything back to the highly styled, heavily filtered outputs we are so exhausted by.

Why does that original "yellow filter" exist in the first place? If you’ve spent any time working with latent diffusion models, you know that base models are actually incredibly good at capturing grimy, imperfect reality. The plastic look is an artificial layer. It’s the result of heavy reinforcement learning from human feedback (RLHF) and aggressive safety tuning. When you force a model to adhere to a rigid set of safety guidelines and corporate alignment goals, it tends to collapse into a safe, homogeneous aesthetic. It smooths out the rough edges. It adds that golden-hour lighting because human raters consistently rank "pretty" images higher than "realistic but ugly" ones during the early training phases.
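If you want to see that collapse mechanism in miniature, here's a toy sketch. This is not OpenAI's actual pipeline, and every name in it (the "pretty preset", the two style knobs) is made up for illustration: each "image" is reduced to a warmth/contrast pair, a stand-in reward function rewards the warm glossy preset that human raters tend to pick, and plain gradient ascent on that reward drags an initially diverse population of styles toward one look.

```python
import numpy as np

# Toy sketch of aesthetic collapse under preference optimization.
# Each "image" is reduced to two style knobs: warmth and contrast.
# The reward stands in for a preference model trained on raters who
# consistently pick warm, glossy, golden-hour shots over flat ones.

PRETTY_PRESET = np.array([0.8, 0.7])  # hypothetical "warm + high contrast" optimum


def reward(style):
    """Higher reward the closer a style sits to the 'pretty' preset."""
    return -np.sum((style - PRETTY_PRESET) ** 2)


def reward_grad(style):
    """Gradient of the reward with respect to the style knobs."""
    return -2.0 * (style - PRETTY_PRESET)


rng = np.random.default_rng(0)

# Start with a diverse population of styles: some grimy, some neutral.
styles = rng.uniform(-1.0, 1.0, size=(8, 2))
print("initial style spread:", styles.std(axis=0))
print("initial mean reward: ", np.mean([reward(s) for s in styles]))

# "Fine-tune" every style by gradient ascent on the learned reward.
for _ in range(200):
    styles += 0.05 * np.array([reward_grad(s) for s in styles])

print("tuned style spread:  ", styles.std(axis=0))   # collapses toward 0
print("tuned mean reward:   ", np.mean([reward(s) for s in styles]))
print("mean style:          ", styles.mean(axis=0))  # ~[0.8, 0.7], the "glow"
```

The point of the toy is only this: a reward model that has only ever seen "prettier vs. uglier" comparisons has a single peak, so optimizing hard against it erases exactly the grimy variance the base model started with. That, scaled up, is the glow.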

1 Reply

  • Caspians
    Brass Contributor

    OpenAI yanking those models so quickly suggests they’re acutely aware that breaking the “glow” also breaks their ability to easily flag synthetic content. It’s the generative equivalent of disabling the forced restart button—suddenly you can’t tell if the phone is truly dead or just running an unfiltered OS.