
China’s Z-Image Dethrones Flux as King of AI Art—And Your Potato PC Can Run It

2025/12/02 20:50

In brief

  • The new Z-Image model runs on 6GB VRAM—hardware Flux2 can’t even touch.
  • Z-Image already has 200+ community resources and over a thousand positive reviews versus Flux2’s 157 reviews.
  • In our testing, it ranks as the best open-source image model to date.

Alibaba’s Tongyi Lab Z-Image Turbo, a 6-billion-parameter image generation model, dropped last week with a simple promise: state-of-the-art quality on hardware you actually own.

That promise is landing hard. Within days of its release, developers were cranking out LoRAs—custom fine-tuned adaptations—at a pace that’s already outstripping Flux2, Black Forest Labs’ much-hyped successor to the wildly popular Flux model.

Z-Image’s party trick is efficiency. While competitors like Flux2 demand 24GB of VRAM minimum (and up to 90GB for the full model), Z-Image runs on quantized setups with as little as 6GB. 

That’s RTX 2060 territory—basically hardware from 2019. Depending on the resolution, users can generate images in as little as 30 seconds. 

For hobbyists and indie creators, this is a door that was previously locked.
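If Z-Image Turbo ends up exposed through a Hugging Face diffusers-style pipeline, getting a first image out on a low-VRAM card could look roughly like the sketch below. The repo id, pipeline class, and offloading calls are assumptions for illustration, not confirmed loading code; check the official release for the real instructions.

```python
# Hypothetical sketch: generating an image with Z-Image Turbo on a low-VRAM GPU.
# The repo id and pipeline class are assumptions; consult the official release
# notes for the actual loading code before running this.
import torch
from diffusers import DiffusionPipeline

MODEL_ID = "Tongyi-MAI/Z-Image-Turbo"  # assumed Hugging Face repo id

pipe = DiffusionPipeline.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half-precision weights roughly halve VRAM use
)

# On 6-8GB cards, offload idle sub-modules to system RAM instead of keeping
# the whole pipeline resident on the GPU.
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a photorealistic portrait of a woman reading a newspaper, natural light",
    num_inference_steps=9,  # the tests described below used nine steps
    height=1024,
    width=1024,
).images[0]

image.save("z_image_turbo_test.png")
```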

The AI art community was quick to praise the model. 

“This is what SD3 was supposed to be,” wrote user Saruhey on CivitAI, the world’s largest repository of open source AI art tools. “The prompt adherence is pretty exquisite… a model that can do text right away is game-changing. This thing is packing the same, if not better, power than Flux is black magic on its own. The Chinese are way ahead of the AI game.”

Z-Image Turbo has been available on Civitai since last Thursday and has already gotten over 1,200 positive reviews. For context, Flux2—released a few days before Z-Image—has 157.

The model is fully uncensored out of the box. Celebrities, fictional characters, and yes, explicit content are all on the table. 

As of today, there are around 200 resources (finetunes, LoRAs, workflows) for the model on Civitai alone, many of which are NSFW. 

On Reddit, user Regular-Forever5876 tested the model’s limits with gore prompts and came away stunned: “Holy cow!!! This thing understands gore AF! It generates it flawlessly,” they wrote.

The technical secret behind Z-Image Turbo is its S3-DiT architecture—a single-stream transformer that processes text and image data together from the start, rather than merging them later. This tight integration, combined with aggressive distillation techniques, enables the model to meet quality benchmarks that usually require models five times its size.
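To make the single-stream idea concrete, here is a toy PyTorch sketch, not Z-Image’s actual code: text tokens and image tokens are concatenated into one sequence and pass through the same transformer blocks, so attention mixes the two modalities from the first layer instead of fusing them late. The dimensions and layer names are illustrative only.

```python
# Toy illustration of a single-stream transformer block: text and image tokens
# share one sequence and one set of weights, so cross-modal attention happens
# from the very first layer. Illustrative only, not Z-Image's implementation.
import torch
import torch.nn as nn

class SingleStreamBlock(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.norm1(tokens)
        attn_out, _ = self.attn(h, h, h)  # text and image tokens attend to each other
        tokens = tokens + attn_out
        return tokens + self.mlp(self.norm2(tokens))

# Text and image latents enter as a single concatenated stream up front.
text_tokens = torch.randn(1, 77, 512)    # e.g. encoded prompt
image_tokens = torch.randn(1, 256, 512)  # e.g. patchified noisy latents
stream = torch.cat([text_tokens, image_tokens], dim=1)

block = SingleStreamBlock()
stream = block(stream)  # one set of weights processes both modalities together
```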

Testing the model

We ran Z-Image Turbo through extensive testing across multiple dimensions. Here’s what we found.

Speed: SDXL pace, next-gen quality

At nine steps, Z-Image Turbo generates images at roughly the same speed as SDXL running its usual 30 steps—and SDXL is a model that dropped back in 2023. 

The difference is that Z-Image’s output quality matches or beats Flux. On a laptop with an RTX 2060 GPU with 6GB of VRAM, one image took 34 seconds. 

Flux2, by comparison, takes approximately ten times longer to generate a comparable image.
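Readers who want to reproduce the comparison can wrap the pipeline call in a rough timing harness like the one below, which reuses the hypothetical pipe object from the earlier sketch. Exact numbers will vary with GPU, resolution, drivers, and step count.

```python
# Rough timing harness for the step-count comparison; assumes `pipe` from the
# earlier low-VRAM sketch. Results depend heavily on hardware and settings.
import time
import torch

def time_generation(pipe, prompt: str, steps: int) -> float:
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # flush pending GPU work so it doesn't skew timing
    start = time.perf_counter()
    pipe(prompt, num_inference_steps=steps)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return time.perf_counter() - start

prompt = "a dog with a red hat standing on top of a TV"
for steps in (9, 30):
    print(f"{steps} steps: {time_generation(pipe, prompt, steps):.1f}s")
```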

Realism: The new benchmark

Z-Image Turbo is the most photorealistic open-source model available right now for consumer-grade hardware. It beats Flux2 outright, and the base distilled model outperforms dedicated realism fine-tunes of Flux. 

Skin and hair texture look detailed and natural. The infamous “Flux chin” and “plastic skin” are mostly gone. Body proportions are consistently solid, and LoRAs enhancing realism even further are already circulating.

Text generation: Finally, words that work

This is where Z-Image truly shines. It’s the best open-source model for in-image text generation, performing on par with Google’s Nanobanana and ByteDance’s Seedream—models that set the current standard. 

For Mandarin speakers, Z-Image is the obvious choice. It understands Chinese natively and renders characters correctly.

Pro tip: Some users have reported that prompting in Mandarin actually helps the model produce better outputs, and the developers even published a “prompt enhancer” in Mandarin.

English text is equally strong, with one exception: uncommon long words like “decentralized” can trip it up—a limitation shared by Nanobanana too.

Spatial awareness and prompt adherence: Exceptional

Z-Image’s prompt adherence is outstanding. It understands style, spatial relationships, positions, and proportions with remarkable precision. 

For example, take this prompt:

A dog with a red hat standing on top of a TV showing the words “Decrypt 是世界上最好的加密货币与人工智能媒体网站” on the screen. On the left, there is a blonde woman in a business suit holding a coin; on the right, there is a robot standing on top of a first aid box, and a green pyramid stands behind the box. The overall scenery is surreal. A cat is standing upside down on top of a white soccer ball, next to the dog. An Astronaut from NASA holds a sign that reads “Emerge” and is placed next to the robot.

As you can see, the output had only one typo, probably because of the language mixture; other than that, all the elements are accurately represented. 

Prompt bleeding is minimal, and complex scenes with multiple subjects stay coherent. It beats Flux on this metric and holds its own against Nanobanana.

What’s next?

Alibaba plans to release two more variants: Z-Image-Base for fine-tuning, and Z-Image-Edit for instruction-based modifications. If they land with the same polish as Turbo, the open-source landscape is about to shift dramatically.

For now, the community’s verdict is clear: Z-Image has taken Flux’s crown, much like Flux once dethroned Stable Diffusion.

The real winner will be whoever attracts the most developers to build on top of it.

But if you ask us, yeah, Z-Image is our favorite home-oriented open-source model right now.


Source: https://decrypt.co/350572/chinas-z-image-dethrones-flux-king-of-ai-art

