Hacker News
AuryGlenz · 4 months ago · on: Qwen-Image: Crafting with native text rendering
You can’t split image models across 2 GPUs the way you can LLMs.
BoredPositron · 4 months ago
They also released an inference server for their models. Wan and qwen-image can be split without problems.
https://github.com/modelscope/DiffSynth-Engine
AuryGlenz · 4 months ago
Unless I missed something, from skimming their tutorial it looks like they support parallelism to speed up inference with some models, not actually splitting the model across GPUs (apart from the usual chunk-offloading techniques).
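For context on what "splitting like an LLM" means here: a minimal sketch in plain PyTorch, assuming a transformer-style backbone made of sequential blocks. This is a hypothetical illustration, not DiffSynth-Engine's API — LLM-style pipeline splitting places contiguous blocks on different devices and moves activations between them; classic diffusion UNets resist this because their skip connections span the whole network.

```python
# Hypothetical sketch of LLM-style pipeline splitting (NOT DiffSynth-Engine's
# API): put the first half of a stack of blocks on one device, the second
# half on another, and move activations across the boundary in forward().
import torch
import torch.nn as nn

class SplitStack(nn.Module):
    def __init__(self, blocks, dev0, dev1):
        super().__init__()
        half = len(blocks) // 2
        self.first = nn.Sequential(*blocks[:half]).to(dev0)
        self.second = nn.Sequential(*blocks[half:]).to(dev1)
        self.dev0, self.dev1 = dev0, dev1

    def forward(self, x):
        x = self.first(x.to(self.dev0))
        # Only the activation tensor crosses the device boundary.
        return self.second(x.to(self.dev1))

# Fall back to CPU for both halves when two GPUs aren't present.
two_gpus = torch.cuda.device_count() >= 2
dev0 = torch.device("cuda:0") if two_gpus else torch.device("cpu")
dev1 = torch.device("cuda:1") if two_gpus else torch.device("cpu")

blocks = [nn.Linear(64, 64) for _ in range(8)]
model = SplitStack(blocks, dev0, dev1)
out = model(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```

This works for transformer-style diffusion backbones because the blocks are a linear chain; with a UNet, the encoder activations feeding decoder skip connections would have to shuttle between devices too, which is what makes naive splitting awkward.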