support neopp model 8 steps infer #1060
Conversation
Code Review
This pull request implements LoRA support for the NeoPP model by updating weight classes, model initialization, and the runner. It also adds a new inference configuration and an example script. The review feedback identifies missing LoRA path propagation in the FM head weights and a limitation in the runner that only applies the first LoRA configuration dynamically. Additionally, the reviewer recommended replacing hardcoded absolute paths with relative ones, removing large blocks of commented-out code, and aligning CFG settings between the example script and the configuration file.
```diff
@@ -186,11 +191,13 @@ def __init__(self, block_index, mm_type):

 class NeoppFmHeadWeights(WeightModule):
     def __init__(self, mm_type):
```
The NeoppFmHeadWeights class is missing the lora_path parameter in its __init__ method. This will prevent LoRA weights from being correctly associated with the FM head modules when using lazy loading. Note that you will also need to update the call site in NeoppTransformerWeights (around line 46) to pass this parameter.
```diff
-    def __init__(self, mm_type):
+    def __init__(self, mm_type, lora_path=None):
```
```python
MM_WEIGHT_REGISTER["Default"](
    "fm_modules.fm_head.0.weight",
    "fm_modules.fm_head.0.bias",
    lora_prefix=lora_prefix,

MM_WEIGHT_REGISTER["Default"](
    "fm_modules.fm_head.2.weight",
    "fm_modules.fm_head.2.bias",
    lora_prefix=lora_prefix,
```
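The fix the reviewer asks for can be sketched as follows. This is a minimal, self-contained illustration of threading `lora_path` from `NeoppTransformerWeights` down into `NeoppFmHeadWeights`; the `WeightModule` base class here is a stand-in, not LightX2V's real implementation.

```python
# Sketch of the suggested change: accept lora_path in NeoppFmHeadWeights
# and forward it from the NeoppTransformerWeights call site, so lazily
# loaded FM-head weights can locate their LoRA tensors.
# WeightModule is a hypothetical stand-in for the real base class.

class WeightModule:
    """Stand-in base class; the real one lives in LightX2V."""
    def __init__(self):
        self.weights = {}


class NeoppFmHeadWeights(WeightModule):
    # lora_path is the newly added parameter; None keeps plain loading.
    def __init__(self, mm_type, lora_path=None):
        super().__init__()
        self.mm_type = mm_type
        self.lora_path = lora_path


class NeoppTransformerWeights(WeightModule):
    # The call site must forward lora_path; otherwise the FM head
    # silently falls back to loading without LoRA.
    def __init__(self, mm_type, lora_path=None):
        super().__init__()
        self.fm_head = NeoppFmHeadWeights(mm_type, lora_path=lora_path)


weights = NeoppTransformerWeights("Default", lora_path="/path/to/lora.safetensors")
print(weights.fm_head.lora_path)  # the path reaches the FM head
```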
```json
"use_triton_qknorm_rope": true,
"lora_configs": [
    {
        "path": "/data/nvme1/yongyang/kkk/models/sensenova/SenseNova-U1-8B-MoT-LoRAs/SenseNova-U1-8B-MoT-LoRA-8step-V1.0.safetensors",
```
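The review recommends replacing hardcoded absolute paths like the one above with relative ones. One possible approach, sketched below with a hypothetical helper (`resolve_lora_paths` is not part of LightX2V), is to resolve each LoRA path in the config relative to a base directory such as the model root:

```python
# Hypothetical helper illustrating the "use relative paths" suggestion:
# anchor relative LoRA paths at a base directory instead of hardcoding
# machine-specific absolute paths into the shipped config.
from pathlib import Path


def resolve_lora_paths(lora_configs, base_dir):
    """Return copies of lora_configs with relative paths anchored at base_dir."""
    base = Path(base_dir)
    resolved = []
    for cfg in lora_configs:
        path = Path(cfg["path"])
        # Absolute paths pass through unchanged; relative ones join base_dir.
        resolved.append({**cfg, "path": str(path if path.is_absolute() else base / path)})
    return resolved


configs = [{"path": "loras/8step-v1.safetensors", "strength": 1.0}]
print(resolve_lora_paths(configs, "/models/SenseNova-U1-8B-MoT"))
```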
```python
# -------------------------------------------------

pipe = LightX2VPipeline(
    model_path="/data/nvme1/yongyang/kkk/models/sensenova/SenseNova-U1-8B-MoT",

    index_offset_cond=298,
    index_offset_uncond=9,
    cfg_interval=(-1, 2),
    cfg_scale=4.0,
```
The cfg_scale is set to 4.0, but the corresponding configuration file neopp_dense_8steps.json has "enable_cfg": false. In the NeoppModel.infer implementation, the CFG logic is entirely skipped if enable_cfg is False. This might be confusing for users; consider aligning the example with the config or enabling CFG if it's intended to be used.
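The interaction the reviewer describes can be illustrated with a toy version of classifier-free guidance. This is a sketch, not the actual `NeoppModel.infer` code: it assumes the standard CFG combination formula, and shows why `cfg_scale=4.0` is inert when `enable_cfg` is false.

```python
# Toy illustration (assumed standard CFG formula, not NeoppModel.infer itself):
# when enable_cfg is False the scale is never applied, so the example
# script's cfg_scale=4.0 has no effect with "enable_cfg": false in the config.

def guided_noise(cond, uncond, cfg_scale, enable_cfg):
    if not enable_cfg:
        return cond  # CFG branch skipped entirely; cfg_scale is ignored
    # Standard classifier-free guidance: uncond + scale * (cond - uncond)
    return [u + cfg_scale * (c - u) for c, u in zip(cond, uncond)]


print(guided_noise([1.0, 2.0], [0.0, 0.0], 4.0, enable_cfg=False))  # [1.0, 2.0]
print(guided_noise([1.0, 2.0], [0.0, 0.0], 4.0, enable_cfg=True))   # [4.0, 8.0]
```

Aligning the example script and the config file (either both CFG-on or both CFG-off) removes the silent mismatch.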
```python
# # -------------------------------------------------
# # TURN 1
# # -------------------------------------------------
# pipe.runner.load_kvcache(
#     "/data/nvme1/yongyang/FL/neo_9b_new/vlm_tensor_44000_ema_2k/to_x2v_cond_kv_1_360.pt",
#     "/data/nvme1/yongyang/FL/neo_9b_new/vlm_tensor_44000_ema_2k/to_x2v_uncond_kv_1_12.pt",
# )
# pipe.runner.set_inference_params(
#     index_offset_cond=366,
#     index_offset_uncond=12,
#     cfg_interval=(-1, 2),
#     cfg_scale=4.0,
#     cfg_norm="none",
#     timestep_shift=3.0,
# )

# pipe.generate(
#     seed=None,
#     save_result_path="/data/nvme1/yongyang/kkk/LightX2V/save_results/output_lightx2v_neopp_dense_2k_1.png",
#     target_shape=[2048, 2048],  # Height, Width
# )

# # -------------------------------------------------
# # TURN 2
# # -------------------------------------------------
# pipe.runner.load_kvcache(
#     "/data/nvme1/yongyang/FL/neo_9b_new/vlm_tensor_44000_ema_2k/to_x2v_cond_kv_2_439.pt",
#     "/data/nvme1/yongyang/FL/neo_9b_new/vlm_tensor_44000_ema_2k/to_x2v_uncond_kv_2_15.pt",
# )
# pipe.runner.set_inference_params(
#     index_offset_cond=441,
#     index_offset_uncond=15,
#     cfg_interval=(-1, 2),
#     cfg_scale=4.0,
#     cfg_norm="none",
#     timestep_shift=3.0,
# )

# pipe.generate(
#     seed=None,
#     save_result_path="/data/nvme1/yongyang/kkk/LightX2V/save_results/output_lightx2v_neopp_dense_2k_2.png",
#     target_shape=[2048, 2048],  # Height, Width
# )
```
```python
lora_path = lora_configs[0]["path"]
lora_strength = lora_configs[0]["strength"]
model_kwargs["lora_path"] = lora_path
model_kwargs["lora_strength"] = lora_strength
```
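The snippet above is the limitation flagged in the review summary: only `lora_configs[0]` is ever read, so additional entries in the list are silently ignored. A loop-based variant could look like the sketch below (`apply_lora` is a hypothetical stand-in for the runner's real loading routine):

```python
# Sketch of applying every LoRA config instead of only the first.
# apply_lora is a hypothetical callback standing in for the runner's
# actual LoRA-loading code.

def apply_all_loras(lora_configs, apply_lora):
    applied = []
    for cfg in lora_configs:  # every entry, not just lora_configs[0]
        apply_lora(cfg["path"], cfg["strength"])
        applied.append(cfg["path"])
    return applied


calls = []
apply_all_loras(
    [{"path": "a.safetensors", "strength": 1.0},
     {"path": "b.safetensors", "strength": 0.5}],
    lambda path, strength: calls.append((path, strength)),
)
print(calls)  # both LoRAs recorded, in config order
```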