Generation via Gradio hangs at "Loading HiFT".
[load_quantize_encoder] start. model_path='ckpt/speech_tokenizer'
Configured for 24kHz frontend.
2025-12-12 09:52:10.6152300 [W:onnxruntime:, transformer_memcpy.cc:74 onnxruntime::MemcpyTransformer::ApplyImpl] 2 Memcpy nodes are added to the graph torch_jit for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████| 2/2 [00:11<00:00, 5.51s/it]
Loading HiFT model from ckpt/hift/hift.pt on cuda...
Also, please improve Windows support; otherwise there are too many pitfalls.