Based on this tutorial http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html, we need to specify the batch_size while exporting the model from PyTorch to ONNX. In some cases we need a dynamic batch_size for inference. Do you have any advice on how we can do this?
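For concreteness, here is a minimal sketch of the kind of export call the tutorial walks through, with the `dynamic_axes` argument of `torch.onnx.export` included as one common way to declare a dynamic batch dimension. The model class, tensor names, and output file name below are placeholders for illustration, not the tutorial's exact code:

```python
import torch
import torch.nn as nn

# Toy stand-in for the tutorial's SuperResolutionNet; the real model
# definition comes from the tutorial, this one just keeps the snippet
# self-contained.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()

# The tutorial exports with a fixed batch size baked into the dummy input.
batch_size = 1
dummy_input = torch.randn(batch_size, 1, 224, 224)

# Marking dimension 0 as dynamic via dynamic_axes lets the exported graph
# accept any batch size at inference time ("input"/"output" are
# hypothetical names; adjust them to your model).
torch.onnx.export(
    model,
    dummy_input,
    "super_resolution.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch_size"},
                  "output": {0: "batch_size"}},
)
```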