I tried to run this tutorial: Example: End-to-end AlexNet from PyTorch to Caffe2.
However, I found the inference speed of onnx-caffe2 to be about 10x slower than the original PyTorch model.
Can anyone help? Thanks. If the inference times were comparable, it would be great to deploy PyTorch models using Caffe2.
My Machine:
Ubuntu 14.04
CUDA 8.0
cudnn 7.0.3
Caffe2 latest
Pytorch 0.3.0
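For what it's worth, timing comparisons like this are sensitive to warm-up (cuDNN autotuning, lazy initialization) and to whether both backends actually run on the same device; the Caffe2 net may be executing on CPU while the PyTorch model runs on GPU. A minimal benchmarking sketch, where `run_pytorch` and `run_caffe2` are placeholders for your own inference calls, might look like:

```python
import time

def benchmark(fn, warmup=5, iters=50):
    """Time fn() after warm-up iterations; returns mean seconds per call."""
    for _ in range(warmup):
        fn()  # warm-up runs: exclude one-time setup costs from the measurement
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Hypothetical usage -- substitute your real inference calls:
# t_pt = benchmark(lambda: torch_model(x))           # PyTorch forward pass
# t_c2 = benchmark(lambda: prepared_backend.run(x))  # onnx-caffe2 backend
# print("PyTorch: %.2f ms, Caffe2: %.2f ms" % (t_pt * 1e3, t_c2 * 1e3))
```

Note that for GPU timing with PyTorch you would also want to synchronize (e.g. `torch.cuda.synchronize()`) inside the timed callable, since CUDA kernel launches are asynchronous.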