z li1
Guest
I have a problem when I use some deep learning software for inference, such as roop and Topaz, which let you choose GPU or CPU for inference. No matter whether I force the GPU or let the software automatically choose the best method, it cannot properly select my 3080. But Stable Diffusion, another piece of software, can correctly select my 3080 for inference, because it uses CUDA. I checked and found that Topaz's model format is OpenVINO (ov), and roop's model format is ONNX. If I use onnxruntime running on DirectML, I cannot use the GPU properly. The possible reason i
Continue reading...