
Allwinner A133 OpenCL performance test #10578

Open
fangbaolei opened this issue Oct 23, 2024 · 4 comments
@fangbaolei

I wanted to benchmark OpenCL performance on the Allwinner A133. After launching with the command below, I noticed that one CPU core stays pinned at 100% while GPU utilization sits around 30–40%. Is this normal? Why does OpenCL mode consume so much CPU?
```shell
./benchmark_bin \
  --model_file=MobileNetV3_small_x1_0_infer/inference.pdmodel \
  --param_file=MobileNetV3_small_x1_0_infer/inference.pdiparams \
  --input_shape=1,3,224,224 \
  --warmup=10 \
  --repeats=2000 \
  --backend=opencl,arm \
  --opencl_cache_dir=./tmp \
  --opencl_kernel_cache_file=MobileNetV1_kernel.bin \
  --opencl_tuned_file=MobileNetV1_tuned.bin
```

@ddchenhao66
Collaborator

Could you upload the corresponding .nb model? We should check whether most of the operators are still running on the CPU, which would explain the CPU being maxed out.

@fangbaolei
Author

inference_model2.tar.gz
Sorry, my description above was a bit off. The model pegging the CPU at 100% is the attached YOLO model; could you check whether many of its operators run on the CPU? The MobileNetV1 model's CPU usage is around 30–40%.

@ddchenhao66
Collaborator

I looked at the .nb model. It does indeed insert many io_copy operators to transfer data back to the CPU, where operations such as reshape and transpose are executed.

@fangbaolei
Author

Is there a tool for inspecting whether the operators in an .nb model execute on the CPU or on the GPU? With that, for well-behaved models whose operators all run on the GPU, a developer could make a preliminary judgment about whether a model can run on the GPU via OpenCL and still achieve good overall performance.
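One option worth trying is the Paddle-Lite `opt` tool, which can print per-target operator support before conversion. The sketch below is based on the documented `--print_model_ops` / `--print_supported_ops` flags; exact flag names and output format may vary between Paddle-Lite versions, so check `./opt --help` for your build.

```shell
# Sketch: use Paddle-Lite's opt tool to see which ops a model needs
# and whether the OpenCL target supports them (flags per the opt docs;
# verify against your version with ./opt --help).

# List the operators used by a model and whether they are supported
# on the chosen targets (opencl with arm as fallback):
./opt --print_model_ops=true \
      --model_file=MobileNetV3_small_x1_0_infer/inference.pdmodel \
      --param_file=MobileNetV3_small_x1_0_infer/inference.pdiparams \
      --valid_targets=opencl,arm

# List all operators the OpenCL backend supports in this build:
./opt --print_supported_ops=true --valid_targets=opencl
```

Ops that appear in the model but not in the OpenCL-supported list will fall back to the CPU (with io_copy transfers inserted around them), which is a quick way to estimate how much of a model will actually run on the GPU.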
