MMDeploy with CUDA 12: collected installation notes and troubleshooting reports

MMDeploy is the OpenMMLab model deployment toolbox, providing a unified deployment experience for various algorithm libraries. Supported backends are ONNXRuntime, TensorRT, ncnn, PPLNN and OpenVINO; supported codebases are MMPretrain, MMDetection, MMSegmentation, MMOCR and MMagic. We recommend using the MMDeploy precompiled packages as our best practice: the project currently ships a model-converter package and an SDK inference package (mmdeploy_runtime, with a mmdeploy_runtime_gpu variant) on PyPI, while the SDK C/C++ library is provided as a separate download. The MMDeploy C/C++ Inference SDK relies on spdlog, OpenCV and ppl.cv, as well as the inference engines it was built against, such as TensorRT.

There are two versions of MMCV: mmcv, the comprehensive build with full features and various CUDA ops out of the box (it takes longer to build), and mmcv-lite, which drops the CUDA ops but keeps all other features, similar to mmcv before 1.x.

Versions have to match. You can refer to the official guide to install TensorRT; there is also an installation example of TensorRT 8.2 GA Update 2 under Linux x86_64 and CUDA 11.x for reference, and that example requires cuDNN 8.x. Download the cuDNN build that matches your CPU architecture, CUDA version and TensorRT version from the cuDNN Archive. For ONNX Runtime, the CUDA 12.x GPU packages are now built against cuDNN 9, while the CUDA 11.x packages continue to depend on cuDNN 8; the ONNX Runtime Python packages also pin a numpy dependency of at least 1.x, with numpy 2.0 support to be added in a future release. PyTorch does not use the system CUDA directly; it ships with its own CUDA runtime, so as long as import torch succeeds, the system CUDA toolkit only matters for the backends you build against it. Still, check the PyTorch release notes, since a given PyTorch version only ships builds for particular CUDA 10.x/11.x toolkits.

One platform note: on Windows with Python 3.8 or later, the interpreter no longer resolves DLL dependencies from PATH, so you have to add the directories the mmdeploy .pyd extension depends on manually before importing it, as sketched below.
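A minimal sketch of that workaround follows. The directory paths are placeholders rather than anything shipped by MMDeploy; point them at the folders that actually contain the CUDA, TensorRT and MMDeploy SDK DLLs on your machine. Newer releases expose the runtime as mmdeploy_runtime, while older ones used mmdeploy_python.

    import os
    import sys

    # Hypothetical example directories; replace with the folders holding the DLLs
    # that the mmdeploy .pyd extension links against.
    DLL_DIRS = [
        r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin",
        r"D:\deps\TensorRT-8.6.1.6\lib",
        r"D:\project\mmdeploy\build\bin\Release",
    ]

    if sys.platform == "win32" and hasattr(os, "add_dll_directory"):
        for dll_dir in DLL_DIRS:
            if os.path.isdir(dll_dir):
                # Python >= 3.8 no longer searches PATH for extension-module DLLs.
                os.add_dll_directory(dll_dir)

    # Import the runtime only after the DLL search paths are registered.
    from mmdeploy_runtime import Detector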
Installation. Step 1: download and install Miniconda from the official website. Step 2: create and activate a conda environment:

    conda create --name mmdeploy python=3.8 -y
    conda activate mmdeploy

Step 3: install PyTorch following the official instructions, for example conda install pytorch=={pytorch_version} torchvision=={torchvision_version} cudatoolkit={cudatoolkit_version}; MMDeploy depends on PyTorch >= 1.8. Step 4: install mmengine and MMCV with mim (mim install mmengine, then mim install "mmcv>=2.0.0rc2"), and finally install MMDeploy together with the inference engine you plan to use. In a GPU environment, first download CUDA 11.x from NVIDIA, then install TensorRT and the matching cuDNN as described above: extract the archives and set the environment variables so the libraries can be found. On Windows, open an Anaconda PowerShell prompt from the Start Menu as administrator after installation; all the commands listed here are verified in that shell, and running as administrator lets you install the third-party libraries into the system path, which simplifies the MMDeploy build command later. For Jetson platforms there is a dedicated getting-started tutorial; the MMDeploy Model Converter on Jetson depends on MMCV and the TensorRT inference engine.
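A quick post-install sanity check, using only public torch and mmdeploy attributes (tools/check_env.py prints a much fuller report):

    import torch
    import mmdeploy

    print("torch:", torch.__version__)
    print("torch built with CUDA:", torch.version.cuda)  # PyTorch ships its own CUDA runtime
    print("CUDA available:", torch.cuda.is_available())
    print("mmdeploy:", mmdeploy.__version__)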
Building MMDeploy from source exposes a few version settings: CUDA_VERSION, the version of CUDA to target (for example 11.x); CUDNN_VERSION, the version of cuDNN to target (for example 8.x); and PROTOBUF_VERSION, the version of Protobuf to use (for example 3.x). Note that changing PROTOBUF_VERSION will not configure CMake to use a system version of Protobuf; it will configure CMake to download and try building that version itself. ppl.cv currently only builds with CUDA 11.x, which is why attempts to install MMDeploy against TensorRT 8.6 and CUDA 12 have been reported to fail at the pplcv step. If the build does not find OpenCV, ONNX Runtime, pplcv or the other third-party folders, install those dependencies (or point the build at them) first; one user also had to switch to the g++-11 compiler, since g++-12 did not work with their CUDA version. If you would rather not build at all, prebuilt Docker images are published for both the latest and the released versions: the image tagged openmmlab/mmdeploy:ubuntu20.04-cuda11.8-mmdeploy is built on the latest mmdeploy, while openmmlab/mmdeploy:ubuntu20.04-cuda11.8-mmdeploy1.x corresponds to a released 1.x version. After a source build finishes, it helps to confirm that the custom-op libraries were produced and actually load; a sketch follows.
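The sketch below checks that the custom-op libraries produced by the build can actually be loaded. The library names and the build directory are the typical ones on Linux and are assumptions; adjust them to your build tree and platform (on Windows the files are DLLs under the build output folder).

    import ctypes
    from pathlib import Path

    # Assumed build output directory; change to wherever your build wrote the libraries.
    lib_dir = Path("mmdeploy/build/lib")

    for name in ("libmmdeploy_tensorrt_ops.so", "libmmdeploy_onnxruntime_ops.so"):
        path = lib_dir / name
        if not path.exists():
            print(f"{name}: not built")
            continue
        try:
            # Loading fails with OSError if a dependency (CUDA, cuDNN, TensorRT, ...) is missing.
            ctypes.CDLL(str(path))
            print(f"{name}: loads OK")
        except OSError as err:
            print(f"{name}: failed to load -> {err}")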
Converting models. This part briefly introduces how to export an OpenMMLab model to a specific backend using the MMDeploy tools. It is crucial to specify the correct deployment config during model conversion. MMDeploy already provides builtin deployment config files of all supported backends for mmdetection and mmpose, and the config file path follows the pattern {task}/{task}_{backend}-{precision}_{static | dynamic}_{shape}.py. For mmdetection, {task} takes one of two values: detection, or instance-seg for instance segmentation; these configs cover mmdet models like RetinaNet, Faster R-CNN and DETR. {precision} is fp16 or int8; when it is empty, it means fp32.

When converting a model, MMDeploy uses the test_pipeline in the model config to build the input that is fed when building the backend model, and that test_pipeline is written into the transforms field of pipeline.json for SDK use (the export2SDK helper writes the deploy.json and pipeline.json files). In one reported pipeline.json, the transform modules that affect input size are Resize and Pad, and it is recommended to change them by editing the test data pipeline in your config file. The conversion runs in stages, each logged as a subprocess pipeline: torch2onnx, the ONNX optimize passes, and then the backend step (onnx2tensorrt or, generically, to_backend). If the deployment GPU is newer than what the conversion machine's CUDA supports, for example a 40-series card, one reported workaround is to install MMDeploy on a machine with a lower CUDA version, convert the model to ONNX there, and then upload the ONNX file to the 40-series machine and run the ONNX-to-TensorRT step on it; one team wrote their own onnx-to-trt script for 40-series cards and compiled the MMDeploy custom TensorRT plugins themselves. The converter can also be driven from Python rather than the command line, as sketched below.
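Here is a sketch of driving the first conversion stage from Python instead of tools/deploy.py. It assumes the 1.x mmdeploy.apis interface, and every path below is a placeholder for your own deploy config, model config, checkpoint and test image; check the signature against the version you installed.

    from mmdeploy.apis import torch2onnx

    deploy_cfg = "configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py"
    model_cfg = "path/to/faster_rcnn_r50_fpn_1x_coco.py"    # placeholder
    checkpoint = "path/to/faster_rcnn_r50_fpn_1x_coco.pth"  # placeholder
    img = "path/to/demo.jpg"                                # placeholder

    # Stage 1 of the pipeline: export the PyTorch model to ONNX.
    torch2onnx(
        img=img,
        work_dir="work_dir",
        save_file="end2end.onnx",
        deploy_cfg=deploy_cfg,
        model_cfg=model_cfg,
        model_checkpoint=checkpoint,
        device="cuda:0",
    )
    # The resulting ONNX file can then be converted to a TensorRT engine on the
    # target machine (tools/deploy.py runs that stage as onnx2tensorrt / to_backend).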
Running the converted model. MMDeploy facilitates the deployment process, bridging the gap between models and applications; with it, developers can generate SDKs tailored to specific hardware from codebases such as MMPose, saving a lot of adaptation time. For the Python SDK, the model path must be a directory, that is, the converted model folder containing the engine plus the deploy.json and pipeline.json exported for SDK use. The object-detection demo is invoked roughly as

    PYTHONPATH=D:\project\mmdeploy\build\lib\Release python .\mmdeploy\demo\python\object_detection.py cuda D:\project\mmdeploy_model\cascade_rcnn D:\project\mmdetection\demo\demo.jpg

One reported pitfall with the older mmdeploy_python package: importing torch before the Detector caused a segmentation fault when the detector was initialized, while importing the Detector first worked. The C API mirrors the Python one; its detector-apply call applies a detector to a batch of images and gets their inference results, with parameters detector ([in] the detector handle created by mmdeploy_detector_create_by_path), mats ([in] a batch of images), mat_count ([in] the number of images in the batch) and results ([out] a linear buffer to save the detection results of each image). On the converter side there are lower-level Python APIs as well: onnx2tensorrt converts an ONNX file to a TensorRT engine (its device argument is a string specifying the cuda device, defaulting to 'cuda:0', and partition_type specifies the partition type of a model, defaulting to 'end2end'), mmdeploy.backend.tensorrt offers save(engine, path) to serialize a TensorRT engine to disk, and helpers such as load_config, get_input_shape and build_task_processor let you rebuild the task processor from the deploy and model configs in your own code. MMDeploy also supports measuring the inference latency of backend models; see how_to_measure_performance_of_models, and if you measure in your own code, exclude pre- and post-processing and skip the first N inferences when computing the latency or fps. A minimal Python SDK usage sketch follows.
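This sketch does roughly what the object_detection.py demo above does. It assumes the 1.x mmdeploy_runtime package (older releases exposed the same class as mmdeploy_python); the model directory and image path are the example paths from the demo command, so substitute your own.

    import cv2
    from mmdeploy_runtime import Detector

    # The model path must be a directory: the converted model plus the
    # deploy.json / pipeline.json files exported for SDK use.
    detector = Detector(
        model_path=r"D:\project\mmdeploy_model\cascade_rcnn",
        device_name="cuda",
        device_id=0,
    )

    img = cv2.imread(r"D:\project\mmdetection\demo\demo.jpg")
    bboxes, labels, masks = detector(img)

    # Each bbox row is [left, top, right, bottom, score].
    for bbox, label in zip(bboxes, labels):
        if bbox[4] > 0.3:
            print(int(label), bbox[:4])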
Troubleshooting. A warning such as "Could not load the library of tensorrt plugins" during onnx2tensorrt means the custom TensorRT ops library was not found; make sure it was built and is on the loader path (see the build check above), otherwise models that need the custom ops will fail. The SDK error "Net backend not found: tensorrt, available backends: ["onnxruntime"]" means the installed prebuilt package was compiled for a different backend: the mmdeploy-tensorrt prebuilt package, for example, is built with -DMMDEPLOY_TARGET_BACKENDS=trt, so only the TensorRT backend is enabled and the ONNX Runtime backend is not available; likewise, for C++ code, if the load log never mentions mmdeploy_trt_net, the TensorRT backend was not built in. Warnings of the form "Failed to search registry with scope 'mmdet'/'mmseg'/'mmpose' in the 'Codebases' registry tree ... As a workaround, the current 'Codebases' registry in 'mmdeploy' is used to build instance" ask you to check whether the scope is correct and whether the registry is initialized; in practice they, and errors like "No module named 'mmdeploy.codebase.mmyolo'" or "Import mmdeploy.codebase.mmyolo failed", usually mean the corresponding codebase is not installed or importable in the conversion environment. Warnings like "Can not find torch._C._jit_pass_onnx_deduplicate_initializers, function rewrite will not be applied" or "Can not find mmdet.models...PatchMerging.forward, function rewrite will not be applied" report that an optional rewrite target does not exist in your installed versions and are usually harmless; for missing symbolic ops the log instead suggests "Please consider adding it in symbolic function". If the conversion aborts with "mmdeploy.apis.pytorch2onnx.torch2onnx with Call id: 0 failed", scroll up for the first underlying error.

A "RuntimeError: CUDA error: invalid configuration argument" comes with the note that CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below it might be incorrect; for debugging, consider passing CUDA_LAUNCH_BLOCKING=1. One user hit such a CUDA failure only on GPU while the same pose-estimation demo ran fine on CPU and a standalone CUDA sample worked; another found that the MMDeploy Mask R-CNN model had to be initialized before their own YOLO models, since reversing the order raised an error. On older TensorRT builds you may need CUDA 10.2 Patch 1 (released Aug 26, 2020) to resolve some cuBLASLt issues; another option is to use the new TacticSource API and disable cuBLASLt tactics if you do not want to upgrade. On Windows, "'gcc' is not recognized as an internal or external command" in the environment dump simply means no GCC is installed (the report then shows GCC: n/a), and git's "fatal: unsafe repository ('D:/mmd_11.1/mmdeploy' is owned by someone else)" is fixed, as the message suggests, with git config --global --add safe.directory D:/mmd_11.1/mmdeploy. Finally, if import mmdeploy_runtime fails even though the mmdeploy_runtime_gpu wheel was installed, the package may simply not have installed correctly (one user saw this after rebooting); reinstalling the package fixed the import. A quick check of which codebases are importable in the current environment is sketched below.
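A small sanity check for the registry-scope and missing-codebase messages; it only verifies that the standard OpenMMLab packages import in the current environment and is not an mmdeploy-specific API.

    import importlib

    for pkg in ("mmdet", "mmseg", "mmpose", "mmyolo"):
        try:
            module = importlib.import_module(pkg)
            print(f"{pkg} {getattr(module, '__version__', 'unknown')} is importable")
        except ImportError as err:
            print(f"{pkg} is NOT importable: {err}")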
Reporting issues and contributing. The issue template asks you to confirm a short checklist (you have searched related issues but cannot get the expected help, you have read the FAQ documentation, and the bug has not been fixed in the latest version) and then to describe the bug, the reproduction steps and your environment, and to suggest a potential alternative or fix if you have one. For the environment part, run python mmdeploy/tools/check_env.py; it prints the environmental-information block quoted throughout these notes (sys.platform, the Python build, whether CUDA is available, numpy_random_seed, the PyTorch, NVCC and GCC or MSVC versions, and so on). Issues marked as invalid or awaiting response are labelled stale by the bot and closed after a further period without activity. We appreciate all contributions to MMDeploy; please refer to CONTRIBUTING.md for the contributing guideline, and if you find this project useful in your research, please consider citing it. We would also like to sincerely thank the following teams for their contributions to MMDeploy: OpenPPL, OpenVINO and ncnn.

Two related projects appear in these notes as well. LMDeploy is a toolkit for compressing, deploying and serving LLMs, developed by the MMRazor and MMDeploy teams; its core features include efficient inference, delivering up to 1.8x higher request throughput than vLLM by introducing key features like persistent batch (a.k.a. continuous batching), blocked KV cache, dynamic split & fuse, tensor parallelism and high-performance CUDA kernels. FlashOCC's official implementation repository uses MMDeploy for quick testing via TensorRT and also ships a TensorRT implementation written in C++ with CUDA acceleration alongside its released training code. There is likewise a write-up on deep optimization of Mask R-CNN in mmdetection and mmdeploy (the haofanwang/mmdet_benchmark repository). When you open an issue, paste the full environment report; a sketch of collecting it from Python follows.
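The sketch below gathers roughly the same environment report from Python, assuming mmengine's collect_env helper is available at the import path current mmengine releases use; tools/check_env.py remains the canonical way to produce it.

    from mmengine.utils.dl_utils import collect_env

    env = collect_env()  # sys.platform, Python, CUDA availability, PyTorch, GCC/MSVC, ...
    for key, value in env.items():
        print(f"{key}: {value}")

    try:
        import mmdeploy
        print("mmdeploy:", mmdeploy.__version__)
    except ImportError:
        print("mmdeploy: not installed")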