
[DML EP] ORT would crash after deleting one of the models and then doing an inference #22948

Open
klin2024 opened this issue Nov 26, 2024 · 1 comment
Labels
ep:DML issues related to the DirectML execution provider

Comments

@klin2024

Describe the issue

[DML EP] ORT would crash after deleting one of the models and then doing an inference

To reproduce

  1. Load multiple models.
  2. Delete the last model that was loaded.
  3. Run an inference with one of the remaining models.
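
The steps above can be sketched as the following reproduction script. The model paths, the use of float32 inputs, and the `reproduce` helper are all assumptions for illustration; the crash itself is what the issue reports on ORT 1.20.0 with the DML EP.

```python
def reproduce(model_paths):
    """Load several DML sessions, delete the last one, then run the first."""
    import onnxruntime as ort  # imported lazily so the sketch stays importable
    import numpy as np

    # Step 1: load multiple models on the DirectML execution provider.
    sessions = [
        ort.InferenceSession(p, providers=["DmlExecutionProvider"])
        for p in model_paths
    ]

    # Step 2: delete the last session that was created.
    del sessions[-1]

    # Step 3: run an inference on a surviving session. Per the issue,
    # this is where ORT crashes.
    sess = sessions[0]
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feed = {inp.name: np.zeros(shape, dtype=np.float32)}
    return sess.run(None, feed)
```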

Urgency

No response

Platform

Windows

OS Version

26100.2314

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.20.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

DirectML

Execution Provider Library Version

No response

@klin2024 (Author) commented Nov 26, 2024

Once the ExecutionProvider is released, its m_allocator is released as well.

The fix is to set the current ExecutionProvider's allocator on the ExecutionContext in ExecutionProviderImpl::ExecuteOperator(), rather than relying on an allocator cached earlier.

With this modification the crash no longer occurs.
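
To make the lifetime bug concrete, here is a simplified analog of the pattern described above. All class and method names mirror the ones mentioned in the comment but this is illustrative Python, not actual ORT source: a context that caches an allocator at creation time dangles once the owning provider is deleted, whereas passing the current provider's allocator at execution time stays safe.

```python
class Allocator:
    """Stands in for the DML EP's m_allocator."""
    def __init__(self, name):
        self.name = name
        self.alive = True


class ExecutionProvider:
    def __init__(self, name):
        self.m_allocator = Allocator(name)

    def __del__(self):
        # Releasing the ExecutionProvider releases m_allocator as well.
        self.m_allocator.alive = False


class ExecutionContext:
    def __init__(self, allocator):
        # Buggy pattern: cache the allocator of the provider that created us.
        self.allocator = allocator

    def execute(self, allocator=None):
        # Fixed pattern: the caller (ExecuteOperator) hands in the *current*
        # provider's allocator instead of trusting the cached one.
        alloc = allocator if allocator is not None else self.allocator
        if not alloc.alive:
            raise RuntimeError("use-after-free: allocator already released")
        return f"allocated from {alloc.name}"


ep_a, ep_b = ExecutionProvider("A"), ExecutionProvider("B")
ctx = ExecutionContext(ep_b.m_allocator)  # context built against provider B
del ep_b                                  # delete the last provider loaded

# Buggy path: ctx.execute() would raise, since the cached allocator is gone.
# Fixed path: pass the live provider's allocator explicitly.
result = ctx.execute(allocator=ep_a.m_allocator)
print(result)  # -> allocated from A
```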


@github-actions github-actions bot added the ep:DML issues related to the DirectML execution provider label Nov 26, 2024