
[Web] ONNX Runtime Initialization Fails with irVersion Error #22931

Open
lstrhsu opened this issue Nov 23, 2024 · 5 comments
Labels
platform:web issues related to ONNX Runtime web; typically submitted using template

Comments


lstrhsu commented Nov 23, 2024

Describe the issue

I am encountering an issue with the ONNX Runtime initialization in my userscript. The error message indicates that the irVersion property is being accessed on a null object, which suggests that the ONNX model is not being loaded correctly. Additionally, the ONNX Runtime version is showing as undefined, which implies that the onnxruntime-web library might not be properly initialized or loaded.

To reproduce

Load the userscript in a browser environment

  • Chrome 133.0.6847.2
  • Tampermonkey v5.3.2

Observe the console logs for the following error:

ONNX Runtime initialization failed: TypeError: Cannot read properties of null (reading 'irVersion')
Error stack: TypeError: Cannot read properties of null (reading 'irVersion')
Initialization failed: Error: ONNX initialization failed
Error stack: Error: ONNX initialization failed

Console Logs

Script execution started...
Starting initialization...
Initializing ONNX...
ort: object
ONNX Runtime version: undefined
Fetching model from: https://raw.githubusercontent.com/lstrhsu/MyHost/main/model.onnx
Fetching model...
Model data size: 1153332 bytes
Creating ONNX Runtime session...
ONNX Runtime initialization failed: TypeError: Cannot read properties of null (reading 'irVersion')

Related Userscript code

// @require      https://cdn.jsdelivr.net/npm/[email protected]/dist/ort.min.js
// @connect      raw.githubusercontent.com
// @connect      cdn.jsdelivr.net
// @grant        GM_xmlhttpRequest
// @grant        unsafeWindow
// @connect      microsoft.github.io
// @resource     WASM_SIMD https://cdn.jsdelivr.net/npm/[email protected]/dist/ort-wasm-simd.wasm
// @resource     WASM https://cdn.jsdelivr.net/npm/[email protected]/dist/ort-wasm.wasm
// ==/UserScript==
(async function() {
    'use strict';
    
    console.log('Script execution started...');

    // Initialize ONNX session
    let session;

    // Configure ONNX Runtime
    const initONNX = async () => {
        try {
            // Check if ONNX Runtime is loaded correctly
            console.log('ort:', typeof ort);
            if (typeof ort === 'undefined') {
                throw new Error('ONNX Runtime not loaded');
            }
            console.log('ONNX Runtime version:', ort.version);

            // Configuration options
            const options = {
                executionProviders: ['webgl'], // Use WebGL backend
                graphOptimizationLevel: 'all'
            };

            // Use the correct model URL
            const MODEL_URL = 'https://raw.githubusercontent.com/lstrhsu/MyHost/main/model.onnx';
            console.log('Fetching model from:', MODEL_URL);

            // Fetch model data
            console.log('Fetching model...');
            const modelResponse = await new Promise((resolve, reject) => {
                GM_xmlhttpRequest({
                    method: 'GET',
                    url: MODEL_URL,
                    responseType: 'arraybuffer',
                    onload: (response) => {
                        // Validate response
                        if (response.status !== 200) {
                            reject(new Error(`Model download failed: ${response.status}`));
                            return;
                        }
                        resolve(response);
                    },
                    onerror: reject
                });
            });

            // Use ArrayBuffer directly
            const modelBuffer = modelResponse.response;
            console.log('Model data size:', modelBuffer.byteLength, 'bytes');

            // Check if model data is valid
            if (!modelBuffer || modelBuffer.byteLength === 0) {
                throw new Error('Model data is invalid or empty');
            }

            // Create session
            console.log('Creating ONNX Runtime session...');
            session = await ort.InferenceSession.create(modelBuffer, options);
            
            // Validate session
            console.log('Session created successfully');
            console.log('Input nodes:', session.inputNames);
            console.log('Output nodes:', session.outputNames);

            return true;
        } catch (error) {
            console.error('ONNX Runtime initialization failed:', error);
            console.error('Error stack:', error.stack);
            return false;
        }
    };

    // Close the async IIFE and kick off initialization (the rest of the
    // original script is omitted from this excerpt).
    await initONNX();
})();

Model Composition

Environment Information:

  • ONNX Version: 1.13.1
  • ONNX Runtime Version: 1.13.1

Version Compatibility Check:

  • Model Opset Version: 13
  • Maximum Supported Opset Version in Current Environment: 17
  • Compatibility: ✓ Compatible

Model Validation: Passed ✓

  • Warnings: The following operators may need updates: Squeeze & Reshape

Web Deployment Compatibility Check (ONNX.js):

  • IR Version 7: ✓ Compatible
  • Opset Version 13: ✓ Compatible

Metadata Information:

  • ONNX IR Version: 7
  • Producer Name: tf2onnx
  • Producer Version: 1.13.0 2c1db5
  • Model Version: 0
  • Opset Versions: [13, 2]

Urgency

This issue is urgent because it blocks a critical feature in our application that relies on ONNX Runtime for real-time captcha solving. We have a project deadline approaching in two weeks, and resolving this issue is crucial for our deployment schedule.

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

Version: 1.13.1

Execution Provider

'webgl' (WebGL)

Contributor

fs-eire commented Nov 24, 2024

1.13.1 is quite old. Here are the options:

  • Keep using 1.13.1 with WebGL; then you need to downgrade the IR version of the ONNX model to an older version.
  • [Recommended] Use the latest version (1.20.1) of onnxruntime-web with WebGPU as the execution provider (Chrome 133 supports it). The WebGPU EP has better performance than WebGL, and we keep improving it.
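
A minimal sketch of the second option, assuming the 1.20.x WebGPU bundle is already loaded on the page and modelBuffer is an ArrayBuffer holding the model bytes (both are assumptions for illustration, not code from this thread):

// Sketch only: assumes the ort.webgpu 1.20.x bundle is loaded and that
// `modelBuffer` contains the model bytes; run inside an async function.
const session = await ort.InferenceSession.create(modelBuffer, {
    // Listing 'wasm' after 'webgpu' lets the runtime fall back when WebGPU
    // is unavailable; unavailable providers are dropped with a warning.
    executionProviders: ['webgpu', 'wasm'],
    graphOptimizationLevel: 'all'
});
console.log('Inputs:', session.inputNames, 'Outputs:', session.outputNames);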

Author

lstrhsu commented Nov 25, 2024

1.13.1 is quite old. Here are the options:

  • Keep using 1.13.1 with WebGL; then you need to downgrade the IR version of the ONNX model to an older version.
  • [Recommended] Use the latest version (1.20.1) of onnxruntime-web with WebGPU as the execution provider (Chrome 133 supports it). The WebGPU EP has better performance than WebGL, and we keep improving it.

Thank you for your reply. Here's what I've tried:

  • Converted the model with different IR versions (6 and 4), but still got the same error:
    ONNX Runtime initialization failed: TypeError: Cannot read properties of null (reading 'irVersion')
    
  • Used the latest version (1.20.1) of onnxruntime-web with WebGPU as the execution provider, but got this error:
    removing requested execution provider "webgpu" from session options because it is not available: backend not found.
    
    {
      message: "Failed to load model as ONNX format: Error: unrecognized input '' for node: LSTM__61\nas ORT format: TypeError: Cannot read properties of null (reading 'irVersion')",
      name: "Error",
      stack: "Error: Failed to load model as ONNX format: Error: unrecognized input '' for node: LSTM__61\nas ORT format: TypeError: Cannot read properties of null (reading 'irVersion')\n    at dn.load (...)"
    }
    

I've tried many approaches, including retraining the model multiple times. I then found some old posts mentioning that ONNX.js didn't support LSTM. I'd like to know whether it's supported now and how I should adjust my approach; my model uses a Bidirectional LSTM.

Here's my model conversion code:

import tf2onnx
import tensorflow as tf
from tensorflow import keras
import logging
import os
import shutil
import subprocess

# 1. Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# 2. Keep CTCLayer definition consistent with original Python version
class CTCLayer(keras.layers.Layer):
    def __init__(self, name=None, trainable=True, **kwargs):
        super().__init__(name=name, trainable=trainable, **kwargs)
        self.loss_fn = keras.backend.ctc_batch_cost

    def call(self, y_true, y_pred):
        return y_pred

def convert_model_to_onnx(model_path, output_path):
    # Define the temp directory up front so the cleanup in `except` cannot
    # raise a NameError if conversion fails before the save step.
    temp_saved_model = 'temp_saved_model'
    try:
        logger.info("Loading model...")
        # Load model
        model = keras.models.load_model(
            model_path, 
            custom_objects={'CTCLayer': CTCLayer},
            compile=False
        )
        
        # Create prediction model
        logger.info("Creating prediction model...")
        prediction_model = keras.models.Model(
            model.get_layer(name="image").input,
            model.get_layer(name="dense2").output
        )
        
        # Save as SavedModel format with specific input shape
        logger.info("Saving as SavedModel format...")
        prediction_model.save(
            temp_saved_model, 
            save_format='tf',
            signatures={
                'serving_default': tf.function(
                    lambda x: prediction_model(x)
                ).get_concrete_function(
                    tf.TensorSpec(shape=(1, 280, 80, 1), dtype=tf.float32, name="image")
                )
            }
        )
        
        # Convert using command line
        logger.info("Starting ONNX conversion...")
        cmd = f"python -m tf2onnx.convert --saved-model {temp_saved_model} --output {output_path} --opset 13"
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        
        if result.returncode != 0:
            logger.error(f"Conversion failed: {result.stderr}")
            raise Exception(f"Conversion failed: {result.stderr}")
            
        # Clean up temporary files
        if os.path.exists(temp_saved_model):
            shutil.rmtree(temp_saved_model)
        
        logger.info(f"Model successfully converted and saved to: {output_path}")
        return True
        
    except Exception as e:
        logger.error(f"Error during conversion: {str(e)}")
        if os.path.exists(temp_saved_model):
            shutil.rmtree(temp_saved_model)
        raise

if __name__ == "__main__":
    try:
        model_path = "model.h5"
        output_path = "model.onnx"
        
        # Verify input file exists
        if not os.path.exists(model_path):
            raise FileNotFoundError(f"Model file not found: {model_path}")
            
        # Execute conversion
        convert_model_to_onnx(model_path, output_path)
        
        # Verify output file
        if os.path.exists(output_path):
            logger.info("Conversion completed successfully!")
        else:
            raise FileNotFoundError("Conversion failed, output file not generated")
            
    except Exception as e:
        logger.error(f"Program execution failed: {str(e)}")
        raise

Thank you in advance for your response.

Contributor

fs-eire commented Nov 26, 2024

Used the latest version (1.20.1) of onnxruntime-web with WebGPU as the execution provider, but got this error:
removing requested execution provider "webgpu" from session options because it is not available: backend not found.

Sorry, I forgot to mention that you need to use import "onnxruntime-web/webgpu" instead of import "onnxruntime-web" to use WebGPU in 1.20.x.
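
In module code that looks like the first line below; for a Tampermonkey userscript, the rough equivalent (my assumption, it is not stated explicitly in this thread) is to @require the ort.webgpu.min.js bundle instead of ort.min.js:

// With a bundler / ES modules (onnxruntime-web 1.20.x):
import * as ort from 'onnxruntime-web/webgpu';

// Userscript equivalent (assumption): swap the bundle in the metadata block:
// @require      https://cdn.jsdelivr.net/npm/[email protected]/dist/ort.webgpu.min.js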

Author

lstrhsu commented Nov 26, 2024

Sorry, I forgot to mention that you need to use import "onnxruntime-web/webgpu" instead of import "onnxruntime-web" to use WebGPU in 1.20.x.

I apologize for any confusion, as I am a complete beginner. I attempted to use the following URLs:

// @require      https://unpkg.com/[email protected]/dist/ort.webgpu.min.js
or
// @require      https://cdn.jsdelivr.net/npm/[email protected]/dist/ort.webgpu.min.js

However, both resulted in the following output:

Checking WebGPU support...
WebGPU supported: true
Creating session with options: {"executionProviders":["webgpu"],"graphOptimizationLevel":"all","logSeverityLevel":0,"executionMode":"sequential"}
Failed to load resource: the server responded with a status of 404 ()

I checked, and the URLs are correct.
Could you please advise on any additional steps I might need to take to resolve this issue?
Thank you for your assistance!

Contributor

fs-eire commented Nov 26, 2024

Failed to load resource: the server responded with a status of 404 ()

Is the model URL returning the 404? A full, reproducible example would be helpful.
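
One cause worth ruling out (an assumption on my part, not a confirmed diagnosis from this thread): the 404 may come from the runtime fetching its .wasm sidecar files relative to the page URL rather than from the CDN. onnxruntime-web exposes ort.env.wasm.wasmPaths to redirect those requests:

// Hypothesis only: point the runtime at the CDN for its .wasm artifacts
// before creating the session, so they are not resolved against the
// current page's URL.
ort.env.wasm.wasmPaths = 'https://cdn.jsdelivr.net/npm/[email protected]/dist/';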
