[Test only] BFloat16 test for SkipSimplifiedLayerNormalization #22941

Open

jiafatom wants to merge 1 commit into main from skip_bf_16
Conversation

jiafatom (Contributor)

Description

Motivation and Context

jiafatom changed the title from "BFloat16 test for SkipSimplifiedLayerNormalization" to "[Test only] BFloat16 test for SkipSimplifiedLayerNormalization" on Nov 25, 2024
github-actions bot (Contributor) left a comment

You can commit the suggested changes from lintrunner.

Comment on lines 122 to 126
skip_size);
}
else
{
LaunchSkipLayerNormKernel<CudaT, Simplified>(

Suggested change
Original:
skip_size);
}
else
{
LaunchSkipLayerNormKernel<CudaT, Simplified>(
Suggested:
skip_size);
} else {
LaunchSkipLayerNormKernel<CudaT, Simplified>(

Comment on lines 18 to 22
import tempfile
from typing import Dict
from enum import Enum


Suggested change
Original:
import tempfile
from typing import Dict
from enum import Enum
Suggested:
import tempfile
from enum import Enum
from typing import Dict

Comment on lines 24 to 27
from onnx import AttributeProto, GraphProto, ModelProto, NodeProto, TensorProto, helper, numpy_helper
from onnx.shape_inference import infer_shapes, infer_shapes_path
from onnx.helper import float32_to_bfloat16
from packaging import version

Suggested change
Original:
from onnx import AttributeProto, GraphProto, ModelProto, NodeProto, TensorProto, helper, numpy_helper
from onnx.shape_inference import infer_shapes, infer_shapes_path
from onnx.helper import float32_to_bfloat16
from packaging import version
Suggested:
from onnx import AttributeProto, GraphProto, ModelProto, NodeProto, TensorProto, helper, numpy_helper
from onnx.helper import float32_to_bfloat16
from onnx.shape_inference import infer_shapes, infer_shapes_path
from packaging import version

Comment on lines 40 to 41


def convert_np_to_float16(np_array, min_positive_val=5.96e-08, max_finite_val=65504.0):

Suggested change
Original:
def convert_np_to_float16(np_array, min_positive_val=5.96e-08, max_finite_val=65504.0):
Suggested:
def convert_np_to_float16(np_array, min_positive_val=5.96e-08, max_finite_val=65504.0):
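
The min_positive_val and max_finite_val defaults above correspond to the smallest positive subnormal and the largest finite value representable in IEEE float16. A minimal sketch of the clamping such a helper typically performs before narrowing, illustrative only and not the implementation under review:

import numpy as np


def convert_np_to_float16_sketch(np_array, min_positive_val=5.96e-08, max_finite_val=65504.0):
    """Illustrative only: clamp a float32 array into the float16 range, then cast."""
    arr = np.asarray(np_array, dtype=np.float32)
    # Nudge tiny magnitudes away from zero so they do not flush to 0 in float16.
    arr = np.where((arr > 0) & (arr < min_positive_val), min_positive_val, arr)
    arr = np.where((arr < 0) & (arr > -min_positive_val), -min_positive_val, arr)
    # Saturate values that overflow the float16 range instead of producing inf.
    arr = np.clip(arr, -max_finite_val, max_finite_val)
    return arr.astype(np.float16)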

Comment on lines 110 to +111

def convert_tensor_float_to_bfloat16(tensor):

Suggested change
Original:
def convert_tensor_float_to_bfloat16(tensor):
Suggested:
def convert_tensor_float_to_bfloat16(tensor):
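
For context, a rough sketch of what a helper with this name might do: rewrite a FLOAT initializer as BFLOAT16, storing the 16-bit patterns in int32_data as the TensorProto spec requires. It uses onnx.helper.float32_to_bfloat16, which the file imports above; the body is an assumption, not the PR's actual code.

import numpy as np
from onnx import TensorProto, numpy_helper
from onnx.helper import float32_to_bfloat16


def convert_tensor_float_to_bfloat16_sketch(tensor):
    """Illustrative only: convert a FLOAT TensorProto to BFLOAT16 in place."""
    if tensor.data_type != TensorProto.FLOAT:
        return tensor  # leave non-float tensors untouched
    values = numpy_helper.to_array(tensor).flatten().astype(np.float32)
    tensor.ClearField("float_data")
    tensor.ClearField("raw_data")
    tensor.data_type = TensorProto.BFLOAT16
    # BFLOAT16 values are stored as uint16 bit patterns in the int32_data field.
    tensor.int32_data.extend(float32_to_bfloat16(float(v)) for v in values)
    return tensor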

Comment on lines +189 to +190
class NodeValueType(Enum):
FP32 = 1

Suggested change
Original:
class NodeValueType(Enum):
FP32 = 1
Suggested:
class NodeValueType(Enum):

Comment on lines 194 to 195
class InitializerTracker:
"""Class for keeping track of initializer."""

Suggested change
Original:
class InitializerTracker:
"""Class for keeping track of initializer."""
Suggested:
class InitializerTracker:
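
The class in question appears to record, per initializer, which consuming nodes need it in full precision and which can take the converted value. A minimal sketch of that idea; field and method names are assumptions, not taken from the diff.

class InitializerTrackerSketch:
    """Illustrative only: track which consumers of an initializer need which precision."""

    def __init__(self, initializer):
        self.initializer = initializer
        self.fp32_nodes = []  # nodes that must keep consuming the float32 value
        self.fp16_nodes = []  # nodes that can consume the converted value

    def add_node(self, node, keep_fp32):
        if keep_fp32:
            self.fp32_nodes.append(node)
        else:
            self.fp16_nodes.append(node)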

Comment on lines 211 to 212
def convert_float_to_float16(
model,

Suggested change
Original:
def convert_float_to_float16(
model,
Suggested:
def convert_float_to_float16(
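
As a usage sketch, the entry point above would typically be called on a loaded ModelProto; the keyword argument shown is an assumption about the converter's interface, not confirmed by this diff.

import onnx

model = onnx.load("model_fp32.onnx")
# Convert internal tensors to float16 while (assumed) keeping graph inputs/outputs as float32.
fp16_model = convert_float_to_float16(model, keep_io_types=True)
onnx.save(fp16_model, "model_fp16.onnx")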

Comment on lines 470 to 471

# Some operators have data type fixed as float for some input. Add a float16 to float cast for those inputs.
for node in mixed_float_type_node_list:

Suggested change
Original:
# Some operators have data type fixed as float for some input. Add a float16 to float cast for those inputs.
for node in mixed_float_type_node_list:
Suggested:
# Some operators have data type fixed as float for some input. Add a float16 to float cast for those inputs.
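
The comment above describes the standard workaround for operators whose spec pins a particular input to float32: route that input through a Cast back to FLOAT. A minimal sketch of inserting such a node; the helper name and placement strategy are illustrative only.

from onnx import TensorProto, helper


def insert_fp32_cast_for_input(graph, node, input_index):
    """Illustrative only: make one input of `node` float32 again via a Cast node."""
    original_input = node.input[input_index]
    cast_output = original_input + "_as_fp32"
    cast_node = helper.make_node(
        "Cast",
        inputs=[original_input],
        outputs=[cast_output],
        to=TensorProto.FLOAT,
        name=cast_output + "_cast",
    )
    # Insert the Cast immediately before the consuming node to preserve topological order.
    consumer_index = list(graph.node).index(node)
    graph.node.insert(consumer_index, cast_node)
    node.input[input_index] = cast_output
    return cast_node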

github-actions bot (Contributor) left a comment

You can commit the suggested changes from lintrunner.

Comment on lines 18 to 23
import tempfile
from typing import Dict
from enum import Enum
import ml_dtypes

import numpy as np

Suggested change
Original:
import tempfile
from typing import Dict
from enum import Enum
import ml_dtypes
import numpy as np
Suggested:
import tempfile
from enum import Enum
from typing import Dict
import ml_dtypes
import numpy as np
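
The ml_dtypes import being added here is presumably what supplies a numpy-compatible bfloat16 dtype for the test. A small sketch of how it is typically used; illustrative, not taken from the diff.

import ml_dtypes
import numpy as np

# ml_dtypes registers a bfloat16 dtype that numpy can cast to (round-to-nearest-even).
x = np.array([1.0, 3.14159, 65504.0], dtype=np.float32)
x_bf16 = x.astype(ml_dtypes.bfloat16)
# The raw 16-bit patterns, e.g. for comparison against TensorProto.int32_data contents.
bits = x_bf16.view(np.uint16)
print(x_bf16, bits)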

jiafatom force-pushed the skip_bf_16 branch 3 times, most recently from a198187 to 03bf839, on November 26, 2024 at 02:18
github-actions bot (Contributor) left a comment

You can commit the suggested changes from lintrunner.

Comment on lines +122 to +126
skip_size);
}
else
{
LaunchSkipLayerNormKernel<CudaT, Simplified>(

Suggested change
Original:
skip_size);
}
else
{
LaunchSkipLayerNormKernel<CudaT, Simplified>(
Suggested:
skip_size);
} else {
LaunchSkipLayerNormKernel<CudaT, Simplified>(
