5 Data encoding

5.1 General

5.1.7 Decimal

A Decimal is a high-precision signed decimal number. It consists of an arbitrary precision integer unscaled value and an integer scale. The scale is the power of ten that is applied to the unscaled value.
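For example, an unscaled value of 12345 with a scale of 2 represents the decimal number 12345 × 10^(-2) = 123.45.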

A Decimal has the fields described in Table 3.

Table 3 – Layout of Decimal

Field | Type | Description
TypeId | NodeId | The identifier for the Decimal DataType.
Encoding | Byte | This value is always 1.
Length | Int32 | The length of the Decimal in bytes. If the length is less than or equal to 0 then the Decimal value is 0.
Scale | Int16 | A signed integer representing the power of ten used to scale the value, i.e. the decimal number is the unscaled value × 10^(-scale). The integer is encoded with the least significant byte first.
Value | Byte [*] | A two's complement signed integer representing the unscaled value. The number of bytes is inferred from the Length field. If the number of bytes is 0 then the value is 0. The integer is encoded with the least significant byte first.
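To make the layout concrete, the sketch below encodes and decodes the Scale and Value portion of a Decimal body. It is a minimal illustration, not a normative implementation: the helper names are invented here, and the enclosing ExtensionObject fields (TypeId, Encoding and Length) are assumed to be written and read by the caller.

```python
# Minimal, non-normative sketch of the Scale + Value portion of a Decimal
# (Table 3). Helper names are illustrative; the enclosing ExtensionObject
# fields (TypeId, Encoding, Length) are assumed to be handled by the caller.
import struct

def encode_decimal_body(unscaled: int, scale: int) -> bytes:
    """Scale as Int16 (least significant byte first), then the two's
    complement unscaled value, least significant byte first."""
    if unscaled == 0:
        value_bytes = b""  # a zero-length Value means the Decimal is 0
    else:
        nbytes = (unscaled.bit_length() + 8) // 8  # room for the sign bit
        value_bytes = unscaled.to_bytes(nbytes, byteorder="little", signed=True)
    return struct.pack("<h", scale) + value_bytes

def decode_decimal_body(body: bytes) -> tuple[int, int]:
    """Return (unscaled value, scale) parsed from the encoded body."""
    scale, = struct.unpack_from("<h", body, 0)
    value_bytes = body[2:]
    unscaled = int.from_bytes(value_bytes, byteorder="little", signed=True) if value_bytes else 0
    return unscaled, scale

# Unscaled value 12345 with scale 2 encodes the decimal number 123.45.
body = encode_decimal_body(12345, 2)
assert decode_decimal_body(body) == (12345, 2)
```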

When a Decimal is encoded in a Variant the built-in type is set to ExtensionObject. Decoders that do not understand the Decimal type shall treat it like any other unknown Structure and pass it on to the application. Decoders that do understand the Decimal can parse the value and use any construct that is suitable for the DevelopmentPlatform.
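For a decoder that does recognize the Decimal type, the mapping onto the platform's decimal construct is straightforward. The following sketch uses Python's standard decimal module as one possible DevelopmentPlatform construct; the function name is illustrative only.

```python
# Sketch: mapping the decoded fields onto a platform decimal type (here the
# Python standard library's decimal module). Names are illustrative only.
from decimal import Decimal as PlatformDecimal

def to_platform_decimal(unscaled: int, scale: int) -> PlatformDecimal:
    # decimal number = unscaled value * 10^(-scale)
    return PlatformDecimal(unscaled).scaleb(-scale)

assert to_platform_decimal(12345, 2) == PlatformDecimal("123.45")
```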

If a Decimal is embedded in another Structure then the DataTypeDefinition for the field shall specify the NodeId of the Decimal Node as the DataType. If a Server publishes an OPC Binary type description for the Structure then the type description shall set the DataType for the field to ExtensionObject.
