Knowledgebase: Performance Tuning
MarkLogic Server and the decimal type implementation
11 May 2017 10:40 AM

Introduction: the decimal type

In order to be compliant with the XQuery specification and to satisfy the needs of customers working with financial data, MarkLogic Server implements a decimal type, available in XQuery and server-side JavaScript.

The decimal type was implemented for very specific requirements: decimals have about a dozen more bits of precision than doubles, but they take up more memory, and arithmetic operations on them are much slower.

Use double where possible

Unless you have a specific requirement to use the decimal data type, in most cases it is better and faster to use the double data type to represent large numbers.
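As a rough illustration of the trade-off, plain JavaScript numbers (which are IEEE 754 doubles) stand in for xs:double here; a decimal, with its 64 bits of digits, would hold both values below exactly:

```javascript
// A double carries a 53-bit significand, so integers above
// 2^53 - 1 (Number.MAX_SAFE_INTEGER) silently round to a nearby value.
const big = Number.MAX_SAFE_INTEGER + 2;                  // mathematically 9007199254740993

console.log(big === 9007199254740992);                    // true: rounded to the nearest double
console.log(Number("18446744073709551615") === 2 ** 64);  // true: the 20-digit value rounds up
```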

Specific details about the decimal data type

If you still want or need to use the decimal data type, below are its limitations and the details of how it is implemented in MarkLogic Server:

o   Precision

  • How many decimal digits of precision does it have?

MarkLogic's implementation of xs:decimal is designed to meet the XQuery specification requirement of at least 18 decimal digits of precision. In practice, up to 19 decimal digits can be represented with full fidelity.
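The 19-digit figure follows from the 64-bit digits field; a sketch using BigInt arithmetic (which is exact) to check it:

```javascript
// 2^64 - 1 is a 20-digit number, and every 19-digit integer lies below it,
// so 19 full decimal digits always fit in the 64-bit digits field.
const maxDigits = 2n ** 64n - 1n;            // 18446744073709551615n
const largest19 = 10n ** 19n - 1n;           // 9999999999999999999n

console.log(maxDigits.toString().length);    // 20
console.log(largest19 < maxDigits);          // true
```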

  • If it is a binary number, how many binary digits of precision does it have?

A decimal number is represented inside MarkLogic with 64 bits of digits, plus an additional 64 bits holding the sign and the scale (which specifies where the decimal point is).

  • What are the exact upper and lower bounds of its precision?

-18446744073709551615 to 18446744073709551615 

Any operation producing a number outside this range will raise an XDMP-DECOVRFLW (decimal overflow) error.
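This is not MarkLogic's actual code, but the bound can be sketched with a hypothetical range check, using BigInt to keep the 64-bit arithmetic exact:

```javascript
// Hypothetical model of the documented bound: the digits field is a
// 64-bit unsigned integer, so a result's magnitude may not exceed 2^64 - 1.
const DECIMAL_MAX = 2n ** 64n - 1n;          // 18446744073709551615n

function checkDecimalRange(magnitude) {
  if (magnitude > DECIMAL_MAX) {
    throw new RangeError("XDMP-DECOVRFLW: decimal overflow");
  }
  return magnitude;
}

checkDecimalRange(DECIMAL_MAX);              // largest representable magnitude: OK
// checkDecimalRange(DECIMAL_MAX + 1n)       // would throw the overflow error
```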

o   Scale

  • Does it have a fixed scale or floating scale?

It has a floating scale.

  • What are the limitations on the scale?

-20 to 0

So in magnitude you can only represent numbers between 1*(10^-20) and 18446744073709551615

  • Is the scale binary or decimal?

It is decimal: the value is stored as digits multiplied by a power of ten, not a power of two (see the Implementation section below).


  • How many decimal digits can it scale?

Up to 20: the decimal point can shift by as many as 20 decimal places (the exponent ranges from -20 to 0).


  • How many binary digits can it scale?

It does not scale by binary digits; the scale is base 10 only.


  • What is the smallest number it can represent and the largest?

smallest: -(2^64 - 1) = -18446744073709551615
closest to zero: 1*(10^-20)
largest: (2^64 - 1) = 18446744073709551615

  • Are all integers safe or does it have a limited safe range for integers?

It can represent 64 bit unsigned integers with full fidelity.
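For comparison, doubles are only integer-safe up to 2^53 - 1; a sketch in plain JavaScript, with BigInt standing in for the decimal's exact 64-bit digits:

```javascript
// Every unsigned 64-bit integer fits exactly in the decimal's digits field;
// a double, by contrast, rounds once the value passes 2^53 - 1.
const uint64Max = 2n ** 64n - 1n;            // exact, like the decimal type
const asDouble = Number(uint64Max);          // lossy conversion to a double

console.log(BigInt(asDouble) === uint64Max); // false: the double rounded up to 2^64
```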


o   Limitations

  • Does it have binary rounding errors?

The division algorithm, on Linux in particular, converts to an 80-bit binary floating point representation to calculate reciprocals, which can result in binary rounding errors. All other arithmetic algorithms work solely in base 10.

  • What numeric errors can it throw and when?

Overflow: the number is too large in magnitude to represent
Underflow: the number is too close to zero to represent
Loss of precision: the result has too many digits of precision (essentially the 64-bit digits value has overflowed)
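These conditions can be sketched against the representation model described below (value = digits * 10^exponent, with digits below 2^64 and exponent in [-20, 0]); the helper is an illustration only, not MarkLogic's algorithm:

```javascript
// Hypothetical model: an exact result is representable when it can be
// written as digits * 10^exponent with digits <= 2^64 - 1 and -20 <= exponent <= 0.
const MAX_DIGITS = 2n ** 64n - 1n;

function representable(digits, exponent) {
  return digits <= MAX_DIGITS && exponent <= 0 && exponent >= -20;
}

representable(MAX_DIGITS, 0);       // true: the largest representable value
representable(MAX_DIGITS + 1n, 0);  // false: overflow / loss of precision
representable(1n, -21);             // false: underflow, closer to zero than 10^-20
```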

  • Can it represent floating point values, such as NaN, -Infinity, +Infinity, etc.?

No. Unlike xs:double and xs:float, the decimal type cannot represent NaN, INF, or -INF.


o   Implementation

  • How is the DECIMAL data type implemented?

It has a representation with 64 bits of digits, a sign, and a base 10 exponent (fixed to range from -20 to 0). So the value is calculated like this:

sign * digits * (10 ^ exponent)
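A hypothetical decoder for such a layout, taking the exponent as a signed value in [-20, 0]; rendering through strings keeps the base-10 result exact, with no binary rounding (decimalToString is an illustration, not a MarkLogic API):

```javascript
// Decode (sign, digits, exponent) into an exact decimal string,
// where the value is sign * digits * 10^exponent.
function decimalToString(sign, digits, exponent) {
  let s = digits.toString();                   // digits is a BigInt magnitude
  if (exponent < 0) {
    const places = -exponent;
    s = s.padStart(places + 1, "0");           // ensure a digit before the point
    s = s.slice(0, -places) + "." + s.slice(-places);
  }
  return (sign < 0 ? "-" : "") + s;
}

decimalToString(1, 12345n, -2);   // "123.45"
decimalToString(-1, 5n, 0);       // "-5"
decimalToString(1, 5n, -3);       // "0.005"
```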

  • How many bytes does it consume?

On disk (for example, in the triple indexes) it is not a fixed size, as it uses integer compression. At maximum, the decimal scalar type consumes 16 bytes per value: eight bytes of digits, four bytes of sign, and four bytes of scale. This is not space efficient, but it keeps the digits aligned on eight-byte boundaries.
