jensenShannonDivergence

Computes the Jensen-Shannon divergence between a and b, which is the sum over all i of (a[i] * log(2 * a[i] / (a[i] + b[i])) + b[i] * log(2 * b[i] / (a[i] + b[i]))) / 2. The base of the logarithm is 2. The ranges are assumed to contain elements in [0, 1]. Usually the ranges are normalized probability distributions, but this is not required or checked by jensenShannonDivergence. If the inputs are normalized, the result is bounded within [0, 1]. The three-parameter version stops evaluation as soon as the intermediate result is greater than or equal to limit.

  1. CommonType!(ElementType!Range1, ElementType!Range2) jensenShannonDivergence(Range1, Range2)(Range1 a, Range2 b)
     if (isInputRange!Range1 && isInputRange!Range2);
  2. CommonType!(ElementType!Range1, ElementType!Range2) jensenShannonDivergence(Range1, Range2, F)(Range1 a, Range2 b, F limit)
     if (isInputRange!Range1 && isInputRange!Range2 && is(typeof(CommonType!(ElementType!Range1, ElementType!Range2).init >= F.init) : bool));
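
As a rough illustration of the formula above (a minimal sketch, not the library's implementation; jsdSketch is a hypothetical name), the two-parameter overload computes the following sum, with zero terms skipped so that 0 * log(0) contributes nothing:

import std.math : log2;

// Naive reference sketch of the divergence formula, using base-2 logarithms.
double jsdSketch(const double[] a, const double[] b)
{
    double result = 0;
    foreach (i; 0 .. a.length)
    {
        immutable m = a[i] + b[i];
        // Terms with a zero numerator are treated as zero.
        if (a[i] != 0) result += a[i] * log2(2 * a[i] / m);
        if (b[i] != 0) result += b[i] * log2(2 * b[i] / m);
    }
    return result / 2;
}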

Examples

import std.math.operations : isClose;

// The divergence of a distribution from itself is zero.
double[] p = [ 0.0, 0, 0, 1 ];
assert(jensenShannonDivergence(p, p) == 0);
double[] p1 = [ 0.25, 0.25, 0.25, 0.25 ];
assert(jensenShannonDivergence(p1, p1) == 0);
assert(isClose(jensenShannonDivergence(p1, p), 0.548795, 1e-5));
double[] p2 = [ 0.2, 0.2, 0.2, 0.4 ];
// The Jensen-Shannon divergence is symmetric in its arguments.
assert(isClose(jensenShannonDivergence(p1, p2), 0.0186218, 1e-5));
assert(isClose(jensenShannonDivergence(p2, p1), 0.0186218, 1e-5));
// With a limit, evaluation stops once the intermediate result reaches it.
assert(isClose(jensenShannonDivergence(p2, p1, 0.005), 0.00602366, 1e-5));
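
The last assertion illustrates the early stop: with limit = 0.005, summation halts partway through and returns about 0.00602366 instead of the full divergence 0.0186218. A minimal sketch of that behavior, assuming the running half-sum is compared against limit after each element (jsdLimitSketch is a hypothetical name, and the exact check point is an implementation detail; this assumption does reproduce the values in the example above):

import std.math : log2;

// Sketch of the three-parameter overload: stop as soon as the
// intermediate result is greater than or equal to limit.
double jsdLimitSketch(const double[] a, const double[] b, double limit)
{
    double partial = 0;
    foreach (i; 0 .. a.length)
    {
        immutable m = a[i] + b[i];
        if (a[i] != 0) partial += a[i] * log2(2 * a[i] / m);
        if (b[i] != 0) partial += b[i] * log2(2 * b[i] / m);
        if (partial / 2 >= limit) break; // early exit once the limit is reached
    }
    return partial / 2;
}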
