Basic Test


  1. Serialization
    1. Raw Json
  2. Example Input/Output Pair
  3. Differential Validation
    1. Feedback Validation
    2. Learning Validation
    3. Total Accuracy
    4. Frozen and Alive Status
  4. Performance
  5. Training Characteristics
    1. Input Learning

Target Description: The CrossDotMetaLayer type.

Report Description: Basic Test

Serialization

This run demonstrates the layer’s JSON serialization and verifies deserialization integrity.

Raw Json

Code from SerializationTest.java:84 executed in 0.00 seconds:

    final JsonObject json = layer.getJson();
    // Round-trip the JSON back into a new layer instance
    final NNLayer echo = NNLayer.fromJson(json);
    if (echo == null) throw new AssertionError("Failed to deserialize");
    // The copy must be a distinct object...
    if (layer == echo) throw new AssertionError("Serialization did not copy");
    // ...yet equal to the original
    if (!layer.equals(echo)) throw new AssertionError("Serialization not equal");
    return new GsonBuilder().setPrettyPrinting().create().toJson(json);

Returns:

    {
      "class": "com.simiacryptus.mindseye.layers.java.CrossDotMetaLayer",
      "id": "3b8a95e4-5399-465b-a795-cf50b7ac337a",
      "isFrozen": false,
      "name": "CrossDotMetaLayer/3b8a95e4-5399-465b-a795-cf50b7ac337a"
    }

Wrote Model to CrossDotMetaLayer_Basic.json; 209 characters
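
A minimal sketch of reloading the written artifact, assuming Gson on the classpath and the NNLayer.fromJson factory used in the test above (the package com.simiacryptus.mindseye.lang and the getName accessor are assumptions, not shown in this report):

    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;
    import com.simiacryptus.mindseye.lang.NNLayer;
    import java.io.FileReader;
    import java.io.Reader;

    // Read CrossDotMetaLayer_Basic.json (written by the run above) back into a live layer
    try (Reader reader = new FileReader("CrossDotMetaLayer_Basic.json")) {
      final JsonObject json = new JsonParser().parse(reader).getAsJsonObject();
      final NNLayer restored = NNLayer.fromJson(json);
      System.out.println(restored.getName());  // prints the name recorded in the JSON above
    }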

Example Input/Output Pair

Display input/output pairs from random executions:

Code from ReferenceIO.java:69 executed in 0.00 seconds:

    final SimpleEval eval = SimpleEval.run(layer, inputPrototype);
    return String.format("--------------------\nInput: \n[%s]\n--------------------\nOutput: \n%s\n--------------------\nDerivative: \n%s",
                         Arrays.stream(inputPrototype).map(t -> t.prettyPrint()).reduce((a, b) -> a + ",\n" + b).get(),
                         eval.getOutput().prettyPrint(),
                         Arrays.stream(eval.getDerivative()).map(t -> t.prettyPrint()).reduce((a, b) -> a + ",\n" + b).get());

Returns:

    --------------------
    Input: 
    [[ 1.708, -1.248, 1.064 ]]
    --------------------
    Output: 
    [ [ 0.0, -2.131584, 1.817312 ], [ -2.131584, 0.0, -1.3278720000000002 ], [ 1.817312, -1.3278720000000002, 0.0 ] ]
    --------------------
    Derivative: 
    [ -0.3679999999999999, 5.5440000000000005, 0.9200000000000002 ]
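
The pair above is consistent with an output of pairwise products with a zeroed diagonal, output[i][j] = x[i] * x[j] for i != j (for example, 1.708 * -1.248 = -2.131584), with each input's derivative under a unit output gradient equal to twice the sum of the other inputs (for example, 2 * (-1.248 + 1.064) = -0.368). The layer's contract is not stated in this report, so the following plain-Java sketch is inferred from this single example:

    // Reproduce the example output and derivative for x = [1.708, -1.248, 1.064]
    final double[] x = {1.708, -1.248, 1.064};
    final int n = x.length;
    final double[][] out = new double[n][n];
    final double[] grad = new double[n];
    for (int i = 0; i < n; i++) {
      for (int j = 0; j < n; j++) {
        if (i == j) continue;        // the diagonal stays 0.0
        out[i][j] = x[i] * x[j];     // e.g. out[0][1] = 1.708 * -1.248 = -2.131584
        grad[i] += 2 * x[j];         // out[i][j] and out[j][i] each contribute x[j]
      }
    }
    // grad is now [-0.368, 5.544, 0.920] up to rounding, matching the Derivative line above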

Differential Validation

Code from SingleDerivativeTester.java:292 executed in 0.00 seconds:

    log.info(String.format("Inputs: %s", Arrays.stream(inputPrototype).map(t -> t.prettyPrint()).reduce((a, b) -> a + ",\n" + b).get()));
    log.info(String.format("Inputs Statistics: %s", Arrays.stream(inputPrototype).map(x -> new ScalarStatistics().add(x.getData()).toString()).reduce((a, b) -> a + ",\n" + b).get()));
    log.info(String.format("Output: %s", outputPrototype.prettyPrint()));
    log.info(String.format("Outputs Statistics: %s", new ScalarStatistics().add(outputPrototype.getData())));

Logging:

    Inputs: [ 0.496, -0.016, 1.812 ]
    Inputs Statistics: {meanExponent=-0.6140800491710278, negative=1, min=1.812, max=1.812, mean=0.7639999999999999, count=3, positive=2, stdDev=0.7699627696627069, zeros=0}
    Output: [ [ 0.0, -0.007936, 0.898752 ], [ -0.007936, 0.0, -0.028992 ], [ 0.898752, -0.028992, 0.0 ] ]
    Outputs Statistics: {meanExponent=-1.2281600983420555, negative=4, min=0.0, max=0.0, mean=0.19151644444444446, count=9, positive=2, stdDev=0.3781843188026797, zeros=3}
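
As a sanity check, the meanExponent field is consistent with the mean base-10 exponent of the absolute values: for the inputs [ 0.496, -0.016, 1.812 ], (log10 0.496 + log10 0.016 + log10 1.812) / 3 ≈ (-0.3045 - 1.7959 + 0.2582) / 3 ≈ -0.6141, matching the logged -0.6140800491710278.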
    

Feedback Validation

We validate agreement between the implemented derivative with respect to the inputs and finite-difference estimates:

Code from SingleDerivativeTester.java:303 executed in 0.00 seconds:

    return testFeedback(statistics, component, inputPrototype, outputPrototype);

Logging:

    Feedback for input 0
    Inputs Values: [ 0.496, -0.016, 1.812 ]
    Value Statistics: {meanExponent=-0.6140800491710278, negative=1, min=1.812, max=1.812, mean=0.7639999999999999, count=3, positive=2, stdDev=0.7699627696627069, zeros=0}
    Implemented Feedback: [ [ 0.0, -0.016, 1.812, -0.016, 0.0, 0.0, 1.812, 0.0, 0.0 ], [ 0.0, 0.496, 0.0, 0.496, 0.0, 1.812, 0.0, 1.812, 0.0 ], [ 0.0, 0.0, 0.496, 0.0, 0.0, -0.016, 0.496, -0.016, 0.0 ] ]
    Implemented Statistics: {meanExponent=-0.6140800491710279, negative=4, min=0.0, max=0.0, mean=0.33955555555555555, count=27, positive=8, stdDev=0.6384419306134521, zeros=15}
    Measured Feedback: [ [ 0.0, -0.015999999999991715, 1.8119999999999248, -0.015999999999991715, 0.0, 0.0, 1.8119999999999248, 0.0, 0.0 ], [ 0.0, 0.4960000000000034, 0.0, 0.4960000000000034, 0.0, 1.811999999999994, 0.0, 1.811999999999994, 0.0 ], [ 0.0, 0.0, 0.4959999999998299, 0.0, 0.0, -0.016000000000009063, 0.4959999999998299, -0.016000000000009063, 0.0 ] ]
    Measured Statistics: {meanExponent=-0.6140800491710517, negative=4, min=0.0, max=0.0, mean=0.3395555555555372, count=27, positive=8, stdDev=0.6384419306134352, zeros=15}
    Feedback Error: [ [ 0.0, 8.28503932126523E-15, -7.527312106958561E-14, 8.28503932126523E-15, 0.0, 0.0, -7.527312106958561E-14, 0.0, 0.0 ], [ 0.0, 3.3861802251067274E-15, 0.0, 3.3861802251067274E-15, 0.0, -5.995204332975845E-15, 0.0, -5.995204332975845E-15, 0.0 ], [ 0.0, 0.0, -1.7008616737257398E-13, 0.0, 0.0, -9.06219543850284E-15, -1.7008616737257398E-13, -9.06219543850284E-15, 0.0 ] ]
    Error Statistics: {meanExponent=-13.784941504065344, negative=8, min=0.0, max=0.0, mean=-1.8425590271649357E-14, count=27, positive=4, stdDev=4.7305343018002514E-14, zeros=15}
    

Returns:

    ToleranceStatistics{absoluteTol=2.0155e-14 +- 4.6595e-14 [0.0000e+00 - 1.7009e-13] (27#), relativeTol=1.2323e-13 +- 1.1969e-13 [1.6543e-15 - 2.8319e-13] (12#)}
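
The measured feedback is obtained by perturbing each input coordinate and differencing the layer's outputs. A minimal sketch of such an estimate, written against a generic vector function rather than the library's component API (the tester's actual difference scheme and probe size are not shown in this report):

    // Finite-difference Jacobian estimate: rows index inputs, columns index outputs
    static double[][] measureJacobian(java.util.function.Function<double[], double[]> f,
                                      final double[] x, final double h) {
      final int n = x.length;
      final int m = f.apply(x).length;
      final double[][] jacobian = new double[n][m];
      for (int i = 0; i < n; i++) {
        final double[] plus = x.clone();
        final double[] minus = x.clone();
        plus[i] += h;
        minus[i] -= h;
        final double[] fPlus = f.apply(plus);
        final double[] fMinus = f.apply(minus);
        for (int k = 0; k < m; k++) {
          jacobian[i][k] = (fPlus[k] - fMinus[k]) / (2 * h);  // central difference
        }
      }
      return jacobian;
    }

For this layer the result is the 3 x 9 matrix printed as Measured Feedback above, one row per input coordinate.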

Learning Validation

We validate agreement between the implemented derivative with respect to the internal weights and finite-difference estimates:

Code from SingleDerivativeTester.java:311 executed in 0.00 seconds:

    return testLearning(statistics, component, inputPrototype, outputPrototype);

Returns:

    ToleranceStatistics{absoluteTol=2.0155e-14 +- 4.6595e-14 [0.0000e+00 - 1.7009e-13] (27#), relativeTol=1.2323e-13 +- 1.1969e-13 [1.6543e-15 - 2.8319e-13] (12#)}

Total Accuracy

The overall agreement between the implemented derivative and the finite-difference estimates:

Code from SingleDerivativeTester.java:319 executed in 0.00 seconds:

    //log.info(String.format("Component: %s\nInputs: %s\noutput=%s", component, Arrays.toString(inputPrototype), outputPrototype));
    log.info(String.format("Finite-Difference Derivative Accuracy:"));
    log.info(String.format("absoluteTol: %s", statistics.absoluteTol));
    log.info(String.format("relativeTol: %s", statistics.relativeTol));

Logging:

    Finite-Difference Derivative Accuracy:
    absoluteTol: 2.0155e-14 +- 4.6595e-14 [0.0000e+00 - 1.7009e-13] (27#)
    relativeTol: 1.2323e-13 +- 1.1969e-13 [1.6543e-15 - 2.8319e-13] (12#)
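
The two counts differ because relative tolerance is only evaluated where the compared values are non-zero: 12 of the 27 Jacobian entries are non-zero (the other 15 are structural zeros), giving (12#) against (27#). The figures are consistent with relativeTol = |implemented - measured| / (|implemented| + |measured|): the reported maximum, 2.8319e-13, matches 9.0622e-15 / (0.016 + 0.016) from the feedback errors above.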
    

Frozen and Alive Status

Code from SingleDerivativeTester.java:327 executed in 0.00 seconds:

    testFrozen(component, inputPrototype);
    testUnFrozen(component, inputPrototype);

Performance

Now we execute larger-scale runs to benchmark performance:

Code from PerformanceTester.java:183 executed in 0.00 seconds:

    test(component, inputPrototype);

Logging:

    100 batches
    Input Dimensions:
    	[3]
    Performance:
    	Evaluation performance: 0.000088s +- 0.000008s [0.000073s - 0.000096s]
    	Learning performance: 0.000003s +- 0.000001s [0.000002s - 0.000004s]
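
The figures above are wall-clock statistics over 100 batches. A minimal sketch of this style of measurement, timing an arbitrary workload and reporting mean +- standard deviation in seconds (the harness's warm-up and timing policy are not shown here):

    // Time `batches` runs of a workload and print mean +- stddev, as in the log above
    static void benchmark(final Runnable workload, final int batches) {
      final double[] seconds = new double[batches];
      for (int b = 0; b < batches; b++) {
        final long start = System.nanoTime();
        workload.run();
        seconds[b] = (System.nanoTime() - start) * 1e-9;
      }
      final double mean = java.util.Arrays.stream(seconds).average().orElse(0);
      final double variance = java.util.Arrays.stream(seconds)
          .map(s -> (s - mean) * (s - mean)).average().orElse(0);
      System.out.printf("%.6fs +- %.6fs%n", mean, Math.sqrt(variance));
    }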
    

Training Characteristics

Input Learning

In this run, we use a network to learn this target input, given its pre-evaluated output:

Code from TrainingTester.java:423 executed in 0.00 seconds:

    return Arrays.stream(input_target)
                 .flatMap(x -> Arrays.stream(x))
                 .map(x -> x.prettyPrint())
                 .reduce((a, b) -> a + "\n" + b)
                 .orElse("");

Returns:

    [ -0.824, 0.988, -0.972 ]
    [ -0.824, -0.972, 0.988 ]
    [ -0.824, 0.988, -0.972 ]
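
Input learning holds the network fixed and treats the input itself as the trainable quantity, descending on the error between the network's output and the pre-evaluated target. A minimal sketch of that idea for this layer, using the pairwise-product form inferred from the example pair earlier and plain gradient descent (the step size and iteration count are arbitrary; the actual test drives the library's trainers):

    // Fit x so that x[i]*x[j] (i != j) matches a target matrix, by gradient descent
    static double[] learnInput(final double[][] target, final double[] start,
                               final double learningRate, final int iterations) {
      final int n = start.length;
      final double[] x = start.clone();
      for (int iter = 0; iter < iterations; iter++) {
        final double[] grad = new double[n];
        for (int i = 0; i < n; i++) {
          for (int j = 0; j < n; j++) {
            if (i == j) continue;
            final double err = x[i] * x[j] - target[i][j];  // residual on output cell (i,j)
            grad[i] += 2 * err * x[j];  // d(err^2)/dx[i]
            grad[j] += 2 * err * x[i];  // d(err^2)/dx[j]
          }
        }
        for (int i = 0; i < n; i++) {
          x[i] -= learningRate * grad[i];  // descend on the squared error
        }
      }
      return x;
    }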