
BandLimitTest

Test the truncation feature that limits the image to N bands, discarding the last bands as needed.


Project maintained by SimiaCryptus. Java, CuDNN, and CUDA are trademarks of their respective owners; no endorsement is implied.
  1. Serialization
    1. Raw Json
  2. Example Input/Output Pair
  3. Batch Execution
  4. Differential Validation
    1. Feedback Validation
    2. Learning Validation
    3. Total Accuracy
    4. Frozen and Alive Status
  5. Performance

Target Description: Concatenates two or more inputs, assuming they have the same width and height, to produce an image with both inputs’ color bands. (e.g. Used in Inception modules in GoogLeNet.)

Report Description: Test the truncation feature that limits the image to N bands, discarding the last bands as needed.
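The behavior under test can be illustrated with a minimal CPU sketch of band concatenation plus truncation. This is illustrative only, not the actual cuDNN implementation; the class and method names below are hypothetical.

```java
// Minimal CPU sketch: concatenate the per-pixel band vectors of two images
// of equal width/height, then keep only the first maxBands bands.
public class BandConcatSketch {
    static double[][][] concat(double[][][] a, double[][][] b, int maxBands) {
        int h = a.length, w = a[0].length;
        int bandsA = a[0][0].length, bandsB = b[0][0].length;
        int outBands = Math.min(maxBands, bandsA + bandsB);
        double[][][] out = new double[h][w][outBands];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                for (int c = 0; c < outBands; c++) {
                    // Bands below bandsA come from the first input; the rest from the second.
                    out[y][x][c] = c < bandsA ? a[y][x][c] : b[y][x][c - bandsA];
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // The 2x2x2 inputs from the example pair below.
        double[][][] a = {{{1.14, -1.104}, {-1.032, -0.172}}, {{1.116, -0.084}, {0.448, -1.54}}};
        double[][][] b = {{{1.94, -1.832}, {1.572, 0.852}}, {{1.116, 0.124}, {1.06, 1.616}}};
        double[][][] out = concat(a, b, 3);
        // 2 + 2 = 4 source bands truncated to 3: both bands of a, plus the first band of b.
        System.out.println(out[0][0][0] + " " + out[0][0][1] + " " + out[0][0][2]);
        // prints 1.14 -1.104 1.94
    }
}
```

With `maxBands: 3` (as in the serialized layer below), the second input contributes only its first band, which is why the derivative with respect to the second input is 1.0 for band 0 and 0.0 for band 1.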

Serialization

This run demonstrates the layer’s JSON serialization and verifies deserialization integrity.

Raw Json

Code from SerializationTest.java:84 executed in 0.00 seconds:

    final JsonObject json = layer.getJson();
    final NNLayer echo = NNLayer.fromJson(json);
    if (echo == null) throw new AssertionError("Failed to deserialize");
    if (layer == echo) throw new AssertionError("Serialization did not copy");
    if (!layer.equals(echo)) throw new AssertionError("Serialization not equal");
    return new GsonBuilder().setPrettyPrinting().create().toJson(json);

Returns:

    {
      "class": "com.simiacryptus.mindseye.layers.cudnn.ImgConcatLayer",
      "id": "eb1f49b3-6a0d-4463-82c2-65f7e3742586",
      "isFrozen": false,
      "name": "ImgConcatLayer/eb1f49b3-6a0d-4463-82c2-65f7e3742586",
      "maxBands": 3,
      "precision": "Double"
    }

Wrote Model to ImgConcatLayer_BandLimitTest.json; 246 characters

Example Input/Output Pair

Display input/output pairs from random executions:

Code from ReferenceIO.java:69 executed in 0.00 seconds:

    final SimpleEval eval = SimpleEval.run(layer, inputPrototype);
    return String.format("--------------------\nInput: \n[%s]\n--------------------\nOutput: \n%s\n--------------------\nDerivative: \n%s",
                         Arrays.stream(inputPrototype).map(t -> t.prettyPrint()).reduce((a, b) -> a + ",\n" + b).get(),
                         eval.getOutput().prettyPrint(),
                         Arrays.stream(eval.getDerivative()).map(t -> t.prettyPrint()).reduce((a, b) -> a + ",\n" + b).get());

Returns:

    --------------------
    Input: 
    [[
    	[ [ 1.14, -1.104 ], [ -1.032, -0.172 ] ],
    	[ [ 1.116, -0.084 ], [ 0.448, -1.54 ] ]
    ],
    [
    	[ [ 1.94, -1.832 ], [ 1.572, 0.852 ] ],
    	[ [ 1.116, 0.124 ], [ 1.06, 1.616 ] ]
    ]]
    --------------------
    Output: 
    [
    	[ [ 1.14, -1.104, 1.94 ], [ -1.032, -0.172, 1.572 ] ],
    	[ [ 1.116, -0.084, 1.116 ], [ 0.448, -1.54, 1.06 ] ]
    ]
    --------------------
    Derivative: 
    [
    	[ [ 1.0, 1.0 ], [ 1.0, 1.0 ] ],
    	[ [ 1.0, 1.0 ], [ 1.0, 1.0 ] ]
    ],
    [
    	[ [ 1.0, 0.0 ], [ 1.0, 0.0 ] ],
    	[ [ 1.0, 0.0 ], [ 1.0, 0.0 ] ]
    ]


Batch Execution

Most layers, including this one, should produce the same results regardless of how items are split between batches. We verify this:

Code from BatchingTester.java:113 executed in 0.01 seconds:

    return test(reference, inputPrototype);

Returns:

    ToleranceStatistics{absoluteTol=0.0000e+00 +- 0.0000e+00 [0.0000e+00 - 0.0000e+00] (280#), relativeTol=0.0000e+00 +- 0.0000e+00 [0.0000e+00 - 0.0000e+00] (239#)}
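The batch-invariance property checked above can be sketched as follows. The class name and the stand-in `apply` function are hypothetical; the real tester evaluates the layer itself and compares within a tolerance, as the `ToleranceStatistics` line shows.

```java
// Sketch: a layer is batch-invariant if evaluating items one at a time
// matches evaluating them together as a single batch. A per-element
// square stands in for the layer's eval.
public class BatchInvarianceSketch {
    static double[] apply(double[] item) {
        double[] out = new double[item.length];
        for (int i = 0; i < item.length; i++) out[i] = item[i] * item[i];
        return out;
    }

    static double[][] applyBatch(double[][] batch) {
        double[][] out = new double[batch.length][];
        for (int i = 0; i < batch.length; i++) out[i] = apply(batch[i]);
        return out;
    }

    public static void main(String[] args) {
        double[][] batch = {{1.0, -2.0}, {0.5, 3.0}};
        double[][] whole = applyBatch(batch);
        for (int i = 0; i < batch.length; i++) {
            double[] single = apply(batch[i]);
            for (int j = 0; j < single.length; j++) {
                // Absolute-tolerance comparison, mirroring ToleranceStatistics.
                if (Math.abs(single[j] - whole[i][j]) > 1e-10) throw new AssertionError();
            }
        }
        System.out.println("batch-invariant");
    }
}
```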

Differential Validation

Code from SingleDerivativeTester.java:292 executed in 0.00 seconds:

    log.info(String.format("Inputs: %s", Arrays.stream(inputPrototype).map(t -> t.prettyPrint()).reduce((a, b) -> a + ",\n" + b).get()));
    log.info(String.format("Inputs Statistics: %s", Arrays.stream(inputPrototype).map(x -> new ScalarStatistics().add(x.getData()).toString()).reduce((a, b) -> a + ",\n" + b).get()));
    log.info(String.format("Output: %s", outputPrototype.prettyPrint()));
    log.info(String.format("Outputs Statistics: %s", new ScalarStatistics().add(outputPrototype.getData())));

Logging:

    Inputs: [
    	[ [ 1.964, 1.128 ], [ -1.132, -1.932 ] ],
    	[ [ -1.564, 0.212 ], [ 1.852, -0.448 ] ]
    ],
    [
    	[ [ -1.576, 1.928 ], [ 0.564, -0.436 ] ],
    	[ [ -0.2, -1.076 ], [ -0.868, -1.344 ] ]
    ]
    Inputs Statistics: {meanExponent=0.015599467253327712, negative=4, min=-0.448, max=-0.448, mean=0.010000000000000057, count=8, positive=4, stdDev=1.4258583379845278, zeros=0},
    {meanExponent=-0.09085123789238415, negative=6, min=-1.344, max=-1.344, mean=-0.376, count=8, positive=2, stdDev=1.0802592281485033, zeros=0}
    Output: [
    	[ [ 1.964, 1.128, -1.576 ], [ -1.132, -1.932, 0.564 ] ],
    	[ [ -1.564, 0.212, -0.2 ], [ 1.852, -0.448, -0.868 ] ]
    ]
    Outputs Statistics: {meanExponent=-0.0572349353330022, negative=7, min=-0.868, max=-0.868, mean=-0.16666666666666663, count=12, positive=5, stdDev=1.2756481576916976, zeros=0}
    

Feedback Validation

We validate the agreement between the implemented input derivatives and finite-difference estimations:

Code from SingleDerivativeTester.java:303 executed in 0.02 seconds:

    return testFeedback(statistics, component, inputPrototype, outputPrototype);

Logging:

    Feedback for input 0
    Inputs Values: [
    	[ [ 1.964, 1.128 ], [ -1.132, -1.932 ] ],
    	[ [ -1.564, 0.212 ], [ 1.852, -0.448 ] ]
    ]
    Value Statistics: {meanExponent=0.015599467253327712, negative=4, min=-0.448, max=-0.448, mean=0.010000000000000057, count=8, positive=4, stdDev=1.4258583379845278, zeros=0}
    Implemented Feedback: [ [ 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, ... ] ]
    Implemented Statistics: {meanExponent=0.0, negative=0, min=0.0, max=0.0, mean=0.08333333333333333, count=96, positive=8, stdDev=0.2763853991962833, zeros=88}
    Measured Feedback: [ [ 0.9999999999998899, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.9999999999998899, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.9999999999998899, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.9999999999998899, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.9999999999998899, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.9999999999998899, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9999999999998899, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9999999999998899, ... ] ]
    Measured Statistics: {meanExponent=-4.7830642341045674E-14, negative=0, min=0.0, max=0.0, mean=0.08333333333332416, count=96, positive=8, stdDev=0.2763853991962529, zeros=88}
    Feedback Error: [ [ -1.1013412404281553E-13, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, -1.1013412404281553E-13, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, -1.1013412404281553E-13, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, -1.1013412404281553E-13, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, -1.1013412404281553E-13, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, -1.1013412404281553E-13, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.1013412404281553E-13, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.1013412404281553E-13, ... ] ]
    Error Statistics: {meanExponent=-12.958078098036824, negative=8, min=0.0, max=0.0, mean=-9.177843670234628E-15, count=96, positive=0, stdDev=3.0439463838706555E-14, zeros=88}
    Feedback for input 1
    Inputs Values: [
    	[ [ -1.576, 1.928 ], [ 0.564, -0.436 ] ],
    	[ [ -0.2, -1.076 ], [ -0.868, -1.344 ] ]
    ]
    Value Statistics: {meanExponent=-0.09085123789238415, negative=6, min=-1.344, max=-1.344, mean=-0.376, count=8, positive=2, stdDev=1.0802592281485033, zeros=0}
    Implemented Feedback: [ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ] ]
    Implemented Statistics: {meanExponent=0.0, negative=0, min=0.0, max=0.0, mean=0.041666666666666664, count=96, positive=4, stdDev=0.19982631347136331, zeros=92}
    Measured Feedback: [ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ] ]
    Measured Statistics: {meanExponent=-4.7830642341045674E-14, negative=0, min=0.0, max=0.0, mean=0.04166666666666208, count=96, positive=4, stdDev=0.1998263134713413, zeros=92}
    Feedback Error: [ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ], [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... ] ]
    Error Statistics: {meanExponent=-12.958078098036825, negative=4, min=0.0, max=0.0, mean=-4.588921835117314E-15, count=96, positive=0, stdDev=2.2007695994873667E-14, zeros=92}
    

Returns:

    ToleranceStatistics{absoluteTol=6.8834e-15 +- 2.6659e-14 [0.0000e+00 - 1.1013e-13] (192#), relativeTol=5.5067e-14 +- 0.0000e+00 [5.5067e-14 - 5.5067e-14] (12#)}
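The check above can be sketched with a central-difference gradient estimator. The class and interface names are hypothetical; for a pass-through concatenation the analytic Jacobian entries are exactly 0 or 1, so the small errors reported above (~1e-13) come from finite-difference rounding, not the layer.

```java
// Sketch of a finite-difference derivative check: perturb each input
// coordinate by +/- eps and compare the central difference against the
// known analytic derivative.
public class FiniteDifferenceSketch {
    interface Fn { double apply(double[] x); }

    static double[] numericGradient(Fn f, double[] x, double eps) {
        double[] grad = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double saved = x[i];
            x[i] = saved + eps;
            double plus = f.apply(x);
            x[i] = saved - eps;
            double minus = f.apply(x);
            x[i] = saved;
            grad[i] = (plus - minus) / (2 * eps);
        }
        return grad;
    }

    public static void main(String[] args) {
        // f selects coordinate 0, like a concat output band that passes
        // through one input value: df/dx0 = 1, df/dx1 = 0 analytically.
        Fn f = x -> x[0];
        double[] grad = numericGradient(f, new double[]{1.5, -0.7}, 1e-6);
        if (Math.abs(grad[0] - 1.0) > 1e-8) throw new AssertionError();
        if (Math.abs(grad[1] - 0.0) > 1e-8) throw new AssertionError();
        System.out.println("gradients agree");
    }
}
```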

Learning Validation

We validate the agreement between the implemented derivatives of the internal weights and finite-difference estimations:

Code from SingleDerivativeTester.java:311 executed in 0.00 seconds:

    return testLearning(statistics, component, inputPrototype, outputPrototype);

Returns:

    ToleranceStatistics{absoluteTol=6.8834e-15 +- 2.6659e-14 [0.0000e+00 - 1.1013e-13] (192#), relativeTol=5.5067e-14 +- 0.0000e+00 [5.5067e-14 - 5.5067e-14] (12#)}

Total Accuracy

The overall agreement between the implemented derivative and the finite-difference estimations:

Code from SingleDerivativeTester.java:319 executed in 0.00 seconds:

    //log.info(String.format("Component: %s\nInputs: %s\noutput=%s", component, Arrays.toString(inputPrototype), outputPrototype));
    log.info(String.format("Finite-Difference Derivative Accuracy:"));
    log.info(String.format("absoluteTol: %s", statistics.absoluteTol));
    log.info(String.format("relativeTol: %s", statistics.relativeTol));

Logging:

    Finite-Difference Derivative Accuracy:
    absoluteTol: 6.8834e-15 +- 2.6659e-14 [0.0000e+00 - 1.1013e-13] (192#)
    relativeTol: 5.5067e-14 +- 0.0000e+00 [5.5067e-14 - 5.5067e-14] (12#)
    

Frozen and Alive Status

Code from SingleDerivativeTester.java:327 executed in 0.00 seconds:

    testFrozen(component, inputPrototype);
    testUnFrozen(component, inputPrototype);

Performance

Now we execute larger-scale runs to benchmark performance:

Code from PerformanceTester.java:183 executed in 0.00 seconds:

    test(component, inputPrototype);

Logging:

    100 batches
    Input Dimensions:
    

Returns:

    java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: com.simiacryptus.mindseye.lang.GpuError: returnCode = CUDNN_STATUS_BAD_PARAM
    	at com.simiacryptus.util.lang.TimedResult.time(TimedResult.java:61)
    	at com.simiacryptus.util.io.MarkdownNotebookOutput.lambda$code$2(MarkdownNotebookOutput.java:205)
    	at com.simiacryptus.util.test.SysOutInterceptor.withOutput(SysOutInterceptor.java:107)
    	at com.simiacryptus.util.io.MarkdownNotebookOutput.code(MarkdownNotebookOutput.java:203)
    	at com.simiacryptus.util.io.NotebookOutput.code(NotebookOutput.java:41)
    	at com.simiacryptus.mindseye.test.unit.PerformanceTester.test(PerformanceTester.java:183)
    	at com.simiacryptus.mindseye.test.unit.PerformanceTester.test(PerformanceTester.java:41)
    	at com.simiacryptus.mindseye.layers.cudnn.CudnnLayerTestBase.lambda$getPerformanceTester$1(CudnnLayerTestBase.java:60)
    	at com.simiacryptus.mindseye.test.unit.StandardLayerTests.lambda$run$7(StandardLayerTests.java:261)
    	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
    	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
    	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
    	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
    	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
    	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
    	at com.simiacryptus.mindseye.test.unit.StandardLayerTests.run(StandardLayerTests.java:260)
    	at com.simiacryptus.mindseye.test.NotebookReportBase.lambda$run$0(NotebookReportBase.java:105)
    	at com.simiacryptus.util.lang.TimedResult.time(TimedResult.java:76)
    	at com.simiacryptus.mindseye.test.NotebookReportBase.run(NotebookReportBase.java:103)
    	at com.simiacryptus.mindseye.layers.LayerTestBase.test(LayerTestBase.java:37)
    	at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    	at org.junit.runners.Suite.runChild(Suite.java:128)
    	at org.junit.runners.Suite.runChild(Suite.java:27)
    	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
    	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
    	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
    	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
    Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: com.simiacryptus.mindseye.lang.GpuError: returnCode = CUDNN_STATUS_BAD_PARAM
    	at com.simiacryptus.mindseye.layers.cudnn.GpuController.lambda$call$1(GpuController.java:88)
    	at com.simiacryptus.util.lang.StaticResourcePool.run(StaticResourcePool.java:95)
    	at com.simiacryptus.mindseye.layers.cudnn.GpuController.call(GpuController.java:84)
    	at com.simiacryptus.mindseye.test.SimpleEval.call(SimpleEval.java:78)
    	at com.simiacryptus.mindseye.test.SimpleEval.run(SimpleEval.java:57)
    	at com.simiacryptus.mindseye.test.unit.PerformanceTester.test(PerformanceTester.java:148)
    	at com.simiacryptus.mindseye.test.unit.PerformanceTester.lambda$test$6(PerformanceTester.java:184)
    	at com.simiacryptus.util.io.NotebookOutput.lambda$code$0(NotebookOutput.java:42)
    	at com.simiacryptus.util.io.MarkdownNotebookOutput.lambda$null$1(MarkdownNotebookOutput.java:205)
    	at com.simiacryptus.util.lang.TimedResult.time(TimedResult.java:59)
    	... 51 more
    Caused by: java.util.concurrent.ExecutionException: com.simiacryptus.mindseye.lang.GpuError: returnCode = CUDNN_STATUS_BAD_PARAM
    	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    	at com.simiacryptus.mindseye.layers.cudnn.GpuController.lambda$call$1(GpuController.java:86)
    	... 60 more
    Caused by: com.simiacryptus.mindseye.lang.GpuError: returnCode = CUDNN_STATUS_BAD_PARAM
    	at com.simiacryptus.mindseye.layers.cudnn.CuDNN.handle(CuDNN.java:781)
    	at com.simiacryptus.mindseye.layers.cudnn.CuDNN.newTensorDescriptor(CuDNN.java:1034)
    	at com.simiacryptus.mindseye.layers.cudnn.ImgConcatLayer.eval(ImgConcatLayer.java:98)
    	at com.simiacryptus.mindseye.test.SimpleEval.lambda$call$4(SimpleEval.java:79)
    	at com.simiacryptus.mindseye.layers.cudnn.GpuController.lambda$null$0(GpuController.java:86)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    
