RDM Responder Testing

OLA comes with a suite of tests for RDM responders that can be used to check how well an RDM device implements the E1.20 specification. Writing RDM Responder Tests explains how to add to or improve the tests.

Set Up the Test Rig

The following controller devices are supported:

Connect the device under test to the controller device and start olad. Patch the output port on the controller device to a universe (UNIVERSE_NUMBER). Then run ola_rdm_discover; the responder's UID should appear:

 $ ola_rdm_discover -u UNIVERSE_NUMBER
 00a1:00010003
 7a70:ffffff00
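Patching can also be done from the command line with ola_patch rather than the web UI. Below is a rough example; DEVICE_NUMBER and PORT_NUMBER are placeholders, and the exact flag names are from memory, so check ola_patch --help before relying on them:

 $ ola_patch --device DEVICE_NUMBER --port PORT_NUMBER --universe UNIVERSE_NUMBER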

Running the Tests

The tests are written in Python and run using ola_rdm_test.py. Below is the output from a typical test run:

 ./ola_rdm_test.py --universe 1  --pid_file ../../python/pids.config  00a1:00010003
 Starting tests, universe 3, UID 00a1:00010003
 SetManufacturerLabel: Passed
 SetSoftwareVersionLabel: Passed
 GetManufacturerLabel: Passed
 GetSoftwareVersionLabelWithData: Failed
 ...
 ------------- Warnings --------------
 ------------ By Category ------------
   Product Information:  7 /  7   100%
       RDM Information:  1 /  1   100%
    Core Functionality:  2 /  2   100%
      Error Conditions: 10 / 16   62%
          DMX512 Setup:  3 /  3   100%
 -------------------------------------
 29 / 30 tests run, 23 passed, 6 failed, 0 broken

Useful Options

ola_rdm_test.py has some options that can assist in debugging failures. For a full list of options, run it with -h.

-d, --debug
Show all debugging output, including actual & expected responses.
-t Test1,Test2, --tests=Test1,Test2
Run only a subset of the tests. Only the tests listed (and their dependencies) will be run.
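
For example, to re-run just the failing GetSoftwareVersionLabelWithData test from the sample run above with full debugging output, a command along these lines should work (the universe, pid_file path, and UID are copied from the earlier example):

 ./ola_rdm_test.py --universe 1 --pid_file ../../python/pids.config -d --tests=GetSoftwareVersionLabelWithData 00a1:00010003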

Information on Tests

Some tests have dependencies: other tests that must complete before the test can run. Dependencies can be used to check for supported parameters and other conditions that may affect responder behavior.

There are four result states for a test:

Passed
The responder replied with the expected result.
Failed
The responder failed to reply, or replied with an unexpected result.
Not Run
The test wasn't run because the responder doesn't support the required functionality.
Broken
An internal error occurred; this indicates a programming error or a problem with the test rig.
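
Purely as an illustration of how dependencies and result states fit together, here is a minimal Python sketch. The class and function names are invented for this sketch and are not the real framework's API (which is described on Writing RDM Responder Tests); it only shows how a dependency can gate a test and how the four result states arise.

 # Hypothetical sketch only -- names do not come from ola_rdm_test.py.
 PASSED, FAILED, NOT_RUN, BROKEN = 'Passed', 'Failed', 'Not Run', 'Broken'
 
 class SupportedParametersTest:
     """Dependency: records which optional PIDs the responder supports."""
     DEPS = []
 
     def run(self, responder, results):
         self.supported = responder.get('supported_parameters', [])
         return PASSED
 
 class DeviceLabelTest:
     """Only meaningful if DEVICE_LABEL is in the supported parameter list."""
     DEPS = [SupportedParametersTest]
 
     def run(self, responder, results):
         dep = results[SupportedParametersTest]
         if 'DEVICE_LABEL' not in dep.supported:
             return NOT_RUN  # responder doesn't support the functionality
         reply = responder.get('device_label')
         return PASSED if reply is not None else FAILED
 
 def run_tests(tests, responder):
     """Run tests in dependency order, mapping exceptions to Broken."""
     results, states = {}, {}
     for test_class in tests:
         # This sketch assumes the list is already dependency-ordered.
         test = test_class()
         try:
             states[test_class] = test.run(responder, results)
         except Exception:
             states[test_class] = BROKEN  # programming error or test rig problem
         results[test_class] = test
     return states
 
 if __name__ == '__main__':
     # Fake responder data standing in for real RDM replies.
     responder = {'supported_parameters': ['DEVICE_LABEL'],
                  'device_label': 'My Fixture'}
     for cls, state in run_tests(
             [SupportedParametersTest, DeviceLabelTest], responder).items():
         print('%s: %s' % (cls.__name__, state))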