Running the tests

This describes how to run the RDM Responder Tests from the command line. A similar guide exists for running the tests from the Web UI. Before starting, you may want to read the Responder Testing FAQ.

Setup the Test Rig

This assumes you've already got OLA running and have patched the RDM controller to a universe.
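
If the controller is not patched yet, the usual CLI steps are sketched below. The device, port, and universe numbers are only examples; run ola_dev_info first and substitute the values that match your controller (and see ola_patch --help for the exact flags on your version).

 $ ola_dev_info                                 # list the available devices and their ports
 $ ola_patch --device 2 --port 0 --universe 1   # patch the controller's output port (example numbers)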

Warning: The tests send SET messages to the broadcast UIDs. This means that they will change the start address, device label, etc. of all devices connected to the controller. Think twice about running this on your production lighting rig.
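
If a fixture's settings do get changed, they can usually be read back and restored over RDM once the tests finish. The commands below are a sketch: the universe, UID, and start address are placeholders, and you should confirm the exact ola_rdm_get / ola_rdm_set syntax with --help on your OLA version.

 $ ola_rdm_get --universe 1 --uid 00a1:00010003 dmx_start_address     # read the current start address
 $ ola_rdm_set --universe 1 --uid 00a1:00010003 dmx_start_address 1   # restore it to the value you want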


Discovering Responders

Run ola_rdm_discover from the command line; you should see the responder's UID appear:

 $ ola_rdm_discover -u UNIVERSE_NUMBER
 00a1:00010003
 7a70:ffffff00
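
If the responder doesn't appear, it can help to force a full discovery pass rather than an incremental one. Recent OLA builds expose a flag for this; the option name below is assumed, so check ola_rdm_discover --help on your version.

 $ ola_rdm_discover -u UNIVERSE_NUMBER --full   # force full discovery (flag assumed; see --help)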

Running the Tests

The tests are written in Python and run using rdm_responder_test.py. Below is the output from a typical test run:

 $ rdm_responder_test.py --universe 1  00a1:00010003
 Starting tests, universe 1, UID 00a1:00010003
 SetManufacturerLabel: Passed
 SetSoftwareVersionLabel: Passed
 GetManufacturerLabel: Passed
 GetSoftwareVersionLabelWithData: Failed
 ...
 ------------- Warnings --------------
 ------------ By Category ------------
   Product Information:  7 /  7   100%
       RDM Information:  1 /  1   100%
    Core Functionality:  2 /  2   100%
      Error Conditions: 10 / 16   62%
          DMX512 Setup:  3 /  3   100%
 -------------------------------------
 29 / 30 tests run, 23 passed, 6 failed, 0 broken

Be sure to read the Guide to interpreting the test output for an explanation of the logs.

Useful Options

rdm_responder_test.py has some options which can assist in debugging failures. For a full list of options, run it with --help. A combined example follows the list below.

-d, --debug
Show all debugging output, including actual & expected responses.
-l, --log
Log the output of the tests to a file. The UID and timestamp are appended to the filename.
-t Test1,Test2, --tests=Test1,Test2
Run only a subset of the tests; only the tests listed (and their dependencies) will be run.
-w BROADCAST_WRITE_DELAY, --broadcast-write-delay=BROADCAST_WRITE_DELAY
Some responders take time to write state changes to memory. When using broadcast SETs, the controller doesn't know how long to wait before sending the next message. This setting adds a delay (in ms) after sending broadcast SET commands.
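
Putting a few of these together: the run below logs to a file, shows debug output, limits the run to two of the tests from the sample output above, and adds a 100 ms broadcast write delay. The universe number, log path, delay, and UID are placeholders for your own setup.

 $ rdm_responder_test.py --universe 1 --debug --log /tmp/rdm_test \
       --tests GetManufacturerLabel,SetManufacturerLabel \
       --broadcast-write-delay 100 00a1:00010003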