RDM Timing Discussion
Discussion and questions relating to the timing requirements of RDM.
August 18th, 2011 | #1 |
Junior Member
Join Date: Aug 2011
Posts: 8
Required handling of sequences of RDM broadcast cmds
Hello,
Looking at Table 3-2, line 6, a controller may continuously send a sequence of broadcast RDM SET commands with a minimum spacing of 176 us. In the broadcast situation the ACK_TIMER mechanism is not available to a responder to prevent potential overrun, since no response is part of the broadcast mechanism. What would you suggest a responder do when such an overrun actually happens? Could a controller be made aware of this situation so that different handling is possible?

Greetings, Marc
August 18th, 2011 | #2 |
Task Group Member
Join Date: Aug 2008
Posts: 379
The best solution is to make sure your responder can always handle back-to-back broadcast requests. That means either fully processing the request within 176 us, or saving the request for processing as a background (low-priority) task.

Take the common case of an EEPROM write, which can take several ms. I keep a copy of the DMX address in both RAM and EEPROM. When the Set DMX Address broadcast request arrives, I immediately update the RAM copy and set a flag that the EEPROM needs to be updated. Then the lazy loop (the low-priority task that handles EEPROM writes, display updates, housekeeping, etc.) writes it to EEPROM whenever it gets around to it. If a GET DMX Address comes in before the EEPROM write is finished, it sends the address from RAM, because the EEPROM may be busy or out of date. There is a slight risk that the address change could be lost if the system loses power between the request and the end of the EEPROM write, but the window is very small (a few ms) and the impact if it does occur is minimal.

If you can't do this, then you may have to drop the request. RDM broadcasts are not guaranteed to be reliable, so you are allowed to drop the packet. However, this should be avoided if at all possible. That's really all that a responder can do.

If you're building a controller, it's a good idea to wait 10 to 20 ms after sending a broadcast request before you send another broadcast packet. This will allow any poorly implemented responders time to finish processing the previous broadcast.
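A minimal sketch of this shadow-copy pattern in C is shown below. Everything here (eeprom_write_u16, lazy_loop_poll, the flag and variable names) is a hypothetical placeholder rather than code from any particular RDM stack; it just illustrates the ordering described above.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical EEPROM driver call; the real one depends on the MCU. */
    extern void eeprom_write_u16(uint16_t ee_addr, uint16_t value);

    #define EE_ADDR_DMX_START 0x0000u

    static volatile uint16_t dmx_address_ram;  /* authoritative copy, used and answered from everywhere */
    static volatile bool     eeprom_dirty;     /* set in the RDM handler, cleared in the lazy loop */

    /* Called from the RDM receive path; must return well within 176 us. */
    void handle_set_dmx_start_address(uint16_t new_address)
    {
        dmx_address_ram = new_address;  /* takes effect immediately */
        eeprom_dirty    = true;         /* defer the slow EEPROM write */
    }

    /* GET DMX_START_ADDRESS always answers from RAM, so a pending EEPROM
     * write never makes the response stale or slow. */
    uint16_t get_dmx_start_address(void)
    {
        return dmx_address_ram;
    }

    /* Low-priority "lazy loop": display updates, housekeeping, EEPROM writes. */
    void lazy_loop_poll(void)
    {
        if (eeprom_dirty) {
            eeprom_dirty = false;                /* clear first: a SET arriving now just re-arms it */
            uint16_t snapshot = dmx_address_ram;
            eeprom_write_u16(EE_ADDR_DMX_START, snapshot);  /* may take several ms */
        }
    }

The one subtle point is clearing the dirty flag before taking the snapshot: a SET that arrives in between simply re-arms the flag, so its value still reaches EEPROM on the next pass.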
August 18th, 2011 | #3 |
Task Group Member
Join Date: May 2010
Location: San Francisco
Posts: 57
This came up at the most recent plugfest. The responder tests now have an additional flag --broadcast_write_delay which you can use to adjust the delay between sending a broadcast set and the next RDM request.
August 19th, 2011 | #4 |
Junior Member
Join Date: Aug 2011
Posts: 8
Hi Eric,
Separate from EEPROM writes, which I agree can be handled as you describe, there are other reasons a responder may need more time, e.g. when it is a proxy or bridge/gateway device to wherever the data should be going. Such a device can indeed store some or even many SET requests, but at some point this can lead to an overrun in a practical device; that is the case I meant to refer to as well.

To the standard-defining folks: with regard to the 10 to 20 ms back-off that Eric suggests for a controller to handle the overrun issue in practice, are there plans to integrate a suggested figure into the standard, or some way of informing a controller about responder capabilities on this point? Leaving this open and up to implementation can (and probably will) lead to incompatibilities between equipment that could largely be prevented by some agreement and future definition on this point.

A related aspect is that a real-life figure of 5 to 20 ms is often sufficient to handle these situations, while the ACK_TIMER has a granularity of 100 ms. These large time chunks lead to extraordinarily long delays that could be reduced a lot by allowing, in addition to the current definition, some finer-grained timing.

Very interested to hear your opinions!

Greetings, Marc
August 19th, 2011 | #5 |
Task Group Member
Join Date: Aug 2008
Posts: 379
Fortunately, assuming a reasonable buffer size in the proxy, these conditions are unlikely to occur in the real world. The fraction of broadcast packets in most RDM systems is relatively small, and it's quite rare to see more than a handful of broadcast requests back-to-back. I mostly see broadcast used for:

DISCOVER_UNIQUE_BRANCH
UNMUTE
IDENTIFY OFF

The first two are only used for discovery, and since proxies handle discovery on behalf of their proxied devices, these can be handled immediately in the proxy and don't need to be queued. You may see a broadcast IDENTIFY OFF sent 3 or 4 times back-to-back by a controller (to mitigate lost or corrupt packets), but you won't see hundreds at once.

Anyone who's designing a controller that makes heavy use of broadcast packets will need to consider the real-world behavior of proxies. Fortunately, other than test equipment, this kind of behavior is very rare.
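A minimal sketch of the controller-side pacing suggested earlier in the thread (waiting 10 to 20 ms after a broadcast before the next packet), assuming a raw E1.20 frame and a POSIX host; rdm_send_packet is a hypothetical transport hook, not a real API:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    /* Hypothetical transport hook; the real send call depends on the RDM stack in use. */
    extern void rdm_send_packet(const uint8_t *frame, size_t len);

    /* In a raw E1.20 frame the destination UID occupies slots 3-8 (2-byte
     * manufacturer ID, 4-byte device ID). Broadcast and vendorcast destinations
     * both have the device ID portion set to all ones. */
    static bool is_broadcast_frame(const uint8_t *frame)
    {
        static const uint8_t all_ones[4] = { 0xFF, 0xFF, 0xFF, 0xFF };
        return memcmp(&frame[5], all_ones, sizeof all_ones) == 0;
    }

    void controller_send(const uint8_t *frame, size_t len)
    {
        rdm_send_packet(frame, len);

        if (is_broadcast_frame(frame)) {
            /* Give slow or queue-limited responders and proxies time to digest
             * the broadcast before the next packet (the 10 to 20 ms suggested above). */
            struct timespec pause = { .tv_sec = 0, .tv_nsec = 20L * 1000L * 1000L };
            nanosleep(&pause, NULL);
        }
    }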
August 19th, 2011 | #6 |
Administrator
As Eric said, NACK:NR_PROXY_BUFFER_FULL was specifically intended to give Gateways/Bridges the ability to tell the controller that they already have too many pending commands to handle any more.
__________________
Scott M. Blair RDM Protocol Forums Admin |
August 21st, 2011 | #7 |
Junior Member
Join Date: Aug 2011
Posts: 8
Hi Scott/Eric,
Would this be NR_BUFFER_FULL (0x0007), or has a new NR_PROXY_BUFFER_FULL been added after E1.20?

Greetings, Marc
August 21st, 2011 | #8 |
Task Group Member
Join Date: Aug 2008
Posts: 379
The NACK Reason Code for Proxy Buffer Full was added in the 2010 version of RDM.

The list of changes between E1.20-2006 and E1.20-2010 is available here: http://tsp.plasa.org/tsp/documents/d...006_Errata.pdf
August 23rd, 2011 | #9 |
Junior Member
Join Date: Aug 2011
Posts: 8
Hi Scott/Eric,
What then is the exact difference between NR_BUFFER_FULL, which states "buffers or queue full", and NR_PROXY_BUFFER_FULL? Is the difference only that the responder's own queue would still be available (since no NR_BUFFER_FULL was indicated) and only the proxy function of that responder is blocked, as signalled by NR_PROXY_BUFFER_FULL? Could an implementation then also just use NR_BUFFER_FULL for both reasons OR'd together?

Greetings, Marc
August 23rd, 2011 | #10 |
Task Group Member
Join Date: Aug 2008
Posts: 379
NR_BUFFER_FULL is for when the target responder has no buffer space available.

NR_PROXY_BUFFER_FULL is for when the proxy, or some other in-line device between the controller and the target responder, is out of buffer space.

So, in a system consisting of:

Console -----> RDM to RDM Proxy ------> Moving Light

If the console is sending a packet to the moving light and the moving light can't handle the message, the moving light would send NR_BUFFER_FULL. If the console is sending a packet to the moving light and the proxy can't handle the message, the proxy would send NR_PROXY_BUFFER_FULL.