Old February 19th, 2015   #4

One slight correction to my earlier post: I wrote:

Originally Posted by ericthegeek View Post
For inline devices it can be a bit harder to determine what value to use. You'll want to use a value somewhere between 2.9ms (DUB response delay + Max packet time) and 5.8ms. I think I use 5.6ms in my implementations.
I forgot about the section of the E1.20-2010 standard that says:
"the in-line device shall return to forward data flow no sooner than Table 3-2 Line 2 Minimum Time minus 200μs from Table 3-2 Note 3. The in-line device shall be capable of returning to forward data flow no later than Table 3-2 Line 2 Minimum Time."

Based on this, during the discovery response period you have to turn around between 5.6ms and 5.8ms after the end of the request, not between 2.9ms and 5.8ms.
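To make the window concrete, here is a minimal C sketch of the turnaround check. The constant and function names are my own, not from the standard or any library; it only encodes the 5.6ms-to-5.8ms window described above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Timing window from the discussion above, in microseconds: during the
   DUB response period an in-line device must return to forward data flow
   no sooner than 5.6ms and no later than 5.8ms after the request ends. */
#define DUB_TURNAROUND_MIN_US 5600u
#define DUB_TURNAROUND_MAX_US 5800u

/* Returns true if 'elapsed_us' since the end of the DUB request falls
   inside the window in which the device may switch back to forward flow. */
static bool dub_turnaround_allowed(uint32_t elapsed_us)
{
    return elapsed_us >= DUB_TURNAROUND_MIN_US
        && elapsed_us <= DUB_TURNAROUND_MAX_US;
}
```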

Originally Posted by dannito View Post
For instance, what about the DISC_UN_MUTE message, which is supposed to be sent by the controller prior to DISC_UNIQUE_BRANCH in order to unmute all muted devices? Should I add it to the port-turning algorithm during discovery? Same goes for DISC_MUTE?
As a transparent in-line device, there are two kinds of turnarounds you have to handle. The two differ in the format and timing of the expected response.

The first is when you see a Discover Unique Branch request from the controller. When you see this, you have to switch all ports into receive mode and wait 5.6 to 5.8ms for a discovery response. When you see activity on one of the downstream ports, you switch to reverse data flow for the remainder of the response period. When you see activity on the upstream port, you switch to forward data flow. The response does not have a break. This kind of turnaround is only used for DUB requests (not for Mute and Unmute).
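The direction decision during the DUB response window can be sketched like this in C. The enum and function names are hypothetical; this only illustrates the "first port with activity wins" logic described above, not a complete splitter.

```c
#include <stdbool.h>

/* Direction of data flow through the splitter. */
typedef enum { FLOW_FORWARD, FLOW_REVERSE } flow_t;

/* During the DUB response window, the first port that shows activity
   decides the direction for the rest of the response period:
   downstream activity means a responder is answering (reverse flow),
   upstream activity means the controller is talking again (forward). */
static flow_t dub_select_flow(bool upstream_active, bool downstream_active)
{
    if (downstream_active)
        return FLOW_REVERSE;
    if (upstream_active)
        return FLOW_FORWARD;
    return FLOW_FORWARD;   /* no activity yet: stay in forward flow */
}
```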

The other kind of turnaround is used for all other requests, including Mute, Unmute, and any Gets or Sets. In this case, the response has a break before it, and you're allowed to shorten the break by up to 22μs.

Originally Posted by dannito View Post
And what about Missing Response, what will happen then?
Responders must respond to a request within 2ms (Table 3-4 Line 1), and the system can have up to 704μs of delay (Table 3-2 Note 2).

Controllers must wait at least 3ms before deciding that a response has been lost (Table 3-2 Line 5).

This means that after the end of the request, you should wait between 2.7 and 3.0ms for the response to start. If it does not, you can assume that the response is lost and return to forward data flow.
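A minimal sketch of that lost-response decision, assuming the 2.7ms/3.0ms bounds derived above (the names are mine, not from the standard):

```c
#include <stdbool.h>
#include <stdint.h>

/* Bounds in microseconds: wait at least 2.7ms for a response to start
   (2ms responder delay + up to ~0.7ms of system delay), and give up no
   later than the controller's 3.0ms lost-response limit. */
#define RESPONSE_WAIT_MIN_US 2700u
#define RESPONSE_WAIT_MAX_US 3000u

/* 'elapsed_us' is the time since the end of the request; 'activity' is
   true once the start of a response has been seen on a downstream port.
   Returns true when the splitter should declare the response lost and
   return to forward data flow. */
static bool response_lost(uint32_t elapsed_us, bool activity)
{
    return !activity && elapsed_us >= RESPONSE_WAIT_MAX_US;
}
```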

Originally Posted by dannito View Post
Is this the reason the device should listen to the Responder port too? If it is pulled LOW, then the repeater should go to normal data flow(?)

Originally Posted by dannito View Post
If I go with routing the data from the active port to all the ports, I think I need to change the concept a little, as I am trying to build an opto-isolated repeater. Maybe I will stick to sending data only to the responder port for now, just to run and test it. Later I might change it to all ports.
It's easier and fully functional to do it the way that you describe. Many other splitters behave this way.

Originally Posted by dannito View Post
Thank you for the encouraging words, but I am really a RDM noob!
We all have to start somewhere. You've actually picked a challenging place to start. I've built a controller, a responder, a sniffer, and a splitter, and it's my opinion that an intelligent, protocol-aware splitter is the hardest part of RDM to get right. It's easy to get mostly right, but the timing has to be exact, and there are lots of corner cases to deal with (for example, how do you deal with a partial response, or a corrupt checksum?). You'll want to do lots of interoperability testing, and consider attending one of the periodic RDM plugfests; they're a great way to test a product like this.
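On the corrupt-checksum corner case: an RDM packet ends with a 16-bit checksum that is the unsigned sum of all preceding bytes (start code included), sent high byte first. A sketch of the validation a protocol-aware splitter might apply before forwarding a response (the function name is mine; real code must also bound the length against the message-length slot):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Validate the trailing 16-bit RDM checksum: the unsigned sum of every
   byte before the checksum must equal the last two bytes (high, low). */
static bool rdm_checksum_ok(const uint8_t *pkt, size_t len)
{
    if (len < 3)            /* need at least one byte plus the checksum */
        return false;

    uint16_t sum = 0;
    for (size_t i = 0; i < len - 2; i++)
        sum += pkt[i];

    uint16_t expected = (uint16_t)((pkt[len - 2] << 8) | pkt[len - 1]);
    return sum == expected;
}
```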

Last edited by ericthegeek; February 19th, 2015 at 12:07 AM. Reason: Clarification