E1.20 RDM (Remote Device Management) Protocol Forums  

April 28th, 2010   #1
sjackman
Task Group Member
 
Join Date: Sep 2006
Posts: 26
In-line device turnaround time

Hi,

When an in-line device receives a request packet, it must switch to receiving on its command ports within 132 μs. It then listens on the command ports, and if it sees a falling edge, it must switch to transmitting on the responder port within some period of time, t. What is the maximum value of t between the falling edge on the command port and transmitting on the responder port?

If the response is a non-discovery packet with a break, the device may shorten the break by no more than 22 μs, so that is an upper limit on this time. For a discovery response packet, which has no break, any delay shortens the actual bit times of the first byte of the discovery response.

The first bytes of the discovery response are the seven 0xfe bytes of the preamble. The bit pattern on the line for a preamble frame is 00111111111: the first 0 is the start bit and the last two 1s are the stop bits, for a total of 11 bits. So the first low period (the two zeros) should be 8 μs. By how much can this be shortened? The most relevant timing I can find is 4.2.3 Bit Distortion, which states that the total bit time must not change by more than 75 ns (1.875%). That seems like a very tight requirement on the turnaround time t, one that likely can't be met by firmware polling a PIO input. If the first low period is shortened too much (so that the first zero bit is lost), the receiver (that is, the controller) will see a 0xff byte rather than 0xfe, which may cause the controller to drop the packet.
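Just to make that framing concrete, here's a trivial check in plain C (nothing RDM-specific) that serializes 0xfe as one 8N2 frame, LSB first, and prints the resulting line pattern:

Code:
#include <stdio.h>

int main(void)
{
    unsigned char preamble = 0xfe;

    putchar('0');                              /* start bit (low) */
    for (int i = 0; i < 8; i++)
        putchar('0' + ((preamble >> i) & 1));  /* data bits, LSB first */
    puts("11");                                /* two stop bits (high) */

    /* Prints 00111111111. The leading 00 is the 8 us low period:
       two bits at 4 us each at 250 kbit/s. */
    return 0;
}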

Alternatively, the in-line device may drop an entire preamble byte (section 7.5). That requires that the in-line device turn around no sooner than 8 μs and no later than 88 μs after the falling edge.

I see two possible solutions.

1. All devices should ignore preamble bytes of 0xff as well as 0xfe. This would allow the first 8 μs low period of the discovery response to be shortened by an arbitrary amount. (A controller-side sketch follows this list.)

2. Specify the timing requirement between receiving a falling edge on the command ports and transmitting on the responder port. This timing requirement would be:

- less than 75 ns, or
- between 8 μs and 22 μs;

the range between 75 ns and 8 μs would be disallowed.

The 'less than 75 ns' option allows the turnaround to be implemented in hardware and permits in-line devices that do not drop a preamble byte. The 'between 8 μs and 22 μs' option allows a software solution that does drop a preamble byte. Disallowing the range between 75 ns and 8 μs prevents distorting the bit times of the first preamble byte.
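To illustrate option 1, here is a minimal sketch of the controller-side preamble scan; the function name and buffer interface are hypothetical, not from the standard:

Code:
#include <stddef.h>
#include <stdint.h>

#define PREAMBLE_SEPARATOR 0xaa   /* byte that ends the discovery-response preamble */

/* Scan a received discovery response for the preamble separator,
 * accepting 0xff (a 0xfe whose first low period was eaten by an
 * in-line device's turnaround) as well as 0xfe. Returns the index
 * of the first EUID byte, or -1 if the header is corrupt. */
int find_euid_start(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len && i < 8; i++) {
        if (buf[i] == PREAMBLE_SEPARATOR)
            return (int)(i + 1);
        if (buf[i] != 0xfe && buf[i] != 0xff)
            return -1;            /* neither preamble nor separator */
    }
    return -1;
}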

Cheers,
Shaun

4.2.1 Port Turnaround
After receiving an RDM request packet, the in-line device shall switch to receiving data at its
Command Ports, within 132μs of the end of the RDM packet.

After receiving an RDM request packet, the first port that is pulled to a low state for the start of a
BREAK becomes the active port. Note that this port may be the responder port, in which case the
in-line device shall return to forward data flow. Otherwise, data from the active port shall drive the
responder port and may drive all other command ports on the in-line device.
April 28th, 2010   #2
Gerry
Junior Member
 
Join Date: Jan 2010
Location: UK
Posts: 8

Shaun,

On my own in-line device (in development), turning the bus around is just raising a single port pin high on the processor, and it takes no time worth worrying about. Once you've received the 1st preamble byte, you turn the port around and Tx the byte to the controller. You are introducing a 44 μs delay out of an allowed 88 μs.
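In code it's nothing more than something like this (a sketch only; RESPONDER_PORT and DE_PIN stand in for whatever GPIO register and bit the particular processor provides):

Code:
#include <stdint.h>

/* Hypothetical memory-mapped GPIO register; substitute the real one. */
#define RESPONDER_PORT (*(volatile uint8_t *)0x4000A000u)
#define DE_PIN         (1u << 2)  /* driver enable of the responder transceiver */

/* Called once the 1st preamble byte has been received on the command
 * port: a single pin write turns the bus around, after which the byte
 * is retransmitted towards the controller. */
static inline void turn_bus_around(void)
{
    RESPONDER_PORT |= DE_PIN;
}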

The other option is to drop the 1st preamble byte - that's also allowed, and it gives you plenty of time to turn the bus around.

I would not consider shortening bit times under any circumstances.
April 28th, 2010   #3
sjackman
Task Group Member
 
Join Date: Sep 2006
Posts: 26

Hi Gerry,

There are two ways that an in-line device could be designed. For simplicity, let's consider a repeater that has just one responder port and one command port.

1. A microcontroller sits between the two ports. Each port is connected to a UART of the microcontroller. There is no direct connection between the two ports. The microcontroller receives on one port and transmits to the other.

2. A transceiver is connected to each port, and the two transceivers are connected to each other with nearly nothing in between, except perhaps a little logic. A microcontroller listens on each port to control the transmit/receive direction of the two transceivers, but doesn't transmit bits itself.

I've worked on devices of both types, and each has its advantages. The latter maintains the exact waveform of both DMX and RDM from the controller to devices on the command ports, and vice versa.

To borrow some terminology from network switching (which is really what we're doing here), method #1 is a store-and-forward switch (each byte is stored and retransmitted, introducing a delay of 44 μs) and #2 is a switching fabric.
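For concreteness, the forwarding core of method #1 looks roughly like this; the uart_* calls are hypothetical placeholders for a microcontroller's UART driver:

Code:
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical UART driver for the two ports. */
enum { PORT_RESPONDER = 0, PORT_COMMAND = 1 };
bool    uart_rx_ready(int port);
uint8_t uart_read(int port);
void    uart_write(int port, uint8_t byte);

/* Store-and-forward core: each byte is fully received (one 11-bit
 * frame, 44 us at 250 kbit/s) before it is retransmitted, which is
 * where the per-device delay comes from. Direction control and break
 * handling are omitted. */
void forward_loop(void)
{
    for (;;) {
        if (uart_rx_ready(PORT_COMMAND))
            uart_write(PORT_RESPONDER, uart_read(PORT_COMMAND));
        if (uart_rx_ready(PORT_RESPONDER))
            uart_write(PORT_COMMAND, uart_read(PORT_RESPONDER));
    }
}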

You're correct that a store-and-forward implementation has no timing issue between the transceiver turnaround and the microcontroller transmitting the first byte on the responder port. A switching-fabric implementation, however, must consider the turnaround delay.

Cheers,
Shaun
April 28th, 2010   #4
ericthegeek
Task Group Member
 
Join Date: Aug 2008
Posts: 380

> What is the maximum time of t between the falling
> edge on the command port and transmitting on the
> responder port?

Section 4.2.2 permits each in-line device to delay the data by 88 μs.

> If it's a non-discovery packet with a break, the device
> must shorten the break by no more than 22 μs, so that's
> an upper limit on this time.

An in-line device can receive a byte, hold on to it for 88 μs, then send it out. It can also shorten the break by 22 μs. The two are separate, and an in-line device could conceivably do both to some degree.

> 1. All devices should ignore preamble bytes of
> 0xff as well as 0xfe. This would allow the
> first 8 μs low period of the discovery response
> to be shortened by an arbitrary amount.

It is good practice for a controller to ignore corrupt preamble bytes (as you propose). But in order for an in-line device to comply with the standard, it must shorten the preamble by a very small amount of time, or by an entire byte. The requirements in your proposal #2 are all included in the document, just not in a single list.

I worked on an inline device that accidentally corrupted the first byte of a discovery response. It confused many controllers.

April 28th, 2010   #5
sjackman
Task Group Member
 
Join Date: Sep 2006
Posts: 26

Quote:
Originally Posted by ericthegeek
An in-line device can receive a byte, hold on to it for 88 μs, then send it out. It can also shorten the break by 22 μs. The two are separate, and an in-line device could conceivably do both to some degree.

If the in-line device uses a switching fabric rather than a store-and-forward implementation, it doesn't have the ability to 'hold on to it'. See my response to Gerry above.

Quote:
Originally Posted by ericthegeek
It is good practice for a controller to ignore corrupt preamble bytes (as you propose).
Could this be proposed as an amendment to the standard? If not a SHALL directive, even a SHOULD directive would be better.

Quote:
Originally Posted by ericthegeek
It is good practice for a controller to ignore corrupt preamble bytes (as you propose). But in order for an in-line device to comply with the standard, it must shorten the preamble by a very small amount of time, or by an entire byte. The requirements in your proposal #2 are all included in the document, just not in a single list.

It's this 'very small amount of time' that I feel needs clarification. My best reading of the spec indicates that the first bit may be shortened by no more than 75 ns (section 4.2.3, Bit Distortion). Do you agree?

Cheers,
Shaun
April 28th, 2010   #6
Gerry
Junior Member
 
Join Date: Jan 2010
Location: UK
Posts: 8

Quote:
Originally Posted by sjackman
It's this 'very small amount of time' that I feel needs clarification. My best reading of the spec indicates that the first bit may be shortened by no more than 75 ns (section 4.2.3, Bit Distortion). Do you agree?

Cheers,
Shaun
Hi Shaun,

The 75 ns is for non-cumulative bit distortion of the data. Allowing a shortening of 75 ns at each in-line device could result in a total shortening of 300 ns (with, say, four in-line devices in series). That's a 7.5% error in the 4 μs bit timing.

For the specific instance you mentioned, a processor controlling the switch matrix, I would drop the 1st preamble byte. This gives the processor time to set up the routing and get the next byte out on time.
April 29th, 2010   #7
sjackman
Task Group Member
 
Join Date: Sep 2006
Posts: 26

If the turnaround logic is implemented in software, I agree that the implementation should drop the first preamble byte. If the turnaround logic is implemented in hardware (logic, PLD or FPGA), the first bit can be shortened by a very small amount, and it's not necessary to drop the first preamble byte. It would be helpful to define that very small amount in the spec.

It should be possible to shorten one bit by up to 50% and the UART should still be able to recover it; 7.5% shouldn't be any problem. Most UARTs sample each bit 16 times. If the bit is shortened by 7.5%, the UART would see 15 low samples and one high sample, well within tolerance.
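A back-of-the-envelope check of that arithmetic, in plain C:

Code:
#include <stdio.h>

int main(void)
{
    const int    samples_per_bit = 16;    /* typical UART oversampling */
    const double shortening      = 0.075; /* start bit shortened by 7.5% */

    int lost = (int)(samples_per_bit * shortening + 0.5);  /* ~1 sample */
    printf("low samples: %d, high samples: %d\n",
           samples_per_bit - lost, lost);

    /* Prints "low samples: 15, high samples: 1". A UART that samples
       near the middle of the bit never sees the missing edge at all. */
    return 0;
}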

Cheers,
Shaun
April 29th, 2010   #8
ericthegeek
Task Group Member
 
Join Date: Aug 2008
Posts: 380

> Most UARTs sample each bit 16 times. If the bit is
> shortened by 7.5%, the UART would see 15 low
> samples and one high sample, well within tolerance.

Most *decent* UARTs use 16x sampling. Many of the 8051-derived UARTs are not nearly so sophisticated and only sample once per bit.

In the real world, you can probably drop a hundred nanoseconds or so off the start bit and be OK. If your switching logic drops this much, you may not be able to use slew-rate-limited drivers, since they can introduce further asymmetry into the RS-485 line. I'd also suggest using a crystal oscillator rather than an internal oscillator or RC, since these can further cut into your timing margins.

Obviously it's nice to hew as close to the standard as you can, but sometimes you have to make reasonable compromises based on real-world behavior.
October 1st, 2014   #9
sergeychk
Junior Member
 
Join Date: Apr 2014
Posts: 9

Some Ethernet devices, and the LumenRadio wireless devices that carry RDM, introduce more than 5 milliseconds of delay.

How will the RDM protocol work in this case, if the maximum delay in the RDM standard is 2.8 ms?
What is the maximum timeout used by standard RDM controller software?
October 1st, 2014   #10
ericthegeek
Task Group Member
 
Join Date: Aug 2008
Posts: 380

Wireless Devices typically behave as an RDM Proxy. They receive the controller's request, respond with an ACK_TIMER, and then pass the request along to the other end of the wireless link.

The Controller then sends GET QUEUED_MESSAGE requests to fulfill the ACK_TIMER.

(This is true for all of the wireless RDM systems that I have worked with)
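In controller terms the flow is roughly the sketch below; rdm_get(), the response struct, and the enum values are hypothetical stand-ins for a real RDM stack (only the QUEUED_MESSAGE PID value, 0x0020, comes from E1.20):

Code:
#include <stdint.h>

/* Hypothetical RDM stack. */
typedef struct {
    uint8_t  response_type;  /* ACK, ACK_TIMER, NACK, ... */
    uint32_t est_wait_ms;    /* decoded from the ACK_TIMER payload */
} rdm_response;

rdm_response rdm_get(uint16_t pid);
void         sleep_ms(uint32_t ms);

enum { RESP_ACK, RESP_ACK_TIMER, RESP_NACK };
#define PID_QUEUED_MESSAGE 0x0020  /* per E1.20 */

/* Send a GET, then poll QUEUED_MESSAGE until the proxied device's
 * answer finally comes back; each further ACK_TIMER just extends
 * the wait. */
rdm_response get_via_proxy(uint16_t pid)
{
    rdm_response r = rdm_get(pid);
    while (r.response_type == RESP_ACK_TIMER) {
        sleep_ms(r.est_wait_ms);
        r = rdm_get(PID_QUEUED_MESSAGE);
    }
    return r;
}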
October 2nd, 2014   #11
sergeychk
Junior Member
 
Join Date: Apr 2014
Posts: 9

If the RDM controller sends a request to the proxy, the proxy answers with an ACK_TIMER and forwards the query to the wireless RDM device, but the wireless RDM responder itself also answers with an ACK_TIMER, what should the proxy do in this situation?
Is it possible to send an ACK_TIMER answer twice, to tell the controller that the device is not ready yet? Is that a standard way of using RDM?
Is it possible to use VDSL2 modems to extend an Ethernet line and send RDM packets inside this network? The packet delay may be around 4 or more milliseconds.
As an alternative, is it possible to use Ethernet to control RDM devices? As I understand it, this works if I use a timeout of 5 or more milliseconds. For example, I would send DMX512, then an RDM query, then send DMX512 packets again while waiting for the RDM answer with a big timeout; once I receive the RDM answer or the timeout expires, I send the next RDM query. Is it normal practice to build an RDM controller this way? Can I consider it a standard use of RDM control? Do other Ethernet controller software developers use a big timeout like this?
October 2nd, 2014   #12
sblair
Administrator
 
Join Date: Feb 2006
Posts: 438

Using arbitrarily large numbers here just for an example: if a controller sends to a proxy, the proxy may reply with an ACK_TIMER of 1 second. The proxy then queries the device, and the device gives an ACK_TIMER of 3 seconds. If the controller comes back after 1 second asking again, the proxy can simply give another ACK_TIMER with the new estimated time of 3 seconds.

Yes, the proxy can reply multiple times with ACK_TIMER if needed.
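Sketched from the proxy's side (hypothetical types and helpers, with the timer simplified to whole seconds):

Code:
#include <stdint.h>

typedef struct {
    int      pending;      /* downstream request still outstanding? */
    uint32_t est_done_s;   /* absolute time (s) the answer is expected */
} proxied_slot;

uint32_t now_s(void);                   /* hypothetical clock */
void send_ack_timer(uint32_t seconds);  /* hypothetical TX helpers */
void send_stored_response(const proxied_slot *slot);

void on_controller_query(proxied_slot *slot)
{
    if (slot->pending) {
        /* Still waiting on the device: reply with another ACK_TIMER
           carrying the new estimate (the 3 seconds in the example). */
        uint32_t t = now_s();
        send_ack_timer(slot->est_done_s > t ? slot->est_done_s - t : 1);
    } else {
        send_stored_response(slot);     /* the real answer is ready */
    }
}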

You can't just simply pipe all RDM messages over Ethernet. There are many differences. For example, RDM Discovery won't work over Ethernet because of the timing requirements. You have to have an intelligent RDM Controller or Proxy on the far end of the Ethernet network. There are all kinds of other issues that come into play as well.

We are currently working on a new standard that allows RDM messages to be encapsulated over Ethernet. It includes provisions for RDM Gateways that handle encapsulating RDM messages into Ethernet packets; Gateways are responsible for doing their own Discovery. There is quite a bit to the standard, and it has not been trivial for us to write. The next draft version of this upcoming standard should be out for Public Review and comment at the end of this year. Announcements will be sent to all the forum members here when it is available for review.
__________________
Scott M. Blair
RDM Protocol Forums Admin
October 2nd, 2014   #13
sergeychk
Junior Member
 
Join Date: Apr 2014
Posts: 9

For now, with a long delay: how does a proxy device work with an RDM controller? Does it keep an internal discovery result table, a list of the devices connected to the proxy, and then send this list to the RDM controller when it receives a GET PROXIED_DEVICES query? But the delay for an answer over an Internet or ADSL line will be more than 2.8 milliseconds; is that acceptable from the point of view of the RDM standard? It would work only if I extend the controller's timeout. Is it possible to use a long-delay line for RDM control with the current release of the RDM standard?
October 3rd, 2014   #14
sblair
Administrator
 
Join Date: Feb 2006
Posts: 438

You have to at least be able to respond to the controller with an ACK_TIMER within the required time frame. If you're unable to send back even an ACK_TIMER, then the system is not going to work. RDM is designed to run on RS-485, not to go over the Internet or an ADSL-type line.

For something like this, you'll really need to be using RDMnet (ANSI BSR E1.33), which, as I mentioned, is still being written. The next draft version for Public Comment will be released near the end of the year.
__________________
Scott M. Blair
RDM Protocol Forums Admin