I ran a test of the RS-485 lines. See the schematic for the transmitter section of the circuit. Three SN75176B transceivers were employed: one configured as a driver (transmitter), two as receivers. The driver was powered only when transmitting; its Vcc was cut off otherwise. Transmitter and receivers were connected using Samsung 4-pair AWG24 Category 5 UTP cable. Cable length from the transmitter to the farthest receiver was around 30 meters. One pair was used for RS-485 lines A and B, and two pairs for the power supply to the transmitter. The test receiver used below had its own power supply, but all transceivers shared a common ground. There were no termination resistors across lines A and B. Pull-up and pull-down resistors were as per the schematic--20Kohms each. Each data packet consisted of 4 bytes plus a 9th bit. Minimum time interval between packet transmissions was ~50ms. Baud rate was 19.2kbps.
In the following screenshots,
Channel 1 (yellow) = Line A
Channel 2 (cyan) = Line B
Math channel (purple) = A - B
Overshoots and undershoots are clearly visible during high/low transitions. In all probability these are due to signal reflections in the cable.
Zooming in:
With math channel turned on:
Channels 1 and 2 are turned off and the math channel's vertical scale is decreased from 5V/div to 2V/div:
Vertical scale reduced to 1V/div. Look at the left half of the trace. When the transmitter is off, A - B is approximately 1.0V. (Adjusting the vertical scale to <1V/div and using cursors, I was able to take a more accurate reading of 1.08V. With only one receiver connected, A - B = 1.50V.) Since this is >200mV, pin R of the receivers will be high--the mandatory and expected line idle state.
Here are zoomed in views of over- and undershoots. The ringing dies out well before half a bit width. The first screenshot shows line A going from low to high. The second shows line A going from high to low. Note the different horizontal scales--500ns in the first, 200ns in the second.
My Rigol scope only has two channels so in the following screenshots
Channel 1: Line A
Channel 2: SN75176B pin R
Even with the ringing there are no errors in the output of the receiver, although undershoot does occasionally occur on pin R's output. (The Rigol is prone to aliasing, so there may in fact be very high frequency ringing that isn't showing up at this horizontal scale.)
With horizontal scale reduced, we get to see details of undershoots on pin R.
Overshoots when pin R goes from low to high are probably less frequent--I haven't been able to capture any yet.
Just to be fair to line B, I set:
Channel 1: Line B
Channel 2: SN75176B pin R
What I'd like to do in the future is increase the baud rate to >100kbps and increase the cable length. It should be interesting to see how bad the ringing will get, and how it will affect data integrity. Something like the following is expected.
That's from a lab test performed by Texas Instruments. Baud rate was 200kbps. 100 feet of Bertek 100-ohm AWG24 twisted pair cable was used. There was one driver and only one receiver (TI AM26C31C and AM26C32C).
Tuesday, August 30, 2011
Sunday, August 28, 2011
Using RS-485 transceivers
The beauty of using RS-485 (more formally known as the TIA/EIA-485 standard) for long distance serial communication is that because it uses differential lines it's largely immune to common mode voltages and noise. It's also utterly simple electrically--at least in a half duplex setup--in that it uses only one twisted pair cable and a common ground.
Here's a setup for perhaps the simplest possible configuration: there's only one transmitter and one or more receivers. In other words communication is simplex--information goes only one way. The transmitter sends out data while the receivers intercept the data stream but do not transmit anything.
In the above circuit the non-inverting input of the unity gain buffer is connected to a sensor circuit with an analog output. The MCU's ADC converts the sensor reading to a digital value and sends it out serially. U1 is an RS-485 transceiver. Pins DE and /RE are controlled by the MCU, which switches them high only when transmitting.
At the heart of RS-485 is the transceiver. In fact RS-485 is merely an electrical standard, and the particular electrical characteristics of the transceivers determine whether one has an RS-485 circuit or not. There are a lot of transceivers out there and I would rather use something like Maxim's MAX48x series, but the price is simply eye-popping! They're as expensive as instrumentation amplifiers. Instead I've been using the Jurassic SN75176B, which is over five times cheaper. Texas Instruments says it's rated for 10Mbps throughput. But hey, my requirements are 2 to 3 orders of magnitude lower.
The problem with the 75176 is that it's a gas guzzler. An informal test shows that it draws some 16mA @ Vcc = 5V when it's set up for reception only (pins DE and /RE low). This shoots up to around 32mA in transmission mode (pins DE and /RE high). As Dave Jones would say, you can fly to the moon on 32mA!
To limit the amount of juice it gulps, we can--at least in a simplex arrangement--turn off the 75176 when no transmission is being performed. As can be seen in the schematic above, a PNP transistor is used to power it up/down. When transmission of a data packet is about to commence the transceiver is switched on, and right after the last stop bit is sent the 75176 is powered down. Average power consumption of the transmitter circuit will depend on how frequently transmissions are made. It will also depend on the baud rate--the higher the baud rate, the less time each transmission takes and therefore the shorter the time the transceiver is on.
In half duplex systems where drivers (transmitters) are also receivers, powering down the transceiver will probably not be an option.
A very important issue with these transceivers is ensuring that they are fail-safe. Fail-safe in this context means that even if the transmitter's DE pin is grounded--i.e., the driver is disabled and lines B and A become tri-stated to high impedance--or if power to the transmitting transceiver is cut off as it is in the circuit above, any and all receivers will still see a differential voltage between B and A that's > 200mV. As per the RS-485 standard, if -200mV < B - A < +200mV, the output on pin R is indeterminate. To guard against this, bias resistors are added to lines A and B. In the schematic above these are pull-up resistor Rb1 and pull-down resistor Rb2.
Very important note: The RS-485 standard states that when B - A > +200mV a logic 1 is output on the receiver, and when B - A < -200mV a logic 0 is output. But chip manufacturers like Texas Instruments and Maxim have reversed this, and so B is A and A is B on their transceivers. No problem as long as you don't mix in transceivers that use the RS-485 standard labeling. From here on I'll follow the TI/Maxim convention.
The values of the bias resistors determine the differential voltage A - B. How much bias resistance should be installed depends on how many receivers are connected to the network. For the transceiver used above, each receiver has a minimum input resistance of 12Kohms. The more receivers there are, the lower the total resistance, since the input resistances are in parallel. This translates to lower-value bias resistors. Here's how to compute the values:
Let:
Ritotal = total input resistance
I = current through Ritotal
VCC = power supply voltage to which pull-up resistor will be connected
Rb = bias resistor value
RT = total resistance which includes the bias resistors and Ritotal
We require that the voltage across the total input resistance be greater than 200mV:
I Ritotal > 200mV
Solving for current:
I > 200mV / Ritotal
The bias resistors and Ritotal are in series and form a voltage divider and we define their sum as the total resistance:
RT = 2Rb + Ritotal
It's also equal to:
RT = VCC / I
Since we're looking for the value of the bias resistors:
Rb = (RT - Ritotal)/2
Substituting RT = VCC/I with I > 200mV/Ritotal, and simplifying (with VCC in volts, so 1/200mV contributes the factor of 5), we obtain:
Rb < (5VCC - 1)Ritotal/2
Example: Let's say we have two SN75176B transceivers on the network configured as receivers. VCC = 5V. Each transceiver has an input resistance of 12Kohms. Because they're in parallel, Ritotal = 6K. Using the formula above, we find that bias resistors should be less than 72Kohms each in order for A - B to be > 200mV when transmitter is disabled (lines A and B are in a high impedance state).
If termination resistors are present, the total input resistance becomes Ritotal || termination resistors. So if two 120-ohm termination resistors were present (termination resistors are usually between 120 and 150 ohms when using UTP cables) and Ritotal = 6K, then the total resistance = 6K || 120 || 120 = 1/(1/6000 + 1/120 + 1/120) = 59.4 ohms. Computing the bias resistors, we obtain approximately 713 ohms (about 720 if the parallel combination is rounded to 60 ohms). Because of the very low resistance of termination resistors, as much as possible we don't want to install them, since they significantly increase power consumption.
Termination resistors are used across lines A and B for impedance matching to prevent/minimize signal reflections leading to ringing. So when do we need termination? Well, it depends if we have a short or long line/cable. One heuristic for determining whether a line is long or short is as follows.
Let:
tt = transition time also known as rise/fall time of transceiver's transmitter signal in seconds
RP = propagation rate of electrical signals in copper = 0.2m/ns = 2 x 10^8 m/s
L = cable length in meters
A line is considered long if:
tt < 2(L/RP)
(see Maxim's equation for a more conservative figure).
According to the SN75176B datasheet its tt = 20ns. Let's solve for L to see how long the cable can be before it's deemed long.
L = ttRP/2 = (20 x 10^-9)(2 x 10^8)/2 = 2 meters.
So if we're using the SN75176B and our cable is longer than 2 meters, termination resistors are necessary. That's not the end of the story, however. We may still be able to get away without any termination even at cable lengths much longer than this:
If the rise time is unknown, another way of deciding whether a line is long or short is to compare the shortest expected bit width and the 1-way cable delay. This method considers two factors: the reflections may bounce back and forth several times before settling, and the bit rates at the transmitter and receiver may vary slightly from each other. As a general guideline, if the bit width is 40 or more times greater than the delay, any reflections will have settled by the time the receiver reads the bits. (Axelson, p.111)
The key to obviating the need for termination is to have a sufficiently slow baud rate so that reflections in the transmission line (a "long line") have already died out by the time the microcontroller reads the bit. The 1-way cable delay is merely L/RP. Thus for a 100-meter wire, the 1-way cable delay = 500ns or 0.5µs. Using the above guideline we can determine the maximum cable length given a particular transmission baud rate--the bit width is the reciprocal of baud rate.
Let:
B = transmission baud rate in bits/sec
RP = propagation rate of electrical signals in copper = 0.2m/ns = 2 x 10^8 m/s
L = cable length in meters
The guideline states that:
1/B >= 40(L/RP)
Solving for L we have:
L <= [(1/B)/40]RP
L <= RP/(40B)
For a baud rate of 19,200, L = 260 meters. So theoretically no termination is necessary if cable length is less than this. Just out of curiosity let's see the maximum baud rate at which we can transmit for the 2-meter cable above.
B = RP/(40L) = 2.5Mbps
----
References:
Jan Axelson, Serial Port Complete: COM Ports, USB Virtual COM Ports, and Ports for Embedded Systems, 2nd ed., Lakeview Research LLC, 2007, Chapters 6 & 7.
Polarities for Differential Pair Signals (RS-422 and RS-485)
RS-422 and RS-485 Application Note (computation of bias resistor values)
AN1090 Methods for Trimming the Power Required in RS-485 Systems
Wednesday, August 17, 2011
ADC LSb mystery
Does a 10-bit ADC really have a resolution (not accuracy) of 10 bits? Apparently it might not. I've been having problems with the LSb (least significant bit) of the ADC output of PICs. What's happening is that even with a stable, fixed-voltage input signal to the ADC, the digitized output--depending on the VDD level and the input voltage signal--may either be stable or flip-flop between two values (one LSb apart) from one conversion to the next.
The LSb problem depends on both VDD and the ADC input signal level. For example, the problem might not occur at an input signal of X volts with VDD = 4.950V but will appear when VDD is within the range of 3.0 to 3.6V. And/or it might get worse (i.e., the LSb changes more frequently) as one approaches a certain voltage range. Or for a certain signal voltage Y, the problem may be nonexistent with VDD anywhere from 2.5 to 5.0V. Or for signal Z the LSb glitch may be present across practically the entire operating voltage range of the MCU/ADC.
I've tried everything I can think of to get rid of this hair-tearing problem. I've used a linear power supply instead of a switch-mode one to minimize power supply noise. I've used separate power and ground lines for digital and analog and employed star connections. (Actually digital noise should be minimal since, as we'll see, the MCU goes to sleep while the ADC is reading.) I've also peppered the circuit with filter and bypass caps. The circuit is in open air, but I still allowed it to settle thermally, so drifts due to temperature are eliminated. I admit the circuit is breadboarded and not soldered onto a PCB, so contact problems may be the culprit; to minimize that possibility I've used redundant connections. I also tried buffering the voltage signal using an op amp. Though there has been a reduction, the above measures have not completely eliminated the problem.
Here's the test circuit:
The bridge rectifier is fed approximately 12VAC. The potentiometer is a 5K multi-turn Bourns 3590S-2-502L. Because this is the only highly reliable pot that I have, and because the output voltage of the LM317 must not exceed 5V (the PIC's max VDD), I had to add the 560 and 220 ohm resistors. When the pot is turned all the way to zero, the 560 ohm resistor is shunted, leaving only the 220 ohms. When the pot is at its maximum 5K, it's in parallel with the 560, giving a net resistance of 504 ohms. This in turn is in series with the 220 ohms, giving a total of 724 ohms. Plug those values into the LM317 equation and the effective output range of this power supply is 2.4 to 5.0V.
Firmware is as follows. It's the 3rd or 4th revision. The ADC uses VDD as its positive voltage reference and ground as its negative reference, so the ADC output is ratiometric with respect to the power supply; the analog-to-digital conversion is therefore immune to drifts in the supply. An ADC read is performed approximately every 8ms. Because the PIC12F1822's ADC has silicon flaws, the read is performed during sleep; thus although the timer0 tick is set to 8ms, several hundred microseconds are added while the system clock is shut down. The ADC read takes so long because there are actually two sets of reads: a single read, and a set of 16 reads which are then averaged. I tried averaging the reads to see if I could obtain a value that doesn't keep flipping around. After the reads, the current and previous reads are compared. If they're different then the corresponding MCU pins (neflag and neflagave) are made high, else they're made low. Those pins are connected to an oscilloscope and logic analyzer so I can see how (un)stable the ADC reads are. In this final revision of the firmware I've also enabled the EUSART so I can see by how many bits the ADC reads differ (the Saleae Logic analyzer automatically decodes the serial data stream). Only the lower byte of the ADC output (register ADRESL) is transmitted.
/*
processor = PIC 12F1822
compiler = mikroC Pro v.5.0.0
August 2011
Test of ADC LSb
internal oscillator is used, WDT disabled, power up timer and brown out reset disabled, MCLR disabled
*/
#define neflag PORTA.f5 // not equal flag; 1 = previous and current ADC values are not equal
#define neflagave PORTA.f4 // not equal flag average; 1 = previous and current averaged ADC values are not equal
#define t0ini (256 - 250) // value loaded into TMR0 every time it overflows
void IniReg()
{
OSCCON = 0b1101000; // internal clock, 4MHz
TRISA = 0b100;
ANSELA = 0b100;
PORTA = 0;
TMR0 = t0ini;
OPTION_REG = 0b10000100; // Weak pull ups disabled
// timer0 uses internal clock,
// prescaler assigned to timer0, prescaler = 1:32
ADCON0 = 0b1001; // channel 2, ADC on
ADCON1 = 0b11110000; // right justified, use Frc, Vdd as positive reference voltage
INTCON.GIE = 0; // global interrupt disabled
PIE1.ADIE = 1; // ADC interrupt enabled
// baud rate = 2400
SPBRGH = 0;
SPBRGL = 25;
RCSTA.SPEN = 1;
TXSTA.TXEN = 1;
TXSTA.SYNC = 0;
} // void IniReg()
/* =========================================================================================================
According to PIC12F1822 Silicon Errata sheet DS80502C the ADC unit in certain silicon revisions is buggy:
"Under certain device operating conditions, the ADC conversion may not complete properly.
When this occurs, the ADC Interrupt Flag (ADIF) does not get set,
the GO/DONE bit does not get cleared and the conversion result
does not get loaded into the ADRESH and ADRESL result registers."
mikroC Pro v5.00 compiler's Adc_Read() built-in ADC function uses the GO/DONE bit to check conversion completion (without putting the MCU to sleep, of course)
so the function cannot be used. And indeed the MCU hangs (when WDT is disabled) or the MCU resets when WDT is enabled.
The workaround used here is as per method 1 in the said errata:
"Select the dedicated RC oscillator as the ADC conversion clock source,
and perform all conversions with the device in Sleep."
========================================================================================================= */
unsigned int ADC()
{
PIR1.ADIF = 0; // clear ADC interrupt flag
INTCON.PEIE = 1; // peripheral interrupts enabled
ADCON0.GO = 1; // start ADC conversion
asm {sleep} // zzzzzzzzzzzzzzz.... when ADIF = 1 MCU will wake up and will continue execution of the program
PIR1.ADIF = 0; // clear ADC interrupt flag
INTCON.PEIE = 0; // disable peripheral interrupts
return ADRESH*256 + ADRESL; // return the 10-bit ADC value as a 16-bit number
}
void Compare()
{
unsigned char i;
static unsigned int CURR = 0; // current single ADC reading
static unsigned int CURRAVE = 0; // current average of 16 ADC readings
static unsigned int PREV; // previous single ADC reading
static unsigned int PREVAVE; // previous average of 16 ADC reading
unsigned int SUM; // sum of 16 ADC readings
PREVAVE = CURRAVE;
SUM = 0;
for (i=0; i<16; i++)
SUM += ADC();
PREV = CURR;
CURR = ADC();
TXREG = CURR; // because the ADC reads occur during sleep while serial transmission
// depends on the system clock, transmission can only begin after all ADC reads are done
if (CURR == PREV)
neflag = 0;
else
neflag = 1;
CURRAVE = SUM / 16;
if (SUM % 16 >= 8) // number rounding algorithm
CURRAVE += 1;
if (CURRAVE == PREVAVE)
neflagave = 0;
else
neflagave = 1;
}
void main()
{
IniReg();
while(1)
{
if (INTCON.TMR0IF)
{
TMR0 = t0ini;
Compare();
INTCON.TMR0IF = 0;
}
} // while(1)
} // void main()
I used the oscilloscope with the time base in roll mode, with one channel monitoring neflag and the other neflagave. I opted not to take screenshots of the scope--it would take too much effort. But here are some of the logic analyzer data at particular voltages. As you can see, the averaging routine sometimes works, sometimes doesn't, depending on how bad the LSb problem is. Resistor values are as per their color code. VDD values are actual measured voltages using a Fluke 87V.
I. Ra = 5.1K, Rb = 4.3K
A. VDD = 2.502V
B. VDD = 4.000V
C. VDD = 5.000V
II. Ra = 5.1K || 330 = 310, Rb = 4.3K
I provide only a screenshot for VDD = 5.000V. There is absolutely no LSb problem for VDD = 2.5V to 5.0V.
III. Ra = 5.1K, Rb = 4.3K || 330 = 306
A. VDD = 2.999V
B. VDD = 3.499V
C. VDD = 4.500V
D. VDD = 5.000V
So what could be causing this ADC read glitch? Well, no ADC is 100% accurate. It will have various sources of error--among them INL, DNL, offset, and gain errors--due to the silicon and its fabrication. But accuracy isn't our problem; resolution is. Apparently none of these is the error I've encountered. What it looks very much like is conversion error due to transition noise, also known as code-transition noise or code-edge noise. Transition noise is,
the range of input voltages that cause an ADC output to toggle between adjacent output codes. As you increase the analog input voltage, the voltages that define where each code transition occurs (code edges) are uncertain due to the associated transition noise.
Keeping the input signal constant and performing lots of ADC reads, the distribution of ADC output codes is supposedly Gaussian (a normal distribution). In the tests with the PIC, however, there are apparently only two codes, i.e., the output varies by only one LSb. I haven't done it, but it should be easy to have the MCU read a thousand or ten thousand times and tell us whether the codes vary by more than one LSb.
Interestingly, averaging, which I used in the firmware above, is the recommended algorithm to address LSb flickering. However, as we've seen even averaging (at least the way it was implemented above) isn't a panacea. Maybe I'm doing it wrong.