Category Archives: Hardware

Care and Feeding of Batteries

My last post dealt with designing your product to get long battery life. The title implied it was about primary batteries, but in reality most of what I wrote applies to getting the best efficiency from any power source. This week I’m going to continue in a similar vein, with considerations for using rechargeable batteries.

The first thing to keep in mind regarding rechargeable batteries is that you have to separate charge duration from actual battery service life. Many people, particularly non-technical consumers, think of charge duration as “battery life”, but they are considerably different things. Charge duration is the amount of run time a product gets from a single charge cycle. The primary things that impact charge duration are the same things I discussed last week: sleep mode and active mode current draw, duty cycle, and the various environmental conditions that can reduce battery output.

With rechargeable batteries, the actual service life is typically measured in recharge cycles rather than in service hours. Charge duration is still measured in the number of hours from fully charged to a cutoff voltage (usually battery chemistry specific) but there are a number of factors that will change the voltage/time discharge curve for individual batteries over time or simply reduce the number of charge cycles a battery can endure. As it turns out, most of these factors are design related so you do have some control over them.

Instead of repeating it all here, the article at the link below gives an excellent overview of what you need to pay attention to in order to get the longest service life from your batteries. The article mainly discusses lithium-ion batteries, but most of it applies to any rechargeable battery chemistry. It is also somewhat dated but still very applicable, since the general concepts apply to almost any battery chemistry with only slight differences in the details.

The main take-away from the article should be that the service life of a rechargeable battery is very much dependent on how the battery is treated. Charging in particular is very stressful on any battery, and almost anything you do to charge a battery faster will also decrease its service life, often disproportionately so. Trying to extend charge duration by lowering the cutoff voltage is very unwise, since the discharge curve for most batteries is so steep around the specified cutoff voltage (particularly for lithium based batteries). The modest improvements you might get by cheating a few tenths of a volt on the cutoff voltage or saving a few minutes when charging can easily be overshadowed by the reduction in service life this treatment inflicts on a battery.


The 10 year battery life – myth or reality?

Myths and legends are usually the subject of incredible beauty, unsurpassed strength or attempting what mere mortals assume is an unobtainable goal. It is often difficult to separate myth from reality or even know where one ends and the other begins. The same can be said for much of what you read these days about products with a 10 year battery life. When you see advertisements for wireless devices claiming 10 year battery life from a coin-cell battery it is easy to think that is the stuff of myths and legends. Yet, given the right hardware/firmware design and the appropriate battery technology and capacity for a certain application, 10 year battery life is certainly a doable thing.

As a hypothetical reality, you’ve just been tasked with designing your company’s next product and prominent in the features list is 10 year battery life. As I’ve tried to stress throughout this blog, keep in mind that low-power design is really low-power system design, not just hardware or firmware design.  Designing for long battery life often requires compromises in feature set and performance, hardware/firmware design, battery selection and even industrial design. Here are some of the keys to achieving maximum battery life.

Develop your power management strategy before even looking at that blank page in your schematic capture program. Many micros have really low sleep mode current specs that may be impossible to achieve in your design, or that carry serious design implications. Just a few of the many things to keep an eye out for are:

  • The clocks to all the micro’s on-chip peripherals are stopped when the micro core is asleep. This would prevent timed wake-ups unless you also have a real time clock with enough resolution to meet your timed wake-up needs.  If your code needs to run more than once per second, depending on the RTC design this may not be possible.
  • Only a few bytes of data in special registers are maintained in power-down mode. This may be OK for very simple applications but when using a micro like this, coming out of the power-down mode is essentially the same as a power-up. This means the micro core, firmware structures and on-chip peripherals will all have to be initialized every wake-up before any real application code can run. This can seriously impact response times and waste a lot of power (not to mention jeopardize your friendship with the people writing the firmware for your hardware).
  • A micro that powers down the oscillator, flash and internal RAM can take several milliseconds to start running code when coming out of a deep sleep or power down mode. If your application requires a quick response to the event that wakes the micro it may not be achievable without using a higher current sleep mode. The same thing applies if you rely on a PLL internal to the micro to generate a high speed clock for the micro or on-chip peripheral.
  • Most micros have a limited number of GPIO pins that can wake them from sleep mode. Generally the deeper the sleep mode the fewer wake-up options you will have and in some cases only a single dedicated pin can be used to wake the micro from the deeper sleep modes.
  • Some micros allow a timer to be clocked with a slow clock like the RTC clock or watchdog timer clock in a deep sleep mode. Be careful with this because the timer may not be able to generate an interrupt to wake the micro. The timer may have to drive a GPIO to wake the micro.

Using your power management strategy, prepare a power budget for the device. How to do this could be the subject of several blog posts, but suffice it to say that an accurate power budget is essential to having a chance of meeting your battery life target (no matter how long or short that is). If your product’s behavior is well defined and isn’t too complicated, it is possible to create an accurate power budget in a fairly simple spreadsheet. If your first cut at a power budget shows you have battery capacity to spare, don’t congratulate yourself just yet, because it is probably grossly wrong. It’s a good idea to always double check, then triple check, then have at least one other person check your power budget.
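To make the spreadsheet arithmetic concrete, here is a minimal sketch of a duty-cycle power budget. All of the currents, times and capacity figures below are invented for illustration, and the derating factor is a rough placeholder for the environmental effects discussed elsewhere in this blog:

```python
# Minimal duty-cycle power budget, the kind of arithmetic a budget
# spreadsheet performs. All figures are hypothetical placeholders.

SECONDS_PER_DAY = 24 * 3600

def average_current_ua(modes):
    """modes: list of (current_uA, seconds_per_day_in_mode) pairs."""
    assert sum(s for _, s in modes) <= SECONDS_PER_DAY
    return sum(i * s for i, s in modes) / SECONDS_PER_DAY

def battery_life_years(capacity_mah, avg_ua, derating=0.85):
    """Derate nominal capacity for temperature, aging and self-discharge."""
    usable_uah = capacity_mah * 1000.0 * derating
    return usable_uah / avg_ua / (24 * 365)

wake_s = 96.0                         # 96 one-second wake-ups per day
modes = [
    (5000.0, wake_s),                 # 5 mA while awake (sensor + radio)
    (0.9, SECONDS_PER_DAY - wake_s),  # 0.9 uA deep sleep the rest of the day
]
avg = average_current_ua(modes)
print(f"average current: {avg:.2f} uA")
print(f"life on a 1000 mAh cell: {battery_life_years(1000, avg):.1f} years")
```

Note how the sleep current dominates the average even though the active current is thousands of times larger; that is why sleep-mode measurement errors matter so much.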

Before getting too far into the product design (particularly the mechanical/industrial design), you should know how your battery will behave under all of the conditions it will encounter. There are three characteristics of almost all battery chemistries that can combine to produce a voltage too low for your circuit to function long before the battery has completely discharged:

  • Battery voltage typically degrades over time, whether it is just a “wear” effect on a primary battery or the voltage droop that is common for rechargeable batteries.
  • Battery voltage is typically reduced as temperatures become lower.  Batteries can be designed to minimize this effect but these are typically expensive industrial oriented batteries, not your common coin cell or AA type batteries.
  • Battery voltage is typically reduced as the load increases. This is primarily caused by internal resistance of the battery. For some battery types the internal resistance increases over time so this effect becomes more severe as the battery ages.

The extent to which these characteristics are exhibited varies considerably from one battery chemistry to another, and most battery data sheets provide graphs showing these effects under different conditions. However, these graphs rarely depict what happens for combinations of these effects, so when your design is subject to more than one of these conditions (such as high load and low temperatures) you should experiment with these conditions to characterize the battery output voltage. You may find that your power budget says you are good for 10 years, when in reality, after a few years, the battery voltage drops too low for your circuit to function whenever your wireless radio transmits in cold weather. Also, pay attention to the battery’s self-discharge current and account for it in your power budget; with today’s ultra-low power micros, the battery’s self-discharge may be higher than what the circuit draws in sleep modes.
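A quick way to see the voltage-sag effect is to model the battery as an open-circuit voltage behind an internal resistance. The numbers below are hypothetical, chosen only to show how an aged or cold cell can brown out a circuit during a radio transmit burst even though plenty of capacity remains:

```python
# Model a battery as an open-circuit voltage behind an internal
# resistance and check the loaded terminal voltage against the
# circuit's minimum operating voltage. All values are hypothetical.

def loaded_voltage(v_open_circuit, r_internal_ohm, load_ma):
    """Terminal voltage while sourcing load_ma milliamps."""
    return v_open_circuit - r_internal_ohm * load_ma / 1000.0

V_MIN = 2.0        # hypothetical brown-out threshold of the circuit
V_OC = 2.9         # partly discharged lithium coin cell (open circuit)

# Internal resistance for a fresh cell, an aged cell, and an aged cell
# in the cold; a 15 mA radio transmit burst loads the battery.
for r_int in (15.0, 40.0, 80.0):
    v = loaded_voltage(V_OC, r_int, 15.0)
    print(f"Rint = {r_int:4.0f} ohm -> {v:.2f} V "
          f"{'OK' if v >= V_MIN else 'BROWN-OUT'}")
```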

You need comprehensive and accurate current measurements for all of the operating modes your device has. If your device performs some function several hundred times a day, any error in your current measurement for that function will add up over time, though these higher power states often have surprisingly little impact on battery life. More importantly, if your device spends almost all day in sleep mode, a 10% error in the sleep mode current measurement could mean you miss your calculated battery life by 10%. If you are taking nano-amp to micro-amp current measurements with a sense resistor and regular scope probe, your measurements could easily be off by much more than 10% (see our white paper to learn more about why this can happen). To make things worse, as the article at the link below discusses, several recent studies have shown substantial differences in current draw between samples of the same micro, particularly in sleep modes and over temperature. Since these differences are mostly attributed to leakage currents, this should apply to nearly all complex CMOS ICs, not just micros.

You must have a thorough understanding of how your device performs in the actual environment it is used in. This point is particularly important for devices that incorporate wireless radios. If the product enclosure design or the surroundings where the product is installed attenuates the radio signal such that transmission retries are the norm or higher transmit power is used by the radio, you may miss your battery life target by years. Particularly for industrial applications, mounting a device on a metal pole or on the side of a metal box is commonly done but the proximity of that large metal area to a low power radio antenna will severely degrade its performance. Unless the wireless radio provides retry counts, transmit power levels and similar diagnostic information, monitoring current waveforms while your device is operating is a good way to literally see exactly what it is doing.

Last of all, while doing your design keep in mind all of the various leakage currents (and “current leaks”) your design may be subject to. It is difficult enough to design around the obvious current consumption points, but these easily forgotten points of wasted power can ruin your best design efforts. It is easy to overlook things like the quiescent current of a voltage regulator, which can be considerable compared to a micro’s deep sleep mode current. Achieving long battery life requires excruciating attention to these types of design details.

So, for the unavoidable mythology references: when designing for long battery life you may feel like Sisyphus from Greek mythology, who was condemned to roll a boulder uphill only to have it roll back down each time he approached the top. You may get your power budget close to your design goal, only to have to take big steps backward and try again. With thorough up-front research, considerable engineering effort, plus a little luck, your project won’t crash and burn like Icarus, whose wings melted when he flew too close to the sun, and your 10 year battery life can become a reality.


Keep it cool – part 2

Driving loads

You may not expect to read about multi-amp loads in the context of “Low Power Design”; perhaps that is an indication of how widespread the push for energy-efficient products has become. In many embedded applications it is quite common for a micro drawing a few milliamps to control motors or solenoids that require several amps of current, or high-brightness LEDs that draw several hundred milliamps. Even if these current ranges are way above what your design has to deal with, the information presented here may still be applicable to your design.

When driving a large load such as a DC motor, solenoid or a cluster of LEDs, there are a number of things you can do to ensure you drive them efficiently to help reduce heat buildup:

  • You already selected the lowest Rds(on) FET you could find, but are you taking full advantage of its capabilities? Make sure your gate voltage is such that the FET is fully turned on and operating in its lowest Rds(on) range. Most N-channel FETs need a gate voltage in the 8-10V range to fully turn on, so simply driving the gate with a GPIO won’t put the FET in its lowest Rds(on) range. Logic-level gate FETs may be better in this regard, but typically they just have a lower minimum turn-on threshold and may still need over 5V to fully turn on. In these cases you should consider using a P-channel FET, or a gate driver IC to drive the gate of the N-channel FET with the voltage it is switching (or a step-up regulator or voltage doubler circuit to provide a higher gate voltage if the input voltage is too low to fully turn on the FET). The graph below shows the impact of gate voltage on Rds(on) for the Fairchild FDS8449 N-channel FET. The FDS8449 has a maximum gate turn-on threshold of 3V, but as you can see, at 3V the Rds(on) is 2.4X higher than at 10V with no load, and much higher as the load increases.

Rds(on) vs gate voltage

  • When using a P-channel FET to drive a load, a GPIO may not drive the gate high enough to completely turn off the FET, so you may be leaking power through the FET. This can often go unnoticed since the amount of power is too low to activate the load. Using an open-collector driver or an N-channel FET to drive the gate of the P-channel FET can solve this problem (don’t forget a pull-up resistor from the P-channel FET’s gate to the voltage being switched).
  • A P-channel FET with voltage and current ratings similar to an N-channel FET’s will typically have 50-100% higher Rds(on) than the N-channel FET. With Rds(on) specs on modern FETs in the double-digit milliohm range, even doubling the Rds(on) produces a fairly low value. However, that is simply wasted power that can easily be eliminated if low-side switching is an option for your application. This is also important to keep in mind if for some reason you can’t address one of the issues discussed here that prevents the FET from operating close to its lowest Rds(on); changing the type of FET may alleviate the problem. If you have a P-channel FET operating at 2X its lowest Rds(on), you could possibly reduce the Rds(on) by a factor of 4X by using an N-channel FET.
  •  Just like a resistor, the Rds(on) of a FET increases with temperature. The graph below shows the impact of temperature on Rds(on) for the Fairchild FDS8449 N-channel FET. As you can see, a 50°C increase in temperature results in a nearly 20% increase in Rds(on). Even if your product is normally used in a room temperature environment, a 20-50°C temperature rise at the FET’s die isn’t uncommon (another reason to operate the FET in its lowest Rds(on) range). This is another situation where keeping a part cooler helps prevent wasting power, as the Rds(on) increases the part will get hotter, increasing the Rds(on) and so on. A hot FET and a non-optimal gate voltage can combine to generate a lot of excess heat and waste a lot of power.

Rds(on) vs junct temp

  • Similar to LEDs, electromechanical devices can often be driven with a PWM to reduce power without impacting the performance of the device. Solenoids can often be “kicked” with a several hundred millisecond pulse to actuate them and then driven at as low as 30 or 40% duty cycle to keep them actuated. Depending on size, a DC motor may require a “kick” for up to a few seconds to get it up to speed and more than a 50% duty cycle to avoid slowing down but with such large loads, even a 10-20% savings can be a considerable reduction in power consumption.
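The savings from kick-and-hold drive are easy to estimate to first order. The sketch below uses a hypothetical 12V solenoid with a 36 ohm coil; inductive and flyback effects are ignored, so treat it as a rough estimate only:

```python
# First-order estimate of the power saved by "kick then hold" PWM
# drive of a solenoid. The 12 V / 36 ohm coil, kick length and duty
# cycle are hypothetical, and inductive/flyback effects are ignored.

def kick_and_hold_avg_w(v_supply, coil_ohm, kick_s, hold_duty, total_s):
    """Average power: full power during the kick, PWM afterwards.
    Assumes dissipation scales linearly with duty cycle."""
    p_full = v_supply ** 2 / coil_ohm
    return (p_full * kick_s + p_full * hold_duty * (total_s - kick_s)) / total_s

p_held = 12.0 ** 2 / 36.0          # 4 W if simply held at full power
p_pwm = kick_and_hold_avg_w(12.0, 36.0, kick_s=0.3, hold_duty=0.4,
                            total_s=10.0)
print(f"held at full power: {p_held:.2f} W")
print(f"kick + 40% hold:    {p_pwm:.2f} W "
      f"({100 * (1 - p_pwm / p_held):.0f}% savings)")
```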

 If your application involves high currents or other potential heat sources, consider buying or renting an infrared thermal camera. The camera will help you find hot spots on your board and assess the effectiveness of your heat spreading/isolation efforts. Don’t be tempted to skimp and use a laser-pointer IR thermometer instead of the camera. The IR thermometer has a fairly narrow field of view and only displays the temperature where you point it, so it may not accurately measure the temperature of a small hot spot like one caused by high current flowing through a small surface mount FET. The beauty and value of the camera is that it can show you even tiny hotspots where you don’t think to look for them. The first time I used a thermal camera it did just that, allowing us to fix an issue at the prototype stage that would likely have led to field failures and high warranty costs. That camera more than paid for itself in a matter of hours.



Keep it cool

Several of my recent posts have mentioned the very negative impact of heat on power consumption. This is the first of a two part series of posts on thermal management for low power devices. This information is mostly taken from my “Low Power Design” PDF e-book.

As semiconductor geometries have shrunk, leakage current has in recent years become a significant component of the overall power consumed by ICs. As parts heat up, their leakage current typically increases. It is not uncommon for parts to consume nearly twice as much current at their highest rated temperature as at their lowest. For example, the AD8226 op-amp is rated for -40°C to 125°C. Its quiescent current ranges from 325uA at -40°C to 425uA at 25°C to 600uA at 125°C. That is an 85% increase across the temperature span and over a 40% increase from “room temperature” to the maximum temperature. You should conduct your current measurements at the temperature your product will normally operate at, if not at the temperature extremes too.
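These percentages fall straight out of the datasheet numbers; the snippet below just computes the relative increases from the quiescent-current figures quoted above:

```python
# Compute the relative increases using the AD8226 quiescent-current
# figures quoted above (325 uA, 425 uA and 600 uA).

i_q = {-40: 325e-6, 25: 425e-6, 125: 600e-6}   # amps vs. temperature (C)

full_span = (i_q[125] - i_q[-40]) / i_q[-40]
from_room = (i_q[125] - i_q[25]) / i_q[25]
print(f"-40C to 125C: {full_span:.0%} increase in quiescent current")
print(f" 25C to 125C: {from_room:.0%} increase in quiescent current")
```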

Controlling Heat

Even if you don’t have the luxury of airflow in your product, there are a number of things you can do to keep the heat under control and reduce the impact of heat on your power consumption:

  • If your product is vertically mounted, place as much circuitry as you can below the main heat generating parts.
  • If you have the option for voltage regulators and other heat generating parts, select packages with bottom side ground or thermal pads. Dissipating heat into the circuit board can help localize the heat buildup and maintain a lower air temperature within the enclosure.
  • For products that operate in high temperatures and have more than a few ICs, select part packages based on what you want to do thermally for a part. For parts that drive large loads or use high clock speeds it can be beneficial to select a package with a low thermal resistance to help get the heat away from the die. On the other hand, if you must place other parts near heat sources on your board you can choose a package with a higher thermal resistance for those other parts to reduce the heat transferred from the PCB to the die. Most surface mount ICs have several options for package styles that can have a wide range of thermal resistances. For instance, Texas Instruments offers the 74LV74 in 6 different packages with thermal impedances ranging from 47°C/W to 127°C/W.
  • Without forced air flow in your product, the junction-to-ambient thermal resistance spec probably isn’t relevant for selecting parts. You need to pay attention to the junction-to-case thermal resistance specs. Some manufacturers are specifying a junction-to-board thermal resistance which is even better. When comparing the junction-to-case thermal resistance of different packages you must understand where on the case this spec applies. Newer parts and particularly those in SMT packages intended for power applications will use the bottom of the case for this spec while older parts and non-power packages will likely use the top of the case since the assumption is air flow is used to dissipate heat and not the circuit board.
  • To fully realize the heat dissipating potential of low thermal resistance packages with large tabs or bottom side thermal pads, you have to place several thermal vias in the pad to tie it to the internal ground plane or a large copper pour on the back of the PCB. You also need to pay special attention to the datasheet on parts with an exposed bottom side pad. These exposed pads are often connected to ground internally, but not always, and on some parts the pad may be the only ground connection for the part. On other parts the pad may not be electrically connected and can be safely grounded, or it may be the negative voltage supply on dual supply analog parts (which may or may not be ground in your design). The datasheets usually contain details on the size of the copper area and other PCB layout requirements to achieve the specified thermal resistances. The diagram below shows a typical arrangement for DPAK and SMT DIP packages with a thermal copper pour and vias connecting to an internal ground plane or back-side copper pour. Most IC manufacturers with bottom side thermal pad packages provide app notes or even on-line calculators to help determine the minimum size of the copper area and the number of thermal vias you need for a given package.

Footprints with Cu pour

  • With thermal vias, more isn’t always better since they can also disrupt the spread of heat across a copper pour or ground plane. The ground plane connections for vias are usually made with four thermal “spokes” rather than a direct connection, and some of these spokes may be missing if the vias are packed too close together. Some assembly houses may complain about thermal vias in device pads robbing solder from the pad. If so, reducing the via size or covering the via on the opposite side of the board with solder mask can help minimize the amount of solder that seeps into the via holes.
  • When using the PCB for heat sinking, keep in mind that FR4 and other laminates that PCBs are commonly made from are very poor thermal conductors. You are really spreading the heat through the copper in/on the PCB instead of transferring the heat into the PCB. It is best to use at least 2 oz copper for the outer layers and 1oz copper for inner layers. The heavier copper isn’t significantly more expensive than the “standard” 1 oz and ½ oz copper used for most PCBs but does provide significantly better heat transfer across the ground plane and copper pours.
  •  If you have traces on your board that carry more than a few amps, the traces themselves can be a significant source of heat if you aren’t careful. There are a number of good on-line PCB trace width calculators you can use to help prevent this. The maximum allowed temperature increase for the trace is an input for most of these calculators. If a trace normally carries high current you should set this to 5°C or 10°C. If the high currents are of a fairly low frequency and short duration, you can usually go as high as 25°C max temperature rise (but first make certain your product isn’t subject to any specifications that restrict the max temperature rise of traces). The overall trace resistance is also a function of trace length so you can reduce this resistance and resulting generated heat by shortening the trace length. Vias can have significantly higher resistance than copper traces causing them to generate additional heat so above a few amps you should use multiple vias instead of a single via.
  • To some degree the leakage current of an IC is influenced by the die size and circuit density on the die. For typical embedded devices the micro will usually have the largest die size and highest circuit density of the ICs in the device and therefore the highest leakage current. Simply increasing the space between heat generating parts and the micro can drastically reduce the amount of heat the micro is exposed to thereby decreasing its leakage current.
  • If possible, heat sink your heat generating parts to the enclosure. You may be able to do this directly, but also indirectly by placing your heat generating parts near mounting holes on the board so the mounting hardware can help carry the heat out of the circuit board.
  • Pay the premium for more efficient voltage regulators. The power that an inefficient regulator wastes is generally turned into heat which can increase the power consumed. You aren’t likely to get into a thermal runaway condition but to some degree this increased power consumption generates more heat which increases power consumption which generates more heat …. you get the point.
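Several of the points above come down to the same first-order arithmetic: conduction loss times thermal resistance gives the temperature rise at the die. The sketch below ties this back to the gate-drive discussion in part 2 above; the 5A load, Rds(on) values and 50°C/W figure are all hypothetical:

```python
# First-order junction temperature estimate: I^2 * Rds(on) conduction
# loss times a junction-to-ambient thermal resistance. The 5 A load,
# Rds(on) values and 50 C/W figure are hypothetical, chosen to echo
# the 2.4X gate-drive penalty discussed in part 2.

def junction_temp_c(t_ambient_c, load_a, rds_on_ohm, theta_ja_c_per_w):
    p_w = load_a ** 2 * rds_on_ohm          # conduction loss in the FET
    return t_ambient_c + p_w * theta_ja_c_per_w

for rds_on in (0.010, 0.024):               # fully vs. marginally enhanced
    tj = junction_temp_c(25.0, 5.0, rds_on, 50.0)
    print(f"Rds(on) = {rds_on * 1000:.0f} mohm -> Tj ~ {tj:.0f} C")
```

Remember that this underestimates the problem, since Rds(on) itself rises with the junction temperature it produces.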

That wraps up this part on thermal management. As you can tell, while much of thermal management happens in the circuit board layout, the extent to which heat can be managed in the layout, and the impact heat will have on the power consumption of your design, are highly dependent on device and part package selection.

If you find the Low Power Design blog interesting, please help spread the word about it. We’re also on Facebook, Twitter, Google+, the Element14  Community and have a LinkedIn group.

Non-volatile memory for low power designs

I came across a mention in a newsletter recently of Texas Instruments’ line of FRAM based MSP430 microcontrollers and thought a short post on FRAM would be of interest.

If you aren’t familiar with FRAM (also called FeRAM), think of it as essentially a non-volatile SRAM. FRAM was commercially developed primarily by Ramtron International starting in the mid 1980s. The ‘F’ in FRAM stands for ferroelectric, but the underlying technology isn’t based on magnetics as might be assumed. The ferroelectric film used in constructing a FRAM is often described as creating a capacitor that can be charged or discharged to store a ‘1’ or a ‘0’. These are near perfect capacitors that can retain a charge for over 150 years. Reading a FRAM location causes the charge to degrade, so read operations must be followed by a re-write of the data to that location. This is very similar to the way a DRAM operates; the re-write cycle is transparent to the user (except that read operations count toward the device’s endurance cycle count), and unlike DRAM, refresh cycles are not required.

FRAM is interesting for low power designs that require non-volatile memory. FRAM devices typically have standby currents comparable to or slightly less than EEPROM and NOR Flash devices. For write cycles, FRAM has significantly lower currents and significantly faster write cycles. For applications that primarily read from non-volatile memory, FRAM provides a modest power savings. For applications that frequently write to non-volatile memory, FRAM can provide a huge power savings, and it does not require erase operations like Flash devices do. Unlike EEPROM and Flash devices, where the number of write cycles can limit or prevent their use in some applications, FRAM devices are typically spec’d for 10 trillion write cycles or more. The main drawbacks to FRAM are higher cost and relatively low density compared to EEPROM and Flash, but with 8Mb parts on the market, that really should not be an issue for most low power embedded applications.

The table below presents some performance specs for typical FRAM, EEPROM and NOR Flash devices along with a few characteristics of the technologies. The devices used in the comparison all have SPI interfaces and capacities in the lower end of the capacity range for each part type.




                     FRAM (1)                   EEPROM (2)               Flash (3)

Standby current      6uA @ 3.3V                 5uA @ 5.5V, 1uA @ 2.5V   15uA @ 3.3V
Slow read current    200uA @ 1MHz, 3.3V         2.5mA @ 5MHz, 2.5V       Not specified
Fast read current    3mA @ 20MHz, 3.3V          6mA @ 10MHz, 5.5V        10mA @ 20MHz, 3.3V
Write current        Same as read               3mA @ 5.5V               30mA @ 3.3V
Erase time           None required              –                        25ms (sector or block)
Byte write time      < SPI transfer time        –                        –
Endurance            > 1 trillion read/writes   > 1,000,000 writes       > 100,000 writes
Data retention       > 150 years                –                        –
Density range        4Kb – 8Mb                  64b – 8Gb                256Kb – 2Gb

1 = FRAM specs from Cypress FM25L16B, 2Kx8 organization, 2.7V to 3.6V operating voltage

2 = EEPROM specs from Microchip 25LC160, 2Kx8 organization, 2.5V to 5.5V operating voltage

3 = Flash specs from SST (Microchip) SST25VF512, 64Kx8 organization, 2.7V to 3.6V operating voltage

FRAMs are available with SPI, I2C and parallel interfaces, and there is considerably more detailed information about them on the Internet. The primary suppliers are Cypress Semiconductor (which acquired Ramtron in 2012), Fujitsu Semiconductor (Ramtron’s primary foundry) and Rohm Semiconductor.

Back to what originally caught my eye regarding FRAM: Texas Instruments has integrated FRAM into their MSP430 micros. The MSP430FR series of micros has from 4KB to 128KB of on-chip FRAM along with a smaller amount of SRAM (presumably for stack and other frequently used variables). In contrast to some micros that lose some or all of their RAM contents in a deep sleep mode, with some care over which variables are stored in FRAM versus SRAM these devices could be completely powered down while retaining important state information and data. With operating currents as low as 100uA/MHz the MSP430 micros are strong players in the ultra low power micro market, but the addition of internal FRAM gives them a distinct advantage in certain types of applications.




Leakage current & current leaks – part 3

This post will focus on “current leaks”, the points of power consumption that are often designed into a device without much thought. In some cases they are things you have some amount of control over, in other cases they are things you have little control over but have to account for in your power budget in order to get an accurate power consumption or battery life estimate. While technically not leaks as in leakage current, I call them leaks because they are where your battery life drains away.

Resistive leaks

The amount of power a pull-up resistor consumes surprises most engineers new to low power design. At 10K ohms, a pull-up to 3.3V on a single low level input to a micro can draw over 300uA. Even a fairly high value like 1Meg ohm can draw over 3uA in the same situation. This may seem low but can be considerable compared to a micro with sub-microamp sleep current or a power budget of several microamps average current to achieve several year life from a coin cell battery. The internal pull-up/down resistors on a micro’s GPIO are usually not well controlled and typically range from 5K to 20K ohms. They should never be used if you are concerned with low power consumption.
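The arithmetic is just Ohm’s law, but it is worth seeing how quickly a pull-up dominates a microamp-level power budget. A quick sketch (the rail voltage and the 10% duty-cycle figure are illustrative):

```python
# Ohm's-law check of the pull-up figures above, plus the average-
# current contribution if the line sits low a fraction of the time.
# The 10% duty figure is illustrative.

def pullup_ua(v_rail, r_ohm):
    """Current through a pull-up while the line is held low."""
    return v_rail / r_ohm * 1e6

for r_ohm in (10e3, 100e3, 1e6):
    print(f"{r_ohm / 1e3:6.0f}k to 3.3 V, input low: "
          f"{pullup_ua(3.3, r_ohm):6.1f} uA")

# Even a 1 Mohm pull-up on a line that sits low 10% of the time adds
# an average draw comparable to many micros' deep-sleep current:
print(f"1M pull-up, low 10% of the time: "
      f"{pullup_ua(3.3, 1e6) * 0.1:.2f} uA average")
```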

In situations like open-collector or tri-state outputs a pull-up or pull-down resistor is required (and depending on signal rise-time requirements a fairly low value may be required). In this case the power just needs to be accounted for in the power budget. In other cases, you may have other options. For example, for a switch that only connects to the micro’s inputs and does not directly connect to a hardware circuit, consider the following:

  • Instead of grounding one side of a SPST switch with a pull-up on the micro’s side, use a GPIO instead of ground. This GPIO can be set normally high and then only set low when the micro needs to “read” the switch input. This output from the micro can be shared with several switches. There will still be a small current flow when the switch is closed since the GPIO may only reach 70%-90% of the Vcc rail. To reduce the current flow even more, when not needed enable the internal pull-up for this GPIO and configure it as an input.
  • Use a SPDT switch with one side pulled-up and the other tied to ground. In this setup a pull-up isn’t required on the micro’s input (be sure to select a switch that doesn’t have an “open” position). A SPDT switch is usually larger and more expensive than a SPST switch but the current savings can be significant.
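
The savings from the shared-GPIO trick can be estimated with a duty-cycle calculation like the sketch below (a hypothetical helper; the 90% figure is the high-level GPIO output assumption from the first bullet):

```python
def avg_scan_current_ua(vcc_v, r_pullup, t_read_s, period_s,
                        gpio_high_frac=0.9):
    """Worst-case (switch closed) average pull-up current when the far side
    of the switch is a GPIO driven low only during brief read windows."""
    i_read = vcc_v / r_pullup                         # GPIO low while reading
    i_idle = vcc_v * (1 - gpio_high_frac) / r_pullup  # residual, GPIO "high"
    duty = t_read_s / period_s
    return (i_read * duty + i_idle * (1 - duty)) * 1e6

# 10K pull-up to 3.3V, 100 us read window every 100 ms
i_avg = avg_scan_current_ua(3.3, 10_000, 100e-6, 0.1)  # about 33 uA
```

Note that the residual current while the GPIO is "high" dominates, which is exactly why the first bullet suggests switching the GPIO to an input with its internal pull-up enabled between reads.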

Voltage dividers can be another source of wasted power. When used in a circuit there is little you can do about them other than using the highest value resistors your circuit can tolerate (and accounting for them in your power budget). When used to reduce a voltage for an A/D converter input, a P-FET can be used to turn off the voltage at the top of the divider until the firmware is ready to read the voltage level (the bottom resistor will act as a pull-down to prevent the A/D input from floating). A small amount of current will flow through the P-FET when turned off but it should be much lower than the current through the divider if the P-FET is not used.  In this situation, an N-FET can not be used at the bottom of the divider since when it is turned off the A/D converter input may exceed its input voltage spec.
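
The same duty-cycle reasoning shows how gating a divider with a P-FET turns a continuous drain into a tiny average one. A sketch with hypothetical values:

```python
def gated_divider_avg_ua(v_in, r_top, r_bottom, t_on_s, period_s):
    """Average divider current when a P-FET enables it only around each
    A/D reading; leakage through the off P-FET is ignored here."""
    i_on = v_in / (r_top + r_bottom)
    return i_on * (t_on_s / period_s) * 1e6

# 100K + 100K divider on 3.3V, enabled for 1 ms around each 1-per-second reading:
# about 0.017 uA average, versus 16.5 uA if left permanently connected
i_avg = gated_divider_avg_ua(3.3, 100_000, 100_000, 1e-3, 1.0)
```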

A thermistor circuit is essentially a voltage divider in which one resistor's value varies with temperature. When using a thermistor with an A/D converter, depending on the voltage rail used for the high side of the thermistor, a P-FET may be used here too, or a lower cost N-FET can be used at the bottom of the divider. If highly accurate temperature measurements are not important, a GPIO can be connected to the top or bottom of the divider instead. If doing this, it is better to use the GPIO at the bottom, since there is usually less variation in the low level output (0.2V to 0.4V for most micros) than in the high level output (70% to 90% of Vcc), so the temperature measurements will be more accurate; however, this will consume more power in the "off" state than using the GPIO at the top. You can also route the GPIO's voltage to an A/D input and use the measured value to adjust the thermistor reading for improved accuracy.

Termination resistors for a serial communications interface are another sneaky path for leaking current. The two-signal I2C interface is commonly used for connecting memory, A/D converters and various types of sensors to a micro. Over short distances, the value of the pull-up resistors on these two lines is determined mainly by the maximum transfer speed required on the interface along with the input leakage current of the devices attached to the interface. If I2C communications are part of the normal operation of your device, you should work through the exercise of sizing the pull-up resistors and not just rely on values you saw on a reference design. Properly sizing the resistors will ensure the least amount of current is lost through them and provide the information you need to budget for this current draw.
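
As a starting point for that exercise, the I2C specification bounds the pull-up value in both directions: the bus capacitance and required rise time set the maximum, and the driver's sink capability (V_OL at I_OL) sets the minimum. A sketch using the spec's formulas:

```python
def i2c_pullup_bounds(vdd_v, bus_cap_f, t_rise_s, v_ol=0.4, i_ol=0.003):
    """(R_min, R_max) for an I2C pull-up per the I2C spec:
    R_max = t_r / (0.8473 * Cb), R_min = (Vdd - V_OL) / I_OL."""
    r_max = t_rise_s / (0.8473 * bus_cap_f)
    r_min = (vdd_v - v_ol) / i_ol
    return r_min, r_max

# 3.3V bus, 100 pF bus capacitance, 300 ns rise time (fast-mode maximum):
# roughly 970 ohms to 3.5K. Picking the largest workable value minimizes
# the current lost while either line is held low.
r_min, r_max = i2c_pullup_bounds(3.3, 100e-12, 300e-9)
```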

RS485 is another good example of a serial interface with terminating resistors. RS485 is often used in industrial environments and uses a differential pair for transferring data. In a point-to-point setup, there is typically a 100 or 120 ohm resistor placed across the pair. When the interface is in the idle state, 1.6mA or 2mA flows through this resistor. Since the two differential lines are always in opposite states during a data transfer, at the 1.5V minimum differential voltage 12.5mA or 15mA will flow through the resistor during data transfers. In a multi-drop configuration, bias resistors are incorporated: a pull-up to 5V on one of the lines and a pull-down to ground on the other. These resistors are usually in the 680 ohm to 1K ohm range. With a 1K ohm pull-up, a 1K pull-down and a 120 ohm resistor across the pair, about 2.4mA will flow through these resistors at all times.
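
Those RS485 figures are straight Ohm's-law calculations, reproduced here as a sketch (helper names are my own):

```python
def rs485_term_current_ma(v_diff_v, r_term_ohm):
    """Current through the termination resistor while driving data."""
    return v_diff_v / r_term_ohm * 1000

def rs485_bias_current_ma(v_rail_v, r_up, r_term, r_down):
    """Standing current through the series bias network: pull-up,
    termination and pull-down."""
    return v_rail_v / (r_up + r_term + r_down) * 1000

i_120 = rs485_term_current_ma(1.5, 120)               # 12.5 mA during transfers
i_bias = rs485_bias_current_ma(5.0, 1000, 120, 1000)  # about 2.4 mA, always
```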

Termination resistors make wired Ethernet a real power hog. Depending on the speed and configuration, the two termination resistors on the differential output driver alone can draw upwards of 60mA CONTINUOUSLY. Transmit currents can also easily run in the 100mA to 180mA range for an Ethernet controller. Don't assume your device will only be transmitting when it has data to send: the protocol in use on the network can generate a significant amount of traffic that your application is completely unaware of (a topic that could fill another series of posts). For a wired Ethernet connected device, the only practical alternatives to line power are Power-over-Ethernet or a good sized solar panel and lead-acid battery, unless the Ethernet interface is kept powered down except when the device needs to transmit (and the protocol is selected/tweaked to handle that).

Enables and chip selects

In an analog signal chain feeding into an A/D converter for periodic sampling, using an op-amp with an enable can save milliamps compared to an op-amp that is operating all the time. The short time penalty paid in allowing the signal to settle after enabling the op-amp before performing the A/D conversion needs to be taken into consideration. Even if the settling time extends into a few milliseconds, the micro should be able to wake up, enable the op-amp and then go back to sleep while waiting for the settling period.

It may be tempting when using a part like an EEPROM with a SPI interface to tie the chip select pin low to save a GPIO for some other purpose when there are no other devices on the SPI interface. This is a bad idea if you are concerned about low power consumption. On many devices the chip select also serves to put the device into either active mode or standby mode so always having the chip select active will prevent the device from entering standby mode. In reviewing a few small EEPROM datasheets (256 x 8 parts), the active read currents were in the 1mA to 3mA range while the standby currents were in the 1uA to 5uA range (with the CS pin specifically called out at Vcc). This bears further research and experimentation to see what the current draw would be with CS active while the SPI lines are in an idle state but it is likely to be closer to the read current than the standby current. As an aside, some parts may not even function with the chip select always active.  Some devices require a transition on the CS line to enter an active state and for some the CS pin can serve multiple functions, such as starting a conversion on an A/D converter.

For the ultimate power savings with a device like a serial EEPROM or Flash device, if the part isn't frequently used then use a GPIO and a P-FET to control power to the part. Some micro manufacturers have design notes showing an EEPROM being powered directly by a high-current GPIO. This may not work reliably, depending on several specs of the micro and the EEPROM. As mentioned earlier, the high level output for a CMOS device is typically spec'd at 70% to 90% of Vcc. If the EEPROM is being powered at 70% of the micro's Vcc rail (2.31V for a 3.3V Vcc) that may be below the EEPROM's operating voltage. Even if it isn't, an EEPROM output at 70% of its Vcc rail (1.6V if Vcc is at 2.3V) likely will not reach the minimum high level input voltage for the micro. And if that isn't a problem, when turned off the EEPROM's Vcc pin may still be at 0.2V to 0.4V (the maximum low level output voltage), so the EEPROM may draw more current than it normally would in standby. Using a P-FET to control power to the EEPROM addresses all of these potential issues.

If you decide to control power to an EEPROM, Flash or any other device, be sure to consider:

  • Any pull-up resistors on signals connected to the powered-down device must be tied to the Vcc rail of that device, not the micro's, or considerable current may flow through the pull-up when the device is turned off. If the pull-up value is low enough the device may even remain powered up, and if an internal GPIO pull-up is used, enough current may flow to damage the micro.
  • Inputs to the micro from the powered down circuit should have pull-down resistors to prevent the signals from floating or the GPIO reconfigured as an output driving a low when the device is off.
  • If using a bypass cap on the device, it will be charged and discharged every time the device is turned on so use as small a value cap as possible. This will minimize power wasted in charging the cap and allow the device Vcc rail to fall quickly and minimize any unexpected behavior as the device powers down.
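
The bypass cap penalty in the last bullet can be put in perspective with a quick estimate. Each power-up draws C·V² from the supply (half ends up stored in the cap, half is dissipated charging it); a sketch with hypothetical values:

```python
def bypass_cap_avg_power_uw(c_farads, vcc_v, power_cycles_per_s):
    """Average power lost to recharging a bypass cap on every power-up.
    The supply delivers C*V^2 per charge: half stored in the cap, the
    rest dissipated in the switch and trace resistance."""
    return c_farads * vcc_v**2 * power_cycles_per_s * 1e6

# 100 nF at 3.3V, device power-cycled once per second
p = bypass_cap_avg_power_uw(100e-9, 3.3, 1.0)  # about 1.1 uW
```

Around 1 µW (roughly a third of a microamp at 3.3V) is small, but not negligible against a several-microamp coin cell budget, and it scales directly with cap value and cycling rate.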

If you are concerned about ultra-low power consumption and need a device with a SPI or I2C interface, take the time to research the standby currents for similar devices with each interface. I did this with two EEPROMs from the same manufacturer with nearly identical specifications except for the interface. The standby current for the SPI device across its voltage range was 0.5uA to 3.5uA while the range for the I2C device was 1uA to 6uA. Presumably the difference is due to the additional circuitry required to monitor the I2C interface for activity and transition between standby and active modes. This may not seem like a significant difference but it would make literally years of difference in operational life when using a coin cell battery.
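
To see how that plays out, here's a simple coin cell life estimate (a hypothetical 225 mAh CR2032-class cell, with self-discharge ignored):

```python
def coin_cell_years(capacity_mah, avg_current_ua):
    """Ideal service life of a cell at a constant average current draw,
    ignoring self-discharge and capacity derating."""
    hours = capacity_mah * 1000.0 / avg_current_ua
    return hours / (24 * 365)

# 0.5 uA (SPI) vs 1.0 uA (I2C) standby current on a 225 mAh cell
spi_years = coin_cell_years(225, 0.5)  # about 51 years (theoretical)
i2c_years = coin_cell_years(225, 1.0)  # about 26 years (theoretical)
```

In practice self-discharge caps the real life well short of these figures, but the point stands: a half-microamp difference is years of operating life.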

Voltage regulators

Voltage regulator circuits can use and waste considerable amounts of power. The current draw of the voltage regulator IC itself can be substantial and easily overlooked while concentrating on efficiency numbers or drop-out voltage specs. For LDOs, look for the “ground pin current” spec which typically increases with the load current and can vary from tens of microamps to tens of milliamps. For switching regulators, look for the “operating current” or “quiescent current” spec. This typically increases with the switching frequency and ranges from hundreds of microamps to a few milliamps.

Switching voltage regulators although generally more efficient than linear regulators can also waste power through poor selection of topologies and components.  This wasted power is hard to “see” in a circuit and can be hard to uncover without comparing the power into the regulator and power out of the regulator. I won’t get into a full discussion of the regulator circuits and topologies but here are a few things to consider:

  • An integrated regulator chip is typically smaller and cheaper than a controller chip with external devices, but is generally not as efficient as a well designed regulator circuit. Having said that, if you don't have much experience with regulator circuits and don't have time to thoroughly research them, an integrated regulator chip will likely provide a more efficient solution.
  • A synchronous rectifier circuit will generally improve efficiency over a comparable non-synchronous circuit by several percentage points.  However, as the circuit’s duty cycle increases the difference in efficiency tends to decrease. Duty cycle typically increases as the difference between voltage-in and voltage-out decreases so with a low voltage differential (say 5V to 3.3V) synchronous rectification has less benefit.  For relatively low power circuits there won’t be a significant cost or size difference between the two circuits.
  • It can be tempting to pick regulators with very high switching frequencies in order to use smaller inductors and capacitors. As the switching frequency increases the amount of power used in turning the MOSFET(s) on also increases and can become a significant part of the power dissipation of the regulator circuit. If a high switching frequency is required, look for a MOSFET with low gate charge or gate capacitance specs.
  • As mentioned in Part 2, the efficiency of a MOSFET is a function of gate voltage and load current. Most smallish N-channel FETs need a gate voltage in the 8-10V range to fully turn on, so if stepping down from 5V the FET will never completely turn on and the actual RDS(on) may be several times higher than the datasheet spec. There are a number of newer N-FETs that are optimized for low voltage operation and will perform better with a low input voltage and relatively low currents.
  • Before selecting a switching regulator over an LDO for a battery-powered application, be sure to consider the minimum voltage in/out differential of the switching regulator. Where an LDO may only require a 100mV differential, a switching regulator may require 500mV or more. Depending on the battery technology and discharge curve, that large a voltage differential could reduce the effective battery capacity by 10% to 20% or much more.
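
The switching-frequency point above can be quantified: the power spent driving the gate is approximately P = Qg × Vgs × fsw, so it grows linearly with frequency. A sketch (the 20 nC part is a hypothetical example):

```python
def gate_drive_power_mw(q_gate_nc, v_gate_v, f_sw_hz):
    """Approximate gate-drive loss: the total gate charge is moved
    through the gate voltage once per switching cycle."""
    return q_gate_nc * 1e-9 * v_gate_v * f_sw_hz * 1e3

# hypothetical FET with 20 nC total gate charge, 5V gate drive
p_500k = gate_drive_power_mw(20, 5.0, 500_000)   # 50 mW
p_2meg = gate_drive_power_mw(20, 5.0, 2_000_000) # 200 mW
```

Quadrupling the frequency to shrink the magnetics quadruples this loss, which is why a low gate charge FET matters at high switching frequencies.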

Temperature & current leaks

It has been touched on a couple of times through this series of posts but bears repeating: hot temperatures can negatively impact a device's power consumption. This may be due to increased leakage current or increased RDS(on) of the FET(s) in a regulator circuit. For an ultra-low power design it is critical that you pay attention to the thermal aspects of your design and get as much heat out of the packages as practical; poor thermal design will contribute to your current leaks. For a specific device offered in a variety of packages, the thermal resistance can vary by a factor of 10X or more from package to package, particularly among small SMT packages. The encapsulation materials used for most SMT parts are very poor thermal conductors, so in general, the fewer pins a part has and the smaller the pins are, the worse its thermal performance will be. Leadless packages that solder directly to the PCB tend to have much better thermal characteristics than leaded packages that must conduct heat through their pins.

It is important to really understand the thermal resistance specifications in a device's datasheet. If there is no airflow inside your product, the basic junction-to-ambient thermal resistance spec is of little use. With no airflow, the majority of the heat dissipation will occur through the circuit board, making the junction-to-board or junction-to-case thermal resistance the spec of interest. With SMT packages used for power devices (particularly packages with a thermal pad) the junction-to-case spec usually refers to the bottom of the case, but that isn't always true; it may refer to the top of the case. Particularly when comparing thermal resistance specs from different manufacturers, it is important to understand what each spec really means to know if a comparison is valid.

That wraps up this series of posts on leakage currents and current leaks.  These are all areas that must be considered for successful low power design.  As you should have noticed, missing just one of these points of leakage current or current leaks can blow away your entire power budget and the lower your power budget is the more critical all of them become.


Accurate low-current measurements with a scope

Last week was incredibly busy so I wasn’t able to put the time in to complete the third part of the “Leakage currents & current leaks” post. This will be a short post with a link to a white paper on our website for more details.

Most engineers consider the oscilloscope their first tool of choice for hardware development work. Yet very few engineers ever consider how accurate their scope is. Most of the major oscilloscope manufacturers place great importance on the timing aspects of their products. Multi-gigahertz sample rates are fairly common in mid-range digital scopes today, yet most of those scopes only have 8-bit A/D converters. While timing accuracy is often spec'd in double-digit ppm, at a scope's lowest volts/division setting the voltage measurement error can be as large as the signal level you need to measure.
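
The resolution side of that problem is easy to quantify. Assuming the common 8-vertical-division display (an assumption; some scopes use 10), an 8-bit converter's smallest step at a given volts/div setting is:

```python
def scope_lsb_mv(mv_per_div, divisions=8, bits=8):
    """Smallest voltage step an N-bit scope can resolve: full scale is
    volts/div times the number of vertical divisions."""
    return mv_per_div * divisions / 2**bits

lsb = scope_lsb_mv(10)  # about 0.31 mV per count at 10 mV/div
```

On top of quantization, DC gain accuracy specs of typically a few percent of full scale can swamp a millivolt-level current-sense measurement entirely; the whitepaper walks through the full error budget.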

Below are screen shots taken on the same mid-range MSO scope from one of the top scope companies of similar current waveforms using a “standard” scope probe and our CMicrotek µCP100 low current probe. Using a 10mV/division setting on the scope with the standard probe produced a waveform that was way too fuzzy to be useful. The measurements taken with the cursors were almost 2X too high for the peak and plateau portions of the waveform and well over 5X too high for the portions before the peak and after the plateau.

“Standard” Scope Probe

µCP100 Current Probe

For more information on these waveforms and an example of the calculations used to determine the measurement accuracy of a digital scope check out our “Accurate Current Measurements with Oscilloscopes” whitepaper.

My plan is to wrap up the “Leakage currents & current leaks” posts next week. If you find the Low Power Design blog interesting, please help spread the word about it so we can build up a large enough audience to make it worth the time it takes. We are on Facebook, Twitter, Google+, the Element14  Community and have a LinkedIn group.

Leakage current & current leaks – part 2

Virtually all semiconductor devices have some amount of leakage current. It is interesting to note as operating voltages and device power consumption keep dropping, leakage current is becoming a larger percentage of a device’s power consumption. In most cases there isn’t much you can do about leakage currents other than be aware of them and account for them in your power analysis. In some cases there may be a significant difference in leakage current levels from manufacturer to manufacturer for devices that perform the same function so it pays to take the time to include leakage current comparison in your component selection. For a CMOS device that isn’t actively being clocked, leakage current can make up a significant part of its power consumption that may be called out as “standby current” or “quiescent state current” in the datasheet specs.

Diodes (including LEDs) are one of the few types of devices where the circuit design can play a part in determining the extent of the leakage current.

Diode & LED leakage current

Diodes can present substantial leakage currents when in a reverse voltage condition; this is often referred to as reverse current. Similar to capacitors, there are a number of factors that come into play in determining the level of leakage current. Unlike capacitors, there aren't any simple formulas to help you estimate what the reverse current is for a diode. Diode reverse current varies considerably from device to device and isn't necessarily dependent on voltage rating, current rating or physical size.

The graph below shows a typical diode voltage/current curve. Notice that under forward voltage conditions (blue shaded area) diodes conduct very little current until the voltage starts approaching the diode's forward voltage. Although not technically a leakage current, current will start flowing at a few hundred millivolts below the forward voltage level where the diode is expected to "turn on" and start conducting. Under reverse voltage conditions (pink shaded area), some amount of leakage current flows as soon as the voltage is reversed and increases as the reverse voltage increases. Note that this graph is not to scale; the forward voltage of a diode is typically less than one volt while the reverse voltage and breakdown voltage are usually tens to hundreds of volts.

diode V-C curve

There are a few things to consider regarding diode reverse current:

  • Schottky diodes tend to have higher reverse currents than standard diodes.  In a recent search on DigiKey, for SMT Schottky diodes the reverse current specs ranged from 100nA to 15mA while for standard diodes the range was nearly an order of magnitude lower, from 500pA to 1.5mA. A Schottky diode may be appropriate for your design because of its low forward voltage but be aware that its leakage current can be considerable.
  • Similar to leakage current for caps, the applied voltage relative to the rated reverse voltage can have a significant impact on the reverse current of diodes. Reductions in reverse current as the applied voltage is reduced relative to the rated reverse voltage aren’t quite linear but can reach 90% or more.  This is often shown in a graph in the diode data sheet with reverse current plotted against percent of rated reverse voltage.
  • Temperature will also have a significant impact on a diode's reverse current. It is not uncommon for a 20°C temperature increase to cause a 10X or greater increase in reverse current. One option to improve this situation in power applications, where a diode can heat up while operating, is to use a physically larger device and large copper areas on the circuit board to help transfer heat out of the diode and into the board.
  • As shown above, the reverse current can become significant as the breakdown voltage is approached and increase to many times the rated current of the device if the breakdown voltage is exceeded. This is referred to as “avalanche current” because of the sudden increase and is the point where the part is likely to be destroyed.
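
The temperature effect in the third bullet compounds quickly. Using the rough 10X-per-20°C rule of thumb from above (actual curves are in the datasheet):

```python
def reverse_current_at_temp(i_ref_na, delta_t_c, decade_c=20.0):
    """Scale a datasheet reverse current by the rough 10X-per-20C rule."""
    return i_ref_na * 10 ** (delta_t_c / decade_c)

# a diode spec'd at 100 nA (25C) running 40C hotter
i_hot = reverse_current_at_temp(100, 40)  # about 10 uA -- a 100X increase
```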

LEDs will exhibit similar reverse and forward currents as other diodes. A few things to consider specific to how LEDs are typically used:

  • Reverse voltage with an LED typically is not a problem unless there are multiple power rails in a design and the cathode is driven to a higher voltage than the anode when the LED is off.
  • It can be tempting to use a high-drive GPIO to directly control an LED as shown in the diagram below. Because of the typical logic “high” and “low” levels of a CMOS micro, this can still result in hundreds of millivolts across the LED when it is off. This will create the condition just to the left of the “Forward Voltage” point on the voltage/current curve where there may be from tens of microamps to a few milliamps of current flow through the LED. If you choose to use a GPIO for cost/space reasons, it is better to connect the GPIO to the anode side and drive it high to turn on the LED. A CMOS micro will usually have a low level output below 0.4V while the high level can be as low as 70% of the Vcc rail. If you connect the GPIO to the cathode, at 70% of 3.3V that is only 2.3V which may not even be high enough to turn the LED completely off.
  • To virtually eliminate the current flow through an LED in the off state, use an N-channel MOSFET to control the cathode of the LED (see diagram below). This allows the cathode to float so the only current paths available are the circuit board itself and the solder mask (typically 100M ohm or higher) and the leakage current path through the MOSFET (typically in the low nanoamp range for a small N-channel FET but it can vary).

LED hookups

Unrelated to leakage current, LEDs can waste a lot of power if used without careful consideration.  Users love LED indicators but generally don’t have a clue about their impact on a device’s battery life. When LEDs are required, here are a few things to consider to reduce their power consumption:

  • Keep in mind that with current requirements of at minimum several milliamps, an LED can draw much more current than a sleeping or slow-running micro, and high brightness LEDs can draw more current than a Bluetooth or ZigBee radio uses when transmitting.
  • Think about how bright your LEDs really need to be. Really bright LEDs generally aren’t needed unless a device is used outdoors or needs to be visible from across a large room. Light pipes and similar low cost plastic optics can be very useful for making an LED appear to be brighter or larger and may allow you to decrease the LED current by several milliamps. Particularly in red and green, high efficiency LEDS are available today that provide much better brightness/power performance than older LEDs.
  • When using an LED as an on/off indicator, consider a slow flash of the LED instead of having it on constantly. Turning the LED on for ½ second every 3 seconds provides roughly an 83% reduction in the power used for this indicator.
  • On a much smaller time scale, use a PWM to control the on/off duty cycle of the LED. The human eye averages rapid flicker into steady light, so switching the LED on/off with a 50/50 duty cycle at a rate faster than 1kHz will cut the power in half with an almost imperceptible reduction in brightness. The timers on many modern micros have PWM outputs or other output modes that can be used for this with little to no involvement by the firmware other than starting or stopping the timer.
  • If your device has more than a few LEDs that can be on simultaneously, consider adding an ambient light sensor to your product and controlling a PWM to adjust the brightness based on ambient light conditions.
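
The blink and PWM savings above multiply together; a quick sketch of the combined average current:

```python
def led_avg_current_ma(i_on_ma, on_s, period_s, pwm_duty=1.0):
    """Average LED current with a slow blink (on for on_s out of every
    period_s) and an optional fast PWM duty applied while 'on'."""
    return i_on_ma * (on_s / period_s) * pwm_duty

# 10 mA LED, on 0.5 s out of every 3 s, 50% PWM while lit
i_avg = led_avg_current_ma(10, 0.5, 3.0, pwm_duty=0.5)  # about 0.83 mA
```

A 10 mA indicator drops to well under 1 mA average, an over 90% reduction, while still looking "on" to the user.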

Up to this point I have covered leakage currents, currents that may not be obvious but are usually specified in part data sheets. This part will deal with “current leaks”, non-obvious current flows and power losses that are caused by the circuit design. In some cases these “current leaks” may be reduced or managed somehow, in other cases you just need to be aware of them so they can be accounted for in your power budget.

Before leaving semiconductor devices: MOSFETs are among the biggest power wasters if not used carefully. While MOSFETs usually have a leakage current spec, it is usually on the order of tens to a few hundred nanoamps. The bigger issue with MOSFETs is inefficient operation caused by not operating them under the right conditions to meet their Rds(on) spec. Borrowed from the "Low Power Design" e-book, here are a few things to consider when using MOSFETs:

  • The efficiency of a MOSFET is a function of gate voltage and load current. Most N-channel FETs need a gate voltage in the 8-10V range to fully turn on, so simply driving the gate with a GPIO won't put the FET in its lowest RDS(on) range. Logic level gate FETs may be better in this regard but typically they just have a lower minimum turn-on threshold and may still need over 5V to achieve their RDS(on) spec. In these cases you should consider using a P-channel FET, or even a gate driver IC to drive the gate of the N-channel FET with the voltage it is switching (or use a step-up regulator or voltage doubler circuit to provide a higher voltage if the input voltage exceeds the maximum gate voltage). The graph below shows the impact of gate voltage on Rds(on) for the Fairchild FDS8449 N-channel FET. The FDS8449 has a max gate turn-on threshold of 3V, but as you can see, at 3V the Rds(on) is about 2.4X higher than at 10V with no load, and much higher as the load increases.

Rds(on) vs gate voltage

  • When using a P-channel FET to drive a load, a GPIO may not drive the gate high enough to completely turn off the FET so you may be leaking power through the FET. This can often go un-noticed since the amount of power is too low to activate the load.
  • A P-channel FET of similar rated voltage and current as an N-channel FET will typically have 50-100% higher Rds(on) than the N-channel FET. With Rds(on) specs on modern FETs in the double-digit milliohm range even doubling the Rds(on) produces a fairly low value. However, that is simply wasted power that can easily be eliminated if low-side switching is an option for your application. This is also important to keep in mind if for some reason you can’t address one of the issues discussed here that prevents the FET from operating close to its lowest Rds(on), changing the type of FET may alleviate the problem. If you have a P-channel FET operating at 2X its lowest Rds(on) then you could possibly reduce the Rds(on) by a factor of 4X by using an N-channel FET.
  • Just like a resistor, the Rds(on) of a FET increases with temperature. The graph below shows the impact of temperature on Rds(on) for the Fairchild FDS8449 N-channel FET. As you can see, a 50°C increase in temperature results in a nearly 20% increase in Rds(on). Even if your product is normally used in a room temperature environment, a 20-50°C temperature rise at the FET’s die isn’t uncommon (another reason to operate the FET in its lowest Rds(on) range). This is another situation where keeping a part cooler helps prevent wasting power, as the Rds(on) increases the part will get hotter, increasing the Rds(on) and so on. Thermal runaway isn’t likely to happen but a hot FET and a non-optimal gate voltage can combine to generate a lot of excess heat and waste a lot of power.

Rds(on) vs junct temp
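
The feedback loop described in that last bullet can be sketched as a simple fixed-point iteration. The tempco below (~0.4%/°C, consistent with the roughly 20% rise over 50°C in the FDS8449 graph) and the thermal resistance are hypothetical illustration values:

```python
def settle_rdson(i_load_a, rdson25_mohm, theta_ja_c_per_w,
                 t_amb_c=25.0, tempco_per_c=0.004, iters=50):
    """Iterate dissipation -> die temperature -> Rds(on) until the
    electro-thermal loop settles. Returns (Rds(on) in mohm, die temp C)."""
    t = t_amb_c
    r = rdson25_mohm * 1e-3
    for _ in range(iters):
        r = rdson25_mohm * 1e-3 * (1 + tempco_per_c * (t - 25.0))
        p = i_load_a ** 2 * r               # conduction loss only
        t = t_amb_c + theta_ja_c_per_w * p  # resulting die temperature
    return r * 1e3, t

# 5A load, 15 mohm (at 25C) FET, 50 C/W junction-to-ambient
r_hot, t_die = settle_rdson(5.0, 15.0, 50.0)  # settles near 16 mohm, ~45C
```

With sane parameters the loop converges rather than running away, matching the post's observation that thermal runaway is unlikely but the waste is real.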

This started out to be a 2-part article; next week I'll wrap up with a 3rd and final part covering more sources of power loss commonly found in circuit designs.


Battery selection, part 1

There are a wide variety of battery technologies available today, ranging from the traditional lead-acid battery to the latest lithium polymer (LiPo) batteries. For the most part, your application will dictate the type of battery you use. In some cases you may have to decide between battery types when more than one would be suitable for your application.

Batteries seem like they should be easy to use: just hook them up and go. In fact you can usually do just that, but chances are you won't be satisfied with the resulting charge duration or battery life. Batteries can easily be abused if you don't fully understand their specs. Some battery chemistries are more tolerant of abuse than others, but any abuse tends to shorten battery life. Each battery technology has its own unique set of characteristics, and many of those characteristics vary based on the environment the batteries are in, how they are used and, for rechargeable batteries, how they are charged. To a large degree, and for the sake of this discussion, you can model a battery as a fixed voltage source connected to the load through a variable series resistor. A number of things such as temperature, load and age act on this variable resistor to decrease the battery output.
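
That model is worth writing down, because it explains most of the behavior discussed below: the terminal voltage is just the open-circuit voltage minus the drop across an internal resistance that grows with cold, load and age. A sketch with hypothetical numbers:

```python
def battery_terminal_v(v_open_circuit, r_internal_ohm, i_load_a):
    """Simple battery model: a fixed source behind a series resistance."""
    return v_open_circuit - i_load_a * r_internal_ohm

# hypothetical fresh LiPo cell: 4.2V open circuit, 150 mohm internal
v_warm = battery_terminal_v(4.2, 0.150, 2.0)  # 3.9V under a 2A load
v_cold = battery_terminal_v(4.2, 0.300, 2.0)  # 3.6V if cold doubles R_int
```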

 Below are several important things you should know that apply to nearly all battery chemistries:

  • Nominal voltage – My advice to you is to forget you ever heard the nominal voltage spec for the battery you are working with. Nominal voltages generally reflect a certain point in the battery’s discharge curve under conditions your battery may never see in the real world. For example, a LiPo cell has a nominal voltage of 3.6V but its operating voltage range is from 4.2V to 3.0V. As you can see in the discharge curves below for a LiPo battery, under a light load (the blue line) the curve flattens out around 3.6V just before falling off the cliff. Other than this short period of time, it would be hard to correlate anything on these discharge curves to 3.6V. The two critical voltages you need to be concerned with are the maximum voltage for charging and the rated cut-off voltage for discharging. To get the full life of your batteries, never charge above the first or discharge below the second.
  • Factors affecting output – Load and temperature are the primary factors that create the discharge curves for rechargeable batteries. In the case of load, the variable resistor in our battery model acts much like any other resistor: the output decreases as the load increases. Temperature makes this variable resistor act the opposite of a real resistor: as the temperature decreases the apparent resistance increases, reducing the voltage at the output. The combination of a heavy load and low temperatures can result in a considerable decrease in the output voltage. As shown in the graph below, the output of a LiPo battery can decrease by over a volt from a light load at high temperature to a heavy load at low temperature. For non-rechargeable batteries, the age of the battery also impacts the voltage. As these batteries get older (whether they are in use or not), the apparent resistance increases, causing the output to decrease.

LiPo temp curves


  • Discharge curves – All battery technologies have a characteristic discharge curve, typically plotting voltage against mAh capacity (or age). While useful for comparing battery chemistries, once you select a battery technology you need to look at the discharge curves for specific battery models. Variations on the chemistry and battery construction from manufacturer to manufacturer yield different discharge curves. Specifically, you need to pay attention to the curves for the load you expect to place on the battery and the temperatures you expect your product to encounter. These two factors can have a significant impact on the charge duration your batteries will have in actual use. The discharge curves in the graph below are for a LiPo battery at three different loads (rated discharge rate multiplied by 2, 1 and 0.2). You really need to look at both the voltage at temperature and load graph and the discharge curves to get a feel for how a battery will perform in your application.

[Figure: LiPo discharge curves at three load levels]

  • Rated current – Batteries typically have both maximum continuous current and maximum pulse current specs. These values should never be exceeded: best case, you severely decrease the life of the battery; worst case, with lithium based batteries, the result can literally be smoke and flames. If your design requires getting close to these rated currents, contact the battery manufacturer to discuss your application and make sure your product will be safe and operate properly. You may also consider using two batteries in parallel to increase the available current (this carries its own set of issues that are beyond the scope of this discussion).

Next week I’ll discuss battery charging and several other important aspects of battery selection.

Micro selection, part 4

This week I will discuss power saving modes in micros. You will need to thoroughly research the modes you want to use AND what the on-chip peripherals are capable of doing in these modes. On a firmware project I recently worked on, there was a single sentence with HUGE implications buried on page 358 of a 550+ page datasheet: “During Sleep mode, all clocks to the EUSART are suspended.” Because of this, the micro could not be put into sleep mode while communicating with a wireless radio module. More significantly at the product feature level, a data transmission from another device in the system would not wake the micro; instead, the micro had to wake up periodically and poll the other devices to check whether they had a message for it.

Low Power Modes

Modern micros provide a wide variety of power saving modes, from a simple run/idle/sleep selection to extreme control over individual circuits and on-chip peripherals. This can get very confusing because you can run into terms like “idle”, “nap”, “snooze”, “sleep”, “hibernate” and “deep sleep”. There are no standards regarding low-power states and the power levels and functionality available for a given mode name may vary widely from micro to micro.

There are several things you need to pay close attention to regarding power saving modes:

  • Make sure the on-chip peripherals you need remain functional in the low power modes you want to implement. For example, in some micros the UARTs are fully operational while the micro is sleeping; others, like the one mentioned above, require the micro to be running for the UART to be operational. As discussed in a previous post, pay special attention to the clock chains in the micro you select. If the peripheral clock is the same as, or is simply divided down from, the CPU clock, the micro will probably have to be running while any peripherals are functioning. You may still be able to put the micro in an “idle” mode, meaning it won’t be executing code but its internal clocks are still running and consuming power.
  • One popular family of small micros powers down its main registers and internal SRAM to achieve sub-1uA deep-sleep mode currents. On these micros, only the instruction pointer and two special registers for program state information are maintained in deep sleep. Other than those registers, waking up from deep sleep isn’t much different than coming out of reset. One very important difference: any variables that the compiler automatically initializes on reset, or that you initialize in your reset code, won’t hold their values when coming out of deep sleep. Your wake-up code must handle initializing these variables and any on-chip peripherals, delaying the firmware’s response to the event that triggered the wake-up.
  • Pay close attention to what conditions bring a micro out of its low power states. In light and medium sleep modes, any internal or external interrupt will generally wake the micro. The lowest current modes usually have the fewest options for waking the micro. Some micros allow GPIO pins to be configured to wake the micro on a per-pin or per-port basis while some micros will only wake on transitions on external interrupt pins or a specific “wake” pin. For the micro mentioned earlier, the only conditions that will wake the part from deep-sleep are an alarm interrupt from its Real Time Clock or a specific external interrupt pin.
  • Micros with an on-chip voltage regulator for generating the core voltage often bring that voltage out on device pins for external filtering. If you select a micro that turns off or reduces its core voltage in a low-power mode, be careful with the amount of capacitance on these core voltage pins. The pins typically only require a small filter capacitor, and the smallest capacitor possible should be used since it will be charged and discharged every time the low-power mode is entered or exited.

This wraps up the discussion on micro selection. As you can tell, selecting the micro for your application will either enable a good low power design or set you up for failure in your power saving efforts. Unfortunately there is no “right” micro for all low power applications. For a low power design, the micro selection must be a collaborative effort between the hardware and firmware engineers working on the project. Without knowing the specifics of what the firmware needs to do, it is impossible for a hardware engineer to select the right micro for the job.