Category Archives: Firmware

The 10 year battery life – myth or reality?

Myths and legends usually tell of incredible beauty, unsurpassed strength, or attempts at what mere mortals assume is an unobtainable goal. It is often difficult to separate myth from reality, or even to know where one ends and the other begins. The same can be said for much of what you read these days about products with a 10 year battery life. When you see advertisements for wireless devices claiming 10 year battery life from a coin-cell battery, it is easy to think that is the stuff of myths and legends. Yet, given the right hardware/firmware design and the appropriate battery technology and capacity for a given application, a 10 year battery life is certainly achievable.

Suppose you’ve just been tasked with designing your company’s next product, and prominent in the features list is 10 year battery life. As I’ve tried to stress throughout this blog, keep in mind that low-power design is really low-power system design, not just hardware or firmware design. Designing for long battery life often requires compromises in feature set and performance, hardware/firmware design, battery selection and even industrial design. Here are some of the keys to achieving maximum battery life.

Develop your power management strategy before even looking at that blank page in your schematic capture program. Many micros have impressively low sleep mode current specs that may be impossible to achieve in practice or that carry serious design implications. Just a few of the many things to keep an eye out for are:

  • The clocks to all the micro’s on-chip peripherals are stopped when the micro core is asleep. This prevents timed wake-ups unless you also have a real-time clock with enough resolution to meet your timed wake-up needs. If your code needs to run more than once per second, this may not be possible, depending on the RTC design.
  • Only a few bytes of data in special registers are maintained in power-down mode. This may be OK for very simple applications but when using a micro like this, coming out of the power-down mode is essentially the same as a power-up. This means the micro core, firmware structures and on-chip peripherals will all have to be initialized every wake-up before any real application code can run. This can seriously impact response times and waste a lot of power (not to mention jeopardize your friendship with the people writing the firmware for your hardware).
  • A micro that powers down the oscillator, flash and internal RAM can take several milliseconds to start running code when coming out of a deep sleep or power down mode. If your application requires a quick response to the event that wakes the micro it may not be achievable without using a higher current sleep mode. The same thing applies if you rely on a PLL internal to the micro to generate a high speed clock for the micro or on-chip peripheral.
  • Most micros have a limited number of GPIO pins that can wake them from sleep mode. Generally the deeper the sleep mode the fewer wake-up options you will have and in some cases only a single dedicated pin can be used to wake the micro from the deeper sleep modes.
  • Some micros allow a timer to be clocked with a slow clock like the RTC clock or watchdog timer clock in a deep sleep mode. Be careful with this because the timer may not be able to generate an interrupt to wake the micro. The timer may have to drive a GPIO to wake the micro.

Using your power management strategy, prepare a power budget for the device. How to do this could be the subject of several blog posts, but suffice it to say that an accurate power budget is essential to having a chance of meeting your battery life target (no matter how long or short that is). If your product’s behavior is well defined and isn’t too complicated, it is possible to create an accurate power budget in a fairly simple spreadsheet. If your first cut at a power budget shows you have battery capacity to spare, don’t congratulate yourself just yet because it is probably grossly wrong. It’s a good idea to always double check, then triple check, then have at least one other person check your power budget.

Before getting too far into the product design (particularly the mechanical/industrial design), you should know how your battery will behave under all of the conditions it will encounter. There are three characteristics of almost all battery chemistries that can combine to produce a voltage too low for your circuit to function long before the battery has completely discharged:

  • Battery voltage typically degrades over time, whether it is just a “wear” effect on a primary battery or the voltage droop that is common for rechargeable batteries.
  • Battery voltage is typically reduced as temperatures become lower.  Batteries can be designed to minimize this effect but these are typically expensive industrial oriented batteries, not your common coin cell or AA type batteries.
  • Battery voltage is typically reduced as the load increases. This is primarily caused by internal resistance of the battery. For some battery types the internal resistance increases over time so this effect becomes more severe as the battery ages.

The extent that these characteristics are exhibited varies considerably from one battery chemistry to another, and most battery data sheets provide graphs showing these effects under different conditions. However, these graphs rarely depict what happens for combinations of these effects, so when your design is subject to more than one of these conditions (such as high load and low temperatures) you should experiment with these conditions to characterize the battery output voltage. You may find that your power budget says you are good for 10 years when in reality, after a few years, when your wireless radio transmits in cold weather the battery voltage drops too much for your circuit to function. Also, pay attention to the battery’s self-discharge current and account for it in your power budget; with today’s ultra-low power micros, the battery’s self-discharge may be higher than what the circuit draws in sleep modes.

You need comprehensive and accurate current measurements for all of the operating modes your device has. If your device performs some function several hundred times a day, any error in your current measurement of that function will add up over time, but these higher power states often have surprisingly little impact on battery life. More importantly, if your device spends almost all day in sleep mode, a 10% error in the sleep mode current measurement could mean you miss your calculated battery life by 10%. If you are taking nano-amp to micro-amp current measurements with a sense resistor and a regular scope probe, your measurements could easily be off by much more than 10% (see our white paper to learn more about why this can happen). To make things worse, several recent studies have shown substantial differences in current draw between samples of the same micro, particularly in sleep modes and over temperature. Since these differences are mostly attributed to leakage currents, this should apply to nearly all complex CMOS ICs, not just micros.

You must have a thorough understanding of how your device performs in the actual environment it is used in. This point is particularly important for devices that incorporate wireless radios. If the product enclosure design or the surroundings where the product is installed attenuates the radio signal such that transmission retries are the norm or higher transmit power is used by the radio, you may miss your battery life target by years. Particularly for industrial applications, mounting a device on a metal pole or on the side of a metal box is commonly done but the proximity of that large metal area to a low power radio antenna will severely degrade its performance. Unless the wireless radio provides retry counts, transmit power levels and similar diagnostic information, monitoring current waveforms while your device is operating is a good way to literally see exactly what it is doing.

Last of all, while doing your design keep in mind all of the various leakage currents (and “current leaks”) your design may be subject to. It is difficult enough to design around all the obvious current consumption points but these easily forgettable points of wasted power can easily ruin your best design efforts.  It is easy to forget about things like the quiescent current of a voltage regulator which can be considerable compared to a micro’s deep sleep mode current. Achieving long battery life requires excruciating attention to these types of design details.

So, for the unavoidable mythology references: when designing for long battery life you may feel like Sisyphus from Greek mythology, condemned to roll a boulder uphill only to have it roll back down as he approached the top. You may get your power budget close to your design goal only to have to take big steps backwards and try again. With thorough up-front research, considerable engineering effort and a little luck, your project won’t crash and burn like Icarus, whose wings melted when he flew too close to the sun, and your 10 year battery life can become a reality.


Accurate low-current measurements with a scope

Last week was incredibly busy so I wasn’t able to put the time in to complete the third part of the “Leakage currents & current leaks” post. This will be a short post with a link to a white paper on our website for more details.

Most engineers consider the oscilloscope their first tool of choice for hardware development work. Yet very few engineers ever consider how accurate their scope is. Most of the major oscilloscope manufacturers place great importance on the timing aspects of their products. Multi-gigahertz sample rates are fairly common in today’s mid-range digital scopes, yet most of those scopes only have 8-bit A/D converters. While timing accuracy is often spec’d in double-digit ppm, voltage measurement error on the same scope can be as large as the signal level you need to measure at the scope’s lowest volts/division setting.

Below are screen shots taken on the same mid-range MSO scope from one of the top scope companies of similar current waveforms using a “standard” scope probe and our CMicrotek µCP100 low current probe. Using a 10mV/division setting on the scope with the standard probe produced a waveform that was way too fuzzy to be useful. The measurements taken with the cursors were almost 2X too high for the peak and plateau portions of the waveform and well over 5X too high for the portions before the peak and after the plateau.

“Standard” Scope Probe

µCP100 Current Probe

For more information on these waveforms and an example of the calculations used to determine the measurement accuracy of a digital scope check out our “Accurate Current Measurements with Oscilloscopes” whitepaper.

My plan is to wrap up the “Leakage currents & current leaks” posts next week. If you find the Low Power Design blog interesting, please help spread the word about it so we can build up a large enough audience to make it worth the time it takes. We are on Facebook, Twitter, Google+, the Element14 Community and have a LinkedIn group.

Leakage current & current leaks – part 2

Virtually all semiconductor devices have some amount of leakage current. It is interesting to note that as operating voltages and device power consumption keep dropping, leakage current is becoming a larger percentage of a device’s power consumption. In most cases there isn’t much you can do about leakage currents other than be aware of them and account for them in your power analysis. In some cases there may be a significant difference in leakage current levels from manufacturer to manufacturer for devices that perform the same function, so it pays to take the time to include a leakage current comparison in your component selection. For a CMOS device that isn’t actively being clocked, leakage current can make up a significant part of its power consumption and may be called out as “standby current” or “quiescent state current” in the datasheet specs.

Diodes (including LEDs) are one of a few types of devices where the circuit design can play a part in determining the extent of the leakage current in that design.

Diode & LED leakage current

Diodes can present substantial leakage currents when in a reverse voltage condition; this is often referred to as reverse current. Similar to capacitors, there are a number of factors that come into play in determining the level of leakage current. Unlike capacitors, there aren’t any simple formulas to help you estimate what the reverse current is for a diode. Diode reverse current varies considerably from device to device and isn’t necessarily dependent on voltage rating, current rating or physical size.

The graph below shows a typical diode voltage/current curve. Notice that under forward voltage conditions (blue shaded area) diodes conduct very little current until the voltage starts approaching the diode’s forward voltage. Although not technically a leakage current, current will start flowing at a few hundred millivolts below the forward voltage level where the diode is expected to “turn on” and start conducting. Under reverse voltage conditions (pink shaded area), some amount of leakage current occurs as soon as the voltage is reversed, and it increases as the reverse voltage increases. Note that this graph is not to scale: the forward voltage of a diode is typically less than one volt, while the reverse voltage and breakdown voltage are usually tens to hundreds of volts.

diode V-C curve

There are a few things to consider regarding diode reverse current:

  • Schottky diodes tend to have higher reverse currents than standard diodes. In a recent search on DigiKey, reverse current specs for SMT Schottky diodes ranged from 100nA to 15mA, while the range for standard diodes was nearly an order of magnitude lower, from 500pA to 1.5mA. A Schottky diode may be appropriate for your design because of its low forward voltage, but be aware that its leakage current can be considerable.
  • Similar to leakage current for caps, the applied voltage relative to the rated reverse voltage can have a significant impact on the reverse current of diodes. Reductions in reverse current as the applied voltage is reduced relative to the rated reverse voltage aren’t quite linear but can reach 90% or more.  This is often shown in a graph in the diode data sheet with reverse current plotted against percent of rated reverse voltage.
  • Temperature will also have a significant impact on a diode’s reverse current. It is not uncommon for a 20°C temperature increase to cause a 10X or greater increase in reverse current. One option to improve this situation in power applications, where a diode can heat up while operating, is to use a physically larger device and large copper areas on the circuit board to help transfer heat out of the diode and into the board.
  • As shown above, the reverse current can become significant as the breakdown voltage is approached and increase to many times the rated current of the device if the breakdown voltage is exceeded. This is referred to as “avalanche current” because of the sudden increase and is the point where the part is likely to be destroyed.

LEDs will exhibit similar reverse and forward currents as other diodes. A few things to consider specific to how LEDs are typically used:

  • Reverse voltage with an LED typically is not a problem unless there are multiple power rails in a design and the cathode is driven to a higher voltage than the anode when the LED is off.
  • It can be tempting to use a high-drive GPIO to directly control an LED as shown in the diagram below. Because of the typical logic “high” and “low” levels of a CMOS micro, this can still result in hundreds of millivolts across the LED when it is off. This creates the condition just to the left of the “Forward Voltage” point on the voltage/current curve, where there may be from tens of microamps to a few milliamps of current flow through the LED. If you choose to use a GPIO for cost/space reasons, it is better to connect the GPIO to the anode side and drive it high to turn on the LED. A CMOS micro will usually have a low level output below 0.4V, while the high level can be as low as 70% of the Vcc rail. If you connect the GPIO to the cathode, at 70% of 3.3V that is only 2.3V, which may not even be high enough to turn the LED completely off.
  • To virtually eliminate the current flow through an LED in the off state, use an N-channel MOSFET to control the cathode of the LED (see diagram below). This allows the cathode to float so the only current paths available are the circuit board itself and the solder mask (typically 100M ohm or higher) and the leakage current path through the MOSFET (typically in the low nanoamp range for a small N-channel FET but it can vary).

LED hookups

Unrelated to leakage current, LEDs can waste a lot of power if used without careful consideration.  Users love LED indicators but generally don’t have a clue about their impact on a device’s battery life. When LEDs are required, here are a few things to consider to reduce their power consumption:

  • Keep in mind that with current requirements of at least several milliamps, an LED can draw much more current than a sleeping or slow-running micro, and high-brightness LEDs can draw more current than a Bluetooth or ZigBee radio uses when transmitting.
  • Think about how bright your LEDs really need to be. Really bright LEDs generally aren’t needed unless a device is used outdoors or needs to be visible from across a large room. Light pipes and similar low cost plastic optics can be very useful for making an LED appear to be brighter or larger and may allow you to decrease the LED current by several milliamps. Particularly in red and green, high efficiency LEDs are available today that provide much better brightness/power performance than older LEDs.
  • When using an LED as an on/off indicator, consider a slow flash of the LED instead of having it on constantly. Turning the LED on for ½ second every 3 seconds provides an almost 84% reduction in the power used for this indicator.
  • On a much smaller time scale, use a PWM to control the on/off duty cycle of the LED. Thanks to the eye’s persistence of vision, switching the LED on/off with a 50/50 duty cycle at a rate faster than 1 kHz will cut the power in half with an imperceptible reduction in brightness. The timers on many modern micros have PWM outputs or other output modes that can be used for this with little to no involvement by the firmware other than starting or stopping the timer.
  • If your device has more than a few LEDs that can be on simultaneously, consider adding an ambient light sensor to your product and controlling a PWM to adjust the brightness based on ambient light conditions.

Up to this point I have covered leakage currents, currents that may not be obvious but are usually specified in part data sheets. This part will deal with “current leaks”, non-obvious current flows and power losses that are caused by the circuit design. In some cases these “current leaks” may be reduced or managed somehow, in other cases you just need to be aware of them so they can be accounted for in your power budget.

Before leaving semiconductor devices, one of the biggest power wasters if not used carefully is the MOSFET. While MOSFETs usually have a leakage current spec, it is usually on the order of tens to a few hundred nanoamps. The bigger issue with MOSFETs is inefficiency from not operating them under the right conditions to meet their Rds(on) spec. Borrowed from the “Low Power Design” e-book, here are a few things to consider when using MOSFETs:

  • The efficiency of a MOSFET is a function of gate voltage and load current. Most N-channel FETs need a gate voltage in the 8-10V range to fully turn on, so simply driving the gate with a GPIO won’t put the FET in its lowest Rds(on) range. Logic-level gate FETs may be better in this regard, but typically they just have a lower minimum turn-on threshold and may still need over 5V to achieve their Rds(on) spec. In these cases you should consider using a P-channel FET, or even a gate driver IC to drive the gate of the N-channel FET with the voltage it is switching (or use a step-up regulator or voltage doubler circuit to provide a higher voltage if the input voltage exceeds the maximum gate voltage). The graph below shows the impact of gate voltage on Rds(on) for the Fairchild FDS8449 N-channel FET. The FDS8449 has a max gate turn-on threshold of 3V, but as you can see, at 3V the Rds(on) is about 2.4X higher than at 10V with no load, and much higher as the load increases.

Rds(on) vs gate voltage

  • When using a P-channel FET to drive a load, a GPIO may not drive the gate high enough to completely turn off the FET, so you may be leaking power through the FET. This can often go unnoticed since the amount of power is too low to activate the load.
  • A P-channel FET of similar rated voltage and current as an N-channel FET will typically have 50-100% higher Rds(on) than the N-channel FET. With Rds(on) specs on modern FETs in the double-digit milliohm range, even doubling the Rds(on) produces a fairly low value. However, that is simply wasted power that can easily be eliminated if low-side switching is an option for your application. This is also important to keep in mind if for some reason you can’t address one of the issues discussed here that prevents the FET from operating close to its lowest Rds(on); changing the type of FET may alleviate the problem. If you have a P-channel FET operating at 2X its lowest Rds(on), you could potentially reduce the Rds(on) by a factor of 4X by using an N-channel FET instead.
  • Just like a resistor, the Rds(on) of a FET increases with temperature. The graph below shows the impact of temperature on Rds(on) for the Fairchild FDS8449 N-channel FET. As you can see, a 50°C increase in temperature results in a nearly 20% increase in Rds(on). Even if your product is normally used in a room temperature environment, a 20-50°C temperature rise at the FET’s die isn’t uncommon (another reason to operate the FET in its lowest Rds(on) range). This is another situation where keeping a part cooler helps prevent wasting power, as the Rds(on) increases the part will get hotter, increasing the Rds(on) and so on. Thermal runaway isn’t likely to happen but a hot FET and a non-optimal gate voltage can combine to generate a lot of excess heat and waste a lot of power.

Rds(on) vs junct temp

This started out to be a 2 part article; next week I’ll cover more sources of power loss commonly found in circuit designs to wrap up the 3rd and final part.


Low power firmware – switch statements

This week I’m going to focus on switch statements. While easy to implement and a good way to improve code readability, frequently executed switch statements can burn a considerable number of clock cycles. It can be easy to fall into the trap of thinking that in a switch statement the micro examines a variable and auto-magically goes to the correct block of code for that value of the variable. Since switch statements often compile to a sequence of if-then-else comparisons, frequently executed ones can be very wasteful of power. Here are several ways to make them more efficient; some of them can be combined for even greater efficiency improvements:

  • Arrange the order of the case statements so that the most frequently executed cases are listed first. You should check the compiled code to make sure the assembly code checks for the cases in the order they are listed or abandon the switch statement and implement your own sequence of if-then-else statements to ensure the order you want.
  • You may be able to sacrifice some simplicity in the code and do a binary decode on the switch variable. The example below for a simple 4 case switch statement doesn’t appear to offer much improvement but is just intended to illustrate the binary decode. As the number of cases increases, the savings in time and power can be considerable, an 8 case switch statement goes from potentially seven tests to only three tests, a 16 case switch statement goes from potentially 15 tests to four and so on. If a few values are encountered significantly more often or are more time critical than the others, you can specifically test for those values before starting the binary decode. If you do this, remove the code for those values in the decoder code for clarity and to reduce code size.


  • Testing for ranges of the switch variable with an if-then-else sequence, with each range having its own smaller switch statement, can considerably improve efficiency. The example below shows splitting what would be a 16 value switch statement into two “if” statements, each with an eight value switch statement, reducing the worst case from 15 tests to 8 tests. Breaking it down further to four “if” statements, each with a four value switch statement, reduces the worst case to six tests.


  • Nesting switch statements can produce similar improvements in the worst case number of tests required. To do this effectively you really need two variables or a variable that can be cleanly split into two fields like a “mode” for the first level switch and “command code” for the second level switch statements. The example below shows how an instruction opcode parser could be done with the opcode split into an instruction type field and command code field.


By now you should be seeing that writing low power firmware requires taking a level of control over how you structure your code to minimize execution time. Next week I’ll continue on low power firmware design with arrays/structures and a discussion about complex algorithms and floating point math.



Low power firmware – timers, compilers and structures


The last two posts were about general concepts around low power firmware design. This week I’ll start getting into details including some code examples.


  • Use the largest clock pre-scaler that provides the resolution your firmware needs. The pre-scaler is typically a 4 to 8 bit counter, while the counter/timer may be 8, 16 or 32 bits. Letting the pre-scaler absorb the fast clock so the timer itself counts slower can reduce power considerably, particularly for free-running timers.
  • Software based timers running on a tick interrupt are easy to implement and may be necessary if your application requires more than a few timers to be running simultaneously.  Dedicated timers will be more power efficient IF the time-out period can be achieved with a single terminal count interrupt from the timer.
  • When using a periodic tick interrupt for software timers, use the longest tick interval your code can tolerate. If sections of your code require tight timing, it will usually be more efficient to use one timer with a longer tick interval for general use and another timer with a short tick interval for the tight timing.
  • For a timeout protection timer that doesn’t have tight timing requirements, consider using an RTC alarm interrupt for a 1 second (or longer) timeout. The RTC running at 32 kHz should use much less power than an 8 or 16 bit timer using a clock divided down from the micro’s much faster clock. This also provides a means for long timeout periods without taking periodic tick interrupts. On most modern micros the RTC can run without a backup battery present, but it may still require a dedicated crystal, so be sure to check the datasheet if you aren’t using the RTC for its intended purpose.
  • Turn off timers when they aren’t being used. If software based timers are appropriate for your application, turn off the free-running timer when it is not being used. This sounds like a no-brainer but it’s common to leave the free-running timer running all the time. For dedicated timers, this is usually just a matter of selecting the right mode for the timer so it stops automatically when it reaches the terminal count.
  • For ultra-low power, when possible use a timer that counts down to 0 or counts up to all ones and generates an interrupt. Using a counter and match register containing the terminal count value requires more circuitry to be active and will consume more power.


If you are writing firmware in any high level language you need to become intimately familiar with the compiler and learn what it does well and what it doesn’t. The only way to do that is to write some code and then examine the assembly code it generates.

  • Efficiency – Every instruction your micro executes that isn’t required is wasted power. Compiler efficiency is usually a case of you get what you pay for. Free and low-cost compilers based on the GNU compiler typically produce code that is 2X to 5X larger, slower and less power efficient than code from a compiler written for a specific architecture.
  • In-line assembly code – If you decide to use assembly language for time critical sections of code or for other reasons, you need to research how your compiler handles in-line assembly instructions. In some compilers, the registers you think you are using are actually memory based variables. If your assembly code uses many variables or contains a loop that is executed more than a few times, you are generally better off calling a function written in actual assembly code, since the function calling overhead uses less power than the pseudo assembly code. If you do use in-line assembly language, review the compiler-generated assembly language before and after your assembly code to see how much overhead the compiler imposes for saving/restoring the micro state information.
  • Compiler options – Compilers usually have many options, some of which can greatly increase or decrease the power efficiency of your code. A few things to look for:
    • Position independent code – Disable this option unless you absolutely need it. The relative addressing required for position independent code will use more power than fixed location code on every jump/call instruction executed.
    • Optimization options – Compilers typically provide options to optimize the generated code for speed or for size. The optimizations made for speed should also improve power efficiency since fewer clocks uses less power but will result in larger programs. If you are tight on code space, check to see if your compiler supports optimization on a per file basis so you can optimize the most frequently executed sections of code.


Structures are great for organizing variables but you need to consider whether this convenience is worth the cost in power compared to individual variables. A few things to consider:

  • Every time a structure element is used the micro has to add the element offset to the structure base address. On most 32-bit micros this is USUALLY achieved with an indexed addressing mode so no additional clock cycles are required (but check the assembly code to make sure). On a low-end 8-bit micro this requires code to calculate the address, taking 8 to 10 instructions or more.
  • Arrays of structures further complicate the math involved in calculating addresses. To calculate the offset into the array, the array index must be multiplied by the structure size and that is done in software on most micros used in embedded applications. If the array only contains a few instances of the structure it can be considerably more power efficient to have individual structures with another variable containing a pointer to the structure to use. Another technique for use with larger arrays will be discussed in the “Arrays and structures” section.
  • Don’t assume your compiler is smart enough to calculate the base address for a particular structure in an array once and reuse it across several consecutive lines of C code that access that structure. There is a good chance it will recalculate that base address for each line of C code. To use less power, assign a pointer to the structure in this situation; the compiler then only has to add each member’s offset to the pointer.
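As a sketch of the pointer technique (illustrative code, not from the original post):

```c
#include <stdint.h>

struct sensor { int16_t raw; int16_t offset; int16_t scaled; };
struct sensor sensors[4];

/* Indexed form: the compiler may recompute base + i*sizeof(struct sensor)
   for every line that touches sensors[i]. */
void update_indexed(uint8_t i, int16_t raw)
{
    sensors[i].raw    = raw;
    sensors[i].scaled = (int16_t)(raw - sensors[i].offset);
}

/* Pointer form: one address calculation, then each member is reached with
   a small fixed offset from the pointer. */
void update_ptr(uint8_t i, int16_t raw)
{
    struct sensor *s = &sensors[i];   /* base address computed once */
    s->raw    = raw;
    s->scaled = (int16_t)(raw - s->offset);
}
```

Check the generated assembly for both forms on your target micro; the difference is largest on 8-bit parts without indexed addressing.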



  • One place where structures may work to save power is in parameter passing. Particularly with low-end 8-bit micros, passing multiple parameters can be considerably more expensive in terms of power and time than passing a pointer to a structure containing those parameters. The example below illustrates this, even using the structure for the return value.
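The example from the original post didn’t survive here, so the sketch below reconstructs the idea with invented names (`motor_cmd`, `set_motor`):

```c
#include <stdint.h>

struct motor_cmd {
    uint8_t channel;
    uint8_t speed;
    uint8_t ramp;
    uint8_t status;   /* the callee writes the result back here */
};

/* Passing a single pointer instead of four separate parameters: only one
   address is passed, the callee reads the members in place, and the same
   structure carries the status back to the caller. */
void set_motor(struct motor_cmd *cmd)
{
    /* ...program the hardware from cmd->channel / cmd->speed / cmd->ramp... */
    cmd->status = 0;  /* 0 = accepted (illustrative status code) */
}
```

With four separate `uint8_t` parameters, a low-end 8-bit compiler would copy each argument to the stack or to fixed parameter locations on every call.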


Next week I’ll continue on low power firmware design with switch statements and arrays/structures.



Low power firmware concepts, part 2

This week I will continue from last week’s post with some concepts for low power firmware design.

Structured code

Highly structured source code is nice to work with but unfortunately can run considerably slower and consume considerably more power than less structured code. You don’t have to forget everything you learned about structured code, but violating some principles of structured code can help save power:

  • Global and fixed address variables – Global variables can save a considerable amount of power by not passing parameters to functions (particularly for older 8-bit micros or when using external RAM).  Fixed address variables can help reduce the power used when calculating addresses for structure members. If you have the good fortune of having extra RAM, using fixed address variables in functions can be much more power efficient than working with temporary variables that are addressed relative to the stack pointer.
  • Put small functions in-line – If you have small functions that are called frequently you will save power by placing the code for those functions in-line. Particularly with older 8-bit micros, the overhead for calling and returning from a function can take dozens of clock cycles. Some compilers support using function calls in your source code to help keep the source code cleaner but will place the code for the function in-line instead of doing actual function calls.
  • Peripheral “drivers” – If you use a micro with multiple instances of a peripheral type, say 4 UARTs for example, you can save power by having a set of code for each UART instead of generic code that is passed a parameter to specify which UART to use. This avoids the parameter passing and allows you to hardwire the peripheral register addresses in the source code instead of calculating addresses at run time. Particularly with 8-bit micros this can yield a considerable savings for frequently used peripherals. These types of functions are typically fairly small so you won’t necessarily use a lot of program space by doing this.
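A minimal sketch of the in-lining idea using C’s `static inline` (the checksum helper is an invented example):

```c
#include <stdint.h>

/* Tiny, frequently called helper marked 'static inline' so the compiler
   can expand the body at each call site and skip the call/return overhead. */
static inline uint8_t checksum_add(uint8_t sum, uint8_t byte)
{
    return (uint8_t)(sum + byte);
}

uint8_t checksum(const uint8_t *buf, uint8_t len)
{
    uint8_t sum = 0;
    for (uint8_t i = 0; i < len; i++)
        sum = checksum_add(sum, buf[i]);  /* no call instruction emitted */
    return sum;
}
```

Verify in the listing file that your compiler actually in-lines the call; some compilers ignore the hint at low optimization levels.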


Value added and non-value added code

Lean manufacturing has a concept of value added steps and non-value added steps. Value added steps directly contribute to producing the end product. Non-value added steps are essentially overhead steps that may be necessary but don’t directly contribute to producing the end product. An example of a non-value added step would be moving sub-assemblies from one section of the manufacturing floor to an assembly line. It is natural to focus on the value-added code for power optimizations because that is where the primary tasks for your product are. For the ultimate power savings, you should analyze your code to identify the non-value added sections of code that don’t get much consideration. You need to determine if the non-value added sections of code are absolutely necessary, “nice to have” or not really necessary. For the non-value added sections of code that can’t be eliminated, can they be streamlined, executed less frequently or grouped with other non-value added functions in order to save power (particularly if the micro is being taken out of a low-power state to execute the non-value added code)?


The power of loops

Firmware loops can be very effective in reducing code size and making the source code cleaner and more readable. They can also be huge wasters of power simply because of the overhead instructions required to implement the loop. Managing the loop counter, checking for the loop termination and the jump back to the start of the loop can all be considered non-value added code. Unless you are severely constrained on code space, unrolling frequently executed loops into a repetitive series of instructions can save a considerable amount of power. This is most easily done for loops with a small, fixed number of iterations. Even loops with a variable number of iterations can be made more power efficient this way. The test for “loop” termination would need to be done between each series of instructions but the jump would only be taken once instead of each time through the actual loop. This can be a considerable power savings particularly for micros performing instruction pre-fetches that are discarded at the end of each pass through the loop.
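For instance, an unrolled copy of a fixed 8-byte block might look like this (illustrative only; the real savings depend on the micro and compiler):

```c
#include <stdint.h>

/* Rolled: the counter update, compare and branch execute on every pass. */
void copy_rolled(uint8_t *dst, const uint8_t *src)
{
    for (uint8_t i = 0; i < 8; i++)   /* 8 iterations of loop overhead */
        dst[i] = src[i];
}

/* Unrolled: the same work with no loop-management instructions at all. */
void copy_unrolled(uint8_t *dst, const uint8_t *src)
{
    dst[0] = src[0]; dst[1] = src[1];
    dst[2] = src[2]; dst[3] = src[3];
    dst[4] = src[4]; dst[5] = src[5];
    dst[6] = src[6]; dst[7] = src[7];
}
```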


Know your variables

This may seem like a no-brainer but particularly with C compilers what you see in the code isn’t always what you get in execution. Pay special attention to this with variables used in loops since that one line in your source code can be executed hundreds or thousands of times.

  • Variable size – Particularly with 8-bit micros, make sure your variable sizes aren’t larger than they need to be. As shown in the code segment below, simply incrementing a 32-bit value can turn into 3 tests for overflow, 3 add instructions and as many as 8 memory accesses. In most cases this is just a waste of code space but if the variable is a free running counter it will be wasting power every 256th time it is incremented. A 32-bit add or compare is even worse since all four bytes of the variable must be operated on every time. Most compilers will size an “int” to be the word size of the micro but to be sure you should explicitly declare variables as int8, int16, etc. (or whatever syntax your compiler uses).
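The code segment the bullet above refers to isn’t reproduced here, but the idea can be sketched with two hypothetical counters:

```c
#include <stdint.h>

/* On an 8-bit micro this single C statement expands into an add on the low
   byte plus carry handling across the three upper bytes - up to 8 memory
   accesses for one increment. */
uint32_t big_count;

/* If the value can never exceed 255, an 8-bit counter is a single
   read-increment-write on the same hardware. */
uint8_t small_count;

void tick(void)
{
    big_count++;
    small_count++;   /* wraps at 256 - fine for a free-running counter */
}
```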


  • Signed vs unsigned variables – Arithmetic operations on signed and unsigned values aren’t handled the same way by compilers. To avoid extra processing (and weird results from the math operations) be careful about the declarations and similar to the size, explicitly declare variables signed or unsigned and don’t assume an “int” is one or the other. Also be very careful about mixing signed and unsigned values in math operations.
  • Variable alignment – Some 32-bit micros require 16-bit values to be on 2-byte boundaries and 32-bit values to be on 4-byte boundaries and will generate exceptions or bus faults for unaligned accesses. More sophisticated micros can handle misalignments and will happily turn an unaligned 32-bit access into two, three or four memory accesses without you knowing it. C compilers typically have options to allow misalignments or force certain alignments. Though less a power issue, compiler-forced alignments can be very wasteful of RAM by placing every variable on a 4-byte or even 8-byte address boundary. This can be particularly wasteful with large structures containing various sized elements and with arrays of 8 or 16-bit elements. The “pragma pack” directive can help in this regard with structures, but it should be used with care since it puts you in charge of determining alignment.
  • ASCII vs Unicode – If your product exchanges text strings with a Windows application you may need to support 16-bit Unicode values; otherwise 8-bit ASCII should be adequate. If you are stuck with Unicode and your product requires a proprietary driver or uses a Windows app, you should be able to save power by using ASCII in the firmware and translating to Unicode in the PC-based code.
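To see the padding cost mentioned above, consider this sketch (the sizes assume a typical compiler with 4-byte alignment for 32-bit values; exact numbers vary by ABI):

```c
#include <stdint.h>

/* With default alignment the compiler pads 'flags' so 'value' lands on a
   4-byte boundary, and pads the tail so arrays of the struct stay aligned. */
struct padded {
    uint8_t  flags;   /* 1 byte + 3 bytes of padding */
    uint32_t value;
    uint16_t count;   /* 2 bytes + 2 bytes of tail padding */
};

/* Reordering members from largest to smallest removes most of the padding
   without resorting to #pragma pack and its unaligned-access pitfalls. */
struct reordered {
    uint32_t value;
    uint16_t count;
    uint8_t  flags;   /* only 1 byte of tail padding remains */
};
```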


Next week I’ll get into the details on what you can do to make your firmware more power efficient.

Low power firmware concepts, part 1

So far I’ve discussed mostly hardware aspects of low power design. This is the first of several posts on low power firmware design. Some high level concepts will be presented first but they shouldn’t be glossed over; they can make a profound difference in power consumption. As was mentioned in the introductory post, a product’s hardware design establishes the minimum level of power consumption for the product. For many types of products, the firmware will determine the highest level of power consumption. More importantly, the firmware will also determine how much time is spent at the minimum level of power consumption.

Some power saving concepts will be presented that can be applied whether your firmware is in assembly language or C. There are also a number of techniques that can be applied when using C to force more power efficient code than a compiler would normally generate. Some of these techniques will go against the basic principles of structured code. Unfortunately, highly structured code is inefficient power-wise because the structure itself forces the micro to execute many instructions that don’t directly contribute to completing the task at hand.

Power saving in firmware is all about (1) eliminating unnecessary clock cycles the micro uses while performing a task and (2) putting the micro in a low-power state as often and for as long as possible for the given application. To a large extent, eliminating unnecessary clock cycles naturally leads to spending more time in a low-power state, but the time spent in a low-power state is more heavily influenced by the code structure. Keep in mind that everything your code does that it doesn’t need to do, or doesn’t need to do as frequently as it does, is just wasting power. This is a good place to repeat the equation for the current used during a specific event:

I_event = (Time_operating x I_operating) + (Time_sleep x I_sleep)

Since the operating current of modern micros can be several orders of magnitude higher than their standby or sleep currents, your goal is to reduce time spent performing the task and maximize the time spent in a low power state.


Choice of programming languages

For the ultimate power savings you must be in control of the instructions your micro executes as much as possible. Carefully written assembly language can provide lower power consumption than the best compiled code but is slower to develop and harder to maintain. If you choose a compiled language, C is probably the best choice since you can achieve a decent level of control over the compiled code, and the compiler doesn’t generate as much hidden run-time code as a C++ compiler can. A considerable amount has been written about the suitability of C++ for embedded programming. Whatever your stance, it is hard to dispute that well written C code is more efficient than equally well written C++ code.

Main function vs the rest of the time

Very few products actively perform their main function 100% of the time yet that is where many engineers spend their time trying to reduce power consumption. Particularly in embedded applications, most firmware spends most of its time waiting for something to do. This may be waiting for input from a user, waiting for some event or just waiting for time to perform some repetitive task. What the firmware does while inactive often has a bigger impact on power consumption than what it does while active. Maximizing the time the micro spends in the highest power saving mode suitable for the application is key to reducing long term power consumption.

Frequency of events

Make sure your product only performs its main function as often as is really necessary. Except for low level control systems, it is fairly rare that monitoring physical conditions needs to be done multiple times per second or even per minute. For example, if monitoring temperature is a primary activity, you may be able to get by checking the temperature only once every few minutes. The same can be said for monitoring battery voltage. Even when response time is critical, most physical conditions can be monitored on a longer interval while the condition is well within normal bounds, with the interval reduced as the condition approaches a critical threshold.
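A minimal sketch of this adaptive-interval idea (the threshold and periods are invented values for a hypothetical temperature monitor):

```c
#include <stdint.h>

#define TEMP_WARN   70    /* degrees: start watching more closely above this */
#define SLOW_PERIOD 300   /* seconds between checks when well in bounds */
#define FAST_PERIOD 10    /* seconds between checks when near the limit */

/* Pick the next wake-up interval based on how close the last reading
   came to the critical threshold. */
uint16_t next_sample_period(int16_t last_temp)
{
    return (last_temp >= TEMP_WARN) ? FAST_PERIOD : SLOW_PERIOD;
}
```

The firmware would program its RTC alarm with the returned period before going back to sleep.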

Polled vs interrupt driven events

Particularly with slow peripherals such as UARTs, interrupt driven firmware will use considerably less power than polling firmware. A micro may execute thousands of instructions during the time it takes for a UART to transmit or receive a byte of data. Even at a relatively fast 115.2K baud, an 8MHz processor will burn almost 700 clocks in one byte time; drop that to 19.2K baud and it goes up to over 4,100 clocks per byte. On the other hand, there are situations where polling can be more power efficient. For example, with a fast peripheral such as an A/D converter or SPI/I2C controller, the interrupt processing overhead for a slow 8-bit micro may use more power than a simple polling loop.

DMA vs firmware loops

If an application requires moving large amounts of data to or from a peripheral (or even small amounts of data with slow peripherals), it is generally more power efficient to move the data with a DMA controller than in a firmware loop. Even if the DMA controller is running at the same clock speed as the micro, the circuitry in the DMA controller is much smaller than the circuitry in the micro core so it will use less power. This is an area where you must do your homework up front to ensure you select a micro that supports DMA operations with the required peripherals and that the DMA controller is operational with the micro in a sleep mode.


If DMA is not an option and your micro has transmit/receive FIFOs in its UARTs, take advantage of these FIFOs so that your firmware doesn’t take an interrupt for each byte transferred. For instance, when transmitting data you can fill the FIFO and only take an interrupt when the last byte in the FIFO has started shifting out. Many newer micros with UART FIFOs allow you to set the point where the interrupt is generated, so you don’t have to wait for the last byte if you have over-run/under-run concerns. If you are moving a string of data between a UART and a buffer in memory and only taking an interrupt every 8th byte, not only will you save the processing overhead of seven interrupts out of every eight bytes, but a good compiler will also keep the loop control variables and buffer pointers in the micro’s registers and load/save those variables only once for every eight bytes.
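A hedged sketch of FIFO-based transmit handling (the register names and FIFO depth are invented stand-ins, simulated here as plain variables; real hardware maintains the FIFO count itself):

```c
#include <stdint.h>

#define UART_FIFO_DEPTH 8          /* assumed FIFO depth */

/* Stand-ins for memory-mapped UART registers; the names are illustrative,
   not from any particular micro. */
volatile uint8_t UART_TXDATA;
volatile uint8_t UART_TXCOUNT;     /* bytes currently queued in the FIFO */

const uint8_t *tx_buf;
uint8_t tx_len, tx_pos;

/* "FIFO empty" interrupt handler: refill the whole FIFO in one pass so an
   interrupt fires once per 8 bytes instead of once per byte. */
void uart_tx_isr(void)
{
    while (tx_pos < tx_len && UART_TXCOUNT < UART_FIFO_DEPTH) {
        UART_TXDATA = tx_buf[tx_pos++];
        UART_TXCOUNT++;            /* simulated; the hardware does this itself */
    }
}

void uart_send(const uint8_t *buf, uint8_t len)
{
    tx_buf = buf;
    tx_len = len;
    tx_pos = 0;
    uart_tx_isr();                 /* prime the FIFO; interrupts do the rest */
}
```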

Next week we’ll continue with a few more high level concepts before getting into some of the details of low power firmware design.

Micro selection, part 4

This week I will discuss power saving modes in micros. You will need to thoroughly research the modes you want to use AND what the on-chip peripherals are capable of doing in these modes. On a firmware project I recently worked on, there was a single sentence with HUGE implications buried on page 358 of a 550+ page datasheet that said “During Sleep mode, all clocks to the EUSART are suspended.” Because of this, the micro could not be put in a sleep mode while communicating with a wireless radio module. More significantly at the product feature level, a data transmission to the device from another device in the system would not wake the micro; the micro would have to wake up periodically to poll the other devices to check if they had a message for it.

Low Power Modes

Modern micros provide a wide variety of power saving modes, from a simple run/idle/sleep selection to extreme control over individual circuits and on-chip peripherals. This can get very confusing because you can run into terms like “idle”, “nap”, “snooze”, “sleep”, “hibernate” and “deep sleep”. There are no standards regarding low-power states and the power levels and functionality available for a given mode name may vary widely from micro to micro.

There are several things you need to pay close attention to regarding power saving modes:

  • Make sure the on-chip peripherals you need are functional in the low power modes you want to implement. For example, in some micros the UARTs are fully operational while the micro is sleeping; others, like the one I mentioned above, require the micro to be running for the UART to be operational. As discussed in a previous post, pay special attention to the clock chains in the micro you select. If the peripheral clock is the same as, or is simply divided down from, the CPU clock, the micro will probably have to be running while any peripherals are functioning. You may still be able to put the micro in an “idle” mode, which means it won’t be executing code but its internal clocks are still running and consuming power.
  • One popular family of small micros powers down its main registers and internal SRAM to achieve sub 1uA deep-sleep mode currents. On this micro, only the instruction pointer and two special registers for program state information are maintained while in deep-sleep. Other than these two registers, waking up from deep sleep isn’t much different than coming out of reset. One very important difference is any variables that the compiler caused to be automatically initialized on reset or are initialized in your reset code won’t be initialized when coming out of the deep sleep mode. Your wake-up code must handle initializing these variables and any on-chip peripherals, delaying the firmware’s response to the event that triggered the wake-up.
  • Pay close attention to what conditions bring a micro out of its low power states. In light and medium sleep modes, any internal or external interrupt will generally wake the micro. The lowest current modes usually have the fewest options for waking the micro. Some micros allow GPIO pins to be configured to wake the micro on a per-pin or per-port basis while some micros will only wake on transitions on external interrupt pins or a specific “wake” pin. For the micro mentioned earlier, the only conditions that will wake the part from deep-sleep are an alarm interrupt from its Real Time Clock or a specific external interrupt pin.
  • Micros with an on-chip voltage regulator for generating the micro core voltage will often have the core voltage on the device pins for filtering. If you select a micro that turns off or reduces its core voltage in a low-power mode, be careful with the amount of capacitance on these core voltage pins. These pins typically only require a filter capacitor so the smallest capacitor possible should be used since it will be charged and discharged every time the low-power mode is entered or exited.

This wraps up the discussion on micro selection. As you can tell, selecting the micro for your application will either enable a good low power design or set you up for failure in your power saving efforts. Unfortunately there is no “right” micro for all low power applications. For a low power design, the micro selection must be a collaborative effort between the hardware and firmware engineers working on the project. Without knowing the specifics of what the firmware needs to do, it is impossible for a hardware engineer to select the right micro for the job.


Micro selection, part 3

This week, I will discuss how using a micro’s on-chip peripherals can help to reduce your power consumption along with how GPIOs can help or hinder your power savings efforts.

Internal vs External Peripherals

With modern micro architectures, using internal peripherals tends to be considerably more power efficient than using external peripherals. There are a number of reasons for this:

  • The main reason is that most modern micros operate with a lower core voltage, so internal peripherals run at the lower core voltage while external peripherals must run at the higher system voltage.
  • The trend towards using 2 and 3 wire peripheral interfaces like I2C and SPI to reduce pin count and package size means clocked serializer and deserializer circuits are required for external peripherals while internal peripherals have much faster parallel interfaces. These added circuits consume additional power plus the micro is also using power while waiting for data transfers with the external device to complete.
  • Passing signals through the micro’s I/O buffers to access off-chip peripherals consumes additional power. When you must use off-chip peripherals, high-speed serial SPI/I2C interfaces may be more efficient than 8 or 16 bit parallel interfaces since the I/O buffers will always consume power, not just during the data transfers. Using these serial interfaces will be even more power efficient on micros that have SPI/I2C controllers so the micro can sleep while the transfers take place. If you must use parallel interface external peripherals, the I/O pins used for data lines should be driven low when not in use to minimize power.


There are a number of ways that GPIO can impact power consumption:

  • The I/O ports on most older micro architectures have fixed drive strength. Many modern architectures provide drive strength control on a per port or even per pin basis so the drive strength can be tailored to the circuit requirements.
  • Similarly, if your design uses more than a few GPIO outputs that change states frequently, for ultimate power savings look for a micro that supports programmable slew rates. Faster signal transitions require more power than slower ones for the same capacitive load.
  • Most micros provide options for internal pull-ups and/or pull-downs on their I/O ports. While convenient to use and a good way to reduce total parts count, these internal pull-ups tend to not be well controlled and can be as low as a few K-ohms on some micros, leading to excessive current draw.  Consider a 3.3V micro, with an internal 5K pull-up on a switch input that is normally grounded. This 5K pull-up will pull 660uA, compared to 33uA for an external 100K pull-up. Some micros also provide control of the pull-ups/pull-downs on a per port basis so all inputs on a port will have the pull-up/pull-down enabled just because one input needs it. If extreme low power is required, it’s much better to use discrete pull-up/down resistors only where needed and with as high a value resistor as can be tolerated in the circuit.
  • Unless the micro datasheet says otherwise, it’s usually best for low power usage to configure unused GPIO as outputs driving a logic low. In this configuration the pin is held at ground with nothing connected, so it sources or sinks virtually no current.
  • Slowly rising or falling input signals can cause excessive power consumption and even generate noisy oscillations while the input is between the low and high input thresholds. Few micros provide Schmitt trigger inputs so an external Schmitt trigger buffer may be needed to provide fast, clean transitions to the micro. If the slow transition time is due to a weak pull-up it may be more efficient to use a stronger pull-up than to power the Schmitt trigger device.
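The pull-up numbers quoted above are just Ohm’s law; as a quick check:

```c
/* Steady-state current wasted by a pull-up of r_ohms held at ground by a
   closed switch: I = V / R, returned in microamps. */
double pullup_current_uA(double vcc, double r_ohms)
{
    return vcc / r_ohms * 1e6;
}
```

For the 3.3V example: 5K gives 660uA, 100K gives 33uA, a 20x reduction for the cost of one external resistor.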

Next week we’ll wrap up the topic of micro selection with a discussion on low-power modes. This is an area where there is no standardization so “idle mode” and “deep sleep” may have different meanings and levels of functionality even between micros from the same manufacturer.

Micro selection, part 2

Last week I presented some high level considerations for selecting a micro for low power design. In this post we’ll get into more details on how your micro selection can benefit or impact your power reduction efforts.

Program Location

For most microcontrollers, accessing on-chip RAM typically uses less power than accessing on-chip Flash, so you may be able to save power by executing code from RAM instead of Flash. A number of micros don’t support executing code from RAM (particularly Harvard architecture micros) and the power consumption of RAM accesses relative to Flash varies, so be sure to research this in depth before counting on this power savings. This may also not hold if the Flash and RAM data busses aren’t the same width (a 16-bit path to Flash and an 8-bit path to RAM, for instance).

You should avoid using off-chip memory for program execution if at all possible. Besides taking more clock cycles to access off-chip memory, the power consumed by the off-chip I/O buffers and the external memory devices make off-chip memory accesses very expensive in terms of power usage.

C or Assembly Language

It is hard to beat lovingly hand crafted assembly language for efficient power usage. However, unless the energy usage for your application is so critical you are counting nanoamps, it is hard to justify the extra development time and maintenance issues associated with assembly language.

Most modern micros (even some 8-bit parts) are architected to work efficiently with compiled C code. If a micro has a very limited/fixed stack size, doesn’t support indexed memory addressing modes, can’t do arithmetic or logical operations directly on memory locations or doesn’t deal well with immediate addresses or values, it won’t efficiently execute C code and will waste considerable power because of it.

Clock Frequency

Most modern micros are implemented in a CMOS process where power consumption scales almost linearly with clock frequency. Consider again the equation for current used by a micro for performing a particular task:

I_event = (Time_operating x I_operating) + (Time_sleep x I_sleep)

Don’t fall into the trap of thinking you can run the micro faster to use less current. Since the power scales linearly with the clock frequency, running twice as fast while using twice as much current produces the same number for the operating portion of the equation. Ironically, if the task occurs repeatedly at a specific interval, the total current used will actually increase since the sleep time increases.
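Plugging illustrative numbers into the equation shows the trap (the task length and currents below are assumed, with operating current scaling linearly with clock frequency):

```c
/* Charge used per 1-second interval for a task taking a fixed number of
   clocks. Currents in microamps, so the result is in microcoulombs. */
#define TASK_CLOCKS 80000.0   /* assumed fixed workload */

double event_charge_uC(double f_hz, double i_run_uA, double i_sleep_uA)
{
    double t_run   = TASK_CLOCKS / f_hz;   /* seconds spent operating */
    double t_sleep = 1.0 - t_run;          /* rest of the 1 s interval */
    return (t_run * i_run_uA) + (t_sleep * i_sleep_uA);
}
```

Doubling the clock from 1MHz at 500uA to 2MHz at 1000uA leaves the operating charge unchanged (40uC either way) but adds sleep time at the sleep current, so the total per interval goes up slightly.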

A number of modern micros use a PLL to generate a higher speed clock for the micro core. Before using one of these parts, if you plan to use clock throttling as part of your power management scheme you must be aware that the PLL response time when turning it on/off or even significantly changing the clock speed can greatly increase the firmware’s response time for events that trigger the clock speed increase.

It is also important to consider the internal clock chains for anything but the simplest micros. As shown in the diagram below, the more branches on the clock tree the more control you generally have over clock speeds and enables. In simpler micros the peripheral clock is divided down from the micro core clock. Two things to keep in mind in this case are (1) the micro must be running when any peripherals are being used and (2) reducing the micro clock speed is not an option for saving power when a higher speed clock is required for a peripheral (like a high-speed UART).  Most modern micros will have at a minimum a clock for the micro core and another clock for the on-chip peripherals. This isn’t sufficient if your goal is the ultimate power savings.

Regarding UARTs, some of the more recent micros provide a fractional divider in addition to the basic baud rate divider for better accuracy at higher baud rates. The drawback to this capability is it usually requires a clock of at least 16X the desired baud rate so a 115K baud rate requires a clock of over 1.8Mhz, greatly increasing the power used by the UART. There is a family of Cortex M3 based micros that have the fractional divider that requires the 16X clock whether the fractional divider is being used or not so this can be an expensive feature power-wise even when not used.

Next week, we’ll continue looking into the detail level considerations for your microcontroller selection.