Low power firmware – switch statements

This week I’m going to focus on switch statements. While easy to implement and a good way to improve code readability, frequently executed switch statements can burn a considerable number of clock cycles. It is easy to fall into the trap of thinking that in a switch statement the micro examines a variable and auto-magically jumps to the correct block of code for that value. Unless the compiler generates a jump table, a switch statement is really a sequence of if-then-else comparisons, so frequently executed switch statements can be very wasteful of power. Here are several ways to make them more efficient; some of them can be combined for even greater improvements:

  • Arrange the order of the case statements so that the most frequently executed cases are listed first. Check the compiled code to make sure the assembly checks for the cases in the order they are listed, or abandon the switch statement and implement your own sequence of if-then-else statements to guarantee the order you want.
  • You may be able to sacrifice some simplicity in the code and do a binary decode on the switch variable. The example below for a simple 4-case switch statement doesn’t appear to offer much improvement but is just intended to illustrate the binary decode. As the number of cases increases, the savings in time and power can be considerable: an 8-case switch statement goes from potentially seven tests to only three, a 16-case switch statement goes from potentially 15 tests to four, and so on. If a few values are encountered significantly more often or are more time critical than the others, you can specifically test for those values before starting the binary decode. If you do this, remove the code for those values from the decoder code for clarity and to reduce code size.
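A minimal sketch of the binary decode for this simple four-case situation (the do_caseN() handler functions are placeholders, not from the original post):

```c
#include <stdint.h>

/* Placeholder handlers standing in for the real case bodies. */
static int do_case0(void) { return 0; }
static int do_case1(void) { return 1; }
static int do_case2(void) { return 2; }
static int do_case3(void) { return 3; }

/* Binary decode of a 4-case switch: exactly two bit tests per call,
   where a sequential if-then-else chain could take up to three. */
int dispatch(uint8_t value)
{
    value &= 0x03;                /* only values 0-3 are meaningful here */
    if (value & 0x02) {           /* cases 2 and 3 */
        return (value & 0x01) ? do_case3() : do_case2();
    } else {                      /* cases 0 and 1 */
        return (value & 0x01) ? do_case1() : do_case0();
    }
}
```

With four cases the saving is marginal; the same shape applied to 16 cases needs only four bit tests.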


  • Using several if-then-else sequences that test for ranges of the switch variable, each range with its own switch statement, can considerably improve efficiency. The example below shows splitting what would be a 16-value switch statement into two “if” statements, each with an eight-value switch statement, reducing the worst case from 15 tests to 8. Breaking it down further into four “if” statements, each with a four-value switch statement, reduces the worst case to six tests.
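A sketch of the range-split technique, where handle() is a stand-in for the real per-value case bodies:

```c
#include <stdint.h>

/* Placeholder for the real per-value work. */
static int handle(uint8_t v) { return v; }

/* One range test picks the half, then a shorter case sequence finishes,
   roughly halving the worst case versus a flat 16-value switch. */
int dispatch16(uint8_t value)
{
    if (value < 8) {
        switch (value) {
        case 0: return handle(0);
        case 1: return handle(1);
        case 2: return handle(2);
        case 3: return handle(3);
        case 4: return handle(4);
        case 5: return handle(5);
        case 6: return handle(6);
        case 7: return handle(7);
        }
    } else {
        switch (value) {
        case 8:  return handle(8);
        case 9:  return handle(9);
        case 10: return handle(10);
        case 11: return handle(11);
        case 12: return handle(12);
        case 13: return handle(13);
        case 14: return handle(14);
        case 15: return handle(15);
        }
    }
    return -1;   /* value outside 0-15 */
}
```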


  • Nesting switch statements can produce similar improvements in the worst case number of tests required. To do this effectively you really need two variables, or a variable that can be cleanly split into two fields, like a “mode” for the first-level switch and a “command code” for the second-level switch statements. The example below shows how an instruction opcode parser could be done with the opcode split into an instruction type field and a command code field.
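A sketch of the nested-switch opcode parser; the field layout (upper nibble = instruction type, lower nibble = command code) and all handler names are illustrative assumptions:

```c
#include <stdint.h>

/* Placeholder handlers standing in for real instruction execution. */
static int alu_add(void)   { return 100; }
static int alu_sub(void)   { return 101; }
static int load_imm(void)  { return 200; }
static int branch_eq(void) { return 300; }

int parse_opcode(uint8_t opcode)
{
    uint8_t type = opcode >> 4;     /* first-level switch variable */
    uint8_t cmd  = opcode & 0x0F;   /* second-level switch variable */

    switch (type) {
    case 0:                          /* ALU instructions */
        switch (cmd) {
        case 0:  return alu_add();
        case 1:  return alu_sub();
        default: return -1;
        }
    case 1:                          /* load instructions */
        switch (cmd) {
        case 0:  return load_imm();
        default: return -1;
        }
    case 2:                          /* branch instructions */
        switch (cmd) {
        case 0:  return branch_eq();
        default: return -1;
        }
    default:
        return -1;                   /* unknown instruction type */
    }
}
```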


By now you should be seeing that writing low power firmware requires taking a level of control over how you structure your code to minimize execution time. Next week I’ll continue on low power firmware design with arrays/structures and a discussion about complex algorithms and floating point math.



Low power firmware – timers, compilers and structures


The last two posts were about general concepts around low power firmware design. This week I’ll start getting into details including some code examples.


  • Use the largest clock pre-scaler that provides the resolution your firmware needs. The pre-scaler is typically a 4 to 8 bit counter while the counter/timer may be 8, 16 or 32 bits. Letting the small pre-scaler absorb the fast clock so the wider timer counts more slowly can reduce power considerably, particularly for free-running timers.
  • Software based timers running on a tick interrupt are easy to implement and may be necessary if your application requires more than a few timers to be running simultaneously. Dedicated timers will be more power efficient IF the time-out period can be achieved with a single terminal count interrupt from the timer.
  • When using a periodic tick interrupt for software timers, use the longest tick interval your code can tolerate. If sections of your code require tight timing, it will usually be more efficient to use one timer with a longer tick interval for general use and another timer with a short tick interval for the tight timing.
  • For a timeout protection timer that doesn’t have tight timing requirements, consider using an RTC alarm interrupt for a 1 second (or longer) timeout. The RTC running at 32kHz should be much lower power than an 8 or 16 bit timer using a clock divided down from the micro’s much faster clock. This also provides a means for long timeout periods without taking periodic tick interrupts. On most modern micros, the RTC is functional even without a battery voltage present but may still require a dedicated crystal, so be sure to check the datasheet if you aren’t using the RTC for its intended purpose.
  • Turn off timers when they aren’t being used. If software based timers are appropriate for your application, turn off the free-running timer when it is not being used. This sounds like a no-brainer but it’s common to leave the free-running timer running all the time. For dedicated timers, this is usually just a matter of selecting the right mode for the timer so it stops automatically when it reaches the terminal count.
  • For ultra-low power, when possible use a timer that counts down to 0 or counts up to all ones and generates an interrupt. Using a counter and match register containing the terminal count value requires more circuitry to be active and will consume more power.
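To illustrate the software-timer bullets above, here is a minimal sketch of tick-driven software timers; the timer count, names and tick source are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_TIMERS 4

/* Remaining ticks per timer; 0 means expired/idle. */
static volatile uint16_t sw_timer[NUM_TIMERS];

/* Called from the periodic tick interrupt: one decrement per active timer.
   The longer the tick interval, the less often this runs. */
void tick_isr(void)
{
    for (uint8_t i = 0; i < NUM_TIMERS; i++) {
        if (sw_timer[i] != 0)
            sw_timer[i]--;
    }
}

void start_timer(uint8_t id, uint16_t ticks) { sw_timer[id] = ticks; }

bool timer_expired(uint8_t id) { return sw_timer[id] == 0; }
```

Since all the timers share one tick interrupt, each additional software timer costs only a few instructions per tick rather than another running hardware timer.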


If you are writing firmware in any high level language you need to become intimately familiar with the compiler and learn what it does well and what it doesn’t. The only way to do that is to write some code and then examine the assembly code it generates.

  • Efficiency – Every instruction your micro executes that isn’t required is wasted power. Compiler efficiency is usually a case of you get what you pay for. Free and low-cost compilers based on the GNU toolchain can produce code 2X to 5X larger, slower and less power efficient than a compiler written for a specific architecture.
  • In-line assembly code – If you decide to use assembly language for time critical sections of code or for other reasons, you need to research how your compiler handles in-line assembly instructions. In some compilers, the registers you think you are using are actually memory based variables. If your assembly code uses many variables or contains a loop that is executed more than a few times, you are generally better off calling a function written in actual assembly code since the function calling overhead uses less power than the pseudo assembly code. If you do use in-line assembly language, review the compiler-generated assembly language before and after your assembly language code to see how much overhead the compiler imposes for saving/restoring the micro state information.
  • Compiler options – Compilers usually have many options, some of which can greatly increase or decrease the power efficiency of your code. A few things to look for:
    • Position independent code – Disable this option unless you absolutely need it. The relative addressing required for position independent code will use more power than fixed location code on every jump/call instruction executed.
    • Optimization options – Compilers typically provide options to optimize the generated code for speed or for size. The optimizations made for speed should also improve power efficiency since fewer clock cycles use less power, but they will result in larger programs. If you are tight on code space, check whether your compiler supports optimization on a per-file basis so you can optimize the most frequently executed sections of code.


Structures are great for organizing variables but you need to consider whether this convenience is worth the cost in power compared to individual variables. A few things to consider:

  • Every time a structure element is used the micro has to add the element offset to the structure base address. On most 32-bit micros this is USUALLY achieved with an indexed addressing mode so no additional clock cycles are required (but check the assembly code to make sure). On a low-end 8-bit micro this requires code to calculate the address, taking 8 to 10 instructions or more.
  • Arrays of structures further complicate the math involved in calculating addresses. To calculate the offset into the array, the array index must be multiplied by the structure size and that is done in software on most micros used in embedded applications. If the array only contains a few instances of the structure it can be considerably more power efficient to have individual structures with another variable containing a pointer to the structure to use. Another technique for use with larger arrays will be discussed in the “Arrays and structures” section.
  • Don’t assume your compiler is smart enough to calculate the base address for a particular structure in an array once and use it for several consecutive lines of C code that access that structure. There is a good chance it will calculate that base address for each line of C code. To use less power, set a pointer to the structure once in this situation; the compiler then only has to add the fixed member offsets to the pointer.
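The pointer technique can be sketched as follows; the structure and field names are made up for illustration:

```c
#include <stdint.h>

/* Illustrative structure and array; the names are assumptions. */
struct sensor {
    uint16_t raw;
    uint16_t scaled;
    uint8_t  flags;
};

struct sensor sensors[8];

/* Taking the pointer once means the base address (index * size) is
   calculated a single time; each member access then only adds a fixed
   offset to the pointer. */
void update_sensor(uint8_t n)
{
    struct sensor *p = &sensors[n];   /* address calculation done once */

    p->scaled = (uint16_t)(p->raw * 2u);
    p->flags  = (p->scaled > 1000u) ? 1u : 0u;
}
```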



  • One place where structures may work to save power is in parameter passing. Particularly with low-end 8 bit micros, passing multiple parameters can be considerably more expensive in terms of power and time than passing a pointer to a structure containing those parameters. The example below illustrates this, even using the structure for the return value.
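A minimal sketch of passing parameters through a structure pointer; the pwm_params structure, its fields and the duty-cycle math are illustrative assumptions standing in for real peripheral setup:

```c
#include <stdint.h>

/* One parameter block instead of several individual arguments. */
struct pwm_params {
    uint16_t period;
    uint16_t duty;
    uint8_t  channel;
    uint16_t result;     /* the return value also travels in the structure */
};

/* A single pointer is passed instead of three parameters plus a return
   value, avoiding the per-argument push/copy overhead on small micros. */
void set_pwm(struct pwm_params *p)
{
    p->result = (uint16_t)((uint32_t)p->duty * 100u / p->period);
}
```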


Next week I’ll continue on low power firmware design with switch statements and arrays/structures.



Low power firmware concepts, part 2

This week I will continue from last week’s post with some concepts for low power firmware design.

Structured code

Highly structured source code is nice to work with but unfortunately can run considerably slower and consume considerably more power than less structured code. You don’t have to forget everything you learned about structured code, but violating some principles of structured code can help save power:

  • Global and fixed address variables – Global variables can save a considerable amount of power by eliminating parameter passing to functions (particularly for older 8-bit micros or when using external RAM). Fixed address variables can help reduce the power used when calculating addresses for structure members. If you have the good fortune of having extra RAM, using fixed address variables in functions can be much more power efficient than working with temporary variables that are addressed relative to the stack pointer.
  • Put small functions in-line – If you have small functions that are called frequently you will save power by placing the code for those functions in-line. Particularly with older 8-bit micros, the overhead for calling and returning from a function can take dozens of clock cycles. Some compilers support using function calls in your source code to help keep the source code cleaner but will place the code for the function in-line instead of doing actual function calls.
  • Peripheral “drivers” – If you use a micro with multiple instances of a peripheral type, say 4 UARTs for example, you can save power by having a set of code for each UART instead of generic code that is passed a parameter to specify which UART to use. This avoids the parameter passing and allows you to hardwire the peripheral register addresses in the source code instead of calculating addresses at run time. Particularly with 8-bit micros this can yield a considerable savings for frequently used peripherals. These types of functions are typically fairly small so you won’t necessarily use a lot of program space by doing this.
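The in-lining point can be sketched like this; clamp_u8() is a made-up helper, and the inline keyword is only a hint to the compiler, so check the generated assembly:

```c
#include <stdint.h>

/* A small, frequently called helper. Declaring it static inline invites
   the compiler to expand it at the call site, dropping the call/return
   overhead entirely. */
static inline uint8_t clamp_u8(uint16_t v)
{
    return (v > 255u) ? 255u : (uint8_t)v;
}

uint8_t scale_reading(uint16_t raw)
{
    return clamp_u8((uint16_t)(raw >> 2));   /* likely expanded in-line here */
}
```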


Value added and non-value added code

Lean manufacturing has a concept of value added steps and non-value added steps. Value added steps directly contribute to producing the end product. Non-value added steps are essentially overhead steps that may be necessary but don’t directly contribute to producing the end product. An example of a non-value added step would be moving sub-assemblies from one section of the manufacturing floor to an assembly line. It is natural to focus on the value-added code for power optimizations because that is where the primary tasks for your product are. For the ultimate power savings, you should analyze your code to identify the non-value added sections of code that don’t get much consideration. You need to determine if the non-value added sections of code are absolutely necessary, “nice to have” or not really necessary. For the non-value added sections of code that can’t be eliminated, can they be streamlined, executed less frequently or grouped with other non-value added functions in order to save power (particularly if the micro is being taken out of a low-power state to execute the non-value added code)?


The power of loops

Firmware loops can be very effective in reducing code size and making the source code cleaner and more readable. They can also be huge wasters of power simply because of the overhead instructions required to implement the loop. Managing the loop counter, checking for the loop termination and the jump back to the start of the loop can all be considered non-value added code. Unless you are severely constrained on code space, unrolling frequently executed loops into a repetitive series of instructions can save a considerable amount of power. This is most easily done for loops with a small, fixed number of iterations. Even loops with a variable number of iterations can be made more power efficient this way. The test for “loop” termination would need to be done between each series of instructions but the jump would only be taken once instead of each time through the actual loop. This can be a considerable power savings particularly for micros performing instruction pre-fetches that are discarded at the end of each pass through the loop.
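A sketch of unrolling a small fixed-count loop; an 8-byte copy is used here purely for illustration:

```c
#include <stdint.h>

/* Rolled version: the counter update, termination test and backward jump
   execute on every one of the eight passes. */
void copy8_rolled(uint8_t *dst, const uint8_t *src)
{
    for (uint8_t i = 0; i < 8; i++)
        dst[i] = src[i];
}

/* Unrolled version: the loop overhead disappears entirely, at the cost of
   a larger code footprint. */
void copy8_unrolled(uint8_t *dst, const uint8_t *src)
{
    dst[0] = src[0];
    dst[1] = src[1];
    dst[2] = src[2];
    dst[3] = src[3];
    dst[4] = src[4];
    dst[5] = src[5];
    dst[6] = src[6];
    dst[7] = src[7];
}
```

Both functions produce the same result; the unrolled one simply spends no cycles on loop bookkeeping.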


Know your variables

This may seem like a no-brainer but particularly with C compilers what you see in the code isn’t always what you get in execution. Pay special attention to this with variables used in loops since that one line in your source code can be executed hundreds or thousands of times.

  • Variable size – Particularly with 8-bit micros, make sure your variable sizes aren’t larger than they need to be. As shown in the code segment below, simply incrementing a 32-bit value can turn into 3 tests for overflow, 3 add instructions and as many as 8 memory accesses. In most cases this is just a waste of code space, but if the variable is a free-running counter it will be wasting power every 256th time it is incremented. A 32-bit add or compare is even worse since all four bytes of the variable must be operated on every time. Most compilers will size an “int” to the word size of the micro, but to be sure you should explicitly size your variables with types like uint8_t and uint16_t from <stdint.h> (or whatever syntax your compiler uses).
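The difference can be sketched as follows; whether the 32-bit increment really expands into the multi-byte sequence described above depends on your compiler and target:

```c
#include <stdint.h>

/* A free-running 32-bit counter versus an 8-bit counter. On an 8-bit core,
   count32++ expands to a multi-byte add-with-carry sequence touching all
   four bytes; count8++ is a single read-modify-write. */
uint32_t count32;
uint8_t  count8;

void tick(void)
{
    count32++;   /* four bytes of loads, adds and stores on an 8-bit micro */
    count8++;    /* one byte: load, increment, store */
}
```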


  • Signed vs unsigned variables – Arithmetic operations on signed and unsigned values aren’t handled the same way by compilers. To avoid extra processing (and weird results from the math operations) be careful about the declarations and, as with size, explicitly declare variables signed or unsigned rather than assuming an “int” is one or the other. Also be very careful about mixing signed and unsigned values in math operations.
  • Variable alignment – Some 32-bit micros require 16-bit values to be on 2-byte boundaries and 32-bit values to be on 4-byte boundaries and will generate exceptions or bus faults for unaligned accesses. More sophisticated micros can handle misalignments and will happily turn an unaligned 32-bit access into two, three or four memory accesses without you knowing it. C compilers typically have options to allow misalignments or force certain alignments. Not so much related to power usage, compiler-forced alignments can be very wasteful of RAM space by placing every variable on a 4-byte or even 8-byte address boundary. This can be particularly wasteful with large structures with various sized elements and arrays of 8 or 16-bit elements. The “pragma pack” directive can help you in this regard with structures but should be used with care since it puts you in charge of determining alignment.
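A small illustration of the padding versus packing trade-off; the exact sizes depend on the compiler and target, and pragma syntax varies between compilers:

```c
#include <stdint.h>

/* Naturally aligned: the compiler typically inserts 3 padding bytes before
   "value" and 3 after "mode", commonly giving 12 bytes on a 32-bit micro. */
struct padded {
    uint8_t  flag;
    uint32_t value;
    uint8_t  mode;
};

/* Packed: no padding, 6 bytes, but "value" may now be unaligned -- only
   safe if the target tolerates (or the compiler fixes up) such accesses. */
#pragma pack(push, 1)
struct packed_s {
    uint8_t  flag;
    uint32_t value;
    uint8_t  mode;
};
#pragma pack(pop)
```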
  • ASCII vs Unicode – If your product interfaces with text strings to a Windows application you may need to support 16-bit Unicode values; otherwise 8-bit ASCII should be adequate. If you are stuck with Unicode and your product requires a proprietary driver or uses a Windows app, you should be able to save power by using ASCII in the firmware and translating to Unicode in the PC based code.


Next week I’ll get into the details on what you can do to make your firmware more power efficient.

Low power firmware concepts, part 1

So far I’ve discussed mostly hardware aspects of low power design. This is the first of several posts on low power firmware design. Some high level concepts will be presented first but they shouldn’t be glossed over, they can make a profound difference on power consumption. As was mentioned in the introductory post, a product’s hardware design will establish the minimum level of power consumption for the product. For many types of products, the firmware will determine the highest level of power consumption. More importantly, the firmware will also determine how much time is spent at the minimum level of power consumption.

Some power saving concepts will be presented that can be applied whether your firmware is in assembly language or C. There are also a number of techniques that can be applied when using C to force more power efficient code than a compiler would normally generate. Some of these techniques will go against the basic principles of structured code. Unfortunately, highly structured code is very power inefficient because its structure forces the micro to execute many instructions that don’t directly contribute to completing the task at hand.

Power saving in firmware is all about (1) eliminating unnecessary clock cycles the micro uses while performing a task and (2) putting the micro in a low-power state as often and for as long as possible for the given application. To a large extent, eliminating unnecessary clock cycles naturally leads to spending more time in a low-power state, but the time spent in a low-power state is more heavily influenced by the code structure. Keep in mind that anything your code does that it doesn’t need to do, or does more frequently than it needs to, is just wasting power. This is a good place to remind you about the equation for the current used during a specific event:

I_event = (Time_operating x I_operating) + (Time_sleep x I_sleep)

Since the operating current of modern micros can be several orders of magnitude higher than their standby or sleep currents, your goal is to reduce time spent performing the task and maximize the time spent in a low power state.
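To make the equation concrete, here is a small worked example with assumed numbers: a 1 second event spent 5 ms active at 10 mA and 995 ms asleep at 2 µA.

```c
/* Charge used per event in millicoulombs (mA x s = mC). The numbers in
   the usage below are assumptions for illustration, not from a real part. */
double event_charge_mC(double t_op_s, double i_op_mA,
                       double t_sleep_s, double i_sleep_mA)
{
    return (t_op_s * i_op_mA) + (t_sleep_s * i_sleep_mA);
}
```

With these numbers the 5 ms active sliver contributes about 0.050 mC of the roughly 0.052 mC total, around 96% of the charge per event, which is why shaving active cycles pays off so quickly.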


Choice of programming languages

For the ultimate power savings you must be in control of the instructions your micro executes as much as possible. Carefully written assembly language can provide lower power consumption than the best compiled code but is slower to develop and harder to maintain. If you choose to use a compiled language, C is probably the best choice since you can achieve a decent level of control over the compiled code and there won’t be as much run-time code generated by the compiler that you aren’t aware of as there can be with C++. A considerable amount has been written about the suitability of C++ for embedded programming. Whatever your stance on this, it is hard to debate that well written C code is more efficient than equally well written C++ code.

Main function vs the rest of the time

Very few products actively perform their main function 100% of the time yet that is where many engineers spend their time trying to reduce power consumption. Particularly in embedded applications, most firmware spends most of its time waiting for something to do. This may be waiting for input from a user, waiting for some event or just waiting for time to perform some repetitive task. What the firmware does while inactive often has a bigger impact on power consumption than what it does while active. Maximizing the time the micro spends in the highest power saving mode suitable for the application is key to reducing long term power consumption.

Frequency of events

Make sure your product only performs its main function as often as is really necessary. Except for low level control systems, it is fairly rare that monitoring physical conditions needs to be done multiple times per second or even per minute. For example, if monitoring temperature is a primary activity, you may be able to get by only checking the temperature once every few minutes. The same can be said for monitoring battery voltage. Even when response time is critical, for monitoring most physical conditions a longer interval can be used when the condition being monitored is well within normal bounds, then the interval reduced as the condition approaches a critical threshold.

Polled vs interrupt driven events

Particularly with slow peripherals such as UARTs, interrupt driven firmware will use considerably less power than polling firmware. A micro may execute thousands of instructions during the time it takes a UART to transmit or receive a byte of data. Even at a relatively fast 115.2K baud, an 8MHz processor will burn almost 700 clocks in one byte time; drop that to 19.2K baud and it goes up to over 4,100 clocks per byte. On the other hand, there are situations where polling can be more power efficient. For example, with a fast peripheral such as an A/D converter or SPI/I2C controller, the interrupt processing overhead for a slow 8-bit micro may use more power than a simple polling loop.

DMA vs firmware loops

If an application requires moving large amounts of data to or from a peripheral (or even small amounts of data with slow peripherals), it is generally more power efficient to move the data with a DMA controller than in a firmware loop. Even if the DMA controller is running at the same clock speed as the micro, the circuitry in the DMA controller is much smaller than the circuitry in the micro core so it will use less power. This is an area where you must do your homework up front to ensure you select a micro that supports DMA operations with the required peripherals and that the DMA controller is operational with the micro in a sleep mode.


If DMA is not an option and your micro has transmit/receive FIFOs in its UARTs, take advantage of these FIFOs so that your firmware doesn’t take an interrupt for each byte transferred. For instance, when transmitting data you can fill the FIFO and only take an interrupt when the last byte in the FIFO has started shifting out. Many newer micros with UART FIFOs allow you to set the point where the interrupt is generated, so you don’t have to wait for the last byte if you have over-run/under-run concerns. If you are moving a string of data between a UART and a buffer in memory and only taking an interrupt every 8th byte, not only will you save the processing overhead of seven interrupts out of every eight bytes, but a good compiler will also keep the loop control variables and buffer pointers in the micro’s registers and only load/save those variables once for every eight bytes.
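A sketch of FIFO-based transmit handling; the FIFO depth, uart_fifo_put() and the interrupt model are all assumptions for illustration, not a real peripheral API:

```c
#include <stdint.h>

#define FIFO_DEPTH 8

/* Stand-ins for the hardware FIFO; uart_fifo_put() represents the real
   peripheral register write. */
static uint8_t fifo[FIFO_DEPTH];
static uint8_t fifo_level;

static void uart_fifo_put(uint8_t b) { fifo[fifo_level++] = b; }

/* Called once per FIFO-empty interrupt: load up to 8 bytes in one pass so
   seven of every eight per-byte interrupts are avoided. Returns the number
   of bytes loaded; the caller advances its buffer pointer by that amount. */
uint8_t uart_tx_isr(const uint8_t *buf, uint16_t remaining)
{
    uint8_t n = (remaining < FIFO_DEPTH) ? (uint8_t)remaining : FIFO_DEPTH;

    for (uint8_t i = 0; i < n; i++)   /* pointers/counters can stay in registers */
        uart_fifo_put(buf[i]);

    return n;
}
```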

Next week we’ll continue with a few more high level concepts before getting into some of the details of low power firmware design.

Battery selection, part 2

Last week I discussed several aspects of using batteries. This week I’ll discuss battery charging related issues and wrap up with a couple of physical considerations for battery usage.

  • Charging – A thorough discussion of battery charging would take a book several times the size of this one but there are a few basics you should understand about battery charging. The first thing you should be aware of is each battery technology has certain charging algorithms that are optimized for that technology. Using an algorithm designed for a different technology can have results ranging from poor charging to disastrous (back to smoke and flames again). The basic parameters of battery charging are voltage, current and time. In general, the higher the voltage and/or current the shorter the charging time and each battery technology will have limits on the maximum voltage and current. Some types of batteries respond well to charging with high current pulses while others won’t. Some will benefit from a trickle top-off charge while others won’t. Fortunately, a number of semiconductor companies have battery charger chips for specific battery chemistries so you don’t need to get bogged down in this particular detail.
  • Charge duration vs battery life – It is somewhat important to make a distinction between charge duration and battery life. Charge duration refers to the length of time from charging until the output reaches the cut-off voltage. Charge duration can be impacted by improper or incomplete charging, load, temperature and battery age. All batteries have a finite useful life (based on number of charge cycles or age) and battery life literally refers to the useful life of a battery. Battery life can be impacted by aggressive charging and over charging, exceeding the maximum current rates and using the battery below its rated cut-off voltage. Long charge duration and long battery life are not mutually exclusive but to achieve both you must treat a battery gently. Deep discharging to increase charge duration will reduce the life of a battery (as will very aggressive charging algorithms). Particularly in the case of LiPo batteries, the discharge curve is so steep when the cut-off voltage is reached there is minimal benefit and a high penalty in battery life reduction for going below the cut-off voltage. The term “battery life” is often used when referring to charge duration but when discussing ways to save power, what you do to increase charge duration will generally lead to longer battery life too.
  • Self-discharge – All battery chemistries suffer some amount of self-discharge over time. This generally isn’t a concern as far as your product design is concerned. However, electronics have reached such low current levels that the battery’s self-discharge may play a significant role in your product’s charge duration and may even be higher than your product’s discharge rate.
  • State of charge – One of the trickiest things about using batteries is determining their state of charge. Simply monitoring the battery voltage level is the simplest method to implement but often tends to provide poor results. A lead-acid battery that hasn’t been fully charged will have a fairly high voltage that drops rapidly when a load is applied, giving the impression the battery is discharging very quickly. A lightly loaded lithium polymer battery has such a flat discharge rate that there won’t be a significant voltage drop until there is only 5-10% of capacity left. A battery “gas-gauge” chip can provide a better indication of the state of charge but you still have to characterize the battery in your application to properly make use of the information the chip provides.
  • Multiple batteries – Batteries are easily put in series to increase voltage, in parallel to increase current capability or both. This can be done with little concern with non-rechargeable batteries. For rechargeable batteries, there are a number of considerations primarily around charging. The details on this are also outside the scope of this book but if you use multiple batteries in your product, you should use a battery pack instead of individual cells to ensure the cells are from the same manufacturer and roughly the same age.
  • Physical considerations – There are a few physical considerations you must take into account in your product design when using certain types of batteries. For example, some types of batteries may out-gas during charging (usually a sign the battery is being abused) and require venting to the outside world to avoid explosion. The lithium based rechargeable batteries can literally swell during charging and require some amount of room to expand. Battery manufacturers usually provide information if special considerations are needed.

If you are new to designing battery powered products, it will be well worth your time to do some research into the various battery technologies. Even if you are an old hand at designing with batteries, if you are considering a battery technology you haven’t used before for your product, you should do some in-depth research into that technology to understand its intricacies. Manufacturers of batteries intended for industrial applications or OEM use generally provide considerably more detail in their spec sheets than those for consumer oriented batteries. These spec sheets and app notes are a good source of valuable information.

For the most part, your application will dictate the type of battery you use. In some cases you may have a choice of battery types or may have to decide between battery types when more than one would be suitable for your application. Below is a high level overview of the pros and cons of the most popular battery types.

[Battery comparison table]


Battery selection, part 1

There are a wide variety of battery technologies available today, ranging from the traditional lead-acid battery to the latest lithium polymer (LiPo) batteries. For the most part, your application will dictate the type of battery you use. In some cases you may have to decide between battery types when more than one would be suitable for your application.

Batteries seem like they should be easy to use, just hook them up and go. In fact you can usually do just that but chances are you won’t be satisfied with the resulting charge duration or battery life. Batteries can easily be abused if you don’t fully understand their specs. Some battery chemistries are more tolerant of being abused than others but any abuse tends to shorten battery life. Each battery technology has its own unique set of characteristics and many of those characteristics vary based on the environment they are in, how they are used and for rechargeable batteries, how they are charged. To a large degree and for the sake of this discussion, you can model a battery as a fixed voltage source connected to the load through a variable series resistor. There are a number of things such as temperature, load and age that act on this variable resistor to decrease the battery output.

Below are several important things you should know that apply to nearly all battery chemistries:

  • Nominal voltage – My advice to you is forget you ever heard the nominal voltage spec for the battery you are working with. Nominal voltages generally reflect a certain point in the battery’s discharge curve under conditions your battery may never see in the real world. For example, a LiPo cell has a nominal voltage of 3.6V but its operating voltage range is from 4.2V to 3.0V. As you can see in the discharge curves below for a LiPo battery, under a light load (the blue line) the curve flattens out around 3.6V just before falling off the cliff. Other than this short period of time, it would be hard to correlate anything on these discharge curves to 3.6V. The two critical voltages you need to be concerned with are the maximum voltage for charging and the rated cut-off voltage for discharging. To get the full life from your batteries, never charge above the first or discharge below the second.
  • Factors affecting output – Load and temperature are the primary factors that create the discharge curves for rechargeable batteries. In the case of load, the variable resistor in our battery model acts much like any other resistor and the output decreases as the load increases. Temperature appears to make this variable resistor act the opposite of a real resistor: as the temperature decreases the apparent resistance increases, reducing the voltage at the output. The combination of a heavy load and low temperatures can result in a considerable decrease in the output voltage. As shown in the graph below, the output of a LiPo battery can decrease by over a volt from a light load at high temperature to a heavy load at low temperature. For non-rechargeable batteries, the age of the battery also impacts the voltage. As these batteries get older (whether they are in use or not), the apparent resistance increases, causing the output to decrease.

LiPo temp curves


  • Discharge curves – All battery technologies have a characteristic discharge curve, typically plotting voltage against mAh capacity (or age). While useful for comparing battery chemistries, once you select a battery technology you need to look at the discharge curves for specific battery models. Variations on the chemistry and battery construction from manufacturer to manufacturer yield different discharge curves. Specifically, you need to pay attention to the curves for the load you expect to place on the battery and the temperatures you expect your product to encounter. These two factors can have a significant impact on the charge duration your batteries will have in actual use. The discharge curves in the graph below are for a LiPo battery at three different loads (rated discharge rate multiplied by 2, 1 and 0.2). You really need to look at both the voltage at temperature and load graph and the discharge curves to get a feel for how a battery will perform in your application.

LiPo discharge curves

  • Rated current – Batteries typically have both maximum continuous current and maximum pulse current specs. These values should never be exceeded: best case, you severely shorten the life of the battery; worst case, with lithium-based batteries, the result can literally be smoke and flames. If your design requires getting close to these rated currents, contact the battery manufacturer to discuss your application and make sure your product will be safe and operate properly. You may also consider using two batteries in parallel to increase the available current (though this carries its own set of issues that are beyond the scope of this discussion).

Next week I’ll discuss battery charging and several other important aspects of battery selection.

Micro selection, part 4

This week I will discuss power saving modes in micros. You will need to thoroughly research the modes you want to use AND what the on-chip peripherals are capable of doing in those modes. On a firmware project I recently worked on, a single sentence with HUGE implications was buried on page 358 of a 550+ page datasheet: “During Sleep mode, all clocks to the EUSART are suspended.” Because of this, the micro could not be put into a sleep mode while communicating with a wireless radio module. More significantly at the product feature level, a data transmission to the device from another device in the system would not wake up the micro; instead, the micro had to wake up periodically to poll the other devices and check if they had a message for it.

Low Power Modes

Modern micros provide a wide variety of power saving modes, from a simple run/idle/sleep selection to extreme control over individual circuits and on-chip peripherals. This can get very confusing because you can run into terms like “idle”, “nap”, “snooze”, “sleep”, “hibernate” and “deep sleep”. There are no standards regarding low-power states and the power levels and functionality available for a given mode name may vary widely from micro to micro.

There are several things you need to pay close attention to regarding power saving modes:

  • Make sure the on-chip peripherals you need are functional in the low power modes you want to implement. For example, in some micros the UARTs are fully operational while the micro is sleeping; others, like the one mentioned above, require the micro to be running for the UART to be operational. As discussed in a previous post, pay special attention to the clock chains in the micro you select. If the peripheral clock is the same as the CPU clock, or is simply divided down from it, the micro will probably have to be running while any peripherals are functioning. You may still be able to put the micro in an “idle” mode, which means it won’t be executing code but its internal clocks are still running and consuming power.
  • One popular family of small micros powers down its main registers and internal SRAM to achieve sub-1uA deep-sleep currents. On these micros, only the instruction pointer and two special registers holding program state information are maintained while in deep sleep. Other than these registers, waking up from deep sleep isn’t much different from coming out of reset. One very important difference: any variables that the compiler automatically initializes at reset, or that your reset code initializes, won’t be initialized when coming out of deep sleep. Your wake-up code must handle initializing these variables and any on-chip peripherals, delaying the firmware’s response to the event that triggered the wake-up.
  • Pay close attention to what conditions bring a micro out of its low power states. In light and medium sleep modes, any internal or external interrupt will generally wake the micro. The lowest current modes usually have the fewest options for waking the micro. Some micros allow GPIO pins to be configured to wake the micro on a per-pin or per-port basis while some micros will only wake on transitions on external interrupt pins or a specific “wake” pin. For the micro mentioned earlier, the only conditions that will wake the part from deep-sleep are an alarm interrupt from its Real Time Clock or a specific external interrupt pin.
  • Micros with an on-chip voltage regulator generating the core voltage often bring that core voltage out on device pins for filtering. If you select a micro that turns off or reduces its core voltage in a low-power mode, be careful with the amount of capacitance on these pins. They typically only require a filter capacitor, so use the smallest capacitor possible since it will be charged and discharged every time the low-power mode is entered or exited.
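To make the deep-sleep wake-up caveat concrete, here is a minimal sketch in C. The variable and function names are illustrative only, not any vendor’s actual API; the point is that after a deep-sleep wake-up the C startup code may not run, so your code must re-create the state it normally provides:

```c
#include <stdint.h>
#include <string.h>

/* Reasons the firmware can find itself at the reset vector. */
enum start_reason { COLD_RESET, DEEP_SLEEP_WAKE };

/* State the C runtime normally zeroes at reset, but which is lost
   when SRAM is powered down in deep sleep. */
uint16_t sample_count;
uint8_t  tx_buffer[32];

void app_init(enum start_reason reason)
{
    if (reason == DEEP_SLEEP_WAKE) {
        /* The C startup code did not run, so re-create the initial
           state by hand before touching any of these variables. */
        sample_count = 0;
        memset(tx_buffer, 0, sizeof tx_buffer);
        /* Re-initialize on-chip peripherals here as well (UART baud
           rate, timer periods, pin directions); they also came back
           up in their reset state. */
    }
    /* On a cold reset the runtime already zeroed the BSS, so there
       is nothing extra to do here. */
}
```

Note that all of this re-initialization happens before the firmware can respond to whatever event triggered the wake-up, which is exactly the latency penalty described above.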

This wraps up the discussion on micro selection. As you can tell, selecting the micro for your application will either enable a good low power design or set you up for failure in your power saving efforts. Unfortunately there is no “right” micro for all low power applications. For a low power design, the micro selection must be a collaborative effort between the hardware and firmware engineers working on the project. Without knowing the specifics of what the firmware needs to do, it is impossible for a hardware engineer to select the right micro for the job.


Micro selection, part 3

This week, I will discuss how using a micro’s on-chip peripherals can help to reduce your power consumption along with how GPIOs can help or hinder your power savings efforts.

Internal vs External Peripherals

With modern micro architectures, using internal peripherals tends to be considerably more power efficient than using external peripherals. There are a number of reasons for this:

  • The main reason is that most modern micros run their core logic at a lower voltage, so internal peripherals operate at that lower core voltage while external peripherals must operate at the higher system voltage. Since dynamic power in CMOS scales with the square of the supply voltage, this difference alone is significant.
  • The trend towards 2 and 3 wire peripheral interfaces like I2C and SPI, which reduce pin count and package size, means clocked serializer and deserializer circuits are required for external peripherals, while internal peripherals have much faster parallel interfaces. These added circuits consume additional power, and the micro also burns power while waiting for data transfers with the external device to complete.
  • Passing signals through the micro’s I/O buffers to access off-chip peripherals consumes additional power. When you must use off-chip peripherals, high-speed serial SPI/I2C interfaces may be more efficient than 8 or 16 bit parallel interfaces since the I/O buffers will always consume power, not just during the data transfers. Using these serial interfaces will be even more power efficient on micros that have SPI/I2C controllers so the micro can sleep while the transfers take place. If you must use parallel interface external peripherals, the I/O pins used for data lines should be driven low when not in use to minimize power.
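As a sketch of that last point, parking an idle parallel bus might look like the following. The port registers here are hypothetical stand-ins, modeled as plain variables so the idea can be shown; a real part would use the vendor’s register definitions:

```c
#include <stdint.h>

/* Hypothetical memory-mapped port registers, modeled as variables
   for illustration. A 1 bit in PORT_DIR makes that pin an output. */
uint8_t PORT_DIR;
uint8_t PORT_DATA;

/* When the external parallel peripheral is idle, drive all eight
   data lines low so the pins neither float nor source current into
   external loads. */
void park_parallel_bus(void)
{
    PORT_DIR  = 0xFF;  /* all data pins as outputs */
    PORT_DATA = 0x00;  /* drive them to a solid logic low */
}
```

On real hardware you would call this after each transfer completes, then reconfigure the pins for the bus protocol before the next access.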


There are a number of ways that GPIO can impact power consumption:

  • The I/O ports on most older micro architectures have fixed drive strength. Many modern architectures provide drive strength control on a per port or even per pin basis so the drive strength can be tailored to the circuit requirements.
  • Similarly, if your design uses more than a few GPIO outputs that change states frequently, for ultimate power savings look for a micro that supports programmable slew rates. Faster signal transitions require more power than slower ones for the same capacitive load.
  • Most micros provide options for internal pull-ups and/or pull-downs on their I/O ports. While convenient and a good way to reduce total parts count, these internal pull-ups tend to be poorly controlled and can be as low as a few K-ohms on some micros, leading to excessive current draw. Consider a 3.3V micro with an internal 5K pull-up on a switch input that is normally grounded: this 5K pull-up will draw 660uA, compared to 33uA for an external 100K pull-up. Some micros also only provide control of the pull-ups/pull-downs on a per-port basis, so every input on a port gets the pull-up/pull-down enabled just because one input needs it. If extreme low power is required, it’s much better to use discrete pull-up/down resistors only where needed, with as high a resistor value as the circuit can tolerate.
  • Unless the micro datasheet says otherwise, it’s usually best for low power to configure unused GPIO as outputs driving a logic low. In this configuration the pin is held at a solid ground level and, with nothing connected to it, draws virtually no power.
  • Slowly rising or falling input signals can cause excessive power consumption and even generate noisy oscillations while the input is between the low and high input thresholds. Few micros provide Schmitt trigger inputs so an external Schmitt trigger buffer may be needed to provide fast, clean transitions to the micro. If the slow transition time is due to a weak pull-up it may be more efficient to use a stronger pull-up than to power the Schmitt trigger device.
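The pull-up numbers above come straight from Ohm’s law (I = V/R). A small helper, a sketch rather than anything micro-specific, lets you evaluate the trade-off for your own supply voltage and resistor values:

```c
/* Current drawn through a pull-up when the input is held at ground:
   I = V / R (Ohm's law). Returns the result in microamps. */
double pullup_current_uA(double supply_v, double r_ohms)
{
    return (supply_v / r_ohms) * 1e6;
}
```

For the example above, pullup_current_uA(3.3, 5000.0) gives 660uA for the internal 5K pull-up versus pullup_current_uA(3.3, 100000.0), 33uA, for the external 100K part.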

Next week we’ll wrap up the topic of micro selection with a discussion on low-power modes. This is an area where there is no standardization so “idle mode” and “deep sleep” may have different meanings and levels of functionality even between micros from the same manufacturer.

Micro selection, part 2

Last week I presented some high level considerations for selecting a micro for low power design. In this post we’ll get into more details on how your micro selection can benefit or impact your power reduction efforts.

Program Location

For most microcontrollers, accessing on-chip RAM typically uses less power than accessing on-chip Flash, so you may be able to save power by executing code from RAM instead of Flash. A number of micros don’t support executing code from RAM (particularly Harvard architecture micros), and the power consumption of RAM accesses relative to Flash varies, so be sure to research this in depth before counting on these savings. The savings may also evaporate if the Flash and RAM data busses aren’t the same width (a 16-bit path to Flash and an 8-bit path to RAM, for instance).

You should avoid using off-chip memory for program execution if at all possible. Besides taking more clock cycles to access off-chip memory, the power consumed by the off-chip I/O buffers and the external memory devices make off-chip memory accesses very expensive in terms of power usage.

C or Assembly Language

It is hard to beat lovingly hand crafted assembly language for efficient power usage. However, unless the energy usage for your application is so critical you are counting nanoamps, it is hard to justify the extra development time and maintenance issues associated with assembly language.

Most modern micros (even some 8-bit parts) are architected to work efficiently with compiled C code. If a micro has a very limited/fixed stack size, doesn’t support indexed memory addressing modes, can’t do arithmetic or logical operations directly on memory locations or doesn’t deal well with immediate addresses or values, it won’t efficiently execute C code and will waste considerable power because of it.

Clock Frequency

Most modern micros are implemented in a CMOS process where power consumption scales almost linearly with clock frequency. Consider again the equation for current used by a micro for performing a particular task:

I_event = (Time_operating x I_operating) + (Time_sleep x I_sleep)

Don’t fall into the trap of thinking you can run the micro faster to use less current. Since the power scales linearly with the clock frequency, running twice as fast while using twice as much current produces the same number for the operating portion of the equation. Ironically, if the task occurs repeatedly at a specific interval, the total current used will actually increase since the sleep time increases.
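A quick calculation makes the trap concrete. This is a sketch in C with hypothetical numbers: a task repeated every 100mS, 50mS of work at 1mA operating current, and 50uA sleep current:

```c
/* Charge (current x time) used over one fixed repetition interval:
   operate for t_op, then sleep for the remainder of the interval.
   Units below are milliseconds and amps. */
double interval_charge(double t_interval, double t_op,
                       double i_op, double i_sleep)
{
    return (t_op * i_op) + ((t_interval - t_op) * i_sleep);
}
```

Running the numbers: at the base speed the interval uses (50 x 0.001) + (50 x 0.00005) = 0.0525, while at double speed and double current it uses (25 x 0.002) + (75 x 0.00005) = 0.05375. The operating term is unchanged, but the extra sleep time adds charge, so the faster clock is slightly worse, exactly as described above.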

A number of modern micros use a PLL to generate a higher speed clock for the micro core. If you plan to use clock throttling as part of your power management scheme on one of these parts, be aware that the PLL’s response time when it is turned on/off, or even when the clock speed is changed significantly, can greatly increase the firmware’s response time to the events that trigger a clock speed increase.

It is also important to consider the internal clock chains for anything but the simplest micros. As shown in the diagram below, the more branches on the clock tree the more control you generally have over clock speeds and enables. In simpler micros the peripheral clock is divided down from the micro core clock. Two things to keep in mind in this case are (1) the micro must be running when any peripherals are being used and (2) reducing the micro clock speed is not an option for saving power when a higher speed clock is required for a peripheral (like a high-speed UART).  Most modern micros will have at a minimum a clock for the micro core and another clock for the on-chip peripherals. This isn’t sufficient if your goal is the ultimate power savings.

Regarding UARTs, some of the more recent micros provide a fractional divider in addition to the basic baud rate divider for better accuracy at higher baud rates. The drawback to this capability is that it usually requires a clock of at least 16X the desired baud rate, so a 115.2K baud rate requires a clock of over 1.8MHz, greatly increasing the power used by the UART. There is a family of Cortex-M3 based micros whose fractional divider requires the 16X clock whether the fractional divider is being used or not, making this an expensive feature power-wise even when unused.
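The 16X oversampling requirement is simple to quantify. The 16X factor below is typical for such fractional baud rate generators, but check your part’s datasheet before relying on it:

```c
/* Minimum module clock for a UART that oversamples the receive line
   at 16x the baud rate (typical for fractional baud generators). */
unsigned long uart_min_clock_hz(unsigned long baud)
{
    return baud * 16UL;
}
```

For example, uart_min_clock_hz(115200) returns 1843200, matching the figure above.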

Next week, we’ll continue looking into the detail level considerations for your microcontroller selection.

Micro selection, part 1

This post begins a series on selecting the micro for your low power design. It would be hard to argue against the claim that selecting the right micro is the most important factor in designing a low power product. Selecting the wrong micro is the most expensive mistake you can make in a low power design and the most difficult to recover from. Unfortunately, there is no right micro for all low power designs. As you will see, the specific requirements of your product should steer you to the best micro for your application.

For many years, the basis for the micro selection on a new product design was either to use the same micro as on previous products or to pick the latest micro from your favorite manufacturer. While there are many good reasons to work within the same family of micros, if low power usage has recently become a requirement for your products it may be time to evaluate other micros. Particularly for firmware intensive products, it pays to do your research up front and select a micro that will allow a power efficient design. You must go way past the operating and sleep mode current specs in the datasheet and research a part’s low power modes and peripherals to make sure you can use those peripherals in the power saving modes you need to implement. Debugging your code isn’t the time to discover you can’t use a particular on-chip peripheral while the micro is sleeping.

It should go without saying, but the micro selection should be a collaborative effort between the hardware and firmware engineers, with each having an equal say in the final decision. If the ultimate in power savings is your goal, it will pay to get an eval board for your choice of micros so you can run some code and take some current measurements. If your firmware will be written in C, this also gives you the opportunity to review the assembly language the compiler generates so you can see how efficient it is. If an eval board isn’t available, an apps engineer at the micro manufacturer should be willing to compile some code for you to review.

8-bit vs 32-bit Micros

It seems like a no-brainer that an 8-bit micro would inherently use less power than a 32-bit micro. This isn’t necessarily true and for most applications just looking at the data sheet current specs won’t provide an accurate comparison. Most 8-bit micros have much lower operating and sleep mode currents than 32-bit micros of similar vintage but a 32-bit micro is likely to spend considerably more time sleeping than an 8-bit micro because it can perform its tasks much faster. In applications with complex algorithms or floating point math, a 32-bit micro may be considerably more power efficient simply because it can complete tasks in considerably fewer clock cycles than an 8-bit micro (floating point operations can take thousands of clock cycles on some 8-bit micros). Even in simpler applications, for firmware written in C the math required to calculate addresses for structures or arrays can greatly increase the power consumption of an 8-bit micro for a given task compared to a 32-bit micro.

There are also some 16-bit micros that shouldn’t be ignored, particularly if an 8-bit micro is suitable for the task and your firmware is in C. If your firmware fits in a 64KB address space then a 16-bit micro can greatly reduce the number of instructions required for calculating structure and array addresses. The main drawback to 16-bit micros in general is most of them are older architectures and may not have many of the power management features implemented in the newer generations of 8-bit and 32-bit micros.

Many of the micros released within the past 2-3 years have operating currents 1,000X to 10,000X higher than their sleep and deep-sleep mode currents. The upcoming generation of low power micros will increase this difference by another 10X or more. For the ultimate power savings it is crucial that the micro spend as little time as possible executing code and as much time as possible in a sleep or deep-sleep mode. If your firmware will be written in C, for anything but the simplest applications 32-bit micros will have a significant advantage in executing code quickly. Consider the equation below for the current used during a specific event:

I_event = (Time_operating x I_operating) + (Time_sleep x I_sleep)

Assume a hypothetical 8-bit micro with a 1mA operating current and 50uA sleep current and a 32-bit micro with twice the current levels (2mA and 100uA). The micro must perform a specific task that includes 100mS of sleep time while waiting on a mechanical action. Assuming the 8-bit micro takes 50mS to perform the task and the 32-bit micro is able to perform the task 4X faster than the 8-bit micro, the two equations below show the current required is cut by about one third with the 32-bit micro.

8-bit micro => (50 x 0.001) + (100 x 0.00005) = 0.055

32-bit micro => (12.5 x 0.002) + (100 x 0.0001) = 0.035
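Those two figures can be reproduced with a one-line helper implementing the event-current equation, with times in milliseconds and currents in amps as in the calculations above:

```c
/* Charge used for one event: operating time at operating current
   plus sleep time at sleep current (mS x A, as used in this series). */
double event_charge(double t_op_ms, double i_op_a,
                    double t_sleep_ms, double i_sleep_a)
{
    return (t_op_ms * i_op_a) + (t_sleep_ms * i_sleep_a);
}
```

Plugging in the hypothetical parts above, event_charge(50, 0.001, 100, 0.00005) gives 0.055 for the 8-bit micro and event_charge(12.5, 0.002, 100, 0.0001) gives 0.035 for the 32-bit micro.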


It would be hard to say any one particular microcontroller architecture has lower power consumption than another for a given application. An architecture with single clock cycle instruction execution would use less power for a given task than a micro that requires multiple clock cycles per instruction given that everything else is equal (which is rarely the case). For applications that are very data intensive, a Harvard architecture with separate data paths for program and data memory may use less power than a micro with a shared data path for program and data memory because it can complete tasks faster. On the other hand, for an application that has very little RAM use, the shared data path architecture could be lower power than a Harvard architecture simply because there is less circuitry to power.

It would be fair to say that a more modern architecture will likely provide better control over power consumption. A more modern architecture may allow clock and power control for individual peripherals instead of the all or none approach that older micros used for power management. This can be crucial for power savings in applications that utilize a number of on-chip peripherals like timers, UARTS, DMA, etc.

Next week I will get into more details on what to look for in selecting a micro for your low power design.