Monthly Archives: June 2015

Keep it cool – part 2

Driving loads

You may not expect to read about multi-amp loads in the context of “Low Power Design”; take it as an indication of how widespread the push for energy-efficient products has become. In many embedded applications it is quite common for a micro drawing a few milliamps to control motors or solenoids that require several amps of current, or high-brightness LEDs that draw several hundred milliamps. Even if these current ranges are far above what your design has to deal with, the information presented here may still be applicable to your design.

When driving a large load such as a DC motor, solenoid or a cluster of LEDs, there are a number of things you can do to ensure you drive them efficiently to help reduce heat buildup:

  • You already selected the lowest Rds(on) FET you could find, but are you taking full advantage of its capabilities? Make sure your gate voltage is high enough that the FET is fully turned on and operating in its lowest Rds(on) range. Most standard N-channel FETs need a gate voltage in the 8-10V range to fully turn on, so simply driving the gate with a GPIO won’t put the FET in its lowest Rds(on) range. Logic-level gate FETs may be better in this regard, but typically they just have a lower minimum turn-on threshold and may still need over 5V to fully turn on. In these cases you should consider using a P-channel FET or a gate driver IC to drive the gate of the N-channel FET with the voltage it is switching (or use a step-up regulator or voltage doubler circuit to provide a higher voltage if the input voltage exceeds the maximum gate voltage). The graph below shows the impact of gate voltage on Rds(on) for the Fairchild FDS8449 N-channel FET. The FDS8449 has a maximum gate turn-on threshold of 3V, but as you can see, at Vgs = 3V the Rds(on) is 2.4X higher than at 10V with no load, and much higher as the load increases.

Rds(on) vs gate voltage

  • When using a P-channel FET to drive a load, a GPIO may not drive the gate high enough to completely turn off the FET, so you may be leaking power through the FET. This can easily go unnoticed since the amount of power is too low to activate the load. Using an open-collector driver or an N-channel FET to drive the gate of the P-channel FET solves this problem (don’t forget a pull-up resistor from the P-channel FET’s gate to the voltage being switched).
  • A P-channel FET of similar rated voltage and current to an N-channel FET will typically have 50-100% higher Rds(on). With Rds(on) specs on modern FETs in the double-digit milliohm range, even doubling the Rds(on) produces a fairly low value. However, that is still wasted power that can easily be eliminated if low-side switching is an option for your application. This is also important to keep in mind if for some reason you can’t address one of the issues discussed here that prevents the FET from operating close to its lowest Rds(on); changing the type of FET may alleviate the problem. If you have a P-channel FET operating at 2X its lowest Rds(on), switching to an N-channel FET could reduce the Rds(on) by a factor of 4X.
  • Just like a resistor, the Rds(on) of a FET increases with temperature. The graph below shows the impact of temperature on Rds(on) for the Fairchild FDS8449 N-channel FET. As you can see, a 50°C increase in temperature results in a nearly 20% increase in Rds(on). Even if your product is normally used in a room-temperature environment, a 20-50°C temperature rise at the FET’s die isn’t uncommon (another reason to operate the FET in its lowest Rds(on) range). This is another situation where keeping a part cooler helps prevent wasting power: as the Rds(on) increases the part gets hotter, which increases the Rds(on), and so on. A hot FET and a non-optimal gate voltage can combine to generate a lot of excess heat and waste a lot of power.

Rds(on) vs junct temp
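
To put some numbers on the two graphs above, here is a quick back-of-envelope sketch of how gate drive and die temperature multiply the conduction loss in a FET switch. The 10 mΩ baseline Rds(on) and 5A load are illustrative assumptions, not FDS8449 datasheet values; the 2.4X gate-drive factor and ~20% temperature factor come from the graphs discussed above.

```python
# Conduction loss in a FET switch: P = I^2 * Rds(on).
# The 10 mohm Rds(on) at Vgs = 10 V is an illustrative value, not a
# datasheet number; the 2.4x gate-drive and 1.2x temperature factors
# come from the graphs discussed in the text.

def conduction_loss_w(i_load_a, rdson_ohm):
    """I^2 * R power dissipated in the FET."""
    return i_load_a ** 2 * rdson_ohm

RDSON_10V = 0.010          # ohms, fully enhanced (assumed)
GATE_3V_FACTOR = 2.4       # Rds(on) penalty for a 3 V gate drive
TEMP_50C_FACTOR = 1.2      # ~20% rise for a 50 degC hotter die

I_LOAD = 5.0               # amps

best = conduction_loss_w(I_LOAD, RDSON_10V)
worst = conduction_loss_w(I_LOAD, RDSON_10V * GATE_3V_FACTOR * TEMP_50C_FACTOR)

print(f"fully enhanced, cool die: {best * 1000:.0f} mW")
print(f"3 V gate, +50 degC die:   {worst * 1000:.0f} mW")
```

Running this shows the same FET burning nearly 3X the power simply because of a weak gate drive and a hot die.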

  • Similar to LEDs, electromechanical devices can often be driven with a PWM signal to reduce power without impacting the performance of the device. Solenoids can often be “kicked” with a several-hundred-millisecond pulse to actuate them and then driven at as low as a 30-40% duty cycle to keep them actuated. Depending on size, a DC motor may require a “kick” of up to a few seconds to get it up to speed and more than a 50% duty cycle to avoid slowing down, but with such large loads even a 10-20% savings can be a considerable reduction in power consumption.
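
As a sketch of the potential savings, assume (purely for illustration) a 12V, 24Ω solenoid coil, a 300ms pull-in kick and a 35% holding duty cycle, treating average power as duty cycle times full power (a first-order approximation that ignores the coil’s inductive current ripple):

```python
# Average solenoid energy with a "kick then hold" PWM drive vs holding
# at 100% duty. All values are illustrative assumptions, not datasheet
# numbers; average power is approximated as duty * full power.

V_SUPPLY = 12.0      # volts
R_COIL = 24.0        # ohms -> 0.5 A, 6 W when fully on
KICK_S = 0.3         # full-power pull-in pulse (several hundred ms)
HOLD_DUTY = 0.35     # assumed duty cycle that keeps it actuated
ON_TIME_S = 10.0     # total actuation time

p_full = V_SUPPLY ** 2 / R_COIL                       # 6 W continuous
e_pwm = p_full * KICK_S + p_full * HOLD_DUTY * (ON_TIME_S - KICK_S)
e_always_on = p_full * ON_TIME_S

print(f"always on: {e_always_on:.1f} J, kick+PWM: {e_pwm:.1f} J "
      f"({100 * (1 - e_pwm / e_always_on):.0f}% saved)")
```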

If your application involves high currents or other potential heat sources, consider buying or renting an infrared thermal camera. The camera will help you find hot spots on your board and assess the effectiveness of your heat spreading/isolation efforts. Don’t be tempted to skimp and use a laser-pointer IR thermometer instead of the camera. The IR thermometer has a fairly narrow field of view and only displays the temperature where you point it, so it may not accurately measure the temperature of a small hot spot like one caused by high current flowing through a small surface mount FET. The beauty and value of the camera is that it can show you even tiny hotspots where you don’t think to look for them. The first time I used a thermal camera it did just that, allowing us to fix an issue at the prototype stage that would likely have led to field failures and high warranty costs. That camera more than paid for itself in a matter of hours.



Keep it cool

Several of my recent posts have mentioned the very negative impact of heat on power consumption. This is the first of a two-part series of posts on thermal management for low power devices. This information is mostly taken from my “Low Power Design” PDF e-book.

As semiconductor geometries have shrunk in recent years, leakage current has become a significant component of the overall power consumed by ICs. As parts heat up, their leakage current typically increases; it is not uncommon for parts to consume twice as much current at their highest rated temperature as at their lowest. For example, the AD8226 op-amp is rated for -40°C to 125°C. Its quiescent current ranges from 325uA at -40°C to 425uA at 25°C to 600uA at 125°C. That is an 85% increase across the temperature span and over a 40% increase from “room temperature” to the maximum temperature. You should conduct your current measurements at the temperature your product will normally operate at, if not at the temperature extremes as well.
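
Using the AD8226 numbers quoted above, the increase works out like this (the linear interpolation between the three quoted points is my assumption, not a datasheet curve):

```python
# Quiescent current of the AD8226 at its rated temperature extremes,
# straight from the figures quoted in the text. The linear interpolation
# for in-between temperatures is an assumption, not a datasheet curve.

IQ = {-40: 325e-6, 25: 425e-6, 125: 600e-6}   # amps

span_increase = IQ[125] / IQ[-40] - 1          # across -40..125 C
room_to_max = IQ[125] / IQ[25] - 1             # from 25 C to 125 C

def iq_at(t_c):
    """Linearly interpolate between the three quoted points (assumed)."""
    if t_c <= 25:
        return IQ[-40] + (IQ[25] - IQ[-40]) * (t_c + 40) / 65
    return IQ[25] + (IQ[125] - IQ[25]) * (t_c - 25) / 100

print(f"{span_increase:.0%} over full span, {room_to_max:.0%} room-to-max")
```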

Controlling Heat

Even if you don’t have the luxury of airflow in your product, there are a number of things you can do to keep the heat under control and reduce the impact of heat on your power consumption:

  • If your product is vertically mounted, place as much circuitry as you can below the main heat generating parts.
  • If you have the option, select voltage regulators and other heat-generating parts in packages with bottom-side ground or thermal pads. Dissipating heat into the circuit board can help localize the heat buildup and maintain a lower air temperature within the enclosure.
  • For products that operate in high temperatures and have more than a few ICs, select part packages based on what you want to do thermally for a part. For parts that drive large loads or use high clock speeds it can be beneficial to select a package with a low thermal resistance to help get the heat away from the die. On the other hand, if you must place other parts near heat sources on your board you can choose a package with a higher thermal resistance for those other parts to reduce the heat transferred from the PCB to the die. Most surface mount ICs have several options for package styles that can have a wide range of thermal resistances. For instance, Texas Instruments offers the 74LV74 in 6 different packages with thermal impedances ranging from 47°C/W to 127°C/W.
  • Without forced air flow in your product, the junction-to-ambient thermal resistance spec probably isn’t relevant for selecting parts. You need to pay attention to the junction-to-case thermal resistance specs. Some manufacturers are specifying a junction-to-board thermal resistance which is even better. When comparing the junction-to-case thermal resistance of different packages you must understand where on the case this spec applies. Newer parts and particularly those in SMT packages intended for power applications will use the bottom of the case for this spec while older parts and non-power packages will likely use the top of the case since the assumption is air flow is used to dissipate heat and not the circuit board.
  • To fully realize the heat-dissipating potential of low-thermal-resistance packages with large tabs or bottom-side thermal pads, you have to place several thermal vias in the pad to tie it to the internal ground plane or a large copper pour on the back of the PCB. You also need to pay special attention to the datasheet on parts with an exposed bottom-side pad. These exposed pads are often connected to ground internally, but not always, and on some parts the pad may be the only ground connection for the part. On other parts the pad may not be electrically connected and can be safely grounded, or it may be the negative voltage supply on dual-supply analog parts (which may or may not be ground in your design). The datasheets usually contain details on the size of the copper area and other PCB layout requirements needed to achieve the specified thermal resistances. The diagram below shows a typical arrangement for DPAK and SMT DIP packages with a thermal copper pour and vias connecting to an internal ground plane or back-side copper pour. Most IC manufacturers offering packages with bottom-side thermal pads provide app notes or even on-line calculators to help determine the minimum size of the copper area and the number of thermal vias you need for a given package.

Footprints with Cu pour

  • With thermal vias, more isn’t always better since they can also disrupt the spread of heat across a copper pour or ground plane. The ground plane connections for vias are usually made with four thermal “spokes” and not a direct connection, and some of these spokes may be missing if the vias are packed too close together. Some assembly houses may complain about thermal vias in device pads robbing solder from the pad. If so, reducing the via size or covering the via on the opposite side of the board with solder mask can help minimize the amount of solder that seeps into the via holes.
  • When using the PCB for heat sinking, keep in mind that FR4 and other laminates that PCBs are commonly made from are very poor thermal conductors. You are really spreading the heat through the copper in/on the PCB instead of transferring the heat into the PCB. It is best to use at least 2 oz copper for the outer layers and 1oz copper for inner layers. The heavier copper isn’t significantly more expensive than the “standard” 1 oz and ½ oz copper used for most PCBs but does provide significantly better heat transfer across the ground plane and copper pours.
  •  If you have traces on your board that carry more than a few amps, the traces themselves can be a significant source of heat if you aren’t careful. There are a number of good on-line PCB trace width calculators you can use to help prevent this. The maximum allowed temperature increase for the trace is an input for most of these calculators. If a trace normally carries high current you should set this to 5°C or 10°C. If the high currents are of a fairly low frequency and short duration, you can usually go as high as 25°C max temperature rise (but first make certain your product isn’t subject to any specifications that restrict the max temperature rise of traces). The overall trace resistance is also a function of trace length so you can reduce this resistance and resulting generated heat by shortening the trace length. Vias can have significantly higher resistance than copper traces causing them to generate additional heat so above a few amps you should use multiple vias instead of a single via.
  • To some degree the leakage current of an IC is influenced by the die size and circuit density on the die. For typical embedded devices the micro will usually have the largest die size and highest circuit density of the ICs in the device and therefore the highest leakage current. Simply increasing the space between heat generating parts and the micro can drastically reduce the amount of heat the micro is exposed to thereby decreasing its leakage current.
  • If possible, heat sink your heat-generating parts to the enclosure. You may be able to do this directly, but also indirectly by placing heat-generating parts near mounting holes on the board so the mounting hardware can help carry the heat out of the circuit board.
  • Pay the premium for more efficient voltage regulators. The power that an inefficient regulator wastes is turned into heat, which in turn increases the power consumed. You aren’t likely to get into a true thermal runaway condition, but to some degree this increased power consumption generates more heat, which increases power consumption, which generates more heat… you get the point.
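
A quick way to sanity-check package choices is the basic thermal model Tj = Ta + P × θ. Using the 74LV74’s quoted 47°C/W and 127°C/W figures (for comparing packages only, given the caveats above about junction-to-ambient specs) and an assumed 50mW dissipation at 40°C ambient:

```python
# Junction temperature estimate for one part in two package options,
# using the 74LV74's quoted 47 and 127 degC/W thermal resistances.
# The 50 mW dissipation and 40 degC ambient are illustrative assumptions.

P_DISS_W = 0.050
T_AMBIENT_C = 40.0

def t_junction(theta_c_per_w):
    """Tj = Ta + P * theta."""
    return T_AMBIENT_C + P_DISS_W * theta_c_per_w

print(f"47 C/W package:  Tj = {t_junction(47):.1f} C")
print(f"127 C/W package: Tj = {t_junction(127):.1f} C")
```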

That wraps up this part on thermal management. As you can tell, while much of thermal management happens in the circuit board layout, the extent to which heat can be managed in the layout, and the impact heat will have on the power consumption of your design, are highly dependent on device and part package selection.

If you find the Low Power Design blog interesting, please help spread the word about it. We’re also on Facebook, Twitter, Google+, the Element14  Community and have a LinkedIn group.

Non-volatile memory for low power designs

I recently came across a mention in a newsletter of Texas Instruments’ line of FRAM-based MSP430 microcontrollers and thought a short post on FRAM would be of interest.

If you aren’t familiar with FRAM (also called FeRAM), think of it as essentially a non-volatile SRAM. FRAM was commercially developed primarily by Ramtron International starting in the mid 1980s. The ‘F’ in FRAM stands for ferroelectric, but the underlying technology isn’t based on magnetics as might be assumed. The ferroelectric film used in constructing a FRAM is often described as creating a capacitor that can be charged or discharged to store a ‘1’ or a ‘0’. These are near-perfect capacitors that can retain a charge for over 150 years. Reading a FRAM location degrades the stored charge, so each read must be followed by a re-write of the data to that location. This is very similar to the way a DRAM operates; the re-write cycle is transparent to the user (except that read operations count toward the device’s endurance cycle count), and unlike DRAM no periodic refresh cycles are required.

FRAM is interesting for low power designs that require non-volatile memory. FRAM devices typically have standby currents comparable to or slightly less than EEPROM and NOR Flash devices. For write cycles, the FRAM has significantly lower currents and significantly faster write cycles. For applications that primarily read from non-volatile memory FRAM provides a modest power savings. For applications that frequently write to non-volatile memory FRAM can provide a huge power savings and does not require erase operations like Flash devices do.  Unlike EEPROM and Flash devices where the number of write cycles can limit or prevent their use in some applications, FRAM devices are typically spec’d for 10 trillion write cycles or more. The main drawbacks to FRAM are higher cost and relatively low density compared to EEPROM and Flash but with 8Mb parts on the market, for most low power embedded applications that really should not be an issue.

The table below presents some performance specs for typical FRAM, EEPROM and NOR Flash devices along with a few characteristics of the technologies. The devices used in the comparison all have SPI interfaces and capacities in the lower end of the capacity range for each part type.




| Spec | FRAM (1) | EEPROM (2) | NOR Flash (3) |
| --- | --- | --- | --- |
| Standby current | 6uA @ 3.3V | 5uA @ 5.5V / 1uA @ 2.5V | 15uA @ 3.3V |
| Slow read current | 200uA @ 1MHz, 3.3V | 2.5mA @ 5MHz, 2.5V | Not specified |
| Fast read current | 3mA @ 20MHz, 3.3V | 6mA @ 10MHz, 5.5V | 10mA @ 20MHz, 3.3V |
| Write current | Same as read | 3mA @ 5.5V | 30mA @ 3.3V |
| Erase time | n/a (no erase needed) | n/a (no erase needed) | 25ms (sector or block) |
| Byte write time | < SPI transfer time | – | – |
| Endurance | > 1 trillion read/writes | > 1,000,000 writes | > 100,000 writes |
| Data retention | > 150 years | – | – |
| Density range | 4Kb – 8Mb | 64b – 8Gb | 256Kb – 2Gb |

1 = FRAM specs from Cypress FM25L16B, 2Kx8 organization, 2.7V to 3.6V operating voltage

2 = EEPROM specs from Microchip 25LC160, 2Kx8 organization, 2.5V to 5.5V operating voltage

3 = Flash specs from SST (Microchip) SST25VF512, 64Kx8 organization, 2.7V to 3.6V operating voltage
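
A rough energy-per-byte-written comparison from the table’s numbers shows why FRAM wins so decisively for write-heavy applications. The write times here are my assumptions: the FRAM write completes within the SPI transfer itself (8 bits at 20MHz), while the Flash part adds an internal byte-program time of roughly 20µs at its 30mA write current.

```python
# Back-of-envelope energy per byte written at 3.3 V, using the table's
# write currents. Write times are assumptions: FRAM finishes within the
# SPI transfer itself; Flash adds a ~20 us internal byte-program time.

V = 3.3

fram_t = 8 / 20e6                      # seconds: one byte on a 20 MHz SPI bus
fram_e = V * 3e-3 * fram_t             # FRAM write current == read current (3 mA)

flash_t = 8 / 20e6 + 20e-6             # transfer + assumed internal program time
flash_e = V * 30e-3 * flash_t          # 30 mA write current from the table

print(f"FRAM:  {fram_e * 1e9:.1f} nJ/byte")
print(f"Flash: {flash_e * 1e9:.0f} nJ/byte ({flash_e / fram_e:.0f}x)")
```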

FRAMs are available with SPI, I2C and parallel interfaces, and considerably more detailed information about them is available on the Internet. The primary suppliers are Cypress Semiconductor (which acquired Ramtron in 2012), Fujitsu Semiconductor (Ramtron’s primary foundry) and Rohm Semiconductor.

Back to what originally caught my eye regarding FRAM: Texas Instruments has integrated FRAM into their MSP430 micros. The MSP430FR series has from 4KB to 128KB of on-chip FRAM along with a smaller amount of SRAM (presumably for stack and other frequently used variables). In contrast to some micros that lose some or all of their RAM contents in a deep sleep mode, with some consideration of whether variables are stored in FRAM or SRAM these devices could be completely powered down while retaining important state information and data. With operating currents as low as 100uA/MHz the MSP430 micros are strong players in the ultra-low-power micro market, but the addition of internal FRAM provides them with a distinct advantage in certain types of applications.




Leakage current & current leaks – part 3

This post will focus on “current leaks”, the points of power consumption that are often designed into a device without much thought. In some cases they are things you have some amount of control over, in other cases they are things you have little control over but have to account for in your power budget in order to get an accurate power consumption or battery life estimate. While technically not leaks as in leakage current, I call them leaks because they are where your battery life drains away.

Resistive leaks

The amount of power a pull-up resistor consumes surprises most engineers new to low power design. At 10K ohms, a pull-up to 3.3V on a single low level input to a micro can draw over 300uA. Even a fairly high value like 1Meg ohm can draw over 3uA in the same situation. This may seem low but can be considerable compared to a micro with sub-microamp sleep current or a power budget of several microamps average current to achieve several year life from a coin cell battery. The internal pull-up/down resistors on a micro’s GPIO are usually not well controlled and typically range from 5K to 20K ohms. They should never be used if you are concerned with low power consumption.
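
The arithmetic is just Ohm’s law, but it is worth seeing next to a battery capacity. The 220mAh figure below is a typical CR2032 coin cell capacity (an assumption for illustration, and self-discharge is ignored):

```python
# Current a pull-up draws while the line it feeds sits low: I = V / R,
# and how long that current alone would take to drain a coin cell.
# 220 mAh is a typical CR2032 capacity (assumed); self-discharge ignored.

V_RAIL = 3.3
CELL_MAH = 220.0

def pullup_ua(r_ohms):
    """Current through the pull-up with the far side at ground, in uA."""
    return V_RAIL / r_ohms * 1e6

def hours_of_cell(i_ua):
    """Hours the cell lasts supplying only this current."""
    return CELL_MAH * 1000.0 / i_ua

for r in (10e3, 100e3, 1e6):
    i = pullup_ua(r)
    yrs = hours_of_cell(i) / 24 / 365
    print(f"{r / 1e3:>5.0f}k pull-up: {i:7.1f} uA -> cell gone in {yrs:.2f} years by itself")
```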

In situations like open-collector or tri-state outputs a pull-up or pull-down resistor is required (and depending on signal rise-time requirements a fairly low value may be needed). In this case the power just needs to be accounted for in the power budget. In other cases, you may have other options. For example, for a switch that only connects to the micro’s inputs and does not directly connect to a hardware circuit, consider the following:

  • Instead of grounding one side of a SPST switch that has a pull-up on the micro’s side, tie that side to a GPIO instead of ground. This GPIO can normally be driven high and only driven low when the micro needs to “read” the switch input, and it can be shared among several switches. There will still be a small current flow when the switch is closed since the GPIO high level may only reach 70%-90% of the Vcc rail. To reduce the current flow even further, when the switch isn’t being read, enable the internal pull-up on this GPIO and reconfigure it as an input.
  • Use a SPDT switch with one side pulled-up and the other tied to ground. In this setup a pull-up isn’t required on the micro’s input (be sure to select a switch that doesn’t have an “open” position). A SPDT switch is usually larger and more expensive than a SPST switch but the current savings can be significant.

Voltage dividers can be another source of wasted power. When used in a circuit there is little you can do about them other than using the highest value resistors your circuit can tolerate (and accounting for them in your power budget). When used to reduce a voltage for an A/D converter input, a P-FET can be used to turn off the voltage at the top of the divider until the firmware is ready to read the voltage level (the bottom resistor will act as a pull-down to prevent the A/D input from floating). A small amount of current will flow through the P-FET when it is turned off, but it should be much lower than the current through the divider without the P-FET. In this situation an N-FET cannot be used at the bottom of the divider since, when it is turned off, the A/D converter input may exceed its input voltage spec.
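
As a worked example of what the P-FET buys you, assume (illustrative values, not from any datasheet) a 100K/100K divider on a 3.3V rail, 100nA of off-state leakage through the P-FET, and firmware that samples the A/D 0.1% of the time:

```python
# Waste in a 2:1 divider feeding an A/D input, with and without a P-FET
# disconnecting the top of the divider. Resistor values, the 100 nA FET
# off-state leakage and the 0.1% sampling duty are illustrative assumptions.

V = 3.3
R_TOP = 100e3
R_BOT = 100e3

i_divider = V / (R_TOP + R_BOT)        # flows 24/7 without a switch
i_fet_off = 100e-9                     # assumed P-FET off-state leakage

duty = 0.001                           # A/D sampled 0.1% of the time
i_avg_switched = duty * i_divider + (1 - duty) * i_fet_off

print(f"always on: {i_divider * 1e6:.1f} uA, "
      f"switched: {i_avg_switched * 1e9:.0f} nA average")
```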

A thermistor circuit is essentially a voltage divider with one resistor whose value varies with temperature. When using a thermistor with an A/D converter, depending on the voltage rail used for the high side of the thermistor, a P-FET may be used here too, or a lower cost N-FET can be used at the bottom of the divider. If super-accurate temperature measurements are not important, a GPIO can be connected to the top or bottom of the divider. If doing this, it is better to use the GPIO at the bottom since there is usually less variation in the low-level output (0.2V to 0.4V for most micros) than in the high-level output (70% to 90% of Vcc), so the temperature measurements will be more accurate; however, this will consume more power in the “off” state than using the GPIO at the top. You can also route the GPIO to an A/D input and use the actual voltage to adjust the thermistor reading for improved accuracy.

Termination resistors for a serial communications interface are another sneaky path for leaking current. The two-signal I2C interface is commonly used for connecting memory, A/D converters and various types of sensors to a micro. Over short distances, the value of the pull-up resistors on these two lines is determined mainly by the maximum transfer speed required on the interface along with the input leakage current of the devices attached to it. If I2C communications are part of the normal operation of your device, you should work through the exercise of sizing the pull-up resistors and not just rely on values you saw on a reference design. Properly sizing the resistors ensures the least amount of current is lost through them and provides the information you need to budget for this current draw.

RS485 is another good example of a serial interface with terminating resistors. RS485 is often used in industrial environments and uses a differential pair for transferring data. In a point to point setup, there is typically a 100 or 120 ohm resistor placed across the pair. When the interface is in the idle state, 1.6mA or 2mA flows through this resistor. Since the two differential lines are always in opposite states during a data transfer, at the 1.5V minimum differential voltage 12.5mA or 15mA will flow through the resistor during data transfers. In a multi-drop configuration bias resistors are incorporated, a pull-up to 5V on one of the lines and a pull-down to ground on the other line. These resistors are usually in the 680 ohm to 1K ohm range. With a 1K ohm pull-up, a 1K pull-down and a 120 ohm resistor across the pair, about 2.4mA will flow through these resistors at all times.
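
The currents above are straight Ohm’s law across the termination; the ~0.2V idle differential used below is inferred from the 1.6mA/2mA figures quoted, and the 1.5V value is the RS485 minimum driven differential:

```python
# Current through an RS485 termination resistor: I = Vdiff / R.
# The ~0.2 V idle differential is inferred from the 1.6/2 mA figures in
# the text; 1.5 V is the minimum driven differential voltage.

def term_ma(v_diff, r_ohms):
    """Milliamps through the termination for a given differential."""
    return v_diff / r_ohms * 1000.0

idle_120 = term_ma(0.2, 120.0)     # point-to-point, idle bus
xmit_120 = term_ma(1.5, 120.0)     # transmitting, 120 ohm termination
xmit_100 = term_ma(1.5, 100.0)     # transmitting, 100 ohm termination

print(f"idle: {idle_120:.1f} mA, transmitting: {xmit_120:.1f} to {xmit_100:.1f} mA")
```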

Termination resistors make wired Ethernet a real power hog. Depending on the speed and configuration, the two termination resistors on the differential output driver alone can draw upwards of 60mA continuously. Transmit currents can easily run in the 100mA to 180mA range for an Ethernet controller. Don’t assume your device will only be transmitting when it has data to send; the protocol being used on the network can generate a significant amount of traffic that your application is completely unaware of (that could make for another series of blog posts). For a wired Ethernet connected device, the only practical alternatives to line power would seem to be Power-over-Ethernet or a good-sized solar panel and lead-acid battery, unless the Ethernet interface is kept powered down except when the device needs to transmit (and the protocol is selected/tweaked to handle that).

Enables and chip selects

In an analog signal chain feeding into an A/D converter for periodic sampling, using an op-amp with an enable can save milliamps compared to an op-amp that is operating all the time. The short time penalty paid in allowing the signal to settle after enabling the op-amp before performing the A/D conversion needs to be taken into consideration. Even if the settling time extends into a few milliseconds, the micro should be able to wake up, enable the op-amp and then go back to sleep while waiting for the settling period.

It may be tempting when using a part like an EEPROM with a SPI interface to tie the chip select pin low to save a GPIO for some other purpose when there are no other devices on the SPI interface. This is a bad idea if you are concerned about low power consumption. On many devices the chip select also serves to put the device into either active mode or standby mode so always having the chip select active will prevent the device from entering standby mode. In reviewing a few small EEPROM datasheets (256 x 8 parts), the active read currents were in the 1mA to 3mA range while the standby currents were in the 1uA to 5uA range (with the CS pin specifically called out at Vcc). This bears further research and experimentation to see what the current draw would be with CS active while the SPI lines are in an idle state but it is likely to be closer to the read current than the standby current. As an aside, some parts may not even function with the chip select always active.  Some devices require a transition on the CS line to enter an active state and for some the CS pin can serve multiple functions, such as starting a conversion on an A/D converter.

For the ultimate power savings with a device like a serial EEPROM or Flash device, if the part isn’t frequently used then use a GPIO and a P-FET to control power to the part. Some micro manufacturers have design notes showing an EEPROM being powered directly by a high-current GPIO. This may not work reliably depending on several specs of the micro and the EEPROM. As mentioned earlier, the high-level output of a CMOS device is typically spec’d at 70% to 90% of Vcc. If the EEPROM is powered at 70% of the micro’s Vcc rail (2.31V for a 3.3V Vcc), that may be below the EEPROM’s operating voltage. Even if it isn’t, if the EEPROM’s output is at 70% of its own Vcc rail (1.6V with Vcc at 2.3V) it likely will not reach the minimum high-level input voltage of the micro. And even if that isn’t a problem, when turned off the EEPROM’s Vcc pin may still sit at 0.2V to 0.4V (the GPIO’s maximum low-level output voltage), so the EEPROM may draw more current than it normally would in standby. Using a P-FET to control power to the EEPROM addresses all of these potential issues.

If you decide to control power to an EEPROM, Flash or any other device, be sure to consider:

  • Any pull-up resistors on signals connected to the powered-down device must be tied to the Vcc rail of the device, not the micro’s, or considerable current may flow through the pull-up when the device is turned off. If the pull-up value is low enough the device may even remain partially powered through it, and if an internal GPIO pull-up is used enough current may flow to damage the micro.
  • Inputs to the micro from the powered down circuit should have pull-down resistors to prevent the signals from floating or the GPIO reconfigured as an output driving a low when the device is off.
  • If using a bypass cap on the device, it will be charged and discharged every time the device is turned on so use as small a value cap as possible. This will minimize power wasted in charging the cap and allow the device Vcc rail to fall quickly and minimize any unexpected behavior as the device powers down.
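
The bypass cap cost is easy to estimate: each power cycle charges the cap (E = ½CV²) and that energy is thrown away again at power-down:

```python
# Energy spent charging the powered-down device's bypass cap on every
# power-up: E = 1/2 * C * V^2 (dissipated again when it discharges).

V = 3.3

def charge_energy_j(c_farads):
    """Energy stored (and ultimately wasted) per power cycle."""
    return 0.5 * c_farads * V ** 2

for c in (100e-9, 1e-6, 10e-6):
    print(f"{c * 1e6:g} uF: {charge_energy_j(c) * 1e6:.2f} uJ per power cycle")
```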

If you are concerned about ultra-low power consumption and need a device with a SPI or I2C interface, take the time to research the standby currents for similar devices with each interface. I did this with two EEPROMs from the same manufacturer with nearly identical specifications except for the interface. The standby current for the SPI device across its voltage range was 0.5uA to 3.5uA while the range for the I2C device was 1uA to 6uA. Presumably the difference is due to the additional circuitry required to monitor the I2C interface for activity and transition between standby and active modes. This may not seem like a significant difference but it would make literally years of difference in operational life when using a coin cell battery.
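
Here is that coin cell arithmetic, again assuming a typical 220mAh CR2032-class cell and ignoring self-discharge:

```python
# What a ~2x standby-current difference means over a coin cell.
# 220 mAh is a typical CR2032-class capacity (assumed); self-discharge
# and everything else in the system are ignored for illustration.

CELL_MAH = 220.0

def years(i_standby_ua):
    """Years the cell lasts supplying only this standby current."""
    return CELL_MAH * 1000.0 / i_standby_ua / 24.0 / 365.0

spi_years = years(0.5)    # SPI EEPROM standby, low end of its range
i2c_years = years(1.0)    # I2C part drawing roughly 2x in standby

print(f"SPI: {spi_years:.0f} years, I2C: {i2c_years:.0f} years of standby alone")
```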

Voltage regulators

Voltage regulator circuits can use and waste considerable amounts of power. The current draw of the voltage regulator IC itself can be substantial and easily overlooked while concentrating on efficiency numbers or drop-out voltage specs. For LDOs, look for the “ground pin current” spec which typically increases with the load current and can vary from tens of microamps to tens of milliamps. For switching regulators, look for the “operating current” or “quiescent current” spec. This typically increases with the switching frequency and ranges from hundreds of microamps to a few milliamps.

Switching voltage regulators, although generally more efficient than linear regulators, can also waste power through poor selection of topologies and components. This wasted power is hard to “see” in a circuit and can be hard to uncover without comparing the power into the regulator to the power out of it. I won’t get into a full discussion of regulator circuits and topologies, but here are a few things to consider:

  • An integrated regulator chip is typically smaller and cheaper than a controller chip with external devices, but generally not as efficient as a well-designed regulator circuit. Having said that, if you don’t have much experience with regulator circuits and don’t have time to thoroughly research them, an integrated regulator chip will likely provide the more efficient solution.
  • A synchronous rectifier circuit will generally improve efficiency over a comparable non-synchronous circuit by several percentage points.  However, as the circuit’s duty cycle increases the difference in efficiency tends to decrease. Duty cycle typically increases as the difference between voltage-in and voltage-out decreases so with a low voltage differential (say 5V to 3.3V) synchronous rectification has less benefit.  For relatively low power circuits there won’t be a significant cost or size difference between the two circuits.
  • It can be tempting to pick regulators with very high switching frequencies in order to use smaller inductors and capacitors. As the switching frequency increases the amount of power used in turning the MOSFET(s) on also increases and can become a significant part of the power dissipation of the regulator circuit. If a high switching frequency is required, look for a MOSFET with low gate charge or gate capacitance specs.
  • As mentioned in Part 2, the efficiency of a MOSFET is a function of gate voltage and load current. Most smallish N-channel FETs need a gate voltage in the 8-10V range to fully turn on, so if stepping down from 5V the FET will never completely turn on and the actual Rds(on) may be several times higher than the datasheet spec. There are a number of newer N-FETs that are optimized for low voltage operation and will perform better with a low input voltage and relatively low currents.
  • Before selecting a switching regulator over an LDO for a battery-powered application, be sure to consider the minimum in/out voltage differential of the switching regulator. Where an LDO may only require a 100mV differential, a switching regulator may require 500mV or more. Depending on the battery technology and discharge curve, that large a voltage differential could reduce the effective battery capacity by 10% to 20% or more.
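
One way to frame the LDO-versus-switcher comparison: an LDO’s efficiency can never exceed Vout/Vin (ignoring its ground-pin current), while a switcher’s efficiency is roughly flat across input voltage. The 90% switcher figure below is an assumed typical value, not a datasheet spec:

```python
# LDO efficiency is bounded by Vout/Vin (ground-pin current ignored);
# the 90% switcher efficiency is an assumed typical value.

V_OUT = 3.3

def ldo_eff(v_in):
    """Best-case LDO efficiency for a given input voltage."""
    return V_OUT / v_in

SWITCHER_EFF = 0.90   # assumed, roughly flat across Vin

for v_in in (3.6, 4.2, 5.0, 12.0):
    print(f"Vin={v_in:>4}: LDO {ldo_eff(v_in):.0%} vs switcher ~{SWITCHER_EFF:.0%}")
```

Note that near dropout (3.6V in) the “inefficient” LDO actually beats the switcher, which is exactly why the minimum differential matters for batteries that spend most of their discharge curve close to the output voltage.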

Temperature & current leaks

It has been touched on a couple of times through this series of posts but bears repeating: high temperatures can negatively impact a device’s power consumption, whether through increased leakage current or the increased Rds(on) of the FET(s) in a regulator circuit. For an ultra-low power design it is critical that you pay attention to the thermal aspects of your design and get as much heat out of the packages as practical; poor thermal design will contribute to your current leaks. Particularly among small SMT packages, the thermal resistance of a specific device offered in a variety of packages can vary by a factor of 10X or more from package to package. The encapsulation materials used for most SMT parts are very poor thermal conductors, so in general the fewer pins a part has, and the smaller those pins are, the worse its thermal performance will be. Leadless packages that solder directly to the PCB tend to have much better thermal characteristics than leaded packages, which must conduct heat through their pins.

It is important to really understand the thermal resistance specifications in a device’s datasheet. If there is no airflow inside your product, the basic junction-to-ambient thermal resistance spec is of little use. With no airflow, the majority of the heat dissipation will occur through the circuit board, making the junction-to-board or junction-to-case thermal resistance the spec of interest. With SMT packages used for power devices (particularly packages with a thermal pad) the junction-to-case spec usually refers to the bottom of the case, but that isn’t always true; it may refer to the top of the case. Particularly when comparing thermal resistance specs from different manufacturers, it is important to understand what each spec really means to know whether a comparison is valid.

That wraps up this series of posts on leakage currents and current leaks.  These are all areas that must be considered for successful low power design.  As you should have noticed, missing just one of these points of leakage current or current leaks can blow away your entire power budget and the lower your power budget is the more critical all of them become.