Feature

Does the Battery Fuel Gauge Lie?


Why the battery state-of-charge cannot be measured accurately.

Measuring stored energy in an electrochemical device such as a battery is complex, and the state-of-charge (SoC) reading on a fuel gauge provides only a rough estimate. Users often compare battery SoC with the fuel gauge of a vehicle. Calculating fluid in a tank is simple because a liquid is a tangible entity; battery state-of-charge is not. Nor can the energy stored in a battery be quantified precisely, because prevailing conditions such as load current and operating temperature influence its release. A battery works best when warm; performance suffers when cold. In addition, a battery gradually loses capacity through aging.

Current fuel gauge technologies are fraught with limitations. This came to light when users of the new iPad assumed that a 100% charge on the fuel gauge should correspond to a fully charged battery. This is not always so, and users complained that the battery was only about 90% charged when the fuel gauge read 100%.

The modern fuel gauge used in iPads, smartphones, and laptops reads SoC through coulomb counting and voltage comparison. The complexity lies in managing these variables while the battery is in use. Applying a charge or discharge acts like a rubber band, pulling the voltage up or down and making a calculated SoC reading meaningless. In an open-circuit condition, as when measuring a naked battery, a voltage reference may be used; however, temperature and battery age affect the reading. The open terminal voltage is a reliable SoC reference only when these environmental conditions are accounted for and the battery has rested for a few hours before the measurement is taken.
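
As a sketch of the voltage-comparison side, the snippet below maps a rested open-circuit voltage to an SoC estimate through table interpolation with a crude temperature correction. The lookup values and the temperature coefficient are illustrative assumptions for a generic Li-ion cell, not data for any particular battery.

    # Sketch: SoC from a rested open-circuit voltage (OCV).
    # The OCV-to-SoC pairs and temperature coefficient are illustrative
    # assumptions, not datasheet values.
    OCV_TABLE = [(3.00, 0), (3.45, 10), (3.68, 30), (3.74, 50),
                 (3.85, 70), (3.95, 85), (4.10, 95), (4.20, 100)]

    TEMP_COEFF_V_PER_C = -0.0002  # assumed OCV drift per degree Celsius

    def soc_from_ocv(voltage, temp_c=25.0, rest_hours=0.0):
        """Estimate SoC (%) from an open-circuit voltage reading."""
        if rest_hours < 3:
            raise ValueError("cell must rest a few hours before measuring OCV")
        # normalize the reading toward the 25 C reference temperature
        v = voltage - TEMP_COEFF_V_PER_C * (temp_c - 25.0)
        if v <= OCV_TABLE[0][0]:
            return 0.0
        if v >= OCV_TABLE[-1][0]:
            return 100.0
        # linear interpolation between neighboring table entries
        for (v0, s0), (v1, s1) in zip(OCV_TABLE, OCV_TABLE[1:]):
            if v0 <= v <= v1:
                return s0 + (s1 - s0) * (v - v0) / (v1 - v0)

    print(soc_from_ocv(3.80, temp_c=10.0, rest_hours=4))  # about 60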

A 10% discrepancy between the fuel gauge and the true battery SoC is acceptable for consumer products such as the iPad. Although not ideal, the accuracy will likely drop further with use, and depending on the effectiveness of the self-learning algorithm, battery aging can add another 20–30% to the error. By then the user has grown accustomed to the quirks of the device, and the oddity is mostly forgotten or accepted.

While differences in runtime cause only mild inconvenience to a casual user, industrial applications such as the electric powertrain of an electric vehicle or a critical medical device demand higher accuracy. Fuel gauge errors have caused EV drivers to fall short of their destination even though the meter showed sufficient reserve charge. Fuel gauges on medical devices must also provide reliable SoC readings; however, few current technologies suffice. Capturing the high inrush current of a defibrillator, the sporadic use of a device, and the absence of an occasional full discharge pose huge challenges for fuel gauge makers. Advances are being made in compensating for these anomalies.

Figure 1. The principle of a fuel gauge based on coulomb counting. The stored energy represents state-of-charge; a circuit measures the in- and out-flowing current.

Coulomb counting is at the heart of today's fuel gauge. The theory goes back more than two centuries, to Charles-Augustin de Coulomb, who established what became known as Coulomb's law. The method works on the principle of measuring the currents flowing in and out. Figure 1 illustrates the principle graphically.
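
In code, the principle reduces to integrating current over time. Below is a minimal sketch, assuming a hypothetical 2,000 mAh cell and a made-up load; a real gauge samples current through a sense resistor at high frequency.

    class CoulombCounter:
        """Track remaining charge by integrating current in and out."""

        def __init__(self, capacity_mah, soc_percent=100.0):
            self.capacity_mah = capacity_mah
            self.charge_mah = capacity_mah * soc_percent / 100.0

        def update(self, current_ma, dt_hours):
            """current_ma is positive while charging, negative on discharge."""
            self.charge_mah += current_ma * dt_hours
            # the counter cannot exceed the known capacity bounds
            self.charge_mah = max(0.0, min(self.capacity_mah, self.charge_mah))

        @property
        def soc(self):
            return 100.0 * self.charge_mah / self.capacity_mah

    gauge = CoulombCounter(capacity_mah=2000)
    gauge.update(current_ma=-500, dt_hours=1.0)  # one hour at a 500 mA load
    print(f"SoC: {gauge.soc:.1f}%")              # 75.0%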

Coulomb counting is not perfect; the error manifests itself in the difference between the in- and out-flowing energies, as what comes out is always less than what went in. Inefficiencies in charge acceptance, especially toward the end of charge, tracking errors, and losses during discharge and from self-discharge in storage all contribute. Self-learning algorithms and periodic calibration through a full charge/discharge cycle assure an acceptable level of accuracy for most applications.
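
Both corrections can be layered onto the counter sketched above. The snippet below discounts incoming charge by an assumed coulombic efficiency and resets the empty point when a full discharge is detected; the 99% figure and the cutoff voltage are assumptions for illustration, not measured values.

    CHARGE_EFFICIENCY = 0.99  # assumed fraction of incoming charge stored
    EMPTY_CUTOFF_V = 3.0      # assumed discharge cutoff voltage

    def corrected_update(gauge, current_ma, dt_hours, cell_voltage):
        if current_ma > 0:
            # charge acceptance losses: less is stored than flows in
            current_ma *= CHARGE_EFFICIENCY
        gauge.update(current_ma, dt_hours)
        if cell_voltage <= EMPTY_CUTOFF_V:
            gauge.charge_mah = 0.0  # full discharge reached: recalibrate empty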

The fuel gauge only indicates SoC; the actual battery capacity remains unknown. A faded battery will show 100% charge even if the capacity has dropped to 50%, delivering only half the specified runtime on each full charge. Until fuel gauge technologies arrive that estimate both state-of-charge and state-of-health, medical batteries on critical missions should be checked periodically with a battery analyzer that performs a full discharge/charge cycle. Besides verifying capacity, such service calibrates the battery to maintain fuel gauge accuracy. On a frequently used battery, the recommended calibration interval is once every three months or after 40 partial cycles. Manufacturers may have their own requirements.
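
What the analyzer measures can be expressed in one line: the charge actually extracted between full and empty, compared against the rated capacity, yields a simple state-of-health figure. The numbers below are illustrative.

    RATED_CAPACITY_MAH = 2000  # nameplate capacity (illustrative)

    def state_of_health(measured_capacity_mah):
        """Measured full-cycle capacity versus rated capacity, in percent."""
        return 100.0 * measured_capacity_mah / RATED_CAPACITY_MAH

    measured = 1400  # mAh counted out during the analyzer's full discharge
    print(f"SoH: {state_of_health(measured):.0f}%")  # 70%
    # A gauge reading 100% on this battery still delivers only 70% of the
    # original runtime per charge.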

Isidor Buchmann is the founder and CEO of Cadex Electronics Inc. For three decades, Buchmann has studied the behavior of rechargeable batteries in practical, everyday applications and has written award-winning articles as well as the best-selling book "Batteries in a Portable World," now in its third edition. Cadex specializes in the design and manufacture of battery chargers, analyzers, and monitoring devices.
