There's a script for testing Android-based medical devices. Designers should take notice.
Within three years, Android as a platform for mobile devices has made significant inroads. In fact, Android recently claimed the top position in the U.S. smartphone market, with more than 50% of smartphones in the U.S. running Android. Android, developed by the Open Handset Alliance and led by Google, is used by most mobile operators and many handset manufacturers, including Motorola, Sony-Ericsson, Samsung, LG, and HTC.
Android is also proving itself to be the platform of choice beyond mobile devices. In fact, if you believe a recent article, Android is the best OS choice for many medical applications.
At the same time, launching Android-based devices remains an ongoing challenge for many OEMs, operators, and device makers. The platform's many variations, together with its flexibility and configurability, even in the hands of end users, make it a challenging platform to design for, test, and launch.
Android device testing
To perform a complete and effective device-testing project, a test team needs to develop the required test cases, and each test case must describe the correct behavior of the device under test. If errors are introduced into the testing process as a result of missing or incomplete requirements, and consequently wrong test cases, the project will not achieve its objective: identifying defects in the device.
The first step in developing relevant test cases is to gather the expected behavior of the device for all of the use cases and usage scenarios; this behavior should already be captured in the technical and test requirements documents.
Developing technical and test requirements is an important step in a successful device launch. For Android-based devices, it becomes even more important, since these platforms are highly configurable and device OEMs design them with a variety of form factors, UIs, and software configurations.
When test requirements aren't captured properly and in sufficient detail, test cases won't reflect the intended behavior of the device. Instead, they end up being based on the behavior the device under test actually displays, which may or may not be correct.
To develop sufficiently complete technical and testing requirements, the device launch team needs to build a complete set of requirements from several different sources, as shown in Figure 1. Test teams often tend to skip this step and jump right into developing test cases, and then testing. Experience shows that building the requirements first makes test-case development more accurate, effective, and faster, and shortens the development life cycle.
1. As shown, the requirements for testing should come from multiple sources.
Android test plan development
Android’s high level of configurability makes it challenging to develop a complete and comprehensive test plan that covers every aspect of device functionality. Unlike most other platforms, Android gets new releases from Google frequently, and each release modifies different levels of the OS, particularly the UI. At the same time, it’s very common for device OEMs to design a proprietary UI or skin to fit Android to a particular purpose, such as a medical device. This is why, when testing an Android device, the test plan should include functional aspects as well as the UI and usability features that need to be tested. For battery-operated devices, battery life and power consumption must be tested as well. Figure 2 shows the categories of test cases that an Android test plan should cover.
2. The categories of test cases should cover those shown here.
Functional test cases are straightforward and are developed based on the feature requirements definitions and design documents. The technical design requirements and test requirements are the guiding documents for defining and creating the test cases required for testing and verifying the device’s performance, features, and functionality.
Google has been releasing a new version of Android a few times a year, and test cases should be updated to keep up with each new release. Google recently announced that it aims to release new Android versions only once per year going forward.
Good user-interface design significantly lowers user error and improves device usability, so it’s important to test the UI before the device is launched. To test usability, test cases must first be developed. A common approach is to develop usability test cases at the task level and then map each task-level test case to the steps needed to complete the task on the device under test.
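The task-to-steps mapping described above can be sketched as a simple data structure. This is a hypothetical illustration, not part of any test framework; the class and task names are assumptions.

```python
# Hypothetical sketch: a task-level usability test case mapped to the
# device-specific steps needed to complete it. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class TaskTestCase:
    task: str                                   # user-level goal
    steps: list = field(default_factory=list)   # device-specific steps

    def add_step(self, step: str) -> "TaskTestCase":
        self.steps.append(step)
        return self


# Example task for a hypothetical Android-based medical device.
tc = TaskTestCase(task="Enter a glucose reading")
tc.add_step("Unlock the device") \
  .add_step("Open the readings app") \
  .add_step("Tap 'New reading'") \
  .add_step("Type the value and confirm")

print(len(tc.steps))  # number of device steps mapped to this one task
```

Keeping the task description separate from the device steps lets the same task-level test case be remapped when the OEM's UI or skin changes between Android releases.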
Testing usability is particularly important in medical devices, where use error can be catastrophic, and it should be given a great deal of attention. The FDA views use error as a serious source of risk for medical devices, and many recalls are associated with use errors and design issues. The FDA also provides specific guidelines for designing and testing medical devices to minimize use error. Some of these guidelines and standards on usability, all of which can be accessed at fda.gov, are:
International standards and guidelines on usability in medical devices:
How can usability be evaluated and tested? One approach is to use beta testers and simulated use, which can provide significant insight into how well the device performs in a user’s hands. Another, potentially more efficient, way is to apply engineering methods to usability testing. Usability experts have developed methodologies, grounded in scientific research in psychology and human factors, for testing usability. In this approach, usability is evaluated and tested against metrics that affect how human users experience the device. These metrics include efficiency, accessibility, visibility, feedback, and responsiveness, among others.
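One way the metric-based approach can be operationalized is to rate each metric and combine the ratings into a single score. The function below is a minimal sketch; the 0-100 scale, the equal weighting, and the example ratings are all assumptions, not an established scoring standard.

```python
# Hypothetical sketch: combining rated usability metrics into one score.
# The metric names come from the article; scale and weights are assumptions.
def usability_score(metrics: dict, weights: dict) -> float:
    """Weighted average of per-metric ratings (each assumed to be 0-100)."""
    total_weight = sum(weights.values())
    return sum(metrics[m] * w for m, w in weights.items()) / total_weight


# Example ratings, e.g. collected from expert evaluation sessions.
metrics = {
    "efficiency": 80,
    "accessibility": 70,
    "visibility": 90,
    "feedback": 60,
    "responsiveness": 85,
}
weights = {m: 1.0 for m in metrics}  # equal weighting as a starting point

print(usability_score(metrics, weights))  # → 77.0
```

For a medical device, a team might weight feedback and visibility more heavily than efficiency, since those metrics relate most directly to use error.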
If the device is battery operated, an important part of designing and testing involves power consumption and battery life. Modern, handheld battery-operated devices have larger screens, faster CPUs, and faster network connections, and all of these features can impose a significant strain on batteries.
For a mobile battery-operated device, there are two approaches to measure power consumption and the expected battery life. The first approach is component level, in which the power consumption is measured for each subsystem separately. In the second approach, the power consumption is measured at the device level. Each approach has its advantages and disadvantages.
Using the component-level method, the device’s power consumption is the aggregate of the power-consumption measurements for each component. This method is more accurate than the device-level method, and its results can be reproduced consistently, but it requires more effort and is consequently more expensive. It also requires access to detailed device hardware documentation to find the power-supply points on the PCB for each component.
With the device-level method, power is measured at the aggregate point of the battery connection. Power consumption measurements are taken for each of the device use cases. This approach is easier than the component-level measurements and is more flexible. Because use cases are usually performed by human testers, the results may vary from run to run so it may be necessary to repeat the tests several times to achieve stability and statistical significance.
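Because device-level runs are performed by human testers, the run-to-run variability mentioned above should be quantified before trusting the numbers. The sketch below shows one simple way to do that; the sample readings and the stability threshold are illustrative assumptions.

```python
# Hypothetical sketch: aggregating repeated device-level power readings for a
# single use case and checking whether the runs are stable enough to report.
from statistics import mean, stdev

# Average power per run for one use case, in milliwatts (illustrative values).
runs_mw = [412.0, 398.5, 405.2, 420.1, 401.3]

avg = mean(runs_mw)
spread = stdev(runs_mw)
cv = spread / avg  # coefficient of variation: relative run-to-run spread

# An assumed rule of thumb: repeat the use case until cv drops below 5%.
stable = cv < 0.05
print(round(avg, 1), round(cv, 3), stable)
```

A high coefficient of variation is a signal to repeat the use case more times, or to script the use case so it runs the same way on every repetition.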
To conserve battery life, most modern battery-operated devices employ some form of power-management policy and change the state of the device when it’s not actively used for a period of time. In such cases, to get the full picture of the device’s power consumption behavior and a good estimate of the battery life, the test should include power readings at different states of the device, which usually include suspend, idle, and active states.
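Given per-state power readings, battery life can be estimated by weighting each state's power by the fraction of time the device spends in it. The state names below follow the article; the battery capacity, power figures, and duty cycle are illustrative assumptions.

```python
# Hypothetical sketch: estimating battery life from per-state power readings
# (suspend, idle, active) and an assumed daily duty cycle.

# Assumed 3000-mAh cell at 3.7 V nominal, converted to milliwatt-hours.
battery_mwh = 3000 * 3.7

# Measured average power in each power-management state (illustrative, mW).
power_mw = {"suspend": 8.0, "idle": 95.0, "active": 650.0}

# Assumed fraction of time spent in each state over a typical day.
duty = {"suspend": 0.70, "idle": 0.20, "active": 0.10}

avg_power_mw = sum(power_mw[s] * duty[s] for s in power_mw)
estimated_hours = battery_mwh / avg_power_mw

print(round(avg_power_mw, 1), round(estimated_hours, 1))
```

The estimate is only as good as the duty cycle assumed, which is why the per-state readings should be paired with realistic usage scenarios from the requirements documents.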
Executing test cases
Once test cases are written, test engineers will execute them to identify defects and determine whether the device and/or the feature are ready to be shipped. There are a few rules of thumb when executing test cases which can make the process faster and less error prone:
The following table shows a sample test case for testing the accuracy of touch screen data entry on an Android device.
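The pass/fail logic behind a touch-screen accuracy test case like that one can be sketched as follows. The tolerance value and the sample touch points are assumptions for illustration, not values from the test case itself.

```python
# Hypothetical sketch: pass/fail check for touch-screen accuracy. Each sample
# pairs an on-screen target with the touch point the device actually reported;
# the test fails if any offset exceeds an assumed pixel tolerance.
import math

TOLERANCE_PX = 15  # assumed maximum acceptable touch offset, in pixels


def touch_error(target: tuple, actual: tuple) -> float:
    """Euclidean distance between the target and the reported touch point."""
    return math.hypot(actual[0] - target[0], actual[1] - target[1])


# (target, actual) pixel coordinates — illustrative sample data.
samples = [
    ((100, 200), (104, 197)),
    ((500, 800), (495, 812)),
    ((240, 60), (241, 61)),
]

errors = [touch_error(t, a) for t, a in samples]
passed = all(e <= TOLERANCE_PX for e in errors)
print(passed, round(max(errors), 1))
```

In practice the targets would be drawn by a test app across the whole screen, including the edges and corners where touch accuracy typically degrades.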
Moe Tanabian is the managing partner of Intuigence Group (Irvine, CA) and has more than 15 years of experience in designing and developing products in embedded systems, wireless devices, and infrastructure. Tanabian holds a master’s degree in computer/systems engineering from Carleton University, Ottawa, Canada, and an MBA from Queen’s University, Kingston, Canada. He is a senior member of the IEEE. Contact Tanabian at email@example.com.