Developing Your Reliability Test Suite
I love reliability testing. As an engineer at Apple, I had the pleasure of inventing a series of tests to judge improvements in contoured cover glass strengthening. I designed the tests to be repeatable so different configurations could be evaluated on equal footing -- including one that was essentially a "roller coaster into a granite rolling pin." When we started working on the Apple Watch, we had to completely re-evaluate the test suite for the product, considering the new use cases of a wrist-worn device. Have you ever slammed your hand into the side of a door as you walked through it? Needed a test for that. We had tests for "waterproofness," but what about the pressure differentials caused when your arm enters the water during a freestyle stroke? Yup, test for that. What if a user wears perfume, spray sunscreen, or bug repellent? Yup, yup, yup.
A good reliability test unearths issues that could arise from common uses of your product. But great testing starts with planning and imagining those use cases up front. Here are three pitfalls to watch for when developing your reliability test suite.
Failure to anticipate a use. If you've ever used an iPhone, you know the little buzzer switch on the side of the device that flips it into "silent" mode. It makes a satisfying little bzzt every time you cycle it back and forth. If you were designing that switch, you might expect a user to flip it four or five times a day, surely no more than ten. But add a delightful bzzt, and now users are cycling the switch absent-mindedly. I hope you designed it to be cycled 100 times per day instead of 10!
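To put rough numbers on that, here's a back-of-the-envelope cycle-count calculation (a minimal sketch in Python; the daily-use figures, three-year lifetime, and 2x margin are illustrative assumptions, not real design targets):

```python
# Rough sizing of a switch-cycling test. All numbers are illustrative
# assumptions, not real design targets.
DAYS_PER_YEAR = 365
LIFETIME_YEARS = 3      # assumed service life
SAFETY_FACTOR = 2       # margin over predicted usage

def required_test_cycles(cycles_per_day: int) -> int:
    """Total cycles the switch must survive in a cycling test."""
    return cycles_per_day * DAYS_PER_YEAR * LIFETIME_YEARS * SAFETY_FACTOR

print(required_test_cycles(10))    # naive estimate -> 21,900 cycles
print(required_test_cycles(100))   # absent-minded fidgeting -> 219,000 cycles
```

A 10x miss on daily usage becomes a 10x miss on the cycle target -- which is exactly the kind of gap field data can catch before you commit to a test plan.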
These situations are hard because you don’t know exactly how your product will be used until it’s in the field. Instead of guessing, build some units and get them into the hands of real users. Don’t just hand devices out to your engineers (though you should do this too); look for users who are particularly extreme in ways that will stress your devices — such as someone who does a lot of physical activity, travels constantly, or has cracked the cover glass on every phone they’ve ever owned. Do in-depth interviews after some time in the field to understand the corner cases so you can create tests for any scenario considered “reasonable use.”
Undertesting. In the early builds, it’s tempting to skimp on reliability test quantities because you know the design isn’t finalized anyway, and building 50 additional units just to destroy them seems like a waste of money. But the earlier you test, the sooner you can work out issues that will cost even more time and money once you’re further along in NPI. Testing needs to be baked into both the budget and the schedule. Some specialized testing might require external laboratories, but make every attempt to work with factories that have basic thermal chambers and drop robots on site — that equipment will cover most of your testing needs.
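When deciding how many units a given test actually needs, the classic zero-failure "success-run" formula, n = ln(1 − C) / ln(R), is a useful sanity check. Here's a minimal sketch; the confidence and reliability targets are illustrative assumptions:

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Units that must pass with zero failures to demonstrate `reliability`
    at the given statistical `confidence` (classic success-run formula)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Illustrative targets: demonstrate 90% and 95% reliability at 90% confidence.
print(success_run_sample_size(0.90, 0.90))  # 22 units
print(success_run_sample_size(0.90, 0.95))  # 45 units
```

In other words, testing a handful of units rarely demonstrates much statistically, which is another reason to budget for real quantities early.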
Overtesting. This tends to happen at larger companies with bigger budgets, where engineers put hundreds of units through reliability testing to cover variable A, variable B, variable C, and many more. Overtesting requires building more units and can also slow down results, because testing resources are finite. To better understand how to avoid these problems, download Instrumental’s Reliability Test Kit. You’ll find detailed test definitions and setup instructions for various reliability tests for electronic products. In this kit, we also demonstrate best practices for some of the most commonly tested scenarios and offer efficiency tips for minimizing units tested and maximizing data.
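As one generic example of that kind of efficiency tip (a sketch of a common approach, not the kit's specific method): running every unit through the non-destructive tests before splitting the pool across destructive tests can cut the build count substantially. The test names and quantities below are illustrative assumptions:

```python
# Generic sketch: share one pool of units across non-destructive tests,
# then split that same pool across destructive tests, instead of building
# a separate batch per test. Names and quantities are illustrative.
NON_DESTRUCTIVE = ["thermal cycle", "water ingress", "cosmetic inspection"]
DESTRUCTIVE = {"drop": 10, "bend-to-failure": 10}
UNITS_PER_TEST = 10  # naive per-test batch size

def units_needed(shared_pool: bool) -> int:
    destructive_total = sum(DESTRUCTIVE.values())
    if shared_pool:
        # Each unit runs the full non-destructive sequence, then is
        # consumed by exactly one destructive test.
        return destructive_total
    # Naive plan: a fresh batch of units for every test on the list.
    return UNITS_PER_TEST * len(NON_DESTRUCTIVE) + destructive_total

print(units_needed(shared_pool=False))  # 50 units built
print(units_needed(shared_pool=True))   # 20 units built
```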