Guide to DOEs (Design of Experiments)
A tech company was struggling to release its latest product. It had a big problem: the glass on its handheld device had an astronomically high failure rate in drop tests. Stumped, engineers launched a product failure analysis: a series of experiments with different variables to isolate the cause of the problem, known as a DOE (dee-oh-ee). They mounted the glass with thicker adhesive, tested double-stick foam, and tried a rigid glue. They ran several controlled experiments to make the product more rigid or more flexible. Nothing worked. Panic began to set in, with many thousands of units ready to be manufactured and costs mounting by the day.
While DOE has a precise definition as a specific, systematic method for determining cause-and-effect relationships, it has taken on the more general meaning of a controlled experiment in the consumer electronics lexicon, and that's how we're referring to it here. Usually, you're only doing DOEs because something has gone wrong, so setting them up correctly is critical to getting back on track. Here's our five-step DOE process, with some do’s and don’ts for each step.
1. Identify the variable to test.
DO: Test enough units. Engineers are sometimes pushed to accomplish their DOEs with the fewest units possible, usually because of cost or scheduling concerns. But you must ensure you have enough units to produce a reasonable, statistically significant result. Scale the number of tests to the failure rate of the problem area. As a rule of thumb, if you are validating a solution to an issue with a failure rate of p, you should test at least n = 3/p units with zero observed failures to have confidence (α = 0.05) that you have improved your parts or process. For example, to statistically validate improvement on an issue with a 10% failure rate, you should expect to test 30 units with zero failures!
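If you want to check that rule of thumb against your own failure rate, here is a minimal Python sketch. The exact version solves (1 − p)^n ≤ α for n; the 3/p shortcut lands within a unit or two of it at typical failure rates. The function names are ours for illustration.

```python
import math

def zero_failure_sample_size(p, alpha=0.05):
    """Units needed, with zero observed failures, to be (1 - alpha) confident
    the true failure rate is below p; solves (1 - p)**n <= alpha for n."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p))

def rule_of_three(p):
    """The n = 3/p rule of thumb from the text."""
    return math.ceil(3.0 / p)

for p in (0.10, 0.05, 0.01):
    print(f"{p:.0%} failure rate -> exact n = {zero_failure_sample_size(p)}, "
          f"rule of thumb n = {rule_of_three(p)}")
```

For the 10% example above, the exact calculation gives 29 failure-free units and the rule of thumb gives 30, so planning for 30 is a sound target.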
DO: Keep extremely accurate records. You might have 20 different configurations to test. Record everything that changes from one configuration to another — this will help to isolate variables once testing begins. Even if you have excellent records, component kitting errors are more common than any engineer would like to imagine; leverage visual inspection or in-person validation to ensure the right parts are in the expected configuration.
DON’T: Test everything at once. In a time crunch, a common urge is to throw everything but the kitchen sink into the same configuration. Test one variable or solution at a time. This minimizes complications and makes it easier to pinpoint the most effective solution. When testing different configurations, don’t make ten modifications to the design if only one of them matters — the other nine will raise costs and risk creating new issues.
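As a concrete illustration of both points above, keeping complete records and changing one variable at a time, here is a short Python sketch that builds a one-variable-at-a-time test plan from a baseline configuration and writes it to a CSV before any units are assembled. The variables and values are invented for the example; substitute your own bill-of-materials details.

```python
import csv
from datetime import date

# Baseline design plus the single-variable alternatives to test (hypothetical values).
baseline = {"adhesive": "1.0 mm acrylic", "foam": "none", "corner_stiffener": "none"}
single_variable_changes = {
    "adhesive": ["1.5 mm acrylic", "rigid epoxy"],
    "foam": ["double-stick"],
    "corner_stiffener": ["stainless bracket"],
}

# One configuration per change, so each build isolates exactly one variable.
configs = [dict(baseline, config_id="baseline")]
for variable, options in single_variable_changes.items():
    for i, value in enumerate(options, start=1):
        config = dict(baseline, config_id=f"{variable}-{i}")
        config[variable] = value
        configs.append(config)

# Record the full build plan so kitting can be checked against it later.
with open("doe_build_plan.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["config_id", *baseline, "build_date"])
    writer.writeheader()
    for config in configs:
        writer.writerow({**config, "build_date": date.today().isoformat()})
```

A plan like this doubles as the checklist for the kitting verification mentioned above: if a unit's parts don't match its row, you've caught a build error before it can contaminate the results.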
2. Run the right test.
DO: Pinpoint potential problems beforehand. Consider when you’ll have the data from the test — what will the potential results say? Try to identify weaknesses in your test setup before you test. For example, a product can fail a condensation test if it was unintentionally sealed at high heat and humidity — even if it didn’t spring any leaks. Project what mistakes may be built into a test and adjust accordingly. Consider whether the results you get will be truly actionable or if something is missing, and account for that in advance.
DON’T: Game your test results. At one tech company, an engineer considered throwing out a failing drop-test result because the product impacted at a 15-degree angle instead of head-on. Passing on an arbitrary technicality is still a failure. Likewise, in running the IPX7 water-resistance test on a phone, an engineering team could design the speaker mesh in a way that passes IPX7 but fails IPX5. Customers might assume that because it passed one, it passed the other (seven is two more than five, right?!), but we know that will not fly in the field.
DO: Validate your test with the real world. Tests are simulations, but it’s real life that matters. One tech company’s product test involved a robot repeatedly pushing a button on the unit. Units passed the robot test but failed in the field: the failures, fatigue cracks and peeling, appeared only when buttons were pushed more slowly than the robot pushed them. The problem took a smart team weeks to unravel. Test your products in real-world environments, not just in the factory.
3. Build and execute the test.
DO: Err on the side of hyper-vigilance during assembly. Don’t underestimate the potential for major errors during the assembly process. Engineers will sometimes take apart a failed unit and discover that it was built as configuration B when it should have been configuration A. Be on hand to make sure that doesn’t happen.
4. Review the data carefully and present the results.
DO: Make a slide to justify the decision you’re recommending. Use some of the raw data. You may have backup slides, but your suggested action should be defensible in one slide. If it isn’t, you have too many holes and caveats in your results — so go back to step 2. The single-slide technique is one of the core ways I built a strong technical reputation at Apple, such that near the end, I could assert something was true, and people believed me (without data).
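When you build that slide, it also helps to put a number on whether the winning configuration is genuinely better than the baseline rather than eyeballing two failure rates. A minimal sketch, assuming SciPy is available and using made-up drop-test counts, applies Fisher's exact test to the baseline and the modified build:

```python
from scipy.stats import fisher_exact

# Hypothetical drop-test results as [failures, passes] per configuration.
baseline = [6, 24]            # 6 failures out of 30 units
stiffened_corners = [0, 30]   # 0 failures out of 30 units

# One-sided test: does the baseline fail more often than the modified build?
_, p_value = fisher_exact([baseline, stiffened_corners], alternative="greater")
print(f"p-value = {p_value:.4f}")  # about 0.012 here, well under 0.05
```

A result like that is the kind of raw data that belongs on the one slide; if the test can't tell the two configurations apart, treat it as a signal to go back to step 2 rather than argue around the caveats.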
5. Deploy the solution and validate the change at scale.
DON’T: Test changes on a small sample size and call it a day. Ensure that your change didn’t unintentionally mess up another part of your product by performing online performance tests, reliability tests and any other regulatory tests or forms of validation required. Once your product clears those, you can consider the change “made.” Don’t forget to tell the rest of your team about it so they can update any SOPs or fixtures accordingly!
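One way to sanity-check the at-scale validation numbers, again assuming SciPy and using invented counts, is an exact binomial test that asks whether the failure count observed in the larger run is consistent with the failure-rate target you committed to:

```python
from scipy.stats import binomtest

# Hypothetical at-scale validation run: 2 failures in 300 units,
# against a committed maximum failure rate of 3%.
result = binomtest(k=2, n=300, p=0.03, alternative="less")
print(f"p-value = {result.pvalue:.4f}")             # small value: rate is credibly below 3%
print(result.proportion_ci(confidence_level=0.95))  # confidence interval on the true rate
```

If the at-scale data can't clear this bar, the change isn't really validated yet, no matter how clean the small DOE looked.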
Conclusion
So what happened to the product with the high failure rate in drop tests? At the end of the DOE process, the engineers determined that the best way to prevent the issue wasn’t changing the adhesive but stiffening the product’s corners. That insight, and the confidence to reinforce the corners, came from executing the DOE process well. First, the team built enough units for testing; they didn’t skimp to save cost. Then, the engineers were hyper-vigilant about how the tests were run: no gaming of results, no testing too many configurations at once. Finding the right solution was challenging, but the well-executed DOE enabled the product to ship with reliable field performance.