Measuring the ROI of Your SPC Software
Learn how to quantify the value of your quality system.
Video Transcription
Good afternoon. My name is Steve Wise, I'm the vice president of statistical methods for InfinityQS, and I've been playing in the world of statistics in manufacturing environments since about 1986. What I want to talk about today is how you can use statistical methods to slash your costs. I see a lot of times where folks use SPC software but only get half the benefit of the data. We just have twenty minutes today, so I won't have time to go into all the areas we could cover, but we'll go through a couple of good examples here.
So I want to talk about waste. Waste is a very broad term, but take a simple example: you've got various operations, and a little bit of waste can come out of each one. So we go through operations 1, 2, 3, and they all have some component of waste associated with them, and all of that waste goes into the great big trash bin. Waste can be measured in time, in materials, or in resources; any activity that comes into play when waste is going on just gets trashed. And we're paying for it. I'm paying for it, you're paying for it, everyone pays for it.
People sometimes refer to this as the "Hidden Factory": you've got all this activity going on, but it's not making any profit. You've heard of hidden factories from World War II times, and I am thankful for certain hidden factories. I was raised just outside of Oak Ridge, so my forefathers and I were all grateful for hidden factories, and if you've been on the Boeing tour, that plant was hidden during World War II as well. But the kind of hidden factories I'd like to talk about are the ones we'd like to expose, and how we get to them.
When we think about reducing cost, a lot of times we've heard the stories about SPC, so let's apply some basic SPC methods and see how far we can get. Here's your basic run chart. Those of you who are statisticians can see that I'm dealing with individual-X plot points up there, so mathematically it's safe to put spec limits on as control limits; I'm not violating anything mathematical there. As you can see, I have 18 plot points, and some of these are out of spec: I've got 7 fallouts out of 18, so we can get some numbers off of that. I can see that it's not capable. If you can read the small print over here where the Cpk values are, you can see these are pretty bad numbers. This chart really does nothing more than provide capability and out-of-spec visibility, and that's good! It's good to see the trends and so forth, but it's not really addressing the cost associated with this kind of activity.
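To make the arithmetic behind a chart like that concrete, here is a minimal Python sketch. The readings and spec limits are made up for illustration (chosen so 7 of 18 fall out of spec, echoing the chart), and the Cpk here uses the simple overall standard deviation rather than a moving-range estimate:

```python
import statistics

# Hypothetical individual readings and spec limits (not the data behind the chart)
readings = [7.2, 8.1, 10.4, 6.9, 10.1, 10.6, 7.5, 11.2, 8.3,
            10.3, 7.9, 6.4, 10.8, 8.8, 9.1, 10.9, 7.0, 8.6]
lsl, usl = 5.0, 10.0

mean = statistics.mean(readings)
sigma = statistics.stdev(readings)   # simple overall (long-term) estimate

out_of_spec = sum(1 for x in readings if x < lsl or x > usl)
fallout_pct = 100.0 * out_of_spec / len(readings)

# Cpk compares the distance to the nearer spec limit against three sigma
cpk = min(usl - mean, mean - lsl) / (3 * sigma)

print(f"{out_of_spec} of {len(readings)} out of spec ({fallout_pct:.0f}%), Cpk = {cpk:.2f}")
```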
The other side of control charts is your attribute charts. There are a lot of activities where we're counting up our defects and tallying them, and we have attribute charts to track that. Here's my chart: I'm counting defects, and on average, every sample I take, I'm getting 6.7 defects per subgroup. Again, it shows trends and it shows defect counts, but it's still not addressing the costs. A lot of you are spending money collecting this data, but you're not getting all the value out of it.
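The same quick math applies on the attribute side. This is a small sketch of a c-chart's center line and limits, using hypothetical defect counts chosen to average out near the 6.7 per subgroup mentioned above:

```python
import math

# Hypothetical defect counts per subgroup (not the data behind the chart)
defects = [5, 8, 7, 6, 9, 4, 7, 8, 6, 7]

c_bar = sum(defects) / len(defects)            # average defects per subgroup
ucl = c_bar + 3 * math.sqrt(c_bar)             # upper control limit for a c-chart
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))   # lower limit, floored at zero

print(f"c-bar = {c_bar:.1f} defects/subgroup, UCL = {ucl:.1f}, LCL = {lcl:.1f}")
```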
Let's look at another type of analysis that's very common: the old histogram, or capability analysis. The numbers are kind of hard to see, but if you look at a few of the stats here, I've got forty percent fallout, about twenty percent on the low side and twenty percent on the high side. You can intuitively see that one side is a scrap side and one side is a rework side. You can talk about it, but it doesn't get down to the hard cost. We can also report numbers like Cpk and sigma levels, and we feel good or bad about those numbers, but again, they're not dealing with the cost.
So I want to go through a couple of examples: assuming you're already collecting this kind of data with your SPC software, what few additional things can you do to start gathering costs from it? Here is just the spreadsheet view of that first chart, where we've got 18 readings, and 7 of those fell out of spec. If we do something as simple as putting a cost component in there, in this case the factor value, you can see the column says 17.25. That's what I've assigned as the cost if we have to scrap this part at this operation: my value at this point is $17.25. You can do some math to figure that number out, and then any time something drops out, just multiply by that factor and you can start tallying up some costs.
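That tally is easy to script. Here is a minimal sketch of the same spreadsheet math, reusing the hypothetical readings from above together with the $17.25 scrap value from the example; only the arithmetic is the point:

```python
# Hypothetical readings again: tally a flat scrap cost for each out-of-spec piece
readings = [7.2, 8.1, 10.4, 6.9, 10.1, 10.6, 7.5, 11.2, 8.3,
            10.3, 7.9, 6.4, 10.8, 8.8, 9.1, 10.9, 7.0, 8.6]
lsl, usl = 5.0, 10.0
scrap_value = 17.25   # dollars lost if the part is scrapped at this operation

scrap_cost = sum(scrap_value for x in readings if x < lsl or x > usl)
print(f"Scrap cost for this sample: ${scrap_cost:.2f}")   # 7 pieces x $17.25 = $120.75
```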
But this is a very immature way of looking at cost, and it goes back to this goal-posting business, where the historical, conventional philosophy is that anything falling within the upper and lower spec limits carries no loss whatsoever, and only once it goes outside the spec limits do we take a total loss. Those of you who have heard of and believe in the Taguchi philosophy, which has been around for years, know what he said: any time you deviate from your target, there is cost associated with it. That has pretty much just been a philosophical discussion that motivated us to try to get toward target. But how can we take this knowledge, apply hard math to it, and get some hard costs out of it? Depending on the type of test you're measuring, that loss function curve can be different, so figure out the best guess of the loss function curve in your situation, make an equation out of it, and apply it to your numbers. Then when you get a value that falls a little bit off target, you apply that equation to get the cost associated with it. So let's take the assumption that as soon as a value hits the upper spec limit or the lower spec limit, it's a $17.25 loss.
With the example here, again, our target is 7.5 and the cost is $17.25, and I've got this sample data entry window. InfinityQS, if you've heard of our company, has been in the business of collecting data and doing real-time SPC for years, and these are just screenshots from our software. I've got my A-side diameter at 7.5, and since that's running dead nuts on target, I have no fallout whatsoever. If you can read it, it shows the A-side diameter's out-of-spec count is zero, and the Taguchi loss is zero as well, because we're running right on target.
Now take the other example, where the target is 7.5 and the upper spec limit is 10. If I get a value that falls right at 10, I still don't have any fallout, and yet I've realized all my cost: still a $17.25 loss there. So if it goes to 10.01, what will my fallout be at that point? I'll get a hit of 1. But what happens to the Taguchi value, the loss, at 10.01 versus 10.0? Is there much difference? Is the $17.25 going to change that much? No, it will hardly change at all; we're just applying the factor to 10.01 rather than 10.0, so the difference is a matter of pennies. We play games with ourselves: if it's barely in spec, it's fine, but if it's barely out of spec, it's bad, and we either scrap it or we put it into MRB, we call people in, we call the customer in, we get use-as-is designations or deviations on it. All of that costs money. There are real costs associated with playing those games.
Take a couple more examples. Here's a value that came in at 8. It's well within spec, but if we compare it to the loss function, we've got a loss of $3.45 at that point. Or take something way out of spec: the loss just keeps going. It doesn't stop at $17.25; we keep applying the math, and it's a $31 loss at that point. The reason is that if that part escapes and gets further down the line, more value gets put into it, so you need to add more cost to it. The further down the line it gets before it's discovered, the more it costs. You really need to give that credit and keep applying the math, even when it's out of spec.
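To show how numbers like those fall out, here is a small sketch assuming the straight-line loss curve that these worked values line up with ($0 on target, $3.45 at a deviation of 0.5, $17.25 at the spec limit, roughly $6.90 per unit of deviation, and still growing past the limit). The classic Taguchi loss is quadratic, so substitute whatever curve shape best fits your own test:

```python
target = 7.5
usl = 10.0
cost_at_limit = 17.25   # dollars lost when a part reaches the spec limit

# Linear loss per unit of deviation from target, calibrated at the spec limit
k = cost_at_limit / (usl - target)   # 17.25 / 2.5 = 6.90 dollars per unit

def taguchi_loss(x):
    """Loss keeps accruing with deviation from target, even past the spec limit."""
    return k * abs(x - target)

for x in (7.5, 8.0, 10.0, 10.01, 12.0):
    print(f"value {x:>5}: loss = ${taguchi_loss(x):.2f}")
# 7.5 -> $0.00, 8.0 -> $3.45, 10.0 -> $17.25, 10.01 -> pennies more, 12.0 -> $31.05
```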
Now that we have these statistics, which are actual cost numbers, why can't we put them on control charts? We can. Here's a control chart where, for every sample we take, we calculate the cost and chart it. The red line is the upper spec limit, the $17.25 line, but the center line is 14.99: basically a $15 loss, on average, for every part we make, even though not all of them are out of spec. We can start calculating that. So when it comes time to justify what it costs to bring the process on target, or to reduce variation around that target, if you start looking at it this way you might reach your return on investment faster.
If you want to drill even further and try to isolate the culprits, add more descriptive information to your data, for example lot numbers, shift, or operators. In this case I've got a 5-spindle operation. Assign that to the data and you can break the costs down further to where the problems reside. Spindle 3 and spindle 2 tend to run off target more than the other spindles, so that's where most of your cost is. Even if nothing goes out of spec, you still accumulate loss if you apply the Taguchi loss function curve. Now I can see that overall, when I ran part one (if you look at the yellow bar, everything to the right), I accumulated $285, and from there I can see it divided up across the different spindles. So that's one approach: take the deviation from target, apply the loss function, and do some actual math with it. It's fairly simple. You're already spending the money to collect the data; you just put a little more math behind it and start getting those dollars.
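The spindle breakdown is just the same loss math grouped by a descriptor tagged at data-collection time. Here is a sketch with invented spindle readings and the same assumed linear loss curve as above; the grouping is the point, not the numbers:

```python
from collections import defaultdict

target, usl = 7.5, 10.0
k = 17.25 / (usl - target)   # same assumed linear loss factor as above

# Hypothetical (spindle, measurement) pairs tagged when the data was collected
data = [(1, 7.6), (2, 8.9), (3, 9.4), (4, 7.4), (5, 7.8),
        (1, 7.5), (2, 9.1), (3, 9.6), (4, 7.7), (5, 7.3)]

loss_by_spindle = defaultdict(float)
for spindle, x in data:
    loss_by_spindle[spindle] += k * abs(x - target)   # loss accrues even in spec

# Rank spindles by accumulated loss to see where the money is going
for spindle in sorted(loss_by_spindle, key=loss_by_spindle.get, reverse=True):
    print(f"spindle {spindle}: ${loss_by_spindle[spindle]:.2f} accumulated loss")
```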
If you're in a filling operation, or any time you're doing volumetric measurements, giveaway is where a lot of waste and dollars go. In our simple example here, I've got a can of mixed vegetables. It's a filling operation with a label-stated content of 567g net weight: anything over label-stated content is giveaway, anything under is a violation. If we target the process at 567g, how much of the product is going to be giveaway? Half will have some giveaway associated with it. How much of our product is going to be in violation? Half will be in violation as well. So in this condition we don't want to target on the label-stated value, do we? We want to target above it, but the question is how far above we need to run. If you play with these numbers, it's really a function of the standard deviation and how well you can hold the target, which gets into your controls and how stable your process is. If you can reduce your standard deviation, you can run closer to label-stated without fallout problems on the violation side. But we still want to calculate giveaway, because every bit of giveaway helps justify whatever it might cost to reduce your variation or get better equipment. The math is very simple: we set the lower spec limit at 567 grams, and I've arbitrarily set a target of 577 grams and an upper spec of 587, because we want to run above label-stated. A can came in at just over 571g; I subtract the label-stated content from it and come up with a 4.5658g giveaway. I do that on all 5 cans, since my sample size is 5 in this case.
Now I can do some additional math. If you follow along the spreadsheet, I take those 5 readings and get a giveaway of 4.79, that number there, so on average I'm giving away about 4.7g per can. Then I do some more math. Say you're doing a sample every 15 minutes or every hour, whatever your sampling frequency is: you want to know how many cans have passed through since the last check. Here the number of cans since our last check was 1092, so almost 1100 cans have passed through. I take the number of cans and multiply it by the average giveaway for that sample, and I come up with the giveaway during that last sampling period: 5473g. Then I multiply that by the per-gram cost, which down here is a fraction of a penny, and between my last sample and this one I gave away $8.42. I continue that through the lot run: with every sample I take, I'm also calculating the giveaway, and my total giveaway for that run was $35.28. Now I can take these numbers and start tallying up my giveaway. For my part, which I'm calling part 3, my mixed-vegetable can, when I ran that job I gave away $72 worth of product, and I can see lot 102 was $35 and lot 101 was $37, effectively, and I can add those up.
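Here is a small sketch of that giveaway math with hypothetical can weights and an illustrative per-gram cost. The steps mirror the ones above: per-can giveaway against label-stated content, average per sample, multiplied by the cans since the last check and by the cost per gram:

```python
label_stated = 567.0   # grams; anything above this is giveaway
sample = [571.6, 572.1, 570.8, 573.0, 571.4]   # hypothetical 5-can check

giveaway_per_can = [max(0.0, w - label_stated) for w in sample]
avg_giveaway = sum(giveaway_per_can) / len(sample)   # grams per can

cans_since_last_check = 1092   # from the line counter
cost_per_gram = 0.00154        # dollars per gram (illustrative)

period_giveaway_g = avg_giveaway * cans_since_last_check
period_giveaway_usd = period_giveaway_g * cost_per_gram

print(f"avg giveaway {avg_giveaway:.2f} g/can; "
      f"{period_giveaway_g:.0f} g, about ${period_giveaway_usd:.2f}, since the last check")
```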
So again, you're going to be doing your weights anyway; you may as well do the math, calculate the giveaway, and use it to justify any improvement efforts you want to fund to bring that in. Or better yet, you may not even know what's going on with your sigma. You may be better than you think you are, running farther above target than you need to, and if you did nothing except understand what your sigma is, you could dial the target down. You can actually predict how much giveaway you're willing to live with, and depending on what the feds or the governing bodies say, if you can live with 2%, dial the target down to where statistically you're running at that 2%. That's the value of just having the numbers and knowing what they are.
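One way to read that: if the governing body tolerates, say, 2% of cans below label-stated content and fill weights are roughly normal, the target only needs to sit about two sigmas above the label. This sketch uses an illustrative standard deviation, and the 2% under-fill allowance is an assumption standing in for whatever your rules actually permit:

```python
from statistics import NormalDist

label_stated = 567.0       # grams
sigma = 4.0                # estimated fill standard deviation in grams (illustrative)
allowed_underfill = 0.02   # assumed fraction of cans tolerated below label-stated

# How many sigmas above label-stated the target must sit so that only
# `allowed_underfill` of a normal fill distribution lands below the label
z = NormalDist().inv_cdf(1 - allowed_underfill)   # about 2.05 for 2%
target = label_stated + z * sigma

expected_overfill = target - label_stated   # average fill above label-stated, per can
print(f"run the target at {target:.1f} g "
      f"(about {expected_overfill:.1f} g of average overfill per can)")
```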
There are other areas and ways to slash costs. Let's talk about reducing inspection. Imagine you're doing in-process checks on one end and you've got final inspection with sampling plans on the other. Your sampling plan tells you that for a particular lot size you have to sample 12 pieces, or whatever the number is. Well, some of the features you're checking at final you're also checking in-process. If you make sure the samples, the sampling, and the measurement systems on the floor are good enough, give yourself credit for that: have the software record that seven of those tests were already done on the floor, so at final you only have to do the two that require specialized equipment the floor may not have. You can reduce inspection that way. Then we can get into the duplicate inspections and eliminate some of the redundant checks. And if you're good, and you have a track record showing mathematically and statistically over time that you're good, you can start expanding the amount of time between checks so that it costs less money.
Then there's the idea of smaller sample sizes. A lot of times we deal with MIL-STD-105D or MIL-STD-414, which are old-school sampling plans with very, very big sample sizes. If you look up lot tolerance percent defective (LTPD) sampling plans, those have much smaller sample sizes; they take advantage of the known knowledge of your statistics, so you can actually reduce your sample sizes by using LTPD-type plans. Also, once you know what your process capabilities are, you know what you're trying to achieve with the jobs coming in. You can better match each job to the equipment out there, so you're not wasting your expensive tight-tolerance equipment running jobs that don't require tight tolerances. Don't bottleneck the better equipment when you don't have to.
Also, if you know your process capabilities, then when a job comes in you can decide whether you should really attempt to make it. I like to think of statistics as the science of predicting the future: once you understand your process capabilities, you can look at a job, look at the tolerances you're trying to hold, and predict your fallout before you ever run the job. If you know you're going to be fighting it, you can look at the price a supplier would charge to do it for you and farm it out if that makes sense, so you can better justify your decisions. And even if you don't make any improvements, because there's no money and the technology is what it is, and you still want to run the job, be honest with yourself and order enough material. If you know you're going to have seventeen percent fallout, order 20% more, or whatever it is; just be honest with yourself. It's much better to order additional material than to run out in the middle of a job and have to call stores to get more in, or heaven forbid wait for suppliers to bring you the extra product.
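Predicting fallout before a job runs is straightforward if you're willing to assume a roughly normal process. This sketch uses made-up capability numbers and tolerances, then pads the material order to cover the predicted fallout:

```python
import math
from statistics import NormalDist

# Hypothetical job: what the process can hold versus what the print asks for
process_mean = 25.02     # where the process typically runs
process_sigma = 0.015    # how much variation it holds
lsl, usl = 25.00, 25.05  # tolerance on the incoming job

dist = NormalDist(process_mean, process_sigma)
predicted_fallout = dist.cdf(lsl) + (1 - dist.cdf(usl))   # fraction expected out of spec

pieces_needed = 1000
pieces_to_order = math.ceil(pieces_needed / (1 - predicted_fallout))

print(f"predicted fallout: {predicted_fallout:.1%}, "
      f"order {pieces_to_order} pieces to net {pieces_needed} good ones")
```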
Then also account for enough time. If you know you're going to have fallout, you're willing to live with it, and the ROI isn't there yet to make things better, then account for enough time for the job. You're going to have to sort parts, disposition things, and do the rework, so account for that and build it into your estimates.
There are really just two statistics, and one fact, that you have to understand about your process to make these predictions: where is my mean relative to my set point, and how well can I hold that set point? How much variation is around that set point? And is it stable over time? Your control charts are already giving you that information, and once you have it, there's no end of places you can use this knowledge. I've given you just two small examples in the short time we've had together.
So you've got your hidden factories. We all have hidden factories, and they're costing you money, so let's figure out how to expose them. My submission is that if you're going to invest in collecting data, you ought to embed some cost tallies into your standard QC and SPC checks: add a little more math and get the costs associated with it. Consider the Taguchi loss function curve. A lot of times our bosses will say, "I'm running everything in spec, so we can't justify making improvements." But we all know that if I'm on operation 10 and operation 11 is expecting my product to be on target, and instead everything is on the high side, I'm going to cause operation 11 problems in many cases, even though it's all in spec.
So there is something to the Taguchi loss function curve; use it to help calculate your costs. Giveaway is a great place to look if you're in a volumetric, filling, label-stated-content type of operation; it's a great way to start slashing your costs. And look for other areas where costs can be incorporated.
That's the end of my discussion. I hope it was worth the 18 or so minutes you gave it, I hope I gave you some decent ideas for getting more out of your SPC software, and I appreciate your time.