Quality Advisor

Default White Guide

SPC DEMO

Minimize Production Costs, Quickly Detect Issues, and Optimize Your Product Quality

Don’t miss out! Book a demo of our specialized SPC software and unlock immediate improvements in your processes.

Capability analysis for normal data

The calculations for capability analysis are based on the following assumptions:

  1. The data is normally distributed. In other words, the shape shown by the histogram follows the “normal” bell curve.
  2. The system being studied is stable, and no assignable causes of variation are present. A control chart of the system should be made to confirm stability before a capability analysis is done.
  3. The mean of the system being studied falls between the upper and lower specification limits defined for the process.

If these assumptions are not met, the results of a capability analysis will be misleading.
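
When the assumptions hold, the capability indices can be computed directly from the sample mean and standard deviation. The sketch below is a minimal illustration, not the method of any particular SPC package; it uses the overall sample standard deviation, whereas production software typically estimates sigma from within-subgroup variation.

```python
import statistics

def capability_indices(data, lsl, usl):
    """Estimate Cp and Cpk for normally distributed, in-control data.
    lsl and usl are the lower and upper specification limits."""
    center = statistics.mean(data)
    sigma = statistics.stdev(data)  # simple overall sigma estimate
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - center, center - lsl) / (3 * sigma)
    return cp, cpk

cp, cpk = capability_indices([9.0, 10.0, 11.0, 10.0, 10.0], lsl=7.0, usl=13.0)
# For a process centered between the specs, Cp equals Cpk (about 1.41 here).
```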

See also:
>> Can a process produce output within specifications?
>> Capability vs control
>> Normal data capability analysis
>> Non-normal data capability analysis
>> What is capability analysis and when is it used?
>> What are the capability indices?

Capability vs. control

A process is said to be in control, or stable, if it is in statistical control. A process is in statistical control when all special causes of variation have been removed and only common cause variation remains.

Control charts are used to determine whether a process is in statistical control. If there are no points beyond the control limits, no trends up or down, no runs above or below the centerline, and no other nonrandom patterns, the process is said to be in statistical control.

Capability is the ability of the process to produce output that meets specifications. A process is said to be capable if nearly 100% of the output from the process is within the specifications. A process can be in control, yet fail to meet specification requirements. In this situation, you would need to take steps to improve or redesign the process.
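
A small numeric illustration of that last point (all values invented): a process can sit comfortably inside its own control limits while still producing output outside tighter specification limits.

```python
# Hypothetical in-control process: individuals chart limits at mean ± 3 sigma.
mean, sigma = 50.0, 2.0
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma   # 56.0 and 44.0: stable process
lsl, usl = 47.0, 53.0                           # but the specifications are tighter
cpk = min(usl - mean, mean - lsl) / (3 * sigma)
# cpk = 0.5, well below 1: in control, yet not capable
```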

Capability analysis for non-normal data

Since a capability study makes the assumption that the data being analyzed is normally distributed, what can be done if the data is not normally distributed?

Usually, if the data is not normally distributed, the process is not in control and a capability study is premature. However, in some cases the non-normality is due to a characteristic that legitimately has only a single-sided specification. For example, if you are measuring flatness, the measurements can never be smaller than 0. In these cases, you will need to use Pearson curve fitting, a technique in which the distribution is compared to one of many theoretical distributions. If the data matches one closely enough, it will pass a chi-square test and the capability indices will be useful. As with normally distributed data, if the data does not match one of the theoretical distributions, the capability indices may be misleading and should not be used.
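
The chi-square check can be sketched against a single fitted candidate distribution. The example below uses an exponential fit as a stand-in for the Pearson family (which covers many more shapes); it illustrates the goodness-of-fit test itself, not a substitute for proper curve-fitting software.

```python
import math

def chi_square_stat_exponential(data, n_bins=5):
    """Chi-square goodness-of-fit statistic for `data` against a fitted
    exponential distribution, using equal-probability bins. Compare the
    result to a chi-square critical value with n_bins - 2 degrees of
    freedom (one parameter was estimated from the data)."""
    n = len(data)
    rate = n / sum(data)  # maximum-likelihood estimate of the rate
    # Bin edges at equal increments of the fitted CDF.
    edges = [-math.log(1 - k / n_bins) / rate for k in range(1, n_bins)]
    observed = [0] * n_bins
    for x in data:
        observed[sum(1 for e in edges if x > e)] += 1
    expected = n / n_bins
    return sum((o - expected) ** 2 / expected for o in observed)
```

A small statistic (below the critical value) means the fitted curve is plausible and the resulting capability indices can be trusted; a large one means they should not be used.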

When should you recalculate limits?

Eventually, everyone using SPC charts will have to decide whether they should change the control limits or leave them alone. There are no hard and fast rules, but here are some thoughts to help you make your decision.

The purpose of any control chart is to help you understand your process well enough to take the right action. This degree of understanding is only possible when the control limits appropriately reflect the expected behavior of the process. When the control limits no longer represent the expected behavior, you have lost your ability to take the right action. Merely recalculating the control limits, however, is no guarantee that the new limits will properly reflect the expected behavior of the process either.

Before recalculating, ask yourself the following questions:

  1. Have you seen the process change significantly, i.e., is there an assignable cause present?
  2. Do you understand the cause for the change in the process?
  3. Do you have reason to believe that the cause will remain in the process?
  4. Have you observed the changed process long enough to determine whether newly calculated limits will appropriately reflect its behavior?

You should ideally be able to answer yes to all of these questions before recalculating control limits.

To create control charts and easily recalculate control limits, try software products like SQCpack.

Testing a theory about your data

If your theory concerns different results coming from different shifts, operators, or equipment, try separating the data. For example, you might suspect that one machine produces more scrap than another. If you are considering process improvements, one way to test a theory is to make a change in the process and track its effects. To do this, isolate the data:

  1. If you are collecting data from multiple lines or shifts, you might make a change on one shift or line, and stratify the data for analysis. If you are using SQCpack, the filter function can help create a subset of data from the process you have changed.
  2. Create a control chart, histogram, or run chart, or perform capability analysis with data collected after the change. Compare charts or capability indices created before and after the change.
  3. Create a control chart showing data collected before and after the change. You can create a separate set of control limits for each group of data. Has the process improved? Stayed the same? Worsened?
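
When the raw data is available outside SQCpack, the stratify-and-compare idea in the steps above takes only a few lines of code. The shift labels and measurements below are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical measurements tagged by shift (all values invented).
records = [("A", 10.2), ("B", 11.8), ("A", 10.4), ("B", 12.1),
           ("A", 10.1), ("B", 11.9), ("A", 10.3), ("B", 12.0)]

# Stratify: group the values by shift.
by_shift = {}
for shift, value in records:
    by_shift.setdefault(shift, []).append(value)

for shift, values in sorted(by_shift.items()):
    print(f"shift {shift}: mean={mean(values):.2f}, stdev={stdev(values):.2f}")
# The strata clearly differ here (means 10.25 vs 11.95), supporting the theory.
```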

Which control chart should you use?

Correct chart selection is a critical part of creating a control chart. If the wrong chart is selected, the control limits will not be correct for the data. The type of control chart required is determined by the type of data to be plotted and the format in which the data is collected. Data is collected in either variables or attributes format, and the amount of data in each sample (subgroup) must be specified.

Variables data is defined as a measurement, such as height, weight, time, or length. Monetary values are also variables data. Generally, a measuring device such as a weighing scale, vernier, or clock produces this data. Another characteristic of variables data is that it can contain decimal places, e.g., 3.4 or 8.2.

Attributes data is defined as a count, such as the number of employees, errors, defective products, or phone calls. A standard is set, and then an assessment is made to establish whether the standard has been met. The number of times the standard is or is not met is the count. Attributes data never contains decimal places when it is collected; it consists of whole numbers only, e.g., 2 or 15.

Sample or subgroup size is defined as the amount of data collected at one time. This is best explained through examples.

  • When assessing the temperature in a vat of liquid, the reading is measured once hourly; therefore the sample size is one per hour.
  • When measuring the height of parts, a sample of five parts is taken and measured every 15 minutes; therefore the sample size is five.
  • When checking the number of phone calls that ring more than three times before being answered, the sample size is the total number of phone calls received, which will vary.
  • When checking 10 invoices per day for errors, the sample size is 10.

More information on types of data, sample sizes, and how to select them is given in Practical Tools for Continuous Improvement, which is available from PQ Systems. Once the type of data and the sample size are known, the correct control chart can be selected. Use the following “Control chart selection flow chart” to choose the most appropriate chart.

Once you’ve determined which control chart is appropriate, software like SQCpack can be used to create the chart.
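
The selection logic can be approximated in code. The cutoffs below (e.g., subgroup size 9 as the boundary between X-bar R and X-bar s) are common rules of thumb and vary slightly between references, so treat this as a sketch rather than a definitive selector.

```python
def select_control_chart(data_type, subgroup_size=1,
                         constant_subgroup_size=True,
                         counting="defectives"):
    """Pick a control chart from the data type and subgroup size.
    A rough sketch of the usual selection flow chart."""
    if data_type == "variables":
        if subgroup_size == 1:
            return "individual X and moving range (X-MR)"
        if subgroup_size <= 9:
            return "X-bar and range (X-bar R)"
        return "X-bar and sigma (X-bar s)"
    # Attributes data: defectives = items judged pass/fail;
    # defects = flaws counted per item or unit.
    if counting == "defectives":
        return "np chart" if constant_subgroup_size else "p chart"
    return "c chart" if constant_subgroup_size else "u chart"

select_control_chart("variables", subgroup_size=5)   # -> "X-bar and range (X-bar R)"
```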

Improving measurement accuracy

What is it?

Variation is inherent to any system, and the data collection process is no exception. However, excessive variation in the data collection process will appear as variation on the control chart and can have a negative effect on process analysis. In addition to using operational definitions to ensure measurement consistency, you should periodically perform repeatability and reproducibility tests and recalibrate gages.

Gage R&R refers to testing the repeatability and reproducibility of the measurement system. Repeatability is the variation found in a series of measurements that have been taken by one person using one gage to measure one characteristic of an item. Reproducibility is the variation in a series of measurements that have been taken by different people using the same gage to measure one characteristic of an item.

Gage R&R studies let you address two major categories of variation in measuring systems: gage variability and operator variability. Gage variability refers to factors that affect the gage’s accuracy, such as its sensitivity to temperature, magnetic and electrical fields and, if it is mounted, how tight or loose the mount is. Operator variability refers to variation caused by differences among people. It can be caused by different interpretations of a vague operational definition, as well as differences in training, attitude, and fatigue level.

Performing gage R&R studies can be made easier by using software such as GAGEpack.

When is it used?

Gages need to be recalibrated only when repeated test measurements show a lack of statistical control. Calibrating gages that do not need it or failing to calibrate gages that do need it can impair your ability to make accurate judgments about a process. Setting up a regular gage repeatability and reproducibility testing schedule can prevent either problem.

How is it made?

Note: The following are steps for a very basic gage R&R study. For a more in-depth analysis, refer to AIAG’s Measurement Systems Analysis or Evaluating the Measurement Process, by Donald J. Wheeler, Ph.D. and Richard W. Lyday.

  1. Determine the number of operators, the number of parts, and the number of repeat readings. Consider how critical the dimension is. For more critical dimensions, use more parts to increase your degree of confidence in the study results. Also consider the part itself; large parts may be harder to handle and call for fewer samples and more trials.
  2. Select 2 or 3 operators from those who are normally involved with the measurement process you are evaluating.
  3. Collect the parts for the test. Parts should represent the range of variation in the day-to-day operation of the process. Number the parts, but do this in such a way that the operators will not know which part they are measuring.
  4. Let the first operator measure each part in a random order and have another observer record the results. Enter the value in a column that represents that specific part number. Let the second and third operators measure the same parts in the same order without seeing the others’ readings. Record the data in the same manner, keeping data from each operator separated (alternating rows or different pages).
  5. Repeat Step 4 for each trial (repeat reading), with the parts in a different random order each time.
  6. Calculate averages and ranges for each operator for each part.
  7. To analyze the repeatability, create an X-bar and R chart with the data. The range chart will show the consistency of the measurement process. If the range chart is in control, the repeatability of the measurement system is adequate.
  8. The X-bar chart will show the reproducibility (operator variation). Roughly 50% of the sample averages should fall outside the control limits. This indicates that the gage can distinguish among parts. To analyze the consistency among operators, use a whiskers plot.
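
Steps 6 and 7 can be illustrated numerically. The readings below are invented; the D4 constant for subgroups of two is the standard control chart constant for range charts.

```python
from statistics import mean

# Hypothetical gage study: 2 operators x 3 parts x 2 trials (values invented).
readings = {
    ("op1", "part1"): [10.1, 10.2], ("op1", "part2"): [10.6, 10.5],
    ("op1", "part3"): [9.9, 10.0],  ("op2", "part1"): [10.0, 10.3],
    ("op2", "part2"): [10.7, 10.4], ("op2", "part3"): [9.8, 10.1],
}

# Step 6: averages and ranges for each operator for each part.
averages = {k: mean(v) for k, v in readings.items()}
ranges = {k: max(v) - min(v) for k, v in readings.items()}

# Step 7: range chart centerline and upper control limit.
r_bar = mean(ranges.values())
D4 = 3.267                 # control chart constant for subgroups of n = 2
ucl_r = D4 * r_bar         # ranges above this flag inadequate repeatability
```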

Run chart interpretation

To analyze run charts:

  1. Look for runs.
    If you find seven or more points in a row rising or falling, you have found an unusual circumstance that calls for investigation. Finding evidence of a run is neither good nor bad. It simply raises a flag that says “ask why.”
  2. Look for other nonrandom patterns.
    You may find a repeating pattern that corresponds to other data. Any nonrandom or repeating pattern is cause for investigation. If you find no unusual patterns, you may notice differences among readings. Do they swing from highs to lows or are they quite similar to each other? Further analysis by control chart is the next likely step.

To create run charts that will highlight these patterns, use software such as SQCpack.
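
The “seven points in a row” rule in step 1 is easy to automate. The sketch below counts strictly rising or strictly falling points; note that the treatment of tied values varies between references.

```python
def has_run(values, run_length=7):
    """Return True if `run_length` or more consecutive points
    rise (or fall) -- the 'seven points in a row' rule."""
    rising = falling = 1
    for prev, cur in zip(values, values[1:]):
        rising = rising + 1 if cur > prev else 1
        falling = falling + 1 if cur < prev else 1
        if rising >= run_length or falling >= run_length:
            return True
    return False

has_run([1, 2, 3, 4, 5, 6, 7])      # -> True: seven points rising
has_run([5, 3, 5, 3, 5, 3, 5, 3])   # -> False: alternating, no run
```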

Whiskers plot

This chart shows the high and low data values and the averages by part and by appraiser. The vertical line represents the range of readings made by an appraiser on one part. The plot helps determine measurement consistency within and across appraisers, and reveals abnormal readings and part-appraiser interaction.

To create a whiskers plot:

  1. Plot the high and low data values and the average by part for each operator.
  2. Draw a line to connect the high value to the low value.
  3. Connect the averages for each part for each operator.

The longer the line, the larger the deviation from the true value of each part. The whiskers plot also lets you compare the results of each operator. If one operator’s results vary greatly, that operator may need more training in measurement techniques and practices.