
Pareto diagram

What is it?

A Pareto diagram is a simple bar chart that ranks related measures in decreasing order of occurrence. The principle was developed by Vilfredo Pareto, an Italian economist and sociologist who conducted a study in Europe in the early 1900s on wealth and poverty. He found that wealth was concentrated in the hands of the few and poverty in the hands of the many. The principle is based on the unequal distribution of things in the universe. It is the law of the “significant few versus the trivial many.” The significant few things will generally make up 80% of the whole, while the trivial many will make up about 20%.

The purpose of a Pareto diagram is to separate the significant aspects of a problem from the trivial ones. By graphically separating the aspects of a problem, a team will know where to direct its improvement efforts. Reducing the largest bars identified in the diagram will do more for overall improvement than reducing the smaller ones.

There are two ways to analyze Pareto data depending on what you want to know:

Counts Pareto: Use this type of Pareto analysis to learn which category occurs most often. To create a counts Pareto, you will need to know the categories and how often each occurred.

Cost Pareto: Use this type of Pareto analysis if you want to know which category of problem is the most expensive in terms of some cost. A cost Pareto provides more detail about the impact of a specific category than a counts Pareto can. For example, suppose you have 50 occurrences of one problem and 3 occurrences of another. Based on a counts Pareto, you would be likely to tackle the problem that occurred 50 times first. However, suppose the problem that occurred 50 times costs only $0.50 per occurrence ($25 total), while the problem that occurs 3 times costs $50 each time ($150 total). Based on the cost Pareto, you may want to tackle the more expensive problem first. To create a cost Pareto, you will need to know the categories, how often each occurred, and a cost for each category. Both rankings are illustrated in the sketch below.
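As an illustration, here is a minimal Python sketch of the two rankings described above. The defect categories, counts, and unit costs are invented for illustration only; they simply mirror the 50-occurrence versus 3-occurrence example.

```python
# Minimal sketch: ranking categories for a counts Pareto and a cost Pareto.
# Category names, counts, and unit costs are placeholders for illustration.

defects = {
    "loose fitting": 50,     # occurrences
    "cracked housing": 3,
    "scratched finish": 12,
}
unit_cost = {
    "loose fitting": 0.50,   # dollars per occurrence
    "cracked housing": 50.00,
    "scratched finish": 2.00,
}

def pareto_table(values):
    """Return (category, value, cumulative %) rows in decreasing order of value."""
    total = sum(values.values())
    rows, running = [], 0.0
    for category, value in sorted(values.items(), key=lambda kv: kv[1], reverse=True):
        running += value
        rows.append((category, value, 100.0 * running / total))
    return rows

print("Counts Pareto:")
for row in pareto_table(defects):
    print("  {:<18} {:>8.2f}  {:6.1f}%".format(*row))

print("Cost Pareto:")
total_costs = {c: n * unit_cost[c] for c, n in defects.items()}
for row in pareto_table(total_costs):
    print("  {:<18} {:>8.2f}  {:6.1f}%".format(*row))
```

Note how the same data produces different rankings: the most frequent category tops the counts Pareto, while the most expensive category tops the cost Pareto.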

What does it look like?

An example of a counts Pareto diagram is shown below.

[Figure: example counts Pareto diagram]

When is it used?

Use a Pareto diagram when you can answer “yes” to both these questions:

  1. Can data be arranged into categories?
  2. Is the rank of each category important?

 

Getting the most

Despite its simplicity, Pareto analysis is one of the most powerful problem-solving tools for system improvement. Getting the most from Pareto analysis involves subdividing the data, analyzing it from multiple perspectives, and repeating the analysis over time.

Subdivisions are useful when data has been first recorded at a very general level, but problem solving needs to occur at a more specific level. A retail chain manager might create a Pareto diagram for all the customer returns of furniture by store in his district. Once he or she has identified the store which contributes most returns to the total, the next step might be to analyze that store’s returns by furniture type. If “chairs” turned up as the biggest category of furniture returns for the store in question, yet another Pareto of chair returns might help to discover whether dining room chairs, occasional chairs, wooden chairs, or upholstered chairs were being returned more frequently. Because the Pareto principle holds for subgroupings of data, such successive analyses can be performed to help teams target small elements of a large problem.

Multi-perspective analyses are useful when data can be stratified or subdivided in several different ways. The retail manager might study customer returns of furniture by number of units and again by cost. A store might discover that chairs have accounted for the majority of items returned over a period of time, but that fine dining sets accounted for the majority of cost. Depending on priority, the problem could be attacked to reduce either the highest frequency or the highest cost item. The district retail manager might study his or her district-wide furniture returns by store, by lot number, by furniture type, by cause for return, by frequency, by cost, by salesperson, by delivery carrier, or by any other set of categories he or she thinks may reveal opportunities for improvement. Multi-perspective Pareto analysis helps assure that a set of data is reviewed from all angles and that many explanations for variability are considered.

Repeat analyses are useful when improvement activity is underway and performance data is changing over time. If the retail manager worked with the store’s delivery staff to reduce the number of fine dining sets being damaged and subsequently returned, it would be useful to repeat an earlier Pareto analysis using more recent data to see if the target category has shrunk. Depending on the cycle of data collection—hourly, daily, weekly, monthly, quarterly, or other—repeated Pareto analyses help to monitor the improvements made to the system producing the data.

Caution is in order for users of Pareto analysis who have not monitored the systems they are studying for stability. A wildly fluctuating system will produce inconsistent Pareto rankings that can lead to misjudgments. If, for example, the retail manager failed to note that customer furniture returns varied greatly from month to month, the ranking of categories may be entirely different in a month with high returns from those of a month in which returns were unusually low. Repeated Pareto analyses can help to confirm rankings, but the most effective protection against being misled is to first use a control chart to tell if the system is stable and predictable.


Pareto data

What is it?

If your counts data can be arranged into categories, and the rank of each category is important, it is considered Pareto data. Pareto analysis is based on the law of the significant few versus the trivial many: for example, there are often many causes of a problem, but only a few of them are significant.

How is it used?

The purpose of a Pareto diagram is to separate the significant aspects of a problem from the trivial ones. By graphically separating the aspects of a problem, a team will know where to direct its improvement efforts. Reducing the largest bars identified in the diagram will do more for overall improvement than reducing the smaller ones. You can use a Pareto diagram to see frequency of occurrence or to compare cost and occurrence. Software packages such as SQCpack can generate Pareto diagrams from your data.

Use a Pareto diagram to analyze this type of data.


Frequently-asked questions about capability

  1. Are Cpk & Ppk acronyms? If so, what do they actually mean or represent?
  2. What is the difference between Cp and Pp?
  3. What is the difference in the formulas for Cpk and Ppk?
  4. Are there maximum values for Cp, Cpk, Pp and Ppk?
  5. How can I improve Cpk value, when it is less than 1.0?
  6. Is it possible to have a Ppk value of 10 and a Pp number of 5?
  7. What do the letters in Cp and Cpk stand for?
  8. Why do capability indices formulas divide by 3?
  9. What is an ideal Cpm value?
  10. Can I compare two processes based on only the Cpk values of each of them?
  11. Can the process performance index Ppk be applied on the ongoing process? If yes, how?
  12. Why would I have Cp and Cpk indices well over 1 when some readings are outside the specification limits?

Are Cpk & Ppk acronyms? If so, what do they actually mean or represent?

Since their introduction, there has been a lot of speculation as to their meaning. Here is my two cents' worth.

Cp has always been known as capability of the process since I became aware of it, and it has been around for some time. My connection with Cpk came through the Ford "Continuous Process Control and Process Capability Improvement" manual, probably more than 20 years ago. In the Ford manual, a k value was used to represent the number of standard deviations between the process average and the target. I would assume that Cpk came literally from Cp with a k-factor adjustment. As for Pp and Ppk, the reference from the beginning has been to Process Performance, as opposed to Process Capability.

What is the difference between Cp and Pp?

The technical difference is that the 6 sigma used for the Cp calculation (or the 3 sigma used for the Cpk calculation) comes from the estimate of sigma based on the average range, and the 6 sigma used for Pp calculation (or 3 sigma used for the Ppk calculation) comes from the estimate of sigma based on using all the data and the classical formula for the standard deviation. View the formulas for Cp and Cpk; view the formulas for Pp and Ppk.
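For reference, these are the formulas as they are commonly written, where d2 is the standard control chart constant for the subgroup size (the notation on the linked formula pages may differ slightly):

\[ \hat{\sigma} = \frac{\bar{R}}{d_2}, \qquad s = \sqrt{\frac{\sum_{i}(x_i - \bar{x})^2}{n - 1}} \]

\[ C_p = \frac{USL - LSL}{6\hat{\sigma}}, \qquad C_{pk} = \min\left(\frac{USL - \bar{X}}{3\hat{\sigma}},\; \frac{\bar{X} - LSL}{3\hat{\sigma}}\right) \]

\[ P_p = \frac{USL - LSL}{6s}, \qquad P_{pk} = \min\left(\frac{USL - \bar{X}}{3s},\; \frac{\bar{X} - LSL}{3s}\right) \]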

In general, if the process is in control and normally distributed (standard assumptions when doing capability analysis), both values should be close. However, since most processes wander around a little bit (and are in control), an intuitive interpretation is that the Cpk is what you could be doing and Ppk is what you are doing.

What is the difference in the formulas for Cpk and Ppk? The only difference I see is the i and r after the sigma symbol. What are these referring to?

The six sigma used for the Cpk calculation comes from the estimate of sigma based on the average range (r). The six sigma used for the Ppk calculation comes from the estimate of sigma based on using all the Individual data (i) and the classical formula.

In general, if the process is in control and normally distributed (standard assumptions when doing capability analysis), both values should be close. However, since most processes wander around a little bit (and are still in control), an intuitive interpretation is that the Cpk is what you could be doing and Ppk is what you are doing.

Are there maximum values for Cp, Cpk, Pp and Ppk?

No. As long as the spec range does not change and you continually reduce the variation, you will increase these indices. I have seen as high as 36 and have heard of higher.

How can I improve Cpk value, when it is less than 1.0?

First, compare Cpk to Cp. If Cpk is less than Cp and Cp is greater than one, center the process in the specification. This should make Cpk comparable to Cp. If Cp and Cpk are both less than one, there are two actions you can take. The first (an inadvisable one) is to widen the specification, particularly on the side with the spec limit closest to the center of the process. The second, and more advisable, answer is to improve the process by reducing its variation. If the process is off-center, it would be advisable to try to center it as you try to improve it.

Is it possible to have a Ppk value of 10 and a Pp number of 5?

This should not occur. You might have a negative Ppk value that is larger in absolute value than the Pp value; this implies that the process mean lies outside the specification limits.

What do the letters in Cp and Cpk stand for?

There is no authoritative answer. Cp has been around for a long time and many believe it stands for Capability of the Process. Others say Process Capability, but that would reverse the letters.

As for Cpk, in the literature where I first saw it, k was the difference between the process average and the target value, expressed in standard deviations (that is, the number of standard deviations by which the process is off target). Before you ask, Pp is generally said to stand for Process Performance.

Why do capability indices formulas divide by 3?

When calculating Cp you divide the specification range by six sigma. This is plus and minus three sigma on each side of the mean of the process which would include about 99.7% of the distribution of output if the process is normal. Cp considers only the spread and not the centering of the process. Consequently, you can have a capable process (Cp > 1) and not be making any good product. Cpk considers the mean of the process and calculates two values. Since the specification has been split into two pieces, the process spread is split into two as well.

What is an ideal Cpm value?

Generally there is no "ideal." Bigger is always better. The difference in Cpm, as defined in SQCpack, is in the calculation of the standard deviation or variance term. The standard deviation for Cpm is based on the target value rather than the mean, which makes sigma(pm) larger and Cpm smaller when the process is not centered on the target value. You could say that "ideally" the process should be centered in the specification, making Cpm = Cp. However, Cp might only be 0.80, which clearly is not "ideal."
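For reference, a common textbook form of Cpm (SQCpack's exact implementation may differ in detail) replaces the ordinary standard deviation with a term that measures spread around the target T rather than around the mean:

\[ C_{pm} = \frac{USL - LSL}{6\sqrt{s^2 + (\bar{x} - T)^2}} \]

When the process average sits on the target, the second term under the square root vanishes; the further the average drifts from the target, the smaller Cpm becomes.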

Can I compare two processes based on only the Cpk values of each of them? Is there any other tool by which I can say that one process is better than other?

It depends on what you mean by better. If the processes are producing the same product dimension, then you can compare them more or less directly.

Cpk includes a centering factor as well as the variation factor. Unless you want to compare centering as part of the two processes, use Cp.

Can the process performance index Ppk be applied on the ongoing process? If yes, how?

The capability indices are designed to be applied to ongoing processes. They are an indication of what a customer can expect in terms of quality from a particular process.

If you have a control chart on a characteristic for a process, SQCpack will calculate these values for you if you enter the specifications. If you do not have the software, the capability analysis article series provides information on calculating capability.

Why would I have Cp and Cpk indices well over 1 when some readings are outside the specification limits?

My first guess would be that if you look at a control chart of the data, it is out of control. Before you can do capability analysis, the process should be predictable and that requires that it be stable (in-control). For a more detailed discussion, see How can Cpk be good with data outside the specification?


Analyze for special cause variation

The key to chart interpretation is to initially ascertain the type of variation in the system—that is, whether the variation is coming from special or common causes. When the system has only common causes of variation, it is referred to as stable or in control. If, however, the system has special causes of variation, it is referred to as unstable, or out of control.

Look for any of the conditions listed below; they indicate that the process is statistically unstable:

>> Any point lying outside the control limits
>> 7 or more points in a row above or below the centerline
>> 7 or more points in one direction
>> Any nonrandom pattern
>> Too close to the average
>> Too far from the average
>> Cycles
>> Trends
>> Clusters
>> Sawtooth
>> 2 of 3 points beyond 2 sigma
>> 4 of 5 points beyond 1 sigma

When you have determined whether or not there is special cause variation, declare the system stable or unstable.
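As an illustration, here is a minimal Python sketch of three of the rules listed above: a point outside the control limits, seven or more points in a row on one side of the centerline, and seven or more points moving in one direction. The control limits and readings are placeholders for illustration, not output from any particular chart.

```python
# Minimal sketch of three common special-cause tests.
# Control limits and data below are placeholders for illustration.

def outside_limits(points, lcl, ucl):
    """Indices of points falling outside the control limits."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

def run_one_side(points, center, run_length=7):
    """True if `run_length` or more consecutive points fall on one side of the centerline."""
    streak, last_side = 0, 0
    for x in points:
        side = 1 if x > center else -1 if x < center else 0
        streak = streak + 1 if side == last_side and side != 0 else 1
        last_side = side
        if side != 0 and streak >= run_length:
            return True
    return False

def trend(points, run_length=7):
    """True if `run_length` or more consecutive points move steadily up or down."""
    up = down = 1
    for prev, cur in zip(points, points[1:]):
        up = up + 1 if cur > prev else 1
        down = down + 1 if cur < prev else 1
        if up >= run_length or down >= run_length:
            return True
    return False

data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 10.1, 9.7, 10.0]
print(outside_limits(data, lcl=9.0, ucl=11.0))   # []
print(run_one_side(data, center=10.0))           # False
print(trend(data))                               # False
```

A full SPC package applies many more tests (such as the 2-of-3 beyond 2 sigma and 4-of-5 beyond 1 sigma rules listed above), but the structure is the same: each rule scans the plotted points for a nonrandom pattern.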


Capability analysis

What is it?

Capability analysis is a set of calculations used to assess whether a system is statistically able to meet a set of specifications or requirements. To complete the calculations, a set of data is required, usually generated by a control chart; however, data can be collected specifically for this purpose.

Specifications or requirements are the numerical values within which the system is expected to operate, that is, the minimum and maximum acceptable values. Occasionally there is only one limit, a maximum or minimum. Customers, engineers, or managers usually set specifications. Specifications are numerical requirements, goals, aims, or standards. It is important to remember that specifications are not the same as control limits. Control limits come from control charts and are based on the data. Specifications are the numerical requirements of the system.

All methods of capability analysis require that the data is statistically stable, with no special causes of variation present. To assess whether the data is statistically stable, a control chart should be completed. If special causes exist, data from the system will be changing. If capability analysis is performed, it will show approximately what happened in the past, but cannot be used to predict capability in the future. It will provide only a snapshot of the process at best. If, however, a system is stable, capability analysis shows not only the ability of the system in the past, but also, if the system remains stable, predicts the future performance of the system.

Capability analysis is summarized in indices; these indices show a system’s ability to meet its numerical requirements. They can be monitored and reported over time to show how a system is changing. Various capability indices are presented in this section; however, the main indices used are Cp and Cpk. The indices are easy to interpret; for example, a Cpk of more than one indicates that the system is producing within the specifications or requirements. If the Cpk is less than one, the system is producing data outside the specifications or requirements. This section contains detailed explanations of various capability indices and their interpretation.

Capability analysis is an excellent tool to demonstrate the extent of an improvement made to a process. It can summarize a great deal of information simply, showing the capability of a process, the extent of improvement needed, and later the extent of the improvement achieved.

Capability indices help to change the focus from only meeting requirements to continuous improvement of the process. Traditionally, the focus has been to reduce the proportion of product or service that does not meet specifications, using measures such as percentage of nonconforming product. Capability indices help to reduce the variation relative to the specifications or requirements, achieving increasingly higher Cp and Cpk values.

Before capability analysis is completed, a histogram and control chart need to be completed. Easily create these charts and perform capability analysis using software like SQCpack.

When is it used?

Use the standard method for calculating capability analysis when you can answer “yes” to all of the following questions:

  1. Is it necessary to understand how the system performs in comparison to specification limits?
    Specifications or requirements must be available to complete capability analysis. The system must also be measured in the same way as the specifications, so a direct comparison can be made.
  2. Does the specification consist of an upper and lower requirement?
    For processes with one-sided specifications, see the article capability analysis for one-sided specifications.
  3. Are no special causes of variation present?
    A system with special causes is unstable and constantly changing. If capability analysis is performed under these circumstances, it will be unreliable. Always construct a control chart and check for special causes before completing capability analysis.
  4. Is the data in variables form?
    In order to complete the standard method for capability analysis, the data must be in variables form, that is measured data, such as time, length, weight, or distance.
  5. Do the individual values form a normal distribution?
    In order to complete capability analysis using the standard method, a normal distribution is required. Use a histogram to check for normal distribution. If the distribution is not normal, non-normal capability analysis can be used.
  6. Has the data been collected over a period of time?
    There are two ways to collect data for capability analysis. The standard method is from a control chart, where the data is collected over a period of time. If data has been collected in this way, the standard method for performing capability analysis is used.

Note that if you cannot answer these questions with a “yes,” Practical Tools for Continuous Improvement includes capability analysis for one-sided specifications, trial runs, attributes data, and nonnormal data.

What does it look like?

A courier company has set up a team to look at the actual arrival time at customers’ locations to pick up packages, in comparison to the scheduled arrival time. The company guarantees pick up of packages within 14 minutes of the scheduled time. It is unacceptable to customers for the courier to arrive early. Therefore, there are two requirements: on time and up to 14 minutes late. The result of the capability analysis for this example follows.

Estimated standard deviation (sigma-hat) = 2.00
Zupper = 2.00
Zlower = 5.00
Cpk = 0.67
Cp = 1.17
Cpu = 0.67
Cpl = 1.67

 

A capability analysis like this can be accomplished using software like SQCpack.


How is it made?

These steps assume that variables data has been collected over time, and that a control chart and histogram have been completed. The control chart should show no special causes, and the histogram should reveal that the data is normally distributed.

In the example, the team examined the arrival time of couriers in comparison to the scheduled arrival time over a month. Since time is being measured, the data is in variables form. The team completed a histogram and found the data to be normal. An X-bar and R control chart was also completed, showing no special causes of variation. Since a control chart must be completed before performing capability analysis, the calculations from the control chart can be utilized. The information taken from the control chart follows:

  • The sample size used in the control chart, n
  • The overall average: X-double-bar (from an X-bar and R chart) or X-bar (from an X-MR chart)
  • The average range: R-bar (from an X-bar and R chart) or the average moving range (from an X-MR chart)

The numerical specifications or requirements should also be known. Information for the example is shown below:

n = 5

X-double-bar (overall average) = 10.00

R-bar (average range) = 4.653

USL = upper specification limit = 14 minutes

LSL = lower specification limit = 0 minutes (on time)

Steps:

  1. Sketch the distribution
  2. Calculate the estimated standard deviation
  3. Determine the location of the tails for the distribution
  4. Draw the specification limits on the distribution
  5. Calculate how much data is outside the specifications
  6. Calculate and interpret the capability indices
  7. Analyze the results
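To make the arithmetic concrete, here is a minimal Python sketch of the calculation steps using the example figures above (d2 = 2.326 is the standard control chart constant for subgroups of five). It reproduces the estimated standard deviation, Z values, and capability indices shown in the results.

```python
# Minimal sketch of the capability calculations for the courier example.
# Inputs come from the control chart and the specifications above.

x_bar = 10.00        # overall average (X-double-bar) from the X-bar chart
r_bar = 4.653        # average range (R-bar) from the R chart
usl, lsl = 14.0, 0.0 # specification limits in minutes
d2 = 2.326           # control chart constant for subgroup size n = 5

sigma_hat = r_bar / d2                      # estimated standard deviation, about 2.00
z_upper = (usl - x_bar) / sigma_hat         # 2.00
z_lower = (x_bar - lsl) / sigma_hat         # 5.00
cp  = (usl - lsl) / (6 * sigma_hat)         # 1.17
cpu = (usl - x_bar) / (3 * sigma_hat)       # 0.67
cpl = (x_bar - lsl) / (3 * sigma_hat)       # 1.67
cpk = min(cpu, cpl)                         # 0.67

print(f"sigma = {sigma_hat:.2f}, Zupper = {z_upper:.2f}, Zlower = {z_lower:.2f}")
print(f"Cp = {cp:.2f}, Cpu = {cpu:.2f}, Cpl = {cpl:.2f}, Cpk = {cpk:.2f}")
```

Here Cpk is limited by the upper specification: the process average sits much closer to the 14-minute limit than to the on-time boundary, which is why Cpu (0.67) rather than Cpl (1.67) determines Cpk.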

The above article is an excerpt from the “Capability Analysis” section of Practical Tools for Continuous Improvement.


Additional articles about capability

>> Cpk or Ppk: Which should you use?
>> How can Cpk be good with data outside the specification?
>> How do we determine process capability if the process isn’t normal?
>> Is Cpk the best capability index?
>> Should you calculate Cpk when your process is not in control?
>> The capability index dilemma: Cpk, Ppk, or Cpm
>> Calculating capability indices with one specification


Histogram: Study the shape

A histogram can be created using software such as SQCpack. How would you describe the shape of the histogram?

Bell-shaped: A bell-shaped picture, shown below, usually presents a normal distribution.

Bimodal: A bimodal shape, shown below, has two peaks. This shape may show that the data has come from two different systems. If this shape occurs, the two sources should be separated and analyzed separately.

Skewed right: Some histograms will show a skewed distribution to the right, as shown below. A distribution skewed to the right is said to be positively skewed. This kind of distribution has a large number of occurrences in the lower value cells (left side) and few in the upper value cells (right side). A skewed distribution can result when data is gathered from a system that has a boundary such as zero. In other words, all the collected data has values greater than zero.

Skewed left: Some histograms will show a skewed distribution to the left, as shown below. A distribution skewed to the left is said to be negatively skewed. This kind of distribution has a large number of occurrences in the upper value cells (right side) and few in the lower value cells (left side). A skewed distribution can result when data is gathered from a system with a boundary such as 100. In other words, all the collected data has values less than 100.

Uniform: A uniform distribution, as shown below, provides little information about the system. An example would be a state lottery, in which each class has about the same number of elements. It may describe a distribution which has several modes (peaks). If your histogram has this shape, check to see if several sources of variation have been combined. If so, analyze them separately. If multiple sources of variation do not seem to be the cause of this pattern, different groupings can be tried to see if a more useful pattern results. This could be as simple as changing the starting and ending points of the cells, or changing the number of cells. A uniform distribution often means that the number of classes is too small.

Random: A random distribution, as shown below, has no apparent pattern. Like the uniform distribution, it may describe a distribution that has several modes (peaks). If your histogram has this shape, check to see if several sources of variation have been combined. If so, analyze them separately. If multiple sources of variation do not seem to be the cause of this pattern, different groupings can be tried to see if a more useful pattern results. This could be as simple as changing the starting and ending points of the cells, or changing the number of cells. A random distribution often means there are too many classes.

Follow these steps to interpret histograms.

  1. Study the shape.
  2. Calculate descriptive statistics.
  3. Compare the histogram to the normal distribution.
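As a small illustration of step 2, here is a minimal Python sketch of some descriptive statistics that back up a visual judgment about shape. The sample data is a placeholder; a mean noticeably above the median hints at right skew, and a mean below the median hints at left skew.

```python
# Minimal sketch: quick numerical check of a histogram's shape.
# The sample data below is a placeholder for illustration.
import statistics

data = [3.2, 4.1, 4.4, 4.6, 4.8, 5.0, 5.1, 5.3, 5.6, 6.0, 6.4, 7.9]

mean = statistics.fmean(data)
median = statistics.median(data)
stdev = statistics.stdev(data)

print(f"mean = {mean:.2f}, median = {median:.2f}, stdev = {stdev:.2f}")
if mean > median:
    print("shape hint: possibly skewed right")
elif mean < median:
    print("shape hint: possibly skewed left")
else:
    print("shape hint: roughly symmetric")
```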


Run chart

What is it?

A run chart is a line graph of data plotted over time. By collecting and charting data over time, you can find trends or patterns in the process. Because they do not use control limits, run charts cannot tell you if a process is stable. However, they can show you how the process is running. The run chart can be a valuable tool at the beginning of a project, as it reveals important information about a process before you have collected enough data to create reliable control limits.

What does it look like?

Run charts show individual data points in chronological order.

[Figure: example run chart]


X-MR chart

What is it?

An individuals and moving range (X-MR) chart is a pair of control charts for processes with a subgroup size of one. Used to determine if a process is stable and predictable, it creates a picture of how the system changes over time. The individual (X) chart displays individual measurements. The moving range (MR) chart shows variability between one data point and the next. Individuals and moving range charts are also used to monitor the effects of process improvement theories.

What does it look like?

The individuals chart, on top, shows each reading. It is used to analyze central location. The moving range chart, on the bottom, shows the difference between consecutive readings. It is used to study system variability.
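As an illustration, here is a minimal Python sketch of the usual X-MR limit calculations, using the standard constants for a moving range of two consecutive readings (2.66 = 3/d2 with d2 = 1.128, and 3.267 = D4). The readings are placeholders for illustration.

```python
# Minimal sketch of individuals (X) and moving range (MR) chart limits.
# The readings below are placeholders for illustration.
data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3, 10.1]

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
x_bar = sum(data) / len(data)                       # centerline of the X chart
mr_bar = sum(moving_ranges) / len(moving_ranges)    # centerline of the MR chart

# Standard constants for a moving range of two consecutive readings.
x_ucl = x_bar + 2.66 * mr_bar
x_lcl = x_bar - 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar                             # MR chart LCL is zero

print(f"X chart:  centerline = {x_bar:.2f}, limits = ({x_lcl:.2f}, {x_ucl:.2f})")
print(f"MR chart: centerline = {mr_bar:.2f}, UCL = {mr_ucl:.2f}")
```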

[Figure: example individuals and moving range (X-MR) chart]


Median chart

What is it?

A median chart is a special-purpose variation of the X-bar chart. This chart uses the median instead of the subgroup average to show the system's central location. The median is the middle point when data points are arranged from high to low. The chart shows all the individual readings. Use median charts to determine whether the system is stable and predictable, or to monitor the effects of process improvement theories.

Although median charts show both central location and spread, they are often paired with range charts.

What does it look like?

The median chart shows each reading in a subgroup. Subgroup medians are connected by the data line and are used to analyze the central location.

[Figure: example median chart]

When is it used?

Use the median chart when you want to plot all the measured values, not just subgroup statistics. This may be the case when subgroup ranges vary a great deal, as showing all the points will emphasize the spread. It shows users that individual data points can fall outside the control limits, while the central location is within the limits.

Use the median chart when you can answer yes to these questions:

  1. Do you need to assess system stability?
  2. Is the data in variables form?
  3. Is the data collected in subgroups larger than one?
  4. Is the time order of subgroups preserved?
  5. Do you want to see individual data points?

 

Getting the most

Collect as many subgroups as possible before calculating control limits. With smaller amounts of data, the median chart may not represent variability of the entire system. The more subgroups you use in control limit calculations, the more reliable the analysis. Typically, twenty to twenty-five subgroups will be used in control limit calculations.

Median charts have several applications. When you begin to improve a system, use them to assess the system’s stability.

After the stability has been assessed, determine if you need to stratify the data. You may find entirely different results between shifts, among workers, among different machines, among lots of materials, etc. To see if variability on the median chart is caused by these factors, you should collect and enter data in a way that lets you stratify by time, location, symptom, operator, and lots.

You can also use median charts to analyze the results of process improvements. Here you would consider how the process is running and compare it to how it ran in the past. Do process changes produce the desired improvement?

Finally, use median charts for standardization. This means you should continue collecting and analyzing data throughout the process operation. If you made changes to the system and stopped collecting data, you would have only perception and opinion to tell you whether the changes actually improved the system. Without a control chart, there is no way to know if the process has changed or to identify sources of process variability.

How is it used?

Variables data is normally analyzed in pairs of charts that present the data in terms of central location and spread. Location, usually the top chart, shows the data in relation to the process average. It is presented in X-bar, individuals, or median charts. Spread, usually the bottom chart, looks at piece-by-piece variation. Range, sigma, and moving range charts are used to illustrate process spread. Another aspect of these variables control charts is that the sample size is generally constant.

Use the following types of charts and analysis to study variables data:

These charts, and more, can be created easily using software packages such as SQCpack.