What is Six Sigma?

The concepts surrounding the Six Sigma quality drive are essentially statistical and probabilistic. In simple language, these concepts boil down to, “How confident can I be that what I planned to happen will actually happen?” Basically, Six Sigma is about measuring and improving how close we come to delivering what we planned.

Everything we do varies, if only slightly, from plan. Since no result can exactly match our intention, we usually think in terms of a range of acceptability for what we plan to do. These ranges of acceptability (or tolerance limits) correspond to the intended use of the product of our work – the customer’s needs and expectations.

Here is an example. Consider how your tolerance limits can be structured to meet customer expectations in these two instructions:

“Cut two medium-sized potatoes into cubes of one-quarter inch.” and “Drill and tap two holes of one-quarter inch in carbon steel fittings.”

What would be your range of acceptability, or tolerances, for the quarter-inch value? (Hint: a 5/16" potato cube would probably be acceptable; a 5/16" threaded hole would probably not be.) Another consideration in your potato-dicing and hole-making processes would be the inherent ability of the way you produce a quarter-inch dimension – the capability of the process. Do you hand-cut potatoes with a knife or use a special cutting machine with preset blades?

Do you drill holes with a hand-held drill, or do you use a drill press? If we measured enough finished potato cubes and holes, the capability of the various processes would speak to us. Their language would be distribution curves.

Distribution curves not only tell us how well our processes have performed; they also tell us the probability of what our process will do next. Statisticians group these probabilities into segments of the distribution curve called standard deviations from the mean. The symbol they use for standard deviation is the lowercase Greek letter sigma (σ).

For any process with a normal distribution (something close to a bell-shaped curve), the probability is 68.26% that the next value will fall within one standard deviation of the mean. The probability is 95.44% that the same next value will fall within two standard deviations. The probability is 99.73% that it will be within three sigma, and 99.994% that it will be within four sigma.
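These percentages follow directly from the normal curve and can be checked with the standard library; nothing here is specific to any particular process:

```python
from statistics import NormalDist

# Probability that the next value falls within k standard deviations
# of the mean, for a normally distributed (bell-shaped) process.
nd = NormalDist()  # standard normal: mean 0, sigma 1

for k in (1, 2, 3, 4):
    p = nd.cdf(k) - nd.cdf(-k)  # area between -k and +k sigma
    print(f"within {k} sigma: {p:.4%}")
```

The four sigma figure comes out as 99.9937%, which the article rounds to 99.994%.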

If the acceptability or tolerance range of your product is at or outside the four sigma point on the distribution curve for your process, you are virtually certain to produce acceptable material every time – provided, of course, that your process is centered and remains centered on your target value.

Unfortunately, even if you can center your process once, it will tend to drift. Experimental data show that most processes, even under control, still drift about 1.5 sigma to each side of their midpoint over time.

This means that the real probability of a process with tolerance limits at four sigma producing acceptable material is actually more like 99.38%, not 99.994%.
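The drifted figure can be derived the same way: with the mean shifted 1.5 sigma toward one tolerance limit, the near limit sits only 2.5 sigma away. A short sketch:

```python
from statistics import NormalDist

nd = NormalDist()

def yield_with_shift(sigma_level, shift=1.5):
    # With the process mean drifted `shift` sigma toward one tolerance
    # limit, the near limit sits at (sigma_level - shift) sigma from
    # the shifted mean and the far limit at (sigma_level + shift).
    return nd.cdf(sigma_level - shift) - nd.cdf(-sigma_level - shift)

print(f"{yield_with_shift(4):.2%}")  # prints 99.38%
```

Essentially all of the defects come from the near tail; the far tail's contribution is negligible.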

To achieve near-perfect process output, the process capability curve must fit within the tolerance limits such that the limits sit at or beyond six standard deviations, or Six Sigma, on the distribution curve. That’s why we call our goal Six Sigma quality.
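Running the same shifted-mean arithmetic at six sigma produces the famous defects-per-million figure; this assumes only the normal curve and the conventional 1.5 sigma shift:

```python
from statistics import NormalDist

nd = NormalDist()

def dpmo(sigma_level, shift=1.5):
    # Defects per million opportunities for tolerance limits at
    # `sigma_level` sigma, allowing the mean to drift `shift` sigma.
    defect_prob = nd.cdf(-(sigma_level - shift)) + nd.cdf(-(sigma_level + shift))
    return defect_prob * 1_000_000

print(round(dpmo(6), 1))  # prints 3.4
print(round(dpmo(4)))     # about 6210 defects per million
```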

Quality makes us strong

In the past, conventional wisdom held that high levels of quality cost more in the long run than poorer quality, raising the price you had to ask for your product and making you less competitive. Balancing quality against cost was believed to be the key to economic survival. The surprising discovery of the companies that originally developed Six Sigma, or world-class, quality is that the best quality does not cost more. It actually costs less. The reason is something called the cost of quality. The cost of quality is actually the cost of deviating from quality – paying for things like rework, scrap and warranty claims. Doing things right the first time – though it takes more effort to reach that level of performance – actually costs far less than creating and then finding and correcting mistakes.

Shooting for Six Sigma:

An illustrative fable

The underlying logic of Six Sigma quality involves some understanding of the role of statistical variation. Here is a story about it. Robin Hood is out in the meadow training for the archery competition to be held next week at the castle. After Robin’s first 100 shots, Friar Tuck, Robin’s Master Black Belt in archery, tallies the number of hits in each ring. He finds that Robin hit the bull’s-eye 68% of the time.

Friar Tuck plots the results of Robin’s target practice on a chart called a histogram. The results look like this. “Note that the bars of the chart form a curve that looks something like a bell,” says the friar. “This is a normal distribution curve. Any process that varies randomly around a midpoint will form a plot that looks like a smooth bell curve if you make a large enough number of attempts – or, in this case, shoot enough arrows.”
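The friar’s tally can be imitated in a few lines: simulate shots whose error is normally distributed and count how many land in each ring. The one-sigma ring widths and shot count are illustrative assumptions, not part of the story:

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the practice session is reproducible

# Simulate 1,000 shots whose error is normally distributed (mean 0,
# sigma 1), then tally them into one-sigma-wide rings: ring 0 is the
# bull's-eye, ring 3 collects everything at three sigma or beyond.
shots = [random.gauss(0.0, 1.0) for _ in range(1000)]
rings = Counter(min(int(abs(x)), 3) for x in shots)

for ring in range(4):
    count = rings[ring]
    print(f"ring {ring}: {'#' * (count // 20)} ({count})")
```

The printed bars form the same bell-shaped tally the friar drew: roughly 68% of shots in the bull’s-eye, with counts falling away sharply in the outer rings.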

Robin scratches his head. Friar Tuck explains that Robin’s process involves choosing straight arrows (raw material); keeping the bow steady and releasing the bowstring smoothly (the human factor); the wood of the bow and the strength of the string (machines); and the technique of aiming to center the process on the bull’s-eye (calibration and statistical process control).

The product of Robin’s process is an arrow in a target. More specifically, the products that satisfy the customer are arrows that score. Arrows outside the third circle of the target do not count, so they are misses. Robin’s process seems to be 100% within specification. In other words, every product produced is acceptable in the eyes of the customer.

“You seem to be a three to four sigma archer,” the friar continues. “We’d have to measure a lot more holes to know for sure, but let’s assume that 99.99% of your shots score; that makes you a four sigma shooter.” Robin goes off to tell his merry men.

The next day, the wind is constantly changing direction; there is a light mist. Robin thinks he feels a cold coming on. Whatever the reason, his process does not remain centered on the mean the way it did before. In fact, it drifts unpredictably up to 1.5 sigma to each side of the mean. Instead of producing no defects, Robin has, after a hundred shots, produced a defect: a hole outside the third circle. In fact, instead of 99.99% of his shots scoring, only 99.38% do.

While this may not seem like much of a change, imagine Robin, instead of shooting at targets, laser-drilling holes in turbine blades. Let’s say there were 100 holes in each blade. The probability of producing even one defect-free blade would not be good. (Since defect creation would be random, his process would produce some good blades as well as some blades with multiple defects.)
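The arithmetic is simple compounding: a blade is defect-free only if all 100 holes are good, so the blade yield is the per-hole yield raised to the 100th power. The per-hole yields below are the sigma figures from the article; the 100-hole blade is the article’s own hypothetical:

```python
# Blade yield = per-hole yield ** holes, since every hole must be good.
HOLES_PER_BLADE = 100

for label, per_hole_yield in [
    ("four sigma, centered", 0.9999),
    ("four sigma, 1.5 sigma drift", 0.9938),
    ("six sigma, 1.5 sigma drift", 0.9999966),
]:
    blade_yield = per_hole_yield ** HOLES_PER_BLADE
    print(f"{label}: {blade_yield:.2%} of blades defect-free")
```

At four sigma with drift, only a bit over half the blades come out defect-free; at six sigma, virtually all of them do.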

Without inspecting everything many times over (not to mention spending a huge amount on rework and rejected material), Robin the laser driller would find it virtually impossible to ever deliver a single turbine blade with all its holes properly drilled.

Not only would the four sigma manufacturer have to spend a lot of time and money finding and fixing errors before products could be shipped, but since inspection cannot find all the defects, she would also have to resolve problems after they reached the customer. The Six Sigma manufacturer, on the other hand, would be able to concentrate on only a handful of errors to further improve the process.

How can Six Sigma quality tools help? If Robin the archer were to use these tools to become a Six Sigma sharpshooter instead of a four sigma shooter, then when he went out in the wind and rain he would still score on every shot. Some arrows might now land in the second circle, but they would all still be acceptable to the customer, which guarantees first prize at the competition. Robin the laser driller would also be successful; he would produce virtually defect-free turbine blades.

The steps on the Six Sigma quality path:

1. Measurement

Six Sigma quality means achieving a business-wide standard of fewer than 3.4 defects per million opportunities to make a mistake.
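In practice this is counted, not derived: defects are tallied against opportunities, and the result is converted back to a sigma level. A sketch of that arithmetic, using hypothetical invoice numbers and the conventional 1.5 sigma shift:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    # Defects per million opportunities, counted from observed data.
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    # Conventional short-term sigma level: invert the normal tail
    # probability, then add back the assumed 1.5 sigma long-term shift.
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Hypothetical example: 25 invoices with errors out of 5,000 invoices,
# each invoice offering 10 opportunities for a mistake.
d = dpmo(25, 5_000, 10)
print(round(d))                    # 500 DPMO
print(round(sigma_level(d), 2))    # about 4.79 sigma
```

Note that the same counting works for invoices, solder joints, or drilled holes; only the definition of an "opportunity" changes.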

This quality standard covers design, manufacturing, marketing, administration, service and support – all facets of the business. Everyone has the same quality goal and largely the same methods to achieve it. Although the application to design and manufacturing is obvious, the goal of Six Sigma performance – and most of the same tools – also applies to the softer, more administrative processes.

After the improvement project is clearly defined and bounded, the first element of the quality improvement process is the measurement of performance. Effective measurement requires a statistical view of all processes and problems. This reliance on data and logic is central to the practice of Six Sigma quality.

The next step is knowing what to measure. The determination of sigma level is based mainly on counting defects, so we need to measure the frequency of defects. Defects or deficiencies in a manufacturing process tend to be relatively easy to define – simply a failure to meet a specification. To extend the application to other processes, and to further improve manufacturing, a broader definition is helpful: a defect is any failure to meet a customer satisfaction requirement, and the customer is always the next person in the process.

In this initial phase, you select the critical characteristics you plan to improve. These would be based on an analysis of your customer’s requirements (usually using a tool such as quality function deployment). Once you have clearly defined your performance standards and validated your measurement system (with repeatability and reproducibility studies), you would then be able to determine short- and long-term process capability and actual process performance (Cp and Cpk).
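Cp and Cpk are standard capability indices: Cp compares the tolerance width to six process sigma, while Cpk penalizes an off-center mean. A minimal sketch, using a hypothetical quarter-inch hole with a 0.250 ± 0.005 inch tolerance:

```python
def cp(usl, lsl, sigma):
    # Process capability: tolerance width over the 6-sigma process spread.
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    # Capability allowing for an off-center mean: distance from the mean
    # to the NEARER tolerance limit, in units of 3 sigma.
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical quarter-inch hole: tolerance 0.250 +/- 0.005 inch,
# process sigma 0.001 inch, mean drifted to 0.251 inch.
print(round(cp(0.255, 0.245, 0.001), 2))          # 1.67
print(round(cpk(0.255, 0.245, 0.251, 0.001), 2))  # 1.33
```

For reference, a centered Six Sigma process corresponds to Cp = 2.0, and with the 1.5 sigma drift to Cpk = 1.5.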

2. Analysis

The second step is to define performance goals and identify the sources of process variation. As a company, we have set Six Sigma performance in all processes within five years as our goal. This must be translated into specific goals for each operation and process. To identify sources of variation, after counting the defects, we need to determine when, where, and how they occur. Many tools can be used to identify the causes of the variation that creates defects.

These include tools many people have seen before (process mapping, Pareto charts, fishbone diagrams, histograms, scatter plots, run charts) and some that may be new (affinity diagrams, box-and-whisker plots, multivariate analysis, hypothesis testing).
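As a taste of the simplest of these tools, a Pareto tally just sorts defect causes by frequency so the "vital few" stand out. The defect log below is hypothetical:

```python
from collections import Counter

# Hypothetical defect log from a hole-drilling process.
defects = ["oversize", "oversize", "burr", "oversize", "misaligned",
           "burr", "oversize", "oversize", "burr", "oversize"]

# A text Pareto chart: most frequent cause first, with its share.
for cause, count in Counter(defects).most_common():
    print(f"{cause:<12}{'#' * count} ({count / len(defects):.0%})")
```

Here one cause accounts for 60% of the defects, which is where the improvement effort would start.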

3. Improvement

This phase involves screening for the potential causes of variation and discovering their interrelationships. (The tool commonly used in this phase is design of experiments, or DOE.) Understanding these complex interrelationships then allows the setting of the individual process tolerances that interact to produce the desired result.

4. Control

In the control phase, the processes of validating the measurement system and evaluating capability are repeated to confirm that improvements have been made. Steps are then taken to control the improved processes. (Some examples of tools used in this phase are statistical process control, mistake-proofing, and internal quality auditing.)
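The core of statistical process control is checking that measurements stay inside control limits placed three sigma either side of the center line. A simplified sketch (real charts usually estimate sigma from moving ranges or subgroup ranges rather than the plain sample standard deviation; the diameters below are hypothetical):

```python
from statistics import mean, stdev

def control_limits(samples):
    # Center line +/- 3 sigma, with sigma estimated from the sample
    # standard deviation (a simplification of a real control chart).
    m = mean(samples)
    s = stdev(samples)
    return m - 3 * s, m, m + 3 * s

# Hypothetical hole diameters (inches) from the improved process.
diameters = [0.2502, 0.2498, 0.2501, 0.2499, 0.2500,
             0.2503, 0.2497, 0.2500, 0.2501, 0.2499]

lcl, center, ucl = control_limits(diameters)
print(all(lcl <= d <= ucl for d in diameters))  # True -> in control
```

A point falling outside the limits signals that a special cause has entered the process and should be investigated before defects are produced.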

Words of wisdom about quality

If you think it is natural to have defects, and that quality consists of finding defects and correcting them before they reach the customer, you are just waiting to go out of business. To improve speed and quality, you have to measure them first – and you need a common measure.

The common business-wide measures that drive our quality improvement are defects per unit of work and cycle time per unit. These measures apply equally to design, production, marketing, service, support and administration.

Everyone is responsible for producing quality; therefore, everyone must be measured and held accountable for quality. Measuring quality within an organization and pursuing aggressive improvement is the responsibility of operational management.

Customers want on-time delivery, a product that works immediately, no early-life failures, and a product that is reliable over its lifetime. If the process produces defects, inspection and testing cannot easily rescue the customer from them.

A robust design (one that is within the capability of existing processes to produce) is key to increasing customer satisfaction and reducing costs. The way to a robust design is through simultaneous engineering and integrated design processes.

Since higher quality ultimately reduces costs, the highest quality producer is best positioned to be the lowest cost producer and therefore the most effective competitor in the market.