Statistical process control (SPC) is a method for achieving quality control in manufacturing processes. It is a set of methods that uses statistical tools such as the mean and variance to detect whether the observed process is in control.
History
Statistical process control was pioneered by Walter A. Shewhart and taken up by W. Edwards Deming, who applied it to great effect in the United States during World War II to improve aircraft production. Deming was also instrumental in introducing SPC techniques into Japanese industry after the war.
General
Classical quality control was achieved by inspecting important properties of the finished product and then accepting or rejecting it. Statistical process control, by contrast, uses statistical tools to observe the performance of the production line itself and to predict significant deviations that could result in rejected products.
The underlying assumption in the SPC method is that any production process will produce products whose properties vary slightly from their designed values, even when the production line is running normally, and these variances can be analyzed statistically to control the process.
For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams, and some will have slightly less, producing a distribution of net weights. If the production process itself changes (for example, the machines doing the manufacture begin to wear), this distribution can shift or spread out. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than it was designed to. If this change is allowed to continue unchecked, product may be produced that falls outside the tolerances of the manufacturer or consumer, causing it to be rejected.
By using statistical tools, the operator of the production line can discover that a significant change has occurred in the production line, through wear and tear or other means, and correct the problem - or even stop production - before producing product outside specifications. An example of such a statistical tool is the Shewhart control chart; in the cereal example above, the operator would plot the net weight of each box on the chart and watch for points drifting outside the control limits.
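As a rough illustration, the limits for such a chart can be computed directly from recorded samples. The following Python sketch uses made-up net weights for the cereal example and the common 3-sigma rule; a textbook individuals chart would normally estimate sigma from the moving range, so treat this as a simplified assumption rather than a full implementation.

# Simplified sketch: Shewhart-style control limits for the cereal-filling example.
# The sample weights are illustrative assumptions, not real production data.
import statistics

def control_limits(samples, sigma_multiplier=3):
    """Return (centre line, lower control limit, upper control limit)."""
    centre = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # simplification: sample standard deviation
    return centre, centre - sigma_multiplier * sigma, centre + sigma_multiplier * sigma

# Net weights in grams recorded from the filling line, nominally 500 g per box.
weights = [501.2, 499.8, 500.4, 498.9, 500.1, 502.3, 499.5, 500.7]
centre, lcl, ucl = control_limits(weights)
for w in weights:
    status = "ok" if lcl <= w <= ucl else "OUT OF CONTROL"
    print(f"{w:6.1f} g  (CL={centre:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f})  {status}")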
Quality Assurance (QA), Quality Control (QC) & Quality Tools
Just In Time (JIT) is an inventory strategy implemented to improve the return on investment of a business by reducing in-process inventory and its associated costs. The process is driven by a series of signals, or kanban, that tell production processes when to make the next part. Kanban are usually simple visual signals, such as the presence or absence of a part on a shelf. JIT can lead to dramatic improvements in a manufacturing organization's return on investment, quality, and efficiency when implemented correctly.
New stock is ordered when inventory reaches the re-order level, which saves warehouse space and costs. One drawback of the JIT system, however, is that the re-order level is determined by historical demand. If demand rises above the historical average for the planning period, the firm can deplete its inventory and cause customer service issues. To meet a 95% service level, a firm must carry roughly 1.65 standard deviations of lead-time demand as safety stock. Forecast shifts in demand should be planned for around the kanban until trends can be established and the kanban level reset. In recent years some manufacturers have argued that a trailing 13-week average is a better predictor than most forecasters can provide. For more information see http://en.wikipedia.org/wiki/Just_In_Time.
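The safety-stock arithmetic above can be sketched in a few lines of Python. The daily demand figures and lead time below are made-up assumptions; z = 1.645 is the standard normal value corresponding to a 95% cycle service level.

# Sketch of safety stock and re-order level for a kanban / re-order-point system.
# Demand numbers and lead time are illustrative assumptions only.
import statistics

def reorder_point(daily_demand, lead_time_days, z=1.645):
    """Re-order level = expected demand over the lead time + safety stock."""
    mean_daily = statistics.mean(daily_demand)
    sd_daily = statistics.stdev(daily_demand)
    expected = mean_daily * lead_time_days
    # Demand variance scales with lead time, so sigma scales with sqrt(lead time).
    safety_stock = z * sd_daily * lead_time_days ** 0.5
    return expected + safety_stock, safety_stock

daily_demand = [118, 131, 102, 125, 140, 110, 122, 135]  # units sold per day
rop, ss = reorder_point(daily_demand, lead_time_days=5)
print(f"Safety stock = {ss:.0f} units, re-order point = {rop:.0f} units")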
Kaizen (Japanese for "change for the better" or "improvement") is an approach to productivity improvement originating in applications of the work of American experts such as Frederick Winslow Taylor, Frank Bunker Gilbreth, and Walter Shewhart, and of the War Department's Training Within Industry program, by post-WWII Japanese manufacturers. The development of Kaizen went hand-in-hand with that of quality control circles, but it was not limited to quality assurance.
The goals of kaizen include the elimination of waste (defined as "activities that add cost but do not add value"), just-in-time delivery, production load leveling of amount and types, standardized work, paced moving lines, right-sized equipment, and others. A closer definition of the Japanese usage of Kaizen is "to take it apart and put back together in a better way." What is taken apart is usually a process, system, product, or service.
Kaizen is a daily activity whose purpose goes beyond improvement. It is also a process that when done correctly humanizes the workplace, eliminates hard work (both mental and physical), teaches people how to do rapid experiments using the scientific method, and how to learn to see and eliminate waste in business processes.
"Kaizen" is the correct usage. "Kaizen event" or "kaizen blitz" are incorrect usage. Kaizen is often misunderstood and applied incorrectly, resulting in bad outcomes including, for example, layoffs. This is called "kaiaku" - literally, "change for the worse." Layoffs are not the intent of kaizen. Instead, kaizen must be practiced in tandem with the "Respect for People" principle. Without "Respect for People," there can be no continuous improvement. Instead, the usual result is one-time gains that quickly fade.
Importantly, kaizen must operate with three principles in place: process and results (not results-only); systemic thinking (i.e. the big picture, not solely the narrow view); and a non-judgmental, non-blaming attitude (because blaming is wasteful).
Everyone participates in kaizen: people at all levels of an organization, from the CEO on down, as well as external stakeholders if needed. The format for kaizen can be individual, a suggestion system, a small group, or a large group.
The only way to truly understand the intent, meaning, and power of kaizen is through direct participation - many, many times. Lean accounting and just-in-time production are related concepts.
Process Models
A decades-long goal of software engineering has been to find repeatable, predictable processes or methodologies that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management is proving difficult.
Waterfall processes
The best-known and oldest process is the waterfall model, where developers (roughly) follow these steps in order. They state requirements, analyze them, design a solution approach, architect a software framework for that solution, develop code, test (perhaps unit tests then system tests), deploy, and maintain. After each step is finished, the process proceeds to the next step, just as builders don't revise the foundation of a house after the framing has been erected. If iteration is not included in the planning, the process has no provision for correcting errors in early steps (for example, in the requirements), so the entire (expensive) engineering process may be executed to the end, resulting in unusable or unneeded software features. In older (CMM-style) processes, architecture and design preceded coding, usually done by separate people in a separate process step.
Iterative processes
Iterative development prescribes the construction of initially small but ever larger portions of a software project to help all those involved uncover important issues early, before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they offer the potential of reaching the design goals of a customer who does not know how to define what they want.
Agile software development processes are built on the foundation of iterative development. To that foundation they add a lighter, more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software. Agile processes seem to be more efficient than older methodologies, using less programmer time to produce more functional, higher quality software, but have the drawback from a business perspective that they do not provide long-term planning capability. In essence, they promise the most bang for the buck, but won't say exactly when that bang will come.
Extreme Programming, XP, is the best-known agile process. In XP, the phases are carried out in extremely small (or "continuous") steps compared to the older, "batch" processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass, and the programmers can't think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. Design is done by the same people who do the coding. (Only the last feature - merging design and code - is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of which is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.
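The test-first rhythm XP describes can be illustrated with a tiny Python example. The pricing function and its discount rule below are hypothetical, chosen only to show tests written before (and driving) the code that satisfies them.

# Minimal test-first sketch in the XP spirit: the tests come first and define
# "done".  The discount rule and function name are hypothetical assumptions.
import unittest

def price_with_discount(price, quantity):
    """Orders of 10 or more items get a 5% discount (assumed business rule)."""
    total = price * quantity
    return total * 0.95 if quantity >= 10 else total

class PricingTests(unittest.TestCase):
    def test_no_discount_below_ten_items(self):
        self.assertEqual(price_with_discount(2.0, 9), 18.0)

    def test_five_percent_discount_from_ten_items(self):
        self.assertAlmostEqual(price_with_discount(2.0, 10), 19.0)

if __name__ == "__main__":
    unittest.main()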
While iterative development approaches have their advantages, software architects are still faced with the challenge of creating a reliable foundation upon which to develop. Such a foundation often requires a fair amount of upfront analysis and prototyping to build a development model. The development model often relies upon specific design patterns and entity relationship diagrams (ERD). Without this upfront foundation, iterative development can create long-term challenges that are significant in terms of cost and quality.
Critics of iterative development approaches point out that these processes place what may be an unreasonable expectation upon the recipient of the software: that they must possess the skills and experience of a seasoned software developer. The approach can also be very expensive, akin to... "If you don't know what kind of house you want, let me build you one and see if you like it. If you don't, we'll tear it all down and start over." A large pile of building-materials, which are now scrap, can be the final result of such a lack of up-front discipline.
Formal methods
Formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification and design levels. Examples of formal methods include the B-Method, Petri nets, RAISE and VDM. Various formal specification notations are available, such as the Z notation. More generally, automata theory can be used to build up and validate application behaviour by designing a system of finite state machines.
Finite state machine (FSM) based methodologies allow executable software specification and by-passing of conventional coding (see virtual finite state machine or event driven finite state machine).
Recent approaches try to merge the specification and code into one activity to ensure that the specification and code match. While agile methods advocate specifying requirements directly in code, methods such as VFSM develop executable specifications, trying to avoid the coding activity entirely.
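As a rough sketch of the executable-specification idea, the transition table below serves as both the specification and the running code. The door-controller states and events are illustrative assumptions, not drawn from any particular FSM methodology.

# Table-driven finite state machine: the transition table is the specification
# and is executed directly.  States and events are illustrative assumptions.
TRANSITIONS = {
    ("closed",  "open_cmd"):     "opening",
    ("opening", "fully_open"):   "open",
    ("open",    "close_cmd"):    "closing",
    ("closing", "fully_closed"): "closed",
    ("closing", "obstacle"):     "opening",  # safety rule: reverse on obstruction
}

def run(events, state="closed"):
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore undefined events
        print(f"event={event:<13} -> state={state}")
    return state

run(["open_cmd", "fully_open", "close_cmd", "obstacle", "fully_open"])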
Quality control tools
The following are the most widely used quality control tools:
Run Chart: Run charts are used to analyze processes according to time or order. Run charts are useful in discovering patterns that occur over time.
Pareto Chart: Pareto charts are extremely useful because they can be used to identify those factors that have the greatest cumulative effect on the system, and thus screen out the less significant factors in an analysis. Ideally, this allows the user to focus attention on a few important factors in a process (a small worked example appears after this list).
Flow Chart: Flowcharts are pictorial representations of a process. By breaking the process down into its constituent steps, flowcharts can be useful in identifying where errors are likely to be found in the system.
Cause and Effect Diagram: This diagram, also called an Ishikawa diagram (or fishbone diagram), is used to associate multiple possible causes with a single effect. Thus, given a particular effect, the diagram is constructed to identify and organize possible causes for it.
Histogram: Histograms provide a simple, graphical view of accumulated data, including its dispersion and central tendency. In addition to the ease with which they can be constructed, histograms provide the easiest way to evaluate the distribution of data.
Scatter Diagram: Scatter diagrams are graphical tools that attempt to depict the influence that one variable has on another. A common diagram of this type usually displays points representing the observed value of one variable corresponding to the value of another variable.
Control Chart: The control chart is the fundamental tool of statistical process control, as it indicates the range of variability that is built into a system (known as common cause variation). Thus, it helps determine whether or not a process is operating consistently or if a special cause has occurred to change the process mean or variance.
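As the small worked example promised above, the sketch below performs a basic Pareto analysis: it sorts made-up defect counts and prints the cumulative percentage, showing how a few causes dominate the total.

# Basic Pareto analysis over made-up defect counts (illustrative data only).
defects = {"mislabeled": 12, "underweight": 71, "torn box": 7,
           "overweight": 24, "seal failure": 38, "other": 5}

total = sum(defects.values())
cumulative = 0
print(f"{'cause':<14}{'count':>7}{'cum %':>8}")
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:<14}{count:>7}{100 * cumulative / total:>7.1f}%")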
Source: http://q-environment.blogspot.com/