by Daniel Gonzalez | Jun 27, 2021 | Case Studies
PROJECT SUMMARY
The client is a multinational conglomerate that focuses on industrial engineering and steel production. PMC built a discrete-event simulation model to evaluate the design of a new production line and validate the throughput capacity envisioned by the client. Discrete-event simulation techniques allowed the client to test different layout and process configurations during the design phase of the project.
SYSTEM DESCRIPTION
The facility has four operations – two assembling operations, one press area and one assembling/testing process split into various work stations.
One operator replenishes raw components for the first two operations and there is one robotic arm in each of the operations to transfer parts between the work stations.
The press area contains one press that receives assembled parts from the previous operations and compresses them, so that one of the three operators assigned to the last two operations can pick up the compressed part and transfer it to the last operation.
The fourth operation has one welder, one conveyor, three work stations and three operators who will finish assembling the parts and perform various tests accordingly.
OPPORTUNITY
The line design was not finalized and had not been tested to verify that it could meet customer demands in terms of volume, cost, and quality. Hence, there was a need to simulate the different operations to identify design problems and to evaluate equipment utilization, headcount, and overall throughput capacity.
APPROACH
The data and layout provided by the client were imported into SIMUL8®. The four operations were included in the model. An Excel® interface was created to input data for the simulation model. This technique, developed by PMC, gave the client the flexibility to change most of the simulation inputs directly from the Excel® interface, reducing the modeling effort required for each scenario change.
SOLUTION
The discrete-event simulation model successfully and accurately determined the overall throughput capacity of the given production line design as well as the utilization of the different operators and equipment. Using the results from the baseline model, process improvements were made to the original production line. These improvements were then tested by running the simulation model for multiple scenarios. The results were used to find the best configuration that would maximize the overall throughput capacity and reduce the headcount.
BENEFIT
In addition to the simulation study, an Excel® interface was provided to the client for making changes to the operation times, which will allow them to run what-if scenarios in case the process specifications change. Additionally, by using the simulation model to test different layout and process configurations, the client reduced the headcount by one and the number of tools used on the last operation by two. Furthermore, the client also found the best way to use its resources and maximize the line production capacity. The ROI on this project was 10 times the amount invested on the simulation study.
by Daniel Gonzalez | Jun 16, 2020 | Case Studies
PROJECT SUMMARY
PMC’s simulation team created a model of a state-of-the-art steel mill for the purpose of capacity validation. The model was used to conduct several ‘what-if’ analyses of mill operations. PMC demonstrated that the mill was capable of maintaining desired production levels, ascertained the locations of tight constraints in the system, and identified potential bottlenecks in the production process.
CLIENT CHALLENGES
• Validating capacity sufficient to meet demand
• Uncertain machinery constraints
• Need to evaluate alternative shift patterns
• Sensitivity to changes in product mix and demand
SYSTEM DESCRIPTION
The steel mill’s target production level was 600,000 tons per year. Approximately two-thirds of available capacity was slated for tubes used in the oil industry, with the remainder allocated for line pipe and casings. The major elements of the production system studied were the hot mill, the heat treat and finishing lines, and the intermediate work-in-process (WIP) storage areas between them. Shift patterns and definitions, as well as the availability of the crane material handling resource, were also areas of concern.
OPPORTUNITY
The mill’s ability to reach its 600,000 tons-per-year target had not yet been validated. The client needed to verify that the planned configuration could sustain that level and to understand how machinery constraints, shift patterns, and changes to product mix and demand would affect throughput.
APPROACH
PMC created process flow charts for the relevant areas of the mill. After mapping the process from start to finish, the modeling portion of the project began; applicable input parameters were identified and the appropriate data was collected.
Simulation input parameters included:
• Product descriptions and attributes
• Processing rates for all equipment
• Cycle, set-up, and down times
• Storage and buffer area capacities
• Plant operation hours and shift patterns
• Crane cycle times and attributes
With the extensive discrete event simulation model created, PMC utilized sensitivity analysis coupled with Goldratt’s Theory of Constraints (TOC) methodology to explore multiple what-if scenarios.
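The shift-pattern what-if analyses can be sketched with a simple capacity function (a Python illustration with placeholder rates and availability figures, not the mill's actual data):

```python
def annual_throughput(tons_per_hour, hours_per_shift, shifts_per_day,
                      days_per_week, weeks_per_year=50, availability=0.85):
    """Rough annual capacity for one what-if scenario; `availability`
    folds in downtime and set-ups. All figures here are illustrative."""
    return (tons_per_hour * hours_per_shift * shifts_per_day
            * days_per_week * weeks_per_year * availability)

# Compare five-day and six-day operating patterns for the same line.
five_day = annual_throughput(60, 8, 3, days_per_week=5)
six_day = annual_throughput(60, 8, 3, days_per_week=6)
```

A full simulation replaces this back-of-the-envelope arithmetic with queueing, WIP, and breakdown effects, which is why the model, rather than a formula like this, was used to size the shift patterns.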
SOLUTION
By studying worst-case scenarios and varying simulated shift patterns and resource availability, PMC showed the mill was capable of meeting its production goal of 600,000 tons per year. Additionally, tight constraints and their potential impact on throughput were identified. Finally, an ideal shift pattern that maximized throughput was delivered. This shift schedule illustrated that an additional 100,000 tons of steel could be produced annually if the Hot Mill operated six days per week rather than five.
BENEFIT
PMC’s discrete event simulation model was used both to validate the production capacity of the steel mill and to identify constraining elements. Validation activities showed the mill could meet its goal of a 600K tons/year production level. Examination of potential bottlenecks allowed PMC to design a shift schedule capable of producing an additional 100K tons/year. Constraining resources were clearly identified so that any negative effect on production could be proactively addressed.
by Daniel Gonzalez | Sep 19, 2011 | White Papers
SIMULATION HELPS ASSESS AND INCREASE AIRPLANE MANUFACTURING CAPACITY
Marcelo Zottolo, Edward J. Williams, Onur M. Ülgen
PMC
15726 Michigan Avenue, Dearborn, Michigan 48126 USA
ABSTRACT
Simulation has long been used in the manufacturing industry to help determine, and suggest ways of increasing, production capacity under a variety of scenarios. Indeed, historically, this economic sector was the first to make extensive use of simulation. Over the last several decades, and continuing today, the most numerous applications of simulation to manufacturing operations involve mass production facilities such as those fabricating motor vehicles or home appliances. Less frequently, but very usefully, simulation has been applied to customized manufacturing or fabrication applications, such as the building of ships to individualized specifications. In the case study described in this paper, simulation was successfully applied, in synergy with other techniques of industrial engineering, to assess and increase the throughput capacity of a manufacturer of custom-built personal jet airplanes with a four-to-six passenger (plus moderate amounts of luggage) carrying capacity.
1. INTRODUCTION
Very likely, the most long-standing user of simulation, as distinguished by economic sector, is the manufacturing sector (Miller and Pegden 2000). Within this sector, simulation analysis helps production and industrial engineers (and their managers) assess and improve production capacity, identify and ameliorate bottlenecks, improve deployment of resources (whether labor, equipment, or both), and hence strengthen a company’s economic performance (Harrell and Tumay 1995). Frequently, these applications of simulation analyze a mass-production process, such as those producing automobiles or home appliances. Such processes are typically high-volume, have largely linear flow, and have a relatively low ratio of workers to machines. Somewhat less frequently, simulation analysis is applied to “job-shop” manufacturing, which typically involves a lower volume of production, with markedly higher cost and price per unit, directed toward often customized requests. Such manufacturing systems typically have more, and more highly skilled, workers relative to machines and equipment (El Wakil 2002). In view of the lower number of units produced and their higher prices and cost, each unit is “high stakes,” meriting careful attention to work flow, buffer capacities, and buffer placements to streamline workflow and minimize total process time (Heragu 2008). Various examples of “job shop” simulation appear in the literature. Implementation of an application to model the custom production of trains (general, fast, freight, etc.) is discussed in (Lian and Van Landeghem 2002); significantly, this analysis combines value stream mapping with simulation. The application of simulation to design-build construction projects is discussed in (Orsoni and Karadimas 2006). The expansion plan of a marine container terminal, incorporating production of custom equipment to be installed therein, is discussed in (Ambrosino and Tànfani 2009).
In the study described here, discrete-event process simulation was successfully applied to the paint-shop processes involved in the manufacture of custom-built jet airplanes for personal and corporate use. Such airplanes are a publicly inconspicuous but economically and logistically important part of the overall aviation infrastructure (McCartney 2011). The manufacturing company aspired, in view of trends indicating increasing order volume, to produce two or even three airplanes per day, yet initially was unable to produce 1½ airplanes on average per day. Since the painting operation was already known to be a painfully obvious bottleneck, simulation analysis was concentrated on it, and coupled synergistically with other industrial engineering techniques such as value stream mapping, layout analysis, and lean manufacturing.
2. OVERVIEW OF PAINTING PROCESSES
The airplane manufacturing facility comprises three large buildings, and the painting processes occupy all of the intermediate (in the process flow sense) building. This building, in turn, is divided into four major “positions.” Position 1 handles preparatory work: body work, washing, chemical coating, and thermal baking (hardening) of the chemical coating. Position 2 handles the vast majority of the actual painting: wrapping, spraying the primer coat, two consecutive sprayings of the top coat (to achieve durability and opacity), and thermal baking of these three coats. Position 3 handles the painting of custom-ordered markings, such as signature stripes and corporate logos, on the airplane. The work done in this position is labor-intensive due to the necessity of frequently applying and then removing masking tape. Each of these first three positions involves work done in either of two parallel floor spaces within this building. Position 4 handles final detailing, cleaning and varnishing, and painting the airplane door and its frame. This basic work flow is shown in Figure 1, Appendix.
3. OBJECTIVES DEFINITION AND MODEL DEVELOPMENT
3.1 Setting Objectives and Scope
The project charter specified that the consultants (1) examine the overall process flow to determine the maximum number of planes per day (two? three?) given the current painting facility “footprint” (overall square meters and building cross-section) as a binding constraint, and (2) use simulation and allied techniques to suggest revisions to the painting process to achieve that maximum. Value stream mapping and time studies, conducted before the simulation model-building effort began, soon convinced both the consultants and the client managers that “two planes per day” would plausibly be achievable but “three planes per day” would not be. Given this firm and well-defined foundation for the simulation portion of the study, the consultants undertook the design and construction of the base-case simulation model. Much of the input data needed for this model, such as cycle times, worker requirements, buffer capacities, and transfer times for airplanes between workstations, had just been collected during the value stream mapping and time studies. Indeed, the “double use” of these data is one of many strong justifications for using simulation synergistically with other industrial engineering analysis methods (Chung 2004). All additional data needed was collected during a two-month period whose final two weeks coincided with the base model development described in the next section. As the construction of the base case model began, client and consultant engineers brainstormed promising modifications of the current system.
3.2 Choice of Software
The clients and the consultants concurred on the use of the WITNESS® simulation software for model development. This software provides convenient high-quality animation, support for both “pull” and “push” operational logic, the ability to build reusable sub-models, and a powerful “labor” construct capable of modeling operationally complex rules for the deployment and transit of both laborers and portable pieces of equipment (Mehta and Rawles 1999). A small, vivid, and typical example of WITNESS® flexibility appears in the following “output rule” (a rule specifying whether, to where, and when a machine sends an entity [here, an airplane] which has just finished processing at that machine):
IF vPaint_02_Done = 0
  PUSH to PAINT_02_1
ELSE
  Wait
ENDIF
This output rule relies on the current value of the variable vPaint_02_Done to decide whether to send the airplane downstream (in this case, to machine PAINT_02_1) or to hold the airplane at its current location until the variable becomes zero.
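For readers unfamiliar with WITNESS® syntax, the same decision can be paraphrased in ordinary code (a Python sketch of the rule's logic, not WITNESS® itself):

```python
def output_rule(v_paint_02_done):
    """Paraphrase of the WITNESS output rule: release the finished
    airplane downstream only while the gating variable is zero;
    otherwise hold it at the current station."""
    if v_paint_02_done == 0:
        return "PUSH to PAINT_02_1"
    return "Wait"
```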
WITNESS® also provides automatic collection and graphical display of system metrics such as minimum, average, and maximum queue lengths, number of cycles undertaken by each machine, utilization of each labor resource, and total entities throughput.
The animation layout constructed within the WITNESS® simulation model is shown in Figure 2 in the Appendix.
3.3 Choice of Stochastic Distributions
Arrival of WITNESS® “parts” (planes) to the model was based on historical records of planes leaving the upstream operation. Historical time-to-fail (or number-of-cycles-to-fail) and time-to-repair data were entered into a distribution fitter, ExpertFit® (Law and McComas 2003), to determine suitable closed-form distributions (if indeed such existed) using techniques such as Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests for maximum-likelihood estimators (Leemis 2004). As examples of these data, the paint booths routinely require a filter change every thirty airplanes on average, with out-of-service time averaging eight hours. Similarly, the preparation booths routinely require a filter change every twenty airplanes on average, with out-of-service time averaging four hours. Routine preventative maintenance lasting four hours on average is done at the paint booths weekly. Major equipment breakdowns, lasting an average of three days, occur on average every three months at paint booths and once a year at preparation booths. With few exceptions, times-to-fail were modeled with exponential or Weibull distributions, and times-to-repair were modeled with gamma (of which the Erlang is a special case), Weibull, or log-normal distributions.
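The downtime sampling described above can be sketched as follows (a Python illustration using the distribution families named in the text; the means echo the reported paint-booth averages, while the gamma shape parameter is an illustrative assumption):

```python
import random

def sample_booth_downtime(n, mean_cycles_to_fail=30.0, mean_repair_h=8.0,
                          seed=42):
    """Draw n (cycles-to-fail, repair-hours) pairs for a paint booth:
    cycles-to-fail ~ exponential with mean 30 planes, repair time ~
    gamma with mean 8 hours (shape 2.0 is an illustrative choice)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        cycles = rng.expovariate(1.0 / mean_cycles_to_fail)
        repair = rng.gammavariate(2.0, mean_repair_h / 2.0)
        pairs.append((cycles, repair))
    return pairs
```

In practice the fitted parameters would come from ExpertFit®, not be assumed; the sketch only shows how the fitted forms feed random draws in the model.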
3.4 Model Development Timing
Setting of the objectives and construction of the base case (reflective of the current system) model required two calendar weeks and three person-weeks. During those two weeks one simulation analyst worked on the model full time and another contributed additional work on the model part time.
4. MODEL VERIFICATION AND VALIDATION
4.1 Documentation of Assumptions
As data collection efforts drew to a close, the clients and the analysts agreed upon and documented the following assumptions pertinent to building the model of the base case system:
- Planes are always available from upstream to be painted (consistent with the long-standing recognition that the paint shop was the bottleneck blocking upstream processes).
- No downstream blocking occurs relative to planes leaving the painting operations (consistent with the long-standing recognition that the paint shop was the bottleneck starving downstream processes).
- Labor resources are not the constraint (consistent with anecdotal evidence, and also with the observation that – contrary to many manufacturing contexts – in this context, capital equipment is more expensive and harder to obtain than the relatively unskilled labor needed for various operations [e.g., the application and removal of masking tape mentioned above]).
- Equipment preventive maintenance and unscheduled downtime data are still valid as provided from historical data.
4.2 Verification, Validation, and Credibility
Early in the project, even the most casual observations of the painting process convinced both clients and consultants that the system was conceptually steady-state (indeed, some queues were never observed empty). As initial settings for verification and validation of the base model, warm-up time was set to one month and run time (with gathering of performance statistics) to one year. Typical techniques were then used for model verification and validation. As a fundamental basis for initial high-level verification and validation, the “observed” versus “estimated” collective cycle times for each of the four positions (shown in Figure 1, Appendix) were examined for reasonably close agreement. These methods included running the model with all variability eliminated, for easy checking against spreadsheet computations, and running a single entity through the model. Structured walkthroughs held by the two modelers and their technical leader, careful examination of the animation, extreme condition tests, and discussion of plausibility of preliminary results with the client’s process engineers (including Turing tests) all proved useful to the tasks of verification and validation (Sargent 2004). After routine errors (e.g., mismatched variable names) were found and corrected, the analysts graphed performance metrics of the base model against simulated time. These graphs demonstrated that accurate determination of performance metrics, with sufficiently narrow 95% confidence intervals, required increasing the warm-up length to two months and the run length to two years of simulated time, with 10 replications for each situation to be examined. Note that even with a two-year statistics-gathering run length, on average only two major equipment breakdowns will occur at preparation booths. The usual analytical recommendation is that the most unusual event in a system be expected to occur five or six times during each replication (Law 2004).
As a countermeasure, with replication length already two years, the analysts checked that several different but representative numbers of these breakdowns occurred among the replications – an approach conceptually akin to stratified sampling. Next, the model achieved credibility with the client engineers and managers by predicting currently observed performance metrics within 4%.
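The replication analysis can be sketched as a confidence-interval computation (a Python illustration; the critical value 2.262 is the Student-t value for the study's 10 replications, i.e., 9 degrees of freedom):

```python
import math
import statistics

def ci95_ten_reps(results):
    """95% confidence interval for a performance metric (e.g., planes
    per day) across 10 independent replications, using the Student-t
    critical value 2.262 for 9 degrees of freedom."""
    assert len(results) == 10
    mean = statistics.mean(results)
    half_width = 2.262 * statistics.stdev(results) / math.sqrt(len(results))
    return mean - half_width, mean + half_width
```

If the resulting interval is too wide to distinguish scenarios, the remedy is longer runs or more replications, exactly the adjustment the analysts made when lengthening the run to two simulated years.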
5. RESULTS AND IMPROVEMENTS
In agreement with current observation, the base case model indicated average production of 1.45 planes per day – and also correctly indicated severe blocking (19% of time each paint booth blocked) just downstream from both paint booths (the key operations in Position 2) and hence just upstream from the detailing operation of Position 3. Meanwhile, results of layout analysis had suggested workflow enhancements, not involving capital expenditure, having the potential to create buffer space for at least one plane, maybe two, between the pair of paint booths and the detailing operation. Accordingly, the first two alternative scenarios modeled introduced an as yet hypothetical buffer at this point. Introducing this buffer into the model required less than ½ day of modeling time. Setting the buffer capacity to 1 yielded average production of 1.64 planes per day, with paint booth blocked time reduced from 19% to 9%. Increasing the buffer capacity to 2 yielded average production of 1.75 planes per day, with paint booth blocked time further reduced to 4%.
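The buffer effect can be reproduced qualitatively with a toy model (a Python sketch of a two-stage line with blocking-after-service; the cycle-time distributions are illustrative stand-ins, not the client's data):

```python
import random

def planes_per_day(buffer_capacity, n_planes=2000, seed=7):
    """Toy tandem line: paint booth -> finite buffer -> detailing.
    A plane finished at the paint booth is blocked there until a
    buffer slot (or the detailing station) frees up."""
    rng = random.Random(seed)
    d1 = [0.0] * (n_planes + 1)  # departure times from the paint booth
    d2 = [0.0] * (n_planes + 1)  # departure times from detailing
    for i in range(1, n_planes + 1):
        s1 = rng.uniform(8.0, 12.0)   # paint cycle, hours (illustrative)
        s2 = rng.uniform(9.0, 13.0)   # detailing cycle, hours (illustrative)
        finished_painting = d1[i - 1] + s1
        # Plane i may leave the booth only once plane i-buffer-1 has
        # left detailing, i.e., once a downstream slot is free.
        j = i - buffer_capacity - 1
        d1[i] = max(finished_painting, d2[j] if j >= 1 else 0.0)
        d2[i] = max(d1[i], d2[i - 1]) + s2
    return n_planes / (d2[n_planes] / 24.0)
```

Running this with buffer capacities 0, 1, and 2 shows the same qualitative pattern as the study: each added slot recovers throughput lost to blocking, with diminishing returns as capacity grows.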
Next, the collaborating engineers (consultants and clients) turned their attention to the possibility of adding a second paint-detailing station within Position 3. This modification to the model was similarly added, verified, and validated for reasonableness of results within one day. However, its results proved disappointing, especially considering that a second detailing station represented significant capital and operating expense. Indeed, this addition did reduce blocked time at both upstream booths to less than 2%. However, the key performance metric “average planes per day” increased to only 1.81 from 1.75.
6. CONCLUSIONS
The client’s engineers promptly implemented the workflow enhancements suggested by the layout analysis, and concurrently created and used a buffer of capacity 2 between the paint booths and detailing operations. The key performance metric “average planes per day” promptly increased from 1.45 to 1.74, an increase of 20% with no capital investment required. Blocked time at the paint booths also decreased as predicted by the simulation study. Although most welcome, this throughput increase fell short of the “two planes per day” aspirations. Therefore, client and consultant engineers agreed upon follow-up studies, now in progress. These studies are investigating these throughput improvement opportunities:
Standardization of various operations to minimize variability of time required.
Workplace organization and visual controls, partly to manage inventories of paint and partly to minimize wasted time (“muda”) searching for tools.
Development of templates for setup of striping operations (part of detailing) to minimize detailing time; this suggestion came from a client engineer familiar with the practice of “SMED” [Single Minute Exchange of Die] as practiced in many manufacturing industries and pioneered by the Japanese engineer Shigeo Shingo (Collier and Evans 2007).
ACKNOWLEDGMENT
The authors gratefully acknowledge the leadership and help provided by colleague Ravi Lote on this industrial engineering project.
Additionally, constructive criticisms from two anonymous referees have provided the authors significant help in improving the paper.
REFERENCES
Ambrosino, Daniela, and Elena Tànfani. 2009. A Discrete Event Simulation Model for the Analysis of Critical Factors in the Expansion Plan of a Marine Container Terminal. In Proceedings of the 23rd European Conference on Modelling and Simulation, eds. Javier Otamendi, Andrzej Bargiela, José Luis Montes, and Luis Miguel Doncel Pedrera, 288-294.
Chung, Christopher A. 2004. Simulation Modeling Handbook: A Practical Approach. Boca Raton, Florida: CRC Press.
Collier, David A. and James R. Evans. 2007. Operations Management: Goods, Services and Value Chains. Mason, Ohio: Thomson South-Western.
El Wakil, Sherif D. 2002. Processes and Design for Manufacturing, 2nd edition. Long Grove, Illinois: Waveland Press, Incorporated.
Harrell, Charles, and Kerim Tumay. 1995. Simulation Made Easy: A Manager’s Guide. Norcross, Georgia: Engineering & Management Press.
Heragu, Sunderesh S. 2008. Facilities Design, 3rd edition. Boca Raton, Florida: CRC Press.
Law, Averill, and Michael G. McComas. 2003. How the ExpertFit Distribution-Fitting Software Can Make Your Simulation Models More Valid. In Proceedings of the 2003 Winter Simulation Conference, Volume 1, eds. Stephen E. Chick, Paul J. Sánchez, David Ferrin, and Douglas J. Morrice, 169-174.
Law, Averill M. 2004. Statistical Analysis of Simulation Output Data: The Practical State of the Art. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 67-72.
Leemis, Lawrence M. 2004. Building Credible Input Models. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 29-40.
Lian, Yang-Hua, and Hendrik Van Landeghem. 2002. An Application of Simulation and Value Stream Mapping in Lean Manufacturing. In Proceedings of the 14th European Simulation Symposium, eds. Alexander Verbraeck and Wilfried Krug, 300-307.
McCartney, Scott. 2011. Lagging Private-Jet Industry Resumes Takeoff. Wall Street Journal CCLVII:33(D5).
Mehta, Arvind, and Ian Rawles. 1999. Business Solutions Using WITNESS. In Proceedings of the 1999 Winter Simulation Conference, Volume 1, eds. Phillip A. Farrington, Harriet Black Nembhard, David T. Sturrock, and Gerald W. Evans, 230-233.
Miller, Scott, and Dennis Pegden. 2000. Introduction to Manufacturing Simulation. In Proceedings of the 2000 Winter Simulation Conference, Volume 1, eds. Jeffrey A. Joines, Russell R. Barton, Keebom Kang, and Paul A. Fishwick, 63-66.
Orsoni, Alessandra, and Nikolaos V. Karadimas. 2006. The Role of Modelling and Simulation in Design-Build Projects. In Proceedings of the 20th European Conference on Modelling and Simulation, eds. Wolfgang Borutzky, Alessandra Orsoni, and Richard Zobel, 315-320.
Sargent, Robert G. 2004. Validation and Verification of Simulation Models. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 17-28.
AUTHOR BIOGRAPHIES
MARCELO ZOTTOLO, born in Buenos Aires, Argentina, came to the United States to finish his college studies. He was graduated from the University of Michigan – Dearborn as an Industrial and Systems Engineer in December 2000, and subsequently earned his master’s degree in the same field in June 2004. He was awarded the Class Honors distinction and his Senior Design Project was nominated for the Senior Design Competition 2001. This project studied the improvement of manufacturing processes for the fabrication of automotive wire harnesses, ultimately proposing an automation tool leading to improvements in future designs. Additionally, he was co-author of a paper on simulation in a distribution system which earned a “best paper” award at the Harbour, Maritime, and Simulation Logistics conference held in Marseille, France, in 2001. He is now a Consulting Project Manager at PMC with a solid background in data-driven process improvement methodologies to optimize the performance of different systems. He is experienced in the concurrent application of lean thinking, theory of constraints, workflow measurement, and simulation modeling across multiple economic sectors including retail, service, healthcare, insurance, and manufacturing. His responsibilities include leading teams in building, verifying, validating, and analyzing simulation models in Enterprise Dynamics®.
EDWARD J. WILLIAMS holds bachelor’s and master’s degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford Motor Company in 1972, where he worked until retirement in December 2001 as a computer software analyst supporting statistical and simulation software. After retirement from Ford, he joined PMC, Dearborn, Michigan, as a senior simulation analyst. Also, since 1980, he has taught classes at the University of Michigan, including both undergraduate and graduate simulation classes using GPSS/H™, SLAM II™, SIMAN™, ProModel®, SIMUL8®, or Arena®. He is a member of the Institute of Industrial Engineers [IIE], the Society for Computer Simulation International [SCS], and the Michigan Simulation Users Group [MSUG]. He serves on the editorial board of the International Journal of Industrial Engineering – Applications and Practice. During the last several years, he has given invited plenary addresses on simulation and statistics at conferences in Monterrey, México; İstanbul, Turkey; Genova, Italy; Rīga, Latvia; and Jyväskylä, Finland. He served as a co-editor of Proceedings of the International Workshop on Harbour, Maritime and Multimodal Logistics Modelling & Simulation 2003, a conference held in Rīga, Latvia. Likewise, he served the Summer Computer Simulation Conferences of 2004, 2005, and 2006 as Proceedings co-editor. He is the Simulation Applications track coordinator for the 2011 Winter Simulation Conference.
ONUR M. ÜLGEN is the president and founder of Production Modeling Corporation (PMC), a Dearborn, Michigan, based industrial engineering and software services company, as well as a Professor of Industrial and Manufacturing Systems Engineering at the University of Michigan-Dearborn. He received his Ph.D. degree in Industrial Engineering from Texas Tech University in 1979. His present consulting and research interests include simulation and scheduling applications, applications of lean techniques in manufacturing and service industries, supply chain optimization, and product portfolio management. He has published or presented more than 100 papers in his consulting and research areas.
Under his leadership PMC has grown to be the largest independent productivity services company in North America in the use of industrial and operations engineering tools in an integrated fashion. PMC has successfully completed more than 3000 productivity improvement projects for different size companies including General Motors, Ford, DaimlerChrysler, Sara Lee, Johnson Controls, and Whirlpool. The scientific and professional societies of which he is a member include American Production and Inventory Control Society (APICS) and Institute of Industrial Engineers (IIE). He is also a founding member of the MSUG (Michigan Simulation User Group).
by Daniel Gonzalez | Jul 20, 2011 | White Papers
SIMULATION INCREASES EFFICIENCY OF ENGINE AIR-LEAK TESTING
Ravi Lote | Edward J. Williams | Onur M. Ülgen
PMC
15726 Michigan Avenue, Dearborn, Michigan 48126 USA
ABSTRACT
Discrete-event process simulation has long since earned its place as one of the most powerful and frequently applicable analytical tools available for production process analysis and improvement. Also, the manufacturing economic sector has rapidly become more competitive, requiring both greater economy and greater efficiency of operations. In the present application of simulation, a Tier I automotive industry supplier, beset with these economic pressures, sought to improve the productivity and efficiency of a test-and-inspection line being designed for an automotive engine plant. Simulation was successfully used to design this line to meet stringent requirements of productivity, low original equipment cost, and low operating cost. Via comparison of multiple alternatives, this simulation study identified potential bottlenecks and produced recommendations for their improvement or removal.
INTRODUCTION
Simulation has long been a mainstay of manufacturing improvement – historically its first major economic sector of application (Miller and Pegden 2000). In the large and complex automotive industry, it is a commonplace that the major automotive manufacturers enlist engineering firms with specialized expertise to design production lines for installation and subsequent operation. These engineering firms, like production companies which supply vehicle parts directly to the automobile manufacturers, are called “Tier I” suppliers. The engineering firms likewise call upon the experience and expertise of industrial engineering firms to obtain the benefit of consultation on lean manufacturing, process simulation, value-stream mapping, and ergonomics. This context is one of many in which simulation has become a key analytical tool in the automotive industry (Ülgen and Gunal 1998).
In the simulation study described here, the client company, an engineering company, sought the help of simulation to design an engine test line for an engine plant of an automaker. This test line is responsible for performing air tests on engines to check for leaks, making needed repairs on engines which fail this test, re-testing the engines after repair, and rejecting the very few engines that still fail after repeated repair efforts. In a broader context, engine plants, which send their output to vehicle final-assembly plants, are a vital link in the intra-company supply chain of any vehicle manufacturer.
Simulation is a powerful tool for analysis of a manufacturing system such as this one, which involves complex routing logic, installation of expensive equipment, incorporation of material-handling equipment (pallets, conveyors), and a strict requirement not to become the bottleneck in a larger process – here, the overall engine assembly process, which in turn supplies the vehicle assembly process in other automotive plants. Accordingly, numerous examples of such simulation studies appear in the literature. For example, Chramcov and Daníček improved the manufacture of short gun barrels using simulation (Chramcov and Daníček 2009). Improvement of both the manufacture and the material handling of forged metal components was documented in (Lang, Williams, and Ülgen 2008). Illustrating the versatility of simulation when applied to manufacturing, Walker, Mebrahtu, and Strange used it to achieve high efficiency in electronics manufacturing over a period of years during which production was mandated to gradually decrease (Walker, Mebrahtu, and Strange 2005).
OVERVIEW OF TESTING OPERATION
The system studied was the air-leak test area, to which the main engine-manufacturing lines send their completed engines. As shown in Figure 1 below, engines arriving on pallets from the main line enter the test area via the first of three Lift & Rotate stations, “Lift & Rotate 1.” An arriving engine is then sent preferentially to Air Test Station East (“T1”) if there is sufficient vacant capacity on the conveyor preceding it. Failing that, the engine travels to Air Test Station Center (“T2”) if there is sufficient vacant capacity on the conveyor preceding it. Otherwise, the engine proceeds to Air Test Station West (“T3”). Note that “T1” is actually on the “main line” leading from the engine assembly area to the cold-test area; “T2” and “T3” are on spur lines. Engines tested and passed at “T1” and “T3” exit the system (i.e., proceed to the downstream Cold Test area) via Conveyor 1 or Conveyor 3 respectively. Engines tested and passed at “T2” are routed alternately to Conveyor 1 or Conveyor 3. If either conveyor is full and the other is not, the engine will be routed to the available conveyor notwithstanding the “alternating” policy. Engines tested and rejected at “T1” and “T3” recirculate on Spur 1 or Spur 3 respectively. An engine which fails at “T2” is routed to one of these spurs using logic exactly analogous to the logic for choice of Conveyor 1 or Conveyor 3 for engines which pass at “T2.”
An engine which fails the test once recirculates without repair work being done (even though it passes through a repair station, either Repair 1 or Repair 2) and is retested. This long-standing policy is a tacit acknowledgment that a test failure – although certainly requiring further checking and perhaps engine repair – arises from a fault in the test stand more often than it exposes an actual leak in the engine being tested. An engine which fails twice receives repair service at the repair station through which it passes. An engine which fails a third time is manually removed from its pallet and exits the system; the empty pallet continues circulating in the system. In the first phase of the study, the gradual (gradual because it is rare for any one engine to fail three times) accumulation of empty pallets circulating in the system was ignored. Such “benign neglect” of empty pallets constituted a conservative assumption, helping to establish a “worst case” lower bound scenario for system control logic performance. As will be detailed in the Model Results section (below), the primary focus of this study was the investigation of four control-logic proposals and their ability to meet system throughput targets. Later, the actual operational practice of manually reloading an empty pallet with a newly arriving engine (involving brief use of labor) was added to the model. Engines emerging from Repair 1 first try to enter the conveyor preceding “T1;” failing that, they travel to the conveyor preceding “T2.” If that conveyor is also full, the engine travels to the “T3” conveyor section. All engines emerging from Repair 2 go to the “T3” conveyor section. Engines are pushed into and pulled from the repair stations by a dedicated operator. Engines entering a test station for the first time and engines entering after repair have equal priority, and hence these queues are first-come-first-served throughout.
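The routing precedence described above (T1 preferred, then T2, then T3, with T2's exits alternating east and west unless one side is full) can be sketched in plain Python. The conveyor capacities below are hypothetical placeholders, not values from the study:

```python
# Sketch of the air-leak test routing logic described above.
# CAPACITY values are hypothetical placeholders, not study data.
CAPACITY = {"T1": 5, "T2": 5, "T3": 5}   # assumed buffer sizes per station
queue_len = {"T1": 0, "T2": 0, "T3": 0}  # engines currently waiting per station
t2_exit_toggle = ["east"]                # alternation state for T2 exits

def route_arriving_engine():
    """Send an engine to T1 if its conveyor has room, else T2, else T3."""
    for station in ("T1", "T2", "T3"):
        if queue_len[station] < CAPACITY[station] or station == "T3":
            queue_len[station] += 1
            return station

def route_t2_exit(east_full, west_full):
    """Engines passing at T2 alternate east/west unless one side is full."""
    if east_full and not west_full:
        return "west"
    if west_full and not east_full:
        return "east"
    side = t2_exit_toggle[0]
    t2_exit_toggle[0] = "west" if side == "east" else "east"
    return side

print(route_arriving_engine())  # first engine goes to "T1"
```

The same toggle-unless-full logic would apply to the choice of Spur 1 or Spur 3 for engines rejected at "T2".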
MODEL CONSTRUCTION, VERIFICATION, AND VALIDATION
In keeping with good simulation analysis practice (Chung 2004), the client engineers and the simulation analysts documented agreed-upon assumptions, the most significant of which were:
- Target system throughput 171.4 jobs per hour (JPH), corresponding to a target system cycle time of 21 seconds
- No model mix included in study
- Labor and needed subassemblies always available
- Engines leaving the system encounter no blockage
- No differences among shifts in processing
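The two figures in the first assumption are mutually consistent: a 21-second system cycle time corresponds to 3600 / 21 ≈ 171.4 jobs per hour.

```python
# A 21-second target cycle time implies the 171.4 JPH throughput target.
TARGET_CYCLE_TIME_S = 21
jobs_per_hour = 3600 / TARGET_CYCLE_TIME_S
print(round(jobs_per_hour, 1))  # 171.4
```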
Process data, including machine cycle times, conveyor capacities and speeds, time between failures, and time to repair were available from client records. The machine cycle times and the conveyor speeds were modeled as constants. Distribution-fitting techniques (Seila, Ceric, and Tadikamalla 2003) suggested modeling times between failures as exponential and times to repair as symmetric triangular (minimum = mean*0.9, mode = mean, maximum = mean*1.1). The process data also included observation and confirmation of local “right-of-way” policies at the conveyor intersections. For example, if an engine moving “north” in Figure 1 from Lift & Rotate 1 and an engine moving “south” from Repair 2 competed for routing “westward” to “T3,” the engine proceeding from Lift & Rotate 1 had priority. Client and consultant engineers jointly chose the WITNESS® simulation software for model development. This software provides constructs for machines and conveyors, accommodates complex queuing logic, has convenient arrangements for modeling time-to-fail and time-to-repair, and supports concurrent development of a simulation model and its animation (Mehta and Rawles 1999).
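The fitted input distributions (exponential times between failures; symmetric triangular times to repair with minimum = 0.9 × mean, mode = mean, maximum = 1.1 × mean) can be sampled with Python's standard library. The mean values used below are illustrative, not figures from the study:

```python
import random

def time_between_failures(mean_tbf):
    """Exponential time between failures, as fitted from client records."""
    return random.expovariate(1.0 / mean_tbf)

def time_to_repair(mean_ttr):
    """Symmetric triangular: min = 0.9*mean, mode = mean, max = 1.1*mean."""
    return random.triangular(0.9 * mean_ttr, 1.1 * mean_ttr, mean_ttr)

# Illustrative means (hypothetical, not values from the study):
random.seed(42)
print(round(time_between_failures(3600.0), 1))  # seconds until next failure
print(round(time_to_repair(300.0), 1))          # repair duration, in 270..330 s
```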
More subtly, high importance was attached to an investigation of whether engine failure in the air-leak test occurred randomly. If engine failures tended to cluster in time, the air-leak test area would surely be overloaded during those time intervals, and correspondingly underutilized at other times. Accordingly, the time sequence of engine test results (conceptually “P” for “pass” and “F” for “fail”) was subjected to a statistical runs test for independence (Sprent and Smeeton 2007). The result of this test indicated randomness of engine leak-test failure (as opposed to the suspected positive correlation [clustering]) at the 5% significance level. Therefore, the failure probability desired for each experimental scenario was treated as the parameter of a binomial distribution, in accordance with the binomial presumption of independent trials.
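The runs test referenced above can be implemented directly; a common form is the Wald-Wolfowitz statistic, shown here as a sketch (the source does not state which variant was used). Clustered failures produce too few runs and a large negative z; |z| < 1.96 means randomness is not rejected at the 5% level.

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test z-statistic for a 'P'/'F' sequence.
    |z| < 1.96 -> randomness not rejected at the 5% significance level."""
    n1 = seq.count("P")
    n2 = seq.count("F")
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mu = 2.0 * n1 * n2 / n + 1.0                                  # expected runs
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1))  # variance
    return (runs - mu) / math.sqrt(var)

# A strongly clustered sequence has far too few runs to be random:
print(abs(runs_test_z("PPPPPPPPPPFFFFFFFFFF")) > 1.96)  # True
```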
Various verification and validation techniques were used during this project (Sargent 2004). They included sending one engine through the model, temporarily eliminating all randomness from the model (and then checking output against Microsoft Excel® computations), viewing the animation with the client engineers, and performing directional tests (e.g., increasing arrival rate and/or cycle times, and checking that queue lengths and wait times likewise increase). After routine error identification and correction, the model achieved credibility with the client engineers and their management.
MODEL RESULTS
All model runs were made on a steady-state basis, since the actual engine air-leak test line (like the upstream engine assembly line and the downstream engine cold-test line) typically runs multiple shifts and does not “empty and clear” itself between shifts (work remains in situ between shifts). Four experimental scenarios, each using a different control-logic algorithm, were considered, as described next; within each, three different and plausible reject rates were compared. These subdivisions of the four experiments reflect the uncertainty of the eventual reject rates, and therefore constitute a sensitivity analysis (Zeigler, Praehofer, and Kim 2000). Each of the twelve distinct experimental situations was run for ten replications, and each replication used a warm-up time of 100 hours and a data-collection run length of 1000 hours.
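The replication scheme (ten replications per scenario, 100 hours of warm-up, 1000 hours of data collection) can be expressed as a simple driver loop. Here `run_scenario` is a hypothetical stand-in for one WITNESS® replication; its formula is a placeholder invented for illustration, not the study's model, and the warm-up and run-length constants would parameterize the real model run.

```python
import random

WARMUP_HOURS = 100    # discarded before data collection in the real model
RUN_HOURS = 1000      # data-collection length per replication
REPLICATIONS = 10

def run_scenario(reject_rate, seed):
    """Hypothetical stand-in for one simulation replication; returns the
    jobs-per-hour observed after warm-up. Placeholder formula only."""
    rng = random.Random(seed)
    return 210.0 * (1.0 - 1.5 * reject_rate) + rng.gauss(0.0, 1.0)

def replicate(reject_rate):
    """Average steady-state throughput over all replications of one scenario."""
    results = [run_scenario(reject_rate, seed) for seed in range(REPLICATIONS)]
    return sum(results) / len(results)

print(round(replicate(0.18), 1))  # mean JPH across 10 replications
```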
Experiment 1 represented the proposed Air-Leak Test line in isolation, with reject rates of 23%, 15%, or 10% (these reject rates were taken from a previous successful simulation study of an analogous engine test area performed for the same client). In this experiment, all engines (whether good or rejected) leaving the test machines were routed alternately to the east and west conveyors. Hence this experiment was fundamentally “impractical,” and was considered a base-case benchmark, a conceptually convenient “zero point” for consideration of alternatives. In this experiment, engines entered the model only from the upstream assembly line.
Experiment 2 represented the proposed air-leak test line in isolation, with reject rates of 22%, 20%, and 18%. In this experiment, 75% of rejects leaving the air test center were routed to Repair 1, with the remaining 25% routed to Repair 2. Satisfactory engines leaving the air test center were alternately routed to the east and west exit conveyors. As was the case for experiment 1, engines entered this model only from the upstream assembly line.
Unlike experiments 1 and 2, experiment 3 represented a new system comprising two sub-systems: the proposed air-leak test line, and a black-box representation of all upstream engine-assembly processes. An engine interarrival time of 21 seconds was added to represent this black box, with removal of the “engines always available” assumption used in the first two experiments. Reject rates were set at 20%, 18%, and 16%. Half of the rejects leaving the testing machines were routed to Repair 1, and half to Repair 2. Satisfactory engines were routed alternately to the east and west exit conveyors, as in experiment 2. Further, in this experiment, logic was added to the model to re-introduce empty pallets and engines which failed cold test into the air-leak test area.
Experiment 4, like experiments 1 and 2, represented the air-leak test line in isolation, with reject rates at 20%, 18%, and 16%. Half of the rejects leaving the testing machines were routed to Repair 1, and half to Repair 2, as in experiment 3. Satisfactory engines leaving the air test center were alternately routed to the east and west exit conveyors, as in experiment 2. Also, in this experiment, the logic added to the model in experiment 3 (re-introducing empty pallets and engines which failed cold test into the air-leak test area) was retained.
Overall results from these experiments are shown in Table 1, below:
Experiment | Reject Rate (%) | Throughput (JPH)
1 | 23 | 142.4
1 | 15 | 190.4
1 | 10 | 207.1
2 | 22 | 153.5
2 | 20 | 167.8
2 | 18 | 179.7
3 | 20 | 156.0
3 | 18 | 161.0
3 | 16 | 163.5
4 | 20 | 163.2
4 | 18 | 174.4
4 | 16 | 184.6
All comparisons between a pair of scenarios mentioned next were performed with a Student-t test (nine degrees of freedom, in view of ten replications per scenario) at the 5% significance level.
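Nine degrees of freedom with ten replications per scenario implies a paired t-test (presumably the scenarios shared common random numbers, though the source does not say so). A stdlib sketch, with illustrative replication outputs that are not study data:

```python
import math

def paired_t_statistic(xs, ys):
    """Paired Student-t statistic for two scenarios compared replication by
    replication; with 10 replications there are 9 degrees of freedom."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Illustrative replication outputs (hypothetical, not study data):
scenario_a = [167.2, 168.1, 167.5, 168.4, 167.9, 167.0, 168.2, 167.7, 167.4, 168.0]
scenario_b = [163.0, 163.8, 163.3, 164.1, 163.6, 162.9, 163.9, 163.5, 163.1, 163.7]
t = paired_t_statistic(scenario_a, scenario_b)
print(abs(t) > 2.262)  # True: exceeds the 5% two-sided critical value, 9 df
```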
In conclusion, client engineers and management valued the following insights most highly:
- Comparing experiments 2 and 4 (at reject rates of 20% and 18%), re-introducing engines and pallets from the cold test area decreased system throughput slightly (by about 3%),
- In all experiments, at all rejection rates, the repair stations were the bottleneck,
- At air-leak test rejection rates of 18% or lower, this test line could reliably meet its target of 171.4 jobs per hour.
CONCLUSIONS
After combining the insights gained from this simulation study with other considerations involving the plant design as a whole, and constraints imposed by the upstream assembly line and the downstream cold-test line, client management chose an implementation design corresponding to experiment 4. Furthermore, the client invested in a preventive maintenance program at the test stands which succeeded in maintaining the reject rate at 17%. In actual production, this line then achieved a throughput rate of 179.2, slightly exceeding requirements and in good agreement with interpolation between 174.4 (model prediction of throughput at 18% reject rate) and 184.6 (model prediction of throughput at 16% reject rate).
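The closing comparison is a simple linear interpolation between the model's 18% and 16% predictions, evaluated at the achieved 17% reject rate:

```python
# Linear interpolation of predicted throughput at a 17% reject rate,
# between the model's 18% (174.4 JPH) and 16% (184.6 JPH) predictions.
def interpolate(x, x0, y0, x1, y1):
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

predicted = interpolate(17, 18, 174.4, 16, 184.6)
print(round(predicted, 1))  # 179.5, close to the observed 179.2 JPH
```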
ACKNOWLEDGMENTS
The authors gladly acknowledge the support and help of Chris DeWitt, project leader, and of Marcelo Zottolo, a colleague who provided guidance in the preparation and presentation of these results. Additionally, constructive criticism from four anonymous reviewers spurred useful improvements to this paper.
REFERENCES
Chramcov, Bronislav, and Ladislav Daníček. 2009. Simulation Study of the Short Barrel of the Gun Manufacture. In Proceedings of the 23rd European Conference on Modelling and Simulation, eds. Javier Otamendi, Andrzej Bargiela, José Luis Montes, and Luis Miguel Doncel Pedrera, 275-280.
Chung, Christopher A. 2004. Simulation Modeling Handbook: A Practical Approach. Boca Raton, Florida: CRC Press.
Lang, Teresa, Edward J. Williams, and Onur M. Ülgen. 2008. Simulation Improves Manufacture and Material Handling of Forged Metal Components. In Proceedings of the 22nd European Conference on Modelling and Simulation, eds. Loucas S. Louca, Yiorgos Chrysanthou, Zuzana Oplatková, and Khalid Al-Begain, 247-253.
Mehta, Arvind, and Ian Rawles. 1999. Business Solutions Using WITNESS. In Proceedings of the 1999 Winter Simulation Conference, eds. Phillip A. Farrington, Harriet Black Nembhard, David T. Sturrock, and Gerald W. Evans, 230-233.
Miller, Scott, and Dennis Pegden. 2000. Introduction to Manufacturing Simulation. In Proceedings of the 2000 Winter Simulation Conference, Volume 1, eds. Jeffrey A. Joines, Russell R. Barton, Keebom Kang, and Paul A. Fishwick, 63-66.
Seila, Andrew F., Vlatko Ceric, and Pandu Tadikamalla. 2003. Applied Simulation Modeling. Belmont, California: Thomson Learning, Incorporated.
Sargent, Robert G. 2004. Validation and Verification of Simulation Models. In Proceedings of the 2004 Winter Simulation Conference, Volume 1, eds. Ricki G. Ingalls, Manuel D. Rossetti, Jeffrey S. Smith, and Brett A. Peters, 17-28.
Sprent, Peter, and Nigel C. Smeeton. 2007. Applied Nonparametric Statistical Methods, 4th edition. Boca Raton, Florida: Taylor & Francis Group, LLC.
Ülgen, Onur, and Ali Gunal. 1998. Simulation in the Automotive Industry. In Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. Jerry Banks, 547-570.
Walker, Rob, Habrom Mebrahtu, and Caroline Strange. 2005. Achieving Efficiency Through Simulation in a Declining Manufacturing Market. In Proceedings of the 2005 Industrial Simulation Conference, eds. Jörg Krüger, Alexei Lisounkin, and Gerhard Schreck, 162-167.
Zeigler, Bernard P., Herbert Praehofer, and Tag Gon Kim. 2000. Theory of Modeling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems, 2nd edition. San Diego, California: Academic Press.
AUTHOR BIOGRAPHIES
RAVI LOTE is a Consulting Project Manager at PMC. Over the last ten years, Ravi has built simulation models for dozens of customers in the U.S. and overseas. His functional areas of expertise include simulation modeling, process improvement, and supply chain optimization. Ravi has a bachelor’s degree in Mechanical Engineering from Shivaji University, India, and a master’s degree in Industrial Engineering from the University of Massachusetts, Amherst. He is currently pursuing an M.B.A. at the University of Michigan, Ann Arbor. Ravi is a certified Six Sigma Black Belt and a certified MODAPTS® professional for conducting Industrial Engineering time studies.
EDWARD J. WILLIAMS holds bachelor’s and master’s degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford Motor Company in 1972, where he worked until retirement in December 2001 as a computer software analyst supporting statistical and simulation software. After retirement from Ford, he joined PMC, Dearborn, Michigan, as a senior simulation analyst. Also, since 1980, he has taught classes at the University of Michigan, including both undergraduate and graduate simulation classes using GPSS/H, SLAM II, SIMAN, ProModel, SIMUL8®, or Arena®. He is a member of the Institute of Industrial Engineers [IIE], the Society for Computer Simulation International [SCS], and the Michigan Simulation Users Group [MSUG]. He serves on the editorial board of the International Journal of Industrial Engineering – Applications and Practice. During the last several years, he has given invited plenary addresses on simulation and statistics at conferences in Monterrey, México; İstanbul, Turkey; Genova, Italy; Rīga, Latvia; and Jyväskylä, Finland. He served as a co-editor of Proceedings of the International Workshop on Harbour, Maritime and Multimodal Logistics Modelling & Simulation 2003, a conference held in Rīga, Latvia. Likewise, he served the Summer Computer Simulation Conferences of 2004, 2005, and 2006 as Proceedings co-editor. His email address is [email protected]
ONUR M. ÜLGEN is the president and founder of (formerly) Production Modeling Corporation (now PMC), a Dearborn, Michigan, based industrial engineering and software services company, as well as a Professor of Industrial and Manufacturing Systems Engineering at the University of Michigan-Dearborn. He received his Ph.D. degree in Industrial Engineering from Texas Tech University in 1979. His present consulting and research interests include simulation and scheduling applications, applications of lean techniques in manufacturing and service industries, supply chain optimization, and product portfolio management. He has published or presented more than 100 papers in his consulting and research areas.
Under his leadership PMC has grown to be the largest independent productivity services company in North America in the use of industrial and operations engineering tools in an integrated fashion. PMC has successfully completed more than 3000 productivity improvement projects for different size companies including General Motors, Ford, DaimlerChrysler, Sara Lee, Johnson Controls, and Whirlpool. The scientific and professional societies of which he is a member include American Production and Inventory Control Society (APICS) and Institute of Industrial Engineers (IIE). He is also a founding member of the MSUG (Michigan Simulation User Group).