
# Production Line Models: Introduction

NOTE: Exercises below may link to supporting files in a GitHub repository. If you follow such a link and, at the GitHub website, right-click a file and choose “Save link as…”, the download may appear to succeed but in fact fail. The failure typically surfaces when you open the downloaded file, usually in MATLAB, and discover that it is not actually a MATLAB script, function, or SimEvents simulation model.

The remedy is to go back to the project root at the GitHub website (e.g. Courseware or Software), choose “Download ZIP” for the entire project, and find the desired file within the project's ZIP. Our apologies for the inconvenience.

## Exercises

### Semantics and Simple Computations

1. Compute the capacity in parts per hour of the following:

• A station with 3 machines operating in parallel, each with a 20-minute service time.
• A balanced line with single machine stations, all with average processing times of 15 minutes.
• A four-station line with one machine per station, where the average processing times are 15, 20, 10, and 12 minutes, respectively for stations 1, 2, 3, and 4.
• A four-station line with multiple machines at each station, where the number of machines at stations 1, 2, 3, and 4 are 2, 6, 10, and 3, and the average processing times at stations 1, 2, 3, and 4 are 10, 24, 40, and 18 minutes.
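All four cases follow from one rule: a station's capacity is (number of machines) × 60 / (mean minutes per part), and a serial line's capacity is its slowest station's rate. As a back-of-envelope check (a Python sketch; the course's models themselves are MATLAB/SimEvents):

```python
# Station capacity (parts/hr) = number of machines * 60 / mean processing time (min).
# A serial line's capacity is the minimum station capacity (the bottleneck rate).
def station_rate(machines, proc_time_min):
    return machines * 60.0 / proc_time_min

# (a) one station, 3 parallel machines, 20 min each
a = station_rate(3, 20)                                        # 9 parts/hr
# (b) balanced line of single-machine stations, 15 min each
b = station_rate(1, 15)                                        # 4 parts/hr
# (c) single-machine stations with times 15, 20, 10, 12 min
c = min(station_rate(1, t) for t in [15, 20, 10, 12])          # 3 parts/hr
# (d) machines [2, 6, 10, 3], times [10, 24, 40, 18] min
d = min(station_rate(m, t)
        for m, t in zip([2, 6, 10, 3], [10, 24, 40, 18]))      # 10 parts/hr
```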

2. Consider a three-station line with one machine per station. The average processing times on stations 1, 2, and 3 are 15, 12, and 14 minutes. However, station 2 is subject to random failures, which reduce its fraction of uptime to 75 percent.

• Which station is the bottleneck?
• What are the bottleneck rate and raw process time for the line?
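One way to account for the failures is to inflate station 2's processing time by its availability, te = t0/A, and then take rb and T0 from the effective times. A sketch of that arithmetic (variable names are ours, not from the models):

```python
# Effective processing time te = t0 / A, where A is the fraction of uptime.
proc_times = [15.0, 12.0, 14.0]        # min
availability = [1.0, 0.75, 1.0]        # station 2 is up only 75% of the time
te = [t / a for t, a in zip(proc_times, availability)]   # [15.0, 16.0, 14.0]

rb = 60.0 / max(te)    # bottleneck rate (parts/hr) set by the slowest effective time
T0 = sum(te)           # raw process time (min), using effective times
```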

3. A powder metal manufacturing line produces bushings in three processes (compaction, sinter-harden, and rough/finish turn) which are executed at three single-machine stations with average processing times of 12, 10, and 6 minutes. However, while compaction and sinter-harden are dedicated to producing bushings, the rough/finish turn station also processes bearings from another line; the average processing time for bearings is 14 minutes.

• If the production rates of bushings and bearings are the same, what is the bottleneck?
• If the production rate of bearings is 1/2 that of bushings, what is the bottleneck rate?
• If the production rate of bearings is 1/3 that of bushings, what is the bottleneck rate?
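Because the turn station is shared, its workload per bushing is its own 6 minutes plus 14 minutes for every bearing produced per bushing. A sketch of that accounting (the function and names are ours, for illustration only):

```python
# Effective turn-station time per bushing, when r bearings are made per bushing.
def turn_time_per_bushing(r, t_bushing=6.0, t_bearing=14.0):
    return t_bushing + r * t_bearing

results = {}
for r in (1.0, 0.5, 1.0 / 3.0):
    eff_times = [12.0, 10.0, turn_time_per_bushing(r)]   # min per bushing
    results[r] = 60.0 / max(eff_times)                   # bushings/hr
# r = 1:   turn is the bottleneck (20 min/bushing -> 3/hr)
# r = 1/2: turn is still the bottleneck (13 min/bushing -> ~4.62/hr)
# r = 1/3: compaction becomes the bottleneck (12 min/bushing -> 5/hr)
```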

### Simple Simulation of Open and Closed Systems

1. Open the simulation model PennyFab_ClosedSystemCONWIP_RandomProcTimes.slx, which is of a closed system (the same entities keep recirculating) with random processing times. Configure the four workstations with exponentially-distributed service times with means [2, 5, 10, 3] and number of servers [1, 2, 6, 2]. Set the whole-system WIP to 25 and make the stop time sufficiently large to reach steady-state (10000 time units?). Run one replication and use the Simulation Data Inspector to determine a good target for average throughput, i.e., a throughput level that is “achievable” for the system with reasonable certainty.

2. Open the model PennyFab_OpenSystemRandomArrivals_RandomProcTimes.slx, which is of an open system (entities arrive from an exogenous process, usually abstracted as a “source”, and depart to an exogenous process, usually abstracted as a “sink”) with random processing times. Configure the four workstations with exponentially-distributed service times with means [2, 5, 10, 3] and number of servers [1, 2, 6, 2]. Make the stop time sufficiently large to reach steady-state (10000 time units?). Next, configure the arrival process: for each member of the team, convert his/her birthdate to a number of the form “mmddyy”. Add these numbers together and set the sum as the Initial Seed for the Arrival Generator. For inter-arrival times, empirically determine a value that gives the same throughput as the previous exercise's closed system (and explain your choice … What happens if you release work at too slow a rate? At too fast a rate?). Finally, run one replication, use the Simulation Data Inspector to visualize WIP, and compare to the closed system.

BIG PICTURE: These simulation models are early versions of four-workstation production lines. It's possible to replace each (queue, server, random number generator) triple with a single GGkWorkstation library block, but we kept the triples to explicitly show the “single queueing node” structure of an infinite-capacity queue followed by a k-capacity server. This exercise was given in the second week of class, and the Simulation Data Inspector is used to make visualization as simple as possible. (1) is answered by visualizing the “avgTH” signal and reading the steady-state value. (2) is answered by visualizing the “WIP” signal, with an explanation such as “If the four-serial-workstation system is changed from a closed CONWIP configuration to an open random-arrivals configuration, then for the same average throughput the WIP level has a fair amount of variability and occasionally spikes well above the closed system's CONWIP level. It might appear that average WIP is also increasing, but Little's Law implies that if average throughput and average cycle time remain unchanged, then so does average WIP.”
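The Little's Law argument at the end can be sanity-checked with one line of arithmetic (the throughput and cycle-time values below are hypothetical placeholders, not simulation results):

```python
# Little's Law: average WIP = average TH * average CT (in consistent units).
TH = 0.25    # hypothetical average throughput, parts per time unit
CT = 100.0   # hypothetical average cycle time, time units
WIP = TH * CT    # 25.0 -- if TH and CT are unchanged, average WIP is too
```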

### Relative Placement of Workstations

Open the simulation model PennyFab_ClosedSystemCONWIP_RandomProcTimes.slx, which is of a closed system (the same entities keep recirculating) with random processing times. Configure the four workstations with exponentially-distributed service times with means [2, 5, 10, 3] and number of servers [1, 2, 6, 2]. Set the whole-system WIP to 25 and make the stop time sufficiently large to reach steady-state (10000 time units?).

1. Run one replication, use the Simulation Data Inspector to visualize average TH and CT (copy & paste your figures below), and explain your results. Note that the bottleneck in this baseline scenario is the second workstation.

2. From the baseline, switch workstations one and two, i.e. configure the four workstations to have exponentially-distributed service times with means 5, 2, 10, and 3, and number of servers 2, 1, 6, and 2. Run one replication, use the Simulation Data Inspector to visualize TH and CT (copy & paste your figures below), and compare this scenario to the baseline.

3. From the baseline, switch workstations two and four, i.e. configure the four workstations to have exponentially-distributed service times with means 2, 3, 10, and 5, and number of servers 1, 2, 6, and 2. Run one replication, use the Simulation Data Inspector to visualize TH and CT (copy & paste your figures below), and compare this scenario to the previous ones.

4. What conclusions might you draw from these experiments?

BIG PICTURE: In both alternate scenarios, moving the bottleneck does not change the average steady-state throughput or cycle time. Note that it may be necessary to simulate the models for quite a long time to reach steady-state and draw this conclusion. More importantly, note that this is not a general result – all four service processes are exponentially-distributed and therefore have identical SCVs (equal to 1). In general, the relative placement of the bottleneck does matter when stations have service-time distributions with different amounts of variability.

### Diagnosis and Improvement

[This assignment is based on Hopp & Spearman chapter 7, problem 5 (in ed. 2) or problem 9 (in ed. 3), which begins with the sentence “Positively Rivet Inc. is a small machine shop that produces sheet metal products”.]

A small machine shop has a four-workstation production line dedicated to manufacturing a single product. The old line has [4, 4, 2, 1] machines at each workstation, and machines have processing rates of [15, 12, 20, 50] parts/hour. Because of strong demand, a new four-workstation production line is added alongside the old one and uses higher-capacity automated equipment. The new line has a single machine at each workstation with processing rates of [120, 120, 125, 125] parts/hour. Over the past several months, the old line has averaged 315 parts/day (one eight-hour shift per day) with an average WIP of 400 parts. The new line has averaged 680 parts/day with an average WIP of 350 parts. Management is unhappy with the performance of the old line because of its lower throughput and higher WIP.

1. Compute rb (bottleneck rate), T0 (raw process time), and W0 (critical WIP) for each line. Explain why one line has a larger critical WIP than the other.
2. Use the simulation model PennyFab_ClosedSystemCONWIP_RandomProcTimes.slx to estimate the throughput for each line; use exponentially-distributed service times and a CONWIP level equal to each line's reported average WIP. Let these numbers approximate the practical worst case performance (for the new line, the simulation model almost exactly meets the three practical worst case conditions). Run the model, use the Simulation Data Inspector to visualize average throughput, and compare management's reported throughput results with the simulated practical worst case. Is management correct in criticizing the old line for inefficiency?
3. Use the simulation model PennyFab_OpenSystemRandomArrivals_RandomProcTimes.slx with management's reported throughput results as the arrival rates, and visualize traces of WIP, average CT, and average TH for each line. Comment, in particular, on how the observed WIP compares with what would be expected for exponentially-distributed processing times.
4. Identify and evaluate at least two options for improving the throughput of each line.
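For part 1, rb, T0, and W0 follow directly from the station data, since each part visits one machine per station. A hedged Python sketch of the arithmetic (the helper function is ours, for illustration):

```python
# rb = minimum station rate; T0 = sum of per-machine process times (a part
# occupies one machine at each station); W0 = rb * T0 (critical WIP).
def line_params(machines, rate_per_machine):
    station_rates = [m * r for m, r in zip(machines, rate_per_machine)]  # parts/hr
    rb = min(station_rates)                        # bottleneck rate, parts/hr
    T0 = sum(1.0 / r for r in rate_per_machine)    # raw process time, hr
    return rb, T0, rb * T0                         # (rb, T0, W0)

old = line_params([4, 4, 2, 1], [15, 12, 20, 50])      # rb = 40/hr, W0 = 8.8
new = line_params([1, 1, 1, 1], [120, 120, 125, 125])  # rb = 120/hr, W0 = 3.92
```

The contrast in W0 comes from the old line's many slow machines: its per-part raw process time is much longer even though station capacities are comparable.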

BIG PICTURE: (a) is plug-and-chug, with a bit of thinking to compare the consequences of multiple slower machines versus a single fast one. (b) enables simulating practical worst case performance as opposed to estimating it with analytical approximations, and then comparing PWC results with management's reported results. (c) tries to elucidate not just processing time averages but also variability in the old & new lines, to help answer the next question. (d) has several possible answers; for the old line they might include more capacity, and for the new line variability reduction (to get to PWC) and then relaxing any of the practical worst case assumptions.