Stochastic Scheduling Framework utilising Spry and daedalus

Production schedules for mine sites are notoriously sensitive to input assumptions. These assumptions are generally constructed from average historical values, with forward-looking scaling factors applied for expected performance improvement or degradation. Assumptions such as equipment availability, utilisation, and production rates all have large impacts on monthly or annual material movements.

A set of assumptions carries risk: if an availability assumption was overly optimistic for a month, the production target will not be reached and the schedule becomes invalidated. It is important to note that these risks are not independent; missing a target can have causal effects on the remainder of the schedule.

A common practice is to run 3 schedule simulations, varying the input assumptions above and below the headline value. These schedules are considered 'worst, likely, best' (nomenclature varies). This practice is useful for understanding sequencing bottlenecks as productivity increases or decreases, but it lacks the data points needed to identify risks. There are now 3 schedules which become invalidated as reality sets in.

This is where stochastic scheduling is applicable. Productivity assumptions are defined according to a distribution, and by running a Monte Carlo simulation, a range of production outputs can be collected which describes the distribution of production values. A Monte Carlo method suits the scheduling problem because a scheduling system has numerous coupled degrees of freedom: the performance of one piece of equipment has direct impacts on the performance or path of another. For sites with more than 3 resources interacting with one another, a more systematic simulation approach would quickly run into a combinatorial explosion. For this reason, the stochastic scheduler constructed in the Spry scheduling software package uses a Monte Carlo simulation method at its core.
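As a conceptual illustration only (not Spry's API), the Monte Carlo loop can be thought of as the Python sketch below, where `schedule_fn` and `samplers` are hypothetical stand-ins for the deterministic scheduling engine and the per-assumption sampling functions.

```python
import random

def run_monte_carlo(schedule_fn, samplers, n_simulations=100, seed=0):
    """Minimal Monte Carlo loop: sample one value per assumption, run the
    (deterministic) schedule with those values, and collect the outputs."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_simulations):
        # Draw a complete set of input assumptions for this simulation.
        assumptions = {name: sampler(rng) for name, sampler in samplers.items()}
        # The schedule itself remains deterministic given the sampled inputs, so
        # the couplings between equipment are resolved by the scheduler, not here.
        outputs.append(schedule_fn(assumptions))
    return outputs
```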

Inputs

Productivity Inputs

The 3 main productivity drivers in the Spry scheduling software package are availability, utilisation, and rate. All three combine to give an effective consumption per calendar hour; however, the framework splits each out as a separate input assumption. It would be possible to simulate using a single distribution for the consumption, but this method does not lend itself to ergonomic reporting. For instance, a typical report would chart an equipment's operating hours across the simulations, so granularity into the operating-hours inputs (availability and utilisation) is required.
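For reference, the three drivers combine multiplicatively into an effective rate per calendar hour. The snippet below is a minimal illustration of that decomposition; the figures are made up for demonstration, not taken from any real model.

```python
def effective_rate_per_calendar_hour(availability, utilisation, rate):
    """Effective consumption per calendar hour as the product of the three
    productivity drivers (availability and utilisation expressed as fractions)."""
    return availability * utilisation * rate

# e.g. 85% available, 80% utilised, digging 1200 units per operating hour
effective_rate_per_calendar_hour(0.85, 0.80, 1200.0)  # -> 816 units per calendar hour
```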

The framework requires a productivity distribution per input. This means the availability, utilisation, and rate are each defined using a distribution, per equipment. The framework utilises fields within the calendar table to define a distribution, as the image below shows.

The framework supports various distributions; here the inputs are defined using a PERT distribution, where a, b, c represent the minimum, most likely, and maximum values, defining the shape of the distribution. Two other constraints are required: Lower Clamp and Upper Clamp. These values bound the sampled value, since sampled values could otherwise be invalid, such as negative rates or availabilities greater than 100%. The distributions define a factor, not an absolute value. This is intentional, as it is common to define different rates for different processes and to alter inputs over time as machines undergo major overhauls or replacements. The framework therefore generates a sample using the defined distribution, applies the sample as a factor against the scheduled input value, and bounds the result using the clamps.
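A minimal Python sketch of this sample-factor-clamp step is shown below. It uses the common Beta reparameterisation of the PERT distribution; the function and parameter names are illustrative and not the framework's actual implementation.

```python
import random

def sample_pert(rng, a, b, c, lam=4.0):
    """Sample from a PERT distribution with minimum a, most likely b, and
    maximum c, via the standard Beta reparameterisation."""
    alpha = 1.0 + lam * (b - a) / (c - a)
    beta = 1.0 + lam * (c - b) / (c - a)
    return a + rng.betavariate(alpha, beta) * (c - a)

def sample_factored_input(rng, scheduled_value, a, b, c, lower_clamp, upper_clamp):
    """Draw a PERT factor, apply it to the scheduled input value, and bound the
    result by the lower/upper clamps so invalid values cannot occur."""
    factor = sample_pert(rng, a, b, c)
    return min(max(scheduled_value * factor, lower_clamp), upper_clamp)

rng = random.Random(42)
# e.g. an 85% availability input with a factor distribution of 0.9 / 1.0 / 1.05,
# clamped so the sampled availability stays within [0, 1]
sample_factored_input(rng, 0.85, a=0.9, b=1.0, c=1.05, lower_clamp=0.0, upper_clamp=1.0)
```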

Simply setting up the productivity inputs as continuous distributions yields the below distributions for the example Cat6040 equipment. These distributions were produced with 100 simulations; each observation is a productive step in the schedule.

Discrete Randomised Events

Equipment productivity is not the only driver of total material movement. Delays such as weather events, unexpected mechanical breakdowns, or manning issues can all negatively impact a schedule. The industry approach is to forecast expected delays and build these delays into the schedule. If these forecasts are too optimistic, production targets will not be achieved, whilst if the forecasts are too pessimistic, under-budgeting can occur (this is especially true for manning).

The stochastic scheduling framework accounts for these risks using the notion of Discrete Randomised Events (DREs), where a particular event (e.g. rain) has a likelihood of occurring and a defined impact. The impact is defined as a distribution of delay hours. The framework supports events which impact either equipment or areas; for example, a mechanical breakdown would be modelled against the equipment, while a weather event would impact the whole site. DREs can also supplement forecast models to provide insight into how sensitive a schedule is to changes in the forecasts.
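As an illustration of the DRE idea, the sketch below samples whether each event occurs and, if so, its delay hours. The event structure, the triangular impact distribution, and the numbers are assumptions for demonstration only, not the framework's actual model.

```python
import random

def sample_dre_delays(rng, events):
    """For each Discrete Randomised Event, decide whether it occurs this period
    and, if so, sample its impact in delay hours."""
    delays = {}
    for event in events:
        if rng.random() < event["probability"]:
            # Impact modelled here as a triangular distribution of delay hours;
            # the framework's actual impact distributions may differ.
            low, mode, high = event["delay_hours"]
            delays[event["name"]] = rng.triangular(low, high, mode)
        else:
            delays[event["name"]] = 0.0
    return delays

rng = random.Random(7)
events = [
    {"name": "rain", "probability": 0.3, "delay_hours": (4, 12, 48)},          # site-wide
    {"name": "EX01 breakdown", "probability": 0.1, "delay_hours": (2, 8, 24)},  # per equipment
]
sample_dre_delays(rng, events)
```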

Simulations

Simply put, the more, the better. The framework hooks into the Spry scheduling engine and runs a specified number of simulations. For each simulation, the input values and DREs are precomputed rather than computed on the fly. This helps separate the stochastic logic from Spry's scheduling logic. The framework also optimises the scheduling runs, reducing the work Spry has to do where it would not change between runs. For each scheduling run, a collection of outputs is gathered. These outputs are collated into a large tabular output, aggregating the results over a specified period of time (month, year) and keeping the result of each run separate.
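One plausible way to lay out such a collated table is a long format with one row per run, period, and metric, as in the hypothetical sketch below; the framework's actual columns and file layout may differ.

```python
import csv

def collate_outputs(run_results, out_path="stochastic_outputs.csv"):
    """Collate per-run, per-period results into one long-format table so
    distributions can be analysed per period across all runs."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["run", "period", "metric", "value"])
        for run_id, periods in enumerate(run_results):
            for period, metrics in periods.items():
                for metric, value in metrics.items():
                    writer.writerow([run_id, period, metric, value])
```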

Choosing an adequate number of simulations is a tradeoff between processing time and output validity. The total processing time is linearly proportional to the number of simulations to be run, and is directly impacted by how long a single schedule takes to run. Whether the outputs are well distributed depends on the number of degrees of freedom in the model. A model with 2 excavators and no DREs will only require a small number of simulations (say 100), whilst more complex models will require thousands of simulations to produce well-shaped outputs.

It is important to understand how sensitive schedules are to varying inputs. If long tails are present, or secondary humps appear in the overall output, there may be degenerate schedules. A degenerate schedule typically comes from equipment and dependency interaction: a dragline which must follow a specific path has to wait until material has been blasted and is ready to dig, or slight delays in one area of a pit might have ramifications for another. Running the stochastic scheduler highlights the input schedule's sensitivity to these risks, but it also goes one step further. Each simulation is deterministic and can be replicated given the same seeding value, so a scheduling engineer has the power to run specific stochastic schedules and visually check how the schedule performs. This can help the scheduler identify which particular paths or dependencies in a schedule are causing it to fall over so readily.
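Because the sampled inputs are fully determined by the seed, replaying a simulation is simply a matter of re-seeding, as the small sketch below illustrates (again using the hypothetical `samplers` mapping from the earlier sketch).

```python
import random

def simulation_inputs(seed, samplers):
    """Re-seeding the RNG with the same value reproduces the same sampled
    inputs, and hence the same deterministic schedule, for inspection."""
    rng = random.Random(seed)
    return {name: sampler(rng) for name, sampler in samplers.items()}

# Re-running a specific simulation regenerates identical inputs, so that
# particular schedule can be opened and inspected in the scheduling package.
# inputs = simulation_inputs(seed=217, samplers=samplers)
```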

Visualising Outputs

The output of the stochastic scheduler is a large table exported as a CSV. This data must be post-processed to visualise the output distributions. As an example of the outputs the framework can produce, a demonstration model was constructed and 300 simulations were run. The post-processing and charting is done with daedalus.report.
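The post-processing step can also be reproduced with generic tooling. The sketch below uses pandas and matplotlib rather than daedalus.report's API, and assumes the hypothetical long-format columns (`run`, `period`, `metric`, `value`) described earlier.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the long-format stochastic output table
# (assumed columns: run, period, metric, value).
df = pd.read_csv("stochastic_outputs.csv")

# Histogram of total material movement per simulation run.
totals = df[df["metric"] == "material_moved"].groupby("run")["value"].sum()
totals.plot.hist(bins=30, title="Total material movement across simulations")
plt.xlabel("Material moved")
plt.show()
```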

First, the overall movement of material shows the underlying distribution exhibited by the simulations.

Equipment operating hours are also particularly useful to see if equipment is not being fully utilised.

Finally, temporal distributions can be used to look at the spread of production over time.

The above charts are just a few examples of the outputs that can be constructed; typically, each site would want to track and gain insight into metrics specific to its operating constraints.

Conclusion

The stochastic scheduling framework for Spry is a fantastic adjunct to the regular scheduling models employed by the mining industry. The framework takes very little effort to set up and start running simulations. Most sites already track equipment performance, and this data can be used to form the continuous distributions for input into the framework. Leveraging daedalus.report's ability to richly visualise the output data and to post-process the results in a fast, automated fashion, the framework provides a powerful, repeatable process for conducting stochastic analysis of schedules.
