Experiment Framework

AnyLogic offers a rich experiment framework that enables you to efficiently manage simulation runs; collect, display, and compare output results; and calibrate and optimize your models. Multiple experiments can be defined for a model, and an experiment can be packaged and exported as a standalone Java applet or application. You can design a sophisticated interactive UI for an experiment using the AnyLogic graphical editor. Every experiment also has a number of “extension points” where you can specify additional actions to be performed before and after each replication, before and after each iteration, on experiment start, etc. The simulation results can be saved to a .csv file and reloaded in the UI at any time, or opened in another tool such as Excel (a feature of AnyLogic Professional). The supported experiment types are:

Simulation

This is the most basic experiment type. It allows you to run the model with certain parameter values, view the simulation animation in virtual or real-time scale, stop, pause, and resume the model execution, and run the model step by step.

During model execution you can view any object at any level of the model hierarchy and inspect the states of events, statecharts, dynamic and plain variables, etc.

The simulation experiment is the one to use for model debugging and for visual demonstration of the dynamic simulation. All other experiment types treat the model as a black box, execute it in the fastest possible mode, and do not show the model animation.
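
The Run, Pause, Step, and Stop controls correspond to run-control methods of the AnyLogic engine API. Below is a minimal sketch of driving a model this way from code, written as it might appear in an experiment's code field; Main is a hypothetical name for the model's top-level agent class, and the real-time scale value is an arbitrary example:

    Engine engine = createEngine();           // engine factory available inside experiment code
    engine.setRealTimeMode(true);             // animate in scaled real time, not at maximum speed
    engine.setRealTimeScale(2.0);             // e.g. two model time units per real second
    Main root = new Main(engine, null, null); // create the root agent of the model
    root.setParametersToDefaultValues();
    engine.start(root);                       // prepare the model for execution
    engine.run();                             // start; pause(), step(), and stop() back the other controls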

Parameter Variation

In this experiment the model is run multiple times with one or more parameters being varied. You can specify a range and step for each parameter and let AnyLogic try all combinations, or you can programmatically control how a parameter value depends on the index of the simulation run.

This experiment can also be used to plug in your own optimization algorithm if the built-in optimizer does not suit you for any reason: you specify the code to be called after each iteration to decide on the next parameter set.

Parameter variation experiment
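
For illustration, here is a sketch of the loop such an experiment effectively performs, written against the engine API in the style of custom experiment code (see the Custom experiment below); Main, arrivalRate, and meanQueueLength are hypothetical model-specific names:

    for (int run = 0; run < 10; run++) {
        Engine engine = createEngine();
        engine.setStopTime(1000);                 // run each simulation up to model time 1000
        Main root = new Main(engine, null, null); // create the root agent
        root.setParametersToDefaultValues();
        root.arrivalRate = 0.5 + 0.1 * run;       // parameter value derived from the run index
        engine.start(root);
        engine.runFast();                         // black-box run, no animation
        traceln("run " + run + ": meanQueueLength = " + root.meanQueueLength);
        engine.stop();                            // destroy the model before the next run
    }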

Optimization

AnyLogic uses the built-in OptQuest optimizer to search for the best solution, given an objective function, constraints, requirements, and the parameters (decision variables) that can be varied. Optimization under uncertainty is supported through replications: a stochastic model is run multiple times with the same parameter values (these runs are called replications), and the decision on the next move in the parameter space (the next iteration) is based on their aggregated output.

AnyLogic automatically generates the UI for the optimization experiment; it includes the current and best solutions and a dynamic chart of the optimization progress.

Optimization experiment
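
The optimizer itself is configured graphically, so the following is only an illustration of how one iteration evaluates a candidate solution under uncertainty; the model run is replaced by a stand-in formula, and all names are hypothetical:

    double candidate = 0.7;   // parameter value proposed by the optimizer for this iteration
    int replications = 5;
    double sum = 0;
    java.util.Random rng = new java.util.Random();
    for (int r = 0; r < replications; r++) {
        // Stand-in for one stochastic model run with the candidate parameters;
        // in AnyLogic this would be an engine run as sketched above:
        double output = 10.0 / candidate + rng.nextGaussian();
        sum += output;        // same parameter values in every replication
    }
    double objective = sum / replications;  // aggregated output decides the next move in parameter space
    traceln("objective(" + candidate + ") = " + objective);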

Compare Runs

[available in AnyLogic Professional]

This is an interactive experiment that allows you to input model parameters, run a simulation, and add the simulation output to charts where it can be compared with the results of other runs.

The default UI for this experiment includes the input fields and the output charts. You can choose a particular output result, click on its chart, and display the corresponding parameter values.

Compare runs experiment

Sensitivity Analysis

[available in AnyLogic Professional]

This experiment helps you explore how sensitive the simulation results are to changes in the model parameters. The experiment wizard asks you to choose the parameter to vary and the outputs you are interested in.

For a single-value output, an “output vs parameter” chart is displayed. If the simulation output is a dataset (e.g. the dynamics of a certain process over time), a series of curves is shown on one chart for comparison.

Sensitivity analysis experiment
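
Conceptually, the experiment sweeps one parameter over a range and records the chosen output for each value. A sketch with a stand-in formula in place of the actual model run (the range and formula are made up for illustration):

    double min = 0.4, max = 0.8, step = 0.1;
    for (double p = min; p <= max + 1e-9; p += step) {
        // Stand-in for one model run with parameter value p; in AnyLogic this
        // would be an engine run as sketched in the parameter variation section:
        double output = 10.0 / p;
        traceln("parameter = " + p + "  output = " + output);  // one point on the "output vs parameter" chart
    }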

Monte Carlo

[available in AnyLogic Professional]

The Monte Carlo experiment allows you to run a (stochastic) simulation a number of times, obtain a collection of outputs, and view them as a histogram. If the model itself is stochastic, each run will produce a different output even if you do not change the input parameters. Alternatively, you may generate a random parameter value for each simulation run.

The Monte Carlo experiment wizard asks you how many replications you wish to make, whether or not you wish to vary parameters, and which values or datasets should be collected and displayed as histograms. Both regular and 2D histograms may be used.

Monte Carlo experiment
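
The idea can be sketched as follows, with a stand-in random formula in place of an actual stochastic model run and crude manual bucketing in place of AnyLogic's histogram charts (all numbers are arbitrary):

    java.util.Random rng = new java.util.Random();
    int runs = 1000;
    int[] buckets = new int[10];
    double lo = 4.0, width = 1.2;   // histogram range [4.0, 16.0) split into 10 bins
    for (int i = 0; i < runs; i++) {
        // Stand-in for one stochastic model run; in AnyLogic this would be an
        // engine run whose output differs from run to run:
        double output = 10.0 + 2.0 * rng.nextGaussian();
        int b = (int) ((output - lo) / width);
        if (b >= 0 && b < buckets.length) buckets[b]++;  // bucket the output for the histogram
    }
    for (int b = 0; b < buckets.length; b++)
        traceln(String.format("%5.1f..%5.1f : %d", lo + width * b, lo + width * (b + 1), buckets[b]));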

Calibration

[available in AnyLogic Professional]

When you have your model structure in place, you may wish to tune some parameters of the model so that its behavior under particular conditions matches a known (observed) pattern. If there are several parameters to tune, it makes sense to use the built-in optimizer to search for the best combination. The objective in this case is to minimize the difference between the simulation output and the observed data.

The experiment wizard asks you which parameters should be calibrated and what criteria should be used. If there are multiple criteria, you can weight them with coefficients. The calibration progress and the fit of each criterion are displayed in the default UI.

Calibration experiment
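
For a single criterion, the objective could be, for example, the sum of squared differences between the simulated and the observed data points. A sketch with made-up numbers (in practice the simulated values come from a model run):

    double[] observed  = { 2.0, 3.5, 5.1, 6.0 };
    double[] simulated = { 2.2, 3.4, 4.8, 6.3 };

    double error = 0;
    for (int i = 0; i < observed.length; i++) {
        double d = simulated[i] - observed[i];
        error += d * d;   // squared difference for this data point
    }
    // With several criteria, each would enter the objective with its own
    // coefficient (weight); the optimizer minimizes the resulting value.
    traceln("calibration error = " + error);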

Custom

[available in AnyLogic Professional]

The custom experiment gives you maximum flexibility in setting parameters, managing simulation runs, and making decisions. It simply gives you a code field where you can do all that (and a lot more) using the rich Java API of the AnyLogic engine (methods like run(), stop(), etc.).
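
A minimal sketch of custom experiment code, close to the template AnyLogic generates, assuming the top-level agent class of the model is called Main:

    Engine engine = createEngine();                // create the engine
    engine.getDefaultRandomGenerator().setSeed(1); // fixed seed for reproducible runs
    engine.setStopTime(1000);                      // model time at which to stop
    Main root = new Main(engine, null, null);      // create the root object
    root.setParametersToDefaultValues();           // set up parameters here as needed
    engine.start(root);                            // prepare the model for simulation
    engine.runFast();                              // run without animation, as fast as possible
    // ...process the results of the run here...
    engine.stop();                                 // destroy the model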