Summary: You can try to reduce the memory consumption of ProB's model checker in several ways:
- Set the COMPRESSION preference to TRUE.
- Set FORGET_STATE_SPACE and IGNORE_HASH_COLLISIONS to TRUE.
- Enable operation reuse. This is useful if most operations only affect a subset of the variables.
- Move unchanged variables to constants (in B and Event-B): constants are stored only once, while variables are stored for every state.
- Set SAFETY_MODEL_CHECK to TRUE if you do not do LTL or deadlock checking.
Use -statistics to uncover where memory is used.
Let us use this machine for illustration:
```
MACHINE IncrementalStatePackingTestLarge
SETS ID={aa,bb,cc}
VARIABLES xx,yy,zz,ii
INVARIANT xx:ID & yy:ID & zz <: NATURAL & ii : 1..1000
INITIALISATION
  xx:=aa || yy := bb || zz := (1..1000 \/ 2000..3000) \ {2001} || ii := 1
OPERATIONS
  SetX(vv) = PRE vv:ID & vv /= xx THEN xx:= vv END;
  SetY(vv) = PRE vv:ID & vv /= yy THEN yy:= vv END;
  IncrI = PRE ii<1000 THEN ii:=ii+1 END
END
```
Let us run probcli on this example (note: the -memory command is new and may not be available in your version of probcli):
```
probcli IncrementalStatePackingTestLarge.mch -model-check -memory
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 8994 ms (10249 ms walltime)
States analysed: 9000
Transitions fired: 44992
No counter example found. ALL nodes visited.
ProB memory used: 466.939 MB ( 362.081 MB program)
```
By providing the -statistics command you can get a very detailed breakdown of the memory consumption.
```
probcli IncrementalStatePackingTestLarge.mch -model-check -statistics
...
packed_visited_expression(_429,_431) : 9001 facts : 176.650 MB ( 71.792 MB program) freed: 290.304 MB
...
transition(_429,_431,_433,_435) : 44992 facts : 166.853 MB ( 61.996 MB program) freed: 9.796 MB
...
```
Here you can see that most memory is used to store the states (packed_visited_expression), not the transitions. Note that you should also take the freed information for each stored fact into account.
One way to improve memory consumption is to use the COMPRESSION preference:
```
probcli IncrementalStatePackingTestLarge.mch -model-check -p COMPRESSION TRUE -memory
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 7341 ms (8387 ms walltime)
States analysed: 9000
Transitions fired: 44992
No counter example found. ALL nodes visited.
ProB memory used: 179.199 MB ( 74.342 MB program)
```
As you can see, we have considerably reduced the memory consumption in this case, and as a side-benefit the runtime as well (although in general, enabling compression can increase runtime).
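To illustrate the idea behind packing states more compactly, here is a hypothetical Python sketch (not ProB's actual representation): a state of this machine, ignoring the constant-like zz, is a triple (xx, yy, ii) with xx, yy drawn from three identifiers and ii from 1..1000, so it can be encoded losslessly as a single small integer instead of a tuple of heap objects. The functions `pack`/`unpack` are illustrative names, not part of ProB.

```python
# Sketch of the state-packing idea (illustration only, not ProB's algorithm):
# encode a state (xx, yy, ii) with xx,yy in {aa,bb,cc} and ii in 1..1000
# as one small integer via mixed-radix arithmetic.
IDS = {"aa": 0, "bb": 1, "cc": 2}
NAMES = ["aa", "bb", "cc"]

def pack(xx, yy, ii):
    # radix-3 digits for xx and yy, then the offset of ii
    return ((ii - 1) * 3 + IDS[yy]) * 3 + IDS[xx]

def unpack(code):
    code, xx = divmod(code, 3)
    ii_minus_1, yy = divmod(code, 3)
    return NAMES[xx], NAMES[yy], ii_minus_1 + 1

state = ("cc", "bb", 513)
code = pack(*state)
assert unpack(code) == state   # the encoding round-trips losslessly
print(code)  # → 4613
```

All 9000 reachable states then fit in integers below 9000, rather than 9000 tuples of boxed values.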
If you have lots of transitions you can also try disabling time-outs. Time-out checking incurs some overhead and is mainly useful in animation mode, not in exhaustive model checking mode.
```
probcli IncrementalStatePackingTestLarge.mch -model-check -p COMPRESSION TRUE -memory -disable-timeout
...
Model checking time: 6449 ms (7492 ms walltime)
...
```
If transitions take up a lot of space, you can also use the SAFETY_MODEL_CHECK preference, provided you do not need LTL or deadlock checking. For this model it does not result in a big decrease of memory usage. Note that you should add the -nodead flag to disable deadlock checking.
```
probcli IncrementalStatePackingTestLarge.mch -model-check -p COMPRESSION TRUE -memory -p SAFETY_MODEL_CHECK TRUE -nodead
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 7333 ms (8483 ms walltime)
States analysed: 9000
Transitions fired: 9000
No counter example found. ALL nodes visited.
ProB memory used: 169.784 MB ( 64.926 MB program)
```
Another way to reduce memory usage is to discard states once they are processed and to ignore hash collisions. Warning: collisions may then go unnoticed, resulting in an incomplete model check without ProB being able to detect it. For ordinary model checking, ProB does check for collisions and will generate the full state space even in the presence of collisions. (This is unlike the TLC model checker, which ignores hash collisions by design.)
```
probcli IncrementalStatePackingTestLarge.mch -model-check -memory -p FORGET_STATE_SPACE TRUE -p IGNORE_HASH_COLLISIONS TRUE
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 8081 ms (9172 ms walltime)
States analysed: 9000
Transitions fired: 9001
No counter example found. ALL nodes visited.
ProB memory used: 169.140 MB ( 64.283 MB program)
```
We can see that even without compression the memory consumption is reduced considerably.
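The danger of ignoring hash collisions can be made concrete with a small Python sketch (illustration only; ProB's hashing is internal and uses far wider hashes). If only a hash of each visited state is kept, two distinct states that happen to share a hash become indistinguishable, and the second one is silently never explored. The function `tiny_hash` below is hypothetical and deliberately uses very few bits to force collisions:

```python
import hashlib

def tiny_hash(state, bits=8):
    # deliberately tiny hash so collisions are guaranteed (illustration only)
    digest = hashlib.sha256(repr(state).encode()).digest()
    return digest[0] % (1 << bits)

visited = set()   # stores only hashes, not the states themselves
explored = []
for ii in range(1, 1001):
    state = ("aa", "bb", ii)
    h = tiny_hash(state)
    if h not in visited:   # a collision makes a new state look "already seen"
        visited.add(h)
        explored.append(state)

# With an 8-bit hash at most 256 of the 1000 distinct states get explored,
# and nothing in this loop can tell that states were skipped.
print(len(explored) < 1000)
```

Real hashes are wide enough that collisions are rare, but the failure mode is exactly this: missed states, with no error reported.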
There is, however, one more thing that we can do. Let us first run the -lint command to check the model:
```
probcli IncrementalStatePackingTestLarge.mch -lint
! A warning occurred (source: check_all_variables_written) !
! Variable is written by no operation (consider defining it as a constant): zz
! Line: 4 Column: 16 until Line: 4 Column: 18 in file: /Users/leuschel/git_root/prob_examples/public_examples/B/PerformanceTests/ModelChecking/IncrementalStatePackingTestLarge.mch
```
Let us now follow this advice and move zz to the constants:
```
MACHINE IncrementalStatePackingTestLarge_v2
SETS ID={aa,bb,cc}
CONSTANTS zz
PROPERTIES zz <: NATURAL & zz = (1..1000 \/ 2000..3000) \ {2001}
VARIABLES xx,yy,ii
INVARIANT xx:ID & yy:ID & ii : 1..1000
INITIALISATION xx:=aa || yy := bb || ii := 1
OPERATIONS
  SetX(vv) = PRE vv:ID & vv /= xx THEN xx:= vv END;
  SetY(vv) = PRE vv:ID & vv /= yy THEN yy:= vv END;
  IncrI = PRE ii<1000 THEN ii:=ii+1 END
END
```
```
probcli IncrementalStatePackingTestLarge_v2.mch --model-check -memory
% unused_constants(1,[zz])
% Runtime for SOLUTION for SETUP_CONSTANTS: 94 ms (walltime: 97 ms)
% Runtime to FINALISE SETUP_CONSTANTS: 0 ms (walltime: 0 ms)
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 1634 ms (1689 ms walltime)
States analysed: 9001
Transitions fired: 44993
No counter example found. ALL nodes visited.
ProB memory used: 179.129 MB ( 74.271 MB program)
```
You can see that even without compression the memory consumption is reduced and the runtime is very much improved. Combining this with safety model checking results in a further (small) improvement:
```
probcli IncrementalStatePackingTestLarge_v2.mch --model-check -memory -p SAFETY_MODEL_CHECK TRUE -nodead
% unused_constants(1,[zz])
% Runtime for SOLUTION for SETUP_CONSTANTS: 91 ms (walltime: 93 ms)
% Runtime to FINALISE SETUP_CONSTANTS: 0 ms (walltime: 0 ms)
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 1367 ms (1395 ms walltime)
States analysed: 9001
Transitions fired: 9001
No counter example found. ALL nodes visited.
ProB memory used: 169.917 MB ( 65.060 MB program)
```
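Why does moving zz to the constants help so much? A constant's value is stored once and shared by reference, whereas a variable's value is part of every stored state. The following Python sketch (illustration only, not ProB's internals) contrasts storing per-state copies of the large set with storing one shared object; the 100-state scale is arbitrary:

```python
import sys

# The large, never-changing set from the machine:
zz = (frozenset(range(1, 1001)) | frozenset(range(2000, 3001))) - {2001}

# zz as a variable: every state carries its own copy of the set.
per_state_copies = [set(zz) for _ in range(100)]
bytes_as_variable = sum(sys.getsizeof(s) for s in per_state_copies)

# zz as a constant: stored once, every state merely references it.
bytes_as_constant = sys.getsizeof(zz)

# The per-copy representation costs roughly a factor of
# "number of states" more container memory than the shared one.
print(bytes_as_variable > 10 * bytes_as_constant)  # → True
```

With 9000 states instead of 100, the saving is correspondingly larger, which matches the drop in packed_visited_expression memory seen above.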
Use min-max values to determine whether any variable grows in an unexpected or unbounded fashion:
```
probcli IncrementalStatePackingTestLarge.mch --model-check -get_min_max_coverage user_output
...
Min-Max-Coverage VARIABLES
 zz=#2000:{1,2,...,2999,3000}
 xx:{aa,bb,cc}
 yy:{aa,bb,cc}
 ii:1..1000
```
We can see (similar to -lint above) that zz has only one value, and we can see the range of values for the other variables. In this case the ranges are not unexpected.
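Conceptually, such a min-max table can be maintained while exploring states by folding each visited state into a running range per variable. The sketch below is hypothetical Python (not ProB's implementation), using the states generated by IncrI as data:

```python
from collections import defaultdict

# Running (min, max) per variable, updated as states are visited.
ranges = defaultdict(lambda: (float("inf"), float("-inf")))

def observe(state):
    # state: dict mapping variable name -> numeric value
    for var, val in state.items():
        lo, hi = ranges[var]
        ranges[var] = (min(lo, val), max(hi, val))

# Feed in the ii-values reached via repeated IncrI transitions:
for ii in range(1, 1001):
    observe({"ii": ii})

print(ranges["ii"])  # → (1, 1000), matching the invariant ii : 1..1000
```

A variable whose maximum keeps climbing run after run is a candidate for an unbounded state space.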
In general, you can try using ParB to parallelise ProB. For this model, however, the use of ParB actually slows down the model checking.
You can also use TLC as a backend in ProB as follows:
```
probcli IncrementalStatePackingTestLarge.mch -mc_with_tlc
...
Parsing time: 182 ms
Translation time: 24 ms
Model checking time: 0 sec
States analysed: 9000
Transitions fired: 44992
Result: NoError
exit : exit(0)
walltime: 1534 ms
```
You can increase the number of workers by setting the TLC_WORKERS preference:
```
probcli IncrementalStatePackingTestLarge.mch -mc_with_tlc -p TLC_WORKERS 3
...
exit : exit(0)
walltime: 1392 ms
```
You can also use the LTSmin backend, which interacts with the ProB interpreter and solver:
```
probcli -mc_with_lts_seq IncrementalStatePackingTestLarge.mch -nodead
...
prob2lts-seq, 0.422: state space 1009 levels, 9001 states 44992 transitions
...
LTSMin found no counter example
```
Model checking here took only 0.4 seconds.
Let us adapt the initial example above and make the transition computations much slower:
```
MACHINE IncrementalStatePackingTestLargeSlow
SETS ID={aa,bb,cc}
VARIABLES xx,yy,zz,ii
INVARIANT xx:ID & yy:ID & zz <: NATURAL & ii : 1..1000
INITIALISATION
  xx:=aa || yy := bb || zz := (1..1000 \/ 2000..3000) \ {2001} || ii := 1
OPERATIONS
  SetX(vv) = PRE vv:ID & vv /= xx & !x.(x:1..300 => 1..x <: zz) // always true
             THEN xx:= vv END;
  SetY(vv) = PRE vv:ID & vv /= yy & !x.(x:1..300 => 1..x <: zz) THEN yy:= vv END;
  IncrI = PRE ii<1000 THEN ii:=ii+1 END
END
```
Here the additional guards slow down ProB considerably.
```
probcli IncrementalStatePackingTestLargeSlow.mch --model-check -memory -p COMPRESSION TRUE
...
* 2025 states checked (48.6% of total 4169), 10121 transitions, 168.719 MB
* currently checking 14.52 states/sec, adding 37.02 new states/sec
* walltime 2.4 minutes, percentage of checked states decreasing from 50.7%
...
```
By setting the OPERATION_REUSE preference to full, you can tell ProB to cache the results of each operation. This pays off when operations read only a subset of the variables, so that the pre-computed transitions can be reused for many states:
```
probcli IncrementalStatePackingTestLargeSlow.mch --model-check -memory -p COMPRESSION TRUE -p OPERATION_REUSE full
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 8752 ms (9730 ms walltime)
States analysed: 9000
Transitions fired: 44992
No counter example found. ALL nodes visited.
ProB memory used: 179.969 MB ( 75.111 MB program)
```
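The principle behind operation reuse can be sketched in Python (a hypothetical illustration, not ProB's mechanism): the expensive guard of SetX reads only zz, so its result can be memoised on that projection of the state and reused across all 9000 states, regardless of the values of xx, yy and ii:

```python
from functools import lru_cache

calls = 0  # counts how often the expensive guard is really evaluated

def slow_guard(zz):
    # Stands in for the costly guard !x.(x:1..300 => 1..x <: zz)
    return all(set(range(1, x + 1)) <= zz for x in range(1, 301))

@lru_cache(maxsize=None)
def guard_cached(zz_key):
    global calls
    calls += 1
    return slow_guard(frozenset(zz_key))

# zz never changes in this machine, so every state projects to the same key:
zz_key = tuple(range(1, 1001))
results = [guard_cached(zz_key) for _ in range(9000)]

print(calls)  # → 1: evaluated once, reused 8999 times
```

Because the cache key is only the read-set of the operation, the speed-up grows with the number of states that agree on those variables.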
Note: on this example TLC fares better than ProB in its default settings, but is slower than ProB with operation reuse:
```
probcli IncrementalStatePackingTestLargeSlow.mch -mc_with_tlc
...
Parsing time: 199 ms
Translation time: 29 ms
Model checking time: 32 sec
States analysed: 9000
Transitions fired: 44992
Result: NoError
exit : exit(0)
walltime: 33551 ms
```
```
probcli IncrementalStatePackingTestLargeSlow.mch -mc_with_tlc -p TLC_WORKERS 3
...
exit : exit(0)
walltime: 26501 ms
```
However, LTSmin works very well together with ProB: model checking takes well under a second. The LTSmin backend has operation caching built in:
```
probcli -mc_with_lts_seq IncrementalStatePackingTestLargeSlow.mch -nodead
...
prob2lts-seq, 0.676: state space 1009 levels, 9001 states 44992 transitions
...
LTSMin found no counter example
```
In principle partial order reduction can help, in particular for deadlock checking. Looking at the original example we get:
```
probcli IncrementalStatePackingTestLarge.mch -model-check -p COMPRESSION TRUE -memory -p por ample_sets -noinv -noass
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 45 ms (66 ms walltime)
States analysed: 18
Transitions fired: 29
No counter example found. ALL nodes visited.
ProB memory used: 162.450 MB ( 57.592 MB program)
```
For invariant checking it does not pay off, and actually makes runtime worse:
```
probcli IncrementalStatePackingTestLarge.mch -model-check -p COMPRESSION TRUE -memory -p por ample_sets -nodead -noass
ALL OPERATIONS COVERED
% All open nodes visited
Model checking time: 13935 ms (17163 ms walltime)
States analysed: 9000
Transitions fired: 35875
No counter example found. ALL nodes visited.
ProB memory used: 178.822 MB ( 73.964 MB program)
```
Symmetry reduction can achieve a considerable reduction of the state space, but requires rewriting the model, e.g., to use a deferred set rather than an enumerated set.