ParB

This page explains how to run the distributed model checking prototype.

Note that the implementation does not work on Windows; only Linux and Mac OS are supported.

On some systems you need to raise the limits for shared memory; this can be done using sysctl. Here is a little script that sets the limits. It takes the size of the shared memory in GB as a parameter (usually the size of your machine's memory). You need to run the script with root privileges.

#!/bin/bash
if [ $# -gt 0 ]; then
    echo "Setting SHM sizes:"
    sysctl -w kern.sysv.shmmax=`perl -e "print 1073741824*$1"`
    sysctl -w kern.sysv.shmseg=4096
    sysctl -w kern.sysv.shmall=`perl -e "print 262144*$1"`
    echo "Here are the current values:"
    sysctl -a | grep shm
else
    echo "You need to provide the size of shared memort (in full GB)"
fi
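
For example, assuming the script is saved as set_shm.sh (the file name is arbitrary) on a machine with 8 GB of memory:

chmod +x set_shm.sh
sudo ./set_shm.sh 8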

After setting up shared memory, you can use the parB.sh script that comes with the ProB distribution (see Download).

Usage

./parB <Nr. of workers> <logfile> <file>

Example usage:

$ ./parB.sh 2 ~/parB.log scheduler.mch

Running in the Cloud/Cluster

The script can only be used for computation on a single physical computer. If you want to use multiple computers, the setup is a bit more complex:

  • On each physical computer you need to start exactly one copy of lib/hasher:
lib/hasher <MASTER_IP> <MASTER_PORT> 1
  • Multiple copies of probcli configured as workers:
probcli -zmq_worker2 <MASTER_IP> <MASTER_PORT> 0
  • A single instance of probcli configured as the master:
probcli -zmq_master2 <MIN QUEUE SIZE> <MAX NR OF STATES> <MASTER_PORT> 0 <LOGFILE> <MODEL_FILE>

The minimal queue length is used to determine whether a worker is allowed to share its queue. Experiments have shown that a value between 10 and 100 works well. ParB will stop after (at least) the maximal number of states has been explored; a value of -1 will explore all states (beware of this if the state space is infinite!).
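
For illustration, a minimal sketch of the commands above for a setup with one master machine and one or more worker machines; the IP address (192.168.1.10), port (5000), minimal queue size (50), state limit (1000000) and file names (mc.log, scheduler.mch) are placeholder values:

# on every physical machine involved, start exactly one hasher:
lib/hasher 192.168.1.10 5000 1

# on each worker machine, start one worker process per core:
probcli -zmq_worker2 192.168.1.10 5000 0

# on the master machine, start a single master:
probcli -zmq_master2 50 1000000 5000 0 mc.log scheduler.mch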


As a rule of thumb, use one physical core for each of the processes. With hyperthreads model checking still becomes faster, but the speedup is only about 1/4 per additional hyperthread. For example, on a machine with 4 physical cores and 4 additional hyperthreads you can expect a speedup of roughly 4 + 4 * 0.25 = 5.

We plan to develop a control interface that allows configuring the logical network in a more convenient way and running the model checker from within ProB.

Options

You can pass additional preferences to parB.sh (and to the master):

./parB <Nr. of workers> <logfile> <file> <additional probcli options>
./probcli <additional probcli options> -zmq_master2 <MIN QUEUE SIZE> <MAX NR OF STATES> <MASTER_PORT> 0 <LOGFILE> <MODEL_FILE> <additional probcli options>

If you use -strict, ParB will stop as soon as a violation is found; otherwise it will explore the full state space (up to the maximal number of states).
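
For example, a run that passes -strict on to the workers and the master could look like this (the number of workers, log file and model file are placeholders):

./parB.sh 4 ~/parB.log scheduler.mch -strict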

Cleaning up

If something goes wrong, it may be necessary to clean up your shared memory. You can use ipcs to find out whether there are still memory blocks occupied. Removal can be done using:

ipcrm -M `ipcs | grep <YOUR USERNAME> | grep -o "0x[^ ]*" | sed ':a;N;$!ba;s/\n/ -M /g'`
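
For example, to list only the shared memory segments that belong to you (using the same <YOUR USERNAME> placeholder as above):

ipcs -m | grep <YOUR USERNAME>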

Files

parB.sh

#!/bin/bash
PROBCOMMAND=probcli

if [ $# -lt 2 ] 
then
	echo "Too few parameters ($#) for parB"
	echo "Usage ./parB <Nr. of workers> <file> [preferences to master/workers]"
	exit -1
else 
    PARB=$0
    CORES=$1
    FILE=$2

    shift
    shift
    
	echo "Running ProB in Parallel (on .prob files)"	
	# resolve the real directory of this script (following a possible symlink)
	if [ -h "$PARB" ]
	then
	    realname=`readlink "$PARB"`
	    dirname=`dirname "$realname"`
	else
	    dirname=`dirname "$PARB"`
	fi
	
	#ulimit -d unlimited
	
	echo "Directory: $dirname"

# setting the library path to find the zmq shared libraries
system=`uname`
if [ $system = "Linux" ]; then
  export LD_LIBRARY_PATH="$dirname/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
fi

echo "Directory: $dirname"

echo "Starting workers in parallel"
echo "$dirname/$PROBCOMMAND -zmq_worker <worker-id> $*"
for ((  i = 0 ;  i < $CORES;  i++  ))
do
  "$dirname/$PROBCOMMAND" -zmq_worker $i $* &
done
##Usage: ./proxy <master IP> <port start> <own IP> <queue threshold> <logfile> [<proxynumber>]
echo "Starting proxy server: $dirname/lib/proxy localhost 5000"
"$dirname/lib/proxy" localhost 5000 localhost 50 loggyMcLogface-proxy &


echo "Starting master: $dirname/$PROBCOMMAND -bf -zmq_master master $FILE $*"
time "$dirname/$PROBCOMMAND" -bf -zmq_master master $FILE $*
fi