The slurm-X.out file is the output produced by the job, while the files beginning with an "R" are the output of Proteo; their description can be found in the manual included in this branch.
Lastly, the script Checkrun.sh indicates whether the execution has been performed correctly. The reported value should be SUCCESS or REPEATING; in either case Proteo has been compiled correctly. If the value is FAILURE, a major error appeared and it is recommended to contact the code maintainer.
### Clean Up

To clean the installation and remove compiled binaries, use:
...

To reproduce the experiments performed with Proteo, the following steps have to be followed:
1. From the main directory of this branch execute:
```bash
$ cd Results/DataRedist/Synch
$ bash ../../../Exec/runAll.sh 5 600 > runAll.txt
$ cd ../Asynch
$ bash ../../../Exec/runAll.sh 5 600 > runAll.txt
```
The script runAll.sh will create a job for each configuration file in the directory. Each configuration file will be run 5 times, and each run will have a Slurm time limit of 600 s. Executing the script in both directories creates 500 Slurm jobs in total.
2. After all the jobs have finished, some error checking must be performed; a sketch of this check is shown after this step. Execute the Checkrun.sh script in each results directory; the last line of Checkrun.txt then indicates the state of the runs in that directory. The possible values are:
- SUCCESS: the runs in the directory have been completed.
- FAILURE: a major error appeared; it is recommended to contact the code maintainer.
- REPEATING: some configuration files had an error related to monitoring times and are being repeated. The Checkrun.sh script must be executed again for that directory when the new jobs finish.
When both Checkrun.txt files report a SUCCESS state, the experiments have been completed and the raw data can be used. It is recommended to process the data before analysing the results.
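The check could look roughly like the following. This is only a sketch: the location from which Checkrun.sh has to be invoked is an assumption here, so adapt the path to wherever the script resides in your checkout.
```bash
# Sketch only: run the check in each results directory and read the final state.
# The assumption that Checkrun.sh can be invoked directly from the results
# directory may not hold; adjust the path if the script lives elsewhere.
$ cd Results/DataRedist/Synch
$ bash Checkrun.sh
$ tail -n 1 Checkrun.txt      # expect SUCCESS, REPEATING or FAILURE
$ cd ../Asynch
$ bash Checkrun.sh
$ tail -n 1 Checkrun.txt
```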
3. (Optional) When the experiments end, you can process the data. To perform this task the optional installation requisites must be met. To process the data:
```bash
$ cd Analysis/
$ python3 MallTimes.py R ../Results/DataRedist/Synch/ dataS
$ python3 MallTimes.py R ../Results/DataRedist/Asynch/ dataA
```
After these commands, you will have multiple files called dataG.pkl, dataM.pkl and dataL*.pkl. These files can be opened in Pandas as dataframes to analyse the data.
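As a minimal sketch of how the processed data could be inspected (the file names follow the description above; the contents and columns of the dataframes are not assumed):
```bash
# Minimal sketch: open one of the generated pickle files with pandas and
# print the first rows. Run it from the directory containing the .pkl files.
$ python3 -c "import pandas as pd; df = pd.read_pickle('dataG.pkl'); print(df.head())"
```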