This repository contains the tool implementing the approach described in an ESEC/FSE19 submission, together with the subjects used in the evaluation. However, only the test generator, in its two variants (diversity-based and search-based), is included here. The Apogen tool, which extracts the page objects of a given web application, is available here.
A virtual machine running Ubuntu server 18.04 is available for download here. The virtual machine contains this repository and all the dependencies needed to run DIG on the web application subjects.
The virtual machine was created with VirtualBox and exported in the `.ova` format, a platform-independent distribution format for virtual machines. It can be imported by any virtualization software, although it was tested only on VirtualBox. Instructions on how to import an `.ova` virtual machine in VirtualBox and VMWare Fusion are listed below:
- VirtualBox: https://www.techjunkie.com/ova-virtualbox/
- VMWare Fusion: https://pubs.vmware.com/fusion-5/index.jsp?topic=%2Fcom.vmware.fusion.help.doc%2FGUID-275EF202-CF74-43BF-A9E9-351488E16030.html
The minimum amount of RAM to assign to the virtual machine is 4 GB.
Login credentials:
- username: `ubuntu`
- password: `fse2019`
If the automatic setup worked, you can skip to the run experiments section. Otherwise, proceed to the manual setup section.
- Java JDK 1.8 (https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)
- Maven 3.6.0 (https://maven.apache.org/download.cgi)
- Chrome browser 71.0 (November 2018). It is not possible to download that version from the official Google repository; however, previous versions of Chrome, including 71.0, can be downloaded from an unofficial repository
- chromedriver 2.46.628411 (http://chromedriver.chromium.org/). This repository contains that version for Mac and Linux. Once you have unzipped the chromedriver archive for your OS, make sure the `chromedriver` binary is on the system `PATH`; in other words, it should be possible to run the `chromedriver` command from any location in your file system
- Selenium WebDriver 3.3.1 (https://www.seleniumhq.org/projects/webdriver/). This library is also available in this repository. To install it, move the `selenium` directory you find here to `~/.m2/org/seleniumhq` (create the `org` and `seleniumhq` directories if they do not exist; `~/.m2` is created by Maven to store all installed Java libraries)
- Docker CE (https://docs.docker.com/install/). Make sure you can run `docker` commands without `sudo` (it is sufficient to run `sudo usermod -aG docker ${USER}` after installing Docker CE and then reboot the system)
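Before moving on, it can be handy to check which of the required tools are already installed. The sketch below is a quick sanity check (it only inspects the `PATH`; it does not verify the exact versions listed above):

```shell
# Report which of the required command-line tools are on the PATH.
# Version checks (JDK 1.8, Maven 3.6.0, chromedriver 2.46) are left to the reader.
for tool in java mvn chromedriver docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```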
DIG has been tested on macOS Mojave 10.14.3 and Ubuntu (18.04 LTS and 16.04 LTS).
Before running the experiments (assuming that `~` indicates the path to the home directory on your system):
- clone the repository into `~/workspace` (create the `workspace` folder if it does not exist): `cd ~/workspace && git clone https://github.com/matteobiagiola/FSE19-submission-material-DIG.git`, assuming that the directory `~/workspace` is empty
- install evosuite: `cd ~/workspace/evosuite && mvn clean install -DskipTests`
- compile each subject: `cd ~/workspace/fse2019/<application_name> && mvn clean compile`, where `<application_name>` is one of `dimeshift|pagekit|splittypie|phoenix|retroboard|petclinic`
- download the docker web application images. The instructions to run each web application are in the corresponding folders (`fse2019/<application_name>`):
  - `docker pull dockercontainervm/dimeshift:latest` (README)
  - `docker pull dockercontainervm/pagekit:latest` (README)
  - `docker pull dockercontainervm/splittypie:latest` (README)
  - `docker pull dockercontainervm/phoenix-trello:latest` (README)
  - `docker pull dockercontainervm/retroboard:latest` (README)
  - `docker pull dockercontainervm/petclinic:latest` (README)
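The per-subject compile and image-pull steps above can be scripted in one pass. The sketch below only prints the commands (a dry run) so you can inspect them before piping the output to `sh`; note that the docker image for the phoenix subject is named `phoenix-trello`:

```shell
# Dry run: print the compile and docker-pull command for every subject.
# The docker image for the phoenix subject is named phoenix-trello.
apps="dimeshift pagekit splittypie phoenix retroboard petclinic"
for app in $apps; do
  image=$app
  if [ "$app" = "phoenix" ]; then image="phoenix-trello"; fi
  echo "cd ~/workspace/fse2019/$app && mvn clean compile"
  echo "docker pull dockercontainervm/$image:latest"
done
```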
In each subject directory there is a `runExp.sh` script which can be used to run test generation experiments. Let us take the `dimeshift` application as an example. The run experiments script for the dimeshift application can be found here. The script takes four arguments:
- the first argument is the number of times (iterations) the test generator is run. In the experiments carried out in the paper the number of iterations is 15, in order to cope with the randomness of the algorithms used to generate tests
- the second argument is the test generator to be used. Available values are `SUBWEB|DIGS|DIGSI|ALL`. `SUBWEB` uses the search-based test generator, which is described in detail in this publication. `DIGS` and `DIGSI` use the diversity-based test generator, considering only the sequence of methods (`DIGS`) or the sequence and the input values (`DIGSI`). The `ALL` value means that three experiments will start in parallel, one for each of the previous options (respectively `SUBWEB`, `DIGS`, `DIGSI`). Keep in mind that three docker containers and three Java programs will start, hence your machine should have a suitable amount of CPU and RAM
- the third argument distinguishes the `cut` (class under test) to select in the project that implements the POs for the target application (dimeshift). Available values are `MANUAL|APOGEN`. If `APOGEN` is chosen, the `cut` that models the navigation graph obtained with the POs generated by Apogen is used
- the fourth argument is the search budget, in seconds, granted to the test generator
Examples of usage of the `runExp.sh` script are listed below. The following commands assume you are in the `~/workspace/fse2019/dimeshift` folder (`~` indicates the path to the home directory on your system):
- `./runExp.sh 1 SUBWEB APOGEN 60` (`SUBWEB` is called `Mosa` in the evosuite code)
- `./runExp.sh 1 DIGS APOGEN 60` (`DIGS` is called `AdaptiveSequence` in the evosuite code)
- `./runExp.sh 1 DIGSI APOGEN 60` (`DIGSI` is called `AdaptiveComplete` in the evosuite code)
With these arguments each experiment runs for one minute, with a single iteration per test generator.
The `runExp.sh` script starts the docker container for the given application and removes it when the test generation ends. The script then saves a directory on `~/Desktop` named `test<application_name><Mosa|AdaptiveSequence|AdaptiveComplete>_0`, containing the results of the test generation. The results folder contains the logs, a directory called `evosuite-report` with the `statistics.csv` file holding the coverage details, and a `main` directory (so called because `main` is the package in which the `cut` is placed in the `dimeshift` project) containing the Java file with the generated test cases.
For the `dimeshift` application, the content of the `statistics.csv` file obtained by running each test generator on the virtual machine is listed below:
`SUBWEB`: `~/Desktop/testdimeshiftMosa_0/evosuite_report/statistics.csv`

```
LineCoverage,Statements_Executed,Tests_Executed,Fitness_Evaluations,Total_Time
0.71,0,24,23,169003
```

`DIGS`: `~/Desktop/testdimeshiftAdaptiveSequence_0/evosuite_report/statistics.csv`

```
LineCoverage,Statements_Executed,Tests_Executed,Fitness_Evaluations,Total_Time
0.74,0,15,14,163740
```

`DIGSI`: `~/Desktop/testdimeshiftAdaptiveComplete_0/evosuite_report/statistics.csv`

```
LineCoverage,Statements_Executed,Tests_Executed,Fitness_Evaluations,Total_Time
0.85,0,19,18,123090
```
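Since `statistics.csv` is plain CSV, the coverage value can be extracted with standard tools. The sketch below recreates the SUBWEB sample shown above in `/tmp` purely for illustration (in a real run you would point `awk` at the file on the Desktop):

```shell
# Recreate the SUBWEB statistics.csv sample shown above, then extract
# the LineCoverage value from the data row.
cat > /tmp/statistics.csv <<'EOF'
LineCoverage,Statements_Executed,Tests_Executed,Fitness_Evaluations,Total_Time
0.71,0,24,23,169003
EOF
awk -F, 'NR==2 {print "LineCoverage: " $1}' /tmp/statistics.csv
```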
Generated tests are independent by construction. In each project that implements the POs for the target application, the developer needs to implement a `Reset` class that cleans up the state of the application under test after each test case execution; in other words, each test should be executed with the application under test in the same state. For the `dimeshift` application the `Reset` class can be found here. It implements a no-argument `reset` method that simply resets the SQL database (which runs inside the application's docker container but is accessible from the host machine through proper port bindings). If the application under test does not need a reset (closing the browser after each test execution is enough to clean up the state), the `reset` method does not need to be implemented.
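As a purely illustrative sketch of what such a database reset amounts to (the real `Reset` class is Java code in the PO project; the container name, database engine, credentials, and SQL below are invented for the example), the command is only printed here, not executed:

```shell
# Hypothetical illustration only: the actual container name, DB engine,
# credentials, and SQL statements differ per subject application.
echo "docker exec <container_name> mysql -u <user> -e 'DROP DATABASE app; CREATE DATABASE app;'"
```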
First check out the `code-coverage` branch: `git checkout code-coverage`.
In order to run the test suites generated by the tool and measure the coverage of the JavaScript client code, you need to:
- install nodejs
- move to the express-istanbul directory and type `npm install` to install the dependencies of the code coverage server project. From the root directory of the repository: `cd code-coverage-server/express-istanbul && npm install`
- install the dependency of the codecoverage project. From the root directory of the repository: `cd codecoverage && mvn clean install`
- compile all the projects containing the POs for each subject system. For instance, for `dimeshift`, type `cd fse2019/dimeshift && mvn clean compile`
The test-generation-results directory contains one test suite for each subject system, generated using the search-based technique (`SUBWEB`). For example, this is the test suite generated for the `dimeshift` application.
The run-exp-dimeshift.sh script in the `codecoverage` project takes care of taking the test suite, applying some source code transformations to measure the coverage of the client-side JavaScript code, and executing it. The script also starts the docker container and instruments the JavaScript code inside it (each docker container contains a script called `run-code-instrumentation.sh` that instruments the JavaScript code before starting the application). At the end of the execution, the script stops the container and kills the `code-coverage-server` process, which collected the coverage information while the selenium test suite was running.
Logs of the execution are saved on the Desktop. Coverage reports are instead saved in the test suite directory; for example, in the case of `dimeshift`, the coverage reports are saved here. Each coverage report is a text file. An example of such a file for the `dimeshift` application is reported below:
```
Name: Statements Percentage: 25.5% Fraction: 709/2780
Name: Branches Percentage: 20.74% Fraction: 224/1080
Name: Functions Percentage: 23.49% Fraction: 132/562
Name: Lines Percentage: 25.6% Fraction: 708/2766
```
The `istanbul` tool used to measure coverage reports statement, branch, function, and line coverage.
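Since each report is a fixed-format text file, the four percentages can be pulled out with `awk`. The sketch below recreates the dimeshift sample shown above in `/tmp` for illustration:

```shell
# Recreate the sample coverage report shown above and print each
# metric name with its percentage (fields 2 and 4 of each line).
cat > /tmp/coverage.txt <<'EOF'
Name: Statements Percentage: 25.5% Fraction: 709/2780
Name: Branches Percentage: 20.74% Fraction: 224/1080
Name: Functions Percentage: 23.49% Fraction: 132/562
Name: Lines Percentage: 25.6% Fraction: 708/2766
EOF
awk '{print $2 ": " $4}' /tmp/coverage.txt
```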
The results of the experiments for all the application subjects are available at this link.