Code and data accompanying our paper "Privacy-Preserving Bandits"
@inproceedings{malekzadeh2020privacy,
	title        = {Privacy-Preserving Bandits},
	author       = {Malekzadeh, Mohammad and Athanasakis, Dimitrios and Haddadi, Hamed and Livshits, Benjamin},
	booktitle    = {Proceedings of Machine Learning and Systems (MLSys '20)},
	url = {https://proceedings.mlsys.org/paper/2020/file/42a0e188f5033bc65bf8d78622277c4e-Paper.pdf},
	volume = {2},
	pages = {350--362},
	year = {2020}
}

Public DOI: 10.5281/zenodo.3685952
- To reproduce the results of the paper, run the code in the experiments folder.
- Multi-label datasets will be downloaded automatically the first time you run the experiments.
- For the Criteo dataset, on first use, run the script experiments/Criteo/criteo_dataset/create_datasets.ipynb.
In the directory experiments/Criteo/, we have already run this script for the experiment reported in Figure 7 and provided a dataset processed with nrows=1000000000, i.e. 1 billion rows of the original dataset.
If you want to build a dataset with a different number of rows, use create_datasets.ipynb first: set this parameter (number of rows) in the notebook, build the dataset, and then run the Criteo experiment. Please see create_datasets.ipynb for more detail.
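For intuition, the nrows parameter caps how many rows of the raw log are read before preprocessing, as with pandas' read_csv. A minimal sketch, assuming a tab-separated, headerless layout like the raw Criteo log; the sample data below is made up and not the actual Criteo schema:

```python
import io
import pandas as pd

# Tiny stand-in for the raw tab-separated log (illustrative values only).
raw = io.StringIO("0\t1\t2\n1\t3\t4\n0\t5\t6\n")

# nrows bounds how many rows are loaded, just as a smaller setting in
# create_datasets.ipynb would produce a smaller dataset.
df = pd.read_csv(raw, sep="\t", header=None, nrows=2)
print(len(df))
```

For the full 1-billion-row build you would point read_csv at the downloaded Criteo file instead of the in-memory sample.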
You may need to install the packages listed in the requirements.txt file:
% pip install -r requirements.txt
Specifically, these libraries:
% pip install iteround
% pip install pairing
% pip install scikit-multilearn
% pip install arff
% pip install category_encoders
% pip install matplotlib
% pip install tensorflow
% pip install keras
