# Home
Welcome to the Learning-based-Power-Allocation-Strategy-in-Small-Cell-Network wiki! This repository hosts the UCD EEC289Q 2018 Spring Quarter final project. The project covers:
- data pre-processing
- feature selection
- learning algorithms
- Gibbs based optimization
- conclusion
## Data Pre-processing

In this part, we work with data sets derived from log files provided by Huawei Inst. We structured the data so that the target is the throughput of a base station and the features are its pilot power, data power, load, number of users, and the corresponding features of its neighboring base stations. A typical data frame looks like this:
Time | Cell ID | Throughput | Load | Power | User | Neighbor |
---|---|---|---|---|---|---|
1 | 154 | 365 | 0.635 | 30 | 25 | 117,228,165,152,119 |
Finally, we store the data set in CSV files for later use.
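The storage step can be sketched as follows. This is a minimal illustration, not the project's actual pre-processing script: the record below is the sample row from the table above, and the file name `cell_data.csv` is a placeholder.

```python
import pandas as pd

# Hypothetical parsed record; in the project these come from the Huawei log files.
records = [
    {"Time": 1, "CellID": 154, "Throughput": 365, "Load": 0.635,
     "Power": 30, "User": 25, "Neighbor": "117,228,165,152,119"},
]

df = pd.DataFrame(records)

# Store the formatted data set as CSV for the later steps.
df.to_csv("cell_data.csv", index=False)

# Reload to verify the round trip.
check = pd.read_csv("cell_data.csv")
print(check.shape)  # (1, 7)
```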
## Feature Selection

In this part, we perform feature selection. By reducing the dimension of the input features, we can accelerate the computation at the base station and simplify the optimization in the following steps. To select the best features, we manually construct several subsets of the features, listed in the table below:
Index | Features |
---|---|
0 | Power |
1 | Power, TimeAveLoad, TimeAveUsers |
2 | Power, Load, Users |
3 | Power, Load, Users, TimeAveLoad, TimeAveUsers |
4 | Power, Neighbor Power ×5 |
5 | Power, Load, Users, Neighbor Power ×5 |
6 | Power, Load, Users, TimeAveLoad, TimeAveUsers, Neighbor Power ×5, AveLoad, AveUsers |
7 | Power, TimeAveLoad, TimeAveUsers, Neighbor Power ×5 |
8 | Power, Load, Users, Neighbor Power ×5, AveLoad, AveUsers |
We select the features manually instead of using a dedicated feature selection algorithm because these features have explicit physical meaning. We then use a neural-network embedded method to test the quality of each selected subset, with two evaluation metrics. The first metric is the R score (coefficient of determination), defined as

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},$$

where $y_i$ is the true throughput, $\hat{y}_i$ the estimate, and $\bar{y}$ the mean of the true values. It measures, with normalization, how close the estimates are to the true values. The following figure shows the R value obtained with each feature set.
The second metric is the MSE, defined as

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2.$$

It directly measures the squared error between the estimates and the true values. The figure shows that different subsets of features result in different MSE values.
Finally, we choose feature set 7 as our desired feature set and use it in all the following procedures.
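The embedded evaluation described above can be sketched like this. This is an illustrative stand-in, not the project's actual experiment: the data is synthetic, the column indices assigned to Power, Load, Users, TimeAveLoad, TimeAveUsers, and the five neighbor powers are assumptions, and only two of the nine candidate subsets are shown.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in data: 10 columns playing the roles of Power, Load, Users,
# TimeAveLoad, TimeAveUsers, and the 5 neighbor powers (indices assumed).
rng = np.random.default_rng(0)
X_full = rng.uniform(0.0, 1.0, size=(600, 10))
y = 2.0 * X_full[:, 0] - 0.3 * X_full[:, 5:10].sum(axis=1) + rng.normal(0.0, 0.05, 600)

# Two of the candidate subsets from the table, as column-index lists.
subsets = {
    2: [0, 1, 2],                       # Power, Load, Users
    7: [0, 3, 4, 5, 6, 7, 8, 9],        # Power, TimeAveLoad, TimeAveUsers, Neighbor Power x5
}

results = {}
for idx, cols in subsets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X_full[:, cols], y, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
    net.fit(X_tr, y_tr)
    pred = net.predict(X_te)
    results[idx] = (r2_score(y_te, pred), mean_squared_error(y_te, pred))

print(results)
```

On this synthetic data, subset 7 contains all the informative columns while subset 2 misses the neighbor powers, so subset 7 attains the higher R score and lower MSE.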
## Learning Algorithms

In this part, we learn the function mapping from base station features to base station throughput. We try several methods: linear regression, lasso regression, support vector machine, Bayesian ridge regression, neural network, and decision tree, and evaluate their accuracy with the R score and the MSE. The following two figures show the results.

From these figures, we see that the neural network gives the best results, so we use the neural network as the function approximator.
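The comparison of learning algorithms can be sketched as below. This is a minimal illustration under assumptions, not the project's actual benchmark: the data is synthetic (with a deliberately nonlinear target so that the model ranking is visible), and all hyperparameters are placeholder choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, BayesianRidge
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in for feature set 7 (8 columns) with a nonlinear target.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(600, 8))
y = np.sin(3.0 * X[:, 0]) + 0.4 * X[:, 1] - 0.3 * X[:, 3:].sum(axis=1) \
    + rng.normal(0.0, 0.05, 600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "linear": LinearRegression(),
    "lasso": Lasso(alpha=0.01),
    "svm": SVR(),
    "bayes_ridge": BayesianRidge(),
    "neural_net": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                               random_state=0),
    "tree": DecisionTreeRegressor(max_depth=6, random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    scores[name] = (r2_score(y_te, pred), mean_squared_error(y_te, pred))
    print(name, scores[name])
```

Because the synthetic target is nonlinear in the power feature, the neural network outscores the purely linear models here, mirroring the ranking reported above.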
## Optimization

In this part, we solve the total throughput optimization problem. The total throughput is the sum of the throughput of all base stations, so our optimization problem is

$$\max_{p_1,\dots,p_N}\ \sum_{i=1}^{N} f_i(p_i, U_i, L_i),$$
where the optimization variables $p_1,\dots,p_N$ are the pilot powers of all base stations, $U_i$ are base station $i$'s own parameters, and $L_i$ are the parameters from its neighboring cells. Note that $p_i$ appears not only in $f_i$ but also in the neighboring functions $f_j$, which makes this optimization problem very difficult. We therefore resort to the coordinate descent algorithm: in each step, for base station $i$, we adjust its pilot power $p_i$ while fixing all other base stations' pilot powers. In step $t$, we solve the simple subproblem

$$p_i^{(t)} = \arg\max_{p_i}\ f_i(p_i, U_i, L_i) + \sum_{n_i} f_{n_i}(p_i),$$
where the $f_{n_i}$ are the throughput functions of the neighboring base stations, which also depend on $p_i$. We compare our method with online Q-learning.
The results indicate that our algorithm outperforms the online Q-learning baseline.
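The coordinate descent step can be sketched as follows. This is a toy illustration, not the project's implementation: a hypothetical concave throughput model stands in for the learned neural network $f_i$, the ring topology and all coefficients are assumptions, and the per-coordinate maximization is done by a simple grid search.

```python
import numpy as np

# Assumed ring topology: each cell interferes with its two neighbors.
N = 5
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
rng = np.random.default_rng(2)
a = rng.uniform(1.0, 2.0, N)   # own-power gain of each cell (hypothetical)
b = rng.uniform(0.2, 0.5, N)   # sensitivity to neighbor interference (hypothetical)

def f(i, p):
    """Toy throughput of cell i: grows with its own pilot power, shrinks with neighbors'."""
    return a[i] * np.log1p(p[i]) - b[i] * sum(p[j] for j in neighbors[i])

def total_throughput(p):
    return sum(f(i, p) for i in range(N))

def local_objective(i, q, p):
    """f_i plus the neighboring f_{ni}: the only terms of the total that depend on p_i."""
    trial = p.copy()
    trial[i] = q
    return f(i, trial) + sum(f(j, trial) for j in neighbors[i])

grid = np.linspace(0.0, 10.0, 101)   # candidate pilot powers
p = np.full(N, 5.0)                  # initial pilot powers
start = total_throughput(p)

for t in range(20):
    for i in range(N):
        # Adjust p_i while all other pilot powers stay fixed.
        p[i] = max(grid, key=lambda q: local_objective(i, q, p))

print(total_throughput(p) >= start)  # True: each sweep cannot decrease the total
```

Because each coordinate update maximizes exactly the terms of the total that involve $p_i$, every sweep is guaranteed not to decrease the total throughput, which is why the method converges to a (local) optimum.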
## Conclusion

We use machine learning to solve the small cell network optimization problem. By selecting the most important features, we reduce the feature dimension and thus increase the computational speed. We use a neural network to approximate the function from base station parameters to base station throughput. Finally, we apply the coordinate descent algorithm to solve the joint optimization problem. Our results indicate that our algorithm outperforms the Q-learning based algorithm.