# Home
Welcome to the Learning-based-Power-Allocation-Strategy-in-Small-Cell-Network wiki! This repository holds the UCD EEC289Q 2018 Spring Quarter final project. The project consists of:
- data pre-processing
- feature selection
- learning algorithms
- Gibbs based optimization
- conclusion
## Data Pre-processing

In this part, the data sets we work with are log files from Huawei Inst. We formatted the data so that the target is the throughput of a base station and the features are its pilot power, data power, load, number of users, and the features of its neighboring base stations. A typical data frame looks like this:
Time | Cell ID | Throughput | Load | Power | User | Neighbor |
---|---|---|---|---|---|---|
1 | 154 | 365 | 0.635 | 30 | 25 | 117,228,165,152,119 |
Finally, we store the data set in CSV files for the subsequent steps.
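As an illustration of this formatting step, the sketch below builds a data frame in the layout above and serializes it to CSV. The raw log excerpt and the semicolon-separated neighbor list are made up here, since the real Huawei log format is not shown in this wiki.

```python
import io
import pandas as pd

# Hypothetical raw log excerpt; the actual Huawei log format differs.
raw = io.StringIO(
    "Time,CellID,Throughput,Load,Power,User,Neighbor\n"
    "1,154,365,0.635,30,25,117;228;165;152;119\n"
    "2,154,371,0.642,30,27,117;228;165;152;119\n"
)

df = pd.read_csv(raw)

# Serialize the formatted data set for the following steps.
csv_text = df.to_csv(index=False)

# Split the neighbor field into individual neighbor cell IDs.
df["Neighbor"] = df["Neighbor"].str.split(";").apply(lambda ids: [int(i) for i in ids])

print(df.shape)  # (2, 7): Time, CellID, Throughput, Load, Power, User, Neighbor
```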
## Feature Selection

In this part, we perform feature selection. Reducing the dimension of the input features speeds up computation at the base station and simplifies the optimization in the later steps. To find the best features, we manually chose several subsets of the features. This table lists the candidate subsets:
Index | Features |
---|---|
0 | Power |
1 | Power, TimeAveLoad, TimeAveUsers |
2 | Power, Load, Users |
3 | Power, Load, Users, TimeAveLoad, TimeAveUsers |
4 | Power, Neighbor Power*5 |
5 | Power, Load, Users, Neighbor Power*5 |
6 | Power, Load, Users, TimeAveLoad, TimeAveUsers, Neighbor Power*5, AveLoad, AveUsers |
7 | Power, TimeAveLoad, TimeAveUsers, Neighbor Power*5 |
8 | Power, Load, Users, Neighbor Power*5, AveLoad, AveUsers |
We select the features manually rather than with a feature-selection algorithm because these features have explicit physical meaning. We then use neural networks to fit the throughput from each candidate subset, and evaluate the subsets with two metrics. The first is the R value, defined as

$$R = \frac{\sum_i (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_i (y_i - \bar{y})^2}\,\sqrt{\sum_i (\hat{y}_i - \bar{\hat{y}})^2}}$$

where $y_i$ is the measured throughput, $\hat{y}_i$ is the estimate, and the bars denote sample means. It measures how close the estimate is to the true value, but with normalization. The figure shows the R value achieved by each feature set.
The second metric is the MSE, defined as

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} \left(\hat{y}_i - y_i\right)^2$$

It simply measures the squared error between the estimate and the true value. The figure shows the MSE achieved by each feature subset.
Finally, we choose feature set 7 as our desired features; it is used in all the following procedures.
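The two metrics can be computed as below. This is a minimal sketch on synthetic data: the feature dimensions only mimic feature set 7, and an ordinary least-squares fit stands in for the neural network used in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for feature set 7 (Power, TimeAveLoad, TimeAveUsers,
# five neighbor powers); the real data comes from the Huawei logs.
X = rng.uniform(0.0, 1.0, size=(200, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=200)   # throughput target with noise

# Least-squares fit as a lightweight stand-in for the neural network.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ w

# R value: normalized correlation between estimate and ground truth.
r = np.corrcoef(y, y_hat)[0, 1]
# MSE: mean squared error between estimate and ground truth.
mse = np.mean((y_hat - y) ** 2)
print(f"R={r:.3f}  MSE={mse:.4f}")
```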
## Learning Algorithms

In this part, we learn the mapping from base-station features to base-station throughput. We try several methods: linear regression, lasso regression, support vector machine, Bayesian ridge regression, neural network, and decision tree. We use the R score and the MSE to evaluate the accuracy of each method. The following two figures show the results.
From these figures, we see that the neural network gives the best results, so we use the neural network as the function approximation.
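A comparison along these lines can be sketched with scikit-learn. This assumes that library and synthetic data; the project's actual training code and data are not shown in this wiki.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, Lasso, LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic feature/throughput data standing in for the real logs.
X = rng.uniform(size=(300, 8))
y = np.sin(3 * X[:, 0]) + X[:, 1:] @ rng.normal(size=7) + 0.05 * rng.normal(size=300)

models = {
    "linear": LinearRegression(),
    "lasso": Lasso(alpha=0.01),
    "svm": SVR(),
    "bayesian_ridge": BayesianRidge(),
    "neural_net": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "decision_tree": DecisionTreeRegressor(max_depth=5, random_state=0),
}

# Fit each model and score it with the two metrics from the previous part.
results = {}
for name, model in models.items():
    model.fit(X, y)
    y_hat = model.predict(X)
    results[name] = (r2_score(y, y_hat), mean_squared_error(y, y_hat))

for name, (r2, mse) in sorted(results.items(), key=lambda kv: -kv[1][0]):
    print(f"{name:>14}: R^2={r2:.3f}  MSE={mse:.4f}")
```

In practice the scoring should be done on a held-out test split rather than on the training data as in this sketch.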
## Gibbs Based Optimization

In this part, we solve the total-throughput optimization problem. The total throughput is defined as the sum of the throughput of all base stations, so our optimization problem is:

$$\max_{p_1, \dots, p_N} \; \sum_{i=1}^{N} f_i(p_i, U_i, L_i)$$
where the optimization variables are the pilot powers $p_i$ of all base stations, $U_i$ denotes cell $i$'s own parameters, and $L_i$ the parameters from its neighboring cells. Note that $p_i$ appears not only in $f_i$ but also in the neighboring $f_j$, which couples the variables and makes the problem very difficult. We therefore resort to a coordinate-descent algorithm: in each step, for base station $i$, we adjust its pilot power while fixing all other base stations' pilot powers. That is, in step $t$ we solve the simple subproblem

$$p_i^{(t+1)} = \arg\max_{p_i} \; f_i(p_i, U_i, L_i) + \sum_{n_i} f_{n_i}(p_{n_i}, U_{n_i}, L_{n_i})$$
where $f_{n_i}$ denotes the throughput of the neighboring base stations, which are also functions of $p_i$ through their neighbor parameters.
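The coordinate-wise update can be sketched as follows. The throughput function, the ring-shaped neighbor topology, and the grid search over pilot powers are all toy assumptions standing in for the learned neural network and the real cell layout.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                   # number of base stations
# Toy topology: each cell's two adjacent cells on a ring are its neighbors.
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

def f(i, p):
    """Toy throughput of cell i: grows with its own pilot power,
    shrinks with neighbor interference (not the project's learned model)."""
    interference = sum(p[j] for j in neighbors[i])
    return np.log1p(p[i]) - 0.05 * p[i] * interference

def total_throughput(p):
    return sum(f(i, p) for i in range(N))

grid = np.linspace(0.1, 10.0, 100)      # candidate pilot powers
p = np.full(N, 5.0)                     # initial pilot powers

for step in range(20):                  # coordinate-descent sweeps
    for i in range(N):
        # Adjust p_i while all other pilot powers stay fixed; the local
        # objective includes f_i and the neighboring f_j that depend on p_i.
        def local_obj(pi):
            q = p.copy()
            q[i] = pi
            return f(i, q) + sum(f(j, q) for j in neighbors[i])
        p[i] = max(grid, key=local_obj)

print(round(total_throughput(p), 3))
```

Because each coordinate update maximizes every term of the objective that contains $p_i$, the total throughput never decreases between sweeps.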