A Machine Learning Approach for Task and Resource Allocation in Mobile-Edge Computing-Based Networks
Author(s): Wang, Sihua; Chen, Mingzhe; Liu, Xuanlin; Yin, Changchuan; Cui, Shuguang; et al.
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1mg7fw0h
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Sihua | - |
dc.contributor.author | Chen, Mingzhe | - |
dc.contributor.author | Liu, Xuanlin | - |
dc.contributor.author | Yin, Changchuan | - |
dc.contributor.author | Cui, Shuguang | - |
dc.contributor.author | Vincent Poor, H | - |
dc.date.accessioned | 2024-02-03T03:17:45Z | - |
dc.date.available | 2024-02-03T03:17:45Z | - |
dc.date.issued | 2020-07-22 | en_US |
dc.identifier.citation | Wang, Sihua, Chen, Mingzhe, Liu, Xuanlin, Yin, Changchuan, Cui, Shuguang, Vincent Poor, H. (2021). A Machine Learning Approach for Task and Resource Allocation in Mobile-Edge Computing-Based Networks. IEEE Internet of Things Journal, 8 (3), 1358 - 1372. doi:10.1109/jiot.2020.3011286 | en_US |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1mg7fw0h | - |
dc.description.abstract | In this article, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile-edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one of three types of computational tasks. Since the data sizes of the computational tasks differ, the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes as the requested tasks vary in order to serve the users effectively. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multistack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and users' information in its multiple stacks to avoid learning the same resource allocation scheme and users' states, thus improving the convergence speed and learning efficiency. The simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm. | en_US |
dc.format.extent | 1358 - 1372 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | IEEE Internet of Things Journal | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | A Machine Learning Approach for Task and Resource Allocation in Mobile-Edge Computing-Based Networks | en_US |
dc.type | Journal Article | en_US |
dc.identifier.doi | doi:10.1109/jiot.2020.3011286 | - |
dc.identifier.eissn | 2327-4662 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US |
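The abstract above outlines a multistack reinforcement learning scheme in which each BS records previously used resource allocation schemes and user states so it does not waste iterations re-learning them. The sketch below is a minimal, generic illustration of that idea in a tabular Q-learning setting; the class name, stack size, environment interface, and reward definition are illustrative assumptions and do not reproduce the authors' algorithm.

```python
import random
from collections import defaultdict, deque

# Hedged sketch: tabular Q-learning with per-state "stacks" of already-tried
# actions, so exploration avoids resampling the same (state, action) pairs.
# States are assumed to be hashable (e.g., tuples encoding task requests and
# channel conditions); actions index joint (task, subcarrier, power) choices.

class MultiStackQLearner:
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1, stack_size=50):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * n_actions)               # Q-table
        self.stacks = defaultdict(lambda: deque(maxlen=stack_size))   # one stack per state

    def select_action(self, state):
        """Epsilon-greedy, but exploration prefers actions not yet in this state's stack."""
        if random.random() < self.epsilon:
            untried = [a for a in range(self.n_actions) if a not in self.stacks[state]]
            action = random.choice(untried) if untried else random.randrange(self.n_actions)
        else:
            qs = self.q[state]
            action = max(range(self.n_actions), key=lambda a: qs[a])
        self.stacks[state].append(action)   # record the scheme to avoid re-exploring it
        return action

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update; here the reward is assumed to be the
        negative of the maximal computational-plus-transmission delay among users."""
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])
```

In the paper's setting, skipping already-recorded schemes during exploration is what is credited with the faster convergence relative to standard Q-learning; the stack depth and epsilon schedule used here are placeholder values.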
Files in This Item:
File | Description | Size | Format
---|---|---|---
2007.10102.pdf | | 597.36 kB | Adobe PDF
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.