
A Machine Learning Approach for Task and Resource Allocation in Mobile-Edge Computing-Based Networks

Author(s): Wang, Sihua; Chen, Mingzhe; Liu, Xuanlin; Yin, Changchuan; Cui, Shuguang; Vincent Poor, H.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1mg7fw0h
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Sihua | -
dc.contributor.author | Chen, Mingzhe | -
dc.contributor.author | Liu, Xuanlin | -
dc.contributor.author | Yin, Changchuan | -
dc.contributor.author | Cui, Shuguang | -
dc.contributor.author | Vincent Poor, H | -
dc.date.accessioned | 2024-02-03T03:17:45Z | -
dc.date.available | 2024-02-03T03:17:45Z | -
dc.date.issued | 2020-07-22 | en_US
dc.identifier.citation | Wang, Sihua, Chen, Mingzhe, Liu, Xuanlin, Yin, Changchuan, Cui, Shuguang, Vincent Poor, H. (2021). A Machine Learning Approach for Task and Resource Allocation in Mobile-Edge Computing-Based Networks. IEEE Internet of Things Journal, 8 (3), 1358 - 1372. doi:10.1109/jiot.2020.3011286 | en_US
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1mg7fw0h | -
dc.description.abstract | In this article, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile-edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one computational task from three types of computational tasks. Since the data size of each computational task is different, as the requested computational task varies, the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes to effectively serve the users. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multistack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and users’ information in its multiple stacks to avoid learning the same resource allocation scheme and users’ states, thus improving the convergence speed and learning efficiency. The simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard Q-learning algorithm. | en_US
dc.format.extent | 1358 - 1372 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE Internet of Things Journal | en_US
dc.rights | Author's manuscript | en_US
dc.title | A Machine Learning Approach for Task and Resource Allocation in Mobile-Edge Computing-Based Networks | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1109/jiot.2020.3011286 | -
dc.identifier.eissn | 2327-4662 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
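
The abstract describes a multistack reinforcement learning scheme in which each BS records historical resource allocation schemes and user states so that the same scheme is not learned twice for an identical state. As a rough illustration only, the Python sketch below shows one way such a record could steer exploration in plain Q-learning; the state and action spaces, the toy delay model, and all parameter values are invented placeholders and are not taken from the paper.

```python
# Illustrative sketch only (not the paper's multistack RL algorithm): a toy
# Q-learning agent for one BS that keeps a per-state record of allocation
# schemes it has already tried, so exploration does not repeat them. The
# state space, action space, delay model, and hyperparameters are invented
# placeholders for illustration.
import random
from collections import defaultdict

NUM_STATES = 5      # hypothetical discretized user-request states
NUM_ACTIONS = 4     # hypothetical (subcarrier, transmit power) schemes
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

Q = defaultdict(float)     # Q[(state, action)] -> estimated value
tried = defaultdict(set)   # record ("stack") of schemes tried per state

def toy_delay(state, action):
    """Placeholder delay model: lower delay is better."""
    rng = random.Random(state * 31 + action)   # deterministic toy environment
    return rng.uniform(1.0, 10.0)

def choose_action(state):
    # Exploration prefers schemes not yet recorded for this state, mimicking
    # the idea of not re-learning an identical allocation for the same state.
    untried = [a for a in range(NUM_ACTIONS) if a not in tried[state]]
    if untried and random.random() < EPS:
        return random.choice(untried)
    return max(range(NUM_ACTIONS), key=lambda a: Q[(state, a)])  # exploit

for episode in range(500):
    state = random.randrange(NUM_STATES)
    action = choose_action(state)
    tried[state].add(action)
    reward = -toy_delay(state, action)         # minimizing delay = maximizing reward
    next_state = random.randrange(NUM_STATES)
    best_next = max(Q[(next_state, a)] for a in range(NUM_ACTIONS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

for s in range(NUM_STATES):
    best = max(range(NUM_ACTIONS), key=lambda a: Q[(s, best := a)] if False else Q[(s, a)])
    print(f"state {s}: best scheme {best}, Q = {Q[(s, best)]:.2f}")
```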

Files in This Item:
File | Description | Size | Format
2007.10102.pdf |  | 597.36 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.