Mobile Edge Computing (MEC) is a promising paradigm for meeting the rising computation demands of users in emerging wireless networks, and especially in the Internet of Things (IoT). In this paper, we study buffer-aided relay-assisted MEC in systems with a discrete transmission timeline and block-fading channels. We consider a hierarchical network composed of a source, a buffer-aided relay, and another node at a higher level of the hierarchy. The source sends its tasks to the relay, which in turn randomly assigns the received tasks to its own computing server or to the server of the next node in the hierarchy. We provide a framework that accounts for the delays in both the transmission and computation buffers, which facilitates the derivation of an expression for the Average Response Time (ART) of the system. Based on the ART and the system's average power consumption in each slot, we introduce the Average Response Energy (ARE) as a novel metric to capture energy efficiency in MEC. Accordingly, we propose two offloading schemes with corresponding problem formulations, namely the Minimum ART (MART) and the Minimum ARE (MARE) schemes, which optimize the ART or the ARE while keeping the system queues stable. We analyze the properties of the formulated problems in terms of their feasible sets and objective functions, and, based on these properties, we propose effective solution methods. Using extensive simulations, we validate the presented analysis and show the effectiveness of the proposed schemes in comparison with various baseline methods.
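As a rough illustration of the queueing intuition behind an ART metric, the toy discrete-time simulation below models the relay's random assignment with Bernoulli arrivals and per-slot Bernoulli services. All names and parameter values (`arrival_p`, `offload_prob`, the service probabilities) are hypothetical placeholders, not the paper's system model; the sketch only shows how an empirical average response time can be measured over a slotted timeline.

```python
import random

def simulate_art(arrival_p=0.3, offload_prob=0.4, local_serve_p=0.5,
                 remote_serve_p=0.8, slots=100_000, seed=0):
    """Toy slotted simulation: each slot a task arrives w.p. arrival_p and
    is routed to the remote server w.p. offload_prob, otherwise kept local.
    Each queue completes its head-of-line task w.p. its service probability.
    Returns the empirical average response time (slots spent in the system)."""
    rng = random.Random(seed)
    local, remote = [], []          # FIFO queues of arrival slots
    total_delay = completed = 0
    for t in range(slots):
        if rng.random() < arrival_p:
            (remote if rng.random() < offload_prob else local).append(t)
        for queue, serve_p in ((local, local_serve_p), (remote, remote_serve_p)):
            if queue and rng.random() < serve_p:
                total_delay += (t + 1) - queue.pop(0)   # completion slot - arrival slot
                completed += 1
    return total_delay / completed

art = simulate_art()
```

With both arrival rates below the corresponding service rates, the queues are stable and the empirical ART settles at a small number of slots; sweeping `offload_prob` shows the routing trade-off the MART scheme optimizes.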


We investigate the problem of co-designing computation and communication in a multi-agent system (e.g., a sensor network or a multi-robot team). We consider the realistic setting where each agent acquires sensor data and is capable of local processing before sending updates to a base station, which is in charge of making decisions or monitoring phenomena of interest in real time. Longer processing at an agent leads to more informative updates but also larger delays, giving rise to a delay-accuracy tradeoff in choosing the right amount of local processing at each agent. We assume that the available communication resources are limited due to interference, bandwidth, and power constraints. Thus, a scheduling policy needs to be designed to suitably share the communication channel among the agents. To that end, we develop a general formulation to jointly optimize the local processing at the agents and the scheduling of transmissions. Our novel formulation leverages the notion of Age of Information to quantify the freshness of data and capture the delays caused by computation and communication. We develop efficient resource allocation algorithms using the Whittle index approach and demonstrate our proposed algorithms in two practical applications: multi-agent occupancy grid mapping in time-varying environments, and ride sharing in autonomous vehicle networks. Our experiments show that the proposed co-design approach leads to a substantial performance improvement (18-82% in our tests).
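To make the Age-of-Information bookkeeping concrete, here is a minimal sketch assuming a single shared channel and a greedy max-age scheduler as a simple stand-in for the Whittle-index policy; `proc_delay` is a hypothetical per-agent processing time, and a delivery resets that agent's age to its processing delay, since the delivered sample is already that old when it arrives.

```python
def simulate_aoi(num_agents=4, horizon=1000, proc_delay=(1, 2, 3, 4)):
    """Toy AoI simulation: one agent transmits per slot (greedy: the one
    with the largest age); its age resets to proc_delay[i] while every
    other agent's age grows by one. Returns the time-averaged AoI."""
    age = [1] * num_agents
    total = 0
    for _ in range(horizon):
        i = max(range(num_agents), key=lambda k: age[k])  # schedule stalest agent
        for k in range(num_agents):
            age[k] = proc_delay[i] if k == i else age[k] + 1
        total += sum(age)
    return total / (horizon * num_agents)

avg_age = simulate_aoi()
```

Raising an agent's `proc_delay` makes its delivered updates more informative in the paper's setting but also raises the floor its age resets to, which is exactly the delay-accuracy tradeoff the joint formulation optimizes.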

This paper presents a new approach for tree-based regression, such as simple regression trees, random forests, and gradient boosting, in settings involving correlated data. We show the problems that arise when implementing standard tree-based regression models, which ignore the correlation structure. Our new approach explicitly takes the correlation structure into account in the splitting criterion, stopping rules, and fitted values in the leaves, which induces some major modifications of the standard methodology. The superiority of our new approach over tree-based models that do not account for the correlation is supported by simulation experiments and real data analyses.

Model-free techniques, such as machine learning (ML), have recently attracted much interest for physical-layer design, e.g., symbol detection, channel estimation, and beamforming. Most of these ML techniques employ centralized learning (CL) schemes and assume the availability of datasets at a parameter server (PS), demanding the transmission of data from edge devices, such as mobile phones, to the PS. Exploiting the data generated at the edge, federated learning (FL) has been proposed recently as a distributed learning scheme, in which each device computes the model parameters and sends them to the PS for model aggregation while the datasets are kept intact at the edge. Thus, FL is more communication-efficient and privacy-preserving than CL and is applicable to wireless communication scenarios in which the data are generated at the edge devices. This article presents the recent advances in FL-based training for physical-layer design problems. The effectiveness of FL relative to CL is presented in terms of communication overhead, at the cost of a slight loss in learning accuracy. The design challenges, such as model, data, and hardware complexity, are also discussed in detail along with possible solutions.

Intelligent reflecting surface (IRS) can effectively control the wavefront of the impinging signals and has emerged as a cost-effective promising solution to improve the spectrum and energy efficiency of wireless systems. Most existing research on IRS assumes that the hardware operations are perfect. However, both the physical transceiver and the IRS suffer from inevitable hardware impairments in practice, which leads to severe system performance degradation and increases the complexity of beamforming optimization. Consequently, existing research on IRS, including channel estimation, beamforming optimization, spectrum and energy efficiency analysis, etc., cannot directly apply to the case of hardware impairments. In this paper, by taking hardware impairments into consideration, we conduct the joint transmit and reflect beamforming optimization and reevaluate the system performance. First, we characterize the closed-form estimators of the direct and cascaded channels in both the single-user and multi-user cases and analyze the impact of hardware impairments on channel estimation accuracy. Then, the optimal transmit beamforming solution is derived, and a gradient descent method-based algorithm is also proposed to optimize the reflect beamforming. Moreover, we analyze three types of asymptotic channel capacities with respect to the transmit power, the antenna number, and the reflecting element number. Finally, in terms of the system energy consumption, we analyze the power scaling law and the energy efficiency. To the best of our knowledge, this is the first work to comprehensively evaluate the impact of hardware impairments on IRS-assisted wireless systems.

The design of autonomous underwater vehicles (AUVs) and their docking stations has been a popular research topic for several decades. Although many AUV and dock designs have been proposed, materialized, and commercialized, most of these existing designs prioritize the functionality of the AUV over the dock, or vice versa; there has been limited formal research in analytical optimization for AUV docking systems. In this paper, a multidisciplinary optimization framework is presented with the aim of filling this theoretical gap. We propose a co-design optimization method that optimizes multiple design parameters governing the archetype of an AUV and its docking system. Capturing the user design intents in the optimization process, the proposed method produces a set of optimal design parameters that satisfies a set of predefined bounds, constraints, and initial conditions. Three cases of design optimization are reported for different design intents. Each optimal design found in the three cases is compared to an existing system to show the validity of this design optimization framework.

Many of the devices used in Internet-of-Things (IoT) applications are energy-limited, and thus supplying energy while maintaining seamless connectivity for IoT devices is of considerable importance. In this context, we propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from reconfigurable intelligent surface (RIS)-aided unmanned aerial vehicle (UAV) communications. In particular, in a first phase, IoT devices harvest energy from the UAV through wireless power transfer; then, in a second phase, the UAV collects data from the IoT devices through information transmission. To characterize the agility of the UAV, we consider two scenarios: a hovering UAV and a mobile UAV. Aiming at maximizing the total network sum-rate, we jointly optimize the trajectory of the UAV, the energy harvesting scheduling of the IoT devices, and the phase-shift matrix of the RIS. We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate. Numerical results illustrate the effectiveness of our proposed techniques, in terms of the UAV's flight-path optimization and the network throughput, compared with other benchmark schemes. Given the strict requirements of the RIS and UAV, the significant improvement in processing time and throughput performance demonstrates that our proposed scheme is well suited for practical IoT applications.

Today's growth in the volume of wireless devices, coupled with the promise of supporting data-intensive 5G-and-beyond use cases, is driving the industry to deploy more millimeter-wave (mmWave) base stations (BSs). Although mmWave cellular systems can carry a larger volume of traffic, dense deployment, in turn, increases the BS installation and maintenance cost, which has been largely ignored in their utilization. In this paper, we present an approach to the problem of mmWave BS deployment in urban environments by minimizing BS deployment cost subject to BS association and user equipment (UE) outage constraints. By exploiting macro diversity, which enables each UE to be associated with multiple BSs, we derive an expression for UE outage that integrates physical blockage, UE access-limited blockage, and signal-to-interference-plus-noise-ratio (SINR) outage. The minimum-cost BS deployment problem is then formulated as an integer non-linear program (INP). The combinatorial nature of the problem motivates the pursuit of the optimal solution by decomposing the original problem into two separable subproblems, i.e., cell coverage optimization and minimum subset selection. We provide the optimal solution and theoretical justifications for each subproblem. Simulation results demonstrating the UE outage guarantees of the proposed method are presented. Interestingly, the proposed method produces a unique distribution of the macro-diversity orders over the network that is distinct from other benchmarks.
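The minimum subset selection subproblem has the flavor of weighted set cover. The sketch below is not the paper's optimal solution but the classic greedy heuristic, shown only to illustrate the combinatorial structure; `candidate_bs`, `users`, and `cost` are hypothetical inputs, and the sketch assumes the candidate set can cover every user.

```python
def greedy_min_bs_subset(candidate_bs, users, cost):
    """Greedy weighted set cover: repeatedly pick the BS with the lowest
    cost per newly covered user until all users are covered.
    candidate_bs: dict mapping a BS id to the set of users it can serve.
    cost: dict mapping a BS id to its deployment cost.
    Assumes the union of candidate sets covers all users."""
    uncovered, chosen = set(users), []
    while uncovered:
        best = min((b for b in candidate_bs if candidate_bs[b] & uncovered),
                   key=lambda b: cost[b] / len(candidate_bs[b] & uncovered))
        chosen.append(best)
        uncovered -= candidate_bs[best]
    return chosen

# tiny hypothetical instance: two cheap partial-coverage BSs vs. one expensive full-coverage BS
chosen = greedy_min_bs_subset(
    candidate_bs={'a': {1, 2}, 'b': {2, 3}, 'c': {1, 2, 3}},
    users={1, 2, 3},
    cost={'a': 1.0, 'b': 1.0, 'c': 2.5})
```

Greedy carries the well-known logarithmic approximation guarantee for weighted set cover; the paper instead derives the optimal solution for its decomposed subproblem.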

We resolve the min-max complexity of distributed stochastic convex optimization (up to a log factor) in the intermittent communication setting, where $M$ machines work in parallel over the course of $R$ rounds of communication to optimize the objective, and during each round of communication, each machine may sequentially compute $K$ stochastic gradient estimates. We present a novel lower bound with a matching upper bound that establishes an optimal algorithm.
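The intermittent communication pattern can be sketched with a simple Local SGD loop: M machines each take K sequential stochastic gradient steps per round, and iterates are averaged at each of the R communication rounds. This only illustrates the M/R/K structure, not the paper's optimal algorithm; `grad_oracle` and the 1-D noisy quadratic in the usage example are hypothetical.

```python
import random

def local_sgd(grad_oracle, x0, M=4, R=10, K=5, lr=0.1, seed=0):
    """Local SGD sketch of intermittent communication: within a round,
    each of the M machines runs K stochastic gradient steps from the
    shared iterate; machines communicate only at round boundaries,
    where their iterates are averaged."""
    rng = random.Random(seed)
    x = x0
    for _ in range(R):                       # R communication rounds
        local_iterates = []
        for _ in range(M):                   # M machines in parallel (simulated)
            xm = x
            for _ in range(K):               # K sequential stochastic steps
                xm -= lr * grad_oracle(xm, rng)
            local_iterates.append(xm)
        x = sum(local_iterates) / M          # one round of communication
    return x

# usage on a noisy 1-D quadratic f(x) = x^2 / 2, so grad f(x) = x plus noise
noisy_grad = lambda x, rng: x + rng.gauss(0.0, 0.1)
x_final = local_sgd(noisy_grad, x0=5.0)
```

On this strongly convex toy objective, the averaged iterate contracts toward the minimizer at 0; the paper's contribution is pinning down the min-max optimal rate achievable under exactly this M/R/K budget.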

Location Routing is a fundamental planning problem in logistics, in which strategic location decisions on the placement of facilities (depots, distribution centers, warehouses etc.) are taken based on accurate estimates of operational routing costs. We present an approximation algorithm, i.e., an algorithm with proven worst-case guarantees both in terms of running time and solution quality, for the general capacitated version of this problem, in which both vehicles and facilities are capacitated. Before, such algorithms were only known for the special case where facilities are uncapacitated or where their capacities can be extended arbitrarily at linear cost. Previously established lower bounds that are known to approximate the optimal solution value well in the uncapacitated case can be off by an arbitrary factor in the general case. We show that this issue can be overcome by a bifactor approximation algorithm that may slightly exceed facility capacities by an adjustable, arbitrarily small margin while approximating the optimal cost by a constant factor. In addition to these proven worst-case guarantees, we also assess the practical performance of our algorithm in a comprehensive computational study, showing that the approach allows efficient computation of near-optimal solutions for instance sizes beyond the reach of current state-of-the-art heuristics.

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
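The smoothing idea behind DRS can be illustrated in one dimension: replace a non-smooth f by its Gaussian smoothing f_gamma(x) = E[f(x + gamma*Z)], whose gradient admits the zeroth-order estimator E[(f(x + gamma*Z) - f(x)) * Z] / gamma. The sketch below is a scalar Monte Carlo version of that estimator only, not the distributed algorithm, and all parameter values are illustrative.

```python
import random

def smoothed_grad_1d(f, x, gamma=0.01, samples=200, seed=0):
    """Monte Carlo estimate of the gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma*Z)], Z ~ N(0, 1), using the variance-reduced
    zeroth-order form E[(f(x + gamma*Z) - f(x)) * Z] / gamma."""
    rng = random.Random(seed)
    fx = f(x)
    acc = 0.0
    for _ in range(samples):
        z = rng.gauss(0.0, 1.0)
        acc += (f(x + gamma * z) - fx) * z
    return acc / (samples * gamma)

# usage: f(x) = |x| is non-smooth at 0, but its smoothing has gradient ~1 at x = 1
g_est = smoothed_grad_1d(abs, 1.0)
```

Subtracting the baseline f(x) inside the expectation leaves the mean unchanged while sharply reducing the estimator's variance, which is why small `gamma` remains usable with a modest number of samples.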

Vishrant Tripathi, Luca Ballotta, Luca Carlone, Eytan Modiano · 9 Aug 2021
Assaf Rabinowicz, Saharon Rosset · 6 Aug 2021
Ahmet M. Elbir, Anastasios K. Papazafeiropoulos, Symeon Chatzinotas · 6 Aug 2021
6 Aug 2021
Khoi Khac Nguyen, Antonino Masaracchia, Tan Do-Duy, H. Vincent Poor, Trung Q. Duong · 5 Aug 2021
Miaomiao Dong, Taejoon Kim, Minsung Cho, Kangeun Lee, Sungrok Yoon · 5 Aug 2021
5 Aug 2021
Felipe Carrasco Heine, Antonia Demleitner, Jannik Matuschke · 5 Aug 2021
Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié · 1 Jun 2018
