The unprecedented proliferation of mobile devices and emerging mobile applications call for advanced resource allocation schemes to achieve economical and sustainable operation of cognitive wireless communications. Conventional resource allocation schemes based on iterative or alternating algorithms suffer from high implementation complexity and long processing delays when managing communication, caching, and computation resources. The analysis and prediction of 5G network behavior via AI technologies, including multimedia traffic load, network overhead, and network collisions, have paved the way for flexible caching and computing in cognitive communications, which has tremendous potential to reduce implementation complexity and enable real-time operation, and has accordingly attracted great research interest.
Due to the wide range of requirements on user experience, efficiency, and performance, and to complex network environments, the design and optimization of networks become very challenging. Future networks are expected to rely on robust intelligent algorithms that adapt network protocols and resource management to different services in the corresponding scenarios. Thus, predictive and self-aware network technologies, i.e., resource allocation for caching and computing based on the analysis and prediction of user behavior, have become hot topics. Through content offloading and/or computation offloading, users' quality of experience is improved by shorter delays. However, existing solutions do not fully account for user behavior, so prediction-based caching and computing for resource allocation remain a great challenge. Novel designs of deep-learning methods and the joint optimization of computation, caching, and communication in cognitive communications remain to be addressed.
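As a toy illustration of the prediction-based caching idea (not part of the call itself), the following Python sketch predicts per-content popularity with an exponential moving average, a simple stand-in for a learned predictor, and proactively caches the top-k items; all names, parameters, and the synthetic demand model are hypothetical.

```python
# Minimal sketch of prediction-based content caching (illustrative only).
# Assumption: per-content request counts are observed in discrete time slots;
# an exponential moving average stands in for a learned popularity predictor.
import numpy as np

def predict_popularity(history, alpha=0.3):
    """Exponentially weighted average of past request counts per content item."""
    pred = np.zeros(history.shape[1])
    for slot in history:                 # iterate over time slots
        pred = alpha * slot + (1 - alpha) * pred
    return pred

def proactive_cache(history, cache_size):
    """Cache the items predicted to be most popular in the next slot."""
    scores = predict_popularity(history)
    return np.argsort(scores)[::-1][:cache_size]

# Synthetic demand for 10 content items over 50 slots (Zipf-like popularity).
rng = np.random.default_rng(0)
popularity = 1.0 / np.arange(1, 11)
history = rng.poisson(5 * popularity, size=(50, 10))

cached = proactive_cache(history, cache_size=3)
next_slot = rng.poisson(5 * popularity)
hit_rate = next_slot[cached].sum() / next_slot.sum()
print(f"cached items: {cached}, hit rate in next slot: {hit_rate:.2f}")
```

In practice the moving-average predictor would be replaced by a learned model of user behavior, but the caching decision, ranking contents by predicted demand, follows the same pattern.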
The objective of this special section is to present state-of-the-art research on resource allocation in cognitive wireless communication networks, machine-learning-based resource allocation frameworks, and novel solutions and innovative approaches for prediction-based caching and computing.
The topics of interest include, but are not limited to:
Novel design of deep-learning and convolutional neural network approaches for prediction-based caching and computing.
Resource allocation based on the analysis and prediction of user behavior via AI technologies.
Data analytics and behavior prediction for caching and computing in cognitive communications.
AI-based joint optimization of caching and computing frameworks in cognitive communications.
Transfer learning and reinforcement learning for caching and computing in networking and communications.
Artificial intelligence and machine learning techniques and their applications for caching and computing.
Open-source AI algorithms and software for prediction-based caching and computing in networking.
Signal Processing
Special Issue on Statistical Signal Processing Solutions and Advances for Data Science: Complex, Dynamic and Large-scale Settings
Statistical Signal Processing has faced new challenges and a paradigm shift towards data science due to the increase in computational power, the explosion in the number of devices connected to the internet, and the ever-increasing volumes of data generated by today's ubiquitous communication, imaging, e-commerce, and social media. Consequently, new approaches, methods, theory, and tools are being developed by the signal processing community to account for modern complex, dynamic, and large-scale settings with complex yet hidden low-dimensional underlying structures.
This special issue will provide a modern look at recent trends and advances in statistical signal processing towards data science that account for (a) the complexity of the data, which may be represented by low-rank structures and subspaces, sparsity and missing values, or arise from the sheer variety of the data; (b) large-scale settings, which refer to high dimensionality but also to settings where the sample size is smaller than, or not much larger than, the dimension, causing asymptotically optimal methods to perform poorly; and (c) the dynamic nature of data that accumulates or streams at a fast pace.
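To make the small-sample issue in (b) concrete, the following Python sketch (illustrative only, not drawn from the call) compares the sample covariance with a simple shrinkage estimator, one basic regularization technique, when the sample size n is close to the dimension p; the identity ground truth and the fixed shrinkage weight rho are assumptions for the experiment and would be tuned or estimated in practice.

```python
# Illustrative sketch: with n close to p, the sample covariance is
# ill-conditioned, and shrinkage toward a scaled identity (a basic
# regularization technique) gives a better-behaved estimate.
import numpy as np

rng = np.random.default_rng(1)
p, n = 100, 120                      # dimension close to sample size
true_cov = np.eye(p)                 # assumed ground truth for this experiment
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X, rowvar=False)          # sample covariance estimator
rho = 0.5                            # hypothetical shrinkage weight
target = np.trace(S) / p * np.eye(p) # scaled-identity shrinkage target
S_shrunk = (1 - rho) * S + rho * target

for name, est in [("sample", S), ("shrinkage", S_shrunk)]:
    err = np.linalg.norm(est - true_cov, "fro")
    cond = np.linalg.cond(est)
    print(f"{name:9s}  Frobenius error: {err:6.2f}  condition number: {cond:8.1f}")
```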
Prospective authors are invited to submit high-quality original contributions and reviews for this Special Issue. Potential topics include, but are not limited to:
* random matrix theory
* large-scale statistical inference and learning
* robust statistics
* large-scale optimization and optimization on manifolds
* regularization techniques and sparsity-driven approaches
* new representations and models to handle such data structures, including graph signal processing, tensor data analysis and multi-linear algebra, latent-variable analysis models, and sparse signal representations and dictionaries