
Edge Computing Project Ideas List: Part 2

We have already discussed edge computing and its various features in the previous tutorial. Let's extend the list of ideas discussed in Edge Computing Project Ideas List Part 1.

Deep Reinforcement Learning-Based Offloading Scheduling in Vehicular Edge Computing

Description of the project:

A new computing paradigm called vehicular edge computing (VEC) has the potential to greatly improve the capabilities of vehicle terminals (VTs) to handle resource-demanding in-car applications with low latency and high energy efficiency. Due to the variety of task characteristics, the dynamic nature of the wireless environment, and the frequent handover events brought on by vehicle movements, an ideal scheduling strategy should consider both the location (local computation or offloading) and the timing (order and time of execution) of each task. In this article, we look into a crucial computation offloading scheduling problem in a typical VEC scenario, where a VT travelling along an expressway wants to schedule the tasks waiting in its queue to reduce the long-term cost by balancing task delay and energy usage.

Implementation of this project:

  • We use a carefully constructed Markov decision process (MDP) to describe this challenging stochastic optimization problem, and deep reinforcement learning (DRL) is used to handle the vast state space.
  • The cutting-edge proximal policy optimization (PPO) technique serves as the foundation of our DRL implementation.
  • The policy and value functions are approximated using a parametric network architecture and a convolutional neural network (CNN), which can efficiently extract representative features.
  • Several changes are made to the state and reward representations to increase the training effectiveness.
  • The benefits of the suggested DRL-based offloading scheduling method are amply illustrated by extensive simulation trials and thorough comparisons with six well-known baseline methods and their heuristic combinations.
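
To make the MDP formulation concrete, here is a minimal Python sketch of one way such an offloading environment could be set up: the state holds the queued tasks and a channel gain, the action chooses local execution or offloading for the head-of-line task, and the reward is a negative weighted sum of delay and energy. The class name VecOffloadEnv and every numeric parameter are illustrative assumptions, not the paper's exact model.

    import random

    class VecOffloadEnv:
        """Toy MDP for offloading scheduling in vehicular edge computing (illustrative only)."""
        def __init__(self, n_tasks=5, w_delay=0.5, w_energy=0.5):
            self.n_tasks = n_tasks
            self.w_delay, self.w_energy = w_delay, w_energy
            self.reset()

        def reset(self):
            # Each task: (data size in Mbit, required CPU cycles in Mcycles)
            self.queue = [(random.uniform(0.5, 2.0), random.uniform(50, 200))
                          for _ in range(self.n_tasks)]
            self.channel = random.uniform(0.2, 1.0)   # normalized channel gain to the edge server
            return self._state()

        def _state(self):
            return (tuple(self.queue), self.channel)

        def step(self, action):
            """action: 0 = execute head-of-line task locally, 1 = offload it."""
            size, cycles = self.queue.pop(0)
            if action == 0:                       # local execution
                delay = cycles / 100.0            # local CPU: 100 Mcycles/s (assumed)
                energy = 0.8 * cycles / 100.0     # local power: 0.8 W (assumed)
            else:                                 # offload over the wireless link
                rate = 10.0 * self.channel        # uplink rate in Mbit/s (assumed)
                delay = size / rate + cycles / 1000.0   # transmission + edge execution
                energy = 1.2 * size / rate        # transmit power: 1.2 W (assumed)
            reward = -(self.w_delay * delay + self.w_energy * energy)
            self.channel = min(1.0, max(0.2, self.channel + random.uniform(-0.1, 0.1)))
            done = len(self.queue) == 0
            return self._state(), reward, done

A PPO agent with a CNN feature extractor, as described above, would then be trained to maximize the cumulative reward of such an environment.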

Utilizing mobile-edge cloud computing for intelligent task prediction and computation offloading

Description of the project:

Edge computing is currently at the forefront of mobile distributed computing research. It addresses traditional cloud computing's high transmission-latency problem and offers mobile devices high-reliability, high-bandwidth computing services. However, the offloading techniques of simple edge devices no longer suit the MEC architecture, given mobile consumers' growing needs and services.

Implementation of this project:

  • In conjunction with artificial intelligence technologies, this research proposes a novel MEC design based on intelligent computation offloading.
  • A computation offloading and task migration strategy based on task prediction is proposed, taking into account the data size of computation tasks from mobile devices and the performance characteristics of edge computing nodes.
  • The edge computing offloading model is optimized with the help of a task-migration scheme for edge cloud scheduling, computation task prediction based on the LSTM algorithm, and a computation offloading technique for mobile devices based on task prediction.
  • Experiments demonstrate that our suggested design and technique can reduce the total task completion time even as the data volume and the number of subtasks increase.
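
As a rough sketch of the LSTM-based task prediction step, the PyTorch snippet below learns to predict the next task's data size from a sliding window of recent sizes. The window length, layer sizes, and the name TaskSizePredictor are assumptions made purely for illustration.

    import torch
    import torch.nn as nn

    class TaskSizePredictor(nn.Module):
        """Predicts the next computation task's data size from a history window (illustrative)."""
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                    # x: (batch, window, 1) of past task sizes
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])      # predicted size of the next task

    model = TaskSizePredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    history = torch.rand(64, 10, 1)              # 64 windows of the last 10 task sizes (dummy data)
    target = torch.rand(64, 1)                   # actual size of the following task

    for _ in range(100):                         # simple training loop
        opt.zero_grad()
        loss = loss_fn(model(history), target)
        loss.backward()
        opt.step()

The predicted size can then drive the offloading decision, for example by migrating tasks expected to exceed a threshold to an edge node ahead of time.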

Deep reinforcement learning-based task offloading research in a mobile edge environment

Description of the project:

Users' demand for fast networks is rising due to the quick development of Internet technologies and mobile terminals. To lessen network latency and boost user service quality, mobile edge computing suggests a distributed caching technique to deal with the effects of high data traffic on communication networks. A deep learning approach is suggested in this research to address the task offloading issue faced by multi-service nodes.

Implementation of this project:

Experiments are performed using the Google Cluster Trace dataset and the iFogSim simulation toolkit. The final results demonstrate that the task offloading approach based on the DDQN algorithm positively impacts energy consumption and cost, validating the potential of applying deep learning algorithms on edge devices.
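
The distinguishing step of DDQN is how the bootstrap target is formed: the online network selects the next action and the target network evaluates it. A minimal PyTorch sketch of just that target computation is shown below; the network shapes, dimensions, and the function name ddqn_targets are assumed for illustration.

    import torch
    import torch.nn as nn

    def ddqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
        """Double DQN target: r + gamma * Q_target(s', argmax_a Q_online(s', a))."""
        with torch.no_grad():
            best_actions = online_net(next_states).argmax(dim=1, keepdim=True)   # online net picks actions
            next_q = target_net(next_states).gather(1, best_actions).squeeze(1)  # target net evaluates them
            return rewards + gamma * (1.0 - dones) * next_q

    # Tiny demo with made-up dimensions: 4 state features, 3 offloading actions.
    online_net = nn.Linear(4, 3)
    target_net = nn.Linear(4, 3)
    batch = 8
    targets = ddqn_targets(online_net, target_net,
                           rewards=torch.rand(batch),
                           next_states=torch.rand(batch, 4),
                           dones=torch.zeros(batch))
    print(targets.shape)   # torch.Size([8])

In the offloading setting, the state would encode queued tasks and node loads, and each discrete action would map a task to local execution or to a particular service node.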

Multilevel vehicular edge-cloud computing networks with advanced deep learning-based computational offloading

Description of the project:

Recently, the focus has shifted from vehicular cloud computing (VCC) to vehicular edge computing (VEC) due to the promise of low-latency communication and effective bandwidth use. An improved computational offloading algorithm for multilevel vehicular edge-cloud computing networks is presented in this paper.

Implementation of this project:

  • An integrated computation offloading and resource allocation model is formulated as a binary optimization problem to reduce the overall system's time and energy costs while ensuring the effective utilization of shared infrastructure among various vehicles.
  • This problem is NP-hard, and its solution is computationally costly, especially for large numbers of vehicles.
  • As a result, we reformulate it in an equivalent supervised-learning form and suggest a distributed deep learning method, using several deep neural networks running in parallel, to identify close-to-optimal computation offloading decisions.
  • Finally, simulation results demonstrate that, compared to benchmark solutions, the proposed approach converges quickly and greatly reduces the overall system cost.
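
One simple way to picture the distributed deep learning idea is the sketch below: several independently initialized DNNs each propose a binary offloading vector, and the candidate with the lowest system cost is kept. The cost function, dimensions, and the omitted training loop are placeholder assumptions rather than the paper's exact method.

    import torch
    import torch.nn as nn

    def system_cost(decision, tasks):
        """Placeholder weighted time/energy cost of a binary offloading vector."""
        local = (1 - decision) * tasks * 1.0      # assumed local cost per unit task
        edge = decision * tasks * 0.4             # assumed offloading cost per unit task
        return float((local + edge).sum())

    n_tasks, n_nets = 8, 5
    nets = [nn.Sequential(nn.Linear(n_tasks, 64), nn.ReLU(),
                          nn.Linear(64, n_tasks), nn.Sigmoid())
            for _ in range(n_nets)]               # several parallel DNNs with different initializations

    tasks = torch.rand(n_tasks)                   # task sizes observed by the vehicles (dummy)
    with torch.no_grad():
        candidates = [(net(tasks) > 0.5).float() for net in nets]   # quantize to binary decisions
    best = min(candidates, key=lambda d: system_cost(d, tasks))
    print("chosen offloading vector:", best.tolist())

In a complete method, the best (state, decision) pairs would typically be stored in a replay memory and used as labels to keep training the parallel networks.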

Optimal Cloudlet Placement and User-to-Cloudlet Allocation in Wireless Metropolitan Area Networks

Description of the project:

While portable mobile devices have limited computational capacity, mobile apps are becoming more and more computation-intensive. Offloading an application's work to nearby cloudlets, which are made up of clusters of computers, is an effective approach to shortening the time an application takes to finish running on a mobile device. Although a sizable body of research on mobile cloudlet offloading exists, the placement of cloudlets in a given network to enhance the performance of mobile applications has received very little attention.

Implementation of this project:

  • In this research, we investigate the placement of cloudlets and the allocation of mobile users to them in a wireless metropolitan area network (WMAN).
  • To solve this problem, we develop an algorithm that places cloudlets in user-dense areas of the WMAN.
  • We then allocate mobile users to the deployed cloudlets while balancing their workloads.
  • In addition, we evaluate the approach through simulation. The simulation results show that the suggested method performs very promisingly.
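
A simple way to approximate the idea of placing cloudlets in user-dense areas and then allocating users is the greedy heuristic sketched below; the coordinates, coverage radius, capacity, and number of cloudlets are made-up parameters used only for illustration.

    import math, random

    random.seed(1)
    users = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]   # user positions
    candidate_sites = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]   # possible AP locations
    K, RADIUS, CAPACITY = 5, 2.0, 60                # cloudlets to place, coverage radius, users per cloudlet

    def covered(site, remaining):
        return [u for u in remaining if math.dist(site, u) <= RADIUS]

    # Greedy placement: repeatedly pick the candidate site covering the most unassigned users.
    placed, unassigned = [], list(users)
    for _ in range(K):
        best_site = max(candidate_sites, key=lambda s: len(covered(s, unassigned)))
        placed.append(best_site)
        unassigned = [u for u in unassigned if math.dist(best_site, u) > RADIUS]

    # Allocation: each user goes to the nearest placed cloudlet that still has capacity.
    load = {site: 0 for site in placed}
    assignment = {}
    for u in users:
        for site in sorted(placed, key=lambda s: math.dist(s, u)):
            if load[site] < CAPACITY:
                assignment[u] = site
                load[site] += 1
                break

    print("cloudlet loads:", list(load.values()))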

Optimal Joint Scheduling and Cloud Offloading for Mobile Applications

Description of the project:

Supporting computationally demanding apps on resource-constrained mobile devices requires cloud offloading. In this article, we offer wireless-aware joint scheduling and computation offloading (JSCO) for multi-component mobile applications, in which an optimal decision is made about which components should be offloaded and in what order they should be scheduled. The JSCO technique moves away from a compiler-predetermined scheduling order for the components in favor of a more wireless-aware scheduling order, giving the solution additional degrees of freedom.

Implementation of this project:

  • By processing relevant components in parallel on the mobile device and in the cloud, the suggested approach can reduce execution times for particular component dependency graph architectures.
  • Subject to constraints on communication delay, application execution time, and component precedence order, we define a net utility that trades off the energy saved by the mobile device against these costs.
  • The resulting linear optimization problem is solved using real data measurements from multi-component apps running on an HTC smartphone, with offloading over WiFi to Amazon EC2.
  • The efficiency is further examined using various component interaction graph topologies and sizes.
  • According to the results, longer program runtime deadlines, faster wireless speeds, and lower offload data payloads all result in greater energy savings.

Mobile Cloud Computing: Distributed Pricing for Effective Application Offloading

Description of the project:

To encourage fair and high-quality cloud services, we suggest three different price structures: a multi-dimensional price corresponding to multi-dimensional resource allocation, a penalty price, and a benefit discount factor that encourages more even resource provisioning across various cloud dimensions.

Implementation of this project:

  • We provide a distributed price-adjustment mechanism for effective resource allocation and QoS-conscious offloading scheduling based on these prices.
  • This mechanism manages application offloading in mobile cloud systems using the proposed pricing structures.
  • We demonstrate that the method converges to the equilibrium core allocation in a finite number of iterations, at which the mobile cloud system maximizes the overall system benefit and achieves Pareto efficiency.
  • The simulation findings show that our suggested pricing scheme greatly enhances the system's performance.
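
The distributed price-adjustment mechanism can be pictured as a tatonnement-style loop: prices of over-demanded resource dimensions are raised, under-demanded ones are lowered, and each mobile user re-decides how much to offload at the new prices. The demand model, step size, and resource names in the sketch below are illustrative assumptions, not the paper's exact scheme.

    # Illustrative multi-dimensional price adjustment (CPU, memory, bandwidth).
    capacity = {"cpu": 100.0, "mem": 80.0, "bw": 60.0}           # cloud resources on offer
    prices = {r: 1.0 for r in capacity}                          # initial unit prices
    users = [{"budget": 30.0, "need": {"cpu": 1.0, "mem": 0.6, "bw": 0.4}} for _ in range(50)]

    def demand(user, prices):
        """How many task units the user offloads: spend the budget at the current bundle price."""
        bundle_price = sum(prices[r] * user["need"][r] for r in prices)
        return user["budget"] / bundle_price

    for step in range(200):                                      # price-adjustment iterations
        total = {r: 0.0 for r in capacity}
        for u in users:
            d = demand(u, prices)
            for r in capacity:
                total[r] += d * u["need"][r]
        excess = {r: (total[r] - capacity[r]) / capacity[r] for r in capacity}
        if all(abs(e) < 0.01 for e in excess.values()):          # approximate equilibrium reached
            break
        for r in capacity:                                       # raise over-demanded prices, lower others
            prices[r] = max(0.1, prices[r] * (1 + 0.1 * excess[r]))

    print(prices)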

An Edge NOde Resource Management Framework

Description of the project:

As more and more devices are connected to the Internet, current computing methods that use the cloud as a centralized host will become unworkable. This highlights the importance of fog computing, which combines cloud computing with edge computing on network nodes such as routers, base stations, and switches. However, managing edge nodes is a barrier that must be overcome to realize fog computing.

Implementation of this project:

  • The goal of this work is to address the problem of edge resource management.
  • Mechanisms are suggested for provisioning edge node resources.
  • Edge NOde Resource Management (ENORM) is the first edge node management framework that we have created.
  • The framework's viability is demonstrated using a Pokémon Go-like online gaming use case.
  • Application latency is decreased by 20-80% when ENORM is used, while data transfer and connection frequency between the network edge and the cloud are lowered by up to 95%.
  • These findings demonstrate how fog computing can raise customer satisfaction and service levels.

Increasing the Reliability of Cloud Services by Using a Proactive Fault-Tolerance Approach

Description of the project:

The widespread usage of cloud computing services for hosting commercial and industrial applications has made cloud service dependability a major concern for both consumers and cloud service providers. The issue of coordination between many virtual machines (VMs) that work together to finish a concurrent application is rarely considered by existing solutions.

Challenges faced in this project:

  • The outcomes of concurrent application execution will be flawed without VM coordination.
  • Two fault tolerance strategies, reactive and proactive, have been proposed to increase cloud service dependability.

To solve this issue, we first suggest an initial virtual cluster allocation mechanism based on the characteristics of the VMs, which helps to cut down on the data center's overall network resource and energy usage. Then, we model CPU temperature to anticipate a deteriorating physical machine (PM), and we migrate VMs from an identified deteriorating PM to suitable target PMs. Finally, the selection of the best target PMs is formulated and solved using an improved particle swarm optimization method. In terms of overall transmission overhead, total network resource usage, and execution time while running several concurrent applications, we compare our technique with five related alternatives. Experimental results show how efficient and effective our strategy is.
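
The particle swarm optimization step for choosing target PMs could be sketched as below: continuous particle positions are rounded into a VM-to-PM choice and scored by a placeholder cost that balances target-PM load. All parameters and the cost function are assumptions for illustration only.

    import random

    random.seed(0)
    N_VMS, N_PMS = 6, 4
    vm_size = [random.uniform(1, 4) for _ in range(N_VMS)]       # VM memory footprints (GB, dummy)
    pm_load = [random.uniform(0.2, 0.6) for _ in range(N_PMS)]   # current utilization of healthy PMs

    def cost(position):
        """Placeholder objective: balance target-PM load and penalize overloading any PM."""
        choice = [min(N_PMS - 1, max(0, int(round(x)))) for x in position]
        load = list(pm_load)
        for vm, pm in enumerate(choice):
            load[pm] += vm_size[vm] / 10.0
        return max(load) + 0.1 * sum(load)

    SWARM, ITERS, W, C1, C2 = 20, 100, 0.7, 1.5, 1.5
    particles = [[random.uniform(0, N_PMS - 1) for _ in range(N_VMS)] for _ in range(SWARM)]
    velocity = [[0.0] * N_VMS for _ in range(SWARM)]
    pbest = [p[:] for p in particles]
    gbest = min(pbest, key=cost)

    for _ in range(ITERS):
        for i, p in enumerate(particles):
            for d in range(N_VMS):
                r1, r2 = random.random(), random.random()
                velocity[i][d] = (W * velocity[i][d]
                                  + C1 * r1 * (pbest[i][d] - p[d])
                                  + C2 * r2 * (gbest[d] - p[d]))
                p[d] = min(N_PMS - 1, max(0.0, p[d] + velocity[i][d]))
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=cost)

    print("VM -> PM mapping:", [int(round(x)) for x in gbest])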

Task assignment for mobile edge computing that considers user mobility

Description of the project:

To provide ubiquitous processing and storage solutions for mobile and big data applications, Mobile Edge Computing (MEC) has developed as a promising computing paradigm. Numerous small-cell base stations (SBSs) are deployed in MEC to create a mobile edge network (MEN).

Implementation of this project:

  • Mobile users can often access these SBSs directly.
  • The computational tasks are first transferred from mobile users to the MEN, where they are subsequently carried out on one or more particular SBSs.
  • The offloading decision has been studied extensively, while the total completion delay on the MEN side has received less attention.
  • This study aims to decrease the MEN's task execution delay through task scheduling, considering task characteristics, user mobility, and network constraints.
  • The issue is formulated as a constraint satisfaction problem, and a fast scheduling heuristic is suggested as a lightweight solution.
  • To investigate how well the suggested approach performs, we conduct simulation experiments.
  • According to the findings, our work can greatly shorten task completion times in MENs and, consequently, in MEC.
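
A lightweight heuristic consistent with this description is an earliest-deadline-first rule that places each task on the SBS that can finish it before its deadline while the user is still in range, as in the sketch below; the dwell-time model and all numbers are assumed for illustration.

    # tasks: (id, cycles in Mcycles, deadline in s); sbs_rate: each SBS's CPU speed in Mcycles/s.
    tasks = [("t1", 300, 1.0), ("t2", 120, 0.5), ("t3", 500, 2.0)]
    sbs_rate = {"sbs_a": 800.0, "sbs_b": 500.0}
    dwell_time = {"sbs_a": 1.5, "sbs_b": 3.0}        # how long the user stays in each SBS's cell
    finish = {s: 0.0 for s in sbs_rate}              # time at which each SBS's queue drains

    schedule = {}
    for tid, cycles, deadline in sorted(tasks, key=lambda t: t[2]):   # earliest deadline first
        feasible = []
        for s, rate in sbs_rate.items():
            done = finish[s] + cycles / rate
            if done <= deadline and done <= dwell_time[s]:            # meets deadline and mobility limit
                feasible.append((done, s))
        if feasible:
            done, s = min(feasible)
            finish[s] = done
            schedule[tid] = (s, done)
        else:
            schedule[tid] = ("rejected", None)

    print(schedule)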

Deadline-Aware Task Scheduling in Mobile Edge Computing Systems

Description of the project:

A novel computing strategy called mobile edge computing (MEC) allows computation tasks generated by mobile devices (MDs) to be either offloaded to MEC servers or performed locally. Since computation tasks must be completed by certain deadlines and MDs never have enough battery power, it is critical to plan how to allocate each task's energy efficiently. In contrast to other studies, we investigate a more complicated scenario in which many moving MDs share heterogeneous MEC servers, and we define the minimum-energy-consumption problem in deadline-aware MEC systems. Since this problem is shown to be NP-hard, two approximation techniques are suggested, one for the single-MD case and one for the multiple-MD case. Theoretical analysis and simulations are used to evaluate how well the proposed algorithms perform.

A Privacy-Preserving Data Gathering Scheme for IoT Applications Assisted by Mobile Edge Computing

Description of the project:

As 5G and Internet of Things technologies advance quickly, many mobile devices with specific sensing capabilities access the network and generate significant volumes of data. Low latency and quick data access are requirements of IoT applications that the typical cloud computing architecture cannot meet. These issues can be resolved, and the system's execution efficiency increased, with the help of mobile edge computing (MEC).

Implementation of this project:

  • In this research, we provide a data aggregation technique for MEC-assisted Internet of Things applications.
  • Three participants are included in our approach: the terminal devices (TDs), the edge servers (ESs), and the cloud center (PCC).
  • The data produced by the TDs is encrypted before being sent to the ESs, which then aggregate the data and send it on to the PCC.
  • Finally, PCC can retrieve the combined plaintext data using its private key.
  • Our system offers source authenticity and integrity and ensures the TDs' data privacy.
  • Our plan, ideal for MEC-assisted IoT applications, can cut communication costs in half compared to the previous paradigm.
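
A toy version of the encrypt-aggregate-decrypt flow, assuming an additively homomorphic cryptosystem such as Paillier via the python-paillier (phe) package, is sketched below. The real scheme also provides source authentication and integrity, which this sketch omits.

    from phe import paillier   # pip install phe (python-paillier)

    # The cloud center generates the key pair and publishes the public key.
    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # Terminal devices (TDs) encrypt their sensed readings with the public key.
    readings = [23, 17, 42, 8]                         # dummy sensor values from four TDs
    ciphertexts = [public_key.encrypt(r) for r in readings]

    # The edge server (ES) aggregates ciphertexts without learning any individual reading.
    aggregated = ciphertexts[0]
    for c in ciphertexts[1:]:
        aggregated = aggregated + c                    # homomorphic addition of ciphertexts

    # Only the cloud center, holding the private key, recovers the aggregate plaintext.
    total = private_key.decrypt(aggregated)
    print(total == sum(readings))                      # True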

Maximum Processing Capacity in Power-Constrained Edge Computing for IoT Networks

Description of the project:

Next-generation networks benefit greatly from mobile edge computing (MEC), which seeks to provide low-latency computing services and increase the processing capacity of the Internet of Things (IoT). For MEC IoT networks with limited power and unpredictable tasks, we examine a resource allocation mechanism in this research to maximize the available processing capacity (APC). The APC, which describes the computational power and speed available to a serviced IoT device, is first defined. The relationship between task partitioning and resource allocation is then examined to obtain its expression.

Implementation of this project:

  • This expression is used to study the power allocation strategy for a single MEC server with a single subcarrier and to identify the factors influencing the APC improvement.
  • An APC optimization problem with a generic utility function is constructed for the multiuser MEC system, and various important criteria for resource allocation are deduced.
  • Using these criteria, a suboptimal approach is provided to distribute the subcarriers among users.
  • A binary-search-based algorithm is presented to split the available power between the local CPU and multiple subcarriers.
  • Finally, Monte Carlo simulation is used to confirm the correctness of the suggested techniques.
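
The binary-search step can be thought of as a water-filling-style bisection: find the water level at which the power spread over the subcarriers exactly meets the power budget. The sketch below is a generic version of that idea rather than the paper's exact algorithm, and the channel gains and budget are dummy values.

    # Bisection ("water-filling") power split across subcarriers under a total power budget.
    gains = [0.9, 0.5, 0.3, 0.1]      # normalized channel gains of the subcarriers (dummy)
    P_TOTAL = 2.0                      # total power budget in watts (dummy)

    def allocation(level):
        """Power per subcarrier for a given water level: max(0, level - 1/gain)."""
        return [max(0.0, level - 1.0 / g) for g in gains]

    lo, hi = 0.0, P_TOTAL + 1.0 / min(gains)           # the water level lies in this bracket
    for _ in range(60):                                # 60 bisection steps gives ample precision
        mid = (lo + hi) / 2.0
        if sum(allocation(mid)) > P_TOTAL:
            hi = mid
        else:
            lo = mid

    powers = allocation(lo)
    print([round(p, 3) for p in powers], "sum =", round(sum(powers), 3))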

Online task scheduling in edge computing using a repeated game

Description of the project:

An edge service provider's (ESP's) primary duty is to dynamically assign resources to tasks arriving at the edge in response to requests. This role is difficult, though, because decisions must be made on the spot, without knowing when future requests will arrive, what resources they will need to complete their tasks, or how resources should be managed over time.

Implementation of this project:

  • First, we model this issue as a repeated game with long-term and short-term decisions.
  • To overcome these difficulties, we provide an online scheduling system based on a repeated game that allocates the various jobs to the relevant available resources.
  • To maximize the overall satisfaction of its tasks, a user with a demand first determines, within its budget, the unit price it will pay for computational resources in every game round.
  • Our suggested methods reach a non-cooperative game equilibrium between the ESP and the users.
  • The ESP then assumes the role of the follower: it matches resources with tasks and distributes them among edge centers with various resource types to maximize its long-term profit from the users (edge mobile devices), based on the prices users offer in different rounds.
  • The success of our idea is then assessed in terms of task assignments.
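
One round of such a leader-follower interaction could look like the sketch below: users announce the unit price they are willing to pay within their budgets, and the ESP greedily serves the highest-paying requests within its capacity. The price-update rule and all numbers are illustrative assumptions.

    import random

    random.seed(2)
    ESP_CAPACITY = 10                                       # resource units per round at the edge
    users = [{"id": i, "budget": random.uniform(5, 15),
              "demand": random.randint(1, 4),               # resource units the task needs
              "price": 1.0} for i in range(6)]

    for round_no in range(5):
        # Follower (ESP): serve the highest unit-price requests first, within capacity.
        served, capacity = set(), ESP_CAPACITY
        for u in sorted(users, key=lambda u: u["price"], reverse=True):
            if u["demand"] <= capacity and u["price"] * u["demand"] <= u["budget"]:
                served.add(u["id"])
                capacity -= u["demand"]
        # Leaders (users): unserved users raise their bid for the next round, served users hold.
        for u in users:
            if u["id"] not in served:
                u["price"] = min(u["budget"] / u["demand"], u["price"] * 1.2)
        profit = sum(u["price"] * u["demand"] for u in users if u["id"] in served)
        print(f"round {round_no}: served {sorted(served)}, ESP profit {profit:.2f}")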

Multi-User Multi-Task Offloading in Green Mobile Edge Cloud Computing

Description of the project:

By using the resources already present at the network edge, Mobile Edge Cloud Computing (MECC) has emerged as an appealing method for increasing the storage and computing capabilities of Mobile Devices (MDs). In this study, we consider computation offloading to a mobile edge cloud comprising a collection of Wireless Devices (WDs), each equipped with a device for harvesting solar energy from the ambient environment. Additionally, several MDs want to offload their work to the mobile edge cloud simultaneously.

Implementation of this project:

  • We first construct the multiuser multi-task resource provisioning problem for green MECC.
  • Then we use an alternating heuristic approach to determine the energy harvesting policy (how much energy to harvest at each WD) and the task offloading schedule: the set of offloading requests to be admitted into the mobile edge cloud, the set of WDs delegated to each admitted request, and the amount of workload to be processed at the assigned WDs.
  • The task offloading scheduling problem is then shown to be NP-hard, and greedy maximal schedulers are introduced for centralized and distributed solutions.
  • Also covered are the proposed schemes' performance limits.
  • The effectiveness of the suggested algorithms is evaluated thoroughly.
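
A bare-bones greedy maximal scheduler in the spirit of this description is sketched below: requests are considered in decreasing order of workload and admitted only if the residual harvested energy of the WDs can absorb them. The energy model and all values are made-up placeholders.

    # Greedy maximal scheduling of offloading requests onto energy-harvesting wireless devices (WDs).
    requests = [("r1", 8.0), ("r2", 3.0), ("r3", 5.0), ("r4", 2.0)]   # (id, workload in Mcycles)
    wd_energy = {"wd1": 4.0, "wd2": 6.0, "wd3": 2.5}                  # harvested energy per WD (J)
    ENERGY_PER_MCYCLE = 0.6                                            # assumed energy cost of work

    admitted, assignment = [], {}
    # Consider larger workloads first so the schedule stays "maximal" in admitted work.
    for rid, work in sorted(requests, key=lambda r: r[1], reverse=True):
        remaining, used = work, []
        # Spread the request over the WDs with the most residual energy.
        for wd in sorted(wd_energy, key=wd_energy.get, reverse=True):
            share = min(remaining, wd_energy[wd] / ENERGY_PER_MCYCLE)
            if share > 0:
                used.append((wd, share))
                remaining -= share
            if remaining <= 1e-9:
                break
        if remaining <= 1e-9:                      # admit only if the full workload fits
            for wd, share in used:
                wd_energy[wd] -= share * ENERGY_PER_MCYCLE
            admitted.append(rid)
            assignment[rid] = used

    print("admitted:", admitted)
    print("assignment:", assignment)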

Resource Allocation and Task Offloading in Multi-Server Mobile-Edge Computing Networks

Description of the project:

A new concept called mobile-edge computing (MEC) enables sophisticated services and applications to be offered close to end consumers by distributing cloud computing power, in a capillary fashion, to the edge of the cellular access network.

Implementation of this project:

  • In this study, a multi-cell wireless network with MEC functionality is considered.
  • Each base station (BS) is outfitted with a MEC server, which helps mobile users perform computation-intensive jobs through task offloading.
  • To maximize users' task offloading gains, which are measured by a weighted sum of reductions in task completion time and energy consumption, the problem of joint task offloading and resource allocation is explored.

Challenges faced in this project:

  • The problem at hand is a mixed integer nonlinear program that entails jointly optimizing the task offloading decision, mobile users' uplink transmission power, and MEC server resource allocation.
  • The combinatorial structure of the problem makes it challenging and expensive to solve for the optimal answer in a large network.

To address this issue, we suggest splitting the original problem into two subproblems: a task offloading (TO) problem that optimizes the offloading decisions given the resource allocation, and a resource allocation (RA) problem with a fixed task offloading decision. We use convex and quasi-convex optimization methods to tackle the RA problem, and we provide a novel heuristic approach for the TO problem that yields a suboptimal solution in polynomial time.
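
A skeleton of this decomposition might look like the loop below: for a candidate offloading decision the RA subproblem is solved (here replaced by a simple proportional-sharing stand-in for the convex solver), and a heuristic flips individual offloading decisions while the objective keeps improving. Every formula and constant is a placeholder.

    # Decomposition sketch: heuristic task-offloading (TO) search around a resource-allocation (RA) solve.
    tasks = [1.0, 2.5, 0.8, 1.7]          # task workloads (dummy units)
    LOCAL_COST_PER_UNIT = 1.0             # cost of finishing one unit locally (assumed)
    SERVER_CAPACITY = 3.0                 # MEC server compute budget shared by offloaded tasks

    def ra_cost(offload):
        """RA subproblem stand-in: offloaded tasks share the server capacity proportionally."""
        off = [w for w, o in zip(tasks, offload) if o]
        local = sum(w for w, o in zip(tasks, offload) if not o) * LOCAL_COST_PER_UNIT
        if not off:
            return local
        # Each offloaded task's cost grows as the server gets more crowded.
        congestion = sum(off) / SERVER_CAPACITY
        return local + sum(w * 0.4 * (1 + congestion) for w in off)

    # TO heuristic: start all-local, flip the single decision that helps most, repeat.
    offload = [False] * len(tasks)
    best = ra_cost(offload)
    improved = True
    while improved:
        improved = False
        for i in range(len(tasks)):
            trial = offload[:]
            trial[i] = not trial[i]
            c = ra_cost(trial)
            if c < best - 1e-9:
                offload, best, improved = trial, c, True

    print("offload decisions:", offload, "cost:", round(best, 3))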

Simulation results demonstrate that our methodology performs almost as well as the ideal solution and that, compared to conventional methods, it greatly increases the customers' offloading utility.






