Assignment 2 Cloud and edge

The paper introduces the Prioritized Task Offloading Mechanism in Cloud-Fog Computing (PTOMCFIA3C), which enhances task scheduling and offloading in cloud-fog systems by categorizing tasks and utilizing an improved A3C algorithm with RCNN. It achieves significant improvements in task completion time and energy efficiency, while addressing challenges like workload variability and device diversity. However, the method's complexity, reliance on fixed task classifications, and lack of consideration for cost, system failures, and data security are noted limitations.

IoT Cloud and Edge Computing Assignment-2

Name: Himanshu Dhakad


Registration number: 219311189
CSE IoT Sec: C
Q1: What is proposed in your research paper?

The paper proposes a Prioritized Task Offloading Mechanism in Cloud-Fog Computing
(PTOMCFIA3C), which improves the scheduling and offloading of tasks in cloud-fog systems.
The mechanism groups tasks into two categories: High-Performance Computing (HPC) tasks,
which are delay-sensitive and need quick processing, and High Throughput Computing (HTC)
tasks, which are less urgent. It uses a deep reinforcement learning algorithm, the
Asynchronous Advantage Actor-Critic (A3C), improved with a Residual Convolutional Neural
Network (RCNN). This setup decides whether to send each task to nearby fog nodes or to
cloud servers, making processing faster and more energy-efficient.
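The two-level classification and the fog-versus-cloud routing described above can be sketched in a few lines. This is only an illustration of the idea: the deadline threshold, task fields, and function names below are assumptions for the sketch, not values or code from the paper.

```python
# Illustrative sketch of HPC/HTC classification and fog-vs-cloud routing.
# The 100 ms deadline threshold and the Task fields are invented for this
# example; the paper's actual classification criteria are not reproduced here.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float   # time by which the task must finish
    size_mb: float       # input data size

def classify(task: Task, hpc_deadline_ms: float = 100.0) -> str:
    """Tag a task as HPC (delay-sensitive) or HTC (delay-tolerant)."""
    return "HPC" if task.deadline_ms <= hpc_deadline_ms else "HTC"

def offload_target(task: Task) -> str:
    """Send urgent HPC tasks to a nearby fog node, HTC tasks to the cloud."""
    return "fog" if classify(task) == "HPC" else "cloud"

print(offload_target(Task("video-frame", deadline_ms=50, size_mb=2)))    # fog
print(offload_target(Task("batch-log", deadline_ms=5000, size_mb=200)))  # cloud
```

In the paper this routing decision is learned by the A3C agent rather than fixed by a rule; the sketch only shows the two task classes and the two destinations involved.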

Q2: What are the objectives achieved in the paper?

The research meets several important goals. It improves task scheduling and offloading,
reducing task completion time (makespan) by 30.2% and energy consumption by 31.25%. Tasks
are prioritized into the two categories so that urgent tasks receive quick attention by
being sent to fog nodes. The system also handles challenges such as varying workloads,
network delays, and heterogeneous devices, making the overall process smooth and efficient
for both service providers and users.

Q3: Which technique did the authors use to solve the proposed framework or the problem?

The authors combined two techniques: A3C, a reinforcement learning method that improves
its decisions based on past experience, and RCNN, which helps the system quickly analyze
task details such as size, deadline, and available resources. The combination made the
system better at deciding where to process each task, in the cloud or in the fog, leading
to faster and more efficient results.
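At the heart of A3C is the advantage, A = r + γV(s') − V(s), which tells the actor whether an action turned out better than the critic expected. The toy sketch below shows that generic actor-critic update on a two-state problem; it is not the paper's RCNN-enhanced implementation, and the learning rates, discount factor, and problem setup are all invented for illustration.

```python
# Generic one-step advantage actor-critic update on a toy problem.
# GAMMA, LR_V, and LR_PI are assumed values, not taken from the paper.
import math

GAMMA, LR_V, LR_PI = 0.9, 0.05, 0.5

V = {0: 0.0, 1: 0.0}       # critic: state-value estimates
prefs = {0: [0.0, 0.0]}    # actor: action preferences in state 0

def policy(state):
    """Softmax over action preferences."""
    exps = [math.exp(p) for p in prefs[state]]
    z = sum(exps)
    return [e / z for e in exps]

def update(state, action, reward, next_state):
    """Advantage A = r + gamma*V(s') - V(s) drives both actor and critic."""
    advantage = reward + GAMMA * V[next_state] - V[state]
    V[state] += LR_V * advantage                # critic moves toward target
    probs = policy(state)
    for a in range(len(prefs[state])):          # actor: policy-gradient step
        grad = (1.0 if a == action else 0.0) - probs[a]
        prefs[state][a] += LR_PI * advantage * grad
    return advantage

# Action 1 always yields reward 1, so its probability rises well above 0.5
# and the critic's estimate V[0] converges toward the reward of 1.
for _ in range(200):
    update(0, 1, 1.0, 1)
print(policy(0)[1], V[0])
```

In the full A3C algorithm many workers run this update asynchronously against shared parameters, and in the paper the value and policy functions are deep networks fed by the RCNN's analysis of task features rather than the lookup tables used here.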

Q4: In which experimental environment did the authors compare their proposed technique,
and with which methods?

The experiments were conducted in a simulated environment built with SimPy, a
discrete-event simulation tool that mimics real-world conditions. The tests used real
workload traces from systems such as HPC2N and NASA, along with synthetic data following
uniform and normal distributions. The simulations ran on a macOS machine with an M1 chip
and 16 GB of RAM. The proposed method was compared with three existing models: RATS-HM,
MOABCQ, and FOG-AMOSM. The results showed that PTOMCFIA3C performed considerably better
at reducing task completion time and energy consumption.
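The kind of comparison such a simulation produces can be sketched with a tiny discrete-event loop. The sketch below uses only the Python standard library (not SimPy itself), and every number in it, link delays, service rates, and task sizes, is invented for illustration; none are measurements from the paper.

```python
# Stdlib-only discrete-event sketch comparing fog vs cloud completion time.
# All latency and workload numbers are invented for illustration; the paper's
# experiments used SimPy with real HPC2N and NASA traces.
import heapq

def simulate(tasks, link_delay, service_rate):
    """Makespan when each task crosses the link, then is served FIFO."""
    events, server_free, finish_times = [], 0.0, []
    for arrival, size in tasks:
        heapq.heappush(events, (arrival + link_delay, size))
    while events:
        ready, size = heapq.heappop(events)
        start = max(ready, server_free)
        server_free = start + size / service_rate
        finish_times.append(server_free)
    return max(finish_times)

tasks = [(0.0, 4.0), (1.0, 2.0), (2.0, 6.0)]  # (arrival time, work units)
fog   = simulate(tasks, link_delay=0.1, service_rate=1.0)  # near, slower node
cloud = simulate(tasks, link_delay=2.0, service_rate=4.0)  # far, faster server
print(f"fog makespan={fog:.2f}, cloud makespan={cloud:.2f}")
```

With these made-up numbers the faster cloud server wins on heavy tasks despite its longer link delay, while small urgent tasks would favor the fog node's low latency, which is exactly the trade-off the paper's offloading mechanism learns to navigate.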
Q5: What are the limitations of the paper?

The method has some drawbacks. Adding the RCNN to the A3C algorithm increases system
complexity and makes it harder to scale to larger deployments. The method relies on fixed
settings for classifying tasks, which may not work well in highly dynamic environments.
Also, because the evaluation was simulation-based, the results might differ in real-world
deployments with hardware and network limitations. The study focused mainly on task
completion time and energy use and did not cover other important aspects such as cost or
system failures. Lastly, it did not consider data security, which is crucial for sensitive
applications such as healthcare.
