Seminar Fog Computing
The fog layer consists of fog nodes, which are essentially industrial controllers, gateway computers,
switches, and I/O devices that provide computing, storage, and connectivity services.
The fog computing model extends the cloud closer to the edge of your network where the devices
reside, and facilitates edge intelligence.
The fog computing paradigm was conceived as a complement to cloud computing, offering services at the edge of the network, i.e. close to end users and end devices, with the aim of reducing application latency.
This makes fog computing a practical solution for latency-sensitive workloads, delivering high-quality multimedia applications and processing data with low delay and packet loss.
Fog Computing Architecture
• Fog computing comprises three layers:
Terminal layer, i.e. IoT devices
Fog layer, i.e. routers, switches, gateways, access points, base stations
Cloud layer, i.e. servers and storage devices
Hierarchical architecture of Fog Computing
Terminal layer
is the lowest layer in the fog computing architecture, i.e. the layer
closest to end devices and the physical environment, such as mobile
devices, smart vehicles, sensors, and so on. These devices are
geographically distributed and are responsible for sensing data about
physical objects or events and sending the sensed data to the upper
layer (i.e. the fog layer) to be processed and stored.
Fog layer
is the middle layer in the fog computing architecture, located at the
edge of the network. It contains a large number of fog nodes, such as
routers, switches, gateways, access points, and base stations, which
are distributed between end devices and the cloud. These nodes can
compute, transmit, and temporarily store sensed data, so real-time
analysis and latency-sensitive applications can be carried out in the
fog layer. Moreover, fog nodes connect to cloud data centers through
the IP core network to obtain more powerful computation and storage
capabilities.
Cloud layer
consists of multiple high-performance servers and storage devices and
provides various application services. It supports massive
computational analysis and permanent storage of enormous amounts of
data. However, it differs from the traditional cloud architecture in
that not all computing and storage tasks go through the cloud: based
on load demand, the cloud core layer is managed and scheduled by
control strategies to improve the utilization of cloud resources.
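As a rough illustration of the hierarchy described above, the following Python sketch (all class names, thresholds, and sensor readings are invented for this example) shows sensed readings flowing from terminal devices through a fog node, which handles latency-sensitive checks and aggregation locally before forwarding only summaries to the cloud:

```python
# Hypothetical sketch of the terminal -> fog -> cloud data path.

class CloudLayer:
    """Permanent storage and heavy analysis (simplified to a list here)."""
    def __init__(self):
        self.archive = []

    def store(self, summary):
        self.archive.append(summary)

class FogNode:
    """Edge node: temporary storage, filtering, and aggregation."""
    def __init__(self, cloud, batch_size=3):
        self.cloud = cloud
        self.buffer = []              # temporary local storage
        self.batch_size = batch_size

    def ingest(self, reading):
        # Real-time, latency-sensitive check happens locally at the edge.
        if reading["value"] > 100:    # hypothetical alert threshold
            print(f"ALERT from {reading['device']}: {reading['value']}")
        self.buffer.append(reading)
        # Forward only an aggregate to the cloud, saving bandwidth.
        if len(self.buffer) >= self.batch_size:
            avg = sum(r["value"] for r in self.buffer) / len(self.buffer)
            self.cloud.store({"avg": avg, "count": len(self.buffer)})
            self.buffer.clear()

# Terminal layer: geographically distributed sensors produce readings.
cloud = CloudLayer()
fog = FogNode(cloud)
for device, value in [("sensor-1", 42), ("sensor-2", 120), ("sensor-1", 60)]:
    fog.ingest({"device": device, "value": value})

print(cloud.archive)  # one aggregate reaches the cloud, not three raw readings
```

The key design point is that the fog node absorbs the raw data volume and reacts immediately to urgent events, while the cloud only receives compact summaries for permanent storage.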
Characteristics of Fog Computing
• Greater business agility: With the right tools, fog computing
applications can be developed and deployed quickly. In addition, these
applications can program machines to operate according to each
customer's needs.
• Low latency: The fog can support real-time services (e.g., gaming,
video streaming).
• Geographical and large-scale distribution: Fog computing can
provide distributed computing and storage resources to large, widely
distributed applications.
• Lower operating expense: Network bandwidth is saved by processing
selected data locally instead of sending it to the cloud for analysis.
• Flexibility and heterogeneity: Fog computing allows collaboration
across different physical environments and infrastructures among
multiple services.
Challenges
1- Choice of Virtualization Technology:
Virtualization is the main method of providing isolated environments in
fog computing, and it is also a major factor in fog node performance.
An intuitive question here is therefore "hypervisor vs. container: which
one should we choose?" Cloudlet uses hypervisor virtualization, while
ParaDrop uses a lightweight solution: containers, i.e. OS-level
virtualization. The design choices differ because the underlying
hardware has different capabilities. However, one disadvantage of
container-based virtualization is a loss of flexibility: for example,
it cannot host different kinds of guest operating systems on a single
infrastructure node. Therefore, we prefer hypervisor virtualization
over container-based virtualization.
2- Fight with Latency: Many factors introduce high latency into
application or service performance on fog computing platforms. High
latency ruins user experience and satisfaction, since fog computing
targets delay-sensitive applications and services. Several sources of
latency arise in fog computing:
• Data aggregation. The geo-distributed nature of the fog computing
paradigm means there will be delay if data aggregation is not finished
before data processing. There are, however, several ways to mitigate
this problem, such as applying data partitioning/filtering and
exploiting locality in the hierarchy to reduce the computation volume
on higher layers.
• Resource provisioning. There will be delay in provisioning resources
for certain tasks, especially on resource-limited fog nodes. Carefully
designed scheduling, using priority and mobility models, may be
needed.
• Node mobility, churn, and failure. Fog computing needs to be
resilient to node mobility, churn, and failure. A system monitor and a
location service can work together to provide the information needed
to choose mitigation strategies such as checkpointing, rescheduling,
and replication.
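Two of the mitigation ideas above, priority-aware provisioning and rescheduling tasks away from a failed or departed node, can be sketched as follows. The node names, slot capacities, task priorities, and the failure event are all invented for this illustration:

```python
import heapq

# Hypothetical fog nodes with limited capacity (number of task slots).
capacity = {"fog-A": 2, "fog-B": 1}
assigned = {node: [] for node in capacity}
cloud_offloaded = []  # fallback when no fog node has a free slot

def schedule(task):
    """Place a task on the first fog node with a free slot, else offload."""
    for node, slots in capacity.items():
        if len(assigned[node]) < slots:
            assigned[node].append(task)
            return
    cloud_offloaded.append(task)

# Tasks as (priority, name); lower number = more urgent.
tasks = [(2, "log-upload"), (0, "collision-alert"), (1, "video-frame")]
heapq.heapify(tasks)  # min-heap: urgent tasks are provisioned first
while tasks:
    schedule(heapq.heappop(tasks))

# Node churn/failure: fog-B leaves the network, so its tasks
# must be rescheduled onto the remaining nodes (or the cloud).
failed_tasks = assigned.pop("fog-B")
del capacity["fog-B"]
for task in failed_tasks:
    schedule(task)
```

Because the heap orders tasks by priority, the latency-critical work claims the scarce fog slots first; when fog-B disappears, its lower-priority task falls back to the cloud rather than evicting urgent work from fog-A.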
3- Network Management: Network management will be a burden for fog
computing unless we reap the benefits of SDN and NFV techniques.
However, seamlessly integrating SDN and NFV into fog computing is not
easy and will certainly be a big challenge. The difficulty lies in
redesigning the southbound, northbound, and east-west-bound APIs to
include the necessary fog computing primitives; a naive integration
may not meet the design goals of latency and efficiency.
4- Security and Privacy: Security and privacy should be considered at
every stage of fog computing platform design, and we regard them as
one of the biggest challenges facing fog computing. To address them,
we need to apply access control and intrusion detection systems, which
require support from every layer of the platform.
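As a toy illustration of the access control mentioned above (the roles, actions, and policy table are invented for this sketch, not a standard fog API), a fog node might check every incoming request against a role-based policy before touching local data:

```python
# Hypothetical role-based access control check at a fog node.
POLICY = {
    "admin":  {"read", "write", "configure"},
    "device": {"write"},   # sensors may only push data
    "viewer": {"read"},
}

def authorize(role, action):
    """Return True only if the role's policy grants the action."""
    return action in POLICY.get(role, set())

print(authorize("device", "write"))      # granted: sensor pushing data
print(authorize("device", "configure"))  # denied: could trigger an alert
```

In a real deployment each layer (terminal, fog, cloud) would enforce such checks with authenticated identities rather than plain role strings, but the principle of denying anything the policy does not explicitly grant is the same.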