19EL016_3EL04_assignment-1 (2)

The document discusses the differences between hard and soft real-time embedded systems, highlighting that hard real-time systems must meet strict deadlines to avoid catastrophic failures, while soft real-time systems can tolerate some delays without severe consequences. It also provides an overview of ARM microcontrollers, detailing their architecture, features, and applications, as well as the importance of hardware-software co-design in optimizing embedded systems. Additionally, it covers ARM development tools and the Wireless Application Protocol (WAP) for mobile internet access.


Q.1 Write the difference between a soft real-time embedded system and a hard real-time embedded system.
● A real-time system is defined as a system in which each job has a deadline and must be strictly finished by that deadline. If a result is delayed, a huge loss may occur.

1. Hard Real-Time System:

A hard real-time system is a system whose operation is incorrect if results are not produced according to the time constraint; a missed deadline is a system failure.

For example, A. Air traffic control

B. Medical systems

2. Soft Real-Time System:

A soft real-time system is a system whose operation degrades if results are not produced according to the specified timing requirement, but occasional misses are tolerable.
For example: 1. Multimedia transmission and reception 2. Computer games
Q.2 Write a brief note on Arm Microcontroller.

ARM microcontroller

ARM processors have fewer transistors because they use a reduced instruction set, which allows a smaller IC and makes them space efficient. Most mobile electronic devices, such as tablets and smartphones, are built around these processors. By combining an ARM processor core with RAM, ROM and other peripherals on a single chip, we get an ARM microcontroller, for example the LPC2148.

ARM ARCHITECTURES

ARM machines use a 32-bit RISC load/store architecture. Memory cannot be manipulated directly; data must first be loaded into registers. The instruction set offers a variety of operations, but the main focus is to reduce the number of cycles required per instruction.

The instructions in the classic ARM ISA are all conditional. Instructions that always execute carry the condition AL; excluding AL, 14 other condition codes are available. The transistor count has grown from about 30,000 in the ARM2 to about 26 million in the Cortex-A9. A Thumb architecture supporting 16-bit instructions was also developed; it increased code density but caused a drop in performance, which was later compensated by Thumb-2.
THUMB

Processors from the ARM7TDMI onward feature the Thumb mode, which improves compiled code density. In this mode 16-bit instructions are executed, each mapped to a normal ARM instruction.

The smaller opcodes offer less functionality in Thumb: some opcodes cannot access all the registers, and only branches can be conditional. In exchange, the smaller opcodes use memory more efficiently.

In embedded hardware, usually only a small amount of RAM is accessible over a 32-bit datapath; the rest is accessed over a 16-bit path. It is therefore logical to use 16-bit Thumb code for most of the program and place the wider ARM instructions in the memory that is accessible over 32 bits. The ARM7TDMI was the first processor with a Thumb instruction decoder.

DSP ENHANCEMENT INSTRUCTIONS

New instructions were added to enhance the ARM architecture for digital signal processing and multimedia applications. These are indicated by an E in the architecture name, as in ARMv5TE and ARMv5TEJ. The additions are variations such as count-leading-zeros and saturated add and subtract operations.
JAZELLE

Besides the ARM and Thumb modes, a technology known as Jazelle allows Java bytecode to be executed in hardware. It is most prominently used in mobile phones to increase the execution speed of Java ME games. Common Java bytecodes run in hardware, while the Java Virtual Machine performs the more complicated operations in software.

The first processor with Jazelle was the ARM926EJ-S, and the ARMv5TEJ architecture specifies Jazelle's functionality. The JVM software depends on the details of the hardware interface, so the JVM and the hardware can evolve together without affecting other software.

THUMB-2

The Thumb-2 technology was announced in 2003 and introduced on the ARM1156 core. It widens the instruction set by adding 32-bit instructions to the limited 16-bit instruction set of the earlier Thumb technology. Thumb-2 achieves code density resembling Thumb while delivering performance similar to the ARM instruction set on 32-bit memory.

The Thumb-2 instruction set is supported by all ARMv7 chips, but some chips, for example the Cortex-M3, support only the Thumb-2 instruction set.

THUMB EXECUTION ENVIRONMENT (THUMBEE)

ThumbEE first appeared in the Cortex-A8 processor. It extends the Thumb-2 instruction set and is intended as a target for languages such as Python, Limbo, Java, C# and Perl: JIT compilers can output smaller code with no significant impact on performance.

ADVANCED SIMD (NEON):

The Advanced SIMD (Single Instruction Multiple Data) extension, also known as NEON technology, is a combined 64-bit and 128-bit SIMD instruction set. It provides acceleration for signal processing and media applications.
For example, it can decode MP3 audio on a CPU running at 10 MHz and run the GSM AMR (adaptive multi-rate) speech codec at 13 MHz. It supports integers up to 64 bits and handles games and graphics as well as audio/video processing. NEON's SIMD can support up to 16 operations at a time.

VERSIONS AND FEATURES OF ARM MICROCONTROLLER

One of the most advanced forms of these microcontrollers is the Cortex family, based on the ARMv7 architecture. The Cortex family is further divided as:

CORTEX-M3 MICROCONTROLLER FEATURES

It is a 32-bit processor offering many advantages over other microcontrollers. It uses a Harvard architecture, which provides separate instruction and data buses for communication with RAM and ROM. It has a 3-stage pipeline that fetches an instruction, then decodes it, and finally executes it.
The THUMB-2 technology used in the Cortex-M3 reduces the memory required for the program and provides high code density. For good interrupt performance, the Cortex-M3 core is closely integrated with the NVIC (Nested Vectored Interrupt Controller).

• It is a Reduced Instruction Set Computing (RISC) controller. It has a high-performance 32-bit CPU with a 3-stage pipeline.
• The Thumb-2 technology is integrated in these controllers, which means they can handle 16-bit as well as 32-bit instructions. This technology also provides high performance in operations and execution.
• It has low power modes, including sleep modes. Power is controlled efficiently by software, and the device consists of multiple power domains.
• The NVIC (Nested Vectored Interrupt Controller) provides low-latency and low-jitter interrupt response. Another advantage is that no assembly programming is needed for interrupt handling.
COMPARISON BETWEEN DIFFERENT VERSIONS OF CORTEX
The features and specifications of the Cortex-M3, Cortex-M4 and Cortex-R4 can be compared as in the following table:

APPLICATIONS of ARM microcontrollers


Some of the applications of the ARM cores are listed in the table below:
• Beyond those, ARM microcontrollers are also used in space and aerospace technologies.
• They are used in medical equipment such as MRI and CT scanners, ultrasound machines and implantable devices.
• They are also used at the research level in particle accelerators, nuclear reactors and X-ray cargo scanning applications.

Q3. Write a brief note on Arm Development tool.


Arm Development Tools

Arm development solutions are designed to accelerate product development from SoC architecture to
software application development. From the smallest Cortex-M series microcontroller sensor to
supercomputers, Arm development tools and design services help engineers worldwide develop market-
leading products that fully explore the capabilities of their Arm-based systems.

Arm Development Studio

Arm Development Studio was developed specifically for Arm processors and is the most comprehensive proprietary C/C++ toolchain for software development on this architecture. It includes Keil MDK and accelerates software engineering, helping you develop more robust and efficient products.

Arm Keil MDK

Arm Keil MDK is a complete software development solution for creating, building and debugging embedded applications for Arm-based microcontrollers. The µVision IDE provides a world-class experience for Cortex-M-based development.

ULINK debug and trace adapters

A ULINK debug adapter connects your PC’s USB port to your target system (via JTAG or a similar debug
interface) and allows you to debug, trace and analyze embedded programs running on the target
hardware. All ULINK adapters enable you to:
• Download programs to your target hardware
• Examine memory and registers
• Single-step through programs and insert multiple breakpoints
• Run programs in real time
• Program Flash memory
• Connect to a target via JTAG or serial wire modes
• Debug Arm Cortex-M devices on the fly
• Examine trace information from Arm Cortex-M3/M4/M7/M33 devices

DSTREAM family

The Arm DSTREAM High-Performance Debug and Trace units enable powerful software debug and
optimization on any Arm processor-based hardware target.

With features such as accelerated hardware bring-up for many development platforms and open debug
interface for use with third-party tools, DSTREAM debug probes provide a comprehensive solution for
the development and debug of complex SoCs when paired with Arm Development Studio.

C166 Development Tools

Keil C166 development tools support the Infineon C166, XC166, XE166, XC2000 and ST10
microcontroller families. The μVision IDE and debugger interfaces to the Infineon DAVE code generation
tool and various debug solutions including the ULINK2.

8051 Development Tools

Keil C51 is the industry-standard toolchain for all 8051-compatible devices. It supports classic 8051, Dallas 390, NXP MX, extended 8051 variants, and C251 devices. The μVision IDE and debugger interface with various debug solutions, including the ULINK.

Q4. Explain any wireless protocol briefly.


What is Wireless Application Protocol (WAP)?

Wireless Application Protocol (WAP) is a specification for a set of communication protocols to standardize the way wireless devices, such as mobile phones and radio transceivers, can be used for internet access, including email, the web, newsgroups and instant messaging. While internet access was possible before the introduction of WAP, different manufacturers used varying technologies; WAP promised interoperability between these technologies.

WAP was conceived in 1997 by Ericsson, Motorola, Nokia and Unwired Planet (now
Phone.com) at an event known as the WAP Forum. In 2002, the WAP Forum became the
Open Mobile Alliance (OMA).

The Wireless Markup Language (WML) was used to create pages that can be delivered
using WAP.

There were other approaches to an industry standard besides WAP, including i-Mode.

How the WAP works

WAP describes a protocol suite. The standard is designed to create interoperability between WAP equipment, such as mobile phones that use the protocol, and WAP software, such as WAP-enabled web browsers, across network technologies.

These standards optimize mobile experiences that were previously limited by the
capabilities of handheld devices and wireless networks. WAP does this through:

• the WML format for pages, which can be delivered using WAP;
• internet standards adapted to be efficient in wireless environments, drawing on Extensible Markup Language (XML), User Datagram Protocol (UDP) and Internet Protocol (IP), and on standards such as HTML, HTTP and TLS but without their large amounts of data;
• binary transmission that allows for greater data compression; and
• optimization for high latency, low connection stability and low bandwidth through
using a lightweight protocol stack.
WAP model and layers

WAP model. The WAP model works similarly to the traditional client-server model but adds a WAP gateway as an intermediary between the client and the server. The gateway translates the request from the WAP device's microbrowser into an HTTP request and sends it to the server over the internet.

When the server returns a response, the WAP gateway processes and sends the webpage
to the WAP mobile device as a WML file that is compatible with microbrowsers.

WAP protocol stack. The WAP standard describes the following protocol stack for interoperability of
WAP devices, equipment, software and other technologies, including:

• Wireless Application Environment (WAE) for mobile device specifications and programming languages such as WML;
• Wireless Session Protocol (WSP), which manages connection suspensions and reconnections;
• Wireless Transaction Protocol (WTP), which manages transaction support for requests and responses to servers;
• Wireless Transport Layer Security (WTLS), which manages privacy, authentication and data integrity through public key cryptography; and
• Wireless Datagram Protocol (WDP), an adaptation layer that presents a consistent data format to the other layers and defines how data flows between sender and receiver.

Why use WAP?

Introduced in 1999, WAP proposed benefits for wireless network operators, content providers and end
users:
• Wireless network and mobile phone operators. WAP was designed to improve existing wireless data services, such as voicemail, while also enabling the development of new mobile applications. These applications could be introduced without additional infrastructure changes or phone modifications.
• Content providers. WAP created a market for additional applications and mobile phone functionality for third-party application developers to exploit. WML was proposed as a new programming language for developers to create effective mobile device applications.
• End users. Mobile phone users were expected to benefit from easy, secure access to online services, such as banking, entertainment, messaging and other information, on mobile devices. Intranet information, such as corporate databases and business applications, was also to be accessed through WAP.

Despite these proposed benefits, WAP did not experience widespread adoption in many countries, and
its use declined significantly around 2010, due to widespread HTML compatibility in mobile phones.

Q5. Why is hardware-software co-design important in embedded systems?

There is an adage that I am sure most of you have heard of, “The
whole is greater than the sum of its parts.” Upon further examination,
there are examples of this truth all around us. For example, a PC.
There are various hardware components in use in its design and
assembly. However, no matter how expensive the components or
impressive their specifications, this is nothing more than an overpriced
paperweight without software.

There are numerous examples of this ideology in play in almost every recess of our business and professional lives. So, obviously, this is not a new concept, but the seamless integration of these two sectors of technology was not always this flawless.

Even when I was in college, the two technologies were taught in different classes, and there were seldom mentions of the other unless
it was absolutely necessary. This type of division also carried over into
the professional world as well. Although the appeal of knowing both
sides of this proverbial coin was appealing, job descriptions usually fitted you into one category or the other. However, the tides are
changing. With the onset of smart cars, smart TVs, smartphones, and
smart homes, the need for hardware-software co-design is more
significant than ever before.

What is Hardware Software Co-design?


Hardware-software co-design is a concept that began in the 1990s. Its core idea is the concurrent design of the hardware and software components of complex electronic systems, incorporating the two technologies and exploiting the synergy between them. In essence, it is a way to merge hardware and software to optimize and satisfy design constraints such as the cost, performance, and power of the final product.

At its inception, this best-of-both-worlds approach ultimately failed, mainly because the idea and concept were sound but the timing was not, at least in terms of the technology required for this innovative concept to reach its full potential.

In any industry, forward-thinking is the only way to ensure advancement and longevity. Forward-thinking is precisely what hardware-software
co-design represents. The proof of this is the fact that after nearly two
decades, this concept is finally receiving the attention it deserves.

Why is Hardware-Software Co-design the Way of the Future?
In the field of electronics, we see continuous advancements and changes
in technology. These changes are not merely driven by innovation, but by
demand as well. The continual integration of technology into every
device in our personal and professional lives deems the need for smarter
electronics. We expect more functionality from our devices as we put
more and more demands on them.
With this increased demand comes increased dependency. Who among
us can actually go for one hour without our smartphone, tablet, PC, or
laptop? The quick answer is, none of us. For some, our whole lives are on
some of these devices or stored on a server.

All of the points mentioned above bring us to an inevitable truth: hardware-software co-design is the future. I stated earlier that hardware-software co-design is not a new concept and that it ultimately failed the first time. However, if you carefully examine the electronics landscape of today, you will realize that it is the only way we will continue to evolve our technology.

The Integration of Hardware-Software Co-design

There are two divisions of technology today that epitomize what hardware-software co-design can achieve: Artificial Intelligence (AI) and Machine Learning (ML). These two technologies are changing the way we look at technology and our possible future.

At this time, our world is growing in complexity, and there is an emphasis on architectural improvements that cannot be achieved without hardware-software co-design. There is also an increasing need for our devices to be scalable to stay on par with both advancements and demand.

Today's PCB designers are finding that software is a critical component whose integration affords their designs increased functionality and performance. Furthermore, software also provides a level of flexibility that is crucial in the design process. This is made possible by the use of algorithms that are in a nearly constant state of flux.
How Hardware-Software Co-design is in Use
Today
Hardware-software co-design has many benefits that will pay dividends
now and in the future. For the PCB industry, it will increase
manufacturing efficiency, the innovation of designs, lower cost, and
shorten the time of prototypes to market. In terms of the use of machine
learning, it also reduces input variation analysis by removing those
variables that are already calculated to fail. This will shorten the
development time of designs and improve those designs with the
same amount of accuracy but at a lower cost.

Depending on your design parameters, you can reduce simulation times and still maintain the accuracy of your designs. The by-products of hardware-software co-design are optimized designs, simulations, and overall analysis, thereby reducing total production time to just hours or days instead of weeks. These concepts are already in practice in
our automated production systems, power grid, the automotive industry,
and aviation, to name a few.

Every engineer knows that in the field of PCB design, simulation is king. The reason for this is the high cost that delays and failed
designs bring about. The truth of the matter is, every PCB designer or
manufacturer desires optimal design decision making.

The Benefits of Hardware-Software Co-design
Hardware-software co-design, with the assistance of machine learning,
can help to optimize hardware and software in everything from IP to
complex systems, based upon a knowledge base of what works best for
which conditions.

This approach assures a certain level of results, regardless of how many possibilities are involved. This same approach can also help if there are
abnormalities that do not fit into a particular pattern because machine
learning systems can ignore those irregularities.

As a whole, this shift in thinking can bring about a level of innovation not
yet seen in the PCB industry. This, in turn, will also facilitate a new level
of performance and functionality without the added cost and time
constraints previously experienced. Overall, design efficiency will be the new norm as the PCB industry continues to implement hardware-software co-design.

An Example of a Real-World Use of Hardware-Software Co-design
In the field of aviation, electronics (PCBs) dominate in terms of functionality. The aviation industry is continuously moving toward innovation and automation, so much so that pilots cannot effectively fly without some type of electronic system, and nor can the planes themselves.

With the recent incidents of aircraft crashes, there is an increasing need for better testing and diagnosis of faults before they become a problem, which leads to the need for better designs and better design decision making.
As you may know, the best way to perfect any design is through
simulation. It saves time, lowers cost, increases safety, and improves the
overall design.

In this pursuit of better testing comes the need for better, more efficient ways to conduct these tests. That is where hardware-software co-design comes into play: it enables embedded software on the chip to add functionality to the simulations used to discover design results during simulated flight tests.
How Hardware-Software Co-design is
Changing Simulations
During a recent flight test simulation, there was a need for different power outputs from the generators used in the test. In essence, each power output represents a different testable scenario. Previously, before the use of hardware-software co-design, a separate analysis was run for each power output requirement.
