Short Notes On The Technical Terms Used
1. PILOTS
Continuous pilots are pilots that occur at the same frequency location in every OFDM symbol.
A receiver recovers data from Orthogonal Frequency Division Multiplexed (OFDM) symbols, where
each OFDM symbol comprises a plurality of sub-carrier signals. Some of the sub-carrier signals
carry data symbols and some carry pilot symbols; the pilot symbols comprise scattered pilot
symbols and continuous pilot symbols. The continuous pilot symbols are distributed across the
sub-carrier signals in accordance with a continuous pilot symbol pattern, and the scattered pilot
symbols are distributed across the sub-carrier signals in accordance with a scattered pilot
symbol pattern.
All active subcarriers with the exception of pilots are transmitted with the same average power.
Pilots are transmitted boosted in amplitude by a factor of 2 (TBD), i.e. by approximately 6 dB (TBD).
Scattered pilots do not occur at the same frequency in every symbol; in some cases scattered
pilots will overlap with continuous pilots. If a scattered pilot overlaps with a continuous pilot,
then that pilot is no longer considered to be a scattered pilot. It is treated as a continuous pilot.
Because the locations of scattered pilots change from one OFDM symbol to another, the
number of overlapping continuous and scattered pilots changes from symbol to symbol. Since
overlapping pilots are treated as continuous pilots, the number of scattered pilots changes from
symbol to symbol.
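As a rough illustration of how the scattered-pilot count varies per symbol, the Python sketch below classifies pilots for a few OFDM symbols. The subcarrier count, pilot spacings, and continuous-pilot positions are hypothetical placeholders, not values from any particular standard.

```python
# Minimal sketch of pilot classification per OFDM symbol.
# All numeric parameters and the continuous-pilot set are assumed, for illustration only.

NUM_SUBCARRIERS = 64
SCATTERED_SPACING = 12      # scattered pilot every 12th subcarrier (assumed)
SYMBOL_SHIFT = 3            # scattered pattern shifts by 3 subcarriers per symbol (assumed)
CONTINUOUS_PILOTS = {0, 21, 42, 63}   # fixed positions, same in every symbol (assumed)

def scattered_positions(symbol_index):
    """Scattered pilot positions for one OFDM symbol; they move with the symbol index."""
    offset = (symbol_index * SYMBOL_SHIFT) % SCATTERED_SPACING
    return {k for k in range(offset, NUM_SUBCARRIERS, SCATTERED_SPACING)}

def classify_pilots(symbol_index):
    scattered = scattered_positions(symbol_index)
    # A scattered pilot that lands on a continuous-pilot position is treated
    # as a continuous pilot, so it is removed from the scattered set.
    return CONTINUOUS_PILOTS, scattered - CONTINUOUS_PILOTS

for sym in range(4):
    cont, scat = classify_pilots(sym)
    overlaps = len(scattered_positions(sym) & CONTINUOUS_PILOTS)
    print(f"symbol {sym}: {len(scat)} scattered pilots ({overlaps} reclassified as continuous)")
```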
http://www.ieee802.org/3/bn/public/nov13/zhang_3bn_03_1113.pdf
https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2014140520
http://www.ijsr.net/archive/v3i5/MDIwMTMxOTg3.pdf
2. ENERGY SPREADING
A satellite communications system for dispersing energy over a wide bandwidth includes a
transmitter, a communication link, and a receiver. The transmitter takes a digital data signal and
modulates that signal at a prescribed carrier frequency. The modulated digital data signal is
then spread over M adjacent digital channels (where M ≥ 2 and is an integer multiple of 2), each
channel containing the same information, to disperse the energy over a wide frequency range.
The spectral bandwidth of the adjacent digital channels is chosen with compressed spacing to
conserve bandwidth. Next, the spread modulated data signal is transmitted via the
communication link to the receiver. In particular, a waveform generator at the transmitter
generates a phase-aligned multichannel frequency diversity waveform according to a data clock
at a predetermined phase relationship to the digital data.
At the receiver, the received spread modulated data signal is mixed with a de-spreading
waveform, generated in a similar manner to the spreading waveform at the transmitter, to
recover the modulated data signal. The de-spreading waveform is generated
according to a symbol clock signal recovered from the received modulated data signal. A
demodulator recovers the original digital data from the modulated data signal. To achieve
higher spreading factors, multichannel frequency diversity may be utilized with known spread
spectrum techniques to achieve high data recovery rates during adverse weather (fading)
conditions at high radio frequencies in the microwave and higher regions of the radio spectrum.
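A conceptual Python/NumPy sketch of the spreading and de-spreading steps is given below. The sample rate, number of channels, and channel spacing are assumed values for illustration; the patent's actual waveform generation, clock recovery, and filtering are omitted.

```python
import numpy as np

# Conceptual sketch (not the patent's implementation): the same modulated
# baseband signal is replicated on M adjacent carriers so its energy is
# dispersed over a wider bandwidth. All parameters below are assumed.

FS = 1_000_000        # sample rate in Hz (assumed)
M = 4                 # number of adjacent channels, M >= 2
SPACING = 50_000      # channel spacing in Hz (assumed)

def spreading_waveform(num_samples, fs=FS, m=M, spacing=SPACING):
    """Sum of M phase-aligned carriers centred on 0 Hz."""
    n = np.arange(num_samples)
    offsets = (np.arange(m) - (m - 1) / 2) * spacing
    return sum(np.exp(2j * np.pi * f * n / fs) for f in offsets) / np.sqrt(m)

def spread(baseband):
    """Transmit side: multiply the modulated signal by the multichannel waveform."""
    return baseband * spreading_waveform(len(baseband))

def despread(received):
    """Receive side: mix with the conjugate waveform; the wanted signal lands at
    baseband and cross-products fall at multiples of the channel spacing, where
    the demodulator's filtering would reject them (filtering omitted here)."""
    return received * np.conj(spreading_waveform(len(received)))
```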
Method and apparatus for providing energy dispersal using frequency diversity in a satellite
communications system
http://www.freepatentsonline.com/5454009.html
3. MPEG
MPEG-2 (also known as H.222/H.262 as defined by the ITU) is a standard for "the generic coding of
moving pictures and associated audio information" (ISO/IEC 13818). It describes a combination of
lossy video compression and lossy audio data compression methods, which permit storage and
transmission of movies using currently available storage media and transmission bandwidth.
While MPEG-2 is not as efficient as newer standards such as H.264 (MPEG-4 Part 10, Advanced
Video Coding, or MPEG-4 AVC) and H.265/HEVC, backwards compatibility with existing hardware
and software means it is still widely used, for example in over-the-air digital television
broadcasting and in the DVD-Video standard.
High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video
compression standard, one of several potential successors to the widely used AVC (H.264 or
MPEG-4 Part 10). In comparison to AVC, HEVC offers about double the data compression ratio
at the same level of video quality, or substantially improved video quality at the same bit rate.
It supports resolutions up to 8192×4320, including 8K UHD.
In most ways, HEVC is an extension of the concepts in H.264/MPEG-4 AVC. Both work by
comparing different parts of a video frame to find areas that are redundant, both within a
single frame and across subsequent frames. These redundant areas are then replaced with a
short description instead of the original pixels. The primary changes in HEVC include the
expansion of the pattern comparison and difference-coding areas from 16×16 pixels to sizes up
to 64×64, improved variable-block-size segmentation, improved "intra" prediction within the
same picture, improved motion vector prediction and motion region merging, improved motion
compensation filtering, and an additional filtering step called sample-adaptive offset filtering.
Effective use of these improvements requires much more signal processing capability for
compressing the video, but has less impact on the amount of computation needed for
decompression.
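The block-comparison idea that these codecs rely on can be illustrated with a toy Python/NumPy motion search; the block size and search range below are arbitrary and much simpler than what MPEG-2 or HEVC actually use.

```python
import numpy as np

# Toy illustration of inter-frame block comparison: for a block in the current
# frame, find the best-matching block in the previous frame, so the encoder can
# keep only a motion vector plus a (small) residual instead of raw pixels.

BLOCK = 8          # 8x8 blocks here (MPEG-2 uses 16x16 macroblocks, HEVC up to 64x64)
SEARCH = 4         # +/- 4 pixel search window (assumed)

def best_match(prev_frame, cur_block, top, left):
    """Exhaustive search for the block in prev_frame with the lowest
    sum-of-absolute-differences (SAD) relative to cur_block."""
    h, w = prev_frame.shape
    best = (0, 0, np.inf)
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + BLOCK <= h and 0 <= x and x + BLOCK <= w:
                cand = prev_frame[y:y + BLOCK, x:x + BLOCK]
                sad = np.abs(cand.astype(int) - cur_block.astype(int)).sum()
                if sad < best[2]:
                    best = (dy, dx, sad)
    return best  # motion vector (dy, dx) and its SAD cost

# A real encoder then transmits (dy, dx) plus the quantized residual
# between cur_block and the matched block, which is far cheaper than raw pixels.
```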
https://en.wikipedia.org/wiki/MPEG-2
https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC
https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding
3.1.1. LOSSLESS
Lossless compression is a class of data compression algorithms that allows the original data to
be perfectly reconstructed from the compressed data. By contrast, lossy compression permits
reconstruction only of an approximation of the original data, though this usually improves
compression rates (and therefore reduces file sizes).
Lossless data compression is used in many applications. For example, it is used in the ZIP file
format and in the GNU tool gzip.
Lossless compression is used in cases where it is important that the original and the
decompressed data be identical, or where deviations from the original data could be
deleterious. Typical examples are executable programs, text documents, and source code.
Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF
and MNG may use either lossless or lossy methods. Lossless audio formats are most often used
for archiving or production purposes, while smaller lossy audio files are typically used on
portable players and in other cases where storage space is limited or exact replication of the
audio is unnecessary.
Any lossless compression algorithm that makes some files shorter must necessarily make some
files longer, but it is not necessary that those files become very much longer. Most practical
compression algorithms provide an "escape" facility that can turn off the normal coding for files
that would become longer by being encoded. In theory, only a single additional bit is required
to tell the decoder that the normal coding has been turned off for the entire input; however,
most encoding algorithms use at least one full byte (and typically more than one) for this
purpose.
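A minimal Python sketch of such an escape facility, here built around the standard-library zlib compressor and a hypothetical one-byte flag format:

```python
import zlib

# Sketch of the "escape" idea: try to compress, and if the result would be
# larger than the original, store the data uncompressed behind a 1-byte flag.
# The flag format is made up for this example.

STORED, DEFLATED = b"\x00", b"\x01"

def pack(data: bytes) -> bytes:
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return DEFLATED + compressed
    return STORED + data          # worst case: output is only 1 byte longer

def unpack(blob: bytes) -> bytes:
    flag, payload = blob[:1], blob[1:]
    return zlib.decompress(payload) if flag == DEFLATED else payload

incompressible = bytes(range(256))     # no redundancy to exploit
repetitive = b"red dot " * 200         # highly repetitive input
assert unpack(pack(incompressible)) == incompressible
assert unpack(pack(repetitive)) == repetitive
```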
https://en.wikipedia.org/wiki/Lossless_compression
3.1.2. LOSSY
Lossy compression is most commonly used to compress multimedia data (audio, video, and
images), especially in applications such as streaming media and internet telephony. By contrast,
lossless compression is typically required for text and data files, such as bank records and text
articles.
It is possible to compress many types of digital data in a way that reduces the size of a
computer file needed to store it, or the bandwidth needed to transmit it, with no loss of the full
information contained in the original file. A picture, for example, is converted to a digital file by
considering it to be an array of dots and specifying the color and brightness of each dot. If the
picture contains an area of the same color, it can be compressed without loss by saying "200
red dots" instead of "red dot, red dot, ...(197 more times)..., red dot."
In many cases, files or data streams contain more information than is needed for a particular
purpose. For example, a picture may have more detail than the eye can distinguish when
reproduced at the largest size intended; likewise, an audio file does not need a lot of fine detail
during a very loud passage. Developing lossy compression techniques as closely matched to
human perception as possible is a complex task. Sometimes the ideal is a file that provides
exactly the same perception as the original, with as much digital information as possible
removed; other times, perceptible loss of quality is considered a valid trade-off for the reduced
data.
The compression ratio (that is, the size of the compressed file compared to that of the
uncompressed file) of lossy video codecs is nearly always far superior to that of the audio and
still-image equivalents.
Video can be compressed immensely (e.g. 100:1) with little visible quality loss.
Audio can often be compressed at 10:1 with imperceptible loss of quality.
Still images are often lossily compressed at 10:1, as with audio, but the quality loss is more
noticeable, especially on closer inspection.
a) In lossy transform codecs (codec being a portmanteau of coder-decoder), samples of picture
or sound are taken, chopped into small
segments, transformed into a new basis space, and quantized. The resulting quantized
values are then entropy coded.
b) In lossy predictive codecs, previous and/or subsequent
decoded data is used to predict the current sound sample or image frame. The error
between the predicted data and the real data, together with any extra information
needed to reproduce the prediction, is then quantized and coded.
In some systems the two techniques are combined, with transform codecs being used to
compress the error signals generated by the predictive stage.
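Below is a minimal Python/NumPy sketch of the first two stages of a transform codec: transform into a new basis (here a 1-D orthonormal DCT-II), then quantize. Real codecs use 2-D block transforms and entropy-code the output; both are omitted here.

```python
import numpy as np

# Transform + quantize on a 1-D block of 8 samples.
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
dct = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
dct[0, :] /= np.sqrt(2)          # orthonormal scaling for the DC row

def encode(block, step=10.0):
    """Transform into the DCT basis, then quantize (the lossy step)."""
    return np.round(dct @ block / step).astype(int)

def decode(coeffs, step=10.0):
    """Dequantize and transform back; the result only approximates the input."""
    return dct.T @ (coeffs * step)

block = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)
approx = decode(encode(block))
print(np.round(approx, 1))        # close to, but not exactly, the original samples
```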
Lossy methods are most often used for compressing sound, images or videos. This is because
these types of data are intended for human interpretation where the mind can easily "fill in the
blanks" or see past very minor errors or inconsistencies – ideally lossy compression is
transparent (imperceptible), which can be verified via an ABX test.
Transparency: When a user acquires a lossily compressed file (for example, to reduce
download time), the retrieved file can be quite different from the original at the bit level while
being indistinguishable to the human ear or eye for most practical purposes. Many compression
methods focus on the idiosyncrasies of human physiology, taking into account, for instance,
that the human eye can see only certain wavelengths of light. The psychoacoustic model
describes how sound can be highly compressed without degrading perceived quality. Flaws
caused by lossy compression that are noticeable to the human eye or ear are known as
compression artifacts.
4. MPEG vs AAC
Advanced Audio Coding (AAC) is an audio coding standard for lossy digital audio compression.
Designed to be the successor of the MP3 format, AAC generally achieves better sound quality
than MP3 at similar bit rates. AAC has been standardized by ISO and IEC as part of the MPEG-2
and MPEG-4 specifications.
AAC Advantages: smaller file size and higher quality sound than MP3
AAC is a newer, more sophisticated codec than MP3, offering audio files with a smaller size and
slightly higher quality. For example, a 160 kbps AAC file typically requires roughly a 256 kbps
MP3 file to reach the same audio quality. It is true that you can often tell the difference
between lossy and lossless audio, but beyond a certain point a small difference in bit rate is
indistinguishable to the human ear.
MP3 Advantages: Works with Every Music Player and Mobile Device
In PC vs Apple, the PC wins thanks to its open(ish) standards; in AAC vs MP3, MP3 wins for the
same reason. The argument is never as cut and dried as which is best, but rather which is "good
enough" and easiest to work with. Here MP3 wins: it works with almost every music player,
whether Windows Media Player, VLC, or KMPlayer, and with Apple, Android, and Microsoft
devices alike. In terms of audio compatibility, little can top MP3.
Compared with MP3, the most obvious shortcoming of AAC is its limited compatibility: it is not
supported by many music players and various handheld devices from vendors such as Samsung,
Sony, BlackBerry, and Nokia. Apple users are generally fine with AAC audio files, but Android
users may be left scratching their heads over playback failures caused by AAC incompatibility.
https://www.macxdvd.com/mac-dvd-video-converter-how-to/aac-vs-mp3-comparison.htm
Overall, the AAC format allows developers more flexibility to design codecs than MP3 does, and
corrects many of the design choices made in the original MPEG-1 audio specification. This
increased flexibility often leads to more concurrent encoding strategies and, as a result, to
more efficient compression. However, in terms of whether AAC is better than MP3, the
advantages of AAC are not entirely decisive, and the MP3 specification, although antiquated,
has proven surprisingly robust in spite of considerable flaws. AAC and HE-AAC are better than
MP3 at low bit rates (typically less than 128 kilobits per second). This is especially true at very
low bit rates where the superior stereo coding, pure MDCT, and better transform window sizes
leave MP3 unable to compete.
5. REED SOLOMON
Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S.
Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which
include consumer technologies such as CDs, DVDs, Blu-ray Discs, QR Codes, data transmission
technologies such as DSL and WiMAX, broadcast systems such as DVB and ATSC, and storage
systems such as RAID 6. They are also used in satellite communication.
In coding theory, the Reed–Solomon code belongs to the class of non-binary cyclic error-
correcting codes. The Reed–Solomon code is based on univariate polynomials over finite fields.
It is able to detect and correct multiple symbol errors. By adding t check symbols to the data, a
Reed–Solomon code can detect any combination of up to t erroneous symbols, or correct up to
⌊t/2⌋ symbols. As an erasure code, it can correct up to t known erasures, or it can detect and
correct combinations of errors and erasures. Furthermore, Reed–Solomon codes are suitable as
multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can
affect at most two symbols of size b. The choice of t is up to the designer of the code, and may
be selected within wide limits.
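A quick worked example of these parameter relationships, using the common RS(255, 223) code over 8-bit symbols (a parameter sketch, not an encoder implementation):

```python
# Reed-Solomon parameter relationships for the classic RS(255, 223) code.

n, k = 255, 223            # codeword length and data length, in symbols
t = n - k                  # number of check symbols added (32 here)
symbol_bits = 8

print(f"detects up to {t} erroneous symbols")
print(f"corrects up to {t // 2} erroneous symbols")   # floor(t/2) = 16
print(f"corrects up to {t} known erasures")
# A burst of b+1 = 9 consecutive bit errors can affect at most two 8-bit symbols,
# which is why RS codes handle burst errors well.
print(f"a {symbol_bits + 1}-bit error burst spans at most 2 symbols")
```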
Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors
associated with media defects.
Reed–Solomon coding is a key component of the compact disc. It was the first use of strong
error correction coding in a mass-produced consumer product, and DAT and DVD use similar
schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional
interleaver yield a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC).
Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Datamatrix, QR Code, and
Aztec Code use Reed–Solomon error correction to allow correct reading even if a portion of the
bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will
treat it as an erasure.
Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar
symbology.
One significant application of Reed–Solomon coding was to encode the digital pictures sent
back by the Voyager space probe.
Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job
best done by short or simplified Reed–Solomon codes.
6. MULTI2 SCRAMBLING
MULTI2 is the block cipher used in the ISDB standard for scrambling digital multimedia content.
MULTI2 is used in Japan to secure multimedia broadcasting, including recent applications like
HDTV and mobile TV. It is the only cipher specified in the 2007 Japanese ARIB standard for
conditional access systems.
MULTI2 in ISDB: MULTI2 is mainly used via the B-CAS card for copy control, to ensure that only
valid subscribers are using the service. (B-CAS, i.e. BS Conditional Access Systems Co., Ltd., is the
vendor and operator of the ISDB CAS system in Japan. All ISDB receiving apparatus such as DTT
TVs, tuners, and DVD recorders, except 1seg-only devices, require a B-CAS card under regulation;
B-CAS cards are supplied with most units at purchase and cannot be purchased separately.)
MULTI2 encrypts transport stream packets in CBC or OFB mode. The same
system key is used for all conditional-access applications, and another system key is used for
other applications (DTV, satellite, etc.). The 64-bit data key is refreshed every second, sent by
the broadcaster and encrypted with another block cipher. Therefore only the data key is really
secret, since the system key can be obtained from the receivers.
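The Python sketch below only illustrates how CBC chaining and the per-second data key fit together; the block function is a stand-in XOR permutation, not the MULTI2 cipher, and the keys are placeholders.

```python
# Sketch of CBC-mode scrambling over 8-byte blocks, the block size MULTI2 uses.
# The block function below is NOT MULTI2; it is a trivial stand-in so the
# chaining structure can be shown.

BLOCK_SIZE = 8  # MULTI2 operates on 64-bit blocks

def toy_block_encrypt(block: bytes, data_key: bytes) -> bytes:
    """Placeholder for MULTI2's 64-bit block encryption (key-dependent XOR only)."""
    return bytes(b ^ k for b, k in zip(block, data_key[:BLOCK_SIZE]))

def cbc_scramble(payload: bytes, data_key: bytes, iv: bytes) -> bytes:
    """CBC chaining: each plaintext block is XORed with the previous ciphertext
    block (or the IV) before being passed through the block cipher."""
    assert len(payload) % BLOCK_SIZE == 0   # residual/partial blocks ignored here
    out, prev = bytearray(), iv
    for i in range(0, len(payload), BLOCK_SIZE):
        block = bytes(p ^ c for p, c in zip(payload[i:i + BLOCK_SIZE], prev))
        prev = toy_block_encrypt(block, data_key)
        out += prev
    return bytes(out)

# The 64-bit data key is refreshed every second by the broadcaster and delivered
# encrypted; the long-term system key stays fixed in the receiver. Placeholders:
data_key = bytes(range(8))
iv = bytes(8)
scrambled = cbc_scramble(b"A" * 16, data_key, iv)
```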