DEEP LEARNING
THROUGH SPARSE
AND LOW-RANK
MODELING
Computer Vision and
Pattern Recognition Series
Series Editors
Horst Bischof Institute for Computer Graphics and Vision, Graz University of
Technology, Austria
Kyoung Mu Lee Department of Electrical and Computer Engineering, Seoul National
University, Republic of Korea
Sudeep Sarkar Department of Computer Science and Engineering, University of
South Florida, Tampa, United States
Alameda-Pineda, Ricci and Sebe, Multimodal Behavior Analysis in the Wild, 2018, ISBN:
9780128146019
Leo and Farinella, Computer Vision for Assistive Healthcare, 2018, ISBN: 9780128134450
Murino et al., Group and Crowd Behavior for Computer Vision, 2017, ISBN: 9780128092767
Lin and Zhang, Low-Rank Models in Visual Analysis: Theories, Algorithms and Applications,
2017, ISBN: 9780128127315
Zheng et al., Statistical Shape and Deformation Analysis: Methods, Implementation and
Applications, 2017, ISBN: 9780128104934
De Marsico et al., Human Recognition in Unconstrained Environments: Using Computer
Vision, Pattern Recognition and Machine Learning Methods for Biometrics, 2017, ISBN:
9780081007051
Saha et al., Skeletonization: Theory, Methods and Applications, 2017, ISBN: 9780081012918
DEEP LEARNING
THROUGH SPARSE
AND LOW-RANK
MODELING
Edited by
ZHANGYANG WANG
Department of Computer Science and Engineering,
Texas A&M University,
College Station, TX, United States
YUN FU
Department of Electrical and Computer Engineering and
College of Computer and Information Science (Affiliated),
Northeastern University,
Boston, MA, United States
THOMAS S. HUANG
Department of Electrical and Computer Engineering,
University of Illinois at Urbana-Champaign,
Champaign, IL, United States
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1800, San Diego, CA 92101-4495, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
Copyright © 2019 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying, recording, or any information storage and retrieval system, without permission in writing
from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies
and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing
Agency, can be found at our website: www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the Publisher (other
than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our
understanding, changes in research methods, professional practices, or medical treatment may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any
information, methods, compounds, or experiments described herein. In using such information or methods they
should be mindful of their own safety and the safety of others, including parties for whom they have a professional
responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for
any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from
any use or operation of any methods, products, instructions, or ideas contained in the material herein.
ISBN: 978-0-12-813659-1
CONTRIBUTORS
Yun Fu
Department of Electrical and Computer Engineering and College of Computer and
Information Science (Affiliated), Northeastern University, Boston, MA, United States
Niraj Goel
Department of Computer Science and Engineering, Texas A&M University, College Station,
TX, United States
Boyuan Gong
Department of Computer Science and Engineering, Texas A&M University, College Station,
TX, United States
Sandeep Gottimukkala
Department of Computer Science and Engineering, Texas A&M University, College Station,
TX, United States
Thomas S. Huang
Department of Electrical and Computer Engineering, University of Illinois at
Urbana-Champaign, Champaign, IL, United States
Shuhui Jiang
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA,
United States
Satya Kesav
Department of Computer Science and Engineering, Texas A&M University, College Station,
TX, United States
Steve Kommrusch
Colorado State University, Fort Collins, CO, United States
Yu Kong
B. Thomas Golisano College of Computing and Information Sciences, Rochester Institute of
Technology, Rochester, NY, United States
Yang Li
Department of Electrical and Computer Engineering, Texas A&M University, College Station,
TX, United States
Ding Liu
Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
Yu Liu
Department of Electrical and Computer Engineering, Texas A&M University, College Station,
TX, United States
Louis-Noël Pouchet
Colorado State University, Fort Collins, CO, United States
Ritu Raj
Department of Computer Science and Engineering, Texas A&M University, College Station,
TX, United States
Wenqi Ren
Chinese Academy of Sciences, Beijing, China
Ming Shao
Computer and Information Science, University of Massachusetts Dartmouth, Dartmouth, MA,
United States
Dacheng Tao
University of Sydney, Sydney, NSW, Australia
Shuyang Wang
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA,
United States
Zhangyang Wang
Department of Computer Science and Engineering, Texas A&M University, College Station,
TX, United States
Caiming Xiong
Salesforce Research, Palo Alto, CA, United States
Guanlong Zhao
Department of Computer Science and Engineering, Texas A&M University, College Station,
TX, United States
ABOUT THE EDITORS
Yun Fu received the BEng degree in Information Engineering and the MEng degree in
Pattern Recognition and Intelligence Systems from Xi’an Jiaotong University, China, as
well as the MS degree in Statistics and the PhD degree in Electrical and Computer Engi-
neering from the University of Illinois at Urbana-Champaign. Since 2012, he has been an
interdisciplinary faculty member affiliated with the College of Engineering and the College
of Computer and Information Science at Northeastern University. His research interests
are in machine learning, computational intelligence, Big Data mining, computer vision,
pattern recognition, and cyber-physical systems. He has extensive publications in lead-
ing journals, books/book chapters and international conferences/workshops. He serves
as associate editor, chair, PC member and reviewer of many top journals and interna-
tional conferences/workshops. He received seven prestigious Young Investigator Awards
from NAE, ONR, ARO, IEEE, INNS, UIUC, and Grainger Foundation; seven Best
Paper Awards from IEEE, IAPR, SPIE, and SIAM; three major Industrial Research
Awards from Google, Samsung, and Adobe. He is currently an Associate Editor of the
IEEE Transactions on Neural Networks and Learning Systems (TNNLS). He is a fel-
low of IAPR and SPIE, a Lifetime Senior Member of ACM, Senior Member of IEEE,
Lifetime Member of AAAI, OSA, and Institute of Mathematical Statistics, member of
Global Young Academy (GYA), INNS and was a Beckman Graduate Fellow during
2007–2008.
Thomas S. Huang received the ScD degree from the Massachusetts Institute of Tech-
nology in 1963. He is currently a Research Professor of Electrical and Computer
Engineering and the Swanlund Endowed Chair Professor at the University of Illinois,
Urbana-Champaign. He has authored or coauthored 21 books and over 600 papers on
network theory, digital filtering, image processing, and computer vision. His current re-
search interests include computer vision, image compression and enhancement, pattern
recognition, and multimodal signal processing. He is a member of the United States
National Academy of Engineering. He is a fellow of the International Association of
Pattern Recognition and the Optical Society of America. He was a recipient of the
IS&T and SPIE Imaging Scientist of the Year Award and the IBM Faculty Award in
2006. In 2008, he served as the Honorary Chair of the ACM Conference on Content-
Based Image and Video Retrieval and the IEEE Conference on Computer Vision and
Pattern Recognition. He has received numerous awards, including the Honda Lifetime
Achievement Award, the IEEE Jack Kilby Signal Processing Medal, the King-Sun Fu
Prize of the International Association for Pattern Recognition, and the Azriel Rosenfeld
Lifetime Achievement Award at the International Conference on Computer Vision.
PREFACE
The Authors
CHAPTER 1
Introduction
Zhangyang Wang∗, Ding Liu†
∗ Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States
† Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
Contents
1.1. Basics of Deep Learning
1.2. Basics of Sparsity and Low-Rankness
1.3. Connecting Deep Learning to Sparsity and Low-Rankness
1.4. Organization
References
Among recent deep neural network architectures, convolutional neural networks (CNNs) and
recurrent neural networks (RNNs) are the two main streams, differing in their connectivity
patterns. CNNs deploy convolution operations on hidden layers for weight sharing and
parameter reduction; they excel at extracting local information from grid-like input data,
and have mainly shown successes in computer vision and image processing, with popular
instances such as LeNet [31], AlexNet [2], VGG [32], GoogLeNet [33], and ResNet [34].
RNNs are dedicated to processing sequential input data of variable length. An RNN produces
an output at each time step, and its hidden state at each step is computed from the current
input and the hidden state of the previous step. To avoid vanishing or exploding gradients
over long-term dependencies, gated variants such as long short-term memory (LSTM) [35]
and the gated recurrent unit (GRU) [36] are widely used in practice. Interested readers are
referred to a comprehensive deep learning textbook [37].
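For concreteness, the plain RNN recurrence sketched above can be written as follows; this is the standard textbook form, not notation taken from this book, with x_t the input, h_t the hidden state, and y_t the output at step t:

```latex
h_t = \tanh\!\left(W_{xh}\, x_t + W_{hh}\, h_{t-1} + b_h\right),
\qquad
y_t = W_{hy}\, h_t + b_y
```

The gated variants (LSTM, GRU) keep this overall recurrence but insert learned multiplicative gates that control how much of h_{t-1} is retained at each step, which is what mitigates the vanishing/exploding gradient issue.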
Here x ∈ R^n denotes the input data, a ∈ R^m is the feature to learn, and D ∈ R^{n×m}
is the representation basis. A function f(D, a): R^{n×m} × R^m → R^n defines the form of
feature representation, and a regularization term Ω(a): R^m → R further incorporates
problem-specific prior knowledge. Not surprisingly, many instances of Eq. (1.1) can be
solved by a similar class of iterative algorithms.
Note that even if a nonzero z0 is assumed, it could be absorbed into the bias term −λ.
Equation (1.5) is exactly a fully-connected layer followed by ReLU neurons, one of the
most standard building blocks in existing deep models. Convolutional layers could be
derived similarly by looking at a convolutional sparse coding model [66] rather than a
linear one. Such a hidden structural resemblance reveals the potential to bridge many
sparse and low-rank models with current successful deep models, potentially enhancing
the generalization, compactness and interpretability of the latter.
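A minimal sketch of this correspondence in NumPy, assuming the nonnegative sparse coding setup that leads to Eq. (1.5); the variable names and problem sizes here are illustrative, not from the book. With a zero initialization z0 = 0, one ISTA-style step collapses to an affine map followed by ReLU, i.e., a fully-connected layer:

```python
import numpy as np

def relu(u):
    # For nonnegative codes, soft-thresholding reduces exactly to ReLU.
    return np.maximum(u, 0.0)

def ista_step_nonneg(x, D, z, lam, L):
    """One ISTA step for nonnegative sparse coding:
    min_z 0.5 * ||x - D z||^2 + lam * ||z||_1  subject to  z >= 0,
    with step size 1/L, where L bounds the largest eigenvalue of D^T D."""
    return relu(z + (D.T @ (x - D @ z)) / L - lam / L)

rng = np.random.default_rng(0)
n, m = 64, 128
D = rng.standard_normal((n, m)) / np.sqrt(n)  # representation basis
x = rng.standard_normal(n)                    # input signal
L = np.linalg.norm(D, 2) ** 2                 # spectral norm squared of D
lam = 0.1

# Starting from z0 = 0, a single step equals ReLU(W x - theta):
# a fully-connected layer (W = D^T / L) followed by ReLU neurons.
z1 = ista_step_nonneg(x, D, np.zeros(m), lam, L)
W, theta = D.T / L, lam / L
assert np.allclose(z1, relu(W @ x - theta))
```

Unrolling a fixed number of such steps and training W and the threshold end-to-end, in the spirit of learned ISTA (LISTA), is one way a sparse coding solver becomes a feed-forward deep network.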
1.4. ORGANIZATION
In the remainder of this book, Chapter 2 will first introduce the bi-level sparse coding
model, using the example of hyperspectral image classification. Chapters 3, 4 and 5
will then present three concrete examples (classification, superresolution, and cluster-
ing), to show how (bi-level) sparse coding models could be naturally converted to and
trained as deep networks. From Chapter 6 to Chapter 9, we will delve into the extensive
applications of deep learning aided by sparsity and low-rankness: signal processing,
dimensionality reduction, action recognition, and style recognition together with kinship
understanding, respectively.
REFERENCES
[1] Zhao R, Grosky WI. Narrowing the semantic gap-improved text-based web document retrieval using
visual features. IEEE Transactions on Multimedia 2002;4(2):189–200.
[2] Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural net-
works. In: NIPS; 2012.
[3] Wang Z, Chang S, Yang Y, Liu D, Huang TS. Studying very low resolution recognition using deep
networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016.
p. 4792–800.
[4] Liu D, Cheng B, Wang Z, Zhang H, Huang TS. Enhance visual recognition under adverse conditions
via deep networks. arXiv preprint arXiv:1712.07732, 2017.
[5] Wu Z, Wang Z, Wang Z, Jin H. Towards privacy-preserving visual recognition via adversarial training:
a pilot study. arXiv preprint arXiv:1807.08379, 2018.
[6] Bodla N, Zheng J, Xu H, Chen J, Castillo CD, Chellappa R. Deep heterogeneous feature fusion for
template-based face recognition. In: 2017 IEEE winter conference on applications of computer vision,
WACV 2017; 2017. p. 586–95.
[7] Ranjan R, Bansal A, Xu H, Sankaranarayanan S, Chen J, Castillo CD, et al. Crystal loss and quality
pooling for unconstrained face verification and recognition. CoRR 2018. arXiv:1804.01159 [abs].
[8] Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region
proposal networks. In: Advances in neural information processing systems; 2015. p. 91–9.
[9] Yu J, Jiang Y, Wang Z, Cao Z, Huang T. Unitbox: an advanced object detection network. In: Pro-
ceedings of the 2016 ACM on multimedia conference. ACM; 2016. p. 516–20.
[10] Gao J, Wang Q, Yuan Y. Embedding structured contour and location prior in siamesed fully convo-
lutional networks for road detection. In: Robotics and automation (ICRA), 2017 IEEE international
conference on. IEEE; 2017. p. 219–24.
[11] Xu H, Lv X, Wang X, Ren Z, Bodla N, Chellappa R. Deep regionlets for object detection. In: The
European conference on computer vision (ECCV); 2018.
[12] Timofte R, Agustsson E, Van Gool L, Yang MH, Zhang L, Lim B, et al. NTIRE 2017 challenge
on single image super-resolution: methods and results. In: Computer vision and pattern recognition
workshops (CVPRW), 2017 IEEE conference on. IEEE; 2017. p. 1110–21.
[13] Li B, Peng X, Wang Z, Xu J, Feng D. AOD-Net: all-in-one dehazing network. In: Proceedings of the
IEEE international conference on computer vision; 2017. p. 4770–8.
[14] Li B, Peng X, Wang Z, Xu J, Feng D. An all-in-one network for dehazing and beyond. arXiv preprint
arXiv:1707.06543, 2017.
[15] Li B, Peng X, Wang Z, Xu J, Feng D. End-to-end united video dehazing and detection. arXiv preprint
arXiv:1709.03919, 2017.
[16] Liu D, Wen B, Jiao J, Liu X, Wang Z, Huang TS. Connecting image denoising and high-level vision
tasks via deep learning. arXiv preprint arXiv:1809.01826, 2018.
[17] Prabhu R, Yu X, Wang Z, Liu D, Jiang A. U-finger: multi-scale dilated convolutional network for
fingerprint image denoising and inpainting. arXiv preprint arXiv:1807.10993, 2018.
[18] Wang Z, Chang S, Zhou J, Wang M, Huang TS. Learning a task-specific deep architecture for clus-
tering. SDM 2016.
[19] Cheng B, Wang Z, Zhang Z, Li Z, Liu D, Yang J, et al. Robust emotion recognition from low quality
and low bit rate video: a deep learning approach. arXiv preprint arXiv:1709.03126, 2017.
[20] Wang Z, Yang J, Jin H, Shechtman E, Agarwala A, Brandt J, et al. DeepFont: identify your font from
an image. In: Proceedings of the 23rd ACM international conference on multimedia. ACM; 2015.
p. 451–9.
[21] Wang Z, Yang J, Jin H, Shechtman E, Agarwala A, Brandt J, et al. Real-world font recognition using
deep network and domain adaptation. arXiv preprint arXiv:1504.00028, 2015.
[22] Wang Z, Chang S, Dolcos F, Beck D, Liu D, Huang TS. Brain-inspired deep networks for image
aesthetics assessment. arXiv preprint arXiv:1601.04155, 2016.
[23] Huang TS, Brandt J, Agarwala A, Shechtman E, Wang Z, Jin H, et al. Deep learning for font
recognition and retrieval. In: Applied cloud deep semantic recognition. Auerbach Publications; 2018.
p. 109–30.
[24] Farabet C, Couprie C, Najman L, LeCun Y. Learning hierarchical features for scene labeling. IEEE
Transactions on Pattern Analysis and Machine Intelligence 2013;35(8):1915–29.
[25] Wang Q, Gao J, Yuan Y. A joint convolutional neural networks and context transfer for street scenes
labeling. IEEE Transactions on Intelligent Transportation Systems 2017.
[26] Saon G, Kuo HKJ, Rennie S, Picheny M. The IBM 2015 English conversational telephone speech
recognition system. arXiv preprint arXiv:1505.05899, 2015.
[27] Sutskever I, Vinyals O, Le QV. Sequence to sequence learning with neural networks. In: Advances in
neural information processing systems; 2014. p. 3104–12.
[28] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial
nets. In: Advances in neural information processing systems; 2014. p. 2672–80.
[29] Silver D, Huang A, Maddison CJ, Guez A, Sifre L, Van Den Driessche G, et al. Mastering the game
of go with deep neural networks and tree search. Nature 2016;529(7587):484–9.
[30] Moravčík M, Schmid M, Burch N, Lisý V, Morrill D, Bard N, et al. DeepStack: expert-level artificial
intelligence in no-limit poker. arXiv preprint arXiv:1701.01724, 2017.
[31] LeCun Y, et al. LeNet-5, convolutional neural networks. URL: http://yann.lecun.com/exdb/lenet,
2015.
[32] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv
preprint arXiv:1409.1556, 2014.
[33] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In:
Proceedings of the IEEE conference on computer vision and pattern recognition; 2015. p. 1–9.
[34] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the
IEEE conference on computer vision and pattern recognition; 2016. p. 770–8.
[35] Gers FA, Schmidhuber J, Cummins F. Learning to forget: continual prediction with LSTM. Neural
Computation 2000;12(10):2451–71.
[36] Chung J, Gulcehre C, Cho K, Bengio Y. Empirical evaluation of gated recurrent neural networks on
sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[37] Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press; 2016.
[38] Wang Z, Yang J, Zhang H, Wang Z, Yang Y, Liu D, et al. Sparse coding and its applications in
computer vision. World Scientific; 2015.
[39] Baraniuk RG. Compressive sensing [lecture notes]. IEEE Signal Processing Magazine
2007;24(4):118–21.
[40] Huang J, Zhang T, Metaxas D. Learning with structured sparsity. Journal of Machine Learning Re-
search Nov. 2011;12:3371–412.
[41] Xu H, Zheng J, Alavi A, Chellappa R. Template regularized sparse coding for face verification. In:
23rd International conference on pattern recognition, ICPR 2016; 2016. p. 1448–54.
[42] Xu H, Zheng J, Alavi A, Chellappa R. Cross-domain visual recognition via domain adaptive dictionary
learning. CoRR 2018. arXiv:1804.04687 [abs].
[43] Xu H, Zheng J, Chellappa R. Bridging the domain shift by domain adaptive dictionary learning. In:
Proceedings of the British machine vision conference 2015, BMVC 2015; 2015. p. 96.1–96.12.
[44] Xu H, Zheng J, Alavi A, Chellappa R. Learning a structured dictionary for video-based face recog-
nition. In: 2016 IEEE winter conference on applications of computer vision, WACV 2016; 2016.
p. 1–9.
[45] Candès EJ, Li X, Ma Y, Wright J. Robust principal component analysis? Journal of the ACM (JACM)
2011;58(3):11.
[46] Wen Z, Yin W, Zhang Y. Solving a low-rank factorization model for matrix completion by a nonlinear
successive over-relaxation algorithm. Mathematical Programming Computation 2012:1–29.
[47] Wang Z, Li H, Ling Q, Li W. Robust temporal-spatial decomposition and its applications in video
processing. IEEE Transactions on Circuits and Systems for Video Technology 2013;23(3):387–400.
[48] Li H, Lu Z, Wang Z, Ling Q, Li W. Detection of blotch and scratch in video based on video decom-
position. IEEE Transactions on Circuits and Systems for Video Technology 2013;23(11):1887–900.