Android App Development Touch and Gestures

The document discusses touch and gesture recognition in Android. It begins by explaining how Android uses MotionEvents to represent movements from input devices like fingers. It describes how MotionEvents contain information about the touch location, time, ID, etc. It then discusses how Android delivers MotionEvents to views and objects. It covers the different types of action codes for gestures like ACTION_DOWN, ACTION_MOVE, ACTION_UP. Finally, it provides examples of how MotionEvents are generated for different gestures involving multiple touches.


Week 6 - Touch and Gestures

If you use common applications that display maps or web pages, then you've probably
used gestures like swiping to scroll a view, or pinching and un-pinching your thumb and
index finger to zoom in or zoom out. In this lesson, I'll start by discussing MotionEvents.
Android uses this class to represent movement in various input devices: things like
a mouse, a trackball, and, most common of all, your finger. Next I'll discuss how Android
takes these motion events and delivers them to views and other objects, so that your
application can respond to them. And finally, I'll finish up with a discussion of how
Android recognizes complex movement patterns or gestures, things like the pinch to
zoom that I mentioned earlier.
MotionEvents
Android uses the MotionEvent class to represent movements in an input device, such
as a pen, a trackball, a mouse or your finger. An individual movement event contains
several pieces of information. It has an action code, which indicates the kind of motion
that has occurred. It also contains a variety of data associated with that motion. For
instance, it has information about the time at which the event occurred, which device the
event came from, the event's location and, if appropriate, how hard the device was
pressed, etc.
This information will vary depending on the kind of input device involved. In the rest of
this lesson, I'll focus particularly on finger touch events that are generated by pressing a
touch screen. Many touch screens today are multi-touch devices. That means that they
can register and track multiple touches at the same time. In Android, multi-touch devices
emit one movement trace per touch source, and each of these touch sources is referred
to as a pointer. When Android encounters a new pointer, it generates a unique ID that
will remain constant for as long as that pointer is active.
In some cases, Android will group multiple pointers within a single motion event. In
that case, each pointer within the motion event can be accessed by its index. But be
aware that the index is not the same as the pointer's ID. The pointer ID is constant for
as long as the pointer is active. The index at which a pointer's data is stored, however,
may not be.
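The ID-versus-index distinction can be illustrated in plain Java, with a List standing in for the per-event pointer table (no Android classes involved; PointerIndexDemo and its method names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of why a pointer's ID is stable while its index is not.
public class PointerIndexDemo {

    /** Returns the index of queriedId after the pointer liftedId lifts. */
    public static int indexAfterLift(List<Integer> pointerIds, int liftedId, int queriedId) {
        List<Integer> remaining = new ArrayList<>(pointerIds);
        remaining.remove(Integer.valueOf(liftedId)); // remaining pointers re-index
        return remaining.indexOf(queriedId);
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(0, 1, 2);          // three fingers down: IDs 0, 1, 2
        System.out.println(ids.indexOf(2));            // prints 2: ID 2 sits at index 2
        System.out.println(indexAfterLift(ids, 1, 2)); // prints 1: after ID 1 lifts, ID 2 re-indexes
        // The ID (2) never changed; only its index did, which is why
        // per-touch state should be keyed on IDs, not indices.
    }
}
```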
Let's talk about MotionEvent action codes. When a gesture begins, motion events will
be created, and they will contain some of the following action codes:
ACTION_DOWN - indicates that a first finger has started touching the screen.
ACTION_POINTER_DOWN - we've already had an ACTION_DOWN, and now
another finger has started touching the screen.

ACTION_POINTER_UP - we've had an ACTION_DOWN followed by an
ACTION_POINTER_DOWN, but now one of the fingers (other than the last one
remaining) has stopped touching the screen.
ACTION_MOVE - some of the fingers that are touching the screen have
changed their position.
ACTION_UP - the last of the fingers that was touching the screen has now
stopped touching it.
ACTION_CANCEL - something has prematurely canceled the current gesture.
While a gesture is playing out, Android will try to ensure that its motion events obey the
following rules (but applications should be tolerant of inconsistencies):
Touches will go down one at a time.
Touches will move as a group - so a single motion event can refer to multiple
pointers
Touches will come up one at a time or be cancelled.
When you need to process motion events, you can use some of the following methods:
getActionMasked() - returns the action code associated with the motion event.
getActionIndex() - returns the index of the pointer associated with this action
code. For example, if the action is ACTION_POINTER_DOWN, you can use this
method to find the index of the pointer that just touched down.
getPointerId(int pointerIndex) - given an index, returns the stable ID of the
pointer.
getPointerCount() - returns the number of pointers associated with the motion
event.
getX(int pointerIndex) - given an index, returns the x coordinate of the pointer.
getY(int pointerIndex) - given an index, returns the y coordinate of the pointer.
findPointerIndex(int pointerId) - returns the index associated with a given pointer
ID.
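As a sketch of how these accessors are typically combined, here is a plain-Java approximation that keeps per-touch state keyed on stable pointer IDs. SimpleMotionEvent and TouchBookkeeping are hand-rolled stand-ins, not Android classes; only the accessor names and action constant values mirror the real MotionEvent API:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the bookkeeping the accessors above support.
public class TouchBookkeeping {
    // Values mirror android.view.MotionEvent's action constants
    public static final int ACTION_DOWN = 0, ACTION_UP = 1,
            ACTION_POINTER_DOWN = 5, ACTION_POINTER_UP = 6;

    // Hand-rolled stand-in for android.view.MotionEvent
    public static class SimpleMotionEvent {
        private final int actionMasked, actionIndex;
        private final int[] pointerIds;   // index -> stable pointer ID
        private final float[] x, y;       // index -> coordinates
        public SimpleMotionEvent(int actionMasked, int actionIndex,
                                 int[] pointerIds, float[] x, float[] y) {
            this.actionMasked = actionMasked; this.actionIndex = actionIndex;
            this.pointerIds = pointerIds; this.x = x; this.y = y;
        }
        public int getActionMasked() { return actionMasked; }
        public int getActionIndex()  { return actionIndex; }
        public int getPointerCount() { return pointerIds.length; }
        public int getPointerId(int index) { return pointerIds[index]; }
        public float getX(int index) { return x[index]; }
        public float getY(int index) { return y[index]; }
    }

    // Per-touch state, keyed on the stable pointer ID (not the index)
    private final Map<Integer, float[]> active = new HashMap<>();

    public void onTouchEvent(SimpleMotionEvent e) {
        switch (e.getActionMasked()) {
            case ACTION_DOWN:
            case ACTION_POINTER_DOWN: {
                int idx = e.getActionIndex(); // index of the finger that went down
                active.put(e.getPointerId(idx), new float[] { e.getX(idx), e.getY(idx) });
                break;
            }
            case ACTION_UP:
            case ACTION_POINTER_UP:
                active.remove(e.getPointerId(e.getActionIndex()));
                break;
            default:
                break;
        }
    }

    public int activeCount() { return active.size(); }
}
```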
Touch
When a touch occurs on a view, Android generates a motion event, and then attempts
to deliver that event to various objects, one of which is the view itself. Android delivers
the motion event through the View.onTouchEvent(MotionEvent event) method. This
method can process the motion event, and returns true if the motion event has been
consumed, or false if not.
Objects interested in receiving motion events that occur on a given view can register to
receive those events by implementing the View.OnTouchListener interface and
registering the object with the View.setOnTouchListener method. The listener's onTouch
method will then be called when an event such as pressing, releasing, or dragging
occurs. This method is called before the touch event is delivered to the touched
View. And again, onTouch should return true if it consumes the motion event, or false if
it doesn't.
In the simplest case, you can process each touch event independently. But applications
often need to process multiple touches that are part of a more complex gesture. To do
this, your code will need to identify and process particular combinations of touches. For
example, a double-tap involves an ACTION_DOWN, an ACTION_UP, another
ACTION_DOWN, and finally an ACTION_UP, all in quick succession.
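One way to recognize the double-tap sequence just described is a small timing-based state machine. This is a hedged sketch in plain Java: DoubleTapDetector is an invented name, and the 300 ms window is a common convention rather than a value taken from this lesson.

```java
// Recognizes a double-tap from the timing of consecutive ACTION_UP events.
public class DoubleTapDetector {
    private static final long DOUBLE_TAP_WINDOW_MS = 300; // assumed window
    private long lastUpTime = -1; // -1 means "no pending first tap"

    /** Feed the time of each ACTION_UP; returns true when it completes a double-tap. */
    public boolean onUp(long timeMs) {
        boolean isDoubleTap = lastUpTime >= 0
                && (timeMs - lastUpTime) <= DOUBLE_TAP_WINDOW_MS;
        // After a match, reset so a third quick tap starts a new sequence
        lastUpTime = isDoubleTap ? -1 : timeMs;
        return isDoubleTap;
    }
}
```

A real application would also check that the two taps land close together on screen; Android's GestureDetector, discussed later in this lesson, handles all of that for you.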
To give some examples, suppose you start a gesture by placing one finger down on the
screen. That will generate an ACTION_DOWN event, and Android might assign a pointer
ID of zero to that pointer. If you keep that finger down and move it on the screen, you might
get several ACTION_MOVE events associated with pointer ID zero. Suppose now that
you put a second finger down. In that case you'll get an ACTION_POINTER_DOWN
event, and this new pointer might get an ID, say, of one. If you keep those fingers down
and move them, you might then get several ACTION_MOVE events associated with
pointer IDs zero and one. If you now lift the first finger, you'll get an
ACTION_POINTER_UP event associated with pointer ID zero. And then, when you finally
lift the last finger, you'll get an ACTION_UP event associated with pointer ID one.

Gesture step         Action                  Pointer ID(s)
Finger 1 down        ACTION_DOWN             0
Finger 1 moves       ACTION_MOVE             0
Finger 2 down        ACTION_POINTER_DOWN     1
Both fingers move    ACTION_MOVE             0, 1
Finger 1 lifted      ACTION_POINTER_UP       0
Finger 2 lifted      ACTION_UP               1

In the next example, we'll start as before: putting down the first finger, moving it, putting
down a second finger, and then moving those fingers again. This time, however, we'll
lift the second finger first. In this case, we get an ACTION_POINTER_UP action
associated with pointer ID 1. And then finally, when we lift the last finger, we get the
ACTION_UP action associated with pointer ID 0.

Gesture step         Action                  Pointer ID(s)
Finger 1 down        ACTION_DOWN             0
Finger 1 moves       ACTION_MOVE             0
Finger 2 down        ACTION_POINTER_DOWN     1
Both fingers move    ACTION_MOVE             0, 1
Finger 2 lifted      ACTION_POINTER_UP       1
Finger 1 lifted      ACTION_UP               0

For a last example, we'll use three fingers. We'll put down the first finger, then the
second, and then a third. And then we'll move the fingers, and then we'll lift them up.
First lifting the second finger, then the first finger, and then finally lifting the third finger.

Gesture step         Action                  Pointer ID(s)
Finger 1 down        ACTION_DOWN             0
Finger 2 down        ACTION_POINTER_DOWN     1
Finger 3 down        ACTION_POINTER_DOWN     2
All fingers move     ACTION_MOVE             0, 1, 2
Finger 2 lifted      ACTION_POINTER_UP       1
Finger 1 lifted      ACTION_POINTER_UP       0
Finger 3 lifted      ACTION_UP               2

Our first example application in this lesson is called TouchIndicateTouchLocation. This
application draws a circle wherever the user touches the screen. The circle's color
is randomly selected, and the application redraws the circle to follow the user's finger
as it moves across the screen. Finally, when the user touches the screen in multiple
locations, the size of the circles that are drawn changes to reflect the number of
currently active touches.
Let's take a look at this application in action:

When it starts, the screen is blank because I'm not touching it. Now I'll
place one finger on the screen and that causes a single circle to be drawn at the place
where I've touched the screen. As I slide my finger along the screen, you can see that
the circle is redrawn, to track my finger movements. Now, I'll place a second finger on
the screen. And that causes the second circle to be drawn under that finger. And as you
can see, the size of the two circles, is now about half of what you saw when there was
only a single circle. Now, here I'll take away the second finger, and the first circle goes
back to its original size. Now, I'll put the second finger back, and again the two circles
appear at half size. And I can drag these two fingers around the screen, and the circles
will follow my movements. Finally, I'll put down more fingers: four, six, eight, ten. I'm out
of fingers now. So I'll move them around, and then start to take away some fingers:
eight, six, four, two, one.

Let's take a look at the source code for this application's main activity:
package course.examples.Touch.LocateTouch;

import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Random;

import android.annotation.SuppressLint;
import android.app.Activity;
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Paint.Style;
import android.os.Bundle;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;
import android.view.View.OnTouchListener;
import android.widget.FrameLayout;
public class IndicateTouchLocationActivity extends Activity {
    private static final int MIN_DXDY = 2;

    // Assume no more than 20 simultaneous touches
    final private static int MAX_TOUCHES = 20;

    // Pool of MarkerViews
    final private static LinkedList<MarkerView> mInactiveMarkers =
            new LinkedList<MarkerView>();

    // Set of MarkerViews currently visible on the display
    @SuppressLint("UseSparseArrays")
    final private static Map<Integer, MarkerView> mActiveMarkers =
            new HashMap<Integer, MarkerView>();

    protected static final String TAG = "IndicateTouchLocationActivity";
    private FrameLayout mFrame;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        mFrame = (FrameLayout) findViewById(R.id.frame);

        // Initialize pool of Views
        initViews();

        // Create and set on touch listener
        mFrame.setOnTouchListener(new OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                switch (event.getActionMasked()) {

                // Show new MarkerView
                case MotionEvent.ACTION_DOWN:
                case MotionEvent.ACTION_POINTER_DOWN: {
                    int pointerIndex = event.getActionIndex();
                    int pointerID = event.getPointerId(pointerIndex);
                    MarkerView marker = mInactiveMarkers.remove();
                    if (null != marker) {
                        mActiveMarkers.put(pointerID, marker);
                        marker.setXLoc(event.getX(pointerIndex));
                        marker.setYLoc(event.getY(pointerIndex));
                        updateTouches(mActiveMarkers.size());
                        mFrame.addView(marker);
                    }
                    break;
                }

                // Remove one MarkerView
                case MotionEvent.ACTION_UP:
                case MotionEvent.ACTION_POINTER_UP: {
                    int pointerIndex = event.getActionIndex();
                    int pointerID = event.getPointerId(pointerIndex);
                    MarkerView marker = mActiveMarkers.remove(pointerID);
                    if (null != marker) {
                        mInactiveMarkers.add(marker);
                        updateTouches(mActiveMarkers.size());
                        mFrame.removeView(marker);
                    }
                    break;
                }

                // Move all currently active MarkerViews
                case MotionEvent.ACTION_MOVE: {
                    for (int idx = 0; idx < event.getPointerCount(); idx++) {
                        int ID = event.getPointerId(idx);
                        MarkerView marker = mActiveMarkers.get(ID);
                        if (null != marker) {
                            // Redraw only if finger has traveled a minimum distance
                            if (Math.abs(marker.getXLoc() - event.getX(idx)) > MIN_DXDY
                                    || Math.abs(marker.getYLoc() - event.getY(idx)) > MIN_DXDY) {
                                // Set new location
                                marker.setXLoc(event.getX(idx));
                                marker.setYLoc(event.getY(idx));
                                // Request re-draw
                                marker.invalidate();
                            }
                        }
                    }
                    break;
                }

                default:
                    Log.i(TAG, "unhandled action");
                }
                return true;
            }
        });
    }

    // Update number of touches on each active MarkerView
    private void updateTouches(int numActive) {
        for (MarkerView marker : mActiveMarkers.values()) {
            marker.setTouches(numActive);
        }
    }

    private void initViews() {
        for (int idx = 0; idx < MAX_TOUCHES; idx++) {
            mInactiveMarkers.add(new MarkerView(this, -1, -1));
        }
    }

    private class MarkerView extends View {
        private float mX, mY;
        final static private int MAX_SIZE = 400;
        private int mTouches = 0;
        final private Paint mPaint = new Paint();

        public MarkerView(Context context, float x, float y) {
            super(context);
            mX = x;
            mY = y;
            mPaint.setStyle(Style.FILL);
            Random rnd = new Random();
            mPaint.setARGB(255, rnd.nextInt(256), rnd.nextInt(256),
                    rnd.nextInt(256));
        }

        float getXLoc() {
            return mX;
        }

        void setXLoc(float x) {
            mX = x;
        }

        float getYLoc() {
            return mY;
        }

        void setYLoc(float y) {
            mY = y;
        }

        void setTouches(int touches) {
            mTouches = touches;
        }

        @Override
        protected void onDraw(Canvas canvas) {
            canvas.drawCircle(mX, mY, MAX_SIZE / mTouches, mPaint);
        }
    }
}

This code first creates a pool of custom views called MarkerViews. A MarkerView marks
the location of a single touch. Next, the code defines a set that holds the MarkerViews
that are currently visible on the display. Down in onCreate, the code gets the
FrameLayout that represents the main view of this activity, then creates an
OnTouchListener and registers it on that FrameLayout so that the listener's onTouch
callback is invoked for touches on it.

Let's look at that method. When the user touches the screen, this listener's OnTouch
method is called, and that method begins by checking the action code for the new
motion event. If the action code is ACTION_DOWN or ACTION_POINTER_DOWN,
then there's been a new touch. So the code creates and displays a new marker view.
The code does this by recording the pointer ID, and pointer index for this event. It then
takes a marker view from the inactive list. And it then adds that marker view to the
active set, using its pointer ID as the key for this view.
Next, it sets the location of this marker view, and then it updates the total number of
touches for each currently visible marker view. And then it adds the marker view to the
activity's main view.
If instead the action code was ACTION_UP, or ACTION_POINTER_UP, then a finger
has been lifted off the screen, so the code essentially undoes what we just finished
talking about. As before, it begins by recording the pointer ID and pointer index for this
event. It then removes the marker view that was associated with the finger that was just
lifted from the active set. It then adds that marker view back to the inactive list.
Next, it updates the total number of touches for each currently visible MarkerView, and
then it removes the MarkerView from the activity's main view. Lastly, if the action code
is ACTION_MOVE, the code adjusts the location of the affected MarkerViews and
initiates their redrawing by looping over the pointers in the motion event. For each one,
it gets the MarkerView for that pointer and checks whether the pointer has traveled some
minimum distance. If so, it sets a new location for that MarkerView and then calls
invalidate() on it, which indicates that the MarkerView wants to be redrawn.
Handling Gestures
Android provides a class called GestureDetector that applications can use to recognize
common touch gestures. This class can recognize gestures such as a confirmed single
tap; a double tap, which is essentially two single taps in rapid succession; and a fling,
which is a press, followed by a drag-and-release motion, that has a reasonably high
velocity. To use a gesture detector, your activity creates an instance of the
GestureDetector class and gives it an object that implements the
GestureDetector.OnGestureListener interface. The activity then overrides
its onTouchEvent method, which is the method that gets called when the activity
receives a touch event, and this method delegates the motion event to the
gesture detector's onTouchEvent method.
Let's look at an example application that uses a gesture detector to recognize a fling
gesture. This application is called TouchGestureViewFlipper and when it starts, it
presents a text view that displays a number. If the user performs a right to left fling
gesture, then the text view will scroll off the left side of the screen. And while it does it, a
new text view, displaying a new number will scroll in behind it, from the right.

Let's see that application in action:

When it starts up, the screen shows a text view displaying the number zero. Now I'll
perform a fling gesture: I press and hold the view, quickly swipe towards the left side
of the screen, and finally lift my finger off the screen. Then we'll see the animation that
I mentioned earlier. Let me do that now. As you can see, the text view with the number
zero slid off the screen, going towards the left, and the new text view, displaying the
number 1, slid into the screen from the right. Let me do that a few more times. Notice
that this gesture only works if I swipe from right to left. If I try it in the other direction,
nothing happens.
Let's take a look at the source code for the TouchGestureViewFlipper application's main
activity.
package course.examples.touch.ViewTransitions;

import android.app.Activity;
import android.os.Bundle;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.animation.Animation;
import android.view.animation.LinearInterpolator;
import android.view.animation.TranslateAnimation;
import android.widget.TextView;
import android.widget.ViewFlipper;

public class ViewFlipperTestActivity extends Activity {
    private ViewFlipper mFlipper;
    private TextView mTextView1, mTextView2;
    private int mCurrentLayoutState, mCount;
    private GestureDetector mGestureDetector;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        mCurrentLayoutState = 0;
        mFlipper = (ViewFlipper) findViewById(R.id.view_flipper);
        mTextView1 = (TextView) findViewById(R.id.textView1);
        mTextView2 = (TextView) findViewById(R.id.textView2);
        mTextView1.setText(String.valueOf(mCount));

        mGestureDetector = new GestureDetector(this,
                new GestureDetector.SimpleOnGestureListener() {
                    @Override
                    public boolean onFling(MotionEvent e1, MotionEvent e2,
                            float velocityX, float velocityY) {
                        if (velocityX < -10.0f) {
                            mCurrentLayoutState =
                                    mCurrentLayoutState == 0 ? 1 : 0;
                            switchLayoutStateTo(mCurrentLayoutState);
                        }
                        return true;
                    }
                });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        return mGestureDetector.onTouchEvent(event);
    }

    public void switchLayoutStateTo(int switchTo) {
        mCurrentLayoutState = switchTo;
        mFlipper.setInAnimation(inFromRightAnimation());
        mFlipper.setOutAnimation(outToLeftAnimation());
        mCount++;
        if (switchTo == 0) {
            mTextView1.setText(String.valueOf(mCount));
        } else {
            mTextView2.setText(String.valueOf(mCount));
        }
        mFlipper.showPrevious();
    }

    private Animation inFromRightAnimation() {
        Animation inFromRight = new TranslateAnimation(
                Animation.RELATIVE_TO_PARENT, +1.0f,
                Animation.RELATIVE_TO_PARENT, 0.0f,
                Animation.RELATIVE_TO_PARENT, 0.0f,
                Animation.RELATIVE_TO_PARENT, 0.0f);
        inFromRight.setDuration(500);
        inFromRight.setInterpolator(new LinearInterpolator());
        return inFromRight;
    }

    private Animation outToLeftAnimation() {
        Animation outtoLeft = new TranslateAnimation(
                Animation.RELATIVE_TO_PARENT, 0.0f,
                Animation.RELATIVE_TO_PARENT, -1.0f,
                Animation.RELATIVE_TO_PARENT, 0.0f,
                Animation.RELATIVE_TO_PARENT, 0.0f);
        outtoLeft.setDuration(500);
        outtoLeft.setInterpolator(new LinearInterpolator());
        return outtoLeft;
    }
}

First of all, this application uses the ViewFlipper class to handle the animations. I won't
go into that much here, but feel free to study the code after we finish the segment.

For now, let's focus on how this application detects the fling gesture. In onCreate,
you can see that the code creates a new GestureDetector. And in the constructor for
this object, the code passes in a new SimpleOnGestureListener. This object defines
an onFling method. When the GestureDetector detects a fling gesture, this method will
be called. We'll come back to this method in a few seconds.
Right now, let's look at the onTouchEvent method for this activity. This method gets
called when a touch event occurs and no view in the activity handles it. When this
method is called, it simply delegates the call to the gesture detector. If the gesture
detector eventually decides that it has seen a complete fling gesture, the onFling
method above will be called.

The onFling method receives a parameter, in this case called velocityX, that tells
how fast, and in which direction, the swipe gesture was performed. In this example, if the
swipe was moving from right to left at a speed of more than ten pixels per second, the
code invokes a method called switchLayoutStateTo, which starts the animation of
the text views. If the velocity does not meet the criteria (for instance, if it's a slow
drag instead of a fling, or if it's traveling in the wrong direction), the fling gesture is
ignored.
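The velocity test described above reduces to a one-line predicate. This is a minimal plain-Java sketch: the -10.0f threshold comes from the example code, while the class and method names are illustrative:

```java
// Classifies horizontal fling velocity the way the onFling example does:
// negative velocityX means the finger moved right-to-left, and only a
// sufficiently fast motion counts as a fling.
public class FlingFilter {
    private static final float TRIGGER_VELOCITY_X = -10.0f; // from the example

    /** True when the horizontal velocity indicates a right-to-left fling. */
    public static boolean isRightToLeftFling(float velocityX) {
        return velocityX < TRIGGER_VELOCITY_X;
    }
}
```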

Gesture Builder
To recognize more complex gestures, you can use Android's Gesture Builder application
to create and save custom gestures. This application comes bundled with the SDK.
At runtime, you can use the GestureLibraries class to load your custom gestures and to
recognize when a user performs one of those gestures.

To make this work, you include a GestureOverlayView in your application. This view
essentially intercepts user gestures and then invokes your application code to handle
them.
Here's a screenshot of the gesture builder application:

As you can see, I've created four custom gestures: Next, which is a horizontal swipe
from left to right; No, which looks a bit like an X made with a single stroke; Prev, or
previous, which is a horizontal swipe from right to left; and Yes, which looks like a
check mark.
On the emulator, GestureBuilder saves your custom gestures to a file called /mnt/
sdcard/gestures. To use these gestures, you'll need to copy this file into your
application's /res/raw directory.

Let's look at the TouchGestures application. This application displays a small view with
a candidate color for the entire application's background. The background color for the
whole application is initially set to gray, and the user can use the four custom
gestures that I showed earlier to interact with this application. If the user performs the
Next gesture, the candidate color cycles forward. If the user performs the Prev gesture,
the candidate color cycles back. If the user performs the Yes gesture, the application
sets the whole application's background to the current candidate color. And if the user
performs the No gesture, the application's background color is reset to gray.
Let's see the running TouchGestures application. When it starts up, the application's
background is initially gray, but there's a colored square in the middle of the screen. If
I swipe the screen from left to right, the color of that square in the middle changes. And
if I do it again, the color changes again. I can go back to the previous color by
swiping, this time, from right to left instead of left to right. If I decide that I like the
current color, I can perform the Yes gesture. Like so:

As you see, the whole application now has a background of that color. But if I change my
mind, I can perform the No gesture, like so. And as you can see, the application's
background goes back to its initial gray, the color square reappears in the middle of the
layout, and I can keep issuing gestures to look at new candidate colors.
Let's take a look at the source code for the TouchGestures application's main activity:
package course.examples.touch.Gestures;

import java.util.ArrayList;
import java.util.Random;

import android.app.Activity;
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.GestureOverlayView;
import android.gesture.GestureOverlayView.OnGesturePerformedListener;
import android.gesture.Prediction;
import android.graphics.Color;
import android.os.Bundle;
import android.widget.FrameLayout;
import android.widget.RelativeLayout;
import android.widget.Toast;

public class GesturesActivity extends Activity implements
        OnGesturePerformedListener {
    private static final String NO = "No";
    private static final String YES = "Yes";
    private static final String PREV = "Prev";
    private static final String NEXT = "Next";
    private GestureLibrary mLibrary;
    private int mBgColor = 0;
    private int mFirstColor, mStartBgColor = Color.GRAY;
    private FrameLayout mFrame;
    private RelativeLayout mLayout;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        mFrame = (FrameLayout) findViewById(R.id.frame);
        mBgColor = new Random().nextInt(0xFFFFFF) | 0xFF000000;
        mFirstColor = mBgColor;
        mFrame.setBackgroundColor(mBgColor);
        mLayout = (RelativeLayout) findViewById(R.id.main);
        mLayout.setBackgroundColor(mStartBgColor);
        mLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures);
        if (!mLibrary.load()) {
            finish();
        }

        // Make this the target of gesture detection callbacks
        GestureOverlayView gestureView = (GestureOverlayView)
                findViewById(R.id.gestures_overlay);
        gestureView.addOnGesturePerformedListener(this);
    }

    public void onGesturePerformed(GestureOverlayView overlay,
            Gesture gesture) {
        // Get gesture predictions
        ArrayList<Prediction> predictions = mLibrary.recognize(gesture);

        // Get highest-ranked prediction
        if (predictions.size() > 0) {
            Prediction prediction = predictions.get(0);
            // Ignore weak predictions
            if (prediction.score > 2.0) {
                if (prediction.name.equals(PREV)) {
                    mBgColor -= 100;
                    mFrame.setBackgroundColor(mBgColor);
                } else if (prediction.name.equals(NEXT)) {
                    mBgColor += 100;
                    mFrame.setBackgroundColor(mBgColor);
                } else if (prediction.name.equals(YES)) {
                    mLayout.setBackgroundColor(mBgColor);
                } else if (prediction.name.equals(NO)) {
                    mLayout.setBackgroundColor(mStartBgColor);
                    mFrame.setBackgroundColor(mFirstColor);
                } else {
                    Toast.makeText(this, prediction.name,
                            Toast.LENGTH_SHORT).show();
                }
            }
        } else {
            Toast.makeText(this, "No prediction",
                    Toast.LENGTH_SHORT).show();
        }
    }
}

Notice that this activity implements the OnGesturePerformedListener interface,
which means that it provides an onGesturePerformed method. In onCreate, the code
gets a reference to the FrameLayout, which it stores in a variable called mFrame; this
is where the candidate background colors appear. The code also gets a reference to a
RelativeLayout, which it stores in a variable called mLayout; this is the layout for
the entire application.

There's also code that reads the gestures file from the res/raw directory, using the
GestureLibraries.fromRawResource method. This method returns a GestureLibrary
object, and the code then calls the load method on that library. After
that, the code finds the GestureOverlayView, which is in the layout, and adds the current
activity as a listener for gestures that are intercepted by the GestureOverlayView.
When the GestureOverlayView detects a gesture, it calls the onGesturePerformed
method. This method first calls the recognize method, which analyzes the detected
gesture and scores how closely it resembles each of the custom gestures recorded in
the gesture file.
Next the code gets the highest ranked prediction, and if that prediction has a high
enough score the code carries out the action that is associated with that gesture. For
example, if the gesture was the yes gesture, then the code sets the layout's background
color to the current candidate color.
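The selection logic just described (take the top-ranked prediction, act only if its score clears a threshold) can be sketched in plain Java. Prediction here is a tiny stand-in for android.gesture.Prediction, and PredictionFilter with its method name is invented for illustration:

```java
import java.util.List;

// Picks the gesture to act on from a ranked prediction list.
public class PredictionFilter {

    // Minimal stand-in for android.gesture.Prediction (name + score fields)
    public static class Prediction {
        public final String name;
        public final double score;
        public Prediction(String name, double score) {
            this.name = name;
            this.score = score;
        }
    }

    /** Returns the best prediction's name, or null if none is strong enough. */
    public static String bestGesture(List<Prediction> ranked, double minScore) {
        if (ranked.isEmpty()) return null;
        Prediction top = ranked.get(0); // recognize() returns highest-ranked first
        return top.score > minScore ? top.name : null; // ignore weak predictions
    }
}
```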
That's all for our lesson on multi-touch and gestures. Please join me next time, when
we'll discuss multimedia.
