Contents:
1. Background
2. OpenGL ES
2.1 Getting Started with 2D Graphics in Android with OpenGL ES 2.0
2.2 OpenGL ES 2.0 Drawing Essentials
2.3 Working with Touch in OpenGL ES 2.0
2.4 OpenGL ES 2.0 2D Animation
2.5 3D Graphics with Android OpenGL ES 2.0
2.5.1 Basics of 3D Graphics
2.5.2 3D Graphics using OpenGL
3. View Animation
3.1 Preparing the App to Use Both OpenGL and View Animations
3.2 XML Based View Animation Basics
4. Property Animation
4.1 Value Animator
4.2 Object Animator
4.3 AnimatorSet
5. Drawable Animation
6. Canvas APIs
7. Conclusion
Highlights:
- A tutorial that draws all OpenGL primitives using the triangle concept
- Developing 3D primitives in OpenGL from 2D primitives
- An OpenGL touch event handler
- Working with OpenGL and Canvas from the same application
- Abstraction of the Canvas APIs
- A solution to the Drawable Animation deployment problem
- A use-case discussion of all animation types
Android is one of the most popular operating systems for high-end mobile phones. Users spend a lot of money on smartphones because of their utility, and a big part of that utility comes from games and good apps. Many entertainment apps simply need good animation capabilities and flicker-free rendering of the animated objects. Animation must also be bound to user input such as touch and sensors to make the environment more intuitive. OpenGL is a cross-platform graphics API that uses the device's hardware features to create and manipulate graphics.
This tutorial is an effort to teach you the basics of graphics and animation in Android with as much fluidity as I can manage. As usual we will also look at some use cases and how-tos. But because of the broad range of topics, we will stick to a simple application for each concept rather than building a single app as we have done in most of our tutorials.
So far, in whatever Android we have learnt, we have used Android Views to draw objects. Images are drawn on ImageViews; Button, EditText and TextView are all Views placed over the ContentView of the main form.
However, Views are high-level classes. When we talk about OpenGL, we talk about hardware-level drawing and manipulation. As such, Android Views are unsuitable for OpenGL operations. Therefore we need to create a GLSurfaceView, which is the main view for OpenGL animation, and set the current ContentView to that surface view. From then on, whatever we draw or manipulate will be on the GLSurfaceView.
We can render any number of objects using an OpenGL renderer. What is a renderer? Suppose you draw a triangle on the screen and want to rotate it, or you have a line that you want to move across the scene. With conventional graphics programming you would have to calculate and update the end coordinates (called vertex coordinates) on every update. A renderer is an engine that takes care of this: you just say where and by what angle you want to move the object, and the underlying geometry is handled by the renderer.
An OpenGL object that a renderer can render on the surface view can be a shape such as a polygon, rectangle or triangle; it can be an image; or it can be a complex object combining several basic objects. Each of these objects has a geometry and a coordinate system used to draw it on the screen, and can also have a shader. A shader is the mechanism for colouring the object, either with plain colors or with textures.
Figure 2.1 clearly explains these concepts.
Figure 2.1 OpenGL ES Workflow in Android
At this point you should know that OpenGL support in Android is distinctly divided into two major classes of APIs: OpenGL ES 1.x and the more modern OpenGL ES 2.x.
The two API classes differ from each other in major ways and in most cases do not go together. All modern devices support the ES 2.x APIs, so we will work only with OpenGL ES 2.0 in this tutorial; after all, who wants to learn Windows 98 in the age of Windows 8?
First we need to declare in our manifest file that OpenGL ES 2.0 is a requirement for the application, so that when you publish the app it will not be offered to devices that cannot support it.
<uses-feature android:glEsVersion="0x00020000" android:required="true" />
As we discussed, we will use GLSurfaceView as the main view in the activity class, so nothing much is needed in your XML layout. Just create a simple layout as given below.
<LinearLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="vertical" >
</LinearLayout>
So this is as simple as it gets. Now you need an object of GLSurfaceView to act as the main view. However, you will render your own objects with your own logic, so it makes sense to extend the class and override the methods you need.
So first create a simple class named MyGlSurfaceView by extending GLSurfaceView:
public class MyGlSurfaceView extends GLSurfaceView
{
    MyRenderer mr;
    TextView tv;
    MainActivity maContext;

    public MyGlSurfaceView(Context context)
    {
        super(context);
        setEGLContextClientVersion(2);
        setRenderer(new Renderer() {
            public void onSurfaceCreated(GL10 unused, EGLConfig config) {
                GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            }

            public void onDrawFrame(GL10 unused) {
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            }

            public void onSurfaceChanged(GL10 unused, int width, int height) {
                GLES20.glViewport(0, 0, width, height);
            }
        });
        setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
    }

    @Override
    public boolean onTouchEvent(MotionEvent e)
    {
        final float x = e.getX();
        final float y = e.getY();
        return false;
    }
}
When you create a class extending GLSurfaceView, the IDE will prompt you to implement the constructor. Implement it by calling super with the context that is passed in. Also call setEGLContextClientVersion(2) to tell Android that you will be using ES version 2.
In the constructor you also have to set the renderer. Assuming that we have nothing to render at the moment, we can simply write setRenderer(new Renderer()); As soon as you do that, Eclipse will automatically create the methods to be overridden for the anonymous Renderer object. Once the methods are created, we use simple one-liners to do the initialization work.
There are three methods to be overridden:
- onSurfaceCreated: Called when the GLSurfaceView instance initializes its rendering objects. It uses glClearColor to clear the background to black. The last parameter is the alpha (transparency) value. You are free to use other colors by experimenting with the r, g, b values, which are the first three parameters.
- onDrawFrame: Called whenever something needs to be redrawn (or the rendering changes). Every time the screen is rendered we clear the drawing buffer with the preset clear color. As we intend to do color drawing, we use GL_COLOR_BUFFER_BIT. You can also combine GL_DEPTH_BUFFER_BIT and GL_STENCIL_BUFFER_BIT depending on your application and rendering preferences.
- onSurfaceChanged: Called when the device view changes, say from landscape to portrait. We use GLES20.glViewport to reinitialize the viewport based on the current device orientation. You may already know that a viewport is a polygonal viewing region; it is the area over which the objects are rendered.
Lastly, notice the setRenderMode option, which can be set to either RENDERMODE_WHEN_DIRTY or RENDERMODE_CONTINUOUSLY. Keeping the render mode CONTINUOUSLY forces rendering, or redrawing of all the graphics objects, on every display refresh (typically up to 60 frames per second). WHEN_DIRTY forces a call to onDrawFrame of the renderer only when requestRender() is called, for instance after any of the matrices specific to the renderer's objects have changed. We will see more about the matrices soon.
Even though you are using GLES 2.0, the GL10 argument in these methods might surprise you. Don't be distracted by it; it exists to keep continuity with the previous GL version and will mostly remain unused.
That covers the Renderer options. Though extending GLSurfaceView does not prompt you to override onTouchEvent, I would urge you to include this method in the basic structure of your application.
OK, now let's turn our focus to MainActivity. Like you, I am quite eager to see my first OpenGL application on a phone.
Let us declare an object of the MyGlSurfaceView class as a member of MainActivity:
MyGlSurfaceView mgsv;
Let us initialize the object in onCreate by passing the MainActivity instance. Remember that in other Android applications we used setContentView(R.layout.activity_main). But here we are not really concerned with activity_main, so we simply replace R.layout.activity_main with the MyGlSurfaceView object.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    mgsv = new MyGlSurfaceView(this);
    setContentView(mgsv);
}
Having done our hard work, let's build and run the application. Here is a screenshot of running it with GLES20.glClearColor(7.0f, 2.0f, 0.0f, .20f); (note that color components are clamped to the 0-1 range).
Figure 2.2 First OpenGL Screen
Okay! We are now ready to play with the OpenGL ES APIs in much more detail. However, before that I want to add a little more spice to our discussion, something I couldn't find in any other article.
It is often necessary to send some data from OpenGL back to the activity. For example, you may want to return the score of a simple game to the activity, where it will be updated in the database, or to trigger a Toast in the activity in response to some OpenGL work. So we are talking about passing some data or message back to the calling form.
In order to communicate back with the activity instance, all you have to do is declare an Activity field in the SurfaceView class and initialize it with the context that you pass from the activity. So let's track the x and y coordinates in the touch event handler and push them into the title of the MainActivity form.
In the MyGlSurfaceView class, declare:
MainActivity maContext;
In the constructor of MyGlSurfaceView, initialize:
maContext=(MainActivity) context;
And finally, from onTouchEvent, put the x and y data into the title:
@Override
public boolean onTouchEvent(MotionEvent e)
{
    final float x = e.getX();
    final float y = e.getY();
    Log.i("GL Surface View", "X=" + x + " Y=" + y);
    maContext.runOnUiThread(new Runnable() {
        @Override
        public void run() {
            maContext.setTitle("X=" + x + " Y=" + y);
        }
    });
    return false;
}
Now when you debug and run the application and move your finger over the screen, you will see the x and y values in the title bar.
Figure 2.3 Message passing from OpenGL to MainActivity
This article assumes that you have no prior knowledge of OpenGL. If you have ever worked with OpenGL, you will know that drawing in OpenGL ES 2.0 is really not as straightforward as you might think. Also, if you have worked with any other graphics platform, such as Android's native graphics support or .NET GDI+, you are probably accustomed to drawing with calls like drawLine, drawCircle and so on. But OpenGL uses your hardware for graphics rendering, so drawing even simple shapes like lines and squares needs hefty code.
So, we will do something very special here:
We will first learn the essentials of drawing, which will help you understand the mechanics, and then we will develop a custom drawing class with APIs that make it really easy to draw simple shapes using more conventional draw calls. But first things first.
In order to understand what it takes to draw an object, analyze figure 2.4.
Figure 2.4 OpenGL ES 2.0 Drawing Logic
1) Whatever you want to draw needs to be specified as a set of vertex coordinates, and the vertices must form a closed graph. A triangle has three vertices; a square has four. A line needs to be thought of as a rectangle with a very thin width. A circle may have 360 vertices, calculated from its center and radius, and a point can be defined as a circle with a very small radius.
2) OpenGL draws objects using shaders, so the vertex and color data need to be passed to a shader. The shader system first places the vertices using a vertex shader, then renders the face using a fragment shader. The whole shader system must be precompiled and kept as an OpenGL program, which is then used to draw the shapes.
3) Projection: maps the OpenGL coordinate system to the device coordinate system.
4) The camera is a virtual object that makes the view more realistic. You can change the camera position to get different perspectives of the scene. This is usually very important for 3D rendering.
Needless to say, draw is called from onDrawFrame of the renderer.
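Point 1) says a line is really a thin rectangle. As a quick illustration, here is a small sketch (the class and method names are my own, not part of the tutorial's code) that computes the four corner vertices of such a rectangle for a segment of any slope, by offsetting both endpoints along the segment's unit normal:

```java
public class LineQuad {
    // Hypothetical helper: turn a line segment (x1,y1)-(x2,y2) into the
    // 4 corner vertices (x,y,z each) of a thin rectangle of width w.
    public static float[] lineVertices(float x1, float y1,
                                       float x2, float y2, float w) {
        float dx = x2 - x1, dy = y2 - y1;
        float len = (float) Math.hypot(dx, dy);
        float nx = -dy / len * (w / 2);   // unit normal scaled to half-width
        float ny =  dx / len * (w / 2);
        return new float[]{
            x1 + nx, y1 + ny, 0f,
            x1 - nx, y1 - ny, 0f,
            x2 - nx, y2 - ny, 0f,
            x2 + nx, y2 + ny, 0f};
    }

    public static void main(String[] args) {
        // A horizontal line from (-0.5, 0) to (0.5, 0), 0.01 units thick
        float[] v = lineVertices(-0.5f, 0f, 0.5f, 0f, 0.01f);
        System.out.println((v.length / 3) + " vertices");
    }
}
```

The line drawn later in this chapter simply hard-codes two x values 0.01 apart; this helper generalizes the same idea to any orientation.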
Most tutorials on the Internet about OpenGL present a different class for every shape, because the OpenGL approach differs slightly for each shape. But as figure 2.4 shows, the basic rendering pipeline does not change, so we will play it smart and beat the learning curve with a single, general drawing class.
So let us prepare MyGeneralOpenGLES2DrawingClass.
First let us define the shaders.
private final String vertexShaderCode =
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"void main() {" +
" gl_Position = uMVPMatrix * vPosition;" +
"}";
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
These are simple C-like programs (written in GLSL) kept inside strings. The vertex shader expects a projection matrix and vertex coordinates and produces the final coordinates by multiplying the two.
The fragment shader is even more straightforward and assigns the color variable to gl_FragColor.
We need to first load the shader with our shader code and then compile it.
GLES20.glCreateShader(type) is called with either GLES20.GL_VERTEX_SHADER or GLES20.GL_FRAGMENT_SHADER and returns an integer shader handle. The source (vertexShaderCode or fragmentShaderCode) is attached with GLES20.glShaderSource(shader, shaderCode), and the shader is then compiled with GLES20.glCompileShader(shader). Rather than cluttering the code with these calls, let us create a simple utility method called loadShader, as given below:
public static int loadShader(int type, String shaderCode) {
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, shaderCode);
    GLES20.glCompileShader(shader);
    return shader;
}
From figure 2.4 it is clear that we have to pass the coordinates of the shape and its color. The class must create an OpenGL object by passing them into the shader code and preparing a precompiled program. This program is then used with the camera and projection objects to draw the shape.
As we already know from point 1) at the top of this subsection, the vertices must form a closed path. The triangle can be thought of as the elementary shape; all other shapes can be defined from it by following a specific path.
Figure 2.5 explains how you can draw different shapes using triangles. I have used triangulation because it is also the basic building block of a 3D mesh, so using the same concept in 2D simplifies the 3D material later.
Figure 2.5: Defining Shapes in OpenGL ES 2.0 ( Triangle Method)
So it is very clear from the diagram that no matter what shape we want to draw, once we can draw a triangle successfully, the rest is a cakewalk.
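As a quick check of the triangle method, the "draw order" index lists used throughout this chapter can be generated for any convex polygon by fanning out from vertex 0. This is a hypothetical helper for illustration, not part of the tutorial's drawing class:

```java
public class FanIndices {
    // For n vertices we get (n - 2) triangles: {0,1,2, 0,2,3, ...}
    public static short[] fanDrawOrder(int vertexCount) {
        short[] order = new short[(vertexCount - 2) * 3];
        for (int t = 0; t < vertexCount - 2; t++) {
            order[t * 3]     = 0;               // every triangle starts at vertex 0
            order[t * 3 + 1] = (short) (t + 1); // current rim vertex
            order[t * 3 + 2] = (short) (t + 2); // next rim vertex
        }
        return order;
    }

    public static void main(String[] args) {
        // A rectangle (4 vertices) yields the familiar {0,1,2, 0,2,3}
        System.out.println(java.util.Arrays.toString(fanDrawOrder(4)));
    }
}
```

This is exactly where the short[]{0,1,2,0,2,3} arrays in the upcoming demos come from.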
So let us now provide a parameterized constructor for MyGeneralOpenGLES2DrawingClass:
public MyGeneralOpenGLES2DrawingClass(int coordsPerVertex, float[] coordinates,
                                      float[] color, short[] drawOrder)
{
    this.drawOrder = drawOrder;
    COORDS_PER_VERTEX = coordsPerVertex;
    coords = coordinates;
    vertexCount = coords.length / COORDS_PER_VERTEX;
    vertexStride = COORDS_PER_VERTEX * 4;   // 4 bytes per float

    this.color = color;

    ByteBuffer bb = ByteBuffer.allocateDirect(coords.length * 4);
    bb.order(ByteOrder.nativeOrder());
    vertexBuffer = bb.asFloatBuffer();
    vertexBuffer.put(coords);
    vertexBuffer.position(0);

    ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2); // 2 bytes per short
    dlb.order(ByteOrder.nativeOrder());
    drawListBuffer = dlb.asShortBuffer();
    drawListBuffer.put(drawOrder);
    drawListBuffer.position(0);

    int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
    int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);
    mProgram = GLES20.glCreateProgram();
    GLES20.glAttachShader(mProgram, vertexShader);
    GLES20.glAttachShader(mProgram, fragmentShader);
    GLES20.glLinkProgram(mProgram);
}
The code above becomes fascinatingly easy once we understand figures 2.4 and 2.5. We initialize vertexBuffer with all the coordinate values. mProgram is an integer that is essentially a handle to the precompiled OpenGL program. The program is first attached to the vertex shader, then to the fragment shader, and finally linked.
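The buffer preparation is plain java.nio and can be tried outside Android. Here is a minimal sketch of the packing steps used in the constructor (BufferDemo and toVertexBuffer are illustrative names of my own):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferDemo {
    public static FloatBuffer toVertexBuffer(float[] coords) {
        // 4 bytes per float; allocateDirect gives memory the GL driver can read
        ByteBuffer bb = ByteBuffer.allocateDirect(coords.length * 4);
        bb.order(ByteOrder.nativeOrder());   // must match the device's byte order
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(coords);
        fb.position(0);                      // rewind so GL reads from the start
        return fb;
    }

    public static void main(String[] args) {
        FloatBuffer vb = toVertexBuffer(new float[]{-0.5f, 0.5f, 0f});
        System.out.println(vb.capacity() + " floats, first = " + vb.get(0));
    }
}
```

Forgetting position(0) is a classic mistake: the buffer's read position would then sit at the end, and nothing would be drawn.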
We now have all the necessary ingredients for our draw method, so let us complete it too; it shouldn't be too difficult.
public void draw(float[] mvpMatrix) {
    GLES20.glUseProgram(mProgram);
    mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
    GLES20.glEnableVertexAttribArray(mPositionHandle);
    GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
            GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);
    mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
    GLES20.glUniform4fv(mColorHandle, 1, color, 0);
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
    int drawMode = GLES20.GL_TRIANGLES;
    GLES20.glDrawElements(drawMode, drawOrder.length,
            GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
    GLES20.glDisableVertexAttribArray(mPositionHandle);
}
So we first activate the mProgram object we created. Using glGetAttribLocation we obtain the location of the vPosition variable in our vertex shader. Having obtained it in mPositionHandle, we enable it to hold a vertex array with a call to glEnableVertexAttribArray. Then we load our vertexBuffer into it.
For the fragment shader we obtain the vColor location and load our color array.
uMVPMatrix is the Model-View-Projection matrix, the model matrix that the OpenGL program uses. It is updated by the Renderer object as per the rendering requirement. A handle to it is obtained in mMVPMatrixHandle by calling glGetUniformLocation, and its value is set with glUniformMatrix4fv. Finally we draw the shape by calling glDrawElements.
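To make the matrix plumbing less magical: android.opengl.Matrix stores a 4x4 matrix as a flat float[16] in column-major order. The following plain-Java sketch (MatVec is my own name; it mirrors what Matrix.multiplyMV computes) shows how a point is transformed, and why a translation lives in elements 12-14 of the array:

```java
public class MatVec {
    // Multiply a 4x4 column-major matrix by a 4-component vector,
    // the same layout android.opengl.Matrix uses.
    public static float[] multiplyMV(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[row] * v[0] + m[4 + row] * v[1]
                     + m[8 + row] * v[2] + m[12 + row] * v[3];
        }
        return out;
    }

    public static void main(String[] args) {
        // The identity matrix leaves the point unchanged
        float[] identity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
        float[] p = multiplyMV(identity, new float[]{0.5f, -0.5f, 0f, 1f});
        System.out.println(p[0] + ", " + p[1]);
    }
}
```

The w component (the fourth entry, 1 for a point) is what later makes the perspective division in the touch-conversion code meaningful.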
Here is our final class:
package com.integratedideas.animationandghraphics.openglutilities;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

import android.opengl.GLES20;

public class MyGeneralOpenGLES2DrawingClass
{
    private final String vertexShaderCode =
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 vPosition;" +
        "void main() {" +
        "  gl_Position = uMVPMatrix * vPosition;" +
        "}";

    private final String fragmentShaderCode =
        "precision mediump float;" +
        "uniform vec4 vColor;" +
        "void main() {" +
        "  gl_FragColor = vColor;" +
        "}";

    private final FloatBuffer vertexBuffer;
    private final int mProgram;
    private int mPositionHandle;
    private int mColorHandle;
    private int mMVPMatrixHandle;
    private ShortBuffer drawListBuffer;
    private short drawOrder[] = { 0, 1, 2 };
    static int COORDS_PER_VERTEX = 0;
    static float coords[] = {};
    private final int vertexCount;
    private final int vertexStride;
    public float[] color = new float[4];
    int drawMode = GLES20.GL_TRIANGLES;

    public MyGeneralOpenGLES2DrawingClass(int coordsPerVertex, float[] coordinates,
                                          float[] color, short[] drawOrder)
    {
        this.drawOrder = drawOrder;
        COORDS_PER_VERTEX = coordsPerVertex;
        coords = coordinates;
        vertexCount = coords.length / COORDS_PER_VERTEX;
        vertexStride = COORDS_PER_VERTEX * 4;
        this.color = color;

        ByteBuffer bb = ByteBuffer.allocateDirect(coords.length * 4);
        bb.order(ByteOrder.nativeOrder());
        vertexBuffer = bb.asFloatBuffer();
        vertexBuffer.put(coords);
        vertexBuffer.position(0);

        ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2);
        dlb.order(ByteOrder.nativeOrder());
        drawListBuffer = dlb.asShortBuffer();
        drawListBuffer.put(drawOrder);
        drawListBuffer.position(0);

        int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
        int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);
        mProgram = GLES20.glCreateProgram();
        GLES20.glAttachShader(mProgram, vertexShader);
        GLES20.glAttachShader(mProgram, fragmentShader);
        GLES20.glLinkProgram(mProgram);
    }

    public static int loadShader(int type, String shaderCode) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, shaderCode);
        GLES20.glCompileShader(shader);
        return shader;
    }

    public void draw(float[] mvpMatrix) {
        GLES20.glUseProgram(mProgram);
        mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
        GLES20.glEnableVertexAttribArray(mPositionHandle);
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);
        mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
        GLES20.glUniform4fv(mColorHandle, 1, color, 0);
        mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
        GLES20.glDrawElements(drawMode, drawOrder.length,
                GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
        GLES20.glDisableVertexAttribArray(mPositionHandle);
    }
}
But before we can test our fascinating work, there is still a bit left: we need to work on the renderer.
Remember from figure 2.4 that the final drawing logic should combine both the camera view and the projection view to render the object.
We can initialize a view matrix mViewMatrix with Matrix.setLookAtM as below.
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
The reader is encouraged to hover the mouse over this call in Eclipse for more detail about the parameters.
Understanding the projection matrix becomes easier with the following figure.
Figure 2.6 OpenGL Normalized Coordinate System and Concept of Projection
So it is quite clear from the diagram that projection, in its simplest form, is just scaling. OpenGL gives a device-independent normalized coordinate system, and when the screen changes (say, orientation goes from portrait to landscape) the onSurfaceChanged method is called. So we modify this method to update our projection matrix with a scale obtained from the width-to-height ratio.
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
Finally, our onDrawFrame becomes:
@Override
public void onDrawFrame(GL10 unused) {
    float[] scratch = new float[16];
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    // Camera at z = -3 looking at the origin, with +y as "up"
    Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
    // A thin rectangle acting as a line
    new MyGeneralOpenGLES2DrawingClass(3,
            new float[]{ -0.5f, 0.5f, 0.0f,   -0.5f, -0.5f, 0.0f,
                         -0.49f, -0.5f, 0.0f, -0.49f, 0.5f, 0.0f },
            new float[]{1.0f, 0.0f, 0.0f, 1.0f},
            new short[]{0, 1, 2, 0, 2, 3}).draw(mMVPMatrix);
    // A rectangle
    new MyGeneralOpenGLES2DrawingClass(3,
            new float[]{ -1.0f, 0.5f, 0.0f,  -1.0f, 1.0f, 0.0f,
                          0, 1.0f, 0.0f,      0, .5f, 0.0f },
            new float[]{0.0f, 1.5f, 0.0f, 1.0f},
            new short[]{0, 1, 2, 0, 2, 3}).draw(mMVPMatrix);
    // A triangle
    new MyGeneralOpenGLES2DrawingClass(3,
            new float[]{ 0.9f, 0.7f, 0.0f,  .9f, .2f, 0.0f,  .4f, .2f, 0.0f },
            new float[]{0.0f, 0.0f, 1.0f, 1.0f},
            new short[]{0, 1, 2}).draw(mMVPMatrix);
}
And it is an awesome thing to be able to display a line, a triangle and a rectangle, each with a single line of code, using our generalized class!
Figure 2.7 Output of Drawing Basic Shapes Using OpenGL ES
Observe that we haven't drawn the circle. Using the pattern given in figure 2.4, it is tedious to build a circle's draw order; a format of {v1, v2, v3, v4, v5, ... v365} is better suited. Such a vertex list can be drawn using the glDrawArrays method with the GL_TRIANGLE_FAN flag, so we will just modify our general drawing class a little:
if (coords.length > 100)
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 364);
else
    GLES20.glDrawElements(drawMode, drawOrder.length,
            GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
GLES20.glDisableVertexAttribArray(mPositionHandle);
And you can draw a circle simply with the following lines of code:
float vertices[] = new float[364 * 3];
vertices[0] = 0;   // x, y, z of the circle's center
vertices[1] = 0;
vertices[2] = 0;
float radius = 0.5f;
for (int i = 1; i < 364; i++) {
    vertices[(i * 3) + 0] = (float) (radius * Math.cos((3.14 / 180) * (float) i) + vertices[0]);
    vertices[(i * 3) + 1] = (float) (radius * Math.sin((3.14 / 180) * (float) i) + vertices[1]);
    vertices[(i * 3) + 2] = 0;
}
new MyGeneralOpenGLES2DrawingClass(3, vertices,
        new float[]{0.0f, 0.0f, 1.0f, 1.0f},
        new short[]{0, 1, 2}).draw(mMVPMatrix);
Here vertices[0], vertices[1] and vertices[2] are the x, y, z of the center of the circle.
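The generation loop can be sanity-checked on a plain JVM. This sketch mirrors the loop above (swapping the 3.14/180 degree-to-radian approximation for Math.toRadians, and using hypothetical names) and confirms that every rim vertex sits exactly one radius from the center:

```java
public class CircleCheck {
    // Same fan layout as above: vertex 0 is the center, 1..363 are the rim
    public static float[] circleVertices(float cx, float cy, float radius) {
        float[] v = new float[364 * 3];
        v[0] = cx; v[1] = cy; v[2] = 0f;
        for (int i = 1; i < 364; i++) {
            v[i * 3]     = (float) (radius * Math.cos(Math.toRadians(i)) + cx);
            v[i * 3 + 1] = (float) (radius * Math.sin(Math.toRadians(i)) + cy);
            v[i * 3 + 2] = 0f;
        }
        return v;
    }

    public static void main(String[] args) {
        float[] v = circleVertices(0f, 0f, 0.5f);
        // Distance of the first rim vertex from the center should equal the radius
        double d = Math.hypot(v[3] - v[0], v[4] - v[1]);
        System.out.println("distance of first rim vertex from center = " + d);
    }
}
```

This rim-vertex-at-radius property is exactly what the touch-detection code later exploits to recover the circle's radius from its vertex array.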
Figure 2.8 Results Incorporating Circle Demo in Portrait Mode
Now, this part is quite tricky. When you search the Internet for a solution to "how to find which OpenGL object is touched", you get tons of advice but hardly any working solution. That is because OpenGL has no native support for telling you whether, or which, object is touched.
In order to understand the magnitude of the problem, first refer to figure 2.3. You can see that the touch coordinates are absolute device coordinates. Observe the triangle and rectangle coordinates in the onDrawFrame demo of the Renderer and figure 2.6. You may well be led to believe that projection is basically just scaling, so it shouldn't be technically too difficult to convert the absolute coordinate system to the "projected" coordinate system. But projection has two stages in OpenGL: first the device coordinate is converted to the normalized coordinate system, and then a projected coordinate system is built in the renderer's onDrawFrame method using the camera object.
So if you change the camera's z coordinate (the third parameter after the offset) in the following line, you will see a different display.
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
The following figure shows an entirely reversed view with different scales, based on the camera angle.
Figure 2.9 Change of Display Based on Camera Angle
So if we really want to find out which object is touched, we need the following algorithm:
1) In the SurfaceView class, using the context of the main activity, find the device's current screen width and height.
2) In the overridden onTouchEvent method of the surface view class, obtain a normalized coordinate by dividing the touch coordinates by the device width and height and adjusting the origin to the center. Call the result the normalized coordinates.
Points 1) and 2) are implemented as below:
public float[] SimpleTouch2GLCoord(Point touch)
{
    Display display = maContext.getWindowManager().getDefaultDisplay();
    Point size = new Point();
    display.getSize(size);
    float screenW = size.x;
    float screenH = size.y;
    float normalizedX = 2f * touch.x / screenW - 1f;
    float normalizedY = 1f - 2f * touch.y / screenH;
    float normalizedZ = 0.0f;
    return new float[]{normalizedX, normalizedY, normalizedZ};
}
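Stripped of the Android Display lookup, the normalization is pure arithmetic, which makes it easy to verify: the screen center must map to the GL origin, and the top-left pixel to (-1, 1). A minimal sketch with illustrative names of my own:

```java
public class TouchNorm {
    // Same mapping as SimpleTouch2GLCoord, minus the Display lookup:
    // pixel (0,0) is top-left; normalized (0,0) is the screen center.
    public static float[] normalize(float touchX, float touchY,
                                    float screenW, float screenH) {
        float nx = 2f * touchX / screenW - 1f;
        float ny = 1f - 2f * touchY / screenH;   // y axis flips: GL's +y is up
        return new float[]{nx, ny, 0f};
    }

    public static void main(String[] args) {
        // Center of a 1080x1920 screen maps to the GL origin
        float[] c = normalize(540f, 960f, 1080f, 1920f);
        System.out.println(c[0] + ", " + c[1]);
    }
}
```

Note the y flip: touch y grows downward while normalized y grows upward, which is why the formula subtracts from 1 rather than adding.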
3) Pass the normalized coordinates to the Renderer class, where the following method converts the normalized coordinate system to the projected coordinate system.
public float[] glCoordinate(float normalizedX, float normalizedY)
{
    float[] invertedMatrix = new float[16];
    float[] transformMatrix = new float[16];
    float[] normalizedInPoint = new float[4];
    float[] outPoint = new float[4];

    normalizedInPoint[0] = normalizedX;
    normalizedInPoint[1] = normalizedY;
    normalizedInPoint[2] = -1.0f;
    normalizedInPoint[3] = 1.0f;

    Matrix.multiplyMM(transformMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
    Matrix.invertM(invertedMatrix, 0, transformMatrix, 0);
    Matrix.multiplyMV(outPoint, 0, invertedMatrix, 0, normalizedInPoint, 0);

    if (outPoint[3] == 0.0)
    {
        Log.e("World coords", "ERROR!");
        return new float[]{9999, 9999, 9999};
    }
    // Perspective division by the w component
    return new float[]{ outPoint[0] / outPoint[3], outPoint[1] / outPoint[3] };
}
4) Now in MyGeneralOpenGLES2DrawingClass, compare the projected touch coordinate with the projected coordinates of the drawing object itself. The comparison is based on a simple search technique:
a) For a circle, calculate the radius as the Euclidean distance between the center (the first vertex) and the second vertex (a point on the perimeter). If the touch point's distance from the center is less than the radius, the circle is touched.
b) For a rectangle, check whether the touch point lies between the first and the third vertex (the two endpoints of the first diagonal, which defines the area of the square/rectangle).
c) For a triangle, calculate the center point with centerX = (v1X + v2X + v3X) / 3 and centerY = (v1Y + v2Y + v3Y) / 3, where v1, v2 and v3 are the three vertices. Then calculate the distance between the center and v2, which acts as a threshold. If the distance between the touch point and the center is less than the threshold, the triangle is selected.
public boolean isTouched(float[] touchPoint)
{
    float x2 = touchPoint[0];
    float y2 = touchPoint[1];
    if (coords.length == 9)          // triangle: 3 vertices
    {
        float midPointX = (coords[0] + coords[3] + coords[6]) / 3;
        float midPointY = (coords[1] + coords[4] + coords[7]) / 3;
        float thrDist = eucledian(midPointX, midPointY, coords[3], coords[4]);
        float dstFromTouch = eucledian(midPointX, midPointY, x2, y2);
        if (dstFromTouch <= thrDist)
        {
            Log.i("Matched", "Triangle");
            return true;
        }
    }
    if (coords.length == 12)         // rectangle or line: 4 vertices
    {
        if (x2 >= coords[0] && x2 <= coords[6] && y2 >= coords[1] && y2 <= coords[7])
        {
            Log.i("Matched", "Rect/Line");
            return true;
        }
    }
    if (coords.length > 100)         // circle: 364 vertices
    {
        float radi = eucledian(coords[0], coords[1], coords[3], coords[4]);
        float dstFromTouch = eucledian(coords[0], coords[1], x2, y2);
        if (dstFromTouch <= radi)
        {
            Log.i("Matched", "Circle");
            return true;
        }
    }
    return false;
}
where the eucledian method is implemented as below:
float eucledian(float x1, float y1, float x2, float y2)
{
    return (float) Math.sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));
}
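The triangle rule from step 4c is pure arithmetic and can be exercised off-device. This sketch (HitCheck is a hypothetical class of my own) restates the rule and tries it against the triangle drawn earlier at (0.9, 0.7), (0.9, 0.2), (0.4, 0.2):

```java
public class HitCheck {
    static float euclidean(float x1, float y1, float x2, float y2) {
        return (float) Math.hypot(x1 - x2, y1 - y2);
    }

    // Rule 4c: touched if the touch point is closer to the centroid
    // than the second vertex is. coords holds x,y,z triples.
    public static boolean triangleTouched(float[] coords, float tx, float ty) {
        float cx = (coords[0] + coords[3] + coords[6]) / 3;
        float cy = (coords[1] + coords[4] + coords[7]) / 3;
        float threshold = euclidean(cx, cy, coords[3], coords[4]);
        return euclidean(cx, cy, tx, ty) <= threshold;
    }

    public static void main(String[] args) {
        float[] tri = {0.9f, 0.7f, 0f, 0.9f, 0.2f, 0f, 0.4f, 0.2f, 0f};
        System.out.println(triangleTouched(tri, 0.75f, 0.35f)); // near centroid
        System.out.println(triangleTouched(tri, -0.9f, -0.9f)); // far away
    }
}
```

Note that this is an approximation: the circle around the centroid does not coincide with the triangle exactly, so touches near the corners may be missed and touches just outside an edge may register. It is good enough for the demo here.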
In order to test this, just modify the onTouchEvent method as below:
@Override
public boolean onTouchEvent(MotionEvent e)
{
    final float x = e.getX();
    final float y = e.getY();
    final float[] normCoord = SimpleTouch2GLCoord(new Point((int) x, (int) y));
    final float[] glCoord = rend.glCoordinate(normCoord[0], normCoord[1]);
    Log.i("GlX=" + glCoord[0] + " glY=" + glCoord[1], "X=" + x + " Y=" + y);
    maContext.runOnUiThread(new Runnable() {
        @Override
        public void run()
        {
            String s = "GlX=" + glCoord[0] + " glY=" + glCoord[1] + " X=" + x + " Y=" + y;
            if (rend.triangle.isTouched(glCoord))
            {
                s = s + " TRI TOUCHED";
            }
            if (rend.line.isTouched(glCoord))
            {
                s = s + " LINE TOUCHED";
            }
            if (rend.circle.isTouched(glCoord))
            {
                s = s + " CIR TOUCHED";
            }
            if (rend.rect.isTouched(glCoord))
            {
                s = s + " RECT TOUCHED";
            }
            maContext.setTitle(s);
        }
    });
    return false;
}
If all goes smoothly, you will see a result like the one below:
Figure 2.10: Result of Touched object detection
There are specifically three transform types with out-of-the-box animation support in OpenGL: scale, translation and rotation. This is the same concept adopted in WPF. Since the basis of OpenGL drawing is a vertex matrix, all you need to do to apply an animation is apply a transformation to that matrix at render time. The original object coordinates are not changed; only their rendering is transformed.
This group of transforms is therefore also called "render transforms", as only the object's rendering is transformed.
So if we want to use such an animation, where do we write the code?
You got it absolutely spot on: in the onDrawFrame method of the renderer.
So, for a rotation transform:
1) Create a rotation matrix:
Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, -1.0f);
where mAngle is the rotation angle.
2) Multiply the rotation matrix with the projection matrix. Be sure not to change the order, since these are matrix operations and matrix multiplication is not commutative.
Matrix.multiplyMM(scratch, 0, mMVPMatrix, 0, mRotationMatrix, 0);
3) Now all you have to do is draw the object with the scratch matrix rather than the projection matrix:
rect.draw(scratch);
4) If you want object-specific manipulation, check which object is selected through touch and apply the rotation only to it. So, from MyGlSurfaceView, set a variable of the Renderer class to record which object is to be manipulated.
if(rend.triangle.isTouched(glCoord))
{
rend.mover="TRI";
}
if(rend.line.isTouched(glCoord))
{
rend.mover="LINE";
}
if(rend.circle.isTouched(glCoord))
{
rend.mover="CIR";
}
if(rend.rect.isTouched(glCoord))
{
rend.mover="RECT";
}
And finally, in the onDrawFrame method of the renderer, apply the transform based on the value of the mover variable:
if(!mover.equals("TRI"))
{
triangle.draw(mMVPMatrix);
}
else
{
triangle.draw(scratch);
}
if(!mover.equals("CIR"))
{
circle.draw(mMVPMatrix);
}
else
{
circle.draw(scratch);
}
mAngle can be set by comparing the previous and current touch positions.
Figure 2.11: Render Transform Based Animation
For Scale and Translate transforms use Matrix.translateM and Matrix.scaleM methods respectively.
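As an aside, setRotateM simply builds a standard rotation matrix. The following plain-Java sketch (no Android dependency; this is not the Android source, only the textbook math for rotation about the +z axis, with matrices stored column-major as OpenGL expects) shows what the transform does to a vertex:

```java
public class RotateZ {
    // Build a 4x4 rotation matrix about the +z axis, column-major.
    public static float[] rotationZ(float degrees) {
        double r = Math.toRadians(degrees);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[]{
             c,  s, 0, 0,   // column 0
            -s,  c, 0, 0,   // column 1
             0,  0, 1, 0,   // column 2
             0,  0, 0, 1    // column 3
        };
    }

    // Multiply a column-major 4x4 matrix by a column vector (x, y, z, w).
    public static float[] multiply(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                out[row] += m[col * 4 + row] * v[col];
        return out;
    }
}
```

Rotating the vertex (1, 0) by 90 degrees yields (0, 1), a counter-clockwise turn; the axis (0, 0, -1) used in the text simply flips the direction.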
The beauty of OpenGL is that once you have learnt the 2D essentials, 3D is no big deal. But what is three-dimensional graphics?
3D graphics is graphics programming that lets the renderer render more than one face of a 3D object, presenting all of its x, y and z dimensions, and lets the axes be rotated naturally so that the different faces can be viewed through a full 360° viewing angle.
Take any 3D object, say a box, and look at it from the top. You don't get its 3D view; you just see the square on the top of the box. Move a little to the side and look again, and you will see the top, left and right faces. That is a 3D view. So in simple terms, 3D is all about rendering more than one face of a multi-faced object. Such rendering is made possible by clever placement of the camera object.
Look at the following figure.
Figure 2.12: 3D Coordinate System with Camera
The green dot in the figure is the camera object. You can place an object in the coordinate system and then adjust your camera suitably to view the object from a 3D perspective.
Even while working with our 2D graphics, the coordinate system we adopted was x-y-z; we simply used z=0 everywhere. Also, in the earlier examples we drew only one face of an object: we dealt with objects without depth.
We used {Vertex} and {DrawOrder} to specify 2D shapes. For 3D shapes, we use the following notions:
Node: a point represented by three coordinates x, y and z (also called a vertex).
Edge: a line connecting two nodes.
Face: a surface defined by at least three nodes.
Wireframe: a shape consisting of just nodes and edges.
Figure 2.13: 3D Terminology Explained
So from our discussion thus far we can conclude the following basics:
1) A 3D object may have several faces, but to present a 3D perspective, the user must be shown at least three of them.
2) A 3D object is nothing but a closed graph of 2D shapes. Rendering must therefore take at least three such 2D faces and draw them in a systematic way so that a 3D perspective is presented.
3) Several mathematical techniques achieve this, but the most basic and reliable one is orthographic projection.
The following Wikipedia image on orthographic projection is a wonderful diagram for understanding 3D perspective.
Figure 2.14: Orthographic Projection ( wiki )
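In code terms, an orthographic projection is nothing more than an independent linear scaling of each axis, with no perspective division. A minimal plain-Java sketch (the viewing-volume bounds r, t, near and far are illustrative; z is mapped linearly so near lands on -1 and far on +1):

```java
public class Ortho {
    // Map a point inside the box [-r,r] x [-t,t] x [near,far]
    // to normalized device coordinates. Parallel lines stay
    // parallel: distance from the camera has no effect on size.
    public static float[] project(float x, float y, float z,
                                  float r, float t, float near, float far) {
        float ndcX = x / r;
        float ndcY = y / t;
        float ndcZ = 2f * (z - near) / (far - near) - 1f;
        return new float[]{ndcX, ndcY, ndcZ};
    }
}
```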
We have already perceived our 2D shapes as being made of basic triangles, and we showed in the earlier section that literally any 2D shape can be modeled as a combination of triangles. As a 3D object is a closed graph of at least three such 2D faces, a 3D view can easily be described as a group of triangles, or, as it is commonly called, a mesh of triangles.
Following figure 2.15 ( from doc.cgal.org) elaborates our theory:
Figure 2.15: 3D Wireframes as Mesh of Triangles
We shall now see the real power of the general drawing class we developed for our 2D shapes. Will you be surprised, after reading through section 2.5.1, that even 3D shapes can easily be drawn using our MyGeneralOpenGLES2DrawingClass? You should not be, because all it requires of us is to initialize an object of the class with the coordinates of the nodes (vertices in 2D) in sequential order and to specify a triangle drawing order!
Though ideally you should read node information from a 3D model file such as a Blender file, I would like to present raw coordinates and show you the basics of drawing a 3D cube. Once you learn this, you are on your own to adapt the concept as per your needs.
So here are the cube coordinates:
private float verticesCube[] = {
-1.0f, -1.0f, -1.0f,
1.0f, -1.0f, -1.0f,
1.0f, 1.0f, -1.0f,
-1.0f, 1.0f, -1.0f,
-1.0f, -1.0f, 1.0f,
1.0f, -1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
-1.0f, 1.0f, 1.0f
};
This will form the cube exactly at the center, spreading across the length and breadth of the coordinate system. But for proper display, I will scale it down:
for(int i=0;i<verticesCube.length;i++)
{
verticesCube[i]=verticesCube[i]/3;
}
Now, to draw this cube, we need to specify the order in which the vertices must be drawn. Here is the order of the vertices:
private short indicesCube[] = {
0, 4, 5, 0, 5, 1,
1, 5, 6, 1, 6, 2,
2, 6, 7, 2, 7, 3,
3, 7, 4, 3, 4, 0,
4, 7, 6, 4, 6, 5,
3, 0, 1, 3, 1, 2
};
Note that you can start from any triangle, as long as you cover all the triangles corresponding to all the faces!
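Before handing an index list to glDrawElements, it is easy to sanity-check it in plain Java: a cube must yield 36 indices (6 faces x 2 triangles x 3 vertices) and reference every one of its 8 corners. A small illustrative checker:

```java
import java.util.HashSet;
import java.util.Set;

public class CubeIndexCheck {
    public static boolean isValidCubeIndexList(short[] indices) {
        if (indices.length != 36) return false;   // 6 faces x 2 triangles x 3 vertices
        Set<Short> used = new HashSet<>();
        for (short i : indices) {
            if (i < 0 || i > 7) return false;     // a cube has only vertices 0..7
            used.add(i);
        }
        return used.size() == 8;                  // every corner must be referenced
    }
}
```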
Finally, let us define an object of the MyGeneralOpenGLES2DrawingClass and initialize it as:
private float colorsCube[] = {
0.3f, 0.2f, 1.0f, 1.0f,
};
cube=new MyGeneralOpenGLES2DrawingClass(3, verticesCube,colorsCube,indicesCube);
Now all you have to do is add the following section in MyGeneralOpenGLES2DrawingClass:
if(drawOrder.length==36)
{
GLES20.glDrawElements(drawMode, 36, GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
}
And? Bingo! We have our 3D shape ready.
Figure 2.16 : 3D Rendering in OpenGL
What is most interesting is that the OpenGL viewport does not require any separate configuration for 3D rendering. It can render 3D and 2D objects in the same viewport with ease!
Go ahead, download AnimationAndGhraphics.zip and play with the classes!
As the name suggests, in this section we are going to deal with views and animating them. But how exactly are such animations used?
For example, suppose you are entering an email ID in a text box and have typed it wrong. Wouldn't it be nice to flash the color from normal to red and back a couple of times to draw the user's attention?
How about a simple marquee-type control which plays advertisements or news items? When a field in the form needs to be filled, how intuitive would it be to compress and expand the text box once to draw the user's attention? These are some of the UI-level animations that are meant to make the app experience better and bring that wow factor into the app.
All of the examples we just discussed involve changing the background, the width and height, or the location of a UI control or view. These are properties of the view. You could actually write a simple timer and change them programmatically, but that leaves you hardcoding such animations differently for every app.
Android offers a unique and reusable way of doing it using XML. This is called View Animation.
As we have seen, working with OpenGL needs its own rendering surface and content view. However, that does not mean that if you are using OpenGL you cannot work with regular UI layouts. You can use fragments to combine an OpenGL surface view and a layout. For our current app, though, we will simply switch between an OpenGL view and a layout view.
We will also create two menus: one that allows switching to the layout view and, when the app presents the layout-based view, another menu which presents the options for view animation. We will understand the concept of view animation by applying different animations to a simple ImageView.
Firstly change your main layout as follows:
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="vertical" >
<TextView
android:id="@+id/tvTop"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/graphics_and_animation_demo" />
<ImageView
android:id="@+id/imgMain"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@+id/tvTop"
android:layout_centerHorizontal="true"
android:layout_marginTop="146dp"
android:src="@drawable/gn_logo" />
</RelativeLayout>
So that in design mode your layout looks something like figure 3.1.
Figure 3.1: Modified activity_main.xml for View Animation Demo
Do not worry if Eclipse shows a "can't find resource gn_logo" error. Just select an image from drawable using the property editor; you can always upload your own images to drawable.
We will present the OpenGL view first and allow the user to switch to the animation form through our main menu. When the animation form is presented, we will load another menu which provides several animation options as well as the option to go back to the OpenGL form.
res/menu/main.xml
<menu xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
tools:context="com.integratedideas.animationandghraphics.MainActivity" >
<item
android:id="@+id/menuViewAnimation"
android:orderInCategory="100"
android:showAsAction="always"
android:title="View Animation"/>
</menu>
res/menu/animation_choice.xml
<menu xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
tools:context="com.integratedideas.animationandghraphics.MainActivity" >
<item
android:id="@+id/menuAnimationOptions"
android:orderInCategory="100"
android:showAsAction="always"
android:title="Animation Option">
<!-- Sub-items of a menu item must be wrapped in a nested <menu> element -->
<menu>
<item
android:id="@+id/menuFadeIn"
android:orderInCategory="100"
android:showAsAction="never"
android:title="Fade In"/>
<item
android:id="@+id/menuFadeOut"
android:orderInCategory="100"
android:showAsAction="never"
android:title="Fade Out"/>
<item
android:id="@+id/menuZoomIn"
android:orderInCategory="100"
android:showAsAction="never"
android:title="Zoom In"/>
<item
android:id="@+id/menZoomOut"
android:orderInCategory="100"
android:showAsAction="never"
android:title="Zoom Out"/>
<item
android:id="@+id/menuSlideUp"
android:orderInCategory="100"
android:showAsAction="never"
android:title="Slide Up"/>
<item
android:id="@+id/menuSlideDown"
android:orderInCategory="100"
android:showAsAction="never"
android:title="Slide Down"/>
<item
android:id="@+id/menuRotate"
android:orderInCategory="100"
android:showAsAction="never"
android:title="Rotate"/>
<item
android:id="@+id/menuOpenGL"
android:orderInCategory="100"
android:showAsAction="always"
android:title="OpenGL View"/>
</menu>
</item>
</menu>
Now, in MainActivity.java, inflate the menus based on an integer option which is initialized to 1, causing the main menu to be displayed. When the View Animation option is selected, the integer is changed and the other menu is loaded. These two selections also change the view, so we use setContentView to load the appropriate view.
int menuNo=1;
@Override
public boolean onCreateOptionsMenu(Menu menu) {
if(menuNo==1)
getMenuInflater().inflate(R.menu.main, menu);
else
getMenuInflater().inflate(R.menu.animation_choice, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
int id = item.getItemId();
switch(id)
{
case R.id.menuViewAnimation:
setContentView(R.layout.activity_main);
menuNo=2;
invalidateOptionsMenu();
break;
case R.id.menuOpenGL:
menuNo=1;
setContentView(new MyGlSurfaceView(this));
invalidateOptionsMenu();
break;
}
return super.onOptionsItemSelected(item);
}
Now build and run your application.
Figure 3.2 Switching between OpenGL and layout view
The objective of covering this section is that many times it becomes important to combine both layouts and an OpenGL viewport in a single app with some kind of switching mechanism. Here we have learnt a very simple yet effective technique for keeping all our eggs (resources) in the same basket!
XML based View animation workflow can be understood by looking at figure 3.3.
Figure 3.3 View Animation Workflow
The workflow is really quite simple compared to what you had to do through the OpenGL section. First you need to create an XML file inside the anim folder under res.
In its simplest form, the Android animation XML file contains one or more animation tags. <alpha>, <scale>, <translate> and <rotate> are the tags supported in view animation, optionally grouped inside a <set>.
For example, for creating a fade-in/fade-out animation we use the <alpha> tag. A tag must have at least one android:fromSomeProperty and one android:toSomeProperty attribute, where SomeProperty is the property you want to animate:
Alpha, XScale, YScale, XDelta, YDelta and Degrees are some of the values you can use for SomeProperty.
Using android:repeatCount you can specify the number of times you want the animation to repeat; the value infinite makes the animation continue forever.
android:duration specifies the time in milliseconds that the animation runs.
If you want several different sequences to be carried out one after the other, specify the different animation tags one after the other from top to bottom and delay the later ones with android:startOffset.
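For instance, a hypothetical slide-up animation (the file name slide_up.xml and the delta and duration values here are illustrative, not taken from the demo project) would use a <translate> tag:

```xml
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >
    <translate
        android:duration="800"
        android:fromYDelta="100%"
        android:toYDelta="0%" />
</set>
```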
The XML file is loaded into an Animation object, typically in the onCreate method. The Activity class may also implement the AnimationListener interface, in which case it gets three event methods, onAnimationStart, onAnimationEnd and onAnimationRepeat, where any logical code can be put.
The animation can be started simply by calling the startAnimation method on the view object you want to animate.
Time interpolation is a way of specifying the function by which the intermediate values of the animation are computed for intermediate timestamps.
Let us see a basic example of a fade-in animation. Fade-in is a process where the alpha, or opacity, of the control changes from 0 to 1. Here is the simple XML file for that.
<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
android:fillAfter="true" >
<alpha
android:duration="1000"
android:fromAlpha="0.0"
android:interpolator="@android:anim/accelerate_interpolator"
android:toAlpha="1.0" />
</set>
Create an XML file by the name fade_in.xml in your res/anim folder and copy the code into it. If the anim folder does not exist, you can always create it. Be sure not to use upper case in either the folder name or the file name.
Before we work with our MainActivity, we need a little more understanding of how views work.
A findViewById() call must always come after the call to setContentView, because findViewById can only find a child of the current content view. Therefore, in this particular app, if you try to initialize the ImageView instance in the onCreate method, it will be null. You must initialize it in the onOptionsItemSelected method, where you set the activity_main layout as the content view.
ImageView iv;
int menuNo=1;
@Override
public boolean onOptionsItemSelected(MenuItem item) {
int id = item.getItemId();
switch(id)
{
case R.id.menuViewAnimation:
setContentView(R.layout.activity_main);
iv=(ImageView)findViewById(R.id.imgMain);
menuNo=2;
invalidateOptionsMenu();
break;
case R.id.menuOpenGL:
menuNo=1;
setContentView(new MyGlSurfaceView(this));
invalidateOptionsMenu();
break;
case R.id.menuFadeIn:
anim = AnimationUtils.loadAnimation(getApplicationContext(), R.anim.fade_in);
iv.startAnimation(anim);
break;
}
return super.onOptionsItemSelected(item);
}
Finally here is the output of the sample fade_in animation:
Figure 3.4: Result of Fade In Animation
As we have discussed, you can combine different animations sequentially, like the following one:
<set xmlns:android="http://schemas.android.com/apk/res/android"
android:fillAfter="true"
android:interpolator="@android:anim/linear_interpolator" >
<!-- Use startOffset to give delay between animations -->
<!-- Move -->
<scale
xmlns:android="http://schemas.android.com/apk/res/android"
android:duration="4000"
android:fromXScale="1"
android:fromYScale="1"
android:pivotX="50%"
android:pivotY="50%"
android:toXScale="4"
android:toYScale="4" >
</scale>
<!-- Rotate 180 degrees -->
<rotate
android:duration="500"
android:fromDegrees="0"
android:pivotX="50%"
android:pivotY="50%"
android:repeatCount="infinite"
android:repeatMode="restart"
android:toDegrees="360" />
</set>
Download the demo for View Animation and play with the different animations provided with it. The most important aspect is that you can apply these animation sets to any View or ViewGroup.
Property Animation is another class of animation support. The problem with view animation is that it is applicable only to View objects. Secondly, view animation only transforms the rendering of the concerned view and does not change the view's actual properties. Say, for instance, you apply a move animation to a button: even though the button appears to move across the container, its click position remains the same, and that has to be handled by user code.
There are also several non-view objects which need animation. Take the simple example of a CountDownTimer: its value should keep reducing until it reaches 0. You can't use view animation, as no view is attached to an integer; without animation support you would have to declare a timer and update the value yourself. Property Animation helps perform such updates efficiently.
Note that you should not be misled by the term "Animation", which generally implies slow rendering changes on the viewport. Property animation should be perceived as a simple and efficient way of changing values smoothly, without the hassle of timers and listeners, and it may or may not involve rendering at all.
The calculation of the intermediate values is based on an interpolator.
Available interpolators are:
- AccelerateDecelerateInterpolator
- AccelerateInterpolator
- AnticipateInterpolator
- AnticipateOvershootInterpolator
- BounceInterpolator
- CycleInterpolator
- DecelerateInterpolator
- LinearInterpolator
- OvershootInterpolator
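These interpolators are simply easing functions over normalized time t in [0, 1]. A plain-Java sketch of three of the simpler curves (matching the factor-1 defaults Android documents for them):

```java
public class Easing {
    public static float linear(float t)     { return t; }                        // LinearInterpolator
    public static float accelerate(float t) { return t * t; }                    // AccelerateInterpolator
    public static float decelerate(float t) { return 1f - (1f - t) * (1f - t); } // DecelerateInterpolator
}
```

At t = 0.5 the accelerate curve has covered only a quarter of the range while the decelerate curve has covered three quarters, which is exactly the visual difference you see between the two.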
Property animations can not only change the value of a variable; they can just as easily be applied to a specific property of an object (including view objects). So if you run a property animation changing a height value from 10 to 100 on an ImageView, as the animation progresses the height of the view will actually change from 10 to 100 over time.
At the core of property animation is a class called ValueAnimator. As the name suggests, it animates, or updates, a value over a range. In its update event handler the value can be applied to any number of objects or their properties.
A ValueAnimator can animate different value types via ofFloat, ofInt or ofObject. First, let's see a beautiful example of animating a float value.
Declare a ValueAnimator object, initialize it by specifying the type and range of value you want to animate, and finally add an update listener to it. In the update method you can apply the updated value to any object. Interestingly, the callback runs on the UI thread, so you can update views directly without implementing any other thread or background task.
ValueAnimator animation = ValueAnimator.ofFloat(0f, 1f);
animation.setDuration(8000);
animation.addUpdateListener(new AnimatorUpdateListener()
{
@Override
public void onAnimationUpdate(ValueAnimator animation)
{
float val=Float.parseFloat(animation.getAnimatedValue().toString());
iv.setAlpha(val);
iv.setScaleX(val);
iv.setScaleY(val);
tv.setText(animation.getAnimatedValue().toString());
}
});
animation.start();
break;
As you can see in the above example, we are using the same animated value to print into the TextView and to update the alpha, scaleX and scaleY properties of the ImageView. With view animation we could apply an animation to only one property per sequence; multiple properties were animated sequentially, so their values were not synchronized. See the result, where the same value drives multiple controls and properties:
Figure 4.1: Result of Value Animation
The problem with ValueAnimator is that if a view object is to be modified, it has to be updated in code from the update event. What if we could update the property of the object directly through the animator, without that code?
Yes, that is possible, and it is supported by the second group of property animators, called ObjectAnimator.
ObjectAnimator oa=ObjectAnimator.ofFloat(iv, "translationX", 0, 400);
oa.setDuration(6000);
oa.start();
break;
Just the above piece of code will move your image from its current position to an end position where endX = currentX + 400. The first argument is the object, the second one is the property, and the third and fourth arguments are the start and end values respectively.
You can apply ObjectAnimator to other properties like scaleX, scaleY, alpha, etc.
ObjectAnimator oa=ObjectAnimator.ofFloat(iv, "scaleX", 0, 4);
ObjectAnimator oa=ObjectAnimator.ofFloat(iv, "rotation", 0, 45);
Remember rotation is specified in Degrees and not in radians.
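Under the hood, ObjectAnimator resolves the property name by convention: "translationX" becomes a call to a public setTranslationX(float) setter on every animation tick. A plain-Java sketch of that lookup (the Dummy target class is hypothetical, purely for illustration):

```java
import java.lang.reflect.Method;

public class PropertySetter {
    /** A hypothetical animation target exposing a conventional setter. */
    public static class Dummy {
        public float translationX;
        public void setTranslationX(float v) { translationX = v; }
    }

    // Resolve "translationX" -> setTranslationX(float) and invoke it,
    // mirroring the naming convention ObjectAnimator relies on.
    public static void set(Object target, String property, float value) {
        try {
            String name = "set" + Character.toUpperCase(property.charAt(0))
                        + property.substring(1);
            Method m = target.getClass().getMethod(name, float.class);
            m.invoke(target, value);
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("no float setter for " + property, e);
        }
    }
}
```

This is why a typo in the property name fails silently or at runtime rather than at compile time: the setter is looked up by string.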
For both ValueAnimator and ObjectAnimator, if you want the animation to run forward, then in reverse, and forward again, use setRepeatMode together with setRepeatCount.
oa.setRepeatMode(ValueAnimator.REVERSE);
oa.setRepeatCount(ValueAnimator.INFINITE);
What if we want to apply different types of animations simultaneously to an object? Well, nothing to worry about: an AnimatorSet object can be used to perform multiple animations simultaneously.
One of the most common usages is scaling. If you create an ObjectAnimator for scaleX alone, the object keeps getting enlarged in the x direction only; what you want most of the time is to apply scaleX and scaleY together, which is not possible using a single ObjectAnimator. Hence we go for AnimatorSet.
An AnimatorSet can run multiple animators of either ValueAnimator or ObjectAnimator type. It does not impose any animation properties like duration or interpolator on the individual animators; each is governed by its own settings.
ObjectAnimator oaRotation=ObjectAnimator.ofFloat(iv, "rotation", 0, 45);
oaRotation.setDuration(5000);
oaRotation.setRepeatCount(ValueAnimator.INFINITE);
oaRotation.setRepeatMode(ValueAnimator.REVERSE);
ObjectAnimator oaScaleX=ObjectAnimator.ofFloat(iv, "scaleX", 0, 4);
oaScaleX.setDuration(5000);
ObjectAnimator oaScaleY=ObjectAnimator.ofFloat(iv, "scaleY", 0, 4);
oaScaleY.setDuration(5000);
ObjectAnimator oaAlpha=ObjectAnimator.ofFloat(iv, "alpha", 0, 1);
oaAlpha.setDuration(5000);
AnimatorSet combine = new AnimatorSet();
combine.playTogether(oaScaleX,oaScaleY);
combine.play(oaAlpha).before(oaRotation);
combine.start();
Using playTogether, you can schedule any number of animations to run simultaneously.
You can also play an animation before or after another animation. Every animation keeps its own independent properties, such as its own duration.
Animation in its purest and oldest form is basically the sequential rendering of images from a set that represents a complete motion. This is the principle on which animated films, stop-motion animation and the like depend.
Android provides a great way of animating such image sequences, called drawable animation. This class of animation deserves any developer's respect, but when you actually work with the design you are bound to question a miserable design decision the Android developers took for it (more on that shortly).
As the name suggests, for drawable animation you must have a set of image frames in one of the drawable folders, such as drawable-xhdpi or drawable-xxhdpi.
Here we will be animating a Fox puppet. See the figure 5.1 to understand what should be your image configuration.
Figure 5.1: Preparing for Drawable Animation
Now you have to create an XML file in drawable-xhdpi (or whatever drawable folder you are using) with the name of your animation. I have created one by the name fox_puppet.xml.
This XML is a pointer to all the frames as well as their behavior.
<animation-list xmlns:android="http://schemas.android.com/apk/res/android"
android:oneshot="true">
<item android:drawable="@drawable/photo1" android:duration="200" />
<item android:drawable="@drawable/photo2" android:duration="200" />
<item android:drawable="@drawable/photo3" android:duration="200" />
<item android:drawable="@drawable/photo4" android:duration="200" />
<item android:drawable="@drawable/photo5" android:duration="200" />
<item android:drawable="@drawable/photo6" android:duration="200" />
<item android:drawable="@drawable/photo7" android:duration="200" />
.
.
.
</animation-list>
Observe the oneshot property of the XML file. If it is true, the animation is triggered only once; if it is false, the animation keeps repeating.
Drawable animation frames are rendered as the background of an ImageView, so we add one more ImageView, by the name ivPuppet, to our main layout.
For drawable animation, first set the background resource of the ImageView to your XML file. Then obtain the AnimationDrawable object from the view's background, which is of course a drawable. The only thing left is to call its start method to begin the animation.
By the way, don't forget to set the image of the ImageView to null, otherwise the animation would play in the background, hidden by the foreground image.
AnimationDrawable puppet;
void PerformDrawableAnimation()
{
iv.setImageBitmap(null);
iv.setBackgroundResource(R.drawable.fox_puppet);
puppet = (AnimationDrawable) iv.getBackground();
puppet.start();
}
Figure 5.2: Result of Drawable Animation
You can also add update listeners and other event handlers in the same way as with the other animations. However, the most irritating thing about this design is the way you have to organize your animations. Say you have some ten different animations in your app: you would surely want them grouped together, keeping the images of each animation in its own sub-directory. Isn't that the best approach?
Not quite! Drawable resources do not recognize sub-directories, so you have to clutter all your images in the root drawable folder.
However, we at CodeProject are a little smarter than that, right? So we must find a workaround for this basic design flaw.
The resource type that does allow sub-directories is assets. So all you have to do is create a folder by the name fox and put all your image frames there. Now, note that in the XML file we were using drawable resources; the assets directory does not expose its resources as drawables, so a little programming trick is needed here. Instead of specifying the frames through the XML file, you load the frames from bitmaps using the addFrame() method.
First you open an InputStream from the asset resource, then decode it into a Bitmap, which is used to create a Drawable object. This object, along with the frame duration, is passed as a parameter to addFrame.
Finally the animation object is assigned to the background resource of the image.
AnimationDrawable puppet;
void PerformDrawableAnimation()
{
    // clear the foreground image so the background animation is visible
    iv.setImageBitmap(null);
    puppet = new AnimationDrawable();
    InputStream is = null;
    for (int i = 1; i <= 31; i++)
    {
        try
        {
            // load each frame from assets/fox and add it with a 200 ms duration
            is = this.getResources().getAssets().open("fox/photo" + i + ".png");
            Bitmap b = BitmapFactory.decodeStream(is);
            Drawable d = new BitmapDrawable(getResources(), b);
            puppet.addFrame(d, 200);
        }
        catch (Exception e)
        {
            Log.e("DrawableAnimation", "Failed to load frame " + i, e);
        }
    }
    iv.setBackgroundDrawable(puppet);
    puppet.setOneShot(true);
    puppet.start();
}
Lots of other behaviors can be configured through the set methods of the AnimationDrawable class.
This is a far better technique than the more common XML-first approach we saw earlier: it saves you from creating and maintaining separate XML files.
Android provides an amazing set of APIs for 2D drawing that allow you to draw your own graphics (shapes and bitmaps) onto a canvas. Basically, every View in Android is rendered through an onDraw call which receives a Canvas object. For displaying or drawing (rendering) any visual item you need a bitmap whose pixels will hold the result, a Canvas to draw into those pixels, and a Paint object that defines how things are painted. The drawing includes primitive shapes like circles, ellipses and rectangles, or other bitmaps, much as we learnt in our OpenGL section. However, unlike OpenGL, drawing is much more straightforward with the Canvas APIs, and they use an absolute coordinate system rather than the normalized coordinate system with projection that we used in the OpenGL section. Following are the major draw calls (or APIs) that the Android Canvas class supports.
Figure 6.1 Canvas Drawing APIs
In order to test the Canvas APIs, create a class which extends View and override its onDraw method; your drawing logic goes in the onDraw method of that class.
So we create our view class, called DrawingView:
public class DrawingView extends View {
private Path drawPath;
private Paint drawPaint, canvasPaint;
private int paintColor = 0xFF660000;
private Canvas drawCanvas;
private Bitmap canvasBitmap;
public DrawingView(Context context, AttributeSet attrs){
super(context, attrs);
setupDrawing();
}
private void setupDrawing(){
drawPath = new Path();
drawPaint = new Paint();
drawPaint.setColor(paintColor);
drawPaint.setAntiAlias(true);
drawPaint.setStrokeWidth(20);
drawPaint.setStyle(Paint.Style.STROKE);
drawPaint.setStrokeJoin(Paint.Join.ROUND);
drawPaint.setStrokeCap(Paint.Cap.ROUND);
canvasPaint = new Paint(Paint.DITHER_FLAG);
}
@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
super.onSizeChanged(w, h, oldw, oldh);
canvasBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
drawCanvas = new Canvas(canvasBitmap);
}
@Override
protected void onDraw(Canvas canvas) {
canvas.drawBitmap(canvasBitmap, 0, 0, canvasPaint);
canvas.drawPath(drawPath, drawPaint);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
float touchX = event.getX();
float touchY = event.getY();
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
drawPath.moveTo(touchX, touchY);
break;
case MotionEvent.ACTION_MOVE:
drawPath.lineTo(touchX, touchY);
break;
case MotionEvent.ACTION_UP:
drawPath.lineTo(touchX, touchY);
drawCanvas.drawPath(drawPath, drawPaint);
drawPath.reset();
break;
default:
return false;
}
invalidate();
return true;
}
public void setColor(String newColor){
invalidate();
paintColor = Color.parseColor(newColor);
drawPaint.setColor(paintColor);
}
public void setColor(int color){
invalidate();
paintColor = color;
drawPaint.setColor(paintColor);
}
public int getColor()
{
return drawPaint.getColor();
}
}
As discussed, we have a Paint object and a Bitmap on which the drawing calls are made. drawPath is a Path object that holds the set of points, updated according to touches on the drawing canvas; the canvas draws the path using drawPaint, a Paint object initialized with a specific color. When the touch is released, a line is drawn from the last point to the release point, the path is committed to the bitmap, and drawPath is reset. Had there been no bitmap object, the canvas would be cleared every time you released the touch. You can verify this by commenting out the canvas.drawBitmap(canvasBitmap, 0, 0, canvasPaint); line in the onDraw method.
We modify our activity_main.xml to hold the new view:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" >

    <com.integratedideas.animationandghraphics.DrawingView
        android:id="@+id/dvMain"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

    <TextView
        android:id="@+id/tvTop"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/graphics_and_animation_demo" />

    <ImageView
        android:id="@+id/imgMain"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@+id/tvTop"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="41dp"
        android:src="@drawable/gn_logo"
        tools:ignore="ContentDescription" />

    <ImageView
        android:id="@+id/ivAnimation"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_centerVertical="true"
        android:src="@drawable/photo11" />

    <LinearLayout
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentLeft="true" >

        <Button
            android:id="@+id/btnColor"
            style="?android:attr/buttonStyleSmall"
            android:layout_width="wrap_content"
            android:layout_height="29dp"
            android:text="Color" />
    </LinearLayout>
</RelativeLayout>
We have also added a Color button here so that we can trigger a change in the drawing color.
Now, whenever activity_main is set as the content view, you need to initialize the Color button and DrawingView objects. Since this app switches content views from the options menu, we do it inside the onOptionsItemSelected event handler.
drawView = (DrawingView)findViewById(R.id.dvMain);
btnColor=(Button)findViewById(R.id.btnColor);
btnColor.setOnClickListener(this);
To implement the color picker, download the yuku.ambilwarna color picker from here and import the project into your workspace. Then right-click on your project, select Properties, and choose Android from the left panel. Click the Add button at the bottom-right corner and select AmbilWarna as a library reference.
To show the color dialog when the button is clicked, update the onClick method:
@Override
public void onClick(View v)
{
    Button b = (Button) v;
    if (b.getText().toString().trim().equals("Color"))
    {
        // Open the picker preloaded with the view's current color.
        int c = drawView.getColor();
        awd = new AmbilWarnaDialog(MainActivity.this, c, new OnAmbilWarnaListener() {
            @Override
            public void onOk(AmbilWarnaDialog dialog, int color)
            {
                drawView.setColor(color);
            }

            @Override
            public void onCancel(AmbilWarnaDialog dialog) {
            }
        });
        awd.show();
    }
}
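The int that the dialog hands to onOk is a packed ARGB color, the same format returned by getColor(). Extracting the individual channels is plain bit shifting; the standalone sketch below mimics what android.graphics.Color's alpha()/red()/green()/blue() helpers do (shown outside Android so it is easy to try):

```java
// Packed ARGB layout: bits 31-24 alpha, 23-16 red, 15-8 green, 7-0 blue.
public class ArgbSketch {
    static int alpha(int c) { return (c >>> 24) & 0xFF; }
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    // Re-pack the four channels into a single color int.
    static int argb(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int opaqueRed = argb(0xFF, 0xFF, 0x00, 0x00); // same bit pattern as Color.RED
        System.out.println(red(opaqueRed) + " " + green(opaqueRed) + " " + blue(opaqueRed));
    }
}
```

Knowing this layout also explains why the two setColor overloads in DrawingView are interchangeable: Color.parseColor simply turns a string like "#FF0000" into this same packed int.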
The result can be seen in figure 6.2.
Figure 6.2: Canvas API in Action
To see how other drawing calls work, let's change the touch event handler slightly to draw a circle at the start and end of each stroke.
@Override
public boolean onTouchEvent(MotionEvent event) {
    float touchX = event.getX();
    float touchY = event.getY();
    switch (event.getAction()) {
    case MotionEvent.ACTION_DOWN:
        drawPath.moveTo(touchX, touchY);
        // Mark the start of the stroke with a filled circle of radius 15.
        drawCanvas.drawCircle(touchX, touchY, 15, drawPaint);
        break;
    case MotionEvent.ACTION_MOVE:
        drawPath.lineTo(touchX, touchY);
        break;
    case MotionEvent.ACTION_UP:
        drawPath.lineTo(touchX, touchY);
        // Mark the end of the stroke, commit the path, then clear it.
        drawCanvas.drawCircle(touchX, touchY, 15, drawPaint);
        drawCanvas.drawPath(drawPath, drawPaint);
        drawPath.reset();
        break;
    default:
        return false;
    }
    invalidate();
    return true;
}
The result looks something like figure 6.3.
Figure 6.3: Result of drawCircle Canvas API Call
Drawing a bitmap with the Canvas APIs is no big deal either. You need a Bitmap object (preferably small in size), which you can draw with the drawBitmap call. To test the speed of the method, I have used drawBitmap inside the touch event handler to see whether it is responsive enough.
case MotionEvent.ACTION_MOVE:
    drawPath.lineTo(touchX, touchY);
    try {
        // Stamp the icon offset so it sits roughly under the finger.
        drawCanvas.drawBitmap(icon, touchX - 30, touchY - 30, null);
    } catch (Exception ex) {
        // Ignore a failed draw; the next MOVE event will stamp again.
    }
    break;
where icon is a Bitmap object initialized in the constructor:
icon = BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher);
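The hard-coded -30 offsets above assume a particular icon size. A more general approach is to derive the top-left corner from the bitmap's actual dimensions so the stamp is centred under the finger. The helper below shows the arithmetic as a standalone sketch; in the real view, the width and height would come from icon.getWidth() and icon.getHeight().

```java
// Compute the top-left position that centres a w-by-h bitmap on the
// touch point (tx, ty), for use with Canvas.drawBitmap(bmp, left, top, paint).
public class CenterSketch {
    static float[] centredTopLeft(float tx, float ty, int w, int h) {
        return new float[] { tx - w / 2f, ty - h / 2f };
    }

    public static void main(String[] args) {
        float[] p = centredTopLeft(100f, 80f, 60, 40);
        System.out.println(p[0] + " " + p[1]); // 70.0 60.0
    }
}
```

With this, drawCanvas.drawBitmap(icon, left, top, null) keeps the icon centred no matter what resolution the launcher icon was decoded at.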
Here is how it looks:
Figure 6.4: drawBitmap Canvas API Call Result
You can implement your own logic and come up with innovative drawings!
Graphics and animation are important aspects of designing and developing innovative apps, and Android offers several choices for animating objects and visuals. We have tried to present most of them in an easy manner in this tutorial. However, a larger question remains: which method should be used in which use case?

If you are building next-generation responsive games, OpenGL must be your choice, because it uses the hardware for drawing and is really fast in comparison with the Canvas API counterpart. But if you are building simple drawing apps, for kids for instance, primitive shapes get difficult with OpenGL, and the Canvas APIs are better suited for such apps. If you want a responsive UI that reacts to events with certain animations, property and view animations are well suited. Drawable animations are a great tool for playing simple animation movie or cartoon-clip apps. All in all, try out the different APIs for your app's needs, test them for speed, memory, and efficiency, and select the most suitable one.