Once we have loaded our bitmaps, we can draw them via the Canvas. The easiest method to do this looks as follows:
Canvas.drawBitmap(Bitmap bitmap, float topLeftX, float topLeftY, Paint paint);
The first argument should be obvious. The arguments topLeftX and topLeftY specify the coordinates on the screen where the top-left corner of the bitmap will be placed. The last argument can be null. We could specify some very advanced drawing parameters with the Paint, but we don't really need those.
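To make this concrete, here's a minimal sketch of a custom View that draws a Bitmap at a fixed position. The class name, field name, and coordinates are illustrative; it assumes the Bitmap was already loaded elsewhere, for example with BitmapFactory.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

public class BitmapView extends View {
    private final Bitmap bitmap;

    public BitmapView(Context context, Bitmap bitmap) {
        super(context);
        this.bitmap = bitmap;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Place the bitmap's top-left corner at (100, 50) on the screen;
        // the Paint argument can simply be null.
        canvas.drawBitmap(bitmap, 100, 50, null);
    }
}
```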
There's another method that will come in handy, as well:
Canvas.drawBitmap(Bitmap bitmap, Rect src, Rect dst, Paint paint);

This method is super-awesome. It allows us to specify a portion of the Bitmap to draw via the second parameter. The Rect class holds the top-left and bottom-right corner coordinates of a rectangle. When we specify a portion of the Bitmap via the src, we do it in the Bitmap's coordinate system. If we specify null, the complete Bitmap will be used.
The third parameter defines where the portion of the Bitmap should be drawn, again in the form of a Rect instance. This time, though, the corner coordinates are given in the coordinate system of the target of the Canvas (either a View or another Bitmap). The big surprise is that the two rectangles do not have to be the same size. If we specify a destination rectangle that is smaller than the source rectangle, the Canvas will automatically scale the image down for us. The same is true for specifying a larger destination rectangle, of course. The last parameter we'll usually set to null again. Note, however, that this scaling operation is very expensive. We should only use it when absolutely necessary.
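Here's a short sketch of the src/dst variant, e.g. for pulling a single frame out of a sprite atlas. The variable names (atlas, canvas) and coordinates are illustrative; this would live inside a View's onDraw() method, with android.graphics.Rect imported.

```java
// Select a 32x32 region of the atlas bitmap, in the bitmap's own coordinates.
// Rect takes (left, top, right, bottom).
Rect src = new Rect(0, 0, 32, 32);

// Draw it into a 64x64 rectangle on the Canvas target; because the sizes
// differ, the Canvas scales the source region up automatically.
Rect dst = new Rect(100, 100, 164, 164);
canvas.drawBitmap(atlas, src, dst, null);

// Passing null as src would draw the complete bitmap instead:
// canvas.drawBitmap(atlas, null, dst, null);
```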
So, you might wonder, if we have Bitmap instances with different color formats, do we need to convert them to some kind of standard format before we can draw them via a Canvas? The answer is no. The Canvas will do this for us automatically. Of course, it will be a bit faster if we use color formats that are equal to the native framebuffer format. Usually we just ignore this, though.
Blending is also enabled by default, so if our images contain an alpha component per pixel, it will actually be interpreted, and the bitmap will be blended with whatever is already on the target.