A first version of Othello Legends is now published to the Android Market; check it out at https://market.android.com/details?id=se.noren.android.othello! The game is classic Othello/Reversi with a twist: you unlock harder levels by beating opponents and compete with other players by increasing your total score.
The aim of the project was to learn how to build an Android application of some complexity, with features like OpenGL rendering, integration with a backend server, Google AdMob for serving ads, Google Analytics tracking of application usage, SD card storage, and the ability to run on both phones and tablets with appealing layout and performance.
Lesson learned 1 - 3D
Accelerated 3D graphics is hard even if you're experienced with desktop OpenGL. Making OpenGL ES work on all devices requires a lot of testing, which you can't do without the devices, and debugging accelerated graphics in the Android emulator is generally no fun. Therefore, use a library to shield yourself from the low-level details.
I looked into jMonkeyEngine and libgdx, which are large, general frameworks with quite massive APIs. They probably would have worked out great, but seemed to have some threshold for a newcomer to overcome.
In the end I decided to work with the more limited jPCT, which has worked out very well: a stable and reliable library with an active community. jPCT handles 3DS models well, which makes it easy to create environments with some tooling.
I used the open source modeller Blender, which is free and supports everything you would need, such as sculpt modelling and texture UV coordinate tooling. Another appealing feature of jPCT is that it is developed both for Android and as standard Java for the desktop, so you can port your apps between them without great effort.
Lesson 2 - Revenue model
If you haven't decided whether to charge for your app or use ads, I can only say that ads are easy! If you're familiar with Google AdSense for serving ads on your websites, you'll find Google AdMob intuitive to work with. If you have an Android Activity made up of standard Android layouts, you can simply add an AdView to your layout and the ad library will populate the container with ads.
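As a rough sketch of what that can look like with the com.google.ads AdMob SDK of this era (the ad unit ID and surrounding layout are placeholders, not from the real game):

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:ads="http://schemas.android.com/apk/lib/com.google.ads"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <!-- The rest of your game UI goes here -->

    <!-- Banner ad; adUnitId is the publisher ID from your AdMob account -->
    <com.google.ads.AdView
        android:id="@+id/adView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        ads:adUnitId="YOUR_ADMOB_ID"
        ads:adSize="BANNER"
        ads:loadAdOnCreate="true"/>
</LinearLayout>
```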
Compared to the standard Google AdSense interfaces for managing and following up on your ad reports, AdMob is more limited and not as polished, but who cares? Will revenues be larger with mobile app ads than with ordinary web ads? I'll come back to that later.
Lesson 3 - Mobile is not desktop
Memory is scarce when you go down the 3D pathway. I discovered early that you must be frugal with your textures and the polygon counts of your meshes. The devices have no problem rendering polygon-heavy meshes at impressive framerates, but you soon run out of memory if you don't do clever texture unloads when you don't need them. My lesson here: create a game engine with strict modules for each game state, so that you can be sure to deallocate all resources when you change state, and use lower-resolution textures than you usually would.
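A minimal sketch of that idea in plain Java (the class names here are hypothetical, not from any real engine): each game state tracks its own resources and is forced to release them when the engine switches state.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a state engine that guarantees resource cleanup on state change.
public class StateEngine {

    // Stand-in for a GPU texture; in a real app this would wrap an OpenGL id.
    static class Texture {
        boolean loaded = true;
        void unload() { loaded = false; }
    }

    abstract static class GameState {
        protected final List<Texture> textures = new ArrayList<Texture>();

        // Track every allocation so nothing can leak past the state.
        protected Texture loadTexture() {
            Texture t = new Texture();
            textures.add(t);
            return t;
        }

        abstract void enter();

        // Guaranteed cleanup point: unload everything this state allocated.
        void exit() {
            for (Texture t : textures) {
                t.unload();
            }
            textures.clear();
        }
    }

    private GameState current;

    // Switching state always tears down the old state's resources first.
    void switchTo(GameState next) {
        if (current != null) {
            current.exit();
        }
        current = next;
        next.enter();
    }
}
```

The point is that cleanup lives in the engine, not in each screen's code, so forgetting to unload a texture in one state is no longer possible.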
Lesson 4 - Tune your app to how your users use it
In this game each level becomes more difficult, and it seemed like a good tuning approach to make the first two levels easy enough that all players pass them; after that it should get exponentially more difficult. But how do you know how your users are doing? I noticed the game was way too hard when I tried it on people, and some sort of surveillance would be nice without intruding on the users' privacy. The lesson here is to not invent anything new. With Google Analytics you can track how users travel around in your application by marking different game states, just as you would mark web pages in a site to follow traffic around it, and then adapt your game to how users respond.
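The idea is to report each game state as a pseudo page view such as "/level/3" (in the app you would hand that string to the Analytics tracker's page-view call). The path scheme and this offline aggregation are purely illustrative, but they show the funnel you get back: how many players reached each level.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Aggregate tracked pseudo page views of the form "/level/N" into a funnel.
public class LevelFunnel {

    // Count how many tracked page views reached each level.
    public static Map<Integer, Integer> reachedPerLevel(List<String> pageViews) {
        Map<Integer, Integer> counts = new TreeMap<Integer, Integer>();
        for (String path : pageViews) {
            if (path.startsWith("/level/")) {
                int level = Integer.parseInt(path.substring("/level/".length()));
                Integer prev = counts.get(level);
                counts.put(level, prev == null ? 1 : prev + 1);
            }
        }
        return counts;
    }
}
```

If level 2 shows far fewer hits than level 1, your difficulty curve is too steep at the start, which is exactly what I found.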
This is my tech diary. I try to write about my hobby projects to remember what I've done for reference and for fun. Hopefully techy people with similar interests can benefit as well.
Sunday, 11 December 2011
Othello Legends 1.0 in Android Market!
Saturday, 5 November 2011
Android - track down memory leaks
My current Android application project is starting to make sense. Unfortunately it crashes after a few levels of playing due to java.lang.OutOfMemoryError. Up to that point I hadn't put much thought into the memory model of Android applications and simply consumed memory without hesitation. I've now been forced to rewrite some critical parts of the application, and I thought I'd write a few words to remember the most useful tools I came across.
First of all, Android apps have small heaps, and of different sizes; it's up to the vendor of the device to decide. Here are a few numbers I came across:
- G1 = 16 Mb
- Droid = 24 Mb
- Nexus One = 32 Mb
- Xoom = 48 Mb
- GalaxyTab = 64 Mb
So you see that allocated heaps are far from using the entire RAM of the devices since no application should be able to clog the system. The natural approach to solving a memory problem would be to increase the heap but that is not so easy. If you have a rooted phone you may edit
/system/build.prop
and set the heap size via
dalvik.vm.heapsize=24m
Or, if you're running on a tablet (Android 3.x) there is a manifest setting to ask for a large heap
<application android:label="@string/app_name" android:hardwareAccelerated="true" android:largeHeap="true" android:debuggable="true">
but that is no guarantee, and you will instead be punished with longer GC cycle times. On the other hand, changing the VM heap size in your emulator is easy, and can be a good way to verify that your app works on devices with smaller heaps. To do that, fire up the Android SDK and AVD Manager and click Edit on your virtual device. Under Hardware, there is a setting Max VM application heap size.
So the conclusion is that you have to live with small heaps and limited memory. How do you get an estimate of how much memory you consume and how much is available?
Run your application in the emulator or connect your real device via USB and use the Android Debug Bridge (adb). It's located in your Android SDK tools folder.
To dump memory info for all your running applications use
$>adb shell dumpsys meminfo
or for your specific application
$>adb shell dumpsys meminfo se.noren.android.othello
Applications Memory Usage (kB):
Uptime: 8979886 Realtime: 8979886

** MEMINFO in pid 1073 [se.noren.android.othello] **
                  native   dalvik    other    total
           size:   24648    10119      N/A    34767
      allocated:   10869     7335      N/A    18204
           free:       2     2784      N/A     2786
          (Pss):    2857     8568     9385    20810
 (shared dirty):    1508     4092     2556     8156
   (priv dirty):    2656     6020     7732    16408
Objects
Views: 0 ViewRoots: 0
AppContexts: 0 Activities: 0
Assets: 2 AssetManagers: 2
Local Binders: 6 Proxy Binders: 10
Death Recipients: 0
OpenSSL Sockets: 0
SQL
heap: 0 memoryUsed: 0
pageCacheOverflo: 0 largestMemAlloc: 0
To understand this table we must know that there is a managed heap, dalvik, and a native heap; some graphics, for example, are stored in the native heap. Importantly, it is the sum of these heaps that cannot exceed the VM heap size, so you can't fool the runtime by putting more stuff in either the native or the managed heap. To me, the most important numbers are those under dalvik and total above. The dalvik heap is the managed VM heap, and the native numbers are memory allocated by native libraries (malloc).
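You can get a feel for the two kinds of allocation from plain Java: a direct ByteBuffer is backed by a native allocation, while a byte array lives on the managed heap. (This is only an analogy for the meminfo columns; exact accounting differs between the desktop JVM and Dalvik.)

```java
import java.nio.ByteBuffer;

// Illustration of managed (dalvik) heap versus native allocation.
public class HeapKinds {

    // Backed by native memory, outside the managed object heap.
    public static ByteBuffer nativeBuffer(int bytes) {
        return ByteBuffer.allocateDirect(bytes);
    }

    // An ordinary array, allocated on the managed heap.
    public static byte[] managedBuffer(int bytes) {
        return new byte[bytes];
    }
}
```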
You'll probably see these numbers fluctuate each time you run the command; that is because objects are allocated by the runtime all the time, but GCs are not run particularly often. So, in order to know that you really have garbage collected all unused objects, you must wait for the Android debug log in logcat to print something like GC_FOR_MALLOC or GC_EXTERNAL_MALLOC, which indicates that the GC has been invoked. Still, this does not mean that all unused memory has been released, since the GC might not have done a complete sweep.
You can of course ask for a GC programmatically with System.gc(), but that is rarely a good option. You should trust the VM to garbage collect for you; if you, for example, want to allocate a large memory chunk, the GC will be invoked if necessary.
You can force a GC using the Dalvik Debug Monitor (DDMS). Either start it from Eclipse or from the ddms tool in the Android SDK installation folder.
If you can't see your process right away, go to menu Actions and Reset adb. After that you can turn on heap updates via the green icon Show heap updates. To force a GC, click on Cause GC.
If you wish to monitor the memory usage programmatically there are a few APIs you can use.
ActivityManager.getMemoryInfo() can be used to get an idea of the memory situation for the whole Android system. If the gauges are running low, you can expect background processes to be killed off soon.
To start inspecting your process in particular, use the Debug APIs: http://developer.android.com/intl/de/reference/android/os/Debug.html#getMemoryInfo(android.os.Debug.MemoryInfo). There's an excellent explanation of the data you can retrieve from them here: http://stackoverflow.com/questions/2298208/how-to-discover-memory-usage-of-my-application-in-android
For example, to see how much memory is allocated in the native heap, use:
Debug.getNativeHeapAllocatedSize()
So back to DDMS. This tool can also create heap dumps, which are particularly useful when tracking down memory leaks. To dump the heap, click on the icon Dump HPROF file. There are many tools for analyzing heap dumps, but the one I'm most familiar with is the Eclipse Memory Analyzer (MAT). Download it from http://www.eclipse.org/mat/.
MAT can't handle the DDMS heap dumps right away, but there is a converter tool in the Android SDK. Simply run this command:
C:\Temp\hprof>hprof-conv rawdump.hprof converteddump.hprof
Then you can open the converted heap dump in MAT. An important concept here is retained size: the retained size of an object found in the heap is how much memory could be freed if that object were garbage collected. That includes the object itself, but also child objects that no other objects outside the retained set have references to.
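To make the definition concrete, here is a toy reachability calculation over a made-up object graph (the node names and sizes are invented). The retained set of an object is everything that becomes unreachable from the GC roots if that object is removed.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy version of MAT's "retained size": how much would become garbage
// if one object disappeared. Objects are just ids with sizes and references.
public class RetainedSize {

    // All objects reachable from the roots, optionally pretending that
    // 'excluded' has been removed from the graph.
    static Set<String> reachable(Map<String, Set<String>> refs,
                                 Set<String> roots, String excluded) {
        Set<String> seen = new HashSet<String>();
        Deque<String> work = new ArrayDeque<String>(roots);
        while (!work.isEmpty()) {
            String node = work.pop();
            if (node.equals(excluded) || !seen.add(node)) continue;
            Set<String> out = refs.get(node);
            if (out != null) work.addAll(out);
        }
        return seen;
    }

    // Sum of sizes of objects that are only reachable through 'victim'.
    static int retainedSize(Map<String, Set<String>> refs,
                            Map<String, Integer> sizes,
                            Set<String> roots, String victim) {
        Set<String> all = reachable(refs, roots, null);
        Set<String> without = reachable(refs, roots, victim);
        int total = 0;
        for (String node : all) {
            if (!without.contains(node)) total += sizes.get(node);
        }
        return total;
    }
}
```

In a graph root -> renderer -> texture, with a shared object referenced by both root and renderer, the renderer's retained size includes the texture but not the shared object, because the shared object stays reachable from the root.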
MAT gives you an overview of where your memory is allocated and has some good tooling on finding suspicious allocations that could be memory leaks.
So to find my memory leak, I used the dominator tree tab, which sorts the allocated objects by retained heap, and I soon discovered that the GLRendered object held far too many references to a large 512x512 texture.
The tool becomes even more valuable when the leaking objects are small but many: the dominator tree tells you right away that a single object holds a much larger retained heap than you would expect.
If you want to learn more, check out this speech by Patrick Dubroy on Android memory management from Google IO 2011 where he explains the Android memory model in more detail.
Wednesday, 19 October 2011
adb - getting inside your Android simulator
I've been working with persistent state between launches of my Android application and wanted an easy way to inspect my application data between launches. I discovered the power of the Android Debug Bridge, or adb: in short, a dev tool that lets you hook up to a running Android emulator.
You'll find it in your sdk folder under platform-tools. On my machine:
C:\Program Files (x86)\Android\android-sdk\platform-tools\adb.exe
A few handy commands:
- List currently running devices:
>adb devices
List of devices attached
emulator-5554 device
- Launch a terminal shell against a running device
>adb -s emulator-5554 shell
Then you can do your ordinary Linux stuff like cd, ls, cat etcetera to get to know your Android device.
My happiest discovery was that the preference file you fetch and save via the SharedPreferences API:
SharedPreferences settings = getSharedPreferences("OthelloLegendsPrefs", 0);
is located under the path
/data/data/se.noren.android.othello/shared_prefs/OthelloLegendsPrefs.xml
and is a simple XML file which you can inspect and edit.
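For reference, such a file uses Android's plain preferences XML format. A hypothetical example (these key names are made up, not from the real game) might look like:

```xml
<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
    <int name="highestUnlockedLevel" value="3" />
    <long name="totalScore" value="1250" />
    <boolean name="soundEnabled" value="true" />
</map>
```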
So, to pull a file from the emulator to your development computer
>adb -s emulator-5554 pull <remotefile> <localfile>
Similarly, to upload a file to the device
>adb -s emulator-5554 push <localfile> <remotefile>
For more info on adb, http://developer.android.com/guide/developing/tools/adb.html.
Sunday, 4 September 2011
How to get character encoding correct on Google App Engine
Character encodings can be a primary trigger for stomach ulcers. My Swedish web applications deployed on Google App Engine have had great difficulties behaving when presented with user input containing, for example, the Swedish characters Å, Ä and Ö.
So here's a recipe for treating such characters with respect.
- Don't use ISO-8859-1 as character encoding. Just don't.
- Instead, specify your JSPs to use UTF-8 with something like
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
However, this might unfortunately not be enough. Current browsers might not set a character encoding even if one is specified in the HTML page or form.
So if you aren't already using Spring, add spring-web to your application and put one of the Spring filters first in your filter chain.
<filter>
<filter-name>SetCharacterEncoding</filter-name>
<filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
<init-param>
<param-name>encoding</param-name>
<param-value>UTF-8</param-value>
</init-param>
<init-param>
<param-name>forceEncoding</param-name>
<param-value>true</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>SetCharacterEncoding</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
That's all folks.
Sunday, 31 July 2011
Mapping DNS domain to Google App Engine application
I have been working on a small application for promoting and selling popular science books. I made a prototype and bought a domain name so I could set up all trackers and Amazon Affiliate accounts. I deployed the prototype as a Google App Engine application; using GAE you can have a site up in 3 minutes.
When you deploy your GAE app, it will automatically get DNS mapped as yourappid.appspot.com. So in my case the app resides as http://popularsciencemedia.appspot.com in the Google cloud. From the Swedish DNS supplier Loopia I had bought the domain popularsciencemedia.com and now simply wanted to DNS remap popularsciencemedia.appspot.com to my own http://www.popularsciencemedia.com.
This is not as easy as you might think! The obvious solution would be to create a DNS CNAME entry with an alias to reroute traffic for www.popularsciencemedia.com to http://popularsciencemedia.appspot.com, but Google won't allow that. So I thought I'd write a few lines on how you do it, to spare you the same misery. The purpose here is to map www.popularsciencemedia.com, since a Google App Engine app can only be mapped to subdomains; www is good enough for me.
- You need to sign up for Google Apps for your application. Go to https://www.google.com/a/cpanel/domain/new, enter your registered domain name and fill in the forms. In this process you will create a system user for this Google App; in my case I created the user admin@popularsciencemedia.com. Google Apps lets you create email, calendars and much more for your app.
- This process requires you to verify that you own the domain. There are several ways to do this; I thought the easiest was to add a Google verification code to the DNS registry as a TXT entry. The DNS record now looks like this. If you can't create TXT records with your DNS provider, Google has other mechanisms for verifying that you own the domain.
$ORIGIN popularsciencemedia.com.
$TTL 300
@ SOA ns1.loopia.se. registry.loopia.se. (
1312117911
3H ; Refresh after three hours
1H ; Retry after one hour
1W ; Expire after one week
1D ) ; Minimum one day TTL
@ IN 3600 NS ns1.loopia.se.
@ IN 3600 NS ns2.loopia.se.
@ IN 3600 A 194.9.94.85
@ IN 3600 A 194.9.94.86
@ IN 3600 TXT google-site-verification=Uy4magKHIasdeEOasdgs6b7qYt8tR8
* IN 3600 CNAME ghs.google.com.
www IN 3600 CNAME ghs.google.com.
- Notice the additions. The TXT entry maps to the value you get from the Google Apps sign-up; this makes it possible for Google Apps to verify that you own the domain you claim to own. Then also add a CNAME mapping to ghs.google.com for the subdomain www, since this is the subdomain we want to use.
- Now go to your Google Apps account, at something like https://www.google.com/a/cpanel/popularsciencemedia.com. Under Sites -> Services -> YourAppId (App Engine) there should be a possibility to add a new address under the domain you have registered.
- I add www, so my app is mapped as www.popularsciencemedia.com.
- After waiting a very short while, all DNS changes seem to be working and I can surf to http://www.popularsciencemedia.com/
Friday, 15 July 2011
Profiling an Android application tutorial
I'm spending some spare time on an Android Reversi game that could use some performance tuning. Once you've figured out how the Android profiling tooling works, it works like a charm.
There are two ways to profile an application, using the debugging server DDMS or manually decide which parts of the code base are interesting for inspection. DDMS could be useful if you are inspecting code you might not be able to recompile. DDMS can also be used to inspect memory usage and more.
The easiest approach however is to use the debug interface provided by the Android API in your sources to specify when to start generating profiling information and when to end.
public int[] alphabeta(Board b, int maxDepth, long maxTime) {
    Debug.startMethodTracing("othello_profiling");
    // Here goes code to profile
    Debug.stopMethodTracing();
    return result;
}
Run your program and you'll see in the VM logs when the profiler kicks in. (As usual the performance of your app in the emulator will sink to the bottom when profiling is enabled)
Now you got your profiling info written to the SD card of your Android emulator device. If you run into permission issues when writing to the SD card, add something like this to your Android Manifest.
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
To fetch the file to your development computer use the adb tool that comes with the Android SDK. On my Windows machine I did something like this.
C:\Program Files (x86)\Android\android-sdk\platform-tools>adb pull /sdcard/othello_profiling.trace c:\temp\othello_profiling.trace
126 KB/s (2266911 bytes in 17.491s)
The tool traceview can interprete the file.
C:\Program Files (x86)\Android\android-sdk\tools>traceview.bat c:\Temp\othello_profiling.trace
Voila! You get a profiling view similar to what you get from common profilers like JProfiler, hprof etcetera. Here you can see each method's execution time, which parent and child methods it connects to, and much more.
There's more you can do with the trace file. Traceview can also show you each thread's execution and calls in chronological order, and you can simply zoom in on the interesting parts.
You may also want to try the tool dmtracedump to create graphs of your call stack. See the Android documentation for more information.
Monday, 25 April 2011
Dropbox - is it safe to put your files in the cloud?
I really like the simplicity of sharing files with Dropbox. I haven't gone all in yet, but I have moved a substantial part of my personal stuff there.
I hadn't thought much about its security, but listening to Steve Gibson and Leo Laporte examine Dropbox on the Security Now podcast (http://www.grc.com/securitynow.htm, or search for it on iTunes) was an eye-opener: you can't assume that an awesome service by definition has awesome security.
First of all, Dropbox has claimed that not even their employees are able to see your data. Great! But a recent change in the terms of agreement says that due to US regulations, the authorities can ask Dropbox to decrypt your data in certain crime investigations. Alright, I'm a good guy so that's not a problem for me. But it means that Dropbox must keep my encryption key in their vaults, instead of me doing client-side encryption and decryption of my data. Interesting: that means a bad-apple Dropbox employee also has the possibility to look at my data without my knowledge. Not to mention what would happen if Dropbox were to lose the table of private keys in some master-planned hacking or insider heist.
So the lesson should be: if you have valuable or sensitive data, you should probably encrypt it before even dropping it into Dropbox.
Well, that applies to companies or people with more valuables than family photos like me.
Issue two might be more concerning. Derek Newton, http://dereknewton.com/2011/04/dropbox-authentication-static-host-ids/, has looked into how your Dropbox client authenticates against the cloud service. It seems all you need is a config file which is set up at install time. That file contains your host id, which is your authentication token against Dropbox. The bad thing is that if someone gets a copy of this file, by social engineering, a trojan or other malware, they can access your Dropbox account from any machine. Changing your password is not enough, since this is an access token; you must remove your own machine as a valid host from Dropbox to stop the bad guy from using your account. Most probably you won't even know someone is eavesdropping on you.
These guys also seem to trust the cloud a bit too naively:
https://forums.aws.amazon.com/thread.jspa?threadID=65649&tstart=0
Sunday, 17 April 2011
Twitter integration using OAuth
So my Rankington application, hosted on Google App Engine, wants to read tweets mentioning the keyword 'rankington' and also update the status of the Twitter system user Rankington on certain occasions. There are loads of Twitter APIs out there for Java. Here's a short guide on how to get going with Twitter4J, plus some handy knowledge of OAuth.
Download Twitter4J from http://twitter4j.org/en/index.html. If you only wish to do reads against Twitter you don't need to authenticate in any way, but if you wish to post status updates or similar you must provide credentials for the Twitter user you are using. In the past you could authenticate against the Twitter REST APIs using user/password, but Twitter shut that down in August 2010, so don't try it via the Twitter4J API, which has not yet deprecated that code.
OAuth
Twitter is now using OAuth as its access mechanism. If you are familiar with OpenID, you could compare the two in the sense that OpenID is a decentralised identification infrastructure, whereas OAuth solves authorisation in a decentralised way. OAuth is an RFC under standardisation by the IETF. For example, the Google Docs API and other Google APIs have recently added OAuth as the access restriction mechanism for their REST APIs.
An example of using OAuth would be that you have some resources on a site A, say some private photos. You wish to let another site B access those photos to incorporate them into a photo stream or whatever, but you don't want to hand your identification credentials for site A to site B, for obvious security reasons. Instead you wish to delegate authorisation to site B to access resources on site A.
What will happen is that site A and site B share a common OAuth secret. Without going into the handshaking details, you will surf to site B, and an authorisation request will redirect you to site A, where you will be asked to grant site B permission to the appropriate resources. Once redirected back to site B, an OAuth token is handed to site B, which can be used from then on to access precisely the set of resources granted by site A using the access token. Check out the RFC if interested: http://tools.ietf.org/html/rfc5849.
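Under the hood, the delegation is protected by signed requests: OAuth 1.0a signs each request with HMAC-SHA1 over a normalised base string, keyed by the consumer secret and token secret. A stripped-down sketch of just the signing step (a real implementation must percent-encode and normalise the base string exactly as RFC 5849 specifies; the strings below are placeholders):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the OAuth 1.0a HMAC-SHA1 signing step only.
public class OAuthSign {

    public static String sign(String baseString, String consumerSecret,
                              String tokenSecret) {
        try {
            // The HMAC key is "consumerSecret&tokenSecret" (both
            // percent-encoded in a real implementation).
            String key = consumerSecret + "&" + tokenSecret;
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            byte[] digest = mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Twitter4J does all of this for you; the point is just that the token handed back after the handshake is what keys these signatures.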
Twitter4J and OAuth
So, in our case, Twitter is site A and Rankington is site B (service provider and client). To create the shared secret, which is generated by site A (Twitter), go to http://twitter.com/oauth_clients/new and create a pair of secret keys: a consumer key and a consumer secret.
A very basic test of this could be:
public class TwitterBridge {
    private static String key = "abcaasdkj1231231lkj123";
    private static String secret = "asdkj7987asdjl12312lkj4323423423";

    public static void main(String[] argv) throws TwitterException {
        Twitter twitter = new TwitterFactory().getInstance();
        twitter.setOAuthConsumer(key, secret);
        RequestToken requestToken = twitter.getOAuthRequestToken();
        System.out.println(requestToken.getAuthorizationURL());

        // Breakpoint here and update the value of pin from what
        // you get in the browser when surfing to the
        // authorisation URL above.
        String pin = "7117195";

        AccessToken accessToken = twitter.getOAuthAccessToken(requestToken, pin);
        System.out.println("Token: " + accessToken.getToken());
        System.out.println("Token secret: " + accessToken.getTokenSecret());

        Query query = new Query();
        query.setQuery("rankington");
        QueryResult queryResult = twitter.search(query);
        List<Tweet> tweets = queryResult.getTweets();
        for (Tweet t : tweets) {
            System.out.println("From: " + t.getFromUser());
            System.out.println("Time: " + t.getCreatedAt());
            System.out.println("Text: " + t.getText());
        }
        twitter.updateStatus("New tweet!");
    }
}
So put your consumer key and secret into the code above as key and secret. Then debug these lines of code.
System.out.println(requestToken.getAuthorizationURL());
will print something like
http://api.twitter.com/oauth/authorize?oauth_token=51IMwiqF8MfEdcNDZxzUgV3guqCpQK6VbFZasdl
Open that link in a browser and you will be prompted to identify yourself to Twitter, unless you are logged in automatically. In the same dialog you allow the client application "rankington" to access Twitter on your behalf.
A bit confusingly, we log in as the user "rankington" on Twitter and allow the application "rankington" access. It is just a coincidence that the names are the same in this example.
When granted access, a verification code will be shown. Copy this code and either rerun the Java program or insert it via the debugger as the variable "pin".
Continue running the program and the Twitter4J API will verify the pin code against the Twitter REST API and receive an access token. We print it, and it will look something like
Token: 45335176-3NEtmOsdfsacZROM9ow3sdfsdfHm5dfu0ShNGTdN2CKw
Token secret: nOXQish8asfasiq4tZINEOJuDasdYDQC4dBJiAM3k
All good. These are the important values which we can use to create a new AccessToken in the future.
The end of the program makes a query for tweets containing "rankington". That would print:
From: rankington
Time: Sun Apr 17 01:10:48 CEST 2011
Text: Rankington alpha is out! Follow progress at http://macgyverdev.blogspot.com
#rankington
And in the end, the program makes a status update in the name of the Twitter user "rankington".
So, now we have all the ingredients: all the keys and secrets for OAuth authority delegation. The piece of code we can use in our real application is the following.
private static String key = "3649LZ3sasdasdpXWFHkHxaWQQ";
private static String secret = "aFOb86GmafgKtTasdasq3CpcwQw7bA";
private static String token = "28326asdasdasdD1ZxVDDL5Mqe7H";
private static String tokensecret = "7Klyasdasdasdwpc8Xbtm0IsiRA";

public static void main(String[] argv) throws TwitterException {
    AccessToken accessToken = new AccessToken(token, tokensecret);
    ConfigurationBuilder confBuilder = new ConfigurationBuilder();
    confBuilder.setOAuthAccessToken(accessToken.getToken())
               .setOAuthAccessTokenSecret(accessToken.getTokenSecret())
               .setOAuthConsumerKey(key)
               .setOAuthConsumerSecret(secret);
    Twitter twitter = new TwitterFactory(confBuilder.build()).getInstance();

    Query query = new Query();
    query.setQuery("rankington");
    QueryResult queryResult = twitter.search(query);
    List<Tweet> tweets = queryResult.getTweets();
    for (Tweet t : tweets) {
        System.out.println("From: " + t.getFromUser());
        System.out.println("Time: " + t.getCreatedAt());
        System.out.println("Text: " + t.getText());
    }
    twitter.updateStatus("New tweet again!");
}
You can see that we create the AccessToken needed to authenticate against Twitter using the keys and secrets previously negotiated as a one-time routine. The application can now tweet for eternity, unless the user revokes the authorisation for the client.
With my basic understanding of OAuth, it feels like a great standard for interconnecting all the service providers we have in the cloud.
Saturday, 16 April 2011
Hudson - Continuous Integration for a Google App Engine application
The last blog post described how to configure a Maven project for a Google App Engine application.
To build and deploy the Maven artifacts you will need some command line hacking.
Hudson CI can be used to escape the command line for all this. Hudson is a continuous integration system for building and testing your projects. It has some cool features such as distributed builds and much more.
To install Hudson, or Jenkins as the main fork has been rebranded now that Oracle came into a clinch with the open source community, simply download the war archive from either the Jenkins or Hudson web site. My installation is old, so I use a Hudson build. You can either deploy it in a web container such as Tomcat or simply use the built-in bootstrap. To bootstrap the war archive
java -Dhudson.udp=32850 -jar hudson.war --httpPort=9090 --daemon --logfile=/home/johan/hudson/hudson.log
Access Hudson via http://localhost:9090/ (or whichever port you passed as --httpPort).
Go to Manage Hudson -> Configure System and enable Maven under the Maven subsection. Go to Manage Hudson -> Manage Plugins and install the plugins you need. In my case I have installed the Cobertura plugin for code coverage, Findbugs for static code analysis, the Maven2 Project plugin, Checkstyle for Java code validation and the CVS plugin. You might need to bring in some sub-dependencies like the Static Analysis Collector plug-in and Static Analysis Utilities.
Now create a new job; under your job, click Configure and set up the appropriate version control config. In my case CVSROOT=:pserver:rankington:xxxx@213.xxx.xxx.xxx:/cvsrepo
and correct CVS module and branch.
Under Build, choose the correct Maven installation, probably /usr/bin/mvn. Then set up your Maven build goals. In this case the goals enable debug, clean, compile, test, create the war archive and generate reports:
-X
clean
package
findbugs:findbugs
checkstyle:checkstyle
This will make Maven check out your code, compile it, run the tests, and run the Maven plugins that create XML reports for Findbugs and Checkstyle.
Now add Post-build Actions to integrate these reports into Hudson. Enable Publish Findbugs analysis results from file **/target/findbugsXml.xml.
Add similar report integrations by publishing the following reports: target/surefire-reports/*.xml for JUnit tests, **/target/site/cobertura/coverage.xml for Cobertura code coverage, and so forth.
Now build your Hudson job by clicking Build Now, and the Maven goals are executed to check out, build, test and create reports for the project. Then the post-build actions kick in and update the Hudson dashboards to show the results of the build.
Cobertura lets you see unit test coverage at the package level, with the possibility to drill down to package and file level. Similar graphs and drilling are available for the Findbugs, Checkstyle and JUnit test reports.
Cobertura reporting on file level.
JUnit test reports over time; red indicates test cases have failed during those builds.
Checkstyle reports Java code issues, and the interface makes drilling easy.
The Maven build could be extended to deploy the war to a local server, to also run the web tests as part of the Hudson job.
If you also wish to incorporate production deployment in the Hudson process, you could use the Google App Engine scripting possibilities. Add a build step that does something like this to upload your newly built war archive to GAE:
appengine-java-sdk\bin\appcfg.cmd --email user@gmail.com --passin password update myapp/war
The same script can be used to download logs from production via
appengine-java-sdk/bin/appcfg.sh request_logs myapp/war mylogs.txt
There's a bunch more handy commands for scripting GAE if you run through the docs at http://code.google.com/intl/sv-SE/appengine/docs/java/tools/uploadinganapp.html.
Next time I'll try to describe what the production environment offers in addition to the local development environment.